ReplicaCount configured in values.yaml are not being respected #410

Closed
opened 2026-04-05 16:19:37 +02:00 by MrUnknownDE · 0 comments
Owner

Originally created by @gmourahub on 7/31/2025

Describe the bug
In the Helm chart, the `statusPage.replicaCount` and `adminDashboard.replicaCount` values configured in values.yaml are not respected during deployment generation. Both deployments are created with 0 replicas instead of the configured values, making these services unavailable.

To Reproduce
Steps to reproduce the behavior:
Configure the Helm chart's values.yaml with:

```
deployment:
  # Default replica count for all deployments
  replicaCount: 0

testServer:
  replicaCount: 0
  enabled: false
  disableTelemetryCollection: false
  disableAutoscaler: false

statusPage:
  replicaCount: 1
  cnameRecord:
  disableTelemetryCollection: false
  disableAutoscaler: false

openTelemetryCollector:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  sendingQueue:
    enabled: true
    size: 1000
    numConsumers: 3

accounts:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

home:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

dashboard:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

adminDashboard:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

worker:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

workflow:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  workflowTimeoutInMs: 5000
  resources:

apiReference:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

docs:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

app:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

probeIngest:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

openTelemetryIngest:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:
  # KEDA autoscaling configuration based on queue metrics
  keda:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    # Scale up when queue size exceeds this threshold
    queueSizeThreshold: 100
    # Polling interval for metrics (in seconds)
    pollingInterval: 30
    # Cooldown period after scaling (in seconds)
    cooldownPeriod: 300

fluentIngest:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

incomingRequestIngest:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

isolatedVM:
  replicaCount: 1
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:

serverMonitorIngest:
  replicaCount: 0
  disableTelemetryCollection: false
  disableAutoscaler: false
  resources:
```
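For context, a chart like this typically computes each Deployment's replicas by falling back from the service-level value to the chart-wide default. The fragment below is a hypothetical sketch (the value paths mirror this values.yaml, but the template logic is an assumption, not taken from the actual chart); if a service's template instead reads only the global `deployment.replicaCount` (0 here), its configured per-service value would be silently ignored:

```
# Hypothetical Deployment template fragment -- illustrative only.
spec:
  # The service-level value should win; the chart-wide default (0 here)
  # should apply only when the service does not set replicaCount.
  replicas: {{ .Values.statusPage.replicaCount | default .Values.deployment.replicaCount }}
```

Rendering the chart with `helm template` and inspecting the `replicas:` fields is a quick way to confirm which value each deployment actually receives.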

Resulting Kubernetes deployments:

```
kg deploy -n oneuptime-staging
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
oneuptime-accounts                  1/1     1            1           3h
oneuptime-admin-dashboard           0/0     0            0           3h
oneuptime-api-reference             0/0     0            0           3h
oneuptime-app                       1/1     1            1           3h
oneuptime-dashboard                 1/1     1            1           3h
oneuptime-docs                      0/0     0            0           3h
oneuptime-fluent-ingest             0/0     0            0           3h
oneuptime-home                      1/1     1            1           3h
oneuptime-incoming-request-ingest   0/0     0            0           3h
oneuptime-isolated-vm               1/1     1            1           3h
oneuptime-nginx                     1/1     1            1           3h
oneuptime-open-telemetry-ingest     0/0     0            0           3h
oneuptime-otel-collector            0/0     0            0           3h
oneuptime-probe-ingest              0/0     0            0           3h
oneuptime-probe-one                 0/0     0            0           3h
oneuptime-server-monitor-ingest     0/0     0            0           3h
oneuptime-status-page               0/0     0            0           3h
oneuptime-worker                    0/0     0            0           3h
oneuptime-workflow                  0/0     0            0           3h
```
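One Helm pitfall worth ruling out while debugging this: Go/Sprig's `default` function treats the integer 0 as "empty", so replica-count expressions that pipe through `default` behave surprisingly when 0 is a legitimate configured value (as it is for many services above). Whether this is the actual root cause in the chart is not confirmed; the Python sketch below only models the `default` semantics:

```python
def sprig_default(fallback, value):
    """Model Sprig's `default`: return fallback when value is "empty".

    In Go templates 0, "", false, nil, and empty collections all count
    as empty, so `{{ .x | default .y }}` yields .y whenever .x is 0.
    """
    if value in (None, "", 0, False) or value == [] or value == {}:
        return fallback
    return value


# {{ .Values.statusPage.replicaCount | default .Values.deployment.replicaCount }}
print(sprig_default(0, 1))  # 1 -- per-service value wins, as intended
# But a service intentionally scaled to 0 silently picks up the fallback:
print(sprig_default(2, 0))  # 2 -- 0 is treated as "empty"
```

Charts that must support an explicit 0 usually test for key presence (e.g. Sprig's `hasKey`) instead of piping the value through `default`.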

Expected behavior
Each deployment should be created with the replicaCount configured in values.yaml; in particular, oneuptime-status-page and oneuptime-admin-dashboard should each run with 1 replica.


Deployment Type
Is this issue on SaaS (at https://oneuptime.com) or self hosted (the version that you deployed on your server)?
self hosted



Reference: github/oneuptime#410