Compare commits

...

98 Commits

Author SHA1 Message Date
Simon Larsen
c4c6793b29 feat: Implement KEDA autoscaling configuration for probes and add metrics endpoints 2025-08-01 15:38:04 +01:00
Simon Larsen
c894b112e6 fix: Await monitorResource call to ensure proper error handling in incoming request processing 2025-08-01 14:34:17 +01:00
Simon Larsen
304baf1bb4 fix: Await monitorResource call to ensure proper error handling in probe response processing 2025-08-01 14:33:17 +01:00
Simon Larsen
9adea6b1ba feat: Remove Helm annotations for post-install and post-upgrade hooks from templates 2025-08-01 14:01:04 +01:00
Simon Larsen
5498521e02 feat: Add Helm annotations for post-install and post-upgrade hooks 2025-08-01 13:47:52 +01:00
Simon Larsen
9e97c6ddbc feat: Update autoscaler conditions for fluent-ingest, incoming-request-ingest, probe-ingest, and server-monitor-ingest templates 2025-08-01 13:23:39 +01:00
Nawaz Dhandala
63272e09f8 refactor: Simplify function parameter formatting and improve readability in various files 2025-08-01 10:45:55 +01:00
Simon Larsen
327c28afdc feat: Implement fluent ingest worker for processing queue jobs 2025-08-01 10:34:17 +01:00
Simon Larsen
896020b93b feat: Add KEDA autoscaling configuration for various ingests
- Introduced KEDA autoscaling configuration in values.yaml for probeIngest, fluentIngest, incomingRequestIngest, and serverMonitorIngest.
- Added endpoints for queue statistics, size, and failed jobs in IncomingRequestIngest and ProbeIngest APIs.
- Implemented asynchronous processing of incoming requests and probes using job queues.
- Created Metrics API for KEDA metrics integration in IncomingRequestIngest, ProbeIngest, and ServerMonitorIngest.
- Refactored IncomingRequest and Probe APIs to utilize queue services for processing.
- Added job processing logic for incoming requests and probes in respective job files.
- Implemented queue service classes for managing job addition and retrieval of queue statistics.
2025-08-01 10:29:02 +01:00
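The queue-statistics endpoints described in the commit above feed KEDA's `metrics-api` scaler, which polls an HTTP endpoint and scales on a numeric value read from the JSON response. A minimal sketch of the value such an endpoint could expose — the names and response shape here are illustrative assumptions, not OneUptime's actual API:

```typescript
// Counts a queue backend (e.g. BullMQ) can report. Illustrative shape.
interface QueueCounts {
  waiting: number;
  active: number;
  failed: number;
}

// KEDA's metrics-api scaler reads a single numeric value from the JSON
// response; scaling on waiting + active jobs is one common choice.
// Failed jobs are excluded: retrying them should not force a scale-up.
function kedaMetricValue(counts: QueueCounts): number {
  return counts.waiting + counts.active;
}

// Body a hypothetical /metrics endpoint could return for KEDA to poll.
function metricsResponse(counts: QueueCounts): { queueSize: number } {
  return { queueSize: kedaMetricValue(counts) };
}

const counts: QueueCounts = { waiting: 12, active: 3, failed: 2 };
console.log(JSON.stringify(metricsResponse(counts))); // {"queueSize":15}
```

The corresponding ScaledObject would point its `metrics-api` trigger at this endpoint with `valueLocation: "queueSize"`.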
Simon Larsen
15a68472b0 feat: comment out ClusterKeyAuthorization import for KEDA debugging 2025-07-31 21:23:41 +01:00
Simon Larsen
0210480d97 feat: remove Prometheus metrics endpoint for KEDA debugging 2025-07-31 21:22:04 +01:00
Simon Larsen
72fdc06687 feat: temporarily disable authentication middleware for KEDA debugging in metrics endpoint 2025-07-31 20:52:10 +01:00
Simon Larsen
3710b81b9a feat: add replica count support for deployments in Helm templates 2025-07-31 20:03:25 +01:00
Simon Larsen
9fcb3dc2e0 feat: update cluster key handling for KEDA compatibility in authorization middleware and Helm chart 2025-07-31 19:50:25 +01:00
Simon Larsen
43e2ccf51a feat: improve secret handling in Helm chart for upgrade scenarios 2025-07-31 19:29:07 +01:00
Nawaz Dhandala
48c3d8603a fix: format code for better readability and consistency in MonitorResource and Metrics 2025-07-31 12:57:39 +01:00
Simon Larsen
9cfc912161 feat: enhance response messages for incoming request checks with time difference 2025-07-31 12:56:06 +01:00
Simon Larsen
29e3ee57ab feat: add metrics-api endpoint for queue size retrieval in KEDA autoscaling 2025-07-31 12:44:48 +01:00
Simon Larsen
be7e849822 feat: add KEDA ScaledObjects for OpenTelemetry Ingest with configurable metrics 2025-07-31 12:34:43 +01:00
Simon Larsen
59d76b601a feat: add KEDA autoscaling support for OpenTelemetry Ingest with configurable metrics 2025-07-31 12:18:57 +01:00
Simon Larsen
b77ef336b8 feat: add replica count configuration for multiple deployments in Helm templates 2025-07-31 11:47:03 +01:00
Nawaz Dhandala
7df21fe8e5 refactor: add type annotations for pagination parameters in OTelIngest 2025-07-30 22:42:24 +01:00
Nawaz Dhandala
f39e1943c7 refactor: improve code formatting and readability in Queue and TelemetryQueueService 2025-07-30 22:40:45 +01:00
Simon Larsen
966a903646 feat: implement pagination for retrieving failed jobs from the queue 2025-07-30 22:39:58 +01:00
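The failed-jobs pagination added in the commit above is a standard offset/limit scheme. A small sketch of the idea under assumed names (the real endpoint works against the queue backend, not an in-memory array):

```typescript
// Hypothetical failed-job record; the real queue stores richer job data.
interface FailedJob {
  id: string;
  reason: string;
}

// Offset/limit pagination over a failed-job list. pageNumber is 1-based;
// the total count lets the client render page controls.
function getFailedJobsPage(
  jobs: FailedJob[],
  pageNumber: number,
  pageSize: number,
): { jobs: FailedJob[]; total: number } {
  const start: number = (pageNumber - 1) * pageSize;
  return { jobs: jobs.slice(start, start + pageSize), total: jobs.length };
}

const all: FailedJob[] = Array.from({ length: 25 }, (_, i) => ({
  id: `job-${i}`,
  reason: "timeout",
}));
const page2 = getFailedJobsPage(all, 2, 10);
console.log(page2.jobs.length, page2.jobs[0]?.id, page2.total); // 10 job-10 25
```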
Simon Larsen
1d9d37c6d1 refactor: optimize queue size and stats calculations by using count methods 2025-07-30 22:37:46 +01:00
Simon Larsen
7edcc4dbce feat: add endpoint to retrieve failed jobs from the queue 2025-07-30 22:37:08 +01:00
Simon Larsen
0939294d22 Refactor code structure for improved readability and maintainability 2025-07-30 19:45:46 +01:00
Simon Larsen
dbcbfe5f79 refactor: simplify telemetry processing worker initialization and remove unused export 2025-07-30 19:25:52 +01:00
Simon Larsen
a638972817 feat: update @oneuptime/common dependency in test-release workflow 2025-07-30 16:44:05 +01:00
Simon Larsen
37c6310465 feat: update @oneuptime/common to version 7.0.4800 and adjust workflow dependencies 2025-07-30 16:42:39 +01:00
Nawaz Dhandala
a7d38389fd style: improve formatting and consistency in Prometheus metrics generation 2025-07-30 16:30:26 +01:00
Nawaz Dhandala
2f55336db7 Merge branch 'master' of https://github.com/OneUptime/oneuptime 2025-07-30 16:29:25 +01:00
Nawaz Dhandala
f99a15b95b refactor: enhance type annotations for better clarity in queue processing methods 2025-07-30 16:28:53 +01:00
Simon Larsen
de5bff2ffe feat: add Prometheus metrics endpoint for telemetry queue monitoring 2025-07-30 16:26:38 +01:00
Nawaz Dhandala
cef2764499 style: format code for better readability and consistency across multiple files 2025-07-30 16:22:26 +01:00
Simon Larsen
a7014ac3ff fix: update projectId handling to ensure proper ObjectID conversion in telemetry processing 2025-07-30 16:12:19 +01:00
Simon Larsen
fa31dc670c feat: implement telemetry ingestion processing with queue integration 2025-07-30 15:37:08 +01:00
Simon Larsen
4c2a12cf31 feat: add queue stats and size endpoints with authorization middleware 2025-07-30 15:34:51 +01:00
Simon Larsen
b4115e1529 feat: initialize telemetry processing worker with logging for better monitoring 2025-07-30 15:30:23 +01:00
Simon Larsen
3883790c50 feat: add getQueueSize and getQueueStats methods to Queue class for improved queue monitoring 2025-07-30 15:29:53 +01:00
Simon Larsen
1702558d73 chore: update package-lock.json files to add @types/web-push and web-push dependencies
- Added "@types/web-push": "^3.6.4" to multiple package-lock.json files.
- Added "web-push": "^3.6.7" to multiple package-lock.json files.
- Removed "lodash" and "@types/lodash" from multiple package-lock.json files.
2025-07-30 15:21:03 +01:00
Simon Larsen
cacdbff50e Implement feature X to enhance user experience and optimize performance 2025-07-30 13:58:46 +01:00
Nawaz Dhandala
0bc6b432a2 refactor: Update getNestedValue function signature for improved type safety in Detail and TableRow components 2025-07-30 13:57:23 +01:00
Simon Larsen
eaa09d4a13 refactor: Replace lodash get with custom nested value helper in Detail and TableRow components 2025-07-30 13:52:30 +01:00
Simon Larsen
08c85dd31c refactor: Remove lodash and its type definitions from package dependencies 2025-07-30 12:48:19 +01:00
Nawaz Dhandala
42e82b6fb7 refactor: Clean up whitespace in various components for improved readability 2025-07-30 12:08:53 +01:00
Simon Larsen
463a20f342 feat: Add ingestedAt timestamp to ProbeMonitorResponse and update ingestion logic 2025-07-30 12:08:21 +01:00
Simon Larsen
1b8a7e3261 chore: Add sw.js to .gitignore to prevent tracking of service worker file 2025-07-30 11:23:26 +01:00
Simon Larsen
8b27dd1f26 refactor: Remove deprecated service worker implementation from Dashboard 2025-07-30 11:23:09 +01:00
Simon Larsen
17c72f65e3 refactor: Update service worker template and generated version information for Dashboard 2025-07-30 11:22:29 +01:00
Simon Larsen
5eee900fd3 feat: Implement service worker generation script and update build process for Dashboard 2025-07-30 11:18:55 +01:00
Simon Larsen
0a6cdd11af refactor: Simplify esbuild configuration by disabling minification and removing build version generation 2025-07-30 10:47:23 +01:00
Simon Larsen
8514b6b82e refactor: Enhance PWA service worker with caching strategies and update notifications 2025-07-30 10:45:18 +01:00
Simon Larsen
dfa8f6cd24 refactor: Remove unused LocalStorage import and update status page ID handling in DashboardMasterPage 2025-07-30 10:41:19 +01:00
Simon Larsen
61614227e1 refactor: Update dependency version for @oneuptime/common to allow any version 2025-07-30 10:01:46 +01:00
Simon Larsen
f3d20eb544 refactor: Add spacing before nav element in Pagination component for improved layout 2025-07-29 13:47:33 +01:00
Simon Larsen
a11ff57fda refactor: Clean up layout in Pagination component with improved mobile navigation and added comments for clarity 2025-07-29 13:46:23 +01:00
Simon Larsen
deb635bc80 refactor: Remove unnecessary height style from Modal component for improved responsiveness 2025-07-29 11:25:26 +01:00
Simon Larsen
c707830811 refactor: Adjust margin for label in ProbePicker component for better alignment 2025-07-29 11:20:28 +01:00
Nawaz Dhandala
24ada68d1e refactor: Replace lodash Dictionary import with local Dictionary type in multiple files 2025-07-29 11:12:37 +01:00
Nawaz Dhandala
ca23234ba9 refactor: Update import statement for Dictionary in Route.ts 2025-07-29 11:10:22 +01:00
Nawaz Dhandala
ea40a955e9 refactor: Enhance Slack notification structure for balance refill in NotificationService 2025-07-29 11:07:56 +01:00
Nawaz Dhandala
a46ee07d70 refactor: Format imports and improve error logging in NotificationService 2025-07-29 11:07:10 +01:00
Simon Larsen
5c5bab408d refactor: Simplify Terraform examples by removing unused monitor configurations and variables 2025-07-29 11:02:57 +01:00
Simon Larsen
540d632baf feat: Add Slack notification for balance refill in NotificationService 2025-07-28 12:03:40 +01:00
Simon Larsen
74718017ad refactor: Update jest.config.json for module name mapping and transform ignore patterns 2025-07-25 16:03:01 +01:00
Simon Larsen
d16897db1b refactor: update dependencies in package.json
- Replaced "Common" dependency with "@oneuptime/common" version 7.0.4773.
- Maintained existing versions for "dotenv" and "ts-node".
2025-07-25 15:38:07 +01:00
Simon Larsen
be3fc6f077 refactor: Update monitorTask type from PromiseVoidFunction to Promise<void> for consistency 2025-07-25 14:37:52 +01:00
Nawaz Dhandala
b7b577517c refactor: Improve type safety by defining PromiseVoidFunction for monitorTask in CheckOnlineStatus 2025-07-25 14:34:04 +01:00
Simon Larsen
ccf7a96e43 refactor: Streamline monitor processing logic in CheckOnlineStatus for improved error handling and parallel execution 2025-07-25 14:26:54 +01:00
Simon Larsen
892f3c052a feat: Add timeNow property to ServerMonitorResponse and update related logic in ServerMonitorCriteria and CheckOnlineStatus 2025-07-25 14:25:01 +01:00
Simon Larsen
00833a06f4 fix: Update time calculation in CheckOnlineStatus to use three minutes ago 2025-07-25 14:20:32 +01:00
Simon Larsen
472adf610a refactor: Update comment to clarify SSL monitor check in MonitorResourceUtil 2025-07-25 14:14:12 +01:00
Nawaz Dhandala
976c36de9a feat: Add migration for new default values and indexes in OnCallDutyPolicyScheduleLayer 2025-07-25 13:50:08 +01:00
Simon Larsen
6026c9c9af refactor: Remove unused import for ProbeApiIngestResponse in ServerMonitor.ts 2025-07-25 13:49:34 +01:00
Simon Larsen
791aa1421b feat: Optimize server monitor response handling by returning early and streamlining processing 2025-07-25 13:49:14 +01:00
Simon Larsen
79dbc94f82 feat: Add log viewing instructions and improve error logging in agent 2025-07-25 13:42:25 +01:00
Simon Larsen
ded41fc7ec feat: Enhance logging functionality with log file path configuration and log directory management 2025-07-25 13:38:23 +01:00
Simon Larsen
581c374745 refactor: Remove PWA install prompt to streamline user experience 2025-07-25 13:31:26 +01:00
Simon Larsen
64c0c8b4cb haraka delete 2025-07-25 09:07:09 +01:00
Simon Larsen
7d2241ba98 feat: Add indexes to improve query performance across multiple database models 2025-07-24 20:02:23 +01:00
Nawaz Dhandala
30bada5b7a feat: Add additional performance indexes and update migration index list 2025-07-24 18:40:40 +01:00
Simon Larsen
61bfb37747 Merge branch 'release' of github.com:OneUptime/oneuptime into release 2025-07-24 18:39:42 +01:00
Simon Larsen
4686aa941a feat: Add performance indexes to improve query efficiency across multiple database models 2025-07-24 18:39:04 +01:00
Nawaz Dhandala
3c065c76b0 feat: Add missing indexes for improved query performance in IncidentStateTimeline and MonitorProbe 2025-07-24 18:26:03 +01:00
Simon Larsen
5dccd03ed4 refactor: Remove ProcessMetrics job to streamline monitor metric handling 2025-07-24 18:16:32 +01:00
Simon Larsen
a395a95997 feat: Add composite indexes for efficient querying in IncidentStateTimeline, MonitorProbe, and MonitorStatusTimeline 2025-07-24 18:16:06 +01:00
Nawaz Dhandala
89082b1232 refactor: Improve error handling and type definitions in core operations across multiple services 2025-07-24 17:42:38 +01:00
Nawaz Dhandala
7cb33de450 refactor: Enhance error handling and type definitions in core operations for AlertService and IncidentService 2025-07-24 17:34:46 +01:00
Nawaz Dhandala
353ac875fb refactor: Simplify promise handling and error logging in service operations 2025-07-24 17:25:54 +01:00
Simon Larsen
d6560fdb32 feat: Refactor incident creation to execute core operations asynchronously 2025-07-24 17:23:53 +01:00
Simon Larsen
5115e21a7a feat: Refactor scheduled maintenance creation to execute core operations asynchronously 2025-07-24 17:17:52 +01:00
Simon Larsen
0e6119ddce feat: Execute owner assignment asynchronously in StatusPageService 2025-07-24 17:14:46 +01:00
Simon Larsen
b842a49cfb feat: Refactor monitor creation to run core operations in parallel, deferring workspace operations 2025-07-24 16:42:29 +01:00
Nawaz Dhandala
9737e50467 refactor: Clean up whitespace and improve code formatting in migration and service files 2025-07-24 16:00:49 +01:00
Simon Larsen
91beb6091d feat: Optimize monitor creation by parallelizing workspace, billing, and probe operations 2025-07-24 15:56:35 +01:00
Simon Larsen
68e610aa9f fix: Disable workflow operations for MetricType entity 2025-07-24 15:25:04 +01:00
Simon Larsen
d673ef3a01 feat: Enhance memory management and error handling in telemetry ingestion processes 2025-07-24 15:22:29 +01:00
197 changed files with 12656 additions and 15197 deletions


@@ -209,22 +209,6 @@ jobs:
       - name: build docker image
         run: sudo docker build -f ./Dashboard/Dockerfile .
-  docker-build-haraka:
-    runs-on: ubuntu-latest
-    env:
-      CI_PIPELINE_ID: ${{github.run_number}}
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-      - name: Preinstall
-        run: npm run prerun
-      # build images
-      - name: build docker image
-        run: sudo docker build -f ./Haraka/Dockerfile .
   docker-build-probe:
     runs-on: ubuntu-latest
     env:


@@ -70,7 +70,7 @@ jobs:
   publish-mcp-server:
     runs-on: ubuntu-latest
-    needs: [generate-build-number]
+    needs: [generate-build-number, publish-npm-packages]
     env:
       CI_PIPELINE_ID: ${{ github.run_number }}
       NPM_AUTH_TOKEN: ${{ secrets.NPM_AUTH_TOKEN }}
@@ -138,6 +138,7 @@ jobs:
       - name: Install dependencies
         run: |
           cd MCP
+          npm update @oneuptime/common
           npm install
       - name: Build MCP server
@@ -1052,67 +1053,6 @@ jobs:
           GIT_SHA=${{ github.sha }}
           APP_VERSION=7.0.${{needs.generate-build-number.outputs.build_number}}
-  haraka-docker-image-deploy:
-    needs: [generate-build-number]
-    runs-on: ubuntu-latest
-    steps:
-      - name: Docker Meta
-        id: meta
-        uses: docker/metadata-action@v4
-        with:
-          images: |
-            oneuptime/haraka
-            ghcr.io/oneuptime/haraka
-          tags: |
-            type=raw,value=release,enable=true
-            type=semver,value=7.0.${{needs.generate-build-number.outputs.build_number}},pattern={{version}},enable=true
-      - uses: actions/checkout@v4
-        with:
-          ref: ${{ github.ref }}
-      - uses: actions/setup-node@v4
-        with:
-          node-version: latest
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v2
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
-      - name: Generate Dockerfile from Dockerfile.tpl
-        run: npm run prerun
-      # Build and deploy haraka.
-      - name: Login to Docker Hub
-        uses: docker/login-action@v2.2.0
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_PASSWORD }}
-      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v2.2.0
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build and push
-        uses: docker/build-push-action@v4
-        with:
-          file: ./Haraka/Dockerfile
-          context: .
-          platforms: linux/amd64,linux/arm64
-          push: true
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          build-args: |
-            GIT_SHA=${{ github.sha }}
-            APP_VERSION=7.0.${{needs.generate-build-number.outputs.build_number}}
   admin-dashboard-docker-image-deploy:
     needs: [generate-build-number]
     runs-on: ubuntu-latest
@@ -1838,7 +1778,7 @@ jobs:
   test-e2e-release-saas:
     runs-on: ubuntu-latest
-    needs: [open-telemetry-ingest-docker-image-deploy, copilot-docker-image-deploy, fluent-ingest-docker-image-deploy, docs-docker-image-deploy, api-reference-docker-image-deploy, workflow-docker-image-deploy, llm-docker-image-deploy, accounts-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, dashboard-docker-image-deploy, haraka-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, worker-docker-image-deploy, otel-collector-docker-image-deploy, probe-docker-image-deploy, status-page-docker-image-deploy, test-docker-image-deploy, test-server-docker-image-deploy, publish-npm-packages, e2e-docker-image-deploy, helm-chart-deploy, generate-build-number, nginx-docker-image-deploy, incoming-request-ingest-docker-image-deploy]
+    needs: [open-telemetry-ingest-docker-image-deploy, copilot-docker-image-deploy, fluent-ingest-docker-image-deploy, docs-docker-image-deploy, api-reference-docker-image-deploy, workflow-docker-image-deploy, llm-docker-image-deploy, accounts-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, dashboard-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, worker-docker-image-deploy, otel-collector-docker-image-deploy, probe-docker-image-deploy, status-page-docker-image-deploy, test-docker-image-deploy, test-server-docker-image-deploy, publish-npm-packages, e2e-docker-image-deploy, helm-chart-deploy, generate-build-number, nginx-docker-image-deploy, incoming-request-ingest-docker-image-deploy]
     env:
       CI_PIPELINE_ID: ${{github.run_number}}
     steps:
@@ -1891,7 +1831,7 @@ jobs:
   test-e2e-release-self-hosted:
     runs-on: ubuntu-latest
     # After all the jobs runs
-    needs: [open-telemetry-ingest-docker-image-deploy, publish-mcp-server, copilot-docker-image-deploy, incoming-request-ingest-docker-image-deploy, fluent-ingest-docker-image-deploy, docs-docker-image-deploy, api-reference-docker-image-deploy, workflow-docker-image-deploy, llm-docker-image-deploy, accounts-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, dashboard-docker-image-deploy, haraka-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, worker-docker-image-deploy, otel-collector-docker-image-deploy, probe-docker-image-deploy, status-page-docker-image-deploy, test-docker-image-deploy, test-server-docker-image-deploy, publish-npm-packages, e2e-docker-image-deploy, helm-chart-deploy, generate-build-number, nginx-docker-image-deploy]
+    needs: [open-telemetry-ingest-docker-image-deploy, publish-mcp-server, copilot-docker-image-deploy, incoming-request-ingest-docker-image-deploy, fluent-ingest-docker-image-deploy, docs-docker-image-deploy, api-reference-docker-image-deploy, workflow-docker-image-deploy, llm-docker-image-deploy, accounts-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, dashboard-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, worker-docker-image-deploy, otel-collector-docker-image-deploy, probe-docker-image-deploy, status-page-docker-image-deploy, test-docker-image-deploy, test-server-docker-image-deploy, publish-npm-packages, e2e-docker-image-deploy, helm-chart-deploy, generate-build-number, nginx-docker-image-deploy]
     env:
       CI_PIPELINE_ID: ${{github.run_number}}
     steps:


@@ -144,6 +144,7 @@ jobs:
       - name: Install dependencies and build
         run: |
           cd MCP
+          npm update @oneuptime/common
           npm install
           npm run build
@@ -1146,67 +1147,6 @@ jobs:
           GIT_SHA=${{ github.sha }}
           APP_VERSION=7.0.${{needs.generate-build-number.outputs.build_number}}
-  haraka-docker-image-deploy:
-    needs: generate-build-number
-    runs-on: ubuntu-latest
-    steps:
-      - name: Docker Meta
-        id: meta
-        uses: docker/metadata-action@v4
-        with:
-          images: |
-            oneuptime/haraka
-            ghcr.io/oneuptime/haraka
-          tags: |
-            type=raw,value=test,enable=true
-            type=semver,value=7.0.${{needs.generate-build-number.outputs.build_number}}-test,pattern={{version}},enable=true
-      - uses: actions/checkout@v4
-        with:
-          ref: ${{ github.ref }}
-      - uses: actions/setup-node@v4
-        with:
-          node-version: latest
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v2
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
-      - name: Generate Dockerfile from Dockerfile.tpl
-        run: npm run prerun
-      # Build and deploy haraka.
-      - name: Login to Docker Hub
-        uses: docker/login-action@v2.2.0
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_PASSWORD }}
-      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v2.2.0
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build and push
-        uses: docker/build-push-action@v4
-        with:
-          file: ./Haraka/Dockerfile
-          context: .
-          platforms: linux/amd64,linux/arm64
-          push: true
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          build-args: |
-            GIT_SHA=${{ github.sha }}
-            APP_VERSION=7.0.${{needs.generate-build-number.outputs.build_number}}
   dashboard-docker-image-deploy:
     needs: generate-build-number
     runs-on: ubuntu-latest
@@ -1768,7 +1708,7 @@ jobs:
   test-helm-chart:
     runs-on: ubuntu-latest
-    needs: [infrastructure-agent-deploy, publish-mcp-server, llm-docker-image-deploy, publish-terraform-provider, open-telemetry-ingest-docker-image-deploy, copilot-docker-image-deploy, docs-docker-image-deploy, worker-docker-image-deploy, workflow-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, api-reference-docker-image-deploy, test-server-docker-image-deploy, test-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, probe-docker-image-deploy, haraka-docker-image-deploy, dashboard-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, accounts-docker-image-deploy, otel-collector-docker-image-deploy, status-page-docker-image-deploy, nginx-docker-image-deploy, e2e-docker-image-deploy, fluent-ingest-docker-image-deploy, incoming-request-ingest-docker-image-deploy]
+    needs: [infrastructure-agent-deploy, publish-mcp-server, llm-docker-image-deploy, publish-terraform-provider, open-telemetry-ingest-docker-image-deploy, copilot-docker-image-deploy, docs-docker-image-deploy, worker-docker-image-deploy, workflow-docker-image-deploy, isolated-vm-docker-image-deploy, home-docker-image-deploy, api-reference-docker-image-deploy, test-server-docker-image-deploy, test-docker-image-deploy, probe-ingest-docker-image-deploy, server-monitor-ingest-docker-image-deploy, probe-docker-image-deploy, dashboard-docker-image-deploy, admin-dashboard-docker-image-deploy, app-docker-image-deploy, accounts-docker-image-deploy, otel-collector-docker-image-deploy, status-page-docker-image-deploy, nginx-docker-image-deploy, e2e-docker-image-deploy, fluent-ingest-docker-image-deploy, incoming-request-ingest-docker-image-deploy]
     env:
       CI_PIPELINE_ID: ${{github.run_number}}
     steps:

.gitignore vendored

@@ -86,9 +86,6 @@ Backups/*.tar
 .env
-Haraka/dkim/keys/private_base64.txt
-Haraka/dkim/keys/public_base64.txt
 .eslintcache
 HelmChart/Values/*.values.yaml
@@ -129,3 +126,4 @@ terraform-provider-example/**
 MCP/build/
 MCP/.env
 MCP/node_modules
+Dashboard/public/sw.js


@@ -55,6 +55,7 @@
     "@types/react-highlight": "^0.12.8",
     "@types/react-syntax-highlighter": "^15.5.13",
     "@types/uuid": "^8.3.4",
+    "@types/web-push": "^3.6.4",
     "acme-client": "^5.3.0",
     "airtable": "^0.12.2",
     "axios": "^1.7.2",
@@ -74,7 +75,6 @@
     "json5": "^2.2.3",
     "jsonwebtoken": "^9.0.0",
     "jwt-decode": "^4.0.0",
-    "lodash": "^4.17.21",
     "marked": "^12.0.2",
     "moment": "^2.30.1",
     "moment-timezone": "^0.5.45",
@@ -118,6 +118,7 @@
     "universal-cookie": "^7.2.1",
     "use-async-effect": "^2.2.6",
     "uuid": "^8.3.2",
+    "web-push": "^3.6.7",
     "zod": "^3.25.30"
   },
   "devDependencies": {
@@ -132,7 +133,6 @@
     "@types/jest": "^28.1.4",
     "@types/json2csv": "^5.0.3",
     "@types/jsonwebtoken": "^8.5.9",
-    "@types/lodash": "^4.14.202",
     "@types/node": "^17.0.45",
     "@types/node-cron": "^3.0.7",
     "@types/nodemailer": "^6.4.7",


@@ -59,6 +59,7 @@
     "@types/react-highlight": "^0.12.8",
     "@types/react-syntax-highlighter": "^15.5.13",
     "@types/uuid": "^8.3.4",
+    "@types/web-push": "^3.6.4",
     "acme-client": "^5.3.0",
     "airtable": "^0.12.2",
     "axios": "^1.7.2",
@@ -78,7 +79,6 @@
     "json5": "^2.2.3",
     "jsonwebtoken": "^9.0.0",
     "jwt-decode": "^4.0.0",
-    "lodash": "^4.17.21",
     "marked": "^12.0.2",
     "moment": "^2.30.1",
     "moment-timezone": "^0.5.45",
@@ -122,6 +122,7 @@
     "universal-cookie": "^7.2.1",
     "use-async-effect": "^2.2.6",
     "uuid": "^8.3.2",
+    "web-push": "^3.6.7",
     "zod": "^3.25.30"
   },
   "devDependencies": {
@@ -136,7 +137,6 @@
     "@types/jest": "^28.1.4",
     "@types/json2csv": "^5.0.3",
     "@types/jsonwebtoken": "^8.5.9",
-    "@types/lodash": "^4.14.202",
     "@types/node": "^17.0.45",
     "@types/node-cron": "^3.0.7",
     "@types/nodemailer": "^6.4.7",


@@ -58,6 +58,7 @@
     "@types/react-highlight": "^0.12.8",
     "@types/react-syntax-highlighter": "^15.5.13",
     "@types/uuid": "^8.3.4",
+    "@types/web-push": "^3.6.4",
     "acme-client": "^5.3.0",
     "airtable": "^0.12.2",
     "axios": "^1.7.2",
@@ -77,7 +78,6 @@
     "json5": "^2.2.3",
     "jsonwebtoken": "^9.0.0",
     "jwt-decode": "^4.0.0",
-    "lodash": "^4.17.21",
     "marked": "^12.0.2",
     "moment": "^2.30.1",
     "moment-timezone": "^0.5.45",
@@ -121,6 +121,7 @@
     "universal-cookie": "^7.2.1",
     "use-async-effect": "^2.2.6",
     "uuid": "^8.3.2",
+    "web-push": "^3.6.7",
     "zod": "^3.25.30"
   },
   "devDependencies": {
@@ -135,7 +136,6 @@
     "@types/jest": "^28.1.4",
     "@types/json2csv": "^5.0.3",
     "@types/jsonwebtoken": "^8.5.9",
-    "@types/lodash": "^4.14.202",
     "@types/node": "^17.0.45",
     "@types/node-cron": "^3.0.7",
     "@types/nodemailer": "^6.4.7",


@@ -21,7 +21,7 @@ import React, { FunctionComponent, ReactElement, useEffect } from "react";
 const Settings: FunctionComponent = (): ReactElement => {
   const [emailServerType, setemailServerType] = React.useState<EmailServerType>(
-    EmailServerType.Internal,
+    EmailServerType.CustomSMTP,
   );
   const [isLoading, setIsLoading] = React.useState<boolean>(true);
@@ -43,7 +43,7 @@ const Settings: FunctionComponent = (): ReactElement => {
     if (globalConfig) {
       setemailServerType(
-        globalConfig.emailServerType || EmailServerType.Internal,
+        globalConfig.emailServerType || EmailServerType.CustomSMTP,
       );
     }
@@ -127,7 +127,7 @@
       />
       <CardModelDetail
-        name="Internal SMTP Settings"
+        name="Email Server Settings"
         cardProps={{
           title: "Email Server Settings",
           description:
@@ -172,7 +172,7 @@
         cardProps={{
           title: "Custom Email and SMTP Settings",
           description:
-            "If you have not enabled Internal SMTP server to send emails. Please configure your SMTP server here.",
+            "Please configure your SMTP server here to send emails.",
         }}
         isEditable={true}
         editButtonText="Edit SMTP Config"


@@ -1,10 +1,8 @@
 import Hostname from "Common/Types/API/Hostname";
 import TwilioConfig from "Common/Types/CallAndSMS/TwilioConfig";
-import Email from "Common/Types/Email";
 import EmailServer from "Common/Types/Email/EmailServer";
 import BadDataException from "Common/Types/Exception/BadDataException";
 import ObjectID from "Common/Types/ObjectID";
-import Port from "Common/Types/Port";
 import { AdminDashboardClientURL } from "Common/Server/EnvironmentConfig";
 import GlobalConfigService from "Common/Server/Services/GlobalConfigService";
 import GlobalConfig, {
@@ -12,24 +10,6 @@
 } from "Common/Models/DatabaseModels/GlobalConfig";
 import Phone from "Common/Types/Phone";
-export const InternalSmtpPassword: string =
-  process.env["INTERNAL_SMTP_PASSWORD"] || "";
-export const InternalSmtpHost: Hostname = new Hostname(
-  process.env["INTERNAL_SMTP_HOST"] || "haraka",
-);
-export const InternalSmtpPort: Port = new Port(2525);
-export const InternalSmtpSecure: boolean = false;
-export const InternalSmtpEmail: Email = new Email(
-  process.env["INTERNAL_SMTP_EMAIL"] || "noreply@oneuptime.com",
-);
-export const InternalSmtpFromName: string =
-  process.env["INTERNAL_SMTP_FROM_NAME"] || "OneUptime";
 type GetGlobalSMTPConfig = () => Promise<EmailServer | null>;
 export const getGlobalSMTPConfig: GetGlobalSMTPConfig =
@@ -132,10 +112,10 @@
     });
     if (!globalConfig) {
-      return EmailServerType.Internal;
+      return EmailServerType.CustomSMTP;
     }
-    return globalConfig.emailServerType || EmailServerType.Internal;
+    return globalConfig.emailServerType || EmailServerType.CustomSMTP;
   };
 export interface SendGridConfig {


@@ -1,10 +1,4 @@
 import {
-  InternalSmtpEmail,
-  InternalSmtpFromName,
-  InternalSmtpHost,
-  InternalSmtpPassword,
-  InternalSmtpPort,
-  InternalSmtpSecure,
   SendGridConfig,
   getEmailServerType,
   getGlobalSMTPConfig,
@@ -202,19 +196,6 @@
     };
   }
-  public static getInternalEmailServer(): EmailServer {
-    return {
-      id: undefined,
-      username: InternalSmtpEmail.toString(),
-      password: InternalSmtpPassword,
-      host: InternalSmtpHost,
-      port: InternalSmtpPort,
-      fromEmail: InternalSmtpEmail,
-      fromName: InternalSmtpFromName,
-      secure: InternalSmtpSecure,
-    };
-  }
   public static async getGlobalFromEmail(): Promise<Email> {
     const emailServer: EmailServer | null = await this.getGlobalSmtpSettings();
@@ -540,17 +521,6 @@
       options.emailServer = globalEmailServer;
     }
-    if (
-      emailServerType === EmailServerType.Internal &&
-      (!options || !options.emailServer)
-    ) {
-      if (!options) {
-        options = {};
-      }
-      options.emailServer = this.getInternalEmailServer();
-    }
     if (options && options.emailServer && emailLog) {
       emailLog.fromEmail = options.emailServer.fromEmail;
     }

App/package-lock.json generated

@@ -65,6 +65,7 @@
         "@types/react-highlight": "^0.12.8",
         "@types/react-syntax-highlighter": "^15.5.13",
         "@types/uuid": "^8.3.4",
+        "@types/web-push": "^3.6.4",
         "acme-client": "^5.3.0",
         "airtable": "^0.12.2",
         "axios": "^1.7.2",
@@ -84,7 +85,6 @@
         "json5": "^2.2.3",
         "jsonwebtoken": "^9.0.0",
         "jwt-decode": "^4.0.0",
-        "lodash": "^4.17.21",
         "marked": "^12.0.2",
         "moment": "^2.30.1",
         "moment-timezone": "^0.5.45",
@@ -128,6 +128,7 @@
         "universal-cookie": "^7.2.1",
         "use-async-effect": "^2.2.6",
         "uuid": "^8.3.2",
+        "web-push": "^3.6.7",
         "zod": "^3.25.30"
       },
       "devDependencies": {
@@ -142,7 +143,6 @@
         "@types/jest": "^28.1.4",
         "@types/json2csv": "^5.0.3",
        "@types/jsonwebtoken": "^8.5.9",
-        "@types/lodash": "^4.14.202",
         "@types/node": "^17.0.45",
         "@types/node-cron": "^3.0.7",
         "@types/nodemailer": "^6.4.7",


@@ -64,6 +64,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
 @Entity({
   name: "AlertOwnerTeam",
 })
+@Index(["alertId", "teamId", "projectId"])
 export default class AlertOwnerTeam extends BaseModel {
   @ColumnAccessControl({
     create: [


@@ -63,6 +63,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
 @Entity({
   name: "AlertOwnerUser",
 })
+@Index(["alertId", "userId", "projectId"])
 export default class AlertOwnerUser extends BaseModel {
   @ColumnAccessControl({
     create: [


@@ -76,6 +76,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
 @Entity({
   name: "AlertSeverity",
 })
+@Index(["projectId", "order"])
 export default class AlertSeverity extends BaseModel {
   @ColumnAccessControl({
     create: [


@@ -76,6 +76,10 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "AlertState",
})
@Index(["projectId", "isCreatedState"])
@Index(["projectId", "isResolvedState"])
@Index(["projectId", "isAcknowledgedState"])
@Index(["projectId", "order"])
export default class AlertState extends BaseModel {
@ColumnAccessControl({
create: [

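The composite `@Index` decorators added above each compile down to a multi-column `CREATE INDEX`, as the generated migrations later in this diff show. The sketch below (helper name is hypothetical, not OneUptime code) illustrates the SQL shape; note that column order matters, since such an index only serves queries filtering on a leftmost prefix of the listed columns.

```typescript
// Hypothetical helper illustrating the SQL a composite
// @Index(["projectId", "order"]) on AlertState compiles down to.
// TypeORM generates a hashed index name; a fixed name is used here.
function createIndexSql(
  indexName: string,
  table: string,
  columns: string[],
): string {
  const quotedColumns: string = columns.map((c: string) => `"${c}"`).join(", ");
  return `CREATE INDEX "${indexName}" ON "${table}" (${quotedColumns})`;
}

// A query filtering on "projectId" alone can use this index (leftmost
// prefix); a query filtering only on "order" cannot.
const sql: string = createIndexSql("IDX_alert_state_order", "AlertState", [
  "projectId",
  "order",
]);
```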

@@ -60,6 +60,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "AlertStateTimeline",
})
@Index(["alertId", "startsAt"])
@TableMetadata({
tableName: "AlertStateTimeline",
singularName: "Alert State Timeline",


@@ -17,7 +17,6 @@ import Port from "../../Types/Port";
import { Column, Entity } from "typeorm";
export enum EmailServerType {
Internal = "Internal",
Sendgrid = "Sendgrid",
CustomSMTP = "Custom SMTP",
}


@@ -64,6 +64,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "IncidentOwnerTeam",
})
@Index(["incidentId", "teamId", "projectId"])
export default class IncidentOwnerTeam extends BaseModel {
@ColumnAccessControl({
create: [


@@ -63,6 +63,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "IncidentOwnerUser",
})
@Index(["incidentId", "userId", "projectId"])
export default class IncidentOwnerUser extends BaseModel {
@ColumnAccessControl({
create: [


@@ -76,6 +76,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "IncidentSeverity",
})
@Index(["projectId", "order"])
export default class IncidentSeverity extends BaseModel {
@ColumnAccessControl({
create: [


@@ -76,6 +76,9 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "IncidentState",
})
@Index(["projectId", "isCreatedState"])
@Index(["projectId", "isResolvedState"])
@Index(["projectId", "order"])
export default class IncidentState extends BaseModel {
@ColumnAccessControl({
create: [


@@ -24,6 +24,8 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@EnableDocumentation()
@CanAccessIfCanReadOn("incident")
@TenantColumn("projectId")
@Index(["incidentId", "startsAt"]) // Composite index for efficient incident timeline queries
@Index(["incidentId", "projectId", "startsAt"]) // Alternative composite index including project
@TableAccessControl({
create: [
Permission.ProjectOwner,
@@ -60,6 +62,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "IncidentStateTimeline",
})
@Index(["incidentId", "startsAt"])
@TableMetadata({
tableName: "IncidentStateTimeline",
singularName: "Incident State Timeline",


@@ -57,10 +57,10 @@ import TelemetryService from "./TelemetryService";
],
})
@EnableWorkflow({
-  create: true,
-  delete: true,
-  update: true,
-  read: true,
+  create: false,
+  delete: false,
+  update: false,
+  read: false,
})
@CrudApiEndpoint(new Route("/metric-type"))
@SlugifyColumn("name", "slug")


@@ -72,6 +72,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "MonitorOwnerTeam",
})
@Index(["monitorId", "teamId", "projectId"])
export default class MonitorOwnerTeam extends BaseModel {
@ColumnAccessControl({
create: [


@@ -71,6 +71,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "MonitorOwnerUser",
})
@Index(["monitorId", "userId", "projectId"])
export default class MonitorOwnerUser extends BaseModel {
@ColumnAccessControl({
create: [


@@ -24,6 +24,8 @@ export type MonitorStepProbeResponse = Dictionary<ProbeMonitorResponse>;
@EnableDocumentation()
@TenantColumn("projectId")
@Index(["monitorId", "probeId"]) // Composite index for efficient monitor-probe relationship queries
@Index(["monitorId", "projectId"]) // Alternative index for monitor queries within project
@TableAccessControl({
create: [
Permission.ProjectOwner,


@@ -76,6 +76,8 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "MonitorStatus",
})
@Index(["projectId", "isOperationalState"])
@Index(["projectId", "isOfflineState"])
export default class MonitorStatus extends BaseModel {
@ColumnAccessControl({
create: [


@@ -25,6 +25,8 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@EnableDocumentation()
@CanAccessIfCanReadOn("monitor")
@TenantColumn("projectId")
@Index(["monitorId", "projectId", "startsAt"]) // Composite index for efficient timeline queries
@Index(["monitorId", "startsAt"]) // Alternative index for monitor-specific timeline queries
@TableAccessControl({
create: [
Permission.ProjectOwner,
@@ -62,6 +64,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "MonitorStatusTimeline",
})
@Index(["monitorId", "startsAt"])
@TableMetadata({
tableName: "MonitorStatusTimeline",
singularName: "Monitor Status Event",


@@ -51,6 +51,9 @@ import Alert from "./Alert";
@Entity({
name: "OnCallDutyPolicyExecutionLogTimeline",
})
@Index(["onCallDutyPolicyExecutionLogId", "createdAt"])
@Index(["projectId", "createdAt"])
@Index(["alertSentToUserId", "projectId"])
@TableMetadata({
tableName: "OnCallDutyPolicyExecutionLogTimeline",
singularName: "On-Call Duty Execution Log Timeline",


@@ -72,6 +72,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "OnCallDutyPolicyOwnerTeam",
})
@Index(["onCallDutyPolicyId", "teamId", "projectId"])
export default class OnCallDutyPolicyOwnerTeam extends BaseModel {
@ColumnAccessControl({
create: [


@@ -71,6 +71,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "OnCallDutyPolicyOwnerUser",
})
@Index(["onCallDutyPolicyId", "userId", "projectId"])
export default class OnCallDutyPolicyOwnerUser extends BaseModel {
@ColumnAccessControl({
create: [


@@ -64,6 +64,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "ScheduledMaintenanceOwnerTeam",
})
@Index(["scheduledMaintenanceId", "teamId", "projectId"])
export default class ScheduledMaintenanceOwnerTeam extends BaseModel {
@ColumnAccessControl({
create: [


@@ -63,6 +63,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "ScheduledMaintenanceOwnerUser",
})
@Index(["scheduledMaintenanceId", "userId", "projectId"])
export default class ScheduledMaintenanceOwnerUser extends BaseModel {
@ColumnAccessControl({
create: [


@@ -76,6 +76,9 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "ScheduledMaintenanceState",
})
@Index(["projectId", "order"])
@Index(["projectId", "isOngoingState"])
@Index(["projectId", "isEndedState"])
export default class ScheduledMaintenanceState extends BaseModel {
@ColumnAccessControl({
create: [


@@ -59,6 +59,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "ScheduledMaintenanceStateTimeline",
})
@Index(["scheduledMaintenanceId", "startsAt"])
@TableMetadata({
tableName: "ScheduledMaintenanceStateTimeline",
icon: IconProp.List,


@@ -72,6 +72,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "StatusPageOwnerTeam",
})
@Index(["statusPageId", "teamId", "projectId"])
export default class StatusPageOwnerTeam extends BaseModel {
@ColumnAccessControl({
create: [


@@ -71,6 +71,7 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "StatusPageOwnerUser",
})
@Index(["statusPageId", "userId", "projectId"])
export default class StatusPageOwnerUser extends BaseModel {
@ColumnAccessControl({
create: [


@@ -52,6 +52,9 @@ import { Column, Entity, Index, JoinColumn, ManyToOne } from "typeorm";
@Entity({
name: "UserOnCallLogTimeline",
})
@Index(["userId", "createdAt"])
@Index(["onCallDutyPolicyExecutionLogId", "status"])
@Index(["projectId", "status"])
@TableMetadata({
tableName: "UserOnCallLogTimeline",
singularName: "User On-Call Log Timeline",


@@ -0,0 +1,144 @@
#!/usr/bin/env node
/**
* Universal Service Worker Generator for OneUptime Services
*
* This script can be used by any OneUptime service to generate
* a service worker from a template with dynamic versioning.
*
* Usage:
* node generate-service-worker.js [template-path] [output-path]
*
* Example:
* node generate-service-worker.js sw.js.template public/sw.js
*/
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
// Default values
const DEFAULT_APP_VERSION = '1.0.0';
const DEFAULT_GIT_SHA = 'local';
/**
* Get app version from environment or package.json
*/
function getAppVersion(packageJsonPath) {
// First try environment variable (Docker build)
if (process.env.APP_VERSION) {
return process.env.APP_VERSION;
}
// Fallback to package.json version
try {
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
return packageJson.version || DEFAULT_APP_VERSION;
} catch (error) {
console.warn('Could not read package.json, using default version');
return DEFAULT_APP_VERSION;
}
}
/**
* Get git SHA from environment
*/
function getGitSha() {
// Try environment variable first (Docker build)
if (process.env.GIT_SHA) {
return process.env.GIT_SHA.substring(0, 8); // Short SHA
}
// Try to get from git command if available
try {
const { execSync } = require('child_process');
const gitSha = execSync('git rev-parse --short HEAD', { encoding: 'utf8' }).trim();
return gitSha;
} catch (error) {
// Fallback to timestamp-based hash for local development
const timestamp = Date.now().toString();
const hash = crypto.createHash('md5').update(timestamp).digest('hex');
return hash.substring(0, 8);
}
}
/**
* Generate service worker from template
*/
function generateServiceWorker(templatePath, outputPath, serviceName = 'OneUptime') {
// Check if template exists
if (!fs.existsSync(templatePath)) {
console.error('❌ Service worker template not found:', templatePath);
process.exit(1);
}
// Read template
const template = fs.readFileSync(templatePath, 'utf8');
// Get version information
const packageJsonPath = path.join(path.dirname(templatePath), 'package.json');
const appVersion = getAppVersion(packageJsonPath);
const gitSha = getGitSha();
const buildTimestamp = new Date().toISOString();
console.log(`🔧 Generating service worker for ${serviceName}...`);
console.log(` App Version: ${appVersion}`);
console.log(` Git SHA: ${gitSha}`);
console.log(` Build Time: ${buildTimestamp}`);
// Replace placeholders
const generatedContent = template
.replace(/\{\{APP_VERSION\}\}/g, appVersion)
.replace(/\{\{GIT_SHA\}\}/g, gitSha)
.replace(/\{\{BUILD_TIMESTAMP\}\}/g, buildTimestamp)
.replace(/\{\{SERVICE_NAME\}\}/g, serviceName);
// Add generation comment at the top
const header = `/*
* Generated Service Worker for ${serviceName}
*
* Generated at: ${buildTimestamp}
* App Version: ${appVersion}
* Git SHA: ${gitSha}
*
* DO NOT EDIT THIS FILE DIRECTLY
* Edit the template file instead and run the generator script
*/
`;
const finalContent = header + generatedContent;
// Ensure output directory exists
const outputDir = path.dirname(outputPath);
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir, { recursive: true });
}
// Write generated service worker
fs.writeFileSync(outputPath, finalContent, 'utf8');
console.log('✅ Service worker generated successfully:', outputPath);
console.log(` Cache version: oneuptime-v${appVersion}-${gitSha}`);
}
// Command line interface
if (require.main === module) {
const args = process.argv.slice(2);
const templatePath = args[0] || 'sw.js.template';
const outputPath = args[1] || 'public/sw.js';
const serviceName = args[2] || path.basename(process.cwd());
try {
// Resolve paths relative to current working directory
const resolvedTemplatePath = path.resolve(templatePath);
const resolvedOutputPath = path.resolve(outputPath);
generateServiceWorker(resolvedTemplatePath, resolvedOutputPath, serviceName);
} catch (error) {
console.error('❌ Failed to generate service worker:', error.message);
process.exit(1);
}
}
module.exports = { generateServiceWorker, getAppVersion, getGitSha };
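The generator's chained `.replace()` calls can be reduced to one generic substitution step. The sketch below (function name hypothetical) mirrors the global `{{KEY}}` replacement it performs before prepending the generated header:

```typescript
// Hypothetical distillation of the generator's replace chain:
// substitute every {{KEY}} occurrence globally for each variable.
function renderTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return Object.entries(vars).reduce(
    (out: string, [key, value]: [string, string]) =>
      out.replace(new RegExp(`\\{\\{${key}\\}\\}`, "g"), value),
    template,
  );
}

// Mirrors the cache version string the generator logs on success.
const cacheVersion: string = renderTemplate(
  "oneuptime-v{{APP_VERSION}}-{{GIT_SHA}}",
  { APP_VERSION: "1.0.0", GIT_SHA: "abc12345" },
);
```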


@@ -1,14 +1,17 @@
import { MigrationInterface, QueryRunner } from "typeorm";
export class MigrationName1753343522987 implements MigrationInterface {
-  public name = 'MigrationName1753343522987'
+  public name = "MigrationName1753343522987";

-  public async up(queryRunner: QueryRunner): Promise<void> {
-    await queryRunner.query(`ALTER TABLE "UserPush" ALTER COLUMN "deviceToken" TYPE text`);
-  }
-
-  public async down(queryRunner: QueryRunner): Promise<void> {
-    await queryRunner.query(`ALTER TABLE "UserPush" ALTER COLUMN "deviceToken" TYPE character varying(500)`);
-  }
+  public async up(queryRunner: QueryRunner): Promise<void> {
+    await queryRunner.query(
+      `ALTER TABLE "UserPush" ALTER COLUMN "deviceToken" TYPE text`,
+    );
+  }
+
+  public async down(queryRunner: QueryRunner): Promise<void> {
+    await queryRunner.query(
+      `ALTER TABLE "UserPush" ALTER COLUMN "deviceToken" TYPE character varying(500)`,
+    );
+  }
}


@@ -0,0 +1,47 @@
import { MigrationInterface, QueryRunner } from "typeorm";
export class MigrationName1753377161288 implements MigrationInterface {
public name = "MigrationName1753377161288";
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE INDEX "IDX_16db786b562f1db40c93d463c7" ON "IncidentStateTimeline" ("incidentId", "projectId", "startsAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_410cf30b966f88c287d368aa48" ON "IncidentStateTimeline" ("incidentId", "startsAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_ac648c5f1961bc1d5ec1ba21bd" ON "MonitorProbe" ("monitorId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_bde10e600047b06718db90a636" ON "MonitorProbe" ("monitorId", "probeId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_570f164ca5b3559eb8555eb1b1" ON "MonitorStatusTimeline" ("monitorId", "startsAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_466d392af405ccf2e8b552eb0e" ON "MonitorStatusTimeline" ("monitorId", "projectId", "startsAt") `,
);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`DROP INDEX "public"."IDX_466d392af405ccf2e8b552eb0e"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_570f164ca5b3559eb8555eb1b1"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_bde10e600047b06718db90a636"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_ac648c5f1961bc1d5ec1ba21bd"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_410cf30b966f88c287d368aa48"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_16db786b562f1db40c93d463c7"`,
);
}
}
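A convention worth noting in the generated migrations above: `down()` drops the indexes in the reverse of the order `up()` created them. A hypothetical sketch of producing the paired statements:

```typescript
// Hypothetical helper: given index names in up() creation order,
// produce the matching DROP INDEX statements in reverse order, as
// the generated migrations in this diff do.
function dropStatementsFor(indexNames: string[]): string[] {
  return [...indexNames]
    .reverse()
    .map((name: string) => `DROP INDEX "public"."${name}"`);
}

const drops: string[] = dropStatementsFor([
  "IDX_16db786b562f1db40c93d463c7",
  "IDX_410cf30b966f88c287d368aa48",
]);
```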


@@ -0,0 +1,131 @@
import { MigrationInterface, QueryRunner } from "typeorm";
export class AddPerformanceIndexes1753378524062 implements MigrationInterface {
public name = "AddPerformanceIndexes1753378524062";
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "rotation" SET DEFAULT '{"_type":"Recurring","value":{"intervalType":"Day","intervalCount":{"_type":"PositiveNumber","value":1}}}'`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "restrictionTimes" SET DEFAULT '{"_type":"RestrictionTimes","value":{"restictionType":"None","dayRestrictionTimes":null,"weeklyRestrictionTimes":[]}}'`,
);
await queryRunner.query(
`CREATE INDEX "IDX_3c2f8998deba67cedb958fc08f" ON "IncidentSeverity" ("projectId", "order") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_2283c2d1aab23419b784db0d84" ON "IncidentState" ("projectId", "order") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_4ed23cf5e6614ee930972ab6b5" ON "IncidentState" ("projectId", "isResolvedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_b231eb3cdc945e53947495cf76" ON "IncidentState" ("projectId", "isCreatedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_5c9760b0f7df9fe68efd52151d" ON "MonitorStatus" ("projectId", "isOfflineState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_9c64d2b5df8c5cac0ece90d899" ON "MonitorStatus" ("projectId", "isOperationalState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_4490b10d3394a9be5f27f8fc3b" ON "IncidentOwnerTeam" ("incidentId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_1d8d2229e31e4ec13ec99c79ae" ON "IncidentOwnerUser" ("incidentId", "userId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_7b7272644aab237d503ed3429a" ON "MonitorOwnerTeam" ("monitorId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_6f6246149ab744fd62ada06ee5" ON "MonitorOwnerUser" ("monitorId", "userId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_c98e7e9e31d674cf5c47b15f36" ON "AlertSeverity" ("projectId", "order") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_3bb6dc217814170a3b37e21bf5" ON "AlertState" ("projectId", "order") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_b20be7b2ca1a6dc602da305f8a" ON "AlertState" ("projectId", "isAcknowledgedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_ae2854ea86740fdd56eaf2fea9" ON "AlertState" ("projectId", "isResolvedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_91ad158d170a9b51a2046fcc87" ON "AlertState" ("projectId", "isCreatedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_d640454e87b3dd4f24f9c527d2" ON "AlertStateTimeline" ("alertId", "startsAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_dfbcaebaa02d06a556fd2e155c" ON "AlertOwnerTeam" ("alertId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_042a7841d65141fb940de9d881" ON "AlertOwnerUser" ("alertId", "userId", "projectId") `,
);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`DROP INDEX "public"."IDX_042a7841d65141fb940de9d881"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_dfbcaebaa02d06a556fd2e155c"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_d640454e87b3dd4f24f9c527d2"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_91ad158d170a9b51a2046fcc87"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_ae2854ea86740fdd56eaf2fea9"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_b20be7b2ca1a6dc602da305f8a"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_3bb6dc217814170a3b37e21bf5"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_c98e7e9e31d674cf5c47b15f36"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_6f6246149ab744fd62ada06ee5"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_7b7272644aab237d503ed3429a"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_1d8d2229e31e4ec13ec99c79ae"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_4490b10d3394a9be5f27f8fc3b"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_9c64d2b5df8c5cac0ece90d899"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_5c9760b0f7df9fe68efd52151d"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_b231eb3cdc945e53947495cf76"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_4ed23cf5e6614ee930972ab6b5"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_2283c2d1aab23419b784db0d84"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_3c2f8998deba67cedb958fc08f"`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "restrictionTimes" SET DEFAULT '{"_type": "RestrictionTimes", "value": {"restictionType": "None", "dayRestrictionTimes": null, "weeklyRestrictionTimes": []}}'`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "rotation" SET DEFAULT '{"_type": "Recurring", "value": {"intervalType": "Day", "intervalCount": {"_type": "PositiveNumber", "value": 1}}}'`,
);
}
}


@@ -0,0 +1,119 @@
import { MigrationInterface, QueryRunner } from "typeorm";
export class MigrationName1753383711511 implements MigrationInterface {
public name = "MigrationName1753383711511";
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "rotation" SET DEFAULT '{"_type":"Recurring","value":{"intervalType":"Day","intervalCount":{"_type":"PositiveNumber","value":1}}}'`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "restrictionTimes" SET DEFAULT '{"_type":"RestrictionTimes","value":{"restictionType":"None","dayRestrictionTimes":null,"weeklyRestrictionTimes":[]}}'`,
);
await queryRunner.query(
`CREATE INDEX "IDX_b03e14b5a5fc9f5b8603283c88" ON "OnCallDutyPolicyExecutionLogTimeline" ("alertSentToUserId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_114e3f761691867aa919ab6b6e" ON "OnCallDutyPolicyExecutionLogTimeline" ("projectId", "createdAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_f34e1244e487f705e7c6b25831" ON "OnCallDutyPolicyExecutionLogTimeline" ("onCallDutyPolicyExecutionLogId", "createdAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_34f21c8ae164fb90be806818a8" ON "OnCallDutyPolicyOwnerTeam" ("onCallDutyPolicyId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_1539db4bbd6ada58abb940b058" ON "OnCallDutyPolicyOwnerUser" ("onCallDutyPolicyId", "userId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_00439dd14338c3ee4e81d0714a" ON "ScheduledMaintenanceState" ("projectId", "isEndedState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_7addde4d27f13be56651000df9" ON "ScheduledMaintenanceState" ("projectId", "isOngoingState") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_e84431ba010571147933477cff" ON "ScheduledMaintenanceState" ("projectId", "order") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_b737666365dbea2e4c914fc6d3" ON "ScheduledMaintenanceOwnerTeam" ("scheduledMaintenanceId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_a4621b7155a01292b92569549f" ON "ScheduledMaintenanceOwnerUser" ("scheduledMaintenanceId", "userId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_c4ac940ddb05242a166567edbb" ON "ScheduledMaintenanceStateTimeline" ("scheduledMaintenanceId", "startsAt") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_4873976169085f14bdc39e168d" ON "StatusPageOwnerTeam" ("statusPageId", "teamId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_a9f80dc4f648f0957ce695dc61" ON "StatusPageOwnerUser" ("statusPageId", "userId", "projectId") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_33ba145fe2826bb953e2ce9d3d" ON "UserOnCallLogTimeline" ("projectId", "status") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_90363cc35c22e377df8fdc5dfb" ON "UserOnCallLogTimeline" ("onCallDutyPolicyExecutionLogId", "status") `,
);
await queryRunner.query(
`CREATE INDEX "IDX_89cccd6782b1ee84d20e9690d0" ON "UserOnCallLogTimeline" ("userId", "createdAt") `,
);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`DROP INDEX "public"."IDX_89cccd6782b1ee84d20e9690d0"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_90363cc35c22e377df8fdc5dfb"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_33ba145fe2826bb953e2ce9d3d"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_a9f80dc4f648f0957ce695dc61"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_4873976169085f14bdc39e168d"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_c4ac940ddb05242a166567edbb"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_a4621b7155a01292b92569549f"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_b737666365dbea2e4c914fc6d3"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_e84431ba010571147933477cff"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_7addde4d27f13be56651000df9"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_00439dd14338c3ee4e81d0714a"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_1539db4bbd6ada58abb940b058"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_34f21c8ae164fb90be806818a8"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_f34e1244e487f705e7c6b25831"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_114e3f761691867aa919ab6b6e"`,
);
await queryRunner.query(
`DROP INDEX "public"."IDX_b03e14b5a5fc9f5b8603283c88"`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "restrictionTimes" SET DEFAULT '{"_type": "RestrictionTimes", "value": {"restictionType": "None", "dayRestrictionTimes": null, "weeklyRestrictionTimes": []}}'`,
);
await queryRunner.query(
`ALTER TABLE "OnCallDutyPolicyScheduleLayer" ALTER COLUMN "rotation" SET DEFAULT '{"_type": "Recurring", "value": {"intervalType": "Day", "intervalCount": {"_type": "PositiveNumber", "value": 1}}}'`,
);
}
}


@@ -143,6 +143,9 @@ import { MigrationName1752774923063 } from "./1752774923063-MigrationName";
import { MigrationName1753109689244 } from "./1753109689244-MigrationName";
import { AddEnableCustomSubscriberEmailNotificationFooterText1753131488925 } from "./1753131488925-AddEnableCustomSubscriberEmailNotificationFooterText";
import { MigrationName1753343522987 } from "./1753343522987-MigrationName";
import { MigrationName1753377161288 } from "./1753377161288-MigrationName";
import { AddPerformanceIndexes1753378524062 } from "./1753378524062-AddPerformanceIndexes";
import { MigrationName1753383711511 } from "./1753383711511-MigrationName";
export default [
InitialMigration,
@@ -289,5 +292,8 @@ export default [
MigrationName1752774923063,
MigrationName1753109689244,
AddEnableCustomSubscriberEmailNotificationFooterText1753131488925,
-  MigrationName1753343522987
+  MigrationName1753343522987,
+  MigrationName1753377161288,
+  AddPerformanceIndexes1753378524062,
+  MigrationName1753383711511,
];


@@ -16,6 +16,11 @@ import CaptureSpan from "../Utils/Telemetry/CaptureSpan";
export enum QueueName {
Workflow = "Workflow",
Worker = "Worker",
Telemetry = "Telemetry",
FluentIngest = "FluentIngest",
IncomingRequestIngest = "IncomingRequestIngest",
ServerMonitorIngest = "ServerMonitorIngest",
ProbeIngest = "ProbeIngest",
}
export type QueueJob = Job;
@@ -133,4 +138,81 @@ export default class Queue {
return jobAdded;
}
@CaptureSpan()
public static async getQueueSize(queueName: QueueName): Promise<number> {
const queue: BullQueue = this.getQueue(queueName);
const waitingCount: number = await queue.getWaitingCount();
const activeCount: number = await queue.getActiveCount();
const delayedCount: number = await queue.getDelayedCount();
return waitingCount + activeCount + delayedCount;
}
@CaptureSpan()
public static async getQueueStats(queueName: QueueName): Promise<{
waiting: number;
active: number;
completed: number;
failed: number;
delayed: number;
total: number;
}> {
const queue: BullQueue = this.getQueue(queueName);
const waitingCount: number = await queue.getWaitingCount();
const activeCount: number = await queue.getActiveCount();
const completedCount: number = await queue.getCompletedCount();
const failedCount: number = await queue.getFailedCount();
const delayedCount: number = await queue.getDelayedCount();
return {
waiting: waitingCount,
active: activeCount,
completed: completedCount,
failed: failedCount,
delayed: delayedCount,
total:
waitingCount +
activeCount +
completedCount +
failedCount +
delayedCount,
};
}
@CaptureSpan()
public static async getFailedJobs(
queueName: QueueName,
options?: {
start?: number;
end?: number;
},
): Promise<
Array<{
id: string;
name: string;
data: JSONObject;
failedReason: string;
processedOn: Date | null;
finishedOn: Date | null;
attemptsMade: number;
}>
> {
const queue: BullQueue = this.getQueue(queueName);
const start: number = options?.start || 0;
const end: number = options?.end || 100;
const failed: Job[] = await queue.getFailed(start, end);
return failed.map((job: Job) => {
return {
id: job.id || "unknown",
name: job.name || "unknown",
data: job.data as JSONObject,
failedReason: job.failedReason || "No reason provided",
processedOn: job.processedOn ? new Date(job.processedOn) : null,
finishedOn: job.finishedOn ? new Date(job.finishedOn) : null,
attemptsMade: job.attemptsMade || 0,
};
});
}
}
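`getQueueSize()` counts only the jobs still owed work (waiting + active + delayed), which is the number a KEDA metrics-api scaler would compare against its target; completed and failed jobs no longer need a worker. A minimal sketch of that arithmetic follows; the replica formula is an assumption about averageValue-style KEDA scaling, not code from this PR:

```typescript
interface QueueCounts {
  waiting: number;
  active: number;
  delayed: number;
  completed: number;
  failed: number;
}

// Pending work as getQueueSize() computes it.
function queueSize(counts: QueueCounts): number {
  return counts.waiting + counts.active + counts.delayed;
}

// Assumed approximation of KEDA averageValue scaling: one replica per
// `targetPerReplica` pending jobs, clamped to [minReplicas, maxReplicas].
function desiredReplicas(
  pending: number,
  targetPerReplica: number,
  minReplicas: number,
  maxReplicas: number,
): number {
  const raw: number = Math.ceil(pending / targetPerReplica);
  return Math.min(maxReplicas, Math.max(minReplicas, raw));
}
```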


@@ -38,6 +38,9 @@ export default class ClusterKeyAuthorization {
} else if (req.headers && req.headers["clusterkey"]) {
// Header keys are automatically transformed to lowercase
clusterKey = req.headers["clusterkey"] as string;
} else if (req.headers && req.headers["x-clusterkey"]) {
// KEDA TriggerAuthentication sends headers with X- prefix
clusterKey = req.headers["x-clusterkey"] as string;
} else if (req.body && req.body.clusterKey) {
clusterKey = req.body.clusterKey;
} else {
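The new branch exists because KEDA's TriggerAuthentication forwards custom headers with an `X-` prefix, so the ingest metrics endpoints see `x-clusterkey` rather than `clusterkey`. The lookup order can be distilled into a plain function (names hypothetical; the sketch assumes header keys are already lowercased, as Express does):

```typescript
type HeaderMap = Record<string, string | undefined>;

// Hypothetical distillation of ClusterKeyAuthorization's fallbacks:
// plain header first, then KEDA's X-prefixed variant, then the body.
function extractClusterKey(
  headers: HeaderMap,
  body?: { clusterKey?: string },
): string | null {
  if (headers["clusterkey"]) {
    return headers["clusterkey"];
  }
  if (headers["x-clusterkey"]) {
    // KEDA TriggerAuthentication sends headers with an X- prefix.
    return headers["x-clusterkey"];
  }
  if (body && body.clusterKey) {
    return body.clusterKey;
  }
  return null;
}
```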


@@ -54,6 +54,7 @@ import { MessageBlocksByWorkspaceType } from "./WorkspaceNotificationRuleService
import CaptureSpan from "../Utils/Telemetry/CaptureSpan";
import MetricType from "../../Models/DatabaseModels/MetricType";
import Dictionary from "../../Types/Dictionary";
import OnCallDutyPolicy from "../../Models/DatabaseModels/OnCallDutyPolicy";
export class Service extends DatabaseService<Model> {
public constructor() {
@@ -272,6 +273,7 @@ export class Service extends DatabaseService<Model> {
throw new BadDataException("currentAlertStateId is required");
}
// Get alert data for feed creation
const alert: Model | null = await this.findOneById({
id: createdItem.id,
select: {
@@ -304,147 +306,258 @@ export class Service extends DatabaseService<Model> {
throw new BadDataException("Alert not found");
}
const createdByUserId: ObjectID | undefined | null =
createdItem.createdByUserId || createdItem.createdByUser?.id;
// Execute core operations in parallel first
const coreOperations: Array<Promise<any>> = [];
// send message to workspaces - slack, teams, etc.
const workspaceResult: {
channelsCreated: Array<NotificationRuleWorkspaceChannel>;
} | null =
await AlertWorkspaceMessages.createChannelsAndInviteUsersToChannels({
projectId: createdItem.projectId,
alertId: createdItem.id!,
alertNumber: createdItem.alertNumber!,
});
// Create feed item asynchronously
coreOperations.push(this.createAlertFeedAsync(alert, createdItem));
logger.debug("Alert created. Workspace result:");
logger.debug(workspaceResult);
if (workspaceResult && workspaceResult.channelsCreated?.length > 0) {
// update alert with these channels.
await this.updateOneById({
id: createdItem.id!,
data: {
postUpdatesToWorkspaceChannels: workspaceResult.channelsCreated || [],
},
props: {
isRoot: true,
},
});
}
// Handle state change asynchronously
coreOperations.push(this.handleAlertStateChangeAsync(createdItem));
// Handle owner assignment asynchronously
if (
onCreate.createBy.miscDataProps &&
(onCreate.createBy.miscDataProps["ownerTeams"] ||
onCreate.createBy.miscDataProps["ownerUsers"])
) {
coreOperations.push(
this.addOwners(
createdItem.projectId,
createdItem.id,
(onCreate.createBy.miscDataProps["ownerUsers"] as Array<ObjectID>) ||
[],
(onCreate.createBy.miscDataProps["ownerTeams"] as Array<ObjectID>) ||
[],
false,
onCreate.createBy.props,
),
);
}
// Execute core operations in parallel with error handling
Promise.allSettled(coreOperations)
.then((coreResults: any[]) => {
// Log any errors from core operations
coreResults.forEach((result: any, index: number) => {
if (result.status === "rejected") {
logger.error(
`Core operation ${index} failed in AlertService.onCreateSuccess: ${result.reason}`,
);
}
});
// Handle on-call duty policies asynchronously
if (
createdItem.onCallDutyPolicies?.length &&
createdItem.onCallDutyPolicies?.length > 0
) {
this.executeAlertOnCallDutyPoliciesAsync(createdItem).catch(
(error: Error) => {
logger.error(
`On-call duty policy execution failed in AlertService.onCreateSuccess: ${error}`,
);
},
);
}
// Handle workspace operations after core operations complete
if (createdItem.projectId && createdItem.id) {
// Run workspace operations in background without blocking response
this.handleAlertWorkspaceOperationsAsync(createdItem).catch(
(error: Error) => {
logger.error(
`Workspace operations failed in AlertService.onCreateSuccess: ${error}`,
);
},
);
}
})
.catch((error: Error) => {
logger.error(
`Critical error in AlertService core operations: ${error}`,
);
});
return createdItem;
}
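The `onCreateSuccess` flow above kicks off independent operations, awaits none of them on the request path, and only logs rejections surfaced by `Promise.allSettled`. The pattern can be sketched in isolation (`runInBackground` and the `Logger` shape are illustrative, not part of the service):

```typescript
// Minimal sketch of the fire-and-forget pattern used in onCreateSuccess.
// The caller returns immediately; failures are logged, never thrown.
type Logger = { error: (msg: string) => void };

function runInBackground(
  operations: Array<Promise<unknown>>,
  logger: Logger,
): void {
  Promise.allSettled(operations)
    .then((results: Array<PromiseSettledResult<unknown>>) => {
      results.forEach(
        (result: PromiseSettledResult<unknown>, index: number) => {
          if (result.status === "rejected") {
            logger.error(`Core operation ${index} failed: ${result.reason}`);
          }
        },
      );
    })
    .catch((error: Error) => {
      logger.error(`Critical error: ${error}`);
    });
}
```

Because nothing here is awaited by the caller, a slow feed write or owner assignment no longer delays the API response that created the alert.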
@CaptureSpan()
private async handleAlertWorkspaceOperationsAsync(
createdItem: Model,
): Promise<void> {
try {
if (!createdItem.projectId || !createdItem.id) {
throw new BadDataException(
"projectId and id are required for workspace operations",
);
}
// send message to workspaces - slack, teams, etc.
const workspaceResult: {
channelsCreated: Array<NotificationRuleWorkspaceChannel>;
} | null =
await AlertWorkspaceMessages.createChannelsAndInviteUsersToChannels({
projectId: createdItem.projectId,
alertId: createdItem.id,
alertNumber: createdItem.alertNumber!,
});
logger.debug("Alert created. Workspace result:");
logger.debug(workspaceResult);
if (workspaceResult && workspaceResult.channelsCreated?.length > 0) {
// update alert with these channels.
await this.updateOneById({
id: createdItem.id,
data: {
postUpdatesToWorkspaceChannels:
workspaceResult.channelsCreated || [],
},
props: {
isRoot: true,
},
});
}
} catch (error) {
logger.error(`Error in handleAlertWorkspaceOperationsAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async createAlertFeedAsync(
alert: Model,
createdItem: Model,
): Promise<void> {
try {
const createdByUserId: ObjectID | undefined | null =
createdItem.createdByUserId || createdItem.createdByUser?.id;
let feedInfoInMarkdown: string = `#### 🚨 Alert ${createdItem.alertNumber?.toString()} Created:
**${createdItem.title || "No title provided."}**:
${createdItem.description || "No description provided."}
`;
if (alert.currentAlertState?.name) {
feedInfoInMarkdown += `🔴 **Alert State**: ${alert.currentAlertState.name} \n\n`;
}
if (alert.alertSeverity?.name) {
feedInfoInMarkdown += `⚠️ **Severity**: ${alert.alertSeverity.name} \n\n`;
}
if (alert.monitor) {
feedInfoInMarkdown += `🌎 **Resources Affected**:\n`;
const monitor: Monitor = alert.monitor;
feedInfoInMarkdown += `- [${monitor.name}](${(await MonitorService.getMonitorLinkInDashboard(createdItem.projectId!, monitor.id!)).toString()})\n`;
feedInfoInMarkdown += `\n\n`;
}
if (createdItem.rootCause) {
feedInfoInMarkdown += `\n
📄 **Root Cause**:
${createdItem.rootCause || "No root cause provided."}
`;
}
if (createdItem.remediationNotes) {
feedInfoInMarkdown += `\n
🎯 **Remediation Notes**:
${createdItem.remediationNotes || "No remediation notes provided."}
`;
}
const alertCreateMessageBlocks: Array<MessageBlocksByWorkspaceType> =
await AlertWorkspaceMessages.getAlertCreateMessageBlocks({
alertId: createdItem.id!,
projectId: createdItem.projectId!,
});
await AlertFeedService.createAlertFeedItem({
alertId: createdItem.id!,
projectId: createdItem.projectId!,
alertFeedEventType: AlertFeedEventType.AlertCreated,
displayColor: Red500,
feedInfoInMarkdown: feedInfoInMarkdown,
userId: createdByUserId || undefined,
workspaceNotification: {
appendMessageBlocks: alertCreateMessageBlocks,
sendWorkspaceNotification: true,
},
});
} catch (error) {
logger.error(`Error in createAlertFeedAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async handleAlertStateChangeAsync(createdItem: Model): Promise<void> {
try {
if (!createdItem.projectId || !createdItem.id) {
throw new BadDataException(
"projectId and id are required for state change",
);
}
await this.changeAlertState({
projectId: createdItem.projectId,
alertId: createdItem.id,
alertStateId: createdItem.currentAlertStateId!,
notifyOwners: false,
rootCause: createdItem.rootCause,
stateChangeLog: createdItem.createdStateLog,
props: {
isRoot: true,
},
});
} catch (error) {
logger.error(`Error in handleAlertStateChangeAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async executeAlertOnCallDutyPoliciesAsync(
createdItem: Model,
): Promise<void> {
try {
if (
createdItem.onCallDutyPolicies?.length &&
createdItem.onCallDutyPolicies?.length > 0
) {
// Execute all on-call policies in parallel
const policyPromises: Promise<void>[] =
createdItem.onCallDutyPolicies.map((policy: OnCallDutyPolicy) => {
return OnCallDutyPolicyService.executePolicy(
new ObjectID(policy["_id"] as string),
{
triggeredByAlertId: createdItem.id!,
userNotificationEventType:
UserNotificationEventType.AlertCreated,
},
);
});
await Promise.allSettled(policyPromises);
}
} catch (error) {
logger.error(`Error in executeAlertOnCallDutyPoliciesAsync: ${error}`);
throw error;
}
}
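`executeAlertOnCallDutyPoliciesAsync` uses `Promise.allSettled` rather than `Promise.all`, so one failing policy cannot short-circuit the rest. The difference can be seen in a standalone sketch (`runPolicies` is illustrative, not a service method):

```typescript
// allSettled lets every policy run to completion even when one rejects;
// Promise.all would reject as soon as the first policy failed.
async function runPolicies(
  policies: Array<() => Promise<string>>,
): Promise<Array<string>> {
  const settled: Array<PromiseSettledResult<string>> =
    await Promise.allSettled(policies.map((p) => p()));
  return settled.map((r: PromiseSettledResult<string>) =>
    r.status === "fulfilled"
      ? r.value
      : `failed:${(r.reason as Error).message}`,
  );
}
```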
@CaptureSpan()
public async getWorkspaceChannelForAlert(data: {
alertId: ObjectID;


@@ -59,9 +59,10 @@ import WorkspaceType from "../../Types/Workspace/WorkspaceType";
import { MessageBlocksByWorkspaceType } from "./WorkspaceNotificationRuleService";
import NotificationRuleWorkspaceChannel from "../../Types/Workspace/NotificationRules/NotificationRuleWorkspaceChannel";
import CaptureSpan from "../Utils/Telemetry/CaptureSpan";
import MetricType from "../../Models/DatabaseModels/MetricType";
import UpdateBy from "../Types/Database/UpdateBy";
import OnCallDutyPolicy from "../../Models/DatabaseModels/OnCallDutyPolicy";
import Dictionary from "../../Types/Dictionary";
// key is incidentId for this dictionary.
type UpdateCarryForward = Dictionary<{
@@ -544,6 +545,7 @@ export class Service extends DatabaseService<Model> {
throw new BadDataException("id is required");
}
// Get incident data for feed creation
const incident: Model | null = await this.findOneById({
id: createdItem.id,
select: {
@@ -576,202 +578,343 @@ export class Service extends DatabaseService<Model> {
throw new BadDataException("Incident not found");
}
// release the mutex.
// Execute core operations in parallel first
const coreOperations: Array<Promise<any>> = [];
// Create feed item asynchronously
coreOperations.push(this.createIncidentFeedAsync(incident, createdItem));
// Handle state change asynchronously
coreOperations.push(this.handleIncidentStateChangeAsync(createdItem));
// Handle owner assignment asynchronously
if (
onCreate.createBy.miscDataProps &&
(onCreate.createBy.miscDataProps["ownerTeams"] ||
onCreate.createBy.miscDataProps["ownerUsers"])
) {
coreOperations.push(
this.addOwners(
createdItem.projectId,
createdItem.id,
(onCreate.createBy.miscDataProps["ownerUsers"] as Array<ObjectID>) ||
[],
(onCreate.createBy.miscDataProps["ownerTeams"] as Array<ObjectID>) ||
[],
false,
onCreate.createBy.props,
),
);
}
// Handle monitor status change and active monitoring asynchronously
if (createdItem.changeMonitorStatusToId && createdItem.projectId) {
coreOperations.push(
this.handleMonitorStatusChangeAsync(createdItem, onCreate),
);
}
coreOperations.push(
this.disableActiveMonitoringIfManualIncident(createdItem.id!),
);
// Release mutex immediately
this.releaseMutexAsync(onCreate, createdItem.projectId!);
// Execute core operations in parallel with error handling
Promise.allSettled(coreOperations)
.then((coreResults: any[]) => {
// Log any errors from core operations
coreResults.forEach((result: any, index: number) => {
if (result.status === "rejected") {
logger.error(
`Core operation ${index} failed in IncidentService.onCreateSuccess: ${result.reason}`,
);
}
});
// Handle on-call duty policies asynchronously
if (
createdItem.onCallDutyPolicies?.length &&
createdItem.onCallDutyPolicies?.length > 0
) {
this.executeOnCallDutyPoliciesAsync(createdItem).catch(
(error: Error) => {
logger.error(
`On-call duty policy execution failed in IncidentService.onCreateSuccess: ${error}`,
);
},
);
}
// Handle workspace operations after core operations complete
if (createdItem.projectId && createdItem.id) {
// Run workspace operations in background without blocking response
this.handleIncidentWorkspaceOperationsAsync(createdItem).catch(
(error: Error) => {
logger.error(
`Workspace operations failed in IncidentService.onCreateSuccess: ${error}`,
);
},
);
}
})
.catch((error: Error) => {
logger.error(
`Critical error in IncidentService core operations: ${error}`,
);
});
return createdItem;
}
@CaptureSpan()
private async handleIncidentWorkspaceOperationsAsync(
createdItem: Model,
): Promise<void> {
try {
if (!createdItem.projectId || !createdItem.id) {
throw new BadDataException(
"projectId and id are required for workspace operations",
);
}
// send message to workspaces - slack, teams, etc.
const workspaceResult: {
channelsCreated: Array<NotificationRuleWorkspaceChannel>;
} | null =
await IncidentWorkspaceMessages.createChannelsAndInviteUsersToChannels({
projectId: createdItem.projectId,
incidentId: createdItem.id,
incidentNumber: createdItem.incidentNumber!,
});
if (workspaceResult && workspaceResult.channelsCreated?.length > 0) {
// update incident with these channels.
await this.updateOneById({
id: createdItem.id,
data: {
postUpdatesToWorkspaceChannels:
workspaceResult.channelsCreated || [],
},
props: {
isRoot: true,
},
});
}
} catch (error) {
logger.error(`Error in handleIncidentWorkspaceOperationsAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async createIncidentFeedAsync(
incident: Model,
createdItem: Model,
): Promise<void> {
try {
const createdByUserId: ObjectID | undefined | null =
createdItem.createdByUserId || createdItem.createdByUser?.id;
let feedInfoInMarkdown: string = `#### 🚨 Incident ${createdItem.incidentNumber?.toString()} Created:
**${createdItem.title || "No title provided."}**:
${createdItem.description || "No description provided."}
`;
if (incident.currentIncidentState?.name) {
feedInfoInMarkdown += `🔴 **Incident State**: ${incident.currentIncidentState.name} \n\n`;
}
if (incident.incidentSeverity?.name) {
feedInfoInMarkdown += `⚠️ **Severity**: ${incident.incidentSeverity.name} \n\n`;
}
if (incident.monitors && incident.monitors.length > 0) {
feedInfoInMarkdown += `🌎 **Resources Affected**:\n`;
for (const monitor of incident.monitors) {
feedInfoInMarkdown += `- [${monitor.name}](${(await MonitorService.getMonitorLinkInDashboard(createdItem.projectId!, monitor.id!)).toString()})\n`;
}
feedInfoInMarkdown += `\n\n`;
}
if (createdItem.rootCause) {
feedInfoInMarkdown += `\n
📄 **Root Cause**:
${createdItem.rootCause || "No root cause provided."}
`;
}
if (createdItem.remediationNotes) {
feedInfoInMarkdown += `\n
🎯 **Remediation Notes**:
${createdItem.remediationNotes || "No remediation notes provided."}
`;
}
const incidentCreateMessageBlocks: Array<MessageBlocksByWorkspaceType> =
await IncidentWorkspaceMessages.getIncidentCreateMessageBlocks({
incidentId: createdItem.id!,
projectId: createdItem.projectId!,
});
await IncidentFeedService.createIncidentFeedItem({
incidentId: createdItem.id!,
projectId: createdItem.projectId!,
incidentFeedEventType: IncidentFeedEventType.IncidentCreated,
displayColor: Red500,
feedInfoInMarkdown: feedInfoInMarkdown,
userId: createdByUserId || undefined,
workspaceNotification: {
appendMessageBlocks: incidentCreateMessageBlocks,
sendWorkspaceNotification: true,
},
});
} catch (error) {
logger.error(`Error in createIncidentFeedAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async handleIncidentStateChangeAsync(
createdItem: Model,
): Promise<void> {
try {
if (!createdItem.currentIncidentStateId) {
throw new BadDataException("currentIncidentStateId is required");
}
if (!createdItem.projectId || !createdItem.id) {
throw new BadDataException(
"projectId and id are required for state change",
);
}
await this.changeIncidentState({
projectId: createdItem.projectId,
incidentId: createdItem.id,
incidentStateId: createdItem.currentIncidentStateId,
shouldNotifyStatusPageSubscribers: Boolean(
createdItem.shouldStatusPageSubscribersBeNotifiedOnIncidentCreated,
),
isSubscribersNotified: Boolean(
createdItem.shouldStatusPageSubscribersBeNotifiedOnIncidentCreated,
), // we don't want to notify subscribers when the incident state changes because they are already notified when the incident is created.
notifyOwners: false,
rootCause: createdItem.rootCause,
stateChangeLog: createdItem.createdStateLog,
props: {
isRoot: true,
},
});
} catch (error) {
logger.error(`Error in handleIncidentStateChangeAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async executeOnCallDutyPoliciesAsync(
createdItem: Model,
): Promise<void> {
try {
if (
createdItem.onCallDutyPolicies?.length &&
createdItem.onCallDutyPolicies?.length > 0
) {
// Execute all on-call policies in parallel
const policyPromises: Promise<void>[] =
createdItem.onCallDutyPolicies.map((policy: OnCallDutyPolicy) => {
return OnCallDutyPolicyService.executePolicy(
new ObjectID(policy["_id"] as string),
{
triggeredByIncidentId: createdItem.id!,
userNotificationEventType:
UserNotificationEventType.IncidentCreated,
},
);
});
await Promise.allSettled(policyPromises);
}
} catch (error) {
logger.error(`Error in executeOnCallDutyPoliciesAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async handleMonitorStatusChangeAsync(
createdItem: Model,
onCreate: OnCreate<Model>,
): Promise<void> {
try {
if (createdItem.changeMonitorStatusToId && createdItem.projectId) {
// change status of all the monitors.
await MonitorService.changeMonitorStatus(
createdItem.projectId,
createdItem.monitors?.map((monitor: Monitor) => {
return new ObjectID(monitor._id || "");
}) || [],
createdItem.changeMonitorStatusToId,
true, // notifyMonitorOwners
createdItem.rootCause ||
"Status was changed because Incident #" +
createdItem.incidentNumber?.toString() +
" was created.",
createdItem.createdStateLog,
onCreate.createBy.props,
);
}
} catch (error) {
logger.error(`Error in handleMonitorStatusChangeAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private releaseMutexAsync(
onCreate: OnCreate<Model>,
projectId: ObjectID,
): void {
// Release mutex in background without blocking
if (onCreate.carryForward && onCreate.carryForward.mutex) {
const mutex: SemaphoreMutex = onCreate.carryForward.mutex;
setImmediate(async () => {
try {
await Semaphore.release(mutex);
logger.debug(
"Mutex released - IncidentService.incident-create " +
projectId.toString() +
" at " +
OneUptimeDate.getCurrentDateAsFormattedString(),
);
} catch (err) {
logger.debug(
"Mutex release failed - IncidentService.incident-create " +
projectId.toString() +
" at " +
OneUptimeDate.getCurrentDateAsFormattedString(),
);
logger.error(err);
}
});
}
}
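`releaseMutexAsync` defers the semaphore release with `setImmediate` so the create path never waits on it, and a failed release only produces a log line. The shape of that deferral, with illustrative names standing in for the real mutex types:

```typescript
// Sketch of deferring non-critical cleanup with setImmediate: the caller
// is not blocked, and a failure is reported rather than rethrown.
function releaseLater(
  release: () => Promise<void>,
  onError: (err: unknown) => void,
): void {
  setImmediate(async () => {
    try {
      await release();
    } catch (err) {
      onError(err); // cleanup must never fail the original request
    }
  });
}
```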
@CaptureSpan()
public async disableActiveMonitoringIfManualIncident(
incidentId: ObjectID,
): Promise<void> {


@@ -116,8 +116,9 @@ export class Service extends DatabaseService<IncidentStateTimeline> {
throw new BadDataException("incidentStateId is null");
}
// Execute queries for before and after states in parallel for better performance
const [stateBeforeThis, stateAfterThis] = await Promise.all([
this.findOneBy({
query: {
incidentId: createBy.data.incidentId,
startsAt: QueryHelper.lessThanEqualTo(createBy.data.startsAt),
@@ -138,7 +139,25 @@ export class Service extends DatabaseService<IncidentStateTimeline> {
startsAt: true,
endsAt: true,
},
}),
this.findOneBy({
query: {
incidentId: createBy.data.incidentId,
startsAt: QueryHelper.greaterThan(createBy.data.startsAt),
},
sort: {
startsAt: SortOrder.Ascending,
},
props: {
isRoot: true,
},
select: {
incidentStateId: true,
startsAt: true,
endsAt: true,
},
}),
]);
logger.debug("State Before this");
logger.debug(stateBeforeThis);
@@ -197,26 +216,6 @@ export class Service extends DatabaseService<IncidentStateTimeline> {
}
}
// compute ends at. It's the start of the next status.
if (stateAfterThis && stateAfterThis.startsAt) {
createBy.data.endsAt = stateAfterThis.startsAt;
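The timeline change above replaces two sequential `findOneBy` awaits with one `Promise.all`, so the "before" and "after" queries run concurrently instead of back to back. A self-contained sketch of that refactor (`lookup` stands in for the real database query):

```typescript
// Two independent lookups started together with Promise.all: both timers
// run concurrently, unlike two sequential awaits.
async function lookup(value: number, delayMs: number): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return value;
}

async function findNeighbors(): Promise<[number, number]> {
  // Both queries start immediately; neither waits for the other.
  const [before, after] = await Promise.all([lookup(1, 20), lookup(2, 20)]);
  return [before, after];
}
```

This only pays off because the two queries are independent; results that feed into each other still have to be awaited in order.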


@@ -502,20 +502,132 @@ ${createdItem.description?.trim() || "No description provided."}
feedInfoInMarkdown += `\n\n`;
}
// send message to workspaces - slack, teams, etc.
// Parallelize operations that don't depend on each other
const parallelOperations: Array<Promise<any>> = [];
// 1. Essential monitor status operation (must complete first)
await this.changeMonitorStatus(
createdItem.projectId,
[createdItem.id],
createdItem.currentMonitorStatusId,
false, // notifyOwners = false
"This status was created when the monitor was created.",
undefined,
onCreate.createBy.props,
);
// 2. Start core operations in parallel that can run asynchronously (excluding workspace operations)
// Add default probes if needed (can be slow with many probes)
if (
createdItem.monitorType &&
MonitorTypeHelper.isProbableMonitor(createdItem.monitorType)
) {
parallelOperations.push(
this.addDefaultProbesToMonitor(
createdItem.projectId,
createdItem.id,
).catch((error: Error) => {
logger.error("Error in adding default probes");
logger.error(error);
// Don't fail monitor creation due to probe creation issues
}),
);
}
// Billing operations
if (IsBillingEnabled) {
parallelOperations.push(
ActiveMonitoringMeteredPlan.reportQuantityToBillingProvider(
createdItem.projectId,
).catch((error: Error) => {
logger.error("Error in billing operations");
logger.error(error);
// Don't fail monitor creation due to billing issues
}),
);
}
// Owner operations
if (
onCreate.createBy.miscDataProps &&
(onCreate.createBy.miscDataProps["ownerTeams"] ||
onCreate.createBy.miscDataProps["ownerUsers"])
) {
parallelOperations.push(
this.addOwners(
createdItem.projectId,
createdItem.id,
(onCreate.createBy.miscDataProps["ownerUsers"] as Array<ObjectID>) ||
[],
(onCreate.createBy.miscDataProps["ownerTeams"] as Array<ObjectID>) ||
[],
false,
onCreate.createBy.props,
).catch((error: Error) => {
logger.error("Error in adding owners");
logger.error(error);
// Don't fail monitor creation due to owner issues
}),
);
}
// Probe status refresh (can be expensive with many probes)
parallelOperations.push(
this.refreshMonitorProbeStatus(createdItem.id).catch((error: Error) => {
logger.error("Error in refreshing probe status");
logger.error(error);
// Don't fail monitor creation due to probe status issues
}),
);
// Wait for core operations to complete, then handle workspace operations
Promise.allSettled(parallelOperations)
.then(() => {
// Handle workspace operations after core operations complete
// Run workspace operations in background without blocking response
this.handleWorkspaceOperationsAsync({
projectId: createdItem.projectId!,
monitorId: createdItem.id!,
monitorName: createdItem.name!,
feedInfoInMarkdown,
createdByUserId,
}).catch((error: Error) => {
logger.error("Error in workspace operations");
logger.error(error);
// Don't fail monitor creation due to workspace issues
});
})
.catch((error: Error) => {
logger.error("Error in parallel monitor creation operations");
logger.error(error);
});
return createdItem;
}
@CaptureSpan()
private async handleWorkspaceOperationsAsync(data: {
projectId: ObjectID;
monitorId: ObjectID;
monitorName: string;
feedInfoInMarkdown: string;
createdByUserId: ObjectID | undefined | null;
}): Promise<void> {
// send message to workspaces - slack, teams, etc.
const workspaceResult: {
channelsCreated: Array<NotificationRuleWorkspaceChannel>;
} | null =
await MonitorWorkspaceMessages.createChannelsAndInviteUsersToChannels({
projectId: data.projectId,
monitorId: data.monitorId,
monitorName: data.monitorName,
});
if (workspaceResult && workspaceResult.channelsCreated?.length > 0) {
// update monitor with these channels.
await this.updateOneById({
id: data.monitorId,
data: {
postUpdatesToWorkspaceChannels: workspaceResult.channelsCreated || [],
},
@@ -527,72 +639,22 @@ ${createdItem.description?.trim() || "No description provided."}
const monitorCreateMessageBlocks: Array<MessageBlocksByWorkspaceType> =
await MonitorWorkspaceMessages.getMonitorCreateMessageBlocks({
monitorId: data.monitorId,
projectId: data.projectId,
});
await MonitorFeedService.createMonitorFeedItem({
monitorId: data.monitorId,
projectId: data.projectId,
monitorFeedEventType: MonitorFeedEventType.MonitorCreated,
displayColor: Green500,
feedInfoInMarkdown: data.feedInfoInMarkdown,
userId: data.createdByUserId || undefined,
workspaceNotification: {
appendMessageBlocks: monitorCreateMessageBlocks,
sendWorkspaceNotification: true,
},
});
}
@CaptureSpan()
@@ -761,21 +823,32 @@ ${createdItem.description?.trim() || "No description provided."}
const totalProbes: Array<Probe> = [...globalProbes, ...projectProbes];
if (totalProbes.length === 0) {
return;
}
// Create all monitor probes in parallel for better performance
const createPromises: Array<Promise<MonitorProbe>> = [];
for (const probe of totalProbes) {
const monitorProbe: MonitorProbe = new MonitorProbe();
monitorProbe.monitorId = monitorId;
monitorProbe.probeId = probe.id!;
monitorProbe.projectId = projectId;
monitorProbe.isEnabled = true;
createPromises.push(
MonitorProbeService.create({
data: monitorProbe,
props: {
isRoot: true,
},
}),
);
}
// Execute all creates in parallel
await Promise.all(createPromises);
}
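The probe setup above follows a common bulk-create pattern: start every create inside the loop, then await them all together. A minimal standalone version (`createRecord` is an illustrative stand-in for `MonitorProbeService.create`):

```typescript
// Build the promises in a loop, then await them together: every create
// starts immediately rather than waiting for the previous one to finish.
async function createRecord(name: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 5));
  return `created:${name}`;
}

async function createAll(names: Array<string>): Promise<Array<string>> {
  const createPromises: Array<Promise<string>> = [];
  for (const name of names) {
    createPromises.push(createRecord(name)); // started here, not awaited yet
  }
  // Promise.all fails fast if any create rejects; use Promise.allSettled
  // instead when partial success should be tolerated.
  return Promise.all(createPromises);
}
```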
@CaptureSpan()


@@ -1,4 +1,7 @@
import {
IsBillingEnabled,
NotificationSlackWebhookOnSubscriptionUpdate,
} from "../EnvironmentConfig";
import logger from "../Utils/Logger";
import BaseService from "./BaseService";
import BillingService from "./BillingService";
@@ -7,6 +10,9 @@ import BadDataException from "../../Types/Exception/BadDataException";
import ObjectID from "../../Types/ObjectID";
import Project from "../../Models/DatabaseModels/Project";
import CaptureSpan from "../Utils/Telemetry/CaptureSpan";
import SlackUtil from "../Utils/Workspace/Slack/Slack";
import URL from "../../Types/API/URL";
import Exception from "../../Types/Exception/Exception";
export class NotificationService extends BaseService {
public constructor() {
@@ -105,6 +111,17 @@ export class NotificationService extends BaseService {
} USD.`,
);
// Send Slack notification for balance refill
this.sendBalanceRefillSlackNotification({
project: project,
amountInUSD: amountInUSD,
currentBalanceInUSD: updatedAmount / 100,
}).catch((error: Exception) => {
logger.error(
"Error sending slack message for balance refill: " + error,
);
});
project.smsOrCallCurrentBalanceInUSDCents = updatedAmount;
return updatedAmount;
@@ -194,6 +211,34 @@ export class NotificationService extends BaseService {
return project?.smsOrCallCurrentBalanceInUSDCents || 0;
}
@CaptureSpan()
private async sendBalanceRefillSlackNotification(data: {
project: Project;
amountInUSD: number;
currentBalanceInUSD: number;
}): Promise<void> {
const { project, amountInUSD, currentBalanceInUSD } = data;
if (NotificationSlackWebhookOnSubscriptionUpdate) {
const slackMessage: string = `*SMS and Call Balance Refilled:*
*Project Name:* ${project.name?.toString() || "N/A"}
*Project ID:* ${project.id?.toString() || "N/A"}
*Refill Amount:* $${amountInUSD} USD
*Current Balance:* $${currentBalanceInUSD} USD
${project.createdOwnerName && project.createdOwnerEmail ? `*Project Created By:* ${project.createdOwnerName.toString()} (${project.createdOwnerEmail.toString()})` : ""}`;
SlackUtil.sendMessageToChannelViaIncomingWebhook({
url: URL.fromString(NotificationSlackWebhookOnSubscriptionUpdate),
text: slackMessage,
}).catch((error: Exception) => {
logger.error(
"Error sending slack message for balance refill: " + error,
);
});
}
}
}
export default new NotificationService();
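The refill path above passes `updatedAmount / 100` as `currentBalanceInUSD` because the balance column (`smsOrCallCurrentBalanceInUSDCents`) stores integer cents. A minimal sketch of that conversion, with a hypothetical helper name:

```typescript
// Hypothetical helper illustrating the cents-to-USD conversion used above.
// The project balance (smsOrCallCurrentBalanceInUSDCents) is an integer in cents.
function centsToUsd(balanceInCents: number): number {
  return balanceInCents / 100;
}

// e.g. a refill that leaves 12345 cents on the project
const currentBalanceInUSD: number = centsToUsd(12345); // 123.45
```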


@@ -547,36 +547,7 @@ ${resourcesAffected ? `**Resources Affected:** ${resourcesAffected}` : ""}
onCreate: OnCreate<Model>,
createdItem: Model,
): Promise<Model> {
// Get scheduled maintenance data for feed creation
const scheduledMaintenance: Model | null = await this.findOneById({
id: createdItem.id!,
select: {
@@ -606,83 +577,23 @@ ${resourcesAffected ? `**Resources Affected:** ${resourcesAffected}` : ""}
throw new BadDataException("Scheduled Maintenance not found");
}
// Execute core operations in parallel first
const coreOperations: Array<Promise<any>> = [];
// Create feed item asynchronously
coreOperations.push(
this.createScheduledMaintenanceFeedAsync(
scheduledMaintenance,
createdItem,
),
);
// Create state timeline asynchronously
coreOperations.push(
this.createScheduledMaintenanceStateTimelineAsync(createdItem),
);
// Handle owner assignment asynchronously
if (
createdItem.projectId &&
createdItem.id &&
@@ -690,21 +601,200 @@ ${createdItem.description || "No description provided."}
(onCreate.createBy.miscDataProps["ownerTeams"] ||
onCreate.createBy.miscDataProps["ownerUsers"])
) {
coreOperations.push(
this.addOwners(
createdItem.projectId!,
createdItem.id!,
(onCreate.createBy.miscDataProps["ownerUsers"] as Array<ObjectID>) ||
[],
(onCreate.createBy.miscDataProps["ownerTeams"] as Array<ObjectID>) ||
[],
false,
onCreate.createBy.props,
),
);
}
// Execute core operations in parallel with error handling
Promise.allSettled(coreOperations)
.then((coreResults: any[]) => {
// Log any errors from core operations
coreResults.forEach((result: any, index: number) => {
if (result.status === "rejected") {
logger.error(
`Core operation ${index} failed in ScheduledMaintenanceService.onCreateSuccess: ${result.reason}`,
);
}
});
// Handle workspace operations after core operations complete
if (createdItem.projectId && createdItem.id) {
// Run workspace operations in background without blocking response
this.handleScheduledMaintenanceWorkspaceOperationsAsync(
createdItem,
).catch((error: Error) => {
logger.error(
`Workspace operations failed in ScheduledMaintenanceService.onCreateSuccess: ${error}`,
);
});
}
})
.catch((error: Error) => {
logger.error(
`Critical error in ScheduledMaintenanceService core operations: ${error}`,
);
});
return createdItem;
}
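The `Promise.allSettled` block above lets the independent side effects (feed item, state timeline, owner assignment) fail individually without aborting the request. A small sketch of the same log-and-continue handling, assuming each entry is an independent side effect:

```typescript
// Sketch: run side effects in parallel and log failures without throwing.
async function runSettled(
  operations: Array<Promise<unknown>>,
): Promise<number> {
  const results: Array<PromiseSettledResult<unknown>> =
    await Promise.allSettled(operations);
  let failures: number = 0;
  results.forEach((result: PromiseSettledResult<unknown>, index: number) => {
    if (result.status === "rejected") {
      failures++;
      console.error(`Core operation ${index} failed: ${result.reason}`);
    }
  });
  return failures;
}
```

Unlike `Promise.all`, `allSettled` never rejects, so one failed operation cannot short-circuit the others.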
@CaptureSpan()
private async handleScheduledMaintenanceWorkspaceOperationsAsync(
createdItem: Model,
): Promise<void> {
try {
if (!createdItem.projectId || !createdItem.id) {
throw new BadDataException(
"projectId and id are required for workspace operations",
);
}
// send message to workspaces - slack, teams, etc.
const workspaceResult: {
channelsCreated: Array<NotificationRuleWorkspaceChannel>;
} | null =
await ScheduledMaintenanceWorkspaceMessages.createChannelsAndInviteUsersToChannels(
{
projectId: createdItem.projectId,
scheduledMaintenanceId: createdItem.id,
scheduledMaintenanceNumber: createdItem.scheduledMaintenanceNumber!,
},
);
if (workspaceResult && workspaceResult.channelsCreated?.length > 0) {
// update scheduledMaintenance with these channels.
await this.updateOneById({
id: createdItem.id,
data: {
postUpdatesToWorkspaceChannels:
workspaceResult.channelsCreated || [],
},
props: {
isRoot: true,
},
});
}
} catch (error) {
logger.error(
`Error in handleScheduledMaintenanceWorkspaceOperationsAsync: ${error}`,
);
throw error;
}
}
@CaptureSpan()
private async createScheduledMaintenanceFeedAsync(
scheduledMaintenance: Model,
createdItem: Model,
): Promise<void> {
try {
const createdByUserId: ObjectID | undefined | null =
createdItem.createdByUserId || createdItem.createdByUser?.id;
let feedInfoInMarkdown: string = `#### 🕒 Scheduled Maintenance ${createdItem.scheduledMaintenanceNumber?.toString()} Created:
**${createdItem.title || "No title provided."}**:
${createdItem.description || "No description provided."}
`;
// add starts at and ends at.
if (scheduledMaintenance.startsAt) {
feedInfoInMarkdown += `**Starts At**: ${OneUptimeDate.getDateAsLocalFormattedString(scheduledMaintenance.startsAt)} \n\n`;
}
if (scheduledMaintenance.endsAt) {
feedInfoInMarkdown += `**Ends At**: ${OneUptimeDate.getDateAsLocalFormattedString(scheduledMaintenance.endsAt)} \n\n`;
}
if (scheduledMaintenance.currentScheduledMaintenanceState?.name) {
feedInfoInMarkdown += `⏳ **Scheduled Maintenance State**: ${scheduledMaintenance.currentScheduledMaintenanceState.name} \n\n`;
}
if (
scheduledMaintenance.monitors &&
scheduledMaintenance.monitors.length > 0
) {
feedInfoInMarkdown += `🌎 **Resources Affected**:\n`;
for (const monitor of scheduledMaintenance.monitors) {
feedInfoInMarkdown += `- [${monitor.name}](${(await MonitorService.getMonitorLinkInDashboard(createdItem.projectId!, monitor.id!)).toString()})\n`;
}
feedInfoInMarkdown += `\n\n`;
}
const scheduledMaintenanceCreateMessageBlocks: Array<MessageBlocksByWorkspaceType> =
await ScheduledMaintenanceWorkspaceMessages.getScheduledMaintenanceCreateMessageBlocks(
{
scheduledMaintenanceId: createdItem.id!,
projectId: createdItem.projectId!,
},
);
await ScheduledMaintenanceFeedService.createScheduledMaintenanceFeedItem({
scheduledMaintenanceId: createdItem.id!,
projectId: createdItem.projectId!,
scheduledMaintenanceFeedEventType:
ScheduledMaintenanceFeedEventType.ScheduledMaintenanceCreated,
displayColor: Red500,
feedInfoInMarkdown: feedInfoInMarkdown,
userId: createdByUserId || undefined,
workspaceNotification: {
appendMessageBlocks: scheduledMaintenanceCreateMessageBlocks,
sendWorkspaceNotification: true,
},
});
} catch (error) {
logger.error(`Error in createScheduledMaintenanceFeedAsync: ${error}`);
throw error;
}
}
@CaptureSpan()
private async createScheduledMaintenanceStateTimelineAsync(
createdItem: Model,
): Promise<void> {
try {
const timeline: ScheduledMaintenanceStateTimeline =
new ScheduledMaintenanceStateTimeline();
timeline.projectId = createdItem.projectId!;
timeline.scheduledMaintenanceId = createdItem.id!;
timeline.isOwnerNotified = true; // ignore notifying owners because you already notify for Scheduled Event, no need to notify them for timeline event.
timeline.shouldStatusPageSubscribersBeNotified = Boolean(
createdItem.shouldStatusPageSubscribersBeNotifiedOnEventCreated,
);
timeline.isStatusPageSubscribersNotified = Boolean(
createdItem.shouldStatusPageSubscribersBeNotifiedOnEventCreated,
); // ignore notifying subscribers because you already notify for Scheduled Event, no need to notify them for timeline event.
timeline.scheduledMaintenanceStateId =
createdItem.currentScheduledMaintenanceStateId!;
await ScheduledMaintenanceStateTimelineService.create({
data: timeline,
props: {
isRoot: true,
},
});
} catch (error) {
logger.error(
`Error in createScheduledMaintenanceStateTimelineAsync: ${error}`,
);
throw error;
}
}
@CaptureSpan()
public async addOwners(
projectId: ObjectID,


@@ -194,8 +194,7 @@ export class Service extends DatabaseService<StatusPage> {
onCreate: OnCreate<StatusPage>,
createdItem: StatusPage,
): Promise<StatusPage> {
// Execute owner assignment asynchronously
if (
createdItem.projectId &&
createdItem.id &&
@@ -203,16 +202,19 @@ export class Service extends DatabaseService<StatusPage> {
(onCreate.createBy.miscDataProps["ownerTeams"] ||
onCreate.createBy.miscDataProps["ownerUsers"])
) {
// Run owner assignment in background without blocking
this.addOwners(
createdItem.projectId!,
createdItem.id!,
(onCreate.createBy.miscDataProps!["ownerUsers"] as Array<ObjectID>) ||
[],
(onCreate.createBy.miscDataProps!["ownerTeams"] as Array<ObjectID>) ||
[],
false,
onCreate.createBy.props,
).catch((error: Error) => {
logger.error(`Error in StatusPageService owner assignment: ${error}`);
});
}
return createdItem;


@@ -137,7 +137,7 @@ export default class IncomingRequestCriteria {
input.dataToProcess.monitorId.toString() +
" is true",
);
return `Incoming request / heartbeat received in ${value} minutes. It was received ${differenceInMinutes} minutes ago.`;
}
return null;
}
@@ -153,7 +153,7 @@ export default class IncomingRequestCriteria {
input.dataToProcess.monitorId.toString() +
" is true",
);
return `Incoming request / heartbeat not received in ${value} minutes. It was received ${differenceInMinutes} minutes ago.`;
}
return null;
}
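Both messages above now report how long ago the heartbeat actually arrived. A standalone sketch of that age calculation, assuming `lastPingAt` and `now` as inputs (the real code uses `OneUptimeDate` helpers):

```typescript
// Sketch of the heartbeat-age calculation feeding the criteria messages above.
function minutesSince(lastPingAt: Date, now: Date): number {
  return Math.floor((now.getTime() - lastPingAt.getTime()) / (60 * 1000));
}

const now: Date = new Date("2025-08-01T12:10:00Z");
const lastPingAt: Date = new Date("2025-08-01T12:05:00Z");
const message: string = `Incoming request / heartbeat received in 10 minutes. It was received ${minutesSince(lastPingAt, now)} minutes ago.`;
```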


@@ -56,9 +56,13 @@ export default class ServerMonitorCriteria {
const lastCheckTime: Date = (input.dataToProcess as ServerMonitorResponse)
.requestReceivedAt;
const timeNow: Date =
(input.dataToProcess as ServerMonitorResponse).timeNow ||
OneUptimeDate.getCurrentDate();
const differenceInMinutes: number = OneUptimeDate.getDifferenceInMinutes(
lastCheckTime,
timeNow,
);
let offlineIfNotCheckedInMinutes: number = 3;
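The change above compares against an injectable `timeNow` (carried on the response) instead of the wall clock, so that responses sitting in the ingest queue are evaluated against the moment they were generated. A sketch of why this matters, with hypothetical helper names:

```typescript
// Sketch: prefer the response's own timestamp over "now" so delayed queue
// processing does not make a healthy server look offline.
function getDifferenceInMinutes(from: Date, to: Date): number {
  return Math.floor((to.getTime() - from.getTime()) / (60 * 1000));
}

function isServerOffline(
  requestReceivedAt: Date,
  timeNow: Date | undefined, // optional timestamp carried on the response
  offlineIfNotCheckedInMinutes: number = 3,
): boolean {
  const reference: Date = timeNow || new Date();
  return (
    getDifferenceInMinutes(requestReceivedAt, reference) >
    offlineIfNotCheckedInMinutes
  );
}
```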


@@ -228,6 +228,8 @@ export default class MonitorResourceUtil {
await MonitorService.updateOneById({
id: monitor.id!,
data: {
incomingRequestMonitorHeartbeatCheckedAt:
OneUptimeDate.getCurrentDate(),
incomingMonitorRequest: {
...dataToProcess,
} as any,
@@ -1372,7 +1374,7 @@ export default class MonitorResourceUtil {
}
if (input.monitor.monitorType === MonitorType.SSLCertificate) {
// check SSL monitor
const sslMonitorResult: string | null =
await SSLMonitorCriteria.isMonitorInstanceCriteriaFilterMet({
dataToProcess: input.dataToProcess,


@@ -1,4 +1,3 @@
import { JSONArray, JSONObject, JSONValue } from "../../../Types/JSON";
import ObjectID from "../../../Types/ObjectID";
import TelemetryType from "../../../Types/Telemetry/TelemetryType";
@@ -9,6 +8,7 @@ import logger from "../Logger";
import MetricType from "../../../Models/DatabaseModels/MetricType";
import MetricTypeService from "../../Services/MetricTypeService";
import TelemetryService from "../../../Models/DatabaseModels/TelemetryService";
import Dictionary from "../../../Types/Dictionary";
export type AttributeType = string | number | boolean | null;


@@ -11,8 +11,8 @@ import SlackActionType from "./ActionTypes";
import WorkspaceProjectAuthTokenService from "../../../../Services/WorkspaceProjectAuthTokenService";
import logger from "../../../Logger";
import { JSONArray, JSONObject } from "../../../../../Types/JSON";
import CaptureSpan from "../../../Telemetry/CaptureSpan";
import Dictionary from "../../../../../Types/Dictionary";
export interface SlackAction {
actionValue?: string | undefined;


@@ -1,5 +1,5 @@
import DatabaseProperty from "../Database/DatabaseProperty";
import Dictionary from "../Dictionary";
import BadDataException from "../Exception/BadDataException";
import { JSONObject, ObjectType } from "../JSON";
import { FindOperator } from "typeorm";


@@ -16,4 +16,5 @@ export default interface ServerMonitorResponse {
onlyCheckRequestReceivedAt: boolean;
processes?: ServerProcess[] | undefined;
failureCause?: string | undefined;
timeNow?: Date | undefined; // Time when the response was generated
}


@@ -27,4 +27,5 @@ export default interface ProbeMonitorResponse {
customCodeMonitorResponse?: CustomCodeMonitorResponse | undefined;
monitoredAt: Date;
isTimeout?: boolean | undefined;
ingestedAt?: Date | undefined;
}


@@ -19,7 +19,6 @@ import OneUptimeDate from "../../../Types/Date";
import Dictionary from "../../../Types/Dictionary";
import BadDataException from "../../../Types/Exception/BadDataException";
import GenericObject from "../../../Types/GenericObject";
import React, { ReactElement, useEffect, useState } from "react";
export interface ComponentProps<T extends GenericObject> {
@@ -144,6 +143,16 @@ const Detail: DetailFunction = <T extends GenericObject>(
type GetFieldFunction = (field: Field<T>, index: number) => ReactElement;
// Helper function to get nested property values using dot notation
const getNestedValue: (obj: any, path: string) => any = (
obj: any,
path: string,
): any => {
return path.split(".").reduce((current: any, key: string) => {
return current?.[key];
}, obj);
};
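The helper above replaces `lodash/get` for simple dot-notation access using `split`/`reduce` with optional chaining. A standalone usage sketch, with a hypothetical item shape:

```typescript
// Same dot-notation resolver as the Detail component helper above.
const getNestedValue: (obj: any, path: string) => any = (
  obj: any,
  path: string,
): any => {
  return path.split(".").reduce((current: any, key: string) => {
    return current?.[key]; // optional chaining stops at the first missing key
  }, obj);
};

// Hypothetical row shape for illustration.
const item: any = {
  monitor: { currentMonitorStatus: { name: "Operational" } },
};
const status: any = getNestedValue(item, "monitor.currentMonitorStatus.name");
```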
const getField: GetFieldFunction = (
field: Field<T>,
index: number,
@@ -160,8 +169,11 @@ const Detail: DetailFunction = <T extends GenericObject>(
let data: string | ReactElement = "";
// Use helper function for both simple and nested property access
const fieldKeyStr: string = String(fieldKey);
const value: any = getNestedValue(props.item, fieldKeyStr);
if (value !== undefined && value !== null) {
data = value;
}
if (field.fieldType === FieldType.Date) {


@@ -24,11 +24,11 @@ import API from "../../Utils/API/API";
import { APP_API_URL } from "../../Config";
import PageLoader from "../Loader/PageLoader";
import ErrorMessage from "../ErrorMessage/ErrorMessage";
import TelemetryService from "../../../Models/DatabaseModels/TelemetryService";
import { LIMIT_PER_PROJECT } from "../../../Types/Database/LimitMax";
import SortOrder from "../../../Types/BaseDatabase/SortOrder";
import ListResult from "../../../Types/BaseDatabase/ListResult";
import Dictionary from "../../../Types/Dictionary";
export interface ComponentProps {
logs: Array<Log>;


@@ -62,7 +62,7 @@ const Modal: FunctionComponent<ComponentProps> = (
<div className="fixed inset-0 z-20 overflow-y-auto">
<div className="flex min-h-screen items-end justify-center p-0 text-center md:items-center md:p-4">
<div
className={`relative transform bg-white text-left shadow-xl transition-all w-full h-full md:rounded-lg md:my-8 ${
props.modalWidth && props.modalWidth === ModalWidth.Large
? "md:max-w-7xl"
: ""


@@ -64,7 +64,7 @@ const Page: FunctionComponent<ComponentProps> = (
)}
{props.sideMenu && (
<main className="mx-auto max-w-full pb-10">
<div className="lg:grid lg:grid-cols-12 lg:gap-x-5">
{props.sideMenu}


@@ -71,6 +71,7 @@ const Pagination: FunctionComponent<ComponentProps> = (
className="flex items-center justify-between border-t border-gray-200 bg-white px-4"
data-testid={props.dataTestId}
>
{/* Desktop layout: Description on left, all controls on right */}
<div className="hidden md:block">
<p className="text-sm text-gray-500">
{!props.isLoading && (
@@ -88,7 +89,9 @@ const Pagination: FunctionComponent<ComponentProps> = (
)}
</p>
</div>
{/* Desktop layout: All controls together on right */}
<div className="hidden md:flex">
<nav className="inline-flex -space-x-px rounded-md shadow-sm">
<div className="my-2">
<Button
@@ -165,6 +168,84 @@ const Pagination: FunctionComponent<ComponentProps> = (
</nav>
</div>
{/* Mobile layout: Navigate button on left, pagination controls on right */}
<div className="md:hidden my-2">
<Button
dataTestId="show-pagination-modal-button-mobile"
className="my-2"
buttonSize={ButtonSize.ExtraSmall}
icon={IconProp.AdjustmentHorizontal}
buttonStyle={ButtonStyleType.ICON_LIGHT}
onClick={() => {
setShowPaginationModel(true);
}}
/>
</div>
<div className="md:hidden">
<nav className="inline-flex -space-x-px rounded-md shadow-sm">
<ul>
<li
onClick={() => {
let currentPageNumber: number = props.currentPageNumber;
if (typeof currentPageNumber === "string") {
currentPageNumber = parseInt(currentPageNumber);
}
if (props.onNavigateToPage && !isPreviousDisabled) {
props.onNavigateToPage(
currentPageNumber - 1,
props.itemsOnPage,
);
}
}}
className={` inline-flex items-center rounded-l-md border border-gray-300 bg-white px-2 py-2 text-sm font-medium text-gray-500 ${
isPreviousDisabled
? "bg-gray-100"
: "hover:bg-gray-50 cursor-pointer"
}`}
>
<span className="page-link">Previous</span>
</li>
<li
data-testid="current-page-link-mobile"
className={` z-10 inline-flex items-center border border-x-0 border-gray-300 hover:bg-gray-50 px-4 py-2 text-sm font-medium text-text-600 cursor-pointer ${
isCurrentPageButtonDisabled ? "bg-gray-100" : ""
}`}
onClick={() => {
setShowPaginationModel(true);
}}
>
<span>{props.currentPageNumber}</span>
</li>
<li
onClick={() => {
let currentPageNumber: number = props.currentPageNumber;
if (typeof currentPageNumber === "string") {
currentPageNumber = parseInt(currentPageNumber);
}
if (props.onNavigateToPage && !isNextDisabled) {
props.onNavigateToPage(
currentPageNumber + 1,
props.itemsOnPage,
);
}
}}
className={` inline-flex items-center rounded-r-md border border-gray-300 bg-white px-2 py-2 text-sm font-medium text-gray-500 ${
isNextDisabled
? "bg-gray-100"
: " hover:bg-gray-50 cursor-pointer"
}`}
>
<span>Next</span>
</li>
</ul>
</nav>
</div>
{showPaginationModel && (
<BasicFormModal<PaginationNavigationItem>
data-testid="pagination-modal"
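The `typeof currentPageNumber === "string"` guard in the mobile controls above exists because the page number may arrive as a string from form state, and `"1" + 1` concatenates instead of adding. A sketch of the normalization, with a hypothetical helper name:

```typescript
// Sketch of the guard above: normalize before arithmetic so that
// "3" + 1 yields 4 rather than the string "31".
function toPageNumber(value: number | string): number {
  return typeof value === "string" ? parseInt(value, 10) : value;
}

const next: number = toPageNumber("3") + 1; // 4, not "31"
```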


@@ -11,7 +11,6 @@ import Color from "../../../Types/Color";
import OneUptimeDate from "../../../Types/Date";
import GenericObject from "../../../Types/GenericObject";
import IconProp from "../../../Types/Icon/IconProp";
import React, { ReactElement, useState, useEffect } from "react";
import { Draggable, DraggableProvided } from "react-beautiful-dnd";
import LongTextViewer from "../LongText/LongTextViewer";
@@ -42,6 +41,16 @@ type TableRowFunction = <T extends GenericObject>(
const TableRow: TableRowFunction = <T extends GenericObject>(
props: ComponentProps<T>,
): ReactElement => {
// Helper function to get nested property values using dot notation
const getNestedValue: (obj: any, path: string) => any = (
obj: any,
path: string,
): any => {
return path.split(".").reduce((current: any, key: string) => {
return current?.[key];
}, obj);
};
const [isButtonLoading, setIsButtonLoading] = useState<Array<boolean>>(
props.actionButtons?.map(() => {
return false;
@@ -240,7 +249,10 @@ const TableRow: TableRowFunction = <T extends GenericObject>(
/>
)
) : (
getNestedValue(
props.item,
String(column.key),
)?.toString() ||
column.noValueMessage ||
""
)
@@ -413,7 +425,10 @@ const TableRow: TableRowFunction = <T extends GenericObject>(
/>
)
) : (
getNestedValue(
props.item,
String(column.key),
)?.toString() ||
column.noValueMessage ||
""
)


@@ -59,7 +59,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -118,7 +117,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",
@@ -4535,13 +4533,6 @@
"@types/node": "*"
}
},
"node_modules/@types/lodash": {
"version": "4.17.13",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.13.tgz",
"integrity": "sha512-lfx+dftrEZcdBPczf9d0Qv0x+j/rfNCMuC6OcfXmO8gkfeNAY88PgKUbvG56whcN23gc27yenwF6oJZXGFpYxg==",
"dev": true,
"license": "MIT"
},
"node_modules/@types/mdast": {
"version": "3.0.15",
"resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-3.0.15.tgz",


@@ -27,7 +27,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",
@@ -95,7 +94,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",


@@ -57,6 +57,7 @@
"@types/react-highlight": "^0.12.8",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/uuid": "^8.3.4",
"@types/web-push": "^3.6.4",
"acme-client": "^5.3.0",
"airtable": "^0.12.2",
"axios": "^1.7.2",
@@ -76,7 +77,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -120,6 +120,7 @@
"universal-cookie": "^7.2.1",
"use-async-effect": "^2.2.6",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"zod": "^3.25.30"
},
"devDependencies": {
@@ -134,7 +135,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",
@@ -3257,6 +3257,7 @@
},
"../Common/node_modules/lodash": {
"version": "4.17.21",
"dev": true,
"license": "MIT"
},
"../Common/node_modules/lodash.memoize": {


@@ -19,6 +19,10 @@ node_modules
env.js
# Note: public/sw.js is auto-generated from sw.js.template during build
# but should be committed to ensure it exists for deployments
# The file is regenerated with correct versions during Docker build
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@@ -62,6 +62,7 @@
"@types/react-highlight": "^0.12.8",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/uuid": "^8.3.4",
"@types/web-push": "^3.6.4",
"acme-client": "^5.3.0",
"airtable": "^0.12.2",
"axios": "^1.7.2",
@@ -81,7 +82,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -125,6 +125,7 @@
"universal-cookie": "^7.2.1",
"use-async-effect": "^2.2.6",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"zod": "^3.25.30"
},
"devDependencies": {
@@ -139,7 +140,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",
@@ -3262,6 +3262,7 @@
},
"../Common/node_modules/lodash": {
"version": "4.17.21",
"dev": true,
"license": "MIT"
},
"../Common/node_modules/lodash.memoize": {


@@ -3,13 +3,14 @@
"version": "0.1.0",
"private": false,
"scripts": {
"generate-sw": "node scripts/generate-sw.js",
"dev-build": "npm run generate-sw && NODE_ENV=development node esbuild.config.js",
"dev": "npx nodemon",
"build": "npm run generate-sw && NODE_ENV=production node esbuild.config.js",
"analyze": "npm run generate-sw && analyze=true NODE_ENV=production node esbuild.config.js",
"test": "react-app-rewired test",
"eject": "echo 'esbuild does not require eject'",
"compile": "npm run generate-sw && tsc",
"clear-modules": "rm -rf node_modules && rm package-lock.json && npm install",
"start": "node --require ts-node/register Serve.ts",
"audit": "npm audit --audit-level=low",


@@ -0,0 +1,21 @@
#!/bin/bash
# Development Service Worker Generation Script
#
# This script can be used during local development to test
# the service worker generation with sample environment variables
echo "🔧 Generating service worker for local development..."
# Set sample environment variables for testing
export APP_VERSION="1.0.0-dev"
export GIT_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo "local-dev")
echo "Using APP_VERSION: $APP_VERSION"
echo "Using GIT_SHA: $GIT_SHA"
# Generate the service worker
node scripts/generate-sw.js
echo "✅ Service worker generated for development"
echo "🔍 Check public/sw.js to see the generated file"


@@ -0,0 +1,22 @@
#!/usr/bin/env node
/**
* Dashboard Service Worker Generator
*
* This script generates the Dashboard service worker from a template,
* using the universal generator from Common/Scripts.
*/
const path = require('path');
const { generateServiceWorker } = require('../../Common/Scripts/generate-service-worker');
// Generate Dashboard service worker
const templatePath = path.join(__dirname, '..', 'sw.js.template');
const outputPath = path.join(__dirname, '..', 'public', 'sw.js');
try {
generateServiceWorker(templatePath, outputPath, 'OneUptime Dashboard');
} catch (error) {
console.error('❌ Failed to generate Dashboard service worker:', error.message);
process.exit(1);
}


@@ -26,7 +26,7 @@ const ProbePicker: FunctionComponent<ComponentProps> = (
return (
<div className="flex">
<div className="w-fit mr-2 flex h-full align-middle items-center mt-4">
<FieldLabelElement title="Select Probe:" required={true} />
</div>
<div>


@@ -1,24 +1,215 @@
/* eslint-disable no-restricted-globals */
// OneUptime Progressive Web App Service Worker
// Handles push notifications and caching for PWA functionality
console.log('[ServiceWorker] OneUptime PWA Service Worker Loaded');
// Cache configuration - Updated dynamically during build
// Version format: oneuptime-v{APP_VERSION}-{GIT_SHA}
// This ensures cache invalidation on every deployment
const CACHE_VERSION = 'oneuptime-v{{APP_VERSION}}-{{GIT_SHA}}'; // Auto-generated version
const STATIC_CACHE = `${CACHE_VERSION}-static`;
const DYNAMIC_CACHE = `${CACHE_VERSION}-dynamic`;
// Cache duration configuration (in milliseconds)
const CACHE_DURATIONS = {
static: 7 * 24 * 60 * 60 * 1000, // 7 days for static assets
dynamic: 24 * 60 * 60 * 1000, // 1 day for dynamic content
};
// Assets to cache immediately during install
const STATIC_ASSETS = [
'/dashboard/',
'/dashboard/manifest.json',
'/dashboard/offline.html',
'/dashboard/assets/img/favicons/favicon.ico',
'/dashboard/assets/img/favicons/android-chrome-192x192.png',
'/dashboard/assets/img/favicons/android-chrome-512x512.png',
// Add other critical assets as needed
];
// Install event - cache static assets
self.addEventListener('install', function(event) {
console.log('[ServiceWorker] Installing...');
event.waitUntil(
Promise.all([
// Cache static assets
caches.open(STATIC_CACHE).then(function(cache) {
console.log('[ServiceWorker] Pre-caching static assets');
return cache.addAll(STATIC_ASSETS.filter(url => url !== '/dashboard/'));
}),
// Skip waiting to activate immediately
self.skipWaiting()
])
);
});
// Activate event - clean up old caches
self.addEventListener('activate', function(event) {
console.log('[ServiceWorker] Activating...');
event.waitUntil(
Promise.all([
// Clean up old caches
caches.keys().then(function(cacheNames) {
return Promise.all(
cacheNames.map(function(cacheName) {
if (cacheName.startsWith('oneuptime-') &&
!cacheName.startsWith(CACHE_VERSION)) {
console.log('[ServiceWorker] Deleting old cache:', cacheName);
return caches.delete(cacheName);
}
})
);
}),
// Claim all clients
self.clients.claim()
])
);
});
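The activation handler above deletes only caches that belong to OneUptime but were written under a previous `CACHE_VERSION`, so a new deployment invalidates stale entries without touching caches owned by other apps on the origin. The rule can be isolated as a small predicate (a sketch with a hypothetical function name):

```typescript
// Sketch of the activation-time cleanup rule above: delete only caches that
// are OneUptime's but carry a different deployment version prefix.
function shouldDeleteCache(cacheName: string, currentVersion: string): boolean {
  return (
    cacheName.startsWith("oneuptime-") && !cacheName.startsWith(currentVersion)
  );
}
```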
// Fetch event - implement caching strategies
self.addEventListener('fetch', function(event) {
const request = event.request;
const url = new URL(request.url);
// Skip non-GET requests
if (request.method !== 'GET') {
return;
}
// Skip chrome-extension and other non-http(s) requests
if (!url.protocol.startsWith('http')) {
return;
}
event.respondWith(handleRequest(request));
});
// Request handling with different caching strategies
async function handleRequest(request) {
const url = new URL(request.url);
const pathname = url.pathname;
try {
// Strategy 1: Network First for HTML pages (with fallback)
if (pathname.endsWith('/') || pathname.endsWith('.html') ||
pathname === '/dashboard' || pathname.startsWith('/dashboard/') && !pathname.includes('.')) {
return await networkFirstWithFallback(request, DYNAMIC_CACHE);
}
// Strategy 2: Cache First for JavaScript, CSS, and other static assets
if (pathname.includes('/dist/') || pathname.match(/\.(js|css|woff|woff2|ttf|otf|eot)$/)) {
return await cacheFirstWithUpdate(request, STATIC_CACHE);
}
// Strategy 3: Cache First for images and other media
if (pathname.match(/\.(png|jpe?g|gif|svg|ico|webp|avif)$/)) {
return await cacheFirstWithUpdate(request, STATIC_CACHE);
}
    // Strategy 4: Network First for everything else
return await networkFirstWithFallback(request, DYNAMIC_CACHE);
} catch (error) {
console.error('[ServiceWorker] Request handling error:', error);
// Return offline page for navigation requests
if (request.mode === 'navigate') {
const offlineResponse = await caches.match('/dashboard/offline.html');
if (offlineResponse) {
return offlineResponse;
}
}
// Return a basic offline response
return new Response('Offline - Please check your internet connection', {
status: 503,
statusText: 'Service Unavailable',
headers: { 'Content-Type': 'text/plain' }
});
}
}
// Caching Strategy 1: Network First with Fallback (for HTML)
async function networkFirstWithFallback(request, cacheName) {
try {
const networkResponse = await fetch(request);
if (networkResponse.ok) {
// Cache successful responses
const cache = await caches.open(cacheName);
cache.put(request, networkResponse.clone());
}
return networkResponse;
} catch (error) {
console.log('[ServiceWorker] Network failed, trying cache:', request.url);
const cachedResponse = await caches.match(request);
if (cachedResponse) {
return cachedResponse;
}
throw error;
}
}
// Caching Strategy 2: Cache First with Background Update (for static assets)
async function cacheFirstWithUpdate(request, cacheName) {
const cachedResponse = await caches.match(request);
if (cachedResponse) {
// Return cached version immediately
// Background update if cache is old
const cacheDate = new Date(cachedResponse.headers.get('date') || 0);
const now = new Date();
const age = now.getTime() - cacheDate.getTime();
if (age > CACHE_DURATIONS.static) {
// Background update - don't await
updateCacheInBackground(request, cacheName);
}
return cachedResponse;
}
// Not in cache, fetch from network
try {
const networkResponse = await fetch(request);
if (networkResponse.ok) {
const cache = await caches.open(cacheName);
cache.put(request, networkResponse.clone());
}
return networkResponse;
} catch (error) {
console.error('[ServiceWorker] Failed to fetch asset:', request.url, error);
throw error;
}
}
// Background cache update
async function updateCacheInBackground(request, cacheName) {
try {
const networkResponse = await fetch(request);
if (networkResponse.ok) {
const cache = await caches.open(cacheName);
await cache.put(request, networkResponse);
console.log('[ServiceWorker] Background cache update:', request.url);
}
} catch (error) {
console.log('[ServiceWorker] Background update failed:', request.url, error);
}
}
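The staleness check inside `cacheFirstWithUpdate` can be exercised in isolation. The duration constant below is an assumption for illustration; in the service worker the real value comes from `CACHE_DURATIONS.static`, which is defined earlier in the file.

```javascript
// Sketch: decide whether a cached response is stale based on its Date header.
const STATIC_MAX_AGE_MS = 24 * 60 * 60 * 1000; // assumed 1 day; the real value is CACHE_DURATIONS.static

function isStale(dateHeader, nowMs) {
  // A missing Date header parses as epoch 0, so headerless responses are always stale.
  const cachedAt = new Date(dateHeader || 0).getTime();
  return nowMs - cachedAt > STATIC_MAX_AGE_MS;
}

const now = Date.parse("2025-08-01T12:00:00Z");
console.log(isStale("2025-07-01T12:00:00Z", now)); // true  (a month old)
console.log(isStale("2025-08-01T11:00:00Z", now)); // false (an hour old)
```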
// Handle push subscription changes
self.addEventListener('pushsubscriptionchange', function(event) {
@@ -174,7 +365,7 @@ self.addEventListener('message', function(event) {
if (event.data && event.data.type === 'SKIP_WAITING') {
self.skipWaiting();
} else if (event.data && event.data.type === 'GET_VERSION') {
event.ports[0].postMessage({ version: 'oneuptime-pwa-no-cache' });
event.ports[0].postMessage({ version: CACHE_VERSION });
}
});


@@ -164,7 +164,7 @@
<!-- PWA Service Worker Registration -->
<script>
// Service Worker Registration for PWA
// Service Worker Registration for PWA with proper update handling
if ('serviceWorker' in navigator) {
window.addEventListener('load', function() {
navigator.serviceWorker.register('/dashboard/sw.js', {
@@ -179,15 +179,20 @@
newWorker.addEventListener('statechange', function() {
if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
// New version available
if (confirm('A new version of OneUptime is available. Refresh to update?')) {
newWorker.postMessage({type: 'SKIP_WAITING'});
window.location.reload();
}
console.log('OneUptime PWA: New version available');
// Show a non-intrusive update notification
showUpdateNotification(newWorker);
}
});
}
});
// Check for updates periodically
setInterval(function() {
registration.update();
}, 60000); // Check every minute
}).catch(function(error) {
console.log('OneUptime PWA: Service Worker registration failed', error);
});
@@ -199,68 +204,105 @@
}
});
// Listen for controlled change
// Listen for controlled change (when SW takes control)
navigator.serviceWorker.addEventListener('controllerchange', function() {
console.log('OneUptime PWA: New service worker took control');
window.location.reload();
});
});
}
// PWA Install Prompt
let deferredPrompt;
const installButton = document.createElement('button');
installButton.style.cssText = `
position: fixed;
bottom: 20px;
right: 20px;
background: #000000;
color: white;
border: none;
padding: 12px 16px;
border-radius: 8px;
font-size: 14px;
font-weight: 500;
cursor: pointer;
z-index: 1000;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.2);
display: none;
font-family: Inter, sans-serif;
`;
installButton.textContent = '📱 Install OneUptime';
installButton.setAttribute('aria-label', 'Install OneUptime as PWA');
document.body.appendChild(installButton);
window.addEventListener('beforeinstallprompt', function(event) {
console.log('OneUptime PWA: Install prompt triggered');
event.preventDefault();
deferredPrompt = event;
// Show update notification
function showUpdateNotification(newWorker) {
// Create a simple notification banner
const banner = document.createElement('div');
banner.style.cssText = `
position: fixed;
top: 0;
left: 0;
right: 0;
background: #007bff;
color: white;
padding: 12px 16px;
text-align: center;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
font-size: 14px;
z-index: 10000;
box-shadow: 0 2px 8px rgba(0,0,0,0.15);
transform: translateY(-100%);
transition: transform 0.3s ease;
`;
// Show install button after a delay to avoid being intrusive
banner.innerHTML = `
<div style="max-width: 800px; margin: 0 auto; display: flex; align-items: center; justify-content: space-between; flex-wrap: wrap; gap: 12px;">
<span>🚀 A new version of OneUptime is available!</span>
<div style="display: flex; gap: 12px;">
<button onclick="updateApp()" style="background: rgba(255,255,255,0.2); border: 1px solid rgba(255,255,255,0.3); color: white; padding: 6px 16px; border-radius: 4px; cursor: pointer; font-size: 13px;">Update Now</button>
<button onclick="dismissUpdate()" style="background: transparent; border: 1px solid rgba(255,255,255,0.3); color: white; padding: 6px 16px; border-radius: 4px; cursor: pointer; font-size: 13px;">Later</button>
</div>
</div>
`;
document.body.appendChild(banner);
// Animate in
setTimeout(() => {
installButton.style.display = 'block';
}, 5000);
});
banner.style.transform = 'translateY(0)';
}, 100);
// Auto-dismiss after 10 seconds
setTimeout(() => {
dismissUpdate();
}, 10000);
// Store reference for global functions
window.updateBanner = banner;
window.newServiceWorker = newWorker;
}
installButton.addEventListener('click', function() {
if (deferredPrompt) {
deferredPrompt.prompt();
deferredPrompt.userChoice.then(function(choiceResult) {
console.log('OneUptime PWA: Install prompt result:', choiceResult.outcome);
deferredPrompt = null;
installButton.style.display = 'none';
});
// Update app function
function updateApp() {
if (window.newServiceWorker) {
window.newServiceWorker.postMessage({type: 'SKIP_WAITING'});
dismissUpdate();
// Show loading indicator
const loadingDiv = document.createElement('div');
loadingDiv.style.cssText = `
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0,0,0,0.8);
color: white;
display: flex;
align-items: center;
justify-content: center;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
font-size: 16px;
z-index: 10001;
`;
loadingDiv.innerHTML = `
<div style="text-align: center;">
<div style="margin-bottom: 16px;">🔄</div>
<div>Updating OneUptime...</div>
</div>
`;
document.body.appendChild(loadingDiv);
}
});
}
window.addEventListener('appinstalled', function() {
console.log('OneUptime PWA: App installed successfully');
installButton.style.display = 'none';
deferredPrompt = null;
});
// Hide install button if already installed
if (window.matchMedia && window.matchMedia('(display-mode: standalone)').matches) {
installButton.style.display = 'none';
// Dismiss update notification
function dismissUpdate() {
if (window.updateBanner) {
window.updateBanner.style.transform = 'translateY(-100%)';
setTimeout(() => {
if (window.updateBanner && window.updateBanner.parentNode) {
window.updateBanner.parentNode.removeChild(window.updateBanner);
}
}, 300);
}
}
// Performance monitoring


@@ -21,87 +21,18 @@ provider "oneuptime" {
api_key = var.oneuptime_api_key
}
# Note: Projects must be created manually in the OneUptime dashboard
# Use the project ID from your existing project
variable "project_id" {
description = "OneUptime project ID"
type = string
}
```
### Basic Monitor
```hcl
resource "oneuptime_monitor" "homepage" {
resource "oneuptime_monitor" "manual_monitor" {
name = "Homepage Monitor"
description = "Monitor for the main website homepage"
data = jsonencode({
url = "https://example.com"
interval = "5m"
timeout = "30s"
})
monitor_type = "Manual"
}
```
### API Monitor
```hcl
resource "oneuptime_monitor" "api_health" {
name = "API Health Check"
description = "Monitor for API health endpoint"
data = jsonencode({
url = "https://api.example.com/health"
method = "GET"
interval = "2m"
timeout = "15s"
headers = {
"Content-Type" = "application/json"
}
})
}
```
### Teams
```hcl
resource "oneuptime_team" "sre" {
name = "SRE Team"
description = "Site Reliability Engineering team"
}
resource "oneuptime_team" "development" {
name = "Development Team"
description = "Application development team"
}
```
### On-Call Duty Policy
```hcl
resource "oneuptime_on_call_duty_policy" "sre_oncall" {
name = "SRE On-Call Policy"
description = "On-call policy for SRE team"
data = jsonencode({
team_id = oneuptime_team.sre.id
schedules = [
{
name = "Business Hours"
timezone = "America/New_York"
layers = [
{
name = "Primary SRE"
users = ["sre1@example.com", "sre2@example.com"]
rotation_type = "weekly"
start_time = "09:00"
end_time = "17:00"
days = ["monday", "tuesday", "wednesday", "thursday", "friday"]
}
]
}
]
})
}
```
### Status Pages
```hcl
@@ -109,200 +40,5 @@ resource "oneuptime_on_call_duty_policy" "sre_oncall" {
resource "oneuptime_status_page" "public" {
name = "Public Status Page"
description = "Public status page for customer-facing services"
data = jsonencode({
domain = "status.example.com"
is_public = true
title = "Example.com Service Status"
components = [
{
name = "Website"
description = "Main website and application"
monitor_id = oneuptime_monitor.homepage.id
},
{
name = "API"
description = "REST API services"
monitor_id = oneuptime_monitor.api_health.id
}
]
})
}
```
## Multi-Environment Example
### Environment Module
```hcl
# modules/environment/main.tf
variable "environment" {
description = "Environment name"
type = string
}
variable "domain" {
description = "Domain for this environment"
type = string
}
variable "api_domain" {
description = "API domain for this environment"
type = string
}
# Environment-specific monitors
resource "oneuptime_monitor" "app" {
name = "${var.environment}-application"
description = "Application monitor for ${var.environment} environment"
data = jsonencode({
url = "https://${var.domain}"
interval = var.environment == "production" ? "1m" : "5m"
timeout = "30s"
})
}
resource "oneuptime_monitor" "api" {
name = "${var.environment}-api"
description = "API monitor for ${var.environment} environment"
data = jsonencode({
url = "https://${var.api_domain}/health"
method = "GET"
interval = var.environment == "production" ? "1m" : "3m"
timeout = "15s"
})
}
# Outputs
output "app_monitor_id" {
value = oneuptime_monitor.app.id
}
output "api_monitor_id" {
value = oneuptime_monitor.api.id
}
```
### Using the Environment Module
```hcl
# main.tf
terraform {
required_providers {
oneuptime = {
source = "oneuptime/oneuptime"
version = "~> 7.0"
}
}
}
provider "oneuptime" {
oneuptime_url = var.oneuptime_url
api_key = var.oneuptime_api_key
}
# Note: Projects must be created manually in the OneUptime dashboard
# Use the project ID from your existing project
variable "project_id" {
description = "OneUptime project ID"
type = string
}
# Development environment
module "development" {
source = "./modules/environment"
environment = "development"
domain = "dev.example.com"
api_domain = "api-dev.example.com"
}
# Staging environment
module "staging" {
source = "./modules/environment"
environment = "staging"
domain = "staging.example.com"
api_domain = "api-staging.example.com"
}
# Production environment
module "production" {
source = "./modules/environment"
environment = "production"
domain = "example.com"
api_domain = "api.example.com"
}
# Cross-environment status page
resource "oneuptime_status_page" "all_environments" {
name = "All Environments Status"
description = "Status page showing all environments"
data = jsonencode({
domain = "status-internal.example.com"
component_groups = [
{
name = "Production"
components = [
module.production.app_monitor_id,
module.production.api_monitor_id
]
},
{
name = "Staging"
components = [
module.staging.app_monitor_id,
module.staging.api_monitor_id
]
},
{
name = "Development"
components = [
module.development.app_monitor_id,
module.development.api_monitor_id
]
}
]
})
}
```
## Service Catalog Example
```hcl
# Service catalog with dependencies
resource "oneuptime_service_catalog" "frontend" {
name = "Frontend Application"
description = "Customer-facing web application"
data = jsonencode({
service_level = "customer_facing"
criticality = "high"
team_id = oneuptime_team.development.id
monitors = [oneuptime_monitor.homepage.id]
tags = {
language = "typescript"
framework = "react"
repository = "github.com/example/frontend"
}
})
}
resource "oneuptime_service_catalog" "api" {
name = "Backend API"
description = "REST API backend service"
data = jsonencode({
service_level = "internal"
criticality = "critical"
team_id = oneuptime_team.development.id
monitors = [oneuptime_monitor.api_health.id]
tags = {
language = "nodejs"
framework = "express"
repository = "github.com/example/api"
}
})
}
```
These examples demonstrate the correct usage of the OneUptime Terraform provider resources, which use a simplified schema with `name`, `description`, and `data` fields for configuration.
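As a concrete illustration of that schema, a minimal resource might look like this (the resource name and `data` values are illustrative, not taken from the examples above):

```hcl
# Hypothetical minimal resource: every provider resource shares this shape.
resource "oneuptime_monitor" "minimal_example" {
  name        = "Minimal Monitor"        # human-readable name
  description = "Smallest valid monitor" # optional description
  data = jsonencode({                    # resource-specific settings as JSON
    url      = "https://example.com"
    interval = "5m"
  })
}
```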


@@ -55,6 +55,7 @@
"@types/react-highlight": "^0.12.8",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/uuid": "^8.3.4",
"@types/web-push": "^3.6.4",
"acme-client": "^5.3.0",
"airtable": "^0.12.2",
"axios": "^1.7.2",
@@ -74,7 +75,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -118,6 +118,7 @@
"universal-cookie": "^7.2.1",
"use-async-effect": "^2.2.6",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"zod": "^3.25.30"
},
"devDependencies": {
@@ -132,7 +133,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",

E2E/package-lock.json generated

@@ -54,6 +54,7 @@
"@types/react-highlight": "^0.12.8",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/uuid": "^8.3.4",
"@types/web-push": "^3.6.4",
"acme-client": "^5.3.0",
"airtable": "^0.12.2",
"axios": "^1.7.2",
@@ -73,7 +74,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -117,6 +117,7 @@
"universal-cookie": "^7.2.1",
"use-async-effect": "^2.2.6",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"zod": "^3.25.30"
},
"devDependencies": {
@@ -131,7 +132,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",


@@ -1,23 +1,17 @@
import TelemetryIngest, {
TelemetryRequest,
} from "Common/Server/Middleware/TelemetryIngest";
import OneUptimeDate from "Common/Types/Date";
import { JSONObject } from "Common/Types/JSON";
import ProductType from "Common/Types/MeteredPlan/ProductType";
import LogService from "Common/Server/Services/LogService";
import Express, {
ExpressRequest,
ExpressResponse,
ExpressRouter,
NextFunction,
} from "Common/Server/Utils/Express";
import logger from "Common/Server/Utils/Logger";
import Response from "Common/Server/Utils/Response";
import Log from "Common/Models/AnalyticsModels/Log";
import LogSeverity from "Common/Types/Log/LogSeverity";
import OTelIngestService from "Common/Server/Services/OpenTelemetryIngestService";
import ObjectID from "Common/Types/ObjectID";
import JSONFunctions from "Common/Types/JSONFunctions";
import FluentIngestQueueService from "../Services/Queue/FluentIngestQueueService";
import ClusterKeyAuthorization from "Common/Server/Middleware/ClusterKeyAuthorization";
import BadRequestException from "Common/Types/Exception/BadRequestException";
export class FluentRequestMiddleware {
public static async getProductType(
@@ -46,96 +40,107 @@ router.post(
next: NextFunction,
): Promise<void> => {
try {
logger.debug("Fluent Ingest API called");
const dbLogs: Array<Log> = [];
let logItems: Array<JSONObject | string> | JSONObject = req.body as
| Array<JSONObject | string>
| JSONObject;
let oneuptimeServiceName: string | string[] | undefined =
req.headers["x-oneuptime-service-name"];
if (!oneuptimeServiceName) {
oneuptimeServiceName = "Unknown Service";
if (!(req as TelemetryRequest).projectId) {
throw new BadRequestException(
"Invalid request - projectId not found in request.",
);
}
const telemetryService: {
serviceId: ObjectID;
dataRententionInDays: number;
} = await OTelIngestService.telemetryServiceFromName({
serviceName: oneuptimeServiceName as string,
projectId: (req as TelemetryRequest).projectId,
req.body = req.body.toJSON ? req.body.toJSON() : req.body;
// Return response immediately
Response.sendEmptySuccessResponse(req, res);
// Add to queue for asynchronous processing
await FluentIngestQueueService.addFluentIngestJob(
req as TelemetryRequest,
);
return;
} catch (err) {
return next(err);
}
},
);
// Queue stats endpoint
router.get(
"/fluent/queue/stats",
ClusterKeyAuthorization.isAuthorizedServiceMiddleware,
async (
req: ExpressRequest,
res: ExpressResponse,
next: NextFunction,
): Promise<void> => {
try {
const stats: {
waiting: number;
active: number;
completed: number;
failed: number;
delayed: number;
total: number;
} = await FluentIngestQueueService.getQueueStats();
return Response.sendJsonObjectResponse(req, res, stats);
} catch (err) {
return next(err);
}
},
);
// Queue size endpoint
router.get(
"/fluent/queue/size",
ClusterKeyAuthorization.isAuthorizedServiceMiddleware,
async (
req: ExpressRequest,
res: ExpressResponse,
next: NextFunction,
): Promise<void> => {
try {
const size: number = await FluentIngestQueueService.getQueueSize();
return Response.sendJsonObjectResponse(req, res, { size });
} catch (err) {
return next(err);
}
},
);
// Queue failed jobs endpoint
router.get(
"/fluent/queue/failed",
ClusterKeyAuthorization.isAuthorizedServiceMiddleware,
async (
req: ExpressRequest,
res: ExpressResponse,
next: NextFunction,
): Promise<void> => {
try {
// Parse pagination parameters from query string
const start: number = parseInt(req.query["start"] as string) || 0;
const end: number = parseInt(req.query["end"] as string) || 100;
const failedJobs: Array<{
id: string;
name: string;
data: any;
failedReason: string;
processedOn: Date | null;
finishedOn: Date | null;
attemptsMade: number;
}> = await FluentIngestQueueService.getFailedJobs({
start,
end,
});
if (
logItems &&
typeof logItems === "object" &&
(logItems as JSONObject)["json"]
) {
logItems = (logItems as JSONObject)["json"] as
| Array<JSONObject | string>
| JSONObject;
}
if (!Array.isArray(logItems)) {
logItems = [logItems];
}
for (let logItem of logItems) {
const dbLog: Log = new Log();
dbLog.projectId = (req as TelemetryRequest).projectId;
dbLog.serviceId = telemetryService.serviceId;
dbLog.severityNumber = 0;
const currentTimeAndDate: Date = OneUptimeDate.getCurrentDate();
dbLog.timeUnixNano = OneUptimeDate.toUnixNano(currentTimeAndDate);
dbLog.time = currentTimeAndDate;
dbLog.severityText = LogSeverity.Unspecified;
if (typeof logItem === "string") {
// check if it's parseable as JSON
try {
logItem = JSON.parse(logItem);
} catch {
// do nothing
}
}
if (typeof logItem !== "string") {
logItem = JSON.stringify(logItem);
}
dbLog.body = logItem as string;
dbLogs.push(dbLog);
}
await LogService.createMany({
items: dbLogs,
props: {
isRoot: true,
return Response.sendJsonObjectResponse(req, res, {
failedJobs,
pagination: {
start,
end,
count: failedJobs.length,
},
});
OTelIngestService.recordDataIngestedUsgaeBilling({
services: {
[oneuptimeServiceName as string]: {
dataIngestedInGB: JSONFunctions.getSizeOfJSONinGB(req.body),
dataRententionInDays: telemetryService.dataRententionInDays,
serviceId: telemetryService.serviceId,
serviceName: oneuptimeServiceName as string,
},
},
projectId: (req as TelemetryRequest).projectId,
productType: ProductType.Logs,
}).catch((err: Error) => {
logger.error(err);
});
return Response.sendEmptySuccessResponse(req, res);
} catch (err) {
return next(err);
}


@@ -0,0 +1,37 @@
import Express, {
ExpressRequest,
ExpressResponse,
ExpressRouter,
NextFunction,
} from "Common/Server/Utils/Express";
import FluentIngestQueueService from "../Services/Queue/FluentIngestQueueService";
// import ClusterKeyAuthorization from "Common/Server/Middleware/ClusterKeyAuthorization";
const router: ExpressRouter = Express.getRouter();
/**
* JSON metrics endpoint for KEDA autoscaling
* Returns queue size as JSON for KEDA metrics-api scaler
*/
router.get(
"/metrics/queue-size",
// ClusterKeyAuthorization.isAuthorizedServiceMiddleware, // Temporarily disabled for KEDA debugging
async (
_req: ExpressRequest,
res: ExpressResponse,
next: NextFunction,
): Promise<void> => {
try {
const queueSize: number = await FluentIngestQueueService.getQueueSize();
res.setHeader("Content-Type", "application/json");
res.status(200).json({
queueSize: queueSize,
});
} catch (err) {
return next(err);
}
},
);
export default router;
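For context, a KEDA `metrics-api` trigger consuming this endpoint might look like the following. The service URL, port, and threshold here are assumptions for illustration; the actual configuration lives in the Helm chart's values.yaml.

```yaml
triggers:
  - type: metrics-api
    metadata:
      # scale out when the reported queue size exceeds this value (assumed threshold)
      targetValue: "10"
      # assumed in-cluster URL of the fluent-ingest service
      url: "http://fluent-ingest:3456/metrics/queue-size"
      # JSON path into the {"queueSize": N} response body
      valueLocation: "queueSize"
```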


@@ -1,4 +1,5 @@
import FluentIngestAPI from "./API/FluentIngest";
import MetricsAPI from "./API/Metrics";
import { PromiseVoidFunction } from "Common/Types/FunctionTypes";
import { ClickhouseAppInstance } from "Common/Server/Infrastructure/ClickhouseDatabase";
import PostgresAppInstance from "Common/Server/Infrastructure/PostgresDatabase";
@@ -9,12 +10,14 @@ import logger from "Common/Server/Utils/Logger";
import Realtime from "Common/Server/Utils/Realtime";
import App from "Common/Server/Utils/StartServer";
import Telemetry from "Common/Server/Utils/Telemetry";
import "./Jobs/FluentIngest/ProcessFluentIngest";
const app: ExpressApplication = Express.getExpressApp();
const APP_NAME: string = "fluent-ingest";
app.use([`/${APP_NAME}`, "/"], FluentIngestAPI);
app.use([`/${APP_NAME}`, "/"], MetricsAPI);
const init: PromiseVoidFunction = async (): Promise<void> => {
try {


@@ -0,0 +1,141 @@
import { FluentIngestJobData } from "../../Services/Queue/FluentIngestQueueService";
import logger from "Common/Server/Utils/Logger";
import { QueueJob, QueueName } from "Common/Server/Infrastructure/Queue";
import QueueWorker from "Common/Server/Infrastructure/QueueWorker";
import ObjectID from "Common/Types/ObjectID";
import OneUptimeDate from "Common/Types/Date";
import { JSONObject } from "Common/Types/JSON";
import ProductType from "Common/Types/MeteredPlan/ProductType";
import LogService from "Common/Server/Services/LogService";
import LogSeverity from "Common/Types/Log/LogSeverity";
import OTelIngestService from "Common/Server/Services/OpenTelemetryIngestService";
import JSONFunctions from "Common/Types/JSONFunctions";
import Log from "Common/Models/AnalyticsModels/Log";
interface FluentIngestProcessData {
projectId: ObjectID;
requestBody: JSONObject;
requestHeaders: JSONObject;
}
// Set up the worker for processing fluent ingest queue
QueueWorker.getWorker(
QueueName.FluentIngest,
async (job: QueueJob): Promise<void> => {
logger.debug(`Processing fluent ingestion job: ${job.name}`);
try {
const jobData: FluentIngestJobData = job.data as FluentIngestJobData;
// Pass job data directly to processing function
await processFluentIngestFromQueue({
projectId: new ObjectID(jobData.projectId),
requestBody: jobData.requestBody,
requestHeaders: jobData.requestHeaders,
});
logger.debug(`Successfully processed fluent ingestion job: ${job.name}`);
} catch (error) {
logger.error(`Error processing fluent ingestion job:`);
logger.error(error);
throw error;
}
},
{ concurrency: 20 }, // Process up to 20 fluent ingest jobs concurrently
);
async function processFluentIngestFromQueue(
data: FluentIngestProcessData,
): Promise<void> {
const dbLogs: Array<Log> = [];
let logItems: Array<JSONObject | string> | JSONObject = data.requestBody as
| Array<JSONObject | string>
| JSONObject;
let oneuptimeServiceName: string | string[] | undefined = data.requestHeaders[
"x-oneuptime-service-name"
] as string | string[] | undefined;
if (!oneuptimeServiceName) {
oneuptimeServiceName = "Unknown Service";
}
const telemetryService: {
serviceId: ObjectID;
dataRententionInDays: number;
} = await OTelIngestService.telemetryServiceFromName({
serviceName: oneuptimeServiceName as string,
projectId: data.projectId,
});
if (
logItems &&
typeof logItems === "object" &&
(logItems as JSONObject)["json"]
) {
logItems = (logItems as JSONObject)["json"] as
| Array<JSONObject | string>
| JSONObject;
}
if (!Array.isArray(logItems)) {
logItems = [logItems];
}
for (let logItem of logItems) {
const dbLog: Log = new Log();
dbLog.projectId = data.projectId;
dbLog.serviceId = telemetryService.serviceId;
dbLog.severityNumber = 0;
const currentTimeAndDate: Date = OneUptimeDate.getCurrentDate();
dbLog.timeUnixNano = OneUptimeDate.toUnixNano(currentTimeAndDate);
dbLog.time = currentTimeAndDate;
dbLog.severityText = LogSeverity.Unspecified;
if (typeof logItem === "string") {
// check if it's parseable as JSON
try {
logItem = JSON.parse(logItem);
} catch {
// do nothing
}
}
if (typeof logItem !== "string") {
logItem = JSON.stringify(logItem);
}
dbLog.body = logItem as string;
dbLogs.push(dbLog);
}
await LogService.createMany({
items: dbLogs,
props: {
isRoot: true,
},
});
OTelIngestService.recordDataIngestedUsgaeBilling({
services: {
[oneuptimeServiceName as string]: {
dataIngestedInGB: JSONFunctions.getSizeOfJSONinGB(
data.requestBody as JSONObject,
),
dataRententionInDays: telemetryService.dataRententionInDays,
serviceId: telemetryService.serviceId,
serviceName: oneuptimeServiceName as string,
},
},
projectId: data.projectId,
productType: ProductType.Logs,
}).catch((err: Error) => {
logger.error(err);
});
}
logger.debug("Fluent ingest worker initialized");
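The log-body normalization step the worker performs on each item can be sketched as a standalone function: strings that parse as JSON are re-serialized, and anything non-string is stringified, so the stored body is always a string.

```javascript
// Standalone sketch of the worker's log-body normalization logic.
function normalizeLogBody(logItem) {
  if (typeof logItem === "string") {
    try {
      logItem = JSON.parse(logItem); // may yield an object, array, number, etc.
    } catch {
      // not valid JSON; keep the raw string as-is
    }
  }
  if (typeof logItem !== "string") {
    logItem = JSON.stringify(logItem);
  }
  return logItem;
}

console.log(normalizeLogBody('{"a":1}'));    // {"a":1}
console.log(normalizeLogBody("plain text")); // plain text
console.log(normalizeLogBody({ b: 2 }));     // {"b":2}
```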


@@ -0,0 +1,72 @@
import { TelemetryRequest } from "Common/Server/Middleware/TelemetryIngest";
import Queue, { QueueName } from "Common/Server/Infrastructure/Queue";
import { JSONObject } from "Common/Types/JSON";
import OneUptimeDate from "Common/Types/Date";
import logger from "Common/Server/Utils/Logger";
export interface FluentIngestJobData {
projectId: string;
requestBody: JSONObject;
requestHeaders: Record<string, string>;
ingestionTimestamp: Date;
}
export default class FluentIngestQueueService {
public static async addFluentIngestJob(req: TelemetryRequest): Promise<void> {
try {
const jobData: FluentIngestJobData = {
projectId: req.projectId.toString(),
requestBody: req.body,
requestHeaders: req.headers as Record<string, string>,
ingestionTimestamp: OneUptimeDate.getCurrentDate(),
};
const jobId: string = `fluent-${req.projectId?.toString()}-${OneUptimeDate.getCurrentDateAsUnixNano()}`;
await Queue.addJob(
QueueName.FluentIngest,
jobId,
"ProcessFluentIngest",
jobData as unknown as JSONObject,
);
logger.debug(`Added fluent ingestion job: ${jobId}`);
} catch (error) {
logger.error(`Error adding fluent ingestion job:`);
logger.error(error);
throw error;
}
}
public static async getQueueSize(): Promise<number> {
return Queue.getQueueSize(QueueName.FluentIngest);
}
public static async getQueueStats(): Promise<{
waiting: number;
active: number;
completed: number;
failed: number;
delayed: number;
total: number;
}> {
return Queue.getQueueStats(QueueName.FluentIngest);
}
public static getFailedJobs(options?: {
start?: number;
end?: number;
}): Promise<
Array<{
id: string;
name: string;
data: JSONObject;
failedReason: string;
processedOn: Date | null;
finishedOn: Date | null;
attemptsMade: number;
}>
> {
return Queue.getFailedJobs(QueueName.FluentIngest, options);
}
}


@@ -55,6 +55,7 @@
"@types/react-highlight": "^0.12.8",
"@types/react-syntax-highlighter": "^15.5.13",
"@types/uuid": "^8.3.4",
"@types/web-push": "^3.6.4",
"acme-client": "^5.3.0",
"airtable": "^0.12.2",
"axios": "^1.7.2",
@@ -74,7 +75,6 @@
"json5": "^2.2.3",
"jsonwebtoken": "^9.0.0",
"jwt-decode": "^4.0.0",
"lodash": "^4.17.21",
"marked": "^12.0.2",
"moment": "^2.30.1",
"moment-timezone": "^0.5.45",
@@ -118,6 +118,7 @@
"universal-cookie": "^7.2.1",
"use-async-effect": "^2.2.6",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"zod": "^3.25.30"
},
"devDependencies": {
@@ -132,7 +133,6 @@
"@types/jest": "^28.1.4",
"@types/json2csv": "^5.0.3",
"@types/jsonwebtoken": "^8.5.9",
"@types/lodash": "^4.14.202",
"@types/node": "^17.0.45",
"@types/node-cron": "^3.0.7",
"@types/nodemailer": "^6.4.7",


@@ -1,62 +0,0 @@
private
public
.git
node_modules
# See https://help.github.com/ignore-files/ for more about ignoring files.
# dependencies
/node_modules
node_modules
.idea
# testing
/coverage
# production
/build
# misc
.DS_Store
env.js
npm-debug.log*
yarn-debug.log*
yarn-error.log*
yarn.lock
Untitled-1
*.local.sh
*.local.yaml
run
stop
nohup.out*
encrypted-credentials.tar
encrypted-credentials/
_README.md
# Important Add production values to gitignore.
values-saas-production.yaml
kubernetes/values-saas-production.yaml
/private
/tls_cert.pem
/tls_key.pem
/keys
temp_readme.md
tests/coverage
settings.json
GoSDK/tester/
# tls
tls_cert.pem
tls_key.pem

Haraka/.gitignore vendored

@@ -1,6 +0,0 @@
private
public
# tls
tls_cert.pem
tls_key.pem


@@ -1,78 +0,0 @@
FROM public.ecr.aws/docker/library/node:23.8-alpine3.21
RUN mkdir /tmp/npm && chmod 2777 /tmp/npm && chown 1000:1000 /tmp/npm && npm config set cache /tmp/npm --global
RUN npm config set fetch-retries 5
RUN npm config set fetch-retry-mintimeout 100000
RUN npm config set fetch-retry-maxtimeout 600000
ENV HARAKA_VERSION=3.0.5
ARG GIT_SHA
ARG APP_VERSION
ENV GIT_SHA=${GIT_SHA}
ENV APP_VERSION=${APP_VERSION}
# IF APP_VERSION is not set, set it to 1.0.0
RUN if [ -z "$APP_VERSION" ]; then export APP_VERSION=1.0.0; fi
RUN apk add bash
# install dependencies
RUN apk upgrade --update && \
apk add --no-cache -t .fetch-deps \
autoconf \
g++ \
bash \
curl \
gcc \
make \
python3 && \
addgroup -g 88 -S smtp && \
adduser -u 88 -D -S -G smtp -h /harakaapp smtp && \
# Install haraka and toobusy package
npm install -g --unsafe-perm Haraka@$HARAKA_VERSION toobusy-js && \
# # Cleaning up
apk del --purge -r .fetch-deps && \
apk add --no-cache tzdata openssl execline ca-certificates && \
rm -rf /var/cache/apk/* /tmp/* ~/.pearrc
RUN haraka -i /harakaapp
COPY ./Haraka/config/plugins /harakaapp/config/plugins
COPY ./Haraka/config/smtp.ini /harakaapp/config/smtp.ini
COPY ./Haraka/config/tls.ini /harakaapp/config/tls.ini
COPY ./Haraka/config/auth_flat_file.ini /harakaapp/config/auth_flat_file.ini
COPY ./Haraka/config/dkim_sign.ini /harakaapp/config/dkim_sign.ini
COPY ./Haraka/config/host_list /harakaapp/config/host_list
COPY ./Haraka/config/loglevel /harakaapp/config/loglevel
# create plugin directory
RUN mkdir -p /harakaapp/plugins
COPY ./Haraka/plugins/email_parser.js /harakaapp/plugins/email_parser.js
COPY ./Haraka/init.sh /init.sh
RUN chmod 755 /init.sh
# Copy package.json and package-lock.json
COPY ./Haraka/package.json /harakaapp/package.json
COPY ./Haraka/package-lock.json /harakaapp/package-lock.json
# Install dependencies
RUN cd /harakaapp && npm install
# Set permissions so logs and cache are writable in case the container runs as non-root
RUN chown -R 1000:1000 "/tmp/npm" && chmod -R 2777 "/tmp/npm"
EXPOSE 2525
EXPOSE 110
EXPOSE 25
EXPOSE 587
EXPOSE 465
EXPOSE 143
EXPOSE 993
EXPOSE 995
CMD ["/init.sh"]


@@ -1,82 +0,0 @@
# Build the Haraka container
```
sudo docker build . -t haraka:latest
```
# To update a stack
```
cd haraka
sudo docker stack deploy haraka -c docker-compose.yml
```
# To restart a service
```
sudo docker service update haraka_haraka --force
```
# Send Test Outbound Email
```
swaks --to hello@nawazdhandala.com --from hello@genosyn.com --server 192.168.1.100 --port 2525 --auth-user homelab --auth-password password
```
# Set a domain to send email
Generate a DKIM key:
```
cd dkim
bash generate.sh
cd ..
```
`public` and `private` key files should now exist inside the `dkim` folder. Add the corresponding DNS records and environment variables to the docker compose files; the values to use are printed to the console.
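As a quick sanity check (a sketch assuming `openssl` is on the PATH, run in a scratch directory so it does not touch your real keys), you can re-derive the public key from the private key and confirm the pair matches:

```shell
# Generate a keypair the same way generate.sh does, in a throwaway dir.
cd "$(mktemp -d)"
openssl genrsa -out private 4096 2>/dev/null
openssl rsa -in private -out public -pubout 2>/dev/null
# Re-derive the public key from the private key; diff is silent on a match.
openssl rsa -in private -pubout 2>/dev/null | diff - public && echo "keypair OK"
```

To check the real keys, run the last line from inside the `dkim` folder instead.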
Add an A record:
```
mailer.company.com <IP address of the Haraka server>
```
Set SPF record:
```
# Any MX or A servers can send emails.
v=spf1 mx a -all
```
Example:
```
v=spf1 mx a ip4:37.26.74.9 include:_spf.google.com include:mail.genosyn.com ~all
```
Set DMARC record:
```
Type: TXT
Key: _dmarc
Value: "v=DMARC1; p=reject; adkim=s; aspf=r; rua=mailto:dmarc-feedback@<DOMAIN>; ruf=mailto:dmarc-feedback@<DOMAIN>; pct=100"
```
Example:
```
_dmarc TXT "v=DMARC1; p=reject; adkim=s; aspf=r; rua=mailto:dmarc-feedback@genosyn.com; ruf=mailto:dmarc-feedback@genosyn.com; pct=100"
```
# Test, Test, Test.
Use https://www.mail-tester.com/ to test emails.


@@ -1,4 +0,0 @@
[core]
methods=PLAIN
[users]


@@ -1,3 +0,0 @@
disabled = false
selector = haraka
headers_to_sign = From, Sender, Reply-To, Subject, Date, Message-ID, To, Cc, MIME-Version


@@ -1 +0,0 @@
DEBUG


@@ -1,5 +0,0 @@
tls
auth/flat_file
dkim_sign
# email_parser


@@ -1,64 +0,0 @@
; address to listen on (default: all IPv6 and IPv4 addresses, port 25)
; use "[::0]:25" to listen on IPv6 and IPv4 (not all OSes)
listen=[::0]:2525
; Note you can listen on multiple IPs/ports using commas:
;listen=127.0.0.1:2529,127.0.0.2:2529,127.0.0.3:2530
; public IP address (default: none)
; If your machine is behind a NAT, some plugins (SPF, GeoIP) gain features
; if they know the server's public IP. If 'stun' is installed, Haraka will
; try to figure it out. If that doesn't work, set it here.
;public_ip=N.N.N.N
; Time in seconds to let sockets be idle with no activity
;inactivity_timeout=300
; Drop privileges to this user/group
;user=smtp
;group=smtp
; Don't stop Haraka if plugins fail to compile
;ignore_bad_plugins=0
; Run using cluster to fork multiple backend processes
;nodes=cpus
; Daemonize
;daemonize=true
;daemon_log_file=/var/log/haraka.log
;daemon_pid_file=/var/run/haraka.pid
; Spooling
; Save memory by spooling large messages to disk
;spool_dir=/var/spool/haraka
; Specify -1 to never spool to disk
; Specify 0 to always spool to disk
; Otherwise specify a size in bytes, once reached the
; message will be spooled to disk to save memory.
;spool_after=
; Force Shutdown Timeout
; - Haraka tries to close down gracefully, but if everything hasn't shut
; down after this time it will hard close. 30s is usually long enough to
; wait for outbound connections to finish.
;force_shutdown_timeout=30
; SMTP service extensions: https://tools.ietf.org/html/rfc1869
; strict_rfc1869 = false
; Advertise support for SMTPUTF8 (RFC-6531)
;smtputf8=true
[headers]
;add_received=true
;clean_auth_results=true
; replace header_hide_version
;show_version=true
; replace max_header_lines
max_lines=1000
; replace max_received_count
max_received=100


@@ -1 +0,0 @@
no_starttls_ports[]=2525


@@ -1,12 +0,0 @@
#!/bin/sh
cd keys
openssl genrsa -out private 4096
chmod 0400 private
openssl rsa -in private -out public -pubout
cd ..
bash show-info.sh


@@ -1 +0,0 @@
This is where all the keys are stored.

Some files were not shown because too many files have changed in this diff.