OpenArchiver/docs/api/jobs.md
Wei S. 6e1ebbbfd7 v0.4 init: File encryption, integrity report, deletion protection, job monitoring (#187)
* open-core setup, adding enterprise package

* enterprise: Audit log API, UI

* Audit-log docs

* feat: Integrity report, allowing users to verify the integrity of archived emails and their attachments.

- When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
- The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
- Add docs of Integrity report

* Update Docker-compose.yml to use bind mount for Open Archiver data.
Fix API rate-limiter warning about trust proxy

* File encryption support

* Scope attachment deduplication to ingestion source

Previously, attachment deduplication was handled globally by enforcing a unique constraint on the content hash (contentHashSha256) in the `attachments` table. This caused an issue where an attachment from one ingestion source would be incorrectly linked if the same attachment was processed by a different source.

This commit refactors the deduplication logic to be scoped on a per-ingestion-source basis.

Changes:
-   **Schema:** The `attachments` table schema has been updated to include a nullable `ingestionSourceId` column. A composite unique index has been added on `(ingestionSourceId, contentHashSha256)` to enforce per-source uniqueness. The `ingestionSourceId` is nullable to ensure backward compatibility with existing databases.
-   **Ingestion Logic:** The `IngestionService` has been updated to provide the `ingestionSourceId` when inserting attachment records. The `onConflictDoUpdate` clause now targets the new composite key, ensuring that attachments are only considered duplicates if they have the same hash and originate from the same ingestion source.

* Add option to disable deletions

This commit introduces a new feature that allows admins to disable the deletion of emails and ingestion sources for the entire instance. This is a critical feature for compliance and data retention, as it prevents accidental or unauthorized deletions.

Changes:
-   **Configuration**: Added an `ENABLE_DELETION` environment variable. If this variable is not set to `true`, all deletion operations will be disabled.
-   **Deletion Guard**: A centralized `checkDeletionEnabled` guard has been implemented to enforce this setting at both the controller and service levels, ensuring a robust and secure implementation.
-   **Documentation**: The installation guide has been updated to include the new `ENABLE_DELETION` environment variable and its behavior.
-   **Refactor**: The `IngestionService`'s `create` method was refactored to remove unnecessary calls to the `delete` method, simplifying the code and improving its robustness.

* Adding position for menu items

* feat(docker): Fix CORS errors

This commit fixes CORS errors when running the app in Docker by introducing the `APP_URL` environment variable. The backend's CORS policy is configured to only allow requests whose origin matches `APP_URL`.

Key changes include:
- New `APP_URL` and `ORIGIN` environment variables have been added to properly configure CORS and the SvelteKit adapter, making the application's public URL easily configurable.
- Dockerfiles are updated to copy the entrypoint script, Drizzle config, and migration files into the final image.
- Documentation and example files (`.env.example`, `docker-compose.yml`) have been updated to reflect these changes.

* feat(attachments): De-duplicate attachment content by content hash

This commit refactors attachment handling to allow multiple emails within the same ingestion source to reference attachments with identical content (same hash).

Changes:
- The unique index on the `attachments` table has been changed to a non-unique index to permit duplicate hash/source pairs.
- The ingestion logic is updated to first check for an existing attachment with the same hash and source. If found, it reuses the existing record; otherwise, it creates a new one. This maintains storage de-duplication.
- The email deletion logic is improved to be more robust. It now correctly removes the email-attachment link before checking if the attachment record and its corresponding file can be safely deleted.

* Not filtering out the Trash folder

* feat(backend): Add BullMQ dashboard for job monitoring

This commit introduces a web-based UI for monitoring and managing background jobs using BullMQ.

Key changes:
- A new `/api/v1/jobs` endpoint is created, serving the Bull Board dashboard. Access is restricted to authenticated administrators.
- All BullMQ queue definitions (`ingestion`, `indexing`, `sync-scheduler`) have been centralized into a new `packages/backend/src/jobs/queues.ts` file.
- Workers and services now import queue instances from this central file, improving code organization and removing redundant queue instantiations.

* Add `ALL_INCLUSIVE_ARCHIVE` environment variable to disable junk filtering

* Using BSL license

* frontend: Responsive design for menu bar, pagination

* License service/module

* Remove demoMode logic

* Formatting code

* Remove enterprise packages

* Fix package.json in packages

* Search page responsive fix

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-24 17:11:05 +02:00


Jobs API

The Jobs API provides endpoints for monitoring the job queues and the jobs within them.

Overview

Open Archiver uses a job queue system to handle asynchronous tasks such as email ingestion and indexing. The system is built on BullMQ with Redis as its backing store and follows a producer-consumer pattern: services enqueue jobs, and background workers consume and process them.
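
The producer-consumer pattern can be illustrated with a minimal in-memory sketch (illustrative only; in Open Archiver, BullMQ plays this role with Redis providing durable, distributed storage):

```typescript
// Illustrative in-memory producer-consumer queue.
// BullMQ replaces this with a Redis-backed, durable equivalent.
type Job<T> = { id: number; data: T };

class SimpleQueue<T> {
	private jobs: Job<T>[] = [];
	private nextId = 1;

	// Producer side: enqueue a job and return it.
	add(data: T): Job<T> {
		const job = { id: this.nextId++, data };
		this.jobs.push(job);
		return job;
	}

	// Consumer side: drain all waiting jobs through a handler.
	process(handler: (job: Job<T>) => void): number {
		let processed = 0;
		while (this.jobs.length > 0) {
			handler(this.jobs.shift()!);
			processed++;
		}
		return processed;
	}
}

const ingestion = new SimpleQueue<{ ingestionSourceId: string }>();
ingestion.add({ ingestionSourceId: "source-1" });
ingestion.add({ ingestionSourceId: "source-2" });
const done = ingestion.process((job) => {
	// A real worker would import and index emails here.
});
```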

Job Statuses

Jobs can have one of the following statuses:

  • active: The job is currently being processed.
  • completed: The job has been completed successfully.
  • failed: The job has failed after all retry attempts.
  • delayed: The job is delayed and will be processed at a later time.
  • waiting: The job is waiting to be processed.
  • paused: The job is paused and will not be processed until it is resumed.
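
The status values above form a closed set. A minimal TypeScript sketch (illustrative, not part of the Open Archiver codebase) models them and distinguishes terminal states from states that may still change:

```typescript
// The six job statuses exposed by the Jobs API.
type JobStatus = "active" | "completed" | "failed" | "delayed" | "waiting" | "paused";

// Terminal states: the job will not change state again on its own.
// (A failed job has already exhausted all retry attempts.)
const TERMINAL_STATES: ReadonlySet<JobStatus> = new Set(["completed", "failed"]);

function isTerminal(status: JobStatus): boolean {
	return TERMINAL_STATES.has(status);
}
```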

Errors

When a job fails, the failedReason and stacktrace fields will contain information about the error. The error field will also be populated with the failedReason for easier access.
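
A client can read these fields with a small helper like the following (a sketch; the interface covers only the error-related subset of the job object):

```typescript
// Error-related fields on a failed job, per the Jobs API response shape.
interface FailedJobInfo {
	failedReason?: string;
	stacktrace?: string[];
	error?: string;
}

// Prefer the convenience `error` field, fall back to `failedReason`.
function errorMessage(job: FailedJobInfo): string {
	return job.error ?? job.failedReason ?? "Unknown error";
}
```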

Job Preservation

Jobs are preserved for a limited time after they are completed or failed, so the job counts and job lists returned by the API reflect only the retained jobs, not the queue's full history.

  • Completed jobs: The last 1000 completed jobs are preserved.
  • Failed jobs: The last 5000 failed jobs are preserved.
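
These retention limits correspond to BullMQ's removeOnComplete / removeOnFail job options. A queue configured this way might look as follows (a sketch only; the actual option values and connection settings in Open Archiver may differ):

```typescript
import { Queue } from "bullmq";

// Sketch: keep the last 1000 completed and 5000 failed jobs.
// Connection details are illustrative.
const ingestionQueue = new Queue("ingestion", {
	connection: { host: "localhost", port: 6379 },
	defaultJobOptions: {
		removeOnComplete: { count: 1000 },
		removeOnFail: { count: 5000 },
	},
});
```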

Get All Queues

  • Endpoint: GET /v1/jobs/queues
  • Description: Retrieves a list of all job queues and their job counts.
  • Permissions: manage:all
  • Responses:
    • 200 OK: Returns a list of queue overviews.
    • 401 Unauthorized: If the user is not authenticated.
    • 403 Forbidden: If the user does not have the required permissions.

Response Body

{
	"queues": [
		{
			"name": "ingestion",
			"counts": {
				"active": 0,
				"completed": 56,
				"failed": 4,
				"delayed": 3,
				"waiting": 0,
				"paused": 0
			}
		},
		{
			"name": "indexing",
			"counts": {
				"active": 0,
				"completed": 0,
				"failed": 0,
				"delayed": 0,
				"waiting": 0,
				"paused": 0
			}
		}
	]
}
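
The counts in this response can be aggregated client-side, for example to show an instance-wide total for one status (a sketch against the sample payload above):

```typescript
// Shapes matching the GET /v1/jobs/queues response.
interface QueueCounts {
	active: number;
	completed: number;
	failed: number;
	delayed: number;
	waiting: number;
	paused: number;
}
interface QueueOverview {
	name: string;
	counts: QueueCounts;
}

// Sum one status across every queue in the response.
function totalByStatus(queues: QueueOverview[], status: keyof QueueCounts): number {
	return queues.reduce((sum, q) => sum + q.counts[status], 0);
}

// Sample data from the response body above.
const response = {
	queues: [
		{ name: "ingestion", counts: { active: 0, completed: 56, failed: 4, delayed: 3, waiting: 0, paused: 0 } },
		{ name: "indexing", counts: { active: 0, completed: 0, failed: 0, delayed: 0, waiting: 0, paused: 0 } },
	],
};

const failedTotal = totalByStatus(response.queues, "failed"); // 4
```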

Get Queue Jobs

  • Endpoint: GET /v1/jobs/queues/:queueName
  • Description: Retrieves a list of jobs within a specific queue, with pagination and filtering by status.
  • Permissions: manage:all
  • URL Parameters:
    • queueName (string, required): The name of the queue to retrieve jobs from.
  • Query Parameters:
    • status (string, optional): The status of the jobs to retrieve. Can be one of active, completed, failed, delayed, waiting, paused. Defaults to failed.
    • page (number, optional): The page number to retrieve. Defaults to 1.
    • limit (number, optional): The number of jobs to retrieve per page. Defaults to 10.
  • Responses:
    • 200 OK: Returns a detailed view of the queue, including a paginated list of jobs.
    • 401 Unauthorized: If the user is not authenticated.
    • 403 Forbidden: If the user does not have the required permissions.
    • 404 Not Found: If the specified queue does not exist.

Response Body

{
	"name": "ingestion",
	"counts": {
		"active": 0,
		"completed": 56,
		"failed": 4,
		"delayed": 3,
		"waiting": 0,
		"paused": 0
	},
	"jobs": [
		{
			"id": "1",
			"name": "initial-import",
			"data": {
				"ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8"
			},
			"state": "failed",
			"failedReason": "Error: Connection timed out",
			"timestamp": 1678886400000,
			"processedOn": 1678886401000,
			"finishedOn": 1678886402000,
			"attemptsMade": 5,
			"stacktrace": ["..."],
			"returnValue": null,
			"ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8",
			"error": "Error: Connection timed out"
		}
	],
	"pagination": {
		"currentPage": 1,
		"totalPages": 1,
		"totalJobs": 4,
		"limit": 10
	}
}
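
Putting the query parameters together: a small client-side helper that builds the request URL with the documented defaults (a sketch; the base URL is an assumption, and how the API computes totalPages is inferred from the sample response):

```typescript
const VALID_STATUSES = ["active", "completed", "failed", "delayed", "waiting", "paused"] as const;
type Status = (typeof VALID_STATUSES)[number];

// Build the URL for GET /v1/jobs/queues/:queueName.
// Defaults mirror the documented ones: status=failed, page=1, limit=10.
function queueJobsUrl(
	base: string,
	queueName: string,
	opts: { status?: Status; page?: number; limit?: number } = {}
): string {
	const { status = "failed", page = 1, limit = 10 } = opts;
	const params = new URLSearchParams({
		status,
		page: String(page),
		limit: String(limit),
	});
	return `${base}/v1/jobs/queues/${encodeURIComponent(queueName)}?${params}`;
}

// totalPages as it appears in the pagination object (inferred: ceil of totalJobs / limit).
function totalPages(totalJobs: number, limit: number): number {
	return Math.ceil(totalJobs / limit);
}

// The base URL here is hypothetical.
const url = queueJobsUrl("https://archiver.example.com/api", "ingestion", { page: 2 });
```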