
# Audit Log: Backend Implementation

The backend implementation of the audit log is handled by the `AuditService`, located in `packages/backend/src/services/AuditService.ts`. This service encapsulates all the logic for creating, retrieving, and verifying audit log entries.

## Hashing and Verification Logic

The core of the audit log's immutability lies in its hashing and verification logic.

### Hash Calculation

The `calculateHash` method generates a SHA-256 hash of a log entry. To ensure the hash is deterministic, it performs the following steps (sketched in the example after this list):

1. Canonical Object Creation: It constructs a new object with a fixed property order, ensuring that the object's structure is always the same.
2. Timestamp Normalization: It converts the timestamp to milliseconds since the epoch (`getTime()`) to avoid any precision-related discrepancies between the application and the database.
3. Canonical Stringification: It uses a custom `canonicalStringify` function to create a JSON string representation of the object. This function sorts the object keys, ensuring that the output is always the same, regardless of the in-memory property order.
4. Hash Generation: It computes a SHA-256 hash of the canonical string.
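
A minimal sketch of this logic in TypeScript, using Node's built-in `crypto` module. The `AuditLogEntry` shape and its field names are illustrative assumptions, not the actual schema:

```ts
import { createHash } from 'node:crypto';

// Illustrative entry shape; the real schema lives in the backend's database models.
interface AuditLogEntry {
	id: string;
	action: string;
	actorId: string | null;
	details: Record<string, unknown>;
	timestamp: Date;
	previousHash: string | null;
}

// Recursively stringify with sorted keys so the output does not depend
// on in-memory property order.
function canonicalStringify(value: unknown): string {
	if (value === null || typeof value !== 'object') {
		return JSON.stringify(value);
	}
	if (Array.isArray(value)) {
		return `[${value.map(canonicalStringify).join(',')}]`;
	}
	const obj = value as Record<string, unknown>;
	const entries = Object.keys(obj)
		.sort()
		.map((key) => `${JSON.stringify(key)}:${canonicalStringify(obj[key])}`);
	return `{${entries.join(',')}}`;
}

function calculateHash(entry: AuditLogEntry): string {
	// Canonical object: fixed property set, timestamp normalized to epoch milliseconds.
	const canonical = {
		id: entry.id,
		action: entry.action,
		actorId: entry.actorId,
		details: entry.details,
		timestamp: entry.timestamp.getTime(),
		previousHash: entry.previousHash,
	};
	return createHash('sha256').update(canonicalStringify(canonical)).digest('hex');
}
```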

### Verification Process

The `verifyAuditLog` method is designed to scale to millions of log entries. It processes the logs in manageable chunks (e.g., 1,000 at a time) rather than loading the entire table into memory.

The verification process involves the following steps, sketched in the code example after this list:

1. Iterative Processing: It fetches the logs in batches within a `while` loop.
2. Chain Verification: For each log entry, it compares the `previousHash` with the `currentHash` of the preceding log. If they do not match, the chain is broken, and the verification fails.
3. Hash Recalculation: It recalculates the hash of the current log entry using the same `calculateHash` method used during creation.
4. Integrity Check: It compares the recalculated hash with the `currentHash` stored in the database. If they do not match, the log entry has been tampered with, and the verification fails.
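
A sketch of the chunked verification loop, continuing the illustrative types from the previous example; `fetchLogsBatch` is a hypothetical data-access helper standing in for the real query:

```ts
const BATCH_SIZE = 1000;

interface VerificationResult {
	valid: boolean;
	failedLogId?: string;
	reason?: 'broken-chain' | 'hash-mismatch';
}

// Hypothetical helper: returns up to `limit` entries in insertion order,
// starting at `offset`, each carrying its stored currentHash.
declare function fetchLogsBatch(
	offset: number,
	limit: number
): Promise<Array<AuditLogEntry & { currentHash: string }>>;

async function verifyAuditLog(): Promise<VerificationResult> {
	let offset = 0;
	let previousHash: string | null = null; // the first entry has no predecessor

	while (true) {
		const batch = await fetchLogsBatch(offset, BATCH_SIZE);
		if (batch.length === 0) break; // all entries processed

		for (const log of batch) {
			// Chain verification: each entry must reference its predecessor's hash.
			if (log.previousHash !== previousHash) {
				return { valid: false, failedLogId: log.id, reason: 'broken-chain' };
			}
			// Integrity check: the stored hash must match a fresh recalculation.
			if (calculateHash(log) !== log.currentHash) {
				return { valid: false, failedLogId: log.id, reason: 'hash-mismatch' };
			}
			previousHash = log.currentHash;
		}
		offset += batch.length;
	}
	return { valid: true };
}
```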

## Service Integration

The `AuditService` is integrated into the application through the `AuditLogModule` (`packages/enterprise/src/modules/audit-log/audit-log.module.ts`), which registers the API routes for the audit log feature. The service's `createAuditLog` method is called from various other services throughout the application to record significant events.
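
For illustration only, a call site in another service might look like the following; the parameter object and action naming are assumptions for this sketch, not the documented API:

```ts
// Hypothetical call site: recording a significant event from another service.
async function onIngestionSourceCreated(
	auditService: {
		createAuditLog(entry: Omit<AuditLogEntry, 'id' | 'previousHash'>): Promise<void>;
	},
	user: { id: string },
	source: { id: string; provider: string }
): Promise<void> {
	await auditService.createAuditLog({
		action: 'ingestion.source.created', // assumed action naming scheme
		actorId: user.id,
		details: { sourceId: source.id, provider: source.provider },
		timestamp: new Date(),
	});
}
```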