Compare commits

...

166 Commits

Author SHA1 Message Date
wayneshn
e1a673f1fb Job tab dark theme fix, demo mode 2026-02-27 12:56:59 +01:00
Wei S.
3434e8d6ef v0.4.2-fix: improve ingestion error handling and error messages (#312)
* fix(backend): improve ingestion error handling and error messages

This commit introduces a "force delete" mechanism for Ingestion Sources and improves error messages for file-based connectors.

Changes:
- Update `IngestionService.delete` to accept a `force` flag, bypassing the `checkDeletionEnabled` check.
- Use `force` deletion when rolling back failed ingestion source creations (e.g., decryption errors or connection failures) to ensure cleanup even if deletion is globally disabled.
- Enhance error messages in `EMLConnector`, `MboxConnector`, and `PSTConnector` to distinguish between missing local files and failed uploads, providing more specific feedback to the user.
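A minimal TypeScript sketch of the force-delete path described above; `delete` and `checkDeletionEnabled` are named in this PR, while the helper and error text are illustrative:

```ts
class IngestionService {
    /**
     * Deletes an ingestion source. When `force` is true (used only when rolling back
     * a failed source creation), the instance-wide deletion guard is bypassed.
     */
    public async delete(sourceId: string, force = false): Promise<void> {
        if (!force) {
            this.checkDeletionEnabled(); // throws unless ENABLE_DELETION === 'true'
        }
        await this.removeSourceRecords(sourceId);
    }

    private checkDeletionEnabled(): void {
        if (process.env.ENABLE_DELETION !== 'true') {
            throw new Error('Deletion is disabled on this instance.');
        }
    }

    private async removeSourceRecords(sourceId: string): Promise<void> {
        // Illustrative stub: delete the source row plus its emails and attachments.
        console.log(`removing records for source ${sourceId}`);
    }
}
```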

* feat(ingestion): optimize duplicate handling and fix race conditions in Google Workspace

- Implement fast duplicate check (by Message-ID) to skip full content download for existing emails in Google Workspace and IMAP connectors.
- Fix race condition in Google Workspace initial import by capturing `historyId` before listing messages, ensuring no data loss for incoming mail during import.
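A hedged sketch of both optimizations; the `MailApi` interface stands in for the Google Workspace/IMAP connector internals and is not the actual code:

```ts
interface MailApi {
    getCurrentHistoryId(): Promise<string>;
    listMessageIds(): AsyncIterable<string>;
    getHeaders(id: string): Promise<{ messageId: string }>; // cheap metadata-only fetch
    downloadAndArchive(id: string): Promise<void>;
}

async function initialImport(api: MailApi, alreadyArchived: Set<string>): Promise<string> {
    // Capture the history cursor BEFORE listing, so mail arriving during the
    // import is picked up by the next incremental sync instead of being lost.
    const historyId = await api.getCurrentHistoryId();

    for await (const id of api.listMessageIds()) {
        const { messageId } = await api.getHeaders(id);
        if (alreadyArchived.has(messageId)) continue; // skip full content download for known emails
        await api.downloadAndArchive(id);
    }
    return historyId; // persisted as the starting point for incremental syncs
}
```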
2026-02-24 18:10:32 +01:00
Wei S.
7dac3b2bfd V0.4.2 (#310)
* fix(api): correct API key generation and proxy handling

This commit resolves an issue where generating a new API key would fail. The root cause was improper handling of POST request bodies in the frontend proxy server.

- Refactored `ApiKeyController` methods to use arrow functions to ensure correct `this` binding.
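For illustration, a minimal sketch of the arrow-function fix; the service shape and route wiring below are assumptions:

```ts
import { Router, type Request, type Response } from 'express';

class ApiKeyController {
    constructor(private apiKeyService: { create(userId: string): Promise<string> }) {}

    // As a class-field arrow function, `this` stays bound even when the method is
    // passed by reference to the router below; a plain method would lose it.
    public createKey = async (req: Request, res: Response): Promise<void> => {
        const key = await this.apiKeyService.create(req.body.userId);
        res.status(201).json({ key });
    };
}

// The handler can now be passed directly, without `.bind(controller)`.
const controller = new ApiKeyController({ create: async () => 'generated-key' });
export const apiKeyRouter = Router().post('/api-keys', controller.createKey);
```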

* User profile/account page, change password, API

* docs(api): update ingestion source provider values

Update the `CreateIngestionSourceDto` documentation in `ingestion.md` to reflect the current set of supported providers.

* updating tag

* feat: add REDIS_USER env variable (#172)

* feat: add REDIS_USER env variable

fixes #171

* add proper type for bullmq config
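A sketch of how `REDIS_USER` might feed a typed BullMQ connection; the variable names match `.env.example`, everything else is illustrative:

```ts
import { Queue, type ConnectionOptions } from 'bullmq';

const connection: ConnectionOptions = {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: Number(process.env.REDIS_PORT ?? 6379),
    username: process.env.REDIS_USER || undefined, // only needed for non-default Redis/Valkey users
    password: process.env.REDIS_PASSWORD || undefined,
    tls: process.env.REDIS_TLS_ENABLED === 'true' ? {} : undefined,
};

export const ingestionQueue = new Queue('ingestion', { connection });
```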

* Bulgarian UI language strings added (backend+frontend) (#194)

* Bulgarian UI Support added

* BG language UI support - Create translation.json

* update redis config logic

* Update Bulgarian language setting, register language

* Allow specifying local file path for mbox/eml/pst (#214)

* Add agents AI doc

* Allow local file path for Mbox file ingestion


---------

Co-authored-by: Wei S. <5291640+wayneshn@users.noreply.github.com>

* feat(ingestion): add local file path support and optimize EML processing

- Frontend: Updated IngestionSourceForm to allow toggling between "Upload File" and "Local File Path" for PST, EML, and Mbox providers.
- Frontend: Added logic to clear irrelevant form data when switching import methods.
- Frontend: Added English translations for new form fields.
- Backend: Refactored EMLConnector to stream ZIP entries using yauzl instead of extracting the full archive to disk, significantly improving efficiency for large archives (see the sketch after this list).
- Docs: Updated API documentation and User Guides (PST, EML, Mbox) to clarify "Local File Path" usage, specifically within Docker environments.
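A hedged sketch of the yauzl-based streaming mentioned above; only the use of yauzl with lazy entries comes from the change, the surrounding callback shape is an assumption:

```ts
import yauzl from 'yauzl';

function streamEmlEntries(
    zipPath: string,
    onEml: (name: string, content: Buffer) => Promise<void>
): Promise<void> {
    return new Promise((resolve, reject) => {
        // lazyEntries: read one entry at a time instead of extracting the whole archive.
        yauzl.open(zipPath, { lazyEntries: true }, (err, zipfile) => {
            if (err || !zipfile) return reject(err);
            zipfile.readEntry();
            zipfile.on('entry', (entry) => {
                if (!entry.fileName.toLowerCase().endsWith('.eml')) {
                    return zipfile.readEntry(); // skip directories and non-EML files
                }
                zipfile.openReadStream(entry, (streamErr, stream) => {
                    if (streamErr || !stream) return reject(streamErr);
                    const chunks: Buffer[] = [];
                    stream.on('data', (chunk: Buffer) => chunks.push(chunk));
                    stream.on('end', () => {
                        // Process the current entry before pulling the next one.
                        onEml(entry.fileName, Buffer.concat(chunks)).then(() => zipfile.readEntry(), reject);
                    });
                });
            });
            zipfile.on('end', () => resolve());
        });
    });
}
```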

* docs: add meilisearch dumpless upgrade guide and snapshot config

Update `docker-compose.yml` to include the `MEILI_SCHEDULE_SNAPSHOT` environment variable, defaulting to 86400 seconds (24 hours), enabling periodic data snapshots for easier recovery. Shout out to @morph027 for the inspiration!

Additionally, update the Meilisearch upgrade documentation to include an experimental "dumpless" upgrade guide while marking the previous method as the standard recommended process.

* build(coolify): enable daily snapshots for meilisearch

Configure the Meilisearch service in `open-archiver.yml` to create snapshots every 86400 seconds (24 hours) by setting the `MEILI_SCHEDULE_SNAPSHOT` environment variable.

---------

Co-authored-by: Antonia Schwennesen <53372671+zophiana@users.noreply.github.com>
Co-authored-by: IT Creativity + Art Team <admin@it-playground.net>
Co-authored-by: Jan Berdajs <mrbrdo@gmail.com>
2026-02-23 21:25:44 +01:00
albanobattistella
cf121989ae Update Italian language (#278) 2026-01-18 15:28:20 +01:00
Wei S.
2df5c9240d V0.4.1 dev (#276)
* fix(api): correct API key generation and proxy handling

This commit resolves an issue where generating a new API key would fail. The root cause was improper handling of POST request bodies in the frontend proxy server.

- Refactored `ApiKeyController` methods to use arrow functions to ensure correct `this` binding.

* User profile/account page, change password, API

* docs(api): update ingestion source provider values

Update the `CreateIngestionSourceDto` documentation in `ingestion.md` to reflect the current set of supported providers.

* updating tag
2026-01-17 13:21:01 +01:00
Wei S.
24afd13858 V0.4.1: API key generation fix, change password, account profile (#273)
* fix(api): correct API key generation and proxy handling

This commit resolves an issue where generating a new API key would fail. The root cause was improper handling of POST request bodies in the frontend proxy server.

- Refactored `ApiKeyController` methods to use arrow functions to ensure correct `this` binding.

* User profile/account page, change password, API

* docs(api): update ingestion source provider values

Update the `CreateIngestionSourceDto` documentation in `ingestion.md` to reflect the current set of supported providers.
2026-01-17 02:46:27 +02:00
Wei S.
c2006dfa94 V0.4 fix 2 (#210)
* formatting code

* Remove uninstalled packages

* fix(imap): Improve IMAP connection stability and error handling

This commit refactors the IMAP connector to enhance connection management, error handling, and overall stability during email ingestion.

The `isConnected` flag has been removed in favor of relying directly on the `client.usable` property from the `imapflow` library. This simplifies the connection logic and avoids state synchronization issues.

The `connect` method now re-creates the client instance if it's not usable, ensuring a fresh connection after errors or disconnects. The retry mechanism (`withRetry`) has been updated to no longer manually reset the connection state, as the `connect` method now handles this automatically on the next attempt.

Additionally, a minor bug in the `sync-cycle-finished` processor has been fixed. The logic for merging sync states from successful jobs has been simplified and correctly typed, preventing potential runtime errors when no successful jobs are present.
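A condensed sketch of the connection handling described above, assuming imapflow; apart from `connect`, `withRetry`, and `usable`, the names are illustrative:

```ts
import { ImapFlow } from 'imapflow';

class ImapConnection {
    private client: ImapFlow | null = null;

    constructor(private options: ConstructorParameters<typeof ImapFlow>[0]) {}

    async connect(): Promise<ImapFlow> {
        if (this.client?.usable) {
            return this.client; // existing connection is still healthy
        }
        // Not usable (never connected, errored, or disconnected): create a fresh client.
        this.client = new ImapFlow(this.options);
        await this.client.connect();
        return this.client;
    }

    async withRetry<T>(fn: (client: ImapFlow) => Promise<T>, attempts = 3): Promise<T> {
        let lastError: unknown;
        for (let attempt = 0; attempt < attempts; attempt++) {
            try {
                // No manual state reset needed: connect() recreates a stale client itself.
                return await fn(await this.connect());
            } catch (err) {
                lastError = err;
            }
        }
        throw lastError;
    }
}
```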

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-29 12:59:19 +01:00
Wei S.
399059a773 V0.4 fix 2 (#207)
* formatting code

* Remove uninstalled packages

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-28 13:39:09 +01:00
Wei S.
0cff788656 formatting code (#206)
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-28 13:35:53 +01:00
Wei S.
ddb4d56107 V0.4.0 fix (#205)
* Jobs page responsive fix

* feat(ingestion): Refactor email indexing into a dedicated background job

This commit refactors the email indexing process to improve the performance and reliability of the ingestion pipeline.

Previously, email indexing was performed synchronously within the mailbox processing job. This could lead to timeouts and failed ingestion cycles if the indexing step was slow or encountered errors.

To address this, the indexing logic has been moved into a separate, dedicated background job queue (`indexingQueue`). Now, the mailbox processor simply adds a batch of emails to this queue. A separate worker then processes the indexing job asynchronously.

This decoupling makes the ingestion process more robust:
- It prevents slow indexing from blocking or failing the entire mailbox sync.
- It allows for better resource management and scalability by handling indexing in a dedicated process.
- It improves error handling, as a failed indexing job can be retried independently without affecting the main ingestion flow.
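A rough BullMQ sketch of this decoupling; the queue name follows the description, the payload shape and worker body are assumptions:

```ts
import { Queue, Worker } from 'bullmq';

const connection = {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: Number(process.env.REDIS_PORT ?? 6379),
};

export const indexingQueue = new Queue('indexing', { connection });

// Mailbox processor side: hand the batch off instead of indexing inline.
export async function enqueueForIndexing(emails: Array<Record<string, unknown>>): Promise<void> {
    await indexingQueue.add('index-email-batch', { emails }, { attempts: 3 });
}

// Dedicated worker process: a failed indexing job retries independently,
// without failing the mailbox sync that produced it.
export const indexingWorker = new Worker(
    'indexing',
    async (job) => {
        // await indexingService.indexBatch(job.data.emails); // hypothetical service call
        return job.data.emails.length;
    },
    { connection }
);
```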

Additionally, this commit includes minor documentation updates and removes a premature timeout in the PDF text extraction helper that was causing issues.

* remove uninstalled packages

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-28 13:19:56 +01:00
Wei S.
42b0f6e5f1 V0.4.0 fix (#204)
* Jobs page responsive fix

* feat(ingestion): Refactor email indexing into a dedicated background job

This commit refactors the email indexing process to improve the performance and reliability of the ingestion pipeline.

Previously, email indexing was performed synchronously within the mailbox processing job. This could lead to timeouts and failed ingestion cycles if the indexing step was slow or encountered errors.

To address this, the indexing logic has been moved into a separate, dedicated background job queue (`indexingQueue`). Now, the mailbox processor simply adds a batch of emails to this queue. A separate worker then processes the indexing job asynchronously.

This decoupling makes the ingestion process more robust:
- It prevents slow indexing from blocking or failing the entire mailbox sync.
- It allows for better resource management and scalability by handling indexing in a dedicated process.
- It improves error handling, as a failed indexing job can be retried independently without affecting the main ingestion flow.

Additionally, this commit includes minor documentation updates and removes a premature timeout in the PDF text extraction helper that was causing issues.

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-28 13:14:43 +01:00
Wei S.
6e1ebbbfd7 v0.4 init: File encryption, integrity report, deletion protection, job monitoring (#187)
* open-core setup, adding enterprise package

* enterprise: Audit log API, UI

* Audit-log docs

* feat: Integrity report, allowing users to verify the integrity of archived emails and their attachments.

- When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
- The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
- Add docs of Integrity report
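The verification side of this can be pictured with a short Node.js sketch (names assumed, not the actual implementation):

```ts
import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

// Recompute the SHA-256 of a stored .eml file (or attachment) and compare it to
// the signature recorded in the database at archive time.
export async function verifyIntegrity(storedPath: string, recordedSha256: string): Promise<boolean> {
    const content = await readFile(storedPath);
    const currentSha256 = createHash('sha256').update(content).digest('hex');
    return currentSha256 === recordedSha256; // false => altered, corrupted, or tampered with
}
```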

* Update Docker-compose.yml to use bind mount for Open Archiver data.
Fix API rate-limiter warning about trust proxy

* File encryption support

* Scope attachment deduplication to ingestion source

Previously, attachment deduplication was handled globally by enforcing a unique constraint on the content hash (contentHashSha256) in the `attachments` table. This caused an issue where an attachment from one ingestion source would be incorrectly linked if the same attachment was processed by a different source.

This commit refactors the deduplication logic to be scoped on a per-ingestion-source basis.

Changes:
-   **Schema:** The `attachments` table schema has been updated to include a nullable `ingestionSourceId` column. A composite unique index has been added on `(ingestionSourceId, contentHashSha256)` to enforce per-source uniqueness. The `ingestionSourceId` is nullable to ensure backward compatibility with existing databases.
-   **Ingestion Logic:** The `IngestionService` has been updated to provide the `ingestionSourceId` when inserting attachment records. The `onConflictDoUpdate` clause now targets the new composite key, ensuring that attachments are only considered duplicates if they have the same hash and originate from the same ingestion source.
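A Drizzle-flavoured sketch of the composite key described above; column and index names are assumed from the description:

```ts
import { pgTable, uuid, text, uniqueIndex } from 'drizzle-orm/pg-core';
import type { NodePgDatabase } from 'drizzle-orm/node-postgres';

export const attachments = pgTable(
    'attachments',
    {
        id: uuid('id').primaryKey().defaultRandom(),
        ingestionSourceId: uuid('ingestion_source_id'), // nullable for rows created before this change
        contentHashSha256: text('content_hash_sha256').notNull(),
        storagePath: text('storage_path').notNull(),
    },
    (table) => ({
        // Per-source uniqueness: the same hash may exist once per ingestion source.
        sourceHashIdx: uniqueIndex('attachments_source_hash_idx').on(
            table.ingestionSourceId,
            table.contentHashSha256
        ),
    })
);

export async function upsertAttachment(
    db: NodePgDatabase,
    row: { ingestionSourceId: string; contentHashSha256: string; storagePath: string }
): Promise<void> {
    await db
        .insert(attachments)
        .values(row)
        .onConflictDoUpdate({
            target: [attachments.ingestionSourceId, attachments.contentHashSha256],
            set: { storagePath: row.storagePath },
        });
}
```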

* Add option to disable deletions

This commit introduces a new feature that allows admins to disable the deletion of emails and ingestion sources for the entire instance. This is a critical feature for compliance and data retention, as it prevents accidental or unauthorized deletions.

Changes:
-   **Configuration**: Added an `ENABLE_DELETION` environment variable. If this variable is not set to `true`, all deletion operations will be disabled.
-   **Deletion Guard**: A centralized `checkDeletionEnabled` guard has been implemented to enforce this setting at both the controller and service levels, ensuring a robust and secure implementation.
-   **Documentation**: The installation guide has been updated to include the new `ENABLE_DELETION` environment variable and its behavior.
-   **Refactor**: The `IngestionService`'s `create` method was refactored to remove unnecessary calls to the `delete` method, simplifying the code and improving its robustness.
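A minimal sketch of such a guard; only `ENABLE_DELETION` and the guard's name come from the description, the middleware wrapper is illustrative:

```ts
import type { Request, Response, NextFunction } from 'express';

export function checkDeletionEnabled(): void {
    if (process.env.ENABLE_DELETION !== 'true') {
        throw new Error('Deletion is disabled on this instance. Set ENABLE_DELETION=true to allow it.');
    }
}

// Controller-level usage: reject the request before it reaches the service layer,
// which calls checkDeletionEnabled() again as a second line of defence.
export function deletionGuard(_req: Request, res: Response, next: NextFunction): void {
    try {
        checkDeletionEnabled();
        next();
    } catch (err) {
        res.status(403).json({ error: (err as Error).message });
    }
}
```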

* Adding position for menu items

* feat(docker): Fix CORS errors

This commit fixes CORS errors when running the app in Docker by introducing the `APP_URL` environment variable. A CORS policy is set up for the backend to only allow requests originating from the `APP_URL`.

Key changes include:
- New `APP_URL` and `ORIGIN` environment variables have been added to properly configure CORS and the SvelteKit adapter, making the application's public URL easily configurable.
- Dockerfiles are updated to copy the entrypoint script, Drizzle config, and migration files into the final image.
- Documentation and example files (`.env.example`, `docker-compose.yml`) have been updated to reflect these changes.
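In Express terms, the CORS setup described here might look roughly like this (the `credentials` flag is an assumption):

```ts
import express from 'express';
import cors from 'cors';

const app = express();

// Only allow browser requests originating from the configured public URL.
app.use(
    cors({
        origin: process.env.APP_URL ?? 'http://localhost:3000',
        credentials: true, // assumption: cookies/auth headers are sent cross-origin
    })
);
```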

* feat(attachments): De-duplicate attachment content by content hash

This commit refactors attachment handling to allow multiple emails within the same ingestion source to reference attachments with identical content (same hash).

Changes:
- The unique index on the `attachments` table has been changed to a non-unique index to permit duplicate hash/source pairs.
- The ingestion logic is updated to first check for an existing attachment with the same hash and source. If found, it reuses the existing record; otherwise, it creates a new one. This maintains storage de-duplication.
- The email deletion logic is improved to be more robust. It now correctly removes the email-attachment link before checking if the attachment record and its corresponding file can be safely deleted.

* Not filtering out the Trash folder

* feat(backend): Add BullMQ dashboard for job monitoring

This commit introduces a web-based UI for monitoring and managing background jobs using BullMQ.

Key changes:
- A new `/api/v1/jobs` endpoint is created, serving the Bull Board dashboard. Access is restricted to authenticated administrators.
- All BullMQ queue definitions (`ingestion`, `indexing`, `sync-scheduler`) have been centralized into a new `packages/backend/src/jobs/queues.ts` file.
- Workers and services now import queue instances from this central file, improving code organization and removing redundant queue instantiations.
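A hedged sketch of the Bull Board wiring; queue names come from the change above, `requireAdmin` is a hypothetical auth middleware:

```ts
import { Queue } from 'bullmq';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';

const connection = {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: Number(process.env.REDIS_PORT ?? 6379),
};
const queues = ['ingestion', 'indexing', 'sync-scheduler'].map((name) => new Queue(name, { connection }));

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/api/v1/jobs');

createBullBoard({
    queues: queues.map((queue) => new BullMQAdapter(queue)),
    serverAdapter,
});

// app.use('/api/v1/jobs', requireAdmin, serverAdapter.getRouter());
```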

* Add `ALL_INCLUSIVE_ARCHIVE` environment variable to disable junk filtering

* Using BSL license

* frontend: Responsive design for menu bar, pagination

* License service/module

* Remove demoMode logic

* Formatting code

* Remove enterprise packages

* Fix package.json in packages

* Search page responsive fix

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-10-24 17:11:05 +02:00
Wei S.
1e048fdbc1 Update package.json 2025-09-26 17:06:40 +02:00
Wei S.
b71dd55e25 add OCR docs (#144)
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-26 12:09:23 +02:00
Wei S.
d372ef7566 Feat: Tika Integration and Batch Indexing (#132)
* Feat/tika integration (#94)

* feat(Tika) Integration of Tika for text extraction

* feat(Tika) Integration of Apache Tika for text extraction

* feat(Tika): Complete Tika integration with text extraction and docker-compose setup

- Add Tika service to docker-compose.yml
- Implement text sanitization and document validation
- Improve batch processing with concurrency control

* fix(comments) translated comments into english
fix(docker) removed ports (only used for testing)

* feat(indexing): Implement batch indexing for Meilisearch

This change introduces batch processing for indexing emails into Meilisearch to significantly improve performance and throughput during ingestion. This change is based on the batch processing method previously contributed by @axeldunkel.

Previously, each email was indexed individually, resulting in a high number of separate API calls. This approach was inefficient, especially for large mailboxes.

The `processMailbox` queue worker now accumulates emails into a batch before sending them to the `IndexingService`. The service then uses the `addDocuments` Meilisearch API endpoint to index the entire batch in a single request, reducing network overhead and improving indexing speed.

A new environment variable, `MEILI_INDEXING_BATCH`, has been added to make the batch size configurable, with a default of 500.
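A simplified sketch of the batching; the index name `emails` is an assumption:

```ts
import { MeiliSearch } from 'meilisearch';

const client = new MeiliSearch({
    host: process.env.MEILI_HOST ?? 'http://meilisearch:7700',
    apiKey: process.env.MEILI_MASTER_KEY,
});
const BATCH_SIZE = Number(process.env.MEILI_INDEXING_BATCH ?? 500);

export async function indexEmails(emails: Array<Record<string, unknown>>): Promise<void> {
    for (let i = 0; i < emails.length; i += BATCH_SIZE) {
        const batch = emails.slice(i, i + BATCH_SIZE);
        // One addDocuments request per batch instead of one request per email.
        await client.index('emails').addDocuments(batch);
    }
}
```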

Additionally, this commit includes minor refactoring:
- The `TikaService` has been moved to its own dedicated file.
- The `PendingEmail` type has been moved to the shared `@open-archiver/types` package.

* chore(jobs): make continuous sync job scheduling idempotent

Adds a static `jobId` to the repeatable 'schedule-continuous-sync' job.

This prevents duplicate jobs from being scheduled if the server restarts. By providing a unique ID, the queue will update the existing repeatable job instead of creating a new one, ensuring the sync runs only at the configured frequency.
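Sketched with BullMQ (the exact repeat options depend on the BullMQ version in use):

```ts
import { Queue } from 'bullmq';

const syncSchedulerQueue = new Queue('sync-scheduler', {
    connection: { host: process.env.REDIS_HOST ?? 'localhost', port: Number(process.env.REDIS_PORT ?? 6379) },
});

export async function scheduleContinuousSync(): Promise<void> {
    await syncSchedulerQueue.add(
        'schedule-continuous-sync',
        {},
        {
            repeat: { pattern: process.env.SYNC_FREQUENCY ?? '* * * * *' },
            // A static jobId makes the repeatable job idempotent: re-running this on
            // startup updates the existing schedule instead of registering a duplicate.
            jobId: 'schedule-continuous-sync',
        }
    );
}
```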

---------

Co-authored-by: axeldunkel <53174090+axeldunkel@users.noreply.github.com>
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-26 11:34:32 +02:00
Wei S.
e9a65f9672 feat: Add Mbox ingestion (#117)
This commit introduces two major features:

1.  **Mbox File Ingestion:**
    Users can now ingest emails from Mbox files (`.mbox`). A new Mbox connector has been implemented on the backend, and the user interface has been updated to support creating Mbox ingestion sources. Documentation for this new provider has also been added.

Additionally, this commit includes new documentation for upgrading and migrating Open Archiver.

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-16 20:30:22 +03:00
Wei S.
ce3f379b7a Update issue templates (#110) 2025-09-14 16:25:13 +03:00
Wei S.
37a778cb6d chore(deps): Update dependencies across packages (#105)
This commit updates several dependencies in the frontend and backend packages.

- **Backend:**
  - Upgrades `xlsx` to version `0.20.3` by pointing to the official CDN URL. This ensures usage of the community edition with a permissive license.
  - Removes the unused `bull-board` development dependency.

- **Frontend:**
  - Upgrades `@sveltejs/kit` from `^2.16.0` to `^2.38.1` to stay current with the latest features and fixes.

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-11 22:07:35 +03:00
Wei S.
26a760b232 Create FUNDING.yml (#102) 2025-09-10 17:09:13 +03:00
Wei S.
6be0774bc4 Display versions: Add new version notification in footer (#101)
* feat: Add new version notification in footer

This commit implements a system to check for new application versions and notify the user.

On page load, the server-side code now fetches the latest release from the GitHub repository API. It uses `semver` to compare the current application version with the latest release tag.

If a newer version is available, an alert is displayed in the footer with a link to the release page. The current application version is also now displayed in the footer. The version check is cached for one hour to minimize API requests.
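A hedged sketch of the check; the repository URL is assumed and the cache is a simple in-memory one:

```ts
import semver from 'semver';

const CACHE_TTL_MS = 60 * 60 * 1000; // re-check at most once per hour
let cached: { latest: string; checkedAt: number } | null = null;

export async function checkForNewVersion(currentVersion: string): Promise<string | null> {
    if (!cached || Date.now() - cached.checkedAt > CACHE_TTL_MS) {
        // Assumed repository slug; adjust to the actual GitHub repo.
        const res = await fetch('https://api.github.com/repos/LogicLabs-OU/OpenArchiver/releases/latest');
        const release = (await res.json()) as { tag_name: string };
        cached = { latest: release.tag_name, checkedAt: Date.now() };
    }
    const latest = semver.coerce(cached.latest)?.version;
    // Return the newer version string if an upgrade is available, otherwise null.
    return latest && semver.gt(latest, currentVersion) ? latest : null;
}
```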

* Modify version notification

* current version 0.3.1

* Resolve conflicts

* Code formatting

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-10 12:09:12 +03:00
Wei S.
4a23f8f29f feat: Add new version notification in footer (#99)
This commit implements a system to check for new application versions and notify the user.

On page load, the server-side code now fetches the latest release from the GitHub repository API. It uses `semver` to compare the current application version with the latest release tag.

If a newer version is available, an alert is displayed in the footer with a link to the release page. The current application version is also now displayed in the footer. The version check is cached for one hour to minimize API requests.

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-09 23:36:35 +03:00
albanobattistella
074256ed59 Update it.json (#90) 2025-09-07 23:44:08 +03:00
Wei S.
7d178d786b Docs: code formatting (#92)
* Adding rate limiting docs

* update rate limiting docs

* Resolve conflict

* Code formatting

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-06 18:06:59 +03:00
Wei S.
4b11cd931a Docs: update rate limiting docs (#91)
* Adding rate limiting docs

* update rate limiting docs

* Resolve conflict

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-06 17:56:34 +03:00
scotscotmcc
0a21ad14cd Update README.md (#89)
fix folder in installation steps
2025-09-06 17:38:43 +03:00
Wei S.
63d3960f79 Adding rate limiting docs (#88)
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-04 17:44:10 +03:00
Wei S.
85a526d1b6 User api key: JSON rate limiting message & status code (#87)
* feat(auth): Implement API key authentication

This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.

Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.
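A minimal sketch of the header check; the key lookup is abstracted away and the response shape is an assumption:

```ts
import type { Request, Response, NextFunction } from 'express';

export function apiKeyAuth(isValidKey: (key: string) => Promise<boolean>) {
    return async (req: Request, res: Response, next: NextFunction): Promise<void> => {
        const apiKey = req.header('X-API-KEY');
        if (!apiKey || !(await isValidKey(apiKey))) {
            res.status(401).json({ error: 'Invalid or missing API key' });
            return;
        }
        next();
    };
}
```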

* Add configurable API rate limiting

Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).

The installation documentation has been updated to reflect these new configuration options.
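With express-rate-limit, a configuration built from these variables might look roughly like this; keying by API key follows the list above, and the JSON message matches the later commit in this PR:

```ts
import rateLimit from 'express-rate-limit';
import type { Request } from 'express';

export const apiRateLimiter = rateLimit({
    windowMs: Number(process.env.RATE_LIMIT_WINDOW_MS ?? 15 * 60 * 1000), // default: 15 minutes
    max: Number(process.env.RATE_LIMIT_MAX_REQUESTS ?? 100),
    // Track authenticated clients per API key, anonymous ones per IP.
    keyGenerator: (req: Request) => req.header('X-API-KEY') ?? req.ip ?? 'anonymous',
    message: { error: 'Too many requests, please try again later.' }, // JSON body instead of plain text
});
```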

* Disable API operation in demo mode

* Exclude public API endpoints from rate limiting

* JSON rate limiting message & status code

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-04 17:32:43 +03:00
Wei S.
52a1a11973 User api key: Exclude public API endpoints from rate limiting (#86)
* feat(auth): Implement API key authentication

This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.

Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.

* Add configurable API rate limiting

Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).

The installation documentation has been updated to reflect these new configuration options.

* Disable API operation in demo mode

* Exclude public API endpoints from rate limiting

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-04 17:27:57 +03:00
Wei S.
4048f47777 User api key: Disable API operation in demo mode (#85)
* feat(auth): Implement API key authentication

This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.

Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.

* Add configurable API rate limiting

Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).

The installation documentation has been updated to reflect these new configuration options.

* Disable API operation in demo mode

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-04 16:56:45 +03:00
Wei S.
22b173cbe4 Feat: Implement API key authentication (#84)
* feat(auth): Implement API key authentication

This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.

Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.

* Add configurable API rate limiting

Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).

The installation documentation has been updated to reflect these new configuration options.

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-04 15:07:53 +03:00
Wei S.
774b0d7a6b Bug fix: Status API response: needsSetup and Remove SUPER_API_KEY support (#83)
* Disable system settings for demo mode

* Status API response: needsSetup

* Remove SUPER_API_KEY support

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-03 16:30:06 +03:00
Wei S.
85607d2ab3 Disable system settings for demo mode (#78)
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-01 13:29:45 +03:00
Wei S.
94021eab69 v0.3.0 release (#76)
* Remove extra ports in Docker Compose file

* Allow self-signed cert

* Adding allow insecure cert option

* fix(IMAP): Share connections between each fetch email action

* Update docs: troubleshooting CORS error

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-09-01 12:44:22 +03:00
Wei S.
faefdac44a System settings: Copy locale files in backend build
Copy locale files in backend build
2025-08-31 15:10:40 +03:00
Wei S.
392f51dabc System settings: adding multi-language support for frontend (#72)
* System settings setup

* Multi-language support

* feat: Add internationalization (i18n) support to frontend

This commit introduces internationalization (i18n) to the frontend using the `sveltekit-i18n` library, allowing the user interface to be translated into multiple languages.

Key changes:
- Added translation files for 10 languages (en, de, es, fr, etc.).
- Replaced hardcoded text strings throughout the frontend components and pages with translation keys.
- Added a language selector to the system settings page, allowing administrators to set the default application language.
- Updated the backend settings API to store and expose the new language configuration.

* Adding greek translation

* feat(backend): Implement i18n for API responses

This commit introduces internationalization (i18n) to the backend API using the `i18next` library.

Hardcoded error and response messages in the API controllers have been replaced with translation keys, which are processed by the new i18next middleware. This allows for API responses to be translated into different languages.

The following dependencies were added:
- `i18next`
- `i18next-fs-backend`
- `i18next-http-middleware`
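A rough sketch of the middleware wiring; the locale path and option values are assumptions:

```ts
import express from 'express';
import i18next from 'i18next';
import Backend from 'i18next-fs-backend';
import * as middleware from 'i18next-http-middleware';

export async function setupI18n(app: express.Express): Promise<void> {
    await i18next
        .use(Backend)
        .use(middleware.LanguageDetector)
        .init({
            fallbackLng: 'en',
            backend: { loadPath: './locales/{{lng}}/translation.json' },
        });

    app.use(middleware.handle(i18next));
    // In a controller: res.status(404).json({ message: req.t('errors.notFound') });
}
```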

* Formatting code

* Translation revamp for frontend and backend, adding systems docs

* Docs site title

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-31 13:44:28 +03:00
Wei S.
baff1195c7 Feat: System settings (#66)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

* Remove inherent behavior, index userEmail, adding docs for IAM policies

* Format

* System settings setup

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-28 14:12:05 +03:00
Wei S.
f1da17e484 Fix: storage chart legend overflow (#70)
* Fix storage chart legend overflow

* fix storage legend overflow

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-24 16:10:24 +02:00
Wei S.
a2c55f36ee Cla v2 (#68)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

* Remove inherent behavior, index userEmail, adding docs for IAM policies

* Format

* CLA v2

* cla-v2

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-24 15:03:05 +02:00
Wei S.
9fdba4cd61 Role based access: Adding docs to docs site (#67)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

* Remove inherent behavior, index userEmail, adding docs for IAM policies

* Format

* Adding IAM policy documentation to Docs site

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-24 14:52:08 +02:00
Wei S.
108c646596 CLA-v2
CLA-v2: Clarifying LogicLabs OÜ is the entity contributors are signing the agreement with.
2025-08-24 15:05:15 +03:00
Wei S.
61e44c81f7 Role based access (#61)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

* Remove inherent behavior, index userEmail, adding docs for IAM policies

* Format

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-23 23:19:51 +03:00
Wei S.
f651aeab0e Role based access (#60)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-22 00:51:56 +03:00
Wei S.
3fb4290934 Role based access (#59)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-21 23:53:21 +03:00
Wei S.
8c33b63bdf feat: Role based access control (#58)
* Format checked, contributing.md update

* Middleware setup

* IAP API, create user/roles in frontend

* RBAC using CASL library

* Switch to CASL, secure search, resource-level access control

---------

Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-21 23:45:06 +03:00
David Girón
2b325f3461 feat: optimize Dockerfile (#47)
* define base image arg

* create base stage with common content

* chmod executable entrypoint file

this avoids re-copying the same file as is being modified in the docker
layer

* cache npm downloaded packages

avoids re-downloading deps if cache content is available
2025-08-19 12:17:32 +03:00
Til Wegener
4d3c164bc0 Fix UI size display and ingestion history graph (#50)
* fix: unify size display, improve graph interpolation & time readability

* fix display human-readable sizes in ingestion chart

* display human-readable sizes in ingestion chart

* fix: format code

* fix keep fallback for item.name
2025-08-19 11:06:31 +03:00
Wei S.
7288286fd9 Format checked, contributing.md update (#49)
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
2025-08-17 17:42:49 +03:00
Wei S.
ec1cf3cf0b Merge pull request #46 from axeldunkel/fix/imap-sync-skipping-emails
Fix IMAP sync marking all emails as synced before fetching
2025-08-17 17:28:31 +03:00
axeldunkel
9c9152a2ee Merge branch 'LogicLabs-OU:main' into fix/imap-sync-skipping-emails 2025-08-17 13:41:12 +02:00
Axel Dunkel
c05b3b92d9 fix the indentation, to use tabs not spaces 2025-08-17 11:34:21 +00:00
Wei S.
aed0c964c8 Merge pull request #48 from tilwegener/fix/graph-delta-query-removed-field
Fix Graph delta query: exclude unsupported @removed field
2025-08-17 13:34:37 +03:00
Til Wegener
86dda6c6d3 Fix Graph delta query: exclude unsupported @removed field 2025-08-17 09:58:17 +00:00
Axel Dunkel
6e1dd17267 Fix IMAP sync marking all emails as synced before fetching
Initialize newMaxUids with lastUid instead of mailbox maximum
to prevent marking unfetched emails as synced.

The bug sets newMaxUids to the highest UID before fetching,
causing all existing emails to be skipped when sync state
is saved early.

Fixes #45
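A small sketch of the corrected cursor handling (names assumed):

```ts
async function syncMailbox(
    lastUid: number,
    fetchSince: (uid: number) => AsyncIterable<{ uid: number }>
): Promise<number> {
    // Start from the last UID we actually archived, not the mailbox maximum,
    // so unfetched emails are never marked as synced.
    let newMaxUid = lastUid;
    for await (const message of fetchSince(lastUid + 1)) {
        // ...archive the message...
        newMaxUid = Math.max(newMaxUid, message.uid); // advance only after fetching
    }
    return newMaxUid; // safe to persist even if the sync state is saved early
}
```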
2025-08-16 08:43:09 +00:00
Wei S.
b4d2125020 Merge pull request #43 from LogicLabs-OU/dev
Fix pnpm-lock unmatch error
2025-08-15 14:24:33 +03:00
Wayne
a2ca79d3eb Fix pnpm-lock unmatch error 2025-08-15 14:23:53 +03:00
Wei S.
8f519dc995 Merge pull request #42 from LogicLabs-OU/dev
1. Project-wide format using prettier
2. Delete single archived emails
3. Handle attachment indexing error gracefully
2025-08-15 14:20:11 +03:00
Wayne
b2ca3ef0e1 Project wide format 2025-08-15 14:18:23 +03:00
Wayne
9873228d01 Before format 2025-08-15 14:14:01 +03:00
Wei S.
94190f8b7c Merge pull request #41 from LogicLabs-OU/dev
Dev: project wide formatting setup
2025-08-15 13:46:46 +03:00
Wayne
832e29bd92 Project prettier setup 2025-08-15 13:45:58 +03:00
Wei S.
cba6dfcae1 Merge pull request #36 from tilwegener/feat/delete-mail-button
feat: delete archived emails + improve IMAP UID and PDF parsing

Note: Will do project-wide formatting in the next commit, merging this PR.
2025-08-15 13:45:31 +03:00
Til Wegener
24f5b341a8 Merge branch 'LogicLabs-OU:main' into feat/delete-mail-button 2025-08-14 10:13:16 +02:00
Til Wegener
cba7e05d98 fix: handle attachment cleanup errors safely and surface messages 2025-08-14 08:10:58 +00:00
Til Wegener
cfdfe42fb8 fix(email-deletion): redirect to archived list and purge search index 2025-08-14 07:25:12 +00:00
Til Wegener
9138c1c753 feat: remove archived emails and related data 2025-08-14 06:26:51 +00:00
Til Wegener
c4afa471cb chore: log IMAP message UID during processing 2025-08-13 19:24:34 +00:00
Til Wegener
187282c68d fix: handle gaps in IMAP UID ranges 2025-08-13 19:24:05 +00:00
Wayne
82a83a71e4 BODY_SIZE_LIMIT fix, database url encode 2025-08-13 21:55:22 +03:00
Til Wegener
ff676ecb86 * avoid hanging when pdf2json fails by resolving text extraction with an empty string 2025-08-13 20:11:47 +02:00
Wei S.
9ff6801afc Merge pull request #33 from LogicLabs-OU/dev
Increase file upload limit and improve ingestion robustness
2025-08-13 20:55:35 +03:00
Wayne
d2b4337be9 Fix error when pst file emails don't include senders 2025-08-13 20:46:02 +03:00
Wayne
b03791d9a6 adding FRONTEND_BODY_SIZE_LIMIT to allow bigger file upload for the frontend. This is to fix the pst file upload error. 2025-08-13 19:20:19 +03:00
Wei S.
4cbbb6cec3 Merge pull request #28 from LogicLabs-OU/dev
Migrate to DB users, implement IAM & add PST/EML importers
2025-08-11 11:12:53 +03:00
Wayne
f10bf93d1b eml import support 2025-08-11 10:55:50 +03:00
Wayne
512f0312ba Preserve original email path and tags. Emails with path now will be stored within their path 2025-08-09 16:40:25 +03:00
Wayne
29db34c5d8 Imap batch process, handle rate limits gracefully 2025-08-08 14:56:21 +03:00
Wayne
a87000f9dc PST Import improvement 2025-08-08 13:20:33 +03:00
Wayne
4872ed597f PST ingestion 2025-08-07 17:03:08 +03:00
Wayne
e02ad0355e Restrict createAdminUser to initial setup only 2025-08-06 11:06:09 +03:00
Wayne
23ebe942b2 IAM policies 2025-08-06 01:12:33 +03:00
Wayne
842f8092d6 Migrating user service to database, sunsetting admin user 2025-08-06 00:01:15 +03:00
Wayne
3201fbfe0b Email thread improvement, user-defined sync frequency 2025-08-05 21:12:06 +03:00
Wei S.
d5c9f9a14b Merge pull request #15 from LogicLabs-OU/dev
Thread discovery
2025-08-05 13:37:57 +03:00
Wayne
f2a5b29105 Thread discovery 2025-08-05 13:34:51 +03:00
Wei S.
c65d80e948 Merge pull request #12 from LogicLabs-OU/dev
Discord invite link
2025-08-04 16:32:24 +03:00
Wayne
f484f72994 Discord invite link 2025-08-04 16:32:04 +03:00
Wei S.
59ca07dd1a Merge pull request #11 from LogicLabs-OU/dev
Demo site info update
2025-08-04 16:07:10 +03:00
Wayne
d74d5e5308 Demo site info update 2025-08-04 16:06:25 +03:00
Wei S.
1ae7b2fd2f Merge pull request #10 from LogicLabs-OU/dev
Adding demo site
2025-08-04 16:04:10 +03:00
Wayne
e0953e270e Adding demo site 2025-08-04 16:03:45 +03:00
Wei S.
705b1e5311 Merge pull request #9 from LogicLabs-OU/dev
Demo mode enabled
2025-08-04 14:43:57 +03:00
Wayne
5a2ca3bf19 Demo mode 2025-08-04 14:42:06 +03:00
Wei S.
f0678b3aa9 Merge pull request #8 from LogicLabs-OU/wip
Error handling, force sync, UI improvement
2025-08-04 13:28:46 +03:00
Wayne
4156abcdfa Error handling, force sync, UI improvement 2025-08-04 13:24:46 +03:00
Wayne
d47f0c5b08 Docker: with short sha 2025-08-03 13:26:40 +03:00
Wayne
a18e34a486 Frontend: Page title fix 2025-08-03 12:26:09 +03:00
Wayne
be3127136f Frontend package name fix 2025-08-03 01:49:27 +03:00
Wayne
26aeaa7c2d PNPM workspace build order 2025-08-03 01:44:00 +03:00
Wayne
07cc1e5075 Build backend then frontend 2025-08-03 01:37:20 +03:00
Wayne
d9971e3ff4 Pnpm build with dependency graph 2025-08-03 01:31:51 +03:00
Wayne
3ff50ec155 types package path reference 2025-08-03 01:21:19 +03:00
Wayne
181f4fd46c Build from tsconfig.build.json 2025-08-03 01:13:52 +03:00
Wayne
ffaa9762af Use --shamefully-hoist to create a flat node_modules structure 2025-08-03 01:06:06 +03:00
Wayne
f45ed3a62a Build package separately 2025-08-03 01:00:05 +03:00
Wayne
95445dcd37 Build with tsconfig.build.json 2025-08-03 00:56:29 +03:00
Wayne
0d64eff208 Disable parallel build 2025-08-03 00:47:29 +03:00
Wayne
6fb459630e Untyped variable, types package reference 2025-08-03 00:44:12 +03:00
Wayne
227e8d8d18 Docker deployment: build all packages in one 2025-08-03 00:37:52 +03:00
Wayne
a5d3a3be86 Docker deployment: fix build command 2025-08-03 00:28:29 +03:00
Wayne
8695f484ac Docker build types package 2025-08-03 00:23:57 +03:00
Wayne
ed15c0e9bd Docker action 2025-08-03 00:20:00 +03:00
Wayne
e09c82f1fe Docker action 2025-08-03 00:17:04 +03:00
Wayne
6a154a8f02 Handle sync error: remove failed jobs, force sync 2025-08-02 12:16:02 +03:00
Wayne
ac4dae08d2 CLA 2025-08-02 11:32:11 +03:00
Wayne
c297e5a714 Docs site update 2025-08-01 19:54:23 +03:00
Wayne
5cc24d0d67 Ingestion database error fix, UI update 2025-08-01 15:09:05 +03:00
Wayne
488df16f26 IMAP connector: skip empty inboxes 2025-07-30 16:13:04 +03:00
Wayne
e9d84fb438 Support TLS option in IMAP connection 2025-07-28 20:29:04 +03:00
Wayne
32752ce90f Auth middleware: fix potential empty SUPER_API_KEY 2025-07-28 17:59:07 +03:00
Wayne
42dc884588 Docs site logo fix 2025-07-28 13:06:02 +03:00
Wayne
563e2dcae4 Docs site logo fix 2025-07-28 13:03:00 +03:00
Wayne
b2f41062f8 Fix docs site logo 2025-07-28 12:52:56 +03:00
Wayne
4e0f6ce5df Docs update 2025-07-28 11:38:14 +03:00
Wayne
e68d9a338d Docs update 2025-07-28 02:35:28 +03:00
Wayne
a7e6b93c77 Docs update 2025-07-28 02:14:38 +03:00
Wayne
9d3e6fc22e CNAME file creation 2025-07-28 01:32:55 +03:00
Wayne
16e6d04682 Docs update 2025-07-28 01:28:52 +03:00
Wayne
cb04da78a6 Docs site with domain 2025-07-27 22:02:14 +03:00
Wayne
36dbd426d5 Docs site 2025-07-27 21:54:56 +03:00
Wayne
8985655a48 Docs site 2025-07-27 21:33:53 +03:00
Wayne
9b0c136fff Dead link fix 2025-07-27 21:32:29 +03:00
Wayne
88046c38e4 Docs site deployment 2025-07-27 21:30:41 +03:00
Wayne
9c5922fd31 Docs site deploy 2025-07-27 21:28:51 +03:00
Wayne
7240da7b40 Docs site 2025-07-27 21:26:34 +03:00
Wayne
898f52ac78 Storage & auth security fix 2025-07-27 21:01:47 +03:00
Wayne
becd5f1490 Force sync API endpoint 2025-07-27 18:27:11 +03:00
Wayne
1d907abdbd Message header ID 2025-07-26 19:37:11 +03:00
Wayne
8a74838f43 IMAP: ingest all mailboxes, skip duplication 2025-07-26 15:11:18 +03:00
Wayne
6930162079 Docs update user guide 2025-07-25 18:08:21 +03:00
Wei Sheng
748240b16e GITBOOK-1: No subject 2025-07-25 15:06:17 +00:00
Wayne
88cb5340a7 Docs update 2025-07-25 17:13:11 +03:00
Wayne
e95093c439 Docs update 2025-07-25 17:11:07 +03:00
Wayne
a96b32e0e9 Readme update 2025-07-25 16:40:38 +03:00
Wayne
b081c802b7 Docker Compose deployment 2025-07-25 16:34:13 +03:00
Wayne
7d60a8fe6e Docker Compose deployment 2025-07-25 16:32:07 +03:00
Wayne
5217d24184 Docker Compose deployment 2025-07-25 16:29:09 +03:00
Wayne
8c12cda370 Docker Compose deployment 2025-07-25 15:50:25 +03:00
Wayne
946da7925b Docker deployment 2025-07-24 23:43:38 +03:00
Wayne
7646f39721 Dashboard charts refinement 2025-07-24 19:26:07 +03:00
Wayne
c3bbc84b01 Identify inboxes in org 2025-07-24 18:46:35 +03:00
Wayne
bef92cb7d4 Dashboard revamp 2025-07-24 14:43:24 +03:00
Wayne
69846c10c0 Dashboard fix 2025-07-23 15:14:08 +03:00
Wayne
b19ec38505 Dashboard service init 2025-07-23 14:57:39 +03:00
Wayne
7bd1b2d77a Microsoft 365 syncState fix 2025-07-23 14:26:32 +03:00
Wayne
6b820e80c9 IMAP initial import repeat fix 2025-07-23 12:51:10 +03:00
Wayne
e67cf33d5f Atomically update syncState 2025-07-22 22:45:32 +03:00
Wayne
36fcaa0475 Email preview: show pure text 2025-07-22 20:20:59 +03:00
Wayne
a800d54394 Microsoft 365 sync 2025-07-22 20:18:48 +03:00
Wayne
5b967836b1 Microsoft connector 2025-07-22 18:48:03 +03:00
Wayne
1b81647ff4 Ingestion form update 2025-07-22 16:46:13 +03:00
Wayne
e1e11765d8 Credentials database schema 2025-07-22 16:29:52 +03:00
Wayne
b5c2a12739 Delete files upon ingestion deletion 2025-07-22 15:36:55 +03:00
Wayne
e7bb545cfa Continuous syncing fix 2025-07-22 13:49:13 +03:00
Wayne
5e42bef8ad IMAP syncing fix 2025-07-22 02:15:41 +03:00
Wayne
c1f2952d79 Pause a sync. 2025-07-22 02:06:38 +03:00
Wayne
3d1feedafb Continuous syncing 2025-07-22 01:51:10 +03:00
367 changed files with 56457 additions and 8360 deletions

.dockerignore (new file, 46 lines)

@@ -0,0 +1,46 @@
# Git
.git
.gitignore
# Node
node_modules
.pnpm-store
# Env
.env
.env.*
!/.env.example
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# IDEs
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# Docker
docker-compose.yml
.dockerignore
Dockerfile
# Local data
meili_data

.env.example

@@ -1,26 +1,57 @@
# Application
# --- Application Settings ---
# Set to 'production' for production environments
NODE_ENV=development
PORT_BACKEND=4000
PORT_FRONTEND=3000
# The public-facing URL of your application. This is used by the backend to configure CORS.
APP_URL=http://localhost:3000
# This is used by the SvelteKit Node adapter to determine the server's public-facing URL.
# It should always be set to the value of APP_URL.
ORIGIN=$APP_URL
# The frequency of continuous email syncing. Default is every minute, but you can change it to another value based on your needs.
SYNC_FREQUENCY='* * * * *'
# Set to 'true' to include Junk and Trash folders in the email archive. Defaults to false.
ALL_INCLUSIVE_ARCHIVE=false
# --- Docker Compose Service Configuration ---
# These variables are used by docker-compose.yml to configure the services. Leave them unchanged if you use Docker services for PostgreSQL, Valkey (Redis), and Meilisearch. If you decide to use your own instances of these services, you can substitute them with your own connection credentials.
# PostgreSQL
DATABASE_URL="postgresql://admin:password@postgres:5432/open_archive?schema=public"
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
POSTGRES_DB=open_archive
POSTGRES_USER=admin
POSTGRES_PASSWORD=password
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"
# Meilisearch
MEILI_MASTER_KEY=aSampleMasterKey
MEILI_HOST=http://meilisearch:7700
# The number of emails to batch together for indexing. Defaults to 500.
MEILI_INDEXING_BATCH=500
# Storage
# Redis (We use Valkey, which is Redis-compatible and open source)
REDIS_HOST=valkey
REDIS_PORT=6379
REDIS_PASSWORD=defaultredispassword
# If you run Valkey service from Docker Compose, set the REDIS_TLS_ENABLED variable to false.
REDIS_TLS_ENABLED=false
# Redis username. Only required if not using the default user.
REDIS_USER=notdefaultuser
# --- Storage Settings ---
# Choose your storage backend. Valid options are 'local' or 's3'.
STORAGE_TYPE=local
# The maximum request body size to accept in bytes, including while streaming. The body size can also be specified with a unit suffix for kilobytes (K), megabytes (M), or gigabytes (G), for example 512K or 1M. Defaults to 512K. Use Infinity if you don't want any upload limit.
BODY_SIZE_LIMIT=100M
# --- Local Storage Settings ---
# The absolute path on the server where files will be stored.
# This is only used if STORAGE_TYPE is 'local'.
# The path inside the container where files will be stored.
# This is mapped to a Docker volume for persistence.
# This is not an optional variable; it is where the Open Archiver service stores application data. Set this even if you are using S3 storage.
# Make sure the user that runs the Open Archiver service has read and write access to this path.
# Important: It is recommended to create this path manually before installation, otherwise you may face permission and ownership problems.
STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
# --- S3-Compatible Storage Settings ---
@@ -33,18 +64,37 @@ STORAGE_S3_REGION=
# Set to 'true' for MinIO and other non-AWS S3 services
STORAGE_S3_FORCE_PATH_STYLE=false
# --- Storage Encryption ---
# IMPORTANT: Generate a secure, random 32-byte hex string for this key.
# You can use `openssl rand -hex 32` to generate a key.
# This key is used for AES-256 encryption of files at rest.
# This is an optional variable; if not set, files will not be encrypted.
STORAGE_ENCRYPTION_KEY=
# --- Security & Authentication ---
# Enable or disable deletion of emails and ingestion sources. Defaults to false.
ENABLE_DELETION=false
# Rate Limiting
# The window in milliseconds for which API requests are checked. Defaults to 60000 (1 minute).
RATE_LIMIT_WINDOW_MS=60000
# The maximum number of API requests allowed from an IP within the window. Defaults to 100.
RATE_LIMIT_MAX_REQUESTS=100
# JWT
JWT_SECRET="a-very-secret-key"
# IMPORTANT: Change this to a long, random, and secret string in your .env file
JWT_SECRET=a-very-secret-key-that-you-should-change
JWT_EXPIRES_IN="7d"
# Admin users
ADMIN_EMAIL=admin@local.com
ADMIN_PASSWORD=a_strong_pass
SUPER_API_KEY=
# Master Encryption Key for sensitive data
# Master Encryption Key for sensitive data (Such as Ingestion source credentials and passwords)
# IMPORTANT: Generate a secure, random 32-byte hex string for this
# Use `openssl rand -hex 32` to generate a key
# You can use `openssl rand -hex 32` to generate a key.
ENCRYPTION_KEY=
# Apache Tika Integration
# ONLY active if TIKA_URL is set
TIKA_URL=http://tika:9998

.github/CLA-v2.md (new file, 27 lines, vendored)

@@ -0,0 +1,27 @@
# Contributor License Agreement (CLA)
Version: 2
This Agreement is for your protection as a Contributor as well as the protection of the maintainers of the Open Archiver software; it does not change your rights to use your own Contributions for any other purpose. Open Archiver is developed and maintained by LogicLabs OÜ, a private limited company established under the laws of the Republic of Estonia.
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to LogicLabs OÜ. Except for the license granted herein to LogicLabs OÜ and recipients of software distributed by LogicLabs OÜ, You reserve all right, title, and interest in and to Your Contributions.
1. Definitions.
"You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with LogicLabs OÜ. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor.
"Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to LogicLabs OÜ for inclusion in, or documentation of, any of the products owned or managed by LogicLabs OÜ (the "Work"). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to LogicLabs OÜ or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, LogicLabs OÜ for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You grant to LogicLabs OÜ and to recipients of software distributed by LogicLabs OÜ a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works.
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You grant to LogicLabs OÜ and to recipients of software distributed by LogicLabs OÜ a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
4. You represent that you are legally entitled to grant the above license. If your employer(s) has rights to intellectual property that you create that includes your Contributions, you represent that you have received permission to make Contributions on behalf of that employer, that your employer has waived such rights for your Contributions to LogicLabs OÜ, or that your employer has executed a separate Contributor License Agreement with LogicLabs OÜ.
5. You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions.
6. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
7. Should You wish to submit work that is not Your original creation, You may submit it to LogicLabs OÜ separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]".
8. You agree to notify LogicLabs OÜ of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.

.github/CLA.md (new file, 25 lines, vendored)

@@ -0,0 +1,25 @@
# Contributor License Agreement (CLA)
This license is for your protection as a Contributor as well as the protection of the maintainers of the Open Archiver software; it does not change your rights to use your own Contributions for any other purpose. In the following, the maintainers of Open Archiver are referred to as "Open Archiver".
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to "Open Archiver". Except for the license granted herein to Open Archiver and recipients of software distributed by "Open Archiver", You reserve all right, title, and interest in and to Your Contributions.
1. Definitions.
"You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with "Open Archiver". For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor.
"Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to "Open Archiver" for inclusion in, or documentation of, any of the products owned or managed by "Open Archiver" (the "Work"). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to "Open Archiver" or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, "Open Archiver" for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You grant to "Open Archiver" and to recipients of software distributed by "Open Archiver" a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works.
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You grant to "Open Archiver" and to recipients of software distributed by "Open Archiver" a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
4. You represent that you are legally entitled to grant the above license. If your employer(s) has rights to intellectual property that you create that includes your Contributions, you represent that you have received permission to make Contributions on behalf of that employer, that your employer has waived such rights for your Contributions to "Open Archiver", or that your employer has executed a separate Contributor License Agreement with "Open Archiver".
5. You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions.
6. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
7. Should You wish to submit work that is not Your original creation, You may submit it to "Open Archiver" separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]".
8. You agree to notify "Open Archiver" of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.

1
.github/FUNDING.yml vendored Normal file
View File

@@ -0,0 +1 @@
github: [wayneshn]

33
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,33 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System:**
- Open Archiver Version:
**Relevant logs:**
Any relevant logs (Redact sensitive information)
**Additional context**
Add any other context about the problem here.

View File

@@ -0,0 +1,19 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

32
.github/workflows/cla.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: 'CLA Assistant'
on:
issue_comment:
types: [created]
pull_request_target:
types: [opened, closed, synchronize]
# explicitly configure permissions, in case your GITHUB_TOKEN workflow permissions are set to read-only in repository settings
permissions:
actions: write
contents: write # this can be 'read' if the signatures are in remote repository
pull-requests: write
statuses: write
jobs:
CLAAssistant:
runs-on: ubuntu-latest
steps:
- name: 'CLA Assistant'
if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
uses: contributor-assistant/github-action@v2.6.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
with:
path-to-signatures: 'signatures/version2/cla.json'
path-to-document: 'https://github.com/LogicLabs-OU/OpenArchiver/blob/main/.github/CLA-v2.md'
branch: 'main'
allowlist: 'wayneshn'
remote-organization-name: 'LogicLabs-OU'
remote-repository-name: 'cla-db'

43
.github/workflows/deploy-docs.yml vendored Normal file
View File

@@ -0,0 +1,43 @@
name: Deploy Docs to GitHub Pages
on:
push:
branches:
- main
paths:
- 'docs/**'
jobs:
build-and-deploy:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: 10.13.1
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'pnpm'
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Build documentation site
run: pnpm docs:build
- name: Create CNAME file
run: echo "docs.openarchiver.com" > docs/.vitepress/dist/CNAME
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/.vitepress/dist

41
.github/workflows/docker-deployment.yml vendored Normal file
View File

@@ -0,0 +1,41 @@
name: docker-deployment
on:
push:
branches:
- main
paths-ignore:
- '**.md'
- 'docs/**'
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Extract short SHA
id: sha
run: echo "sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: ./apps/open-archiver/Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: logiclabshq/open-archiver:${{ steps.sha.outputs.sha }}

9
.gitignore vendored
View File

@@ -9,7 +9,6 @@ dist
**/meili_data/
# PNPM
pnpm-lock.yaml
pnpm-debug.log
# IDE
@@ -21,3 +20,11 @@ pnpm-debug.log
# Dev
.dev
# Vitepress
docs/.vitepress/dist
docs/.vitepress/cache
# TS
**/tsconfig.tsbuildinfo

13
.prettierignore Normal file
View File

@@ -0,0 +1,13 @@
# Ignore artifacts
dist
.svelte-kit
build
node_modules
pnpm-lock.yaml
meili_data/
## shadcn installs
packages/frontend/src/lib/components/ui/
# Ignore logs
*.log

View File

@@ -1,12 +1,11 @@
{
"useTabs": true,
"singleQuote": true,
"trailingComma": "none",
"trailingComma": "es5",
"semi": true,
"tabWidth": 4,
"printWidth": 100,
"plugins": [
"prettier-plugin-svelte",
"prettier-plugin-tailwindcss"
],
"plugins": ["prettier-plugin-svelte", "prettier-plugin-tailwindcss"],
"overrides": [
{
"files": "*.svelte",

View File

@@ -16,24 +16,24 @@ We pledge to act and interact in ways that are welcoming, open, and respectful.
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities

View File

@@ -6,8 +6,8 @@ First off, thank you for considering contributing to Open Archiver! It's people
Not sure where to start? You can:
- Look through the [open issues](https://github.com/LogicLabs-OU/OpenArchiver/issues) for bugs or feature requests.
- Check the issues labeled `good first issue` for tasks that are a good entry point into the codebase.
- Look through the [open issues](https://github.com/LogicLabs-OU/OpenArchiver/issues) for bugs or feature requests.
- Check the issues labeled `good first issue` for tasks that are a good entry point into the codebase.
## How to Contribute
@@ -29,6 +29,10 @@ If you have an idea for an enhancement, please open an issue to discuss it. This
6. **Update the documentation** if your changes require it.
7. **Submit a pull request** to the `main` branch of the main repository. Please provide a clear description of the problem and solution. Include the relevant issue number if applicable.
## Contributor License Agreement
By submitting a pull request to this repository, you agree to the terms and conditions of our [Contributor License Agreement](./.github/CLA.md).
## Code of Conduct
This project and everyone participating in it is governed by the [Open Archiver Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior.
@@ -37,13 +41,23 @@ This project and everyone participating in it is governed by the [Open Archiver
### Git Commit Messages
- Use the present tense ("Add feature" not "Added feature").
- Use the imperative mood ("Move cursor to..." not "Moves cursor to...").
- Limit the first line to 72 characters or less.
- Reference issues and pull requests liberally after the first line.
- Use the present tense ("Add feature" not "Added feature").
- Use the imperative mood ("Move cursor to..." not "Moves cursor to...").
- Limit the first line to 72 characters or less.
- Reference issues and pull requests liberally after the first line.
### TypeScript Styleguide
- Follow the existing code style.
- Use TypeScript's strict mode.
- Avoid using `any` as a type. Define clear interfaces and types in the `packages/types` directory.
- Follow the existing code style.
- Use TypeScript's strict mode.
- Avoid using `any` as a type. Define clear interfaces and types in the `packages/types` directory.
### Formatting
We use Prettier for code formatting. Before committing new code, check the formatting by running this command from the root folder:
`pnpm run lint`
If there are any formatting issues, you can fix them with:
`pnpm run format`

140
LICENSE
View File

@@ -200,23 +200,23 @@ You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
- **a)** The work must carry prominent notices stating that you modified
it, and giving a relevant date.
- **b)** The work must carry prominent notices stating that it is
released under this License and any conditions added under section 7.
This requirement modifies the requirement in section 4 to
“keep intact all notices”.
- **c)** You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
- **d)** If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
- **a)** The work must carry prominent notices stating that you modified
it, and giving a relevant date.
- **b)** The work must carry prominent notices stating that it is
released under this License and any conditions added under section 7.
This requirement modifies the requirement in section 4 to
“keep intact all notices”.
- **c)** You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
- **d)** If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
@@ -235,42 +235,42 @@ of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
- **a)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
- **b)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either **(1)** a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or **(2)** access to copy the
Corresponding Source from a network server at no charge.
- **c)** Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
- **d)** Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
- **e)** Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
- **a)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
- **b)** Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either **(1)** a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or **(2)** access to copy the
Corresponding Source from a network server at no charge.
- **c)** Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
- **d)** Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
- **e)** Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
@@ -344,23 +344,23 @@ Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
- **a)** Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
- **b)** Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
- **c)** Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
- **d)** Limiting the use for publicity purposes of names of licensors or
authors of the material; or
- **e)** Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
- **f)** Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
- **a)** Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
- **b)** Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
- **c)** Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
- **d)** Limiting the use for publicity purposes of names of licensors or
authors of the material; or
- **e)** Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
- **f)** Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered “further
restrictions” within the meaning of section 10. If the Program as you

112
README.md
View File

@@ -1,45 +1,79 @@
# Open Archiver
> [!WARNING]
> This project is currently in active development and has not yet reached a stable General Availability (GA) release. It is not recommended for use in production environments. We welcome contributions from the community to help us accelerate development and improve the platform. Please see our [Contributing](#contributing) section for more details.
[![Docker Compose](https://img.shields.io/badge/Docker%20Compose-2496ED?style=for-the-badge&logo=docker&logoColor=white)](https://www.docker.com)
[![PostgreSQL](https://img.shields.io/badge/PostgreSQL-4169E1?style=for-the-badge&logo=postgresql&logoColor=white)](https://www.postgresql.org/)
[![Meilisearch](https://img.shields.io/badge/Meilisearch-FF5A5F?style=for-the-badge&logo=meilisearch&logoColor=white)](https://www.meilisearch.com/)
[![TypeScript](https://img.shields.io/badge/TypeScript-3178C6?style=for-the-badge&logo=typescript&logoColor=white)](https://www.typescriptlang.org/)
[![Redis](https://img.shields.io/badge/Redis-DC382D?style=for-the-badge&logo=redis&logoColor=white)](https://redis.io)
[![SvelteKit](https://img.shields.io/badge/SvelteKit-FF3E00?style=for-the-badge&logo=svelte&logoColor=white)](https://svelte.dev/)
**A secure, sovereign, and affordable open-source platform for email archiving and eDiscovery.**
**A secure, sovereign, and open-source platform for email archiving.**
Open Archiver enables individuals and organizations to take control of their digital communication history. It provides a robust, self-hosted solution for ingesting, storing, indexing, and searching emails from major platforms, ensuring a permanent, tamper-proof record of your most critical data, free from vendor lock-in.
Open Archiver provides a robust, self-hosted solution for archiving, storing, indexing, and searching emails from major platforms, including Google Workspace (Gmail), Microsoft 365, and PST files, as well as generic IMAP-enabled email inboxes. Use Open Archiver to keep a permanent, tamper-proof record of your communication history, free from vendor lock-in.
![Open Archiver Preview](assets/screenshots/email.png)
## 📸 Screenshots
## Vision
![Open Archiver Preview](assets/screenshots/dashboard-1.png)
_Dashboard_
To provide individuals and organizations with a secure, sovereign, and affordable platform to preserve and access their digital communication history.
![Open Archiver Preview](assets/screenshots/archived-emails.png)
_Archived emails_
## Key Features
![Open Archiver Preview](assets/screenshots/search.png)
_Full-text search across all your emails and attachments_
- **Universal Ingestion**: Connect to Google Workspace, Microsoft 365, and standard IMAP servers to perform initial bulk imports and maintain continuous, real-time synchronization.
- **Secure & Efficient Storage**: Emails are stored in the standard `.eml` format. The system uses deduplication and compression to minimize storage costs. All data is encrypted at rest.
- **Pluggable Storage Backends**: Start with local filesystem storage and scale to S3-compatible object storage (like AWS S3 or MinIO) as your needs grow.
- **Powerful Search & eDiscovery**: A high-performance search engine indexes the full text of emails and attachments (PDF, DOCX, etc.). The intuitive UI supports advanced search operators, filtering, and case management.
- **Compliance & Retention**: Define granular retention policies to automatically manage the lifecycle of your data. Place legal holds on communications to prevent deletion during litigation.
- **Comprehensive Auditing**: An immutable audit trail logs all system activities, ensuring you have a clear record of who accessed what and when.
- **Role-Based Access Control (RBAC)**: Enforce the principle of least privilege with pre-defined roles for Administrators, Auditors, and End Users.
## 👨‍👩‍👧‍👦 Join our community!
## Tech Stack
We are committed to building an engaging community around Open Archiver, and we invite you to join our community on Discord to get real-time support and connect with the team.
[![Discord](https://img.shields.io/badge/Join%20our%20Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/MTtD7BhuTQ)
[![Bluesky](https://img.shields.io/badge/Follow%20us%20on%20Bluesky-0265D4?style=for-the-badge&logo=bluesky&logoColor=white)](https://bsky.app/profile/openarchiver.bsky.social)
## 🚀 Live demo
Check out the live demo here: https://demo.openarchiver.com
Username: admin@local.com
Password: openarchiver_demo
## ✨ Key Features
- **Universal Ingestion**: Connect to any email provider to perform initial bulk imports and maintain continuous, real-time synchronization. Ingestion sources include:
- IMAP connection
- Google Workspace
- Microsoft 365
- PST files
- Zipped .eml files
- Mbox files
- **Secure & Efficient Storage**: Emails are stored in the standard `.eml` format. The system uses deduplication and compression to minimize storage costs. All files are encrypted at rest.
- **Pluggable Storage Backends**: Support both local filesystem storage and S3-compatible object storage (like AWS S3 or MinIO).
- **Powerful Search & eDiscovery**: A high-performance search engine indexes the full text of emails and attachments (PDF, DOCX, etc.).
- **Thread discovery**: The ability to discover whether an email belongs to a thread/conversation and present that context.
- **Compliance & Retention**: Define granular retention policies to automatically manage the lifecycle of your data. Place legal holds on communications to prevent deletion during litigation (TBD).
- **File Hash and Encryption**: Email and attachment file hash values are stored in the metadata database upon ingestion, so any attempt to alter file content can be detected, supporting legal and regulatory compliance.
    - Each archived email comes with an "Integrity Report" that indicates whether the files are original.
- **Comprehensive Auditing**: An immutable audit trail logs all system activities, ensuring you have a clear record of who accessed what and when.
## 🛠️ Tech Stack
Open Archiver is built on a modern, scalable, and maintainable technology stack:
- **Frontend**: SvelteKit with Svelte 5
- **Backend**: Node.js with Express.js & TypeScript
- **Job Queue**: BullMQ on Redis for robust, asynchronous processing
- **Search Engine**: Meilisearch for blazingly fast and resource-efficient search
- **Database**: PostgreSQL for metadata, user management, and audit logs
- **Deployment**: Docker Compose for easy, one-command deployment
- **Frontend**: SvelteKit with Svelte 5
- **Backend**: Node.js with Express.js & TypeScript
- **Job Queue**: BullMQ on Redis for robust, asynchronous processing. (We use Valkey as the Redis service in the Docker Compose deployment mode, but you can use Redis as well.)
- **Search Engine**: Meilisearch for blazingly fast and resource-efficient search
- **Database**: PostgreSQL for metadata, user management, and audit logs
- **Deployment**: Docker Compose deployment
## Getting Started
## 📦 Deployment
### Prerequisites
- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/)
- A server or local machine with at least 2GB of RAM.
- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/)
- A server or local machine with at least 4GB of RAM (2GB of RAM if you use external Postgres, Redis (Valkey) and Meilisearch instances).
### Installation
@@ -47,7 +81,7 @@ Open Archiver is built on a modern, scalable, and maintainable technology stack:
```bash
git clone https://github.com/LogicLabs-OU/OpenArchiver.git
cd open-archiver
cd OpenArchiver
```
2. **Configure your environment:**
@@ -57,7 +91,7 @@ Open Archiver is built on a modern, scalable, and maintainable technology stack:
cp .env.example .env
```
You will need to edit the `.env` file to set your database passwords, secret keys, and other essential configuration.
You will need to edit the `.env` file to set your admin passwords, secret keys, and other essential configuration. Refer to `.env.example` for guidance on how to set each value.
3. **Run the application:**
@@ -65,21 +99,29 @@ Open Archiver is built on a modern, scalable, and maintainable technology stack:
docker compose up -d
```
This command will build the necessary Docker images and start all the services (frontend, backend, database, etc.) in the background.
This command will pull the pre-built Docker images and start all the services (frontend, backend, database, etc.) in the background.
4. **Access the application:**
Once the services are running, you can access the Open Archiver web interface by navigating to `http://localhost:3000` in your web browser.
## Contributing
## ⚙️ Data Source Configuration
We welcome contributions from the community! Whether you're a developer, a designer, or just an enthusiast, there are many ways to get involved.
After deploying the application, you will need to configure one or more ingestion sources to begin archiving emails. Follow our detailed guides to connect to your email provider:
- **Reporting Bugs**: If you find a bug, please open an issue on our GitHub repository.
- **Suggesting Enhancements**: Have an idea for a new feature? We'd love to hear it. Open an issue to start the discussion.
- **Code Contributions**: If you'd like to contribute code, please fork the repository and submit a pull request.
- [Connecting to Google Workspace](https://docs.openarchiver.com/user-guides/email-providers/google-workspace.html)
- [Connecting to Microsoft 365](https://docs.openarchiver.com/user-guides/email-providers/microsoft-365.html)
- [Connecting to a Generic IMAP Server](https://docs.openarchiver.com/user-guides/email-providers/imap.html)
## 🤝 Contributing
We welcome contributions from the community!
- **Reporting Bugs**: If you find a bug, please open an issue on our GitHub repository.
- **Suggesting Enhancements**: Have an idea for a new feature? We'd love to hear it. Open an issue to start the discussion.
- **Code Contributions**: If you'd like to contribute code, please fork the repository and submit a pull request.
Please read our `CONTRIBUTING.md` file for more details on our code of conduct and the process for submitting pull requests.
## License
## 📈 Star History
This project is licensed under the AGPL-3.0 License.
[![Star History Chart](https://api.star-history.com/svg?repos=LogicLabs-OU/OpenArchiver&type=Date)](https://www.star-history.com/#LogicLabs-OU/OpenArchiver&Date)

View File

@@ -0,0 +1,57 @@
# Dockerfile for the OSS version of Open Archiver
ARG BASE_IMAGE=node:22-alpine
# 0. Base Stage: Define all common dependencies and setup
FROM ${BASE_IMAGE} AS base
WORKDIR /app
# Install pnpm
RUN --mount=type=cache,target=/root/.npm \
npm install -g pnpm
# Copy manifests and lockfile
COPY package.json pnpm-workspace.yaml pnpm-lock.yaml* ./
COPY packages/backend/package.json ./packages/backend/
COPY packages/frontend/package.json ./packages/frontend/
COPY packages/types/package.json ./packages/types/
COPY apps/open-archiver/package.json ./apps/open-archiver/
# 1. Build Stage: Install all dependencies and build the project
FROM base AS build
COPY packages/frontend/svelte.config.js ./packages/frontend/
# Install all dependencies.
ENV PNPM_HOME="/pnpm"
RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
pnpm install --shamefully-hoist --frozen-lockfile --prod=false
# Copy the rest of the source code
COPY . .
# Build the OSS packages.
RUN pnpm build:oss
# 2. Production Stage: Install only production dependencies and copy built artifacts
FROM base AS production
# Copy built application from build stage
COPY --from=build /app/packages/backend/dist ./packages/backend/dist
COPY --from=build /app/packages/backend/drizzle.config.ts ./packages/backend/drizzle.config.ts
COPY --from=build /app/packages/backend/src/database/migrations ./packages/backend/src/database/migrations
COPY --from=build /app/packages/frontend/build ./packages/frontend/build
COPY --from=build /app/packages/types/dist ./packages/types/dist
COPY --from=build /app/apps/open-archiver/dist ./apps/open-archiver/dist
# Copy the entrypoint script and make it executable
COPY docker/docker-entrypoint.sh /usr/local/bin/
# Expose the port the app runs on
EXPOSE 4000
EXPOSE 3000
# Set the entrypoint
ENTRYPOINT ["docker-entrypoint.sh"]
# Start the application
CMD ["pnpm", "docker-start:oss"]

View File

@@ -0,0 +1,24 @@
import { createServer, logger } from '@open-archiver/backend';
import * as dotenv from 'dotenv';
dotenv.config();
async function start() {
// --- Environment Variable Validation ---
const { PORT_BACKEND } = process.env;
if (!PORT_BACKEND) {
throw new Error('Missing required environment variables for the backend: PORT_BACKEND.');
}
// Create the server instance (passing no modules for the default OSS version)
const app = await createServer([]);
app.listen(PORT_BACKEND, () => {
logger.info({}, `✅ Open Archiver (OSS) running on port ${PORT_BACKEND}`);
});
}
start().catch((error) => {
logger.error({ error }, 'Failed to start the server:', error);
process.exit(1);
});

View File

@@ -0,0 +1,18 @@
{
"name": "open-archiver-app",
"version": "1.0.0",
"private": true,
"scripts": {
"dev": "ts-node-dev --respawn --transpile-only index.ts",
"build": "tsc",
"start": "node dist/index.js"
},
"dependencies": {
"@open-archiver/backend": "workspace:*",
"dotenv": "^17.2.0"
},
"devDependencies": {
"@types/dotenv": "^8.2.3",
"ts-node-dev": "^2.0.0"
}
}

View File

@@ -0,0 +1,8 @@
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"outDir": "dist"
},
"include": ["./**/*.ts"],
"references": [{ "path": "../../packages/backend" }]
}

Binary files not shown (five new image assets added; sizes: 305 KiB, 120 KiB, 304 KiB, 259 KiB, 199 KiB).

View File

@@ -1,77 +1,73 @@
version: '3.8'
services:
frontend:
build:
context: ./packages/frontend
dockerfile: Dockerfile
open-archiver:
image: logiclabshq/open-archiver:latest
container_name: open-archiver
restart: unless-stopped
ports:
- '3000:3000'
depends_on:
- backend-api
- '3000:3000' # Frontend
env_file:
- ./.env
backend-api:
build:
context: ./packages/backend
dockerfile: Dockerfile
ports:
- '4000:4000'
- .env
volumes:
- ${STORAGE_LOCAL_ROOT_PATH}:${STORAGE_LOCAL_ROOT_PATH}
depends_on:
- postgres
- redis
env_file:
- ./.env
ingestion-worker:
build:
context: ./packages/backend
dockerfile: Dockerfile
command: 'pnpm ts-node-dev --respawn --transpile-only src/workers/ingestion.worker.ts'
depends_on:
- postgres
- redis
env_file:
- ./.env
indexing-worker:
build:
context: ./packages/backend
dockerfile: Dockerfile
command: 'pnpm ts-node-dev --respawn --transpile-only src/workers/indexing.worker.ts'
depends_on:
- postgres
- redis
env_file:
- ./.env
- valkey
- meilisearch
networks:
- open-archiver-net
postgres:
image: postgres:15
ports:
- '5432:5432'
image: postgres:17-alpine
container_name: postgres
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-open_archive}
POSTGRES_USER: ${POSTGRES_USER:-admin}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
volumes:
- postgres_data:/var/lib/postgresql/data
env_file:
- ./.env
- pgdata:/var/lib/postgresql/data
networks:
- open-archiver-net
redis:
image: redis:7
ports:
- '6379:6379'
valkey:
image: valkey/valkey:8-alpine
container_name: valkey
restart: unless-stopped
command: valkey-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis_data:/data
- valkeydata:/data
networks:
- open-archiver-net
meilisearch:
image: getmeili/meilisearch:v1.3
ports:
- '7700:7700'
image: getmeili/meilisearch:v1.15
container_name: meilisearch
restart: unless-stopped
environment:
MEILI_MASTER_KEY: ${MEILI_MASTER_KEY:-aSampleMasterKey}
MEILI_SCHEDULE_SNAPSHOT: ${MEILI_SCHEDULE_SNAPSHOT:-86400}
volumes:
- meili_data:/meili_data
env_file:
- ./.env
- meilidata:/meili_data
networks:
- open-archiver-net
tika:
image: apache/tika:3.2.2.0-full
container_name: tika
restart: always
networks:
- open-archiver-net
volumes:
postgres_data:
redis_data:
meili_data:
pgdata:
driver: local
valkeydata:
driver: local
meilidata:
driver: local
networks:
open-archiver-net:
driver: bridge

17
docker/docker-entrypoint.sh Executable file
View File

@@ -0,0 +1,17 @@
#!/bin/sh
# Exit immediately if a command exits with a non-zero status
set -e
# Run pnpm install to ensure all dependencies, including native addons,
# are built for the container's architecture. This is crucial for
# multi-platform Docker images, as it prevents "exec format error"
# when running on a different architecture than the one used for building.
pnpm install --frozen-lockfile --prod
# Run database migrations before starting the application to prevent
# race conditions where the app starts before the database is ready.
pnpm db:migrate
# Execute the main container command
exec "$@"

115
docs/.vitepress/config.mts Normal file
View File

@@ -0,0 +1,115 @@
import { defineConfig } from 'vitepress';
export default defineConfig({
head: [
[
'script',
{
defer: '',
src: 'https://analytics.zenceipt.com/script.js',
'data-website-id': '2c8b452e-eab5-4f82-8ead-902d8f8b976f',
},
],
['link', { rel: 'icon', href: '/logo-sq.svg' }],
],
title: 'Open Archiver Docs',
description: 'Official documentation for the Open Archiver project.',
themeConfig: {
search: {
provider: 'local',
},
logo: {
src: '/logo-sq.svg',
},
nav: [
{ text: 'Home', link: '/' },
{ text: 'Github', link: 'https://github.com/LogicLabs-OU/OpenArchiver' },
{ text: 'Website', link: 'https://openarchiver.com/' },
{ text: 'Discord', link: 'https://discord.gg/MTtD7BhuTQ' },
],
sidebar: [
{
text: 'User Guides',
items: [
{ text: 'Get Started', link: '/' },
{ text: 'Installation', link: '/user-guides/installation' },
{ text: 'Email Integrity Check', link: '/user-guides/integrity-check' },
{
text: 'Email Providers',
link: '/user-guides/email-providers/',
collapsed: true,
items: [
{
text: 'Generic IMAP Server',
link: '/user-guides/email-providers/imap',
},
{
text: 'Google Workspace',
link: '/user-guides/email-providers/google-workspace',
},
{
text: 'Microsoft 365',
link: '/user-guides/email-providers/microsoft-365',
},
{ text: 'EML Import', link: '/user-guides/email-providers/eml' },
{ text: 'PST Import', link: '/user-guides/email-providers/pst' },
{ text: 'Mbox Import', link: '/user-guides/email-providers/mbox' },
],
},
{
text: 'Settings',
collapsed: true,
items: [
{
text: 'System',
link: '/user-guides/settings/system',
},
],
},
{
text: 'Upgrading and Migration',
collapsed: true,
items: [
{
text: 'Upgrading',
link: '/user-guides/upgrade-and-migration/upgrade',
},
{
text: 'Meilisearch Upgrade',
link: '/user-guides/upgrade-and-migration/meilisearch-upgrade',
},
],
},
],
},
{
text: 'API Reference',
items: [
{ text: 'Overview', link: '/api/' },
{ text: 'Authentication', link: '/api/authentication' },
{ text: 'Rate Limiting', link: '/api/rate-limiting' },
{ text: 'Auth', link: '/api/auth' },
{ text: 'Archived Email', link: '/api/archived-email' },
{ text: 'Dashboard', link: '/api/dashboard' },
{ text: 'Ingestion', link: '/api/ingestion' },
{ text: 'Integrity Check', link: '/api/integrity' },
{ text: 'Search', link: '/api/search' },
{ text: 'Storage', link: '/api/storage' },
{ text: 'Jobs', link: '/api/jobs' },
],
},
{
text: 'Services',
items: [
{ text: 'Overview', link: '/services/' },
{ text: 'Storage Service', link: '/services/storage-service' },
{ text: 'OCR Service', link: '/services/ocr-service' },
{
text: 'IAM Service',
items: [{ text: 'IAM Policies', link: '/services/iam-service/iam-policy' }],
},
],
},
],
},
});

17
docs/SUMMARY.md Normal file
View File

@@ -0,0 +1,17 @@
# Table of contents
## User guides
- [Get started](index.md)
- [Installation](user-guides/installation.md)
- [email-providers](user-guides/email-providers/index.md)
- [Connecting to Google Workspace](user-guides/email-providers/google-workspace.md)
- [Connecting to a Generic IMAP Server](user-guides/email-providers/imap.md)
- [Connecting to Microsoft 365](user-guides/email-providers/microsoft-365.md)
---
- [api](api/index.md)
- [Ingestion Sources API Documentation](api/ingestion.md)
- [services](services/index.md)
- [Pluggable Storage Service (StorageService)](services/storage-service.md)

107
docs/api/archived-email.md Normal file
View File

@@ -0,0 +1,107 @@
# Archived Email Service API
The Archived Email Service is responsible for retrieving archived emails and their details from the database and storage.
## Endpoints
All endpoints in this service require authentication.
### GET /api/v1/archived-emails/ingestion-source/:ingestionSourceId
Retrieves a paginated list of archived emails for a specific ingestion source.
**Access:** Authenticated
#### URL Parameters
| Parameter | Type | Description |
| :------------------ | :----- | :------------------------------------------------ |
| `ingestionSourceId` | string | The ID of the ingestion source to get emails for. |
#### Query Parameters
| Parameter | Type | Description | Default |
| :-------- | :----- | :------------------------------ | :------ |
| `page` | number | The page number for pagination. | 1 |
| `limit` | number | The number of items per page. | 10 |
#### Responses
- **200 OK:** A paginated list of archived emails.
```json
{
"items": [
{
"id": "email-id",
"subject": "Test Email",
"from": "sender@example.com",
"sentAt": "2023-10-27T10:00:00.000Z",
"hasAttachments": true,
"recipients": [{ "name": "Recipient 1", "email": "recipient1@example.com" }]
}
],
"total": 100,
"page": 1,
"limit": 10
}
```
- **500 Internal Server Error:** An unexpected error occurred.
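As an illustration only, here is a minimal TypeScript sketch of calling this endpoint with pagination; the base URL, bearer token, and source ID are placeholder assumptions, not values taken from this repository:

```typescript
// Minimal sketch: fetch one page of archived emails for an ingestion source.
// baseUrl, token, and ingestionSourceId are placeholders for illustration.
async function listArchivedEmails(
    baseUrl: string,
    token: string,
    ingestionSourceId: string,
    page = 1,
    limit = 10
): Promise<unknown> {
    const url = new URL(
        `/api/v1/archived-emails/ingestion-source/${ingestionSourceId}`,
        baseUrl
    );
    url.searchParams.set('page', String(page));
    url.searchParams.set('limit', String(limit));

    const response = await fetch(url, {
        headers: { Authorization: `Bearer ${token}` },
    });
    if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
    }
    // The body mirrors the 200 OK example above: { items, total, page, limit }.
    return response.json();
}
```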
### GET /api/v1/archived-emails/:id
Retrieves a single archived email by its ID, including its raw content and attachments.
**Access:** Authenticated
#### URL Parameters
| Parameter | Type | Description |
| :-------- | :----- | :---------------------------- |
| `id` | string | The ID of the archived email. |
#### Responses
- **200 OK:** The archived email details.
```json
{
"id": "email-id",
"subject": "Test Email",
"from": "sender@example.com",
"sentAt": "2023-10-27T10:00:00.000Z",
"hasAttachments": true,
"recipients": [{ "name": "Recipient 1", "email": "recipient1@example.com" }],
"raw": "...",
"attachments": [
{
"id": "attachment-id",
"filename": "document.pdf",
"mimeType": "application/pdf",
"sizeBytes": 12345
}
]
}
```
- **404 Not Found:** The archived email with the specified ID was not found.
- **500 Internal Server Error:** An unexpected error occurred.
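A corresponding sketch for retrieving a single email follows; the base URL and token are again placeholders, and the interface below simply mirrors the example payload above rather than the canonical type definition:

```typescript
// Sketch: retrieve one archived email, including raw content and attachment metadata.
// The interface mirrors the documented example response; it is not the canonical type.
interface ArchivedEmailDetail {
    id: string;
    subject: string;
    from: string;
    sentAt: string;
    hasAttachments: boolean;
    recipients: { name: string; email: string }[];
    raw: string;
    attachments: { id: string; filename: string; mimeType: string; sizeBytes: number }[];
}

async function getArchivedEmail(
    baseUrl: string,
    token: string,
    id: string
): Promise<ArchivedEmailDetail> {
    const response = await fetch(`${baseUrl}/api/v1/archived-emails/${id}`, {
        headers: { Authorization: `Bearer ${token}` },
    });
    if (response.status === 404) {
        throw new Error(`Archived email ${id} was not found.`);
    }
    if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
    }
    return (await response.json()) as ArchivedEmailDetail;
}
```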
## Service Methods
### `getArchivedEmails(ingestionSourceId: string, page: number, limit: number): Promise<PaginatedArchivedEmails>`
Retrieves a paginated list of archived emails from the database for a given ingestion source.
- **ingestionSourceId:** The ID of the ingestion source.
- **page:** The page number for pagination.
- **limit:** The number of items per page.
- **Returns:** A promise that resolves to a `PaginatedArchivedEmails` object.
### `getArchivedEmailById(emailId: string): Promise<ArchivedEmail | null>`
Retrieves a single archived email by its ID, including its raw content and attachments.
- **emailId:** The ID of the archived email.
- **Returns:** A promise that resolves to an `ArchivedEmail` object or `null` if not found.

84
docs/api/auth.md Normal file
View File

@@ -0,0 +1,84 @@
# Auth Service API
The Auth Service is responsible for handling user authentication, including login and token verification.
## Endpoints
### POST /api/v1/auth/login
Authenticates a user and returns a JWT if the credentials are valid.
**Access:** Public
**Rate Limiting:** This endpoint is rate-limited to prevent brute-force attacks.
#### Request Body
| Field | Type | Description |
| :--------- | :----- | :------------------------ |
| `email` | string | The user's email address. |
| `password` | string | The user's password. |
#### Responses
- **200 OK:** Authentication successful.
```json
{
"accessToken": "your.jwt.token",
"user": {
"id": "user-id",
"email": "user@example.com",
"role": "user"
}
}
```
- **400 Bad Request:** Email or password not provided.
```json
{
"message": "Email and password are required"
}
```
- **401 Unauthorized:** Invalid credentials.
```json
{
"message": "Invalid credentials"
}
```
- **500 Internal Server Error:** An unexpected error occurred.
```json
{
"message": "An internal server error occurred"
}
```
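For example, a hedged TypeScript sketch of this login flow; the base URL is a placeholder for a local deployment and the response shape mirrors the 200 OK example above:

```typescript
// Sketch: authenticate and keep the returned JWT for later requests.
// http://localhost:4000 is a placeholder base URL, not a required value.
interface LoginResponse {
    accessToken: string;
    user: { id: string; email: string; role: string };
}

async function login(email: string, password: string): Promise<LoginResponse> {
    const response = await fetch('http://localhost:4000/api/v1/auth/login', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ email, password }),
    });
    if (response.status === 400 || response.status === 401) {
        // The API returns a JSON body with a "message" field for these errors.
        throw new Error((await response.json()).message);
    }
    if (!response.ok) {
        throw new Error(`Login failed with status ${response.status}`);
    }
    return (await response.json()) as LoginResponse;
}
```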
## Service Methods
### `verifyPassword(password: string, hash: string): Promise<boolean>`
Compares a plain-text password with a hashed password to verify its correctness.
- **password:** The plain-text password.
- **hash:** The hashed password to compare against.
- **Returns:** A promise that resolves to `true` if the password is valid, otherwise `false`.
### `login(email: string, password: string): Promise<LoginResponse | null>`
Handles the user login process. It finds the user by email, verifies the password, and generates a JWT upon successful authentication.
- **email:** The user's email.
- **password:** The user's password.
- **Returns:** A promise that resolves to a `LoginResponse` object containing the `accessToken` and `user` details, or `null` if authentication fails.
### `verifyToken(token: string): Promise<AuthTokenPayload | null>`
Verifies the authenticity and expiration of a JWT.
- **token:** The JWT string to verify.
- **Returns:** A promise that resolves to the token's `AuthTokenPayload` if valid, otherwise `null`.

View File

@@ -0,0 +1,25 @@
# API Authentication
To access protected API endpoints, you need to include a user-generated API key in the `X-API-KEY` header of your requests.
## 1. Creating an API Key
You can create, manage, and view your API keys through the application's user interface.
1. Navigate to **Settings > API Keys** in the dashboard.
2. Click the **"Generate API Key"** button.
3. Provide a descriptive name for your key and select an expiration period.
4. The new API key will be displayed. **Copy this key immediately and store it in a secure location. You will not be able to see it again.**
## 2. Making Authenticated Requests
Once you have your API key, you must include it in the `X-API-KEY` header of all subsequent requests to protected API endpoints.
**Example:**
```http
GET /api/v1/dashboard/stats
X-API-KEY: a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2
```
If the API key is missing, expired, or invalid, the API will respond with a `401 Unauthorized` status code.
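The same request expressed as a small TypeScript sketch; the base URL and the environment variable names are illustrative assumptions, not settings defined by this project:

```typescript
// Sketch: call a protected endpoint with an API key in the X-API-KEY header.
// OPEN_ARCHIVER_URL and OPEN_ARCHIVER_API_KEY are placeholder names.
const baseUrl = process.env.OPEN_ARCHIVER_URL ?? 'http://localhost:3000';
const apiKey = process.env.OPEN_ARCHIVER_API_KEY ?? '';

async function getStats(): Promise<unknown> {
    const response = await fetch(`${baseUrl}/api/v1/dashboard/stats`, {
        headers: { 'X-API-KEY': apiKey },
    });
    if (response.status === 401) {
        throw new Error('API key is missing, expired, or invalid.');
    }
    return response.json();
}
```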

114
docs/api/dashboard.md Normal file
View File

@@ -0,0 +1,114 @@
# Dashboard Service API
The Dashboard Service provides endpoints for retrieving statistics and data for the main dashboard.
## Endpoints
All endpoints in this service require authentication.
### GET /api/v1/dashboard/stats
Retrieves overall statistics, including the total number of archived emails, total storage used, and the number of failed ingestions in the last 7 days.
**Access:** Authenticated
#### Responses
- **200 OK:** An object containing the dashboard statistics.
```json
{
"totalEmailsArchived": 12345,
"totalStorageUsed": 54321098,
"failedIngestionsLast7Days": 3
}
```
### GET /api/v1/dashboard/ingestion-history
Retrieves the email ingestion history for the last 30 days, grouped by day.
**Access:** Authenticated
#### Responses
- **200 OK:** An object containing the ingestion history.
```json
{
"history": [
{
"date": "2023-09-27T00:00:00.000Z",
"count": 150
},
{
"date": "2023-09-28T00:00:00.000Z",
"count": 200
}
]
}
```
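As a usage illustration, a minimal TypeScript sketch that pulls this history and sums the daily counts; the base URL and API key are placeholders, and authentication follows the API key guide:

```typescript
// Sketch: fetch the 30-day ingestion history and total the daily counts.
// baseUrl and apiKey are placeholders for illustration.
interface IngestionHistoryPoint {
    date: string;
    count: number;
}

async function totalIngestedLast30Days(baseUrl: string, apiKey: string): Promise<number> {
    const response = await fetch(`${baseUrl}/api/v1/dashboard/ingestion-history`, {
        headers: { 'X-API-KEY': apiKey },
    });
    if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
    }
    const { history } = (await response.json()) as { history: IngestionHistoryPoint[] };
    return history.reduce((sum, point) => sum + point.count, 0);
}
```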
### GET /api/v1/dashboard/ingestion-sources
Retrieves a list of all ingestion sources along with their status and storage usage.
**Access:** Authenticated
#### Responses
- **200 OK:** An array of ingestion source objects.
```json
[
{
"id": "source-id-1",
"name": "Google Workspace",
"provider": "google",
"status": "active",
"storageUsed": 12345678
},
{
"id": "source-id-2",
"name": "Microsoft 365",
"provider": "microsoft",
"status": "error",
"storageUsed": 87654321
}
]
```
### GET /api/v1/dashboard/recent-syncs
Retrieves a list of recent synchronization jobs. (Note: This is currently a placeholder and will return an empty array).
**Access:** Authenticated
#### Responses
- **200 OK:** An empty array.
```json
[]
```
### GET /api/v1/dashboard/indexed-insights
Retrieves insights from the indexed email data, such as the top senders.
**Access:** Authenticated
#### Responses
- **200 OK:** An object containing indexed insights.
```json
{
"topSenders": [
{
"sender": "user@example.com",
"count": 42
}
]
}
```

18
docs/api/index.md Normal file
View File

@@ -0,0 +1,18 @@
# API Overview
Welcome to the Open Archiver API documentation. This section provides detailed information about the available API endpoints.
All API endpoints are prefixed with `/api/v1`.
## Authentication
Before making requests to protected endpoints, you must authenticate with the API. See the [Authentication Guide](./authentication.md) for details on how to obtain and use API tokens.
## API Services
- [**Auth Service**](./auth.md): Handles user authentication.
- [**Archived Email Service**](./archived-email.md): Manages archived emails.
- [**Dashboard Service**](./dashboard.md): Provides data for the main dashboard.
- [**Ingestion Service**](./ingestion.md): Manages email ingestion sources.
- [**Search Service**](./search.md): Handles email search functionality.
- [**Storage Service**](./storage.md): Manages file storage and downloads.

View File

@@ -1,206 +1,196 @@
# Ingestion Sources API Documentation
# Ingestion Service API
A comprehensive guide to using the Ingestion Sources API.
The Ingestion Service manages ingestion sources, which are configurations for connecting to email providers and importing emails.
**Base Path:** `/v1/ingestion-sources`
## Endpoints
---
All endpoints in this service require authentication.
## Authentication
### POST /api/v1/ingestion-sources
All endpoints in this API are protected and require authentication. Requests must include an `Authorization` header containing a valid Bearer token. This can be a JWT obtained from the login endpoint or a `SUPER_API_KEY` for administrative tasks.
Creates a new ingestion source.
**Header Example:**
`Authorization: Bearer <YOUR_JWT_OR_SUPER_API_KEY>`
**Access:** Authenticated
---
#### Request Body
## Core Concepts
The request body should be a `CreateIngestionSourceDto` object.
### Ingestion Providers
The `provider` field determines the type of email source. Each provider requires a different configuration object, for example:
- `google_workspace`: For connecting to Google Workspace accounts via OAuth 2.0.
- `microsoft_365`: For connecting to Microsoft 365 accounts via OAuth 2.0.
- `generic_imap`: For connecting to any email server that supports IMAP.
### Ingestion Status
The `status` field tracks the state of the ingestion source.
- `pending_auth`: The source has been created but requires user authorization (OAuth flow).
- `active`: The source is authenticated and ready to sync.
- `syncing`: An import job is currently in progress.
- `importing`: The initial import is in progress.
- `paused`: The source is temporarily disabled.
- `error`: An error occurred during the last sync.
---
## 1. Create Ingestion Source
- **Method:** `POST`
- **Path:** `/`
- **Description:** Registers a new source for email ingestion. The `providerConfig` will vary based on the selected `provider`.
#### Request Body (`CreateIngestionSourceDto`)
- `name` (string, required): A user-friendly name for the source (e.g., "Marketing Department G-Suite").
- `provider` (string, required): One of `google_workspace`, `microsoft_365`, or `generic_imap`.
- `providerConfig` (object, required): Configuration specific to the provider.
##### `providerConfig` for `google_workspace` / `microsoft_365`
```json
{
"name": "Corporate Google Workspace",
"provider": "google_workspace",
"providerConfig": {
"clientId": "your-oauth-client-id.apps.googleusercontent.com",
"clientSecret": "your-super-secret-client-secret",
"redirectUri": "https://yourapp.com/oauth/google/callback"
}
```typescript
interface CreateIngestionSourceDto {
name: string;
provider: 'google_workspace' | 'microsoft_365' | 'generic_imap' | 'pst_import' | 'eml_import' | 'mbox_import';
providerConfig: IngestionCredentials;
}
```
##### `providerConfig` for `generic_imap`
#### Example: Creating an Mbox Import Source with File Upload
```json
{
"name": "Legacy IMAP Server",
"provider": "generic_imap",
"providerConfig": {
"host": "imap.example.com",
"port": 993,
"secure": true,
"username": "archive-user",
"password": "imap-password"
}
"name": "My Mbox Import",
"provider": "mbox_import",
"providerConfig": {
"type": "mbox_import",
"uploadedFileName": "emails.mbox",
"uploadedFilePath": "open-archiver/tmp/uuid-emails.mbox"
}
}
```
#### Example: Creating an Mbox Import Source with Local File Path
```json
{
"name": "My Mbox Import",
"provider": "mbox_import",
"providerConfig": {
"type": "mbox_import",
"localFilePath": "/path/to/emails.mbox"
}
}
```
**Note:** When using `localFilePath`, the file will not be deleted after import. When using `uploadedFilePath` (via the upload API), the file will be automatically deleted after import. The same applies to `pst_import` and `eml_import` providers.
**Important regarding `localFilePath`:** When running OpenArchiver in a Docker container (which is the standard deployment), `localFilePath` refers to the path **inside the Docker container**, not on the host machine.
To use a local file:
1. **Recommended:** Place your file inside the directory defined by `STORAGE_LOCAL_ROOT_PATH` (e.g., inside a `temp` folder). Since this directory is already mounted as a volume, the file will be accessible at the same path inside the container.
2. **Alternative:** Mount a specific directory containing your files as a volume in `docker-compose.yml`. For example, add `- /path/to/my/files:/imports` to the `volumes` section and use `/imports/myfile.pst` as the `localFilePath`.
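Tying the above together, a hedged TypeScript sketch that registers an Mbox source pointing at a file mounted into the container; the path, base URL, and bearer token are placeholder assumptions:

```typescript
// Sketch: create an mbox_import ingestion source that reads a file mounted
// into the container (e.g. via a docker-compose volume). All values below
// are placeholders for illustration.
async function createMboxSource(baseUrl: string, token: string): Promise<unknown> {
    const body = {
        name: 'My Mbox Import',
        provider: 'mbox_import',
        providerConfig: {
            type: 'mbox_import',
            localFilePath: '/imports/emails.mbox',
        },
    };

    const response = await fetch(`${baseUrl}/api/v1/ingestion-sources`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${token}`,
        },
        body: JSON.stringify(body),
    });
    if (!response.ok) {
        throw new Error(`Failed to create ingestion source: ${response.status}`);
    }
    // A 201 Created response returns the newly created ingestion source.
    return response.json();
}
```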
#### Responses
- **Success (`201 Created`):** Returns the full `IngestionSource` object, which now includes a system-generated `id` and default status.
```json
{
	"id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
	"name": "Corporate Google Workspace",
	"provider": "google_workspace",
	"status": "pending_auth",
	"createdAt": "2025-07-11T12:00:00.000Z",
	"updatedAt": "2025-07-11T12:00:00.000Z",
	"providerConfig": { ... }
}
```
- **Error (`500 Internal Server Error`):** Indicates a server-side problem during creation.
---
## 2. Get All Ingestion Sources
- **Method:** `GET`
- **Path:** `/`
- **Description:** Retrieves a list of all configured ingestion sources for the organization.
#### Responses
- **Success (`200 OK`):** Returns an array of `IngestionSource` objects.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 3. Get Ingestion Source by ID
- **Method:** `GET`
- **Path:** `/:id`
- **Description:** Fetches the details of a specific ingestion source.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source.
#### Responses
- **Success (`200 OK`):** Returns the corresponding `IngestionSource` object.
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 4. Update Ingestion Source
- **Method:** `PUT`
- **Path:** `/:id`
- **Description:** Modifies an existing ingestion source. This is useful for changing the name, pausing a source, or updating its configuration.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source to update.
#### Request Body (`UpdateIngestionSourceDto`)
All fields are optional. Only include the fields you want to change.
```typescript
interface UpdateIngestionSourceDto {
	name?: string;
	provider?: 'google_workspace' | 'microsoft_365' | 'generic_imap' | 'pst_import' | 'eml_import' | 'mbox_import';
	providerConfig?: IngestionCredentials;
	status?: 'pending_auth' | 'auth_success' | 'importing' | 'active' | 'paused' | 'error';
}
```
```json
{
	"name": "Marketing Dept G-Suite (Paused)",
	"status": "paused"
}
```
#### Responses
- **Success (`200 OK`):** Returns the complete, updated `IngestionSource` object.
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 5. Delete Ingestion Source
- **Method:** `DELETE`
- **Path:** `/:id`
- **Description:** Permanently removes an ingestion source and all of its associated data. This action cannot be undone.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source to delete.
#### Responses
- **Success (`204 No Content`):** Indicates successful deletion with no body content.
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 6. Trigger Initial Import
- **Method:** `POST`
- **Path:** `/:id/import`
- **Description:** Triggers the initial import process for an ingestion source. This is an asynchronous operation that enqueues a background job and immediately returns a response. The status of the source will be updated to `importing`.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source to import.
#### Responses
- **Success (`202 Accepted`):** Confirms that the import request has been accepted for processing.
```json
{
	"message": "Initial import triggered successfully."
}
```
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 7. Pause Ingestion Source
- **Method:** `POST`
- **Path:** `/:id/pause`
- **Description:** Pauses an active ingestion source.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source to pause.
#### Responses
- **Success (`200 OK`):** Returns the updated ingestion source object with a `paused` status.
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.
---
## 8. Trigger Sync
- **Method:** `POST`
- **Path:** `/:id/sync`
- **Description:** Triggers a forced synchronization for an ingestion source.
**Access:** Authenticated
#### URL Parameters
- `id` (string, required): The UUID of the ingestion source to sync.
#### Responses
- **Success (`202 Accepted`):** Confirms that the sync request has been accepted for processing.
- **Error (`404 Not Found`):** Returned if no source with the given ID exists.
- **Error (`500 Internal Server Error`):** Indicates a server-side problem.

docs/api/integrity.md Normal file
@@ -0,0 +1,51 @@
# Integrity Check API
The Integrity Check API provides an endpoint to verify the cryptographic hash of an archived email and its attachments against the stored values in the database. This allows you to ensure that the stored files have not been tampered with or corrupted since they were archived.
## Check Email Integrity
Verifies the integrity of a specific archived email and all of its associated attachments.
- **URL:** `/api/v1/integrity/:id`
- **Method:** `GET`
- **URL Params:**
- `id=[string]` (required) - The UUID of the archived email to check.
- **Permissions:** `read:archive`
- **Success Response:**
- **Code:** 200 OK
- **Content:** `IntegrityCheckResult[]`
### Response Body `IntegrityCheckResult`
An array of objects, each representing the result of an integrity check for a single file (either the email itself or an attachment).
| Field | Type | Description |
| :--------- | :------------------------ | :-------------------------------------------------------------------------- |
| `type` | `'email' \| 'attachment'` | The type of the file being checked. |
| `id` | `string` | The UUID of the email or attachment. |
| `filename` | `string` (optional) | The filename of the attachment. This field is only present for attachments. |
| `isValid` | `boolean` | `true` if the current hash matches the stored hash, otherwise `false`. |
| `reason` | `string` (optional) | A reason for the failure. Only present if `isValid` is `false`. |
### Example Response
```json
[
{
"type": "email",
"id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
"isValid": true
},
{
"type": "attachment",
"id": "b2c3d4e5-f6a7-8901-2345-67890abcdef1",
"filename": "document.pdf",
"isValid": false,
"reason": "Stored hash does not match current hash."
}
]
```
- **Error Response:**
- **Code:** 404 Not Found
- **Content:** `{ "message": "Archived email not found" }`

docs/api/jobs.md Normal file
@@ -0,0 +1,128 @@
# Jobs API
The Jobs API provides endpoints for monitoring the job queues and the jobs within them.
## Overview
Open Archiver uses a job queue system to handle asynchronous tasks like email ingestion and indexing. The system is built on Redis and BullMQ and uses a producer-consumer pattern.
### Job Statuses
Jobs can have one of the following statuses:
- **active:** The job is currently being processed.
- **completed:** The job has been completed successfully.
- **failed:** The job has failed after all retry attempts.
- **delayed:** The job is delayed and will be processed at a later time.
- **waiting:** The job is waiting to be processed.
- **paused:** The job is paused and will not be processed until it is resumed.
### Errors
When a job fails, the `failedReason` and `stacktrace` fields will contain information about the error. The `error` field will also be populated with the `failedReason` for easier access.
### Job Preservation
Jobs are preserved for a limited time after they complete or fail, so the job counts and job lists returned by the API only reflect this retention window.
- **Completed jobs:** The last 1000 completed jobs are preserved.
- **Failed jobs:** The last 5000 failed jobs are preserved.
## Get All Queues
- **Endpoint:** `GET /v1/jobs/queues`
- **Description:** Retrieves a list of all job queues and their job counts.
- **Permissions:** `manage:all`
- **Responses:**
- `200 OK`: Returns a list of queue overviews.
- `401 Unauthorized`: If the user is not authenticated.
- `403 Forbidden`: If the user does not have the required permissions.
### Response Body
```json
{
"queues": [
{
"name": "ingestion",
"counts": {
"active": 0,
"completed": 56,
"failed": 4,
"delayed": 3,
"waiting": 0,
"paused": 0
}
},
{
"name": "indexing",
"counts": {
"active": 0,
"completed": 0,
"failed": 0,
"delayed": 0,
"waiting": 0,
"paused": 0
}
}
]
}
```
## Get Queue Jobs
- **Endpoint:** `GET /v1/jobs/queues/:queueName`
- **Description:** Retrieves a list of jobs within a specific queue, with pagination and filtering by status.
- **Permissions:** `manage:all`
- **URL Parameters:**
- `queueName` (string, required): The name of the queue to retrieve jobs from.
- **Query Parameters:**
- `status` (string, optional): The status of the jobs to retrieve. Can be one of `active`, `completed`, `failed`, `delayed`, `waiting`, `paused`. Defaults to `failed`.
- `page` (number, optional): The page number to retrieve. Defaults to `1`.
- `limit` (number, optional): The number of jobs to retrieve per page. Defaults to `10`.
- **Responses:**
- `200 OK`: Returns a detailed view of the queue, including a paginated list of jobs.
- `401 Unauthorized`: If the user is not authenticated.
- `403 Forbidden`: If the user does not have the required permissions.
- `404 Not Found`: If the specified queue does not exist.
### Response Body
```json
{
"name": "ingestion",
"counts": {
"active": 0,
"completed": 56,
"failed": 4,
"delayed": 3,
"waiting": 0,
"paused": 0
},
"jobs": [
{
"id": "1",
"name": "initial-import",
"data": {
"ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8"
},
"state": "failed",
"failedReason": "Error: Connection timed out",
"timestamp": 1678886400000,
"processedOn": 1678886401000,
"finishedOn": 1678886402000,
"attemptsMade": 5,
"stacktrace": ["..."],
"returnValue": null,
"ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8",
"error": "Error: Connection timed out"
}
],
"pagination": {
"currentPage": 1,
"totalPages": 1,
"totalJobs": 4,
"limit": 10
}
}
```
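The sketch below shows how a client might page through the failed jobs of the `ingestion` queue using the parameters described above. The base URL and bearer-token header are illustrative assumptions.
```typescript
// Minimal sketch: collecting all failed jobs from the "ingestion" queue.
async function listFailedIngestionJobs(baseUrl: string, token: string): Promise<unknown[]> {
	const failed: unknown[] = [];
	let page = 1;
	let totalPages = 1;
	while (page <= totalPages) {
		const url = `${baseUrl}/v1/jobs/queues/ingestion?status=failed&page=${page}&limit=10`;
		const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
		if (!res.ok) throw new Error(`Request failed: ${res.status}`);
		const body = await res.json();
		failed.push(...body.jobs);
		totalPages = body.pagination.totalPages;
		page += 1;
	}
	return failed;
}
```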

docs/api/rate-limiting.md Normal file
@@ -0,0 +1,51 @@
# Rate Limiting
The API implements rate limiting as a security measure to protect your instance from denial-of-service (DoS) and brute-force attacks. This is a crucial feature for maintaining the security and stability of the application.
## How It Works
The rate limiter restricts the number of requests an IP address can make within a specific time frame. These limits are configurable via environment variables to suit your security needs.
By default, the limits are:
- **100 requests** per **1 minute** per IP address.
If this limit is exceeded, the API will respond with an HTTP `429 Too Many Requests` status code.
### Response Body
When an IP address is rate-limited, the API will return a JSON response with the following format:
```json
{
"status": 429,
"message": "Too many requests from this IP, please try again after 15 minutes"
}
```
## Configuration
You can customize the rate-limiting settings by setting the following environment variables in your `.env` file:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds. Defaults to `60000` (1 minute).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed per IP address within the time window. Defaults to `100`.
## Handling Rate Limits
If you are developing a client that interacts with the API, you should handle rate limiting gracefully:
1. **Check the Status Code**: Monitor for a `429` HTTP status code in responses.
2. **Implement a Retry Mechanism**: When you receive a `429` response, it is best practice to wait before retrying the request. Implementing an exponential backoff strategy is recommended (a sketch follows this list).
3. **Check Headers**: The response will include the following standard headers to help you manage your request rate:
- `RateLimit-Limit`: The maximum number of requests allowed in the current window.
- `RateLimit-Remaining`: The number of requests you have left in the current window.
- `RateLimit-Reset`: The time when the rate limit window will reset, in UTC epoch seconds.
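The following TypeScript sketch combines points 2 and 3: it retries on `429`, preferring the `RateLimit-Reset` header when present and falling back to exponential backoff. The jitter and retry cap are illustrative choices, not part of the API.
```typescript
// Minimal sketch: fetch wrapper with 429-aware exponential backoff.
async function fetchWithBackoff(url: string, init: RequestInit = {}, maxRetries = 5): Promise<Response> {
	for (let attempt = 0; attempt <= maxRetries; attempt++) {
		const res = await fetch(url, init);
		if (res.status !== 429) return res;

		// RateLimit-Reset is in UTC epoch seconds; fall back to exponential backoff.
		const reset = Number(res.headers.get('RateLimit-Reset'));
		const waitMs =
			Number.isFinite(reset) && reset > 0
				? Math.max(0, reset * 1000 - Date.now())
				: 2 ** attempt * 1000 + Math.random() * 250;
		await new Promise((resolve) => setTimeout(resolve, waitMs));
	}
	throw new Error(`Still rate-limited after ${maxRetries} retries: ${url}`);
}
```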
## Excluded Endpoints
Certain essential endpoints are excluded from rate limiting to ensure the application's UI remains responsive. These are:
- `/auth/status`
- `/settings/system`
These endpoints can be called as needed without affecting your rate limit count.

docs/api/search.md Normal file
@@ -0,0 +1,50 @@
# Search Service API
The Search Service provides an endpoint for searching indexed emails.
## Endpoints
All endpoints in this service require authentication.
### GET /api/v1/search
Performs a search query against the indexed emails.
**Access:** Authenticated
#### Query Parameters
| Parameter | Type | Description | Default |
| :----------------- | :----- | :--------------------------------------------------------------------- | :------ |
| `keywords` | string | The search query. | |
| `page` | number | The page number for pagination. | 1 |
| `limit` | number | The number of items per page. | 10 |
| `matchingStrategy` | string | The matching strategy to use (`all` or `last`). | `last` |
| `filters` | object | Key-value pairs for filtering results (e.g., `from=user@example.com`). | |
#### Responses
- **200 OK:** A search result object.
```json
{
"hits": [
{
"id": "email-id",
"subject": "Test Email",
"from": "sender@example.com",
"_formatted": {
"subject": "<em>Test</em> Email"
}
}
],
"total": 1,
"page": 1,
"limit": 10,
"totalPages": 1,
"processingTimeMs": 5
}
```
- **400 Bad Request:** Keywords are required.
- **500 Internal Server Error:** An unexpected error occurred.
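A minimal TypeScript sketch of a search request follows. The base URL and bearer-token header are illustrative assumptions, and the bracket notation used to serialize the `filters` object is an assumption as well; adjust it to however your client encodes nested query parameters.
```typescript
// Minimal sketch: paginated keyword search with a single filter.
async function searchEmails(baseUrl: string, token: string, keywords: string, page = 1) {
	const params = new URLSearchParams({
		keywords,
		page: String(page),
		limit: '10',
		matchingStrategy: 'last',
	});
	params.append('filters[from]', 'user@example.com'); // serialization format assumed

	const res = await fetch(`${baseUrl}/api/v1/search?${params.toString()}`, {
		headers: { Authorization: `Bearer ${token}` },
	});
	if (res.status === 400) throw new Error('Keywords are required.');
	if (!res.ok) throw new Error(`Search failed: ${res.status}`);
	return res.json(); // { hits, total, page, limit, totalPages, processingTimeMs }
}
```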

docs/api/storage.md Normal file
@@ -0,0 +1,26 @@
# Storage Service API
The Storage Service provides an endpoint for downloading files from the configured storage provider.
## Endpoints
All endpoints in this service require authentication.
### GET /api/v1/storage/download
Downloads a file from the storage.
**Access:** Authenticated
#### Query Parameters
| Parameter | Type | Description |
| :-------- | :----- | :------------------------------------------------ |
| `path` | string | The path to the file within the storage provider. |
#### Responses
- **200 OK:** The file stream.
- **400 Bad Request:** File path is required or invalid.
- **404 Not Found:** File not found.
- **500 Internal Server Error:** An unexpected error occurred.

@@ -0,0 +1,78 @@
# Audit Log: API Endpoints
The audit log feature exposes two API endpoints for retrieving and verifying audit log data. Both endpoints require authentication and are only accessible to users with the appropriate permissions.
## Get Audit Logs
Retrieves a paginated list of audit log entries, with support for filtering and sorting.
- **Endpoint:** `GET /api/v1/enterprise/audit-logs`
- **Method:** `GET`
- **Authentication:** Required
### Query Parameters
| Parameter | Type | Description |
| ------------ | -------- | --------------------------------------------------------------------------- |
| `page` | `number` | The page number to retrieve. Defaults to `1`. |
| `limit` | `number` | The number of entries to retrieve per page. Defaults to `20`. |
| `startDate` | `date` | The start date for the date range filter. |
| `endDate` | `date` | The end date for the date range filter. |
| `actor` | `string` | The actor identifier to filter by. |
| `actionType` | `string` | The action type to filter by (e.g., `LOGIN`, `CREATE`). |
| `sort` | `string` | The sort order for the results. Can be `asc` or `desc`. Defaults to `desc`. |
### Response Body
```json
{
"data": [
{
"id": 1,
"previousHash": null,
"timestamp": "2025-10-03T00:00:00.000Z",
"actorIdentifier": "e8026a75-b58a-4902-8858-eb8780215f82",
"actorIp": "::1",
"actionType": "LOGIN",
"targetType": "User",
"targetId": "e8026a75-b58a-4902-8858-eb8780215f82",
"details": {},
"currentHash": "..."
}
],
"meta": {
"total": 100,
"page": 1,
"limit": 20
}
}
```
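The sketch below queries this endpoint with a date-range and action-type filter. The base URL, the bearer-token header, and the ISO date strings are illustrative assumptions.
```typescript
// Minimal sketch: fetching the first page of LOGIN events for October 2025.
async function getLoginAuditLogs(baseUrl: string, token: string) {
	const params = new URLSearchParams({
		page: '1',
		limit: '20',
		actionType: 'LOGIN',
		startDate: '2025-10-01', // date format assumed
		endDate: '2025-10-31',
		sort: 'desc',
	});
	const res = await fetch(`${baseUrl}/api/v1/enterprise/audit-logs?${params}`, {
		headers: { Authorization: `Bearer ${token}` },
	});
	if (!res.ok) throw new Error(`Audit log request failed: ${res.status}`);
	const { data, meta } = await res.json();
	console.log(`Fetched ${data.length} of ${meta.total} entries`);
	return data;
}
```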
## Verify Audit Log Integrity
Initiates a verification process to check the integrity of the entire audit log chain.
- **Endpoint:** `POST /api/v1/enterprise/audit-logs/verify`
- **Method:** `POST`
- **Authentication:** Required
### Response Body
**Success**
```json
{
"ok": true,
"message": "Audit log integrity verified successfully."
}
```
**Failure**
```json
{
"ok": false,
"message": "Audit log chain is broken!",
"logId": 123
}
```

@@ -0,0 +1,31 @@
# Audit Log: Backend Implementation
The backend implementation of the audit log is handled by the `AuditService`, located in `packages/backend/src/services/AuditService.ts`. This service encapsulates all the logic for creating, retrieving, and verifying audit log entries.
## Hashing and Verification Logic
The core of the audit log's immutability lies in its hashing and verification logic.
### Hash Calculation
The `calculateHash` method is responsible for generating a SHA-256 hash of a log entry. To ensure consistency, it performs the following steps:
1. **Canonical Object Creation:** It constructs a new object with a fixed property order, ensuring that the object's structure is always the same.
2. **Timestamp Normalization:** It converts the `timestamp` to milliseconds since the epoch (`getTime()`) to avoid any precision-related discrepancies between the application and the database.
3. **Canonical Stringification:** It uses a custom `canonicalStringify` function to create a JSON string representation of the object. This function sorts the object keys, ensuring that the output is always the same, regardless of the in-memory property order.
4. **Hash Generation:** It computes a SHA-256 hash of the canonical string.
### Verification Process
The `verifyAuditLog` method is designed to be highly scalable and efficient, even with millions of log entries. It processes the logs in manageable chunks (e.g., 1000 at a time) to avoid loading the entire table into memory.
The verification process involves the following steps (a simplified sketch follows the list):
1. **Iterative Processing:** It fetches the logs in batches within a `while` loop.
2. **Chain Verification:** For each log entry, it compares the `previousHash` with the `currentHash` of the preceding log. If they do not match, the chain is broken, and the verification fails.
3. **Hash Recalculation:** It recalculates the hash of the current log entry using the same `calculateHash` method used during creation.
4. **Integrity Check:** It compares the recalculated hash with the `currentHash` stored in the database. If they do not match, the log entry has been tampered with, and the verification fails.
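To make the hashing and chain-verification steps concrete, here is a conceptual TypeScript sketch. The field list, the key-sorting `canonicalStringify`, and the chunked `verifyChunk` helper are simplified assumptions, not the exact `AuditService` implementation.
```typescript
import { createHash } from 'node:crypto';

// Simplified audit log row; the real schema contains additional fields.
interface AuditLogRow {
	id: number;
	previousHash: string | null;
	timestamp: Date;
	actorIdentifier: string;
	actionType: string;
	currentHash: string;
}

// Deterministic JSON stringification: object keys are sorted recursively.
function canonicalStringify(value: unknown): string {
	if (value === null || typeof value !== 'object') return JSON.stringify(value);
	if (Array.isArray(value)) return `[${value.map(canonicalStringify).join(',')}]`;
	const record = value as Record<string, unknown>;
	const keys = Object.keys(record).sort();
	return `{${keys.map((k) => `${JSON.stringify(k)}:${canonicalStringify(record[k])}`).join(',')}}`;
}

// SHA-256 over the canonical representation; timestamp normalized to epoch ms.
function calculateHash(log: Omit<AuditLogRow, 'currentHash'>): string {
	const canonical = canonicalStringify({
		id: log.id,
		previousHash: log.previousHash,
		timestamp: log.timestamp.getTime(),
		actorIdentifier: log.actorIdentifier,
		actionType: log.actionType,
	});
	return createHash('sha256').update(canonical).digest('hex');
}

// Verify one chunk of logs, carrying the last entry into the next chunk.
function verifyChunk(logs: AuditLogRow[], previous?: AuditLogRow): AuditLogRow | undefined {
	for (const log of logs) {
		if (previous && log.previousHash !== previous.currentHash) {
			throw new Error(`Audit log chain is broken at log ${log.id}`);
		}
		if (calculateHash(log) !== log.currentHash) {
			throw new Error(`Audit log entry ${log.id} is tampered`);
		}
		previous = log;
	}
	return previous;
}
```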
## Service Integration
The `AuditService` is integrated into the application through the `AuditLogModule` (`packages/enterprise/src/modules/audit-log/audit-log.module.ts`), which registers the API routes for the audit log feature. The service's `createAuditLog` method is called from various other services throughout the application to record significant events.

@@ -0,0 +1,39 @@
# Audit Log: User Interface
The audit log user interface provides a comprehensive view of all significant events that have occurred within the Open Archiver system. It is designed to be intuitive and user-friendly, allowing administrators to easily monitor and review system activity.
## Viewing Audit Logs
The main audit log page displays a table of log entries, with the following columns:
- **Timestamp:** The date and time of the event.
- **Actor:** The identifier of the user or system process that performed the action.
- **IP Address:** The IP address from which the action was initiated.
- **Action:** The type of action performed, displayed as a color-coded badge for easy identification.
- **Target Type:** The type of resource that was affected.
- **Target ID:** The unique identifier of the affected resource.
- **Details:** A truncated preview of the event's details. The full JSON object is displayed in a pop-up card on hover.
## Filtering and Sorting
The table can be sorted by timestamp by clicking the "Timestamp" header. This allows you to view the logs in either chronological or reverse chronological order.
## Pagination
Pagination controls are available below the table, allowing you to navigate through the entire history of audit log entries.
## Verifying Log Integrity
The "Verify Log Integrity" button allows you to initiate a verification process to check the integrity of the entire audit log chain. This process recalculates the hash of each log entry and compares it to the stored hash, ensuring that the cryptographic chain is unbroken and no entries have been tampered with.
### Verification Responses
- **Success:** A success notification is displayed, confirming that the audit log integrity has been verified successfully. This means that the log chain is complete and no entries have been tampered with.
- **Failure:** An error notification is displayed, indicating that the audit log chain is broken or an entry has been tampered with. The notification will include the ID of the log entry where the issue was detected. There are two types of failures:
- **Audit log chain is broken:** This means that the `previousHash` of a log entry does not match the `currentHash` of the preceding entry. This indicates that one or more log entries may have been deleted or inserted into the chain.
- **Audit log entry is tampered!:** This means that the recalculated hash of a log entry does not match its stored `currentHash`. This indicates that the data within the log entry has been altered.
## Viewing Log Details
You can view the full details of any log entry by clicking on its row in the table. This will open a dialog containing all the information associated with the log entry, including the previous and current hashes.

@@ -0,0 +1,27 @@
# Audit Log
The Audit Log is an enterprise-grade feature designed to provide a complete, immutable, and verifiable record of every significant action that occurs within the Open Archiver system. Its primary purpose is to ensure compliance with strict regulatory standards, such as the German GoBD, by establishing a tamper-proof chain of evidence for all activities.
## Core Principles
To fulfill its compliance and security functions, the audit log adheres to the following core principles:
### 1. Immutability
Every log entry is cryptographically chained to the previous one. Each new entry contains a SHA-256 hash of the preceding entry's hash, creating a verifiable chain. Any attempt to alter or delete a past entry would break this chain and be immediately detectable through the verification process.
### 2. Completeness
The system is designed to log every significant event without exception. This includes not only user-initiated actions (like logins, searches, and downloads) but also automated system processes, such as data ingestion and policy-based deletions.
### 3. Attribution
Each log entry is unambiguously linked to the actor that initiated the event. This could be a specific authenticated user, an external auditor, or an automated system process. The actor's identifier and source IP address are recorded to ensure full traceability.
### 4. Clarity and Detail
Log entries are structured to be detailed and human-readable, providing sufficient context for an auditor to understand the event without needing specialized system knowledge. This includes the action performed, the target resource affected, and a JSON object with specific, contextual details of the event.
### 5. Verifiability
The integrity of the entire audit log can be verified at any time. A dedicated process iterates through the logs from the beginning, recalculating the hash of each entry and comparing it to the stored hash, ensuring the cryptographic chain is unbroken and no entries have been tampered with.

docs/index.md Normal file
@@ -0,0 +1,42 @@
# Get Started 👋
Welcome to Open Archiver! This guide will help you get started with setting up and using the platform.
## What is Open Archiver? 🛡️
**A secure, sovereign, and affordable open-source platform for email archiving and eDiscovery.**
Open Archiver provides a robust, self-hosted solution for archiving, storing, indexing, and searching emails from major platforms, including Google Workspace (Gmail), Microsoft 365, and generic IMAP-enabled email inboxes. Use Open Archiver to keep a permanent, tamper-proof record of your communication history, free from vendor lock-in.
## Key Features ✨
- **Universal Ingestion**: Connect to Google Workspace, Microsoft 365, and standard IMAP servers to perform initial bulk imports and maintain continuous, real-time synchronization.
- **Secure & Efficient Storage**: Emails are stored in the standard `.eml` format. The system uses deduplication and compression to minimize storage costs. All data is encrypted at rest.
- **Pluggable Storage Backends**: Support both local filesystem storage and S3-compatible object storage (like AWS S3 or MinIO).
- **Powerful Search & eDiscovery**: A high-performance search engine indexes the full text of emails and attachments (PDF, DOCX, etc.).
- **Compliance & Retention**: Define granular retention policies to automatically manage the lifecycle of your data. Place legal holds on communications to prevent deletion during litigation (TBD).
- **Comprehensive Auditing**: An immutable audit trail logs all system activities, ensuring you have a clear record of who accessed what and when (TBD).
## Installation 🚀
To get your own instance of Open Archiver running, follow our detailed installation guide:
- [Installation Guide](./user-guides/installation.md)
## Data Source Configuration 🔌
After deploying the application, you will need to configure one or more ingestion sources to begin archiving emails. Follow our detailed guides to connect to your email provider:
- [Connecting to Google Workspace](./user-guides/email-providers/google-workspace.md)
- [Connecting to Microsoft 365](./user-guides/email-providers/microsoft-365.md)
- [Connecting to a Generic IMAP Server](./user-guides/email-providers/imap.md)
## Contributing ❤️
We welcome contributions from the community!
- **Reporting Bugs**: If you find a bug, please open an issue on our GitHub repository.
- **Suggesting Enhancements**: Have an idea for a new feature? We'd love to hear it. Open an issue to start the discussion.
- **Code Contributions**: If you'd like to contribute code, please fork the repository and submit a pull request.
Please read our `CONTRIBUTING.md` file for more details on our code of conduct and the process for submitting pull requests.

docs/public/logo-sq.svg Normal file
File diff suppressed because one or more lines are too long
@@ -0,0 +1,289 @@
# IAM Policy
This document provides a guide to creating and managing IAM policies in Open Archiver. It is intended for developers and administrators who need to configure granular access control for users and roles.
## Policy Structure
IAM policies are defined as an array of JSON objects, where each object represents a single permission rule. The structure of a policy object is as follows:
```json
{
"action": "read" OR ["read", "create"],
"subject": "ingestion" OR ["ingestion", "dashboard"],
"conditions": {
"field_name": "value"
},
"inverted": false OR true,
}
```
- `action`: The action(s) to be performed on the subject. Can be a single string or an array of strings.
- `subject`: The resource(s) or entity on which the action is to be performed. Can be a single string or an array of strings.
- `conditions`: (Optional) A set of conditions that must be met for the permission to be granted.
- `inverted`: (Optional) When set to `true`, this inverts the rule, turning it from a "can" rule into a "cannot" rule. This is useful for creating exceptions to broader permissions.
## Actions
The following actions are available for use in IAM policies:
- `manage`: A wildcard action that grants all permissions on a subject (`create`, `read`, `update`, `delete`, `search`, `sync`).
- `create`: Allows the user to create a new resource.
- `read`: Allows the user to view a resource.
- `update`: Allows the user to modify an existing resource.
- `delete`: Allows the user to delete a resource.
- `search`: Allows the user to search for resources.
- `sync`: Allows the user to synchronize a resource.
## Subjects
The following subjects are available for use in IAM policies:
- `all`: A wildcard subject that represents all resources.
- `archive`: Represents archived emails.
- `ingestion`: Represents ingestion sources.
- `settings`: Represents system settings.
- `users`: Represents user accounts.
- `roles`: Represents user roles.
- `dashboard`: Represents the dashboard.
## Advanced Conditions with MongoDB-Style Queries
Conditions are the key to creating fine-grained access control rules. They are defined as a JSON object where each key represents a field on the subject, and the value defines the criteria for that field.
All conditions within a single rule are implicitly joined with an **AND** logic. This means that for a permission to be granted, the resource must satisfy _all_ specified conditions.
The power of this system comes from its use of a subset of [MongoDB's query language](https://www.mongodb.com/docs/manual/), which provides a flexible and expressive way to define complex rules. These rules are translated into native queries for both the PostgreSQL database (via Drizzle ORM) and the Meilisearch engine.
### Supported Operators and Examples
Here is a detailed breakdown of the supported operators with examples.
#### `$eq` (Equal)
This is the default operator. If you provide a simple key-value pair, it is treated as an equality check.
```json
// This rule...
{ "status": "active" }
// ...is equivalent to this:
{ "status": { "$eq": "active" } }
```
**Use Case**: Grant access to an ingestion source only if its status is `active`.
#### `$ne` (Not Equal)
Matches documents where the field value is not equal to the specified value.
```json
{ "provider": { "$ne": "pst_import" } }
```
**Use Case**: Allow a user to see all ingestion sources except for PST imports.
#### `$in` (In Array)
Matches documents where the field value is one of the values in the specified array.
```json
{
"id": {
"$in": ["INGESTION_ID_1", "INGESTION_ID_2"]
}
}
```
**Use Case**: Grant an auditor access to a specific list of ingestion sources.
#### `$nin` (Not In Array)
Matches documents where the field value is not one of the values in the specified array.
```json
{ "provider": { "$nin": ["pst_import", "eml_import"] } }
```
**Use Case**: Hide all manual import sources from a specific user role.
#### `$lt` / `$lte` (Less Than / Less Than or Equal)
Matches documents where the field value is less than (`$lt`) or less than or equal to (`$lte`) the specified value. This is useful for numeric or date-based comparisons.
```json
{ "sentAt": { "$lt": "2024-01-01T00:00:00.000Z" } }
```
#### `$gt` / `$gte` (Greater Than / Greater Than or Equal)
Matches documents where the field value is greater than (`$gt`) or greater than or equal to (`$gte`) the specified value.
```json
{ "sentAt": { "$lt": "2024-01-01T00:00:00.000Z" } }
```
#### `$exists`
Matches documents that have (or do not have) the specified field.
```json
// Grant access only if a 'lastSyncStatusMessage' exists
{ "lastSyncStatusMessage": { "$exists": true } }
```
## Inverted Rules: Creating Exceptions with `cannot`
By default, all rules are "can" rules, meaning they grant permissions. However, you can create a "cannot" rule by adding `"inverted": true` to a policy object. This is extremely useful for creating exceptions to broader permissions.
A common pattern is to grant broad access and then use an inverted rule to carve out a specific restriction.
**Use Case**: Grant a user access to all ingestion sources _except_ for one specific source.
This is achieved with two rules:
1. A "can" rule that grants `read` access to the `ingestion` subject.
2. An inverted "cannot" rule that denies `read` access for the specific ingestion `id`.
```json
[
{
"action": "read",
"subject": "ingestion"
},
{
"inverted": true,
"action": "read",
"subject": "ingestion",
"conditions": {
"id": "SPECIFIC_INGESTION_ID_TO_EXCLUDE"
}
}
]
```
## Policy Evaluation Logic
The system evaluates policies by combining all relevant rules for a user. The logic is simple:
- A user has permission if at least one `can` rule allows it.
- A permission is denied if a `cannot` (`"inverted": true`) rule explicitly forbids it, even if a `can` rule allows it. `cannot` rules always take precedence.
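A simplified TypeScript sketch of this evaluation order is shown below. Condition matching is reduced to plain equality here; the real engine supports the MongoDB-style operators documented in this guide, so treat the helper names and signatures as illustrative assumptions.
```typescript
// Simplified policy rule shape, mirroring the structure described above.
interface PolicyRule {
	action: string | string[];
	subject: string | string[];
	conditions?: Record<string, unknown>;
	inverted?: boolean;
}

const toArray = (v: string | string[]) => (Array.isArray(v) ? v : [v]);

// A rule matches when action, subject, and every condition apply to the resource.
function ruleMatches(rule: PolicyRule, action: string, subject: string, resource: Record<string, unknown>): boolean {
	const actionOk = toArray(rule.action).some((a) => a === action || a === 'manage');
	const subjectOk = toArray(rule.subject).some((s) => s === subject || s === 'all');
	const conditionsOk = Object.entries(rule.conditions ?? {}).every(
		([field, expected]) => resource[field] === expected // equality only in this sketch
	);
	return actionOk && subjectOk && conditionsOk;
}

// At least one matching "can" rule grants access; any matching inverted rule vetoes it.
function isAllowed(rules: PolicyRule[], action: string, subject: string, resource: Record<string, unknown>): boolean {
	const matching = rules.filter((r) => ruleMatches(r, action, subject, resource));
	const allowed = matching.some((r) => !r.inverted);
	const denied = matching.some((r) => r.inverted); // `cannot` rules take precedence
	return allowed && !denied;
}
```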
### Dynamic Policies with Placeholders
To create dynamic policies that are specific to the current user, you can use the `${user.id}` placeholder in the `conditions` object. This placeholder will be replaced with the ID of the current user at runtime.
## Special Permissions for User and Role Management
It is important to note that while `read` access to `users` and `roles` can be granted granularly, any actions that modify these resources (`create`, `update`, `delete`) are restricted to Super Admins.
A user must have the `{ "action": "manage", "subject": "all" }` permission (Typically a Super Admin role) to manage users and roles. This is a security measure to prevent unauthorized changes to user accounts and permissions.
## Policy Examples
Here are several examples based on the default roles in the system, demonstrating how to combine actions, subjects, and conditions to achieve specific access control scenarios.
### Administrator
This policy grants a user full access to all resources using wildcards.
```json
[
{
"action": "manage",
"subject": "all"
}
]
```
### End-User
This policy allows a user to view the dashboard, create new ingestion sources, and fully manage the ingestion sources they own.
```json
[
{
"action": "read",
"subject": "dashboard"
},
{
"action": "create",
"subject": "ingestion"
},
{
"action": "manage",
"subject": "ingestion",
"conditions": {
"userId": "${user.id}"
}
},
{
"action": "manage",
"subject": "archive",
"conditions": {
"ingestionSource.userId": "${user.id}" // also needs to give permission to archived emails created by the user
}
}
]
```
### Global Read-Only Auditor
This policy grants read and search access across most of the application's resources, making it suitable for an auditor who needs to view data without modifying it.
```json
[
{
"action": ["read", "search"],
"subject": ["ingestion", "archive", "dashboard", "users", "roles"]
}
]
```
### Ingestion Admin
This policy grants full control over all ingestion sources and archives, but no other resources.
```json
[
{
"action": "manage",
"subject": "ingestion"
}
]
```
### Auditor for Specific Ingestion Sources
This policy demonstrates how to grant access to a specific list of ingestion sources using the `$in` operator.
```json
[
{
"action": ["read", "search"],
"subject": "ingestion",
"conditions": {
"id": {
"$in": ["INGESTION_ID_1", "INGESTION_ID_2"]
}
}
}
]
```
### Limit Access to a Specific Mailbox
This policy grants a user access to a specific ingestion source, but only allows them to see emails belonging to a single user within that source.
This is achieved with a `can` rule that grants `read` and `search` access to the `archive` subject, with the condition that `userEmail` must match.
```json
[
{
"action": ["read", "search"],
"subject": "archive",
"conditions": {
"userEmail": "user1@example.com"
}
}
]
```

docs/services/index.md Normal file
@@ -0,0 +1 @@
# services
@@ -0,0 +1,96 @@
@@ -0,0 +1,96 @@
# OCR Service
The OCR (Optical Character Recognition) and text extraction service is responsible for extracting plain text content from various file formats, such as PDFs, Office documents, and more. This is a crucial component for making email attachments searchable.
## Overview
The system employs a two-pronged approach for text extraction:
1. **Primary Extractor (Apache Tika)**: A powerful and versatile toolkit that can extract text from a wide variety of file formats. It is the recommended method for its superior performance and format support.
2. **Legacy Extractor**: A fallback mechanism that uses a combination of libraries (`pdf2json`, `mammoth`, `xlsx`) for common file types like PDF, DOCX, and XLSX. This is used when Apache Tika is not configured.
The main logic resides in `packages/backend/src/helpers/textExtractor.ts`, which decides which extraction method to use based on the application's configuration.
## Configuration
To enable the primary text extraction method, you must configure the URL of an Apache Tika server instance in your environment variables.
In your `.env` file, set the `TIKA_URL`:
```env
# .env.example
# Apache Tika Integration
# ONLY active if TIKA_URL is set
TIKA_URL=http://tika:9998
```
If `TIKA_URL` is not set, the system will automatically fall back to the legacy extraction methods. The service performs a health check on startup to verify connectivity with the Tika server.
## File Size Limits
To prevent excessive memory usage and processing time, the service imposes a general size limit on files submitted for text extraction. Files larger than the configured limit will be skipped.
- **With Apache Tika**: The maximum file size is **100MB**.
- **With Legacy Fallback**: The maximum file size is **50MB**.
## Supported File Formats
The service's ability to extract text depends on whether it's using Apache Tika or the legacy fallback methods.
### With Apache Tika
When `TIKA_URL` is configured, the service can process a vast range of file formats. Apache Tika is designed for broad compatibility and supports hundreds of file types, including but not limited to:
- Portable Document Format (PDF)
- Microsoft Office formats (DOC, DOCX, PPT, PPTX, XLS, XLSX)
- OpenDocument Formats (ODT, ODS, ODP)
- Rich Text Format (RTF)
- Plain Text (TXT, CSV, JSON, XML, HTML)
- Image formats with OCR capabilities (PNG, JPEG, TIFF)
- Archive formats (ZIP, TAR, GZ)
- Email formats (EML, MSG)
For a complete and up-to-date list, please refer to the official [Apache Tika documentation](https://tika.apache.org/3.2.3/formats.html).
### With Legacy Fallback
When Tika is not configured, text extraction is limited to the following formats:
- `application/pdf` (PDF)
- `application/vnd.openxmlformats-officedocument.wordprocessingml.document` (DOCX)
- `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet` (XLSX)
- Plain text formats such as `text/*`, `application/json`, and `application/xml`.
## Features of the Tika Integration (`OcrService`)
The `OcrService` (`packages/backend/src/services/OcrService.ts`) provides several enhancements to make text extraction efficient and robust.
### Caching
To avoid redundant processing of the same file, the service implements a simple LRU (Least Recently Used) cache.
- **Cache Key**: A SHA-256 hash of the file's buffer is used as the cache key.
- **Functionality**: If a file with the same hash is processed again, the text content is served directly from the cache, saving significant processing time.
- **Statistics**: The service keeps track of cache hits, misses, and the hit rate for performance monitoring.
### Concurrency Management (Semaphore)
Extracting text from large files can be resource-intensive. To prevent the Tika server from being overwhelmed by multiple requests for the _same file_ simultaneously (e.g., during a large import), a semaphore mechanism is used.
- **Functionality**: If a request for a specific file (identified by its hash) is already in progress, any subsequent requests for the same file will wait for the first one to complete and then use its result.
- **Benefit**: This deduplicates parallel processing efforts and reduces unnecessary load on the Tika server.
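The sketch below combines the caching and request-deduplication behaviour described in the two subsections above. The `Map`-based LRU, the cache size, and the `Extractor` signature are illustrative assumptions rather than the exact `OcrService` implementation.
```typescript
import { createHash } from 'node:crypto';

type Extractor = (buffer: Buffer) => Promise<string>;

class ExtractionCache {
	private cache = new Map<string, string>(); // insertion order doubles as LRU order
	private inFlight = new Map<string, Promise<string>>();

	constructor(private extract: Extractor, private maxEntries = 500) {}

	async getText(buffer: Buffer): Promise<string> {
		// The SHA-256 hash of the file buffer is the cache key.
		const key = createHash('sha256').update(buffer).digest('hex');

		const cached = this.cache.get(key);
		if (cached !== undefined) {
			// Refresh recency by re-inserting the entry.
			this.cache.delete(key);
			this.cache.set(key, cached);
			return cached;
		}

		// Semaphore-like behaviour: concurrent requests for the same file
		// share one in-flight extraction instead of hitting Tika twice.
		const pending = this.inFlight.get(key);
		if (pending) return pending;

		const promise = this.extract(buffer)
			.then((text) => {
				this.cache.set(key, text);
				if (this.cache.size > this.maxEntries) {
					const oldest = this.cache.keys().next().value;
					if (oldest !== undefined) this.cache.delete(oldest);
				}
				return text;
			})
			.finally(() => this.inFlight.delete(key));

		this.inFlight.set(key, promise);
		return promise;
	}
}
```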
### Health Check and DNS Fallback
- **Availability Check**: The service includes a `checkTikaAvailability` method to verify that the Tika server is reachable and operational. This check is performed on application startup.
- **DNS Fallback**: For convenience in Docker environments, if the Tika URL uses the hostname `tika` (e.g., `http://tika:9998`), the service will automatically attempt a fallback to `localhost` if the initial connection fails.
## Legacy Fallback Methods
When Tika is not available, the `extractTextLegacy` function in `textExtractor.ts` handles extraction for a limited set of MIME types:
- `application/pdf`: Processed using `pdf2json`. Includes a 50MB size limit and a 5-second timeout to prevent memory issues.
- `application/vnd.openxmlformats-officedocument.wordprocessingml.document` (DOCX): Processed using `mammoth`.
- `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet` (XLSX): Processed using `xlsx`.
- Plain text formats (`text/*`, `application/json`, `application/xml`): Converted directly from the buffer.

@@ -14,8 +14,8 @@ The `StorageService` is configured via environment variables in the `.env` file.
The `STORAGE_TYPE` variable determines which provider the service will use.
- `STORAGE_TYPE=local`: Uses the local server's filesystem.
- `STORAGE_TYPE=s3`: Uses an S3-compatible object storage service (e.g., AWS S3, MinIO, Google Cloud Storage).
### 2. Local Filesystem Configuration
@@ -27,7 +27,7 @@ STORAGE_TYPE=local
STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
```
- `STORAGE_LOCAL_ROOT_PATH`: The absolute path on the server where the archive will be created. The service will create subdirectories within this path as needed.
### 3. S3-Compatible Storage Configuration
@@ -44,12 +44,12 @@ STORAGE_S3_REGION=us-east-1
STORAGE_S3_FORCE_PATH_STYLE=true
```
- `STORAGE_S3_ENDPOINT`: The full URL of the S3 API endpoint.
- `STORAGE_S3_BUCKET`: The name of the bucket to use for storage.
- `STORAGE_S3_ACCESS_KEY_ID`: The access key for your S3 user.
- `STORAGE_S3_SECRET_ACCESS_KEY`: The secret key for your S3 user.
- `STORAGE_S3_REGION` (Optional): The AWS region of your bucket. Recommended for AWS S3.
- `STORAGE_S3_FORCE_PATH_STYLE` (Optional): Set to `true` when using non-AWS S3 services like MinIO.
## How to Use the Service
@@ -61,31 +61,27 @@ The `StorageService` is designed to be used via dependency injection in other se
import { StorageService } from './StorageService';
class IngestionService {
	private storageService: StorageService;

	constructor() {
		// The StorageService is instantiated without any arguments.
		// It automatically reads the configuration from the environment.
		this.storageService = new StorageService();
	}

	public async archiveEmail(rawEmail: Buffer, userId: string, messageId: string): Promise<void> {
		// Define a structured, unique path for the email.
		const archivePath = `${userId}/messages/${messageId}.eml`;
		try {
			// Use the service. It doesn't know or care if this is writing
			// to a local disk or an S3 bucket.
			await this.storageService.put(archivePath, rawEmail);
			console.log(`Successfully archived email to ${archivePath}`);
		} catch (error) {
			console.error(`Failed to archive email ${messageId}`, error);
		}
	}
}
```
@@ -99,9 +95,9 @@ The `StorageService` implements the `IStorageProvider` interface. All methods ar
Stores a file at the specified path. If a file already exists at that path, it will be overwritten.
- **`path: string`**: A unique identifier for the file, including its directory structure (e.g., `"user-123/emails/message-abc.eml"`).
- **`content: Buffer | NodeJS.ReadableStream`**: The content of the file. It can be a `Buffer` for small files or a `ReadableStream` for large files to ensure memory efficiency.
- **Returns**: `Promise<void>` - A promise that resolves when the file has been successfully stored.
---
@@ -109,9 +105,9 @@ Stores a file at the specified path. If a file already exists at that path, it w
Retrieves a file from the specified path as a readable stream.
- **`path: string`**: The unique identifier of the file to retrieve.
- **Returns**: `Promise<NodeJS.ReadableStream>` - A promise that resolves with a readable stream of the file's content.
- **Throws**: An `Error` if the file is not found at the specified path.
---
@@ -119,8 +115,8 @@ Retrieves a file from the specified path as a readable stream.
Deletes a file from the storage backend.
- **`path: string`**: The unique identifier of the file to delete.
- **Returns**: `Promise<void>` - A promise that resolves when the file is deleted. If the file does not exist, the promise will still resolve successfully without throwing an error.
---
@@ -128,5 +124,5 @@ Deletes a file from the storage backend.
Checks for the existence of a file.
- **`path: string`**: The unique identifier of the file to check.
- **Returns**: `Promise<boolean>` - A promise that resolves with `true` if the file exists, and `false` otherwise.

@@ -0,0 +1,43 @@
# EML Import
OpenArchiver allows you to import EML files from a zip archive. This is useful for importing emails from a variety of sources, including other email clients and services.
## Preparing the Zip File
To ensure a successful import, compress your `.eml` files into a single zip file according to the following guidelines:
- **Structure:** The zip file can contain any number of `.eml` files, organized in any folder structure. The folder structure will be preserved in OpenArchiver, so you can use it to organize your emails.
- **Compression:** The zip file should be compressed using standard zip compression.
Here's an example of a valid folder structure:
```
archive.zip
├── inbox
│ ├── email-01.eml
│ └── email-02.eml
├── sent
│ └── email-03.eml
└── drafts
├── nested-folder
│ └── email-04.eml
└── email-05.eml
```
## Creating an EML Ingestion Source
1. Go to the **Ingestion Sources** page in the OpenArchiver dashboard.
2. Click the **Create New** button.
3. Select **EML Import** as the provider.
4. Enter a name for the ingestion source.
5. **Choose Import Method:**
* **Upload File:** Click **Choose File** and select the zip archive containing your EML files. (Best for smaller archives)
* **Local Path:** Enter the path to the zip file **inside the container**. (Best for large archives)
> **Note on Local Path:** When using Docker, the "Local Path" is relative to the container's filesystem.
> * **Recommended:** Place your zip file in a `temp` folder inside your configured storage directory (`STORAGE_LOCAL_ROOT_PATH`). This path is already mounted. For example, if your storage path is `/data`, put the file in `/data/temp/emails.zip` and enter `/data/temp/emails.zip` as the path.
> * **Alternative:** Mount a separate volume in `docker-compose.yml` (e.g., `- /host/path:/container/path`) and use the container path.
6. Click the **Submit** button.
OpenArchiver will then start importing the EML files from the zip archive. The ingestion process may take some time, depending on the size of the archive.

@@ -0,0 +1,124 @@
# Connecting to Google Workspace
This guide provides instructions for Google Workspace administrators to set up a connection that allows the archiving of all user mailboxes within their organization.
The connection uses a **Google Cloud Service Account** with **Domain-Wide Delegation**. This is a secure method that grants the archiving service permission to access user data on behalf of the administrator, without requiring individual user passwords or consent.
## Prerequisites
- You must have **Super Administrator** privileges in your Google Workspace account.
- You must have access to the **Google Cloud Console** associated with your organization.
## Setup Overview
The setup process involves three main parts:
1. Configuring the necessary permissions in the Google Cloud Console.
2. Authorizing the service account in the Google Workspace Admin Console.
3. Entering the generated credentials into the OpenArchiver application.
---
### Part 1: Google Cloud Console Setup
In this part, you will create a service account and enable the APIs it needs to function.
1. **Create a Google Cloud Project:**
- Go to the [Google Cloud Console](https://console.cloud.google.com/).
- If you don't already have one, create a new project for the archiving service (e.g., "Email Archiver").
2. **Enable Required APIs:**
- In your selected project, navigate to the **"APIs & Services" > "Library"** section.
- Search for and enable the following two APIs:
- **Gmail API**
- **Admin SDK API**
3. **Create a Service Account:**
- Navigate to **"IAM & Admin" > "Service Accounts"**.
- Click **"Create Service Account"**.
- Give the service account a name (e.g., `email-archiver-service`) and a description.
- Click **"Create and Continue"**. You do not need to grant this service account any roles on the project. Click **"Done"**.
4. **Generate a JSON Key:**
- Find the service account you just created in the list.
- Click the three-dot menu under **"Actions"** and select **"Manage keys"**.
- Click **"Add Key"** > **"Create new key"**.
- Select **JSON** as the key type and click **"Create"**.
- A JSON file will be downloaded to your computer. **Keep this file secure, as it contains private credentials.** You will need the contents of this file in Part 3.
### Troubleshooting
#### Error: "iam.disableServiceAccountKeyCreation"
If you receive an error message stating `The organization policy constraint 'iam.disableServiceAccountKeyCreation' is enforced` when trying to create a JSON key, it means your Google Cloud organization has a policy preventing the creation of new service account keys.
To resolve this, you must have **Organization Administrator** permissions.
1. **Navigate to your Organization:** In the Google Cloud Console, use the project selector at the top of the page to select your organization node (it usually has a building icon).
2. **Go to IAM:** From the navigation menu, select **"IAM & Admin" > "IAM"**.
3. **Edit Your Permissions:** Find your user account in the list and click the pencil icon to edit roles. Add the following two roles:
- `Organization Policy Administrator`
- `Organization Administrator`
_Note: These roles are only available at the organization level, not the project level._
4. **Modify the Policy:**
- Navigate to **"IAM & Admin" > "Organization Policies"**.
- In the filter box, search for the policy **"iam.disableServiceAccountKeyCreation"**.
- Click on the policy to edit it.
- You can either disable the policy entirely (if your security rules permit) or add a rule to exclude the specific project you are using for the archiver from this policy.
5. **Retry Key Creation:** Once the policy is updated, return to your project and you should be able to generate the JSON key as described in Part 1.
---
### Part 2: Grant Domain-Wide Delegation
Now, you will authorize the service account you created to access data from your Google Workspace.
1. **Get the Service Account's Client ID:**
- Go back to the list of service accounts in the Google Cloud Console.
- Click on the service account you created.
- Under the **"Details"** tab, find and copy the **Unique ID** (this is the Client ID).
2. **Authorize the Client in Google Workspace:**
- Go to your **Google Workspace Admin Console** at [admin.google.com](https://admin.google.com).
- Navigate to **Security > Access and data control > API controls**.
- Under the "Domain-wide Delegation" section, click **"Manage Domain-wide Delegation"**.
- Click **"Add new"**.
3. **Enter Client Details and Scopes:**
- In the **Client ID** field, paste the **Unique ID** you copied from the service account.
- In the **OAuth scopes** field, paste the following two scopes exactly as they appear, separated by a comma:
```
https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/gmail.readonly
```
- Click **"Authorize"**.
The service account is now permitted to list users and read their email data across your domain.
---
### Part 3: Connecting in OpenArchiver
Finally, you will provide the generated credentials to the application.
1. **Navigate to Ingestion Sources:**
From the main dashboard, go to the **Ingestion Sources** page.
2. **Create a New Source:**
Click the **"Create New"** button.
3. **Fill in the Configuration Details:**
- **Name:** Give the source a name (e.g., "Google Workspace Archive").
- **Provider:** Select **"Google Workspace"** from the dropdown.
- **Service Account Key (JSON):** Open the JSON file you downloaded in Part 1. Copy the entire content of the file and paste it into this text area.
- **Impersonated Admin Email:** Enter the email address of a Super Administrator in your Google Workspace (e.g., `admin@your-domain.com`). The service will use this user's authority to discover all other users.
4. **Save Changes:**
Click **"Save changes"**.
## What Happens Next?
Once the connection is saved and verified, the system will begin the archiving process:
1. **User Discovery:** The service will first connect to the Admin SDK to get a list of all active users in your Google Workspace.
2. **Initial Import:** The system will then start a background job to import the mailboxes of all discovered users. The status will show as **"Importing"**. This can take a significant amount of time depending on the number of users and the size of their mailboxes.
3. **Continuous Sync:** After the initial import is complete, the status will change to **"Active"**. The system will then periodically check each user's mailbox for new emails and archive them automatically.

View File

@@ -0,0 +1,66 @@
# Connecting to a Generic IMAP Server
This guide will walk you through connecting a standard IMAP email account as an ingestion source. This allows you to archive emails from any provider that supports the IMAP protocol, which is common for many self-hosted or traditional email services.
## Step-by-Step Guide
1. **Navigate to Ingestion Sources:**
From the main dashboard, go to the **Ingestions** page.
2. **Create a New Source:**
Click the **"Create New"** button to open the ingestion source configuration dialog.
3. **Fill in the Configuration Details:**
You will see a form with several fields. Here is how to fill them out for an IMAP connection:
- **Name:** Give your ingestion source a descriptive name that you will easily recognize, such as "Work Email (IMAP)" or "Personal Gmail".
- **Provider:** From the dropdown menu, select **"Generic IMAP"**. This will reveal the specific fields required for an IMAP connection.
- **Host:** Enter the server address for your email provider's IMAP service. This often looks like `imap.your-provider.com` or `mail.your-domain.com`.
- **Port:** Enter the port number for the IMAP server. For a secure connection (which is strongly recommended), this is typically `993`.
- **Username:** Enter the full email address or username you use to log in to your email account.
- **Password:** Enter the password for your email account.
4. **Save Changes:**
Once you have filled in all the details, click the **"Save changes"** button.
## Security Recommendation: Use an App Password
For enhanced security, we strongly recommend using an **"app password"** (sometimes called an "app-specific password") instead of your main account password.
Many email providers (like Gmail, Outlook, and Fastmail) allow you to generate a unique password that grants access only to a specific application (in this case, the archiving service). If you ever need to revoke access, you can simply delete the app password without affecting your main account login.
Please consult your email provider's documentation to see if they support app passwords and how to create one.
### How to Obtain an App Password for Gmail
1. **Enable 2-Step Verification:** You must have 2-Step Verification turned on for your Google Account.
2. **Go to App Passwords:** Visit [myaccount.google.com/apppasswords](https://myaccount.google.com/apppasswords). You may be asked to sign in again.
3. **Create the Password:**
- At the bottom, click **"Select app"** and choose **"Other (Custom name)"**.
- Give it a name you'll recognize, like "OpenArchiver".
- Click **"Generate"**.
4. **Use the Password:** A 16-digit password will be displayed. Copy this password and paste it into the **Password** field in the OpenArchiver ingestion source form.
### How to Obtain an App Password for Outlook/Microsoft Accounts
1. **Enable Two-Step Verification:** You must have two-step verification enabled for your Microsoft account.
2. **Go to Security Options:** Sign in to your Microsoft account and navigate to the [Advanced security options](https://account.live.com/proofs/manage/additional).
3. **Create a New App Password:**
- Scroll down to the **"App passwords"** section.
- Click **"Create a new app password"**.
4. **Use the Password:** A new password will be generated. Use this password in the **Password** field in the OpenArchiver ingestion source form.
## What Happens Next?
After you save the connection, the system will attempt to connect to the IMAP server. The status of the ingestion source will update to reflect its current state:
- **Importing:** The system is performing the initial, one-time import of all emails from your `INBOX`. This may take a while depending on the size of your mailbox.
- **Active:** The initial import is complete, and the system will now periodically check for and archive new emails.
- **Paused:** The connection is valid, but the system will not check for new emails until you resume it.
- **Error:** The system was unable to connect using the provided credentials. Please double-check your Host, Port, Username, and Password and try again.
You can view, edit, pause, or manually sync any of your ingestion sources from the main table on the **Ingestions** page.
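If the source ends up in the **Error** state, a quick reachability check from the machine running OpenArchiver can help rule out network or TLS problems before you re-enter credentials. This is only a sketch; the hostname is a placeholder and it assumes the common TLS port `993`.
```bash
# A banner line starting with "* OK" means the IMAP server is reachable over TLS.
openssl s_client -connect imap.your-provider.com:993 -quiet
```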

View File

@@ -0,0 +1,12 @@
# Connecting Email Providers
Open Archiver can connect to a variety of email sources to ingest and archive your emails. This section provides guides for connecting to popular email providers.
Choose your provider from the list below to get started:
- [Google Workspace](./google-workspace.md)
- [Microsoft 365](./microsoft-365.md)
- [Generic IMAP Server](./imap.md)
- [EML Import](./eml.md)
- [PST Import](./pst.md)
- [Mbox Import](./mbox.md)

View File

@@ -0,0 +1,35 @@
# Mbox Ingestion
Mbox is a common format for storing email messages. This guide will walk you through the process of ingesting mbox files into OpenArchiver.
## 1. Exporting from Your Email Client
Most email clients that support mbox exports will allow you to export a folder of emails as a single `.mbox` file. Here are the general steps:
- **Mozilla Thunderbird**: Right-click on a folder, select **ImportExportTools NG**, and then choose **Export folder**.
- **Gmail**: You can use Google Takeout to export your emails in mbox format.
- **Other Clients**: Refer to your email client's documentation for instructions on how to export emails to an mbox file.
## 2. Uploading to OpenArchiver
Once you have your `.mbox` file, you can upload it to OpenArchiver through the web interface.
1. Navigate to the **Ingestion** page.
2. Click on the **New Ingestion** button.
3. Select **Mbox** as the source type.
4. **Choose Import Method:**
* **Upload File:** Upload your `.mbox` file.
* **Local Path:** Enter the path to the mbox file **inside the container**.
> **Note on Local Path:** When using Docker, the "Local Path" is relative to the container's filesystem.
> * **Recommended:** Place your mbox file in a `temp` folder inside your configured storage directory (`STORAGE_LOCAL_ROOT_PATH`). This path is already mounted. For example, if your storage path is `/data`, put the file in `/data/temp/emails.mbox` and enter `/data/temp/emails.mbox` as the path.
> * **Alternative:** Mount a separate volume in `docker-compose.yml` (e.g., `- /host/path:/container/path`) and use the container path.
## 3. Folder Structure
OpenArchiver will attempt to preserve the original folder structure of your emails. This is done by inspecting the following email headers:
- `X-Gmail-Labels`: Used by Gmail to store labels.
- `X-Folder`: A custom header used by some email clients like Thunderbird.
If neither of these headers is present, the emails will be ingested into the root of the archive.
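To see in advance whether your export carries either of these headers, you can scan the file before uploading it. This is just a sketch; the file path is a placeholder.
```bash
# Print the first few folder/label headers found in the mbox file, if any.
grep -m 20 -E '^(X-Gmail-Labels|X-Folder):' /data/temp/emails.mbox
```
If the command prints nothing, expect all emails from that file to land in the root of the archive.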

View File

@@ -0,0 +1,93 @@
# Connecting to Microsoft 365
This guide provides instructions for Microsoft 365 administrators to set up a connection that allows the archiving of all user mailboxes within their organization.
The connection uses the **Microsoft Graph API** and an **App Registration** in Microsoft Entra ID. This is a secure, standard method that grants the archiving service permission to read email data on your behalf without ever needing to handle user passwords.
## Prerequisites
- You must have one of the following administrator roles in your Microsoft 365 tenant: **Global Administrator**, **Application Administrator**, or **Cloud Application Administrator**.
## Setup Overview
The setup process involves four main parts, all performed within the Microsoft Entra admin center and the OpenArchiver application:
1. Registering a new application identity for the archiver in Entra ID.
2. Granting the application the specific permissions it needs to read mail.
3. Creating a secure password (a client secret) for the application.
4. Entering the generated credentials into the OpenArchiver application.
---
### Part 1: Register a New Application in Microsoft Entra ID
First, you will create an "App registration," which acts as an identity for the archiving service within your Microsoft 365 ecosystem.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
2. In the left-hand navigation pane, go to **Identity > Applications > App registrations**.
3. Click the **+ New registration** button at the top of the page.
4. On the "Register an application" screen:
- **Name:** Give the application a descriptive name you will recognize, such as `OpenArchiver Service`.
- **Supported account types:** Select **"Accounts in this organizational directory only (Default Directory only - Single tenant)"**. This is the most secure option.
- **Redirect URI (optional):** You can leave this blank.
5. Click the **Register** button. You will be taken to the application's main "Overview" page.
---
### Part 2: Grant API Permissions
Next, you must grant the application the specific permissions required to read user profiles and their mailboxes.
1. From your new application's page, select **API permissions** from the left-hand menu.
2. Click the **+ Add a permission** button.
3. In the "Request API permissions" pane, select **Microsoft Graph**.
4. Select **Application permissions**. This is critical as it allows the service to run in the background without a user being signed in.
5. In the "Select permissions" search box, find and check the boxes for the following two permissions:
- `Mail.Read`
- `User.Read.All`
6. Click the **Add permissions** button at the bottom.
7. **Crucial Final Step:** You will now see the permissions in your list with a warning status. You must grant consent on behalf of your organization. Click the **"Grant admin consent for [Your Organization's Name]"** button located above the permissions table. Click **Yes** in the confirmation dialog. The status for both permissions should now show a green checkmark.
---
### Part 3: Create a Client Secret
The client secret is a password that the archiving service will use to authenticate. Treat this with the same level of security as an administrator's password.
1. In your application's menu, navigate to **Certificates & secrets**.
2. Select the **Client secrets** tab and click **+ New client secret**.
3. In the pane that appears:
- **Description:** Enter a clear description, such as `OpenArchiver Key`.
- **Expires:** Select an expiry duration. We recommend **12 or 24 months**. Set a calendar reminder to renew it before it expires to prevent service interruption.
4. Click **Add**.
5. **IMMEDIATELY COPY THE SECRET:** The secret is now visible in the **"Value"** column. This is the only time it will be fully displayed. Copy this value now and store it in a secure password manager before navigating away. If you lose it, you must create a new one.
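Optionally, before moving on, you can confirm that the secret and IDs work by requesting an app-only token directly from Microsoft's token endpoint. This is a sketch only; `TENANT_ID`, `CLIENT_ID`, and `CLIENT_SECRET_VALUE` are placeholders for the values from your app registration's Overview page and the secret you just copied.
```bash
# A JSON response containing an "access_token" field indicates the credentials are valid.
curl -s -X POST "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token" \
    -d "client_id=CLIENT_ID" \
    -d "client_secret=CLIENT_SECRET_VALUE" \
    -d "scope=https://graph.microsoft.com/.default" \
    -d "grant_type=client_credentials"
```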
---
### Part 4: Connecting in OpenArchiver
You now have the three pieces of information required to configure the connection.
1. **Navigate to Ingestion Sources:**
In the OpenArchiver application, go to the **Ingestion Sources** page.
2. **Create a New Source:**
Click the **"Create New"** button.
3. **Fill in the Configuration Details:**
- **Name:** Give the source a name (e.g., "Microsoft 365 Archive").
- **Provider:** Select **"Microsoft 365"** from the dropdown.
- **Application (Client) ID:** Go to the **Overview** page of your app registration in the Entra admin center and copy this value.
- **Directory (Tenant) ID:** This value is also on the **Overview** page.
- **Client Secret Value:** Paste the secret **Value** (not the Secret ID) that you copied and saved in the previous step.
4. **Save Changes:**
Click **"Save changes"**.
## What Happens Next?
Once the connection is saved, the system will begin the archiving process:
1. **User Discovery:** The service will connect to the Microsoft Graph API to get a list of all users in your organization.
2. **Initial Import:** The system will begin a background job to import the mailboxes of all discovered users, folder by folder. The status will show as **"Importing"**. This can take a significant amount of time.
3. **Continuous Sync:** After the initial import, the status will change to **"Active"**. The system will use Microsoft Graph's delta query feature to efficiently fetch only new or changed emails, ensuring the archive stays up-to-date.

View File

@@ -0,0 +1,28 @@
# PST Import
OpenArchiver allows you to import PST files, which is useful for bringing in emails exported from a variety of sources, most commonly Microsoft Outlook.
## Preparing the PST File
To ensure a successful import, you should prepare your PST file according to the following guidelines:
- **Structure:** The PST file can contain any number of emails, organized in any folder structure. The folder structure will be preserved in OpenArchiver, so you can use it to organize your emails.
- **Password Protection:** OpenArchiver does not support password-protected PST files. Please remove the password from your PST file before importing it.
## Creating a PST Ingestion Source
1. Go to the **Ingestion Sources** page in the OpenArchiver dashboard.
2. Click the **Create New** button.
3. Select **PST Import** as the provider.
4. Enter a name for the ingestion source.
5. **Choose Import Method:**
* **Upload File:** Click **Choose File** and select the PST file from your computer. (Best for smaller files)
* **Local Path:** Enter the path to the PST file **inside the container**. (Best for large files)
> **Note on Local Path:** When using Docker, the "Local Path" is relative to the container's filesystem.
> * **Recommended:** Place your file in a `temp` folder inside your configured storage directory (`STORAGE_LOCAL_ROOT_PATH`). This path is already mounted. For example, if your storage path is `/data`, put the file in `/data/temp/archive.pst` and enter `/data/temp/archive.pst` as the path.
> * **Alternative:** Mount a separate volume in `docker-compose.yml` (e.g., `- /host/path:/container/path`) and use the container path.
6. Click the **Submit** button.
OpenArchiver will then start importing the emails from the PST file. The ingestion process may take some time, depending on the size of the file.
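If you use the **Local Path** method, the sketch below shows one way to stage the file so it is visible inside the container. It assumes local storage with the host directory bind-mounted to your `STORAGE_LOCAL_ROOT_PATH` (here `/var/data/open-archiver`) and a compose service named `open-archiver`; adjust paths and names to your setup.
```bash
mkdir -p /var/data/open-archiver/temp
cp ~/exports/archive.pst /var/data/open-archiver/temp/
# Confirm the file is visible from inside the container before submitting the form.
docker compose exec open-archiver ls -lh /var/data/open-archiver/temp/
```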

View File

@@ -0,0 +1,341 @@
# Installation Guide
This guide will walk you through setting up Open Archiver using Docker Compose. This is the recommended method for deploying the application.
## Prerequisites
- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) installed on your server or local machine.
- A server or local machine with at least 4GB of RAM (2GB of RAM if you use external Postgres, Redis (Valkey) and Meilisearch instances).
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed on your server or local machine.
## 1. Clone the Repository
First, clone the Open Archiver repository to your machine:
```bash
git clone https://github.com/LogicLabs-OU/OpenArchiver.git
cd OpenArchiver
```
## 2. Create a Directory for Local Storage (Important)
Before configuring the application, you **must** create a directory on your host machine where Open Archiver will store its data (such as emails and attachments). Manually creating this directory helps prevent potential permission issues.
For example, you can use the path `/var/data/open-archiver`.
Run the following commands to create the directory and set the correct permissions:
```bash
sudo mkdir -p /var/data/open-archiver
sudo chown -R $(id -u):$(id -g) /var/data/open-archiver
```
This ensures the directory is owned by your current user, which is necessary for the application to have write access. You will set this path in your `.env` file in the next step.
## 3. Configure Your Environment
The application is configured using environment variables. You'll need to create a `.env` file to store your configuration.
Copy the example environment file for Docker:
```bash
cp .env.example.docker .env
```
Now, open the `.env` file in a text editor and customize the settings.
### Key Configuration Steps
1. **Set the Storage Path**: Find the `STORAGE_LOCAL_ROOT_PATH` variable and set it to the path you just created.
```env
STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
```
2. **Secure Your Instance**: You must change the following placeholder values to secure your instance:
- `POSTGRES_PASSWORD`: A strong, unique password for the database.
- `REDIS_PASSWORD`: A strong, unique password for the Valkey/Redis service.
- `MEILI_MASTER_KEY`: A complex key for Meilisearch.
- `JWT_SECRET`: A long, random string for signing authentication tokens.
- `ENCRYPTION_KEY`: A 32-byte hex string for encrypting sensitive data in the database. You can generate one with the following command:
```bash
openssl rand -hex 32
```
- `STORAGE_ENCRYPTION_KEY`: **(Optional but Recommended)** A 32-byte hex string for encrypting emails and attachments at rest. If this key is not provided, storage encryption will be disabled. You can generate one with:
```bash
openssl rand -hex 32
```
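The same approach works for `JWT_SECRET`, which just needs to be a long random string. One way to generate it (a sketch):
```bash
openssl rand -base64 48
```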
### Storage Configuration
By default, the Docker Compose setup uses local filesystem storage, which is persisted using a Docker volume named `archiver-data`. This is suitable for most use cases.
If you want to use S3-compatible object storage, change the `STORAGE_TYPE` to `s3` and fill in your S3 credentials (`STORAGE_S3_*` variables). When `STORAGE_TYPE` is set to `local`, the S3-related variables are not required.
### Using External Services
For convenience, the `docker-compose.yml` file includes services for PostgreSQL, Valkey (Redis), and Meilisearch. However, you can use your own external or managed instances for these services.
To do so:
1. **Update your `.env` file**: Change the host, port, and credential variables to point to your external service instances. For example, you would update `DATABASE_URL`, `REDIS_HOST`, and `MEILI_HOST`.
2. **Modify `docker-compose.yml`**: Remove or comment out the service definitions for `postgres`, `valkey`, and `meilisearch` from your `docker-compose.yml` file.
This will configure the Open Archiver application to connect to your services instead of starting the default ones.
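As a rough sketch, the relevant `.env` entries might look like this (all hostnames and credentials below are placeholders for your own services):
```bash
DATABASE_URL=postgresql://archiver:CHANGE_ME@db.example.internal:5432/open_archive
REDIS_HOST=redis.example.internal
REDIS_PORT=6379
REDIS_PASSWORD=CHANGE_ME
MEILI_HOST=https://search.example.internal
MEILI_MASTER_KEY=CHANGE_ME
```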
### Environment Variable Reference
Here is a complete list of environment variables available for configuration:
#### Application Settings
| Variable | Description | Default Value |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------- |
| `NODE_ENV` | The application environment. | `development` |
| `PORT_BACKEND` | The port for the backend service. | `4000` |
| `PORT_FRONTEND` | The port for the frontend service. | `3000` |
| `APP_URL` | The public-facing URL of your application. This is used by the backend to configure CORS. | `http://localhost:3000` |
| `ORIGIN` | Used by the SvelteKit Node adapter to determine the server's public-facing URL. It should always be set to the value of `APP_URL` (e.g., `ORIGIN=$APP_URL`). | `http://localhost:3000` |
| `SYNC_FREQUENCY` | The frequency of continuous email syncing. See [cron syntax](https://crontab.guru/) for more details. | `* * * * *` |
| `ALL_INCLUSIVE_ARCHIVE` | Set to `true` to include all emails, including Junk and Trash folders, in the email archive. | `false` |
#### Docker Compose Service Configuration
These variables are used by `docker-compose.yml` to configure the services.
| Variable | Description | Default Value |
| ---------------------- | ---------------------------------------------------- | -------------------------------------------------------- |
| `POSTGRES_DB` | The name of the PostgreSQL database. | `open_archive` |
| `POSTGRES_USER` | The username for the PostgreSQL database. | `admin` |
| `POSTGRES_PASSWORD` | The password for the PostgreSQL database. | `password` |
| `DATABASE_URL` | The connection URL for the PostgreSQL database. | `postgresql://admin:password@postgres:5432/open_archive` |
| `MEILI_MASTER_KEY` | The master key for Meilisearch. | `aSampleMasterKey` |
| `MEILI_HOST` | The host for the Meilisearch service. | `http://meilisearch:7700` |
| `MEILI_INDEXING_BATCH` | The number of emails to batch together for indexing. | `500` |
| `REDIS_HOST` | The host for the Valkey (Redis) service. | `valkey` |
| `REDIS_PORT` | The port for the Valkey (Redis) service. | `6379` |
| `REDIS_USER` | Optional Redis username if ACLs are used. | |
| `REDIS_PASSWORD` | The password for the Valkey (Redis) service. | `defaultredispassword` |
| `REDIS_TLS_ENABLED` | Enable or disable TLS for Redis. | `false` |
#### Storage Settings
| Variable | Description | Default Value |
| ------------------------------ | ----------------------------------------------------------------------------------------------------------- | ------------------------- |
| `STORAGE_TYPE` | The storage backend to use (`local` or `s3`). | `local` |
| `BODY_SIZE_LIMIT` | The maximum request body size for uploads. Can be a number in bytes or a string with a unit (e.g., `100M`). | `100M` |
| `STORAGE_LOCAL_ROOT_PATH` | The root path for Open Archiver app data. | `/var/data/open-archiver` |
| `STORAGE_S3_ENDPOINT` | The endpoint for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
| `STORAGE_S3_BUCKET` | The bucket name for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
| `STORAGE_S3_ACCESS_KEY_ID` | The access key ID for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
| `STORAGE_S3_SECRET_ACCESS_KEY` | The secret access key for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
| `STORAGE_S3_REGION` | The region for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
| `STORAGE_S3_FORCE_PATH_STYLE` | Force path-style addressing for S3 (optional). | `false` |
| `STORAGE_ENCRYPTION_KEY` | A 32-byte hex string for AES-256 encryption of files at rest. If not set, files will not be encrypted. | |
#### Security & Authentication
| Variable | Description | Default Value |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| `ENABLE_DELETION` | Enable or disable deletion of emails and ingestion sources. If this option is not set, or is set to any value other than `true`, deletion will be disabled for the entire instance. | `false` |
| `JWT_SECRET` | A secret key for signing JWT tokens. | `a-very-secret-key-that-you-should-change` |
| `JWT_EXPIRES_IN` | The expiration time for JWT tokens. | `7d` |
| ~~`SUPER_API_KEY`~~ (Deprecated) | An API key with super admin privileges. (The SUPER_API_KEY is deprecated since v0.3.0 after we roll out the role-based access control system.) | |
| `RATE_LIMIT_WINDOW_MS` | The window in milliseconds for which API requests are checked. | `900000` (15 minutes) |
| `RATE_LIMIT_MAX_REQUESTS` | The maximum number of API requests allowed from an IP within the window. | `100` |
| `ENCRYPTION_KEY` | A 32-byte hex string for encrypting sensitive data in the database. | |
#### Apache Tika Integration
| Variable | Description | Default Value |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------ |
| `TIKA_URL` | Optional. The URL of an Apache Tika server for advanced text extraction from attachments. If not set, the application falls back to built-in parsers for PDF, Word, and Excel files. | `http://tika:9998` |
## 4. Run the Application
Once you have configured your `.env` file, you can start all the services using Docker Compose:
```bash
docker compose up -d
```
This command will:
- Pull the required Docker images for the frontend, backend, database, and other services.
- Create and start the containers in the background (`-d` flag).
- Create the persistent volumes for your data.
You can check the status of the running containers with:
```bash
docker compose ps
```
## 5. Access the Application
Once the services are running, you can access the Open Archiver web interface by navigating to `http://localhost:3000` in your web browser.
Upon first visit, you will be redirected to the `/setup` page, where you can create your admin account. Make sure you are the first person to access the instance.
If you are not redirected to the `/setup` page but instead see the login page, there might be something wrong with the database. Restart the service and try again.
## 6. Next Steps
After successfully deploying and logging into Open Archiver, the next step is to configure your ingestion sources to start archiving emails.
- [Connecting to Google Workspace](./email-providers/google-workspace.md)
- [Connecting to Microsoft 365](./email-providers/microsoft-365.md)
- [Connecting to a Generic IMAP Server](./email-providers/imap.md)
## Updating Your Installation
To update your Open Archiver instance to the latest version, run the following commands:
```bash
# Pull the latest changes from the repository
git pull
# Pull the latest Docker images
docker compose pull
# Restart the services with the new images
docker compose up -d
```
## Deploying on Coolify
If you are deploying Open Archiver on [Coolify](https://coolify.io/), it is recommended to let Coolify manage the Docker networks for you. This can help avoid potential routing conflicts and simplify your setup.
To do this, you will need to make a small modification to your `docker-compose.yml` file.
### Modify `docker-compose.yml` for Coolify
1. **Open your `docker-compose.yml` file** in a text editor.
2. **Remove all `networks` sections** from the file. This includes the network configuration for each service and the top-level network definition.
Specifically, you need to remove:
- The `networks: - open-archiver-net` lines from the `open-archiver`, `postgres`, `valkey`, and `meilisearch` services.
- The entire `networks:` block at the end of the file.
Here is an example of what to remove from a service:
```diff
services:
open-archiver:
image: logiclabshq/open-archiver:latest
# ... other settings
- networks:
- - open-archiver-net
```
And remove this entire block from the end of the file:
```diff
- networks:
- open-archiver-net:
- driver: bridge
```
3. **Save the modified `docker-compose.yml` file.**
By removing these sections, you allow Coolify to automatically create and manage the necessary networks, ensuring that all services can communicate with each other and are correctly exposed through Coolify's reverse proxy.
After making these changes, you can proceed with deploying your application on Coolify as you normally would.
## Where is my data stored (When using local storage and Docker)?
If you are using local storage for your emails, the default `docker-compose.yml` stores data in a Docker "named volume" (`archiver-data`). That's why you may not see the files in the `./data/open-archiver` directory you created.
1. **List all Docker volumes**:
Run this command to see all the volumes on your system:
```bash
docker volume ls
```
2. **Identify the correct volume**:
Look through the list for a volume name that ends with `_archiver-data`. The part before that is usually your project's directory name. For example, if your project is in a folder named `OpenArchiver`, the volume will be `openarchiver_archiver-data`. Note that the prefix may also be a randomly generated hash.
3. **Inspect the correct volume**:
Once you've identified the correct volume name, use it in the `inspect` command. For example:
```bash
docker volume inspect <your_volume_name_here>
```
This will give you the correct `Mountpoint` path where your data is being stored. It will look something like this (the exact path will vary depending on your system):
```json
{
"CreatedAt": "2025-07-25T11:22:19Z",
"Driver": "local",
"Labels": {
"com.docker.compose.config-hash": "---",
"com.docker.compose.project": "---",
"com.docker.compose.version": "2.38.2",
"com.docker.compose.volume": "us8wwos0o4ok4go4gc8cog84_archiver-data"
},
"Mountpoint": "/var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data",
"Name": "us8wwos0o4ok4go4gc8cog84_archiver-data",
"Options": null,
"Scope": "local"
}
```
In this example, the data is located at `/var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data`. You can then `cd` into that directory to see your files.
### To save data to a specific folder
To save the data to a specific folder on your machine, you'll need to make a change to your `docker-compose.yml`. You need to switch from a named volume to a "bind mount".
Here's how to do it:
1. **Edit `docker-compose.yml`**:
Open the `docker-compose.yml` file and find the `open-archiver` service. You're going to change the `volumes` section.
**Change this:**
```yaml
services:
open-archiver:
# ... other config
volumes:
- archiver-data:/var/data/open-archiver
```
**To this:**
```yaml
services:
open-archiver:
# ... other config
volumes:
- ./data/open-archiver:/var/data/open-archiver
```
You'll also want to remove the `archiver-data` volume definition at the bottom of the file, since it's no longer needed.
**Remove this whole block:**
```yaml
volumes:
# ... other volumes
archiver-data:
driver: local
```
2. **Restart your containers**:
After you've saved the changes, run the following command in your terminal to apply them. The `--force-recreate` flag will ensure the container is recreated with the new volume settings.
```bash
docker compose up -d --force-recreate
```
After this, any new data will be saved directly into the `./data/open-archiver` folder in your project directory.

View File

@@ -0,0 +1,37 @@
# Integrity Check
Open Archiver allows you to verify the integrity of your archived emails and their attachments. This guide explains how the integrity check works and what the results mean.
## How It Works
When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
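You can also perform a manual spot check by hashing a stored file yourself and comparing the result with the value recorded in the database. This sketch assumes local, unencrypted storage (no `STORAGE_ENCRYPTION_KEY` set); the path is a placeholder, as the exact layout depends on your configuration.
```bash
sha256sum /var/data/open-archiver/path/to/archived-email.eml
```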
## The Integrity Report
When you view an email in the Open Archiver interface, an integrity report is automatically generated and displayed. This report provides a clear, at-a-glance status for the email file and each of its attachments.
### Statuses
- **Valid (Green Badge):** A "Valid" status means that the current signature of the file matches the original signature stored in the database. This is the expected status and indicates that the file's integrity is intact.
- **Invalid (Red Badge):** An "Invalid" status means that the current signature of the file does _not_ match the original signature. This indicates that the file's content has changed since it was archived.
### Reasons for an "Invalid" Status
If a file is marked as "Invalid," you can hover over the badge to see a reason for the failure. Common reasons include:
- **Stored hash does not match current hash:** This is the most common reason and indicates that the file's content has been modified. This could be due to accidental changes, data corruption, or unauthorized tampering.
- **Could not read attachment file from storage:** This message indicates that the file could not be read from its storage location. This could be due to a storage system issue, a file permission problem, or because the file has been deleted.
## What to Do If an Integrity Check Fails
If you encounter an "Invalid" status for an email or attachment, it is important to investigate the issue. Here are some steps you can take:
1. **Check Storage:** Verify that the file exists in its storage location and that its permissions are correct.
2. **Review Audit Logs:** If you have audit logging enabled, review the logs for any unauthorized access or modifications to the file.
3. **Restore from Backup:** If you suspect data corruption, you may need to restore the affected file from a backup.
The integrity check feature is a crucial tool for ensuring the long-term reliability and trustworthiness of your email archive. By regularly monitoring the integrity of your archived data, you can be confident that your records are accurate and complete.

View File

@@ -0,0 +1,32 @@
# System Settings
System settings allow administrators to configure the global look and theme of the application. These settings apply to all users.
## Configuration
### Language
This setting determines the default display language for the application UI. The selected language will be used for all interface elements, including menus, labels, and messages.
> **Important:** When the language is changed, the backend (API) language will only change after a restart of the server. The frontend will update immediately.
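If you run the Docker Compose setup, a sketch of restarting the backend service (assuming the service is named `open-archiver`):
```bash
docker compose restart open-archiver
```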
Supported languages:
- English
- German
- French
- Estonian
- Spanish
- Italian
- Portuguese
- Dutch
- Greek
- Japanese
### Default Theme
This setting controls the default color theme for the application. Users can choose between light, dark, or system default. The system default theme will sync with the user's operating system theme.
### Support Email
This setting allows administrators to provide a public-facing email address for user support inquiries. This email address may be displayed on error pages or in other areas where users may need to contact support.

View File

@@ -0,0 +1,75 @@
# Troubleshooting CORS Errors
Cross-Origin Resource Sharing (CORS) is a security feature that controls how web applications in one domain can request and interact with resources in another. If not configured correctly, you may encounter errors when performing actions like uploading files.
This guide will help you diagnose and resolve common CORS-related issues.
## Symptoms
You may be experiencing a CORS issue if you see one of the following errors in your browser's developer console or in the application's logs:
- `TypeError: fetch failed`
- `Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource.`
- `Unexpected token 'C', "Cross-site"... is not valid JSON`
- A JSON error response similar to the following:
```json
{
"message": "CORS Error: This origin is not allowed.",
"requiredOrigin": "http://localhost:3000",
"receivedOrigin": "https://localhost:3000"
}
```
## Root Cause
These errors typically occur when the URL you are using to access the application in your browser does not exactly match the `APP_URL` configured in your `.env` file.
This can happen for several reasons:
- You are accessing the application via a different port.
- You are using a reverse proxy that changes the protocol (e.g., from `http` to `https`).
- The SvelteKit server, in a production build, is incorrectly guessing its public-facing URL.
## Solution
The solution is to ensure that the application's frontend and backend are correctly configured with the public-facing URL of your instance. This is done by setting two environment variables: `APP_URL` and `ORIGIN`.
1. **Open your `.env` file** in a text editor.
2. **Set `APP_URL`**: Define the `APP_URL` variable with the exact URL you use to access the application in your browser.
```env
APP_URL=http://your-domain-or-ip:3000
```
3. **Set `ORIGIN`**: The SvelteKit server requires a specific `ORIGIN` variable to correctly identify itself. This should always be set to the value of your `APP_URL`.
```env
ORIGIN=$APP_URL
```
By using `$APP_URL`, you ensure that both variables are always in sync.
### Example Configuration
If you are running the application locally on port `3000`, your configuration should look like this:
```env
APP_URL=http://localhost:3000
ORIGIN=$APP_URL
```
If your application is behind a reverse proxy and is accessible at `https://archive.mycompany.com`, your configuration should be:
```env
APP_URL=https://archive.mycompany.com
ORIGIN=$APP_URL
```
After making these changes to your `.env` file, you must restart the application for them to take effect:
```bash
docker compose up -d --force-recreate
```
This will ensure that the backend's CORS policy and the frontend server's origin are correctly aligned, resolving the errors.
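As a rough verification sketch, you can send a request with an explicit `Origin` header and inspect the response headers. The URL below reuses the example domain from above, and the `/api/` path is a hypothetical placeholder; adjust both to match how your instance and reverse proxy are actually exposed.
```bash
# An Access-Control-Allow-Origin header matching your APP_URL indicates
# the CORS configuration is in effect.
curl -s -D - -o /dev/null \
    -H "Origin: https://archive.mycompany.com" \
    https://archive.mycompany.com/api/
```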

View File

@@ -0,0 +1,141 @@
# Upgrading Meilisearch
Meilisearch, the search engine used by Open Archiver, requires a manual data migration process when upgrading to a new version. This is because Meilisearch databases are only compatible with the specific version that created them.
If an Open Archiver upgrade includes a major Meilisearch version change, you will need to migrate your search index by following the process below.
## Experimental: Dumpless Upgrade
> **Warning:** This feature is currently **experimental**. We do not recommend using it for production environments until it is marked as stable. Please use the [standard migration process](#standard-migration-process-recommended) instead. Proceed with caution.
Meilisearch recently introduced an experimental "dumpless" upgrade method. This allows you to migrate the database to a new Meilisearch version without manually creating and importing a dump. However, please note that **dumpless upgrades are not currently atomic**. If the process fails, your database may become corrupted, resulting in data loss.
**Prerequisite: Create a Snapshot**
Before attempting a dumpless upgrade, you **must** take a snapshot of your instance. This ensures you have a recovery point if the upgrade fails. Learn how to create snapshots in the [official Meilisearch documentation](https://www.meilisearch.com/docs/learn/data_backup/snapshots).
### How to Enable
To perform a dumpless upgrade, you need to configure your Meilisearch instance with the experimental flag. You can do this in one of two ways:
**Option 1: Using an Environment Variable**
Add the `MEILI_EXPERIMENTAL_DUMPLESS_UPGRADE` environment variable to your `docker-compose.yml` file for the Meilisearch service.
```yaml
services:
meilisearch:
image: getmeili/meilisearch:v1.x # The new version you want to upgrade to
environment:
- MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
- MEILI_EXPERIMENTAL_DUMPLESS_UPGRADE=true
```
**Option 2: Using a CLI Option**
Alternatively, you can pass the `--experimental-dumpless-upgrade` flag in the command section of your `docker-compose.yml`.
```yaml
services:
meilisearch:
image: getmeili/meilisearch:v1.x # The new version you want to upgrade to
command: meilisearch --experimental-dumpless-upgrade
```
After updating your configuration, restart your container:
```bash
docker compose up -d
```
Meilisearch will attempt to migrate your database to the new version automatically.
---
## Standard Migration Process (Recommended)
For self-hosted instances using Docker Compose, the recommended migration process involves creating a data dump from your current Meilisearch instance, upgrading the Docker image, and then importing that dump into the new version.
### Step 1: Create a Dump
Before upgrading, you must create a dump of your existing Meilisearch data. You can do this by sending a POST request to the `/dumps` endpoint of the Meilisearch API.
1. **Find your Meilisearch container name**:
```bash
docker compose ps
```
Look for the service name that corresponds to Meilisearch, usually `meilisearch`.
2. **Execute the dump command**:
You will need your Meilisearch Admin API key, which can be found in your `.env` file as `MEILI_MASTER_KEY`.
```bash
curl -X POST 'http://localhost:7700/dumps' \
-H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"
```
This will start the dump creation process. The dump file will be created inside the `meili_data` volume used by the Meilisearch container.
3. **Monitor the dump status**:
The dump creation request returns a `taskUid`. You can use this to check the status of the dump.
For more details on dump and import, see the [official Meilisearch documentation](https://www.meilisearch.com/docs/learn/update_and_migration/updating).
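To monitor the task from the previous step, you can poll the Meilisearch tasks endpoint until its `status` is `succeeded`. Replace the placeholders with your task UID and master key:
```bash
curl -s 'http://localhost:7700/tasks/YOUR_TASK_UID' \
    -H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"
```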
### Step 2: Upgrade Your Open Archiver Instance
Once the dump is successfully created, you can proceed with the standard Open Archiver upgrade process.
1. **Pull the latest changes and Docker images**:
```bash
git pull
docker compose pull
```
2. **Stop the running services**:
```bash
docker compose down
```
### Step 3: Import the Dump
Now, you need to restart the services while telling Meilisearch to import from your dump file.
1. **Modify `docker-compose.yml`**:
You need to temporarily add the `--import-dump` flag to the Meilisearch service command. Find the `meilisearch` service in your `docker-compose.yml` and modify the `command` section.
You will need the name of your dump file. It will be a `.dump` file located in the directory mapped to `/meili_data` inside the container.
```yaml
services:
meilisearch:
# ... other service config
command:
[
'--master-key=${MEILI_MASTER_KEY}',
'--env=production',
'--import-dump=/meili_data/dumps/YOUR_DUMP_FILE.dump',
]
```
2. **Restart the services**:
```bash
docker compose up -d
```
Meilisearch will now start and import the data from the dump file. This may take some time depending on the size of your index.
### Step 4: Clean Up
Once the import is complete and you have verified that your search is working correctly, you should remove the `--import-dump` flag from your `docker-compose.yml` to prevent it from running on every startup.
1. **Remove the `--import-dump` line** from the `command` section of the `meilisearch` service in `docker-compose.yml`.
2. **Restart the services** one last time:
```bash
docker compose up -d
```
Your Meilisearch instance is now upgraded and running with your migrated data.
For more advanced scenarios or troubleshooting, please refer to the **[official Meilisearch migration guide](https://www.meilisearch.com/docs/learn/update_and_migration/updating)**.

View File

@@ -0,0 +1,42 @@
# Upgrading Your Instance
This guide provides instructions for upgrading your Open Archiver instance to the latest version.
## Checking for New Versions
Open Archiver automatically checks for new versions and will display a notification in the footer of the web interface when an update is available. You can find a list of all releases and their release notes on the [GitHub Releases](https://github.com/LogicLabs-OU/OpenArchiver/releases) page.
## Upgrading Your Instance
To upgrade your Open Archiver instance, follow these steps:
1. **Pull the latest changes from the repository**:
```bash
git pull
```
2. **Pull the latest Docker images**:
```bash
docker compose pull
```
3. **Restart the services with the new images**:
```bash
docker compose up -d
```
This will restart your Open Archiver instance with the latest version of the application.
## Migrating Data
When you upgrade to a new version, database migrations are applied automatically when the application starts up. This ensures that your database schema is always up-to-date with the latest version of the application.
No manual intervention is required for database migrations.
## Upgrading Meilisearch
When an Open Archiver update includes a major version change for Meilisearch, you will need to manually migrate your search data. This process is not covered by the standard upgrade commands.
For detailed instructions, please see the [Meilisearch Upgrade Guide](./meilisearch-upgrade.md).

open-archiver.yml · 79 lines · Normal file
View File

@@ -0,0 +1,79 @@
# documentation: https://openarchiver.com
# slogan: A self-hosted, open-source email archiving solution with full-text search capability.
# tags: email archiving,email,compliance,search
# logo: svgs/openarchiver.svg
# port: 3000
services:
open-archiver:
image: logiclabshq/open-archiver:latest
environment:
- SERVICE_URL_3000
- SERVICE_URL=${SERVICE_URL_3000}
- PORT_BACKEND=${PORT_BACKEND:-4000}
- PORT_FRONTEND=${PORT_FRONTEND:-3000}
- NODE_ENV=${NODE_ENV:-production}
- SYNC_FREQUENCY=${SYNC_FREQUENCY:-* * * * *}
- POSTGRES_DB=${POSTGRES_DB:-open_archive}
- POSTGRES_USER=${POSTGRES_USER:-admin}
- POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}
- DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
- MEILI_MASTER_KEY=${SERVICE_PASSWORD_MEILISEARCH}
- MEILI_HOST=http://meilisearch:7700
- REDIS_HOST=valkey
- REDIS_PORT=6379
- REDIS_USER=default
- REDIS_PASSWORD=${SERVICE_PASSWORD_VALKEY}
- REDIS_TLS_ENABLED=false
- STORAGE_TYPE=${STORAGE_TYPE:-local}
- STORAGE_LOCAL_ROOT_PATH=${STORAGE_LOCAL_ROOT_PATH:-/var/data/open-archiver}
- BODY_SIZE_LIMIT=${BODY_SIZE_LIMIT:-100M}
- STORAGE_S3_ENDPOINT=${STORAGE_S3_ENDPOINT}
- STORAGE_S3_BUCKET=${STORAGE_S3_BUCKET}
- STORAGE_S3_ACCESS_KEY_ID=${STORAGE_S3_ACCESS_KEY_ID}
- STORAGE_S3_SECRET_ACCESS_KEY=${STORAGE_S3_SECRET_ACCESS_KEY}
- STORAGE_S3_REGION=${STORAGE_S3_REGION}
- STORAGE_S3_FORCE_PATH_STYLE=${STORAGE_S3_FORCE_PATH_STYLE:-false}
- JWT_SECRET=${SERVICE_BASE64_128_JWT}
- JWT_EXPIRES_IN=${JWT_EXPIRES_IN:-7d}
- ENCRYPTION_KEY=${SERVICE_BASE64_64_ENCRYPTIONKEY}
- RATE_LIMIT_WINDOW_MS=${RATE_LIMIT_WINDOW_MS:-60000}
- RATE_LIMIT_MAX_REQUESTS=${RATE_LIMIT_MAX_REQUESTS:-100}
volumes:
- archiver-data:/var/data/open-archiver
depends_on:
postgres:
condition: service_healthy
valkey:
condition: service_started
meilisearch:
condition: service_started
postgres:
image: postgres:17-alpine
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}
- LC_ALL=C
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}']
interval: 10s
timeout: 20s
retries: 10
valkey:
image: valkey/valkey:8-alpine
command: valkey-server --requirepass ${SERVICE_PASSWORD_VALKEY}
volumes:
- valkeydata:/data
meilisearch:
image: getmeili/meilisearch:v1.15
environment:
- MEILI_MASTER_KEY=${SERVICE_PASSWORD_MEILISEARCH}
- MEILI_SCHEDULE_SNAPSHOT=86400
volumes:
- meilidata:/meili_data

View File

@@ -1,24 +1,50 @@
{
"name": "open-archiver",
"private": true,
"scripts": {
"dev": "dotenv -- pnpm --filter \"./packages/*\" --parallel dev",
"build": "pnpm --filter \"./packages/*\" --parallel build",
"start:workers": "dotenv -- concurrently \"pnpm --filter @open-archiver/backend start:ingestion-worker\" \"pnpm --filter @open-archiver/backend start:indexing-worker\""
},
"devDependencies": {
"concurrently": "^9.2.0",
"dotenv-cli": "8.0.0",
"typescript": "5.8.3"
},
"packageManager": "pnpm@10.13.1",
"engines": {
"node": ">=22.0.0",
"pnpm": "10.13.1"
},
"pnpm": {
"onlyBuiltDependencies": [
"esbuild"
]
}
"name": "open-archiver",
"version": "0.4.2",
"private": true,
"license": "SEE LICENSE IN LICENSE file",
"scripts": {
"build:oss": "pnpm --filter \"./packages/*\" --filter \"!./packages/enterprise\" --filter \"./apps/open-archiver\" build",
"build:enterprise": "cross-env VITE_ENTERPRISE_MODE=true pnpm build",
"start:oss": "dotenv -- concurrently \"node apps/open-archiver/dist/index.js\" \"pnpm --filter @open-archiver/frontend start\"",
"start:enterprise": "dotenv -- concurrently \"node apps/open-archiver-enterprise/dist/index.js\" \"pnpm --filter @open-archiver/frontend start\"",
"dev:enterprise": "cross-env VITE_ENTERPRISE_MODE=true dotenv -- pnpm --filter \"@open-archiver/*\" --filter \"open-archiver-enterprise-app\" --parallel dev",
"dev:oss": "dotenv -- pnpm --filter \"./packages/*\" --filter \"!./packages/@open-archiver/enterprise\" --filter \"open-archiver-app\" --parallel dev",
"build": "pnpm --filter \"./packages/*\" --filter \"./apps/*\" build",
"start": "dotenv -- pnpm --filter \"open-archiver-app\" --parallel start",
"start:workers": "dotenv -- concurrently \"pnpm --filter @open-archiver/backend start:ingestion-worker\" \"pnpm --filter @open-archiver/backend start:indexing-worker\" \"pnpm --filter @open-archiver/backend start:sync-scheduler\"",
"start:workers:dev": "dotenv -- concurrently \"pnpm --filter @open-archiver/backend start:ingestion-worker:dev\" \"pnpm --filter @open-archiver/backend start:indexing-worker:dev\" \"pnpm --filter @open-archiver/backend start:sync-scheduler:dev\"",
"db:generate": "dotenv -- pnpm --filter @open-archiver/backend db:generate",
"db:migrate": "dotenv -- pnpm --filter @open-archiver/backend db:migrate",
"db:migrate:dev": "dotenv -- pnpm --filter @open-archiver/backend db:migrate:dev",
"docker-start:oss": "concurrently \"pnpm start:workers\" \"pnpm start:oss\"",
"docker-start:enterprise": "concurrently \"pnpm start:workers\" \"pnpm start:enterprise\"",
"docs:dev": "vitepress dev docs --port 3009",
"docs:build": "vitepress build docs",
"docs:preview": "vitepress preview docs",
"format": "prettier --write .",
"lint": "prettier --check ."
},
"dependencies": {
"concurrently": "^9.2.0",
"dotenv-cli": "8.0.0"
},
"devDependencies": {
"cross-env": "^10.0.0",
"prettier": "^3.6.2",
"prettier-plugin-svelte": "^3.4.0",
"prettier-plugin-tailwindcss": "^0.6.14",
"typescript": "5.8.3",
"vitepress": "^1.6.4"
},
"packageManager": "pnpm@10.13.1",
"engines": {
"node": ">=22.0.0",
"pnpm": "10.13.1"
},
"pnpm": {
"onlyBuiltDependencies": [
"esbuild"
]
}
}

View File

@@ -1,19 +1,19 @@
import { defineConfig } from 'drizzle-kit';
import { config } from 'dotenv';
config({ path: '../../.env' });
config();
if (!process.env.DATABASE_URL) {
throw new Error('DATABASE_URL is not set in the .env file');
throw new Error('DATABASE_URL is not set in the .env file');
}
export default defineConfig({
schema: './src/database/schema.ts',
out: './src/database/migrations',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL,
},
verbose: true,
strict: true,
schema: './src/database/schema.ts',
out: './src/database/migrations',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL,
},
verbose: true,
strict: true,
});

View File

@@ -1,57 +1,81 @@
{
"name": "@open-archiver/backend",
"version": "0.1.0",
"private": true,
"main": "dist/index.js",
"scripts": {
"dev": "ts-node-dev --respawn --transpile-only src/index.ts ",
"build": "tsc",
"prestart": "npm run build",
"start": "node dist/index.js",
"start:ingestion-worker": "ts-node-dev --respawn --transpile-only src/workers/ingestion.worker.ts",
"start:indexing-worker": "ts-node-dev --respawn --transpile-only src/workers/indexing.worker.ts",
"db:generate": "drizzle-kit generate --config=drizzle.config.ts",
"db:push": "drizzle-kit push --config=drizzle.config.ts",
"db:migrate": "ts-node-dev src/database/migrate.ts"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.844.0",
"@aws-sdk/lib-storage": "^3.844.0",
"@azure/msal-node": "^3.6.3",
"@open-archiver/types": "workspace:*",
"axios": "^1.10.0",
"bcryptjs": "^3.0.2",
"bullmq": "^5.56.3",
"dotenv": "^17.2.0",
"drizzle-orm": "^0.44.2",
"express": "^5.1.0",
"express-validator": "^7.2.1",
"google-auth-library": "^10.1.0",
"googleapis": "^152.0.0",
"imapflow": "^1.0.191",
"jose": "^6.0.11",
"mailparser": "^3.7.4",
"mammoth": "^1.9.1",
"meilisearch": "^0.51.0",
"pdf2json": "^3.1.6",
"pg": "^8.16.3",
"pino": "^9.7.0",
"pino-pretty": "^13.0.0",
"postgres": "^3.4.7",
"reflect-metadata": "^0.2.2",
"sqlite3": "^5.1.7",
"tsconfig-paths": "^4.2.0",
"xlsx": "^0.18.5"
},
"devDependencies": {
"@bull-board/api": "^6.11.0",
"@bull-board/express": "^6.11.0",
"@types/express": "^5.0.3",
"@types/mailparser": "^3.4.6",
"@types/node": "^24.0.12",
"bull-board": "^2.1.3",
"drizzle-kit": "^0.31.4",
"ts-node-dev": "^2.0.0",
"typescript": "^5.8.3"
}
"name": "@open-archiver/backend",
"version": "0.1.0",
"private": true,
"license": "SEE LICENSE IN LICENSE file",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"scripts": {
"build": "tsc && pnpm copy-assets",
"dev": "tsc --watch",
"copy-assets": "cp -r src/locales dist/locales",
"start:ingestion-worker": "node dist/workers/ingestion.worker.js",
"start:indexing-worker": "node dist/workers/indexing.worker.js",
"start:sync-scheduler": "node dist/jobs/schedulers/sync-scheduler.js",
"start:ingestion-worker:dev": "ts-node-dev --respawn --transpile-only src/workers/ingestion.worker.ts",
"start:indexing-worker:dev": "ts-node-dev --respawn --transpile-only src/workers/indexing.worker.ts",
"start:sync-scheduler:dev": "ts-node-dev --respawn --transpile-only src/jobs/schedulers/sync-scheduler.ts",
"db:generate": "drizzle-kit generate --config=drizzle.config.ts",
"db:push": "drizzle-kit push --config=drizzle.config.ts",
"db:migrate": "node dist/database/migrate.js",
"db:migrate:dev": "ts-node-dev src/database/migrate.ts"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.844.0",
"@aws-sdk/lib-storage": "^3.844.0",
"@azure/msal-node": "^3.6.3",
"@casl/ability": "^6.7.3",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@open-archiver/types": "workspace:*",
"archiver": "^7.0.1",
"axios": "^1.10.0",
"bcryptjs": "^3.0.2",
"bullmq": "^5.56.3",
"busboy": "^1.6.0",
"cors": "^2.8.5",
"cross-fetch": "^4.1.0",
"deepmerge-ts": "^7.1.5",
"dotenv": "^17.2.0",
"drizzle-kit": "^0.31.4",
"drizzle-orm": "^0.44.2",
"express": "^5.1.0",
"express-rate-limit": "^8.0.1",
"express-validator": "^7.2.1",
"google-auth-library": "^10.1.0",
"googleapis": "^152.0.0",
"i18next": "^25.4.2",
"i18next-fs-backend": "^2.6.0",
"i18next-http-middleware": "^3.8.0",
"imapflow": "^1.0.191",
"jose": "^6.0.11",
"mailparser": "^3.7.4",
"mammoth": "^1.9.1",
"meilisearch": "^0.51.0",
"multer": "^2.0.2",
"pdf2json": "^3.1.6",
"pg": "^8.16.3",
"pino": "^9.7.0",
"pino-pretty": "^13.0.0",
"postgres": "^3.4.7",
"pst-extractor": "^1.11.0",
"reflect-metadata": "^0.2.2",
"sqlite3": "^5.1.7",
"xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz",
"yauzl": "^3.2.0",
"zod": "^4.1.5"
},
"devDependencies": {
"@types/archiver": "^6.0.3",
"@types/busboy": "^1.5.4",
"@types/cors": "^2.8.19",
"@types/express": "^5.0.3",
"@types/mailparser": "^3.4.6",
"@types/microsoft-graph": "^2.40.1",
"@types/multer": "^2.0.0",
"@types/node": "^24.0.12",
"@types/yauzl": "^2.10.3",
"ts-node-dev": "^2.0.0",
"tsconfig-paths": "^4.2.0",
"typescript": "^5.8.3"
}
}

View File

@@ -0,0 +1,81 @@
import { Request, Response } from 'express';
import { ApiKeyService } from '../../services/ApiKeyService';
import { z } from 'zod';
import { UserService } from '../../services/UserService';
import { config } from '../../config';
const generateApiKeySchema = z.object({
name: z
.string()
.min(1, 'API key name must be at least 1 character')
.max(255, 'API key name must not be more than 255 characters'),
expiresInDays: z
.number()
.int()
.positive('Only a positive number is allowed')
.max(730, 'The API key must expire within 2 years / 730 days.'),
});
export class ApiKeyController {
private userService = new UserService();
public generateApiKey = async (req: Request, res: Response) => {
try {
if (config.app.isDemo) {
return res.status(403).json({ message: req.t('errors.demoMode') });
}
const { name, expiresInDays } = generateApiKeySchema.parse(req.body);
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const userId = req.user.sub;
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
const key = await ApiKeyService.generate(
userId,
name,
expiresInDays,
actor,
req.ip || 'unknown'
);
res.status(201).json({ key });
} catch (error) {
if (error instanceof z.ZodError) {
return res
.status(400)
.json({ message: req.t('api.requestBodyInvalid'), errors: error.message });
}
res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public getApiKeys = async (req: Request, res: Response) => {
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const userId = req.user.sub;
const keys = await ApiKeyService.getKeys(userId);
res.status(200).json(keys);
};
public deleteApiKey = async (req: Request, res: Response) => {
if (config.app.isDemo) {
return res.status(403).json({ message: req.t('errors.demoMode') });
}
const { id } = req.params;
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const userId = req.user.sub;
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
await ApiKeyService.deleteKey(id, userId, actor, req.ip || 'unknown');
res.status(204).send({ message: req.t('apiKeys.deleteSuccess') });
};
}

View File

@@ -1,36 +1,86 @@
import { Request, Response } from 'express';
import { ArchivedEmailService } from '../../services/ArchivedEmailService';
import { UserService } from '../../services/UserService';
import { checkDeletionEnabled } from '../../helpers/deletionGuard';
export class ArchivedEmailController {
public getArchivedEmails = async (req: Request, res: Response): Promise<Response> => {
try {
const { ingestionSourceId } = req.params;
const page = parseInt(req.query.page as string, 10) || 1;
const limit = parseInt(req.query.limit as string, 10) || 10;
private userService = new UserService();
public getArchivedEmails = async (req: Request, res: Response): Promise<Response> => {
try {
const { ingestionSourceId } = req.params;
const page = parseInt(req.query.page as string, 10) || 1;
const limit = parseInt(req.query.limit as string, 10) || 10;
const userId = req.user?.sub;
const result = await ArchivedEmailService.getArchivedEmails(
ingestionSourceId,
page,
limit
);
return res.status(200).json(result);
} catch (error) {
console.error('Get archived emails error:', error);
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
public getArchivedEmailById = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const email = await ArchivedEmailService.getArchivedEmailById(id);
if (!email) {
return res.status(404).json({ message: 'Archived email not found' });
}
return res.status(200).json(email);
} catch (error) {
console.error(`Get archived email by id ${req.params.id} error:`, error);
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
const result = await ArchivedEmailService.getArchivedEmails(
ingestionSourceId,
page,
limit,
userId
);
return res.status(200).json(result);
} catch (error) {
console.error('Get archived emails error:', error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public getArchivedEmailById = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const email = await ArchivedEmailService.getArchivedEmailById(
id,
userId,
actor,
req.ip || 'unknown'
);
if (!email) {
return res.status(404).json({ message: req.t('archivedEmail.notFound') });
}
return res.status(200).json(email);
} catch (error) {
console.error(`Get archived email by id ${req.params.id} error:`, error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public deleteArchivedEmail = async (req: Request, res: Response): Promise<Response> => {
try {
checkDeletionEnabled();
const { id } = req.params;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
await ArchivedEmailService.deleteArchivedEmail(id, actor, req.ip || 'unknown');
return res.status(204).send();
} catch (error) {
console.error(`Delete archived email ${req.params.id} error:`, error);
if (error instanceof Error) {
if (error.message === 'Archived email not found') {
return res.status(404).json({ message: req.t('archivedEmail.notFound') });
}
return res.status(500).json({ message: error.message });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
}

View File

@@ -1,32 +1,130 @@
import type { Request, Response } from 'express';
import type { IAuthService } from '../../services/AuthService';
import { AuthService } from '../../services/AuthService';
import { UserService } from '../../services/UserService';
import { IamService } from '../../services/IamService';
import { db } from '../../database';
import * as schema from '../../database/schema';
import { eq, sql } from 'drizzle-orm';
import 'dotenv/config';
import { AuthorizationService } from '../../services/AuthorizationService';
import { CaslPolicy } from '@open-archiver/types';
export class AuthController {
#authService: IAuthService;
#authService: AuthService;
#userService: UserService;
constructor(authService: IAuthService) {
this.#authService = authService;
}
constructor(authService: AuthService, userService: UserService) {
this.#authService = authService;
this.#userService = userService;
}
/**
* Only used for setting up the instance, should only be displayed once upon instance set up.
* @param req
* @param res
* @returns
*/
public setup = async (req: Request, res: Response): Promise<Response> => {
const { email, password, first_name, last_name } = req.body;
public login = async (req: Request, res: Response): Promise<Response> => {
const { email, password } = req.body;
if (!email || !password || !first_name || !last_name) {
return res.status(400).json({ message: req.t('auth.setup.allFieldsRequired') });
}
if (!email || !password) {
return res.status(400).json({ message: 'Email and password are required' });
}
try {
const userCountResult = await db
.select({ count: sql<number>`count(*)` })
.from(schema.users);
const userCount = Number(userCountResult[0].count);
try {
const result = await this.#authService.login(email, password);
if (userCount > 0) {
return res.status(403).json({ message: req.t('auth.setup.alreadyCompleted') });
}
if (!result) {
return res.status(401).json({ message: 'Invalid credentials' });
}
const newUser = await this.#userService.createAdminUser(
{ email, password, first_name, last_name },
true
);
const result = await this.#authService.login(email, password, req.ip || 'unknown');
return res.status(201).json(result);
} catch (error) {
console.error('Setup error:', error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
return res.status(200).json(result);
} catch (error) {
// In a real application, you'd want to log this error.
console.error('Login error:', error);
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public login = async (req: Request, res: Response): Promise<Response> => {
const { email, password } = req.body;
if (!email || !password) {
return res.status(400).json({ message: req.t('auth.login.emailAndPasswordRequired') });
}
try {
const result = await this.#authService.login(email, password, req.ip || 'unknown');
if (!result) {
return res.status(401).json({ message: req.t('auth.login.invalidCredentials') });
}
return res.status(200).json(result);
} catch (error) {
console.error('Login error:', error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public status = async (req: Request, res: Response): Promise<Response> => {
try {
const users = await db.select().from(schema.users);
/**
* Handle the case where the only user has the "Super Admin" role but lacks actual Super Admin permission because the role was created by an earlier version; upgrade that "Super Admin" role to the one used in the current version.
*/
if (users.length === 1) {
const iamService = new IamService();
const userRoles = await iamService.getRolesForUser(users[0].id);
if (userRoles.some((r) => r.name === 'Super Admin')) {
const authorizationService = new AuthorizationService();
const hasAdminPermission = await authorizationService.can(
users[0].id,
'manage',
'all'
);
if (!hasAdminPermission) {
const superAdminPolicies: CaslPolicy[] = [
{
action: 'manage',
subject: 'all',
},
];
await db
.update(schema.roles)
.set({
policies: superAdminPolicies,
slug: 'predefined_super_admin',
})
.where(eq(schema.roles.name, 'Super Admin'));
}
}
}
// In case the instance was set up on an older version using admin user environment variables, create the admin user from those variables.
const needsSetupUser = users.length === 0;
if (needsSetupUser && process.env.ADMIN_EMAIL && process.env.ADMIN_PASSWORD) {
await this.#userService.createAdminUser(
{
email: process.env.ADMIN_EMAIL,
password: process.env.ADMIN_PASSWORD,
first_name: 'Admin',
last_name: 'User',
},
true
);
return res.status(200).json({ needsSetup: false });
}
return res.status(200).json({ needsSetup: needsSetupUser });
} catch (error) {
console.error('Status check error:', error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
}

View File

@@ -0,0 +1,31 @@
import { Request, Response } from 'express';
import { dashboardService } from '../../services/DashboardService';
class DashboardController {
public async getStats(req: Request, res: Response) {
const stats = await dashboardService.getStats();
res.json(stats);
}
public async getIngestionHistory(req: Request, res: Response) {
const history = await dashboardService.getIngestionHistory();
res.json(history);
}
public async getIngestionSources(req: Request, res: Response) {
const sources = await dashboardService.getIngestionSources();
res.json(sources);
}
public async getRecentSyncs(req: Request, res: Response) {
const syncs = await dashboardService.getRecentSyncs();
res.json(syncs);
}
public async getIndexedInsights(req: Request, res: Response) {
const insights = await dashboardService.getIndexedInsights();
res.json(insights);
}
}
export const dashboardController = new DashboardController();

View File

@@ -0,0 +1,151 @@
import { Request, Response } from 'express';
import { IamService } from '../../services/IamService';
import { PolicyValidator } from '../../iam-policy/policy-validator';
import type { CaslPolicy } from '@open-archiver/types';
import { logger } from '../../config/logger';
export class IamController {
#iamService: IamService;
constructor(iamService: IamService) {
this.#iamService = iamService;
}
public getRoles = async (req: Request, res: Response): Promise<void> => {
try {
let roles = await this.#iamService.getRoles();
if (!roles.some((r) => r.slug?.includes('predefined_'))) {
// create predefined roles
logger.info({}, 'Creating predefined roles');
await this.createDefaultRoles();
}
res.status(200).json(roles);
} catch (error) {
res.status(500).json({ message: req.t('iam.failedToGetRoles') });
}
};
public getRoleById = async (req: Request, res: Response): Promise<void> => {
const { id } = req.params;
try {
const role = await this.#iamService.getRoleById(id);
if (role) {
res.status(200).json(role);
} else {
res.status(404).json({ message: req.t('iam.roleNotFound') });
}
} catch (error) {
res.status(500).json({ message: req.t('iam.failedToGetRole') });
}
};
public createRole = async (req: Request, res: Response) => {
const { name, policies } = req.body;
if (!name || !policies) {
res.status(400).json({ message: req.t('iam.missingRoleFields') });
return;
}
try {
for (const statement of policies) {
const { valid, reason } = PolicyValidator.isValid(statement as CaslPolicy);
if (!valid) {
res.status(400).json({ message: `${req.t('iam.invalidPolicy')} ${reason}` });
return;
}
}
const role = await this.#iamService.createRole(name, policies);
res.status(201).json(role);
} catch (error) {
console.log(error);
res.status(500).json({ message: req.t('iam.failedToCreateRole') });
}
};
public deleteRole = async (req: Request, res: Response) => {
const { id } = req.params;
try {
await this.#iamService.deleteRole(id);
res.status(204).send();
} catch (error) {
res.status(500).json({ message: req.t('iam.failedToDeleteRole') });
}
};
public updateRole = async (req: Request, res: Response) => {
const { id } = req.params;
const { name, policies } = req.body;
if (!name && !policies) {
res.status(400).json({ message: req.t('iam.missingUpdateFields') });
return;
}
if (policies) {
for (const statement of policies) {
const { valid, reason } = PolicyValidator.isValid(statement as CaslPolicy);
if (!valid) {
res.status(400).json({ message: `${req.t('iam.invalidPolicy')} ${reason}` });
return;
}
}
}
try {
const role = await this.#iamService.updateRole(id, { name, policies });
res.status(200).json(role);
} catch (error) {
res.status(500).json({ message: req.t('iam.failedToUpdateRole') });
}
};
private createDefaultRoles = async () => {
try {
// End user who can manage their own data and create new ingestions.
await this.#iamService.createRole(
'End user',
[
{
action: 'read',
subject: 'dashboard',
},
{
action: 'create',
subject: 'ingestion',
},
{
action: 'manage',
subject: 'ingestion',
conditions: {
userId: '${user.id}',
},
},
{
action: 'manage',
subject: 'archive',
conditions: {
'ingestionSource.userId': '${user.id}',
},
},
],
'predefined_end_user'
);
// read only
await this.#iamService.createRole(
'Read only',
[
{
action: ['read', 'search'],
subject: ['ingestion', 'archive', 'dashboard', 'users', 'roles'],
},
],
'predefined_read_only_user'
);
} catch (error) {
logger.error({}, 'Failed to create default roles');
}
};
}
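
A hedged sketch of creating a role through the IAM API with a condition using the same `${user.id}` placeholder as the predefined "End user" role (role name, policy, base URL, and token are illustrative assumptions):
async function createScopedRole(baseUrl: string, jwt: string) {
    const res = await fetch(`${baseUrl}/api/v1/iam/roles`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${jwt}` },
        body: JSON.stringify({
            name: 'Own archives only',
            // Each statement is checked by PolicyValidator.isValid before the role is created.
            policies: [
                {
                    action: 'manage',
                    subject: 'archive',
                    conditions: { 'ingestionSource.userId': '${user.id}' },
                },
            ],
        }),
    });
    if (res.status !== 201) throw new Error(`Role creation failed: ${res.status}`);
    return res.json();
}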

View File

@@ -1,83 +1,201 @@
import { Request, Response } from 'express';
import { IngestionService } from '../../services/IngestionService';
import { CreateIngestionSourceDto, UpdateIngestionSourceDto } from '@open-archiver/types';
import {
CreateIngestionSourceDto,
UpdateIngestionSourceDto,
IngestionSource,
SafeIngestionSource,
} from '@open-archiver/types';
import { logger } from '../../config/logger';
import { UserService } from '../../services/UserService';
import { checkDeletionEnabled } from '../../helpers/deletionGuard';
export class IngestionController {
public create = async (req: Request, res: Response): Promise<Response> => {
try {
const dto: CreateIngestionSourceDto = req.body;
const newSource = await IngestionService.create(dto);
return res.status(201).json(newSource);
} catch (error) {
console.error('Create ingestion source error:', error);
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
private userService = new UserService();
/**
* Converts an IngestionSource object to a safe version for client-side consumption
* by removing the credentials.
* @param source The full IngestionSource object.
* @returns An object conforming to the SafeIngestionSource type.
*/
private toSafeIngestionSource(source: IngestionSource): SafeIngestionSource {
const { credentials, ...safeSource } = source;
return safeSource;
}
public findAll = async (req: Request, res: Response): Promise<Response> => {
try {
const sources = await IngestionService.findAll();
return res.status(200).json(sources);
} catch (error) {
console.error('Find all ingestion sources error:', error);
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public create = async (req: Request, res: Response): Promise<Response> => {
try {
const dto: CreateIngestionSourceDto = req.body;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const newSource = await IngestionService.create(
dto,
userId,
actor,
req.ip || 'unknown'
);
const safeSource = this.toSafeIngestionSource(newSource);
return res.status(201).json(safeSource);
} catch (error: any) {
logger.error({ err: error }, 'Create ingestion source error');
// Return a 400 Bad Request for connection errors
return res.status(400).json({
message: error.message || req.t('ingestion.failedToCreate'),
});
}
};
public findById = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const source = await IngestionService.findById(id);
return res.status(200).json(source);
} catch (error) {
console.error(`Find ingestion source by id ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: error.message });
}
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public findAll = async (req: Request, res: Response): Promise<Response> => {
try {
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const sources = await IngestionService.findAll(userId);
const safeSources = sources.map(this.toSafeIngestionSource);
return res.status(200).json(safeSources);
} catch (error) {
console.error('Find all ingestion sources error:', error);
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public update = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const dto: UpdateIngestionSourceDto = req.body;
const updatedSource = await IngestionService.update(id, dto);
return res.status(200).json(updatedSource);
} catch (error) {
console.error(`Update ingestion source ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: error.message });
}
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public findById = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const source = await IngestionService.findById(id);
const safeSource = this.toSafeIngestionSource(source);
return res.status(200).json(safeSource);
} catch (error) {
console.error(`Find ingestion source by id ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public delete = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
await IngestionService.delete(id);
return res.status(204).send();
} catch (error) {
console.error(`Delete ingestion source ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: error.message });
}
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public update = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const dto: UpdateIngestionSourceDto = req.body;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const updatedSource = await IngestionService.update(
id,
dto,
actor,
req.ip || 'unknown'
);
const safeSource = this.toSafeIngestionSource(updatedSource);
return res.status(200).json(safeSource);
} catch (error) {
console.error(`Update ingestion source ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public triggerInitialImport = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
await IngestionService.triggerInitialImport(id);
return res.status(202).json({ message: 'Initial import triggered successfully.' });
} catch (error) {
console.error(`Trigger initial import for ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: error.message });
}
return res.status(500).json({ message: 'An internal server error occurred' });
}
};
public delete = async (req: Request, res: Response): Promise<Response> => {
try {
checkDeletionEnabled();
const { id } = req.params;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
await IngestionService.delete(id, actor, req.ip || 'unknown');
return res.status(204).send();
} catch (error) {
console.error(`Delete ingestion source ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
} else if (error instanceof Error) {
return res.status(400).json({ message: error.message });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public triggerInitialImport = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
await IngestionService.triggerInitialImport(id);
return res.status(202).json({ message: req.t('ingestion.initialImportTriggered') });
} catch (error) {
console.error(`Trigger initial import for ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public pause = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const updatedSource = await IngestionService.update(
id,
{ status: 'paused' },
actor,
req.ip || 'unknown'
);
const safeSource = this.toSafeIngestionSource(updatedSource);
return res.status(200).json(safeSource);
} catch (error) {
console.error(`Pause ingestion source ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
public triggerForceSync = async (req: Request, res: Response): Promise<Response> => {
try {
const { id } = req.params;
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
const actor = await this.userService.findById(userId);
if (!actor) {
return res.status(401).json({ message: req.t('errors.unauthorized') });
}
await IngestionService.triggerForceSync(id, actor, req.ip || 'unknown');
return res.status(202).json({ message: req.t('ingestion.forceSyncTriggered') });
} catch (error) {
console.error(`Trigger force sync for ${req.params.id} error:`, error);
if (error instanceof Error && error.message === 'Ingestion source not found') {
return res.status(404).json({ message: req.t('ingestion.notFound') });
}
return res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
}

View File

@@ -0,0 +1,29 @@
import { Request, Response } from 'express';
import { IntegrityService } from '../../services/IntegrityService';
import { z } from 'zod';
const checkIntegritySchema = z.object({
id: z.string().uuid(),
});
export class IntegrityController {
private integrityService = new IntegrityService();
public checkIntegrity = async (req: Request, res: Response) => {
try {
const { id } = checkIntegritySchema.parse(req.params);
const results = await this.integrityService.checkEmailIntegrity(id);
res.status(200).json(results);
} catch (error) {
if (error instanceof z.ZodError) {
return res
.status(400)
.json({ message: req.t('api.requestBodyInvalid'), errors: error.message });
}
if (error instanceof Error && error.message === 'Archived email not found') {
return res.status(404).json({ message: req.t('errors.notFound') });
}
res.status(500).json({ message: req.t('errors.internalServerError') });
}
};
}

View File

@@ -0,0 +1,42 @@
import { Request, Response } from 'express';
import { JobsService } from '../../services/JobsService';
import {
IGetQueueJobsRequestParams,
IGetQueueJobsRequestQuery,
JobStatus,
} from '@open-archiver/types';
export class JobsController {
private jobsService: JobsService;
constructor() {
this.jobsService = new JobsService();
}
public getQueues = async (req: Request, res: Response) => {
try {
const queues = await this.jobsService.getQueues();
res.status(200).json({ queues });
} catch (error) {
res.status(500).json({ message: 'Error fetching queues', error });
}
};
public getQueueJobs = async (req: Request, res: Response) => {
try {
const { queueName } = req.params as unknown as IGetQueueJobsRequestParams;
const { status, page, limit } = req.query as unknown as IGetQueueJobsRequestQuery;
const pageNumber = parseInt(page, 10) || 1;
const limitNumber = parseInt(limit, 10) || 10;
const queueDetails = await this.jobsService.getQueueDetails(
queueName,
status,
pageNumber,
limitNumber
);
res.status(200).json(queueDetails);
} catch (error) {
res.status(500).json({ message: 'Error fetching queue jobs', error });
}
};
}

View File

@@ -3,32 +3,42 @@ import { SearchService } from '../../services/SearchService';
import { MatchingStrategies } from 'meilisearch';
export class SearchController {
private searchService: SearchService;
private searchService: SearchService;
constructor() {
this.searchService = new SearchService();
}
constructor() {
this.searchService = new SearchService();
}
public search = async (req: Request, res: Response): Promise<void> => {
try {
const { keywords, page, limit, matchingStrategy } = req.query;
public search = async (req: Request, res: Response): Promise<void> => {
try {
const { keywords, page, limit, matchingStrategy } = req.query;
const userId = req.user?.sub;
if (!keywords) {
res.status(400).json({ message: 'Keywords are required' });
return;
}
if (!userId) {
res.status(401).json({ message: req.t('errors.unauthorized') });
return;
}
const results = await this.searchService.searchEmails({
query: keywords as string,
page: page ? parseInt(page as string) : 1,
limit: limit ? parseInt(limit as string) : 10,
matchingStrategy: matchingStrategy as MatchingStrategies
});
if (!keywords) {
res.status(400).json({ message: req.t('search.keywordsRequired') });
return;
}
res.status(200).json(results);
} catch (error) {
const message = error instanceof Error ? error.message : 'An unknown error occurred';
res.status(500).json({ message });
}
};
const results = await this.searchService.searchEmails(
{
query: keywords as string,
page: page ? parseInt(page as string) : 1,
limit: limit ? parseInt(limit as string) : 10,
matchingStrategy: matchingStrategy as MatchingStrategies,
},
userId,
req.ip || 'unknown'
);
res.status(200).json(results);
} catch (error) {
const message = error instanceof Error ? error.message : req.t('errors.unknown');
res.status(500).json({ message });
}
};
}

View File

@@ -0,0 +1,38 @@
import type { Request, Response } from 'express';
import { SettingsService } from '../../services/SettingsService';
import { UserService } from '../../services/UserService';
const settingsService = new SettingsService();
const userService = new UserService();
export const getSystemSettings = async (req: Request, res: Response) => {
try {
const settings = await settingsService.getSystemSettings();
res.status(200).json(settings);
} catch (error) {
// A more specific error could be logged here
res.status(500).json({ message: req.t('settings.failedToRetrieve') });
}
};
export const updateSystemSettings = async (req: Request, res: Response) => {
try {
// Basic validation can be performed here if necessary
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
const updatedSettings = await settingsService.updateSystemSettings(
req.body,
actor,
req.ip || 'unknown'
);
res.status(200).json(updatedSettings);
} catch (error) {
// A more specific error could be logged here
res.status(500).json({ message: req.t('settings.failedToUpdate') });
}
};

View File

@@ -1,32 +1,50 @@
import { Request, Response } from 'express';
import { StorageService } from '../../services/StorageService';
import * as path from 'path';
import { storage as storageConfig } from '../../config/storage';
export class StorageController {
constructor(private storageService: StorageService) { }
constructor(private storageService: StorageService) {}
public downloadFile = async (req: Request, res: Response): Promise<void> => {
const filePath = req.query.path as string;
public downloadFile = async (req: Request, res: Response): Promise<void> => {
const unsafePath = req.query.path as string;
if (!filePath) {
res.status(400).send('File path is required');
return;
}
if (!unsafePath) {
res.status(400).send(req.t('storage.filePathRequired'));
return;
}
try {
const fileExists = await this.storageService.exists(filePath);
if (!fileExists) {
console.log(filePath);
res.status(404).send('File not found');
return;
}
// Normalize the path to prevent directory traversal
const normalizedPath = path.normalize(unsafePath).replace(/^(\.\.(\/|\\|$))+/, '');
const fileStream = await this.storageService.get(filePath);
const fileName = filePath.split('/').pop();
res.setHeader('Content-Disposition', `attachment; filename=${fileName}`);
fileStream.pipe(res);
} catch (error) {
console.error('Error downloading file:', error);
res.status(500).send('Error downloading file');
}
};
// Determine the base path from storage configuration
const basePath = storageConfig.type === 'local' ? storageConfig.rootPath : '/';
// Resolve the full path and ensure it's within the storage directory
const fullPath = path.join(basePath, normalizedPath);
if (!fullPath.startsWith(basePath)) {
res.status(400).send(req.t('storage.invalidFilePath'));
return;
}
// Use the sanitized, relative path for storage service operations
const safePath = path.relative(basePath, fullPath);
try {
const fileExists = await this.storageService.exists(safePath);
if (!fileExists) {
res.status(404).send(req.t('storage.fileNotFound'));
return;
}
const fileStream = await this.storageService.get(safePath);
const fileName = path.basename(safePath);
res.setHeader('Content-Disposition', `attachment; filename="${fileName}"`);
fileStream.pipe(res);
} catch (error) {
console.error('Error downloading file:', error);
res.status(500).send(req.t('storage.downloadError'));
}
};
}
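
A hedged download sketch against the hardened endpoint (placeholder base URL and token; `storagePath` must resolve inside the configured storage root or the request is rejected):
async function downloadAttachment(baseUrl: string, jwt: string, storagePath: string) {
    const res = await fetch(
        `${baseUrl}/api/v1/storage/download?path=${encodeURIComponent(storagePath)}`,
        { headers: { Authorization: `Bearer ${jwt}` } }
    );
    if (!res.ok) throw new Error(`Download failed: ${res.status}`);
    return Buffer.from(await res.arrayBuffer()); // Content-Disposition carries the quoted file name
}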

View File

@@ -0,0 +1,27 @@
import { Request, Response } from 'express';
import { StorageService } from '../../services/StorageService';
import { randomUUID } from 'crypto';
import busboy from 'busboy';
import { config } from '../../config/index';
export const uploadFile = async (req: Request, res: Response) => {
const storage = new StorageService();
const bb = busboy({ headers: req.headers });
const uploads: Promise<void>[] = [];
let filePath = '';
let originalFilename = '';
bb.on('file', (fieldname, file, filename) => {
originalFilename = filename.filename;
const uuid = randomUUID();
filePath = `${config.storage.openArchiverFolderName}/tmp/${uuid}-${originalFilename}`;
uploads.push(storage.put(filePath, file));
});
bb.on('finish', async () => {
await Promise.all(uploads);
res.json({ filePath });
});
req.pipe(bb);
};
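
A hedged sketch of the streaming upload endpoint (field name, file name, and base URL are assumptions; the handler accepts any multipart file field and responds with the temporary `filePath`, presumably consumed by file-based ingestion sources):
import { readFile } from 'node:fs/promises';

async function uploadArchive(baseUrl: string, jwt: string, localPath: string) {
    const form = new FormData();
    form.append('file', new Blob([await readFile(localPath)]), 'mailbox.mbox');
    const res = await fetch(`${baseUrl}/api/v1/upload`, {
        method: 'POST',
        headers: { Authorization: `Bearer ${jwt}` },
        body: form,
    });
    const { filePath } = await res.json(); // e.g. '<openArchiverFolderName>/tmp/<uuid>-mailbox.mbox'
    return filePath;
}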

View File

@@ -0,0 +1,145 @@
import { Request, Response } from 'express';
import { UserService } from '../../services/UserService';
import * as schema from '../../database/schema';
import { sql } from 'drizzle-orm';
import { db } from '../../database';
import { config } from '../../config';
const userService = new UserService();
export const getUsers = async (req: Request, res: Response) => {
const users = await userService.findAll();
res.json(users);
};
export const getUser = async (req: Request, res: Response) => {
const user = await userService.findById(req.params.id);
if (!user) {
return res.status(404).json({ message: req.t('user.notFound') });
}
res.json(user);
};
export const createUser = async (req: Request, res: Response) => {
const { email, first_name, last_name, password, roleId } = req.body;
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
const newUser = await userService.createUser(
{ email, first_name, last_name, password },
roleId,
actor,
req.ip || 'unknown'
);
res.status(201).json(newUser);
};
export const updateUser = async (req: Request, res: Response) => {
const { email, first_name, last_name, roleId } = req.body;
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
const updatedUser = await userService.updateUser(
req.params.id,
{ email, first_name, last_name },
roleId,
actor,
req.ip || 'unknown'
);
if (!updatedUser) {
return res.status(404).json({ message: req.t('user.notFound') });
}
res.json(updatedUser);
};
export const deleteUser = async (req: Request, res: Response) => {
const userCountResult = await db.select({ count: sql<number>`count(*)` }).from(schema.users);
const isOnlyUser = Number(userCountResult[0].count) === 1;
if (isOnlyUser) {
return res.status(400).json({
message: req.t('user.cannotDeleteOnlyUser'),
});
}
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
await userService.deleteUser(req.params.id, actor, req.ip || 'unknown');
res.status(204).send();
};
export const getProfile = async (req: Request, res: Response) => {
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const user = await userService.findById(req.user.sub);
if (!user) {
return res.status(404).json({ message: req.t('user.notFound') });
}
res.json(user);
};
export const updateProfile = async (req: Request, res: Response) => {
if (config.app.isDemo) {
return res.status(403).json({ message: req.t('errors.demoMode') });
}
const { email, first_name, last_name } = req.body;
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
const updatedUser = await userService.updateUser(
req.user.sub,
{ email, first_name, last_name },
undefined,
actor,
req.ip || 'unknown'
);
res.json(updatedUser);
};
export const updatePassword = async (req: Request, res: Response) => {
if (config.app.isDemo) {
return res.status(403).json({ message: req.t('errors.demoMode') });
}
const { currentPassword, newPassword } = req.body;
if (!req.user || !req.user.sub) {
return res.status(401).json({ message: 'Unauthorized' });
}
const actor = await userService.findById(req.user.sub);
if (!actor) {
return res.status(401).json({ message: 'Unauthorized' });
}
try {
await userService.updatePassword(
req.user.sub,
currentPassword,
newPassword,
actor,
req.ip || 'unknown'
);
res.status(200).json({ message: 'Password updated successfully' });
} catch (e: any) {
if (e.message === 'Invalid current password') {
return res.status(400).json({ message: e.message });
}
throw e;
}
};

View File

@@ -0,0 +1,21 @@
import { rateLimit, ipKeyGenerator } from 'express-rate-limit';
import { config } from '../../config';
const windowInMinutes = Math.ceil(config.api.rateLimit.windowMs / 60000);
export const rateLimiter = rateLimit({
windowMs: config.api.rateLimit.windowMs,
max: config.api.rateLimit.max,
keyGenerator: (req, res) => {
// Use the real IP address of the client, even if it's behind a proxy.
// This relies on `app.set('trust proxy', true)` in `server.ts`.
return ipKeyGenerator(req.ip || 'unknown');
},
message: {
status: 429,
message: `Too many requests from this IP, please try again after ${windowInMinutes} minutes`,
},
statusCode: 429,
standardHeaders: true,
legacyHeaders: false,
});

View File

@@ -1,39 +1,58 @@
import type { Request, Response, NextFunction } from 'express';
import type { IAuthService } from '../../services/AuthService';
import type { AuthService } from '../../services/AuthService';
import type { AuthTokenPayload } from '@open-archiver/types';
import 'dotenv/config';
import { ApiKeyService } from '../../services/ApiKeyService';
import { UserService } from '../../services/UserService';
// By using module augmentation, we can add our custom 'user' property
// to the Express Request interface in a type-safe way.
declare global {
namespace Express {
export interface Request {
user?: AuthTokenPayload;
}
}
namespace Express {
export interface Request {
user?: AuthTokenPayload;
}
}
}
export const requireAuth = (authService: IAuthService) => {
return async (req: Request, res: Response, next: NextFunction) => {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({ message: 'Unauthorized: No token provided' });
}
const token = authHeader.split(' ')[1];
try {
// use a SUPER_API_KEY for all authentications.
if (token === process.env.SUPER_API_KEY) {
next();
return;
}
const payload = await authService.verifyToken(token);
if (!payload) {
return res.status(401).json({ message: 'Unauthorized: Invalid token' });
}
req.user = payload;
next();
} catch (error) {
console.error('Authentication error:', error);
return res.status(500).json({ message: 'An internal server error occurred during authentication' });
}
};
export const requireAuth = (authService: AuthService) => {
return async (req: Request, res: Response, next: NextFunction) => {
const authHeader = req.headers.authorization;
const apiKeyHeader = req.headers['x-api-key'];
if (apiKeyHeader) {
const userId = await ApiKeyService.validateKey(apiKeyHeader as string);
if (!userId) {
return res.status(401).json({ message: 'Unauthorized: Invalid API key' });
}
const user = await new UserService().findById(userId);
if (!user) {
return res.status(401).json({ message: 'Unauthorized: Invalid user' });
}
req.user = {
sub: user.id,
email: user.email,
roles: user.role ? [user.role.name] : [],
};
return next();
}
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({ message: 'Unauthorized: No token provided' });
}
const token = authHeader.split(' ')[1];
try {
const payload = await authService.verifyToken(token);
if (!payload) {
return res.status(401).json({ message: 'Unauthorized: Invalid token' });
}
req.user = payload;
next();
} catch (error) {
console.error('Authentication error:', error);
return res
.status(500)
.json({ message: 'An internal server error occurred during authentication' });
}
};
};
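
A hedged client sketch of the new `x-api-key` path added to `requireAuth` (base URL and key are placeholders; the Bearer token flow is unchanged):
async function listIngestionSources(baseUrl: string, apiKey: string) {
    // The middleware validates the key via ApiKeyService.validateKey and attaches the owning user to req.user.
    const res = await fetch(`${baseUrl}/api/v1/ingestion-sources`, {
        headers: { 'x-api-key': apiKey },
    });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
}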

View File

@@ -0,0 +1,38 @@
import { AuthorizationService } from '../../services/AuthorizationService';
import type { Request, Response, NextFunction } from 'express';
import { AppActions, AppSubjects } from '@open-archiver/types';
export const requirePermission = (
action: AppActions,
subjectName: AppSubjects,
rejectMessage?: string
) => {
return async (req: Request, res: Response, next: NextFunction) => {
const userId = req.user?.sub;
if (!userId) {
return res.status(401).json({ message: 'Unauthorized' });
}
let resourceObject = undefined;
// Logic to fetch resourceObject if needed for condition-based checks...
const authorizationService = new AuthorizationService();
const hasPermission = await authorizationService.can(
userId,
action,
subjectName,
resourceObject
);
if (!hasPermission) {
const message = rejectMessage
? req.t(rejectMessage)
: req.t('errors.noPermissionToAction');
return res.status(403).json({
message,
});
}
next();
};
};

View File

@@ -0,0 +1,15 @@
import { Router } from 'express';
import { ApiKeyController } from '../controllers/api-key.controller';
import { requireAuth } from '../middleware/requireAuth';
import { AuthService } from '../../services/AuthService';
export const apiKeyRoutes = (authService: AuthService): Router => {
const router = Router();
const controller = new ApiKeyController();
router.post('/', requireAuth(authService), controller.generateApiKey);
router.get('/', requireAuth(authService), controller.getApiKeys);
router.delete('/:id', requireAuth(authService), controller.deleteApiKey);
return router;
};

View File

@@ -1,20 +1,35 @@
import { Router } from 'express';
import { ArchivedEmailController } from '../controllers/archived-email.controller';
import { requireAuth } from '../middleware/requireAuth';
import { IAuthService } from '../../services/AuthService';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createArchivedEmailRouter = (
archivedEmailController: ArchivedEmailController,
authService: IAuthService
archivedEmailController: ArchivedEmailController,
authService: AuthService
): Router => {
const router = Router();
const router = Router();
// Secure all routes in this module
router.use(requireAuth(authService));
// Secure all routes in this module
router.use(requireAuth(authService));
router.get('/ingestion-source/:ingestionSourceId', archivedEmailController.getArchivedEmails);
router.get(
'/ingestion-source/:ingestionSourceId',
requirePermission('read', 'archive'),
archivedEmailController.getArchivedEmails
);
router.get('/:id', archivedEmailController.getArchivedEmailById);
router.get(
'/:id',
requirePermission('read', 'archive'),
archivedEmailController.getArchivedEmailById
);
return router;
router.delete(
'/:id',
requirePermission('delete', 'archive'),
archivedEmailController.deleteArchivedEmail
);
return router;
};

View File

@@ -2,14 +2,28 @@ import { Router } from 'express';
import type { AuthController } from '../controllers/auth.controller';
export const createAuthRouter = (authController: AuthController): Router => {
const router = Router();
const router = Router();
/**
* @route POST /api/v1/auth/login
* @description Authenticates a user and returns a JWT.
* @access Public
*/
router.post('/login', authController.login);
/**
* @route POST /api/v1/auth/setup
* @description Creates the initial administrator user.
* @access Public
*/
router.post('/setup', authController.setup);
return router;
/**
* @route POST /api/v1/auth/login
* @description Authenticates a user and returns a JWT.
* @access Public
*/
router.post('/login', authController.login);
/**
* @route GET /api/v1/auth/status
* @description Checks if the application has been set up.
* @access Public
*/
router.get('/status', authController.status);
return router;
};

View File

@@ -0,0 +1,39 @@
import { Router } from 'express';
import { dashboardController } from '../controllers/dashboard.controller';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createDashboardRouter = (authService: AuthService): Router => {
const router = Router();
router.use(requireAuth(authService));
router.get(
'/stats',
requirePermission('read', 'dashboard', 'dashboard.permissionRequired'),
dashboardController.getStats
);
router.get(
'/ingestion-history',
requirePermission('read', 'dashboard', 'dashboard.permissionRequired'),
dashboardController.getIngestionHistory
);
router.get(
'/ingestion-sources',
requirePermission('read', 'dashboard', 'dashboard.permissionRequired'),
dashboardController.getIngestionSources
);
router.get(
'/recent-syncs',
requirePermission('read', 'dashboard', 'dashboard.permissionRequired'),
dashboardController.getRecentSyncs
);
router.get(
'/indexed-insights',
requirePermission('read', 'dashboard', 'dashboard.permissionRequired'),
dashboardController.getIndexedInsights
);
return router;
};

View File

@@ -0,0 +1,42 @@
import { Router } from 'express';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import type { IamController } from '../controllers/iam.controller';
import type { AuthService } from '../../services/AuthService';
export const createIamRouter = (iamController: IamController, authService: AuthService): Router => {
const router = Router();
router.use(requireAuth(authService));
/**
* @route GET /api/v1/iam/roles
* @description Gets all roles.
* @access Private
*/
router.get('/roles', requirePermission('read', 'roles'), iamController.getRoles);
router.get('/roles/:id', requirePermission('read', 'roles'), iamController.getRoleById);
/**
* Only super admin has the ability to modify existing roles or create new roles.
*/
router.post(
'/roles',
requirePermission('manage', 'all', 'iam.requiresSuperAdminRole'),
iamController.createRole
);
router.delete(
'/roles/:id',
requirePermission('manage', 'all', 'iam.requiresSuperAdminRole'),
iamController.deleteRole
);
router.put(
'/roles/:id',
requirePermission('manage', 'all', 'iam.requiresSuperAdminRole'),
iamController.updateRole
);
return router;
};

View File

@@ -1,28 +1,41 @@
import { Router } from 'express';
import { IngestionController } from '../controllers/ingestion.controller';
import { requireAuth } from '../middleware/requireAuth';
import { IAuthService } from '../../services/AuthService';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createIngestionRouter = (
ingestionController: IngestionController,
authService: IAuthService
ingestionController: IngestionController,
authService: AuthService
): Router => {
const router = Router();
const router = Router();
// Secure all routes in this module
router.use(requireAuth(authService));
// Secure all routes in this module
router.use(requireAuth(authService));
router.post('/', ingestionController.create);
router.post('/', requirePermission('create', 'ingestion'), ingestionController.create);
router.get('/', ingestionController.findAll);
router.get('/', requirePermission('read', 'ingestion'), ingestionController.findAll);
router.get('/:id', ingestionController.findById);
router.get('/:id', requirePermission('read', 'ingestion'), ingestionController.findById);
router.put('/:id', ingestionController.update);
router.put('/:id', requirePermission('update', 'ingestion'), ingestionController.update);
router.delete('/:id', ingestionController.delete);
router.delete('/:id', requirePermission('delete', 'ingestion'), ingestionController.delete);
router.post('/:id/sync', ingestionController.triggerInitialImport);
router.post(
'/:id/import',
requirePermission('create', 'ingestion'),
ingestionController.triggerInitialImport
);
return router;
router.post('/:id/pause', requirePermission('update', 'ingestion'), ingestionController.pause);
router.post(
'/:id/sync',
requirePermission('sync', 'ingestion'),
ingestionController.triggerForceSync
);
return router;
};

View File

@@ -0,0 +1,16 @@
import { Router } from 'express';
import { IntegrityController } from '../controllers/integrity.controller';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const integrityRoutes = (authService: AuthService): Router => {
const router = Router();
const controller = new IntegrityController();
router.use(requireAuth(authService));
router.get('/:id', requirePermission('read', 'archive'), controller.checkIntegrity);
return router;
};

View File

@@ -0,0 +1,25 @@
import { Router } from 'express';
import { JobsController } from '../controllers/jobs.controller';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createJobsRouter = (authService: AuthService): Router => {
const router = Router();
const jobsController = new JobsController();
router.use(requireAuth(authService));
router.get(
'/queues',
requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
jobsController.getQueues
);
router.get(
'/queues/:queueName',
requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
jobsController.getQueueJobs
);
return router;
};

View File

@@ -1,17 +1,18 @@
import { Router } from 'express';
import { SearchController } from '../controllers/search.controller';
import { requireAuth } from '../middleware/requireAuth';
import { IAuthService } from '../../services/AuthService';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createSearchRouter = (
searchController: SearchController,
authService: IAuthService
searchController: SearchController,
authService: AuthService
): Router => {
const router = Router();
const router = Router();
router.use(requireAuth(authService));
router.use(requireAuth(authService));
router.get('/', searchController.search);
router.get('/', requirePermission('search', 'archive'), searchController.search);
return router;
return router;
};

View File

@@ -0,0 +1,25 @@
import { Router } from 'express';
import * as settingsController from '../controllers/settings.controller';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createSettingsRouter = (authService: AuthService): Router => {
const router = Router();
// Public route to get non-sensitive settings. Settings reads are not scoped by a permission because all end users need this data in the frontend. Sensitive settings, however, would require a new permission subject to limit access, so this route must only expose non-sensitive settings data.
/**
* @returns SystemSettings
*/
router.get('/system', settingsController.getSystemSettings);
// Protected route to update settings
router.put(
'/system',
requireAuth(authService),
requirePermission('manage', 'settings', 'settings.noPermissionToUpdate'),
settingsController.updateSystemSettings
);
return router;
};

View File

@@ -1,18 +1,19 @@
import { Router } from 'express';
import { StorageController } from '../controllers/storage.controller';
import { requireAuth } from '../middleware/requireAuth';
import { IAuthService } from '../../services/AuthService';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createStorageRouter = (
storageController: StorageController,
authService: IAuthService
storageController: StorageController,
authService: AuthService
): Router => {
const router = Router();
const router = Router();
// Secure all routes in this module
router.use(requireAuth(authService));
// Secure all routes in this module
router.use(requireAuth(authService));
router.get('/download', storageController.downloadFile);
router.get('/download', requirePermission('read', 'archive'), storageController.downloadFile);
return router;
return router;
};

View File

@@ -1,18 +0,0 @@
import { Router } from 'express';
import { ingestionQueue } from '../../jobs/queues';
const router: Router = Router();
router.post('/trigger-job', async (req, res) => {
try {
const job = await ingestionQueue.add('initial-import', {
ingestionSourceId: 'test-source-id-test-2345'
});
res.status(202).json({ message: 'Test job triggered successfully', jobId: job.id });
} catch (error) {
console.error('Failed to trigger test job', error);
res.status(500).json({ message: 'Failed to trigger test job' });
}
});
export default router;

View File

@@ -0,0 +1,15 @@
import { Router } from 'express';
import { uploadFile } from '../controllers/upload.controller';
import { requireAuth } from '../middleware/requireAuth';
import { AuthService } from '../../services/AuthService';
import { requirePermission } from '../middleware/requirePermission';
export const createUploadRouter = (authService: AuthService): Router => {
const router = Router();
router.use(requireAuth(authService));
router.post('/', requirePermission('create', 'ingestion'), uploadFile);
return router;
};

View File

@@ -0,0 +1,42 @@
import { Router } from 'express';
import * as userController from '../controllers/user.controller';
import { requireAuth } from '../middleware/requireAuth';
import { requirePermission } from '../middleware/requirePermission';
import { AuthService } from '../../services/AuthService';
export const createUserRouter = (authService: AuthService): Router => {
const router = Router();
router.use(requireAuth(authService));
router.get('/', requirePermission('read', 'users'), userController.getUsers);
router.get('/profile', userController.getProfile);
router.patch('/profile', userController.updateProfile);
router.post('/profile/password', userController.updatePassword);
router.get('/:id', requirePermission('read', 'users'), userController.getUser);
/**
* Only super admin has the ability to modify existing users or create new users.
*/
router.post(
'/',
requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
userController.createUser
);
router.put(
'/:id',
requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
userController.updateUser
);
router.delete(
'/:id',
requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
userController.deleteUser
);
return router;
};

View File

@@ -0,0 +1,170 @@
import express, { Express } from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import { AuthController } from './controllers/auth.controller';
import { IngestionController } from './controllers/ingestion.controller';
import { ArchivedEmailController } from './controllers/archived-email.controller';
import { StorageController } from './controllers/storage.controller';
import { SearchController } from './controllers/search.controller';
import { IamController } from './controllers/iam.controller';
import { createAuthRouter } from './routes/auth.routes';
import { createIamRouter } from './routes/iam.routes';
import { createIngestionRouter } from './routes/ingestion.routes';
import { createArchivedEmailRouter } from './routes/archived-email.routes';
import { createStorageRouter } from './routes/storage.routes';
import { createSearchRouter } from './routes/search.routes';
import { createDashboardRouter } from './routes/dashboard.routes';
import { createUploadRouter } from './routes/upload.routes';
import { createUserRouter } from './routes/user.routes';
import { createSettingsRouter } from './routes/settings.routes';
import { apiKeyRoutes } from './routes/api-key.routes';
import { integrityRoutes } from './routes/integrity.routes';
import { createJobsRouter } from './routes/jobs.routes';
import { AuthService } from '../services/AuthService';
import { AuditService } from '../services/AuditService';
import { UserService } from '../services/UserService';
import { IamService } from '../services/IamService';
import { StorageService } from '../services/StorageService';
import { SearchService } from '../services/SearchService';
import { SettingsService } from '../services/SettingsService';
import i18next from 'i18next';
import FsBackend from 'i18next-fs-backend';
import i18nextMiddleware from 'i18next-http-middleware';
import path from 'path';
import { logger } from '../config/logger';
import { rateLimiter } from './middleware/rateLimiter';
import { config } from '../config';
import { OpenArchiverFeature } from '@open-archiver/types';
// Define the "plugin" interface
export interface ArchiverModule {
    initialize: (app: Express, authService: AuthService) => Promise<void>;
    name: OpenArchiverFeature;
}

export let authService: AuthService;

export async function createServer(modules: ArchiverModule[] = []): Promise<Express> {
    // Load environment variables
    dotenv.config();

    // --- Environment Variable Validation ---
    const { JWT_SECRET, JWT_EXPIRES_IN } = process.env;
    if (!JWT_SECRET || !JWT_EXPIRES_IN) {
        throw new Error(
            'Missing required environment variables for the backend: JWT_SECRET, JWT_EXPIRES_IN.'
        );
    }

    // --- Dependency Injection Setup ---
    const auditService = new AuditService();
    const userService = new UserService();
    authService = new AuthService(userService, auditService, JWT_SECRET, JWT_EXPIRES_IN);
    const authController = new AuthController(authService, userService);
    const ingestionController = new IngestionController();
    const archivedEmailController = new ArchivedEmailController();
    const storageService = new StorageService();
    const storageController = new StorageController(storageService);
    const searchService = new SearchService();
    const searchController = new SearchController();
    const iamService = new IamService();
    const iamController = new IamController(iamService);
    const settingsService = new SettingsService();

    // --- i18next Initialization ---
    const initializeI18next = async () => {
        const systemSettings = await settingsService.getSystemSettings();
        const defaultLanguage = systemSettings?.language || 'en';
        logger.info({ language: defaultLanguage }, 'Default language');
        await i18next.use(FsBackend).init({
            lng: defaultLanguage,
            fallbackLng: defaultLanguage,
            ns: ['translation'],
            defaultNS: 'translation',
            backend: {
                loadPath: path.resolve(__dirname, '../locales/{{lng}}/{{ns}}.json'),
            },
        });
    };

    // Initialize i18next
    await initializeI18next();
    logger.info({}, 'i18next initialized');

    // Configure the Meilisearch index on startup
    logger.info({}, 'Configuring email index...');
    await searchService.configureEmailIndex();

    const app = express();

    // --- CORS ---
    app.use(
        cors({
            origin: process.env.APP_URL || 'http://localhost:3000',
            credentials: true,
        })
    );

    // Trust the proxy to get the real IP address of the client.
    // This is important for audit logging and security.
    app.set('trust proxy', true);

    // --- Routes ---
    const authRouter = createAuthRouter(authController);
    const ingestionRouter = createIngestionRouter(ingestionController, authService);
    const archivedEmailRouter = createArchivedEmailRouter(archivedEmailController, authService);
    const storageRouter = createStorageRouter(storageController, authService);
    const searchRouter = createSearchRouter(searchController, authService);
    const dashboardRouter = createDashboardRouter(authService);
    const iamRouter = createIamRouter(iamController, authService);
    const uploadRouter = createUploadRouter(authService);
    const userRouter = createUserRouter(authService);
    const settingsRouter = createSettingsRouter(authService);
    const apiKeyRouter = apiKeyRoutes(authService);
    const integrityRouter = integrityRoutes(authService);
    const jobsRouter = createJobsRouter(authService);

    // Middleware for all other routes
    app.use((req, res, next) => {
        // exclude certain API endpoints from the rate limiter, for example status, system settings
        const excludedPatterns = [/^\/v\d+\/auth\/status$/, /^\/v\d+\/settings\/system$/];
        for (const pattern of excludedPatterns) {
            if (pattern.test(req.path)) {
                return next();
            }
        }
        rateLimiter(req, res, next);
    });

    app.use(express.json());
    app.use(express.urlencoded({ extended: true }));

    // i18n middleware
    app.use(i18nextMiddleware.handle(i18next));

    app.use(`/${config.api.version}/auth`, authRouter);
    app.use(`/${config.api.version}/iam`, iamRouter);
    app.use(`/${config.api.version}/upload`, uploadRouter);
    app.use(`/${config.api.version}/ingestion-sources`, ingestionRouter);
    app.use(`/${config.api.version}/archived-emails`, archivedEmailRouter);
    app.use(`/${config.api.version}/storage`, storageRouter);
    app.use(`/${config.api.version}/search`, searchRouter);
    app.use(`/${config.api.version}/dashboard`, dashboardRouter);
    app.use(`/${config.api.version}/users`, userRouter);
    app.use(`/${config.api.version}/settings`, settingsRouter);
    app.use(`/${config.api.version}/api-keys`, apiKeyRouter);
    app.use(`/${config.api.version}/integrity`, integrityRouter);
    app.use(`/${config.api.version}/jobs`, jobsRouter);

    // Load all provided extension modules
    for (const module of modules) {
        await module.initialize(app, authService);
        console.log(`🏢 Enterprise module loaded: ${module.name}`);
    }

    app.get('/', (req, res) => {
        res.send('Backend is running!!');
    });

    console.log('✅ Core OSS modules loaded.');
    return app;
}
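
Note (not part of the diff): the file above exports both createServer(modules) and the ArchiverModule interface, so an entry point can pass optional (for example enterprise) modules at boot time and they are initialized after the core routes are mounted. The sketch below only illustrates that contract; the import path, feature name, and port are assumptions, not values from this repository.

import { createServer, ArchiverModule } from './api/server'; // path is an assumption
import type { OpenArchiverFeature } from '@open-archiver/types';

// Hypothetical module: the feature name is a placeholder, not a real
// OpenArchiverFeature value from the package.
const exampleModule: ArchiverModule = {
    name: 'example-feature' as unknown as OpenArchiverFeature,
    initialize: async (app, authService) => {
        // Mount additional routes here; authService is the shared instance
        // created inside createServer().
        app.get('/v1/example/ping', (_req, res) => {
            res.json({ ok: true, hasAuth: Boolean(authService) });
        });
    },
};

const start = async () => {
    const app = await createServer([exampleModule]);
    const port = Number(process.env.PORT) || 4000; // port is an assumption
    app.listen(port, () => console.log(`API listening on ${port}`));
};

start();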


@@ -0,0 +1,13 @@
import 'dotenv/config';

export const apiConfig = {
    rateLimit: {
        windowMs: process.env.RATE_LIMIT_WINDOW_MS
            ? parseInt(process.env.RATE_LIMIT_WINDOW_MS, 10)
            : 1 * 60 * 1000, // 1 minute
        max: process.env.RATE_LIMIT_MAX_REQUESTS
            ? parseInt(process.env.RATE_LIMIT_MAX_REQUESTS, 10)
            : 100, // limit each IP to 100 requests per windowMs
    },
    version: 'v1',
};
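
Note (not part of the diff): the defaults above (a one-minute window, 100 requests per IP) can be overridden via the RATE_LIMIT_WINDOW_MS and RATE_LIMIT_MAX_REQUESTS environment variables. The rateLimiter middleware consumed in the server file is not shown in this diff; the following is only a minimal sketch of how such a limiter could be wired with the express-rate-limit package, which is an assumption about the underlying implementation.

import rateLimit from 'express-rate-limit';
import { apiConfig } from './api.config'; // path is an assumption

// Sketch only: the real middleware lives elsewhere in the repository.
export const rateLimiter = rateLimit({
    windowMs: apiConfig.rateLimit.windowMs, // e.g. RATE_LIMIT_WINDOW_MS=300000 for a 5-minute window
    max: apiConfig.rateLimit.max, // e.g. RATE_LIMIT_MAX_REQUESTS=500
    standardHeaders: true, // emit RateLimit-* response headers
    legacyHeaders: false, // omit the deprecated X-RateLimit-* headers
});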

Some files were not shown because too many files have changed in this diff.