* fix(api): correct API key generation and proxy handling
This commit resolves an issue where generating a new API key would fail. The root cause was improper handling of POST request bodies in the frontend proxy server.
- Refactored `ApiKeyController` methods to use arrow functions to ensure correct `this` binding.
* User profile/account page, change password, API
* docs(api): update ingestion source provider values
Update the `CreateIngestionSourceDto` documentation in `ingestion.md` to reflect the current set of supported providers.
* updating tag
* feat: add REDIS_USER env variable (#172)
* feat: add REDIS_USER env variable
Fixes #171

* add proper type for bullmq config
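A sketch of how the new variable might feed into the Redis connection options; the interface below is a local stand-in for BullMQ's connection type, and every variable name other than `REDIS_USER` follows common convention rather than the actual codebase:

```typescript
// Local stand-in for BullMQ's ConnectionOptions (illustrative only).
interface RedisConnection {
	host: string;
	port: number;
	username?: string; // populated from the new REDIS_USER variable
	password?: string;
}

function buildRedisConnection(env: Record<string, string | undefined>): RedisConnection {
	return {
		host: env.REDIS_HOST ?? 'localhost',
		port: Number(env.REDIS_PORT ?? 6379),
		// Only include username when REDIS_USER is set, so default
		// (no-ACL) Redis setups keep working unchanged.
		...(env.REDIS_USER ? { username: env.REDIS_USER } : {}),
		...(env.REDIS_PASSWORD ? { password: env.REDIS_PASSWORD } : {}),
	};
}
```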
* Bulgarian UI language strings added (backend+frontend) (#194)
* Bulgarian UI Support added
* BG language UI support - Create translation.json
* update redis config logic
* Update Bulgarian language setting, register language
* Allow specifying local file path for mbox/eml/pst (#214)
* Add agents AI doc
* Allow local file path for Mbox file ingestion
---------
Co-authored-by: Wei S. <5291640+wayneshn@users.noreply.github.com>
* feat(ingestion): add local file path support and optimize EML processing
- Frontend: Updated IngestionSourceForm to allow toggling between "Upload File" and "Local File Path" for PST, EML, and Mbox providers.
- Frontend: Added logic to clear irrelevant form data when switching import methods.
- Frontend: Added English translations for new form fields.
- Backend: Refactored EMLConnector to stream ZIP entries using yauzl instead of extracting the full archive to disk, significantly improving efficiency for large archives.
- Docs: Updated API documentation and User Guides (PST, EML, Mbox) to clarify "Local File Path" usage, specifically within Docker environments.
* docs: add meilisearch dumpless upgrade guide and snapshot config
Update `docker-compose.yml` to include the `MEILI_SCHEDULE_SNAPSHOT` environment variable, defaulting to 86400 seconds (24 hours), enabling periodic data snapshots for easier recovery. Shout out to @morph027 for the inspiration!
Additionally, update the Meilisearch upgrade documentation to include an experimental "dumpless" upgrade guide while marking the previous method as the standard recommended process.
* build(coolify): enable daily snapshots for meilisearch
Configure the Meilisearch service in `open-archiver.yml` to create snapshots every 86400 seconds (24 hours) by setting the `MEILI_SCHEDULE_SNAPSHOT` environment variable.
---------
Co-authored-by: Antonia Schwennesen <53372671+zophiana@users.noreply.github.com>
Co-authored-by: IT Creativity + Art Team <admin@it-playground.net>
Co-authored-by: Jan Berdajs <mrbrdo@gmail.com>
* Jobs page responsive fix
* feat(ingestion): Refactor email indexing into a dedicated background job
This commit refactors the email indexing process to improve the performance and reliability of the ingestion pipeline.
Previously, email indexing was performed synchronously within the mailbox processing job. This could lead to timeouts and failed ingestion cycles if the indexing step was slow or encountered errors.
To address this, the indexing logic has been moved into a separate, dedicated background job queue (`indexingQueue`). Now, the mailbox processor simply adds a batch of emails to this queue. A separate worker then processes the indexing job asynchronously.
This decoupling makes the ingestion process more robust:
- It prevents slow indexing from blocking or failing the entire mailbox sync.
- It allows for better resource management and scalability by handling indexing in a dedicated process.
- It improves error handling, as a failed indexing job can be retried independently without affecting the main ingestion flow.
Additionally, this commit includes minor documentation updates and removes a premature timeout in the PDF text extraction helper that was causing issues.
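The real implementation uses BullMQ `Queue`/`Worker` instances running in separate processes; the decoupling itself can be illustrated with a minimal in-process stand-in (the name `indexingQueue` comes from the commit, everything else is simplified):

```typescript
type IndexJob = { emails: string[] };

// Minimal in-process stand-in for the dedicated indexingQueue.
class IndexingQueue {
	private jobs: IndexJob[] = [];
	indexed: string[] = [];

	// Producer side: the mailbox processor enqueues a batch and returns
	// immediately, so slow indexing can no longer block or fail the sync.
	add(job: IndexJob): void {
		this.jobs.push(job);
	}

	// Worker side: in the real system this runs asynchronously in a
	// dedicated worker; a failed job can be retried here without
	// touching the mailbox sync at all.
	processAll(): void {
		for (const job of this.jobs.splice(0)) {
			this.indexed.push(...job.emails);
		}
	}
}

const indexingQueue = new IndexingQueue();
// Mailbox processor: enqueue and move on.
indexingQueue.add({ emails: ['msg-1', 'msg-2'] });
indexingQueue.add({ emails: ['msg-3'] });
// Dedicated worker, running elsewhere:
indexingQueue.processAll();
```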
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* open-core setup, adding enterprise package
* enterprise: Audit log API, UI
* Audit-log docs
* feat: Integrity report, allowing users to verify the integrity of archived emails and their attachments.
- When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
- The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
- Add docs of Integrity report
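The signature and verification steps described above amount to the following, using Node's standard `crypto` module (function names are illustrative; the hex digest matches the `contentHashSha256` column mentioned later in this log):

```typescript
import { createHash } from 'node:crypto';

// Compute the SHA-256 signature stored alongside an archived email.
// Applied to the raw .eml bytes or an attachment's bytes, this yields
// the value the integrity check later compares against.
function contentHashSha256(data: Buffer | string): string {
	return createHash('sha256').update(data).digest('hex');
}

// Integrity check: recompute the signature for the stored file and
// compare it with the one recorded in the database at archive time.
function verifyIntegrity(data: Buffer | string, storedHash: string): boolean {
	return contentHashSha256(data) === storedHash;
}
```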
* Update `docker-compose.yml` to use a bind mount for Open Archiver data.
Fix API rate-limiter warning about trust proxy
* File encryption support
* Scope attachment deduplication to ingestion source
Previously, attachment deduplication was handled globally by enforcing a unique constraint on the content hash (contentHashSha256) in the `attachments` table. This caused an issue where an attachment from one ingestion source would be incorrectly linked if the same attachment was processed by a different source.
This commit refactors the deduplication logic to be scoped on a per-ingestion-source basis.
Changes:
- **Schema:** The `attachments` table schema has been updated to include a nullable `ingestionSourceId` column. A composite unique index has been added on `(ingestionSourceId, contentHashSha256)` to enforce per-source uniqueness. The `ingestionSourceId` is nullable to ensure backward compatibility with existing databases.
- **Ingestion Logic:** The `IngestionService` has been updated to provide the `ingestionSourceId` when inserting attachment records. The `onConflictDoUpdate` clause now targets the new composite key, ensuring that attachments are only considered duplicates if they have the same hash and originate from the same ingestion source.
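The schema change itself lives in Drizzle, but the behavior the composite key enforces can be modeled in a few lines (an in-memory sketch, not the real `IngestionService`):

```typescript
// In-memory model of the composite unique key on
// (ingestionSourceId, contentHashSha256): two sources may hold
// attachments with identical content, while within one source the
// second insert is treated as a duplicate and reuses the first record.
class AttachmentStore {
	private byKey = new Map<string, string>();

	upsert(ingestionSourceId: string, contentHashSha256: string, attachmentId: string): string {
		const key = `${ingestionSourceId}:${contentHashSha256}`;
		const existing = this.byKey.get(key);
		if (existing) return existing; // conflict on the composite key: reuse
		this.byKey.set(key, attachmentId);
		return attachmentId;
	}
}
```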
* Add option to disable deletions
This commit introduces a new feature that allows admins to disable the deletion of emails and ingestion sources for the entire instance. This is a critical feature for compliance and data retention, as it prevents accidental or unauthorized deletions.
Changes:
- **Configuration**: Added an `ENABLE_DELETION` environment variable. If this variable is not set to `true`, all deletion operations will be disabled.
- **Deletion Guard**: A centralized `checkDeletionEnabled` guard has been implemented to enforce this setting at both the controller and service levels, ensuring a robust and secure implementation.
- **Documentation**: The installation guide has been updated to include the new `ENABLE_DELETION` environment variable and its behavior.
- **Refactor**: The `IngestionService`'s `create` method was refactored to remove unnecessary calls to the `delete` method, simplifying the code and improving its robustness.
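The guard described above reduces to a strict equality check on the environment variable; the error message and function split below are illustrative, but the all-or-nothing semantics (anything other than the literal string `true` disables deletion) follow the commit:

```typescript
// Deletions are allowed only when ENABLE_DELETION is exactly 'true'.
function isDeletionEnabled(env: Record<string, string | undefined>): boolean {
	return env.ENABLE_DELETION === 'true';
}

// Centralized guard, called at both the controller and service level
// before any email or ingestion-source deletion proceeds.
function checkDeletionEnabled(env: Record<string, string | undefined>): void {
	if (!isDeletionEnabled(env)) {
		throw new Error('Deletion is disabled on this instance');
	}
}
```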
* Adding position for menu items
* feat(docker): Fix CORS errors
This commit fixes CORS errors when running the app in Docker by introducing the `APP_URL` environment variable. The backend's CORS policy now only allows requests whose origin matches `APP_URL`.
Key changes include:
- New `APP_URL` and `ORIGIN` environment variables have been added to properly configure CORS and the SvelteKit adapter, making the application's public URL easily configurable.
- Dockerfiles are updated to copy the entrypoint script, Drizzle config, and migration files into the final image.
- Documentation and example files (`.env.example`, `docker-compose.yml`) have been updated to reflect these changes.
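The policy amounts to an allow-list with a single entry; a minimal origin check (the real setup presumably hands `APP_URL` to the CORS middleware, and the normalization here is an assumption) might look like:

```typescript
// Allow a request only when its Origin header matches APP_URL.
// Trailing slashes are normalized so `https://a.example/` and
// `https://a.example` compare equal.
function isAllowedOrigin(origin: string | undefined, appUrl: string): boolean {
	if (!origin) return true; // same-origin / non-browser requests carry no Origin header
	const normalize = (u: string) => u.replace(/\/+$/, '');
	return normalize(origin) === normalize(appUrl);
}
```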
* feat(attachments): De-duplicate attachment content by content hash
This commit refactors attachment handling to allow multiple emails within the same ingestion source to reference attachments with identical content (same hash).
Changes:
- The unique index on the `attachments` table has been changed to a non-unique index to permit duplicate hash/source pairs.
- The ingestion logic is updated to first check for an existing attachment with the same hash and source. If found, it reuses the existing record; otherwise, it creates a new one. This maintains storage de-duplication.
- The email deletion logic is improved to be more robust. It now correctly removes the email-attachment link before checking if the attachment record and its corresponding file can be safely deleted.
* Not filtering out the Trash folder
* feat(backend): Add BullMQ dashboard for job monitoring
This commit introduces a web-based UI for monitoring and managing background jobs using BullMQ.
Key changes:
- A new `/api/v1/jobs` endpoint is created, serving the Bull Board dashboard. Access is restricted to authenticated administrators.
- All BullMQ queue definitions (`ingestion`, `indexing`, `sync-scheduler`) have been centralized into a new `packages/backend/src/jobs/queues.ts` file.
- Workers and services now import queue instances from this central file, improving code organization and removing redundant queue instantiations.
* Add `ALL_INCLUSIVE_ARCHIVE` environment variable to disable junk filtering
* Using BSL license
* frontend: Responsive design for menu bar, pagination
* License service/module
* Remove demoMode logic
* Formatting code
* Remove enterprise packages
* Fix package.json in packages
* Search page responsive fix
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
This commit introduces two major features:
1. **Mbox File Ingestion:**
Users can now ingest emails from Mbox files (`.mbox`). A new Mbox connector has been implemented on the backend, and the user interface has been updated to support creating Mbox ingestion sources. Documentation for this new provider has also been added.
Additionally, this commit includes new documentation for upgrading and migrating Open Archiver.
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* feat(auth): Implement API key authentication
This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.
Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.
* Add configurable API rate limiting
Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).
The installation documentation has been updated to reflect these new configuration options.
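Reading the two variables with their documented fallbacks can be sketched as below (the function name is illustrative; the shape mirrors the options a typical rate limiter expects):

```typescript
// Parse the two rate-limit variables, falling back to the documented
// defaults: a 15-minute window and 100 requests per IP per window.
function rateLimitConfig(env: Record<string, string | undefined>) {
	return {
		windowMs: Number(env.RATE_LIMIT_WINDOW_MS ?? 15 * 60 * 1000),
		max: Number(env.RATE_LIMIT_MAX_REQUESTS ?? 100),
	};
}
```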
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Disable system settings for demo mode
* Status API response: needsSetup
* Remove SUPER_API_KEY support
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* System settings setup
* Multi-language support
* feat: Add internationalization (i18n) support to frontend
This commit introduces internationalization (i18n) to the frontend using the `sveltekit-i18n` library, allowing the user interface to be translated into multiple languages.
Key changes:
- Added translation files for 10 languages (en, de, es, fr, etc.).
- Replaced hardcoded text strings throughout the frontend components and pages with translation keys.
- Added a language selector to the system settings page, allowing administrators to set the default application language.
- Updated the backend settings API to store and expose the new language configuration.
* Adding greek translation
* feat(backend): Implement i18n for API responses
This commit introduces internationalization (i18n) to the backend API using the `i18next` library.
Hardcoded error and response messages in the API controllers have been replaced with translation keys, which are processed by the new i18next middleware. This allows for API responses to be translated into different languages.
The following dependencies were added:
- `i18next`
- `i18next-fs-backend`
- `i18next-http-middleware`
* Formatting code
* Translation revamp for frontend and backend, adding systems docs
* Docs site title
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Format checked, contributing.md update
* Middleware setup
* IAP API, create user/roles in frontend
* RBAC using CASL library
* Switch to CASL, secure search, resource-level access control
* Remove inherent behavior, index userEmail, adding docs for IAM policies
* Format
* Adding IAM policy documentation to Docs site
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>