* OpenAPI root url fix
* Journaling OSS setup
* feat: add preserve-original-file mode for email ingestion for GoBD compliance
- Add `preserveOriginalFile` option to ingestion sources and connectors
- Stream original EML/MBOX/PST emails to temp files instead of holding
full buffers in memory, reducing memory allocation during ingestion
- Skip attachment binary extraction and EML re-serialization when
preserve mode is enabled; use raw file on disk as source of truth
- Update `EmailObject` to use `tempFilePath` instead of in-memory `eml`
buffer across all connectors (EML, MBOX, PST)
- Add new database migration (0032) for `preserve_original_file` column
- Add frontend UI toggle with tooltip (tippy.js) for the new option
- Replace console.warn calls with structured pino logger in connectors
* add isjournaled property to archived_email
* feat(ingestion): add unmerge ingestion source functionality
Introduces the ability to detach a child ingestion source from its
merge group, making it a standalone root source. Changes include:
- Add `unmerge` controller method with auth and error handling
- Add POST `/v1/ingestion-sources/{id}/unmerge` route with OpenAPI docs
- Implement `IngestionService.unmerge` backend logic
- Add unmerge UI action and handler in the frontend ingestion view
- Fix bulk delete to also remove children of deleted root sources
- Update docs with new API operation and merging sources user guide
* code formatting
* Database migration file for enum `partially_active`
* Error handling improvement
* adding exports to backend package, page icons update
* Integrity report PDF generation
* Fixed inline attachment images not displaying in the email preview by modifying `EmailPreview.svelte`.
The email HTML references embedded images via `cid:` URIs (e.g., `src="cid:ii_19c6d5f8d5eee7bd6d91"`), but the component never resolved those `cid:` references to actual image data, even though `postal-mime` already parses inline attachments with their `contentId` and binary `content`.
The `emailHtml` derived value now calls `resolveContentIdReferences()` before rendering, so inline/embedded images display correctly in the iframe preview.
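A minimal sketch of that resolution step, assuming postal-mime's attachment shape (`contentId`, `mimeType`, binary `content`); the helper name matches the one mentioned above, but the body is illustrative:

```typescript
interface InlineAttachment {
    contentId: string; // e.g. "<ii_19c6d5f8d5eee7bd6d91>"
    mimeType: string;
    content: Uint8Array;
}

function resolveContentIdReferences(html: string, attachments: InlineAttachment[]): string {
    let resolved = html;
    for (const att of attachments) {
        if (!att.contentId) continue;
        // Content-ID headers are wrapped in angle brackets; src="cid:..." is not.
        const cid = att.contentId.replace(/^<|>$/g, '');
        const dataUri = `data:${att.mimeType};base64,${Buffer.from(att.content).toString('base64')}`;
        resolved = resolved.split(`cid:${cid}`).join(dataUri);
    }
    return resolved;
}
```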
* feat: strip non-inline attachments from EML before storage
Add nodemailer dependency and emlUtils helper to remove non-inline
attachments from .eml buffers during ingestion. This avoids
double-storing attachment data since attachments are already stored
separately.
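The stripping rule can be illustrated with a small filter (the real helper re-serializes the EML via nodemailer; this sketch only shows which parts survive):

```typescript
interface MimePart {
    filename?: string;
    disposition?: 'inline' | 'attachment';
    contentId?: string;
}

// Keep inline parts and plain body parts; drop regular attachments, since
// their content is already stored separately.
function partsToKeep(parts: MimePart[]): MimePart[] {
    return parts.filter(
        (p) => p.disposition === 'inline' || Boolean(p.contentId) || !p.filename
    );
}
```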
* Upload error handling for file-based ingestion
* Use Postgres for sync session management
* Google Workspace / MS 365 duplicate check; avoid extra API call when previous ingestion fails
* OpenAPI specs for API docs
* code formatting
* Run duplicate check for IMAP import, optimize message listing
* Version update
* feat(types): update license types and prepare @open-archiver/types for public publish
- Add `LicensePingRequest` and `LicensePingResponse` interfaces for the license server ping endpoint
- Update `LicenseStatusPayload` to include `lastCheckedAt` and `planSeats` fields, and change status from `REVOKED` to `INVALID`
- Update `ConsolidatedLicenseStatus` to reflect `INVALID` status and add `lastCheckedAt`
- Bump `@open-archiver/types` version from 0.1.0 to 0.1.2, set license to MIT, make package public, and add `files` field
* fix(backend): improve ingestion error handling and error messages
This commit introduces a "force delete" mechanism for Ingestion Sources and improves error messages for file-based connectors.
Changes:
- Update `IngestionService.delete` to accept a `force` flag, bypassing the `checkDeletionEnabled` check.
- Use `force` deletion when rolling back failed ingestion source creations (e.g., decryption errors or connection failures) to ensure cleanup even if deletion is globally disabled.
- Enhance error messages in `EMLConnector`, `MboxConnector`, and `PSTConnector` to distinguish between missing local files and failed uploads, providing more specific feedback to the user.
* feat(ingestion): optimize duplicate handling and fix race conditions in Google Workspace
- Implement fast duplicate check (by Message-ID) to skip full content download for existing emails in Google Workspace and IMAP connectors.
- Fix race condition in Google Workspace initial import by capturing `historyId` before listing messages, ensuring no data loss for incoming mail during import.
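Both ideas can be sketched together, with hypothetical function parameters standing in for the Gmail/IMAP API calls:

```typescript
interface MessageRef { id: string; messageId: string }

async function planImport(
    getHistoryId: () => Promise<string>,
    listMessages: () => Promise<MessageRef[]>,
    isArchived: (messageId: string) => Promise<boolean>
): Promise<{ historyId: string; toDownload: MessageRef[] }> {
    // Capture historyId *before* listing, so mail arriving during the import
    // is replayed by the next incremental sync instead of being lost.
    const historyId = await getHistoryId();
    const messages = await listMessages();
    const toDownload: MessageRef[] = [];
    for (const msg of messages) {
        // Cheap metadata check: the expensive content download is skipped
        // entirely for Message-IDs that are already archived.
        if (!(await isArchived(msg.messageId))) toDownload.push(msg);
    }
    return { historyId, toDownload };
}
```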
* fix(api): correct API key generation and proxy handling
This commit resolves an issue where generating a new API key would fail. The root cause was improper handling of POST request bodies in the frontend proxy server.
- Refactored `ApiKeyController` methods to use arrow functions to ensure correct `this` binding.
* User profile/account page, change password, API
* docs(api): update ingestion source provider values
Update the `CreateIngestionSourceDto` documentation in `ingestion.md` to reflect the current set of supported providers.
* updating tag
* feat: add REDIS_USER env variable (#172)
* feat: add REDIS_USER env variable
fixes #171
* add proper type for bullmq config
* Bulgarian UI language strings added (backend+frontend) (#194)
* Bulgarian UI Support added
* BG language UI support - Create translation.json
* update redis config logic
* Update Bulgarian language setting, register language
* Allow specifying local file path for mbox/eml/pst (#214)
* Add agents AI doc
* Allow local file path for Mbox file ingestion
---------
Co-authored-by: Wei S. <5291640+wayneshn@users.noreply.github.com>
* feat(ingestion): add local file path support and optimize EML processing
- Frontend: Updated IngestionSourceForm to allow toggling between "Upload File" and "Local File Path" for PST, EML, and Mbox providers.
- Frontend: Added logic to clear irrelevant form data when switching import methods.
- Frontend: Added English translations for new form fields.
- Backend: Refactored EMLConnector to stream ZIP entries using yauzl instead of extracting the full archive to disk, significantly improving efficiency for large archives.
- Docs: Updated API documentation and User Guides (PST, EML, Mbox) to clarify "Local File Path" usage, specifically within Docker environments.
* docs: add meilisearch dumpless upgrade guide and snapshot config
Update `docker-compose.yml` to include the `MEILI_SCHEDULE_SNAPSHOT` environment variable, defaulting to 86400 seconds (24 hours), enabling periodic data snapshots for easier recovery. Shout out to @morph027 for the inspiration!
Additionally, update the Meilisearch upgrade documentation to include an experimental "dumpless" upgrade guide while marking the previous method as the standard recommended process.
* build(coolify): enable daily snapshots for meilisearch
Configure the Meilisearch service in `open-archiver.yml` to create snapshots every 86400 seconds (24 hours) by setting the `MEILI_SCHEDULE_SNAPSHOT` environment variable.
---------
Co-authored-by: Antonia Schwennesen <53372671+zophiana@users.noreply.github.com>
Co-authored-by: IT Creativity + Art Team <admin@it-playground.net>
Co-authored-by: Jan Berdajs <mrbrdo@gmail.com>
* formatting code
* Remove uninstalled packages
* fix(imap): Improve IMAP connection stability and error handling
This commit refactors the IMAP connector to enhance connection management, error handling, and overall stability during email ingestion.
The `isConnected` flag has been removed in favor of relying directly on the `client.usable` property from the `imapflow` library. This simplifies the connection logic and avoids state synchronization issues.
The `connect` method now re-creates the client instance if it's not usable, ensuring a fresh connection after errors or disconnects. The retry mechanism (`withRetry`) has been updated to no longer manually reset the connection state, as the `connect` method now handles this automatically on the next attempt.
Additionally, a minor bug in the `sync-cycle-finished` processor has been fixed. The logic for merging sync states from successful jobs has been simplified and correctly typed, preventing potential runtime errors when no successful jobs are present.
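The reconnect behaviour described above, reduced to a minimal sketch against a `usable`-style client interface (class and method names here are illustrative, not the actual connector code):

```typescript
interface ImapClientLike {
    usable: boolean;
    connect(): Promise<void>;
}

class ConnectionManager {
    constructor(
        private makeClient: () => ImapClientLike,
        private client: ImapClientLike = makeClient()
    ) {}

    async connect(): Promise<ImapClientLike> {
        // No separate isConnected flag: `usable` is the single source of
        // truth, and an unusable client is simply replaced.
        if (!this.client.usable) {
            this.client = this.makeClient();
            await this.client.connect();
        }
        return this.client;
    }

    async withRetry<T>(op: (c: ImapClientLike) => Promise<T>, attempts = 3): Promise<T> {
        let lastError: unknown;
        for (let i = 0; i < attempts; i++) {
            try {
                return await op(await this.connect());
            } catch (err) {
                // No manual state reset here; connect() re-creates the
                // client on the next attempt if it became unusable.
                lastError = err;
            }
        }
        throw lastError;
    }
}
```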
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Jobs page responsive fix
* feat(ingestion): Refactor email indexing into a dedicated background job
This commit refactors the email indexing process to improve the performance and reliability of the ingestion pipeline.
Previously, email indexing was performed synchronously within the mailbox processing job. This could lead to timeouts and failed ingestion cycles if the indexing step was slow or encountered errors.
To address this, the indexing logic has been moved into a separate, dedicated background job queue (`indexingQueue`). Now, the mailbox processor simply adds a batch of emails to this queue. A separate worker then processes the indexing job asynchronously.
This decoupling makes the ingestion process more robust:
- It prevents slow indexing from blocking or failing the entire mailbox sync.
- It allows for better resource management and scalability by handling indexing in a dedicated process.
- It improves error handling, as a failed indexing job can be retried independently without affecting the main ingestion flow.
Additionally, this commit includes minor documentation updates and removes a premature timeout in the PDF text extraction helper that was causing issues.
* remove uninstalled packages
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* open-core setup, adding enterprise package
* enterprise: Audit log API, UI
* Audit-log docs
* feat: Integrity report, allowing users to verify the integrity of archived emails and their attachments.
- When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
- The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
- Add docs of Integrity report
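The signature scheme in miniature (helper names are illustrative): hash at archive time, re-hash at verification time, compare.

```typescript
import { createHash } from 'node:crypto';

function sha256Hex(data: Buffer | string): string {
    return createHash('sha256').update(data).digest('hex');
}

function verifyIntegrity(storedHash: string, currentContent: Buffer | string): boolean {
    // Any change to the stored .eml or attachment bytes changes the digest,
    // so equality shows the content is untouched since archiving.
    return sha256Hex(currentContent) === storedHash;
}
```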
* Update docker-compose.yml to use bind mount for Open Archiver data.
Fix API rate-limiter warning about trust proxy
* File encryption support
* Scope attachment deduplication to ingestion source
Previously, attachment deduplication was handled globally by enforcing a unique constraint on the content hash (contentHashSha256) in the `attachments` table. This caused an issue where an attachment from one ingestion source would be incorrectly linked if the same attachment was processed by a different source.
This commit refactors the deduplication logic to be scoped on a per-ingestion-source basis.
Changes:
- **Schema:** The `attachments` table schema has been updated to include a nullable `ingestionSourceId` column. A composite unique index has been added on `(ingestionSourceId, contentHashSha256)` to enforce per-source uniqueness. The `ingestionSourceId` is nullable to ensure backward compatibility with existing databases.
- **Ingestion Logic:** The `IngestionService` has been updated to provide the `ingestionSourceId` when inserting attachment records. The `onConflictDoUpdate` clause now targets the new composite key, ensuring that attachments are only considered duplicates if they have the same hash and originate from the same ingestion source.
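The per-source uniqueness rule, reduced to its key (an in-memory analogue for illustration; the real enforcement is the composite database index on `(ingestionSourceId, contentHashSha256)`):

```typescript
// Two attachments are duplicates only when both the content hash AND the
// ingestion source match.
function dedupKey(ingestionSourceId: string, contentHashSha256: string): string {
    return `${ingestionSourceId}:${contentHashSha256}`;
}

function isDuplicate(
    seen: Set<string>,
    ingestionSourceId: string,
    contentHashSha256: string
): boolean {
    const key = dedupKey(ingestionSourceId, contentHashSha256);
    if (seen.has(key)) return true;
    seen.add(key);
    return false;
}
```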
* Add option to disable deletions
This commit introduces a new feature that allows admins to disable the deletion of emails and ingestion sources for the entire instance. This is a critical feature for compliance and data retention, as it prevents accidental or unauthorized deletions.
Changes:
- **Configuration**: Added an `ENABLE_DELETION` environment variable. If this variable is not set to `true`, all deletion operations will be disabled.
- **Deletion Guard**: A centralized `checkDeletionEnabled` guard has been implemented to enforce this setting at both the controller and service levels, ensuring a robust and secure implementation.
- **Documentation**: The installation guide has been updated to include the new `ENABLE_DELETION` environment variable and its behavior.
- **Refactor**: The `IngestionService`'s `create` method was refactored to remove unnecessary calls to the `delete` method, simplifying the code and improving its robustness.
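A minimal sketch of the guard's behaviour (the exact implementation may differ): deletion is off unless the operator explicitly opts in with `ENABLE_DELETION=true`.

```typescript
function checkDeletionEnabled(env: Record<string, string | undefined> = process.env): void {
    if (env.ENABLE_DELETION !== 'true') {
        // Enforced at both the controller and service levels, so a bypassed
        // controller check still cannot delete.
        throw new Error('Deletions are disabled on this instance.');
    }
}
```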
* Adding position for menu items
* feat(docker): Fix CORS errors
This commit fixes CORS errors when running the app in Docker by introducing the `APP_URL` environment variable. A CORS policy is set up for the backend to only allow origin from the `APP_URL`.
Key changes include:
- New `APP_URL` and `ORIGIN` environment variables have been added to properly configure CORS and the SvelteKit adapter, making the application's public URL easily configurable.
- Dockerfiles are updated to copy the entrypoint script, Drizzle config, and migration files into the final image.
- Documentation and example files (`.env.example`, `docker-compose.yml`) have been updated to reflect these changes.
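The resulting policy in miniature: a hypothetical origin predicate that only admits the configured `APP_URL`.

```typescript
function isAllowedOrigin(origin: string | undefined, appUrl: string): boolean {
    // Same-origin and non-browser requests carry no Origin header.
    if (!origin) return true;
    try {
        // Compare normalized origins so a trailing slash or path in APP_URL
        // does not cause spurious mismatches.
        return new URL(origin).origin === new URL(appUrl).origin;
    } catch {
        return false;
    }
}
```

In Express, a predicate like this could back the `cors` middleware's `origin` option.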
* feat(attachments): De-duplicate attachment content by content hash
This commit refactors attachment handling to allow multiple emails within the same ingestion source to reference attachments with identical content (same hash).
Changes:
- The unique index on the `attachments` table has been changed to a non-unique index to permit duplicate hash/source pairs.
- The ingestion logic is updated to first check for an existing attachment with the same hash and source. If found, it reuses the existing record; otherwise, it creates a new one. This maintains storage de-duplication.
- The email deletion logic is improved to be more robust. It now correctly removes the email-attachment link before checking if the attachment record and its corresponding file can be safely deleted.
* Not filtering out the Trash folder
* feat(backend): Add BullMQ dashboard for job monitoring
This commit introduces a web-based UI for monitoring and managing background jobs using BullMQ.
Key changes:
- A new `/api/v1/jobs` endpoint is created, serving the Bull Board dashboard. Access is restricted to authenticated administrators.
- All BullMQ queue definitions (`ingestion`, `indexing`, `sync-scheduler`) have been centralized into a new `packages/backend/src/jobs/queues.ts` file.
- Workers and services now import queue instances from this central file, improving code organization and removing redundant queue instantiations.
* Add `ALL_INCLUSIVE_ARCHIVE` environment variable to disable junk filtering
* Using BSL license
* frontend: Responsive design for menu bar, pagination
* License service/module
* Remove demoMode logic
* Formatting code
* Remove enterprise packages
* Fix package.json in packages
* Search page responsive fix
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Feat/tika integration (#94)
* feat(Tika): Integration of Tika for text extraction
* feat(Tika) Integration of Apache Tika for text extraction
* feat(Tika): Complete Tika integration with text extraction and docker-compose setup
- Add Tika service to docker-compose.yml
- Implement text sanitization and document validation
- Improve batch processing with concurrency control
* fix(comments): translate comments into English
fix(docker): remove ports (only used for testing)
* feat(indexing): Implement batch indexing for Meilisearch
This change introduces batch processing for indexing emails into Meilisearch to significantly improve performance and throughput during ingestion. This change is based on the batch processing method previously contributed by @axeldunkel.
Previously, each email was indexed individually, resulting in a high number of separate API calls. This approach was inefficient, especially for large mailboxes.
The `processMailbox` queue worker now accumulates emails into a batch before sending them to the `IndexingService`. The service then uses the `addDocuments` Meilisearch API endpoint to index the entire batch in a single request, reducing network overhead and improving indexing speed.
A new environment variable, `MEILI_INDEXING_BATCH`, has been added to make the batch size configurable, with a default of 500.
Additionally, this commit includes minor refactoring:
- The `TikaService` has been moved to its own dedicated file.
- The `PendingEmail` type has been moved to the shared `@open-archiver/types` package.
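The batching step in isolation (the `addDocuments` callback stands in for the Meilisearch client call; names are illustrative):

```typescript
// Batch size is configurable via MEILI_INDEXING_BATCH, defaulting to 500.
const DEFAULT_BATCH_SIZE = Number(process.env.MEILI_INDEXING_BATCH ?? 500);

async function indexInBatches<T>(
    emails: T[],
    addDocuments: (batch: T[]) => Promise<void>,
    batchSize: number = DEFAULT_BATCH_SIZE
): Promise<number> {
    let requests = 0;
    for (let i = 0; i < emails.length; i += batchSize) {
        // One network round-trip per batch instead of one per email.
        await addDocuments(emails.slice(i, i + batchSize));
        requests++;
    }
    return requests;
}
```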
* chore(jobs): make continuous sync job scheduling idempotent
Adds a static `jobId` to the repeatable 'schedule-continuous-sync' job.
This prevents duplicate jobs from being scheduled if the server restarts. By providing a unique ID, the queue will update the existing repeatable job instead of creating a new one, ensuring the sync runs only at the configured frequency.
---------
Co-authored-by: axeldunkel <53174090+axeldunkel@users.noreply.github.com>
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
This commit introduces two major features:
1. **Mbox File Ingestion:**
Users can now ingest emails from Mbox files (`.mbox`). A new Mbox connector has been implemented on the backend, and the user interface has been updated to support creating Mbox ingestion sources. Documentation for this new provider has also been added.
Additionally, this commit includes new documentation for upgrading and migrating Open Archiver.
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
This commit updates several dependencies in the frontend and backend packages.
- **Backend:**
- Upgrades `xlsx` to version `0.20.3` by pointing to the official CDN URL. This ensures usage of the community edition with a permissive license.
- Removes the unused `bull-board` development dependency.
- **Frontend:**
- Upgrades `@sveltejs/kit` from `^2.16.0` to `^2.38.1` to stay current with the latest features and fixes.
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* feat: Add new version notification in footer
This commit implements a system to check for new application versions and notify the user.
On page load, the server-side code now fetches the latest release from the GitHub repository API. It uses `semver` to compare the current application version with the latest release tag.
If a newer version is available, an alert is displayed in the footer with a link to the release page. The current application version is also now displayed in the footer. The version check is cached for one hour to minimize API requests.
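The comparison in miniature. The real code uses the `semver` package; this sketch hand-rolls a minimal numeric comparison for illustration (prerelease tags are out of scope):

```typescript
function updateAvailable(current: string, latestTag: string): boolean {
    // GitHub release tags are typically prefixed with "v".
    const parse = (v: string) => v.replace(/^v/, '').split('.').map(Number);
    const [a, b] = [parse(current), parse(latestTag)];
    for (let i = 0; i < 3; i++) {
        if ((b[i] ?? 0) !== (a[i] ?? 0)) return (b[i] ?? 0) > (a[i] ?? 0);
    }
    return false;
}
```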
* Modify version notification
* current version 0.3.1
* Resolve conflicts
* Code formatting
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* feat(auth): Implement API key authentication
This commit enables API access with an API key system. This change provides a better experience for programmatic access and third-party integrations.
Key changes include:
- **API Key Management:** Users can now generate, manage, and revoke persistent API keys through a new "API Keys" section in the settings UI.
- **Authentication Middleware:** API requests are now authenticated via an `X-API-KEY` header instead of the previous `Authorization: Bearer` token.
- **Backend Implementation:** Adds a new `api_keys` database table, along with corresponding services, controllers, and routes to manage the key lifecycle securely.
- **Rate Limiting:** The API rate limiter now uses the API key to identify and track requests.
- **Documentation:** The API authentication documentation has been updated to reflect the new method.
* Add configurable API rate limiting
Two new variables are added to `.env.example`:
- `RATE_LIMIT_WINDOW_MS`: The time window in milliseconds for which requests are checked (defaults to 15 minutes).
- `RATE_LIMIT_MAX_REQUESTS`: The maximum number of requests allowed from an IP within the window (defaults to 100).
The installation documentation has been updated to reflect these new configuration options.
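How the two variables map onto limiter options (a sketch; defaults as documented above):

```typescript
function rateLimitOptions(env: Record<string, string | undefined> = process.env) {
    return {
        // 15-minute window and 100 requests per IP unless overridden.
        windowMs: Number(env.RATE_LIMIT_WINDOW_MS ?? 15 * 60 * 1000),
        max: Number(env.RATE_LIMIT_MAX_REQUESTS ?? 100),
    };
}
```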
* Disable API operation in demo mode
* Exclude public API endpoints from rate limiting
* JSON rate limiting message & status code
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Disable system settings for demo mode
* Status API response: needsSetup
* Remove SUPER_API_KEY support
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* System settings setup
* Multi-language support
* feat: Add internationalization (i18n) support to frontend
This commit introduces internationalization (i18n) to the frontend using the `sveltekit-i18n` library, allowing the user interface to be translated into multiple languages.
Key changes:
- Added translation files for 10 languages (en, de, es, fr, etc.).
- Replaced hardcoded text strings throughout the frontend components and pages with translation keys.
- Added a language selector to the system settings page, allowing administrators to set the default application language.
- Updated the backend settings API to store and expose the new language configuration.
* Adding Greek translation
* feat(backend): Implement i18n for API responses
This commit introduces internationalization (i18n) to the backend API using the `i18next` library.
Hardcoded error and response messages in the API controllers have been replaced with translation keys, which are processed by the new i18next middleware. This allows for API responses to be translated into different languages.
The following dependencies were added:
- `i18next`
- `i18next-fs-backend`
- `i18next-http-middleware`
* Formatting code
* Translation revamp for frontend and backend, adding systems docs
* Docs site title
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Format checked, contributing.md update
* Middleware setup
* IAP API, create user/roles in frontend
* RBAC using CASL library
* Switch to CASL, secure search, resource-level access control
* Remove inherent behavior, index userEmail, adding docs for IAM policies
* Format
* System settings setup
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
* Adding IAM policy documentation to Docs site
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>