mirror of
https://github.com/LogicLabs-OU/OpenArchiver.git
synced 2026-04-06 00:31:57 +02:00
v0.4 init: File encryption, integrity report, deletion protection, job monitoring (#187)
* open-core setup, adding enterprise package
* enterprise: Audit log API, UI
* Audit-log docs
* feat: Integrity report, allowing users to verify the integrity of archived emails and their attachments.
  - When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA-256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
  - The integrity check recalculates these signatures for the stored files and compares them to the original signatures in the database. This lets you verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
  - Add docs for the integrity report.
* Update docker-compose.yml to use a bind mount for Open Archiver data. Fix the API rate-limiter warning about trust proxy.
* File encryption support
* Scope attachment deduplication to ingestion source
  Previously, attachment deduplication was handled globally by enforcing a unique constraint on the content hash (`contentHashSha256`) in the `attachments` table. This caused an issue where an attachment from one ingestion source would be incorrectly linked if the same attachment was processed by a different source. This commit scopes the deduplication logic to each ingestion source.
  Changes:
  - **Schema:** The `attachments` table now includes a nullable `ingestionSourceId` column, and a composite unique index on `(ingestionSourceId, contentHashSha256)` enforces per-source uniqueness. The `ingestionSourceId` is nullable for backward compatibility with existing databases.
  - **Ingestion logic:** The `IngestionService` now provides the `ingestionSourceId` when inserting attachment records. The `onConflictDoUpdate` clause targets the new composite key, so attachments are only considered duplicates if they have the same hash and originate from the same ingestion source.
* Add option to disable deletions
  This commit lets admins disable the deletion of emails and ingestion sources for the entire instance. This is critical for compliance and data retention, as it prevents accidental or unauthorized deletions.
  Changes:
  - **Configuration:** Added an `ENABLE_DELETION` environment variable. Unless it is set to `true`, all deletion operations are disabled.
  - **Deletion guard:** A centralized `checkDeletionEnabled` guard enforces this setting at both the controller and service levels, ensuring a robust and secure implementation.
  - **Documentation:** The installation guide now documents the new `ENABLE_DELETION` environment variable and its behavior.
  - **Refactor:** The `IngestionService`'s `create` method no longer makes unnecessary calls to the `delete` method, simplifying the code and improving its robustness.
* Adding position for menu items
* feat(docker): Fix CORS errors
  This commit fixes CORS errors when running the app in Docker by introducing the `APP_URL` environment variable. The backend's CORS policy only allows the origin given by `APP_URL`.
  Key changes:
  - New `APP_URL` and `ORIGIN` environment variables properly configure CORS and the SvelteKit adapter, making the application's public URL easily configurable.
  - Dockerfiles now copy the entrypoint script, Drizzle config, and migration files into the final image.
  - Documentation and example files (`.env.example`, `docker-compose.yml`) have been updated to reflect these changes.
* feat(attachments): De-duplicate attachment content by content hash
  This commit refactors attachment handling so that multiple emails within the same ingestion source can reference attachments with identical content (same hash).
  Changes:
  - The unique index on the `attachments` table is now a non-unique index, permitting duplicate hash/source pairs.
  - The ingestion logic first checks for an existing attachment with the same hash and source. If found, it reuses the existing record; otherwise, it creates a new one. This preserves storage de-duplication.
  - The email deletion logic is more robust: it now removes the email-attachment link before checking whether the attachment record and its corresponding file can be safely deleted.
* Not filtering out the Trash folder
* feat(backend): Add BullMQ dashboard for job monitoring
  This commit introduces a web-based UI for monitoring and managing background jobs using BullMQ.
  Key changes:
  - A new `/api/v1/jobs` endpoint serves the Bull Board dashboard. Access is restricted to authenticated administrators.
  - All BullMQ queue definitions (`ingestion`, `indexing`, `sync-scheduler`) are centralized in a new `packages/backend/src/jobs/queues.ts` file.
  - Workers and services now import queue instances from this central file, improving code organization and removing redundant queue instantiations.
* Add `ALL_INCLUSIVE_ARCHIVE` environment variable to disable junk filtering
* Using BSL license
* frontend: Responsive design for menu bar, pagination
* License service/module
* Remove demoMode logic
* Formatting code
* Remove enterprise packages
* Fix package.json in packages
* Search page responsive fix
---------
Co-authored-by: Wayne <5291640+ringoinca@users.noreply.github.com>
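The per-source de-duplication described above can be sketched as a lookup keyed on the `(ingestionSourceId, contentHashSha256)` pair. This is an illustrative in-memory model of the behavior, not the actual Drizzle/Postgres implementation; the function and variable names are made up for the example:

```typescript
// Illustrative model of per-source attachment de-duplication: the composite key
// (ingestionSourceId, contentHashSha256) decides whether an existing record is reused.
const attachmentIndex = new Map<string, string>(); // composite key -> attachment id

function upsertAttachment(
	ingestionSourceId: string,
	contentHashSha256: string,
	candidateId: string
): string {
	const key = `${ingestionSourceId}:${contentHashSha256}`;
	const existing = attachmentIndex.get(key);
	if (existing !== undefined) {
		return existing; // same hash within the same source: reuse the record
	}
	attachmentIndex.set(key, candidateId); // new per-source record
	return candidateId;
}

console.log(upsertAttachment('src-A', 'hash-1', 'att-1')); // att-1 (new)
console.log(upsertAttachment('src-A', 'hash-1', 'att-2')); // att-1 (deduplicated)
console.log(upsertAttachment('src-B', 'hash-1', 'att-3')); // att-3 (different source, new record)
```

The same content hash can therefore exist once per ingestion source, which is exactly what the composite unique index enforces at the database level.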
.env.example (23 changes)
@@ -4,8 +4,15 @@
 NODE_ENV=development
 PORT_BACKEND=4000
 PORT_FRONTEND=3000
+# The public-facing URL of your application. This is used by the backend to configure CORS.
+APP_URL=http://localhost:3000
+# This is used by the SvelteKit Node adapter to determine the server's public-facing URL.
+# It should always be set to the value of APP_URL.
+ORIGIN=$APP_URL
 # The frequency of continuous email syncing. Default is every minute, but you can change it to another value based on your needs.
 SYNC_FREQUENCY='* * * * *'
+# Set to 'true' to include Junk and Trash folders in the email archive. Defaults to false.
+ALL_INCLUSIVE_ARCHIVE=false
 
 # --- Docker Compose Service Configuration ---
 # These variables are used by docker-compose.yml to configure the services. Leave them unchanged if you use Docker services for Postgresql, Valkey (Redis) and Meilisearch. If you decide to use your own instances of these services, you can substitute them with your own connection credentials.
@@ -40,7 +47,9 @@ BODY_SIZE_LIMIT=100M
 # --- Local Storage Settings ---
 # The path inside the container where files will be stored.
 # This is mapped to a Docker volume for persistence.
-# This is only used if STORAGE_TYPE is 'local'.
+# This is not an optional variable, it is where the Open Archiver service stores application data. Set this even if you are using S3 storage.
+# Make sure the user that runs the Open Archiver service has read and write access to this path.
+# Important: It is recommended to create this path manually before installation, otherwise you may face permission and ownership problems.
 STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
 
 # --- S3-Compatible Storage Settings ---
@@ -53,8 +62,18 @@ STORAGE_S3_REGION=
 # Set to 'true' for MinIO and other non-AWS S3 services
 STORAGE_S3_FORCE_PATH_STYLE=false
 
+# --- Storage Encryption ---
+# IMPORTANT: Generate a secure, random 32-byte hex string for this key.
+# You can use `openssl rand -hex 32` to generate a key.
+# This key is used for AES-256 encryption of files at rest.
+# This is an optional variable, if not set, files will not be encrypted.
+STORAGE_ENCRYPTION_KEY=
+
 # --- Security & Authentication ---
+
+# Enable or disable deletion of emails and ingestion sources. Defaults to false.
+ENABLE_DELETION=false
 
 # Rate Limiting
 # The window in milliseconds for which API requests are checked. Defaults to 60000 (1 minute).
 RATE_LIMIT_WINDOW_MS=60000
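The key format matters here: AES-256 needs exactly 32 bytes of key material, which `openssl rand -hex 32` emits as 64 hexadecimal characters. A quick way to generate a key and sanity-check it before pasting it into `.env`:

```shell
# Generate a 32-byte key, hex-encoded, for STORAGE_ENCRYPTION_KEY.
KEY=$(openssl rand -hex 32)

# Sanity check: 64 hex characters == 32 bytes of key material.
echo "length: ${#KEY}"
echo "STORAGE_ENCRYPTION_KEY=$KEY"
```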
@@ -77,5 +96,3 @@ ENCRYPTION_KEY=
 # Apache Tika Integration
 # ONLY active if TIKA_URL is set
 TIKA_URL=http://tika:9998
-
-
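The `ENABLE_DELETION` flag introduced above is enforced by a centralized guard at both the controller and service levels. A rough sketch of the assumed semantics follows; the real `checkDeletionEnabled` implementation lives in the backend package and may differ in detail:

```typescript
// Hypothetical sketch of the deletion guard: deletion is opt-in via ENABLE_DELETION=true.
function checkDeletionEnabled(): void {
	if (process.env.ENABLE_DELETION !== 'true') {
		throw new Error('Deletion is disabled on this instance. Set ENABLE_DELETION=true to allow it.');
	}
}

// Example: calling the guard before a destructive operation.
process.env.ENABLE_DELETION = 'false';
try {
	checkDeletionEnabled();
	console.log('deletion allowed');
} catch {
	console.log('deletion blocked');
}
```

Because the default is "disabled", forgetting to set the variable fails safe, which is the behavior a compliance-focused archive wants.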
.github/ISSUE_TEMPLATE/bug_report.md (7 changes)
@@ -4,7 +4,6 @@ about: Create a report to help us improve
 title: ''
 labels: bug
 assignees: ''
 
 ---
 
 **Describe the bug**
@@ -12,9 +11,10 @@ A clear and concise description of what the bug is.
 
 **To Reproduce**
 Steps to reproduce the behavior:
 
 1. Go to '...'
 2. Click on '....'
-5. See error
+3. See error
 
 **Expected behavior**
 A clear and concise description of what you expected to happen.
@@ -23,7 +23,8 @@ A clear and concise description of what you expected to happen.
 If applicable, add screenshots to help explain your problem.
 
 **System:**
-- Open Archiver Version:
+- Open Archiver Version:
 
 **Relevant logs:**
 Any relevant logs (Redact sensitive information)
.github/ISSUE_TEMPLATE/feature_request.md (3 changes)
@@ -4,11 +4,10 @@ about: Suggest an idea for this project
 title: ''
 labels: enhancement
 assignees: ''
 
 ---
 
 **Is your feature request related to a problem? Please describe.**
 A clear and concise description of what the problem is.
 
 **Describe the solution you'd like**
 A clear and concise description of what you want to happen.
.github/workflows/docker-deployment.yml (2 changes)
@@ -35,7 +35,7 @@ jobs:
       uses: docker/build-push-action@v6
       with:
         context: .
-        file: ./docker/Dockerfile
+        file: ./apps/open-archiver/Dockerfile
         platforms: linux/amd64,linux/arm64
         push: true
         tags: logiclabshq/open-archiver:${{ steps.sha.outputs.sha }}
.gitignore (4 changes)
@@ -24,3 +24,7 @@ pnpm-debug.log
 # Vitepress
 docs/.vitepress/dist
 docs/.vitepress/cache
+
+
+# TS
+**/tsconfig.tsbuildinfo
LICENSE (140 changes)
@@ -200,23 +200,23 @@ You may convey a work based on the Program, or the modifications to
 produce it from the Program, in the form of source code under the
 terms of section 4, provided that you also meet all of these conditions:
 
 - **a)** The work must carry prominent notices stating that you modified
   it, and giving a relevant date.
 - **b)** The work must carry prominent notices stating that it is
   released under this License and any conditions added under section 7.
   This requirement modifies the requirement in section 4 to
   “keep intact all notices”.
 - **c)** You must license the entire work, as a whole, under this
   License to anyone who comes into possession of a copy. This
   License will therefore apply, along with any applicable section 7
   additional terms, to the whole of the work, and all its parts,
   regardless of how they are packaged. This License gives no
   permission to license the work in any other way, but it does not
   invalidate such permission if you have separately received it.
 - **d)** If the work has interactive user interfaces, each must display
   Appropriate Legal Notices; however, if the Program has interactive
   interfaces that do not display Appropriate Legal Notices, your
   work need not make them do so.
 
 A compilation of a covered work with other separate and independent
 works, which are not by their nature extensions of the covered work,
@@ -235,42 +235,42 @@ of sections 4 and 5, provided that you also convey the
 machine-readable Corresponding Source under the terms of this License,
 in one of these ways:
 
 - **a)** Convey the object code in, or embodied in, a physical product
   (including a physical distribution medium), accompanied by the
   Corresponding Source fixed on a durable physical medium
   customarily used for software interchange.
 - **b)** Convey the object code in, or embodied in, a physical product
   (including a physical distribution medium), accompanied by a
   written offer, valid for at least three years and valid for as
   long as you offer spare parts or customer support for that product
   model, to give anyone who possesses the object code either **(1)** a
   copy of the Corresponding Source for all the software in the
   product that is covered by this License, on a durable physical
   medium customarily used for software interchange, for a price no
   more than your reasonable cost of physically performing this
   conveying of source, or **(2)** access to copy the
   Corresponding Source from a network server at no charge.
 - **c)** Convey individual copies of the object code with a copy of the
   written offer to provide the Corresponding Source. This
   alternative is allowed only occasionally and noncommercially, and
   only if you received the object code with such an offer, in accord
   with subsection 6b.
 - **d)** Convey the object code by offering access from a designated
   place (gratis or for a charge), and offer equivalent access to the
   Corresponding Source in the same way through the same place at no
   further charge. You need not require recipients to copy the
   Corresponding Source along with the object code. If the place to
   copy the object code is a network server, the Corresponding Source
   may be on a different server (operated by you or a third party)
   that supports equivalent copying facilities, provided you maintain
   clear directions next to the object code saying where to find the
   Corresponding Source. Regardless of what server hosts the
   Corresponding Source, you remain obligated to ensure that it is
   available for as long as needed to satisfy these requirements.
 - **e)** Convey the object code using peer-to-peer transmission, provided
   you inform other peers where the object code and Corresponding
   Source of the work are being offered to the general public at no
   charge under subsection 6d.
 
 A separable portion of the object code, whose source code is excluded
 from the Corresponding Source as a System Library, need not be
@@ -344,23 +344,23 @@ Notwithstanding any other provision of this License, for material you
 add to a covered work, you may (if authorized by the copyright holders of
 that material) supplement the terms of this License with terms:
 
 - **a)** Disclaiming warranty or limiting liability differently from the
   terms of sections 15 and 16 of this License; or
 - **b)** Requiring preservation of specified reasonable legal notices or
   author attributions in that material or in the Appropriate Legal
   Notices displayed by works containing it; or
 - **c)** Prohibiting misrepresentation of the origin of that material, or
   requiring that modified versions of such material be marked in
   reasonable ways as different from the original version; or
 - **d)** Limiting the use for publicity purposes of names of licensors or
   authors of the material; or
 - **e)** Declining to grant rights under trademark law for use of some
   trade names, trademarks, or service marks; or
 - **f)** Requiring indemnification of licensors and authors of that
   material by anyone who conveys the material (or modified versions of
   it) with contractual assumptions of liability to the recipient, for
   any liability that these contractual assumptions directly impose on
   those licensors and authors.
 
 All other non-permissive additional terms are considered “further
 restrictions” within the meaning of section 10. If the Program as you
@@ -1,4 +1,4 @@
-# Dockerfile for Open Archiver
+# Dockerfile for the OSS version of Open Archiver
 
 ARG BASE_IMAGE=node:22-alpine
 
@@ -15,12 +15,13 @@ COPY package.json pnpm-workspace.yaml pnpm-lock.yaml* ./
 COPY packages/backend/package.json ./packages/backend/
 COPY packages/frontend/package.json ./packages/frontend/
 COPY packages/types/package.json ./packages/types/
+COPY apps/open-archiver/package.json ./apps/open-archiver/
 
 # 1. Build Stage: Install all dependencies and build the project
 FROM base AS build
 COPY packages/frontend/svelte.config.js ./packages/frontend/
 
-# Install all dependencies. Use --shamefully-hoist to create a flat node_modules structure
+# Install all dependencies.
 ENV PNPM_HOME="/pnpm"
 RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
     pnpm install --shamefully-hoist --frozen-lockfile --prod=false
@@ -28,19 +29,19 @@ RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
 # Copy the rest of the source code
 COPY . .
 
-# Build all packages.
-RUN pnpm build
+# Build the OSS packages.
+RUN pnpm build:oss
 
 # 2. Production Stage: Install only production dependencies and copy built artifacts
 FROM base AS production
 
 # Copy built application from build stage
 COPY --from=build /app/packages/backend/dist ./packages/backend/dist
-COPY --from=build /app/packages/frontend/build ./packages/frontend/build
-COPY --from=build /app/packages/types/dist ./packages/types/dist
 COPY --from=build /app/packages/backend/drizzle.config.ts ./packages/backend/drizzle.config.ts
 COPY --from=build /app/packages/backend/src/database/migrations ./packages/backend/src/database/migrations
+COPY --from=build /app/packages/frontend/build ./packages/frontend/build
+COPY --from=build /app/packages/types/dist ./packages/types/dist
+COPY --from=build /app/apps/open-archiver/dist ./apps/open-archiver/dist
 
 # Copy the entrypoint script and make it executable
 COPY docker/docker-entrypoint.sh /usr/local/bin/
@@ -53,4 +54,4 @@ EXPOSE 3000
 ENTRYPOINT ["docker-entrypoint.sh"]
 
 # Start the application
-CMD ["pnpm", "docker-start"]
+CMD ["pnpm", "docker-start:oss"]
apps/open-archiver/index.ts (new file, 24 lines)
@@ -0,0 +1,24 @@
+import { createServer, logger } from '@open-archiver/backend';
+import * as dotenv from 'dotenv';
+
+dotenv.config();
+
+async function start() {
+	// --- Environment Variable Validation ---
+	const { PORT_BACKEND } = process.env;
+
+	if (!PORT_BACKEND) {
+		throw new Error('Missing required environment variables for the backend: PORT_BACKEND.');
+	}
+	// Create the server instance (passing no modules for the default OSS version)
+	const app = await createServer([]);
+
+	app.listen(PORT_BACKEND, () => {
+		logger.info({}, `✅ Open Archiver (OSS) running on port ${PORT_BACKEND}`);
+	});
+}
+
+start().catch((error) => {
+	logger.error({ error }, 'Failed to start the server:', error);
+	process.exit(1);
+});
apps/open-archiver/package.json (new file, 18 lines)
@@ -0,0 +1,18 @@
+{
+	"name": "open-archiver-app",
+	"version": "1.0.0",
+	"private": true,
+	"scripts": {
+		"dev": "ts-node-dev --respawn --transpile-only index.ts",
+		"build": "tsc",
+		"start": "node dist/index.js"
+	},
+	"dependencies": {
+		"@open-archiver/backend": "workspace:*",
+		"dotenv": "^17.2.0"
+	},
+	"devDependencies": {
+		"@types/dotenv": "^8.2.3",
+		"ts-node-dev": "^2.0.0"
+	}
+}
apps/open-archiver/tsconfig.json (new file, 8 lines)
@@ -0,0 +1,8 @@
+{
+	"extends": "../../tsconfig.base.json",
+	"compilerOptions": {
+		"outDir": "dist"
+	},
+	"include": ["./**/*.ts"],
+	"references": [{ "path": "../../packages/backend" }]
+}
assets/screenshots/integrity-report.png (new binary file, 304 KiB)
Binary file not shown.
@@ -10,7 +10,7 @@ services:
|
|||||||
env_file:
|
env_file:
|
||||||
- .env
|
- .env
|
||||||
volumes:
|
volumes:
|
||||||
- archiver-data:/var/data/open-archiver
|
- ${STORAGE_LOCAL_ROOT_PATH}:${STORAGE_LOCAL_ROOT_PATH}
|
||||||
depends_on:
|
depends_on:
|
||||||
- postgres
|
- postgres
|
||||||
- valkey
|
- valkey
|
||||||
@@ -66,8 +66,6 @@ volumes:
         driver: local
     meilidata:
         driver: local
-    archiver-data:
-        driver: local
 
 networks:
     open-archiver-net:
@@ -33,6 +33,7 @@ export default defineConfig({
             items: [
                 { text: 'Get Started', link: '/' },
                 { text: 'Installation', link: '/user-guides/installation' },
+                { text: 'Email Integrity Check', link: '/user-guides/integrity-check' },
                 {
                     text: 'Email Providers',
                     link: '/user-guides/email-providers/',
@@ -91,8 +92,10 @@ export default defineConfig({
             { text: 'Archived Email', link: '/api/archived-email' },
             { text: 'Dashboard', link: '/api/dashboard' },
             { text: 'Ingestion', link: '/api/ingestion' },
+            { text: 'Integrity Check', link: '/api/integrity' },
             { text: 'Search', link: '/api/search' },
             { text: 'Storage', link: '/api/storage' },
+            { text: 'Jobs', link: '/api/jobs' },
         ],
     },
     {
docs/api/integrity.md (Normal file, 51 lines)
@@ -0,0 +1,51 @@
# Integrity Check API

The Integrity Check API provides an endpoint to verify the cryptographic hash of an archived email and its attachments against the stored values in the database. This allows you to ensure that the stored files have not been tampered with or corrupted since they were archived.

## Check Email Integrity

Verifies the integrity of a specific archived email and all of its associated attachments.

- **URL:** `/api/v1/integrity/:id`
- **Method:** `GET`
- **URL Params:**
    - `id=[string]` (required) - The UUID of the archived email to check.
- **Permissions:** `read:archive`
- **Success Response:**
    - **Code:** 200 OK
    - **Content:** `IntegrityCheckResult[]`

### Response Body `IntegrityCheckResult`

An array of objects, each representing the result of an integrity check for a single file (either the email itself or an attachment).

| Field      | Type                      | Description                                                                  |
| :--------- | :------------------------ | :--------------------------------------------------------------------------- |
| `type`     | `'email' \| 'attachment'` | The type of the file being checked.                                          |
| `id`       | `string`                  | The UUID of the email or attachment.                                         |
| `filename` | `string` (optional)       | The filename of the attachment. This field is only present for attachments.  |
| `isValid`  | `boolean`                 | `true` if the current hash matches the stored hash, otherwise `false`.       |
| `reason`   | `string` (optional)       | A reason for the failure. Only present if `isValid` is `false`.              |

### Example Response

```json
[
    {
        "type": "email",
        "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
        "isValid": true
    },
    {
        "type": "attachment",
        "id": "b2c3d4e5-f6a7-8901-2345-67890abcdef1",
        "filename": "document.pdf",
        "isValid": false,
        "reason": "Stored hash does not match current hash."
    }
]
```

- **Error Response:**
    - **Code:** 404 Not Found
    - **Content:** `{ "message": "Archived email not found" }`
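Conceptually, the check behind this endpoint recomputes a SHA-256 digest of the stored file and compares it with the hash recorded at archival time. A minimal TypeScript sketch of that comparison (the function is illustrative, not the actual service code; the result shape mirrors the table above):

```typescript
import { createHash } from 'node:crypto';

interface IntegrityCheckResult {
	type: 'email' | 'attachment';
	id: string;
	filename?: string;
	isValid: boolean;
	reason?: string;
}

// Recompute the SHA-256 digest of a stored file and compare it to the
// hash recorded in the database when the email was archived.
function checkIntegrity(
	type: 'email' | 'attachment',
	id: string,
	content: Buffer,
	storedHash: string,
	filename?: string
): IntegrityCheckResult {
	const currentHash = createHash('sha256').update(content).digest('hex');
	const isValid = currentHash === storedHash;
	return {
		type,
		id,
		...(filename ? { filename } : {}),
		isValid,
		...(isValid ? {} : { reason: 'Stored hash does not match current hash.' }),
	};
}
```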
docs/api/jobs.md (Normal file, 128 lines)
@@ -0,0 +1,128 @@
# Jobs API

The Jobs API provides endpoints for monitoring the job queues and the jobs within them.

## Overview

Open Archiver uses a job queue system to handle asynchronous tasks like email ingestion and indexing. The system is built on Redis and BullMQ and uses a producer-consumer pattern.

### Job Statuses

Jobs can have one of the following statuses:

- **active:** The job is currently being processed.
- **completed:** The job has been completed successfully.
- **failed:** The job has failed after all retry attempts.
- **delayed:** The job is delayed and will be processed at a later time.
- **waiting:** The job is waiting to be processed.
- **paused:** The job is paused and will not be processed until it is resumed.

### Errors

When a job fails, the `failedReason` and `stacktrace` fields will contain information about the error. The `error` field will also be populated with the `failedReason` for easier access.

### Job Preservation

Jobs are preserved for only a limited time after they complete or fail, so the job counts and job lists returned by the API reflect only this retention window:

- **Completed jobs:** The last 1000 completed jobs are preserved.
- **Failed jobs:** The last 5000 failed jobs are preserved.

## Get All Queues

- **Endpoint:** `GET /v1/jobs/queues`
- **Description:** Retrieves a list of all job queues and their job counts.
- **Permissions:** `manage:all`
- **Responses:**
    - `200 OK`: Returns a list of queue overviews.
    - `401 Unauthorized`: If the user is not authenticated.
    - `403 Forbidden`: If the user does not have the required permissions.

### Response Body

```json
{
    "queues": [
        {
            "name": "ingestion",
            "counts": {
                "active": 0,
                "completed": 56,
                "failed": 4,
                "delayed": 3,
                "waiting": 0,
                "paused": 0
            }
        },
        {
            "name": "indexing",
            "counts": {
                "active": 0,
                "completed": 0,
                "failed": 0,
                "delayed": 0,
                "waiting": 0,
                "paused": 0
            }
        }
    ]
}
```

## Get Queue Jobs

- **Endpoint:** `GET /v1/jobs/queues/:queueName`
- **Description:** Retrieves a list of jobs within a specific queue, with pagination and filtering by status.
- **Permissions:** `manage:all`
- **URL Parameters:**
    - `queueName` (string, required): The name of the queue to retrieve jobs from.
- **Query Parameters:**
    - `status` (string, optional): The status of the jobs to retrieve. Can be one of `active`, `completed`, `failed`, `delayed`, `waiting`, `paused`. Defaults to `failed`.
    - `page` (number, optional): The page number to retrieve. Defaults to `1`.
    - `limit` (number, optional): The number of jobs to retrieve per page. Defaults to `10`.
- **Responses:**
    - `200 OK`: Returns a detailed view of the queue, including a paginated list of jobs.
    - `401 Unauthorized`: If the user is not authenticated.
    - `403 Forbidden`: If the user does not have the required permissions.
    - `404 Not Found`: If the specified queue does not exist.

### Response Body

```json
{
    "name": "ingestion",
    "counts": {
        "active": 0,
        "completed": 56,
        "failed": 4,
        "delayed": 3,
        "waiting": 0,
        "paused": 0
    },
    "jobs": [
        {
            "id": "1",
            "name": "initial-import",
            "data": {
                "ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8"
            },
            "state": "failed",
            "failedReason": "Error: Connection timed out",
            "timestamp": 1678886400000,
            "processedOn": 1678886401000,
            "finishedOn": 1678886402000,
            "attemptsMade": 5,
            "stacktrace": ["..."],
            "returnValue": null,
            "ingestionSourceId": "clx1y2z3a0000b4d2e5f6g7h8",
            "error": "Error: Connection timed out"
        }
    ],
    "pagination": {
        "currentPage": 1,
        "totalPages": 1,
        "totalJobs": 4,
        "limit": 10
    }
}
```
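As a sketch of how a client might combine the path and query parameters documented above, the following helper builds the request URL for this endpoint (the helper name and base URL are illustrative, not part of the API):

```typescript
// Build the request URL for the Get Queue Jobs endpoint from its
// documented path and query parameters.
function buildQueueJobsUrl(
	baseUrl: string,
	queueName: string,
	opts: { status?: string; page?: number; limit?: number } = {}
): string {
	const url = new URL(`/v1/jobs/queues/${encodeURIComponent(queueName)}`, baseUrl);
	if (opts.status) url.searchParams.set('status', opts.status);
	if (opts.page) url.searchParams.set('page', String(opts.page));
	if (opts.limit) url.searchParams.set('limit', String(opts.limit));
	return url.toString();
}

// Usage (authentication header omitted; the host is a placeholder):
// const res = await fetch(
//     buildQueueJobsUrl('https://archiver.example.com', 'ingestion', { status: 'failed', page: 1 }),
//     { headers: { Authorization: `Bearer ${token}` } }
// );
```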
docs/enterprise/audit-log/api.md (Normal file, 78 lines)
@@ -0,0 +1,78 @@
# Audit Log: API Endpoints

The audit log feature exposes two API endpoints for retrieving and verifying audit log data. Both endpoints require authentication and are only accessible to users with the appropriate permissions.

## Get Audit Logs

Retrieves a paginated list of audit log entries, with support for filtering and sorting.

- **Endpoint:** `GET /api/v1/enterprise/audit-logs`
- **Method:** `GET`
- **Authentication:** Required

### Query Parameters

| Parameter    | Type     | Description                                                                 |
| ------------ | -------- | --------------------------------------------------------------------------- |
| `page`       | `number` | The page number to retrieve. Defaults to `1`.                               |
| `limit`      | `number` | The number of entries to retrieve per page. Defaults to `20`.               |
| `startDate`  | `date`   | The start date for the date range filter.                                   |
| `endDate`    | `date`   | The end date for the date range filter.                                     |
| `actor`      | `string` | The actor identifier to filter by.                                          |
| `actionType` | `string` | The action type to filter by (e.g., `LOGIN`, `CREATE`).                     |
| `sort`       | `string` | The sort order for the results. Can be `asc` or `desc`. Defaults to `desc`. |

### Response Body

```json
{
    "data": [
        {
            "id": 1,
            "previousHash": null,
            "timestamp": "2025-10-03T00:00:00.000Z",
            "actorIdentifier": "e8026a75-b58a-4902-8858-eb8780215f82",
            "actorIp": "::1",
            "actionType": "LOGIN",
            "targetType": "User",
            "targetId": "e8026a75-b58a-4902-8858-eb8780215f82",
            "details": {},
            "currentHash": "..."
        }
    ],
    "meta": {
        "total": 100,
        "page": 1,
        "limit": 20
    }
}
```

## Verify Audit Log Integrity

Initiates a verification process to check the integrity of the entire audit log chain.

- **Endpoint:** `POST /api/v1/enterprise/audit-logs/verify`
- **Method:** `POST`
- **Authentication:** Required

### Response Body

**Success**

```json
{
    "ok": true,
    "message": "Audit log integrity verified successfully."
}
```

**Failure**

```json
{
    "ok": false,
    "message": "Audit log chain is broken!",
    "logId": 123
}
```
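A client might consume the two response shapes above along these lines (a sketch; the helper and the base URL are illustrative, not part of the API):

```typescript
interface VerifyResponse {
	ok: boolean;
	message: string;
	logId?: number; // present only on failure
}

// Turn a verification response into a single line suitable for display
// or alerting, including the offending log ID on failure.
function summarizeVerification(res: VerifyResponse): string {
	if (res.ok) return res.message;
	return `${res.message} (first bad entry: ${res.logId ?? 'unknown'})`;
}

// Usage (authentication omitted; the host is a placeholder):
// const res = await fetch('https://archiver.example.com/api/v1/enterprise/audit-logs/verify', { method: 'POST' });
// console.log(summarizeVerification(await res.json()));
```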
docs/enterprise/audit-log/audit-service.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Audit Log: Backend Implementation

The backend implementation of the audit log is handled by the `AuditService`, located in `packages/backend/src/services/AuditService.ts`. This service encapsulates all the logic for creating, retrieving, and verifying audit log entries.

## Hashing and Verification Logic

The core of the audit log's immutability lies in its hashing and verification logic.

### Hash Calculation

The `calculateHash` method is responsible for generating a SHA-256 hash of a log entry. To ensure consistency, it performs the following steps:

1. **Canonical Object Creation:** It constructs a new object with a fixed property order, ensuring that the object's structure is always the same.
2. **Timestamp Normalization:** It converts the `timestamp` to milliseconds since the epoch (`getTime()`) to avoid any precision-related discrepancies between the application and the database.
3. **Canonical Stringification:** It uses a custom `canonicalStringify` function to create a JSON string representation of the object. This function sorts the object keys, ensuring that the output is always the same, regardless of the in-memory property order.
4. **Hash Generation:** It computes a SHA-256 hash of the canonical string.
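As an illustration of steps 3 and 4, a stringify-then-hash routine of this kind can be sketched as follows (a simplified example of the approach, not the exact code in `AuditService`):

```typescript
import { createHash } from 'node:crypto';

// Produce a deterministic JSON string by sorting object keys recursively,
// so two objects with the same contents always serialize identically.
function canonicalStringify(value: unknown): string {
	if (Array.isArray(value)) {
		return `[${value.map(canonicalStringify).join(',')}]`;
	}
	if (value !== null && typeof value === 'object') {
		const obj = value as Record<string, unknown>;
		const entries = Object.keys(obj)
			.sort()
			.map((k) => `${JSON.stringify(k)}:${canonicalStringify(obj[k])}`);
		return `{${entries.join(',')}}`;
	}
	return JSON.stringify(value);
}

// A SHA-256 digest over the canonical form yields a stable hash for the entry.
function hashEntry(entry: object): string {
	return createHash('sha256').update(canonicalStringify(entry)).digest('hex');
}
```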
### Verification Process

The `verifyAuditLog` method is designed to be highly scalable and efficient, even with millions of log entries. It processes the logs in manageable chunks (e.g., 1000 at a time) to avoid loading the entire table into memory.

The verification process involves the following steps:

1. **Iterative Processing:** It fetches the logs in batches within a `while` loop.
2. **Chain Verification:** For each log entry, it compares the `previousHash` with the `currentHash` of the preceding log. If they do not match, the chain is broken, and the verification fails.
3. **Hash Recalculation:** It recalculates the hash of the current log entry using the same `calculateHash` method used during creation.
4. **Integrity Check:** It compares the recalculated hash with the `currentHash` stored in the database. If they do not match, the log entry has been tampered with, and the verification fails.
## Service Integration

The `AuditService` is integrated into the application through the `AuditLogModule` (`packages/enterprise/src/modules/audit-log/audit-log.module.ts`), which registers the API routes for the audit log feature. The service's `createAuditLog` method is called from various other services throughout the application to record significant events.
docs/enterprise/audit-log/guide.md (Normal file, 39 lines)
@@ -0,0 +1,39 @@
# Audit Log: User Interface

The audit log user interface provides a comprehensive view of all significant events that have occurred within the Open Archiver system. It is designed to be intuitive and user-friendly, allowing administrators to easily monitor and review system activity.

## Viewing Audit Logs

The main audit log page displays a table of log entries, with the following columns:

- **Timestamp:** The date and time of the event.
- **Actor:** The identifier of the user or system process that performed the action.
- **IP Address:** The IP address from which the action was initiated.
- **Action:** The type of action performed, displayed as a color-coded badge for easy identification.
- **Target Type:** The type of resource that was affected.
- **Target ID:** The unique identifier of the affected resource.
- **Details:** A truncated preview of the event's details. The full JSON object is displayed in a pop-up card on hover.

## Filtering and Sorting

The table can be sorted by timestamp by clicking the "Timestamp" header. This allows you to view the logs in either chronological or reverse chronological order.

## Pagination

Pagination controls are available below the table, allowing you to navigate through the entire history of audit log entries.

## Verifying Log Integrity

The "Verify Log Integrity" button allows you to initiate a verification process to check the integrity of the entire audit log chain. This process recalculates the hash of each log entry and compares it to the stored hash, ensuring that the cryptographic chain is unbroken and no entries have been tampered with.

### Verification Responses

- **Success:** A success notification is displayed, confirming that the audit log integrity has been verified successfully. This means that the log chain is complete and no entries have been tampered with.

- **Failure:** An error notification is displayed, indicating that the audit log chain is broken or an entry has been tampered with. The notification will include the ID of the log entry where the issue was detected. There are two types of failures:
    - **Audit log chain is broken:** This means that the `previousHash` of a log entry does not match the `currentHash` of the preceding entry. This indicates that one or more log entries may have been deleted or inserted into the chain.
    - **Audit log entry is tampered!:** This means that the recalculated hash of a log entry does not match its stored `currentHash`. This indicates that the data within the log entry has been altered.

## Viewing Log Details

You can view the full details of any log entry by clicking on its row in the table. This will open a dialog containing all the information associated with the log entry, including the previous and current hashes.
docs/enterprise/audit-log/index.md (Normal file, 27 lines)
@@ -0,0 +1,27 @@
# Audit Log

The Audit Log is an enterprise-grade feature designed to provide a complete, immutable, and verifiable record of every significant action that occurs within the Open Archiver system. Its primary purpose is to ensure compliance with strict regulatory standards, such as the German GoBD, by establishing a tamper-proof chain of evidence for all activities.

## Core Principles

To fulfill its compliance and security functions, the audit log adheres to the following core principles:

### 1. Immutability

Every log entry is cryptographically chained to the previous one. Each new entry records the preceding entry's SHA-256 hash, creating a verifiable chain. Any attempt to alter or delete a past entry would break this chain and be immediately detectable through the verification process.

### 2. Completeness

The system is designed to log every significant event without exception. This includes not only user-initiated actions (like logins, searches, and downloads) but also automated system processes, such as data ingestion and policy-based deletions.

### 3. Attribution

Each log entry is unambiguously linked to the actor that initiated the event. This could be a specific authenticated user, an external auditor, or an automated system process. The actor's identifier and source IP address are recorded to ensure full traceability.

### 4. Clarity and Detail

Log entries are structured to be detailed and human-readable, providing sufficient context for an auditor to understand the event without needing specialized system knowledge. This includes the action performed, the target resource affected, and a JSON object with specific, contextual details of the event.

### 5. Verifiability

The integrity of the entire audit log can be verified at any time. A dedicated process iterates through the logs from the beginning, recalculating the hash of each entry and comparing it to the stored hash, ensuring the cryptographic chain is unbroken and no entries have been tampered with.
@@ -17,7 +17,22 @@ git clone https://github.com/LogicLabs-OU/OpenArchiver.git
 cd OpenArchiver
 ```
 
-## 2. Configure Your Environment
+## 2. Create a Directory for Local Storage (Important)
+
+Before configuring the application, you **must** create a directory on your host machine where Open Archiver will store its data (such as emails and attachments). Manually creating this directory helps prevent potential permission issues.
+
+For example, you can use the path `/var/data/open-archiver`.
+
+Run the following commands to create the directory and set the correct permissions:
+
+```bash
+sudo mkdir -p /var/data/open-archiver
+sudo chown -R $(id -u):$(id -g) /var/data/open-archiver
+```
+
+This ensures the directory is owned by your current user, which is necessary for the application to have write access. You will set this path in your `.env` file in the next step.
+
+## 3. Configure Your Environment
 
 The application is configured using environment variables. You'll need to create a `.env` file to store your configuration.
 
@@ -29,9 +44,15 @@ cp .env.example.docker .env
 
 Now, open the `.env` file in a text editor and customize the settings.
 
-### Important Configuration
+### Key Configuration Steps
 
-You must change the following placeholder values to secure your instance:
+1. **Set the Storage Path**: Find the `STORAGE_LOCAL_ROOT_PATH` variable and set it to the path you just created.
+
+    ```env
+    STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
+    ```
+
+2. **Secure Your Instance**: You must change the following placeholder values to secure your instance:
+
 - `POSTGRES_PASSWORD`: A strong, unique password for the database.
 - `REDIS_PASSWORD`: A strong, unique password for the Valkey/Redis service.
@@ -41,6 +62,10 @@ You must change the following placeholder values to secure your instance:
 ```bash
 openssl rand -hex 32
 ```
+- `STORAGE_ENCRYPTION_KEY`: **(Optional but Recommended)** A 32-byte hex string for encrypting emails and attachments at rest. If this key is not provided, storage encryption will be disabled. You can generate one with:
+    ```bash
+    openssl rand -hex 32
+    ```
 
 ### Storage Configuration
 
@@ -65,12 +90,15 @@ Here is a complete list of environment variables available for configuration:
 
 #### Application Settings
 
 | Variable | Description | Default Value |
-| ---------------- | ----------------------------------------------------------------------------------------------------- | ------------- |
+| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------- |
 | `NODE_ENV` | The application environment. | `development` |
 | `PORT_BACKEND` | The port for the backend service. | `4000` |
 | `PORT_FRONTEND` | The port for the frontend service. | `3000` |
-| `SYNC_FREQUENCY` | The frequency of continuous email syncing. See [cron syntax](https://crontab.guru/) for more details. | `* * * * *` |
+| `APP_URL` | The public-facing URL of your application. This is used by the backend to configure CORS. | `http://localhost:3000` |
+| `ORIGIN` | Used by the SvelteKit Node adapter to determine the server's public-facing URL. It should always be set to the value of `APP_URL` (e.g., `ORIGIN=$APP_URL`). | `http://localhost:3000` |
+| `SYNC_FREQUENCY` | The frequency of continuous email syncing. See [cron syntax](https://crontab.guru/) for more details. | `* * * * *` |
+| `ALL_INCLUSIVE_ARCHIVE` | Set to `true` to include all emails, including Junk and Trash folders, in the email archive. | `false` |
 
 #### Docker Compose Service Configuration
 
@@ -96,24 +124,26 @@ These variables are used by `docker-compose.yml` to configure the services.
 | ------------------------------ | ----------------------------------------------------------------------------------------------------------- | ------------------------- |
 | `STORAGE_TYPE` | The storage backend to use (`local` or `s3`). | `local` |
 | `BODY_SIZE_LIMIT` | The maximum request body size for uploads. Can be a number in bytes or a string with a unit (e.g., `100M`). | `100M` |
-| `STORAGE_LOCAL_ROOT_PATH` | The root path for local file storage. | `/var/data/open-archiver` |
+| `STORAGE_LOCAL_ROOT_PATH` | The root path for Open Archiver app data. | `/var/data/open-archiver` |
 | `STORAGE_S3_ENDPOINT` | The endpoint for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
 | `STORAGE_S3_BUCKET` | The bucket name for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
 | `STORAGE_S3_ACCESS_KEY_ID` | The access key ID for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
 | `STORAGE_S3_SECRET_ACCESS_KEY` | The secret access key for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
 | `STORAGE_S3_REGION` | The region for S3-compatible storage (required if `STORAGE_TYPE` is `s3`). | |
 | `STORAGE_S3_FORCE_PATH_STYLE` | Force path-style addressing for S3 (optional). | `false` |
+| `STORAGE_ENCRYPTION_KEY` | A 32-byte hex string for AES-256 encryption of files at rest. If not set, files will not be encrypted. | |
 
 #### Security & Authentication
 
 | Variable | Description | Default Value |
-| -------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
+| `ENABLE_DELETION` | Enable or disable deletion of emails and ingestion sources. If this option is not set, or is set to any value other than `true`, deletion will be disabled for the entire instance. | `false` |
 | `JWT_SECRET` | A secret key for signing JWT tokens. | `a-very-secret-key-that-you-should-change` |
 | `JWT_EXPIRES_IN` | The expiration time for JWT tokens. | `7d` |
 | ~~`SUPER_API_KEY`~~ (Deprecated) | An API key with super admin privileges. (The SUPER_API_KEY is deprecated since v0.3.0 after we roll out the role-based access control system.) | |
 | `RATE_LIMIT_WINDOW_MS` | The window in milliseconds for which API requests are checked. | `900000` (15 minutes) |
 | `RATE_LIMIT_MAX_REQUESTS` | The maximum number of API requests allowed from an IP within the window. | `100` |
 | `ENCRYPTION_KEY` | A 32-byte hex string for encrypting sensitive data in the database. | |
 
 #### Apache Tika Integration
 
@@ -121,7 +151,7 @@ These variables are used by `docker-compose.yml` to configure the services.
 | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------ |
 | `TIKA_URL` | Optional. The URL of an Apache Tika server for advanced text extraction from attachments. If not set, the application falls back to built-in parsers for PDF, Word, and Excel files. | `http://tika:9998` |
 
-## 3. Run the Application
+## 4. Run the Application
 
 Once you have configured your `.env` file, you can start all the services using Docker Compose:
 
@@ -141,7 +171,7 @@ You can check the status of the running containers with:
 docker compose ps
 ```
 
-## 4. Access the Application
+## 5. Access the Application
 
 Once the services are running, you can access the Open Archiver web interface by navigating to `http://localhost:3000` in your web browser.
 
@@ -149,7 +179,7 @@ Upon first visit, you will be redirected to the `/setup` page where you can set
 
 If you are not redirected to the `/setup` page but instead see the login page, there might be something wrong with the database. Restart the service and try again.
 
-## 5. Next Steps
+## 6. Next Steps
 
 After successfully deploying and logging into Open Archiver, the next step is to configure your ingestion sources to start archiving emails.
 
@@ -308,31 +338,3 @@ docker-compose up -d --force-recreate
 ```
 
 After this, any new data will be saved directly into the `./data/open-archiver` folder in your project directory.
-
-## Troubleshooting
-
-### 403 Cross-Site POST Forbidden Error
-
-If you are running the application behind a reverse proxy or have mapped the application to a different port (e.g., `3005:3000`), you may encounter a `403 Cross-site POST from submissions are forbidden` error when uploading files.
-
-To resolve this, you must set the `ORIGIN` environment variable to the URL of your application. This ensures that the backend can verify the origin of requests and prevent cross-site request forgery (CSRF) attacks.
-
-Add the following line to your `.env` file, replacing `<your_host>` and `<your_port>` with your specific values:
-
-```bash
-ORIGIN=http://<your_host>:<your_port>
-```
-
-For example, if your application is accessible at `http://localhost:3005`, you would set the variable as follows:
-
-```bash
-ORIGIN=http://localhost:3005
-```
-
-After adding the `ORIGIN` variable, restart your Docker containers for the changes to take effect:
-
-```bash
-docker-compose up -d --force-recreate
-```
-
-This will ensure that your file uploads are correctly authorized.
docs/user-guides/integrity-check.md (new file, 37 lines)
@@ -0,0 +1,37 @@
+# Email Integrity Check
+
+Open Archiver allows you to verify the integrity of your archived emails and their attachments. This guide explains how the integrity check works and what the results mean.
+
+## How It Works
+
+When an email is archived, Open Archiver calculates a unique cryptographic signature (a SHA256 hash) for the email's raw `.eml` file and for each of its attachments. These signatures are stored in the database alongside the email's metadata.
+
+The integrity check feature recalculates these signatures for the stored files and compares them to the original signatures stored in the database. This process allows you to verify that the content of your archived emails has not been altered, corrupted, or tampered with since the moment they were archived.
+
+## The Integrity Report
+
+When you view an email in the Open Archiver interface, an integrity report is automatically generated and displayed. This report provides a clear, at-a-glance status for the email file and each of its attachments.
+
+### Statuses
+
+- **Valid (Green Badge):** A "Valid" status means that the current signature of the file matches the original signature stored in the database. This is the expected status and indicates that the file's integrity is intact.
+
+- **Invalid (Red Badge):** An "Invalid" status means that the current signature of the file does _not_ match the original signature. This indicates that the file's content has changed since it was archived.
+
+### Reasons for an "Invalid" Status
+
+If a file is marked as "Invalid," you can hover over the badge to see a reason for the failure. Common reasons include:
+
+- **Stored hash does not match current hash:** This is the most common reason and indicates that the file's content has been modified. This could be due to accidental changes, data corruption, or unauthorized tampering.
+
+- **Could not read attachment file from storage:** This message indicates that the file could not be read from its storage location. This could be due to a storage system issue, a file permission problem, or because the file has been deleted.
+
+## What to Do If an Integrity Check Fails
+
+If you encounter an "Invalid" status for an email or attachment, it is important to investigate the issue. Here are some steps you can take:
+
+1. **Check Storage:** Verify that the file exists in its storage location and that its permissions are correct.
+2. **Review Audit Logs:** If you have audit logging enabled, review the logs for any unauthorized access or modifications to the file.
+3. **Restore from Backup:** If you suspect data corruption, you may need to restore the affected file from a backup.
+
+The integrity check feature is a crucial tool for ensuring the long-term reliability and trustworthiness of your email archive. By regularly monitoring the integrity of your archived data, you can be confident that your records are accurate and complete.
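The verification described in this guide amounts to recomputing a SHA256 digest of the stored file and comparing it to the hash recorded at archive time. A minimal illustrative sketch in Node.js — the function name and surrounding shape are invented here for clarity, not Open Archiver's actual implementation:

```typescript
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

// Recompute a file's SHA-256 digest and compare it to the hash that was
// stored in the database when the file was archived (hex-encoded).
function verifyIntegrity(path: string, storedHashSha256: string): boolean {
    const current = createHash('sha256').update(readFileSync(path)).digest('hex');
    return current === storedHashSha256;
}
```

A mismatch here corresponds to the "Stored hash does not match current hash" reason above; a thrown read error corresponds to "Could not read attachment file from storage."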
docs/user-guides/troubleshooting/cors-errors.md (new file, 75 lines)
@@ -0,0 +1,75 @@
+# Troubleshooting CORS Errors
+
+Cross-Origin Resource Sharing (CORS) is a security feature that controls how web applications in one domain can request and interact with resources in another. If not configured correctly, you may encounter errors when performing actions like uploading files.
+
+This guide will help you diagnose and resolve common CORS-related issues.
+
+## Symptoms
+
+You may be experiencing a CORS issue if you see one of the following errors in your browser's developer console or in the application's logs:
+
+- `TypeError: fetch failed`
+- `Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource.`
+- `Unexpected token 'C', "Cross-site"... is not valid JSON`
+- A JSON error response similar to the following:
+
+    ```json
+    {
+        "message": "CORS Error: This origin is not allowed.",
+        "requiredOrigin": "http://localhost:3000",
+        "receivedOrigin": "https://localhost:3000"
+    }
+    ```
+
+## Root Cause
+
+These errors typically occur when the URL you are using to access the application in your browser does not exactly match the `APP_URL` configured in your `.env` file.
+
+This can happen for several reasons:
+
+- You are accessing the application via a different port.
+- You are using a reverse proxy that changes the protocol (e.g., from `http` to `https`).
+- The SvelteKit server, in a production build, is incorrectly guessing its public-facing URL.
+
+## Solution
+
+The solution is to ensure that the application's frontend and backend are correctly configured with the public-facing URL of your instance. This is done by setting two environment variables: `APP_URL` and `ORIGIN`.
+
+1. **Open your `.env` file** in a text editor.
+
+2. **Set `APP_URL`**: Define the `APP_URL` variable with the exact URL you use to access the application in your browser.
+
+    ```env
+    APP_URL=http://your-domain-or-ip:3000
+    ```
+
+3. **Set `ORIGIN`**: The SvelteKit server requires a specific `ORIGIN` variable to correctly identify itself. This should always be set to the value of your `APP_URL`.
+
+    ```env
+    ORIGIN=$APP_URL
+    ```
+
+By using `$APP_URL`, you ensure that both variables are always in sync.
+
+### Example Configuration
+
+If you are running the application locally on port `3000`, your configuration should look like this:
+
+```env
+APP_URL=http://localhost:3000
+ORIGIN=$APP_URL
+```
+
+If your application is behind a reverse proxy and is accessible at `https://archive.mycompany.com`, your configuration should be:
+
+```env
+APP_URL=https://archive.mycompany.com
+ORIGIN=$APP_URL
+```
+
+After making these changes to your `.env` file, you must restart the application for them to take effect:
+
+```bash
+docker compose up -d --force-recreate
+```
+
+This will ensure that the backend's CORS policy and the frontend server's origin are correctly aligned, resolving the errors.
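For illustration, the strict comparison behind a `CORS Error: This origin is not allowed.` style response can be sketched as follows. This is a hypothetical snippet showing why protocol, host, and port must all match exactly; it is not the application's real middleware:

```typescript
// An origin check is an exact comparison of scheme + host + port.
// "https://localhost:3000" therefore does NOT match "http://localhost:3000".
function isAllowedOrigin(received: string | undefined, appUrl: string): boolean {
    if (!received) return false;
    return new URL(received).origin === new URL(appUrl).origin;
}
```

This is why the `receivedOrigin` / `requiredOrigin` pair in the error response above differs only in the protocol, yet the request is still rejected.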
package.json (18 changed lines)
@@ -1,17 +1,24 @@
 {
     "name": "open-archiver",
-    "version": "0.3.4",
+    "version": "0.4.0",
     "private": true,
+    "license": "SEE LICENSE IN LICENSE file",
     "scripts": {
-        "dev": "dotenv -- pnpm --filter \"./packages/*\" --parallel dev",
-        "build": "pnpm --filter \"./packages/*\" build",
-        "start": "dotenv -- pnpm --filter \"./packages/*\" --parallel start",
+        "build:oss": "pnpm --filter \"./packages/*\" --filter \"!./packages/enterprise\" --filter \"./apps/open-archiver\" build",
+        "build:enterprise": "cross-env VITE_ENTERPRISE_MODE=true pnpm build",
+        "start:oss": "dotenv -- concurrently \"node apps/open-archiver/dist/index.js\" \"pnpm --filter @open-archiver/frontend start\"",
+        "start:enterprise": "dotenv -- concurrently \"node apps/open-archiver-enterprise/dist/index.js\" \"pnpm --filter @open-archiver/frontend start\"",
+        "dev:enterprise": "cross-env VITE_ENTERPRISE_MODE=true dotenv -- pnpm --filter \"@open-archiver/*\" --filter \"open-archiver-enterprise-app\" --parallel dev",
+        "dev:oss": "dotenv -- pnpm --filter \"./packages/*\" --filter \"!./packages/@open-archiver/enterprise\" --filter \"open-archiver-app\" --parallel dev",
+        "build": "pnpm --filter \"./packages/*\" --filter \"./apps/*\" build",
+        "start": "dotenv -- pnpm --filter \"open-archiver-app\" --parallel start",
         "start:workers": "dotenv -- concurrently \"pnpm --filter @open-archiver/backend start:ingestion-worker\" \"pnpm --filter @open-archiver/backend start:indexing-worker\" \"pnpm --filter @open-archiver/backend start:sync-scheduler\"",
         "start:workers:dev": "dotenv -- concurrently \"pnpm --filter @open-archiver/backend start:ingestion-worker:dev\" \"pnpm --filter @open-archiver/backend start:indexing-worker:dev\" \"pnpm --filter @open-archiver/backend start:sync-scheduler:dev\"",
         "db:generate": "dotenv -- pnpm --filter @open-archiver/backend db:generate",
         "db:migrate": "dotenv -- pnpm --filter @open-archiver/backend db:migrate",
         "db:migrate:dev": "dotenv -- pnpm --filter @open-archiver/backend db:migrate:dev",
-        "docker-start": "concurrently \"pnpm start:workers\" \"pnpm start\"",
+        "docker-start:oss": "concurrently \"pnpm start:workers\" \"pnpm start:oss\"",
+        "docker-start:enterprise": "concurrently \"pnpm start:workers\" \"pnpm start:enterprise\"",
         "docs:dev": "vitepress dev docs --port 3009",
         "docs:build": "vitepress build docs",
         "docs:preview": "vitepress preview docs",
@@ -23,6 +30,7 @@
         "dotenv-cli": "8.0.0"
     },
     "devDependencies": {
+        "cross-env": "^10.0.0",
         "prettier": "^3.6.2",
         "prettier-plugin-svelte": "^3.4.0",
         "prettier-plugin-tailwindcss": "^0.6.14",
@@ -2,12 +2,13 @@
     "name": "@open-archiver/backend",
     "version": "0.1.0",
     "private": true,
+    "license": "SEE LICENSE IN LICENSE file",
     "main": "dist/index.js",
+    "types": "dist/index.d.ts",
     "scripts": {
-        "dev": "ts-node-dev --respawn --transpile-only src/index.ts ",
         "build": "tsc && pnpm copy-assets",
+        "dev": "tsc --watch",
         "copy-assets": "cp -r src/locales dist/locales",
-        "start": "node dist/index.js",
         "start:ingestion-worker": "node dist/workers/ingestion.worker.js",
         "start:indexing-worker": "node dist/workers/indexing.worker.js",
         "start:sync-scheduler": "node dist/jobs/schedulers/sync-scheduler.js",
@@ -31,6 +32,7 @@
         "bcryptjs": "^3.0.2",
         "bullmq": "^5.56.3",
         "busboy": "^1.6.0",
+        "cors": "^2.8.5",
         "cross-fetch": "^4.1.0",
         "deepmerge-ts": "^7.1.5",
         "dotenv": "^17.2.0",
@@ -58,16 +60,14 @@
         "pst-extractor": "^1.11.0",
         "reflect-metadata": "^0.2.2",
         "sqlite3": "^5.1.7",
-        "tsconfig-paths": "^4.2.0",
         "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz",
         "yauzl": "^3.2.0",
         "zod": "^4.1.5"
     },
     "devDependencies": {
-        "@bull-board/api": "^6.11.0",
-        "@bull-board/express": "^6.11.0",
         "@types/archiver": "^6.0.3",
         "@types/busboy": "^1.5.4",
+        "@types/cors": "^2.8.19",
         "@types/express": "^5.0.3",
         "@types/mailparser": "^3.4.6",
         "@types/microsoft-graph": "^2.40.1",
@@ -75,6 +75,7 @@
         "@types/node": "^24.0.12",
         "@types/yauzl": "^2.10.3",
         "ts-node-dev": "^2.0.0",
+        "tsconfig-paths": "^4.2.0",
         "typescript": "^5.8.3"
     }
 }
@@ -1,7 +1,7 @@
 import { Request, Response } from 'express';
 import { ApiKeyService } from '../../services/ApiKeyService';
 import { z } from 'zod';
-import { config } from '../../config';
+import { UserService } from '../../services/UserService';
 
 const generateApiKeySchema = z.object({
     name: z
@@ -14,20 +14,27 @@ const generateApiKeySchema = z.object({
         .positive('Only positive number is allowed')
         .max(730, 'The API key must expire within 2 years / 730 days.'),
 });
 
 export class ApiKeyController {
+    private userService = new UserService();
     public async generateApiKey(req: Request, res: Response) {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
             const { name, expiresInDays } = generateApiKeySchema.parse(req.body);
             if (!req.user || !req.user.sub) {
                 return res.status(401).json({ message: 'Unauthorized' });
             }
             const userId = req.user.sub;
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: 'Unauthorized' });
+            }
+
-            const key = await ApiKeyService.generate(userId, name, expiresInDays);
+            const key = await ApiKeyService.generate(
+                userId,
+                name,
+                expiresInDays,
+                actor,
+                req.ip || 'unknown'
+            );
+
             res.status(201).json({ key });
         } catch (error) {
@@ -51,15 +58,16 @@ export class ApiKeyController {
     }
 
     public async deleteApiKey(req: Request, res: Response) {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         const { id } = req.params;
         if (!req.user || !req.user.sub) {
             return res.status(401).json({ message: 'Unauthorized' });
         }
         const userId = req.user.sub;
-        await ApiKeyService.deleteKey(id, userId);
+        const actor = await this.userService.findById(userId);
+        if (!actor) {
+            return res.status(401).json({ message: 'Unauthorized' });
+        }
+        await ApiKeyService.deleteKey(id, userId, actor, req.ip || 'unknown');
 
         res.status(204).send({ message: req.t('apiKeys.deleteSuccess') });
     }
@@ -1,8 +1,10 @@
 import { Request, Response } from 'express';
 import { ArchivedEmailService } from '../../services/ArchivedEmailService';
-import { config } from '../../config';
+import { UserService } from '../../services/UserService';
+import { checkDeletionEnabled } from '../../helpers/deletionGuard';
 
 export class ArchivedEmailController {
+    private userService = new UserService();
     public getArchivedEmails = async (req: Request, res: Response): Promise<Response> => {
         try {
             const { ingestionSourceId } = req.params;
@@ -35,8 +37,17 @@ export class ArchivedEmailController {
             if (!userId) {
                 return res.status(401).json({ message: req.t('errors.unauthorized') });
             }
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+
-            const email = await ArchivedEmailService.getArchivedEmailById(id, userId);
+            const email = await ArchivedEmailService.getArchivedEmailById(
+                id,
+                userId,
+                actor,
+                req.ip || 'unknown'
+            );
             if (!email) {
                 return res.status(404).json({ message: req.t('archivedEmail.notFound') });
             }
@@ -48,12 +59,18 @@ export class ArchivedEmailController {
     };
 
     public deleteArchivedEmail = async (req: Request, res: Response): Promise<Response> => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
+            checkDeletionEnabled();
             const { id } = req.params;
-            await ArchivedEmailService.deleteArchivedEmail(id);
+            const userId = req.user?.sub;
+            if (!userId) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            await ArchivedEmailService.deleteArchivedEmail(id, actor, req.ip || 'unknown');
             return res.status(204).send();
         } catch (error) {
             console.error(`Delete archived email ${req.params.id} error:`, error);
@@ -44,7 +44,7 @@ export class AuthController {
                 { email, password, first_name, last_name },
                 true
             );
-            const result = await this.#authService.login(email, password);
+            const result = await this.#authService.login(email, password, req.ip || 'unknown');
             return res.status(201).json(result);
         } catch (error) {
             console.error('Setup error:', error);
@@ -60,7 +60,7 @@ export class AuthController {
         }
 
         try {
-            const result = await this.#authService.login(email, password);
+            const result = await this.#authService.login(email, password, req.ip || 'unknown');
 
             if (!result) {
                 return res.status(401).json({ message: req.t('auth.login.invalidCredentials') });
@@ -3,7 +3,6 @@ import { IamService } from '../../services/IamService';
 import { PolicyValidator } from '../../iam-policy/policy-validator';
 import type { CaslPolicy } from '@open-archiver/types';
 import { logger } from '../../config/logger';
-import { config } from '../../config';
 
 export class IamController {
     #iamService: IamService;
@@ -42,9 +41,6 @@ export class IamController {
     };
 
     public createRole = async (req: Request, res: Response) => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         const { name, policies } = req.body;
 
         if (!name || !policies) {
@@ -69,9 +65,6 @@ export class IamController {
     };
 
     public deleteRole = async (req: Request, res: Response) => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         const { id } = req.params;
 
         try {
@@ -83,9 +76,6 @@ export class IamController {
     };
 
     public updateRole = async (req: Request, res: Response) => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         const { id } = req.params;
         const { name, policies } = req.body;
 
@@ -7,9 +7,11 @@ import {
|
|||||||
SafeIngestionSource,
|
SafeIngestionSource,
|
||||||
} from '@open-archiver/types';
|
} from '@open-archiver/types';
|
||||||
import { logger } from '../../config/logger';
|
import { logger } from '../../config/logger';
|
||||||
import { config } from '../../config';
|
import { UserService } from '../../services/UserService';
|
||||||
|
import { checkDeletionEnabled } from '../../helpers/deletionGuard';
|
||||||
|
|
||||||
export class IngestionController {
|
export class IngestionController {
|
||||||
|
private userService = new UserService();
|
||||||
/**
|
/**
|
||||||
* Converts an IngestionSource object to a safe version for client-side consumption
|
* Converts an IngestionSource object to a safe version for client-side consumption
|
||||||
* by removing the credentials.
|
* by removing the credentials.
|
||||||
@@ -22,16 +24,22 @@ export class IngestionController {
|
|||||||
}
|
}
|
||||||
|
|
||||||
public create = async (req: Request, res: Response): Promise<Response> => {
|
public create = async (req: Request, res: Response): Promise<Response> => {
|
||||||
if (config.app.isDemo) {
|
|
||||||
return res.status(403).json({ message: req.t('errors.demoMode') });
|
|
||||||
}
|
|
||||||
try {
|
try {
|
||||||
const dto: CreateIngestionSourceDto = req.body;
|
const dto: CreateIngestionSourceDto = req.body;
|
||||||
const userId = req.user?.sub;
|
const userId = req.user?.sub;
|
||||||
if (!userId) {
|
if (!userId) {
|
||||||
return res.status(401).json({ message: req.t('errors.unauthorized') });
|
return res.status(401).json({ message: req.t('errors.unauthorized') });
|
||||||
}
|
}
|
||||||
const newSource = await IngestionService.create(dto, userId);
|
const actor = await this.userService.findById(userId);
|
||||||
|
if (!actor) {
|
||||||
|
return res.status(401).json({ message: req.t('errors.unauthorized') });
|
||||||
|
}
|
||||||
|
const newSource = await IngestionService.create(
|
||||||
|
dto,
|
||||||
|
userId,
|
||||||
|
actor,
|
||||||
|
req.ip || 'unknown'
|
||||||
|
);
|
||||||
const safeSource = this.toSafeIngestionSource(newSource);
|
const safeSource = this.toSafeIngestionSource(newSource);
|
||||||
return res.status(201).json(safeSource);
|
return res.status(201).json(safeSource);
|
||||||
} catch (error: any) {
|
} catch (error: any) {
|
||||||
@@ -74,13 +82,23 @@ export class IngestionController {
|
|||||||
};
|
};
|
||||||
|
|
||||||
public update = async (req: Request, res: Response): Promise<Response> => {
|
public update = async (req: Request, res: Response): Promise<Response> => {
|
||||||
if (config.app.isDemo) {
|
|
||||||
return res.status(403).json({ message: req.t('errors.demoMode') });
|
|
||||||
}
|
|
||||||
try {
|
try {
|
||||||
const { id } = req.params;
|
const { id } = req.params;
|
||||||
const dto: UpdateIngestionSourceDto = req.body;
|
const dto: UpdateIngestionSourceDto = req.body;
|
||||||
const updatedSource = await IngestionService.update(id, dto);
|
const userId = req.user?.sub;
|
||||||
|
if (!userId) {
|
||||||
|
return res.status(401).json({ message: req.t('errors.unauthorized') });
|
||||||
|
}
|
||||||
|
const actor = await this.userService.findById(userId);
|
||||||
|
if (!actor) {
|
||||||
|
return res.status(401).json({ message: req.t('errors.unauthorized') });
|
||||||
|
}
|
||||||
|
const updatedSource = await IngestionService.update(
|
||||||
|
id,
|
||||||
|
dto,
|
||||||
|
actor,
|
||||||
|
req.ip || 'unknown'
|
||||||
|
);
|
||||||
const safeSource = this.toSafeIngestionSource(updatedSource);
|
const safeSource = this.toSafeIngestionSource(updatedSource);
|
||||||
return res.status(200).json(safeSource);
|
return res.status(200).json(safeSource);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -93,26 +111,31 @@ export class IngestionController {
|
|||||||
};
|
};
|
||||||
|
|
||||||
     public delete = async (req: Request, res: Response): Promise<Response> => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
+            checkDeletionEnabled();
             const { id } = req.params;
-            await IngestionService.delete(id);
+            const userId = req.user?.sub;
+            if (!userId) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            await IngestionService.delete(id, actor, req.ip || 'unknown');
             return res.status(204).send();
         } catch (error) {
             console.error(`Delete ingestion source ${req.params.id} error:`, error);
             if (error instanceof Error && error.message === 'Ingestion source not found') {
                 return res.status(404).json({ message: req.t('ingestion.notFound') });
+            } else if (error instanceof Error) {
+                return res.status(400).json({ message: error.message });
             }
             return res.status(500).json({ message: req.t('errors.internalServerError') });
         }
     };

     public triggerInitialImport = async (req: Request, res: Response): Promise<Response> => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
             const { id } = req.params;
             await IngestionService.triggerInitialImport(id);
@@ -127,12 +150,22 @@ export class IngestionController {
     };

     public pause = async (req: Request, res: Response): Promise<Response> => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
             const { id } = req.params;
-            const updatedSource = await IngestionService.update(id, { status: 'paused' });
+            const userId = req.user?.sub;
+            if (!userId) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            const updatedSource = await IngestionService.update(
+                id,
+                { status: 'paused' },
+                actor,
+                req.ip || 'unknown'
+            );
             const safeSource = this.toSafeIngestionSource(updatedSource);
             return res.status(200).json(safeSource);
         } catch (error) {
@@ -145,12 +178,17 @@ export class IngestionController {
     };

     public triggerForceSync = async (req: Request, res: Response): Promise<Response> => {
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
-        }
         try {
             const { id } = req.params;
-            await IngestionService.triggerForceSync(id);
+            const userId = req.user?.sub;
+            if (!userId) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            const actor = await this.userService.findById(userId);
+            if (!actor) {
+                return res.status(401).json({ message: req.t('errors.unauthorized') });
+            }
+            await IngestionService.triggerForceSync(id, actor, req.ip || 'unknown');
             return res.status(202).json({ message: req.t('ingestion.forceSyncTriggered') });
         } catch (error) {
             console.error(`Trigger force sync for ${req.params.id} error:`, error);
packages/backend/src/api/controllers/integrity.controller.ts (new file, +29)
@@ -0,0 +1,29 @@
+import { Request, Response } from 'express';
+import { IntegrityService } from '../../services/IntegrityService';
+import { z } from 'zod';
+
+const checkIntegritySchema = z.object({
+    id: z.string().uuid(),
+});
+
+export class IntegrityController {
+    private integrityService = new IntegrityService();
+
+    public checkIntegrity = async (req: Request, res: Response) => {
+        try {
+            const { id } = checkIntegritySchema.parse(req.params);
+            const results = await this.integrityService.checkEmailIntegrity(id);
+            res.status(200).json(results);
+        } catch (error) {
+            if (error instanceof z.ZodError) {
+                return res
+                    .status(400)
+                    .json({ message: req.t('api.requestBodyInvalid'), errors: error.message });
+            }
+            if (error instanceof Error && error.message === 'Archived email not found') {
+                return res.status(404).json({ message: req.t('errors.notFound') });
+            }
+            res.status(500).json({ message: req.t('errors.internalServerError') });
+        }
+    };
+}
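The `IntegrityService.checkEmailIntegrity` implementation is not part of this diff. Conceptually, the check recomputes a SHA-256 digest over the stored raw `.eml` bytes (and each attachment) and compares it with the digest recorded at archive time. A minimal sketch of that comparison, where `verifyIntegrity` and its inputs are illustrative rather than the project's actual API:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical helper: recompute the SHA-256 of the stored bytes and compare it
// to the hash that was recorded in the database when the email was archived.
function verifyIntegrity(storedBytes: Buffer, recordedHashSha256: string): boolean {
    const recomputed = createHash('sha256').update(storedBytes).digest('hex');
    return recomputed === recordedHashSha256;
}

const original = Buffer.from('Subject: hello\r\n\r\nbody');
const recordedHash = createHash('sha256').update(original).digest('hex');

// Untouched file: hashes match.
console.log(verifyIntegrity(original, recordedHash)); // true

// A single changed byte produces a completely different digest, so tampering is detected.
const tampered = Buffer.from('Subject: hellO\r\n\r\nbody');
console.log(verifyIntegrity(tampered, recordedHash)); // false
```

The same comparison is repeated per attachment; any mismatch flags that object as altered or corrupted.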
packages/backend/src/api/controllers/jobs.controller.ts (new file, +42)
@@ -0,0 +1,42 @@
+import { Request, Response } from 'express';
+import { JobsService } from '../../services/JobsService';
+import {
+    IGetQueueJobsRequestParams,
+    IGetQueueJobsRequestQuery,
+    JobStatus,
+} from '@open-archiver/types';
+
+export class JobsController {
+    private jobsService: JobsService;
+
+    constructor() {
+        this.jobsService = new JobsService();
+    }
+
+    public getQueues = async (req: Request, res: Response) => {
+        try {
+            const queues = await this.jobsService.getQueues();
+            res.status(200).json({ queues });
+        } catch (error) {
+            res.status(500).json({ message: 'Error fetching queues', error });
+        }
+    };
+
+    public getQueueJobs = async (req: Request, res: Response) => {
+        try {
+            const { queueName } = req.params as unknown as IGetQueueJobsRequestParams;
+            const { status, page, limit } = req.query as unknown as IGetQueueJobsRequestQuery;
+            const pageNumber = parseInt(page, 10) || 1;
+            const limitNumber = parseInt(limit, 10) || 10;
+            const queueDetails = await this.jobsService.getQueueDetails(
+                queueName,
+                status,
+                pageNumber,
+                limitNumber
+            );
+            res.status(200).json(queueDetails);
+        } catch (error) {
+            res.status(500).json({ message: 'Error fetching queue jobs', error });
+        }
+    };
+}
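The `page` and `limit` values arrive as query-string strings, so the controller normalizes them with `parseInt(x, 10) || fallback`: `parseInt` yields `NaN` for missing or non-numeric input, and since `NaN` (and `0`) are falsy, the fallback kicks in. The same normalization in isolation (the helper name is illustrative):

```typescript
// Normalize untrusted query-string paging values the way the controller does.
// parseInt returns NaN for missing/garbage input, and `NaN || fallback` selects the fallback.
function normalizePaging(page?: string, limit?: string): { page: number; limit: number } {
    return {
        page: parseInt(page ?? '', 10) || 1,
        limit: parseInt(limit ?? '', 10) || 10,
    };
}

console.log(normalizePaging('3', '25')); // { page: 3, limit: 25 }
console.log(normalizePaging(undefined, 'abc')); // { page: 1, limit: 10 }
```

Note that `'0'` also falls back to the default, since `parseInt('0', 10)` is `0`, which is falsy; that is a side effect of this idiom, not a deliberate bound.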
@@ -31,7 +31,8 @@ export class SearchController {
                 limit: limit ? parseInt(limit as string) : 10,
                 matchingStrategy: matchingStrategy as MatchingStrategies,
             },
-            userId
+            userId,
+            req.ip || 'unknown'
         );

         res.status(200).json(results);
@@ -1,8 +1,9 @@
 import type { Request, Response } from 'express';
 import { SettingsService } from '../../services/SettingsService';
-import { config } from '../../config';
+import { UserService } from '../../services/UserService';

 const settingsService = new SettingsService();
+const userService = new UserService();

 export const getSystemSettings = async (req: Request, res: Response) => {
     try {
@@ -17,10 +18,18 @@ export const getSystemSettings = async (req: Request, res: Response) => {
 export const updateSystemSettings = async (req: Request, res: Response) => {
     try {
         // Basic validation can be performed here if necessary
-        if (config.app.isDemo) {
-            return res.status(403).json({ message: req.t('errors.demoMode') });
+        if (!req.user || !req.user.sub) {
+            return res.status(401).json({ message: 'Unauthorized' });
         }
-        const updatedSettings = await settingsService.updateSystemSettings(req.body);
+        const actor = await userService.findById(req.user.sub);
+        if (!actor) {
+            return res.status(401).json({ message: 'Unauthorized' });
+        }
+        const updatedSettings = await settingsService.updateSystemSettings(
+            req.body,
+            actor,
+            req.ip || 'unknown'
+        );
         res.status(200).json(updatedSettings);
     } catch (error) {
         // A more specific error could be logged here
@@ -3,7 +3,6 @@ import { UserService } from '../../services/UserService';
 import * as schema from '../../database/schema';
 import { sql } from 'drizzle-orm';
 import { db } from '../../database';
-import { config } from '../../config';

 const userService = new UserService();

@@ -21,27 +20,39 @@ export const getUser = async (req: Request, res: Response) => {
 };

 export const createUser = async (req: Request, res: Response) => {
-    if (config.app.isDemo) {
-        return res.status(403).json({ message: req.t('errors.demoMode') });
-    }
     const { email, first_name, last_name, password, roleId } = req.body;
+    if (!req.user || !req.user.sub) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
+    const actor = await userService.findById(req.user.sub);
+    if (!actor) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
+
     const newUser = await userService.createUser(
         { email, first_name, last_name, password },
-        roleId
+        roleId,
+        actor,
+        req.ip || 'unknown'
     );
     res.status(201).json(newUser);
 };

 export const updateUser = async (req: Request, res: Response) => {
-    if (config.app.isDemo) {
-        return res.status(403).json({ message: req.t('errors.demoMode') });
-    }
     const { email, first_name, last_name, roleId } = req.body;
+    if (!req.user || !req.user.sub) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
+    const actor = await userService.findById(req.user.sub);
+    if (!actor) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
     const updatedUser = await userService.updateUser(
         req.params.id,
         { email, first_name, last_name },
-        roleId
+        roleId,
+        actor,
+        req.ip || 'unknown'
     );
     if (!updatedUser) {
         return res.status(404).json({ message: req.t('user.notFound') });
@@ -50,9 +61,6 @@ export const updateUser = async (req: Request, res: Response) => {
 };

 export const deleteUser = async (req: Request, res: Response) => {
-    if (config.app.isDemo) {
-        return res.status(403).json({ message: req.t('errors.demoMode') });
-    }
     const userCountResult = await db.select({ count: sql<number>`count(*)` }).from(schema.users);

     const isOnlyUser = Number(userCountResult[0].count) === 1;
@@ -61,6 +69,13 @@ export const deleteUser = async (req: Request, res: Response) => {
             message: req.t('user.cannotDeleteOnlyUser'),
         });
     }
-    await userService.deleteUser(req.params.id);
+    if (!req.user || !req.user.sub) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
+    const actor = await userService.findById(req.user.sub);
+    if (!actor) {
+        return res.status(401).json({ message: 'Unauthorized' });
+    }
+    await userService.deleteUser(req.params.id, actor, req.ip || 'unknown');
     res.status(204).send();
 };
@@ -1,4 +1,4 @@
-import rateLimit from 'express-rate-limit';
+import { rateLimit, ipKeyGenerator } from 'express-rate-limit';
 import { config } from '../../config';

 const windowInMinutes = Math.ceil(config.api.rateLimit.windowMs / 60000);
@@ -6,6 +6,11 @@ const windowInMinutes = Math.ceil(config.api.rateLimit.windowMs / 60000);
 export const rateLimiter = rateLimit({
     windowMs: config.api.rateLimit.windowMs,
     max: config.api.rateLimit.max,
+    keyGenerator: (req, res) => {
+        // Use the real IP address of the client, even if it's behind a proxy.
+        // This relies on `app.set('trust proxy', true)` in `server.ts`.
+        return ipKeyGenerator(req.ip || 'unknown');
+    },
     message: {
         status: 429,
         message: `Too many requests from this IP, please try again after ${windowInMinutes} minutes`,
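The `keyGenerator` above keys rate-limit buckets on `req.ip`. When Express runs behind a reverse proxy, `req.ip` is the proxy's address unless `trust proxy` is enabled, in which case Express derives the client address from the `X-Forwarded-For` header. A simplified sketch of that derivation (Express's real logic also validates the configured trusted hops, which is omitted here):

```typescript
// Simplified: with `trust proxy` enabled, the client IP is the left-most entry
// of X-Forwarded-For; without the header, fall back to the socket's peer address.
function clientIpFromForwardedFor(header: string | undefined, socketIp: string): string {
    if (!header) return socketIp;
    // "client, proxy1, proxy2" -> "client"
    return header.split(',')[0].trim();
}

console.log(clientIpFromForwardedFor('203.0.113.7, 10.0.0.2', '10.0.0.2')); // 203.0.113.7
console.log(clientIpFromForwardedFor(undefined, '192.168.1.5')); // 192.168.1.5
```

Without this, every request behind the proxy would share one bucket (the proxy's IP), so a single busy client could rate-limit everyone.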
@@ -3,7 +3,7 @@ import { ApiKeyController } from '../controllers/api-key.controller';
 import { requireAuth } from '../middleware/requireAuth';
 import { AuthService } from '../../services/AuthService';

-export const apiKeyRoutes = (authService: AuthService) => {
+export const apiKeyRoutes = (authService: AuthService): Router => {
     const router = Router();
     const controller = new ApiKeyController();

packages/backend/src/api/routes/integrity.routes.ts (new file, +16)
@@ -0,0 +1,16 @@
+import { Router } from 'express';
+import { IntegrityController } from '../controllers/integrity.controller';
+import { requireAuth } from '../middleware/requireAuth';
+import { requirePermission } from '../middleware/requirePermission';
+import { AuthService } from '../../services/AuthService';
+
+export const integrityRoutes = (authService: AuthService): Router => {
+    const router = Router();
+    const controller = new IntegrityController();
+
+    router.use(requireAuth(authService));
+
+    router.get('/:id', requirePermission('read', 'archive'), controller.checkIntegrity);
+
+    return router;
+};
packages/backend/src/api/routes/jobs.routes.ts (new file, +25)
@@ -0,0 +1,25 @@
+import { Router } from 'express';
+import { JobsController } from '../controllers/jobs.controller';
+import { requireAuth } from '../middleware/requireAuth';
+import { requirePermission } from '../middleware/requirePermission';
+import { AuthService } from '../../services/AuthService';
+
+export const createJobsRouter = (authService: AuthService): Router => {
+    const router = Router();
+    const jobsController = new JobsController();
+
+    router.use(requireAuth(authService));
+
+    router.get(
+        '/queues',
+        requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
+        jobsController.getQueues
+    );
+    router.get(
+        '/queues/:queueName',
+        requirePermission('manage', 'all', 'user.requiresSuperAdminRole'),
+        jobsController.getQueueJobs
+    );
+
+    return router;
+};
packages/backend/src/api/server.ts (new file, +170)
@@ -0,0 +1,170 @@
+import express, { Express } from 'express';
+import cors from 'cors';
+import dotenv from 'dotenv';
+import { AuthController } from './controllers/auth.controller';
+import { IngestionController } from './controllers/ingestion.controller';
+import { ArchivedEmailController } from './controllers/archived-email.controller';
+import { StorageController } from './controllers/storage.controller';
+import { SearchController } from './controllers/search.controller';
+import { IamController } from './controllers/iam.controller';
+import { createAuthRouter } from './routes/auth.routes';
+import { createIamRouter } from './routes/iam.routes';
+import { createIngestionRouter } from './routes/ingestion.routes';
+import { createArchivedEmailRouter } from './routes/archived-email.routes';
+import { createStorageRouter } from './routes/storage.routes';
+import { createSearchRouter } from './routes/search.routes';
+import { createDashboardRouter } from './routes/dashboard.routes';
+import { createUploadRouter } from './routes/upload.routes';
+import { createUserRouter } from './routes/user.routes';
+import { createSettingsRouter } from './routes/settings.routes';
+import { apiKeyRoutes } from './routes/api-key.routes';
+import { integrityRoutes } from './routes/integrity.routes';
+import { createJobsRouter } from './routes/jobs.routes';
+import { AuthService } from '../services/AuthService';
+import { AuditService } from '../services/AuditService';
+import { UserService } from '../services/UserService';
+import { IamService } from '../services/IamService';
+import { StorageService } from '../services/StorageService';
+import { SearchService } from '../services/SearchService';
+import { SettingsService } from '../services/SettingsService';
+import i18next from 'i18next';
+import FsBackend from 'i18next-fs-backend';
+import i18nextMiddleware from 'i18next-http-middleware';
+import path from 'path';
+import { logger } from '../config/logger';
+import { rateLimiter } from './middleware/rateLimiter';
+import { config } from '../config';
+import { OpenArchiverFeature } from '@open-archiver/types';
+
+// Define the "plugin" interface
+export interface ArchiverModule {
+    initialize: (app: Express, authService: AuthService) => Promise<void>;
+    name: OpenArchiverFeature;
+}
+
+export let authService: AuthService;
+
+export async function createServer(modules: ArchiverModule[] = []): Promise<Express> {
+    // Load environment variables
+    dotenv.config();
+
+    // --- Environment Variable Validation ---
+    const { JWT_SECRET, JWT_EXPIRES_IN } = process.env;
+
+    if (!JWT_SECRET || !JWT_EXPIRES_IN) {
+        throw new Error(
+            'Missing required environment variables for the backend: JWT_SECRET, JWT_EXPIRES_IN.'
+        );
+    }
+
+    // --- Dependency Injection Setup ---
+    const auditService = new AuditService();
+    const userService = new UserService();
+    authService = new AuthService(userService, auditService, JWT_SECRET, JWT_EXPIRES_IN);
+    const authController = new AuthController(authService, userService);
+    const ingestionController = new IngestionController();
+    const archivedEmailController = new ArchivedEmailController();
+    const storageService = new StorageService();
+    const storageController = new StorageController(storageService);
+    const searchService = new SearchService();
+    const searchController = new SearchController();
+    const iamService = new IamService();
+    const iamController = new IamController(iamService);
+    const settingsService = new SettingsService();
+
+    // --- i18next Initialization ---
+    const initializeI18next = async () => {
+        const systemSettings = await settingsService.getSystemSettings();
+        const defaultLanguage = systemSettings?.language || 'en';
+        logger.info({ language: defaultLanguage }, 'Default language');
+        await i18next.use(FsBackend).init({
+            lng: defaultLanguage,
+            fallbackLng: defaultLanguage,
+            ns: ['translation'],
+            defaultNS: 'translation',
+            backend: {
+                loadPath: path.resolve(__dirname, '../locales/{{lng}}/{{ns}}.json'),
+            },
+        });
+    };
+
+    // Initialize i18next
+    await initializeI18next();
+    logger.info({}, 'i18next initialized');
+
+    // Configure the Meilisearch index on startup
+    logger.info({}, 'Configuring email index...');
+    await searchService.configureEmailIndex();
+
+    const app = express();
+
+    // --- CORS ---
+    app.use(
+        cors({
+            origin: process.env.APP_URL || 'http://localhost:3000',
+            credentials: true,
+        })
+    );
+
+    // Trust the proxy to get the real IP address of the client.
+    // This is important for audit logging and security.
+    app.set('trust proxy', true);
+
+    // --- Routes ---
+    const authRouter = createAuthRouter(authController);
+    const ingestionRouter = createIngestionRouter(ingestionController, authService);
+    const archivedEmailRouter = createArchivedEmailRouter(archivedEmailController, authService);
+    const storageRouter = createStorageRouter(storageController, authService);
+    const searchRouter = createSearchRouter(searchController, authService);
+    const dashboardRouter = createDashboardRouter(authService);
+    const iamRouter = createIamRouter(iamController, authService);
+    const uploadRouter = createUploadRouter(authService);
+    const userRouter = createUserRouter(authService);
+    const settingsRouter = createSettingsRouter(authService);
+    const apiKeyRouter = apiKeyRoutes(authService);
+    const integrityRouter = integrityRoutes(authService);
+    const jobsRouter = createJobsRouter(authService);
+
+    // Middleware for all other routes
+    app.use((req, res, next) => {
+        // exclude certain API endpoints from the rate limiter, for example status, system settings
+        const excludedPatterns = [/^\/v\d+\/auth\/status$/, /^\/v\d+\/settings\/system$/];
+        for (const pattern of excludedPatterns) {
+            if (pattern.test(req.path)) {
+                return next();
+            }
+        }
+        rateLimiter(req, res, next);
+    });
+    app.use(express.json());
+    app.use(express.urlencoded({ extended: true }));
+
+    // i18n middleware
+    app.use(i18nextMiddleware.handle(i18next));
+
+    app.use(`/${config.api.version}/auth`, authRouter);
+    app.use(`/${config.api.version}/iam`, iamRouter);
+    app.use(`/${config.api.version}/upload`, uploadRouter);
+    app.use(`/${config.api.version}/ingestion-sources`, ingestionRouter);
+    app.use(`/${config.api.version}/archived-emails`, archivedEmailRouter);
+    app.use(`/${config.api.version}/storage`, storageRouter);
+    app.use(`/${config.api.version}/search`, searchRouter);
+    app.use(`/${config.api.version}/dashboard`, dashboardRouter);
+    app.use(`/${config.api.version}/users`, userRouter);
+    app.use(`/${config.api.version}/settings`, settingsRouter);
+    app.use(`/${config.api.version}/api-keys`, apiKeyRouter);
+    app.use(`/${config.api.version}/integrity`, integrityRouter);
+    app.use(`/${config.api.version}/jobs`, jobsRouter);
+
+    // Load all provided extension modules
+    for (const module of modules) {
+        await module.initialize(app, authService);
+        console.log(`🏢 Enterprise module loaded: ${module.name}`);
+    }
+
+    app.get('/', (req, res) => {
+        res.send('Backend is running!!');
+    });
+
+    console.log('✅ Core OSS modules loaded.');
+
+    return app;
+}
@@ -9,4 +9,5 @@ export const apiConfig = {
             ? parseInt(process.env.RATE_LIMIT_MAX_REQUESTS, 10)
             : 100, // limit each IP to 100 requests per windowMs
     },
+    version: 'v1',
 };
|||||||
@@ -4,6 +4,7 @@ export const app = {
|
|||||||
nodeEnv: process.env.NODE_ENV || 'development',
|
nodeEnv: process.env.NODE_ENV || 'development',
|
||||||
port: process.env.PORT_BACKEND ? parseInt(process.env.PORT_BACKEND, 10) : 4000,
|
port: process.env.PORT_BACKEND ? parseInt(process.env.PORT_BACKEND, 10) : 4000,
|
||||||
encryptionKey: process.env.ENCRYPTION_KEY,
|
encryptionKey: process.env.ENCRYPTION_KEY,
|
||||||
isDemo: process.env.IS_DEMO === 'true',
|
|
||||||
syncFrequency: process.env.SYNC_FREQUENCY || '* * * * *', //default to 1 minute
|
syncFrequency: process.env.SYNC_FREQUENCY || '* * * * *', //default to 1 minute
|
||||||
|
enableDeletion: process.env.ENABLE_DELETION === 'true',
|
||||||
|
allInclusiveArchive: process.env.ALL_INCLUSIVE_ARCHIVE === 'true',
|
||||||
};
|
};
|
||||||
|
|||||||
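The new `enableDeletion` flag backs the `checkDeletionEnabled()` guard that the ingestion delete endpoint calls before doing any work; the guard's definition is outside this diff, but presumably it throws when `ENABLE_DELETION` is not set, which the controller then maps to a 400 response. A hypothetical sketch of that shape (the exact error message and config access are assumptions):

```typescript
// Hypothetical shape of the deletion-protection guard: destructive operations
// are blocked unless the operator explicitly opts in via ENABLE_DELETION=true.
const config = { app: { enableDeletion: process.env.ENABLE_DELETION === 'true' } };

function checkDeletionEnabled(): void {
    if (!config.app.enableDeletion) {
        throw new Error('Deletion is disabled. Set ENABLE_DELETION=true to allow it.');
    }
}

try {
    checkDeletionEnabled();
    console.log('deletion allowed');
} catch (e) {
    console.log((e as Error).message);
}
```

Defaulting to "deletion disabled" is the safe choice for an archiver: an operator must take a deliberate configuration step before archived data can be removed.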
@@ -2,9 +2,14 @@ import { StorageConfig } from '@open-archiver/types';
 import 'dotenv/config';

 const storageType = process.env.STORAGE_TYPE;
+const encryptionKey = process.env.STORAGE_ENCRYPTION_KEY;
 const openArchiverFolderName = 'open-archiver';
 let storageConfig: StorageConfig;

+if (encryptionKey && !/^[a-fA-F0-9]{64}$/.test(encryptionKey)) {
+    throw new Error('STORAGE_ENCRYPTION_KEY must be a 64-character hex string (32 bytes)');
+}
+
 if (storageType === 'local') {
     if (!process.env.STORAGE_LOCAL_ROOT_PATH) {
         throw new Error('STORAGE_LOCAL_ROOT_PATH is not defined in the environment variables');
@@ -13,6 +18,7 @@ if (storageType === 'local') {
         type: 'local',
         rootPath: process.env.STORAGE_LOCAL_ROOT_PATH,
         openArchiverFolderName: openArchiverFolderName,
+        encryptionKey: encryptionKey,
     };
 } else if (storageType === 's3') {
     if (
@@ -32,6 +38,7 @@ if (storageType === 'local') {
         region: process.env.STORAGE_S3_REGION,
         forcePathStyle: process.env.STORAGE_S3_FORCE_PATH_STYLE === 'true',
         openArchiverFolderName: openArchiverFolderName,
+        encryptionKey: encryptionKey,
     };
 } else {
     throw new Error(`Invalid STORAGE_TYPE: ${storageType}`);
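The validation above requires `STORAGE_ENCRYPTION_KEY` to be exactly 64 hexadecimal characters, i.e. 32 bytes, the key size used by AES-256. One way an operator could generate a conforming key (how the key is consumed by the storage layer is not shown in this diff):

```typescript
import { randomBytes } from 'node:crypto';

// Generate a 32-byte random key and hex-encode it: 64 hex characters,
// which is exactly what the STORAGE_ENCRYPTION_KEY check accepts.
const key = randomBytes(32).toString('hex');
const isValid = /^[a-fA-F0-9]{64}$/.test(key);

console.log(key.length); // 64
console.log(isValid); // true
```

The equivalent one-liner from a shell would be `openssl rand -hex 32`.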
@@ -1,4 +1,4 @@
-import { drizzle } from 'drizzle-orm/postgres-js';
+import { drizzle, PostgresJsDatabase } from 'drizzle-orm/postgres-js';
 import postgres from 'postgres';
 import 'dotenv/config';

@@ -12,3 +12,4 @@ if (!process.env.DATABASE_URL) {
 const connectionString = encodeDatabaseUrl(process.env.DATABASE_URL);
 const client = postgres(connectionString);
 export const db = drizzle(client, { schema });
+export type Database = PostgresJsDatabase<typeof schema>;
@@ -0,0 +1,9 @@
+CREATE TYPE "public"."audit_log_action" AS ENUM('CREATE', 'READ', 'UPDATE', 'DELETE', 'LOGIN', 'LOGOUT', 'SETUP', 'IMPORT', 'PAUSE', 'SYNC', 'UPLOAD', 'SEARCH', 'DOWNLOAD', 'GENERATE');--> statement-breakpoint
+CREATE TYPE "public"."audit_log_target_type" AS ENUM('ApiKey', 'ArchivedEmail', 'Dashboard', 'IngestionSource', 'Role', 'SystemSettings', 'User', 'File');--> statement-breakpoint
+ALTER TABLE "audit_logs" ALTER COLUMN "target_type" SET DATA TYPE "public"."audit_log_target_type" USING "target_type"::"public"."audit_log_target_type";--> statement-breakpoint
+ALTER TABLE "audit_logs" ADD COLUMN "previous_hash" varchar(64);--> statement-breakpoint
+ALTER TABLE "audit_logs" ADD COLUMN "actor_ip" text;--> statement-breakpoint
+ALTER TABLE "audit_logs" ADD COLUMN "action_type" "audit_log_action" NOT NULL;--> statement-breakpoint
+ALTER TABLE "audit_logs" ADD COLUMN "current_hash" varchar(64) NOT NULL;--> statement-breakpoint
+ALTER TABLE "audit_logs" DROP COLUMN "action";--> statement-breakpoint
+ALTER TABLE "audit_logs" DROP COLUMN "is_tamper_evident";
@@ -0,0 +1,4 @@
+ALTER TABLE "attachments" DROP CONSTRAINT "attachments_content_hash_sha256_unique";--> statement-breakpoint
+ALTER TABLE "attachments" ADD COLUMN "ingestion_source_id" uuid;--> statement-breakpoint
+ALTER TABLE "attachments" ADD CONSTRAINT "attachments_ingestion_source_id_ingestion_sources_id_fk" FOREIGN KEY ("ingestion_source_id") REFERENCES "public"."ingestion_sources"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
+CREATE UNIQUE INDEX "source_hash_unique" ON "attachments" USING btree ("ingestion_source_id","content_hash_sha256");
@@ -0,0 +1,2 @@
+DROP INDEX "source_hash_unique";--> statement-breakpoint
+CREATE INDEX "source_hash_idx" ON "attachments" USING btree ("ingestion_source_id","content_hash_sha256");
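The two migrations above replace the global unique constraint on the content hash with a per-source index, so the effective deduplication key becomes the pair (ingestion_source_id, content_hash_sha256). A minimal sketch of that compound keying, independent of the database (the class and names below are illustrative, not the project's API):

```typescript
// Sketch: per-source attachment dedup keyed by (sourceId, contentHash).
// A map keyed by the hash alone would wrongly link identical attachments
// across ingestion sources; compounding the key scopes dedup per source.
type AttachmentRef = { id: string; sourceId: string; contentHash: string };

class PerSourceDedup {
	private seen = new Map<string, AttachmentRef>();

	private key(sourceId: string, contentHash: string): string {
		return `${sourceId}:${contentHash}`;
	}

	// Returns the existing record for this source, or registers a new one.
	upsert(ref: AttachmentRef): AttachmentRef {
		const k = this.key(ref.sourceId, ref.contentHash);
		const existing = this.seen.get(k);
		if (existing) return existing;
		this.seen.set(k, ref);
		return ref;
	}
}

const dedup = new PerSourceDedup();
const a = dedup.upsert({ id: 'a1', sourceId: 'src-1', contentHash: 'abc' });
const b = dedup.upsert({ id: 'a2', sourceId: 'src-1', contentHash: 'abc' }); // deduped to a1
const c = dedup.upsert({ id: 'a3', sourceId: 'src-2', contentHash: 'abc' }); // kept: different source
```

Note the second insert collapses onto the first only within the same source; the same hash under a different source stays a distinct row, which is exactly what the composite index enforces in SQL.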
File diff suppressed because it is too large (Load Diff)
1225	packages/backend/src/database/migrations/meta/0021_snapshot.json	(new file; diff suppressed because it is too large)
1257	packages/backend/src/database/migrations/meta/0022_snapshot.json	(new file; diff suppressed because it is too large)
1257	packages/backend/src/database/migrations/meta/0023_snapshot.json	(new file; diff suppressed because it is too large)
@@ -1,153 +1,174 @@
 {
 	"version": "7",
 	"dialect": "postgresql",
 	"entries": [
 		{
 			"idx": 0,
 			"version": "7",
 			"when": 1752225352591,
 			"tag": "0000_amusing_namora",
 			"breakpoints": true
 		},
 		{
 			"idx": 1,
 			"version": "7",
 			"when": 1752326803882,
 			"tag": "0001_odd_night_thrasher",
 			"breakpoints": true
 		},
 		{
 			"idx": 2,
 			"version": "7",
 			"when": 1752332648392,
 			"tag": "0002_lethal_quentin_quire",
 			"breakpoints": true
 		},
 		{
 			"idx": 3,
 			"version": "7",
 			"when": 1752332967084,
 			"tag": "0003_petite_wrecker",
 			"breakpoints": true
 		},
 		{
 			"idx": 4,
 			"version": "7",
 			"when": 1752606108876,
 			"tag": "0004_sleepy_paper_doll",
 			"breakpoints": true
 		},
 		{
 			"idx": 5,
 			"version": "7",
 			"when": 1752606327253,
 			"tag": "0005_chunky_sue_storm",
 			"breakpoints": true
 		},
 		{
 			"idx": 6,
 			"version": "7",
 			"when": 1753112018514,
 			"tag": "0006_majestic_caretaker",
 			"breakpoints": true
 		},
 		{
 			"idx": 7,
 			"version": "7",
 			"when": 1753190159356,
 			"tag": "0007_handy_archangel",
 			"breakpoints": true
 		},
 		{
 			"idx": 8,
 			"version": "7",
 			"when": 1753370737317,
 			"tag": "0008_eminent_the_spike",
 			"breakpoints": true
 		},
 		{
 			"idx": 9,
 			"version": "7",
 			"when": 1754337938241,
 			"tag": "0009_late_lenny_balinger",
 			"breakpoints": true
 		},
 		{
 			"idx": 10,
 			"version": "7",
 			"when": 1754420780849,
 			"tag": "0010_perpetual_lightspeed",
 			"breakpoints": true
 		},
 		{
 			"idx": 11,
 			"version": "7",
 			"when": 1754422064158,
 			"tag": "0011_tan_blackheart",
 			"breakpoints": true
 		},
 		{
 			"idx": 12,
 			"version": "7",
 			"when": 1754476962901,
 			"tag": "0012_warm_the_stranger",
 			"breakpoints": true
 		},
 		{
 			"idx": 13,
 			"version": "7",
 			"when": 1754659373517,
 			"tag": "0013_classy_talkback",
 			"breakpoints": true
 		},
 		{
 			"idx": 14,
 			"version": "7",
 			"when": 1754831765718,
 			"tag": "0014_foamy_vapor",
 			"breakpoints": true
 		},
 		{
 			"idx": 15,
 			"version": "7",
 			"when": 1755443936046,
 			"tag": "0015_wakeful_norman_osborn",
 			"breakpoints": true
 		},
 		{
 			"idx": 16,
 			"version": "7",
 			"when": 1755780572342,
 			"tag": "0016_lonely_mariko_yashida",
 			"breakpoints": true
 		},
 		{
 			"idx": 17,
 			"version": "7",
 			"when": 1755961566627,
 			"tag": "0017_tranquil_shooting_star",
 			"breakpoints": true
 		},
 		{
 			"idx": 18,
 			"version": "7",
 			"when": 1756911118035,
 			"tag": "0018_flawless_owl",
 			"breakpoints": true
 		},
 		{
 			"idx": 19,
 			"version": "7",
 			"when": 1756937533843,
 			"tag": "0019_confused_scream",
 			"breakpoints": true
 		},
 		{
 			"idx": 20,
 			"version": "7",
 			"when": 1757860242528,
 			"tag": "0020_panoramic_wolverine",
 			"breakpoints": true
-		}
-	]
-}
+		},
+		{
+			"idx": 21,
+			"version": "7",
+			"when": 1759412986134,
+			"tag": "0021_nosy_veda",
+			"breakpoints": true
+		},
+		{
+			"idx": 22,
+			"version": "7",
+			"when": 1759701622932,
+			"tag": "0022_complete_triton",
+			"breakpoints": true
+		},
+		{
+			"idx": 23,
+			"version": "7",
+			"when": 1760354094610,
+			"tag": "0023_swift_swordsman",
+			"breakpoints": true
+		}
+	]
+}
@@ -7,3 +7,5 @@ export * from './schema/ingestion-sources';
 export * from './schema/users';
 export * from './schema/system-settings';
 export * from './schema/api-keys';
+export * from './schema/audit-logs';
+export * from './schema/enums';
@@ -1,15 +1,23 @@
 import { relations } from 'drizzle-orm';
-import { pgTable, text, uuid, bigint, primaryKey } from 'drizzle-orm/pg-core';
+import { pgTable, text, uuid, bigint, primaryKey, index } from 'drizzle-orm/pg-core';
 import { archivedEmails } from './archived-emails';
+import { ingestionSources } from './ingestion-sources';
 
-export const attachments = pgTable('attachments', {
-	id: uuid('id').primaryKey().defaultRandom(),
-	filename: text('filename').notNull(),
-	mimeType: text('mime_type'),
-	sizeBytes: bigint('size_bytes', { mode: 'number' }).notNull(),
-	contentHashSha256: text('content_hash_sha256').notNull().unique(),
-	storagePath: text('storage_path').notNull(),
-});
+export const attachments = pgTable(
+	'attachments',
+	{
+		id: uuid('id').primaryKey().defaultRandom(),
+		filename: text('filename').notNull(),
+		mimeType: text('mime_type'),
+		sizeBytes: bigint('size_bytes', { mode: 'number' }).notNull(),
+		contentHashSha256: text('content_hash_sha256').notNull(),
+		storagePath: text('storage_path').notNull(),
+		ingestionSourceId: uuid('ingestion_source_id').references(() => ingestionSources.id, {
+			onDelete: 'cascade',
+		}),
+	},
+	(table) => [index('source_hash_idx').on(table.ingestionSourceId, table.contentHashSha256)]
+);
 
 export const emailAttachments = pgTable(
 	'email_attachments',
@@ -1,12 +1,34 @@
-import { bigserial, boolean, jsonb, pgTable, text, timestamp } from 'drizzle-orm/pg-core';
+import { bigserial, jsonb, pgTable, text, timestamp, varchar } from 'drizzle-orm/pg-core';
+import { auditLogActionEnum, auditLogTargetTypeEnum } from './enums';
 
 export const auditLogs = pgTable('audit_logs', {
+	// A unique, sequential, and gapless primary key for ordering.
 	id: bigserial('id', { mode: 'number' }).primaryKey(),
+
+	// The SHA-256 hash of the preceding log entry's `currentHash`.
+	previousHash: varchar('previous_hash', { length: 64 }),
+
+	// A high-precision, UTC timestamp of when the event occurred.
 	timestamp: timestamp('timestamp', { withTimezone: true }).notNull().defaultNow(),
+
+	// A stable identifier for the actor who performed the action.
 	actorIdentifier: text('actor_identifier').notNull(),
-	action: text('action').notNull(),
-	targetType: text('target_type'),
+
+	// The IP address from which the action was initiated.
+	actorIp: text('actor_ip'),
+
+	// A standardized, machine-readable identifier for the event.
+	actionType: auditLogActionEnum('action_type').notNull(),
+
+	// The type of resource that was affected by the action.
+	targetType: auditLogTargetTypeEnum('target_type'),
+
+	// The unique identifier of the affected resource.
 	targetId: text('target_id'),
+
+	// A JSON object containing specific, contextual details of the event.
 	details: jsonb('details'),
-	isTamperEvident: boolean('is_tamper_evident').default(false),
+
+	// The SHA-256 hash of this entire log entry's contents.
+	currentHash: varchar('current_hash', { length: 64 }).notNull(),
 });
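The new `previousHash`/`currentHash` columns chain each audit entry to its predecessor, which is what replaces the old `is_tamper_evident` flag. A minimal sketch of how such a SHA-256 hash chain can be built and verified (the serialization and field choices here are assumptions, not AuditService's actual implementation):

```typescript
import { createHash } from 'node:crypto';

// Sketch: a tamper-evident hash chain. Each entry's hash (64 hex chars,
// SHA-256) covers its own payload plus the previous entry's hash.
interface ChainEntry {
	payload: string;
	previousHash: string | null;
	currentHash: string;
}

function hashEntry(payload: string, previousHash: string | null): string {
	return createHash('sha256')
		.update(previousHash ?? '')
		.update(payload)
		.digest('hex');
}

function append(chain: ChainEntry[], payload: string): void {
	const previousHash = chain.length ? chain[chain.length - 1].currentHash : null;
	chain.push({ payload, previousHash, currentHash: hashEntry(payload, previousHash) });
}

// Recomputes every hash; editing any payload breaks the chain from that entry on.
function verify(chain: ChainEntry[]): boolean {
	return chain.every(
		(e, i) =>
			e.previousHash === (i === 0 ? null : chain[i - 1].currentHash) &&
			e.currentHash === hashEntry(e.payload, e.previousHash)
	);
}

const log: ChainEntry[] = [];
append(log, 'LOGIN user-1');
append(log, 'DELETE ApiKey k-9');
const ok = verify(log); // true
log[0].payload = 'LOGIN attacker'; // tamper with an earlier entry
const tampered = verify(log); // false
```

Because each hash folds in the previous one, an attacker who rewrites a row must also rewrite every later row's hashes, which is what makes the log tamper-evident rather than merely append-only.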
5	packages/backend/src/database/schema/enums.ts	(new file)
@@ -0,0 +1,5 @@
+import { pgEnum } from 'drizzle-orm/pg-core';
+import { AuditLogActions, AuditLogTargetTypes } from '@open-archiver/types';
+
+export const auditLogActionEnum = pgEnum('audit_log_action', AuditLogActions);
+export const auditLogTargetTypeEnum = pgEnum('audit_log_target_type', AuditLogTargetTypes);
9	packages/backend/src/helpers/deletionGuard.ts	(new file)
@@ -0,0 +1,9 @@
+import { config } from '../config';
+import i18next from 'i18next';
+
+export function checkDeletionEnabled() {
+	if (!config.app.enableDeletion) {
+		const errorMessage = i18next.t('Deletion is disabled for this instance.');
+		throw new Error(errorMessage);
+	}
+}
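`checkDeletionEnabled` above throws before any destructive work starts. A small sketch of that guard pattern in isolation (the config shape and delete handler below are illustrative stand-ins, not the project's code):

```typescript
// Sketch: a feature-flag guard that aborts destructive operations early.
const appConfig = { enableDeletion: false }; // stand-in for config.app

function checkDeletionEnabledSketch(): void {
	if (!appConfig.enableDeletion) {
		throw new Error('Deletion is disabled for this instance.');
	}
}

// Hypothetical delete handler: the guard runs first, so when the
// instance-wide flag is off, nothing is removed from the store.
function deleteEmail(store: Map<string, string>, id: string): void {
	checkDeletionEnabledSketch();
	store.delete(id);
}

const store = new Map([['e1', 'raw eml bytes']]);
let blocked = false;
try {
	deleteEmail(store, 'e1');
} catch {
	blocked = true; // guard threw; 'e1' is still present
}
appConfig.enableDeletion = true;
deleteEmail(store, 'e1'); // now succeeds
```

Putting the check in one helper means every deletion path (emails, attachments, sources) enforces the same instance-wide protection instead of each handler re-reading the flag.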
@@ -59,14 +59,17 @@ async function extractTextLegacy(buffer: Buffer, mimeType: string): Promise<stri
 	try {
 		if (mimeType === 'application/pdf') {
 			// Check PDF size (memory protection)
-			if (buffer.length > 50 * 1024 * 1024) { // 50MB Limit
+			if (buffer.length > 50 * 1024 * 1024) {
+				// 50MB Limit
 				logger.warn('PDF too large for legacy extraction, skipping');
 				return '';
 			}
 			return await extractTextFromPdf(buffer);
 		}
 
-		if (mimeType === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document') {
+		if (
+			mimeType === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
+		) {
 			const { value } = await mammoth.extractRawText({ buffer });
 			return value;
 		}
@@ -118,7 +121,9 @@ export async function extractText(buffer: Buffer, mimeType: string): Promise<str
 	// General size limit
 	const maxSize = process.env.TIKA_URL ? 100 * 1024 * 1024 : 50 * 1024 * 1024; // 100MB for Tika, 50MB for Legacy
 	if (buffer.length > maxSize) {
-		logger.warn(`File too large for text extraction: ${buffer.length} bytes (limit: ${maxSize})`);
+		logger.warn(
+			`File too large for text extraction: ${buffer.length} bytes (limit: ${maxSize})`
+		);
 		return '';
 	}
 
@@ -128,12 +133,12 @@
 	if (tikaUrl) {
 		// Tika decides what it can parse
 		logger.debug(`Using Tika for text extraction: ${mimeType}`);
-		const ocrService = new OcrService()
+		const ocrService = new OcrService();
 		try {
 			return await ocrService.extractTextWithTika(buffer, mimeType);
 		} catch (error) {
-			logger.error({ error }, "OCR text extraction failed, returning empty string")
-			return ''
+			logger.error({ error }, 'OCR text extraction failed, returning empty string');
+			return '';
 		}
 	} else {
 		// extract using legacy mode
@@ -1,155 +1,10 @@
-import express from 'express';
-import dotenv from 'dotenv';
-import { AuthController } from './api/controllers/auth.controller';
-import { IngestionController } from './api/controllers/ingestion.controller';
-import { ArchivedEmailController } from './api/controllers/archived-email.controller';
-import { StorageController } from './api/controllers/storage.controller';
-import { SearchController } from './api/controllers/search.controller';
-import { IamController } from './api/controllers/iam.controller';
-import { requireAuth } from './api/middleware/requireAuth';
-import { createAuthRouter } from './api/routes/auth.routes';
-import { createIamRouter } from './api/routes/iam.routes';
-import { createIngestionRouter } from './api/routes/ingestion.routes';
-import { createArchivedEmailRouter } from './api/routes/archived-email.routes';
-import { createStorageRouter } from './api/routes/storage.routes';
-import { createSearchRouter } from './api/routes/search.routes';
-import { createDashboardRouter } from './api/routes/dashboard.routes';
-import { createUploadRouter } from './api/routes/upload.routes';
-import { createUserRouter } from './api/routes/user.routes';
-import { createSettingsRouter } from './api/routes/settings.routes';
-import { apiKeyRoutes } from './api/routes/api-key.routes';
-import { AuthService } from './services/AuthService';
-import { UserService } from './services/UserService';
-import { IamService } from './services/IamService';
-import { StorageService } from './services/StorageService';
-import { SearchService } from './services/SearchService';
-import { SettingsService } from './services/SettingsService';
-import i18next from 'i18next';
-import FsBackend from 'i18next-fs-backend';
-import i18nextMiddleware from 'i18next-http-middleware';
-import path from 'path';
-import { logger } from './config/logger';
-import { rateLimiter } from './api/middleware/rateLimiter';
-
-// Load environment variables
-dotenv.config();
-
-// --- Environment Variable Validation ---
-const { PORT_BACKEND, JWT_SECRET, JWT_EXPIRES_IN } = process.env;
-
-if (!PORT_BACKEND || !JWT_SECRET || !JWT_EXPIRES_IN) {
-	throw new Error(
-		'Missing required environment variables for the backend: PORT_BACKEND, JWT_SECRET, JWT_EXPIRES_IN.'
-	);
-}
-
-// --- i18next Initialization ---
-const initializeI18next = async () => {
-	const systemSettings = await settingsService.getSystemSettings();
-	const defaultLanguage = systemSettings?.language || 'en';
-	logger.info({ language: defaultLanguage }, 'Default language');
-	await i18next.use(FsBackend).init({
-		lng: defaultLanguage,
-		fallbackLng: defaultLanguage,
-		ns: ['translation'],
-		defaultNS: 'translation',
-		backend: {
-			loadPath: path.resolve(__dirname, './locales/{{lng}}/{{ns}}.json'),
-		},
-	});
-};
-
-// --- Dependency Injection Setup ---
-
-const userService = new UserService();
-const authService = new AuthService(userService, JWT_SECRET, JWT_EXPIRES_IN);
-const authController = new AuthController(authService, userService);
-const ingestionController = new IngestionController();
-const archivedEmailController = new ArchivedEmailController();
-const storageService = new StorageService();
-const storageController = new StorageController(storageService);
-const searchService = new SearchService();
-const searchController = new SearchController();
-const iamService = new IamService();
-const iamController = new IamController(iamService);
-const settingsService = new SettingsService();
-
-// --- Express App Initialization ---
-const app = express();
-
-// --- Routes ---
-const authRouter = createAuthRouter(authController);
-const ingestionRouter = createIngestionRouter(ingestionController, authService);
-const archivedEmailRouter = createArchivedEmailRouter(archivedEmailController, authService);
-const storageRouter = createStorageRouter(storageController, authService);
-const searchRouter = createSearchRouter(searchController, authService);
-const dashboardRouter = createDashboardRouter(authService);
-const iamRouter = createIamRouter(iamController, authService);
-const uploadRouter = createUploadRouter(authService);
-const userRouter = createUserRouter(authService);
-const settingsRouter = createSettingsRouter(authService);
-const apiKeyRouter = apiKeyRoutes(authService);
-// upload route is added before middleware because it doesn't use the json middleware.
-app.use('/v1/upload', uploadRouter);
-
-// Middleware for all other routes
-app.use((req, res, next) => {
-	// exclude certain API endpoints from the rate limiter, for example status, system settings
-	const excludedPatterns = [/^\/v\d+\/auth\/status$/, /^\/v\d+\/settings\/system$/];
-	for (const pattern of excludedPatterns) {
-		if (pattern.test(req.path)) {
-			return next();
-		}
-	}
-	rateLimiter(req, res, next);
-});
-app.use(express.json());
-app.use(express.urlencoded({ extended: true }));
-
-// i18n middleware
-app.use(i18nextMiddleware.handle(i18next));
-
-app.use('/v1/auth', authRouter);
-app.use('/v1/iam', iamRouter);
-app.use('/v1/ingestion-sources', ingestionRouter);
-app.use('/v1/archived-emails', archivedEmailRouter);
-app.use('/v1/storage', storageRouter);
-app.use('/v1/search', searchRouter);
-app.use('/v1/dashboard', dashboardRouter);
-app.use('/v1/users', userRouter);
-app.use('/v1/settings', settingsRouter);
-app.use('/v1/api-keys', apiKeyRouter);
-
-// Example of a protected route
-app.get('/v1/protected', requireAuth(authService), (req, res) => {
-	res.json({
-		message: 'You have accessed a protected route!',
-		user: req.user, // The user payload is attached by the requireAuth middleware
-	});
-});
-
-app.get('/', (req, res) => {
-	res.send('Backend is running!');
-});
-
-// --- Server Start ---
-const startServer = async () => {
-	try {
-		// Initialize i18next
-		await initializeI18next();
-		logger.info({}, 'i18next initialized');
-
-		// Configure the Meilisearch index on startup
-		logger.info({}, 'Configuring email index...');
-		await searchService.configureEmailIndex();
-
-		app.listen(PORT_BACKEND, () => {
-			logger.info({}, `Backend listening at http://localhost:${PORT_BACKEND}`);
-		});
-	} catch (error) {
-		logger.error({ error }, 'Failed to start the server:', error);
-		process.exit(1);
-	}
-};
-
-startServer();
+export { createServer, ArchiverModule } from './api/server';
+export { logger } from './config/logger';
+export { config } from './config';
+export * from './services/AuthService';
+export * from './services/AuditService';
+export * from './api/middleware/requireAuth';
+export * from './api/middleware/requirePermission';
+export { db } from './database';
+export * as drizzleOrm from 'drizzle-orm';
+export * from './database/schema';
@@ -11,7 +11,7 @@ const databaseService = new DatabaseService();
 const indexingService = new IndexingService(databaseService, searchService, storageService);
 
 export default async function (job: Job<{ emails: PendingEmail[] }>) {
 	const { emails } = job.data;
 	console.log(`Indexing email batch with ${emails.length} emails`);
 	await indexingService.indexEmailBatch(emails);
 }
@@ -13,7 +13,7 @@ import { IndexingService } from '../../services/IndexingService';
 import { SearchService } from '../../services/SearchService';
 import { DatabaseService } from '../../services/DatabaseService';
 import { config } from '../../config';
+import { indexingQueue } from '../queues';
 
 /**
  * This processor handles the ingestion of emails for a single user's mailbox.
@@ -55,7 +55,7 @@ export const processMailboxProcessor = async (job: Job<IProcessMailboxJob, SyncS
 			if (processedEmail) {
 				emailBatch.push(processedEmail);
 				if (emailBatch.length >= BATCH_SIZE) {
-					await indexingService.indexEmailBatch(emailBatch);
+					await indexingQueue.add('index-email-batch', { emails: emailBatch });
 					emailBatch = [];
 				}
 			}
@@ -63,7 +63,7 @@ export const processMailboxProcessor = async (job: Job<IProcessMailboxJob, SyncS
 	}
 
 	if (emailBatch.length > 0) {
-		await indexingService.indexEmailBatch(emailBatch);
+		await indexingQueue.add('index-email-batch', { emails: emailBatch });
 		emailBatch = [];
 	}
 
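The mailbox processor change above stops indexing batches inline and instead enqueues 'index-email-batch' jobs, decoupling ingestion from indexing. A sketch of the flush-at-batch-size pattern it uses (a plain array stands in for the BullMQ queue here):

```typescript
// Sketch: accumulate emails, flush a job to the queue every BATCH_SIZE,
// then flush the remaining partial batch once at the end.
const BATCH_SIZE = 3;
type Email = { id: string };
const queuedJobs: Email[][] = []; // stand-in for indexingQueue.add(...)

function ingest(emails: Email[]): void {
	let batch: Email[] = [];
	for (const email of emails) {
		batch.push(email);
		if (batch.length >= BATCH_SIZE) {
			queuedJobs.push(batch); // indexingQueue.add('index-email-batch', { emails: batch })
			batch = [];
		}
	}
	if (batch.length > 0) {
		queuedJobs.push(batch); // final partial batch
	}
}

ingest([{ id: '1' }, { id: '2' }, { id: '3' }, { id: '4' }]);
// queuedJobs now holds two jobs: a full batch of 3 and a remainder of 1
```

Handing the batch to a queue lets a separate worker (the indexing processor above) do the Meilisearch writes, so a slow index never stalls the mailbox sync job.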
@@ -14,7 +14,8 @@
 	"demoMode": "Dieser Vorgang ist im Demo-Modus nicht zulässig.",
 	"unauthorized": "Unbefugt",
 	"unknown": "Ein unbekannter Fehler ist aufgetreten",
-	"noPermissionToAction": "Sie haben keine Berechtigung, die aktuelle Aktion auszuführen."
+	"noPermissionToAction": "Sie haben keine Berechtigung, die aktuelle Aktion auszuführen.",
+	"deletion_disabled": "Das Löschen ist für diese Instanz deaktiviert."
 },
 "user": {
 	"notFound": "Benutzer nicht gefunden",
@@ -14,7 +14,8 @@
 	"demoMode": "This operation is not allowed in demo mode.",
 	"unauthorized": "Unauthorized",
 	"unknown": "An unknown error occurred",
-	"noPermissionToAction": "You don't have the permission to perform the current action."
+	"noPermissionToAction": "You don't have the permission to perform the current action.",
+	"deletion_disabled": "Deletion is disabled for this instance."
 },
 "user": {
 	"notFound": "User not found",
@@ -3,13 +3,17 @@ import { db } from '../database';
 import { apiKeys } from '../database/schema/api-keys';
 import { CryptoService } from './CryptoService';
 import { and, eq } from 'drizzle-orm';
-import { ApiKey } from '@open-archiver/types';
+import { ApiKey, User } from '@open-archiver/types';
+import { AuditService } from './AuditService';
 
 export class ApiKeyService {
+	private static auditService = new AuditService();
 	public static async generate(
 		userId: string,
 		name: string,
-		expiresInDays: number
+		expiresInDays: number,
+		actor: User,
+		actorIp: string
 	): Promise<string> {
 		const key = randomBytes(32).toString('hex');
 		const expiresAt = new Date();
@@ -24,6 +28,17 @@ export class ApiKeyService {
 			expiresAt,
 		});
 
+		await this.auditService.createAuditLog({
+			actorIdentifier: actor.id,
+			actionType: 'GENERATE',
+			targetType: 'ApiKey',
+			targetId: name,
+			actorIp,
+			details: {
+				keyName: name,
+			},
+		});
+
 		return key;
 	}
 
@@ -46,8 +61,19 @@ export class ApiKeyService {
 			.filter((k): k is NonNullable<typeof k> => k !== null);
 	}
 
-	public static async deleteKey(id: string, userId: string) {
+	public static async deleteKey(id: string, userId: string, actor: User, actorIp: string) {
+		const [key] = await db.select().from(apiKeys).where(eq(apiKeys.id, id));
 		await db.delete(apiKeys).where(and(eq(apiKeys.id, id), eq(apiKeys.userId, userId)));
+		await this.auditService.createAuditLog({
+			actorIdentifier: actor.id,
+			actionType: 'DELETE',
+			targetType: 'ApiKey',
+			targetId: id,
+			actorIp,
+			details: {
+				keyName: key?.name,
+			},
+		});
 	}
 	/**
 	 *
@@ -17,6 +17,9 @@ import type {
 import { StorageService } from './StorageService';
 import { SearchService } from './SearchService';
 import type { Readable } from 'stream';
+import { AuditService } from './AuditService';
+import { User } from '@open-archiver/types';
+import { checkDeletionEnabled } from '../helpers/deletionGuard';
 
 interface DbRecipients {
 	to: { name: string; address: string }[];
@@ -34,6 +37,7 @@ async function streamToBuffer(stream: Readable): Promise<Buffer> {
 }
 
 export class ArchivedEmailService {
+	private static auditService = new AuditService();
 	private static mapRecipients(dbRecipients: unknown): Recipient[] {
 		const { to = [], cc = [], bcc = [] } = dbRecipients as DbRecipients;
 
@@ -98,7 +102,9 @@ export class ArchivedEmailService {
 
 	public static async getArchivedEmailById(
 		emailId: string,
-		userId: string
+		userId: string,
+		actor: User,
+		actorIp: string
 	): Promise<ArchivedEmail | null> {
 		const email = await db.query.archivedEmails.findFirst({
 			where: eq(archivedEmails.id, emailId),
@@ -118,6 +124,15 @@ export class ArchivedEmailService {
 			return null;
 		}
 
+		await this.auditService.createAuditLog({
+			actorIdentifier: actor.id,
+			actionType: 'READ',
+			targetType: 'ArchivedEmail',
+			targetId: emailId,
+			actorIp,
+			details: {},
+		});
+
 		let threadEmails: ThreadEmail[] = [];
 
 		if (email.threadId) {
|
||||||
@@ -179,7 +194,12 @@ export class ArchivedEmailService {
|
|||||||
return mappedEmail;
|
return mappedEmail;
|
||||||
}
|
}
|
||||||
|
|
||||||
public static async deleteArchivedEmail(emailId: string): Promise<void> {
|
public static async deleteArchivedEmail(
|
||||||
|
emailId: string,
|
||||||
|
actor: User,
|
||||||
|
actorIp: string
|
||||||
|
): Promise<void> {
|
||||||
|
checkDeletionEnabled();
|
||||||
const [email] = await db
|
const [email] = await db
|
||||||
.select()
|
.select()
|
||||||
.from(archivedEmails)
|
.from(archivedEmails)
|
||||||
@@ -193,7 +213,7 @@ export class ArchivedEmailService {
|
|||||||
|
|
||||||
// Load and handle attachments before deleting the email itself
|
// Load and handle attachments before deleting the email itself
|
||||||
if (email.hasAttachments) {
|
if (email.hasAttachments) {
|
||||||
const emailAttachmentsResult = await db
|
const attachmentsForEmail = await db
|
||||||
.select({
|
.select({
|
||||||
attachmentId: attachments.id,
|
attachmentId: attachments.id,
|
||||||
storagePath: attachments.storagePath,
|
storagePath: attachments.storagePath,
|
||||||
@@ -203,37 +223,33 @@ export class ArchivedEmailService {
|
|||||||
.where(eq(emailAttachments.emailId, emailId));
|
.where(eq(emailAttachments.emailId, emailId));
|
||||||
|
|
||||||
try {
|
try {
|
||||||
for (const attachment of emailAttachmentsResult) {
|
for (const attachment of attachmentsForEmail) {
|
||||||
const [refCount] = await db
|
// Delete the link between this email and the attachment record.
|
||||||
.select({ count: count(emailAttachments.emailId) })
|
await db
|
||||||
|
.delete(emailAttachments)
|
||||||
|
.where(
|
||||||
|
and(
|
||||||
|
eq(emailAttachments.emailId, emailId),
|
||||||
|
eq(emailAttachments.attachmentId, attachment.attachmentId)
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
// Check if any other emails are linked to this attachment record.
|
||||||
|
const [recordRefCount] = await db
|
||||||
|
.select({ count: count() })
|
||||||
.from(emailAttachments)
|
.from(emailAttachments)
|
||||||
.where(eq(emailAttachments.attachmentId, attachment.attachmentId));
|
.where(eq(emailAttachments.attachmentId, attachment.attachmentId));
|
||||||
|
|
||||||
if (refCount.count === 1) {
|
// If no other emails are linked to this record, it's safe to delete it and the file.
|
||||||
|
if (recordRefCount.count === 0) {
|
||||||
await storage.delete(attachment.storagePath);
|
await storage.delete(attachment.storagePath);
|
||||||
await db
|
|
||||||
.delete(emailAttachments)
|
|
||||||
.where(
|
|
||||||
and(
|
|
||||||
eq(emailAttachments.emailId, emailId),
|
|
||||||
eq(emailAttachments.attachmentId, attachment.attachmentId)
|
|
||||||
)
|
|
||||||
);
|
|
||||||
await db
|
await db
|
||||||
.delete(attachments)
|
.delete(attachments)
|
||||||
.where(eq(attachments.id, attachment.attachmentId));
|
.where(eq(attachments.id, attachment.attachmentId));
|
||||||
} else {
|
|
||||||
await db
|
|
||||||
.delete(emailAttachments)
|
|
||||||
.where(
|
|
||||||
and(
|
|
||||||
eq(emailAttachments.emailId, emailId),
|
|
||||||
eq(emailAttachments.attachmentId, attachment.attachmentId)
|
|
||||||
)
|
|
||||||
);
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
} catch {
|
} catch (error) {
|
||||||
|
console.error('Failed to delete email attachments', error);
|
||||||
throw new Error('Failed to delete email attachments');
|
throw new Error('Failed to delete email attachments');
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -245,5 +261,16 @@ export class ArchivedEmailService {
|
|||||||
await searchService.deleteDocuments('emails', [emailId]);
|
await searchService.deleteDocuments('emails', [emailId]);
|
||||||
|
|
||||||
await db.delete(archivedEmails).where(eq(archivedEmails.id, emailId));
|
await db.delete(archivedEmails).where(eq(archivedEmails.id, emailId));
|
||||||
|
|
||||||
|
await this.auditService.createAuditLog({
|
||||||
|
actorIdentifier: actor.id,
|
||||||
|
actionType: 'DELETE',
|
||||||
|
targetType: 'ArchivedEmail',
|
||||||
|
targetId: emailId,
|
||||||
|
actorIp,
|
||||||
|
details: {
|
||||||
|
reason: 'ManualDeletion',
|
||||||
|
},
|
||||||
|
});
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
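The reworked attachment cleanup in `deleteArchivedEmail` above follows a reference-counting pattern: drop this email's link row first, then delete the stored file and the attachment record only when no other email still references them. A minimal in-memory sketch of that pattern (the `Set`-based tables are illustrative stand-ins for the real `email_attachments` rows and the storage backend, not the actual schema):

```typescript
// Link rows: one entry per (emailId, attachmentId) pair.
const emailAttachments = new Set<string>();
// Stored files, keyed by attachment id.
const storedFiles = new Set<string>();

function link(emailId: string, attachmentId: string): void {
    emailAttachments.add(`${emailId}:${attachmentId}`);
    storedFiles.add(attachmentId);
}

function deleteEmail(emailId: string, attachmentIds: string[]): void {
    for (const attachmentId of attachmentIds) {
        // 1. Drop the link row for this email.
        emailAttachments.delete(`${emailId}:${attachmentId}`);
        // 2. Count remaining links from other emails to this attachment.
        let remaining = 0;
        for (const key of emailAttachments) {
            if (key.endsWith(`:${attachmentId}`)) remaining++;
        }
        // 3. Only when nothing else references it is the file deleted.
        if (remaining === 0) {
            storedFiles.delete(attachmentId);
        }
    }
}

link('mail-1', 'att-1');
link('mail-2', 'att-1'); // the attachment is shared by two emails
deleteEmail('mail-1', ['att-1']);
console.log(storedFiles.has('att-1')); // true — still referenced by mail-2
deleteEmail('mail-2', ['att-1']);
console.log(storedFiles.has('att-1')); // false — last reference removed
```

Deleting the link row before counting is what lets the diff replace the old "refCount === 1" check with the simpler "recordRefCount === 0" check.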
packages/backend/src/services/AuditService.ts (new file, 199 lines)

@@ -0,0 +1,199 @@
+import { db, Database } from '../database';
+import * as schema from '../database/schema';
+import {
+    AuditLogEntry,
+    CreateAuditLogEntry,
+    GetAuditLogsOptions,
+    GetAuditLogsResponse,
+} from '@open-archiver/types';
+import { desc, sql, asc, and, gte, lte, eq } from 'drizzle-orm';
+import { createHash } from 'crypto';
+
+export class AuditService {
+    private sanitizeObject(obj: any): any {
+        if (obj === null || typeof obj !== 'object') {
+            return obj;
+        }
+        if (Array.isArray(obj)) {
+            return obj.map((item) => this.sanitizeObject(item));
+        }
+        const sanitizedObj: { [key: string]: any } = {};
+        for (const key in obj) {
+            if (Object.prototype.hasOwnProperty.call(obj, key)) {
+                const value = obj[key];
+                sanitizedObj[key] = value === undefined ? null : this.sanitizeObject(value);
+            }
+        }
+        return sanitizedObj;
+    }
+
+    public async createAuditLog(entry: CreateAuditLogEntry) {
+        return db.transaction(async (tx) => {
+            // Lock the table to prevent race conditions
+            await tx.execute(sql`LOCK TABLE audit_logs IN EXCLUSIVE MODE`);
+
+            const sanitizedEntry = this.sanitizeObject(entry);
+
+            const previousHash = await this.getLatestHash(tx);
+            const newEntry = {
+                ...sanitizedEntry,
+                previousHash,
+                timestamp: new Date(),
+            };
+            const currentHash = this.calculateHash(newEntry);
+
+            const finalEntry = {
+                ...newEntry,
+                currentHash,
+            };
+
+            await tx.insert(schema.auditLogs).values(finalEntry);
+
+            return finalEntry;
+        });
+    }
+
+    private async getLatestHash(tx: Database): Promise<string | null> {
+        const [latest] = await tx
+            .select({
+                currentHash: schema.auditLogs.currentHash,
+            })
+            .from(schema.auditLogs)
+            .orderBy(desc(schema.auditLogs.id))
+            .limit(1);
+
+        return latest?.currentHash ?? null;
+    }
+
+    private calculateHash(entry: any): string {
+        // Create a canonical object for hashing to ensure consistency in property order and types.
+        const objectToHash = {
+            actorIdentifier: entry.actorIdentifier,
+            actorIp: entry.actorIp ?? null,
+            actionType: entry.actionType,
+            targetType: entry.targetType ?? null,
+            targetId: entry.targetId ?? null,
+            details: entry.details ?? null,
+            previousHash: entry.previousHash ?? null,
+            // Normalize timestamp to milliseconds since epoch to avoid precision issues.
+            timestamp: new Date(entry.timestamp).getTime(),
+        };
+
+        const data = this.canonicalStringify(objectToHash);
+        return createHash('sha256').update(data).digest('hex');
+    }
+
+    private canonicalStringify(obj: any): string {
+        if (obj === undefined) {
+            return 'null';
+        }
+        if (obj === null || typeof obj !== 'object') {
+            return JSON.stringify(obj);
+        }
+
+        if (Array.isArray(obj)) {
+            return `[${obj.map((item) => this.canonicalStringify(item)).join(',')}]`;
+        }
+
+        const keys = Object.keys(obj).sort();
+        const pairs = keys.map((key) => {
+            const value = obj[key];
+            return `${JSON.stringify(key)}:${this.canonicalStringify(value)}`;
+        });
+        return `{${pairs.join(',')}}`;
+    }
+
+    public async getAuditLogs(options: GetAuditLogsOptions = {}): Promise<GetAuditLogsResponse> {
+        const {
+            page = 1,
+            limit = 20,
+            startDate,
+            endDate,
+            actor,
+            actionType,
+            sort = 'desc',
+        } = options;
+
+        const whereClauses = [];
+        if (startDate) whereClauses.push(gte(schema.auditLogs.timestamp, startDate));
+        if (endDate) whereClauses.push(lte(schema.auditLogs.timestamp, endDate));
+        if (actor) whereClauses.push(eq(schema.auditLogs.actorIdentifier, actor));
+        if (actionType) whereClauses.push(eq(schema.auditLogs.actionType, actionType));
+
+        const where = and(...whereClauses);
+
+        const logs = await db.query.auditLogs.findMany({
+            where,
+            orderBy: [sort === 'asc' ? asc(schema.auditLogs.id) : desc(schema.auditLogs.id)],
+            limit,
+            offset: (page - 1) * limit,
+        });
+
+        const totalResult = await db
+            .select({
+                count: sql<number>`count(*)`,
+            })
+            .from(schema.auditLogs)
+            .where(where);
+
+        const total = totalResult[0].count;
+
+        return {
+            data: logs as AuditLogEntry[],
+            meta: {
+                total,
+                page,
+                limit,
+            },
+        };
+    }
+
+    public async verifyAuditLog(): Promise<{ ok: boolean; message: string; logId?: number }> {
+        const chunkSize = 1000;
+        let offset = 0;
+        let previousHash: string | null = null;
+        /**
+         * TODO: create job for audit log verification, generate audit report (new DB table)
+         */
+        while (true) {
+            const logs = await db.query.auditLogs.findMany({
+                orderBy: [asc(schema.auditLogs.id)],
+                limit: chunkSize,
+                offset,
+            });
+
+            if (logs.length === 0) {
+                break;
+            }
+
+            for (const log of logs) {
+                if (log.previousHash !== previousHash) {
+                    return {
+                        ok: false,
+                        message: 'Audit log chain is broken!',
+                        logId: log.id,
+                    };
+                }
+
+                const calculatedHash = this.calculateHash(log);
+
+                if (log.currentHash !== calculatedHash) {
+                    return {
+                        ok: false,
+                        message: 'Audit log entry has been tampered with!',
+                        logId: log.id,
+                    };
+                }
+                previousHash = log.currentHash;
+            }
+
+            offset += chunkSize;
+        }
+
+        return {
+            ok: true,
+            message:
+                'Audit log integrity verified successfully. The logs are not tampered with and the log chain is complete.',
+        };
+    }
+}
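The new `AuditService` makes the log tamper-evident by chaining entries: each row's SHA-256 digest covers a canonical (recursively key-sorted) serialization of its own fields plus the previous row's hash, so editing or removing any historical row invalidates every hash after it. A standalone sketch of the scheme with a simplified entry shape (not the service's exact column set):

```typescript
import { createHash } from 'crypto';

// Deterministic JSON: object keys sorted recursively, `undefined` mapped to
// null, so the same logical entry always serializes (and hashes) identically.
function canonicalStringify(obj: any): string {
    if (obj === undefined) return 'null';
    if (obj === null || typeof obj !== 'object') return JSON.stringify(obj);
    if (Array.isArray(obj)) return `[${obj.map(canonicalStringify).join(',')}]`;
    const pairs = Object.keys(obj)
        .sort()
        .map((key) => `${JSON.stringify(key)}:${canonicalStringify(obj[key])}`);
    return `{${pairs.join(',')}}`;
}

interface ChainEntry {
    payload: Record<string, unknown>;
    previousHash: string | null;
    currentHash: string;
}

// Append an entry whose hash covers both its payload and its predecessor's hash.
function append(chain: ChainEntry[], payload: Record<string, unknown>): void {
    const previousHash = chain.length ? chain[chain.length - 1].currentHash : null;
    const currentHash = createHash('sha256')
        .update(canonicalStringify({ payload, previousHash }))
        .digest('hex');
    chain.push({ payload, previousHash, currentHash });
}

// Walk the chain: each link must reference its predecessor's hash, and
// re-hashing each entry must reproduce the stored digest.
function verify(chain: ChainEntry[]): boolean {
    let previousHash: string | null = null;
    for (const entry of chain) {
        if (entry.previousHash !== previousHash) return false;
        const recomputed = createHash('sha256')
            .update(canonicalStringify({ payload: entry.payload, previousHash }))
            .digest('hex');
        if (recomputed !== entry.currentHash) return false;
        previousHash = entry.currentHash;
    }
    return true;
}

const chain: ChainEntry[] = [];
append(chain, { actionType: 'LOGIN', actorIdentifier: 'user-1' });
append(chain, { actionType: 'DELETE', actorIdentifier: 'user-1' });
console.log(verify(chain)); // true
chain[0].payload.actionType = 'READ'; // tamper with a historical entry
console.log(verify(chain)); // false
```

The table lock taken in `createAuditLog` serves the same purpose as the sequential `append` here: no two writers may read the same "latest hash" and both chain onto it.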
@@ -2,17 +2,25 @@ import { compare } from 'bcryptjs';
|
|||||||
import { SignJWT, jwtVerify } from 'jose';
|
import { SignJWT, jwtVerify } from 'jose';
|
||||||
import type { AuthTokenPayload, LoginResponse } from '@open-archiver/types';
|
import type { AuthTokenPayload, LoginResponse } from '@open-archiver/types';
|
||||||
import { UserService } from './UserService';
|
import { UserService } from './UserService';
|
||||||
|
import { AuditService } from './AuditService';
|
||||||
import { db } from '../database';
|
import { db } from '../database';
|
||||||
import * as schema from '../database/schema';
|
import * as schema from '../database/schema';
|
||||||
import { eq } from 'drizzle-orm';
|
import { eq } from 'drizzle-orm';
|
||||||
|
|
||||||
export class AuthService {
|
export class AuthService {
|
||||||
#userService: UserService;
|
#userService: UserService;
|
||||||
|
#auditService: AuditService;
|
||||||
#jwtSecret: Uint8Array;
|
#jwtSecret: Uint8Array;
|
||||||
#jwtExpiresIn: string;
|
#jwtExpiresIn: string;
|
||||||
|
|
||||||
constructor(userService: UserService, jwtSecret: string, jwtExpiresIn: string) {
|
constructor(
|
||||||
|
userService: UserService,
|
||||||
|
auditService: AuditService,
|
||||||
|
jwtSecret: string,
|
||||||
|
jwtExpiresIn: string
|
||||||
|
) {
|
||||||
this.#userService = userService;
|
this.#userService = userService;
|
||||||
|
this.#auditService = auditService;
|
||||||
this.#jwtSecret = new TextEncoder().encode(jwtSecret);
|
this.#jwtSecret = new TextEncoder().encode(jwtSecret);
|
||||||
this.#jwtExpiresIn = jwtExpiresIn;
|
this.#jwtExpiresIn = jwtExpiresIn;
|
||||||
}
|
}
|
||||||
@@ -33,16 +41,36 @@ export class AuthService {
|
|||||||
.sign(this.#jwtSecret);
|
.sign(this.#jwtSecret);
|
||||||
}
|
}
|
||||||
|
|
||||||
public async login(email: string, password: string): Promise<LoginResponse | null> {
|
public async login(email: string, password: string, ip: string): Promise<LoginResponse | null> {
|
||||||
const user = await this.#userService.findByEmail(email);
|
const user = await this.#userService.findByEmail(email);
|
||||||
|
|
||||||
if (!user || !user.password) {
|
if (!user || !user.password) {
|
||||||
|
await this.#auditService.createAuditLog({
|
||||||
|
actorIdentifier: email,
|
||||||
|
actionType: 'LOGIN',
|
||||||
|
targetType: 'User',
|
||||||
|
targetId: email,
|
||||||
|
actorIp: ip,
|
||||||
|
details: {
|
||||||
|
error: 'UserNotFound',
|
||||||
|
},
|
||||||
|
});
|
||||||
return null; // User not found or password not set
|
return null; // User not found or password not set
|
||||||
}
|
}
|
||||||
|
|
||||||
const isPasswordValid = await this.verifyPassword(password, user.password);
|
const isPasswordValid = await this.verifyPassword(password, user.password);
|
||||||
|
|
||||||
if (!isPasswordValid) {
|
if (!isPasswordValid) {
|
||||||
|
await this.#auditService.createAuditLog({
|
||||||
|
actorIdentifier: user.id,
|
||||||
|
actionType: 'LOGIN',
|
||||||
|
targetType: 'User',
|
||||||
|
targetId: user.id,
|
||||||
|
actorIp: ip,
|
||||||
|
details: {
|
||||||
|
error: 'InvalidPassword',
|
||||||
|
},
|
||||||
|
});
|
||||||
return null; // Invalid password
|
return null; // Invalid password
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -63,6 +91,15 @@ export class AuthService {
|
|||||||
roles: roles,
|
roles: roles,
|
||||||
});
|
});
|
||||||
|
|
||||||
|
await this.#auditService.createAuditLog({
|
||||||
|
actorIdentifier: user.id,
|
||||||
|
actionType: 'LOGIN',
|
||||||
|
targetType: 'User',
|
||||||
|
targetId: user.id,
|
||||||
|
actorIp: ip,
|
||||||
|
details: {},
|
||||||
|
});
|
||||||
|
|
||||||
return {
|
return {
|
||||||
accessToken,
|
accessToken,
|
||||||
user: {
|
user: {
|
||||||
|
|||||||
@@ -60,7 +60,6 @@ function sanitizeObject<T>(obj: T): T {
|
|||||||
return obj;
|
return obj;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
export class IndexingService {
|
export class IndexingService {
|
||||||
private dbService: DatabaseService;
|
private dbService: DatabaseService;
|
||||||
private searchService: SearchService;
|
private searchService: SearchService;
|
||||||
@@ -235,9 +234,7 @@ export class IndexingService {
|
|||||||
/**
|
/**
|
||||||
* @deprecated
|
* @deprecated
|
||||||
*/
|
*/
|
||||||
private async indexByEmail(
|
private async indexByEmail(pendingEmail: PendingEmail): Promise<void> {
|
||||||
pendingEmail: PendingEmail
|
|
||||||
): Promise<void> {
|
|
||||||
const attachments: AttachmentsType = [];
|
const attachments: AttachmentsType = [];
|
||||||
if (pendingEmail.email.attachments && pendingEmail.email.attachments.length > 0) {
|
if (pendingEmail.email.attachments && pendingEmail.email.attachments.length > 0) {
|
||||||
for (const attachment of pendingEmail.email.attachments) {
|
for (const attachment of pendingEmail.email.attachments) {
|
||||||
@@ -259,7 +256,6 @@ export class IndexingService {
|
|||||||
await this.searchService.addDocuments('emails', [document], 'id');
|
await this.searchService.addDocuments('emails', [document], 'id');
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Creates a search document from a raw email object and its attachments.
|
* Creates a search document from a raw email object and its attachments.
|
||||||
*/
|
*/
|
||||||
@@ -478,14 +474,12 @@ export class IndexingService {
|
|||||||
'image/heif',
|
'image/heif',
|
||||||
];
|
];
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
return extractableTypes.some((type) => mimeType.toLowerCase().includes(type));
|
return extractableTypes.some((type) => mimeType.toLowerCase().includes(type));
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Ensures all required fields are present in EmailDocument
|
* Ensures all required fields are present in EmailDocument
|
||||||
*/
|
*/
|
||||||
private ensureEmailDocumentFields(doc: Partial<EmailDocument>): EmailDocument {
|
private ensureEmailDocumentFields(doc: Partial<EmailDocument>): EmailDocument {
|
||||||
return {
|
return {
|
||||||
id: doc.id || 'missing-id',
|
id: doc.id || 'missing-id',
|
||||||
@@ -510,7 +504,10 @@ export class IndexingService {
|
|||||||
JSON.stringify(doc);
|
JSON.stringify(doc);
|
||||||
return true;
|
return true;
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error({ doc, error: (error as Error).message }, 'Invalid EmailDocument detected');
|
logger.error(
|
||||||
|
{ doc, error: (error as Error).message },
|
||||||
|
'Invalid EmailDocument detected'
|
||||||
|
);
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -20,16 +20,17 @@ import {
|
|||||||
attachments as attachmentsSchema,
|
attachments as attachmentsSchema,
|
||||||
emailAttachments,
|
emailAttachments,
|
||||||
} from '../database/schema';
|
} from '../database/schema';
|
||||||
import { createHash } from 'crypto';
|
import { createHash, randomUUID } from 'crypto';
|
||||||
import { logger } from '../config/logger';
|
import { logger } from '../config/logger';
|
||||||
import { IndexingService } from './IndexingService';
|
|
||||||
import { SearchService } from './SearchService';
|
import { SearchService } from './SearchService';
|
||||||
import { DatabaseService } from './DatabaseService';
|
|
||||||
import { config } from '../config/index';
|
import { config } from '../config/index';
|
||||||
import { FilterBuilder } from './FilterBuilder';
|
import { FilterBuilder } from './FilterBuilder';
|
||||||
import e from 'express';
|
import { AuditService } from './AuditService';
|
||||||
|
import { User } from '@open-archiver/types';
|
||||||
|
import { checkDeletionEnabled } from '../helpers/deletionGuard';
|
||||||
|
|
||||||
export class IngestionService {
|
export class IngestionService {
|
||||||
|
private static auditService = new AuditService();
|
||||||
private static decryptSource(
|
private static decryptSource(
|
||||||
source: typeof ingestionSources.$inferSelect
|
source: typeof ingestionSources.$inferSelect
|
||||||
): IngestionSource | null {
|
): IngestionSource | null {
|
||||||
@@ -54,7 +55,9 @@ export class IngestionService {
|
|||||||
|
|
||||||
public static async create(
|
public static async create(
|
||||||
dto: CreateIngestionSourceDto,
|
dto: CreateIngestionSourceDto,
|
||||||
userId: string
|
userId: string,
|
||||||
|
actor: User,
|
||||||
|
actorIp: string
|
||||||
): Promise<IngestionSource> {
|
): Promise<IngestionSource> {
|
||||||
const { providerConfig, ...rest } = dto;
|
const { providerConfig, ...rest } = dto;
|
||||||
const encryptedCredentials = CryptoService.encryptObject(providerConfig);
|
const encryptedCredentials = CryptoService.encryptObject(providerConfig);
|
||||||
@@ -68,9 +71,21 @@ export class IngestionService {
|
|||||||
|
|
||||||
const [newSource] = await db.insert(ingestionSources).values(valuesToInsert).returning();
|
const [newSource] = await db.insert(ingestionSources).values(valuesToInsert).returning();
|
||||||
|
|
||||||
|
await this.auditService.createAuditLog({
|
||||||
|
actorIdentifier: actor.id,
|
||||||
|
actionType: 'CREATE',
|
||||||
|
targetType: 'IngestionSource',
|
||||||
|
targetId: newSource.id,
|
||||||
|
actorIp,
|
||||||
|
details: {
|
||||||
|
sourceName: newSource.name,
|
||||||
|
sourceType: newSource.provider,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
|
||||||
const decryptedSource = this.decryptSource(newSource);
|
const decryptedSource = this.decryptSource(newSource);
|
||||||
if (!decryptedSource) {
|
if (!decryptedSource) {
|
||||||
await this.delete(newSource.id);
|
await this.delete(newSource.id, actor, actorIp);
|
||||||
throw new Error(
|
throw new Error(
|
||||||
'Failed to process newly created ingestion source due to a decryption error.'
|
'Failed to process newly created ingestion source due to a decryption error.'
|
||||||
);
|
);
|
||||||
@@ -81,13 +96,18 @@ export class IngestionService {
|
|||||||
const connectionValid = await connector.testConnection();
|
const connectionValid = await connector.testConnection();
|
||||||
// If connection succeeds, update status to auth_success, which triggers the initial import.
|
// If connection succeeds, update status to auth_success, which triggers the initial import.
|
||||||
if (connectionValid) {
|
if (connectionValid) {
|
||||||
return await this.update(decryptedSource.id, { status: 'auth_success' });
|
return await this.update(
|
||||||
|
decryptedSource.id,
|
||||||
|
{ status: 'auth_success' },
|
||||||
|
actor,
|
||||||
|
actorIp
|
||||||
|
);
|
||||||
} else {
|
} else {
|
||||||
throw Error('Ingestion authentication failed.')
|
throw Error('Ingestion authentication failed.');
|
||||||
}
|
}
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
// If connection fails, delete the newly created source and throw the error.
|
// If connection fails, delete the newly created source and throw the error.
|
||||||
await this.delete(decryptedSource.id);
|
await this.delete(decryptedSource.id, actor, actorIp);
|
||||||
throw error;
|
throw error;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -124,7 +144,9 @@ export class IngestionService {
|
|||||||
|
|
||||||
public static async update(
|
public static async update(
|
||||||
id: string,
|
id: string,
|
||||||
dto: UpdateIngestionSourceDto
|
dto: UpdateIngestionSourceDto,
|
||||||
|
actor?: User,
|
||||||
|
actorIp?: string
|
||||||
): Promise<IngestionSource> {
|
): Promise<IngestionSource> {
|
||||||
const { providerConfig, ...rest } = dto;
|
const { providerConfig, ...rest } = dto;
|
||||||
const valuesToUpdate: Partial<typeof ingestionSources.$inferInsert> = { ...rest };
|
const valuesToUpdate: Partial<typeof ingestionSources.$inferInsert> = { ...rest };
|
||||||
@@ -159,11 +181,32 @@ export class IngestionService {
|
|||||||
if (originalSource.status !== 'auth_success' && decryptedSource.status === 'auth_success') {
|
if (originalSource.status !== 'auth_success' && decryptedSource.status === 'auth_success') {
|
||||||
await this.triggerInitialImport(decryptedSource.id);
|
await this.triggerInitialImport(decryptedSource.id);
|
||||||
}
|
}
|
||||||
|
if (actor && actorIp) {
|
||||||
|
const changedFields = Object.keys(dto).filter(
|
||||||
|
(key) =>
|
||||||
|
key !== 'providerConfig' &&
|
||||||
|
originalSource[key as keyof IngestionSource] !==
|
||||||
|
decryptedSource[key as keyof IngestionSource]
|
||||||
|
);
|
||||||
|
if (changedFields.length > 0) {
|
||||||
|
await this.auditService.createAuditLog({
|
||||||
|
actorIdentifier: actor.id,
|
||||||
|
actionType: 'UPDATE',
|
||||||
|
targetType: 'IngestionSource',
|
||||||
|
targetId: id,
|
||||||
|
actorIp,
|
||||||
|
details: {
|
||||||
|
changedFields,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return decryptedSource;
|
return decryptedSource;
|
||||||
}
|
}
|
||||||
|
|
||||||
public static async delete(id: string): Promise<IngestionSource> {
|
public static async delete(id: string, actor: User, actorIp: string): Promise<IngestionSource> {
|
||||||
|
checkDeletionEnabled();
|
||||||
const source = await this.findById(id);
|
const source = await this.findById(id);
|
||||||
if (!source) {
|
if (!source) {
|
||||||
throw new Error('Ingestion source not found');
|
throw new Error('Ingestion source not found');
|
||||||
@@ -196,6 +239,17 @@ export class IngestionService {
|
|||||||
.where(eq(ingestionSources.id, id))
|
.where(eq(ingestionSources.id, id))
|
||||||
.returning();
|
.returning();
|
||||||
|
|
||||||
|
await this.auditService.createAuditLog({
|
||||||
|
actorIdentifier: actor.id,
|
||||||
|
actionType: 'DELETE',
|
||||||
|
targetType: 'IngestionSource',
|
||||||
|
targetId: id,
|
||||||
|
actorIp,
|
||||||
|
details: {
|
||||||
|
sourceName: deletedSource.name,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
|
||||||
const decryptedSource = this.decryptSource(deletedSource);
|
const decryptedSource = this.decryptSource(deletedSource);
|
||||||
if (!decryptedSource) {
|
if (!decryptedSource) {
|
||||||
// Even if decryption fails, we should confirm deletion.
|
// Even if decryption fails, we should confirm deletion.
|
||||||
@@ -216,7 +270,7 @@ export class IngestionService {
|
|||||||
await ingestionQueue.add('initial-import', { ingestionSourceId: source.id });
|
await ingestionQueue.add('initial-import', { ingestionSourceId: source.id });
|
||||||
}
|
}
|
||||||
|
|
||||||
public static async triggerForceSync(id: string): Promise<void> {
|
public static async triggerForceSync(id: string, actor: User, actorIp: string): Promise<void> {
|
||||||
const source = await this.findById(id);
|
const source = await this.findById(id);
|
||||||
logger.info({ ingestionSourceId: id }, 'Force syncing started.');
|
logger.info({ ingestionSourceId: id }, 'Force syncing started.');
|
||||||
if (!source) {
|
if (!source) {
|
||||||
@@ -241,15 +295,35 @@ export class IngestionService {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Reset status to 'active'
|
// Reset status to 'active'
|
||||||
await this.update(id, {
|
await this.update(
|
||||||
status: 'active',
|
id,
|
||||||
lastSyncStatusMessage: 'Force sync triggered by user.',
|
{
|
||||||
|
status: 'active',
|
||||||
|
lastSyncStatusMessage: 'Force sync triggered by user.',
|
||||||
|
},
|
||||||
|
actor,
|
||||||
|
actorIp
|
||||||
|
);
|
||||||
|
|
||||||
|
await this.auditService.createAuditLog({
|
||||||
|
actorIdentifier: actor.id,
|
||||||
|
actionType: 'SYNC',
|
||||||
|
targetType: 'IngestionSource',
|
||||||
|
targetId: id,
|
||||||
|
actorIp,
|
||||||
|
details: {
|
||||||
|
sourceName: source.name,
|
||||||
|
},
|
||||||
});
|
});
|
||||||
|
|
||||||
await ingestionQueue.add('continuous-sync', { ingestionSourceId: source.id });
|
await ingestionQueue.add('continuous-sync', { ingestionSourceId: source.id });
|
||||||
}
|
}
|
||||||
|
|
||||||
public async performBulkImport(job: IInitialImportJob): Promise<void> {
|
public static async performBulkImport(
|
||||||
|
job: IInitialImportJob,
|
||||||
|
actor: User,
|
||||||
|
actorIp: string
|
||||||
|
): Promise<void> {
|
||||||
const { ingestionSourceId } = job;
|
const { ingestionSourceId } = job;
|
||||||
const source = await IngestionService.findById(ingestionSourceId);
|
const source = await IngestionService.findById(ingestionSourceId);
|
||||||
if (!source) {
|
if (!source) {
|
||||||
@@ -257,10 +331,15 @@ export class IngestionService {
|
|||||||
}
|
}
|
||||||
|
|
||||||
logger.info(`Starting bulk import for source: ${source.name} (${source.id})`);
|
logger.info(`Starting bulk import for source: ${source.name} (${source.id})`);
|
||||||
await IngestionService.update(ingestionSourceId, {
|
await IngestionService.update(
|
||||||
status: 'importing',
|
ingestionSourceId,
|
||||||
lastSyncStartedAt: new Date(),
|
{
|
||||||
});
|
status: 'importing',
|
||||||
|
lastSyncStartedAt: new Date(),
|
||||||
|
},
|
||||||
|
actor,
|
||||||
|
actorIp
|
||||||
|
);
|
||||||
|
|
||||||
const connector = EmailProviderFactory.createConnector(source);
|
const connector = EmailProviderFactory.createConnector(source);
|
||||||
|
|
||||||
@@ -288,12 +367,17 @@ export class IngestionService {
|
|||||||
}
|
}
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error(`Bulk import failed for source: ${source.name} (${source.id})`, error);
|
logger.error(`Bulk import failed for source: ${source.name} (${source.id})`, error);
|
||||||
await IngestionService.update(ingestionSourceId, {
|
await IngestionService.update(
|
||||||
status: 'error',
|
ingestionSourceId,
|
||||||
lastSyncFinishedAt: new Date(),
|
{
|
||||||
lastSyncStatusMessage:
|
status: 'error',
|
||||||
error instanceof Error ? error.message : 'An unknown error occurred.',
|
lastSyncFinishedAt: new Date(),
|
||||||
});
|
lastSyncStatusMessage:
|
||||||
|
error instanceof Error ? error.message : 'An unknown error occurred.',
|
||||||
|
},
|
||||||
|
actor,
|
||||||
|
actorIp
|
||||||
|
);
|
||||||
throw error; // Re-throw to allow BullMQ to handle the job failure
|
throw error; // Re-throw to allow BullMQ to handle the job failure
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -372,29 +456,63 @@ export class IngestionService {
             const attachmentHash = createHash('sha256')
                 .update(attachmentBuffer)
                 .digest('hex');
-            const attachmentPath = `${config.storage.openArchiverFolderName}/${source.name.replaceAll(' ', '-')}-${source.id}/attachments/${attachment.filename}`;
-            await storage.put(attachmentPath, attachmentBuffer);

-            const [newAttachment] = await db
-                .insert(attachmentsSchema)
-                .values({
-                    filename: attachment.filename,
-                    mimeType: attachment.contentType,
-                    sizeBytes: attachment.size,
-                    contentHashSha256: attachmentHash,
-                    storagePath: attachmentPath,
-                })
-                .onConflictDoUpdate({
-                    target: attachmentsSchema.contentHashSha256,
-                    set: { filename: attachment.filename },
-                })
-                .returning();
+            // Check if an attachment with the same hash already exists for this source
+            const existingAttachment = await db.query.attachments.findFirst({
+                where: and(
+                    eq(attachmentsSchema.contentHashSha256, attachmentHash),
+                    eq(attachmentsSchema.ingestionSourceId, source.id)
+                ),
+            });
+
+            let attachmentRecord = existingAttachment;
+
+            if (attachmentRecord) {
+                // If it exists, reuse the existing record and storage path; don't save the file again
+                logger.info(
+                    {
+                        attachmentHash,
+                        ingestionSourceId: source.id,
+                        reusedPath: attachmentRecord.storagePath,
+                    },
+                    'Reusing existing attachment file for deduplication.'
+                );
+            } else {
+                // If it's a new attachment, create a unique path and save it
+                const uniqueId = randomUUID().slice(0, 7);
+                const storagePath = `${config.storage.openArchiverFolderName}/${source.name.replaceAll(' ', '-')}-${source.id}/attachments/${uniqueId}-${attachment.filename}`;
+                await storage.put(storagePath, attachmentBuffer);
+
+                // Insert a new attachment record
+                [attachmentRecord] = await db
+                    .insert(attachmentsSchema)
+                    .values({
+                        filename: attachment.filename,
+                        mimeType: attachment.contentType,
+                        sizeBytes: attachment.size,
+                        contentHashSha256: attachmentHash,
+                        storagePath: storagePath,
+                        ingestionSourceId: source.id,
+                    })
+                    .returning();
+            }
+
+            // Link the attachment record (either new or existing) to the email
             await db
                 .insert(emailAttachments)
                 .values({
                     emailId: archivedEmail.id,
-                    attachmentId: newAttachment.id,
+                    attachmentId: attachmentRecord.id,
                 })
                 .onConflictDoNothing();
         }
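The scoped deduplication above keys attachments by the pair `(ingestionSourceId, contentHashSha256)`, which is what the new composite unique index enforces. A minimal in-memory sketch of that keying follows; `AttachmentDedupIndex` and its member names are illustrative only, not part of the codebase, and a `Map` stands in for the database table.

```typescript
// Sketch: per-source attachment deduplication keyed on (sourceId, contentHash).
import { createHash } from 'node:crypto';

type StoredAttachment = { storagePath: string; contentHashSha256: string };

class AttachmentDedupIndex {
    private byKey = new Map<string, StoredAttachment>();

    private key(sourceId: string, hash: string): string {
        return `${sourceId}:${hash}`;
    }

    /** Returns the existing record for this source + content, or registers a new one. */
    upsert(
        sourceId: string,
        content: Buffer,
        newPath: string
    ): { record: StoredAttachment; reused: boolean } {
        const hash = createHash('sha256').update(content).digest('hex');
        const existing = this.byKey.get(this.key(sourceId, hash));
        if (existing) {
            return { record: existing, reused: true };
        }
        const record: StoredAttachment = { storagePath: newPath, contentHashSha256: hash };
        this.byKey.set(this.key(sourceId, hash), record);
        return { record, reused: false };
    }
}

const index = new AttachmentDedupIndex();
const pdf = Buffer.from('%PDF-1.4 sample bytes');

const a = index.upsert('source-A', pdf, 'a/attachments/x-report.pdf');
const b = index.upsert('source-A', pdf, 'a/attachments/y-report.pdf'); // same source: reused
const c = index.upsert('source-B', pdf, 'b/attachments/z-report.pdf'); // other source: stored again

console.log(a.reused, b.reused, c.reused); // false true false
console.log(b.record.storagePath === a.record.storagePath); // true
```

This mirrors why the old global unique constraint was wrong: with per-source keys, `source-B` gets its own record even though the bytes are identical.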

packages/backend/src/services/IntegrityService.ts (new file, 93 lines)
@@ -0,0 +1,93 @@
import { db } from '../database';
import { archivedEmails, emailAttachments } from '../database/schema';
import { eq } from 'drizzle-orm';
import { StorageService } from './StorageService';
import { createHash } from 'crypto';
import { logger } from '../config/logger';
import type { IntegrityCheckResult } from '@open-archiver/types';
import { streamToBuffer } from '../helpers/streamToBuffer';

export class IntegrityService {
    private storageService = new StorageService();

    public async checkEmailIntegrity(emailId: string): Promise<IntegrityCheckResult[]> {
        const results: IntegrityCheckResult[] = [];

        // 1. Fetch the archived email
        const email = await db.query.archivedEmails.findFirst({
            where: eq(archivedEmails.id, emailId),
        });

        if (!email) {
            throw new Error('Archived email not found');
        }

        // 2. Check the email's integrity
        const emailStream = await this.storageService.get(email.storagePath);
        const emailBuffer = await streamToBuffer(emailStream);
        const currentEmailHash = createHash('sha256').update(emailBuffer).digest('hex');

        if (currentEmailHash === email.storageHashSha256) {
            results.push({ type: 'email', id: email.id, isValid: true });
        } else {
            results.push({
                type: 'email',
                id: email.id,
                isValid: false,
                reason: 'Stored hash does not match current hash.',
            });
        }

        // 3. If the email has attachments, check them
        if (email.hasAttachments) {
            const emailAttachmentsRelations = await db.query.emailAttachments.findMany({
                where: eq(emailAttachments.emailId, emailId),
                with: {
                    attachment: true,
                },
            });

            for (const relation of emailAttachmentsRelations) {
                const attachment = relation.attachment;
                try {
                    const attachmentStream = await this.storageService.get(attachment.storagePath);
                    const attachmentBuffer = await streamToBuffer(attachmentStream);
                    const currentAttachmentHash = createHash('sha256')
                        .update(attachmentBuffer)
                        .digest('hex');

                    if (currentAttachmentHash === attachment.contentHashSha256) {
                        results.push({
                            type: 'attachment',
                            id: attachment.id,
                            filename: attachment.filename,
                            isValid: true,
                        });
                    } else {
                        results.push({
                            type: 'attachment',
                            id: attachment.id,
                            filename: attachment.filename,
                            isValid: false,
                            reason: 'Stored hash does not match current hash.',
                        });
                    }
                } catch (error) {
                    logger.error(
                        { attachmentId: attachment.id, error },
                        'Failed to read attachment from storage for integrity check.'
                    );
                    results.push({
                        type: 'attachment',
                        id: attachment.id,
                        filename: attachment.filename,
                        isValid: false,
                        reason: 'Could not read attachment file from storage.',
                    });
                }
            }
        }

        return results;
    }
}
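Per file, the integrity check reduces to one operation: recompute the SHA-256 digest of the stored bytes and compare it with the digest recorded at archive time. A standalone sketch of that comparison; `verifyIntegrity` is an illustrative helper, not part of the service's API.

```typescript
// Sketch: recompute-and-compare check behind the integrity report.
import { createHash } from 'node:crypto';

function sha256Hex(data: Buffer): string {
    return createHash('sha256').update(data).digest('hex');
}

function verifyIntegrity(
    storedBytes: Buffer,
    recordedHash: string
): { isValid: boolean; reason?: string } {
    const currentHash = sha256Hex(storedBytes);
    return currentHash === recordedHash
        ? { isValid: true }
        : { isValid: false, reason: 'Stored hash does not match current hash.' };
}

const original = Buffer.from('From: alice@example.com\r\nSubject: Q3 report\r\n\r\nHello');
const recordedHash = sha256Hex(original); // captured when the email was archived

console.log(verifyIntegrity(original, recordedHash).isValid); // true
console.log(verifyIntegrity(Buffer.from('tampered bytes'), recordedHash).isValid); // false
```

Any bit flip in the stored `.eml` or attachment changes the digest, so corruption and tampering are both surfaced by the same comparison.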
packages/backend/src/services/JobsService.ts (new file, 101 lines)
@@ -0,0 +1,101 @@
import { Job, Queue } from 'bullmq';
import { ingestionQueue, indexingQueue } from '../jobs/queues';
import { IJob, IQueueCounts, IQueueDetails, IQueueOverview, JobStatus } from '@open-archiver/types';

export class JobsService {
    private queues: Queue[];

    constructor() {
        this.queues = [ingestionQueue, indexingQueue];
    }

    public async getQueues(): Promise<IQueueOverview[]> {
        const queueOverviews: IQueueOverview[] = [];
        for (const queue of this.queues) {
            const counts = await queue.getJobCounts(
                'active',
                'completed',
                'failed',
                'delayed',
                'waiting',
                'paused'
            );
            queueOverviews.push({
                name: queue.name,
                counts: {
                    active: counts.active || 0,
                    completed: counts.completed || 0,
                    failed: counts.failed || 0,
                    delayed: counts.delayed || 0,
                    waiting: counts.waiting || 0,
                    paused: counts.paused || 0,
                },
            });
        }
        return queueOverviews;
    }

    public async getQueueDetails(
        queueName: string,
        status: JobStatus,
        page: number,
        limit: number
    ): Promise<IQueueDetails> {
        const queue = this.queues.find((q) => q.name === queueName);
        if (!queue) {
            throw new Error(`Queue ${queueName} not found`);
        }

        const counts = await queue.getJobCounts(
            'active',
            'completed',
            'failed',
            'delayed',
            'waiting',
            'paused'
        );
        const start = (page - 1) * limit;
        const end = start + limit - 1;
        const jobStatus = status === 'waiting' ? 'wait' : status;
        const jobs = await queue.getJobs([jobStatus], start, end, true);
        const totalJobs = await queue.getJobCountByTypes(jobStatus);

        return {
            name: queue.name,
            counts: {
                active: counts.active || 0,
                completed: counts.completed || 0,
                failed: counts.failed || 0,
                delayed: counts.delayed || 0,
                waiting: counts.waiting || 0,
                paused: counts.paused || 0,
            },
            jobs: await Promise.all(jobs.map((job) => this.formatJob(job))),
            pagination: {
                currentPage: page,
                totalPages: Math.ceil(totalJobs / limit),
                totalJobs,
                limit,
            },
        };
    }

    private async formatJob(job: Job): Promise<IJob> {
        const state = await job.getState();
        return {
            id: job.id,
            name: job.name,
            data: job.data,
            state: state,
            failedReason: job.failedReason,
            timestamp: job.timestamp,
            processedOn: job.processedOn,
            finishedOn: job.finishedOn,
            attemptsMade: job.attemptsMade,
            stacktrace: job.stacktrace,
            returnValue: job.returnvalue,
            ingestionSourceId: job.data.ingestionSourceId,
            error: state === 'failed' ? job.stacktrace : undefined,
        };
    }
}
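`getQueueDetails` maps `(page, limit)` onto the inclusive `[start, end]` index range that `queue.getJobs()` expects, plus a total page count. The same arithmetic as a standalone sketch (`pageRange` is an illustrative helper, not part of the service):

```typescript
// Sketch: the pagination math used by the job-monitoring endpoint.
function pageRange(page: number, limit: number, totalJobs: number) {
    const start = (page - 1) * limit;
    const end = start + limit - 1; // inclusive upper bound, as passed to getJobs()
    const totalPages = Math.ceil(totalJobs / limit);
    return { start, end, totalPages };
}

console.log(pageRange(1, 10, 101)); // { start: 0, end: 9, totalPages: 11 }
console.log(pageRange(3, 25, 101)); // { start: 50, end: 74, totalPages: 5 }
```

Note the inclusive `end`: a limit of 10 spans indices 0–9, not 0–10, which is easy to get off by one when translating from exclusive-slice conventions.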
@@ -3,269 +3,270 @@ import { logger } from '../config/logger';

// Simple LRU cache for Tika results with statistics
class TikaCache {
    private cache = new Map<string, string>();
    private maxSize = 50;
    private hits = 0;
    private misses = 0;

    get(key: string): string | undefined {
        const value = this.cache.get(key);
        if (value !== undefined) {
            this.hits++;
            // LRU: Move element to the end
            this.cache.delete(key);
            this.cache.set(key, value);
        } else {
            this.misses++;
        }
        return value;
    }

    set(key: string, value: string): void {
        // If already exists, delete first
        if (this.cache.has(key)) {
            this.cache.delete(key);
        }
        // If cache is full, remove oldest element
        else if (this.cache.size >= this.maxSize) {
            const firstKey = this.cache.keys().next().value;
            if (firstKey !== undefined) {
                this.cache.delete(firstKey);
            }
        }

        this.cache.set(key, value);
    }

    getStats(): { size: number; maxSize: number; hits: number; misses: number; hitRate: number } {
        const total = this.hits + this.misses;
        const hitRate = total > 0 ? (this.hits / total) * 100 : 0;
        return {
            size: this.cache.size,
            maxSize: this.maxSize,
            hits: this.hits,
            misses: this.misses,
            hitRate: Math.round(hitRate * 100) / 100, // 2 decimal places
        };
    }

    reset(): void {
        this.cache.clear();
        this.hits = 0;
        this.misses = 0;
    }
}

// Semaphore for running Tika requests
class TikaSemaphore {
    private inProgress = new Map<string, Promise<string>>();
    private waitCount = 0;

    async acquire(key: string, operation: () => Promise<string>): Promise<string> {
        // Check if a request for this key is already running
        const existingPromise = this.inProgress.get(key);
        if (existingPromise) {
            this.waitCount++;
            logger.debug(`Waiting for in-progress Tika request (${key.slice(0, 8)}...)`);
            try {
                return await existingPromise;
            } finally {
                this.waitCount--;
            }
        }

        // Start new request
        const promise = this.executeOperation(key, operation);
        this.inProgress.set(key, promise);

        try {
            return await promise;
        } finally {
            // Remove promise from map when finished
            this.inProgress.delete(key);
        }
    }

    private async executeOperation(key: string, operation: () => Promise<string>): Promise<string> {
        try {
            return await operation();
        } catch (error) {
            // Remove promise from map even on errors
            logger.error(`Tika operation failed for key ${key.slice(0, 8)}...`, error);
            throw error;
        }
    }

    getStats(): { inProgress: number; waitCount: number } {
        return {
            inProgress: this.inProgress.size,
            waitCount: this.waitCount,
        };
    }

    clear(): void {
        this.inProgress.clear();
        this.waitCount = 0;
    }
}

export class OcrService {
    private tikaCache = new TikaCache();
    private tikaSemaphore = new TikaSemaphore();

    // Tika-based text extraction with cache and semaphore
    async extractTextWithTika(buffer: Buffer, mimeType: string): Promise<string> {
        const tikaUrl = process.env.TIKA_URL;
        if (!tikaUrl) {
            throw new Error('TIKA_URL environment variable not set');
        }

        // Cache key: SHA-256 hash of the buffer
        const hash = crypto.createHash('sha256').update(buffer).digest('hex');

        // Cache lookup (before semaphore!)
        const cachedResult = this.tikaCache.get(hash);
        if (cachedResult !== undefined) {
            logger.debug(`Tika cache hit for ${mimeType} (${buffer.length} bytes)`);
            return cachedResult;
        }

        // Use semaphore to deduplicate parallel requests
        return await this.tikaSemaphore.acquire(hash, async () => {
            // Check cache again (might have been filled by parallel request)
            const cachedAfterWait = this.tikaCache.get(hash);
            if (cachedAfterWait !== undefined) {
                logger.debug(`Tika cache hit after wait for ${mimeType} (${buffer.length} bytes)`);
                return cachedAfterWait;
            }

            logger.debug(`Executing Tika request for ${mimeType} (${buffer.length} bytes)`);

            // DNS fallback: If "tika" hostname, also try localhost
            const urlsToTry = [
                `${tikaUrl}/tika`,
                // Fallback in case of DNS problems with the "tika" hostname
                ...(tikaUrl.includes('://tika:')
                    ? [`${tikaUrl.replace('://tika:', '://localhost:')}/tika`]
                    : []),
            ];

            for (const url of urlsToTry) {
                try {
                    logger.debug(`Trying Tika URL: ${url}`);
                    const response = await fetch(url, {
                        method: 'PUT',
                        headers: {
                            'Content-Type': mimeType || 'application/octet-stream',
                            Accept: 'text/plain',
                            Connection: 'close',
                        },
                        body: buffer,
                        signal: AbortSignal.timeout(180000),
                    });

                    if (!response.ok) {
                        logger.warn(
                            `Tika extraction failed at ${url}: ${response.status} ${response.statusText}`
                        );
                        continue; // Try next URL
                    }

                    const text = await response.text();
                    const result = text.trim();

                    // Cache result (also empty strings to avoid repeated attempts)
                    this.tikaCache.set(hash, result);

                    const cacheStats = this.tikaCache.getStats();
                    const semaphoreStats = this.tikaSemaphore.getStats();
                    logger.debug(
                        `Tika extraction successful - Cache: ${cacheStats.hits}H/${cacheStats.misses}M (${cacheStats.hitRate}%) - Semaphore: ${semaphoreStats.inProgress} active, ${semaphoreStats.waitCount} waiting`
                    );

                    return result;
                } catch (error) {
                    logger.warn(
                        `Tika extraction error at ${url}:`,
                        error instanceof Error ? error.message : 'Unknown error'
                    );
                    // Continue to next URL
                }
            }

            // All URLs failed - cache this too (as empty string)
            logger.error('All Tika URLs failed');
            this.tikaCache.set(hash, '');
            return '';
        });
    }

    // Helper function to check Tika availability
    async checkTikaAvailability(): Promise<boolean> {
        const tikaUrl = process.env.TIKA_URL;
        if (!tikaUrl) {
            return false;
        }

        try {
            const response = await fetch(`${tikaUrl}/version`, {
                method: 'GET',
                signal: AbortSignal.timeout(5000), // 5 seconds timeout
            });

            if (response.ok) {
                const version = await response.text();
                logger.info(`Tika server available, version: ${version.trim()}`);
                return true;
            }

            return false;
        } catch (error) {
            logger.warn(
                'Tika server not available:',
                error instanceof Error ? error.message : 'Unknown error'
            );
            return false;
        }
    }

    // Optional: Tika health check on startup
    async initializeTextExtractor(): Promise<void> {
        const tikaUrl = process.env.TIKA_URL;

        if (tikaUrl) {
            const isAvailable = await this.checkTikaAvailability();
            if (!isAvailable) {
                logger.error(`Tika server configured but not available at: ${tikaUrl}`);
                logger.error('Text extraction will fall back to legacy methods or fail');
            }
        } else {
            logger.info('Using legacy text extraction methods (pdf2json, mammoth, xlsx)');
            logger.info(
                'Set TIKA_URL environment variable to use Apache Tika for better extraction'
            );
        }
    }

    // Get cache statistics
    getTikaCacheStats(): {
        size: number;
        maxSize: number;
        hits: number;
        misses: number;
        hitRate: number;
    } {
        return this.tikaCache.getStats();
    }

    // Get semaphore statistics
    getTikaSemaphoreStats(): { inProgress: number; waitCount: number } {
        return this.tikaSemaphore.getStats();
    }

    // Clear cache (e.g. for tests or manual reset)
    clearTikaCache(): void {
        this.tikaCache.reset();
        this.tikaSemaphore.clear();
        logger.info('Tika cache and semaphore cleared');
    }
}
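The `TikaSemaphore` pattern above coalesces concurrent extractions of the same content hash into one upstream call: the first caller stores its promise in a map, later callers for the same key await that same promise, and the entry is removed when the promise settles. A minimal standalone sketch of the idea (`expensiveExtract`, `dedupedExtract`, and the `fetchCount` bookkeeping are illustrative, not the production class):

```typescript
// Sketch: in-flight request deduplication keyed by content hash.
let fetchCount = 0; // counts how often the "upstream" (Tika) is actually called

const inFlight = new Map<string, Promise<string>>();

async function expensiveExtract(key: string): Promise<string> {
    fetchCount++;
    await new Promise((resolve) => setTimeout(resolve, 10)); // simulate upstream latency
    return `text-for-${key}`;
}

function dedupedExtract(key: string): Promise<string> {
    // Reuse the in-flight promise if one exists for this key.
    const existing = inFlight.get(key);
    if (existing) return existing;
    const p = expensiveExtract(key).finally(() => inFlight.delete(key));
    inFlight.set(key, p);
    return p;
}

(async () => {
    // Three concurrent callers for the same key share one upstream call.
    const results = await Promise.all([
        dedupedExtract('h1'),
        dedupedExtract('h1'),
        dedupedExtract('h1'),
    ]);
    console.log(results[0]); // text-for-h1
})();
```

Cleaning up in `finally` matters: a failed extraction must not leave a rejected promise in the map, or every later caller for that key would be handed the stale failure.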

@@ -1,16 +1,25 @@
 import { Index, MeiliSearch, SearchParams } from 'meilisearch';
 import { config } from '../config';
-import type { SearchQuery, SearchResult, EmailDocument, TopSender } from '@open-archiver/types';
+import type {
+    SearchQuery,
+    SearchResult,
+    EmailDocument,
+    TopSender,
+    User,
+} from '@open-archiver/types';
 import { FilterBuilder } from './FilterBuilder';
+import { AuditService } from './AuditService';

 export class SearchService {
     private client: MeiliSearch;
+    private auditService: AuditService;

     constructor() {
         this.client = new MeiliSearch({
             host: config.search.host,
             apiKey: config.search.apiKey,
         });
+        this.auditService = new AuditService();
     }

     public async getIndex<T extends Record<string, any>>(name: string): Promise<Index<T>> {
@@ -48,7 +57,11 @@ export class SearchService {
         return index.deleteDocuments({ filter });
     }

-    public async searchEmails(dto: SearchQuery, userId: string): Promise<SearchResult> {
+    public async searchEmails(
+        dto: SearchQuery,
+        userId: string,
+        actorIp: string
+    ): Promise<SearchResult> {
         const { query, filters, page = 1, limit = 10, matchingStrategy = 'last' } = dto;
         const index = await this.getIndex<EmailDocument>('emails');

@@ -84,9 +97,24 @@ export class SearchService {
                 searchParams.filter = searchFilter;
             }
         }
-        console.log('searchParams', searchParams);
+        // console.log('searchParams', searchParams);
         const searchResults = await index.search(query, searchParams);

+        await this.auditService.createAuditLog({
+            actorIdentifier: userId,
+            actionType: 'SEARCH',
+            targetType: 'ArchivedEmail',
+            targetId: '',
+            actorIp,
+            details: {
+                query,
+                filters,
+                page,
+                limit,
+                matchingStrategy,
+            },
+        });
+
         return {
             hits: searchResults.hits,
             total: searchResults.estimatedTotalHits ?? searchResults.hits.length,

@@ -1,7 +1,7 @@
 import { db } from '../database';
 import { systemSettings } from '../database/schema/system-settings';
-import type { SystemSettings } from '@open-archiver/types';
-import { eq } from 'drizzle-orm';
+import type { SystemSettings, User } from '@open-archiver/types';
+import { AuditService } from './AuditService';

 const DEFAULT_SETTINGS: SystemSettings = {
     language: 'en',
@@ -10,6 +10,7 @@ const DEFAULT_SETTINGS: SystemSettings = {
 };

 export class SettingsService {
+    private auditService = new AuditService();
     /**
      * Retrieves the current system settings.
      * If no settings exist, it initializes and returns the default settings.
@@ -30,13 +31,36 @@ export class SettingsService {
      * @param newConfig - A partial object of the new settings configuration.
      * @returns The updated system settings.
      */
-    public async updateSystemSettings(newConfig: Partial<SystemSettings>): Promise<SystemSettings> {
+    public async updateSystemSettings(
+        newConfig: Partial<SystemSettings>,
+        actor: User,
+        actorIp: string
+    ): Promise<SystemSettings> {
         const currentConfig = await this.getSystemSettings();
         const mergedConfig = { ...currentConfig, ...newConfig };

         // Since getSettings ensures a record always exists, we can directly update.
         const [result] = await db.update(systemSettings).set({ config: mergedConfig }).returning();

+        const changedFields = Object.keys(newConfig).filter(
+            (key) =>
+                currentConfig[key as keyof SystemSettings] !==
+                newConfig[key as keyof SystemSettings]
+        );
+
+        if (changedFields.length > 0) {
+            await this.auditService.createAuditLog({
+                actorIdentifier: actor.id,
+                actionType: 'UPDATE',
+                targetType: 'SystemSettings',
+                targetId: 'system',
+                actorIp,
+                details: {
+                    changedFields,
+                },
+            });
+        }
+
         return result.config;
     }
|
|||||||
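The audit entry above records only the keys whose values actually changed between the current config and the incoming patch. A minimal standalone sketch of that shallow comparison (the `diffChangedFields` helper name is ours, not the codebase's):

```typescript
// Shallow diff: return the keys of `patch` whose values differ from `current`.
// Note: strict !== means nested objects always read as "changed" unless they
// are the same reference — fine for flat settings objects like this one.
function diffChangedFields<T extends Record<string, unknown>>(
	current: T,
	patch: Partial<T>
): string[] {
	return Object.keys(patch).filter(
		(key) => current[key as keyof T] !== patch[key as keyof T]
	);
}

const current = { language: 'en', theme: 'light' };
const patch = { language: 'de', theme: 'light' };
console.log(diffChangedFields(current, patch)); // → [ 'language' ]
```

Logging field names rather than values keeps the audit trail useful without copying potentially sensitive settings into the log.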
@@ -2,11 +2,25 @@ import { IStorageProvider, StorageConfig } from '@open-archiver/types';
 import { LocalFileSystemProvider } from './storage/LocalFileSystemProvider';
 import { S3StorageProvider } from './storage/S3StorageProvider';
 import { config } from '../config/index';
+import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';
+import { streamToBuffer } from '../helpers/streamToBuffer';
+import { Readable } from 'stream';
+
+/**
+ * A unique identifier for Open Archiver encrypted files. This value SHOULD NOT BE ALTERED in future development to ensure compatibility.
+ */
+const ENCRYPTION_PREFIX = Buffer.from('oa_enc_idf_v1::');
+
 export class StorageService implements IStorageProvider {
 	private provider: IStorageProvider;
+	private encryptionKey: Buffer | null = null;
+	private readonly algorithm = 'aes-256-cbc';
 
 	constructor(storageConfig: StorageConfig = config.storage) {
+		if (storageConfig.encryptionKey) {
+			this.encryptionKey = Buffer.from(storageConfig.encryptionKey, 'hex');
+		}
+
 		switch (storageConfig.type) {
 			case 'local':
 				this.provider = new LocalFileSystemProvider(storageConfig);
@@ -19,12 +33,52 @@ export class StorageService implements IStorageProvider {
 		}
 	}
 
-	put(path: string, content: Buffer | NodeJS.ReadableStream): Promise<void> {
-		return this.provider.put(path, content);
+	private async encrypt(content: Buffer): Promise<Buffer> {
+		if (!this.encryptionKey) {
+			return content;
+		}
+		const iv = randomBytes(16);
+		const cipher = createCipheriv(this.algorithm, this.encryptionKey, iv);
+		const encrypted = Buffer.concat([cipher.update(content), cipher.final()]);
+		return Buffer.concat([ENCRYPTION_PREFIX, iv, encrypted]);
 	}
 
-	get(path: string): Promise<NodeJS.ReadableStream> {
-		return this.provider.get(path);
+	private async decrypt(content: Buffer): Promise<Buffer> {
+		if (!this.encryptionKey) {
+			return content;
+		}
+		const prefix = content.subarray(0, ENCRYPTION_PREFIX.length);
+		if (!prefix.equals(ENCRYPTION_PREFIX)) {
+			// File is not encrypted, return as is
+			return content;
+		}
+
+		try {
+			const iv = content.subarray(ENCRYPTION_PREFIX.length, ENCRYPTION_PREFIX.length + 16);
+			const encrypted = content.subarray(ENCRYPTION_PREFIX.length + 16);
+			const decipher = createDecipheriv(this.algorithm, this.encryptionKey, iv);
+			return Buffer.concat([decipher.update(encrypted), decipher.final()]);
+		} catch (error) {
+			// Decryption failed for a file that has the prefix.
+			// This indicates a corrupted file or a wrong key.
+			throw new Error('Failed to decrypt file. It may be corrupted or the key is incorrect.');
+		}
+	}
+
+	async put(path: string, content: Buffer | NodeJS.ReadableStream): Promise<void> {
+		const buffer =
+			content instanceof Buffer
+				? content
+				: await streamToBuffer(content as NodeJS.ReadableStream);
+		const encryptedContent = await this.encrypt(buffer);
+		return this.provider.put(path, encryptedContent);
+	}
+
+	async get(path: string): Promise<NodeJS.ReadableStream> {
+		const stream = await this.provider.get(path);
+		const buffer = await streamToBuffer(stream);
+		const decryptedContent = await this.decrypt(buffer);
+		return Readable.from(decryptedContent);
 	}
 
 	delete(path: string): Promise<void> {
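The `put`/`get` pair above wraps every stored object in a small envelope: a fixed magic prefix, a random 16-byte IV, then the AES-256-CBC ciphertext. The prefix is what lets `decrypt` pass pre-existing unencrypted files through untouched. A self-contained roundtrip sketch of that envelope format (the `sealBuffer`/`openBuffer` names are ours):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Magic marker, same idea as ENCRYPTION_PREFIX in the service above.
const PREFIX = Buffer.from("oa_enc_idf_v1::");

// Envelope layout: PREFIX | 16-byte IV | AES-256-CBC ciphertext
function sealBuffer(plain: Buffer, key: Buffer): Buffer {
	const iv = randomBytes(16); // fresh IV per object, stored alongside the data
	const cipher = createCipheriv("aes-256-cbc", key, iv);
	return Buffer.concat([PREFIX, iv, cipher.update(plain), cipher.final()]);
}

function openBuffer(stored: Buffer, key: Buffer): Buffer {
	// Legacy/plaintext files lack the prefix and are returned unchanged.
	if (!stored.subarray(0, PREFIX.length).equals(PREFIX)) return stored;
	const iv = stored.subarray(PREFIX.length, PREFIX.length + 16);
	const ciphertext = stored.subarray(PREFIX.length + 16);
	const decipher = createDecipheriv("aes-256-cbc", key, iv);
	return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

const key = randomBytes(32); // AES-256 requires a 32-byte key (hex-decoded in the service)
const sealed = sealBuffer(Buffer.from("hello archive"), key);
console.log(openBuffer(sealed, key).toString()); // → hello archive
```

Because the IV travels with each object, no extra key-management state is needed beyond the single configured key; the trade-off is that `get` must buffer the whole object before decrypting.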
@@ -3,8 +3,10 @@ import * as schema from '../database/schema';
 import { eq, sql } from 'drizzle-orm';
 import { hash } from 'bcryptjs';
 import type { CaslPolicy, User } from '@open-archiver/types';
+import { AuditService } from './AuditService';
 
 export class UserService {
+	private static auditService = new AuditService();
 	/**
 	 * Finds a user by their email address.
 	 * @param email The email address of the user to find.
@@ -60,7 +62,9 @@ export class UserService {
 
 	public async createUser(
 		userDetails: Pick<User, 'email' | 'first_name' | 'last_name'> & { password?: string },
-		roleId: string
+		roleId: string,
+		actor: User,
+		actorIp: string
 	): Promise<typeof schema.users.$inferSelect> {
 		const { email, first_name, last_name, password } = userDetails;
 		const hashedPassword = password ? await hash(password, 10) : undefined;
@@ -80,33 +84,72 @@ export class UserService {
 			roleId: roleId,
 		});
 
+		await UserService.auditService.createAuditLog({
+			actorIdentifier: actor.id,
+			actionType: 'CREATE',
+			targetType: 'User',
+			targetId: newUser[0].id,
+			actorIp,
+			details: {
+				createdUserEmail: newUser[0].email,
+			},
+		});
+
 		return newUser[0];
 	}
 
 	public async updateUser(
 		id: string,
 		userDetails: Partial<Pick<User, 'email' | 'first_name' | 'last_name'>>,
-		roleId?: string
+		roleId: string | undefined,
+		actor: User,
+		actorIp: string
 	): Promise<typeof schema.users.$inferSelect | null> {
+		const originalUser = await this.findById(id);
 		const updatedUser = await db
 			.update(schema.users)
 			.set(userDetails)
 			.where(eq(schema.users.id, id))
 			.returning();
 
-		if (roleId) {
+		if (roleId && originalUser?.role?.id !== roleId) {
 			await db.delete(schema.userRoles).where(eq(schema.userRoles.userId, id));
 			await db.insert(schema.userRoles).values({
 				userId: id,
 				roleId: roleId,
 			});
+			await UserService.auditService.createAuditLog({
+				actorIdentifier: actor.id,
+				actionType: 'UPDATE',
+				targetType: 'User',
+				targetId: id,
+				actorIp,
+				details: {
+					field: 'role',
+					oldValue: originalUser?.role?.name,
+					newValue: roleId, // TODO: get role name
+				},
+			});
 		}
 
+		// TODO: log other user detail changes
+
 		return updatedUser[0] || null;
 	}
 
-	public async deleteUser(id: string): Promise<void> {
+	public async deleteUser(id: string, actor: User, actorIp: string): Promise<void> {
+		const userToDelete = await this.findById(id);
 		await db.delete(schema.users).where(eq(schema.users.id, id));
+		await UserService.auditService.createAuditLog({
+			actorIdentifier: actor.id,
+			actionType: 'DELETE',
+			targetType: 'User',
+			targetId: id,
+			actorIp,
+			details: {
+				deletedUserEmail: userToDelete?.email,
+			},
+		});
 	}
 
 	/**
@@ -152,6 +195,17 @@ export class UserService {
 			roleId: superAdminRole.id,
 		});
 
+		await UserService.auditService.createAuditLog({
+			actorIdentifier: 'SYSTEM',
+			actionType: 'SETUP',
+			targetType: 'User',
+			targetId: newUser[0].id,
+			actorIp: '::1', // System action
+			details: {
+				setupAdminEmail: newUser[0].email,
+			},
+		});
+
 		return newUser[0];
 	}
@@ -8,6 +8,7 @@ import type {
 import type { IEmailConnector } from '../EmailProviderFactory';
 import { ImapFlow } from 'imapflow';
 import { simpleParser, ParsedMail, Attachment, AddressObject, Headers } from 'mailparser';
+import { config } from '../../config';
 import { logger } from '../../config/logger';
 import { getThreadId } from './helpers/utils';
 
@@ -154,24 +155,18 @@ export class ImapConnector implements IEmailConnector {
 		const mailboxes = await this.withRetry(async () => await this.client.list());
 
 		const processableMailboxes = mailboxes.filter((mailbox) => {
-			// filter out trash and all mail emails
+			if (config.app.allInclusiveArchive) {
+				return true;
+			}
+			// filter out junk/spam mail emails
 			if (mailbox.specialUse) {
 				const specialUse = mailbox.specialUse.toLowerCase();
-				if (
-					specialUse === '\\junk' ||
-					specialUse === '\\trash' ||
-					specialUse === '\\all'
-				) {
+				if (specialUse === '\\junk' || specialUse === '\\trash') {
 					return false;
 				}
 			}
 			// Fallback to checking flags
-			if (
-				mailbox.flags.has('\\Noselect') ||
-				mailbox.flags.has('\\Trash') ||
-				mailbox.flags.has('\\Junk') ||
-				mailbox.flags.has('\\All')
-			) {
+			if (mailbox.flags.has('\\Trash') || mailbox.flags.has('\\Junk')) {
 				return false;
 			}
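The mailbox filter above reduces to a pure predicate: archive everything when the all-inclusive flag is set, otherwise skip junk and trash, checking the RFC 6154 special-use attribute first and the mailbox flags as a fallback. A sketch under our own names (`shouldArchiveMailbox` and `MailboxInfo` are not the codebase's):

```typescript
interface MailboxInfo {
	specialUse?: string; // e.g. '\\Junk', '\\Trash' (RFC 6154 special-use attribute)
	flags: Set<string>;  // e.g. '\\Trash', '\\Junk'
}

// Mirrors the new filter: keep everything when allInclusive is true,
// otherwise drop junk/trash by special-use first, flags as fallback.
function shouldArchiveMailbox(mailbox: MailboxInfo, allInclusive: boolean): boolean {
	if (allInclusive) return true;
	const specialUse = mailbox.specialUse?.toLowerCase();
	if (specialUse === "\\junk" || specialUse === "\\trash") return false;
	if (mailbox.flags.has("\\Trash") || mailbox.flags.has("\\Junk")) return false;
	return true;
}

console.log(shouldArchiveMailbox({ specialUse: "\\Junk", flags: new Set<string>() }, false)); // → false
console.log(shouldArchiveMailbox({ specialUse: "\\Junk", flags: new Set<string>() }, true));  // → true
```

Note the diff also stops excluding `\\All` and `\\Noselect` mailboxes, so Gmail-style "All Mail" folders are now archived by default, not only under the all-inclusive setting.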
@@ -1,9 +1,9 @@
 import type {
 	MboxImportCredentials,
 	EmailObject,
 	EmailAddress,
 	SyncState,
 	MailboxUser,
 } from '@open-archiver/types';
 import type { IEmailConnector } from '../EmailProviderFactory';
 import { simpleParser, ParsedMail, Attachment, AddressObject } from 'mailparser';
@@ -15,160 +15,160 @@ import { createHash } from 'crypto';
 import { streamToBuffer } from '../../helpers/streamToBuffer';
 
 export class MboxConnector implements IEmailConnector {
 	private storage: StorageService;
 
 	constructor(private credentials: MboxImportCredentials) {
 		this.storage = new StorageService();
 	}
 
 	public async testConnection(): Promise<boolean> {
 		try {
 			if (!this.credentials.uploadedFilePath) {
 				throw Error('Mbox file path not provided.');
 			}
 			if (!this.credentials.uploadedFilePath.includes('.mbox')) {
 				throw Error('Provided file is not in the MBOX format.');
 			}
 			const fileExist = await this.storage.exists(this.credentials.uploadedFilePath);
 			if (!fileExist) {
 				throw Error('Mbox file upload not finished yet, please wait.');
 			}
 
 			return true;
 		} catch (error) {
 			logger.error({ error, credentials: this.credentials }, 'Mbox file validation failed.');
 			throw error;
 		}
 	}
 
 	public async *listAllUsers(): AsyncGenerator<MailboxUser> {
 		const displayName =
 			this.credentials.uploadedFileName || `mbox-import-${new Date().getTime()}`;
 		logger.info(`Found potential mailbox: ${displayName}`);
 		const constructedPrimaryEmail = `${displayName.replace(/ /g, '.').toLowerCase()}@mbox.local`;
 		yield {
 			id: constructedPrimaryEmail,
 			primaryEmail: constructedPrimaryEmail,
 			displayName: displayName,
 		};
 	}
 
 	public async *fetchEmails(
 		userEmail: string,
 		syncState?: SyncState | null
 	): AsyncGenerator<EmailObject | null> {
 		try {
 			const fileStream = await this.storage.get(this.credentials.uploadedFilePath);
 			const fileBuffer = await streamToBuffer(fileStream as Readable);
 			const mboxContent = fileBuffer.toString('utf-8');
 			const emailDelimiter = '\nFrom ';
 			const emails = mboxContent.split(emailDelimiter);
 
 			// The first split part might be empty or part of the first email's header, so we adjust.
 			if (emails.length > 0 && !mboxContent.startsWith('From ')) {
 				emails.shift(); // Adjust if the file doesn't start with "From "
 			}
 
 			logger.info(`Found ${emails.length} potential emails in the mbox file.`);
 			let emailCount = 0;
 
 			for (const email of emails) {
 				try {
 					// Re-add the "From " delimiter for the parser, except for the very first email
 					const emailWithDelimiter =
 						emailCount > 0 || mboxContent.startsWith('From ') ? `From ${email}` : email;
 					const emailBuffer = Buffer.from(emailWithDelimiter, 'utf-8');
 					const emailObject = await this.parseMessage(emailBuffer, '');
 					yield emailObject;
 					emailCount++;
 				} catch (error) {
 					logger.error(
 						{ error, file: this.credentials.uploadedFilePath },
 						'Failed to process a single message from mbox file. Skipping.'
 					);
 				}
 			}
 			logger.info(`Finished processing mbox file. Total emails processed: ${emailCount}`);
 		} finally {
 			try {
 				await this.storage.delete(this.credentials.uploadedFilePath);
 			} catch (error) {
 				logger.error(
 					{ error, file: this.credentials.uploadedFilePath },
 					'Failed to delete mbox file after processing.'
 				);
 			}
 		}
 	}
 
 	private async parseMessage(emlBuffer: Buffer, path: string): Promise<EmailObject> {
 		const parsedEmail: ParsedMail = await simpleParser(emlBuffer);
 
 		const attachments = parsedEmail.attachments.map((attachment: Attachment) => ({
 			filename: attachment.filename || 'untitled',
 			contentType: attachment.contentType,
 			size: attachment.size,
 			content: attachment.content as Buffer,
 		}));
 
 		const mapAddresses = (
 			addresses: AddressObject | AddressObject[] | undefined
 		): EmailAddress[] => {
 			if (!addresses) return [];
 			const addressArray = Array.isArray(addresses) ? addresses : [addresses];
 			return addressArray.flatMap((a) =>
 				a.value.map((v) => ({
 					name: v.name,
 					address: v.address?.replaceAll(`'`, '') || '',
 				}))
 			);
 		};
 
 		const threadId = getThreadId(parsedEmail.headers);
 		let messageId = parsedEmail.messageId;
 
 		if (!messageId) {
 			messageId = `generated-${createHash('sha256').update(emlBuffer).digest('hex')}`;
 		}
 
 		const from = mapAddresses(parsedEmail.from);
 		if (from.length === 0) {
 			from.push({ name: 'No Sender', address: 'No Sender' });
 		}
 
 		// Extract folder path from headers. Mbox files don't have a standard folder structure, so we rely on custom headers added by email clients.
 		// Gmail uses 'X-Gmail-Labels', and other clients like Thunderbird may use 'X-Folder'.
 		const gmailLabels = parsedEmail.headers.get('x-gmail-labels');
 		const folderHeader = parsedEmail.headers.get('x-folder');
 		let finalPath = '';
 
 		if (gmailLabels && typeof gmailLabels === 'string') {
 			// We take the first label as the primary folder.
 			// Gmail labels can be hierarchical, but we'll simplify to the first label.
 			finalPath = gmailLabels.split(',')[0];
 		} else if (folderHeader && typeof folderHeader === 'string') {
 			finalPath = folderHeader;
 		}
 
 		return {
 			id: messageId,
 			threadId: threadId,
 			from,
 			to: mapAddresses(parsedEmail.to),
 			cc: mapAddresses(parsedEmail.cc),
 			bcc: mapAddresses(parsedEmail.bcc),
 			subject: parsedEmail.subject || '',
 			body: parsedEmail.text || '',
 			html: parsedEmail.html || '',
 			headers: parsedEmail.headers,
 			attachments,
 			receivedAt: parsedEmail.date || new Date(),
 			eml: emlBuffer,
 			path: finalPath,
 		};
 	}
 
 	public getUpdatedSyncState(): SyncState {
 		return {};
 	}
 }
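The splitting logic in `fetchEmails` above divides the mbox text on the `'\nFrom '` separator and then re-attaches the delimiter to every chunk that lost it. A simplified standalone sketch of the same idea (`splitMbox` is our name; it handles the leading separator slightly differently from the connector, which tracks it with an email counter):

```typescript
// Split an mbox file into raw per-message strings. In mbox format each
// message begins with a "From " separator line.
function splitMbox(mboxContent: string): string[] {
	// Drop anything before the first "From " separator, then split.
	const start = mboxContent.startsWith("From ")
		? 0
		: mboxContent.indexOf("\nFrom ") + 1;
	if (start < 1 && !mboxContent.startsWith("From ")) return []; // no separator at all
	return mboxContent
		.slice(start)
		.split("\nFrom ")
		// Only chunks after the first lost their "From " prefix to the split.
		.map((part, i) => (i === 0 ? part : `From ${part}`));
}

const mbox =
	"From a@x Thu Jan  1 00:00:00 1970\nSubject: one\n\nbody1\n" +
	"From b@x Thu Jan  1 00:00:01 1970\nSubject: two\n\nbody2\n";
console.log(splitMbox(mbox).length); // → 2
```

A caveat worth noting: naive splitting treats any body line beginning with `From ` as a new message, which is why classic mbox writers escape such lines as `>From ` ("From-munging"); a production parser should account for that.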
@@ -281,8 +281,8 @@ export class PSTConnector implements IEmailConnector {
 					emlBuffer ?? Buffer.from(parsedEmail.text || parsedEmail.html || '', 'utf-8')
 				)
 					.digest('hex')}-${createHash('sha256')
 					.update(emlBuffer ?? Buffer.from(msg.subject || '', 'utf-8'))
 					.digest('hex')}-${msg.clientSubmitTime?.getTime()}`;
 		}
 		return {
 			id: messageId,
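Both the mbox and PST connectors synthesize a deterministic identifier from SHA-256 digests of the raw content when a message carries no `Message-ID`, so the same bytes always map to the same archive ID and re-imports cannot mint fresh identifiers. A minimal sketch of the single-digest variant used by the mbox connector:

```typescript
import { createHash } from "node:crypto";

// Deterministic fallback ID: same bytes in, same ID out.
function fallbackMessageId(emlBuffer: Buffer): string {
	return `generated-${createHash("sha256").update(emlBuffer).digest("hex")}`;
}

const id1 = fallbackMessageId(Buffer.from("raw eml bytes"));
const id2 = fallbackMessageId(Buffer.from("raw eml bytes"));
console.log(id1 === id2); // → true
```

This is the same SHA-256 signature mechanism the integrity report relies on: recompute the digest of the stored `.eml` and compare it to the value recorded at archive time.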
@@ -13,12 +13,11 @@ const processor = async (job: any) => {
 
 const worker = new Worker('indexing', processor, {
 	connection,
-	concurrency: 5,
 	removeOnComplete: {
-		count: 1000, // keep last 1000 jobs
+		count: 100, // keep last 100 jobs
 	},
 	removeOnFail: {
-		count: 5000, // keep last 5000 failed jobs
+		count: 500, // keep last 500 failed jobs
 	},
 });
@@ -4,8 +4,14 @@
 		"outDir": "./dist",
 		"rootDir": "./src",
 		"emitDecoratorMetadata": true,
-		"experimentalDecorators": true
+		"experimentalDecorators": true,
+		"composite": true
 	},
 	"include": ["src/**/*.ts"],
-	"exclude": ["node_modules", "dist"]
+	"exclude": ["node_modules", "dist"],
+	"references": [
+		{
+			"path": "../types"
+		}
+	]
 }
File diff suppressed because one or more lines are too long
@@ -1,6 +1,7 @@
 {
 	"name": "@open-archiver/frontend",
 	"private": true,
+	"license": "SEE LICENSE IN LICENSE file",
 	"version": "0.0.1",
 	"type": "module",
 	"scripts": {
@@ -16,9 +17,9 @@
 		"@iconify/svelte": "^5.0.1",
 		"@open-archiver/types": "workspace:*",
 		"@sveltejs/kit": "^2.38.1",
-		"bits-ui": "^2.8.10",
 		"clsx": "^2.1.1",
 		"d3-shape": "^3.2.0",
+		"date-fns": "^4.1.0",
 		"html-entities": "^2.6.0",
 		"jose": "^6.0.1",
 		"lucide-svelte": "^0.525.0",
@@ -31,13 +32,14 @@
 	},
 	"devDependencies": {
 		"@internationalized/date": "^3.8.2",
-		"@lucide/svelte": "^0.515.0",
+		"@lucide/svelte": "^0.544.0",
 		"@sveltejs/adapter-auto": "^6.0.0",
 		"@sveltejs/adapter-node": "^5.2.13",
 		"@sveltejs/vite-plugin-svelte": "^5.0.0",
 		"@tailwindcss/vite": "^4.0.0",
 		"@types/d3-shape": "^3.1.7",
 		"@types/semver": "^7.7.1",
+		"bits-ui": "^2.12.0",
 		"dotenv": "^17.2.0",
 		"layerchart": "2.0.0-next.27",
 		"mode-watcher": "^1.1.0",
packages/frontend/src/app.d.ts (vendored, 1 change)
@@ -8,6 +8,7 @@ declare global {
 	interface Locals {
 		user: Omit<User, 'passwordHash'> | null;
 		accessToken: string | null;
+		enterpriseMode: boolean | null;
 	}
 	// interface PageData {}
 	// interface PageState {}
@@ -22,6 +22,11 @@ export const handle: Handle = async ({ event, resolve }) => {
 		event.locals.user = null;
 		event.locals.accessToken = null;
 	}
+	if (import.meta.env.VITE_ENTERPRISE_MODE === true) {
+		event.locals.enterpriseMode = true;
+	} else {
+		event.locals.enterpriseMode = false;
+	}
+
 	return resolve(event);
 };
@@ -99,7 +99,7 @@
 		});
 		const result = await response.json();
 		if (!response.ok) {
-			throw new Error(`File upload failed + ${result}`);
+			throw new Error(result.message || 'File upload failed');
 		}
 
 		formData.providerConfig.uploadedFilePath = result.filePath;
@@ -107,10 +108,11 @@
 		fileUploading = false;
 	} catch (error) {
 		fileUploading = false;
+		const message = error instanceof Error ? error.message : String(error);
 		setAlert({
 			type: 'error',
 			title: $t('app.components.ingestion_source_form.upload_failed'),
-			message: JSON.stringify(error),
+			message,
 			duration: 5000,
 			show: true,
 		});
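The `error instanceof Error ? error.message : String(error)` pattern above fixes what the removed `JSON.stringify(error)` got wrong: `Error` properties like `message` and `stack` are non-enumerable, so stringifying an `Error` yields `{}` and the alert showed an empty object. A tiny sketch of the difference:

```typescript
// Normalize an unknown caught value into a user-facing string,
// as the form's catch block now does.
function toUserMessage(error: unknown): string {
	return error instanceof Error ? error.message : String(error);
}

const err = new Error("upload failed");
console.log(JSON.stringify(err));   // → {} (message/stack are non-enumerable)
console.log(toUserMessage(err));    // → upload failed
console.log(toUserMessage("oops")); // → oops
```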
packages/frontend/src/lib/components/ui/pagination/index.ts (new file, 25 lines)
@@ -0,0 +1,25 @@
+import Root from "./pagination.svelte";
+import Content from "./pagination-content.svelte";
+import Item from "./pagination-item.svelte";
+import Link from "./pagination-link.svelte";
+import PrevButton from "./pagination-prev-button.svelte";
+import NextButton from "./pagination-next-button.svelte";
+import Ellipsis from "./pagination-ellipsis.svelte";
+
+export {
+	Root,
+	Content,
+	Item,
+	Link,
+	PrevButton,
+	NextButton,
+	Ellipsis,
+	//
+	Root as Pagination,
+	Content as PaginationContent,
+	Item as PaginationItem,
+	Link as PaginationLink,
+	PrevButton as PaginationPrevButton,
+	NextButton as PaginationNextButton,
+	Ellipsis as PaginationEllipsis,
+};
@@ -0,0 +1,20 @@
+<script lang="ts">
+	import type { HTMLAttributes } from "svelte/elements";
+	import { cn, type WithElementRef } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		children,
+		...restProps
+	}: WithElementRef<HTMLAttributes<HTMLUListElement>> = $props();
+</script>
+
+<ul
+	bind:this={ref}
+	data-slot="pagination-content"
+	class={cn("flex flex-row items-center gap-1", className)}
+	{...restProps}
+>
+	{@render children?.()}
+</ul>
packages/frontend/src/lib/components/ui/pagination/pagination-ellipsis.svelte (new file)
@@ -0,0 +1,22 @@
+<script lang="ts">
+	import EllipsisIcon from "@lucide/svelte/icons/ellipsis";
+	import { cn, type WithElementRef, type WithoutChildren } from "$lib/utils.js";
+	import type { HTMLAttributes } from "svelte/elements";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		...restProps
+	}: WithoutChildren<WithElementRef<HTMLAttributes<HTMLSpanElement>>> = $props();
+</script>
+
+<span
+	bind:this={ref}
+	aria-hidden="true"
+	data-slot="pagination-ellipsis"
+	class={cn("flex size-9 items-center justify-center", className)}
+	{...restProps}
+>
+	<EllipsisIcon class="size-4" />
+	<span class="sr-only">More pages</span>
+</span>
packages/frontend/src/lib/components/ui/pagination/pagination-item.svelte (new file)
@@ -0,0 +1,14 @@
+<script lang="ts">
+	import type { HTMLLiAttributes } from "svelte/elements";
+	import type { WithElementRef } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		children,
+		...restProps
+	}: WithElementRef<HTMLLiAttributes> = $props();
+</script>
+
+<li bind:this={ref} data-slot="pagination-item" {...restProps}>
+	{@render children?.()}
+</li>
packages/frontend/src/lib/components/ui/pagination/pagination-link.svelte (new file)
@@ -0,0 +1,39 @@
+<script lang="ts">
+	import { Pagination as PaginationPrimitive } from "bits-ui";
+	import { cn } from "$lib/utils.js";
+	import { type Props, buttonVariants } from "$lib/components/ui/button/index.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		size = "icon",
+		isActive,
+		page,
+		children,
+		...restProps
+	}: PaginationPrimitive.PageProps &
+		Props & {
+			isActive: boolean;
+		} = $props();
+</script>
+
+{#snippet Fallback()}
+	{page.value}
+{/snippet}
+
+<PaginationPrimitive.Page
+	bind:ref
+	{page}
+	aria-current={isActive ? "page" : undefined}
+	data-slot="pagination-link"
+	data-active={isActive}
+	class={cn(
+		buttonVariants({
+			variant: isActive ? "outline" : "ghost",
+			size,
+		}),
+		className
+	)}
+	children={children || Fallback}
+	{...restProps}
+/>
packages/frontend/src/lib/components/ui/pagination/pagination-next-button.svelte (new file)
@@ -0,0 +1,33 @@
+<script lang="ts">
+	import { Pagination as PaginationPrimitive } from "bits-ui";
+	import ChevronRightIcon from "@lucide/svelte/icons/chevron-right";
+	import { buttonVariants } from "$lib/components/ui/button/index.js";
+	import { cn } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		children,
+		...restProps
+	}: PaginationPrimitive.NextButtonProps = $props();
+</script>
+
+{#snippet Fallback()}
+	<span>Next</span>
+	<ChevronRightIcon class="size-4" />
+{/snippet}
+
+<PaginationPrimitive.NextButton
+	bind:ref
+	aria-label="Go to next page"
+	class={cn(
+		buttonVariants({
+			size: "default",
+			variant: "ghost",
+			class: "gap-1 px-2.5 sm:pr-2.5",
+		}),
+		className
+	)}
+	children={children || Fallback}
+	{...restProps}
+/>
packages/frontend/src/lib/components/ui/pagination/pagination-prev-button.svelte (new file)
@@ -0,0 +1,33 @@
+<script lang="ts">
+	import { Pagination as PaginationPrimitive } from "bits-ui";
+	import ChevronLeftIcon from "@lucide/svelte/icons/chevron-left";
+	import { buttonVariants } from "$lib/components/ui/button/index.js";
+	import { cn } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		children,
+		...restProps
+	}: PaginationPrimitive.PrevButtonProps = $props();
+</script>
+
+{#snippet Fallback()}
+	<ChevronLeftIcon class="size-4" />
+	<span>Previous</span>
+{/snippet}
+
+<PaginationPrimitive.PrevButton
+	bind:ref
+	aria-label="Go to previous page"
+	class={cn(
+		buttonVariants({
+			size: "default",
+			variant: "ghost",
+			class: "gap-1 px-2.5 sm:pl-2.5",
+		}),
+		className
+	)}
+	children={children || Fallback}
+	{...restProps}
+/>
packages/frontend/src/lib/components/ui/pagination/pagination.svelte (new file)
@@ -0,0 +1,28 @@
+<script lang="ts">
+	import { Pagination as PaginationPrimitive } from "bits-ui";
+
+	import { cn } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		count = 0,
+		perPage = 10,
+		page = $bindable(1),
+		siblingCount = 1,
+		...restProps
+	}: PaginationPrimitive.RootProps = $props();
+</script>
+
+<PaginationPrimitive.Root
+	bind:ref
+	bind:page
+	role="navigation"
+	aria-label="pagination"
+	data-slot="pagination"
+	class={cn("mx-auto flex w-full justify-center", className)}
+	{count}
+	{perPage}
+	{siblingCount}
+	{...restProps}
+/>
packages/frontend/src/lib/components/ui/progress/index.ts (new file)
@@ -0,0 +1,7 @@
+import Root from "./progress.svelte";
+
+export {
+	Root,
+	//
+	Root as Progress,
+};
packages/frontend/src/lib/components/ui/progress/progress.svelte (new file)
@@ -0,0 +1,27 @@
+<script lang="ts">
+	import { Progress as ProgressPrimitive } from "bits-ui";
+	import { cn, type WithoutChildrenOrChild } from "$lib/utils.js";
+
+	let {
+		ref = $bindable(null),
+		class: className,
+		max = 100,
+		value,
+		...restProps
+	}: WithoutChildrenOrChild<ProgressPrimitive.RootProps> = $props();
+</script>
+
+<ProgressPrimitive.Root
+	bind:ref
+	data-slot="progress"
+	class={cn("bg-primary/20 relative h-2 w-full overflow-hidden rounded-full", className)}
+	{value}
+	{max}
+	{...restProps}
+>
+	<div
+		data-slot="progress-indicator"
+		class="bg-primary h-full w-full flex-1 transition-all"
+		style="transform: translateX(-{100 - (100 * (value ?? 0)) / (max ?? 1)}%)"
+	></div>
+</ProgressPrimitive.Root>
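The indicator's inline `translateX` expression is the whole progress math: the filled bar is shifted left by the fraction still missing. A hedged TypeScript sketch of that formula (the function name is illustrative, not part of the component):

```typescript
// Mirrors the inline style in progress.svelte:
// translateX(-{100 - (100 * (value ?? 0)) / (max ?? 1)}%)
// e.g. value=25, max=100 shifts the bar left by 75%, leaving 25% visible.
function indicatorOffset(value: number | null | undefined, max: number = 100): number {
	return 100 - (100 * (value ?? 0)) / (max ?? 1);
}
```

A nullish `value` yields a 100% offset, i.e. an empty bar, which matches the component's default rendering.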
@@ -7,7 +7,8 @@
 		"password": "Passwort"
 	},
 	"common": {
-		"working": "Arbeiten"
+		"working": "Arbeiten",
+		"read_docs": "Dokumentation lesen"
 	},
 	"archive": {
 		"title": "Archiv",
@@ -32,7 +33,14 @@
 		"deleting": "Löschen",
 		"confirm": "Bestätigen",
 		"cancel": "Abbrechen",
-		"not_found": "E-Mail nicht gefunden."
+		"not_found": "E-Mail nicht gefunden.",
+		"integrity_report": "Integritätsbericht",
+		"email_eml": "E-Mail (.eml)",
+		"valid": "Gültig",
+		"invalid": "Ungültig",
+		"integrity_check_failed_title": "Integritätsprüfung fehlgeschlagen",
+		"integrity_check_failed_message": "Die Integrität der E-Mail und ihrer Anhänge konnte nicht überprüft werden.",
+		"integrity_report_description": "Dieser Bericht überprüft, ob der Inhalt Ihrer archivierten E-Mails nicht verändert wurde."
 	},
 	"ingestions": {
 		"title": "Erfassungsquellen",
@@ -255,6 +263,80 @@
 		"no_emails_found": "Keine archivierten E-Mails gefunden.",
 		"prev": "Zurück",
 		"next": "Weiter"
+	},
+	"audit_log": {
+		"title": "Audit-Protokoll",
+		"header": "Audit-Protokoll",
+		"verify_integrity": "Integrität überprüfen",
+		"log_entries": "Protokolleinträge",
+		"timestamp": "Zeitstempel",
+		"actor": "Akteur",
+		"action": "Aktion",
+		"target": "Ziel",
+		"details": "Details",
+		"ip_address": "IP Adresse",
+		"target_type": "Zieltyp",
+		"target_id": "Ziel-ID",
+		"no_logs_found": "Keine Audit-Protokolle gefunden.",
+		"prev": "Zurück",
+		"next": "Weiter",
+		"verification_successful_title": "Überprüfung erfolgreich",
+		"verification_successful_message": "Integrität des Audit-Protokolls erfolgreich überprüft.",
+		"verification_failed_title": "Überprüfung fehlgeschlagen",
+		"verification_failed_message": "Die Integritätsprüfung des Audit-Protokolls ist fehlgeschlagen. Bitte überprüfen Sie die Systemprotokolle für weitere Details.",
+		"verification_error_message": "Während der Überprüfung ist ein unerwarteter Fehler aufgetreten. Bitte versuchen Sie es erneut."
+	},
+	"jobs": {
+		"title": "Job-Warteschlangen",
+		"queues": "Job-Warteschlangen",
+		"active": "Aktiv",
+		"completed": "Abgeschlossen",
+		"failed": "Fehlgeschlagen",
+		"delayed": "Verzögert",
+		"waiting": "Wartend",
+		"paused": "Pausiert",
+		"back_to_queues": "Zurück zu den Warteschlangen",
+		"queue_overview": "Warteschlangenübersicht",
+		"jobs": "Jobs",
+		"id": "ID",
+		"name": "Name",
+		"state": "Status",
+		"created_at": "Erstellt am",
+		"processed_at": "Verarbeitet am",
+		"finished_at": "Beendet am",
+		"showing": "Anzeige",
+		"of": "von",
+		"previous": "Zurück",
+		"next": "Weiter",
+		"ingestion_source": "Ingestion-Quelle"
+	},
+	"license_page": {
+		"title": "Enterprise-Lizenzstatus",
+		"meta_description": "Zeigen Sie den aktuellen Status Ihrer Open Archiver Enterprise-Lizenz an.",
+		"revoked_title": "Lizenz widerrufen",
+		"revoked_message": "Ihre Lizenz wurde vom Lizenzadministrator widerrufen. Enterprise-Funktionen werden deaktiviert {{grace_period}}. Bitte kontaktieren Sie Ihren Account Manager für Unterstützung.",
+		"revoked_grace_period": "am {{date}}",
+		"revoked_immediately": "sofort",
+		"seat_limit_exceeded_title": "Sitzplatzlimit überschritten",
+		"seat_limit_exceeded_message": "Ihre Lizenz gilt für {{planSeats}} Benutzer, aber Sie verwenden derzeit {{activeSeats}}. Bitte kontaktieren Sie den Vertrieb, um Ihr Abonnement anzupassen.",
+		"customer": "Kunde",
+		"license_details": "Lizenzdetails",
+		"license_status": "Lizenzstatus",
+		"active": "Aktiv",
+		"expired": "Abgelaufen",
+		"revoked": "Widerrufen",
+		"unknown": "Unbekannt",
+		"expires": "Läuft ab",
+		"seat_usage": "Sitzplatznutzung",
+		"seats_used": "{{activeSeats}} von {{planSeats}} Plätzen belegt",
+		"enabled_features": "Aktivierte Funktionen",
+		"enabled_features_description": "Die folgenden Enterprise-Funktionen sind derzeit aktiviert.",
+		"feature": "Funktion",
+		"status": "Status",
+		"enabled": "Aktiviert",
+		"disabled": "Deaktiviert",
+		"could_not_load_title": "Lizenz konnte nicht geladen werden",
+		"could_not_load_message": "Ein unerwarteter Fehler ist aufgetreten."
 	}
 }
 }

@@ -7,7 +7,8 @@
 		"password": "Password"
 	},
 	"common": {
-		"working": "Working"
+		"working": "Working",
+		"read_docs": "Read docs"
 	},
 	"archive": {
 		"title": "Archive",
@@ -32,7 +33,14 @@
 		"deleting": "Deleting",
 		"confirm": "Confirm",
 		"cancel": "Cancel",
-		"not_found": "Email not found."
+		"not_found": "Email not found.",
+		"integrity_report": "Integrity Report",
+		"email_eml": "Email (.eml)",
+		"valid": "Valid",
+		"invalid": "Invalid",
+		"integrity_check_failed_title": "Integrity Check Failed",
+		"integrity_check_failed_message": "Could not verify the integrity of the email and its attachments.",
+		"integrity_report_description": "This report verifies that the content of your archived emails has not been altered."
 	},
 	"ingestions": {
 		"title": "Ingestion Sources",
@@ -226,7 +234,8 @@
 		"users": "Users",
 		"roles": "Roles",
 		"api_keys": "API Keys",
-		"logout": "Logout"
+		"logout": "Logout",
+		"admin": "Admin"
 	},
 	"api_keys_page": {
 		"title": "API Keys",
@@ -287,6 +296,87 @@
 		"indexed_insights": "Indexed insights",
 		"top_10_senders": "Top 10 Senders",
 		"no_indexed_insights": "No indexed insights available."
+	},
+	"audit_log": {
+		"title": "Audit Log",
+		"header": "Audit Log",
+		"verify_integrity": "Verify Log Integrity",
+		"log_entries": "Log Entries",
+		"timestamp": "Timestamp",
+		"actor": "Actor",
+		"action": "Action",
+		"target": "Target",
+		"details": "Details",
+		"ip_address": "IP Address",
+		"target_type": "Target Type",
+		"target_id": "Target ID",
+		"no_logs_found": "No audit logs found.",
+		"prev": "Prev",
+		"next": "Next",
+		"log_entry_details": "Log Entry Details",
+		"viewing_details_for": "Viewing the complete details for log entry #",
+		"actor_id": "Actor ID",
+		"previous_hash": "Previous Hash",
+		"current_hash": "Current Hash",
+		"close": "Close",
+		"verification_successful_title": "Verification Successful",
+		"verification_successful_message": "Audit log integrity verified successfully.",
+		"verification_failed_title": "Verification Failed",
+		"verification_failed_message": "The audit log integrity check failed. Please review the system logs for more details.",
+		"verification_error_message": "An unexpected error occurred during verification. Please try again."
+	},
+	"jobs": {
+		"title": "Job Queues",
+		"queues": "Job Queues",
+		"active": "Active",
+		"completed": "Completed",
+		"failed": "Failed",
+		"delayed": "Delayed",
+		"waiting": "Waiting",
+		"paused": "Paused",
+		"back_to_queues": "Back to Queues",
+		"queue_overview": "Queue Overview",
+		"jobs": "Jobs",
+		"id": "ID",
+		"name": "Name",
+		"state": "State",
+
+		"created_at": "Created At",
+		"processed_at": "Processed At",
+		"finished_at": "Finished At",
+		"showing": "Showing",
+		"of": "of",
+		"previous": "Previous",
+		"next": "Next",
+		"ingestion_source": "Ingestion Source"
+	},
+	"license_page": {
+		"title": "Enterprise License Status",
+		"meta_description": "View the current status of your Open Archiver Enterprise license.",
+		"revoked_title": "License Revoked",
+		"revoked_message": "Your license has been revoked by the license administrator. Enterprise features will be disabled {{grace_period}}. Please contact your account manager for assistance.",
+		"revoked_grace_period": "on {{date}}",
+		"revoked_immediately": "immediately",
+		"seat_limit_exceeded_title": "Seat Limit Exceeded",
+		"seat_limit_exceeded_message": "Your license is for {{planSeats}} users, but you are currently using {{activeSeats}}. Please contact sales to adjust your subscription.",
+		"customer": "Customer",
+		"license_details": "License Details",
+		"license_status": "License Status",
+		"active": "Active",
+		"expired": "Expired",
+		"revoked": "Revoked",
+		"unknown": "Unknown",
+		"expires": "Expires",
+		"seat_usage": "Seat Usage",
+		"seats_used": "{{activeSeats}} of {{planSeats}} seats used",
+		"enabled_features": "Enabled Features",
+		"enabled_features_description": "The following enterprise features are currently enabled.",
+		"feature": "Feature",
+		"status": "Status",
+		"enabled": "Enabled",
+		"disabled": "Disabled",
+		"could_not_load_title": "Could Not Load License",
+		"could_not_load_message": "An unexpected error occurred."
 	}
 }
 }

@@ -63,7 +63,7 @@ export const load: LayoutServerLoad = async (event) => {
 	return {
 		user: locals.user,
 		accessToken: locals.accessToken,
-		isDemo: process.env.IS_DEMO === 'true',
+		enterpriseMode: locals.enterpriseMode,
 		systemSettings,
 		currentVersion: version,
 		newVersionInfo: newVersionInfo,

@@ -8,7 +8,7 @@ export const load: LayoutLoad = async ({ url, data }) => {
 
 	let initLocale: SupportedLanguage = 'en'; // Default fallback
 
-	if (data.systemSettings?.language) {
+	if (data && data.systemSettings?.language) {
 		initLocale = data.systemSettings.language;
 	}
 

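The guard change above is subtle: optional chaining in `data.systemSettings?.language` only protects against a missing `systemSettings`, not against `data` itself being `null` or `undefined`, which would still throw. A standalone sketch of the fixed logic (the type alias and function name are ours, not the project's):

```typescript
// Shape of the relevant slice of the layout's load data (assumed).
type LayoutData = { systemSettings?: { language?: string } } | null | undefined;

// Mirrors the fixed guard: check `data` itself before optional-chaining into it.
function resolveLocale(data: LayoutData): string {
	let initLocale = 'en'; // default fallback
	if (data && data.systemSettings?.language) {
		initLocale = data.systemSettings.language;
	}
	return initLocale;
}
```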
@@ -1,26 +1,32 @@
 import { env } from '$env/dynamic/private';
 import type { RequestHandler } from '@sveltejs/kit';
+import { json } from '@sveltejs/kit';
 
 const BACKEND_URL = `http://localhost:${env.PORT_BACKEND || 4000}`;
 
-const handleRequest: RequestHandler = async ({ request, params }) => {
+const handleRequest: RequestHandler = async ({ request, params, fetch }) => {
 	const url = new URL(request.url);
 	const slug = params.slug || '';
 	const targetUrl = `${BACKEND_URL}/${slug}${url.search}`;
 
-	// Create a new request with the same method, headers, and body
-	const proxyRequest = new Request(targetUrl, {
-		method: request.method,
-		headers: request.headers,
-		body: request.body,
-		duplex: 'half', // Required for streaming request bodies
-	} as RequestInit);
+	try {
+		const proxyRequest = new Request(targetUrl, {
+			method: request.method,
+			headers: request.headers,
+			body: request.body,
+			duplex: 'half',
+		} as RequestInit);
 
-	// Forward the request to the backend
-	const response = await fetch(proxyRequest);
+		const response = await fetch(proxyRequest);
 
-	// Return the response from the backend
-	return response;
+		return response;
+	} catch (error) {
+		console.error('Proxy request failed:', error);
+		return json(
+			{ message: `Failed to connect to the backend service. ${JSON.stringify(error)}` },
+			{ status: 500 }
+		);
+	}
 };
 
 export const GET = handleRequest;

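Two details of this proxy handler are worth calling out: the query string is carried over via `url.search`, and `duplex: 'half'` is required by the Fetch spec whenever a `Request` is constructed with a streaming body. A sketch of the URL rewriting alone (the helper name and the example route are hypothetical, not from the repo):

```typescript
// Rebuilds the backend target the same way the handler does:
// backend origin + catch-all slug + original query string.
function buildTargetUrl(backendUrl: string, slug: string, search: string): string {
	return `${backendUrl}/${slug}${search}`;
}

const incoming = new URL('http://localhost:5173/api/v1/archived-emails?page=2&limit=10');
// → 'http://localhost:4000/api/v1/archived-emails?page=2&limit=10'
const target = buildTargetUrl('http://localhost:4000', 'api/v1/archived-emails', incoming.search);
```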
Some files were not shown because too many files have changed in this diff.