diff --git a/404.html b/404.html index b875bc5..e83cbcc 100644 --- a/404.html +++ b/404.html @@ -16,7 +16,7 @@
- + \ No newline at end of file diff --git a/SUMMARY.html b/SUMMARY.html index 43b51be..cae3739 100644 --- a/SUMMARY.html +++ b/SUMMARY.html @@ -18,8 +18,8 @@ -
+ \ No newline at end of file diff --git a/api/archived-email.html b/api/archived-email.html index 5559f35..bbffc0a 100644 --- a/api/archived-email.html +++ b/api/archived-email.html @@ -18,7 +18,7 @@ -

Archived Email Service API

The Archived Email Service is responsible for retrieving archived emails and their details from the database and storage.

Endpoints

All endpoints in this service require authentication.

GET /api/v1/archived-emails/ingestion-source/:ingestionSourceId

Retrieves a paginated list of archived emails for a specific ingestion source.

Access: Authenticated

URL Parameters

Parameter           Type     Description
ingestionSourceId   string   The ID of the ingestion source to get emails for.

Query Parameters

Parameter   Type     Description                       Default
page        number   The page number for pagination.   1
limit       number   The number of items per page.     10

Responses

  • 200 OK: A paginated list of archived emails.

    json
    {
           "items": [
               {
                   "id": "email-id",
      @@ -52,8 +52,8 @@
                   "sizeBytes": 12345
               }
           ]
}
  • 404 Not Found: The archived email with the specified ID was not found.

  • 500 Internal Server Error: An unexpected error occurred.
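The request above can be assembled with any HTTP client. A minimal TypeScript sketch that builds the URL and the Bearer header — the helper name and its defaults are illustrative, not part of the service:

```typescript
// Illustrative helper: builds the request pieces for the paginated
// archived-emails endpoint. The path and query parameters come from the
// docs above; the helper name and defaults are assumptions of this sketch.
function buildArchivedEmailsRequest(
	ingestionSourceId: string,
	accessToken: string,
	page = 1,
	limit = 10
): { url: string; headers: Record<string, string> } {
	const params = new URLSearchParams({ page: String(page), limit: String(limit) });
	return {
		url: `/api/v1/archived-emails/ingestion-source/${encodeURIComponent(ingestionSourceId)}?${params}`,
		headers: { Authorization: `Bearer ${accessToken}` },
	};
}
```

The resulting `url` and `headers` can be passed to `fetch` or any HTTP library.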

Service Methods

getArchivedEmails(ingestionSourceId: string, page: number, limit: number): Promise<PaginatedArchivedEmails>

Retrieves a paginated list of archived emails from the database for a given ingestion source.

  • ingestionSourceId: The ID of the ingestion source.
  • page: The page number for pagination.
  • limit: The number of items per page.
  • Returns: A promise that resolves to a PaginatedArchivedEmails object.

getArchivedEmailById(emailId: string): Promise<ArchivedEmail | null>

Retrieves a single archived email by its ID, including its raw content and attachments.

  • emailId: The ID of the archived email.
  • Returns: A promise that resolves to an ArchivedEmail object or null if not found.
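A page/limit signature like `getArchivedEmails` conventionally maps to an offset in the underlying query. A sketch of the standard arithmetic — these formulas are the usual convention, not quoted from the service source:

```typescript
// Standard page/limit arithmetic: how many rows to skip and how many pages
// a result set spans. Conventional formulas, not taken from the service.
function paginate(page: number, limit: number, totalItems: number) {
	const offset = (page - 1) * limit; // rows to skip before the current page
	const totalPages = Math.max(1, Math.ceil(totalItems / limit));
	return { offset, totalPages, hasNext: page < totalPages };
}
```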
+ \ No newline at end of file diff --git a/api/auth.html b/api/auth.html index 7929eab..6a32a53 100644 --- a/api/auth.html +++ b/api/auth.html @@ -18,7 +18,7 @@ -

Auth Service API

The Auth Service is responsible for handling user authentication, including login and token verification.

Endpoints

POST /api/v1/auth/login

Authenticates a user and returns a JWT if the credentials are valid.

Access: Public

Rate Limiting: This endpoint is rate-limited to prevent brute-force attacks.

Request Body

Field      Type     Description
email      string   The user's email address.
password   string   The user's password.

Responses

  • 200 OK: Authentication successful.

    json
    {
           "accessToken": "your.jwt.token",
           "user": {
               "id": "user-id",
      @@ -31,8 +31,8 @@
           "message": "Invalid credentials"
       }
    • 500 Internal Server Error: An unexpected error occurred.

      json
      {
           "message": "An internal server error occurred"
}

Service Methods

verifyPassword(password: string, hash: string): Promise<boolean>

Compares a plain-text password with a hashed password to verify its correctness.

  • password: The plain-text password.
  • hash: The hashed password to compare against.
  • Returns: A promise that resolves to true if the password is valid, otherwise false.

login(email: string, password: string): Promise<LoginResponse | null>

Handles the user login process. It finds the user by email, verifies the password, and generates a JWT upon successful authentication.

  • email: The user's email.
  • password: The user's password.
  • Returns: A promise that resolves to a LoginResponse object containing the accessToken and user details, or null if authentication fails.

verifyToken(token: string): Promise<AuthTokenPayload | null>

Verifies the authenticity and expiration of a JWT.

  • token: The JWT string to verify.
  • Returns: A promise that resolves to the token's AuthTokenPayload if valid, otherwise null.
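`verifyToken` checks both the signature and the expiration of the JWT. The service most likely delegates this to a JWT library; purely for illustration, here is what an HS256 check involves (hand-rolled — do not use this in place of a vetted library):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustration of what verifyToken has to do for an HS256 JWT: recompute the
// signature over header.payload, compare it in constant time, then check the
// exp claim. The real service most likely uses a JWT library instead.
function verifyHs256(token: string, secret: string): Record<string, any> | null {
	const [header, payload, signature] = token.split(".");
	if (!header || !payload || !signature) return null;
	const expected = createHmac("sha256", secret)
		.update(`${header}.${payload}`)
		.digest("base64url");
	const a = Buffer.from(signature);
	const b = Buffer.from(expected);
	if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
	const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
	if (typeof claims.exp === "number" && claims.exp * 1000 < Date.now()) return null;
	return claims;
}
```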
+ \ No newline at end of file diff --git a/api/authentication.html b/api/authentication.html index 7bab310..bfb52f0 100644 --- a/api/authentication.html +++ b/api/authentication.html @@ -18,7 +18,7 @@ -

API Authentication

To access protected API endpoints, you need to include a JSON Web Token (JWT) in the Authorization header of your requests.

Obtaining a JWT

First, you need to authenticate with the /api/v1/auth/login endpoint by providing your email and password. If the credentials are correct, the API will return an accessToken.

Request:

http
POST /api/v1/auth/login
 Content-Type: application/json
 
 {
@@ -33,8 +33,8 @@
     }
 }

Making Authenticated Requests

Once you have the accessToken, you must include it in the Authorization header of all subsequent requests to protected endpoints, using the Bearer scheme.

Example:

http
GET /api/v1/dashboard/stats
 Authorization: Bearer your.jwt.token

If the token is missing, expired, or invalid, the API will respond with a 401 Unauthorized status code.
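On the server side, this scheme means the token must first be extracted from the `Authorization` header before it can be verified. A sketch — the helper name is illustrative, not part of the codebase:

```typescript
// Pulls the token out of an "Authorization: Bearer <token>" header.
// Returns null for a missing or malformed header, which should map to
// a 401 Unauthorized response.
function extractBearerToken(authorizationHeader: string | undefined): string | null {
	if (!authorizationHeader) return null;
	const match = /^Bearer\s+(\S+)$/.exec(authorizationHeader);
	return match ? match[1] : null;
}
```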

Using a Super API Key

Alternatively, for server-to-server communication or scripts, you can use a super API key. This key provides unrestricted access to the API and should be kept secret.

You can set the SUPER_API_KEY in your .env file.

To authenticate using the super API key, include it in the Authorization header as a Bearer token.

Example:

http
GET /api/v1/dashboard/stats
 Authorization: Bearer your-super-secret-api-key
+ \ No newline at end of file diff --git a/api/dashboard.html b/api/dashboard.html index c04bb85..555a236 100644 --- a/api/dashboard.html +++ b/api/dashboard.html @@ -18,7 +18,7 @@ -

Dashboard Service API

The Dashboard Service provides endpoints for retrieving statistics and data for the main dashboard.

Endpoints

All endpoints in this service require authentication.

GET /api/v1/dashboard/stats

Retrieves overall statistics, including the total number of archived emails, total storage used, and the number of failed ingestions in the last 7 days.

Access: Authenticated

Responses

  • 200 OK: An object containing the dashboard statistics.

    json
    {
           "totalEmailsArchived": 12345,
           "totalStorageUsed": 54321098,
           "failedIngestionsLast7Days": 3
      @@ -55,8 +55,8 @@
                   "count": 42
               }
           ]
}
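`totalStorageUsed` is reported in raw bytes; a dashboard will usually render it in human-readable units. A small formatter sketch — the 1024-based units are this example's choice, not something the API prescribes:

```typescript
// Formats a raw byte count (e.g. totalStorageUsed) for display.
// Uses 1024-based units; this convention is the sketch's assumption.
function formatBytes(bytes: number): string {
	const units = ["B", "KiB", "MiB", "GiB", "TiB"];
	let value = bytes;
	let i = 0;
	while (value >= 1024 && i < units.length - 1) {
		value /= 1024;
		i++;
	}
	return `${value.toFixed(1)} ${units[i]}`;
}
```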
+ \ No newline at end of file diff --git a/api/index.html b/api/index.html index fe4d5c5..44d98e0 100644 --- a/api/index.html +++ b/api/index.html @@ -3,7 +3,7 @@ - API documentation | Open Archiver Documentation + API Overview | Open Archiver Documentation @@ -13,13 +13,13 @@ - + -

API Overview

Welcome to the Open Archiver API documentation. This section provides detailed information about the available API endpoints.

All API endpoints are prefixed with /api/v1.

Authentication

Before making requests to protected endpoints, you must authenticate with the API. See the Authentication Guide for details on how to obtain and use API tokens.

API Services

+ \ No newline at end of file diff --git a/api/ingestion.html b/api/ingestion.html index 31f7261..10bbdc9 100644 --- a/api/ingestion.html +++ b/api/ingestion.html @@ -18,7 +18,7 @@ -

Ingestion Service API

The Ingestion Service manages ingestion sources, which are configurations for connecting to email providers and importing emails.

Endpoints

All endpoints in this service require authentication.

POST /api/v1/ingestion

Creates a new ingestion source.

Access: Authenticated

Request Body

The request body should be a CreateIngestionSourceDto object.

typescript
interface CreateIngestionSourceDto {
     name: string;
     provider: 'google' | 'microsoft' | 'generic_imap';
     providerConfig: IngestionCredentials;
@@ -33,8 +33,8 @@
         | 'active'
         | 'paused'
         | 'error';
}
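A request body for this endpoint might look like the following. Only `name` and `provider` appear in the interface shown above; the `providerConfig` keys are invented for illustration, since the `IngestionCredentials` type is not reproduced here:

```typescript
// Hypothetical request body for POST /api/v1/ingestion. Only name and
// provider come from the interface above; the providerConfig keys are
// invented for illustration (IngestionCredentials is not shown here).
const createRequest = {
	name: "Company IMAP archive",
	provider: "generic_imap" as const,
	providerConfig: {
		host: "imap.example.com", // hypothetical field
		port: 993, // hypothetical field
	},
};

const body = JSON.stringify(createRequest);
```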

Responses

  • 200 OK: The updated ingestion source object.
  • 404 Not Found: Ingestion source not found.
  • 500 Internal Server Error: An unexpected error occurred.

DELETE /api/v1/ingestion/:id

Deletes an ingestion source and all associated data.

Access: Authenticated

URL Parameters

Parameter   Type     Description
id          string   The ID of the ingestion source.

Responses

  • 204 No Content: The ingestion source was deleted successfully.
  • 404 Not Found: Ingestion source not found.
  • 500 Internal Server Error: An unexpected error occurred.

POST /api/v1/ingestion/:id/import

Triggers the initial import process for an ingestion source.

Access: Authenticated

URL Parameters

Parameter   Type     Description
id          string   The ID of the ingestion source.

Responses

  • 202 Accepted: The initial import was triggered successfully.
  • 404 Not Found: Ingestion source not found.
  • 500 Internal Server Error: An unexpected error occurred.

POST /api/v1/ingestion/:id/pause

Pauses an active ingestion source.

Access: Authenticated

URL Parameters

Parameter   Type     Description
id          string   The ID of the ingestion source.

Responses

  • 200 OK: The updated ingestion source object with a paused status.
  • 404 Not Found: Ingestion source not found.
  • 500 Internal Server Error: An unexpected error occurred.

POST /api/v1/ingestion/:id/sync

Triggers a forced synchronization for an ingestion source.

Access: Authenticated

URL Parameters

Parameter   Type     Description
id          string   The ID of the ingestion source.

Responses

  • 202 Accepted: The force sync was triggered successfully.
  • 404 Not Found: Ingestion source not found.
  • 500 Internal Server Error: An unexpected error occurred.
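The import, pause, and sync endpoints above share one URL shape, which makes a small helper convenient on the client side — the helper name is illustrative, not part of the service:

```typescript
// Builds the URL for the ingestion lifecycle actions documented above.
type IngestionAction = "import" | "pause" | "sync";

function ingestionActionUrl(id: string, action: IngestionAction): string {
	return `/api/v1/ingestion/${encodeURIComponent(id)}/${action}`;
}
```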
+ \ No newline at end of file diff --git a/api/search.html b/api/search.html index 2e4c034..21bf4b2 100644 --- a/api/search.html +++ b/api/search.html @@ -18,7 +18,7 @@ -
+ \ No newline at end of file diff --git a/api/storage.html b/api/storage.html index fba7fe2..c2a9665 100644 --- a/api/storage.html +++ b/api/storage.html @@ -18,8 +18,8 @@ -

Storage Service API

The Storage Service provides an endpoint for downloading files from the configured storage provider.

Endpoints

All endpoints in this service require authentication.

GET /api/v1/storage/download

Downloads a file from the configured storage backend.

Access: Authenticated

Query Parameters

Parameter   Type     Description
path        string   The path to the file within the storage provider.

Responses

  • 200 OK: The file stream.
  • 400 Bad Request: File path is required or invalid.
  • 404 Not Found: File not found.
  • 500 Internal Server Error: An unexpected error occurred.
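Because the `path` query parameter usually contains slashes, it must be URL-encoded when building the request. A sketch — the helper name is illustrative:

```typescript
// Builds the download URL; encodeURIComponent keeps slashes in the file
// path from being read as URL path segments.
function storageDownloadUrl(path: string): string {
	return `/api/v1/storage/download?path=${encodeURIComponent(path)}`;
}
```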
+ \ No newline at end of file diff --git a/assets/api_index.md.BsewM5Xg.js b/assets/api_index.md.BsewM5Xg.js deleted file mode 100644 index 300766d..0000000 --- a/assets/api_index.md.BsewM5Xg.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as t,c as n,o,j as a,a as i}from"./chunks/framework.Cd-3tpCq.js";const f=JSON.parse('{"title":"API documentation","description":"","frontmatter":{},"headers":[],"relativePath":"api/index.md","filePath":"api/index.md"}'),r={name:"api/index.md"};function d(s,e,c,p,m,l){return o(),n("div",null,e[0]||(e[0]=[a("h1",{id:"api-documentation",tabindex:"-1"},[i("API documentation "),a("a",{class:"header-anchor",href:"#api-documentation","aria-label":'Permalink to "API documentation"'},"​")],-1)]))}const x=t(r,[["render",d]]);export{f as __pageData,x as default}; diff --git a/assets/api_index.md.BsewM5Xg.lean.js b/assets/api_index.md.BsewM5Xg.lean.js deleted file mode 100644 index 300766d..0000000 --- a/assets/api_index.md.BsewM5Xg.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as t,c as n,o,j as a,a as i}from"./chunks/framework.Cd-3tpCq.js";const f=JSON.parse('{"title":"API documentation","description":"","frontmatter":{},"headers":[],"relativePath":"api/index.md","filePath":"api/index.md"}'),r={name:"api/index.md"};function d(s,e,c,p,m,l){return o(),n("div",null,e[0]||(e[0]=[a("h1",{id:"api-documentation",tabindex:"-1"},[i("API documentation "),a("a",{class:"header-anchor",href:"#api-documentation","aria-label":'Permalink to "API documentation"'},"​")],-1)]))}const x=t(r,[["render",d]]);export{f as __pageData,x as default}; diff --git a/assets/api_index.md.rAxPnnFp.js b/assets/api_index.md.rAxPnnFp.js new file mode 100644 index 0000000..e2395e2 --- /dev/null +++ b/assets/api_index.md.rAxPnnFp.js @@ -0,0 +1 @@ +import{_ as a,c as t,o as i,ae as r}from"./chunks/framework.Cd-3tpCq.js";const p=JSON.parse('{"title":"API 
Overview","description":"","frontmatter":{},"headers":[],"relativePath":"api/index.md","filePath":"api/index.md"}'),o={name:"api/index.md"};function n(s,e,h,l,d,c){return i(),t("div",null,e[0]||(e[0]=[r('

',7)]))}const m=a(o,[["render",n]]);export{p as __pageData,m as default}; diff --git a/assets/api_index.md.rAxPnnFp.lean.js b/assets/api_index.md.rAxPnnFp.lean.js new file mode 100644 index 0000000..104a544 --- /dev/null +++ b/assets/api_index.md.rAxPnnFp.lean.js @@ -0,0 +1 @@ +import{_ as a,c as t,o as i,ae as r}from"./chunks/framework.Cd-3tpCq.js";const p=JSON.parse('{"title":"API Overview","description":"","frontmatter":{},"headers":[],"relativePath":"api/index.md","filePath":"api/index.md"}'),o={name:"api/index.md"};function n(s,e,h,l,d,c){return i(),t("div",null,e[0]||(e[0]=[r("",7)]))}const m=a(o,[["render",n]]);export{p as __pageData,m as default}; diff --git a/hashmap.json b/hashmap.json index 9f7ab7b..ca96712 100644 --- a/hashmap.json +++ b/hashmap.json @@ -1 +1 @@ -{"api_archived-email.md":"BkbaV2Hy","api_auth.md":"Du3O3W7h","api_authentication.md":"BeVpHL1b","api_dashboard.md":"CBdDRqLA","api_index.md":"BsewM5Xg","api_ingestion.md":"Db0jMzxp","api_search.md":"KKtP7fd6","api_storage.md":"DJf0CH38","index.md":"Jke24r4b","services_index.md":"DfyQfsBc","services_storage-service.md":"xOqM9CWx","summary.md":"DsFfZpsz","user-guides_email-providers_google-workspace.md":"CWhO43nR","user-guides_email-providers_imap.md":"BcXBEq43","user-guides_email-providers_index.md":"DY3ytyxv","user-guides_email-providers_microsoft-365.md":"C4O8w9wT","user-guides_installation.md":"D5Jetfyl"} 
+{"api_archived-email.md":"BkbaV2Hy","api_auth.md":"Du3O3W7h","api_authentication.md":"BeVpHL1b","api_dashboard.md":"CBdDRqLA","api_index.md":"rAxPnnFp","api_ingestion.md":"Db0jMzxp","api_search.md":"KKtP7fd6","api_storage.md":"DJf0CH38","index.md":"Jke24r4b","services_index.md":"DfyQfsBc","services_storage-service.md":"xOqM9CWx","summary.md":"DsFfZpsz","user-guides_email-providers_google-workspace.md":"CWhO43nR","user-guides_email-providers_imap.md":"BcXBEq43","user-guides_email-providers_index.md":"DY3ytyxv","user-guides_email-providers_microsoft-365.md":"C4O8w9wT","user-guides_installation.md":"D5Jetfyl"} diff --git a/index.html b/index.html index c218690..dff61ff 100644 --- a/index.html +++ b/index.html @@ -18,8 +18,8 @@ -

Get Started 👋

Welcome to Open Archiver! This guide will help you get started with setting up and using the platform.

What is Open Archiver? 🛡️

A secure, sovereign, and affordable open-source platform for email archiving and eDiscovery.

Open Archiver provides a robust, self-hosted solution for archiving, storing, indexing, and searching emails from major platforms, including Google Workspace (Gmail), Microsoft 365, and generic IMAP-enabled email inboxes. Use Open Archiver to keep a permanent, tamper-proof record of your communication history, free from vendor lock-in.

Key Features ✨

  • Universal Ingestion: Connect to Google Workspace, Microsoft 365, and standard IMAP servers to perform initial bulk imports and maintain continuous, real-time synchronization.
  • Secure & Efficient Storage: Emails are stored in the standard .eml format. The system uses deduplication and compression to minimize storage costs. All data is encrypted at rest.
  • Pluggable Storage Backends: Supports both local filesystem storage and S3-compatible object storage (such as AWS S3 or MinIO).
  • Powerful Search & eDiscovery: A high-performance search engine indexes the full text of emails and attachments (PDF, DOCX, etc.).
  • Compliance & Retention: Define granular retention policies to automatically manage the lifecycle of your data. Place legal holds on communications to prevent deletion during litigation (TBD).
  • Comprehensive Auditing: An immutable audit trail logs all system activities, ensuring you have a clear record of who accessed what and when (TBD).

Installation 🚀

To get your own instance of Open Archiver running, follow our detailed installation guide:

Data Source Configuration 🔌

After deploying the application, you will need to configure one or more ingestion sources to begin archiving emails. Follow our detailed guides to connect to your email provider:

Contributing ❤️

We welcome contributions from the community!

  • Reporting Bugs: If you find a bug, please open an issue on our GitHub repository.
  • Suggesting Enhancements: Have an idea for a new feature? We'd love to hear it. Open an issue to start the discussion.
  • Code Contributions: If you'd like to contribute code, please fork the repository and submit a pull request.

Please read our CONTRIBUTING.md file for more details on our code of conduct and the process for submitting pull requests.

+ \ No newline at end of file diff --git a/services/index.html b/services/index.html index bae7e28..da53fdf 100644 --- a/services/index.html +++ b/services/index.html @@ -18,8 +18,8 @@ -
+ \ No newline at end of file diff --git a/services/storage-service.html b/services/storage-service.html index a1d4e8e..60d5c1f 100644 --- a/services/storage-service.html +++ b/services/storage-service.html @@ -18,7 +18,7 @@ -

Pluggable Storage Service (StorageService)

Overview

The StorageService provides a unified, abstract interface for handling file storage across different backends. Its primary purpose is to decouple the application's core logic from the underlying storage technology. This design allows administrators to switch between storage providers (e.g., from the local filesystem to an S3-compatible object store) with only a configuration change, requiring no modifications to the application code.

The service is built around a standardized IStorageProvider interface, which guarantees that all storage providers have a consistent API for common operations like storing, retrieving, and deleting files.

Configuration

The StorageService is configured via environment variables in the .env file. You must specify the storage backend you wish to use and provide the necessary credentials and settings for it.

1. Choosing the Backend

The STORAGE_TYPE variable determines which provider the service will use.

  • STORAGE_TYPE=local: Uses the local server's filesystem.
  • STORAGE_TYPE=s3: Uses an S3-compatible object storage service (e.g., AWS S3, MinIO, Google Cloud Storage).

2. Local Filesystem Configuration

When STORAGE_TYPE is set to local, you must also provide the root path where files will be stored.

env
# .env
 STORAGE_TYPE=local
 STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
  • STORAGE_LOCAL_ROOT_PATH: The absolute path on the server where the archive will be created. The service will create subdirectories within this path as needed.
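The backend selection described in section 1 can be sketched as a small config reader; the real StorageService's internals may differ, and the function name here is illustrative:

```typescript
// Sketch of how STORAGE_TYPE could drive provider selection. Function and
// type names are illustrative; only the env variable names come from the docs.
type StorageConfig =
	| { type: "local"; rootPath: string }
	| { type: "s3" };

function readStorageConfig(env: Record<string, string | undefined>): StorageConfig {
	if (env.STORAGE_TYPE === "s3") return { type: "s3" };
	if (env.STORAGE_TYPE === "local") {
		if (!env.STORAGE_LOCAL_ROOT_PATH) {
			throw new Error("STORAGE_LOCAL_ROOT_PATH is required when STORAGE_TYPE=local");
		}
		return { type: "local", rootPath: env.STORAGE_LOCAL_ROOT_PATH };
	}
	throw new Error(`Unsupported STORAGE_TYPE: ${env.STORAGE_TYPE}`);
}
```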

3. S3-Compatible Storage Configuration

When STORAGE_TYPE is set to s3, you must provide the credentials and endpoint for your object storage provider.

env
# .env
 STORAGE_TYPE=s3
@@ -56,7 +56,7 @@
         }
     }
 }

API Reference

The StorageService implements the IStorageProvider interface. All methods are asynchronous and return a Promise.


put(path, content)

Stores a file at the specified path. If a file already exists at that path, it will be overwritten.

  • path: string: A unique identifier for the file, including its directory structure (e.g., "user-123/emails/message-abc.eml").
  • content: Buffer | NodeJS.ReadableStream: The content of the file. It can be a Buffer for small files or a ReadableStream for large files to ensure memory efficiency.
  • Returns: Promise<void> - A promise that resolves when the file has been successfully stored.

get(path)

Retrieves a file from the specified path as a readable stream.

  • path: string: The unique identifier of the file to retrieve.
  • Returns: Promise<NodeJS.ReadableStream> - A promise that resolves with a readable stream of the file's content.
  • Throws: An Error if the file is not found at the specified path.

delete(path)

Deletes a file from the storage backend.

  • path: string: The unique identifier of the file to delete.
  • Returns: Promise<void> - A promise that resolves when the file is deleted. If the file does not exist, the promise will still resolve successfully without throwing an error.

exists(path)

Checks for the existence of a file.

  • path: string: The unique identifier of the file to check.
  • Returns: Promise<boolean> - A promise that resolves with true if the file exists, and false otherwise.
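The four methods above can be exercised together as shown in the sketch below. The interface mirrors the documented signatures; the InMemoryStorageProvider is a hypothetical stand-in (not part of OpenArchiver) used only to illustrate the calling pattern:

```typescript
import { Readable } from 'node:stream';

// Interface as documented in the API reference above.
interface IStorageProvider {
	put(path: string, content: Buffer | NodeJS.ReadableStream): Promise<void>;
	get(path: string): Promise<NodeJS.ReadableStream>;
	delete(path: string): Promise<void>;
	exists(path: string): Promise<boolean>;
}

// Hypothetical in-memory backend, for illustration only.
class InMemoryStorageProvider implements IStorageProvider {
	private files = new Map<string, Buffer>();

	async put(filePath: string, content: Buffer | NodeJS.ReadableStream): Promise<void> {
		if (Buffer.isBuffer(content)) {
			this.files.set(filePath, content); // overwrites any existing file
			return;
		}
		// Collect the stream chunk by chunk; a real backend would pipe it
		// to disk or S3 instead of buffering everything in memory.
		const chunks: Buffer[] = [];
		for await (const chunk of content) {
			chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
		}
		this.files.set(filePath, Buffer.concat(chunks));
	}

	async get(filePath: string): Promise<NodeJS.ReadableStream> {
		const data = this.files.get(filePath);
		if (data === undefined) throw new Error(`File not found: ${filePath}`);
		return Readable.from([data]);
	}

	async delete(filePath: string): Promise<void> {
		this.files.delete(filePath); // resolves even if the file is absent
	}

	async exists(filePath: string): Promise<boolean> {
		return this.files.has(filePath);
	}
}

async function demo(): Promise<void> {
	const storage: IStorageProvider = new InMemoryStorageProvider();
	const key = 'user-123/emails/message-abc.eml';

	await storage.put(key, Buffer.from('From: a@example.com\r\n\r\nHello'));
	console.log(await storage.exists(key)); // true

	let body = '';
	for await (const chunk of await storage.get(key)) {
		body += chunk.toString();
	}
	console.log(body.endsWith('Hello')); // true

	await storage.delete(key);
	console.log(await storage.exists(key)); // false
}

demo();
```

Because get returns a stream rather than a buffer, callers can pipe large archived emails directly to an HTTP response without loading them fully into memory.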

Connecting to Google Workspace

This guide provides instructions for Google Workspace administrators to set up a connection that allows the archiving of all user mailboxes within their organization.

The connection uses a Google Cloud Service Account with Domain-Wide Delegation. This is a secure method that grants the archiving service permission to access user data on behalf of the administrator, without requiring individual user passwords or consent.

Prerequisites

  • You must have Super Administrator privileges in your Google Workspace account.
  • You must have access to the Google Cloud Console associated with your organization.

Setup Overview

The setup process involves three main parts:

  1. Configuring the necessary permissions in the Google Cloud Console.
  2. Authorizing the service account in the Google Workspace Admin Console.
  3. Entering the generated credentials into the OpenArchiver application.

Part 1: Google Cloud Console Setup

In this part, you will create a service account and enable the APIs it needs to function.

  1. Create a Google Cloud Project:

    • Go to the Google Cloud Console.
    • If you don't already have one, create a new project for the archiving service (e.g., "Email Archiver").
  2. Enable Required APIs:

    • In your selected project, navigate to the "APIs & Services" > "Library" section.
    • Search for and enable the following two APIs:
      • Gmail API
      • Admin SDK API
  3. Create a Service Account:

    • Navigate to "IAM & Admin" > "Service Accounts".
    • Click "Create Service Account".
    • Give the service account a name (e.g., email-archiver-service) and a description.
    • Click "Create and Continue". You do not need to grant this service account any roles on the project. Click "Done".
  4. Generate a JSON Key:

    • Find the service account you just created in the list.
    • Click the three-dot menu under "Actions" and select "Manage keys".
    • Click "Add Key" > "Create new key".
    • Select JSON as the key type and click "Create".
    • A JSON file will be downloaded to your computer. Keep this file secure, as it contains private credentials. You will need the contents of this file in Part 3.

Troubleshooting

Error: "iam.disableServiceAccountKeyCreation"

If you receive an error message stating The organization policy constraint 'iam.disableServiceAccountKeyCreation' is enforced when trying to create a JSON key, it means your Google Cloud organization has a policy preventing the creation of new service account keys.

To resolve this, you must have Organization Administrator permissions.

  1. Navigate to your Organization: In the Google Cloud Console, use the project selector at the top of the page to select your organization node (it usually has a building icon).
  2. Go to IAM: From the navigation menu, select "IAM & Admin" > "IAM".
  3. Edit Your Permissions: Find your user account in the list and click the pencil icon to edit roles. Add the following two roles:
    • Organization Policy Administrator
    • Organization Administrator
    Note: These roles are only available at the organization level, not the project level.
  4. Modify the Policy:
    • Navigate to "IAM & Admin" > "Organization Policies".
    • In the filter box, search for the policy "iam.disableServiceAccountKeyCreation".
    • Click on the policy to edit it.
    • You can either disable the policy entirely (if your security rules permit) or add a rule to exclude the specific project you are using for the archiver from this policy.
  5. Retry Key Creation: Once the policy is updated, return to your project and you should be able to generate the JSON key as described in Part 1.

Part 2: Grant Domain-Wide Delegation

Now, you will authorize the service account you created to access data from your Google Workspace.

  1. Get the Service Account's Client ID:

    • Go back to the list of service accounts in the Google Cloud Console.
    • Click on the service account you created.
    • Under the "Details" tab, find and copy the Unique ID (this is the Client ID).
  2. Authorize the Client in Google Workspace:

    • Go to your Google Workspace Admin Console at admin.google.com.
    • Navigate to Security > Access and data control > API controls.
    • Under the "Domain-wide Delegation" section, click "Manage Domain-wide Delegation".
    • Click "Add new".
  3. Enter Client Details and Scopes:

    • In the Client ID field, paste the Unique ID you copied from the service account.
    • In the OAuth scopes field, paste the following two scopes exactly as they appear, separated by a comma:
      https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/gmail.readonly
    • Click "Authorize".

The service account is now permitted to list users and read their email data across your domain.


Part 3: Connecting in OpenArchiver

Finally, you will provide the generated credentials to the application.

  1. Navigate to Ingestion Sources: From the main dashboard, go to the Ingestion Sources page.

  2. Create a New Source: Click the "Create New" button.

  3. Fill in the Configuration Details:

    • Name: Give the source a name (e.g., "Google Workspace Archive").
    • Provider: Select "Google Workspace" from the dropdown.
    • Service Account Key (JSON): Open the JSON file you downloaded in Part 1. Copy the entire content of the file and paste it into this text area.
    • Impersonated Admin Email: Enter the email address of a Super Administrator in your Google Workspace (e.g., admin@your-domain.com). The service will use this user's authority to discover all other users.
  4. Save Changes: Click "Save changes".
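When pasting the key, it is easy to truncate the JSON or paste the wrong file. The sketch below shows a standalone sanity check — the field names (type, client_email, private_key) are standard fields of Google service-account JSON keys, but the validator function itself is hypothetical and not an OpenArchiver API:

```typescript
// Hypothetical validator for a pasted service-account key, for illustration.
interface ServiceAccountKey {
	type: string;
	client_email: string;
	private_key: string;
}

function validateServiceAccountKey(raw: string): ServiceAccountKey {
	const parsed = JSON.parse(raw) as Partial<ServiceAccountKey>;
	if (parsed.type !== 'service_account') {
		throw new Error('Not a service-account key (expected "type": "service_account")');
	}
	if (!parsed.client_email || !parsed.private_key) {
		throw new Error('Key is missing client_email or private_key');
	}
	return parsed as ServiceAccountKey;
}

// Example with placeholder values in the shape of a downloaded key file.
const key = validateServiceAccountKey(JSON.stringify({
	type: 'service_account',
	client_email: 'email-archiver-service@example-project.iam.gserviceaccount.com',
	private_key: '-----BEGIN PRIVATE KEY-----\nMII...\n-----END PRIVATE KEY-----\n',
}));
console.log(key.client_email.endsWith('.gserviceaccount.com')); // true
```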

What Happens Next?

Once the connection is saved and verified, the system will begin the archiving process:

  1. User Discovery: The service will first connect to the Admin SDK to get a list of all active users in your Google Workspace.
  2. Initial Import: The system will then start a background job to import the mailboxes of all discovered users. The status will show as "Importing". This can take a significant amount of time depending on the number of users and the size of their mailboxes.
  3. Continuous Sync: After the initial import is complete, the status will change to "Active". The system will then periodically check each user's mailbox for new emails and archive them automatically.

Connecting to a Generic IMAP Server

This guide will walk you through connecting a standard IMAP email account as an ingestion source. This allows you to archive emails from any provider that supports the IMAP protocol, which is common for many self-hosted or traditional email services.

Step-by-Step Guide

  1. Navigate to Ingestion Sources: From the main dashboard, go to the Ingestions page.

  2. Create a New Source: Click the "Create New" button to open the ingestion source configuration dialog.

  3. Fill in the Configuration Details: You will see a form with several fields. Here is how to fill them out for an IMAP connection:

    • Name: Give your ingestion source a descriptive name that you will easily recognize, such as "Work Email (IMAP)" or "Personal Gmail".

    • Provider: From the dropdown menu, select "Generic IMAP". This will reveal the specific fields required for an IMAP connection.

    • Host: Enter the server address for your email provider's IMAP service. This often looks like imap.your-provider.com or mail.your-domain.com.

    • Port: Enter the port number for the IMAP server. For a secure connection (which is strongly recommended), this is typically 993.

    • Username: Enter the full email address or username you use to log in to your email account.

    • Password: Enter the password for your email account.

  4. Save Changes: Once you have filled in all the details, click the "Save changes" button.

Security Recommendation: Use an App Password

For enhanced security, we strongly recommend using an "app password" (sometimes called an "app-specific password") instead of your main account password.

Many email providers (like Gmail, Outlook, and Fastmail) allow you to generate a unique password that grants access only to a specific application (in this case, the archiving service). If you ever need to revoke access, you can simply delete the app password without affecting your main account login.

Please consult your email provider's documentation to see if they support app passwords and how to create one.

How to Obtain an App Password for Gmail

  1. Enable 2-Step Verification: You must have 2-Step Verification turned on for your Google Account.
  2. Go to App Passwords: Visit myaccount.google.com/apppasswords. You may be asked to sign in again.
  3. Create the Password:
    • At the bottom, click "Select app" and choose "Other (Custom name)".
    • Give it a name you'll recognize, like "OpenArchiver".
    • Click "Generate".
  4. Use the Password: A 16-character password will be displayed. Copy this password (without the spaces) and paste it into the Password field in the OpenArchiver ingestion source form.

How to Obtain an App Password for Outlook/Microsoft Accounts

  1. Enable Two-Step Verification: You must have two-step verification enabled for your Microsoft account.
  2. Go to Security Options: Sign in to your Microsoft account and navigate to the Advanced security options.
  3. Create a New App Password:
    • Scroll down to the "App passwords" section.
    • Click "Create a new app password".
  4. Use the Password: A new password will be generated. Use this password in the Password field in the OpenArchiver ingestion source form.

What Happens Next?

After you save the connection, the system will attempt to connect to the IMAP server. The status of the ingestion source will update to reflect its current state:

  • Importing: The system is performing the initial, one-time import of all emails from your INBOX. This may take a while depending on the size of your mailbox.
  • Active: The initial import is complete, and the system will now periodically check for and archive new emails.
  • Paused: The connection is valid, but the system will not check for new emails until you resume it.
  • Error: The system was unable to connect using the provided credentials. Please double-check your Host, Port, Username, and Password and try again.

You can view, edit, pause, or manually sync any of your ingestion sources from the main table on the Ingestions page.


Connecting Email Providers

Open Archiver can connect to a variety of email sources to ingest and archive your emails. This section provides guides for connecting to popular email providers.

Choose your provider from the list below to get started:


Connecting to Microsoft 365

This guide provides instructions for Microsoft 365 administrators to set up a connection that allows the archiving of all user mailboxes within their organization.

The connection uses the Microsoft Graph API and an App Registration in Microsoft Entra ID. This is a secure, standard method that grants the archiving service permission to read email data on your behalf without ever needing to handle user passwords.

Prerequisites

  • You must have one of the following administrator roles in your Microsoft 365 tenant: Global Administrator, Application Administrator, or Cloud Application Administrator.

Setup Overview

The setup process involves four main parts, all performed within the Microsoft Entra admin center and the OpenArchiver application:

  1. Registering a new application identity for the archiver in Entra ID.
  2. Granting the application the specific permissions it needs to read mail.
  3. Creating a secure password (a client secret) for the application.
  4. Entering the generated credentials into the OpenArchiver application.

Part 1: Register a New Application in Microsoft Entra ID

First, you will create an "App registration," which acts as an identity for the archiving service within your Microsoft 365 ecosystem.

  1. Sign in to the Microsoft Entra admin center.
  2. In the left-hand navigation pane, go to Identity > Applications > App registrations.
  3. Click the + New registration button at the top of the page.
  4. On the "Register an application" screen:
    • Name: Give the application a descriptive name you will recognize, such as OpenArchiver Service.
    • Supported account types: Select "Accounts in this organizational directory only (Default Directory only - Single tenant)". This is the most secure option.
    • Redirect URI (optional): You can leave this blank.
  5. Click the Register button. You will be taken to the application's main "Overview" page.

Part 2: Grant API Permissions

Next, you must grant the application the specific permissions required to read user profiles and their mailboxes.

  1. From your new application's page, select API permissions from the left-hand menu.
  2. Click the + Add a permission button.
  3. In the "Request API permissions" pane, select Microsoft Graph.
  4. Select Application permissions. This is critical as it allows the service to run in the background without a user being signed in.
  5. In the "Select permissions" search box, find and check the boxes for the following two permissions:
    • Mail.Read
    • User.Read.All
  6. Click the Add permissions button at the bottom.
  7. Crucial Final Step: You will now see the permissions in your list with a warning status. You must grant consent on behalf of your organization. Click the "Grant admin consent for [Your Organization's Name]" button located above the permissions table. Click Yes in the confirmation dialog. The status for both permissions should now show a green checkmark.

Part 3: Create a Client Secret

The client secret is a password that the archiving service will use to authenticate. Treat this with the same level of security as an administrator's password.

  1. In your application's menu, navigate to Certificates & secrets.
  2. Select the Client secrets tab and click + New client secret.
  3. In the pane that appears:
    • Description: Enter a clear description, such as OpenArchiver Key.
    • Expires: Select an expiry duration. We recommend 12 or 24 months. Set a calendar reminder to renew it before it expires to prevent service interruption.
  4. Click Add.
  5. IMMEDIATELY COPY THE SECRET: The secret is now visible in the "Value" column. This is the only time it will be fully displayed. Copy this value now and store it in a secure password manager before navigating away. If you lose it, you must create a new one.

Part 4: Connecting in OpenArchiver

You now have the three pieces of information required to configure the connection.

  1. Navigate to Ingestion Sources: In the OpenArchiver application, go to the Ingestion Sources page.

  2. Create a New Source: Click the "Create New" button.

  3. Fill in the Configuration Details:

    • Name: Give the source a name (e.g., "Microsoft 365 Archive").
    • Provider: Select "Microsoft 365" from the dropdown.
    • Application (Client) ID: Go to the Overview page of your app registration in the Entra admin center and copy this value.
    • Directory (Tenant) ID: This value is also on the Overview page.
    • Client Secret Value: Paste the secret Value (not the Secret ID) that you copied and saved in the previous step.
  4. Save Changes: Click "Save changes".

What Happens Next?

Once the connection is saved, the system will begin the archiving process:

  1. User Discovery: The service will connect to the Microsoft Graph API to get a list of all users in your organization.
  2. Initial Import: The system will begin a background job to import the mailboxes of all discovered users, folder by folder. The status will show as "Importing". This can take a significant amount of time.
  3. Continuous Sync: After the initial import, the status will change to "Active". The system will use Microsoft Graph's delta query feature to efficiently fetch only new or changed emails, ensuring the archive stays up-to-date.
Skip to content

Connecting to Microsoft 365

This guide provides instructions for Microsoft 365 administrators to set up a connection that allows the archiving of all user mailboxes within their organization.

The connection uses the Microsoft Graph API and an App Registration in Microsoft Entra ID. This is a secure, standard method that grants the archiving service permission to read email data on your behalf without ever needing to handle user passwords.

Prerequisites

  • You must have one of the following administrator roles in your Microsoft 365 tenant: Global Administrator, Application Administrator, or Cloud Application Administrator.

Setup Overview

The setup process involves four main parts, all performed within the Microsoft Entra admin center and the OpenArchiver application:

  1. Registering a new application identity for the archiver in Entra ID.
  2. Granting the application the specific permissions it needs to read mail.
  3. Creating a secure password (a client secret) for the application.
  4. Entering the generated credentials into the OpenArchiver application.

Part 1: Register a New Application in Microsoft Entra ID

First, you will create an "App registration," which acts as an identity for the archiving service within your Microsoft 365 ecosystem.

  1. Sign in to the Microsoft Entra admin center.
  2. In the left-hand navigation pane, go to Identity > Applications > App registrations.
  3. Click the + New registration button at the top of the page.
  4. On the "Register an application" screen:
    • Name: Give the application a descriptive name you will recognize, such as OpenArchiver Service.
    • Supported account types: Select "Accounts in this organizational directory only (Default Directory only - Single tenant)". This is the most secure option.
    • Redirect URI (optional): You can leave this blank.
  5. Click the Register button. You will be taken to the application's main "Overview" page.

Part 2: Grant API Permissions

Next, you must grant the application the specific permissions required to read user profiles and their mailboxes.

  1. From your new application's page, select API permissions from the left-hand menu.
  2. Click the + Add a permission button.
  3. In the "Request API permissions" pane, select Microsoft Graph.
  4. Select Application permissions. This is critical as it allows the service to run in the background without a user being signed in.
  5. In the "Select permissions" search box, find and check the boxes for the following two permissions:
    • Mail.Read
    • User.Read.All
  6. Click the Add permissions button at the bottom.
  7. Crucial Final Step: You will now see the permissions in your list with a warning status. You must grant consent on behalf of your organization. Click the "Grant admin consent for [Your Organization's Name]" button located above the permissions table. Click Yes in the confirmation dialog. The status for both permissions should now show a green checkmark.

Part 3: Create a Client Secret

The client secret is a password that the archiving service will use to authenticate. Treat this with the same level of security as an administrator's password.

  1. In your application's menu, navigate to Certificates & secrets.
  2. Select the Client secrets tab and click + New client secret.
  3. In the pane that appears:
    • Description: Enter a clear description, such as OpenArchiver Key.
    • Expires: Select an expiry duration. We recommend 12 or 24 months. Set a calendar reminder to renew it before it expires to prevent service interruption.
  4. Click Add.
  5. IMMEDIATELY COPY THE SECRET: The secret is now visible in the "Value" column. This is the only time it will be fully displayed. Copy this value now and store it in a secure password manager before navigating away. If you lose it, you must create a new one.

Part 4: Connecting in OpenArchiver

You now have the three pieces of information required to configure the connection.

  1. Navigate to Ingestion Sources: In the OpenArchiver application, go to the Ingestion Sources page.

  2. Create a New Source: Click the "Create New" button.

  3. Fill in the Configuration Details:

    • Name: Give the source a name (e.g., "Microsoft 365 Archive").
    • Provider: Select "Microsoft 365" from the dropdown.
    • Application (Client) ID: Go to the Overview page of your app registration in the Entra admin center and copy this value.
    • Directory (Tenant) ID: This value is also on the Overview page.
    • Client Secret Value: Paste the secret Value (not the Secret ID) that you copied and saved in the previous step.
  4. Save Changes: Click "Save changes".

What Happens Next?

Once the connection is saved, the system will begin the archiving process:

  1. User Discovery: The service will connect to the Microsoft Graph API to get a list of all users in your organization.
  2. Initial Import: The system will begin a background job to import the mailboxes of all discovered users, folder by folder. The status will show as "Importing". This can take a significant amount of time.
  3. Continuous Sync: After the initial import, the status will change to "Active". The system will use Microsoft Graph's delta query feature to efficiently fetch only new or changed emails, ensuring the archive stays up-to-date.
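As a rough sketch of what the continuous sync does under the hood, the incremental fetch corresponds to Microsoft Graph delta requests like the following (USER_ID and ACCESS_TOKEN are placeholders, and "inbox" stands in for each synced folder):

```bash
# The first delta request returns the folder's messages plus an
# @odata.deltaLink to use for the next sync cycle.
curl -s -H "Authorization: Bearer ACCESS_TOKEN" \
  "https://graph.microsoft.com/v1.0/users/USER_ID/mailFolders/inbox/messages/delta"

# On later runs, the saved deltaLink is called instead, and Graph returns
# only messages added, updated, or deleted since the previous sync.
```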

Installation Guide

This guide will walk you through setting up Open Archiver using Docker Compose. This is the recommended method for deploying the application.

Prerequisites

  • Docker and Docker Compose installed on your server or local machine.
  • A server or local machine with at least 2GB of RAM.
  • Git installed on your server or local machine.

1. Clone the Repository

First, clone the Open Archiver repository to your machine:

bash
git clone https://github.com/LogicLabs-OU/OpenArchiver.git
 cd OpenArchiver

2. Configure Your Environment

The application is configured using environment variables. You'll need to create a .env file to store your configuration.

Copy the example environment file for Docker:

bash
cp .env.example.docker .env

Now, open the .env file in a text editor and customize the settings.

Important Configuration

You must change the following placeholder values to secure your instance:

  • POSTGRES_PASSWORD: A strong, unique password for the database.
  • REDIS_PASSWORD: A strong, unique password for the Valkey/Redis service.
  • MEILI_MASTER_KEY: A complex key for Meilisearch.
  • JWT_SECRET: A long, random string for signing authentication tokens.
  • ADMIN_PASSWORD: A strong password for the initial admin user.
  • ENCRYPTION_KEY: A 32-byte hex string for encrypting sensitive data in the database. You can generate one with the following command:
    bash
    openssl rand -hex 32
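The other secrets can be generated the same way. As a convenience sketch, the following prints a fresh random value for each variable listed above; the specific lengths are a suggestion, not a requirement:

```bash
# Print randomly generated values to paste into your .env file.
echo "POSTGRES_PASSWORD=$(openssl rand -base64 24)"
echo "REDIS_PASSWORD=$(openssl rand -base64 24)"
echo "MEILI_MASTER_KEY=$(openssl rand -base64 24)"
echo "JWT_SECRET=$(openssl rand -base64 48)"
echo "ENCRYPTION_KEY=$(openssl rand -hex 32)"
```

Note that DATABASE_URL embeds the PostgreSQL password, so if you change POSTGRES_PASSWORD, update DATABASE_URL to match.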

Storage Configuration

By default, the Docker Compose setup uses local filesystem storage, which is persisted using a Docker volume named archiver-data. This is suitable for most use cases.

If you want to use S3-compatible object storage, change the STORAGE_TYPE to s3 and fill in your S3 credentials (STORAGE_S3_* variables).
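For example, an S3 configuration in your .env might look like the following (the endpoint, bucket, and credentials shown are placeholders):

```bash
STORAGE_TYPE=s3
STORAGE_S3_ENDPOINT=https://s3.example.com
STORAGE_S3_BUCKET=open-archiver
STORAGE_S3_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
STORAGE_S3_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
STORAGE_S3_REGION=us-east-1
STORAGE_S3_FORCE_PATH_STYLE=true
```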

Using External Services

For convenience, the docker-compose.yml file includes services for PostgreSQL, Valkey (Redis), and Meilisearch. However, you can use your own external or managed instances for these services.

To do so:

  1. Update your .env file: Change the host, port, and credential variables to point to your external service instances. For example, you would update DATABASE_URL, REDIS_HOST, and MEILI_HOST.
  2. Modify docker-compose.yml: Remove or comment out the service definitions for postgres, valkey, and meilisearch from your docker-compose.yml file.

This will configure the Open Archiver application to connect to your services instead of starting the default ones.
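For instance, pointing the application at managed services could look like this in .env (hostnames and credentials are placeholders):

```bash
DATABASE_URL=postgresql://dbuser:dbpassword@db.example.com:5432/open_archive
REDIS_HOST=redis.example.com
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
REDIS_TLS_ENABLED=true
MEILI_HOST=https://meilisearch.example.com
```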

Environment Variable Reference

Here is a complete list of environment variables available for configuration:

Application Settings

Variable | Description | Default Value
NODE_ENV | The application environment. | development
PORT_BACKEND | The port for the backend service. | 4000
PORT_FRONTEND | The port for the frontend service. | 3000

Docker Compose Service Configuration

These variables are used by docker-compose.yml to configure the services.

Variable | Description | Default Value
POSTGRES_DB | The name of the PostgreSQL database. | open_archive
POSTGRES_USER | The username for the PostgreSQL database. | admin
POSTGRES_PASSWORD | The password for the PostgreSQL database. | password
DATABASE_URL | The connection URL for the PostgreSQL database. | postgresql://admin:password@postgres:5432/open_archive
MEILI_MASTER_KEY | The master key for Meilisearch. | aSampleMasterKey
MEILI_HOST | The host for the Meilisearch service. | http://meilisearch:7700
REDIS_HOST | The host for the Valkey (Redis) service. | valkey
REDIS_PORT | The port for the Valkey (Redis) service. | 6379
REDIS_PASSWORD | The password for the Valkey (Redis) service. | defaultredispassword
REDIS_TLS_ENABLED | Enable or disable TLS for Redis. | false

Storage Settings

Variable | Description | Default Value
STORAGE_TYPE | The storage backend to use (local or s3). | local
STORAGE_LOCAL_ROOT_PATH | The root path for local file storage. | /var/data/open-archiver
STORAGE_S3_ENDPOINT | The endpoint for S3-compatible storage. |
STORAGE_S3_BUCKET | The bucket name for S3-compatible storage. |
STORAGE_S3_ACCESS_KEY_ID | The access key ID for S3-compatible storage. |
STORAGE_S3_SECRET_ACCESS_KEY | The secret access key for S3-compatible storage. |
STORAGE_S3_REGION | The region for S3-compatible storage. |
STORAGE_S3_FORCE_PATH_STYLE | Force path-style addressing for S3. | false

Security & Authentication

Variable | Description | Default Value
JWT_SECRET | A secret key for signing JWT tokens. | a-very-secret-key-that-you-should-change
JWT_EXPIRES_IN | The expiration time for JWT tokens. | 7d
ADMIN_EMAIL | The email for the initial admin user. | admin@local.com
ADMIN_PASSWORD | The password for the initial admin user. | a_strong_password_that_you_should_change
SUPER_API_KEY | An API key with super admin privileges. |
ENCRYPTION_KEY | A 32-byte hex string for encrypting sensitive data. |

3. Run the Application

Once you have configured your .env file, you can start all the services using Docker Compose:

bash
docker compose up -d

This command will:

  • Pull the required Docker images for the frontend, backend, database, and other services.
  • Create and start the containers in the background (-d flag).
  • Create the persistent volumes for your data.

You can check the status of the running containers with:

bash
docker compose ps
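If a container fails to start or the application is unreachable, the service logs usually explain why:

```bash
# Follow the logs of all services
docker compose logs -f

# Show only the most recent log lines
docker compose logs --tail=100
```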

4. Access the Application

Once the services are running, you can access the Open Archiver web interface by navigating to http://localhost:3000 in your web browser.

You can log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured in your .env file.

5. Next Steps

After successfully deploying and logging into Open Archiver, the next step is to configure your ingestion sources to start archiving emails.

Updating Your Installation

To update your Open Archiver instance to the latest version, run the following commands:

bash
# Pull the latest changes from the repository
git pull

# Pull the latest Docker images
docker compose pull

# Restart the services with the new images
docker compose up -d