Archived Email Service API

The Archived Email Service is responsible for retrieving archived emails and their details from the database and storage.

Endpoints

All endpoints in this service require authentication.

GET /api/v1/archived-emails/ingestion-source/:ingestionSourceId

Retrieves a paginated list of archived emails for a specific ingestion source.

Access: Authenticated

URL Parameters

Parameter | Type | Description
ingestionSourceId | string | The ID of the ingestion source to get emails for.

Query Parameters

Parameter | Type | Description | Default
page | number | The page number for pagination. | 1
limit | number | The number of items per page. | 10

Responses

  • 200 OK: A paginated list of archived emails.

    json
    {
       	"items": [
       		{
       			"id": "email-id",
     			...
       		}
       	]
       }
    • 404 Not Found: The archived email with the specified ID was not found.

    • 500 Internal Server Error: An unexpected error occurred.

    Service Methods

    getArchivedEmails(ingestionSourceId: string, page: number, limit: number): Promise<PaginatedArchivedEmails>

    Retrieves a paginated list of archived emails from the database for a given ingestion source.

    • ingestionSourceId: The ID of the ingestion source.
    • page: The page number for pagination.
    • limit: The number of items per page.
    • Returns: A promise that resolves to a PaginatedArchivedEmails object.

    getArchivedEmailById(emailId: string): Promise<ArchivedEmail | null>

    Retrieves a single archived email by its ID, including its raw content and attachments.

    • emailId: The ID of the archived email.
    • Returns: A promise that resolves to an ArchivedEmail object or null if not found.
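    As an illustration, the pagination parameters above can be assembled into a request URL. This is a sketch only: BASE_URL and the helper name are hypothetical, not part of the API.

    ```typescript
    // Hypothetical base URL for your Open Archiver instance.
    const BASE_URL = "https://archiver.example.com/api/v1";

    interface ArchivedEmailsQuery {
    	page?: number; // server defaults to 1 when omitted
    	limit?: number; // server defaults to 10 when omitted
    }

    // Builds the URL for GET /archived-emails/ingestion-source/:ingestionSourceId.
    function archivedEmailsUrl(ingestionSourceId: string, query: ArchivedEmailsQuery = {}): string {
    	const params = new URLSearchParams();
    	if (query.page !== undefined) params.set("page", String(query.page));
    	if (query.limit !== undefined) params.set("limit", String(query.limit));
    	const qs = params.toString();
    	return `${BASE_URL}/archived-emails/ingestion-source/${encodeURIComponent(ingestionSourceId)}${qs ? `?${qs}` : ""}`;
    }
    ```

    Omitting page and limit leaves the query string empty, so the server-side defaults apply.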

    Auth Service API

    The Auth Service is responsible for handling user authentication, including login and token verification.

    Endpoints

    POST /api/v1/auth/login

    Authenticates a user and returns a JWT if the credentials are valid.

    Access: Public

    Rate Limiting: This endpoint is rate-limited to prevent brute-force attacks.

    Request Body

      Field | Type | Description
      email | string | The user's email address.
      password | string | The user's password.

    Responses

    • 200 OK: Authentication successful.

      json
      {
         	"accessToken": "your.jwt.token",
         	"user": {
         		"id": "user-id",
       			...
         }
      • 500 Internal Server Error: An unexpected error occurred.

        json
        {
         	"message": "An internal server error occurred"
         }

      Service Methods

      verifyPassword(password: string, hash: string): Promise<boolean>

      Compares a plain-text password with a hashed password to verify its correctness.

      • password: The plain-text password.
      • hash: The hashed password to compare against.
      • Returns: A promise that resolves to true if the password is valid, otherwise false.

      login(email: string, password: string): Promise<LoginResponse | null>

      Handles the user login process. It finds the user by email, verifies the password, and generates a JWT upon successful authentication.

      • email: The user's email.
      • password: The user's password.
      • Returns: A promise that resolves to a LoginResponse object containing the accessToken and user details, or null if authentication fails.

      verifyToken(token: string): Promise<AuthTokenPayload | null>

      Verifies the authenticity and expiration of a JWT.

      • token: The JWT string to verify.
      • Returns: A promise that resolves to the token's AuthTokenPayload if valid, otherwise null.
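      The accessToken returned by login can be decoded client-side for debugging. A hedged sketch, assuming a Node environment (for Buffer): this only base64-decodes the payload and performs no signature check — signature and expiry verification remain the job of the server-side verifyToken.

      ```typescript
      // Decodes the middle (payload) segment of a JWT. No signature verification.
      function decodeJwtPayload(token: string): Record<string, unknown> {
      	const parts = token.split(".");
      	if (parts.length !== 3) throw new Error("not a JWT");
      	return JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
      }

      // exp is a UTC epoch-seconds claim (RFC 7519); a missing exp is treated as non-expiring here.
      function isExpired(payload: { exp?: number }, nowSeconds = Math.floor(Date.now() / 1000)): boolean {
      	return payload.exp !== undefined && payload.exp <= nowSeconds;
      }
      ```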

      API Authentication

      To access protected API endpoints, you need to include a user-generated API key in the X-API-KEY header of your requests.

      1. Creating an API Key

      You can create, manage, and view your API keys through the application's user interface.

      1. Navigate to Settings > API Keys in the dashboard.
      2. Click the "Generate API Key" button.
      3. Provide a descriptive name for your key and select an expiration period.
      4. The new API key will be displayed. Copy this key immediately and store it in a secure location. You will not be able to see it again.

      2. Making Authenticated Requests

      Once you have your API key, you must include it in the X-API-KEY header of all subsequent requests to protected API endpoints.

      Example:

      http
      GET /api/v1/dashboard/stats
       X-API-KEY: a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2

      If the API key is missing, expired, or invalid, the API will respond with a 401 Unauthorized status code.
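      The header-building step above can be sketched in TypeScript. The withApiKey helper and RequestOptions type are illustrative, not part of any SDK; only the X-API-KEY header name comes from this guide.

      ```typescript
      interface RequestOptions {
      	method?: string;
      	headers?: Record<string, string>;
      }

      // Returns a copy of the request options with the X-API-KEY header attached.
      function withApiKey(apiKey: string, init: RequestOptions = {}): RequestOptions {
      	return { ...init, headers: { ...(init.headers ?? {}), "X-API-KEY": apiKey } };
      }

      // Usage (requires a running instance, so shown but not executed):
      // const res = await fetch(`${baseUrl}/dashboard/stats`, withApiKey(process.env.ARCHIVER_API_KEY!));
      // A 401 response means the key is missing, expired, or invalid.
      ```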


      Dashboard Service API

      The Dashboard Service provides endpoints for retrieving statistics and data for the main dashboard.

      Endpoints

      All endpoints in this service require authentication.

      GET /api/v1/dashboard/stats

      Retrieves overall statistics, including the total number of archived emails, total storage used, and the number of failed ingestions in the last 7 days.

      Access: Authenticated

      Responses

      • 200 OK: An object containing the dashboard statistics.

        json
        {
           	"totalEmailsArchived": 12345,
           	"totalStorageUsed": 54321098,
           	"failedIngestionsLast7Days": 3
           		...
           		}
           	]
           }
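        A client might render these stats in human-readable form. A sketch, assuming totalStorageUsed is a byte count (which the sample value suggests — confirm against your instance):

        ```typescript
        // Formats a byte count using binary (1024-based) units.
        function formatBytes(bytes: number): string {
        	const units = ["B", "KB", "MB", "GB", "TB"];
        	let value = bytes;
        	let unit = 0;
        	while (value >= 1024 && unit < units.length - 1) {
        		value /= 1024;
        		unit++;
        	}
        	return `${value.toFixed(1)} ${units[unit]}`;
        }
        ```

        With the sample payload above, formatBytes(54321098) renders the storage figure as "51.8 MB".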

        API Overview

        Welcome to the Open Archiver API documentation. This section provides detailed information about the available API endpoints.

        All API endpoints are prefixed with /api/v1.

        Authentication

        Before making requests to protected endpoints, you must authenticate with the API. See the Authentication Guide for details on how to obtain and use API tokens.

        API Services


        Ingestion Service API

        The Ingestion Service manages ingestion sources, which are configurations for connecting to email providers and importing emails.

        Endpoints

        All endpoints in this service require authentication.

        POST /api/v1/ingestion-sources

        Creates a new ingestion source.

        Access: Authenticated

        Request Body

        The request body should be a CreateIngestionSourceDto object.

        typescript
        interface CreateIngestionSourceDto {
         	name: string;
         	provider: 'google' | 'microsoft' | 'generic_imap';
         	providerConfig: IngestionCredentials;
         	// ...
         	providerConfig?: IngestionCredentials;
         	status?: 'pending_auth' | 'auth_success' | 'importing' | 'active' | 'paused' | 'error';
         }

        Responses

        • 200 OK: The updated ingestion source object.
        • 404 Not Found: Ingestion source not found.
        • 500 Internal Server Error: An unexpected error occurred.

        DELETE /api/v1/ingestion-sources/:id

        Deletes an ingestion source and all associated data.

        Access: Authenticated

        URL Parameters

        Parameter | Type | Description
        id | string | The ID of the ingestion source.

        Responses

        • 204 No Content: The ingestion source was deleted successfully.
        • 404 Not Found: Ingestion source not found.
        • 500 Internal Server Error: An unexpected error occurred.

        POST /api/v1/ingestion-sources/:id/import

        Triggers the initial import process for an ingestion source.

        Access: Authenticated

        URL Parameters

        Parameter | Type | Description
        id | string | The ID of the ingestion source.

        Responses

        • 202 Accepted: The initial import was triggered successfully.
        • 404 Not Found: Ingestion source not found.
        • 500 Internal Server Error: An unexpected error occurred.

        POST /api/v1/ingestion-sources/:id/pause

        Pauses an active ingestion source.

        Access: Authenticated

        URL Parameters

        Parameter | Type | Description
        id | string | The ID of the ingestion source.

        Responses

        • 200 OK: The updated ingestion source object with a paused status.
        • 404 Not Found: Ingestion source not found.
        • 500 Internal Server Error: An unexpected error occurred.

        POST /api/v1/ingestion-sources/:id/sync

        Triggers a forced synchronization for an ingestion source.

        Access: Authenticated

        URL Parameters

        Parameter | Type | Description
        id | string | The ID of the ingestion source.

        Responses

        • 202 Accepted: The force sync was triggered successfully.
        • 404 Not Found: Ingestion source not found.
        • 500 Internal Server Error: An unexpected error occurred.
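        The lifecycle endpoints above (import, pause, sync) all follow the same POST /ingestion-sources/:id/&lt;action&gt; shape, which a client can exploit. A sketch — BASE_URL and the helper are hypothetical; the paths and method come from the endpoint list above.

        ```typescript
        // Hypothetical base URL for your Open Archiver instance.
        const BASE_URL = "https://archiver.example.com/api/v1";

        type LifecycleAction = "import" | "pause" | "sync";

        // Describes the request for one of the ingestion-source lifecycle endpoints.
        function lifecycleRequest(sourceId: string, action: LifecycleAction): { method: "POST"; url: string } {
        	return {
        		method: "POST",
        		url: `${BASE_URL}/ingestion-sources/${encodeURIComponent(sourceId)}/${action}`,
        	};
        }
        ```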

        Rate Limiting

        The API implements rate limiting as a security measure to protect your instance from denial-of-service (DoS) and brute-force attacks. This is a crucial feature for maintaining the security and stability of the application.

        How It Works

        The rate limiter restricts the number of requests an IP address can make within a specific time frame. These limits are configurable via environment variables to suit your security needs.

        By default, the limits are:

        • 100 requests per 1 minute per IP address.

        If this limit is exceeded, the API will respond with an HTTP 429 Too Many Requests status code.

        Response Body

        When an IP address is rate-limited, the API will return a JSON response with the following format:

        json
        {
         	"status": 429,
         	"message": "Too many requests from this IP, please try again after 15 minutes"
         }

        Configuration

        You can customize the rate-limiting settings by setting the following environment variables in your .env file:

        • RATE_LIMIT_WINDOW_MS: The time window in milliseconds. Defaults to 60000 (1 minute).
        • RATE_LIMIT_MAX_REQUESTS: The maximum number of requests allowed per IP address within the time window. Defaults to 100.

        Handling Rate Limits

        If you are developing a client that interacts with the API, you should handle rate limiting gracefully:

        1. Check the Status Code: Monitor for a 429 HTTP status code in responses.
        2. Implement a Retry Mechanism: When you receive a 429 response, it is best practice to wait before retrying the request. Implementing an exponential backoff strategy is recommended.
        3. Check Headers: The response will include the following standard headers to help you manage your request rate:
          • RateLimit-Limit: The maximum number of requests allowed in the current window.
          • RateLimit-Remaining: The number of requests you have left in the current window.
          • RateLimit-Reset: The time when the rate limit window will reset, in UTC epoch seconds.
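        The retry guidance above can be sketched as a delay calculator. This is a hypothetical helper: the header semantics are as documented, while the 60-second cap and base delay are assumptions.

        ```typescript
        // Picks a wait time after a 429 response. Prefers the RateLimit-Reset
        // header (UTC epoch seconds); otherwise falls back to exponential backoff.
        function retryDelayMs(
        	attempt: number, // 0-based retry attempt
        	rateLimitReset?: number, // RateLimit-Reset header value, if present
        	nowMs: number = Date.now()
        ): number {
        	if (rateLimitReset !== undefined) {
        		const untilReset = rateLimitReset * 1000 - nowMs;
        		if (untilReset > 0) return untilReset;
        	}
        	// Exponential backoff: 1s, 2s, 4s, ... capped at 60s (assumed cap).
        	return Math.min(1000 * 2 ** attempt, 60_000);
        }
        ```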

        Excluded Endpoints

        Certain essential endpoints are excluded from rate limiting to ensure the application's UI remains responsive. These are:

        • /auth/status
        • /settings/system

        These endpoints can be called as needed without affecting your rate limit count.

      • Save the modified docker-compose.yml file.

      • By removing these sections, you allow Coolify to automatically create and manage the necessary networks, ensuring that all services can communicate with each other and are correctly exposed through Coolify's reverse proxy.

        After making these changes, you can proceed with deploying your application on Coolify as you normally would.

        Where is my data stored (when using local storage and Docker)?

        If you are using local storage for your emails, your docker-compose.yml stores the data in what Docker calls a "named volume" (archiver-data). That's why you aren't seeing the files in the ./data/open-archiver directory you created.

        1. List all Docker volumes:

        Run this command to see all the volumes on your system:

        bash
        docker volume ls
        2. Identify the correct volume:

        Look through the list for a volume name that ends with _archiver-data. The part before it will be your project's directory name. For example, if your project is in a folder named OpenArchiver, the volume will be openarchiver_archiver-data. Note that it can also be a randomly generated hash.

        3. Inspect the correct volume:

        Once you've identified the correct volume name, use it in the inspect command. For example:

        bash
        docker volume inspect <your_volume_name_here>

        This will give you the correct Mountpoint path where your data is being stored. It will look something like this (the exact path will vary depending on your system):

        json
        {
        	"CreatedAt": "2025-07-25T11:22:19Z",
        	"Driver": "local",
        	"Labels": {
        		"com.docker.compose.config-hash": "---",
        		"com.docker.compose.project": "---",
        		"com.docker.compose.version": "2.38.2",
        		"com.docker.compose.volume": "us8wwos0o4ok4go4gc8cog84_archiver-data"
        	},
        	"Mountpoint": "/var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data",
        	"Name": "us8wwos0o4ok4go4gc8cog84_archiver-data",
        	"Options": null,
        	"Scope": "local"
        }

        In this example, the data is located at /var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data. You can then cd into that directory to see your files.

        To save data to a specific folder

        To save the data to a specific folder on your machine, you'll need to change your docker-compose.yml to use a "bind mount" instead of a named volume.

        Here’s how you can do it:

        1. Edit docker-compose.yml:

        Open the docker-compose.yml file and find the open-archiver service. You're going to change the volumes section.

        Change this:

        yaml
        services:
            open-archiver:
                # ... other config
                volumes:
                    - archiver-data:/var/data/open-archiver

        To this:

        yaml
        services:
            open-archiver:
                # ... other config
                volumes:
                    - ./data/open-archiver:/var/data/open-archiver

        You'll also want to remove the archiver-data volume definition at the bottom of the file, since it's no longer needed.

        Remove this whole block:

        yaml
        volumes:
            # ... other volumes
            archiver-data:
                driver: local
        2. Restart your containers:

        After you've saved the changes, run the following command in your terminal to apply them. The --force-recreate flag will ensure the container is recreated with the new volume settings.

        bash
        docker-compose up -d --force-recreate

        After this, any new data will be saved directly into the ./data/open-archiver folder in your project directory.

        Troubleshooting

        403 Cross-Site POST Forbidden Error

        If you are running the application behind a reverse proxy or have mapped the application to a different port (e.g., 3005:3000), you may encounter a 403 Cross-site POST from submissions are forbidden error when uploading files.

        To resolve this, you must set the ORIGIN environment variable to the URL of your application. This ensures that the backend can verify the origin of requests and prevent cross-site request forgery (CSRF) attacks.

        Add the following line to your .env file, replacing <your_host> and <your_port> with your specific values:

        bash
        ORIGIN=http://<your_host>:<your_port>

        For example, if your application is accessible at http://localhost:3005, you would set the variable as follows:

        bash
        ORIGIN=http://localhost:3005

        After adding the ORIGIN variable, restart your Docker containers for the changes to take effect:

        bash
        docker-compose up -d --force-recreate

        This will ensure that your file uploads are correctly authorized.


        Upgrading Meilisearch

        Meilisearch, the search engine used by Open Archiver, requires a manual data migration process when upgrading to a new version. This is because Meilisearch databases are only compatible with the specific version that created them.

        If an Open Archiver upgrade includes a major Meilisearch version change, you will need to migrate your search index by following the process below.

        Migration Process Overview

        For self-hosted instances using Docker Compose (as recommended), the migration process involves creating a data dump from your current Meilisearch instance, upgrading the Docker image, and then importing that dump into the new version.

        Step 1: Create a Dump

        Before upgrading, you must create a dump of your existing Meilisearch data. You can do this by sending a POST request to the /dumps endpoint of the Meilisearch API.

        1. Find your Meilisearch container name:

          bash
          docker compose ps

          Look for the service name that corresponds to Meilisearch, usually meilisearch.

        2. Execute the dump command: You will need your Meilisearch Admin API key, which can be found in your .env file as MEILI_MASTER_KEY.

          bash
          curl -X POST 'http://localhost:7700/dumps' \
            -H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"

          This will start the dump creation process. The dump file will be created inside the meili_data volume used by the Meilisearch container.

        3. Monitor the dump status: The dump creation request returns a taskUid. You can use this to check the status of the dump.

          For more details on dump and import, see the official Meilisearch documentation.

        Step 2: Upgrade Your Open Archiver Instance

        Once the dump is successfully created, you can proceed with the standard Open Archiver upgrade process.

        1. Pull the latest changes and Docker images:

          bash
          git pull
          docker compose pull
        2. Stop the running services:

          bash
          docker compose down

        Step 3: Import the Dump

        Now, you need to restart the services while telling Meilisearch to import from your dump file.

        1. Modify docker-compose.yml: You need to temporarily add the --import-dump flag to the Meilisearch service command. Find the meilisearch service in your docker-compose.yml and modify the command section.

          You will need the name of your dump file. It will be a .dump file located in the directory mapped to /meili_data inside the container.

          yaml
          services:
              meilisearch:
                  # ... other service config
                  command:
                      [
                          '--master-key=${MEILI_MASTER_KEY}',
                          '--env=production',
                          '--import-dump=/meili_data/dumps/YOUR_DUMP_FILE.dump',
                      ]
        2. Restart the services:

          bash
          docker compose up -d

          Meilisearch will now start and import the data from the dump file. This may take some time depending on the size of your index.

        Step 4: Clean Up

        Once the import is complete and you have verified that your search is working correctly, you should remove the --import-dump flag from your docker-compose.yml to prevent it from running on every startup.

        1. Remove the --import-dump line from the command section of the meilisearch service in docker-compose.yml.
        2. Restart the services one last time:
          bash
          docker compose up -d

        Your Meilisearch instance is now upgraded and running with your migrated data.

        For more advanced scenarios or troubleshooting, please refer to the official Meilisearch migration guide.


        Upgrading Your Instance

        This guide provides instructions for upgrading your Open Archiver instance to the latest version.

        Checking for New Versions

        Open Archiver automatically checks for new versions and will display a notification in the footer of the web interface when an update is available. You can find a list of all releases and their release notes on the GitHub Releases page.

        Upgrading Your Instance

        To upgrade your Open Archiver instance, follow these steps:

        1. Pull the latest changes from the repository:

          bash
          git pull
        2. Pull the latest Docker images:

          bash
          docker compose pull
        3. Restart the services with the new images:

          bash
          docker compose up -d

        This will restart your Open Archiver instance with the latest version of the application.

        Migrating Data

        When you upgrade to a new version, database migrations are applied automatically when the application starts up. This ensures that your database schema is always up-to-date with the latest version of the application.

        No manual intervention is required for database migrations.

        Upgrading Meilisearch

        When an Open Archiver update includes a major version change for Meilisearch, you will need to manually migrate your search data. This process is not covered by the standard upgrade commands.

        For detailed instructions, please see the Meilisearch Upgrade Guide.


        Get Started 👋

        Welcome to Open Archiver! This guide will help you get started with setting up and using the platform.

        What is Open Archiver? 🛡️

        A secure, sovereign, and affordable open-source platform for email archiving and eDiscovery.

Open Archiver provides a robust, self-hosted solution for archiving, storing, indexing, and searching emails from major platforms, including Google Workspace (Gmail), Microsoft 365, and generic IMAP-enabled email inboxes. Use Open Archiver to keep a permanent, tamper-proof record of your communication history, free from vendor lock-in.

        Key Features ✨

        • Universal Ingestion: Connect to Google Workspace, Microsoft 365, and standard IMAP servers to perform initial bulk imports and maintain continuous, real-time synchronization.
        • Secure & Efficient Storage: Emails are stored in the standard .eml format. The system uses deduplication and compression to minimize storage costs. All data is encrypted at rest.
• Pluggable Storage Backends: Supports both local filesystem storage and S3-compatible object storage (such as AWS S3 or MinIO).
        • Powerful Search & eDiscovery: A high-performance search engine indexes the full text of emails and attachments (PDF, DOCX, etc.).
        • Compliance & Retention: Define granular retention policies to automatically manage the lifecycle of your data. Place legal holds on communications to prevent deletion during litigation (TBD).
        • Comprehensive Auditing: An immutable audit trail logs all system activities, ensuring you have a clear record of who accessed what and when (TBD).

        Installation 🚀

        To get your own instance of Open Archiver running, follow our detailed installation guide:

        Data Source Configuration 🔌

        After deploying the application, you will need to configure one or more ingestion sources to begin archiving emails. Follow our detailed guides to connect to your email provider:

        Contributing ❤️

        We welcome contributions from the community!

        • Reporting Bugs: If you find a bug, please open an issue on our GitHub repository.
        • Suggesting Enhancements: Have an idea for a new feature? We'd love to hear it. Open an issue to start the discussion.
        • Code Contributions: If you'd like to contribute code, please fork the repository and submit a pull request.

        Please read our CONTRIBUTING.md file for more details on our code of conduct and the process for submitting pull requests.

        + \ No newline at end of file diff --git a/services/iam-service.html b/services/iam-service.html index 0d5eb89..300bf38 100644 --- a/services/iam-service.html +++ b/services/iam-service.html @@ -9,9 +9,9 @@ - + - + @@ -20,7 +20,7 @@ -
        Skip to content

        IAM Policies

        This document provides a guide to creating and managing IAM policies in Open Archiver. It is intended for developers and administrators who need to configure granular access control for users and roles.

        Policy Structure

        IAM policies are defined as an array of JSON objects, where each object represents a single permission rule. The structure of a policy object is as follows:

        json
        {
         	"action": "read" OR ["read", "create"],
         	"subject": "ingestion" OR ["ingestion", "dashboard"],
         	"conditions": {
        @@ -106,7 +106,7 @@
         		}
         	}
         ]
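As a sketch of the structure above, a policy rule can be modeled in TypeScript. The type itself is an assumption for illustration (the service consumes plain JSON); the field names match the JSON example:

```typescript
// Hypothetical TypeScript model of the policy-rule shape described above.
// Field names mirror the JSON example; the type is illustrative only.
type PolicyRule = {
	action: string | string[]; // e.g. "read" or ["read", "create"]
	subject: string | string[]; // e.g. "ingestion" or ["ingestion", "dashboard"]
	conditions?: Record<string, unknown>; // optional attribute-based constraints
};

// Example policy: allow reading and creating ingestion sources.
const policies: PolicyRule[] = [
	{
		action: ["read", "create"],
		subject: "ingestion",
	},
];
```

Because `action` and `subject` accept either a single value or an array, a rule can grant one permission or several in a single object.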
        - + \ No newline at end of file diff --git a/services/iam-service/iam-policy.html b/services/iam-service/iam-policy.html index 05d7526..6e78253 100644 --- a/services/iam-service/iam-policy.html +++ b/services/iam-service/iam-policy.html @@ -9,9 +9,9 @@ - + - + @@ -20,7 +20,7 @@ -
        Skip to content

        IAM Policy

        This document provides a guide to creating and managing IAM policies in Open Archiver. It is intended for developers and administrators who need to configure granular access control for users and roles.

        Policy Structure

        IAM policies are defined as an array of JSON objects, where each object represents a single permission rule. The structure of a policy object is as follows:

        json
        {
         	"action": "read" OR ["read", "create"],
         	"subject": "ingestion" OR ["ingestion", "dashboard"],
         	"conditions": {
        @@ -106,7 +106,7 @@
         		}
         	}
         ]
        - + \ No newline at end of file diff --git a/services/index.html b/services/index.html index 700af56..7eeeeb4 100644 --- a/services/index.html +++ b/services/index.html @@ -9,9 +9,9 @@ - + - + @@ -20,8 +20,8 @@ - - + + \ No newline at end of file diff --git a/services/storage-service.html b/services/storage-service.html index fcb7e8b..7ca34e9 100644 --- a/services/storage-service.html +++ b/services/storage-service.html @@ -9,9 +9,9 @@ - + - + @@ -20,7 +20,7 @@ -
        Skip to content

        Pluggable Storage Service (StorageService)

        Overview

        The StorageService provides a unified, abstract interface for handling file storage across different backends. Its primary purpose is to decouple the application's core logic from the underlying storage technology. This design allows administrators to switch between storage providers (e.g., from the local filesystem to an S3-compatible object store) with only a configuration change, requiring no modifications to the application code.

        The service is built around a standardized IStorageProvider interface, which guarantees that all storage providers have a consistent API for common operations like storing, retrieving, and deleting files.

        Configuration

        The StorageService is configured via environment variables in the .env file. You must specify the storage backend you wish to use and provide the necessary credentials and settings for it.

        1. Choosing the Backend

        The STORAGE_TYPE variable determines which provider the service will use.

        • STORAGE_TYPE=local: Uses the local server's filesystem.
        • STORAGE_TYPE=s3: Uses an S3-compatible object storage service (e.g., AWS S3, MinIO, Google Cloud Storage).

        2. Local Filesystem Configuration

        When STORAGE_TYPE is set to local, you must also provide the root path where files will be stored.

        env
        # .env
         STORAGE_TYPE=local
         STORAGE_LOCAL_ROOT_PATH=/var/data/open-archiver
        • STORAGE_LOCAL_ROOT_PATH: The absolute path on the server where the archive will be created. The service will create subdirectories within this path as needed.
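As a minimal sketch, the local-backend settings above might be resolved like this. The variable names `STORAGE_TYPE` and `STORAGE_LOCAL_ROOT_PATH` come from the example; the validation logic is an assumption, not the actual implementation:

```typescript
// Sketch (assumption): resolve the local-backend root path from the environment.
function resolveLocalRoot(env: NodeJS.ProcessEnv): string {
	if (env.STORAGE_TYPE !== "local") {
		throw new Error("Local root is only used when STORAGE_TYPE=local");
	}
	const root = env.STORAGE_LOCAL_ROOT_PATH;
	if (!root || !root.startsWith("/")) {
		// The docs require an absolute path on the server.
		throw new Error("STORAGE_LOCAL_ROOT_PATH must be an absolute path");
	}
	return root;
}
```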

        3. S3-Compatible Storage Configuration

        When STORAGE_TYPE is set to s3, you must provide the credentials and endpoint for your object storage provider.

        env
        # .env
         STORAGE_TYPE=s3
        @@ -54,7 +54,7 @@
         		}
         	}
         }

        API Reference

        The StorageService implements the IStorageProvider interface. All methods are asynchronous and return a Promise.


        put(path, content)

        Stores a file at the specified path. If a file already exists at that path, it will be overwritten.

        • path: string: A unique identifier for the file, including its directory structure (e.g., "user-123/emails/message-abc.eml").
        • content: Buffer | NodeJS.ReadableStream: The content of the file. It can be a Buffer for small files or a ReadableStream for large files to ensure memory efficiency.
        • Returns: Promise<void> - A promise that resolves when the file has been successfully stored.

        get(path)

        Retrieves a file from the specified path as a readable stream.

        • path: string: The unique identifier of the file to retrieve.
        • Returns: Promise<NodeJS.ReadableStream> - A promise that resolves with a readable stream of the file's content.
        • Throws: An Error if the file is not found at the specified path.

        delete(path)

        Deletes a file from the storage backend.

        • path: string: The unique identifier of the file to delete.
        • Returns: Promise<void> - A promise that resolves when the file is deleted. If the file does not exist, the promise will still resolve successfully without throwing an error.

        exists(path)

        Checks for the existence of a file.

        • path: string: The unique identifier of the file to check.
        • Returns: Promise<boolean> - A promise that resolves with true if the file exists, and false otherwise.
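The contract above can be illustrated with a small in-memory provider. This is a sketch for understanding the interface, not the real implementation; the class name and internals are assumptions:

```typescript
import { Readable } from "node:stream";

// Illustrative in-memory provider for the IStorageProvider contract
// described above (a sketch, not the actual implementation).
interface IStorageProvider {
	put(path: string, content: Buffer | NodeJS.ReadableStream): Promise<void>;
	get(path: string): Promise<NodeJS.ReadableStream>;
	delete(path: string): Promise<void>;
	exists(path: string): Promise<boolean>;
}

class InMemoryStorageProvider implements IStorageProvider {
	private files = new Map<string, Buffer>();

	async put(path: string, content: Buffer | NodeJS.ReadableStream): Promise<void> {
		if (Buffer.isBuffer(content)) {
			this.files.set(path, content); // overwrites on duplicate path
			return;
		}
		// Streams are drained chunk by chunk for memory efficiency.
		const chunks: Buffer[] = [];
		for await (const chunk of content as AsyncIterable<Buffer | string>) {
			chunks.push(Buffer.from(chunk));
		}
		this.files.set(path, Buffer.concat(chunks));
	}

	async get(path: string): Promise<NodeJS.ReadableStream> {
		const file = this.files.get(path);
		if (!file) throw new Error(`File not found: ${path}`);
		return Readable.from([file]); // emitted as a single chunk
	}

	async delete(path: string): Promise<void> {
		this.files.delete(path); // resolves even if the file is absent
	}

	async exists(path: string): Promise<boolean> {
		return this.files.has(path);
	}
}
```

Note how `delete` resolves for missing files while `get` rejects, matching the behavior documented above.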
        - + \ No newline at end of file diff --git a/user-guides/email-providers/eml.html b/user-guides/email-providers/eml.html index 223e441..9d4c9b2 100644 --- a/user-guides/email-providers/eml.html +++ b/user-guides/email-providers/eml.html @@ -9,9 +9,9 @@ - + - + @@ -20,7 +20,7 @@ -
        Skip to content

        EML Import

        OpenArchiver allows you to import EML files from a zip archive. This is useful for importing emails from a variety of sources, including other email clients and services.

        Preparing the Zip File

To ensure a successful import, compress your .eml files into a single zip file according to the following guidelines:

        • Structure: The zip file can contain any number of .eml files, organized in any folder structure. The folder structure will be preserved in OpenArchiver, so you can use it to organize your emails.
        • Compression: The zip file should be compressed using standard zip compression.

        Here's an example of a valid folder structure:

        archive.zip
         ├── inbox
         │   ├── email-01.eml
         │   └── email-02.eml
        @@ -30,7 +30,7 @@
             ├── nested-folder
             │   └── email-04.eml
             └── email-05.eml
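Following the guidelines above, the importer only considers `.eml` entries and keeps each file's folder. The helper below is hypothetical (its name and return shape are assumptions), shown only to make the path-preservation rule concrete:

```typescript
import * as path from "node:path";

// Hypothetical helper: given the entry names inside the uploaded zip,
// keep only .eml files and report the folder each will land in.
function planEmlImport(entries: string[]): { file: string; folder: string }[] {
	return entries
		.filter((name) => name.toLowerCase().endsWith(".eml"))
		.map((name) => ({
			file: name,
			// The folder structure inside the zip is preserved on import.
			folder: path.posix.dirname(name),
		}));
}
```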

        Creating an EML Ingestion Source

        1. Go to the Ingestion Sources page in the OpenArchiver dashboard.
        2. Click the Create New button.
        3. Select EML Import as the provider.
        4. Enter a name for the ingestion source.
        5. Click the Choose File button and select the zip archive containing your EML files.
        6. Click the Submit button.

        OpenArchiver will then start importing the EML files from the zip archive. The ingestion process may take some time, depending on the size of the archive.

        - + \ No newline at end of file diff --git a/user-guides/email-providers/google-workspace.html b/user-guides/email-providers/google-workspace.html index 23c8015..eb80ad5 100644 --- a/user-guides/email-providers/google-workspace.html +++ b/user-guides/email-providers/google-workspace.html @@ -9,9 +9,9 @@ - + - + @@ -20,8 +20,8 @@ -
        Skip to content

        Connecting to Google Workspace

        This guide provides instructions for Google Workspace administrators to set up a connection that allows the archiving of all user mailboxes within their organization.

        The connection uses a Google Cloud Service Account with Domain-Wide Delegation. This is a secure method that grants the archiving service permission to access user data on behalf of the administrator, without requiring individual user passwords or consent.

        Prerequisites

        • You must have Super Administrator privileges in your Google Workspace account.
        • You must have access to the Google Cloud Console associated with your organization.

        Setup Overview

        The setup process involves three main parts:

        1. Configuring the necessary permissions in the Google Cloud Console.
        2. Authorizing the service account in the Google Workspace Admin Console.
        3. Entering the generated credentials into the OpenArchiver application.

        Part 1: Google Cloud Console Setup

        In this part, you will create a service account and enable the APIs it needs to function.

        1. Create a Google Cloud Project:

          • Go to the Google Cloud Console.
          • If you don't already have one, create a new project for the archiving service (e.g., "Email Archiver").
        2. Enable Required APIs:

          • In your selected project, navigate to the "APIs & Services" > "Library" section.
          • Search for and enable the following two APIs:
            • Gmail API
            • Admin SDK API
        3. Create a Service Account:

          • Navigate to "IAM & Admin" > "Service Accounts".
          • Click "Create Service Account".
          • Give the service account a name (e.g., email-archiver-service) and a description.
          • Click "Create and Continue". You do not need to grant this service account any roles on the project. Click "Done".
        4. Generate a JSON Key:

          • Find the service account you just created in the list.
          • Click the three-dot menu under "Actions" and select "Manage keys".
          • Click "Add Key" > "Create new key".
          • Select JSON as the key type and click "Create".
          • A JSON file will be downloaded to your computer. Keep this file secure, as it contains private credentials. You will need the contents of this file in Part 3.

        Troubleshooting

        Error: "iam.disableServiceAccountKeyCreation"

        If you receive an error message stating The organization policy constraint 'iam.disableServiceAccountKeyCreation' is enforced when trying to create a JSON key, it means your Google Cloud organization has a policy preventing the creation of new service account keys.

        To resolve this, you must have Organization Administrator permissions.

        1. Navigate to your Organization: In the Google Cloud Console, use the project selector at the top of the page to select your organization node (it usually has a building icon).
        2. Go to IAM: From the navigation menu, select "IAM & Admin" > "IAM".
        3. Edit Your Permissions: Find your user account in the list and click the pencil icon to edit roles. Add the following two roles:
          • Organization Policy Administrator
  • Organization Administrator
  Note: These roles are only available at the organization level, not the project level.
        4. Modify the Policy:
          • Navigate to "IAM & Admin" > "Organization Policies".
          • In the filter box, search for the policy "iam.disableServiceAccountKeyCreation".
          • Click on the policy to edit it.
          • You can either disable the policy entirely (if your security rules permit) or add a rule to exclude the specific project you are using for the archiver from this policy.
        5. Retry Key Creation: Once the policy is updated, return to your project and you should be able to generate the JSON key as described in Part 1.

        Part 2: Grant Domain-Wide Delegation

        Now, you will authorize the service account you created to access data from your Google Workspace.

        1. Get the Service Account's Client ID:

          • Go back to the list of service accounts in the Google Cloud Console.
          • Click on the service account you created.
          • Under the "Details" tab, find and copy the Unique ID (this is the Client ID).
        2. Authorize the Client in Google Workspace:

          • Go to your Google Workspace Admin Console at admin.google.com.
          • Navigate to Security > Access and data control > API controls.
          • Under the "Domain-wide Delegation" section, click "Manage Domain-wide Delegation".
          • Click "Add new".
        3. Enter Client Details and Scopes:

          • In the Client ID field, paste the Unique ID you copied from the service account.
          • In the OAuth scopes field, paste the following two scopes exactly as they appear, separated by a comma:
            https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/gmail.readonly
          • Click "Authorize".

        The service account is now permitted to list users and read their email data across your domain.
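For reference, the exact comma-separated value to paste into the "OAuth scopes" field can be produced like this (a trivial sketch; the scope URLs are the two listed above):

```typescript
// The two OAuth scopes required for domain-wide delegation, as listed above.
const scopes = [
	"https://www.googleapis.com/auth/admin.directory.user.readonly",
	"https://www.googleapis.com/auth/gmail.readonly",
];

// The Admin Console expects them comma-separated, with no spaces.
const scopesField = scopes.join(",");
```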


        Part 3: Connecting in OpenArchiver

        Finally, you will provide the generated credentials to the application.

        1. Navigate to Ingestion Sources: From the main dashboard, go to the Ingestion Sources page.

        2. Create a New Source: Click the "Create New" button.

        3. Fill in the Configuration Details:

          • Name: Give the source a name (e.g., "Google Workspace Archive").
          • Provider: Select "Google Workspace" from the dropdown.
          • Service Account Key (JSON): Open the JSON file you downloaded in Part 1. Copy the entire content of the file and paste it into this text area.
          • Impersonated Admin Email: Enter the email address of a Super Administrator in your Google Workspace (e.g., admin@your-domain.com). The service will use this user's authority to discover all other users.
        4. Save Changes: Click "Save changes".

        What Happens Next?

        Once the connection is saved and verified, the system will begin the archiving process:

        1. User Discovery: The service will first connect to the Admin SDK to get a list of all active users in your Google Workspace.
        2. Initial Import: The system will then start a background job to import the mailboxes of all discovered users. The status will show as "Importing". This can take a significant amount of time depending on the number of users and the size of their mailboxes.
        3. Continuous Sync: After the initial import is complete, the status will change to "Active". The system will then periodically check each user's mailbox for new emails and archive them automatically.
        + \ No newline at end of file diff --git a/user-guides/email-providers/imap.html b/user-guides/email-providers/imap.html index c5f5068..cf71155 100644 --- a/user-guides/email-providers/imap.html +++ b/user-guides/email-providers/imap.html @@ -9,9 +9,9 @@ - + - + @@ -20,8 +20,8 @@ -
        Skip to content

        Connecting to a Generic IMAP Server

        This guide will walk you through connecting a standard IMAP email account as an ingestion source. This allows you to archive emails from any provider that supports the IMAP protocol, which is common for many self-hosted or traditional email services.

        Step-by-Step Guide

        1. Navigate to Ingestion Sources: From the main dashboard, go to the Ingestions page.

        2. Create a New Source: Click the "Create New" button to open the ingestion source configuration dialog.

        3. Fill in the Configuration Details: You will see a form with several fields. Here is how to fill them out for an IMAP connection:

          • Name: Give your ingestion source a descriptive name that you will easily recognize, such as "Work Email (IMAP)" or "Personal Gmail".

          • Provider: From the dropdown menu, select "Generic IMAP". This will reveal the specific fields required for an IMAP connection.

          • Host: Enter the server address for your email provider's IMAP service. This often looks like imap.your-provider.com or mail.your-domain.com.

          • Port: Enter the port number for the IMAP server. For a secure connection (which is strongly recommended), this is typically 993.

          • Username: Enter the full email address or username you use to log in to your email account.

          • Password: Enter the password for your email account.

        4. Save Changes: Once you have filled in all the details, click the "Save changes" button.

        Security Recommendation: Use an App Password

        For enhanced security, we strongly recommend using an "app password" (sometimes called an "app-specific password") instead of your main account password.

        Many email providers (like Gmail, Outlook, and Fastmail) allow you to generate a unique password that grants access only to a specific application (in this case, the archiving service). If you ever need to revoke access, you can simply delete the app password without affecting your main account login.

        Please consult your email provider's documentation to see if they support app passwords and how to create one.

        How to Obtain an App Password for Gmail

        1. Enable 2-Step Verification: You must have 2-Step Verification turned on for your Google Account.
        2. Go to App Passwords: Visit myaccount.google.com/apppasswords. You may be asked to sign in again.
        3. Create the Password:
          • At the bottom, click "Select app" and choose "Other (Custom name)".
          • Give it a name you'll recognize, like "OpenArchiver".
          • Click "Generate".
        4. Use the Password: A 16-character password will be displayed. Copy this password and paste it into the Password field in the OpenArchiver ingestion source form.

        How to Obtain an App Password for Outlook/Microsoft Accounts

        1. Enable Two-Step Verification: You must have two-step verification enabled for your Microsoft account.
        2. Go to Security Options: Sign in to your Microsoft account and navigate to the Advanced security options.
        3. Create a New App Password:
          • Scroll down to the "App passwords" section.
          • Click "Create a new app password".
        4. Use the Password: A new password will be generated. Use this password in the Password field in the OpenArchiver ingestion source form.

        What Happens Next?

        After you save the connection, the system will attempt to connect to the IMAP server. The status of the ingestion source will update to reflect its current state:

        • Importing: The system is performing the initial, one-time import of all emails from your INBOX. This may take a while depending on the size of your mailbox.
        • Active: The initial import is complete, and the system will now periodically check for and archive new emails.
        • Paused: The connection is valid, but the system will not check for new emails until you resume it.
        • Error: The system was unable to connect using the provided credentials. Please double-check your Host, Port, Username, and Password and try again.

        You can view, edit, pause, or manually sync any of your ingestion sources from the main table on the Ingestions page.


        Connecting Email Providers

        Open Archiver can connect to a variety of email sources to ingest and archive your emails. This section provides guides for connecting to popular email providers.

        Choose your provider from the list below to get started:


        Mbox Ingestion

        Mbox is a common format for storing email messages. This guide will walk you through the process of ingesting mbox files into OpenArchiver.

        1. Exporting from Your Email Client

        Most email clients that support mbox exports will allow you to export a folder of emails as a single .mbox file. Here are the general steps:

        • Mozilla Thunderbird: With the ImportExportTools NG add-on installed, right-click on a folder, select ImportExportTools NG, and then choose Export folder.
        • Gmail: You can use Google Takeout to export your emails in mbox format.
        • Other Clients: Refer to your email client's documentation for instructions on how to export emails to an mbox file.

        2. Uploading to OpenArchiver

        Once you have your .mbox file, you can upload it to OpenArchiver through the web interface.

        1. Navigate to the Ingestion page.
        2. Click on the New Ingestion button.
        3. Select Mbox as the source type.
        4. Upload your .mbox file.

        3. Folder Structure

        OpenArchiver will attempt to preserve the original folder structure of your emails. This is done by inspecting the following email headers:

        • X-Gmail-Labels: Used by Gmail to store labels.
        • X-Folder: A custom header used by some email clients like Thunderbird.

        If neither of these headers is present, the emails will be ingested into the root of the archive.
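The header inspection described above can be sketched with Python's standard email library (a simplified illustration of the approach, not OpenArchiver's actual code — the fallback value and first-label choice are assumptions):

```python
from email import message_from_string

def detect_folder(raw_message: str) -> str:
    """Pick a target folder from X-Gmail-Labels or X-Folder, falling back to the archive root."""
    msg = message_from_string(raw_message)
    labels = msg.get("X-Gmail-Labels")
    if labels:
        # Gmail stores a comma-separated list of labels; use the first one here.
        return labels.split(",")[0].strip()
    folder = msg.get("X-Folder")
    if folder:
        return folder.strip()
    return "/"  # no folder hint: ingest into the root of the archive

raw = "X-Gmail-Labels: Receipts,Important\nSubject: Invoice\n\nBody"
print(detect_folder(raw))  # → Receipts
```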


        Connecting to Microsoft 365

        This guide provides instructions for Microsoft 365 administrators to set up a connection that allows the archiving of all user mailboxes within their organization.

        The connection uses the Microsoft Graph API and an App Registration in Microsoft Entra ID. This is a secure, standard method that grants the archiving service permission to read email data on your behalf without ever needing to handle user passwords.

        Prerequisites

        • You must have one of the following administrator roles in your Microsoft 365 tenant: Global Administrator, Application Administrator, or Cloud Application Administrator.

        Setup Overview

        The setup process involves four main parts, all performed within the Microsoft Entra admin center and the OpenArchiver application:

        1. Registering a new application identity for the archiver in Entra ID.
        2. Granting the application the specific permissions it needs to read mail.
        3. Creating a secure password (a client secret) for the application.
        4. Entering the generated credentials into the OpenArchiver application.

        Part 1: Register a New Application in Microsoft Entra ID

        First, you will create an "App registration," which acts as an identity for the archiving service within your Microsoft 365 ecosystem.

        1. Sign in to the Microsoft Entra admin center.
        2. In the left-hand navigation pane, go to Identity > Applications > App registrations.
        3. Click the + New registration button at the top of the page.
        4. On the "Register an application" screen:
          • Name: Give the application a descriptive name you will recognize, such as OpenArchiver Service.
          • Supported account types: Select "Accounts in this organizational directory only (Default Directory only - Single tenant)". This is the most secure option.
          • Redirect URI (optional): You can leave this blank.
        5. Click the Register button. You will be taken to the application's main "Overview" page.

        Part 2: Grant API Permissions

        Next, you must grant the application the specific permissions required to read user profiles and their mailboxes.

        1. From your new application's page, select API permissions from the left-hand menu.
        2. Click the + Add a permission button.
        3. In the "Request API permissions" pane, select Microsoft Graph.
        4. Select Application permissions. This is critical as it allows the service to run in the background without a user being signed in.
        5. In the "Select permissions" search box, find and check the boxes for the following two permissions:
          • Mail.Read
          • User.Read.All
        6. Click the Add permissions button at the bottom.
        7. Crucial Final Step: You will now see the permissions in your list with a warning status. You must grant consent on behalf of your organization. Click the "Grant admin consent for [Your Organization's Name]" button located above the permissions table. Click Yes in the confirmation dialog. The status for both permissions should now show a green checkmark.

        Part 3: Create a Client Secret

        The client secret is a password that the archiving service will use to authenticate. Treat this with the same level of security as an administrator's password.

        1. In your application's menu, navigate to Certificates & secrets.
        2. Select the Client secrets tab and click + New client secret.
        3. In the pane that appears:
          • Description: Enter a clear description, such as OpenArchiver Key.
          • Expires: Select an expiry duration. We recommend 12 or 24 months. Set a calendar reminder to renew it before it expires to prevent service interruption.
        4. Click Add.
        5. IMMEDIATELY COPY THE SECRET: The secret is now visible in the "Value" column. This is the only time it will be fully displayed. Copy this value now and store it in a secure password manager before navigating away. If you lose it, you must create a new one.

        Part 4: Connecting in OpenArchiver

        You now have the three pieces of information required to configure the connection.

        1. Navigate to Ingestion Sources: In the OpenArchiver application, go to the Ingestion Sources page.

        2. Create a New Source: Click the "Create New" button.

        3. Fill in the Configuration Details:

          • Name: Give the source a name (e.g., "Microsoft 365 Archive").
          • Provider: Select "Microsoft 365" from the dropdown.
          • Application (Client) ID: Go to the Overview page of your app registration in the Entra admin center and copy this value.
          • Directory (Tenant) ID: This value is also on the Overview page.
          • Client Secret Value: Paste the secret Value (not the Secret ID) that you copied and saved in the previous step.
        4. Save Changes: Click "Save changes".
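Under the hood, these three values drive the standard OAuth 2.0 client-credentials flow against the Microsoft identity platform. The token request the service issues looks roughly like this (a sketch of the generic flow with placeholder values, not OpenArchiver's internal code):

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> tuple[str, str]:
    """Build the POST URL and form body for the Microsoft identity platform token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Application permissions (Mail.Read, User.Read.All) are requested via the .default scope.
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

# Placeholder credentials for illustration.
url, body = build_token_request("00000000-0000-0000-0000-000000000000", "app-id", "secret-value")
print(url)
```

This is why the Directory (Tenant) ID, Application (Client) ID, and client secret Value are all required: the tenant selects the token endpoint, and the other two authenticate the app registration.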

        What Happens Next?

        Once the connection is saved, the system will begin the archiving process:

        1. User Discovery: The service will connect to the Microsoft Graph API to get a list of all users in your organization.
        2. Initial Import: The system will begin a background job to import the mailboxes of all discovered users, folder by folder. The status will show as "Importing". This can take a significant amount of time.
        3. Continuous Sync: After the initial import, the status will change to "Active". The system will use Microsoft Graph's delta query feature to efficiently fetch only new or changed emails, ensuring the archive stays up-to-date.

        PST Import

        OpenArchiver allows you to import PST files. This is useful for importing emails from a variety of sources, including Microsoft Outlook.

        Preparing the PST File

        To ensure a successful import, you should prepare your PST file according to the following guidelines:

        • Structure: The PST file can contain any number of emails, organized in any folder structure. The folder structure will be preserved in OpenArchiver, so you can use it to organize your emails.
        • Password Protection: OpenArchiver does not support password-protected PST files. Please remove the password from your PST file before importing it.

        Creating a PST Ingestion Source

        1. Go to the Ingestion Sources page in the OpenArchiver dashboard.
        2. Click the Create New button.
        3. Select PST Import as the provider.
        4. Enter a name for the ingestion source.
        5. Click the Choose File button and select the PST file.
        6. Click the Submit button.

        OpenArchiver will then start importing the emails from the PST file. The ingestion process may take some time, depending on the size of the file.


        Installation Guide

        This guide will walk you through setting up Open Archiver using Docker Compose. This is the recommended method for deploying the application.

        Prerequisites

        • Docker and Docker Compose installed on your server or local machine.
        • A server or local machine with at least 4GB of RAM (2GB of RAM if you use external Postgres, Redis (Valkey) and Meilisearch instances).
        • Git installed on your server or local machine.

        1. Clone the Repository

        First, clone the Open Archiver repository to your machine:

        bash
        git clone https://github.com/LogicLabs-OU/OpenArchiver.git
        cd OpenArchiver

        2. Configure Your Environment

        The application is configured using environment variables. You'll need to create a .env file to store your configuration.

        Copy the example environment file for Docker:

        bash
        cp .env.example.docker .env

        Now, open the .env file in a text editor and customize the settings.

        Important Configuration

        You must change the following placeholder values to secure your instance:

        • POSTGRES_PASSWORD: A strong, unique password for the database.
        • REDIS_PASSWORD: A strong, unique password for the Valkey/Redis service.
        • MEILI_MASTER_KEY: A complex key for Meilisearch.
        • JWT_SECRET: A long, random string for signing authentication tokens.
        • ENCRYPTION_KEY: A 32-byte hex string for encrypting sensitive data in the database. You can generate one with the following command:
          bash
          openssl rand -hex 32
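The same openssl approach works for the other secrets as well. The snippet below is a convenience sketch for generating all of them at once — any sufficiently random values will do, and the base64 lengths chosen here are assumptions, not requirements:

```shell
# Generate one value per secret; paste each into the matching .env variable.
POSTGRES_PASSWORD=$(openssl rand -base64 24)
REDIS_PASSWORD=$(openssl rand -base64 24)
MEILI_MASTER_KEY=$(openssl rand -base64 24)
JWT_SECRET=$(openssl rand -base64 48)
ENCRYPTION_KEY=$(openssl rand -hex 32)   # must be 32 bytes, i.e. 64 hex characters

echo "ENCRYPTION_KEY=$ENCRYPTION_KEY"
```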

        Storage Configuration

        By default, the Docker Compose setup uses local filesystem storage, which is persisted using a Docker volume named archiver-data. This is suitable for most use cases.

        If you want to use S3-compatible object storage, change the STORAGE_TYPE to s3 and fill in your S3 credentials (STORAGE_S3_* variables). When STORAGE_TYPE is set to local, the S3-related variables are not required.

        Using External Services

        For convenience, the docker-compose.yml file includes services for PostgreSQL, Valkey (Redis), and Meilisearch. However, you can use your own external or managed instances for these services.

        To do so:

        1. Update your .env file: Change the host, port, and credential variables to point to your external service instances. For example, you would update DATABASE_URL, REDIS_HOST, and MEILI_HOST.
        2. Modify docker-compose.yml: Remove or comment out the service definitions for postgres, valkey, and meilisearch from your docker-compose.yml file.

        This will configure the Open Archiver application to connect to your services instead of starting the default ones.
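For example, pointing the application at managed instances might look like this in your .env file (the hostnames and credentials below are placeholders, not real defaults):

```shell
DATABASE_URL=postgresql://archiver:strong-db-password@db.example.internal:5432/open_archive
REDIS_HOST=redis.example.internal
REDIS_PORT=6379
REDIS_PASSWORD=strong-redis-password
REDIS_TLS_ENABLED=true
MEILI_HOST=https://search.example.internal
MEILI_MASTER_KEY=your-meili-master-key
```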

        Environment Variable Reference

        Here is a complete list of environment variables available for configuration:

        Application Settings

        • NODE_ENV: The application environment. Default: development
        • PORT_BACKEND: The port for the backend service. Default: 4000
        • PORT_FRONTEND: The port for the frontend service. Default: 3000
        • SYNC_FREQUENCY: The frequency of continuous email syncing. See cron syntax for more details. Default: * * * * *
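The default SYNC_FREQUENCY of * * * * * runs the sync every minute. Standard five-field cron syntax applies, so a lighter schedule can be set in .env, for example (an illustrative value, not a recommendation):

```shell
# Check mailboxes every 15 minutes instead of every minute.
SYNC_FREQUENCY="*/15 * * * *"
```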

        Docker Compose Service Configuration

        These variables are used by docker-compose.yml to configure the services.

        • POSTGRES_DB: The name of the PostgreSQL database. Default: open_archive
        • POSTGRES_USER: The username for the PostgreSQL database. Default: admin
        • POSTGRES_PASSWORD: The password for the PostgreSQL database. Default: password
        • DATABASE_URL: The connection URL for the PostgreSQL database. Default: postgresql://admin:password@postgres:5432/open_archive
        • MEILI_MASTER_KEY: The master key for Meilisearch. Default: aSampleMasterKey
        • MEILI_HOST: The host for the Meilisearch service. Default: http://meilisearch:7700
        • REDIS_HOST: The host for the Valkey (Redis) service. Default: valkey
        • REDIS_PORT: The port for the Valkey (Redis) service. Default: 6379
        • REDIS_PASSWORD: The password for the Valkey (Redis) service. Default: defaultredispassword
        • REDIS_TLS_ENABLED: Enable or disable TLS for Redis. Default: false

        Storage Settings

        • STORAGE_TYPE: The storage backend to use (local or s3). Default: local
        • BODY_SIZE_LIMIT: The maximum request body size for uploads. Can be a number in bytes or a string with a unit (e.g., 100M). Default: 100M
        • STORAGE_LOCAL_ROOT_PATH: The root path for local file storage. Default: /var/data/open-archiver
        • STORAGE_S3_ENDPOINT: The endpoint for S3-compatible storage (required if STORAGE_TYPE is s3).
        • STORAGE_S3_BUCKET: The bucket name for S3-compatible storage (required if STORAGE_TYPE is s3).
        • STORAGE_S3_ACCESS_KEY_ID: The access key ID for S3-compatible storage (required if STORAGE_TYPE is s3).
        • STORAGE_S3_SECRET_ACCESS_KEY: The secret access key for S3-compatible storage (required if STORAGE_TYPE is s3).
        • STORAGE_S3_REGION: The region for S3-compatible storage (required if STORAGE_TYPE is s3).
        • STORAGE_S3_FORCE_PATH_STYLE: Force path-style addressing for S3 (optional). Default: false

        Security & Authentication

        • JWT_SECRET: A secret key for signing JWT tokens. Default: a-very-secret-key-that-you-should-change
        • JWT_EXPIRES_IN: The expiration time for JWT tokens. Default: 7d
        • SUPER_API_KEY (deprecated): An API key with super admin privileges. Deprecated since v0.3.0, when the role-based access control system was introduced.
        • RATE_LIMIT_WINDOW_MS: The window in milliseconds within which API requests are counted. Default: 900000 (15 minutes)
        • RATE_LIMIT_MAX_REQUESTS: The maximum number of API requests allowed from an IP within the window. Default: 100
        • ENCRYPTION_KEY: A 32-byte hex string for encrypting sensitive data in the database.

        3. Run the Application

        Once you have configured your .env file, you can start all the services using Docker Compose:

        bash
        docker compose up -d

        This command will:

        • Pull the required Docker images for the frontend, backend, database, and other services.
        • Create and start the containers in the background (-d flag).
        • Create the persistent volumes for your data.

        You can check the status of the running containers with:

        bash
        docker compose ps

        4. Access the Application

        Once the services are running, you can access the Open Archiver web interface by navigating to http://localhost:3000 in your web browser.

        You can log in with the ADMIN_EMAIL and ADMIN_PASSWORD you configured in your .env file.

        5. Next Steps

        After successfully deploying and logging into Open Archiver, the next step is to configure your ingestion sources to start archiving emails.

        Updating Your Installation

        To update your Open Archiver instance to the latest version, run the following commands:

        bash
        # Pull the latest changes from the repository
        +    
        Skip to content

        Installation Guide

        This guide will walk you through setting up Open Archiver using Docker Compose. This is the recommended method for deploying the application.

        Prerequisites

        • Docker and Docker Compose installed on your server or local machine.
        • A server or local machine with at least 4GB of RAM (2GB of RAM if you use external Postgres, Redis (Valkey) and Meilisearch instances).
        • Git installed on your server or local machine.

        1. Clone the Repository

        First, clone the Open Archiver repository to your machine:

        bash
        git clone https://github.com/LogicLabs-OU/OpenArchiver.git
cd OpenArchiver

        2. Configure Your Environment

        The application is configured using environment variables. You'll need to create a .env file to store your configuration.

        Copy the example environment file for Docker:

        bash
        cp .env.example.docker .env

        Now, open the .env file in a text editor and customize the settings.

        Important Configuration

        You must change the following placeholder values to secure your instance:

        • POSTGRES_PASSWORD: A strong, unique password for the database.
        • REDIS_PASSWORD: A strong, unique password for the Valkey/Redis service.
        • MEILI_MASTER_KEY: A complex key for Meilisearch.
        • JWT_SECRET: A long, random string for signing authentication tokens.
        • ENCRYPTION_KEY: A 32-byte hex string for encrypting sensitive data in the database. You can generate one with the following command:
          bash
          openssl rand -hex 32
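If you want to sanity-check the value before putting it in your .env file, a valid key is exactly 64 hexadecimal characters (32 bytes). A minimal, illustrative shell check (not part of the project itself; the /dev/urandom fallback is only for systems without openssl):

```shell
# Generate a candidate key; fall back to /dev/urandom if openssl is missing.
key=$(openssl rand -hex 32 2>/dev/null || head -c 32 /dev/urandom | od -An -vtx1 | tr -d ' \n')

# Verify it is exactly 64 lowercase hex characters before using it as ENCRYPTION_KEY.
if printf '%s' "$key" | grep -Eq '^[0-9a-f]{64}$'; then
    echo "key looks valid"
else
    echo "key is malformed"
fi
```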

        Storage Configuration

        By default, the Docker Compose setup uses local filesystem storage, which is persisted using a Docker volume named archiver-data. This is suitable for most use cases.

        If you want to use S3-compatible object storage, change the STORAGE_TYPE to s3 and fill in your S3 credentials (STORAGE_S3_* variables). When STORAGE_TYPE is set to local, the S3-related variables are not required.
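As a sanity check before starting the stack, you can verify that every S3 variable is present when STORAGE_TYPE is s3. This is an illustrative snippet using the variable names from the reference table below, not something the project ships:

```shell
# Example values; replace with your real credentials.
STORAGE_TYPE="s3"
STORAGE_S3_ENDPOINT="https://s3.example.com"
STORAGE_S3_BUCKET="open-archiver"
STORAGE_S3_ACCESS_KEY_ID="example-access-key"
STORAGE_S3_SECRET_ACCESS_KEY="example-secret"
STORAGE_S3_REGION="eu-central-1"

# If STORAGE_TYPE is s3, every STORAGE_S3_* variable must be non-empty.
if [ "$STORAGE_TYPE" = "s3" ]; then
    for v in STORAGE_S3_ENDPOINT STORAGE_S3_BUCKET STORAGE_S3_ACCESS_KEY_ID \
             STORAGE_S3_SECRET_ACCESS_KEY STORAGE_S3_REGION; do
        eval "val=\${$v}"
        [ -n "$val" ] || { echo "missing: $v"; exit 1; }
    done
    echo "s3 configuration looks complete"
fi
```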

        Using External Services

        For convenience, the docker-compose.yml file includes services for PostgreSQL, Valkey (Redis), and Meilisearch. However, you can use your own external or managed instances for these services.

        To do so:

        1. Update your .env file: Change the host, port, and credential variables to point to your external service instances. For example, you would update DATABASE_URL, REDIS_HOST, and MEILI_HOST.
        2. Modify docker-compose.yml: Remove or comment out the service definitions for postgres, valkey, and meilisearch from your docker-compose.yml file.

        This will configure the Open Archiver application to connect to your services instead of starting the default ones.

        Environment Variable Reference

        Here is a complete list of environment variables available for configuration:

        Application Settings

| Variable | Description | Default Value |
| --- | --- | --- |
| NODE_ENV | The application environment. | development |
| PORT_BACKEND | The port for the backend service. | 4000 |
| PORT_FRONTEND | The port for the frontend service. | 3000 |
| SYNC_FREQUENCY | The frequency of continuous email syncing. See cron syntax for more details. | * * * * * |
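The SYNC_FREQUENCY default is a standard five-field cron expression (minute, hour, day of month, month, day of week), so * * * * * means "every minute". A quick shell illustration of the field layout:

```shell
SYNC_FREQUENCY='* * * * *'

set -f                    # disable globbing so the asterisks are not expanded
set -- $SYNC_FREQUENCY    # split on whitespace into positional parameters
field_count=$#
set +f

echo "$field_count cron fields: minute hour day-of-month month day-of-week"
```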

        Docker Compose Service Configuration

        These variables are used by docker-compose.yml to configure the services.

| Variable | Description | Default Value |
| --- | --- | --- |
| POSTGRES_DB | The name of the PostgreSQL database. | open_archive |
| POSTGRES_USER | The username for the PostgreSQL database. | admin |
| POSTGRES_PASSWORD | The password for the PostgreSQL database. | password |
| DATABASE_URL | The connection URL for the PostgreSQL database. | postgresql://admin:password@postgres:5432/open_archive |
| MEILI_MASTER_KEY | The master key for Meilisearch. | aSampleMasterKey |
| MEILI_HOST | The host for the Meilisearch service. | http://meilisearch:7700 |
| REDIS_HOST | The host for the Valkey (Redis) service. | valkey |
| REDIS_PORT | The port for the Valkey (Redis) service. | 6379 |
| REDIS_PASSWORD | The password for the Valkey (Redis) service. | defaultredispassword |
| REDIS_TLS_ENABLED | Enable or disable TLS for Redis. | false |
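Note that DATABASE_URL is just the individual Postgres settings assembled into one connection string, so if you change POSTGRES_USER, POSTGRES_PASSWORD, or POSTGRES_DB you must update DATABASE_URL to match. An illustration using the defaults from the table:

```shell
# Assemble DATABASE_URL from the individual Postgres defaults above.
POSTGRES_USER="admin"
POSTGRES_PASSWORD="password"
POSTGRES_DB="open_archive"

DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"
echo "$DATABASE_URL"
```

The host here is postgres, the service name from docker-compose.yml; point it at your own host instead if you use an external database.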

        Storage Settings

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_TYPE | The storage backend to use (local or s3). | local |
| BODY_SIZE_LIMIT | The maximum request body size for uploads. Can be a number in bytes or a string with a unit (e.g., 100M). | 100M |
| STORAGE_LOCAL_ROOT_PATH | The root path for local file storage. | /var/data/open-archiver |
| STORAGE_S3_ENDPOINT | The endpoint for S3-compatible storage (required if STORAGE_TYPE is s3). | |
| STORAGE_S3_BUCKET | The bucket name for S3-compatible storage (required if STORAGE_TYPE is s3). | |
| STORAGE_S3_ACCESS_KEY_ID | The access key ID for S3-compatible storage (required if STORAGE_TYPE is s3). | |
| STORAGE_S3_SECRET_ACCESS_KEY | The secret access key for S3-compatible storage (required if STORAGE_TYPE is s3). | |
| STORAGE_S3_REGION | The region for S3-compatible storage (required if STORAGE_TYPE is s3). | |
| STORAGE_S3_FORCE_PATH_STYLE | Force path-style addressing for S3 (optional). | false |

        Security & Authentication

| Variable | Description | Default Value |
| --- | --- | --- |
| JWT_SECRET | A secret key for signing JWT tokens. | a-very-secret-key-that-you-should-change |
| JWT_EXPIRES_IN | The expiration time for JWT tokens. | 7d |
| SUPER_API_KEY (Deprecated) | An API key with super admin privileges. Deprecated since v0.3.0, when the role-based access control system was rolled out. | |
| RATE_LIMIT_WINDOW_MS | The window in milliseconds for which API requests are checked. | 900000 (15 minutes) |
| RATE_LIMIT_MAX_REQUESTS | The maximum number of API requests allowed from an IP within the window. | 100 |
| ENCRYPTION_KEY | A 32-byte hex string for encrypting sensitive data in the database. | |
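To make the rate-limit defaults concrete: 900000 milliseconds is a 15-minute window, so out of the box each IP may make at most 100 API requests every 15 minutes. A small arithmetic check:

```shell
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100

# Convert the window to minutes: 900000 ms / 1000 = 900 s; 900 s / 60 = 15 min.
window_minutes=$(( RATE_LIMIT_WINDOW_MS / 1000 / 60 ))
echo "${RATE_LIMIT_MAX_REQUESTS} requests per ${window_minutes} minutes"
```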

        3. Run the Application

        Once you have configured your .env file, you can start all the services using Docker Compose:

        bash
        docker compose up -d

        This command will:

        • Pull the required Docker images for the frontend, backend, database, and other services.
        • Create and start the containers in the background (-d flag).
        • Create the persistent volumes for your data.

        You can check the status of the running containers with:

        bash
        docker compose ps

        4. Access the Application

        Once the services are running, you can access the Open Archiver web interface by navigating to http://localhost:3000 in your web browser.

Upon first visit, you will be redirected to the /setup page, where you can create your admin account. Make sure you are the first person to access the instance.

If you see the login page instead of being redirected to /setup, the database may not have initialized correctly. Restart the services and try again.

        5. Next Steps

        After successfully deploying and logging into Open Archiver, the next step is to configure your ingestion sources to start archiving emails.

        Updating Your Installation

        To update your Open Archiver instance to the latest version, run the following commands:

bash
# Pull the latest changes from the repository
git pull

# Pull the latest Docker images
docker compose pull

# Restart the services with the new images
docker compose up -d
Deploying with Coolify

If you deploy Open Archiver with Coolify, remove the network definitions from docker-compose.yml so that Coolify can manage networking itself. Remove this networks section from each service definition:

yaml
networks:
    - open-archiver-net

And remove this entire block from the end of the file:

yaml
networks:
    open-archiver-net:
        driver: bridge

      • Save the modified docker-compose.yml file.

      • By removing these sections, you allow Coolify to automatically create and manage the necessary networks, ensuring that all services can communicate with each other and are correctly exposed through Coolify's reverse proxy.

        After making these changes, you can proceed with deploying your application on Coolify as you normally would.

        Where is my data stored (When using local storage and Docker)?

If you are using local storage for your emails, the docker-compose.yml file stores your data in a Docker "named volume" called archiver-data. That is why you do not see the files in the ./data/open-archiver directory you may have created. To find where the volume actually lives on disk:

        1. List all Docker volumes:

        Run this command to see all the volumes on your system:

        bash
        docker volume ls

2. Identify the correct volume:

Look through the list for a volume name that ends with _archiver-data. The part before that is usually your project's directory name; for example, if your project is in a folder named OpenArchiver, the volume will be openarchiver_archiver-data. However, the prefix can also be a randomly generated string.

3. Inspect the correct volume:

        Once you've identified the correct volume name, use it in the inspect command. For example:

        bash
        docker volume inspect <your_volume_name_here>

        This will give you the correct Mountpoint path where your data is being stored. It will look something like this (the exact path will vary depending on your system):

        json
{
    "CreatedAt": "2025-07-25T11:22:19Z",
    "Driver": "local",
    "Labels": {
        "com.docker.compose.config-hash": "---",
        "com.docker.compose.project": "---",
        "com.docker.compose.version": "2.38.2",
        "com.docker.compose.volume": "us8wwos0o4ok4go4gc8cog84_archiver-data"
    },
    "Mountpoint": "/var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data",
    "Name": "us8wwos0o4ok4go4gc8cog84_archiver-data",
    "Options": null,
    "Scope": "local"
}

        In this example, the data is located at /var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data. You can then cd into that directory to see your files.
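If you don't have jq installed, you can pull the Mountpoint out of the inspect output with sed. An illustrative sketch, using the sample JSON above written to a file; in practice you would pipe the real command's output instead:

```shell
# Sample `docker volume inspect` output (abridged) saved to a file.
cat > inspect.json <<'EOF'
{
    "Mountpoint": "/var/lib/docker/volumes/us8wwos0o4ok4go4gc8cog84_archiver-data/_data",
    "Name": "us8wwos0o4ok4go4gc8cog84_archiver-data"
}
EOF

# Extract just the Mountpoint value.
sed -n 's/.*"Mountpoint": "\([^"]*\)".*/\1/p' inspect.json
```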

        To save data to a specific folder

To save the data to a specific folder on your machine, you need to change your docker-compose.yml to use a "bind mount" instead of a named volume.

        Here’s how you can do it:

        1. Edit docker-compose.yml:

        Open the docker-compose.yml file and find the open-archiver service. You're going to change the volumes section.

        Change this:

        yaml
services:
    open-archiver:
        # ... other config
        volumes:
            - archiver-data:/var/data/open-archiver

        To this:

        yaml
services:
    open-archiver:
        # ... other config
        volumes:
            - ./data/open-archiver:/var/data/open-archiver

        You'll also want to remove the archiver-data volume definition at the bottom of the file, since it's no longer needed.

        Remove this whole block:

        yaml
volumes:
    # ... other volumes
    archiver-data:
        driver: local

2. Restart your containers:

        After you've saved the changes, run the following command in your terminal to apply them. The --force-recreate flag will ensure the container is recreated with the new volume settings.

        bash
        docker-compose up -d --force-recreate

        After this, any new data will be saved directly into the ./data/open-archiver folder in your project directory.
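One practical note (general Docker behavior, not something this guide states): if the host directory does not exist when the container starts, Docker creates it owned by root. Pre-creating it yourself avoids permission surprises:

```shell
# Create the bind-mount target before starting the containers so it is
# owned by your user rather than root.
mkdir -p ./data/open-archiver
[ -w ./data/open-archiver ] && echo "directory ready"
```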

        Troubleshooting

        403 Cross-Site POST Forbidden Error

If you are running the application behind a reverse proxy or have mapped the application to a different port (e.g., 3005:3000), you may encounter a 403 Cross-site POST form submissions are forbidden error when uploading files.

        To resolve this, you must set the ORIGIN environment variable to the URL of your application. This ensures that the backend can verify the origin of requests and prevent cross-site request forgery (CSRF) attacks.

        Add the following line to your .env file, replacing <your_host> and <your_port> with your specific values:

        bash
        ORIGIN=http://<your_host>:<your_port>

        For example, if your application is accessible at http://localhost:3005, you would set the variable as follows:

        bash
        ORIGIN=http://localhost:3005

        After adding the ORIGIN variable, restart your Docker containers for the changes to take effect:

        bash
        docker-compose up -d --force-recreate

        This will ensure that your file uploads are correctly authorized.
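Conceptually, the CSRF check compares the browser's Origin header against the configured ORIGIN, and the comparison is exact: scheme, host, and port must all match. A simplified sketch of the idea, not the actual server code:

```shell
ORIGIN="http://localhost:3005"          # what you configured in .env
request_origin="http://localhost:3005"  # what the browser sends

# A mismatch (different port, host, or scheme) is rejected.
if [ "$request_origin" = "$ORIGIN" ]; then
    echo "POST allowed"
else
    echo "403 Cross-site POST form submissions are forbidden"
fi
```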


        System Settings

System settings allow administrators to configure application-wide options such as the default language, theme, and support email. These settings apply to all users.

        Configuration

        Language

        This setting determines the default display language for the application UI. The selected language will be used for all interface elements, including menus, labels, and messages.

Important: When the language is changed, the frontend updates immediately, but the backend (API) only uses the new language after the server is restarted.

        Supported languages:

        • English
        • German
        • French
        • Estonian
        • Spanish
        • Italian
        • Portuguese
        • Dutch
        • Greek
        • Japanese

        Default Theme

        This setting controls the default color theme for the application. Users can choose between light, dark, or system default. The system default theme will sync with the user's operating system theme.

        Support Email

        This setting allows administrators to provide a public-facing email address for user support inquiries. This email address may be displayed on error pages or in other areas where users may need to contact support.


        Upgrading Meilisearch

        Meilisearch, the search engine used by Open Archiver, requires a manual data migration process when upgrading to a new version. This is because Meilisearch databases are only compatible with the specific version that created them.

        If an Open Archiver upgrade includes a major Meilisearch version change, you will need to migrate your search index by following the process below.

        Migration Process Overview

        For self-hosted instances using Docker Compose (as recommended), the migration process involves creating a data dump from your current Meilisearch instance, upgrading the Docker image, and then importing that dump into the new version.

        Step 1: Create a Dump

        Before upgrading, you must create a dump of your existing Meilisearch data. You can do this by sending a POST request to the /dumps endpoint of the Meilisearch API.

        1. Find your Meilisearch container name:

          bash
          docker compose ps

          Look for the service name that corresponds to Meilisearch, usually meilisearch.

        2. Execute the dump command: You will need your Meilisearch Admin API key, which can be found in your .env file as MEILI_MASTER_KEY.

          bash
curl -X POST 'http://localhost:7700/dumps' \
  -H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"

          This will start the dump creation process. The dump file will be created inside the meili_data volume used by the Meilisearch container.

        3. Monitor the dump status: The dump creation request returns a taskUid. You can use this to check the status of the dump.

          For more details on dump and import, see the official Meilisearch documentation.
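The response to the dump request looks roughly like the JSON below (field values vary); the dump is ready once status reaches succeeded when polling GET /tasks/<taskUid>. A sketch of picking out the status field without jq; the response shape is an assumption based on the standard Meilisearch tasks API:

```shell
# Sample POST /dumps response saved to a file for illustration.
cat > task.json <<'EOF'
{ "taskUid": 1, "indexUid": null, "status": "enqueued", "type": "dumpCreation" }
EOF

# Extract the status field; poll until it becomes "succeeded".
sed -n 's/.*"status": "\([^"]*\)".*/\1/p' task.json
```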

        Step 2: Upgrade Your Open Archiver Instance

        Once the dump is successfully created, you can proceed with the standard Open Archiver upgrade process.

        1. Pull the latest changes and Docker images:

          bash
          git pull
docker compose pull
        2. Stop the running services:

          bash
          docker compose down

        Step 3: Import the Dump

        Now, you need to restart the services while telling Meilisearch to import from your dump file.

        1. Modify docker-compose.yml: You need to temporarily add the --import-dump flag to the Meilisearch service command. Find the meilisearch service in your docker-compose.yml and modify the command section.

          You will need the name of your dump file. It will be a .dump file located in the directory mapped to /meili_data inside the container.

          yaml
services:
    meilisearch:
        # ... other service config
        command:
            [
                '--master-key=${MEILI_MASTER_KEY}',
                '--env=production',
                '--import-dump=/meili_data/dumps/YOUR_DUMP_FILE.dump',
            ]
        2. Restart the services:

          bash
          docker compose up -d

          Meilisearch will now start and import the data from the dump file. This may take some time depending on the size of your index.

        Step 4: Clean Up

        Once the import is complete and you have verified that your search is working correctly, you should remove the --import-dump flag from your docker-compose.yml to prevent it from running on every startup.

        1. Remove the --import-dump line from the command section of the meilisearch service in docker-compose.yml.
        2. Restart the services one last time:
          bash
          docker compose up -d

        Your Meilisearch instance is now upgraded and running with your migrated data.

        For more advanced scenarios or troubleshooting, please refer to the official Meilisearch migration guide.


        Upgrading Your Instance

        This guide provides instructions for upgrading your Open Archiver instance to the latest version.

        Checking for New Versions

        Open Archiver automatically checks for new versions and will display a notification in the footer of the web interface when an update is available. You can find a list of all releases and their release notes on the GitHub Releases page.

        Upgrading Your Instance

        To upgrade your Open Archiver instance, follow these steps:

        1. Pull the latest changes from the repository:

          bash
          git pull
        2. Pull the latest Docker images:

          bash
          docker compose pull
        3. Restart the services with the new images:

          bash
          docker compose up -d

        This will restart your Open Archiver instance with the latest version of the application.

        Migrating Data

        When you upgrade to a new version, database migrations are applied automatically when the application starts up. This ensures that your database schema is always up-to-date with the latest version of the application.

        No manual intervention is required for database migrations.

        Upgrading Meilisearch

        When an Open Archiver update includes a major version change for Meilisearch, you will need to manually migrate your search data. This process is not covered by the standard upgrade commands.

        For detailed instructions, please see the Meilisearch Upgrade Guide.
