# Upgrading Meilisearch
Meilisearch, the search engine used by Open Archiver, requires a manual data migration process when upgrading to a new version. This is because Meilisearch databases are only compatible with the specific version that created them.
If an Open Archiver upgrade includes a major Meilisearch version change, you will need to migrate your search index by following the process below.
## Experimental: Dumpless Upgrade
> **Warning:** This feature is currently experimental. We do not recommend using it for production environments until it is marked as stable. Please use the standard migration process instead. Proceed with caution.
Meilisearch recently introduced an experimental "dumpless" upgrade method. This allows you to migrate the database to a new Meilisearch version without manually creating and importing a dump. However, please note that dumpless upgrades are not currently atomic. If the process fails, your database may become corrupted, resulting in data loss.
### Prerequisite: Create a Snapshot
Before attempting a dumpless upgrade, you must take a snapshot of your instance. This ensures you have a recovery point if the upgrade fails. Learn how to create snapshots in the official Meilisearch documentation.
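Beyond the scheduled snapshots configured via `MEILI_SCHEDULE_SNAPSHOT`, recent Meilisearch versions also expose a `POST /snapshots` route for triggering a snapshot on demand. A minimal sketch, assuming your instance is reachable on `localhost:7700` and `MEILI_MASTER_KEY` is set in your shell environment:

```sh
# Trigger an on-demand snapshot; the resulting .snapshot file is written
# to the snapshot directory inside the Meilisearch container
curl -X POST 'http://localhost:7700/snapshots' \
  -H "Authorization: Bearer ${MEILI_MASTER_KEY}"
```

The request is asynchronous and returns a `taskUid`, which you can poll to confirm the snapshot has finished before starting the upgrade.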
### How to Enable
To perform a dumpless upgrade, you need to configure your Meilisearch instance with the experimental flag. You can do this in one of two ways:
#### Option 1: Using an Environment Variable
Add the `MEILI_EXPERIMENTAL_DUMPLESS_UPGRADE` environment variable to your `docker-compose.yml` file for the Meilisearch service.
```yaml
services:
    meilisearch:
        image: getmeili/meilisearch:v1.x # The new version you want to upgrade to
        environment:
            - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
            - MEILI_EXPERIMENTAL_DUMPLESS_UPGRADE=true
```
#### Option 2: Using a CLI Option
Alternatively, you can pass the `--experimental-dumpless-upgrade` flag in the `command` section of your `docker-compose.yml`.
```yaml
services:
    meilisearch:
        image: getmeili/meilisearch:v1.x # The new version you want to upgrade to
        command: meilisearch --experimental-dumpless-upgrade
```
After updating your configuration, restart your container:
```sh
docker compose up -d
```
Meilisearch will attempt to migrate your database to the new version automatically.
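In recent Meilisearch versions the dumpless migration is tracked as a regular task, so you can follow its progress through the tasks API. A sketch, assuming the instance listens on `localhost:7700` and the migration is registered under the `upgradeDatabase` task type:

```sh
# List upgrade tasks; the migration is done once "status" is "succeeded"
curl 'http://localhost:7700/tasks?types=upgradeDatabase' \
  -H "Authorization: Bearer ${MEILI_MASTER_KEY}"
```

If the task reports `failed`, restore from the snapshot you created before the upgrade.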
## Standard Migration Process (Recommended)
For self-hosted instances using Docker Compose, the recommended migration process involves creating a data dump from your current Meilisearch instance, upgrading the Docker image, and then importing that dump into the new version.
### Step 1: Create a Dump
Before upgrading, you must create a dump of your existing Meilisearch data. You can do this by sending a `POST` request to the `/dumps` endpoint of the Meilisearch API.
1.  **Find your Meilisearch container name:**

    ```sh
    docker compose ps
    ```

    Look for the service name that corresponds to Meilisearch, usually `meilisearch`.

2.  **Execute the dump command:** You will need your Meilisearch Admin API key, which can be found in your `.env` file as `MEILI_MASTER_KEY`.

    ```sh
    curl -X POST 'http://localhost:7700/dumps' \
      -H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"
    ```

    This will start the dump creation process. The dump file will be created inside the `meili_data` volume used by the Meilisearch container.

3.  **Monitor the dump status:** The dump creation request returns a `taskUid`. You can use this to check the status of the dump.

    For more details on dumps and imports, see the official Meilisearch documentation.
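Checking the status amounts to querying the tasks endpoint with the `taskUid` you received. A sketch, assuming the instance runs on `localhost:7700` and `TASK_UID` is the value returned by the dump request:

```sh
# Poll the task until "status" changes from "enqueued"/"processing"
# to "succeeded" (or "failed")
curl 'http://localhost:7700/tasks/TASK_UID' \
  -H "Authorization: Bearer YOUR_MEILI_MASTER_KEY"
```

Wait until the task reports `succeeded` before stopping the old container; a dump that is still being written cannot be imported.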
### Step 2: Upgrade Your Open Archiver Instance
Once the dump is successfully created, you can proceed with the standard Open Archiver upgrade process.
1.  **Pull the latest changes and Docker images:**

    ```sh
    git pull
    docker compose pull
    ```

2.  **Stop the running services:**

    ```sh
    docker compose down
    ```
### Step 3: Import the Dump
Now, you need to restart the services while telling Meilisearch to import from your dump file.
1.  **Modify `docker-compose.yml`:** You need to temporarily add the `--import-dump` flag to the Meilisearch service command. Find the `meilisearch` service in your `docker-compose.yml` and modify the `command` section.

    You will need the name of your dump file. It will be a `.dump` file located in the directory mapped to `/meili_data` inside the container.

    ```yaml
    services:
        meilisearch:
            # ... other service config
            command:
                [
                    '--master-key=${MEILI_MASTER_KEY}',
                    '--env=production',
                    '--import-dump=/meili_data/dumps/YOUR_DUMP_FILE.dump',
                ]
    ```

2.  **Restart the services:**

    ```sh
    docker compose up -d
    ```

    Meilisearch will now start and import the data from the dump file. This may take some time depending on the size of your index.
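To confirm the import landed correctly, you can compare document counts against the old instance using the Meilisearch stats endpoint. A sketch, assuming the new instance is reachable on `localhost:7700`:

```sh
# Report per-index document counts and whether indexing is still in progress
curl 'http://localhost:7700/stats' \
  -H "Authorization: Bearer ${MEILI_MASTER_KEY}"
```

If the counts match what the old instance reported, the migration completed in full.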
### Step 4: Clean Up
Once the import is complete and you have verified that your search is working correctly, you should remove the `--import-dump` flag from your `docker-compose.yml` to prevent it from running on every startup.
1.  Remove the `--import-dump` line from the `command` section of the `meilisearch` service in `docker-compose.yml`.
2.  Restart the services one last time:

    ```sh
    docker compose up -d
    ```
Your Meilisearch instance is now upgraded and running with your migrated data.
For more advanced scenarios or troubleshooting, please refer to the official Meilisearch migration guide.