diff --git a/README.md b/README.md index 097d225..47377bb 100644 --- a/README.md +++ b/README.md @@ -71,6 +71,14 @@ Open Archiver is built on a modern, scalable, and maintainable technology stack: 4. **Access the application:** Once the services are running, you can access the Open Archiver web interface by navigating to `http://localhost:3000` in your web browser. +## Data Source Configuration + +After deploying the application, you will need to configure one or more ingestion sources to begin archiving emails. Follow our detailed guides to connect to your email provider: + +- [Connecting to Google Workspace](docs/services/email-providers/google-workspace.md) +- [Connecting to Microsoft 365](docs/services/email-providers/microsoft-365.md) +- [Connecting to a Generic IMAP Server](docs/services/email-providers/imap.md) + ## Contributing We welcome contributions from the community! diff --git a/docs/services/email-providers/google-workspace.md b/docs/services/email-providers/google-workspace.md new file mode 100644 index 0000000..0f04539 --- /dev/null +++ b/docs/services/email-providers/google-workspace.md @@ -0,0 +1,130 @@ +# Connecting to Google Workspace + +This guide provides instructions for Google Workspace administrators to set up a connection that allows the archiving of all user mailboxes within their organization. + +The connection uses a **Google Cloud Service Account** with **Domain-Wide Delegation**. This is a secure method that grants the archiving service permission to access user data on behalf of the administrator, without requiring individual user passwords or consent. + +## Prerequisites + +- You must have **Super Administrator** privileges in your Google Workspace account. +- You must have access to the **Google Cloud Console** associated with your organization. + +## Setup Overview + +The setup process involves three main parts: + +1. Configuring the necessary permissions in the Google Cloud Console. +2. 
Authorizing the service account in the Google Workspace Admin Console. +3. Entering the generated credentials into the OpenArchiver application. + +--- + +### Part 1: Google Cloud Console Setup + +In this part, you will create a service account and enable the APIs it needs to function. + +1. **Create a Google Cloud Project:** + + - Go to the [Google Cloud Console](https://console.cloud.google.com/). + - If you don't already have one, create a new project for the archiving service (e.g., "Email Archiver"). + +2. **Enable Required APIs:** + + - In your selected project, navigate to the **"APIs & Services" > "Library"** section. + - Search for and enable the following two APIs: + - **Gmail API** + - **Admin SDK API** + +3. **Create a Service Account:** + + - Navigate to **"IAM & Admin" > "Service Accounts"**. + - Click **"Create Service Account"**. + - Give the service account a name (e.g., `email-archiver-service`) and a description. + - Click **"Create and Continue"**. You do not need to grant this service account any roles on the project. Click **"Done"**. + +4. **Generate a JSON Key:** + - Find the service account you just created in the list. + - Click the three-dot menu under **"Actions"** and select **"Manage keys"**. + - Click **"Add Key"** > **"Create new key"**. + - Select **JSON** as the key type and click **"Create"**. + - A JSON file will be downloaded to your computer. **Keep this file secure, as it contains private credentials.** You will need the contents of this file in Part 3. + +### Troubleshooting + +#### Error: "iam.disableServiceAccountKeyCreation" + +If you receive an error message stating `The organization policy constraint 'iam.disableServiceAccountKeyCreation' is enforced` when trying to create a JSON key, it means your Google Cloud organization has a policy preventing the creation of new service account keys. + +To resolve this, you must have **Organization Administrator** permissions. + +1. 
**Navigate to your Organization:** In the Google Cloud Console, use the project selector at the top of the page to select your organization node (it usually has a building icon). +2. **Go to IAM:** From the navigation menu, select **"IAM & Admin" > "IAM"**. +3. **Edit Your Permissions:** Find your user account in the list and click the pencil icon to edit roles. Add the following two roles: + - `Organization Policy Administrator` + - `Organization Administrator` + _Note: These roles are only available at the organization level, not the project level._ +4. **Modify the Policy:** + - Navigate to **"IAM & Admin" > "Organization Policies"**. + - In the filter box, search for the policy **"iam.disableServiceAccountKeyCreation"**. + - Click on the policy to edit it. + - You can either disable the policy entirely (if your security rules permit) or add a rule to exclude the specific project you are using for the archiver from this policy. +5. **Retry Key Creation:** Once the policy is updated, return to your project and you should be able to generate the JSON key as described in Part 1. + +--- + +### Part 2: Grant Domain-Wide Delegation + +Now, you will authorize the service account you created to access data from your Google Workspace. + +1. **Get the Service Account's Client ID:** + + - Go back to the list of service accounts in the Google Cloud Console. + - Click on the service account you created. + - Under the **"Details"** tab, find and copy the **Unique ID** (this is the Client ID). + +2. **Authorize the Client in Google Workspace:** + + - Go to your **Google Workspace Admin Console** at [admin.google.com](https://admin.google.com). + - Navigate to **Security > Access and data control > API controls**. + - Under the "Domain-wide Delegation" section, click **"Manage Domain-wide Delegation"**. + - Click **"Add new"**. + +3. **Enter Client Details and Scopes:** + - In the **Client ID** field, paste the **Unique ID** you copied from the service account. 
+ - In the **OAuth scopes** field, paste the following two scopes exactly as they appear, separated by a comma: + ``` + https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/gmail.readonly + ``` + - Click **"Authorize"**. + +The service account is now permitted to list users and read their email data across your domain. + +--- + +### Part 3: Connecting in OpenArchiver + +Finally, you will provide the generated credentials to the application. + +1. **Navigate to Ingestion Sources:** + From the main dashboard, go to the **Ingestion Sources** page. + +2. **Create a New Source:** + Click the **"Create New"** button. + +3. **Fill in the Configuration Details:** + + - **Name:** Give the source a name (e.g., "Google Workspace Archive"). + - **Provider:** Select **"Google Workspace"** from the dropdown. + - **Service Account Key (JSON):** Open the JSON file you downloaded in Part 1. Copy the entire content of the file and paste it into this text area. + - **Impersonated Admin Email:** Enter the email address of a Super Administrator in your Google Workspace (e.g., `admin@your-domain.com`). The service will use this user's authority to discover all other users. + +4. **Save Changes:** + Click **"Save changes"**. + +## What Happens Next? + +Once the connection is saved and verified, the system will begin the archiving process: + +1. **User Discovery:** The service will first connect to the Admin SDK to get a list of all active users in your Google Workspace. +2. **Initial Import:** The system will then start a background job to import the mailboxes of all discovered users. The status will show as **"Importing"**. This can take a significant amount of time depending on the number of users and the size of their mailboxes. +3. **Continuous Sync:** After the initial import is complete, the status will change to **"Active"**. The system will then periodically check each user's mailbox for new emails and archive them automatically. 
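
Before entering the credentials into the application, you can optionally confirm that delegation works end to end. The sketch below is not part of Open Archiver; it assumes the third-party `google-auth` and `google-api-python-client` packages are installed, and the key-file path and admin address are placeholders. It lists a few users via the Admin SDK using the same two scopes you authorized above:

```python
# Sketch: verify Domain-Wide Delegation before configuring Open Archiver.
# Assumes `pip install google-auth google-api-python-client`; the key-file
# path and admin address are placeholders, not values from this guide.
SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/gmail.readonly",
]


def verify_delegation(key_path: str, admin_email: str) -> list[str]:
    """Impersonate the admin and return up to five primary email addresses."""
    # Imports deferred so the snippet loads even without the packages installed.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        key_path, scopes=SCOPES
    ).with_subject(admin_email)  # act on behalf of the Super Administrator
    directory = build("admin", "directory_v1", credentials=creds)
    users = directory.users().list(customer="my_customer", maxResults=5).execute()
    return [u["primaryEmail"] for u in users.get("users", [])]


# The exact comma-separated scope string to paste into the Admin console:
print(",".join(SCOPES))
# verify_delegation("service-account-key.json", "admin@your-domain.com")
```

If this lists your users' addresses, the scopes and delegation are configured correctly; an `unauthorized_client` error at this stage usually means the Client ID or the scope string entered in the Admin console does not match the service account.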
diff --git a/docs/services/email-providers/imap.md b/docs/services/email-providers/imap.md new file mode 100644 index 0000000..fb80356 --- /dev/null +++ b/docs/services/email-providers/imap.md @@ -0,0 +1,67 @@ +# Connecting to a Generic IMAP Server + +This guide will walk you through connecting a standard IMAP email account as an ingestion source. This allows you to archive emails from any provider that supports the IMAP protocol, which is common for many self-hosted or traditional email services. + +## Step-by-Step Guide + +1. **Navigate to Ingestion Sources:** + From the main dashboard, go to the **Ingestions** page. + +2. **Create a New Source:** + Click the **"Create New"** button to open the ingestion source configuration dialog. + +3. **Fill in the Configuration Details:** + You will see a form with several fields. Here is how to fill them out for an IMAP connection: + + - **Name:** Give your ingestion source a descriptive name that you will easily recognize, such as "Work Email (IMAP)" or "Personal Gmail". + + - **Provider:** From the dropdown menu, select **"Generic IMAP"**. This will reveal the specific fields required for an IMAP connection. + + - **Host:** Enter the server address for your email provider's IMAP service. This often looks like `imap.your-provider.com` or `mail.your-domain.com`. + + - **Port:** Enter the port number for the IMAP server. For a secure connection (which is strongly recommended), this is typically `993`. + + - **Username:** Enter the full email address or username you use to log in to your email account. + + - **Password:** Enter the password for your email account. + +4. **Save Changes:** + Once you have filled in all the details, click the **"Save changes"** button. + +## Security Recommendation: Use an App Password + +For enhanced security, we strongly recommend using an **"app password"** (sometimes called an "app-specific password") instead of your main account password. 
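
Whichever password you choose, it can help to pre-flight the Host, Port, Username, and Password from step 3 before saving the source. A minimal sketch using only Python's standard `imaplib` module (all values shown are placeholders):

```python
# Sketch: pre-flight check of IMAP credentials using only the standard
# library. The host, port, username, and password values are placeholders.
import imaplib


def check_imap(host: str, port: int, user: str, password: str) -> int:
    """Log in over IMAPS and return the message count in INBOX."""
    with imaplib.IMAP4_SSL(host, port) as conn:
        conn.login(user, password)
        # Open INBOX read-only so the check cannot alter mailbox state.
        status, data = conn.select("INBOX", readonly=True)
        if status != "OK":
            raise RuntimeError(f"Could not open INBOX: {data}")
        return int(data[0])


# Example (placeholder values):
# print(check_imap("imap.your-provider.com", 993, "user@example.com", "app-password"))
```

A successful login that prints a message count means the same values should work in the ingestion form; an `imaplib.IMAP4.error` raised at `login` points to the username or password, while a connection error points to the host or port.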
+ +Many email providers (like Gmail, Outlook, and Fastmail) allow you to generate a unique password that grants access only to a specific application (in this case, the archiving service). If you ever need to revoke access, you can simply delete the app password without affecting your main account login. + +Please consult your email provider's documentation to see if they support app passwords and how to create one. + +### How to Obtain an App Password for Gmail + +1. **Enable 2-Step Verification:** You must have 2-Step Verification turned on for your Google Account. +2. **Go to App Passwords:** Visit [myaccount.google.com/apppasswords](https://myaccount.google.com/apppasswords). You may be asked to sign in again. +3. **Create the Password:** + - At the bottom, click **"Select app"** and choose **"Other (Custom name)"**. + - Give it a name you'll recognize, like "OpenArchiver". + - Click **"Generate"**. +4. **Use the Password:** A 16-digit password will be displayed. Copy this password (without the spaces) and paste it into the **Password** field in the OpenArchiver ingestion source form. + +### How to Obtain an App Password for Outlook/Microsoft Accounts + +1. **Enable Two-Step Verification:** You must have two-step verification enabled for your Microsoft account. +2. **Go to Security Options:** Sign in to your Microsoft account and navigate to the [Advanced security options](https://account.live.com/proofs/manage/additional). +3. **Create a New App Password:** + - Scroll down to the **"App passwords"** section. + - Click **"Create a new app password"**. +4. **Use the Password:** A new password will be generated. Use this password in the **Password** field in the OpenArchiver ingestion source form. + +## What Happens Next? + +After you save the connection, the system will attempt to connect to the IMAP server. 
The status of the ingestion source will update to reflect its current state: + +- **Importing:** The system is performing the initial, one-time import of all emails from your `INBOX`. This may take a while depending on the size of your mailbox. +- **Active:** The initial import is complete, and the system will now periodically check for and archive new emails. +- **Paused:** The connection is valid, but the system will not check for new emails until you resume it. +- **Error:** The system was unable to connect using the provided credentials. Please double-check your Host, Port, Username, and Password and try again. + +You can view, edit, pause, or manually sync any of your ingestion sources from the main table on the **Ingestions** page. diff --git a/docs/services/email-providers/microsoft-365.md b/docs/services/email-providers/microsoft-365.md new file mode 100644 index 0000000..d8c3c4b --- /dev/null +++ b/docs/services/email-providers/microsoft-365.md @@ -0,0 +1,94 @@ +# Connecting to Microsoft 365 + +This guide provides instructions for Microsoft 365 administrators to set up a connection that allows the archiving of all user mailboxes within their organization. + +The connection uses the **Microsoft Graph API** and an **App Registration** in Microsoft Entra ID. This is a secure, standard method that grants the archiving service permission to read email data on your behalf without ever needing to handle user passwords. + +## Prerequisites + +- You must have one of the following administrator roles in your Microsoft 365 tenant: **Global Administrator**, **Application Administrator**, or **Cloud Application Administrator**. + +## Setup Overview + +The setup process involves four main parts, all performed within the Microsoft Entra admin center and the OpenArchiver application: + +1. Registering a new application identity for the archiver in Entra ID. +2. Granting the application the specific permissions it needs to read mail. +3. 
Creating a secure password (a client secret) for the application. +4. Entering the generated credentials into the OpenArchiver application. + +--- + +### Part 1: Register a New Application in Microsoft Entra ID + +First, you will create an "App registration," which acts as an identity for the archiving service within your Microsoft 365 ecosystem. + +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +2. In the left-hand navigation pane, go to **Identity > Applications > App registrations**. +3. Click the **+ New registration** button at the top of the page. +4. On the "Register an application" screen: + - **Name:** Give the application a descriptive name you will recognize, such as `OpenArchiver Service`. + - **Supported account types:** Select **"Accounts in this organizational directory only (Default Directory only - Single tenant)"**. This is the most secure option. + - **Redirect URI (optional):** You can leave this blank. +5. Click the **Register** button. You will be taken to the application's main "Overview" page. + +--- + +### Part 2: Grant API Permissions + +Next, you must grant the application the specific permissions required to read user profiles and their mailboxes. + +1. From your new application's page, select **API permissions** from the left-hand menu. +2. Click the **+ Add a permission** button. +3. In the "Request API permissions" pane, select **Microsoft Graph**. +4. Select **Application permissions**. This is critical as it allows the service to run in the background without a user being signed in. +5. In the "Select permissions" search box, find and check the boxes for the following two permissions: + - `Mail.Read` + - `User.Read.All` +6. Click the **Add permissions** button at the bottom. +7. **Crucial Final Step:** You will now see the permissions in your list with a warning status. You must grant consent on behalf of your organization. 
Click the **"Grant admin consent for [Your Organization's Name]"** button located above the permissions table. Click **Yes** in the confirmation dialog. The status for both permissions should now show a green checkmark. + +--- + +### Part 3: Create a Client Secret + +The client secret is a password that the archiving service will use to authenticate. Treat this with the same level of security as an administrator's password. + +1. In your application's menu, navigate to **Certificates & secrets**. +2. Select the **Client secrets** tab and click **+ New client secret**. +3. In the pane that appears: + - **Description:** Enter a clear description, such as `OpenArchiver Key`. + - **Expires:** Select an expiry duration. We recommend **12 or 24 months**. Set a calendar reminder to renew it before it expires to prevent service interruption. +4. Click **Add**. +5. **IMMEDIATELY COPY THE SECRET:** The secret is now visible in the **"Value"** column. This is the only time it will be fully displayed. Copy this value now and store it in a secure password manager before navigating away. If you lose it, you must create a new one. + +--- + +### Part 4: Connecting in OpenArchiver + +You now have the three pieces of information required to configure the connection. + +1. **Navigate to Ingestion Sources:** + In the OpenArchiver application, go to the **Ingestion Sources** page. + +2. **Create a New Source:** + Click the **"Create New"** button. + +3. **Fill in the Configuration Details:** + + - **Name:** Give the source a name (e.g., "Microsoft 365 Archive"). + - **Provider:** Select **"Microsoft 365"** from the dropdown. + - **Application (Client) ID:** Go to the **Overview** page of your app registration in the Entra admin center and copy this value. + - **Directory (Tenant) ID:** This value is also on the **Overview** page. + - **Client Secret Value:** Paste the secret **Value** (not the Secret ID) that you copied and saved in the previous step. + +4. 
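
Before completing the final step, you can confirm that the three values work together by requesting an app-only token yourself. A sketch using only the Python standard library against the Microsoft identity platform's client-credentials token endpoint (all identifiers below are placeholders):

```python
# Sketch: verify the Tenant ID / Client ID / Client Secret trio by requesting
# an app-only Microsoft Graph token (client credentials flow). All identifier
# values are placeholders.
import json
import urllib.parse
import urllib.request


def get_graph_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Return an app-only access token for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # `.default` requests the application permissions granted in Part 2.
        "scope": "https://graph.microsoft.com/.default",
    }).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.load(resp)["access_token"]


# Example (placeholder values):
# token = get_graph_token("<tenant-id>", "<client-id>", "<client-secret>")
```

A response containing `access_token` confirms the Tenant ID, Client ID, and secret are consistent; an `invalid_client` error usually means the secret was mistyped, has expired, or the Secret ID was pasted instead of the secret **Value**.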
**Save Changes:** + Click **"Save changes"**. + +## What Happens Next? + +Once the connection is saved, the system will begin the archiving process: + +1. **User Discovery:** The service will connect to the Microsoft Graph API to get a list of all users in your organization. +2. **Initial Import:** The system will begin a background job to import the mailboxes of all discovered users, folder by folder. The status will show as **"Importing"**. This can take a significant amount of time. +3. **Continuous Sync:** After the initial import, the status will change to **"Active"**. The system will use Microsoft Graph's delta query feature to efficiently fetch only new or changed emails, ensuring the archive stays up-to-date.
+# +# The hook should exit with non-zero status after issuing an +# appropriate message if it wants to stop the commit. The hook is +# allowed to edit the commit message file. +# +# To enable this hook, rename this file to "applypatch-msg". + +. git-sh-setup +commitmsg="$(git rev-parse --git-path hooks/commit-msg)" +test -x "$commitmsg" && exec "$commitmsg" ${1+"$@"} +: diff --git a/packages/backend/.git_disabled/hooks/commit-msg.sample b/packages/backend/.git_disabled/hooks/commit-msg.sample new file mode 100755 index 0000000..b58d118 --- /dev/null +++ b/packages/backend/.git_disabled/hooks/commit-msg.sample @@ -0,0 +1,24 @@ +#!/bin/sh +# +# An example hook script to check the commit log message. +# Called by "git commit" with one argument, the name of the file +# that has the commit message. The hook should exit with non-zero +# status after issuing an appropriate message if it wants to stop the +# commit. The hook is allowed to edit the commit message file. +# +# To enable this hook, rename this file to "commit-msg". + +# Uncomment the below to add a Signed-off-by line to the message. +# Doing this in a hook is a bad idea in general, but the prepare-commit-msg +# hook is more suited to it. +# +# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p') +# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1" + +# This example catches duplicate Signed-off-by lines. + +test "" = "$(grep '^Signed-off-by: ' "$1" | + sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || { + echo >&2 Duplicate Signed-off-by lines. 
+ exit 1 +} diff --git a/packages/backend/.git_disabled/hooks/fsmonitor-watchman.sample b/packages/backend/.git_disabled/hooks/fsmonitor-watchman.sample new file mode 100755 index 0000000..23e856f --- /dev/null +++ b/packages/backend/.git_disabled/hooks/fsmonitor-watchman.sample @@ -0,0 +1,174 @@ +#!/usr/bin/perl + +use strict; +use warnings; +use IPC::Open2; + +# An example hook script to integrate Watchman +# (https://facebook.github.io/watchman/) with git to speed up detecting +# new and modified files. +# +# The hook is passed a version (currently 2) and last update token +# formatted as a string and outputs to stdout a new update token and +# all files that have been modified since the update token. Paths must +# be relative to the root of the working tree and separated by a single NUL. +# +# To enable this hook, rename this file to "query-watchman" and set +# 'git config core.fsmonitor .git/hooks/query-watchman' +# +my ($version, $last_update_token) = @ARGV; + +# Uncomment for debugging +# print STDERR "$0 $version $last_update_token\n"; + +# Check the hook interface version +if ($version ne 2) { + die "Unsupported query-fsmonitor hook version '$version'.\n" . 
+ "Falling back to scanning...\n"; +} + +my $git_work_tree = get_working_dir(); + +my $retry = 1; + +my $json_pkg; +eval { + require JSON::XS; + $json_pkg = "JSON::XS"; + 1; +} or do { + require JSON::PP; + $json_pkg = "JSON::PP"; +}; + +launch_watchman(); + +sub launch_watchman { + my $o = watchman_query(); + if (is_work_tree_watched($o)) { + output_result($o->{clock}, @{$o->{files}}); + } +} + +sub output_result { + my ($clockid, @files) = @_; + + # Uncomment for debugging watchman output + # open (my $fh, ">", ".git/watchman-output.out"); + # binmode $fh, ":utf8"; + # print $fh "$clockid\n@files\n"; + # close $fh; + + binmode STDOUT, ":utf8"; + print $clockid; + print "\0"; + local $, = "\0"; + print @files; +} + +sub watchman_clock { + my $response = qx/watchman clock "$git_work_tree"/; + die "Failed to get clock id on '$git_work_tree'.\n" . + "Falling back to scanning...\n" if $? != 0; + + return $json_pkg->new->utf8->decode($response); +} + +sub watchman_query { + my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty') + or die "open2() failed: $!\n" . + "Falling back to scanning...\n"; + + # In the query expression below we're asking for names of files that + # changed since $last_update_token but not from the .git folder. + # + # To accomplish this, we're using the "since" generator to use the + # recency index to select candidate nodes and "fields" to limit the + # output to file names only. Then we're using the "expression" term to + # further constrain the results. 
+ my $last_update_line = ""; + if (substr($last_update_token, 0, 1) eq "c") { + $last_update_token = "\"$last_update_token\""; + $last_update_line = qq[\n"since": $last_update_token,]; + } + my $query = <<" END"; + ["query", "$git_work_tree", {$last_update_line + "fields": ["name"], + "expression": ["not", ["dirname", ".git"]] + }] + END + + # Uncomment for debugging the watchman query + # open (my $fh, ">", ".git/watchman-query.json"); + # print $fh $query; + # close $fh; + + print CHLD_IN $query; + close CHLD_IN; + my $response = do {local $/; }; + + # Uncomment for debugging the watch response + # open ($fh, ">", ".git/watchman-response.json"); + # print $fh $response; + # close $fh; + + die "Watchman: command returned no output.\n" . + "Falling back to scanning...\n" if $response eq ""; + die "Watchman: command returned invalid output: $response\n" . + "Falling back to scanning...\n" unless $response =~ /^\{/; + + return $json_pkg->new->utf8->decode($response); +} + +sub is_work_tree_watched { + my ($output) = @_; + my $error = $output->{error}; + if ($retry > 0 and $error and $error =~ m/unable to resolve root .* directory (.*) is not watched/) { + $retry--; + my $response = qx/watchman watch "$git_work_tree"/; + die "Failed to make watchman watch '$git_work_tree'.\n" . + "Falling back to scanning...\n" if $? != 0; + $output = $json_pkg->new->utf8->decode($response); + $error = $output->{error}; + die "Watchman: $error.\n" . + "Falling back to scanning...\n" if $error; + + # Uncomment for debugging watchman output + # open (my $fh, ">", ".git/watchman-output.out"); + # close $fh; + + # Watchman will always return all files on the first query so + # return the fast "everything is dirty" flag to git and do the + # Watchman query just to get it over with now so we won't pay + # the cost in git to look up each individual file. + my $o = watchman_clock(); + $error = $output->{error}; + + die "Watchman: $error.\n" . 
+ "Falling back to scanning...\n" if $error; + + output_result($o->{clock}, ("/")); + $last_update_token = $o->{clock}; + + eval { launch_watchman() }; + return 0; + } + + die "Watchman: $error.\n" . + "Falling back to scanning...\n" if $error; + + return 1; +} + +sub get_working_dir { + my $working_dir; + if ($^O =~ 'msys' || $^O =~ 'cygwin') { + $working_dir = Win32::GetCwd(); + $working_dir =~ tr/\\/\//; + } else { + require Cwd; + $working_dir = Cwd::cwd(); + } + + return $working_dir; +} diff --git a/packages/backend/.git_disabled/hooks/post-update.sample b/packages/backend/.git_disabled/hooks/post-update.sample new file mode 100755 index 0000000..ec17ec1 --- /dev/null +++ b/packages/backend/.git_disabled/hooks/post-update.sample @@ -0,0 +1,8 @@ +#!/bin/sh +# +# An example hook script to prepare a packed repository for use over +# dumb transports. +# +# To enable this hook, rename this file to "post-update". + +exec git update-server-info diff --git a/packages/backend/.git_disabled/hooks/pre-applypatch.sample b/packages/backend/.git_disabled/hooks/pre-applypatch.sample new file mode 100755 index 0000000..4142082 --- /dev/null +++ b/packages/backend/.git_disabled/hooks/pre-applypatch.sample @@ -0,0 +1,14 @@ +#!/bin/sh +# +# An example hook script to verify what is about to be committed +# by applypatch from an e-mail message. +# +# The hook should exit with non-zero status after issuing an +# appropriate message if it wants to stop the commit. +# +# To enable this hook, rename this file to "pre-applypatch". + +. 
git-sh-setup +precommit="$(git rev-parse --git-path hooks/pre-commit)" +test -x "$precommit" && exec "$precommit" ${1+"$@"} +: diff --git a/packages/backend/.git_disabled/hooks/pre-commit.sample b/packages/backend/.git_disabled/hooks/pre-commit.sample new file mode 100755 index 0000000..e144712 --- /dev/null +++ b/packages/backend/.git_disabled/hooks/pre-commit.sample @@ -0,0 +1,49 @@ +#!/bin/sh +# +# An example hook script to verify what is about to be committed. +# Called by "git commit" with no arguments. The hook should +# exit with non-zero status after issuing an appropriate message if +# it wants to stop the commit. +# +# To enable this hook, rename this file to "pre-commit". + +if git rev-parse --verify HEAD >/dev/null 2>&1 +then + against=HEAD +else + # Initial commit: diff against an empty tree object + against=$(git hash-object -t tree /dev/null) +fi + +# If you want to allow non-ASCII filenames set this variable to true. +allownonascii=$(git config --type=bool hooks.allownonascii) + +# Redirect output to stderr. +exec 1>&2 + +# Cross platform projects tend to avoid non-ASCII filenames; prevent +# them from being added to the repository. We exploit the fact that the +# printable range starts at the space character and ends with tilde. +if [ "$allownonascii" != "true" ] && + # Note that the use of brackets around a tr range is ok here, (it's + # even required, for portability to Solaris 10's /usr/bin/tr), since + # the square bracket bytes happen to fall in the designated range. + test $(git diff --cached --name-only --diff-filter=A -z $against | + LC_ALL=C tr -d '[ -~]\0' | wc -c) != 0 +then + cat <<\EOF +Error: Attempt to add a non-ASCII file name. + +This can cause problems if you want to work with people on other platforms. + +To be portable it is advisable to rename the file. 
+ +If you know what you are doing you can disable this check using: + + git config hooks.allownonascii true +EOF + exit 1 +fi + +# If there are whitespace errors, print the offending file names and fail. +exec git diff-index --check --cached $against -- diff --git a/packages/backend/.git_disabled/hooks/pre-merge-commit.sample b/packages/backend/.git_disabled/hooks/pre-merge-commit.sample new file mode 100755 index 0000000..399eab1 --- /dev/null +++ b/packages/backend/.git_disabled/hooks/pre-merge-commit.sample @@ -0,0 +1,13 @@ +#!/bin/sh +# +# An example hook script to verify what is about to be committed. +# Called by "git merge" with no arguments. The hook should +# exit with non-zero status after issuing an appropriate message to +# stderr if it wants to stop the merge commit. +# +# To enable this hook, rename this file to "pre-merge-commit". + +. git-sh-setup +test -x "$GIT_DIR/hooks/pre-commit" && + exec "$GIT_DIR/hooks/pre-commit" +: diff --git a/packages/backend/.git_disabled/hooks/pre-push.sample b/packages/backend/.git_disabled/hooks/pre-push.sample new file mode 100755 index 0000000..4ce688d --- /dev/null +++ b/packages/backend/.git_disabled/hooks/pre-push.sample @@ -0,0 +1,53 @@ +#!/bin/sh + +# An example hook script to verify what is about to be pushed. Called by "git +# push" after it has checked the remote status, but before anything has been +# pushed. If this script exits with a non-zero status nothing will be pushed. +# +# This hook is called with the following parameters: +# +# $1 -- Name of the remote to which the push is being done +# $2 -- URL to which the push is being done +# +# If pushing without using a named remote those arguments will be equal. +# +# Information about the commits which are being pushed is supplied as lines to +# the standard input in the form: +# +# +# +# This sample shows how to prevent push of commits where the log message starts +# with "WIP" (work in progress). 
+ +remote="$1" +url="$2" + +zero=$(git hash-object --stdin &2 "Found WIP commit in $local_ref, not pushing" + exit 1 + fi + fi +done + +exit 0 diff --git a/packages/backend/.git_disabled/hooks/pre-rebase.sample b/packages/backend/.git_disabled/hooks/pre-rebase.sample new file mode 100755 index 0000000..6cbef5c --- /dev/null +++ b/packages/backend/.git_disabled/hooks/pre-rebase.sample @@ -0,0 +1,169 @@ +#!/bin/sh +# +# Copyright (c) 2006, 2008 Junio C Hamano +# +# The "pre-rebase" hook is run just before "git rebase" starts doing +# its job, and can prevent the command from running by exiting with +# non-zero status. +# +# The hook is called with the following parameters: +# +# $1 -- the upstream the series was forked from. +# $2 -- the branch being rebased (or empty when rebasing the current branch). +# +# This sample shows how to prevent topic branches that are already +# merged to 'next' branch from getting rebased, because allowing it +# would result in rebasing already published history. + +publish=next +basebranch="$1" +if test "$#" = 2 +then + topic="refs/heads/$2" +else + topic=`git symbolic-ref HEAD` || + exit 0 ;# we do not interrupt rebasing detached HEAD +fi + +case "$topic" in +refs/heads/??/*) + ;; +*) + exit 0 ;# we do not interrupt others. + ;; +esac + +# Now we are dealing with a topic branch being rebased +# on top of master. Is it OK to rebase it? + +# Does the topic really exist? +git show-ref -q "$topic" || { + echo >&2 "No such branch $topic" + exit 1 +} + +# Is topic fully merged to master? +not_in_master=`git rev-list --pretty=oneline ^master "$topic"` +if test -z "$not_in_master" +then + echo >&2 "$topic is fully merged to master; better remove it." + exit 1 ;# we could allow it, but there is no point. +fi + +# Is topic ever merged to next? If so you should not be rebasing it. 
+only_next_1=`git rev-list ^master "^$topic" ${publish} | sort`
+only_next_2=`git rev-list ^master           ${publish} | sort`
+if test "$only_next_1" = "$only_next_2"
+then
+	not_in_topic=`git rev-list "^$topic" master`
+	if test -z "$not_in_topic"
+	then
+		echo >&2 "$topic is already up to date with master"
+		exit 1 ;# we could allow it, but there is no point.
+	else
+		exit 0
+	fi
+else
+	not_in_next=`git rev-list --pretty=oneline ^${publish} "$topic"`
+	/usr/bin/perl -e '
+		my $topic = $ARGV[0];
+		my $msg = "* $topic has commits already merged to public branch:\n";
+		my (%not_in_next) = map {
+			/^([0-9a-f]+) /;
+			($1 => 1);
+		} split(/\n/, $ARGV[1]);
+		for my $elem (map {
+				/^([0-9a-f]+) (.*)$/;
+				[$1 => $2];
+			} split(/\n/, $ARGV[2])) {
+			if (!exists $not_in_next{$elem->[0]}) {
+				if ($msg) {
+					print STDERR $msg;
+					undef $msg;
+				}
+				print STDERR "   $elem->[1]\n";
+			}
+		}
+	' "$topic" "$not_in_next" "$not_in_master"
+	exit 1
+fi
+
+<<\DOC_END
+
+This sample hook safeguards topic branches that have been
+published from being rewound.
+
+The workflow assumed here is:
+
+ * Once a topic branch forks from "master", "master" is never
+   merged into it again (either directly or indirectly).
+
+ * Once a topic branch is fully cooked and merged into "master",
+   it is deleted.  If you need to build on top of it to correct
+   earlier mistakes, a new topic branch is created by forking at
+   the tip of the "master".  This is not strictly necessary, but
+   it makes it easier to keep your history simple.
+
+ * Whenever you need to test or publish your changes to topic
+   branches, merge them into "next" branch.
+
+The script, being an example, hardcodes the publish branch name
+to be "next", but it is trivial to make it configurable via
+$GIT_DIR/config mechanism.
+
+With this workflow, you would want to know:
+
+(1) ... if a topic branch has ever been merged to "next".  Young
+    topic branches can have stupid mistakes you would rather
+    clean up before publishing, and things that have not been
+    merged into other branches can be easily rebased without
+    affecting other people.  But once it is published, you would
+    not want to rewind it.
+
+(2) ... if a topic branch has been fully merged to "master".
+    Then you can delete it.  More importantly, you should not
+    build on top of it -- other people may already want to
+    change things related to the topic as patches against your
+    "master", so if you need further changes, it is better to
+    fork the topic (perhaps with the same name) afresh from the
+    tip of "master".
+
+Let's look at this example:
+
+		   o---o---o---o---o---o---o---o---o---o "next"
+		  /       /           /           /
+		 /   a---a---b A     /           /
+		/   /             /           /
+	       /   /   c---c---c---c B       /
+	      /   /   /             \       /
+	     /   /   /   b---b C     \     /
+	    /   /   /   /             \   /
+    ---o---o---o---o---o---o---o---o---o---o---o "master"
+
+
+A, B and C are topic branches.
+
+ * A has one fix since it was merged up to "next".
+
+ * B has finished.  It has been fully merged up to "master" and "next",
+   and is ready to be deleted.
+
+ * C has not merged to "next" at all.
+
+We would want to allow C to be rebased, refuse A, and encourage
+B to be deleted.
+
+To compute (1):
+
+	git rev-list ^master ^topic next
+	git rev-list ^master        next
+
+	if these match, topic has not merged in next at all.
+
+To compute (2):
+
+	git rev-list master..topic
+
+	if this is empty, it is fully merged to "master".
+
+DOC_END
diff --git a/packages/backend/.git_disabled/hooks/pre-receive.sample b/packages/backend/.git_disabled/hooks/pre-receive.sample
new file mode 100755
index 0000000..a1fd29e
--- /dev/null
+++ b/packages/backend/.git_disabled/hooks/pre-receive.sample
@@ -0,0 +1,24 @@
+#!/bin/sh
+#
+# An example hook script to make use of push options.
+# The example simply echoes all push options that start with 'echoback='
+# and rejects all pushes when the "reject" push option is used.
+#
+# To enable this hook, rename this file to "pre-receive".
+
+if test -n "$GIT_PUSH_OPTION_COUNT"
+then
+	i=0
+	while test "$i" -lt "$GIT_PUSH_OPTION_COUNT"
+	do
+		eval "value=\$GIT_PUSH_OPTION_$i"
+		case "$value" in
+		echoback=*)
+			echo "echo from the pre-receive-hook: ${value#*=}" >&2
+			;;
+		reject)
+			exit 1
+		esac
+		i=$((i + 1))
+	done
+fi
diff --git a/packages/backend/.git_disabled/hooks/prepare-commit-msg.sample b/packages/backend/.git_disabled/hooks/prepare-commit-msg.sample
new file mode 100755
index 0000000..10fa14c
--- /dev/null
+++ b/packages/backend/.git_disabled/hooks/prepare-commit-msg.sample
@@ -0,0 +1,42 @@
+#!/bin/sh
+#
+# An example hook script to prepare the commit log message.
+# Called by "git commit" with the name of the file that has the
+# commit message, followed by the description of the commit
+# message's source.  The hook's purpose is to edit the commit
+# message file.  If the hook fails with a non-zero status,
+# the commit is aborted.
+#
+# To enable this hook, rename this file to "prepare-commit-msg".

+# This hook includes three examples. The first one removes the
+# "# Please enter the commit message..." help message.
+#
+# The second includes the output of "git diff --name-status -r"
+# into the message, just before the "git status" output.  It is
+# commented because it doesn't cope with --amend or with squashed
+# commits.
+#
+# The third example adds a Signed-off-by line to the message, that can
+# still be edited.  This is rarely a good idea.
+
+COMMIT_MSG_FILE=$1
+COMMIT_SOURCE=$2
+SHA1=$3
+
+/usr/bin/perl -i.bak -ne 'print unless(m/^. Please enter the commit message/..m/^#$/)' "$COMMIT_MSG_FILE"
+
+# case "$COMMIT_SOURCE,$SHA1" in
+#  ,|template,)
+#    /usr/bin/perl -i.bak -pe '
+#       print "\n" . `git diff --cached --name-status -r`
+#	 if /^#/ && $first++ == 0' "$COMMIT_MSG_FILE" ;;
+#  *) ;;
+# esac
+
+# SOB=$(git var GIT_COMMITTER_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
+# git interpret-trailers --in-place --trailer "$SOB" "$COMMIT_MSG_FILE"
+# if test -z "$COMMIT_SOURCE"
+# then
+#   /usr/bin/perl -i.bak -pe 'print "\n" if !$first_line++' "$COMMIT_MSG_FILE"
+# fi
diff --git a/packages/backend/.git_disabled/hooks/push-to-checkout.sample b/packages/backend/.git_disabled/hooks/push-to-checkout.sample
new file mode 100755
index 0000000..af5a0c0
--- /dev/null
+++ b/packages/backend/.git_disabled/hooks/push-to-checkout.sample
@@ -0,0 +1,78 @@
+#!/bin/sh

+# An example hook script to update a checked-out tree on a git push.
+#
+# This hook is invoked by git-receive-pack(1) when it reacts to git
+# push and updates reference(s) in its repository, and when the push
+# tries to update the branch that is currently checked out and the
+# receive.denyCurrentBranch configuration variable is set to
+# updateInstead.
+#
+# By default, such a push is refused if the working tree and the index
+# of the remote repository has any difference from the currently
+# checked out commit; when both the working tree and the index match
+# the current commit, they are updated to match the newly pushed tip
+# of the branch.  This hook is to be used to override the default
+# behaviour; however the code below reimplements the default behaviour
+# as a starting point for convenient modification.
+#
+# The hook receives the commit with which the tip of the current
+# branch is going to be updated:
+commit=$1
+
+# It can exit with a non-zero status to refuse the push (when it does
+# so, it must not modify the index or the working tree).
+die () {
+	echo >&2 "$*"
+	exit 1
+}
+
+# Or it can make any necessary changes to the working tree and to the
+# index to bring them to the desired state when the tip of the current
+# branch is updated to the new commit, and exit with a zero status.
+#
+# For example, the hook can simply run git read-tree -u -m HEAD "$1"
+# in order to emulate git fetch that is run in the reverse direction
+# with git push, as the two-tree form of git read-tree -u -m is
+# essentially the same as git switch or git checkout that switches
+# branches while keeping the local changes in the working tree that do
+# not interfere with the difference between the branches.

+# The below is a more-or-less exact translation to shell of the C code
+# for the default behaviour for git's push-to-checkout hook defined in
+# the push_to_deploy() function in builtin/receive-pack.c.
+#
+# Note that the hook will be executed from the repository directory,
+# not from the working tree, so if you want to perform operations on
+# the working tree, you will have to adapt your code accordingly, e.g.
+# by adding "cd .." or using relative paths.
+
+if ! git update-index -q --ignore-submodules --refresh
+then
+	die "Up-to-date check failed"
+fi
+
+if ! git diff-files --quiet --ignore-submodules --
+then
+	die "Working directory has unstaged changes"
+fi
+
+# This is a rough translation of:
+#
+#   head_has_history() ? "HEAD" : EMPTY_TREE_SHA1_HEX
+if git cat-file -e HEAD 2>/dev/null
+then
+	head=HEAD
+else
+	head=$(git hash-object -t tree --stdin </dev/null)
+fi
+
+if ! git diff-index --quiet --cached --ignore-submodules $head --
+then
+	die "Working directory has staged changes"
+fi
+
+if ! git read-tree -u -m "$commit"
+then
+	die "Could not update working tree to new HEAD"
+fi
diff --git a/packages/backend/.git_disabled/hooks/update.sample b/packages/backend/.git_disabled/hooks/update.sample
new file mode 100755
--- /dev/null
+++ b/packages/backend/.git_disabled/hooks/update.sample
+#!/bin/sh
+#
+# An example hook script to block unannotated tags from entering the repository.
+# Called by "git receive-pack" with arguments: refname sha1-old sha1-new
+#
+# To enable this hook, rename this file to "update".
+#
+# Config
+# ------
+# hooks.allowunannotated
+#   This boolean sets whether unannotated tags will be allowed into the
+#   repository.  By default they won't be.
+# hooks.denycreatebranch
+#   This boolean sets whether remotely creating branches will be denied
+#   in the repository.  By default this is allowed.
+# hooks.allowdeletebranch
+#   This boolean sets whether deleting branches will be allowed in the
+#   repository.  By default they won't be.
+# hooks.allowdeletetag
+#   This boolean sets whether deleting tags will be allowed in the
+#   repository.  By default they won't be.
+# hooks.allowmodifytag
+#   This boolean sets whether a tag may be modified after creation. By default
+#   it won't be.
+#
+# --- Command line
+refname="$1"
+oldrev="$2"
+newrev="$3"
+
+# --- Safety check
+if [ -z "$GIT_DIR" ]; then
+	echo "Don't run this script from the command line." >&2
+	echo " (if you want, you could supply GIT_DIR then run" >&2
+	echo "  $0 <ref> <oldrev> <newrev>)" >&2
+	exit 1
+fi
+
+if [ -z "$refname" -o -z "$oldrev" -o -z "$newrev" ]; then
+	echo "usage: $0 <ref> <oldrev> <newrev>" >&2
+	exit 1
+fi
+
+# --- Config
+allowunannotated=$(git config --type=bool hooks.allowunannotated)
+allowdeletebranch=$(git config --type=bool hooks.allowdeletebranch)
+denycreatebranch=$(git config --type=bool hooks.denycreatebranch)
+allowdeletetag=$(git config --type=bool hooks.allowdeletetag)
+allowmodifytag=$(git config --type=bool hooks.allowmodifytag)
+
+# check for no description
+projectdesc=$(sed -e '1q' "$GIT_DIR/description")
+case "$projectdesc" in
+"Unnamed repository"* | "")
+	echo "*** Project description file hasn't been set" >&2
+	exit 1
+	;;
+esac
+
+# --- Check types
+# if $newrev is 0000...0000, it's a commit to delete a ref.
+zero=$(git hash-object --stdin </dev/null | sed -e 's/./0/g')
+if [ "$newrev" = "$zero" ]; then
+	newrev_type=delete
+else
+	newrev_type=$(git cat-file -t $newrev)
+fi
+
+case "$refname","$newrev_type" in
+	refs/tags/*,commit)
+		# un-annotated tag
+		short_refname=${refname##refs/tags/}
+		if [ "$allowunannotated" != "true" ]; then
+			echo "*** The un-annotated tag, $short_refname, is not allowed in this repository" >&2
+			echo "*** Use 'git tag [ -a | -s ]' for tags you want to propagate." >&2
+			exit 1
+		fi
+		;;
+	refs/tags/*,delete)
+		# delete tag
+		if [ "$allowdeletetag" != "true" ]; then
+			echo "*** Deleting a tag is not allowed in this repository" >&2
+			exit 1
+		fi
+		;;
+	refs/tags/*,tag)
+		# annotated tag
+		if [ "$allowmodifytag" != "true" ] && git rev-parse $refname > /dev/null 2>&1
+		then
+			echo "*** Tag '$refname' already exists." >&2
+			echo "*** Modifying a tag is not allowed in this repository." >&2
+			exit 1
+		fi
+		;;
+	refs/heads/*,commit)
+		# branch
+		if [ "$oldrev" = "$zero" -a "$denycreatebranch" = "true" ]; then
+			echo "*** Creating a branch is not allowed in this repository" >&2
+			exit 1
+		fi
+		;;
+	refs/heads/*,delete)
+		# delete branch
+		if [ "$allowdeletebranch" != "true" ]; then
+			echo "*** Deleting a branch is not allowed in this repository" >&2
+			exit 1
+		fi
+		;;
+	refs/remotes/*,commit)
+		# tracking branch
+		;;
+	refs/remotes/*,delete)
+		# delete tracking branch
+		if [ "$allowdeletebranch" != "true" ]; then
+			echo "*** Deleting a tracking branch is not allowed in this repository" >&2
+			exit 1
+		fi
+		;;
+	*)
+		# Anything else (is there anything else?)
+		echo "*** Update hook: unknown type of update to ref $refname of type $newrev_type" >&2
+		exit 1
+		;;
+esac
+
+# --- Finished
+exit 0
diff --git a/packages/backend/.git_disabled/info/exclude b/packages/backend/.git_disabled/info/exclude
new file mode 100644
index 0000000..a5196d1
--- /dev/null
+++ b/packages/backend/.git_disabled/info/exclude
@@ -0,0 +1,6 @@
+# git ls-files --others --exclude-from=.git/info/exclude
+# Lines that start with '#' are comments.
+# For a project mostly in C, the following would be a good set of
+# exclude patterns (uncomment them if you want to use them):
+# *.[oa]
+# *~
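The files above are git's stock sample hooks; as their own comments note, a hook only takes effect once it is renamed to drop the `.sample` suffix (and is executable). As a minimal sketch of the check that `pre-push.sample` applies to each pushed range, the script below exercises the same `git rev-list --grep '^WIP'` test in a throwaway repository — the temp directory and the demo committer identity are illustrative assumptions, not part of this diff:

```shell
#!/bin/sh
# Sketch: reproduce the WIP-detection from pre-push.sample in a throwaway repo.
# The mktemp directory and demo identity below are assumptions for the demo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.invalid
git config user.name "Demo User"

echo a > file
git add file
git commit -qm "WIP: not ready yet"

# Same check the sample hook runs over "$remote_oid..$local_oid";
# here we scan HEAD since there is no remote in this demo repo.
commit=$(git rev-list -n 1 --grep '^WIP' HEAD)
test -n "$commit" && echo "Found WIP commit $commit, a real pre-push hook would exit 1"
```

To enable the real hook in a working clone, copy `pre-push.sample` to `.git/hooks/pre-push` and keep it executable; git then feeds it the `<local ref> <local oid> <remote ref> <remote oid>` lines on stdin at push time.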