mirror of https://github.com/databasus/databasus.git (synced 2026-04-06 00:32:03 +02:00)

Compare commits: 17 commits
| Author | SHA1 | Date |
|---|---|---|
| | b278a79104 | |
| | b74ae734af | |
| | d21a9398c6 | |
| | 6ad7b95b7d | |
| | 8432d1626f | |
| | d7f631fa93 | |
| | c3fb2aa529 | |
| | 1817937409 | |
| | 3172396668 | |
| | 9cd5c8c57c | |
| | d8826d85c3 | |
| | 49fdd46cbe | |
| | c6261d434b | |
| | 918002acde | |
| | c0721a43e1 | |
| | 461e15cd7a | |
| | 69a53936f5 | |
.github/CODE_OF_CONDUCT.md (vendored, new file, 102 lines)

@@ -0,0 +1,102 @@
# Code of Conduct

## Our Pledge

We as members, contributors and maintainers pledge to make participation in the Postgresus community a friendly and welcoming experience for everyone, regardless of background, experience level or personal circumstances.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive and healthy community.

## Our Standards

### Examples of behavior that contributes to a positive environment

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
- Helping newcomers get started with contributions
- Providing clear and constructive feedback on pull requests
- Celebrating successes and acknowledging contributions

### Examples of unacceptable behavior

- Trolling, insulting or derogatory comments, and personal or political attacks
- Publishing others' private information, such as physical or email addresses, without their explicit permission
- Spam, self-promotion or off-topic content in project spaces
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Scope

This Code of Conduct applies within all community spaces, including:

- GitHub repositories (issues, pull requests, discussions, comments)
- Telegram channels and direct messages related to Postgresus
- Social media interactions when representing the project
- Community forums and online discussions
- Any other spaces where Postgresus community members interact

This Code of Conduct also applies when an individual is officially representing the community in public spaces, such as using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive or unacceptable behavior may be reported to the community leaders responsible for enforcement:

- **Email**: [info@postgresus.com](mailto:info@postgresus.com)
- **Telegram**: [@rostislav_dugin](https://t.me/rostislav_dugin)

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Contributing with Respect

When contributing to Postgresus, please:

- Be patient with maintainers and other contributors
- Understand that everyone has different levels of experience
- Ask questions in a respectful manner
- Accept that your contribution may not be accepted, and be open to feedback
- Follow the [contribution guidelines](https://postgresus.com/contribute)

For code contributions, remember to:

- Discuss significant changes before implementing them
- Be open to code review feedback
- Help review others' contributions when possible

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq).
.github/SECURITY.md (vendored, new file, 54 lines)

@@ -0,0 +1,54 @@
# Security Policy

## Reporting a Vulnerability

If you discover a security vulnerability in Postgresus, please report it responsibly. **Do not create a public GitHub issue for security vulnerabilities.**

### How to Report

1. **Email** (preferred): Send details to [info@postgresus.com](mailto:info@postgresus.com)
2. **Telegram**: Contact [@rostislav_dugin](https://t.me/rostislav_dugin)
3. **GitHub Security Advisories**: Use the [private vulnerability reporting](https://github.com/RostislavDugin/postgresus/security/advisories/new) feature

### What to Include

- Description of the vulnerability
- Steps to reproduce the issue
- Potential impact and severity assessment
- Any suggested fixes (optional)

## Supported Versions

| Version | Supported |
| ------- | --------- |
| Latest  | Yes       |

We recommend always using the latest version of Postgresus. Security patches are applied to the most recent release.

### PostgreSQL Compatibility

Postgresus supports PostgreSQL versions 12, 13, 14, 15, 16, 17 and 18.

## Response Timeline

- **Acknowledgment**: Within 48-72 hours
- **Initial Assessment**: Within 1 week
- **Fix Timeline**: Depends on severity, but we aim to address critical issues as quickly as possible

We follow a coordinated disclosure policy. We ask that you give us reasonable time to address the vulnerability before any public disclosure.

## Security Features

Postgresus is designed with security in mind. For full details, see our [security documentation](https://postgresus.com/security).

Key features include:

- **AES-256-GCM Encryption**: Enterprise-grade encryption for backup files and sensitive data
- **Read-Only Database Access**: Postgresus uses read-only access by default and warns if write permissions are detected
- **Role-Based Access Control**: Assign viewer, member, admin or owner roles within workspaces
- **Audit Logging**: Track all system activities and changes made by users
- **Zero-Trust Storage**: Encrypted backups are safe even in shared cloud storage
## License

Postgresus is licensed under [Apache 2.0](../LICENSE).
.github/workflows/ci-release.yml (vendored, 17 changed lines)

@@ -17,7 +17,7 @@ jobs:
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
-          go-version: "1.23.3"
+          go-version: "1.24.4"

      - name: Cache Go modules
        uses: actions/cache@v4
@@ -31,7 +31,7 @@ jobs:

      - name: Install golangci-lint
        run: |
-          curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.60.3
+          curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.7.2
          echo "$(go env GOPATH)/bin" >> $GITHUB_PATH

      - name: Install swag for swagger generation
@@ -116,7 +116,7 @@ jobs:
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
-          go-version: "1.23.3"
+          go-version: "1.24.4"

      - name: Cache Go modules
        uses: actions/cache@v4
@@ -485,6 +485,17 @@ jobs:
            echo EOF
          } >> $GITHUB_OUTPUT

+      - name: Update CITATION.cff version
+        run: |
+          VERSION="${{ needs.determine-version.outputs.new_version }}"
+          sed -i "s/^version: .*/version: ${VERSION}/" CITATION.cff
+          sed -i "s/^date-released: .*/date-released: \"$(date +%Y-%m-%d)\"/" CITATION.cff
+          git config user.name "github-actions[bot]"
+          git config user.email "github-actions[bot]@users.noreply.github.com"
+          git add CITATION.cff
+          git commit -m "Update CITATION.cff to v${VERSION}" || true
+          git push || true
+
      - name: Create GitHub Release
        uses: actions/create-release@v1
        env:
CITATION.cff (new file, 33 lines)

@@ -0,0 +1,33 @@
cff-version: 1.2.0
title: Postgresus
message: "If you use this software, please cite it as below."
type: software
authors:
  - family-names: Dugin
    given-names: Rostislav
repository-code: https://github.com/RostislavDugin/postgresus
url: https://postgresus.com
abstract: "Free, open source and self-hosted solution for automated PostgreSQL backups with multiple storage options and notifications."
keywords:
  - docker
  - kubernetes
  - golang
  - backups
  - postgres
  - devops
  - backup
  - database
  - tools
  - monitoring
  - ftp
  - postgresql
  - s3
  - psql
  - web-ui
  - self-hosted
  - pg
  - system-administration
  - database-backup
license: Apache-2.0
version: 2.8.0
date-released: "2025-12-18"
@@ -22,7 +22,7 @@ RUN npm run build

# ========= BUILD BACKEND =========
# Backend build stage
-FROM --platform=$BUILDPLATFORM golang:1.23.3 AS backend-build
+FROM --platform=$BUILDPLATFORM golang:1.24.4 AS backend-build

# Make TARGET args available early so tools built here match the final image arch
ARG TARGETOS
@@ -77,16 +77,16 @@ ENV APP_VERSION=$APP_VERSION
# Set production mode for Docker containers
ENV ENV_MODE=production

-# Install PostgreSQL server and client tools (versions 12-18)
+# Install PostgreSQL server and client tools (versions 12-18) and rclone
RUN apt-get update && apt-get install -y --no-install-recommends \
-    wget ca-certificates gnupg lsb-release sudo gosu && \
+    wget ca-certificates gnupg lsb-release sudo gosu curl unzip && \
    wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
      > /etc/apt/sources.list.d/pgdg.list && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
    postgresql-17 postgresql-18 postgresql-client-12 postgresql-client-13 postgresql-client-14 postgresql-client-15 \
-    postgresql-client-16 postgresql-client-17 postgresql-client-18 && \
+    postgresql-client-16 postgresql-client-17 postgresql-client-18 rclone && \
    rm -rf /var/lib/apt/lists/*

# Create postgres user and set up directories
@@ -1,7 +1,7 @@
version: "2"

run:
-  timeout: 1m
+  timeout: 5m
  tests: false
  concurrency: 4
backend/go.mod (234 changed lines)
@@ -1,6 +1,6 @@
module postgresus-backend

-go 1.23.3
+go 1.24.4

require (
	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
@@ -12,40 +12,195 @@ require (
	github.com/google/uuid v1.6.0
	github.com/ilyakaznacheev/cleanenv v1.5.0
	github.com/jackc/pgx/v5 v5.7.5
-	github.com/jlaffaye/ftp v0.2.0
+	github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3
	github.com/jmoiron/sqlx v1.4.0
	github.com/joho/godotenv v1.5.1
	github.com/lib/pq v1.10.9
-	github.com/minio/minio-go/v7 v7.0.92
-	github.com/shirou/gopsutil/v4 v4.25.5
+	github.com/minio/minio-go/v7 v7.0.97
+	github.com/rclone/rclone v1.72.1
+	github.com/shirou/gopsutil/v4 v4.25.10
	github.com/stretchr/testify v1.11.1
	github.com/swaggo/files v1.0.1
	github.com/swaggo/gin-swagger v1.6.0
	github.com/swaggo/swag v1.16.4
-	golang.org/x/crypto v0.41.0
-	golang.org/x/time v0.12.0
+	golang.org/x/crypto v0.45.0
+	golang.org/x/time v0.14.0
	gorm.io/driver/postgres v1.5.11
	gorm.io/gorm v1.26.1
)

require (
	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
	github.com/hashicorp/errwrap v1.0.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3 // indirect
	github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26 // indirect
	github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 // indirect
	github.com/Files-com/files-sdk-go/v3 v3.2.264 // indirect
	github.com/IBM/go-sdk-core/v5 v5.21.0 // indirect
	github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd // indirect
	github.com/Microsoft/go-winio v0.6.2 // indirect
	github.com/ProtonMail/bcrypt v0.0.0-20211005172633-e235017c1baf // indirect
	github.com/ProtonMail/gluon v0.17.1-0.20230724134000-308be39be96e // indirect
	github.com/ProtonMail/go-crypto v1.3.0 // indirect
	github.com/ProtonMail/go-mime v0.0.0-20230322103455-7d82a3887f2f // indirect
	github.com/ProtonMail/go-srp v0.0.7 // indirect
	github.com/ProtonMail/gopenpgp/v2 v2.9.0 // indirect
	github.com/PuerkitoBio/goquery v1.10.3 // indirect
	github.com/a1ex3/zstd-seekable-format-go/pkg v0.10.0 // indirect
	github.com/abbot/go-http-auth v0.4.0 // indirect
	github.com/anchore/go-lzo v0.1.0 // indirect
	github.com/andybalholm/cascadia v1.3.3 // indirect
	github.com/appscode/go-querystring v0.0.0-20170504095604-0126cfb3f1dc // indirect
	github.com/aws/aws-sdk-go-v2 v1.39.6 // indirect
	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
	github.com/aws/aws-sdk-go-v2/config v1.31.17 // indirect
	github.com/aws/aws-sdk-go-v2/credentials v1.18.21 // indirect
	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.13 // indirect
	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.4 // indirect
	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13 // indirect
	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13 // indirect
	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
	github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.13 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.4 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.13 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13 // indirect
	github.com/aws/aws-sdk-go-v2/service/s3 v1.90.0 // indirect
	github.com/aws/aws-sdk-go-v2/service/sso v1.30.1 // indirect
	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5 // indirect
	github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // indirect
	github.com/aws/smithy-go v1.23.2 // indirect
	github.com/bahlo/generic-list-go v0.2.0 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/boombuler/barcode v1.1.0 // indirect
	github.com/bradenaw/juniper v0.15.3 // indirect
	github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 // indirect
	github.com/buengese/sgzip v0.1.1 // indirect
	github.com/buger/jsonparser v1.1.1 // indirect
	github.com/calebcase/tmpfile v1.0.3 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/chilts/sid v0.0.0-20190607042430-660e94789ec9 // indirect
	github.com/clipperhouse/stringish v0.1.1 // indirect
	github.com/clipperhouse/uax29/v2 v2.3.0 // indirect
	github.com/cloudflare/circl v1.6.1 // indirect
	github.com/cloudinary/cloudinary-go/v2 v2.13.0 // indirect
	github.com/cloudsoda/go-smb2 v0.0.0-20250228001242-d4c70e6251cc // indirect
	github.com/cloudsoda/sddl v0.0.0-20250224235906-926454e91efc // indirect
	github.com/colinmarc/hdfs/v2 v2.4.0 // indirect
	github.com/coreos/go-semver v0.3.1 // indirect
	github.com/coreos/go-systemd/v22 v22.6.0 // indirect
	github.com/creasty/defaults v1.8.0 // indirect
	github.com/cronokirby/saferith v0.33.0 // indirect
	github.com/diskfs/go-diskfs v1.7.0 // indirect
	github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5 // indirect
	github.com/emersion/go-message v0.18.2 // indirect
	github.com/emersion/go-vcard v0.0.0-20241024213814-c9703dde27ff // indirect
	github.com/flynn/noise v1.1.0 // indirect
	github.com/go-chi/chi/v5 v5.2.3 // indirect
	github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348 // indirect
	github.com/go-git/go-billy/v5 v5.6.2 // indirect
	github.com/go-openapi/errors v0.22.4 // indirect
	github.com/go-openapi/strfmt v0.25.0 // indirect
	github.com/go-resty/resty/v2 v2.16.5 // indirect
	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
	github.com/gofrs/flock v0.13.0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
	github.com/google/btree v1.1.3 // indirect
	github.com/gorilla/schema v1.4.1 // indirect
	github.com/hashicorp/errwrap v1.1.0 // indirect
	github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
	github.com/hashicorp/go-multierror v1.1.1 // indirect
	github.com/hashicorp/go-retryablehttp v0.7.8 // indirect
	github.com/hashicorp/go-uuid v1.0.3 // indirect
	github.com/henrybear327/Proton-API-Bridge v1.0.0 // indirect
	github.com/henrybear327/go-proton-api v1.0.0 // indirect
	github.com/jcmturner/aescts/v2 v2.0.0 // indirect
	github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
	github.com/jcmturner/gofork v1.7.6 // indirect
	github.com/jcmturner/goidentity/v6 v6.0.1 // indirect
	github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect
	github.com/jcmturner/rpc/v2 v2.0.3 // indirect
	github.com/jtolio/noiseconn v0.0.0-20231127013910-f6d9ecbf1de7 // indirect
	github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004 // indirect
	github.com/klauspost/crc32 v1.3.0 // indirect
	github.com/koofr/go-httpclient v0.0.0-20240520111329-e20f8f203988 // indirect
	github.com/koofr/go-koofrclient v0.0.0-20221207135200-cbd7fc9ad6a6 // indirect
	github.com/kr/fs v0.1.0 // indirect
	github.com/kylelemons/godebug v1.1.0 // indirect
	github.com/lanrat/extsort v1.4.2 // indirect
	github.com/lpar/date v1.0.0 // indirect
	github.com/lufia/plan9stats v0.0.0-20251013123823-9fd1530e3ec3 // indirect
	github.com/mattn/go-colorable v0.1.14 // indirect
	github.com/mattn/go-runewidth v0.0.19 // indirect
	github.com/mitchellh/go-homedir v1.1.0 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/ncw/swift/v2 v2.0.5 // indirect
	github.com/oklog/ulid v1.3.1 // indirect
	github.com/oracle/oci-go-sdk/v65 v65.104.0 // indirect
	github.com/panjf2000/ants/v2 v2.11.3 // indirect
	github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
	github.com/pengsrc/go-shared v0.2.1-0.20190131101655-1999055a4a14 // indirect
	github.com/peterh/liner v1.2.2 // indirect
	github.com/pierrec/lz4/v4 v4.1.22 // indirect
	github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/pkg/sftp v1.13.10 // indirect
	github.com/pkg/xattr v0.4.12 // indirect
	github.com/pquerna/otp v1.5.0 // indirect
	github.com/prometheus/client_golang v1.23.2 // indirect
	github.com/prometheus/client_model v0.6.2 // indirect
	github.com/prometheus/common v0.67.2 // indirect
	github.com/prometheus/procfs v0.19.2 // indirect
	github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8 // indirect
	github.com/relvacode/iso8601 v1.7.0 // indirect
	github.com/rfjakob/eme v1.1.2 // indirect
	github.com/sabhiram/go-gitignore v0.0.0-20210923224102-525f6e181f06 // indirect
	github.com/samber/lo v1.52.0 // indirect
	github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af // indirect
	github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 // indirect
	github.com/sony/gobreaker v1.0.0 // indirect
	github.com/spacemonkeygo/monkit/v3 v3.0.25-0.20251022131615-eb24eb109368 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/t3rm1n4l/go-mega v0.0.0-20251031123324-a804aaa87491 // indirect
	github.com/tklauser/go-sysconf v0.3.15 // indirect
	github.com/tklauser/numcpus v0.10.0 // indirect
	github.com/ulikunitz/xz v0.5.15 // indirect
	github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
	github.com/xanzy/ssh-agent v0.3.3 // indirect
	github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
	github.com/yunify/qingstor-sdk-go/v3 v3.2.0 // indirect
	github.com/zeebo/blake3 v0.2.4 // indirect
	github.com/zeebo/errs v1.4.0 // indirect
	github.com/zeebo/xxh3 v1.0.2 // indirect
	go.etcd.io/bbolt v1.4.3 // indirect
	go.mongodb.org/mongo-driver v1.17.6 // indirect
	go.yaml.in/yaml/v2 v2.4.3 // indirect
	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
	golang.org/x/term v0.37.0 // indirect
	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
	gopkg.in/validator.v2 v2.0.1 // indirect
	moul.io/http2curl/v2 v2.3.0 // indirect
	sigs.k8s.io/yaml v1.6.0 // indirect
	storj.io/common v0.0.0-20251107171817-6221ae45072c // indirect
	storj.io/drpc v0.0.35-0.20250513201419-f7819ea69b55 // indirect
	storj.io/eventkit v0.0.0-20250410172343-61f26d3de156 // indirect
	storj.io/infectious v0.0.2 // indirect
	storj.io/picobuf v0.0.4 // indirect
	storj.io/uplink v1.13.1 // indirect
)

require (
-	cloud.google.com/go/auth v0.16.2 // indirect
+	cloud.google.com/go/auth v0.17.0 // indirect
	cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
-	cloud.google.com/go/compute/metadata v0.7.0 // indirect
-	github.com/geoffgarside/ber v1.1.0 // indirect
+	cloud.google.com/go/compute/metadata v0.9.0 // indirect
+	github.com/geoffgarside/ber v1.2.0 // indirect
	github.com/google/s2a-go v0.1.9 // indirect
-	github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
-	github.com/googleapis/gax-go/v2 v2.14.2 // indirect
+	github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
+	github.com/googleapis/gax-go/v2 v2.15.0 // indirect
	github.com/hirochachacha/go-smb2 v1.1.0
	google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
-	google.golang.org/grpc v1.73.0 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
+	google.golang.org/grpc v1.76.0 // indirect
)

require (
@@ -56,11 +211,11 @@ require (
	github.com/bytedance/sonic v1.13.2 // indirect
	github.com/bytedance/sonic/loader v0.2.4 // indirect
	github.com/cloudwego/base64x v0.1.5 // indirect
-	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
-	github.com/ebitengine/purego v0.8.4 // indirect
+	github.com/ebitengine/purego v0.9.1 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
-	github.com/gabriel-vasile/mimetype v1.4.9 // indirect
+	github.com/gabriel-vasile/mimetype v1.4.11 // indirect
	github.com/gin-contrib/sse v1.1.0 // indirect
	github.com/go-ini/ini v1.67.0 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
@@ -72,7 +227,7 @@ require (
	github.com/go-openapi/swag v0.19.15 // indirect
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
-	github.com/go-playground/validator/v10 v10.26.0 // indirect
+	github.com/go-playground/validator/v10 v10.28.0 // indirect
	github.com/go-sql-driver/mysql v1.9.2 // indirect
	github.com/goccy/go-json v0.10.5 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
@@ -82,40 +237,39 @@ require (
	github.com/jinzhu/now v1.1.5 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
-	github.com/klauspost/compress v1.18.0 // indirect
-	github.com/klauspost/cpuid/v2 v2.2.10 // indirect
+	github.com/klauspost/compress v1.18.1 // indirect
+	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
	github.com/leodido/go-urn v1.4.0 // indirect
-	github.com/mailru/easyjson v0.7.6 // indirect
+	github.com/mailru/easyjson v0.9.1 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
-	github.com/minio/crc64nvme v1.0.1 // indirect
+	github.com/minio/crc64nvme v1.1.1 // indirect
	github.com/minio/md5-simd v1.1.2 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
-	github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect
-	github.com/pmezard/go-difflib v1.0.0 // indirect
+	github.com/philhofer/fwd v1.2.0 // indirect
+	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
	github.com/rogpeppe/go-internal v1.14.1 // indirect
	github.com/rs/xid v1.6.0 // indirect
	github.com/stretchr/objx v0.5.2 // indirect
-	github.com/tinylib/msgp v1.3.0 // indirect
+	github.com/tinylib/msgp v1.5.0 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
	github.com/ugorji/go/codec v1.2.12 // indirect
	github.com/yusufpapurcu/wmi v1.2.4 // indirect
-	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
-	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
-	go.opentelemetry.io/otel v1.36.0 // indirect
-	go.opentelemetry.io/otel/metric v1.36.0 // indirect
-	go.opentelemetry.io/otel/trace v1.36.0 // indirect
+	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 // indirect
+	go.opentelemetry.io/otel v1.38.0 // indirect
+	go.opentelemetry.io/otel/metric v1.38.0 // indirect
+	go.opentelemetry.io/otel/trace v1.38.0 // indirect
	golang.org/x/arch v0.17.0 // indirect
-	golang.org/x/net v0.43.0 // indirect
-	golang.org/x/oauth2 v0.30.0
-	golang.org/x/sync v0.16.0 // indirect
-	golang.org/x/sys v0.35.0 // indirect
-	golang.org/x/text v0.28.0 // indirect
-	golang.org/x/tools v0.35.0 // indirect
-	google.golang.org/api v0.239.0
-	google.golang.org/protobuf v1.36.6 // indirect
+	golang.org/x/net v0.47.0 // indirect
+	golang.org/x/oauth2 v0.33.0
+	golang.org/x/sync v0.18.0 // indirect
+	golang.org/x/sys v0.38.0 // indirect
+	golang.org/x/text v0.31.0 // indirect
+	golang.org/x/tools v0.38.0 // indirect
+	google.golang.org/api v0.255.0
+	google.golang.org/protobuf v1.36.10 // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
	olympos.io/encoding/edn v0.0.0-20201019073823-d3554ca0b0a3 // indirect
backend/go.sum (949 changed lines; diff suppressed because it is too large)
@@ -131,7 +131,8 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
	}
	defer func() {
		if pgpassFile != "" {
-			_ = os.Remove(pgpassFile)
+			// Remove the entire temp directory (which contains the .pgpass file)
+			_ = os.RemoveAll(filepath.Dir(pgpassFile))
		}
	}()
@@ -34,6 +34,9 @@ type PostgresqlDatabase struct {
	// backup settings
	IncludeSchemas       []string `json:"includeSchemas" gorm:"-"`
	IncludeSchemasString string   `json:"-" gorm:"column:include_schemas;type:text;not null;default:''"`
+
+	// restore settings (not saved to DB)
+	IsExcludeExtensions bool `json:"isExcludeExtensions" gorm:"-"`
}

func (p *PostgresqlDatabase) TableName() string {
@@ -171,6 +171,36 @@ func Test_RestoreBackup_WhenUserIsNotWorkspaceMember_ReturnsForbidden(t *testing
	assert.Contains(t, string(testResp.Body), "insufficient permissions")
}

+func Test_RestoreBackup_WithIsExcludeExtensions_FlagPassedCorrectly(t *testing.T) {
+	router := createTestRouter()
+	owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
+	workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
+
+	_, backup := createTestDatabaseWithBackupForRestore(workspace, owner, router)
+
+	request := RestoreBackupRequest{
+		PostgresqlDatabase: &postgresql.PostgresqlDatabase{
+			Version:             tools.PostgresqlVersion16,
+			Host:                "localhost",
+			Port:                5432,
+			Username:            "postgres",
+			Password:            "postgres",
+			IsExcludeExtensions: true,
+		},
+	}
+
+	testResp := test_utils.MakePostRequest(
+		t,
+		router,
+		fmt.Sprintf("/api/v1/restores/%s/restore", backup.ID.String()),
+		"Bearer "+owner.Token,
+		request,
+		http.StatusOK,
+	)
+
+	assert.Contains(t, string(testResp.Body), "restore started successfully")
+}
+
func Test_RestoreBackup_AuditLogWritten(t *testing.T) {
	router := createTestRouter()
	owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
@@ -214,6 +214,11 @@ func (s *RestoreService) RestoreBackup(
		return fmt.Errorf("failed to auto-detect database version: %w", err)
	}

	isExcludeExtensions := false
	if requestDTO.PostgresqlDatabase != nil {
		isExcludeExtensions = requestDTO.PostgresqlDatabase.IsExcludeExtensions
	}

	err = s.restoreBackupUsecase.Execute(
		backupConfig,
		restore,
@@ -221,6 +226,7 @@ func (s *RestoreService) RestoreBackup(
		restoringToDB,
		backup,
		storage,
		isExcludeExtensions,
	)
	if err != nil {
		errMsg := err.Error()
@@ -42,6 +42,7 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
	restore models.Restore,
	backup *backups.Backup,
	storage *storages.Storage,
	isExcludeExtensions bool,
) error {
	if originalDB.Type != databases.DatabaseTypePostgres {
		return errors.New("database type not supported")
@@ -96,6 +97,7 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
		backup,
		storage,
		pg,
		isExcludeExtensions,
	)
}

@@ -108,6 +110,7 @@ func (uc *RestorePostgresqlBackupUsecase) restoreFromStorage(
	backup *backups.Backup,
	storage *storages.Storage,
	pgConfig *pgtypes.PostgresqlDatabase,
	isExcludeExtensions bool,
) error {
	uc.logger.Info(
		"Restoring PostgreSQL backup from storage via temporary file",
@@ -115,6 +118,8 @@ func (uc *RestorePostgresqlBackupUsecase) restoreFromStorage(
		pgBin,
		"args",
		args,
		"isExcludeExtensions",
		isExcludeExtensions,
	)

	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Minute)
@@ -171,6 +176,26 @@ func (uc *RestorePostgresqlBackupUsecase) restoreFromStorage(
	}
	defer cleanupFunc()

	// If excluding extensions, generate filtered TOC list and use it
	if isExcludeExtensions {
		tocListFile, err := uc.generateFilteredTocList(
			ctx,
			pgBin,
			tempBackupFile,
			pgpassFile,
			pgConfig,
		)
		if err != nil {
			return fmt.Errorf("failed to generate filtered TOC list: %w", err)
		}
		defer func() {
			_ = os.Remove(tocListFile)
		}()

		// Add -L flag to use the filtered list
		args = append(args, "-L", tocListFile)
	}

	// Add the temporary backup file as the last argument to pg_restore
	args = append(args, tempBackupFile)

@@ -554,6 +579,75 @@ func containsIgnoreCase(str, substr string) bool {
	return strings.Contains(strings.ToLower(str), strings.ToLower(substr))
}

// generateFilteredTocList generates a pg_restore TOC list file with extensions filtered out.
// This is used when isExcludeExtensions is true to skip CREATE EXTENSION statements.
func (uc *RestorePostgresqlBackupUsecase) generateFilteredTocList(
	ctx context.Context,
	pgBin string,
	backupFile string,
	pgpassFile string,
	pgConfig *pgtypes.PostgresqlDatabase,
) (string, error) {
	uc.logger.Info("Generating filtered TOC list to exclude extensions", "backupFile", backupFile)

	// Run pg_restore -l to get the TOC list
	listCmd := exec.CommandContext(ctx, pgBin, "-l", backupFile)
	uc.setupPgRestoreEnvironment(listCmd, pgpassFile, pgConfig)

	tocOutput, err := listCmd.Output()
	if err != nil {
		return "", fmt.Errorf("failed to generate TOC list: %w", err)
	}

	// Filter out EXTENSION-related lines (both CREATE EXTENSION and COMMENT ON EXTENSION)
	var filteredLines []string
	for line := range strings.SplitSeq(string(tocOutput), "\n") {
		trimmedLine := strings.TrimSpace(line)
		if trimmedLine == "" {
			continue
		}

		upperLine := strings.ToUpper(trimmedLine)

		// Skip lines that contain " EXTENSION " - this catches both:
		// - CREATE EXTENSION entries: "3420; 0 0 EXTENSION - uuid-ossp"
		// - COMMENT ON EXTENSION entries: "3462; 0 0 COMMENT - EXTENSION "uuid-ossp""
		if strings.Contains(upperLine, " EXTENSION ") {
			uc.logger.Info("Excluding extension-related entry from restore", "tocLine", trimmedLine)
			continue
		}

		filteredLines = append(filteredLines, line)
	}

	// Write filtered TOC to temporary file
	tocFile, err := os.CreateTemp("", "pg_restore_toc_*.list")
	if err != nil {
		return "", fmt.Errorf("failed to create TOC list file: %w", err)
	}
	tocFilePath := tocFile.Name()

	filteredContent := strings.Join(filteredLines, "\n")
	if _, err := tocFile.WriteString(filteredContent); err != nil {
		_ = tocFile.Close()
		_ = os.Remove(tocFilePath)
		return "", fmt.Errorf("failed to write TOC list file: %w", err)
	}

	if err := tocFile.Close(); err != nil {
		_ = os.Remove(tocFilePath)
		return "", fmt.Errorf("failed to close TOC list file: %w", err)
	}

	uc.logger.Info("Generated filtered TOC list file",
		"tocFile", tocFilePath,
		"originalLines", len(strings.Split(string(tocOutput), "\n")),
		"filteredLines", len(filteredLines),
	)

	return tocFilePath, nil
}

// createTempPgpassFile creates a temporary .pgpass file with the given password
func (uc *RestorePostgresqlBackupUsecase) createTempPgpassFile(
	pgConfig *pgtypes.PostgresqlDatabase,
@@ -21,6 +21,7 @@ func (uc *RestoreBackupUsecase) Execute(
	restoringToDB *databases.Database,
	backup *backups.Backup,
	storage *storages.Storage,
	isExcludeExtensions bool,
) error {
	if originalDB.Type == databases.DatabaseTypePostgres {
		return uc.restorePostgresqlBackupUsecase.Execute(
@@ -30,6 +31,7 @@ func (uc *RestoreBackupUsecase) Execute(
		restore,
		backup,
		storage,
		isExcludeExtensions,
	)
}
@@ -12,6 +12,7 @@ import (
	google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
	local_storage "postgresus-backend/internal/features/storages/models/local"
	nas_storage "postgresus-backend/internal/features/storages/models/nas"
	rclone_storage "postgresus-backend/internal/features/storages/models/rclone"
	s3_storage "postgresus-backend/internal/features/storages/models/s3"
	users_enums "postgresus-backend/internal/features/users/enums"
	users_middleware "postgresus-backend/internal/features/users/middleware"
@@ -786,6 +787,52 @@ func Test_StorageSensitiveDataLifecycle_AllTypes(t *testing.T) {
				assert.Equal(t, "", storage.FTPStorage.Password)
			},
		},
		{
			name:        "Rclone Storage",
			storageType: StorageTypeRclone,
			createStorage: func(workspaceID uuid.UUID) *Storage {
				return &Storage{
					WorkspaceID: workspaceID,
					Type:        StorageTypeRclone,
					Name:        "Test Rclone Storage",
					RcloneStorage: &rclone_storage.RcloneStorage{
						ConfigContent: "[myremote]\ntype = s3\nprovider = AWS\naccess_key_id = test\nsecret_access_key = secret\n",
						RemotePath:    "/backups",
					},
				}
			},
			updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
				return &Storage{
					ID:          storageID,
					WorkspaceID: workspaceID,
					Type:        StorageTypeRclone,
					Name:        "Updated Rclone Storage",
					RcloneStorage: &rclone_storage.RcloneStorage{
						ConfigContent: "",
						RemotePath:    "/backups2",
					},
				}
			},
			verifySensitiveData: func(t *testing.T, storage *Storage) {
				assert.True(t, strings.HasPrefix(storage.RcloneStorage.ConfigContent, "enc:"),
					"ConfigContent should be encrypted with 'enc:' prefix")

				encryptor := encryption.GetFieldEncryptor()
				configContent, err := encryptor.Decrypt(
					storage.ID,
					storage.RcloneStorage.ConfigContent,
				)
				assert.NoError(t, err)
				assert.Equal(
					t,
					"[myremote]\ntype = s3\nprovider = AWS\naccess_key_id = test\nsecret_access_key = secret\n",
					configContent,
				)
			},
			verifyHiddenData: func(t *testing.T, storage *Storage) {
				assert.Equal(t, "", storage.RcloneStorage.ConfigContent)
			},
		},
	}

	for _, tc := range testCases {

@@ -9,4 +9,5 @@ const (
	StorageTypeNAS       StorageType = "NAS"
	StorageTypeAzureBlob StorageType = "AZURE_BLOB"
	StorageTypeFTP       StorageType = "FTP"
	StorageTypeRclone    StorageType = "RCLONE"
)
@@ -10,6 +10,7 @@ import (
	google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
	local_storage "postgresus-backend/internal/features/storages/models/local"
	nas_storage "postgresus-backend/internal/features/storages/models/nas"
	rclone_storage "postgresus-backend/internal/features/storages/models/rclone"
	s3_storage "postgresus-backend/internal/features/storages/models/s3"
	"postgresus-backend/internal/util/encryption"

@@ -30,6 +31,7 @@ type Storage struct {
	NASStorage       *nas_storage.NASStorage              `json:"nasStorage" gorm:"foreignKey:StorageID"`
	AzureBlobStorage *azure_blob_storage.AzureBlobStorage `json:"azureBlobStorage" gorm:"foreignKey:StorageID"`
	FTPStorage       *ftp_storage.FTPStorage              `json:"ftpStorage" gorm:"foreignKey:StorageID"`
	RcloneStorage    *rclone_storage.RcloneStorage        `json:"rcloneStorage" gorm:"foreignKey:StorageID"`
}

func (s *Storage) SaveFile(
@@ -115,6 +117,10 @@ func (s *Storage) Update(incoming *Storage) {
		if s.FTPStorage != nil && incoming.FTPStorage != nil {
			s.FTPStorage.Update(incoming.FTPStorage)
		}
	case StorageTypeRclone:
		if s.RcloneStorage != nil && incoming.RcloneStorage != nil {
			s.RcloneStorage.Update(incoming.RcloneStorage)
		}
	}
}

@@ -132,6 +138,8 @@ func (s *Storage) getSpecificStorage() StorageFileSaver {
		return s.AzureBlobStorage
	case StorageTypeFTP:
		return s.FTPStorage
	case StorageTypeRclone:
		return s.RcloneStorage
	default:
		panic("invalid storage type: " + string(s.Type))
	}

@@ -13,6 +13,7 @@ import (
	google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
	local_storage "postgresus-backend/internal/features/storages/models/local"
	nas_storage "postgresus-backend/internal/features/storages/models/nas"
	rclone_storage "postgresus-backend/internal/features/storages/models/rclone"
	s3_storage "postgresus-backend/internal/features/storages/models/s3"
	"postgresus-backend/internal/util/encryption"
	"postgresus-backend/internal/util/logger"
@@ -145,6 +146,20 @@ func Test_Storage_BasicOperations(t *testing.T) {
				Path: "test-files",
			},
		},
		{
			name: "RcloneStorage",
			storage: &rclone_storage.RcloneStorage{
				StorageID: uuid.New(),
				ConfigContent: fmt.Sprintf(`[minio]
type = s3
provider = Other
access_key_id = %s
secret_access_key = %s
endpoint = http://%s
acl = private`, s3Container.accessKey, s3Container.secretKey, s3Container.endpoint),
				RemotePath: s3Container.bucketName,
			},
		},
	}

	// Add Google Drive storage test only if environment variables are available
293
backend/internal/features/storages/models/rclone/model.go
Normal file
@@ -0,0 +1,293 @@
package rclone_storage

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"log/slog"
	"postgresus-backend/internal/util/encryption"
	"strings"
	"sync"
	"time"

	"github.com/google/uuid"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
	"github.com/rclone/rclone/fs/operations"

	_ "github.com/rclone/rclone/backend/all"
)

const (
	rcloneOperationTimeout = 30 * time.Second
)

var rcloneConfigMu sync.Mutex

type RcloneStorage struct {
	StorageID     uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
	ConfigContent string    `json:"configContent" gorm:"not null;type:text;column:config_content"`
	RemotePath    string    `json:"remotePath" gorm:"type:text;column:remote_path"`
}

func (r *RcloneStorage) TableName() string {
	return "rclone_storages"
}

func (r *RcloneStorage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}

	logger.Info("Starting to save file to rclone storage", "fileId", fileID.String())

	remoteFs, err := r.getFs(ctx, encryptor)
	if err != nil {
		logger.Error("Failed to create rclone filesystem", "fileId", fileID.String(), "error", err)
		return fmt.Errorf("failed to create rclone filesystem: %w", err)
	}

	filePath := r.getFilePath(fileID.String())
	logger.Debug("Uploading file via rclone", "fileId", fileID.String(), "filePath", filePath)

	_, err = operations.Rcat(ctx, remoteFs, filePath, io.NopCloser(file), time.Now().UTC(), nil)
	if err != nil {
		select {
		case <-ctx.Done():
			logger.Info("Rclone upload cancelled", "fileId", fileID.String())
			return ctx.Err()
		default:
			logger.Error(
				"Failed to upload file via rclone",
				"fileId",
				fileID.String(),
				"error",
				err,
			)
			return fmt.Errorf("failed to upload file via rclone: %w", err)
		}
	}

	logger.Info(
		"Successfully saved file to rclone storage",
		"fileId",
		fileID.String(),
		"filePath",
		filePath,
	)
	return nil
}

func (r *RcloneStorage) GetFile(
	encryptor encryption.FieldEncryptor,
	fileID uuid.UUID,
) (io.ReadCloser, error) {
	ctx := context.Background()

	remoteFs, err := r.getFs(ctx, encryptor)
	if err != nil {
		return nil, fmt.Errorf("failed to create rclone filesystem: %w", err)
	}

	filePath := r.getFilePath(fileID.String())

	obj, err := remoteFs.NewObject(ctx, filePath)
	if err != nil {
		return nil, fmt.Errorf("failed to get object from rclone: %w", err)
	}

	reader, err := obj.Open(ctx)
	if err != nil {
		return nil, fmt.Errorf("failed to open object from rclone: %w", err)
	}

	return reader, nil
}

func (r *RcloneStorage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
	ctx := context.Background()

	remoteFs, err := r.getFs(ctx, encryptor)
	if err != nil {
		return fmt.Errorf("failed to create rclone filesystem: %w", err)
	}

	filePath := r.getFilePath(fileID.String())

	obj, err := remoteFs.NewObject(ctx, filePath)
	if err != nil {
		return nil
	}

	err = obj.Remove(ctx)
	if err != nil {
		return fmt.Errorf("failed to delete file from rclone: %w", err)
	}

	return nil
}

func (r *RcloneStorage) Validate(encryptor encryption.FieldEncryptor) error {
	if r.ConfigContent == "" {
		return errors.New("rclone config content is required")
	}

	return nil
}

func (r *RcloneStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
	ctx, cancel := context.WithTimeout(context.Background(), rcloneOperationTimeout)
	defer cancel()

	remoteFs, err := r.getFs(ctx, encryptor)
	if err != nil {
		return fmt.Errorf("failed to create rclone filesystem: %w", err)
	}

	testFileID := uuid.New().String() + "-test"
	testFilePath := r.getFilePath(testFileID)
	testData := strings.NewReader("test connection")

	_, err = operations.Rcat(
		ctx,
		remoteFs,
		testFilePath,
		io.NopCloser(testData),
		time.Now().UTC(),
		nil,
	)
	if err != nil {
		return fmt.Errorf("failed to upload test file via rclone: %w", err)
	}

	obj, err := remoteFs.NewObject(ctx, testFilePath)
	if err != nil {
		return fmt.Errorf("failed to get test file from rclone: %w", err)
	}

	err = obj.Remove(ctx)
	if err != nil {
		return fmt.Errorf("failed to delete test file from rclone: %w", err)
	}

	return nil
}

func (r *RcloneStorage) HideSensitiveData() {
	r.ConfigContent = ""
}

func (r *RcloneStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
	if r.ConfigContent != "" {
		encrypted, err := encryptor.Encrypt(r.StorageID, r.ConfigContent)
		if err != nil {
			return fmt.Errorf("failed to encrypt rclone config content: %w", err)
		}
		r.ConfigContent = encrypted
	}

	return nil
}

func (r *RcloneStorage) Update(incoming *RcloneStorage) {
	r.RemotePath = incoming.RemotePath

	if incoming.ConfigContent != "" {
		r.ConfigContent = incoming.ConfigContent
	}
}

func (r *RcloneStorage) getFs(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
) (fs.Fs, error) {
	configContent, err := encryptor.Decrypt(r.StorageID, r.ConfigContent)
	if err != nil {
		return nil, fmt.Errorf("failed to decrypt rclone config content: %w", err)
	}

	rcloneConfigMu.Lock()
	defer rcloneConfigMu.Unlock()

	parsedConfig, err := parseConfigContent(configContent)
	if err != nil {
		return nil, fmt.Errorf("failed to parse rclone config: %w", err)
	}

	if len(parsedConfig) == 0 {
		return nil, errors.New("rclone config must contain at least one remote section")
	}

	var remoteName string
	for section, values := range parsedConfig {
		remoteName = section
		for key, value := range values {
			config.FileSetValue(section, key, value)
		}
	}

	remotePath := remoteName + ":"
	if r.RemotePath != "" {
		remotePath = remoteName + ":" + strings.TrimPrefix(r.RemotePath, "/")
	}

	remoteFs, err := fs.NewFs(ctx, remotePath)
	if err != nil {
		return nil, fmt.Errorf(
			"failed to create rclone filesystem for remote '%s': %w",
			remoteName,
			err,
		)
	}

	return remoteFs, nil
}

func (r *RcloneStorage) getFilePath(filename string) string {
	return filename
}

func parseConfigContent(content string) (map[string]map[string]string, error) {
	sections := make(map[string]map[string]string)

	var currentSection string
	scanner := bufio.NewScanner(strings.NewReader(content))

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())

		if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
			continue
		}

		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			currentSection = strings.TrimPrefix(strings.TrimSuffix(line, "]"), "[")
			if sections[currentSection] == nil {
				sections[currentSection] = make(map[string]string)
			}
			continue
		}

		if currentSection != "" && strings.Contains(line, "=") {
			parts := strings.SplitN(line, "=", 2)
			key := strings.TrimSpace(parts[0])
			value := ""
			if len(parts) > 1 {
				value = strings.TrimSpace(parts[1])
			}
			sections[currentSection][key] = value
		}
	}

	return sections, scanner.Err()
}
@@ -120,6 +120,7 @@ func (r *StorageRepository) FindByID(id uuid.UUID) (*Storage, error) {
		Preload("NASStorage").
		Preload("AzureBlobStorage").
		Preload("FTPStorage").
		Preload("RcloneStorage").
		Where("id = ?", id).
		First(&s).Error; err != nil {
		return nil, err
@@ -139,6 +140,7 @@ func (r *StorageRepository) FindByWorkspaceID(workspaceID uuid.UUID) ([]*Storage
		Preload("NASStorage").
		Preload("AzureBlobStorage").
		Preload("FTPStorage").
		Preload("RcloneStorage").
		Where("workspace_id = ?", workspaceID).
		Order("name ASC").
		Find(&storages).Error; err != nil {
@@ -243,250 +243,98 @@ func Test_BackupAndRestoreSupabase_PublicSchemaOnly_RestoreIsSuccessful(t *testi
|
||||
|
||||
func Test_BackupPostgresql_SchemaSelection_AllSchemasWhenNoneSpecified(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
|
||||
container, err := connectToPostgresContainer("16", env.TestPostgres16Port)
|
||||
assert.NoError(t, err)
|
||||
defer container.DB.Close()
|
||||
|
||||
_, err = container.DB.Exec(`
|
||||
DROP SCHEMA IF EXISTS schema_a CASCADE;
|
||||
DROP SCHEMA IF EXISTS schema_b CASCADE;
|
||||
CREATE SCHEMA schema_a;
|
||||
CREATE SCHEMA schema_b;
|
||||
|
||||
CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
|
||||
CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
|
||||
CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);
|
||||
|
||||
INSERT INTO public.public_table (data) VALUES ('public_data');
|
||||
INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
|
||||
INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
|
||||
defer func() {
|
||||
_, _ = container.DB.Exec(`
|
||||
DROP TABLE IF EXISTS public.public_table;
|
||||
DROP SCHEMA IF EXISTS schema_a CASCADE;
|
||||
DROP SCHEMA IF EXISTS schema_b CASCADE;
|
||||
`)
|
||||
}()
|
||||
|
||||
router := createTestRouter()
|
||||
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)
|
||||
|
||||
storage := storages.CreateTestStorage(workspace.ID)
|
||||
|
||||
database := createDatabaseWithSchemasViaAPI(
|
||||
t, router, "All Schemas Database", workspace.ID,
|
||||
container.Host, container.Port,
|
||||
container.Username, container.Password, container.Database,
|
||||
nil,
|
||||
user.Token,
|
||||
)
|
||||
|
||||
enableBackupsViaAPI(
|
||||
t, router, database.ID, storage.ID,
|
||||
backups_config.BackupEncryptionNone, user.Token,
|
||||
)
|
||||
|
||||
createBackupViaAPI(t, router, database.ID, user.Token)
|
||||
|
||||
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
|
||||
|
||||
newDBName := "restored_all_schemas"
|
||||
_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
|
||||
_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
|
||||
newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host, container.Port, container.Username, container.Password, newDBName)
|
||||
newDB, err := sqlx.Connect("postgres", newDSN)
|
||||
assert.NoError(t, err)
|
||||
defer newDB.Close()
|
||||
|
||||
createRestoreViaAPI(
|
||||
t, router, backup.ID,
|
||||
container.Host, container.Port,
|
||||
container.Username, container.Password, newDBName,
|
||||
user.Token,
|
||||
)
|
||||
|
||||
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
|
||||
|
||||
var publicTableExists bool
|
||||
err = newDB.Get(&publicTableExists, `
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'public' AND table_name = 'public_table'
|
||||
)
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, publicTableExists, "public.public_table should exist in restored database")
|
||||
|
||||
var schemaATableExists bool
|
||||
err = newDB.Get(&schemaATableExists, `
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'schema_a' AND table_name = 'table_a'
|
||||
)
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, schemaATableExists, "schema_a.table_a should exist in restored database")
|
||||
|
||||
var schemaBTableExists bool
|
||||
err = newDB.Get(&schemaBTableExists, `
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'schema_b' AND table_name = 'table_b'
|
||||
)
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, schemaBTableExists, "schema_b.table_b should exist in restored database")
|
||||
|
||||
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to delete backup file: %v", err)
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
{"PostgreSQL 18", "18", env.TestPostgres18Port},
|
||||
}
|
||||
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/databases/"+database.ID.String(),
|
||||
"Bearer "+user.Token,
|
||||
http.StatusNoContent,
|
||||
)
|
||||
storages.RemoveTestStorage(storage.ID)
|
||||
workspaces_testing.RemoveTestWorkspace(workspace, router)
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
testSchemaSelectionAllSchemasForVersion(t, tc.version, tc.port)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_BackupAndRestorePostgresql_WithExcludeExtensions_RestoreIsSuccessful(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
{"PostgreSQL 18", "18", env.TestPostgres18Port},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
testBackupRestoreWithExcludeExtensionsForVersion(t, tc.version, tc.port)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_BackupAndRestorePostgresql_WithoutExcludeExtensions_ExtensionsAreRecovered(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
{"PostgreSQL 18", "18", env.TestPostgres18Port},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
testBackupRestoreWithoutExcludeExtensionsForVersion(t, tc.version, tc.port)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_BackupPostgresql_SchemaSelection_OnlySpecifiedSchemas(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
|
||||
container, err := connectToPostgresContainer("16", env.TestPostgres16Port)
|
||||
assert.NoError(t, err)
|
||||
defer container.DB.Close()
|
||||
|
||||
_, err = container.DB.Exec(`
|
||||
DROP SCHEMA IF EXISTS schema_a CASCADE;
|
||||
DROP SCHEMA IF EXISTS schema_b CASCADE;
|
||||
CREATE SCHEMA schema_a;
|
||||
CREATE SCHEMA schema_b;
|
||||
|
||||
CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
|
||||
		CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
		CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);

		INSERT INTO public.public_table (data) VALUES ('public_data');
		INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
		INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
	`)
	assert.NoError(t, err)

	defer func() {
		_, _ = container.DB.Exec(`
			DROP TABLE IF EXISTS public.public_table;
			DROP SCHEMA IF EXISTS schema_a CASCADE;
			DROP SCHEMA IF EXISTS schema_b CASCADE;
		`)
	}()

	router := createTestRouter()
	user := users_testing.CreateTestUser(users_enums.UserRoleMember)
	workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)

	storage := storages.CreateTestStorage(workspace.ID)

	database := createDatabaseWithSchemasViaAPI(
		t, router, "Specific Schemas Database", workspace.ID,
		container.Host, container.Port,
		container.Username, container.Password, container.Database,
		[]string{"public", "schema_a"},
		user.Token,
	)

	enableBackupsViaAPI(
		t, router, database.ID, storage.ID,
		backups_config.BackupEncryptionNone, user.Token,
	)

	createBackupViaAPI(t, router, database.ID, user.Token)

	backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
	assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

	newDBName := "restored_specific_schemas"
	_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
	assert.NoError(t, err)

	_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
	assert.NoError(t, err)

	newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, container.Username, container.Password, newDBName)
	newDB, err := sqlx.Connect("postgres", newDSN)
	assert.NoError(t, err)
	defer newDB.Close()

	createRestoreViaAPI(
		t, router, backup.ID,
		container.Host, container.Port,
		container.Username, container.Password, newDBName,
		user.Token,
	)

	restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
	assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

	var publicTableExists bool
	err = newDB.Get(&publicTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'public' AND table_name = 'public_table'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, publicTableExists, "public.public_table should exist (was included)")

	var schemaATableExists bool
	err = newDB.Get(&schemaATableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_a' AND table_name = 'table_a'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, schemaATableExists, "schema_a.table_a should exist (was included)")

	var schemaBTableExists bool
	err = newDB.Get(&schemaBTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_b' AND table_name = 'table_b'
		)
	`)
	assert.NoError(t, err)
	assert.False(t, schemaBTableExists, "schema_b.table_b should NOT exist (was excluded)")
	err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
	if err != nil {
		t.Logf("Warning: Failed to delete backup file: %v", err)
	}

	cases := []struct {
		name    string
		version string
		port    string
	}{
		{"PostgreSQL 12", "12", env.TestPostgres12Port},
		{"PostgreSQL 13", "13", env.TestPostgres13Port},
		{"PostgreSQL 14", "14", env.TestPostgres14Port},
		{"PostgreSQL 15", "15", env.TestPostgres15Port},
		{"PostgreSQL 16", "16", env.TestPostgres16Port},
		{"PostgreSQL 17", "17", env.TestPostgres17Port},
		{"PostgreSQL 18", "18", env.TestPostgres18Port},
	}

	test_utils.MakeDeleteRequest(
		t,
		router,
		"/api/v1/databases/"+database.ID.String(),
		"Bearer "+user.Token,
		http.StatusNoContent,
	)
	storages.RemoveTestStorage(storage.ID)
	workspaces_testing.RemoveTestWorkspace(workspace, router)

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			testSchemaSelectionOnlySpecifiedSchemasForVersion(t, tc.version, tc.port)
		})
	}
}

func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
@@ -573,6 +421,522 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
	workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func testSchemaSelectionAllSchemasForVersion(t *testing.T, pgVersion string, port string) {
	container, err := connectToPostgresContainer(pgVersion, port)
	assert.NoError(t, err)
	defer container.DB.Close()

	_, err = container.DB.Exec(`
		DROP SCHEMA IF EXISTS schema_a CASCADE;
		DROP SCHEMA IF EXISTS schema_b CASCADE;
		CREATE SCHEMA schema_a;
		CREATE SCHEMA schema_b;

		CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
		CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
		CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);

		INSERT INTO public.public_table (data) VALUES ('public_data');
		INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
		INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
	`)
	assert.NoError(t, err)

	defer func() {
		_, _ = container.DB.Exec(`
			DROP TABLE IF EXISTS public.public_table;
			DROP SCHEMA IF EXISTS schema_a CASCADE;
			DROP SCHEMA IF EXISTS schema_b CASCADE;
		`)
	}()

	router := createTestRouter()
	user := users_testing.CreateTestUser(users_enums.UserRoleMember)
	workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)

	storage := storages.CreateTestStorage(workspace.ID)

	database := createDatabaseWithSchemasViaAPI(
		t, router, "All Schemas Database", workspace.ID,
		container.Host, container.Port,
		container.Username, container.Password, container.Database,
		nil,
		user.Token,
	)

	enableBackupsViaAPI(
		t, router, database.ID, storage.ID,
		backups_config.BackupEncryptionNone, user.Token,
	)

	createBackupViaAPI(t, router, database.ID, user.Token)

	backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
	assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

	newDBName := "restored_all_schemas_" + pgVersion
	_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
	assert.NoError(t, err)

	_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
	assert.NoError(t, err)

	newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, container.Username, container.Password, newDBName)
	newDB, err := sqlx.Connect("postgres", newDSN)
	assert.NoError(t, err)
	defer newDB.Close()

	createRestoreViaAPI(
		t, router, backup.ID,
		container.Host, container.Port,
		container.Username, container.Password, newDBName,
		user.Token,
	)

	restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
	assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

	var publicTableExists bool
	err = newDB.Get(&publicTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'public' AND table_name = 'public_table'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, publicTableExists, "public.public_table should exist in restored database")

	var schemaATableExists bool
	err = newDB.Get(&schemaATableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_a' AND table_name = 'table_a'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, schemaATableExists, "schema_a.table_a should exist in restored database")

	var schemaBTableExists bool
	err = newDB.Get(&schemaBTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_b' AND table_name = 'table_b'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, schemaBTableExists, "schema_b.table_b should exist in restored database")

	err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
	if err != nil {
		t.Logf("Warning: Failed to delete backup file: %v", err)
	}

	test_utils.MakeDeleteRequest(
		t,
		router,
		"/api/v1/databases/"+database.ID.String(),
		"Bearer "+user.Token,
		http.StatusNoContent,
	)
	storages.RemoveTestStorage(storage.ID)
	workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func testBackupRestoreWithExcludeExtensionsForVersion(t *testing.T, pgVersion string, port string) {
	container, err := connectToPostgresContainer(pgVersion, port)
	assert.NoError(t, err)
	defer container.DB.Close()

	// Create table with uuid-ossp extension and add a comment on the extension
	// The comment is important to test that COMMENT ON EXTENSION statements are also excluded
	_, err = container.DB.Exec(`
		DROP EXTENSION IF EXISTS "uuid-ossp" CASCADE;
		CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
		COMMENT ON EXTENSION "uuid-ossp" IS 'Test comment on uuid-ossp extension';

		DROP TABLE IF EXISTS test_extension_data;
		CREATE TABLE test_extension_data (
			id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
			name TEXT NOT NULL,
			created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
		);

		INSERT INTO test_extension_data (name) VALUES ('test1'), ('test2'), ('test3');
	`)
	assert.NoError(t, err)

	defer func() {
		_, _ = container.DB.Exec(`
			DROP TABLE IF EXISTS test_extension_data;
			DROP EXTENSION IF EXISTS "uuid-ossp" CASCADE;
		`)
	}()

	router := createTestRouter()
	user := users_testing.CreateTestUser(users_enums.UserRoleMember)
	workspace := workspaces_testing.CreateTestWorkspace("Extension Test Workspace", user, router)

	storage := storages.CreateTestStorage(workspace.ID)

	database := createDatabaseViaAPI(
		t, router, "Extension Test Database", workspace.ID,
		container.Host, container.Port,
		container.Username, container.Password, container.Database,
		user.Token,
	)

	enableBackupsViaAPI(
		t, router, database.ID, storage.ID,
		backups_config.BackupEncryptionNone, user.Token,
	)

	createBackupViaAPI(t, router, database.ID, user.Token)

	backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
	assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

	// Create new database for restore with extension pre-installed
	newDBName := "restored_exclude_ext_" + pgVersion
	_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
	assert.NoError(t, err)

	_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
	assert.NoError(t, err)

	newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, container.Username, container.Password, newDBName)
	newDB, err := sqlx.Connect("postgres", newDSN)
	assert.NoError(t, err)
	defer newDB.Close()

	// Pre-install the extension in the target database (simulating managed service behavior)
	_, err = newDB.Exec(`CREATE EXTENSION IF NOT EXISTS "uuid-ossp";`)
	assert.NoError(t, err)

	// Restore with isExcludeExtensions=true
	createRestoreWithOptionsViaAPI(
		t, router, backup.ID,
		container.Host, container.Port,
		container.Username, container.Password, newDBName,
		true, // isExcludeExtensions
		user.Token,
	)

	restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
	assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

	// Verify the table was restored
	var tableExists bool
	err = newDB.Get(&tableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'public' AND table_name = 'test_extension_data'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, tableExists, "test_extension_data should exist in restored database")

	// Verify data was restored
	var count int
	err = newDB.Get(&count, `SELECT COUNT(*) FROM test_extension_data`)
	assert.NoError(t, err)
	assert.Equal(t, 3, count, "Should have 3 rows after restore")

	// Verify extension still works (uuid_generate_v4 should work)
	var newUUID string
	err = newDB.Get(&newUUID, `SELECT uuid_generate_v4()::text`)
	assert.NoError(t, err)
	assert.NotEmpty(t, newUUID, "uuid_generate_v4 should work")

	// Cleanup
	err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
	if err != nil {
		t.Logf("Warning: Failed to delete backup file: %v", err)
	}

	test_utils.MakeDeleteRequest(
		t,
		router,
		"/api/v1/databases/"+database.ID.String(),
		"Bearer "+user.Token,
		http.StatusNoContent,
	)
	storages.RemoveTestStorage(storage.ID)
	workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func testBackupRestoreWithoutExcludeExtensionsForVersion(
	t *testing.T,
	pgVersion string,
	port string,
) {
	container, err := connectToPostgresContainer(pgVersion, port)
	assert.NoError(t, err)
	defer container.DB.Close()

	// Create table with uuid-ossp extension
	_, err = container.DB.Exec(`
		DROP EXTENSION IF EXISTS "uuid-ossp" CASCADE;
		CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

		DROP TABLE IF EXISTS test_extension_recovery;
		CREATE TABLE test_extension_recovery (
			id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
			name TEXT NOT NULL,
			created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
		);

		INSERT INTO test_extension_recovery (name) VALUES ('test1'), ('test2'), ('test3');
	`)
	assert.NoError(t, err)

	defer func() {
		_, _ = container.DB.Exec(`
			DROP TABLE IF EXISTS test_extension_recovery;
			DROP EXTENSION IF EXISTS "uuid-ossp" CASCADE;
		`)
	}()

	router := createTestRouter()
	user := users_testing.CreateTestUser(users_enums.UserRoleMember)
	workspace := workspaces_testing.CreateTestWorkspace(
		"Extension Recovery Test Workspace",
		user,
		router,
	)

	storage := storages.CreateTestStorage(workspace.ID)

	database := createDatabaseViaAPI(
		t, router, "Extension Recovery Test Database", workspace.ID,
		container.Host, container.Port,
		container.Username, container.Password, container.Database,
		user.Token,
	)

	enableBackupsViaAPI(
		t, router, database.ID, storage.ID,
		backups_config.BackupEncryptionNone, user.Token,
	)

	createBackupViaAPI(t, router, database.ID, user.Token)

	backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
	assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

	// Create new database for restore WITHOUT pre-installed extension
	newDBName := "restored_with_ext_" + pgVersion
	_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
	assert.NoError(t, err)

	_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
	assert.NoError(t, err)

	newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, container.Username, container.Password, newDBName)
	newDB, err := sqlx.Connect("postgres", newDSN)
	assert.NoError(t, err)
	defer newDB.Close()

	// Verify extension does NOT exist before restore
	var extensionExistsBefore bool
	err = newDB.Get(&extensionExistsBefore, `
		SELECT EXISTS (
			SELECT FROM pg_extension WHERE extname = 'uuid-ossp'
		)
	`)
	assert.NoError(t, err)
	assert.False(t, extensionExistsBefore, "Extension should NOT exist before restore")

	// Restore with isExcludeExtensions=false (extensions should be recovered)
	createRestoreWithOptionsViaAPI(
		t, router, backup.ID,
		container.Host, container.Port,
		container.Username, container.Password, newDBName,
		false, // isExcludeExtensions = false means extensions ARE included
		user.Token,
	)

	restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
	assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

	// Verify the extension was recovered
	var extensionExists bool
	err = newDB.Get(&extensionExists, `
		SELECT EXISTS (
			SELECT FROM pg_extension WHERE extname = 'uuid-ossp'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, extensionExists, "Extension 'uuid-ossp' should be recovered during restore")

	// Verify the table was restored
	var tableExists bool
	err = newDB.Get(&tableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'public' AND table_name = 'test_extension_recovery'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, tableExists, "test_extension_recovery should exist in restored database")

	// Verify data was restored
	var count int
	err = newDB.Get(&count, `SELECT COUNT(*) FROM test_extension_recovery`)
	assert.NoError(t, err)
	assert.Equal(t, 3, count, "Should have 3 rows after restore")

	// Verify extension works (uuid_generate_v4 should work)
	var newUUID string
	err = newDB.Get(&newUUID, `SELECT uuid_generate_v4()::text`)
	assert.NoError(t, err)
	assert.NotEmpty(t, newUUID, "uuid_generate_v4 should work after extension recovery")

	// Cleanup
	err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
	if err != nil {
		t.Logf("Warning: Failed to delete backup file: %v", err)
	}

	test_utils.MakeDeleteRequest(
		t,
		router,
		"/api/v1/databases/"+database.ID.String(),
		"Bearer "+user.Token,
		http.StatusNoContent,
	)
	storages.RemoveTestStorage(storage.ID)
	workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func testSchemaSelectionOnlySpecifiedSchemasForVersion(
	t *testing.T,
	pgVersion string,
	port string,
) {
	container, err := connectToPostgresContainer(pgVersion, port)
	assert.NoError(t, err)
	defer container.DB.Close()

	_, err = container.DB.Exec(`
		DROP SCHEMA IF EXISTS schema_a CASCADE;
		DROP SCHEMA IF EXISTS schema_b CASCADE;
		CREATE SCHEMA schema_a;
		CREATE SCHEMA schema_b;

		CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
		CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
		CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);

		INSERT INTO public.public_table (data) VALUES ('public_data');
		INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
		INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
	`)
	assert.NoError(t, err)

	defer func() {
		_, _ = container.DB.Exec(`
			DROP TABLE IF EXISTS public.public_table;
			DROP SCHEMA IF EXISTS schema_a CASCADE;
			DROP SCHEMA IF EXISTS schema_b CASCADE;
		`)
	}()

	router := createTestRouter()
	user := users_testing.CreateTestUser(users_enums.UserRoleMember)
	workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)

	storage := storages.CreateTestStorage(workspace.ID)

	database := createDatabaseWithSchemasViaAPI(
		t, router, "Specific Schemas Database", workspace.ID,
		container.Host, container.Port,
		container.Username, container.Password, container.Database,
		[]string{"public", "schema_a"},
		user.Token,
	)

	enableBackupsViaAPI(
		t, router, database.ID, storage.ID,
		backups_config.BackupEncryptionNone, user.Token,
	)

	createBackupViaAPI(t, router, database.ID, user.Token)

	backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
	assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

	newDBName := "restored_specific_schemas_" + pgVersion
	_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
	assert.NoError(t, err)

	_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
	assert.NoError(t, err)

	newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, container.Username, container.Password, newDBName)
	newDB, err := sqlx.Connect("postgres", newDSN)
	assert.NoError(t, err)
	defer newDB.Close()

	createRestoreViaAPI(
		t, router, backup.ID,
		container.Host, container.Port,
		container.Username, container.Password, newDBName,
		user.Token,
	)

	restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
	assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

	var publicTableExists bool
	err = newDB.Get(&publicTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'public' AND table_name = 'public_table'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, publicTableExists, "public.public_table should exist (was included)")

	var schemaATableExists bool
	err = newDB.Get(&schemaATableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_a' AND table_name = 'table_a'
		)
	`)
	assert.NoError(t, err)
	assert.True(t, schemaATableExists, "schema_a.table_a should exist (was included)")

	var schemaBTableExists bool
	err = newDB.Get(&schemaBTableExists, `
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_schema = 'schema_b' AND table_name = 'table_b'
		)
	`)
	assert.NoError(t, err)
	assert.False(t, schemaBTableExists, "schema_b.table_b should NOT exist (was excluded)")

	err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
	if err != nil {
		t.Logf("Warning: Failed to delete backup file: %v", err)
	}

	test_utils.MakeDeleteRequest(
		t,
		router,
		"/api/v1/databases/"+database.ID.String(),
		"Bearer "+user.Token,
		http.StatusNoContent,
	)
	storages.RemoveTestStorage(storage.ID)
	workspaces_testing.RemoveTestWorkspace(workspace, router)
}

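The include/exclude behavior the tests above verify matches what pg_dump's `--schema` flag provides. As a hedged sketch (the backend's actual dump invocation is not shown in this diff; `buildDumpArgs` is a hypothetical helper invented for illustration), an `includeSchemas` list could translate to pg_dump arguments like this:

```go
package main

import "fmt"

// buildDumpArgs sketches how an includeSchemas list might map onto pg_dump
// flags: one --schema per included schema, none when the list is empty
// (pg_dump then dumps all schemas). Hypothetical helper, not the real backend code.
func buildDumpArgs(dbName string, includeSchemas []string) []string {
	args := []string{"--format=custom", "--dbname=" + dbName}
	for _, s := range includeSchemas {
		args = append(args, "--schema="+s)
	}
	return args
}

func main() {
	// Mirrors the test above: only public and schema_a are backed up.
	fmt.Println(buildDumpArgs("app", []string{"public", "schema_a"}))
}
```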
func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, port string) {
	container, err := connectToPostgresContainer(pgVersion, port)
	assert.NoError(t, err)
@@ -852,14 +1216,41 @@ func createRestoreViaAPI(
	password string,
	database string,
	token string,
) {
	createRestoreWithOptionsViaAPI(
		t,
		router,
		backupID,
		host,
		port,
		username,
		password,
		database,
		false,
		token,
	)
}

func createRestoreWithOptionsViaAPI(
	t *testing.T,
	router *gin.Engine,
	backupID uuid.UUID,
	host string,
	port int,
	username string,
	password string,
	database string,
	isExcludeExtensions bool,
	token string,
) {
	request := restores.RestoreBackupRequest{
		PostgresqlDatabase: &pgtypes.PostgresqlDatabase{
			Host:                host,
			Port:                port,
			Username:            username,
			Password:            password,
			Database:            &database,
			IsExcludeExtensions: isExcludeExtensions,
		},
	}

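The refactor above keeps the old helper's signature and delegates to an options-taking variant with the new flag defaulted to false. In isolation, the pattern looks like this (a minimal sketch with invented names, not the project's actual code):

```go
package main

import "fmt"

// restore keeps the original call signature and forwards to the
// options-taking variant, defaulting the new flag to false, so existing
// call sites need no changes (hypothetical names for illustration).
func restore(db string) string {
	return restoreWithOptions(db, false)
}

func restoreWithOptions(db string, isExcludeExtensions bool) string {
	if isExcludeExtensions {
		return "restore " + db + " --no-extensions"
	}
	return "restore " + db
}

func main() {
	fmt.Println(restore("app"))
	fmt.Println(restoreWithOptions("app", true))
}
```

This is the usual way to add a parameter to a widely used test helper without touching every existing caller.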
23
backend/migrations/20251218123447_add_rclone_storages.sql
Normal file
@@ -0,0 +1,23 @@
-- +goose Up
-- +goose StatementBegin

CREATE TABLE rclone_storages (
    storage_id UUID PRIMARY KEY,
    config_content TEXT NOT NULL,
    remote_path TEXT
);

ALTER TABLE rclone_storages
    ADD CONSTRAINT fk_rclone_storages_storage
    FOREIGN KEY (storage_id)
    REFERENCES storages (id)
    ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

DROP TABLE IF EXISTS rclone_storages;

-- +goose StatementEnd
102
frontend/public/icons/storages/rclone.svg
Normal file
@@ -0,0 +1,102 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:cc="http://creativecommons.org/ns#"
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:svg="http://www.w3.org/2000/svg"
   xmlns="http://www.w3.org/2000/svg"
   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
   width="64"
   height="64"
   viewBox="0 0 64 64"
   version="1.1"
   xml:space="preserve"
   style="clip-rule:evenodd;fill-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:1.41420996"
   id="svg50"
   sodipodi:docname="rclone-icon.svg"
   inkscape:version="0.92.4 (5da689c313, 2019-01-14)"><metadata
     id="metadata56"><rdf:RDF><cc:Work
         rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
           rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
     id="defs54">
    <clipPath
       id="_clip1">
      <rect
         x="14"
         y="579"
         width="257"
         height="84"
         id="rect4" />
    </clipPath>
  </defs><sodipodi:namedview
     pagecolor="#ffffff"
     bordercolor="#666666"
     borderopacity="1"
     objecttolerance="10"
     gridtolerance="10"
     guidetolerance="10"
     inkscape:pageopacity="0"
     inkscape:pageshadow="2"
     inkscape:window-width="1531"
     inkscape:window-height="807"
     id="namedview52"
     showgrid="false"
     units="px"
     inkscape:zoom="1.539823"
     inkscape:cx="-84.425288"
     inkscape:cy="26.5"
     inkscape:window-x="70"
     inkscape:window-y="27"
     inkscape:window-maximized="0"
     inkscape:current-layer="svg50" />
  <g
     id="g824"
     transform="matrix(1.3422256,0,0,1.3422256,-2.2309418e-8,3.8420351)"><path
       d="m 45.726917,21.83581 c -1.507672,-2.611426 -3.701518,-4.579735 -6.222732,-5.808561 -0.322585,1.72227 -0.932898,3.419936 -1.857594,5.021921 l -1.459147,2.532147 c 0.971853,0.539918 1.817954,1.334759 2.414598,2.368122 1.753027,3.035842 0.712146,6.919151 -2.324383,8.672176 -3.035847,1.753025 -6.919159,0.712829 -8.672186,-2.323698 l -2.944264,-5.091631 h -4.751283 l -2.375642,4.114312 2.946315,5.090948 c 4.025469,6.971776 12.939592,9.360401 19.911375,5.334937 6.971101,-4.024782 9.359727,-12.938896 5.334943,-19.910673"
       style="fill:#70caf2;fill-rule:nonzero;stroke-width:0.68344086"
       id="path7"
       inkscape:connector-curvature="0" /><path
       d="M 31.127807,0.45456543 C 24.156023,-3.5702158 15.2419,-1.1815912 11.217114,5.7895021 9.7087599,8.4009285 9.1018638,11.285048 9.2980112,14.083052 10.950572,13.501445 12.726153,13.180911 14.576228,13.180911 l 2.921711,-0.0027 c -0.01845,-1.111274 0.247406,-2.241684 0.843367,-3.2743635 1.75371,-3.036526 5.636339,-4.0774059 8.672868,-2.3236971 3.03653,1.7530242 4.076727,5.6356506 2.323701,8.6721766 l -2.936747,5.095732 2.374958,4.114995 4.751283,-6.83e-4 2.93538,-5.097099 C 40.488218,13.394145 38.099591,4.4793466 31.127807,0.45456543"
       style="fill:#b4e3f9;fill-rule:nonzero;stroke-width:0.68344086"
       id="path11"
       inkscape:connector-curvature="0" /><path
       d="m 19.297646,37.095505 -1.463932,-2.529413 c -0.9534,0.57204 -2.064675,0.906925 -3.25728,0.906925 -3.506736,0 -6.3491688,-2.842428 -6.3491688,-6.349162 0,-3.50605 2.8424328,-6.348479 6.3491688,-6.348479 l 5.881011,-0.0041 2.376326,-4.114312 -2.376326,-4.114312 -5.881695,0.0055 C 6.5254965,14.548074 1.6621211e-8,21.074248 1.6621211e-8,29.12381 1.6621211e-8,37.174056 6.5254965,43.70023 14.57575,43.70023 c 3.014659,0 5.814718,-0.915811 8.139101,-2.48294 -1.329976,-1.140662 -2.49251,-2.520528 -3.417205,-4.12183"
       style="fill:#3f79ad;fill-rule:nonzero;stroke-width:0.68344086"
       id="path15"
       inkscape:connector-curvature="0" /></g>
</svg>

After Width: | Height: | Size: 4.3 KiB
@@ -14,4 +14,7 @@ export interface PostgresqlDatabase {

  // backup settings
  includeSchemas?: string[];

  // restore settings (not saved to DB)
  isExcludeExtensions?: boolean;
}

@@ -9,3 +9,4 @@ export { getStorageNameFromType } from './models/getStorageNameFromType';
export { type GoogleDriveStorage } from './models/GoogleDriveStorage';
export { type AzureBlobStorage } from './models/AzureBlobStorage';
export { type FTPStorage } from './models/FTPStorage';
export { type RcloneStorage } from './models/RcloneStorage';

4
frontend/src/entity/storages/models/RcloneStorage.ts
Normal file
@@ -0,0 +1,4 @@
export interface RcloneStorage {
  configContent: string;
  remotePath?: string;
}
@@ -3,6 +3,7 @@ import type { FTPStorage } from './FTPStorage';
import type { GoogleDriveStorage } from './GoogleDriveStorage';
import type { LocalStorage } from './LocalStorage';
import type { NASStorage } from './NASStorage';
import type { RcloneStorage } from './RcloneStorage';
import type { S3Storage } from './S3Storage';
import type { StorageType } from './StorageType';

@@ -20,4 +21,5 @@ export interface Storage {
  nasStorage?: NASStorage;
  azureBlobStorage?: AzureBlobStorage;
  ftpStorage?: FTPStorage;
  rcloneStorage?: RcloneStorage;
}

@@ -5,4 +5,5 @@ export enum StorageType {
  NAS = 'NAS',
  AZURE_BLOB = 'AZURE_BLOB',
  FTP = 'FTP',
  RCLONE = 'RCLONE',
}

@@ -14,6 +14,8 @@ export const getStorageLogoFromType = (type: StorageType) => {
      return '/icons/storages/azure.svg';
    case StorageType.FTP:
      return '/icons/storages/ftp.svg';
    case StorageType.RCLONE:
      return '/icons/storages/rclone.svg';
    default:
      return '';
  }

@@ -14,6 +14,8 @@ export const getStorageNameFromType = (type: StorageType) => {
      return 'Azure Blob Storage';
    case StorageType.FTP:
      return 'FTP';
    case StorageType.RCLONE:
      return 'Rclone';
    default:
      return '';
  }

@@ -1,5 +1,5 @@
|
||||
import { CopyOutlined, DownOutlined, UpOutlined } from '@ant-design/icons';
|
||||
import { App, Button, Input, InputNumber, Select, Switch } from 'antd';
|
||||
import { CopyOutlined, DownOutlined, InfoCircleOutlined, UpOutlined } from '@ant-design/icons';
|
||||
import { App, Button, Checkbox, Input, InputNumber, Select, Switch, Tooltip } from 'antd';
|
||||
import { useEffect, useState } from 'react';
|
||||
|
||||
import { type Database, DatabaseType, databaseApi } from '../../../../entity/databases';
|
||||
@@ -20,6 +20,7 @@ interface Props {
   onSaved: (database: Database) => void;

   isShowDbName?: boolean;
+  isRestoreMode?: boolean;
 }

 export const EditDatabaseSpecificDataComponent = ({

@@ -35,6 +36,7 @@ export const EditDatabaseSpecificDataComponent = ({
   isSaveToApi,
   onSaved,
   isShowDbName = true,
+  isRestoreMode = false,
 }: Props) => {
   const { message } = App.useApp();

@@ -45,7 +47,8 @@ export const EditDatabaseSpecificDataComponent = ({
   const [isTestingConnection, setIsTestingConnection] = useState(false);
   const [isConnectionFailed, setIsConnectionFailed] = useState(false);

-  const hasAdvancedValues = !!database.postgresql?.includeSchemas?.length;
+  const hasAdvancedValues =
+    !!database.postgresql?.includeSchemas?.length || !!database.postgresql?.isExcludeExtensions;
   const [isShowAdvanced, setShowAdvanced] = useState(hasAdvancedValues);

   const [hasAutoAddedPublicSchema, setHasAutoAddedPublicSchema] = useState(false);

@@ -328,7 +331,7 @@ export const EditDatabaseSpecificDataComponent = ({
             }}
             size="small"
             className="max-w-[200px] grow"
-            placeholder="Enter PG database name (optional)"
+            placeholder="Enter PG database name"
           />
         </div>
       )}

@@ -366,25 +369,60 @@ export const EditDatabaseSpecificDataComponent = ({
       </div>

       {isShowAdvanced && (
-        <div className="mb-1 flex w-full items-center">
-          <div className="min-w-[150px]">Include schemas</div>
-          <Select
-            mode="tags"
-            value={editingDatabase.postgresql?.includeSchemas || []}
-            onChange={(values) => {
-              if (!editingDatabase.postgresql) return;
-
-              setEditingDatabase({
-                ...editingDatabase,
-                postgresql: { ...editingDatabase.postgresql, includeSchemas: values },
-              });
-            }}
-            size="small"
-            className="max-w-[200px] grow"
-            placeholder="All schemas (default)"
-            tokenSeparators={[',']}
-          />
-        </div>
+        <>
+          {!isRestoreMode && (
+            <div className="mb-1 flex w-full items-center">
+              <div className="min-w-[150px]">Include schemas</div>
+              <Select
+                mode="tags"
+                value={editingDatabase.postgresql?.includeSchemas || []}
+                onChange={(values) => {
+                  if (!editingDatabase.postgresql) return;
+
+                  setEditingDatabase({
+                    ...editingDatabase,
+                    postgresql: { ...editingDatabase.postgresql, includeSchemas: values },
+                  });
+                }}
+                size="small"
+                className="max-w-[200px] grow"
+                placeholder="All schemas (default)"
+                tokenSeparators={[',']}
+              />
+            </div>
+          )}
+
+          {isRestoreMode && (
+            <div className="mb-1 flex w-full items-center">
+              <div className="min-w-[150px]">Exclude extensions</div>
+              <div className="flex items-center">
+                <Checkbox
+                  checked={editingDatabase.postgresql?.isExcludeExtensions || false}
+                  onChange={(e) => {
+                    if (!editingDatabase.postgresql) return;
+
+                    setEditingDatabase({
+                      ...editingDatabase,
+                      postgresql: {
+                        ...editingDatabase.postgresql,
+                        isExcludeExtensions: e.target.checked,
+                      },
+                    });
+                  }}
+                >
+                  Skip extensions
+                </Checkbox>
+
+                <Tooltip
+                  className="cursor-pointer"
+                  title="Skip restoring extension definitions (CREATE EXTENSION statements). Enable this if you're restoring to a managed PostgreSQL service where extensions are managed by the provider."
+                >
+                  <InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
+                </Tooltip>
+              </div>
+            </div>
+          )}
+        </>
       )}
     </>
   )}

@@ -111,6 +111,7 @@ export const RestoresComponent = ({ database, backup }: Props) => {
           setEditingDatabase({ ...database });
           restore(database);
         }}
+        isRestoreMode={true}
       />
     </>
   );

@@ -245,6 +246,13 @@ export const RestoresComponent = ({ database, backup }: Props) => {
           </Button>
         }
       >
+        {showingRestoreError.failMessage?.includes('must be owner of extension') && (
+          <div className="mb-4 rounded border border-yellow-300 bg-yellow-50 p-3 text-sm dark:border-yellow-600 dark:bg-yellow-900/30">
+            <strong>💡 Tip:</strong> This error typically occurs when restoring to managed
+            PostgreSQL services (like Yandex Cloud, AWS RDS or similar). Try enabling{' '}
+            <strong>"Exclude extensions"</strong> in Advanced settings before restoring.
+          </div>
+        )}
         <div className="overflow-y-auto text-sm whitespace-pre-wrap" style={{ height: '400px' }}>
           {showingRestoreError.failMessage}
         </div>

@@ -12,6 +12,7 @@ import { EditAzureBlobStorageComponent } from './storages/EditAzureBlobStorageCo
 import { EditFTPStorageComponent } from './storages/EditFTPStorageComponent';
 import { EditGoogleDriveStorageComponent } from './storages/EditGoogleDriveStorageComponent';
 import { EditNASStorageComponent } from './storages/EditNASStorageComponent';
+import { EditRcloneStorageComponent } from './storages/EditRcloneStorageComponent';
 import { EditS3StorageComponent } from './storages/EditS3StorageComponent';

 interface Props {

@@ -88,6 +89,7 @@ export function EditStorageComponent({
     storage.googleDriveStorage = undefined;
     storage.azureBlobStorage = undefined;
     storage.ftpStorage = undefined;
+    storage.rcloneStorage = undefined;

     if (type === StorageType.LOCAL) {
       storage.localStorage = {};

@@ -146,6 +148,13 @@ export function EditStorageComponent({
       };
     }

+    if (type === StorageType.RCLONE) {
+      storage.rcloneStorage = {
+        configContent: '',
+        remotePath: '',
+      };
+    }
+
     setStorage(
       JSON.parse(
         JSON.stringify({

@@ -261,6 +270,14 @@ export function EditStorageComponent({
       );
     }

+    if (storage.type === StorageType.RCLONE) {
+      if (storage.id) {
+        return true;
+      }
+
+      return storage.rcloneStorage?.configContent;
+    }
+
     return false;
   };

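The branch above gates the connection test for Rclone storages: a storage that already has an `id` keeps its config server-side and can always be tested, while an unsaved one must have config content pasted first. A standalone sketch of that rule (the `canTestRcloneConnection` name and the reduced `Storage` shape are illustrative, not from the repo; the diff itself returns the possibly-undefined string rather than coercing to boolean):

```typescript
// Reduced shapes for illustration; the real Storage type lives in entity/storages.
interface RcloneStorage {
  configContent: string;
  remotePath: string;
}

interface Storage {
  id?: string;
  type: string;
  rcloneStorage?: RcloneStorage;
}

// Hypothetical standalone predicate mirroring the RCLONE branch of the diff.
const canTestRcloneConnection = (storage: Storage): boolean => {
  // A saved storage keeps its config server-side, so testing is always possible.
  if (storage.id) return true;
  // An unsaved storage needs pasted config content before testing.
  return Boolean(storage.rcloneStorage?.configContent);
};

console.log(
  canTestRcloneConnection({
    type: 'RCLONE',
    rcloneStorage: { configContent: '[myremote]\ntype = s3', remotePath: '' },
  }),
); // → true
```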
@@ -298,6 +315,7 @@ export function EditStorageComponent({
               { label: 'NAS', value: StorageType.NAS },
               { label: 'Azure Blob Storage', value: StorageType.AZURE_BLOB },
               { label: 'FTP', value: StorageType.FTP },
+              { label: 'Rclone', value: StorageType.RCLONE },
             ]}
             onChange={(value) => {
               setStorageType(value);

@@ -370,6 +388,17 @@ export function EditStorageComponent({
             }}
           />
         )}
+
+        {storage?.type === StorageType.RCLONE && (
+          <EditRcloneStorageComponent
+            storage={storage}
+            setStorage={setStorage}
+            setUnsaved={() => {
+              setIsUnsaved(true);
+              setIsTestConnectionSuccess(false);
+            }}
+          />
+        )}
       </div>

       <div className="mt-3 flex">

@@ -0,0 +1,109 @@
+import { InfoCircleOutlined } from '@ant-design/icons';
+import { Input, Tooltip } from 'antd';
+
+import type { Storage } from '../../../../../entity/storages';
+
+interface Props {
+  storage: Storage;
+  setStorage: (storage: Storage) => void;
+  setUnsaved: () => void;
+}
+
+export function EditRcloneStorageComponent({ storage, setStorage, setUnsaved }: Props) {
+  return (
+    <>
+      <div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-start">
+        <div className="mb-1 min-w-[110px] pt-1 sm:mb-0">Config</div>
+        <div className="flex w-full flex-col">
+          <div className="flex items-start">
+            <Input.TextArea
+              value={storage?.rcloneStorage?.configContent || ''}
+              onChange={(e) => {
+                if (!storage?.rcloneStorage) return;
+
+                setStorage({
+                  ...storage,
+                  rcloneStorage: {
+                    ...storage.rcloneStorage,
+                    configContent: e.target.value,
+                  },
+                });
+                setUnsaved();
+              }}
+              className="w-full max-w-[400px] font-mono text-xs"
+              placeholder={`[myremote]
+type = s3
+provider = AWS
+access_key_id = YOUR_ACCESS_KEY
+secret_access_key = YOUR_SECRET_KEY
+region = us-east-1`}
+              rows={8}
+              style={{ resize: 'vertical' }}
+            />
+
+            <Tooltip
+              className="cursor-pointer"
+              title="Paste your rclone.conf content here. You can get it by running 'rclone config file' and copying the contents. This config supports 70+ cloud storage providers."
+            >
+              <InfoCircleOutlined className="mt-2 ml-2" style={{ color: 'gray' }} />
+            </Tooltip>
+          </div>
+        </div>
+      </div>
+
+      {!storage?.id && (
+        <div className="mb-2 flex items-center">
+          <div className="hidden min-w-[110px] sm:block" />
+
+          <div className="max-w-[300px] text-xs text-gray-400">
+            *content is hidden to not expose sensitive data. If you want to update existing config,
+            put a new one here
+          </div>
+        </div>
+      )}
+
+      <div className="mb-2 flex items-center">
+        <div className="hidden min-w-[110px] sm:block" />
+
+        <div className="text-xs text-blue-600">
+          <a href="https://rclone.org/docs/" target="_blank" rel="noreferrer">
+            Rclone documentation
+          </a>
+        </div>
+      </div>
+
+      <div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
+        <div className="mb-1 min-w-[110px] sm:mb-0">Remote path</div>
+        <div className="flex items-center">
+          <Input
+            value={storage?.rcloneStorage?.remotePath || ''}
+            onChange={(e) => {
+              if (!storage?.rcloneStorage) return;
+
+              setStorage({
+                ...storage,
+                rcloneStorage: {
+                  ...storage.rcloneStorage,
+                  remotePath: e.target.value.trim(),
+                },
+              });
+              setUnsaved();
+            }}
+            size="small"
+            className="w-full max-w-[250px]"
+            placeholder="/backups (optional)"
+          />
+
+          <Tooltip
+            className="cursor-pointer"
+            title="Optional path prefix on the remote where backups will be stored (e.g., '/backups' or 'my-folder/backups')"
+          >
+            <InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
+          </Tooltip>
+        </div>
+      </div>
+
+      <div className="mb-5" />
+    </>
+  );
+}

@@ -5,6 +5,7 @@ import { ShowAzureBlobStorageComponent } from './storages/ShowAzureBlobStorageCo
 import { ShowFTPStorageComponent } from './storages/ShowFTPStorageComponent';
 import { ShowGoogleDriveStorageComponent } from './storages/ShowGoogleDriveStorageComponent';
 import { ShowNASStorageComponent } from './storages/ShowNASStorageComponent';
+import { ShowRcloneStorageComponent } from './storages/ShowRcloneStorageComponent';
 import { ShowS3StorageComponent } from './storages/ShowS3StorageComponent';

 interface Props {

@@ -49,6 +50,10 @@ export function ShowStorageComponent({ storage }: Props) {
       <div>
         {storage?.type === StorageType.FTP && <ShowFTPStorageComponent storage={storage} />}
       </div>
+
+      <div>
+        {storage?.type === StorageType.RCLONE && <ShowRcloneStorageComponent storage={storage} />}
+      </div>
     </div>
   );
 }

@@ -0,0 +1,21 @@
+import type { Storage } from '../../../../../entity/storages';
+
+interface Props {
+  storage: Storage;
+}
+
+export function ShowRcloneStorageComponent({ storage }: Props) {
+  return (
+    <>
+      <div className="mb-1 flex items-center">
+        <div className="min-w-[110px]">Remote path</div>
+        {storage?.rcloneStorage?.remotePath || '-'}
+      </div>
+
+      <div className="mb-1 flex items-center">
+        <div className="min-w-[110px]">Config</div>
+        {'*************'}
+      </div>
+    </>
+  );
+}