Compare commits

...

18 Commits

Author SHA1 Message Date
Rostislav Dugin
c0721a43e1 FEATURE (docs): Add code of conduct 2025-12-17 16:41:07 +03:00
Rostislav Dugin
461e15cd7a FEATURE (security): Add security md file 2025-12-17 16:33:10 +03:00
Rostislav Dugin
69a53936f5 FEATURE (citation): Add CITATION.cff 2025-12-17 16:17:43 +03:00
Rostislav Dugin
2bafec3c19 FIX (databases): Fix second opening of storage & notifier creation dialogs 2025-12-16 13:33:56 +03:00
Rostislav Dugin
422b44dfdc FEATURE (ftp): Get rid of passive mode 2025-12-14 00:01:21 +03:00
Rostislav Dugin
51d7fe54d0 Merge pull request #144 from omerkarabacak/main
FEATURE (clusters): Add cluster-based database management and bulk import
2025-12-13 22:37:35 +03:00
Omer Karabacak
6e2d63626c FEATURE (clusters): Add cluster-based database management and bulk import functionality 2025-12-13 20:32:54 +01:00
Rostislav Dugin
260c7a1188 FEATURE (frontend): Add frontend tests 2025-12-13 22:22:31 +03:00
Rostislav Dugin
ace94c144b FEATURE (storanges): Add FTP storange 2025-12-13 22:17:16 +03:00
Rostislav Dugin
b666cd9e2e Merge pull request #143 from RostislavDugin/develop
FEATURE (parsing): Add parsing connection string on DB creation
2025-12-13 13:53:30 +03:00
Rostislav Dugin
9dac63430d FEATURE (parsing): Add parsing connection string on DB creation 2025-12-13 13:50:22 +03:00
Rostislav Dugin
8217906c7a Merge pull request #139 from RostislavDugin/develop
Merge develop into main
2025-12-11 20:02:32 +03:00
Rostislav Dugin
db71a5ef7b FIX (databases): Add support dashed databases for read only users creation 2025-12-11 19:57:49 +03:00
Rostislav Dugin
df78e296b3 FEATURE (s3): Allow to skip TLS verification 2025-12-11 19:50:59 +03:00
Rostislav Dugin
fda3bf9b98 FEATURE (supabase): Add support of Supabase, schemas excluding and get rid of version in UI 2025-12-11 19:27:45 +03:00
pv-create
e19f449c60 FIX (readme): Fix typos and links
* fix typos

* fix link

* fix email param

---------

Co-authored-by: pavelvilkov <vilkovpy@mi-broker.ru>
2025-12-10 19:44:49 +03:00
Leonardo Flores
5944d7c4b6 feat(postgresus): Add schema filter for pg_dump and pg_restore (#131)
Add optional "Schemas" field to PostgreSQL database settings allowing
users to specify which schemas to include in backups (comma-separated).

This solves permission issues when backing up databases that
have restricted internal schemas (auth, storage, realtime).

Changes:
- Add schemas column to postgresql_databases table (migration)
- Update PostgresqlDatabase model with Schemas field
- Modify buildPgDumpArgs() to append --schema flags for each schema
- Modify pg_restore args to support --schema filtering on restore
- Add Schemas input field to frontend edit form with tooltip
- Display schemas in read-only database view

Example usage: Setting schemas to "public,drizzle" generates:
  pg_dump ... --schema public --schema drizzle
  pg_restore ... --schema public --schema drizzle
2025-12-10 13:19:15 +03:00
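The schema-flag construction described in that commit can be sketched as follows (a minimal standalone illustration; `buildSchemaArgs` and the sample values are hypothetical, not the project's actual function names):

```go
package main

import "fmt"

// buildSchemaArgs appends one "--schema <name>" pair per configured schema,
// mirroring the comma-separated "Schemas" setting described in the commit.
func buildSchemaArgs(base []string, schemas []string) []string {
	args := append([]string{}, base...)
	for _, schema := range schemas {
		args = append(args, "--schema", schema)
	}
	return args
}

func main() {
	args := buildSchemaArgs([]string{"pg_dump", "--verbose"}, []string{"public", "drizzle"})
	fmt.Println(args) // prints "[pg_dump --verbose --schema public --schema drizzle]"
}
```

Each configured schema contributes one `--schema <name>` pair to the final command line, for both `pg_dump` and `pg_restore`.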
Unicorn-Zombie-Apocalypse
1f5c9d3d01 feat: Add support for custom Root CA configuration in Helm chart (#129)
* feat: Add support for custom Root CA configuration in Helm chart

* fix: Remove default value for customRootCA in Helm chart
2025-12-09 19:36:52 +03:00
59 changed files with 4200 additions and 623 deletions

102
.github/CODE_OF_CONDUCT.md vendored Normal file

@@ -0,0 +1,102 @@
# Code of Conduct
## Our Pledge
We as members, contributors and maintainers pledge to make participation in the Postgresus community a friendly and welcoming experience for everyone, regardless of background, experience level or personal circumstances.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive and healthy community.
## Our Standards
### Examples of behavior that contributes to a positive environment
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
- Helping newcomers get started with contributions
- Providing clear and constructive feedback on pull requests
- Celebrating successes and acknowledging contributions
### Examples of unacceptable behavior
- Trolling, insulting or derogatory comments, and personal or political attacks
- Publishing others' private information, such as physical or email addresses, without their explicit permission
- Spam, self-promotion or off-topic content in project spaces
- Other conduct which could reasonably be considered inappropriate in a professional setting
## Scope
This Code of Conduct applies within all community spaces, including:
- GitHub repositories (issues, pull requests, discussions, comments)
- Telegram channels and direct messages related to Postgresus
- Social media interactions when representing the project
- Community forums and online discussions
- Any other spaces where Postgresus community members interact
This Code of Conduct also applies when an individual is officially representing the community in public spaces, such as using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
## Enforcement
Instances of abusive or unacceptable behavior may be reported to the community leaders responsible for enforcement:
- **Email**: [info@postgresus.com](mailto:info@postgresus.com)
- **Telegram**: [@rostislav_dugin](https://t.me/rostislav_dugin)
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of actions.
**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the community.
## Contributing with Respect
When contributing to Postgresus, please:
- Be patient with maintainers and other contributors
- Understand that everyone has different levels of experience
- Ask questions in a respectful manner
- Accept that your contribution may not be accepted, and be open to feedback
- Follow the [contribution guidelines](https://postgresus.com/contribute)
For code contributions, remember to:
- Discuss significant changes before implementing them
- Be open to code review feedback
- Help review others' contributions when possible
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).
Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq).

54
.github/SECURITY.md vendored Normal file

@@ -0,0 +1,54 @@
# Security Policy
## Reporting a Vulnerability
If you discover a security vulnerability in Postgresus, please report it responsibly. **Do not create a public GitHub issue for security vulnerabilities.**
### How to Report
1. **Email** (preferred): Send details to [info@postgresus.com](mailto:info@postgresus.com)
2. **Telegram**: Contact [@rostislav_dugin](https://t.me/rostislav_dugin)
3. **GitHub Security Advisories**: Use the [private vulnerability reporting](https://github.com/RostislavDugin/postgresus/security/advisories/new) feature
### What to Include
- Description of the vulnerability
- Steps to reproduce the issue
- Potential impact and severity assessment
- Any suggested fixes (optional)
## Supported Versions
| Version | Supported |
| ------- | --------- |
| Latest | Yes |
We recommend always using the latest version of Postgresus. Security patches are applied to the most recent release.
### PostgreSQL Compatibility
Postgresus supports PostgreSQL versions 12, 13, 14, 15, 16, 17 and 18.
## Response Timeline
- **Acknowledgment**: Within 48-72 hours
- **Initial Assessment**: Within 1 week
- **Fix Timeline**: Depends on severity, but we aim to address critical issues as quickly as possible
We follow a coordinated disclosure policy. We ask that you give us reasonable time to address the vulnerability before any public disclosure.
## Security Features
Postgresus is designed with security in mind. For full details, see our [security documentation](https://postgresus.com/security).
Key features include:
- **AES-256-GCM Encryption**: Enterprise-grade encryption for backup files and sensitive data
- **Read-Only Database Access**: Postgresus uses read-only access by default and warns if write permissions are detected
- **Role-Based Access Control**: Assign viewer, member, admin or owner roles within workspaces
- **Audit Logging**: Track all system activities and changes made by users
- **Zero-Trust Storage**: Encrypted backups are safe even in shared cloud storage
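As an illustration of the AES-256-GCM scheme mentioned above, here is a generic seal/open roundtrip using Go's standard library (a sketch only, not Postgresus's actual encryption code; key management is out of scope):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealAESGCM encrypts plaintext with AES-256-GCM, prepending the random nonce
// so the ciphertext is self-contained.
func sealAESGCM(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// openAESGCM splits off the nonce and decrypts, failing if the
// authentication tag does not verify (tamper detection).
func openAESGCM(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, _ := sealAESGCM(key, []byte("backup chunk"))
	plain, _ := openAESGCM(key, sealed)
	fmt.Println(string(plain)) // prints "backup chunk"
}
```

GCM provides authenticated encryption, so a modified backup file fails to decrypt rather than yielding corrupted plaintext.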
## License
Postgresus is licensed under [Apache 2.0](../LICENSE).


@@ -82,6 +82,30 @@ jobs:
cd frontend
npm run lint
test-frontend:
runs-on: ubuntu-latest
needs: [lint-frontend]
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
cache-dependency-path: frontend/package-lock.json
- name: Install dependencies
run: |
cd frontend
npm ci
- name: Run frontend tests
run: |
cd frontend
npm run test
test-backend:
runs-on: ubuntu-latest
needs: [lint-backend]
@@ -144,6 +168,12 @@ jobs:
# testing Telegram
TEST_TELEGRAM_BOT_TOKEN=${{ secrets.TEST_TELEGRAM_BOT_TOKEN }}
TEST_TELEGRAM_CHAT_ID=${{ secrets.TEST_TELEGRAM_CHAT_ID }}
# supabase
TEST_SUPABASE_HOST=${{ secrets.TEST_SUPABASE_HOST }}
TEST_SUPABASE_PORT=${{ secrets.TEST_SUPABASE_PORT }}
TEST_SUPABASE_USERNAME=${{ secrets.TEST_SUPABASE_USERNAME }}
TEST_SUPABASE_PASSWORD=${{ secrets.TEST_SUPABASE_PASSWORD }}
TEST_SUPABASE_DATABASE=${{ secrets.TEST_SUPABASE_DATABASE }}
EOF
- name: Start test containers
@@ -202,7 +232,7 @@ jobs:
determine-version:
runs-on: ubuntu-latest
needs: [test-backend, lint-frontend]
needs: [test-backend, test-frontend]
if: ${{ github.ref == 'refs/heads/main' && !contains(github.event.head_commit.message, '[skip-release]') }}
outputs:
should_release: ${{ steps.version_bump.outputs.should_release }}
@@ -295,7 +325,7 @@ jobs:
build-only:
runs-on: ubuntu-latest
needs: [test-backend, lint-frontend]
needs: [test-backend, test-frontend]
if: ${{ github.ref == 'refs/heads/main' && contains(github.event.head_commit.message, '[skip-release]') }}
steps:
- name: Check out code
@@ -455,6 +485,17 @@ jobs:
echo EOF
} >> $GITHUB_OUTPUT
- name: Update CITATION.cff version
run: |
VERSION="${{ needs.determine-version.outputs.new_version }}"
sed -i "s/^version: .*/version: ${VERSION}/" CITATION.cff
sed -i "s/^date-released: .*/date-released: \"$(date +%Y-%m-%d)\"/" CITATION.cff
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git add CITATION.cff
git commit -m "Update CITATION.cff to v${VERSION}" || true
git push || true
- name: Create GitHub Release
uses: actions/create-release@v1
env:

33
CITATION.cff Normal file

@@ -0,0 +1,33 @@
cff-version: 1.2.0
title: Postgresus
message: "If you use this software, please cite it as below."
type: software
authors:
- family-names: Dugin
given-names: Rostislav
repository-code: https://github.com/RostislavDugin/postgresus
url: https://postgresus.com
abstract: "Free, open source and self-hosted solution for automated PostgreSQL backups with multiple storage options and notifications."
keywords:
- docker
- kubernetes
- golang
- backups
- postgres
- devops
- backup
- database
- tools
- monitoring
- ftp
- postgresql
- s3
- psql
- web-ui
- self-hosted
- pg
- system-administration
- database-backup
license: Apache-2.0
version: 2.5.1
date-released: "2025-06-01"


@@ -97,7 +97,7 @@ Postgresus works seamlessly with both self-hosted PostgreSQL and cloud-managed d
### 📦 Installation <a href="https://postgresus.com/installation">(docs)</a>
You have three ways to install Postgresus:
You have several ways to install Postgresus:
- Script (recommended)
- Simple Docker run
@@ -115,7 +115,7 @@ You have three ways to install Postgresus: automated script (recommended), simpl
The installation script will:
- ✅ Install Docker with Docker Compose(if not already installed)
- ✅ Install Docker with Docker Compose (if not already installed)
- ✅ Set up Postgresus
- ✅ Configure automatic startup on system reboot
@@ -232,10 +232,10 @@ Replace `admin` with the actual email address of the user whose password you wan
## 📝 License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details
---
## 🤝 Contributing
Contributions are welcome! Read <a href="https://postgresus.com/contributing">contributing guide</a> for more details, prioerities and rules are specified there. If you want to contribute, but don't know what and how - message me on Telegram [@rostislav_dugin](https://t.me/rostislav_dugin)
Contributions are welcome! Read <a href="https://postgresus.com/contribute">contributing guide</a> for more details, priorities and rules are specified there. If you want to contribute, but don't know what and how - message me on Telegram [@rostislav_dugin](https://t.me/rostislav_dugin)


@@ -33,4 +33,12 @@ TEST_NAS_PORT=7006
TEST_TELEGRAM_BOT_TOKEN=
TEST_TELEGRAM_CHAT_ID=
# testing Azure Blob Storage
TEST_AZURITE_BLOB_PORT=10000
TEST_AZURITE_BLOB_PORT=10000
# supabase
TEST_SUPABASE_HOST=
TEST_SUPABASE_PORT=
TEST_SUPABASE_USERNAME=
TEST_SUPABASE_PASSWORD=
TEST_SUPABASE_DATABASE=
# FTP
TEST_FTP_PORT=7007


@@ -132,3 +132,17 @@ services:
-s "backups;/shared;yes;no;no;testuser"
-p
container_name: test-nas
# Test FTP server
test-ftp:
image: stilliard/pure-ftpd:latest
ports:
- "${TEST_FTP_PORT:-21}:21"
- "30000-30009:30000-30009"
environment:
- PUBLICHOST=localhost
- FTP_USER_NAME=testuser
- FTP_USER_PASS=testpassword
- FTP_USER_HOME=/home/ftpusers/testuser
- FTP_PASSIVE_PORTS=30000:30009
container_name: test-ftp


@@ -12,6 +12,7 @@ require (
github.com/google/uuid v1.6.0
github.com/ilyakaznacheev/cleanenv v1.5.0
github.com/jackc/pgx/v5 v5.7.5
github.com/jlaffaye/ftp v0.2.0
github.com/jmoiron/sqlx v1.4.0
github.com/joho/godotenv v1.5.1
github.com/lib/pq v1.10.9
@@ -27,7 +28,11 @@ require (
gorm.io/gorm v1.26.1
)
require github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
)
require (
cloud.google.com/go/auth v0.16.2 // indirect


@@ -107,6 +107,10 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.14.2 h1:eBLnkZ9635krYIPD+ag1USrOAI0Nr0QYF3+/3GqO0k0=
github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hirochachacha/go-smb2 v1.1.0 h1:b6hs9qKIql9eVXAiN0M2wSFY5xnhbHAQoCwRKbaRTZI=
github.com/hirochachacha/go-smb2 v1.1.0/go.mod h1:8F1A4d5EZzrGu5R7PU163UcMRDJQl4FtcxjBfsY8TZE=
github.com/ilyakaznacheev/cleanenv v1.5.0 h1:0VNZXggJE2OYdXE87bfSSwGxeiGt9moSR2lOrsHHvr4=
@@ -123,6 +127,8 @@ github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jlaffaye/ftp v0.2.0 h1:lXNvW7cBu7R/68bknOX3MrRIIqZ61zELs1P2RAiA3lg=
github.com/jlaffaye/ftp v0.2.0/go.mod h1:is2Ds5qkhceAPy2xD6RLI6hmp/qysSoymZ+Z2uTnspI=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=


@@ -48,6 +48,7 @@ type EnvVariables struct {
TestAzuriteBlobPort string `env:"TEST_AZURITE_BLOB_PORT"`
TestNASPort string `env:"TEST_NAS_PORT"`
TestFTPPort string `env:"TEST_FTP_PORT"`
// oauth
GitHubClientID string `env:"GITHUB_CLIENT_ID"`
@@ -58,6 +59,13 @@ type EnvVariables struct {
// testing Telegram
TestTelegramBotToken string `env:"TEST_TELEGRAM_BOT_TOKEN"`
TestTelegramChatID string `env:"TEST_TELEGRAM_CHAT_ID"`
// testing Supabase
TestSupabaseHost string `env:"TEST_SUPABASE_HOST"`
TestSupabasePort string `env:"TEST_SUPABASE_PORT"`
TestSupabaseUsername string `env:"TEST_SUPABASE_USERNAME"`
TestSupabasePassword string `env:"TEST_SUPABASE_PASSWORD"`
TestSupabaseDatabase string `env:"TEST_SUPABASE_DATABASE"`
}
var (


@@ -334,6 +334,10 @@ func (uc *CreatePostgresqlBackupUsecase) buildPgDumpArgs(pg *pgtypes.PostgresqlD
"--verbose",
}
for _, schema := range pg.IncludeSchemas {
args = append(args, "-n", schema)
}
compressionArgs := uc.getCompressionArgs(pg.Version)
return append(args, compressionArgs...)
}


@@ -13,6 +13,7 @@ import (
"github.com/google/uuid"
"github.com/jackc/pgx/v5"
"gorm.io/gorm"
)
type PostgresqlDatabase struct {
@@ -29,17 +30,37 @@ type PostgresqlDatabase struct {
Password string `json:"password" gorm:"type:text;not null"`
Database *string `json:"database" gorm:"type:text"`
IsHttps bool `json:"isHttps" gorm:"type:boolean;default:false"`
// backup settings
IncludeSchemas []string `json:"includeSchemas" gorm:"-"`
IncludeSchemasString string `json:"-" gorm:"column:include_schemas;type:text;not null;default:''"`
}
func (p *PostgresqlDatabase) TableName() string {
return "postgresql_databases"
}
func (p *PostgresqlDatabase) Validate() error {
if p.Version == "" {
return errors.New("version is required")
func (p *PostgresqlDatabase) BeforeSave(_ *gorm.DB) error {
if len(p.IncludeSchemas) > 0 {
p.IncludeSchemasString = strings.Join(p.IncludeSchemas, ",")
} else {
p.IncludeSchemasString = ""
}
return nil
}
func (p *PostgresqlDatabase) AfterFind(_ *gorm.DB) error {
if p.IncludeSchemasString != "" {
p.IncludeSchemas = strings.Split(p.IncludeSchemasString, ",")
} else {
p.IncludeSchemas = []string{}
}
return nil
}
func (p *PostgresqlDatabase) Validate() error {
if p.Host == "" {
return errors.New("host is required")
}
@@ -85,6 +106,7 @@ func (p *PostgresqlDatabase) Update(incoming *PostgresqlDatabase) {
p.Username = incoming.Username
p.Database = incoming.Database
p.IsHttps = incoming.IsHttps
p.IncludeSchemas = incoming.IncludeSchemas
if incoming.Password != "" {
p.Password = incoming.Password
@@ -106,6 +128,50 @@ func (p *PostgresqlDatabase) EncryptSensitiveFields(
return nil
}
// PopulateVersionIfEmpty detects and sets the PostgreSQL version if not already set.
// This should be called before encrypting sensitive fields.
func (p *PostgresqlDatabase) PopulateVersionIfEmpty(
logger *slog.Logger,
encryptor encryption.FieldEncryptor,
databaseID uuid.UUID,
) error {
if p.Version != "" {
return nil
}
if p.Database == nil || *p.Database == "" {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
password, err := decryptPasswordIfNeeded(p.Password, encryptor, databaseID)
if err != nil {
return fmt.Errorf("failed to decrypt password: %w", err)
}
connStr := buildConnectionStringForDB(p, *p.Database, password)
conn, err := pgx.Connect(ctx, connStr)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer func() {
if closeErr := conn.Close(ctx); closeErr != nil {
logger.Error("Failed to close connection", "error", closeErr)
}
}()
detectedVersion, err := detectDatabaseVersion(ctx, conn)
if err != nil {
return err
}
p.Version = detectedVersion
return nil
}
// IsUserReadOnly checks if the database user has read-only privileges.
//
// This method performs a comprehensive security check by examining:
@@ -286,8 +352,20 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
// Retry logic for username collision
maxRetries := 3
for attempt := 0; attempt < maxRetries; attempt++ {
username := fmt.Sprintf("postgresus-%s", uuid.New().String()[:8])
for attempt := range maxRetries {
// Generate base username for PostgreSQL user creation
baseUsername := fmt.Sprintf("postgresus-%s", uuid.New().String()[:8])
// For Supabase session pooler, the username format for connection is "username.projectid"
// but the actual PostgreSQL user must be created with just the base name.
// The pooler will strip the ".projectid" suffix when authenticating.
connectionUsername := baseUsername
if isSupabaseConnection(p.Host, p.Username) {
if supabaseProjectID := extractSupabaseProjectID(p.Username); supabaseProjectID != "" {
connectionUsername = fmt.Sprintf("%s.%s", baseUsername, supabaseProjectID)
}
}
newPassword := uuid.New().String()
tx, err := conn.Begin(ctx)
@@ -305,9 +383,10 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
}()
// Step 1: Create PostgreSQL user with LOGIN privilege
// Note: We use baseUsername for the actual PostgreSQL user name if Supabase is used
_, err = tx.Exec(
ctx,
fmt.Sprintf(`CREATE USER "%s" WITH PASSWORD '%s' LOGIN`, username, newPassword),
fmt.Sprintf(`CREATE USER "%s" WITH PASSWORD '%s' LOGIN`, baseUsername, newPassword),
)
if err != nil {
if err.Error() != "" && attempt < maxRetries-1 {
@@ -331,28 +410,28 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
}
// Now revoke from the specific user as well (belt and suspenders)
_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE CREATE ON SCHEMA public FROM "%s"`, username))
_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE CREATE ON SCHEMA public FROM "%s"`, baseUsername))
if err != nil {
logger.Error(
"Failed to revoke CREATE on public schema from user",
"error",
err,
"username",
username,
baseUsername,
)
}
// Step 2: Grant database connection privilege and revoke TEMP
_, err = tx.Exec(
ctx,
fmt.Sprintf(`GRANT CONNECT ON DATABASE %s TO "%s"`, *p.Database, username),
fmt.Sprintf(`GRANT CONNECT ON DATABASE "%s" TO "%s"`, *p.Database, baseUsername),
)
if err != nil {
return "", "", fmt.Errorf("failed to grant connect privilege: %w", err)
}
// Revoke TEMP privilege from PUBLIC role (like CREATE on public schema, TEMP is granted to PUBLIC by default)
_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM PUBLIC`, *p.Database))
_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE TEMP ON DATABASE "%s" FROM PUBLIC`, *p.Database))
if err != nil {
logger.Warn("Failed to revoke TEMP from PUBLIC", "error", err)
}
@@ -360,10 +439,10 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
// Also revoke from the specific user (belt and suspenders)
_, err = tx.Exec(
ctx,
fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM "%s"`, *p.Database, username),
fmt.Sprintf(`REVOKE TEMP ON DATABASE "%s" FROM "%s"`, *p.Database, baseUsername),
)
if err != nil {
logger.Warn("Failed to revoke TEMP privilege", "error", err, "username", username)
logger.Warn("Failed to revoke TEMP privilege", "error", err, "username", baseUsername)
}
// Step 3: Discover all user-created schemas
@@ -396,7 +475,7 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
// Revoke CREATE specifically (handles inheritance from PUBLIC role)
_, err = tx.Exec(
ctx,
fmt.Sprintf(`REVOKE CREATE ON SCHEMA "%s" FROM "%s"`, schema, username),
fmt.Sprintf(`REVOKE CREATE ON SCHEMA "%s" FROM "%s"`, schema, baseUsername),
)
if err != nil {
logger.Warn(
@@ -406,14 +485,14 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
"schema",
schema,
"username",
username,
baseUsername,
)
}
// Grant only USAGE (not CREATE)
_, err = tx.Exec(
ctx,
fmt.Sprintf(`GRANT USAGE ON SCHEMA "%s" TO "%s"`, schema, username),
fmt.Sprintf(`GRANT USAGE ON SCHEMA "%s" TO "%s"`, schema, baseUsername),
)
if err != nil {
return "", "", fmt.Errorf("failed to grant usage on schema %s: %w", schema, err)
@@ -435,7 +514,7 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
EXECUTE format('GRANT SELECT ON ALL SEQUENCES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);
END LOOP;
END $$;
`, username, username)
`, baseUsername, baseUsername)
_, err = tx.Exec(ctx, grantSelectSQL)
if err != nil {
@@ -457,7 +536,7 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %%I GRANT SELECT ON SEQUENCES TO "%s"', schema_rec.schema_name);
END LOOP;
END $$;
`, username, username)
`, baseUsername, baseUsername)
_, err = tx.Exec(ctx, defaultPrivilegesSQL)
if err != nil {
@@ -466,7 +545,7 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
// Step 7: Verify user creation before committing
var verifyUsername string
err = tx.QueryRow(ctx, fmt.Sprintf(`SELECT rolname FROM pg_roles WHERE rolname = '%s'`, username)).
err = tx.QueryRow(ctx, fmt.Sprintf(`SELECT rolname FROM pg_roles WHERE rolname = '%s'`, baseUsername)).
Scan(&verifyUsername)
if err != nil {
return "", "", fmt.Errorf("failed to verify user creation: %w", err)
@@ -477,8 +556,15 @@ func (p *PostgresqlDatabase) CreateReadOnlyUser(
}
success = true
logger.Info("Read-only user created successfully", "username", username)
return username, newPassword, nil
// Return connectionUsername (with project ID suffix for Supabase) for the caller to use when connecting
logger.Info(
"Read-only user created successfully",
"username",
baseUsername,
"connectionUsername",
connectionUsername,
)
return connectionUsername, newPassword, nil
}
return "", "", errors.New("failed to generate unique username after 3 attempts")
@@ -521,10 +607,12 @@ func testSingleDatabaseConnection(
}
}()
// Check version after successful connection
if err := verifyDatabaseVersion(ctx, conn, postgresDb.Version); err != nil {
// Detect and set the database version automatically
detectedVersion, err := detectDatabaseVersion(ctx, conn)
if err != nil {
return err
}
postgresDb.Version = detectedVersion
// Test if we can perform basic operations (like pg_dump would need)
if err := testBasicOperations(ctx, conn, *postgresDb.Database); err != nil {
@@ -538,16 +626,12 @@ func testSingleDatabaseConnection(
return nil
}
// verifyDatabaseVersion checks if the actual database version matches the specified version
func verifyDatabaseVersion(
ctx context.Context,
conn *pgx.Conn,
expectedVersion tools.PostgresqlVersion,
) error {
// detectDatabaseVersion queries and returns the PostgreSQL major version
func detectDatabaseVersion(ctx context.Context, conn *pgx.Conn) (tools.PostgresqlVersion, error) {
var versionStr string
err := conn.QueryRow(ctx, "SELECT version()").Scan(&versionStr)
if err != nil {
return fmt.Errorf("failed to query database version: %w", err)
return "", fmt.Errorf("failed to query database version: %w", err)
}
// Parse version from string like "PostgreSQL 14.2 on x86_64-pc-linux-gnu..."
@@ -555,19 +639,18 @@ func verifyDatabaseVersion(
re := regexp.MustCompile(`PostgreSQL (\d+)`)
matches := re.FindStringSubmatch(versionStr)
if len(matches) < 2 {
return fmt.Errorf("could not parse version from: %s", versionStr)
return "", fmt.Errorf("could not parse version from: %s", versionStr)
}
actualVersion := tools.GetPostgresqlVersionEnum(matches[1])
if actualVersion != expectedVersion {
return fmt.Errorf(
"you specified wrong version. Real version is %s, but you specified %s",
actualVersion,
expectedVersion,
)
}
majorVersion := matches[1]
return nil
// Map to known PostgresqlVersion enum values
switch majorVersion {
case "12", "13", "14", "15", "16", "17", "18":
return tools.PostgresqlVersion(majorVersion), nil
default:
return "", fmt.Errorf("unsupported PostgreSQL version: %s", majorVersion)
}
}
// testBasicOperations tests basic operations that backup tools need
@@ -615,3 +698,15 @@ func decryptPasswordIfNeeded(
}
return encryptor.Decrypt(databaseID, password)
}
func isSupabaseConnection(host, username string) bool {
return strings.Contains(strings.ToLower(host), "supabase") ||
strings.Contains(strings.ToLower(username), "supabase")
}
func extractSupabaseProjectID(username string) string {
if idx := strings.Index(username, "."); idx != -1 {
return username[idx+1:]
}
return ""
}
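For context on the Supabase handling above, a self-contained sketch of the username-suffixing behavior (the two helpers are reimplemented from the diff for illustration; the host, admin username, and generated base username values are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// isSupabaseConnection reports whether the host or admin username looks
// like a Supabase connection (as in the diff above).
func isSupabaseConnection(host, username string) bool {
	return strings.Contains(strings.ToLower(host), "supabase") ||
		strings.Contains(strings.ToLower(username), "supabase")
}

// extractSupabaseProjectID returns the part after the first dot in a
// "user.projectid" username, or "" if there is no dot.
func extractSupabaseProjectID(username string) string {
	if idx := strings.Index(username, "."); idx != -1 {
		return username[idx+1:]
	}
	return ""
}

func main() {
	host := "aws-0-eu-central-1.pooler.supabase.com" // hypothetical pooler host
	adminUser := "postgres.abcd1234"                 // hypothetical "user.projectid" form

	// The PostgreSQL user is created under the base name; only the
	// connection string carries the ".projectid" suffix, which the
	// pooler strips when authenticating.
	baseUsername := "postgresus-1a2b3c4d"
	connectionUsername := baseUsername
	if isSupabaseConnection(host, adminUser) {
		if projectID := extractSupabaseProjectID(adminUser); projectID != "" {
			connectionUsername = fmt.Sprintf("%s.%s", baseUsername, projectID)
		}
	}
	fmt.Println(connectionUsername) // prints "postgresus-1a2b3c4d.abcd1234"
}
```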


@@ -246,6 +246,188 @@ func Test_ReadOnlyUser_MultipleSchemas_AllAccessible(t *testing.T) {
assert.NoError(t, err)
}
func Test_CreateReadOnlyUser_DatabaseNameWithDash_Success(t *testing.T) {
env := config.GetEnv()
container := connectToPostgresContainer(t, env.TestPostgres16Port)
defer container.DB.Close()
dashDbName := "test-db-with-dash"
_, err := container.DB.Exec(fmt.Sprintf(`DROP DATABASE IF EXISTS "%s"`, dashDbName))
assert.NoError(t, err)
_, err = container.DB.Exec(fmt.Sprintf(`CREATE DATABASE "%s"`, dashDbName))
assert.NoError(t, err)
defer func() {
_, _ = container.DB.Exec(fmt.Sprintf(`DROP DATABASE IF EXISTS "%s"`, dashDbName))
}()
dashDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
container.Host, container.Port, container.Username, container.Password, dashDbName)
dashDB, err := sqlx.Connect("postgres", dashDSN)
assert.NoError(t, err)
defer dashDB.Close()
_, err = dashDB.Exec(`
CREATE TABLE dash_test (
id SERIAL PRIMARY KEY,
data TEXT NOT NULL
);
INSERT INTO dash_test (data) VALUES ('test1'), ('test2');
`)
assert.NoError(t, err)
pgModel := &PostgresqlDatabase{
Version: tools.GetPostgresqlVersionEnum("16"),
Host: container.Host,
Port: container.Port,
Username: container.Username,
Password: container.Password,
Database: &dashDbName,
IsHttps: false,
}
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
ctx := context.Background()
username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
assert.NoError(t, err)
assert.NotEmpty(t, username)
assert.NotEmpty(t, password)
assert.True(t, strings.HasPrefix(username, "postgresus-"))
readOnlyDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
container.Host, container.Port, username, password, dashDbName)
readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
assert.NoError(t, err)
defer readOnlyConn.Close()
var count int
err = readOnlyConn.Get(&count, "SELECT COUNT(*) FROM dash_test")
assert.NoError(t, err)
assert.Equal(t, 2, count)
_, err = readOnlyConn.Exec("INSERT INTO dash_test (data) VALUES ('should-fail')")
assert.Error(t, err)
assert.Contains(t, err.Error(), "permission denied")
_, err = dashDB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
if err != nil {
t.Logf("Warning: Failed to drop owned objects: %v", err)
}
_, err = dashDB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
assert.NoError(t, err)
}
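The test above works because every statement double-quotes the dashed database name; unquoted, `test-db-with-dash` is not a valid PostgreSQL identifier. The quoting rule itself is small. A minimal sketch, assuming nothing beyond PostgreSQL's lexical rules (the `quoteIdent` helper name is ours; the codebase inlines the quoting via `fmt.Sprintf` with `"%s"`):

```go
package main

import "strings"

// quoteIdent wraps a PostgreSQL identifier in double quotes and doubles any
// embedded quote characters, making names with dashes (or other special
// characters) safe to splice into DDL statements.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}
```

Quoting also makes the identifier case-sensitive, which is why the tests create and drop the database with the exact same quoted spelling.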
func Test_CreateReadOnlyUser_Supabase_UserCanReadButNotWrite(t *testing.T) {
env := config.GetEnv()
if env.TestSupabaseHost == "" {
t.Skip("Skipping Supabase test: missing environment variables")
}
portInt, err := strconv.Atoi(env.TestSupabasePort)
assert.NoError(t, err)
dsn := fmt.Sprintf(
"host=%s port=%d user=%s password=%s dbname=%s sslmode=require",
env.TestSupabaseHost,
portInt,
env.TestSupabaseUsername,
env.TestSupabasePassword,
env.TestSupabaseDatabase,
)
adminDB, err := sqlx.Connect("postgres", dsn)
assert.NoError(t, err)
defer adminDB.Close()
tableName := fmt.Sprintf(
"readonly_test_%s",
strings.ReplaceAll(uuid.New().String()[:8], "-", ""),
)
_, err = adminDB.Exec(fmt.Sprintf(`
DROP TABLE IF EXISTS public.%s CASCADE;
CREATE TABLE public.%s (
id SERIAL PRIMARY KEY,
data TEXT NOT NULL
);
INSERT INTO public.%s (data) VALUES ('test1'), ('test2');
`, tableName, tableName, tableName))
assert.NoError(t, err)
defer func() {
_, _ = adminDB.Exec(fmt.Sprintf(`DROP TABLE IF EXISTS public.%s CASCADE`, tableName))
}()
pgModel := &PostgresqlDatabase{
Host: env.TestSupabaseHost,
Port: portInt,
Username: env.TestSupabaseUsername,
Password: env.TestSupabasePassword,
Database: &env.TestSupabaseDatabase,
IsHttps: true,
}
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
ctx := context.Background()
connectionUsername, newPassword, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
assert.NoError(t, err)
assert.NotEmpty(t, connectionUsername)
assert.NotEmpty(t, newPassword)
assert.True(t, strings.HasPrefix(connectionUsername, "postgresus-"))
baseUsername := connectionUsername
if idx := strings.Index(connectionUsername, "."); idx != -1 {
baseUsername = connectionUsername[:idx]
}
defer func() {
_, _ = adminDB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, baseUsername))
_, _ = adminDB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, baseUsername))
}()
readOnlyDSN := fmt.Sprintf(
"host=%s port=%d user=%s password=%s dbname=%s sslmode=require",
env.TestSupabaseHost,
portInt,
connectionUsername,
newPassword,
env.TestSupabaseDatabase,
)
readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
assert.NoError(t, err)
defer readOnlyConn.Close()
var count int
err = readOnlyConn.Get(&count, fmt.Sprintf("SELECT COUNT(*) FROM public.%s", tableName))
assert.NoError(t, err)
assert.Equal(t, 2, count)
_, err = readOnlyConn.Exec(
fmt.Sprintf("INSERT INTO public.%s (data) VALUES ('should-fail')", tableName),
)
assert.Error(t, err)
assert.Contains(t, err.Error(), "permission denied")
_, err = readOnlyConn.Exec(
fmt.Sprintf("UPDATE public.%s SET data = 'hacked' WHERE id = 1", tableName),
)
assert.Error(t, err)
assert.Contains(t, err.Error(), "permission denied")
_, err = readOnlyConn.Exec(fmt.Sprintf("DELETE FROM public.%s WHERE id = 1", tableName))
assert.Error(t, err)
assert.Contains(t, err.Error(), "permission denied")
_, err = readOnlyConn.Exec("CREATE TABLE public.hack_table (id INT)")
assert.Error(t, err)
assert.Contains(t, err.Error(), "permission denied")
}
type PostgresContainer struct {
Host string
Port int

View File

@@ -75,6 +75,16 @@ func (d *Database) EncryptSensitiveFields(encryptor encryption.FieldEncryptor) e
return nil
}
func (d *Database) PopulateVersionIfEmpty(
logger *slog.Logger,
encryptor encryption.FieldEncryptor,
) error {
if d.Postgresql != nil {
return d.Postgresql.PopulateVersionIfEmpty(logger, encryptor, d.ID)
}
return nil
}
func (d *Database) Update(incoming *Database) {
d.Name = incoming.Name
d.Type = incoming.Type

View File

@@ -68,6 +68,10 @@ func (s *DatabaseService) CreateDatabase(
return nil, err
}
if err := database.PopulateVersionIfEmpty(s.logger, s.fieldEncryptor); err != nil {
return nil, fmt.Errorf("failed to auto-detect database version: %w", err)
}
if err := database.EncryptSensitiveFields(s.fieldEncryptor); err != nil {
return nil, fmt.Errorf("failed to encrypt sensitive fields: %w", err)
}
@@ -125,6 +129,10 @@ func (s *DatabaseService) UpdateDatabase(
return err
}
if err := existingDatabase.PopulateVersionIfEmpty(s.logger, s.fieldEncryptor); err != nil {
return fmt.Errorf("failed to auto-detect database version: %w", err)
}
if err := existingDatabase.EncryptSensitiveFields(s.fieldEncryptor); err != nil {
return fmt.Errorf("failed to encrypt sensitive fields: %w", err)
}

View File

@@ -8,6 +8,7 @@ import (
"postgresus-backend/internal/features/restores/usecases"
"postgresus-backend/internal/features/storages"
workspaces_services "postgresus-backend/internal/features/workspaces/services"
"postgresus-backend/internal/util/encryption"
"postgresus-backend/internal/util/logger"
)
@@ -22,6 +23,7 @@ var restoreService = &RestoreService{
logger.GetLogger(),
workspaces_services.GetWorkspaceService(),
audit_logs.GetAuditLogService(),
encryption.GetFieldEncryptor(),
}
var restoreController = &RestoreController{
restoreService,

View File

@@ -14,6 +14,7 @@ import (
"postgresus-backend/internal/features/storages"
users_models "postgresus-backend/internal/features/users/models"
workspaces_services "postgresus-backend/internal/features/workspaces/services"
"postgresus-backend/internal/util/encryption"
"postgresus-backend/internal/util/tools"
"time"
@@ -30,6 +31,7 @@ type RestoreService struct {
logger *slog.Logger
workspaceService *workspaces_services.WorkspaceService
auditLogService *audit_logs.AuditLogService
fieldEncryptor encryption.FieldEncryptor
}
func (s *RestoreService) OnBeforeBackupRemove(backup *backups.Backup) error {
@@ -120,12 +122,6 @@ func (s *RestoreService) RestoreBackupWithAuth(
return err
}
fmt.Printf(
"restore from %s to %s\n",
backupDatabase.Postgresql.Version,
requestDTO.PostgresqlDatabase.Version,
)
if tools.IsBackupDbVersionHigherThanRestoreDbVersion(
backupDatabase.Postgresql.Version,
requestDTO.PostgresqlDatabase.Version,
@@ -214,6 +210,10 @@ func (s *RestoreService) RestoreBackup(
Postgresql: requestDTO.PostgresqlDatabase,
}
if err := restoringToDB.PopulateVersionIfEmpty(s.logger, s.fieldEncryptor); err != nil {
return fmt.Errorf("failed to auto-detect database version: %w", err)
}
err = s.restoreBackupUsecase.Execute(
backupConfig,
restore,

View File

@@ -8,6 +8,7 @@ import (
audit_logs "postgresus-backend/internal/features/audit_logs"
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
ftp_storage "postgresus-backend/internal/features/storages/models/ftp"
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
local_storage "postgresus-backend/internal/features/storages/models/local"
nas_storage "postgresus-backend/internal/features/storages/models/nas"
@@ -738,6 +739,53 @@ func Test_StorageSensitiveDataLifecycle_AllTypes(t *testing.T) {
assert.Equal(t, "", storage.GoogleDriveStorage.TokenJSON)
},
},
{
name: "FTP Storage",
storageType: StorageTypeFTP,
createStorage: func(workspaceID uuid.UUID) *Storage {
return &Storage{
WorkspaceID: workspaceID,
Type: StorageTypeFTP,
Name: "Test FTP Storage",
FTPStorage: &ftp_storage.FTPStorage{
Host: "ftp.example.com",
Port: 21,
Username: "testuser",
Password: "original-password",
UseSSL: false,
Path: "/backups",
},
}
},
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
return &Storage{
ID: storageID,
WorkspaceID: workspaceID,
Type: StorageTypeFTP,
Name: "Updated FTP Storage",
FTPStorage: &ftp_storage.FTPStorage{
Host: "ftp2.example.com",
Port: 2121,
Username: "testuser2",
Password: "",
UseSSL: true,
Path: "/backups2",
},
}
},
verifySensitiveData: func(t *testing.T, storage *Storage) {
assert.True(t, strings.HasPrefix(storage.FTPStorage.Password, "enc:"),
"Password should be encrypted with 'enc:' prefix")
encryptor := encryption.GetFieldEncryptor()
password, err := encryptor.Decrypt(storage.ID, storage.FTPStorage.Password)
assert.NoError(t, err)
assert.Equal(t, "original-password", password)
},
verifyHiddenData: func(t *testing.T, storage *Storage) {
assert.Equal(t, "", storage.FTPStorage.Password)
},
},
}
for _, tc := range testCases {

View File

@@ -8,4 +8,5 @@ const (
StorageTypeGoogleDrive StorageType = "GOOGLE_DRIVE"
StorageTypeNAS StorageType = "NAS"
StorageTypeAzureBlob StorageType = "AZURE_BLOB"
StorageTypeFTP StorageType = "FTP"
)

View File

@@ -6,6 +6,7 @@ import (
"io"
"log/slog"
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
ftp_storage "postgresus-backend/internal/features/storages/models/ftp"
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
local_storage "postgresus-backend/internal/features/storages/models/local"
nas_storage "postgresus-backend/internal/features/storages/models/nas"
@@ -28,6 +29,7 @@ type Storage struct {
GoogleDriveStorage *google_drive_storage.GoogleDriveStorage `json:"googleDriveStorage" gorm:"foreignKey:StorageID"`
NASStorage *nas_storage.NASStorage `json:"nasStorage" gorm:"foreignKey:StorageID"`
AzureBlobStorage *azure_blob_storage.AzureBlobStorage `json:"azureBlobStorage" gorm:"foreignKey:StorageID"`
FTPStorage *ftp_storage.FTPStorage `json:"ftpStorage" gorm:"foreignKey:StorageID"`
}
func (s *Storage) SaveFile(
@@ -109,6 +111,10 @@ func (s *Storage) Update(incoming *Storage) {
if s.AzureBlobStorage != nil && incoming.AzureBlobStorage != nil {
s.AzureBlobStorage.Update(incoming.AzureBlobStorage)
}
case StorageTypeFTP:
if s.FTPStorage != nil && incoming.FTPStorage != nil {
s.FTPStorage.Update(incoming.FTPStorage)
}
}
}
@@ -124,6 +130,8 @@ func (s *Storage) getSpecificStorage() StorageFileSaver {
return s.NASStorage
case StorageTypeAzureBlob:
return s.AzureBlobStorage
case StorageTypeFTP:
return s.FTPStorage
default:
panic("invalid storage type: " + string(s.Type))
}

View File

@@ -9,6 +9,7 @@ import (
"path/filepath"
"postgresus-backend/internal/config"
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
ftp_storage "postgresus-backend/internal/features/storages/models/ftp"
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
local_storage "postgresus-backend/internal/features/storages/models/local"
nas_storage "postgresus-backend/internal/features/storages/models/nas"
@@ -70,6 +71,14 @@ func Test_Storage_BasicOperations(t *testing.T) {
}
}
// Set up the FTP port (defaults to 21, overridable via TestFTPPort)
ftpPort := 21
if portStr := config.GetEnv().TestFTPPort; portStr != "" {
if port, err := strconv.Atoi(portStr); err == nil {
ftpPort = port
}
}
// Run tests
testCases := []struct {
name string
@@ -124,6 +133,18 @@ func Test_Storage_BasicOperations(t *testing.T) {
ContainerName: azuriteContainer.containerNameStr,
},
},
{
name: "FTPStorage",
storage: &ftp_storage.FTPStorage{
StorageID: uuid.New(),
Host: "localhost",
Port: ftpPort,
Username: "testuser",
Password: "testpassword",
UseSSL: false,
Path: "test-files",
},
},
}
// Add Google Drive storage test only if environment variables are available

View File

@@ -0,0 +1,368 @@
package ftp_storage
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"log/slog"
"postgresus-backend/internal/util/encryption"
"strings"
"time"
"github.com/google/uuid"
"github.com/jlaffaye/ftp"
)
const (
ftpConnectTimeout = 30 * time.Second
ftpTestConnectTimeout = 10 * time.Second
ftpChunkSize = 16 * 1024 * 1024
)
type FTPStorage struct {
StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
Host string `json:"host" gorm:"not null;type:text;column:host"`
Port int `json:"port" gorm:"not null;default:21;column:port"`
Username string `json:"username" gorm:"not null;type:text;column:username"`
Password string `json:"password" gorm:"not null;type:text;column:password"`
Path string `json:"path" gorm:"type:text;column:path"`
UseSSL bool `json:"useSsl" gorm:"not null;default:false;column:use_ssl"`
SkipTLSVerify bool `json:"skipTlsVerify" gorm:"not null;default:false;column:skip_tls_verify"`
}
func (f *FTPStorage) TableName() string {
return "ftp_storages"
}
func (f *FTPStorage) SaveFile(
ctx context.Context,
encryptor encryption.FieldEncryptor,
logger *slog.Logger,
fileID uuid.UUID,
file io.Reader,
) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
logger.Info("Starting to save file to FTP storage", "fileId", fileID.String(), "host", f.Host)
conn, err := f.connect(encryptor, ftpConnectTimeout)
if err != nil {
logger.Error("Failed to connect to FTP", "fileId", fileID.String(), "error", err)
return fmt.Errorf("failed to connect to FTP: %w", err)
}
defer func() {
if quitErr := conn.Quit(); quitErr != nil {
logger.Error(
"Failed to close FTP connection",
"fileId",
fileID.String(),
"error",
quitErr,
)
}
}()
if f.Path != "" {
if err := f.ensureDirectory(conn, f.Path); err != nil {
logger.Error(
"Failed to ensure directory",
"fileId",
fileID.String(),
"path",
f.Path,
"error",
err,
)
return fmt.Errorf("failed to ensure directory: %w", err)
}
}
filePath := f.getFilePath(fileID.String())
logger.Debug("Uploading file to FTP", "fileId", fileID.String(), "filePath", filePath)
ctxReader := &contextReader{ctx: ctx, reader: file}
err = conn.Stor(filePath, ctxReader)
if err != nil {
select {
case <-ctx.Done():
logger.Info("FTP upload cancelled", "fileId", fileID.String())
return ctx.Err()
default:
logger.Error("Failed to upload file to FTP", "fileId", fileID.String(), "error", err)
return fmt.Errorf("failed to upload file to FTP: %w", err)
}
}
logger.Info(
"Successfully saved file to FTP storage",
"fileId",
fileID.String(),
"filePath",
filePath,
)
return nil
}
func (f *FTPStorage) GetFile(
encryptor encryption.FieldEncryptor,
fileID uuid.UUID,
) (io.ReadCloser, error) {
conn, err := f.connect(encryptor, ftpConnectTimeout)
if err != nil {
return nil, fmt.Errorf("failed to connect to FTP: %w", err)
}
filePath := f.getFilePath(fileID.String())
resp, err := conn.Retr(filePath)
if err != nil {
_ = conn.Quit()
return nil, fmt.Errorf("failed to retrieve file from FTP: %w", err)
}
return &ftpFileReader{
response: resp,
conn: conn,
}, nil
}
func (f *FTPStorage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
conn, err := f.connect(encryptor, ftpConnectTimeout)
if err != nil {
return fmt.Errorf("failed to connect to FTP: %w", err)
}
defer func() {
_ = conn.Quit()
}()
filePath := f.getFilePath(fileID.String())
// If the file is already gone (FileSize fails), treat deletion as a no-op.
_, err = conn.FileSize(filePath)
if err != nil {
return nil
}
err = conn.Delete(filePath)
if err != nil {
return fmt.Errorf("failed to delete file from FTP: %w", err)
}
return nil
}
func (f *FTPStorage) Validate(encryptor encryption.FieldEncryptor) error {
if f.Host == "" {
return errors.New("FTP host is required")
}
if f.Username == "" {
return errors.New("FTP username is required")
}
if f.Password == "" {
return errors.New("FTP password is required")
}
if f.Port <= 0 || f.Port > 65535 {
return errors.New("FTP port must be between 1 and 65535")
}
return nil
}
func (f *FTPStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
ctx, cancel := context.WithTimeout(context.Background(), ftpTestConnectTimeout)
defer cancel()
conn, err := f.connectWithContext(ctx, encryptor, ftpTestConnectTimeout)
if err != nil {
return fmt.Errorf("failed to connect to FTP: %w", err)
}
defer func() {
_ = conn.Quit()
}()
if f.Path != "" {
if err := f.ensureDirectory(conn, f.Path); err != nil {
return fmt.Errorf("failed to access or create path '%s': %w", f.Path, err)
}
}
return nil
}
func (f *FTPStorage) HideSensitiveData() {
f.Password = ""
}
func (f *FTPStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
if f.Password != "" {
encrypted, err := encryptor.Encrypt(f.StorageID, f.Password)
if err != nil {
return fmt.Errorf("failed to encrypt FTP password: %w", err)
}
f.Password = encrypted
}
return nil
}
func (f *FTPStorage) Update(incoming *FTPStorage) {
f.Host = incoming.Host
f.Port = incoming.Port
f.Username = incoming.Username
f.UseSSL = incoming.UseSSL
f.SkipTLSVerify = incoming.SkipTLSVerify
f.Path = incoming.Path
if incoming.Password != "" {
f.Password = incoming.Password
}
}
func (f *FTPStorage) connect(
encryptor encryption.FieldEncryptor,
timeout time.Duration,
) (*ftp.ServerConn, error) {
return f.connectWithContext(context.Background(), encryptor, timeout)
}
func (f *FTPStorage) connectWithContext(
ctx context.Context,
encryptor encryption.FieldEncryptor,
timeout time.Duration,
) (*ftp.ServerConn, error) {
password, err := encryptor.Decrypt(f.StorageID, f.Password)
if err != nil {
return nil, fmt.Errorf("failed to decrypt FTP password: %w", err)
}
address := fmt.Sprintf("%s:%d", f.Host, f.Port)
dialCtx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
var conn *ftp.ServerConn
if f.UseSSL {
tlsConfig := &tls.Config{
ServerName: f.Host,
InsecureSkipVerify: f.SkipTLSVerify,
}
conn, err = ftp.Dial(address,
ftp.DialWithContext(dialCtx),
ftp.DialWithExplicitTLS(tlsConfig),
)
} else {
conn, err = ftp.Dial(address, ftp.DialWithContext(dialCtx))
}
if err != nil {
return nil, fmt.Errorf("failed to dial FTP server: %w", err)
}
err = conn.Login(f.Username, password)
if err != nil {
_ = conn.Quit()
return nil, fmt.Errorf("failed to login to FTP server: %w", err)
}
return conn, nil
}
func (f *FTPStorage) ensureDirectory(conn *ftp.ServerConn, path string) error {
path = strings.TrimPrefix(path, "/")
path = strings.TrimSuffix(path, "/")
if path == "" {
return nil
}
parts := strings.Split(path, "/")
currentPath := ""
for _, part := range parts {
if part == "" || part == "." {
continue
}
if currentPath == "" {
currentPath = part
} else {
currentPath = currentPath + "/" + part
}
err := conn.ChangeDir(currentPath)
if err != nil {
err = conn.MakeDir(currentPath)
if err != nil {
return fmt.Errorf("failed to create directory '%s': %w", currentPath, err)
}
}
err = conn.ChangeDirToParent()
if err != nil {
return fmt.Errorf("failed to change to parent directory: %w", err)
}
}
return nil
}
func (f *FTPStorage) getFilePath(filename string) string {
if f.Path == "" {
return filename
}
path := strings.TrimPrefix(f.Path, "/")
path = strings.TrimSuffix(path, "/")
return path + "/" + filename
}
// ftpFileReader streams an FTP data response and closes both the response
// and the underlying control connection when the caller is done.
type ftpFileReader struct {
response *ftp.Response
conn *ftp.ServerConn
}
func (r *ftpFileReader) Read(p []byte) (n int, err error) {
return r.response.Read(p)
}
func (r *ftpFileReader) Close() error {
var errs []error
if r.response != nil {
if err := r.response.Close(); err != nil {
errs = append(errs, fmt.Errorf("failed to close response: %w", err))
}
}
if r.conn != nil {
if err := r.conn.Quit(); err != nil {
errs = append(errs, fmt.Errorf("failed to close connection: %w", err))
}
}
if len(errs) > 0 {
return errs[0]
}
return nil
}
// contextReader wraps an io.Reader and fails the next Read once the context
// is cancelled, letting uploads abort between chunks.
type contextReader struct {
ctx context.Context
reader io.Reader
}
func (r *contextReader) Read(p []byte) (n int, err error) {
select {
case <-r.ctx.Done():
return 0, r.ctx.Err()
default:
return r.reader.Read(p)
}
}

View File

@@ -3,6 +3,7 @@ package s3_storage
import (
"bytes"
"context"
"crypto/tls"
"errors"
"fmt"
"io"
@@ -40,6 +41,7 @@ type S3Storage struct {
S3Prefix string `json:"s3Prefix" gorm:"type:text;column:s3_prefix"`
S3UseVirtualHostedStyle bool `json:"s3UseVirtualHostedStyle" gorm:"default:false;column:s3_use_virtual_hosted_style"`
SkipTLSVerify bool `json:"skipTLSVerify" gorm:"default:false;column:skip_tls_verify"`
}
func (s *S3Storage) TableName() string {
@@ -331,6 +333,7 @@ func (s *S3Storage) Update(incoming *S3Storage) {
s.S3Region = incoming.S3Region
s.S3Endpoint = incoming.S3Endpoint
s.S3UseVirtualHostedStyle = incoming.S3UseVirtualHostedStyle
s.SkipTLSVerify = incoming.SkipTLSVerify
if incoming.S3AccessKey != "" {
s.S3AccessKey = incoming.S3AccessKey
@@ -442,6 +445,9 @@ func (s *S3Storage) getClientParams(
TLSHandshakeTimeout: s3TLSHandshakeTimeout,
ResponseHeaderTimeout: s3ResponseTimeout,
IdleConnTimeout: s3IdleConnTimeout,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: s.SkipTLSVerify,
},
}
return endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, nil
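The `SkipTLSVerify` flag above is wired into the client's transport the way any Go HTTP client would do it: by setting `InsecureSkipVerify` on the transport's `tls.Config`. A minimal sketch of that wiring (`newTransport` is a hypothetical helper, not the codebase's function):

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// newTransport builds an HTTP transport whose TLS certificate verification
// can be disabled, e.g. for S3-compatible endpoints with self-signed certs.
func newTransport(skipVerify bool) *http.Transport {
	return &http.Transport{
		TLSClientConfig: &tls.Config{
			// InsecureSkipVerify disables certificate chain and hostname
			// checks entirely; only enable it for trusted private endpoints.
			InsecureSkipVerify: skipVerify,
		},
	}
}
```

Skipping verification removes protection against man-in-the-middle attacks, so the flag defaults to false both here and in the `gorm` column definition.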

View File

@@ -34,17 +34,21 @@ func (r *StorageRepository) Save(storage *Storage) (*Storage, error) {
if storage.AzureBlobStorage != nil {
storage.AzureBlobStorage.StorageID = storage.ID
}
case StorageTypeFTP:
if storage.FTPStorage != nil {
storage.FTPStorage.StorageID = storage.ID
}
}
if storage.ID == uuid.Nil {
if err := tx.Create(storage).
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage").
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage", "FTPStorage").
Error; err != nil {
return err
}
} else {
if err := tx.Save(storage).
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage").
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage", "FTPStorage").
Error; err != nil {
return err
}
@@ -86,6 +90,13 @@ func (r *StorageRepository) Save(storage *Storage) (*Storage, error) {
return err
}
}
case StorageTypeFTP:
if storage.FTPStorage != nil {
storage.FTPStorage.StorageID = storage.ID // Ensure ID is set
if err := tx.Save(storage.FTPStorage).Error; err != nil {
return err
}
}
}
return nil
@@ -108,6 +119,7 @@ func (r *StorageRepository) FindByID(id uuid.UUID) (*Storage, error) {
Preload("GoogleDriveStorage").
Preload("NASStorage").
Preload("AzureBlobStorage").
Preload("FTPStorage").
Where("id = ?", id).
First(&s).Error; err != nil {
return nil, err
@@ -126,6 +138,7 @@ func (r *StorageRepository) FindByWorkspaceID(workspaceID uuid.UUID) ([]*Storage
Preload("GoogleDriveStorage").
Preload("NASStorage").
Preload("AzureBlobStorage").
Preload("FTPStorage").
Where("workspace_id = ?", workspaceID).
Order("name ASC").
Find(&storages).Error; err != nil {
@@ -169,6 +182,12 @@ func (r *StorageRepository) Delete(s *Storage) error {
return err
}
}
case StorageTypeFTP:
if s.FTPStorage != nil {
if err := tx.Delete(s.FTPStorage).Error; err != nil {
return err
}
}
}
// Delete the main storage

View File

@@ -30,7 +30,6 @@ import (
workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
test_utils "postgresus-backend/internal/util/testing"
"postgresus-backend/internal/util/tools"
)
const createAndFillTableQuery = `
@@ -114,6 +113,382 @@ func Test_BackupAndRestorePostgresqlWithEncryption_RestoreIsSuccessful(t *testin
}
}
func Test_BackupAndRestoreSupabase_PublicSchemaOnly_RestoreIsSuccessful(t *testing.T) {
env := config.GetEnv()
if env.TestSupabaseHost == "" {
t.Skip("Skipping Supabase test: missing environment variables")
}
portInt, err := strconv.Atoi(env.TestSupabasePort)
assert.NoError(t, err)
dsn := fmt.Sprintf(
"host=%s port=%d user=%s password=%s dbname=%s sslmode=require",
env.TestSupabaseHost,
portInt,
env.TestSupabaseUsername,
env.TestSupabasePassword,
env.TestSupabaseDatabase,
)
supabaseDB, err := sqlx.Connect("postgres", dsn)
assert.NoError(t, err)
defer supabaseDB.Close()
tableName := fmt.Sprintf("backup_test_%s", uuid.New().String()[:8])
createTableQuery := fmt.Sprintf(`
DROP TABLE IF EXISTS public.%s;
CREATE TABLE public.%s (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
value INTEGER NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
INSERT INTO public.%s (name, value) VALUES
('test1', 100),
('test2', 200),
('test3', 300);
`, tableName, tableName, tableName)
_, err = supabaseDB.Exec(createTableQuery)
assert.NoError(t, err)
defer func() {
_, _ = supabaseDB.Exec(fmt.Sprintf(`DROP TABLE IF EXISTS public.%s`, tableName))
}()
router := createTestRouter()
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
workspace := workspaces_testing.CreateTestWorkspace("Supabase Test Workspace", user, router)
storage := storages.CreateTestStorage(workspace.ID)
database := createSupabaseDatabaseViaAPI(
t, router, "Supabase Test Database", workspace.ID,
env.TestSupabaseHost, portInt,
env.TestSupabaseUsername, env.TestSupabasePassword, env.TestSupabaseDatabase,
[]string{"public"},
user.Token,
)
enableBackupsViaAPI(
t, router, database.ID, storage.ID,
backups_config.BackupEncryptionNone, user.Token,
)
createBackupViaAPI(t, router, database.ID, user.Token)
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
_, err = supabaseDB.Exec(fmt.Sprintf(`DELETE FROM public.%s`, tableName))
assert.NoError(t, err)
var countAfterDelete int
err = supabaseDB.Get(
&countAfterDelete,
fmt.Sprintf(`SELECT COUNT(*) FROM public.%s`, tableName),
)
assert.NoError(t, err)
assert.Equal(t, 0, countAfterDelete, "Table should be empty after delete")
createSupabaseRestoreViaAPI(
t, router, backup.ID,
env.TestSupabaseHost, portInt,
env.TestSupabaseUsername, env.TestSupabasePassword, env.TestSupabaseDatabase,
user.Token,
)
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
var countAfterRestore int
err = supabaseDB.Get(
&countAfterRestore,
fmt.Sprintf(`SELECT COUNT(*) FROM public.%s`, tableName),
)
assert.NoError(t, err)
assert.Equal(t, 3, countAfterRestore, "Table should have 3 rows after restore")
var restoredData []TestDataItem
err = supabaseDB.Select(
&restoredData,
fmt.Sprintf(`SELECT id, name, value, created_at FROM public.%s ORDER BY id`, tableName),
)
assert.NoError(t, err)
assert.Len(t, restoredData, 3)
assert.Equal(t, "test1", restoredData[0].Name)
assert.Equal(t, 100, restoredData[0].Value)
assert.Equal(t, "test2", restoredData[1].Name)
assert.Equal(t, 200, restoredData[1].Value)
assert.Equal(t, "test3", restoredData[2].Name)
assert.Equal(t, 300, restoredData[2].Value)
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
if err != nil {
t.Logf("Warning: Failed to delete backup file: %v", err)
}
test_utils.MakeDeleteRequest(
t,
router,
"/api/v1/databases/"+database.ID.String(),
"Bearer "+user.Token,
http.StatusNoContent,
)
storages.RemoveTestStorage(storage.ID)
workspaces_testing.RemoveTestWorkspace(workspace, router)
}
func Test_BackupPostgresql_SchemaSelection_AllSchemasWhenNoneSpecified(t *testing.T) {
env := config.GetEnv()
container, err := connectToPostgresContainer("16", env.TestPostgres16Port)
assert.NoError(t, err)
defer container.DB.Close()
_, err = container.DB.Exec(`
DROP SCHEMA IF EXISTS schema_a CASCADE;
DROP SCHEMA IF EXISTS schema_b CASCADE;
CREATE SCHEMA schema_a;
CREATE SCHEMA schema_b;
CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);
INSERT INTO public.public_table (data) VALUES ('public_data');
INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
`)
assert.NoError(t, err)
defer func() {
_, _ = container.DB.Exec(`
DROP TABLE IF EXISTS public.public_table;
DROP SCHEMA IF EXISTS schema_a CASCADE;
DROP SCHEMA IF EXISTS schema_b CASCADE;
`)
}()
router := createTestRouter()
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)
storage := storages.CreateTestStorage(workspace.ID)
database := createDatabaseWithSchemasViaAPI(
t, router, "All Schemas Database", workspace.ID,
container.Host, container.Port,
container.Username, container.Password, container.Database,
nil,
user.Token,
)
enableBackupsViaAPI(
t, router, database.ID, storage.ID,
backups_config.BackupEncryptionNone, user.Token,
)
createBackupViaAPI(t, router, database.ID, user.Token)
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
newDBName := "restored_all_schemas"
_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
assert.NoError(t, err)
_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
assert.NoError(t, err)
newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
container.Host, container.Port, container.Username, container.Password, newDBName)
newDB, err := sqlx.Connect("postgres", newDSN)
assert.NoError(t, err)
defer newDB.Close()
createRestoreViaAPI(
t, router, backup.ID,
container.Host, container.Port,
container.Username, container.Password, newDBName,
user.Token,
)
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
var publicTableExists bool
err = newDB.Get(&publicTableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'public_table'
)
`)
assert.NoError(t, err)
assert.True(t, publicTableExists, "public.public_table should exist in restored database")
var schemaATableExists bool
err = newDB.Get(&schemaATableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'schema_a' AND table_name = 'table_a'
)
`)
assert.NoError(t, err)
assert.True(t, schemaATableExists, "schema_a.table_a should exist in restored database")
var schemaBTableExists bool
err = newDB.Get(&schemaBTableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'schema_b' AND table_name = 'table_b'
)
`)
assert.NoError(t, err)
assert.True(t, schemaBTableExists, "schema_b.table_b should exist in restored database")
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
if err != nil {
t.Logf("Warning: Failed to delete backup file: %v", err)
}
test_utils.MakeDeleteRequest(
t,
router,
"/api/v1/databases/"+database.ID.String(),
"Bearer "+user.Token,
http.StatusNoContent,
)
storages.RemoveTestStorage(storage.ID)
workspaces_testing.RemoveTestWorkspace(workspace, router)
}
func Test_BackupPostgresql_SchemaSelection_OnlySpecifiedSchemas(t *testing.T) {
env := config.GetEnv()
container, err := connectToPostgresContainer("16", env.TestPostgres16Port)
assert.NoError(t, err)
defer container.DB.Close()
_, err = container.DB.Exec(`
DROP SCHEMA IF EXISTS schema_a CASCADE;
DROP SCHEMA IF EXISTS schema_b CASCADE;
CREATE SCHEMA schema_a;
CREATE SCHEMA schema_b;
CREATE TABLE public.public_table (id SERIAL PRIMARY KEY, data TEXT);
CREATE TABLE schema_a.table_a (id SERIAL PRIMARY KEY, data TEXT);
CREATE TABLE schema_b.table_b (id SERIAL PRIMARY KEY, data TEXT);
INSERT INTO public.public_table (data) VALUES ('public_data');
INSERT INTO schema_a.table_a (data) VALUES ('schema_a_data');
INSERT INTO schema_b.table_b (data) VALUES ('schema_b_data');
`)
assert.NoError(t, err)
defer func() {
_, _ = container.DB.Exec(`
DROP TABLE IF EXISTS public.public_table;
DROP SCHEMA IF EXISTS schema_a CASCADE;
DROP SCHEMA IF EXISTS schema_b CASCADE;
`)
}()
router := createTestRouter()
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
workspace := workspaces_testing.CreateTestWorkspace("Schema Test Workspace", user, router)
storage := storages.CreateTestStorage(workspace.ID)
database := createDatabaseWithSchemasViaAPI(
t, router, "Specific Schemas Database", workspace.ID,
container.Host, container.Port,
container.Username, container.Password, container.Database,
[]string{"public", "schema_a"},
user.Token,
)
enableBackupsViaAPI(
t, router, database.ID, storage.ID,
backups_config.BackupEncryptionNone, user.Token,
)
createBackupViaAPI(t, router, database.ID, user.Token)
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
newDBName := "restored_specific_schemas"
_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
assert.NoError(t, err)
_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
assert.NoError(t, err)
newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
container.Host, container.Port, container.Username, container.Password, newDBName)
newDB, err := sqlx.Connect("postgres", newDSN)
assert.NoError(t, err)
defer newDB.Close()
createRestoreViaAPI(
t, router, backup.ID,
container.Host, container.Port,
container.Username, container.Password, newDBName,
user.Token,
)
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
var publicTableExists bool
err = newDB.Get(&publicTableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'public_table'
)
`)
assert.NoError(t, err)
assert.True(t, publicTableExists, "public.public_table should exist (was included)")
var schemaATableExists bool
err = newDB.Get(&schemaATableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'schema_a' AND table_name = 'table_a'
)
`)
assert.NoError(t, err)
assert.True(t, schemaATableExists, "schema_a.table_a should exist (was included)")
var schemaBTableExists bool
err = newDB.Get(&schemaBTableExists, `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'schema_b' AND table_name = 'table_b'
)
`)
assert.NoError(t, err)
assert.False(t, schemaBTableExists, "schema_b.table_b should NOT exist (was excluded)")
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
if err != nil {
t.Logf("Warning: Failed to delete backup file: %v", err)
}
test_utils.MakeDeleteRequest(
t,
router,
"/api/v1/databases/"+database.ID.String(),
"Bearer "+user.Token,
http.StatusNoContent,
)
storages.RemoveTestStorage(storage.ID)
workspaces_testing.RemoveTestWorkspace(workspace, router)
}
func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
container, err := connectToPostgresContainer(pgVersion, port)
assert.NoError(t, err)
@@ -132,10 +507,9 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
storage := storages.CreateTestStorage(workspace.ID)
pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)
database := createDatabaseViaAPI(
t, router, "Test Database", workspace.ID,
-pgVersionEnum, container.Host, container.Port,
+container.Host, container.Port,
container.Username, container.Password, container.Database,
user.Token,
)
@@ -164,7 +538,7 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
defer newDB.Close()
createRestoreViaAPI(
-t, router, backup.ID, pgVersionEnum,
+t, router, backup.ID,
container.Host, container.Port,
container.Username, container.Password, newDBName,
user.Token,
@@ -217,10 +591,9 @@ func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, p
storage := storages.CreateTestStorage(workspace.ID)
pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)
database := createDatabaseViaAPI(
t, router, "Test Database", workspace.ID,
-pgVersionEnum, container.Host, container.Port,
+container.Host, container.Port,
container.Username, container.Password, container.Database,
user.Token,
)
@@ -250,7 +623,7 @@ func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, p
defer newDB.Close()
createRestoreViaAPI(
-t, router, backup.ID, pgVersionEnum,
+t, router, backup.ID,
container.Host, container.Port,
container.Username, container.Password, newDBName,
user.Token,
@@ -379,7 +752,6 @@ func createDatabaseViaAPI(
router *gin.Engine,
name string,
workspaceID uuid.UUID,
-pgVersion tools.PostgresqlVersion,
host string,
port int,
username string,
@@ -392,7 +764,6 @@ func createDatabaseViaAPI(
WorkspaceID: &workspaceID,
Type: databases.DatabaseTypePostgres,
Postgresql: &pgtypes.PostgresqlDatabase{
-Version: pgVersion,
Host: host,
Port: port,
Username: username,
@@ -475,7 +846,6 @@ func createRestoreViaAPI(
t *testing.T,
router *gin.Engine,
backupID uuid.UUID,
-pgVersion tools.PostgresqlVersion,
host string,
port int,
username string,
@@ -485,7 +855,6 @@ func createRestoreViaAPI(
) {
request := restores.RestoreBackupRequest{
PostgresqlDatabase: &pgtypes.PostgresqlDatabase{
-Version: pgVersion,
Host: host,
Port: port,
Username: username,
@@ -504,6 +873,141 @@ func createRestoreViaAPI(
)
}
func createDatabaseWithSchemasViaAPI(
t *testing.T,
router *gin.Engine,
name string,
workspaceID uuid.UUID,
host string,
port int,
username string,
password string,
database string,
includeSchemas []string,
token string,
) *databases.Database {
request := databases.Database{
Name: name,
WorkspaceID: &workspaceID,
Type: databases.DatabaseTypePostgres,
Postgresql: &pgtypes.PostgresqlDatabase{
Host: host,
Port: port,
Username: username,
Password: password,
Database: &database,
IncludeSchemas: includeSchemas,
},
}
w := workspaces_testing.MakeAPIRequest(
router,
"POST",
"/api/v1/databases/create",
"Bearer "+token,
request,
)
if w.Code != http.StatusCreated {
t.Fatalf(
"Failed to create database with schemas. Status: %d, Body: %s",
w.Code,
w.Body.String(),
)
}
var createdDatabase databases.Database
if err := json.Unmarshal(w.Body.Bytes(), &createdDatabase); err != nil {
t.Fatalf("Failed to unmarshal database response: %v", err)
}
return &createdDatabase
}
func createSupabaseDatabaseViaAPI(
t *testing.T,
router *gin.Engine,
name string,
workspaceID uuid.UUID,
host string,
port int,
username string,
password string,
database string,
includeSchemas []string,
token string,
) *databases.Database {
request := databases.Database{
Name: name,
WorkspaceID: &workspaceID,
Type: databases.DatabaseTypePostgres,
Postgresql: &pgtypes.PostgresqlDatabase{
Host: host,
Port: port,
Username: username,
Password: password,
Database: &database,
IsHttps: true,
IncludeSchemas: includeSchemas,
},
}
w := workspaces_testing.MakeAPIRequest(
router,
"POST",
"/api/v1/databases/create",
"Bearer "+token,
request,
)
if w.Code != http.StatusCreated {
t.Fatalf(
"Failed to create Supabase database. Status: %d, Body: %s",
w.Code,
w.Body.String(),
)
}
var createdDatabase databases.Database
if err := json.Unmarshal(w.Body.Bytes(), &createdDatabase); err != nil {
t.Fatalf("Failed to unmarshal database response: %v", err)
}
return &createdDatabase
}
func createSupabaseRestoreViaAPI(
t *testing.T,
router *gin.Engine,
backupID uuid.UUID,
host string,
port int,
username string,
password string,
database string,
token string,
) {
request := restores.RestoreBackupRequest{
PostgresqlDatabase: &pgtypes.PostgresqlDatabase{
Host: host,
Port: port,
Username: username,
Password: password,
Database: &database,
IsHttps: true,
},
}
test_utils.MakePostRequest(
t,
router,
fmt.Sprintf("/api/v1/restores/%s/restore", backupID.String()),
"Bearer "+token,
request,
http.StatusOK,
)
}
func verifyDataIntegrity(t *testing.T, originalDB *sqlx.DB, restoredDB *sqlx.DB) {
var originalData []TestDataItem
var restoredData []TestDataItem
@@ -550,7 +1054,6 @@ func connectToPostgresContainer(version string, port string) (*PostgresContainer
Username: username,
Password: password,
Database: dbName,
-Version: version,
DB: db,
}, nil
}

@@ -0,0 +1,11 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE postgresql_databases
ADD COLUMN include_schemas TEXT NOT NULL DEFAULT '';
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
ALTER TABLE postgresql_databases
DROP COLUMN include_schemas;
-- +goose StatementEnd

@@ -0,0 +1,11 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE s3_storages
ADD COLUMN skip_tls_verify BOOLEAN NOT NULL DEFAULT FALSE;
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
ALTER TABLE s3_storages
DROP COLUMN skip_tls_verify;
-- +goose StatementEnd

@@ -0,0 +1,29 @@
-- +goose Up
-- +goose StatementBegin
CREATE TABLE ftp_storages (
storage_id UUID PRIMARY KEY,
host TEXT NOT NULL,
port INTEGER NOT NULL DEFAULT 21,
username TEXT NOT NULL,
password TEXT NOT NULL,
path TEXT,
use_ssl BOOLEAN NOT NULL DEFAULT FALSE,
skip_tls_verify BOOLEAN NOT NULL DEFAULT FALSE,
passive_mode BOOLEAN NOT NULL DEFAULT TRUE
);
ALTER TABLE ftp_storages
ADD CONSTRAINT fk_ftp_storages_storage
FOREIGN KEY (storage_id)
REFERENCES storages (id)
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
DROP TABLE IF EXISTS ftp_storages;
-- +goose StatementEnd

@@ -0,0 +1,15 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE ftp_storages
DROP COLUMN passive_mode;
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
ALTER TABLE ftp_storages
ADD COLUMN passive_mode BOOLEAN NOT NULL DEFAULT TRUE;
-- +goose StatementEnd

@@ -32,6 +32,29 @@ Then open `http://localhost:4005` in your browser.
| `image.pullPolicy` | Image pull policy | `Always` |
| `replicaCount` | Number of replicas | `1` |
### Custom Root CA
| Parameter | Description | Default Value |
| -------------- | ---------------------------------------- | ------------- |
| `customRootCA` | Name of Secret containing CA certificate | `""` |
To trust a custom CA certificate (e.g., for internal services with self-signed certificates):
1. Create a Secret with your CA certificate:
```bash
kubectl create secret generic my-root-ca \
--from-file=ca.crt=./path/to/ca-certificate.crt
```
2. Reference it in values:
```yaml
customRootCA: my-root-ca
```
The certificate will be mounted to `/etc/ssl/certs/custom-root-ca.crt` and the `SSL_CERT_FILE` environment variable will be set automatically.
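Alternatively, the Secret can be declared as a manifest and applied with `kubectl apply` (an illustrative sketch, not part of the chart; the key must be `ca.crt` so it matches the `subPath` used by the mount):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-root-ca        # the name referenced by customRootCA in values
type: Opaque
stringData:
  ca.crt: |               # key name must match the mounted subPath
    -----BEGIN CERTIFICATE-----
    ...your PEM-encoded CA certificate...
    -----END CERTIFICATE-----
```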
### Service
| Parameter | Description | Default Value |

@@ -39,6 +39,11 @@ spec:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.customRootCA }}
env:
- name: SSL_CERT_FILE
value: /etc/ssl/certs/custom-root-ca.crt
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.targetPort }}
@@ -46,6 +51,12 @@ spec:
volumeMounts:
- name: postgresus-storage
mountPath: {{ .Values.persistence.mountPath }}
{{- if .Values.customRootCA }}
- name: custom-root-ca
mountPath: /etc/ssl/certs/custom-root-ca.crt
subPath: ca.crt
readOnly: true
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.livenessProbe.enabled }}
@@ -66,6 +77,12 @@ spec:
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
{{- if .Values.customRootCA }}
volumes:
- name: custom-root-ca
secret:
secretName: {{ .Values.customRootCA }}
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:

@@ -9,6 +9,9 @@ image:
# StatefulSet configuration
replicaCount: 1
# Root CA setup; expects the name of a Secret in the same namespace
customRootCA: ""
# Service configuration
service:
type: ClusterIP

File diff suppressed because it is too large

@@ -8,7 +8,9 @@
"build": "tsc -b && vite build",
"lint": "eslint .",
"format": "prettier --write \"**/*.{ts,tsx,js,jsx,json,css,md}\"",
-"preview": "vite preview"
+"preview": "vite preview",
+"test": "vitest run",
+"test:watch": "vitest"
},
"dependencies": {
"@tailwindcss/vite": "^4.1.7",
@@ -22,6 +24,7 @@
"tailwindcss": "^4.1.7"
},
"devDependencies": {
+"@vitest/coverage-v8": "^3.2.4",
"@eslint/js": "^9.25.0",
"@trivago/prettier-plugin-sort-imports": "^5.2.2",
"@types/react": "^19.1.2",
@@ -36,6 +39,7 @@
"prettier-plugin-tailwindcss": "^0.6.11",
"typescript": "~5.8.3",
"typescript-eslint": "^8.30.1",
-"vite": "^6.3.5"
+"vite": "^6.3.5",
+"vitest": "^3.2.4"
}
}

@@ -0,0 +1,3 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M853.333333 256H469.333333l-85.333333-85.333333H170.666667c-46.933333 0-85.333333 38.4-85.333334 85.333333v170.666667h853.333334v-85.333334c0-46.933333-38.4-85.333333-85.333334-85.333333z" fill="#FFA000" /><path d="M853.333333 256H170.666667c-46.933333 0-85.333333 38.4-85.333334 85.333333v426.666667c0 46.933333 38.4 85.333333 85.333334 85.333333h682.666666c46.933333 0 85.333333-38.4 85.333334-85.333333V341.333333c0-46.933333-38.4-85.333333-85.333334-85.333333z" fill="#FFCA28" /></svg>


@@ -0,0 +1,528 @@
import { describe, expect, it } from 'vitest';
import {
ConnectionStringParser,
type ParseError,
type ParseResult,
} from './ConnectionStringParser';
describe('ConnectionStringParser', () => {
// Helper to assert successful parse
const expectSuccess = (result: ParseResult | ParseError): ParseResult => {
expect('error' in result).toBe(false);
return result as ParseResult;
};
// Helper to assert parse error
const expectError = (result: ParseResult | ParseError): ParseError => {
expect('error' in result).toBe(true);
return result as ParseError;
};
describe('Standard PostgreSQL URI (postgresql://)', () => {
it('should parse basic postgresql:// connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://myuser:mypassword@localhost:5432/mydb'),
);
expect(result.host).toBe('localhost');
expect(result.port).toBe(5432);
expect(result.username).toBe('myuser');
expect(result.password).toBe('mypassword');
expect(result.database).toBe('mydb');
expect(result.isHttps).toBe(false);
});
it('should default port to 5432 when not specified', () => {
const result = expectSuccess(ConnectionStringParser.parse('postgresql://user:pass@host/db'));
expect(result.port).toBe(5432);
});
it('should handle URL-encoded passwords', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://user:p%40ss%23word@host:5432/db'),
);
expect(result.password).toBe('p@ss#word');
});
it('should handle URL-encoded usernames', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://user%40domain:password@host:5432/db'),
);
expect(result.username).toBe('user@domain');
});
});
describe('Postgres URI (postgres://)', () => {
it('should parse basic postgres:// connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgres://admin:secret@db.example.com:5432/production'),
);
expect(result.host).toBe('db.example.com');
expect(result.port).toBe(5432);
expect(result.username).toBe('admin');
expect(result.password).toBe('secret');
expect(result.database).toBe('production');
});
});
describe('Supabase Direct Connection', () => {
it('should parse Supabase direct connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://postgres:mySecretPassword@db.abcdefghijklmnop.supabase.co:5432/postgres',
),
);
expect(result.host).toBe('db.abcdefghijklmnop.supabase.co');
expect(result.port).toBe(5432);
expect(result.username).toBe('postgres');
expect(result.password).toBe('mySecretPassword');
expect(result.database).toBe('postgres');
});
});
describe('Supabase Pooler Connection', () => {
it('should parse Supabase pooler session mode connection string (port 5432)', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgres://postgres.abcdefghijklmnop:myPassword@aws-0-us-east-1.pooler.supabase.com:5432/postgres',
),
);
expect(result.host).toBe('aws-0-us-east-1.pooler.supabase.com');
expect(result.port).toBe(5432);
expect(result.username).toBe('postgres.abcdefghijklmnop');
expect(result.password).toBe('myPassword');
expect(result.database).toBe('postgres');
});
it('should parse Supabase pooler transaction mode connection string (port 6543)', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgres://postgres.projectref:myPassword@aws-0-eu-west-1.pooler.supabase.com:6543/postgres',
),
);
expect(result.host).toBe('aws-0-eu-west-1.pooler.supabase.com');
expect(result.port).toBe(6543);
expect(result.username).toBe('postgres.projectref');
});
});
describe('JDBC Connection String', () => {
it('should parse JDBC connection string with user and password params', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'jdbc:postgresql://localhost:5432/mydb?user=admin&password=secret',
),
);
expect(result.host).toBe('localhost');
expect(result.port).toBe(5432);
expect(result.username).toBe('admin');
expect(result.password).toBe('secret');
expect(result.database).toBe('mydb');
});
it('should parse JDBC connection string without port', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'jdbc:postgresql://db.example.com/mydb?user=admin&password=secret',
),
);
expect(result.host).toBe('db.example.com');
expect(result.port).toBe(5432);
});
it('should parse JDBC with sslmode parameter', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'jdbc:postgresql://host:5432/db?user=u&password=p&sslmode=require',
),
);
expect(result.isHttps).toBe(true);
});
it('should return error for JDBC without user parameter', () => {
const result = expectError(
ConnectionStringParser.parse('jdbc:postgresql://host:5432/db?password=secret'),
);
expect(result.error).toContain('user');
expect(result.format).toBe('JDBC');
});
it('should return error for JDBC without password parameter', () => {
const result = expectError(
ConnectionStringParser.parse('jdbc:postgresql://host:5432/db?user=admin'),
);
expect(result.error).toContain('Password');
expect(result.format).toBe('JDBC');
});
});
describe('Neon Connection String', () => {
it('should parse Neon connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://neonuser:password123@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb',
),
);
expect(result.host).toBe('ep-cool-name-123456.us-east-2.aws.neon.tech');
expect(result.username).toBe('neonuser');
expect(result.database).toBe('neondb');
});
});
describe('Railway Connection String', () => {
it('should parse Railway connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://postgres:railwaypass@containers-us-west-123.railway.app:5432/railway',
),
);
expect(result.host).toBe('containers-us-west-123.railway.app');
expect(result.username).toBe('postgres');
expect(result.database).toBe('railway');
});
});
describe('Render Connection String', () => {
it('should parse Render connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://renderuser:renderpass@dpg-abc123.oregon-postgres.render.com/mydb',
),
);
expect(result.host).toBe('dpg-abc123.oregon-postgres.render.com');
expect(result.username).toBe('renderuser');
expect(result.database).toBe('mydb');
});
});
describe('DigitalOcean Connection String', () => {
it('should parse DigitalOcean connection string with sslmode', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://doadmin:dopassword@db-postgresql-nyc1-12345-do-user-123456-0.b.db.ondigitalocean.com:25060/defaultdb?sslmode=require',
),
);
expect(result.host).toBe('db-postgresql-nyc1-12345-do-user-123456-0.b.db.ondigitalocean.com');
expect(result.port).toBe(25060);
expect(result.username).toBe('doadmin');
expect(result.database).toBe('defaultdb');
expect(result.isHttps).toBe(true);
});
});
describe('AWS RDS Connection String', () => {
it('should parse AWS RDS connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://rdsuser:rdspass@mydb.abc123xyz.us-east-1.rds.amazonaws.com:5432/mydb',
),
);
expect(result.host).toBe('mydb.abc123xyz.us-east-1.rds.amazonaws.com');
expect(result.username).toBe('rdsuser');
});
});
describe('Azure Database for PostgreSQL Connection String', () => {
it('should parse Azure connection string with user@server format', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://myuser@myserver:mypassword@myserver.postgres.database.azure.com:5432/mydb?sslmode=require',
),
);
expect(result.host).toBe('myserver.postgres.database.azure.com');
expect(result.port).toBe(5432);
expect(result.username).toBe('myuser');
expect(result.password).toBe('mypassword');
expect(result.database).toBe('mydb');
expect(result.isHttps).toBe(true);
});
});
describe('Heroku Connection String', () => {
it('should parse Heroku connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgres://herokuuser:herokupass@ec2-12-34-56-789.compute-1.amazonaws.com:5432/herokudb',
),
);
expect(result.host).toBe('ec2-12-34-56-789.compute-1.amazonaws.com');
expect(result.username).toBe('herokuuser');
expect(result.database).toBe('herokudb');
});
});
describe('CockroachDB Connection String', () => {
it('should parse CockroachDB connection string with sslmode=verify-full', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://crdbuser:crdbpass@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full',
),
);
expect(result.host).toBe('free-tier.gcp-us-central1.cockroachlabs.cloud');
expect(result.port).toBe(26257);
expect(result.isHttps).toBe(true);
});
});
describe('SSL Mode Handling', () => {
it('should set isHttps=true for sslmode=require', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://u:p@host:5432/db?sslmode=require'),
);
expect(result.isHttps).toBe(true);
});
it('should set isHttps=true for sslmode=verify-ca', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://u:p@host:5432/db?sslmode=verify-ca'),
);
expect(result.isHttps).toBe(true);
});
it('should set isHttps=true for sslmode=verify-full', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://u:p@host:5432/db?sslmode=verify-full'),
);
expect(result.isHttps).toBe(true);
});
it('should set isHttps=false for sslmode=disable', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://u:p@host:5432/db?sslmode=disable'),
);
expect(result.isHttps).toBe(false);
});
it('should set isHttps=false when no sslmode specified', () => {
const result = expectSuccess(ConnectionStringParser.parse('postgresql://u:p@host:5432/db'));
expect(result.isHttps).toBe(false);
});
});
describe('libpq Key-Value Format', () => {
it('should parse libpq format connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'host=localhost port=5432 dbname=mydb user=admin password=secret',
),
);
expect(result.host).toBe('localhost');
expect(result.port).toBe(5432);
expect(result.username).toBe('admin');
expect(result.password).toBe('secret');
expect(result.database).toBe('mydb');
});
it('should parse libpq format with quoted password containing spaces', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
"host=localhost port=5432 dbname=mydb user=admin password='my secret pass'",
),
);
expect(result.password).toBe('my secret pass');
});
it('should default port to 5432 when not specified in libpq format', () => {
const result = expectSuccess(
ConnectionStringParser.parse('host=localhost dbname=mydb user=admin password=secret'),
);
expect(result.port).toBe(5432);
});
it('should handle hostaddr as alternative to host', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'hostaddr=192.168.1.1 port=5432 dbname=mydb user=admin password=secret',
),
);
expect(result.host).toBe('192.168.1.1');
});
it('should handle database as alternative to dbname', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'host=localhost port=5432 database=mydb user=admin password=secret',
),
);
expect(result.database).toBe('mydb');
});
it('should handle username as alternative to user', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'host=localhost port=5432 dbname=mydb username=admin password=secret',
),
);
expect(result.username).toBe('admin');
});
it('should parse sslmode in libpq format', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'host=localhost dbname=mydb user=admin password=secret sslmode=require',
),
);
expect(result.isHttps).toBe(true);
});
it('should return error for libpq format missing host', () => {
const result = expectError(
ConnectionStringParser.parse('port=5432 dbname=mydb user=admin password=secret'),
);
expect(result.error).toContain('Host');
expect(result.format).toBe('libpq');
});
it('should return error for libpq format missing user', () => {
const result = expectError(
ConnectionStringParser.parse('host=localhost dbname=mydb password=secret'),
);
expect(result.error).toContain('Username');
expect(result.format).toBe('libpq');
});
it('should return error for libpq format missing password', () => {
const result = expectError(
ConnectionStringParser.parse('host=localhost dbname=mydb user=admin'),
);
expect(result.error).toContain('Password');
expect(result.format).toBe('libpq');
});
it('should return error for libpq format missing dbname', () => {
const result = expectError(
ConnectionStringParser.parse('host=localhost user=admin password=secret'),
);
expect(result.error).toContain('Database');
expect(result.format).toBe('libpq');
});
});
describe('Error Cases', () => {
it('should return error for empty string', () => {
const result = expectError(ConnectionStringParser.parse(''));
expect(result.error).toContain('empty');
});
it('should return error for whitespace-only string', () => {
const result = expectError(ConnectionStringParser.parse(' '));
expect(result.error).toContain('empty');
});
it('should return error for unrecognized format', () => {
const result = expectError(ConnectionStringParser.parse('some random text'));
expect(result.error).toContain('Unrecognized');
});
it('should return error for missing username in URI', () => {
const result = expectError(
ConnectionStringParser.parse('postgresql://:password@host:5432/db'),
);
expect(result.error).toContain('Username');
});
it('should return error for missing password in URI', () => {
const result = expectError(ConnectionStringParser.parse('postgresql://user@host:5432/db'));
expect(result.error).toContain('Password');
});
it('should return error for missing database in URI', () => {
const result = expectError(ConnectionStringParser.parse('postgresql://user:pass@host:5432/'));
expect(result.error).toContain('Database');
});
it('should return error for invalid JDBC format', () => {
const result = expectError(ConnectionStringParser.parse('jdbc:postgresql://invalid'));
expect(result.format).toBe('JDBC');
});
});
describe('Edge Cases', () => {
it('should handle special characters in password', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://user:p%40ss%3Aw%2Ford@host:5432/db'),
);
expect(result.password).toBe('p@ss:w/ord');
});
it('should handle numeric database names', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://user:pass@host:5432/12345'),
);
expect(result.database).toBe('12345');
});
it('should handle hyphenated host names', () => {
const result = expectSuccess(
ConnectionStringParser.parse('postgresql://user:pass@my-database-host.example.com:5432/db'),
);
expect(result.host).toBe('my-database-host.example.com');
});
it('should handle connection string with extra query parameters', () => {
const result = expectSuccess(
ConnectionStringParser.parse(
'postgresql://user:pass@host:5432/db?sslmode=require&connect_timeout=10&application_name=myapp',
),
);
expect(result.isHttps).toBe(true);
expect(result.database).toBe('db');
});
it('should trim whitespace from connection string', () => {
const result = expectSuccess(
ConnectionStringParser.parse(' postgresql://user:pass@host:5432/db '),
);
expect(result.host).toBe('host');
});
});
});
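For context, the URI branch of the parser leans on the WHATWG `URL` API, which handles percent-decoding and the authority section for non-HTTP schemes. A minimal self-contained sketch of that approach (a hypothetical helper, not the implementation under test; the 5432 default and `isHttps` mapping are assumptions based on the tests above):

```typescript
// Sketch of URI-style parsing with the WHATWG URL API. The URL
// constructor throws on malformed input, so callers should wrap
// this in try/catch and surface a ParseError-style result.
function parsePostgresUri(s: string) {
  const url = new URL(s.trim());
  const sslmode = new URLSearchParams(url.search).get('sslmode');
  return {
    host: url.hostname,
    port: url.port ? parseInt(url.port, 10) : 5432, // Postgres default port
    username: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    database: decodeURIComponent(url.pathname.slice(1)), // drop leading '/'
    // require / verify-ca / verify-full all imply TLS
    isHttps: ['require', 'verify-ca', 'verify-full'].includes(sslmode ?? ''),
  };
}

// Example: percent-encoded password and explicit sslmode
const parsed = parsePostgresUri(
  'postgresql://admin:p%40ss@db.example.com:6543/mydb?sslmode=require',
);
```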

@@ -0,0 +1,284 @@
export type ParseResult = {
host: string;
port: number;
username: string;
password: string;
database: string;
isHttps: boolean;
};
export type ParseError = {
error: string;
format?: string;
};
export class ConnectionStringParser {
/**
* Parses a PostgreSQL connection string in various formats.
*
* Supported formats:
* 1. Standard PostgreSQL URI: postgresql://user:pass@host:port/db
* 2. Postgres URI: postgres://user:pass@host:port/db
* 3. Supabase Direct: postgresql://postgres:pass@db.xxx.supabase.co:5432/postgres
* 4. Supabase Pooler Session: postgres://postgres.ref:pass@aws-0-region.pooler.supabase.com:5432/postgres
* 5. Supabase Pooler Transaction: same as above with port 6543
* 6. JDBC: jdbc:postgresql://host:port/db?user=x&password=y
* 7. Neon: postgresql://user:pass@ep-xxx.neon.tech/db
* 8. Railway: postgresql://postgres:pass@xxx.railway.app:port/railway
* 9. Render: postgresql://user:pass@xxx.render.com/db
* 10. DigitalOcean: postgresql://user:pass@xxx.ondigitalocean.com:port/db?sslmode=require
* 11. AWS RDS: postgresql://user:pass@xxx.rds.amazonaws.com:port/db
* 12. Azure: postgresql://user@server:pass@xxx.postgres.database.azure.com:port/db?sslmode=require
* 13. Heroku: postgres://user:pass@ec2-xxx.amazonaws.com:port/db
* 14. CockroachDB: postgresql://user:pass@xxx.cockroachlabs.cloud:port/db?sslmode=verify-full
* 15. With SSL params: postgresql://user:pass@host:port/db?sslmode=require
* 16. libpq key-value: host=x port=5432 dbname=db user=u password=p
*/
static parse(connectionString: string): ParseResult | ParseError {
const trimmed = connectionString.trim();
if (!trimmed) {
return { error: 'Connection string is empty' };
}
// Try JDBC format first (starts with jdbc:)
if (trimmed.startsWith('jdbc:postgresql://')) {
return this.parseJdbc(trimmed);
}
// Try libpq key-value format (contains key=value pairs without ://)
if (this.isLibpqFormat(trimmed)) {
return this.parseLibpq(trimmed);
}
// Try URI format (postgresql:// or postgres://)
if (trimmed.startsWith('postgresql://') || trimmed.startsWith('postgres://')) {
return this.parseUri(trimmed);
}
return {
error: 'Unrecognized connection string format',
};
}
private static isLibpqFormat(str: string): boolean {
// libpq format has key=value pairs separated by spaces
// Must contain at least host= or dbname= to be considered libpq format
return (
!str.includes('://') &&
(str.includes('host=') || str.includes('dbname=')) &&
str.includes('=')
);
}
private static parseUri(connectionString: string): ParseResult | ParseError {
try {
// Handle Azure format where username contains @: user@server:pass
// Azure format: postgresql://user@servername:password@host:port/db
const azureMatch = connectionString.match(
/^postgres(?:ql)?:\/\/([^@:]+)@([^:]+):([^@]+)@([^:/?]+):?(\d+)?\/([^?]+)(?:\?(.*))?$/,
);
if (azureMatch) {
const [, user, , password, host, port, database, queryString] = azureMatch;
const isHttps = this.checkSslMode(queryString);
return {
host: host,
port: port ? parseInt(port, 10) : 5432,
username: decodeURIComponent(user),
password: decodeURIComponent(password),
database: decodeURIComponent(database),
isHttps,
};
}
// Standard URI parsing using URL API
const url = new URL(connectionString);
const host = url.hostname;
const port = url.port ? parseInt(url.port, 10) : 5432;
const username = decodeURIComponent(url.username);
const password = decodeURIComponent(url.password);
const database = decodeURIComponent(url.pathname.slice(1)); // Remove leading /
const isHttps = this.checkSslMode(url.search);
// Validate required fields
if (!host) {
return { error: 'Host is missing from connection string' };
}
if (!username) {
return { error: 'Username is missing from connection string' };
}
if (!password) {
return { error: 'Password is missing from connection string' };
}
if (!database) {
return { error: 'Database name is missing from connection string' };
}
return {
host,
port,
username,
password,
database,
isHttps,
};
} catch (e) {
return {
error: `Failed to parse connection string: ${(e as Error).message}`,
format: 'URI',
};
}
}
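The Azure special case handled first in `parseUri` exists because the username itself contains an `@` (`user@servername`), so a plain `new URL(...)` parse would split the string at the wrong `@`. The sketch below copies the regex from the code above and runs it against a made-up sample connection string:

```typescript
// Azure-style URI regex from parseUri: group 1 is the user,
// group 2 the server name (discarded), group 3 the password,
// then host, optional port, database, and optional query string.
const azureRegex =
  /^postgres(?:ql)?:\/\/([^@:]+)@([^:]+):([^@]+)@([^:/?]+):?(\d+)?\/([^?]+)(?:\?(.*))?$/;

// Hypothetical sample credentials, Azure format.
const sample =
  'postgresql://myuser@myserver:s3cret@myserver.postgres.database.azure.com:5432/appdb?sslmode=require';

const m = sample.match(azureRegex);
if (!m) throw new Error('Azure-format regex did not match the sample string');
// Same destructuring shape as in parseUri (server name is skipped).
const [, user, , password, host, port, database] = m;
```

Because `[^@:]+` cannot cross an `@`, the first capture stops at `myuser`, and the second `@` correctly separates the password from the real host.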
private static parseJdbc(connectionString: string): ParseResult | ParseError {
try {
// JDBC format: jdbc:postgresql://host:port/database?user=x&password=y
const jdbcRegex = /^jdbc:postgresql:\/\/([^:/?]+):?(\d+)?\/([^?]+)(?:\?(.*))?$/;
const match = connectionString.match(jdbcRegex);
if (!match) {
return {
error:
'Invalid JDBC connection string format. Expected: jdbc:postgresql://host:port/database?user=x&password=y',
format: 'JDBC',
};
}
const [, host, port, database, queryString] = match;
if (!queryString) {
return {
error: 'JDBC connection string is missing query parameters (user and password)',
format: 'JDBC',
};
}
const params = new URLSearchParams(queryString);
const username = params.get('user');
const password = params.get('password');
const isHttps = this.checkSslMode(queryString);
if (!username) {
return {
error: 'Username (user parameter) is missing from JDBC connection string',
format: 'JDBC',
};
}
if (!password) {
return {
error: 'Password parameter is missing from JDBC connection string',
format: 'JDBC',
};
}
return {
host,
port: port ? parseInt(port, 10) : 5432,
username: decodeURIComponent(username),
password: decodeURIComponent(password),
database: decodeURIComponent(database),
isHttps,
};
} catch (e) {
return {
error: `Failed to parse JDBC connection string: ${(e as Error).message}`,
format: 'JDBC',
};
}
}
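The credential handling in `parseJdbc` leans on `URLSearchParams`, which splits the query string and percent-decodes values in one step. A quick standalone sketch (sample values invented):

```typescript
// JDBC credentials arrive as query parameters; URLSearchParams
// splits them and decodes percent-encoded characters (%40 -> '@').
const queryString = 'user=backup_user&password=p%40ss&ssl=true';
const params = new URLSearchParams(queryString);
const username = params.get('user');
const password = params.get('password');
```

Missing keys come back as `null`, which is what the `if (!username)` / `if (!password)` guards above rely on.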
private static parseLibpq(connectionString: string): ParseResult | ParseError {
try {
// libpq format: host=x port=5432 dbname=db user=u password=p
// Values can be quoted with single quotes: password='my pass'
const params: Record<string, string> = {};
// Match key=value or key='quoted value'
const regex = /(\w+)=(?:'([^']*)'|(\S+))/g;
let match;
while ((match = regex.exec(connectionString)) !== null) {
const key = match[1];
const value = match[2] !== undefined ? match[2] : match[3];
params[key] = value;
}
const host = params['host'] || params['hostaddr'];
const port = params['port'];
const database = params['dbname'] || params['database'];
const username = params['user'] || params['username'];
const password = params['password'];
const sslmode = params['sslmode'];
if (!host) {
return {
error: 'Host is missing from connection string. Use host=hostname',
format: 'libpq',
};
}
if (!username) {
return {
error: 'Username is missing from connection string. Use user=username',
format: 'libpq',
};
}
if (!password) {
return {
error: 'Password is missing from connection string. Use password=yourpassword',
format: 'libpq',
};
}
if (!database) {
return {
error: 'Database name is missing from connection string. Use dbname=database',
format: 'libpq',
};
}
const isHttps = this.isSslEnabled(sslmode);
return {
host,
port: port ? parseInt(port, 10) : 5432,
username,
password,
database,
isHttps,
};
} catch (e) {
return {
error: `Failed to parse libpq connection string: ${(e as Error).message}`,
format: 'libpq',
};
}
}
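The key=value tokenizer at the heart of `parseLibpq` can be tried in isolation; the regex below is the one from the code above, and the single-quote alternative is what lets a value contain spaces:

```typescript
// Tokenizer for libpq-style strings: matches key=value pairs,
// where a value may be wrapped in single quotes to allow spaces.
const pairRegex = /(\w+)=(?:'([^']*)'|(\S+))/g;

// Hypothetical sample string with a quoted password.
const input = "host=db.local port=5433 user=backup password='my secret pass' dbname=app";

const params: Record<string, string> = {};
let match: RegExpExecArray | null;
while ((match = pairRegex.exec(input)) !== null) {
  // Group 2 is the quoted form, group 3 the bare form.
  params[match[1]] = match[2] !== undefined ? match[2] : match[3];
}
```

For `password='my secret pass'` the quoted alternative wins and the quotes are stripped, while bare values like `db.local` fall through to the `\S+` branch.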
private static checkSslMode(queryString: string | undefined | null): boolean {
if (!queryString) return false;
const params = new URLSearchParams(
queryString.startsWith('?') ? queryString.slice(1) : queryString,
);
const sslmode = params.get('sslmode');
return this.isSslEnabled(sslmode);
}
private static isSslEnabled(sslmode: string | null | undefined): boolean {
if (!sslmode) return false;
// These modes require SSL
const sslModes = ['require', 'verify-ca', 'verify-full'];
return sslModes.includes(sslmode.toLowerCase());
}
}
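As `isSslEnabled` shows, only `require`, `verify-ca`, and `verify-full` are treated as TLS-enabled; libpq's softer modes (`prefer`, `allow`, `disable`) map to a plain connection. A minimal sketch of the same rule (function name invented):

```typescript
// Mirrors isSslEnabled: only these libpq sslmode values force TLS;
// comparison is case-insensitive, absent values mean no TLS.
function sslRequired(sslmode: string | null | undefined): boolean {
  if (!sslmode) return false;
  return ['require', 'verify-ca', 'verify-full'].includes(sslmode.toLowerCase());
}
```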

View File

@@ -11,4 +11,7 @@ export interface PostgresqlDatabase {
password: string;
database?: string;
isHttps: boolean;
// backup settings
includeSchemas?: string[];
}

View File

@@ -8,3 +8,4 @@ export { getStorageLogoFromType } from './models/getStorageLogoFromType';
export { getStorageNameFromType } from './models/getStorageNameFromType';
export { type GoogleDriveStorage } from './models/GoogleDriveStorage';
export { type AzureBlobStorage } from './models/AzureBlobStorage';
export { type FTPStorage } from './models/FTPStorage';

View File

@@ -0,0 +1,9 @@
export interface FTPStorage {
host: string;
port: number;
username: string;
password: string;
useSsl: boolean;
skipTlsVerify?: boolean;
path?: string;
}

View File

@@ -6,4 +6,5 @@ export interface S3Storage {
s3Endpoint?: string;
s3Prefix?: string;
s3UseVirtualHostedStyle?: boolean;
skipTLSVerify?: boolean;
}

View File

@@ -1,4 +1,5 @@
import type { AzureBlobStorage } from './AzureBlobStorage';
import type { FTPStorage } from './FTPStorage';
import type { GoogleDriveStorage } from './GoogleDriveStorage';
import type { LocalStorage } from './LocalStorage';
import type { NASStorage } from './NASStorage';
@@ -18,4 +19,5 @@ export interface Storage {
googleDriveStorage?: GoogleDriveStorage;
nasStorage?: NASStorage;
azureBlobStorage?: AzureBlobStorage;
ftpStorage?: FTPStorage;
}

View File

@@ -4,4 +4,5 @@ export enum StorageType {
GOOGLE_DRIVE = 'GOOGLE_DRIVE',
NAS = 'NAS',
AZURE_BLOB = 'AZURE_BLOB',
FTP = 'FTP',
}

View File

@@ -12,6 +12,8 @@ export const getStorageLogoFromType = (type: StorageType) => {
return '/icons/storages/nas.svg';
case StorageType.AZURE_BLOB:
return '/icons/storages/azure.svg';
case StorageType.FTP:
return '/icons/storages/ftp.svg';
default:
return '';
}

View File

@@ -12,6 +12,8 @@ export const getStorageNameFromType = (type: StorageType) => {
return 'NAS';
case StorageType.AZURE_BLOB:
return 'Azure Blob Storage';
case StorageType.FTP:
return 'FTP';
default:
return '';
}

View File

@@ -72,6 +72,7 @@ export const EditBackupConfigComponent = ({
const [storages, setStorages] = useState<Storage[]>([]);
const [isStoragesLoading, setIsStoragesLoading] = useState(false);
const [isShowCreateStorage, setShowCreateStorage] = useState(false);
const [storageSelectKey, setStorageSelectKey] = useState(0);
const [isShowWarn, setIsShowWarn] = useState(false);
@@ -397,6 +398,7 @@ export const EditBackupConfigComponent = ({
<div className="mb-1 min-w-[150px] sm:mb-0">Storage</div>
<div className="flex w-full items-center">
<Select
key={storageSelectKey}
value={backupConfig.storage?.id}
onChange={(storageId) => {
if (storageId.includes('create-new-storage')) {
@@ -527,7 +529,10 @@ export const EditBackupConfigComponent = ({
title="Add storage"
footer={<div />}
open={isShowCreateStorage}
onCancel={() => setShowCreateStorage(false)}
onCancel={() => {
setShowCreateStorage(false);
setStorageSelectKey((prev) => prev + 1);
}}
>
<div className="my-3 max-w-[275px] text-gray-500 dark:text-gray-400">
Storage is a place where backups will be stored (local disk, S3, Google Drive, etc.)

View File

@@ -43,6 +43,7 @@ export const EditDatabaseNotifiersComponent = ({
const [notifiers, setNotifiers] = useState<Notifier[]>([]);
const [isNotifiersLoading, setIsNotifiersLoading] = useState(false);
const [isShowCreateNotifier, setShowCreateNotifier] = useState(false);
const [notifierSelectKey, setNotifierSelectKey] = useState(0);
const saveDatabase = async () => {
if (!editingDatabase) return;
@@ -104,6 +105,7 @@ export const EditDatabaseNotifiersComponent = ({
<div className="min-w-[150px]">Notifiers</div>
<Select
key={notifierSelectKey}
mode="multiple"
value={editingDatabase.notifiers.map((n) => n.id)}
onChange={(notifiersIds) => {
@@ -160,7 +162,10 @@ export const EditDatabaseNotifiersComponent = ({
title="Add notifier"
footer={<div />}
open={isShowCreateNotifier}
onCancel={() => setShowCreateNotifier(false)}
onCancel={() => {
setShowCreateNotifier(false);
setNotifierSelectKey((prev) => prev + 1);
}}
>
<div className="my-3 max-w-[275px] text-gray-500 dark:text-gray-400">
Notifier is a place where notifications will be sent (email, Slack, Telegram, etc.)

View File

@@ -1,13 +1,9 @@
import { InfoCircleOutlined } from '@ant-design/icons';
import { Button, Input, InputNumber, Select, Switch, Tooltip } from 'antd';
import { CopyOutlined, DownOutlined, UpOutlined } from '@ant-design/icons';
import { App, Button, Input, InputNumber, Select, Switch } from 'antd';
import { useEffect, useState } from 'react';
import {
type Database,
DatabaseType,
PostgresqlVersion,
databaseApi,
} from '../../../../entity/databases';
import { type Database, DatabaseType, databaseApi } from '../../../../entity/databases';
import { ConnectionStringParser } from '../../../../entity/databases/model/postgresql/ConnectionStringParser';
import { ToastHelper } from '../../../../shared/toast';
interface Props {
@@ -23,7 +19,6 @@ interface Props {
isSaveToApi: boolean;
onSaved: (database: Database) => void;
isShowDbVersionHint?: boolean;
isShowDbName?: boolean;
}
@@ -39,10 +34,10 @@ export const EditDatabaseSpecificDataComponent = ({
saveButtonText,
isSaveToApi,
onSaved,
isShowDbVersionHint = true,
isShowDbName = true,
}: Props) => {
const { message } = App.useApp();
const [editingDatabase, setEditingDatabase] = useState<Database>();
const [isSaving, setIsSaving] = useState(false);
@@ -50,6 +45,76 @@ export const EditDatabaseSpecificDataComponent = ({
const [isTestingConnection, setIsTestingConnection] = useState(false);
const [isConnectionFailed, setIsConnectionFailed] = useState(false);
const hasAdvancedValues = !!database.postgresql?.includeSchemas?.length;
const [isShowAdvanced, setShowAdvanced] = useState(hasAdvancedValues);
const [hasAutoAddedPublicSchema, setHasAutoAddedPublicSchema] = useState(false);
const parseFromClipboard = async () => {
try {
const text = await navigator.clipboard.readText();
const trimmedText = text.trim();
if (!trimmedText) {
message.error('Clipboard is empty');
return;
}
const result = ConnectionStringParser.parse(trimmedText);
if ('error' in result) {
message.error(result.error);
return;
}
if (!editingDatabase?.postgresql) return;
const updatedDatabase: Database = {
...editingDatabase,
postgresql: {
...editingDatabase.postgresql,
host: result.host,
port: result.port,
username: result.username,
password: result.password,
database: result.database,
isHttps: result.isHttps,
},
};
setEditingDatabase(autoAddPublicSchemaForSupabase(updatedDatabase));
setIsConnectionTested(false);
message.success('Connection string parsed successfully');
} catch {
message.error('Failed to read clipboard. Please check browser permissions.');
}
};
const autoAddPublicSchemaForSupabase = (updatedDatabase: Database): Database => {
if (hasAutoAddedPublicSchema) return updatedDatabase;
const host = updatedDatabase.postgresql?.host || '';
const username = updatedDatabase.postgresql?.username || '';
const isSupabase = host.includes('supabase') || username.includes('supabase');
if (isSupabase && updatedDatabase.postgresql) {
setHasAutoAddedPublicSchema(true);
const currentSchemas = updatedDatabase.postgresql.includeSchemas || [];
if (!currentSchemas.includes('public')) {
return {
...updatedDatabase,
postgresql: {
...updatedDatabase.postgresql,
includeSchemas: ['public', ...currentSchemas],
},
};
}
}
return updatedDatabase;
};
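The Supabase auto-schema rule above can be summarized as a pure function: when the host or username mentions `supabase`, `public` is prepended to the included schemas unless it is already there. The sketch below uses a minimal invented type, ignoring the component-state bookkeeping (`hasAutoAddedPublicSchema`):

```typescript
// Pure sketch of the Supabase auto-schema rule (types invented).
interface PgSettings {
  host: string;
  username: string;
  includeSchemas?: string[];
}

function withPublicSchemaForSupabase(pg: PgSettings): PgSettings {
  const isSupabase = pg.host.includes('supabase') || pg.username.includes('supabase');
  if (!isSupabase) return pg;
  const schemas = pg.includeSchemas ?? [];
  if (schemas.includes('public')) return pg;
  // Prepend 'public' so Supabase backups include user tables by default.
  return { ...pg, includeSchemas: ['public', ...schemas] };
}
```

Returning a fresh object instead of mutating keeps the helper compatible with React's immutable-state updates used throughout this component.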
const testConnection = async () => {
if (!editingDatabase) return;
setIsTestingConnection(true);
@@ -100,7 +165,6 @@ export const EditDatabaseSpecificDataComponent = ({
if (!editingDatabase) return null;
let isAllFieldsFilled = true;
if (!editingDatabase.postgresql?.version) isAllFieldsFilled = false;
if (!editingDatabase.postgresql?.host) isAllFieldsFilled = false;
if (!editingDatabase.postgresql?.port) isAllFieldsFilled = false;
if (!editingDatabase.postgresql?.username) isAllFieldsFilled = false;
@@ -111,49 +175,23 @@ export const EditDatabaseSpecificDataComponent = ({
editingDatabase.postgresql?.host?.includes('localhost') ||
editingDatabase.postgresql?.host?.includes('127.0.0.1');
const isSupabaseDb =
editingDatabase.postgresql?.host?.includes('supabase') ||
editingDatabase.postgresql?.username?.includes('supabase');
return (
<div>
{editingDatabase.type === DatabaseType.POSTGRES && (
<>
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">PG version</div>
<Select
value={editingDatabase.postgresql?.version}
onChange={(v) => {
if (!editingDatabase.postgresql) return;
setEditingDatabase({
...editingDatabase,
postgresql: {
...editingDatabase.postgresql,
version: v as PostgresqlVersion,
},
});
setIsConnectionTested(false);
}}
size="small"
className="max-w-[200px] grow"
placeholder="Select PG version"
options={[
{ label: '12', value: PostgresqlVersion.PostgresqlVersion12 },
{ label: '13', value: PostgresqlVersion.PostgresqlVersion13 },
{ label: '14', value: PostgresqlVersion.PostgresqlVersion14 },
{ label: '15', value: PostgresqlVersion.PostgresqlVersion15 },
{ label: '16', value: PostgresqlVersion.PostgresqlVersion16 },
{ label: '17', value: PostgresqlVersion.PostgresqlVersion17 },
{ label: '18', value: PostgresqlVersion.PostgresqlVersion18 },
]}
/>
{isShowDbVersionHint && (
<Tooltip
className="cursor-pointer"
title="Please select the version of PostgreSQL you are backing up now. You will be able to restore backup to the same version or higher"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
)}
<div className="mb-3 flex">
<div className="min-w-[150px]" />
<div
className="cursor-pointer text-sm text-gray-600 transition-colors hover:text-gray-900 dark:text-gray-400 dark:hover:text-gray-200"
onClick={parseFromClipboard}
>
<CopyOutlined className="mr-1" />
Parse from clipboard
</div>
</div>
<div className="mb-1 flex w-full items-center">
@@ -163,13 +201,14 @@ export const EditDatabaseSpecificDataComponent = ({
onChange={(e) => {
if (!editingDatabase.postgresql) return;
setEditingDatabase({
const updatedDatabase = {
...editingDatabase,
postgresql: {
...editingDatabase.postgresql,
host: e.target.value.trim().replace('https://', '').replace('http://', ''),
},
});
};
setEditingDatabase(autoAddPublicSchemaForSupabase(updatedDatabase));
setIsConnectionTested(false);
}}
size="small"
@@ -184,7 +223,7 @@ export const EditDatabaseSpecificDataComponent = ({
<div className="max-w-[200px] text-xs text-gray-500 dark:text-gray-400">
Please{' '}
<a
href="https://postgresus.com/faq#how-to-backup-localhost"
href="https://postgresus.com/faq/localhost"
target="_blank"
rel="noreferrer"
className="!text-blue-600 dark:!text-blue-400"
@@ -196,6 +235,24 @@ export const EditDatabaseSpecificDataComponent = ({
</div>
)}
{isSupabaseDb && (
<div className="mb-1 flex">
<div className="min-w-[150px]" />
<div className="max-w-[200px] text-xs text-gray-500 dark:text-gray-400">
Please{' '}
<a
href="https://postgresus.com/faq/supabase"
target="_blank"
rel="noreferrer"
className="!text-blue-600 dark:!text-blue-400"
>
read this document
</a>{' '}
to learn how to back up a Supabase database
</div>
</div>
)}
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Port</div>
<InputNumber
@@ -223,10 +280,11 @@ export const EditDatabaseSpecificDataComponent = ({
onChange={(e) => {
if (!editingDatabase.postgresql) return;
setEditingDatabase({
const updatedDatabase = {
...editingDatabase,
postgresql: { ...editingDatabase.postgresql, username: e.target.value.trim() },
});
};
setEditingDatabase(autoAddPublicSchemaForSupabase(updatedDatabase));
setIsConnectionTested(false);
}}
size="small"
@@ -291,6 +349,43 @@ export const EditDatabaseSpecificDataComponent = ({
size="small"
/>
</div>
<div className="mt-4 mb-3 flex items-center">
<div
className="flex cursor-pointer items-center text-sm text-blue-600 hover:text-blue-800"
onClick={() => setShowAdvanced(!isShowAdvanced)}
>
<span className="mr-2">Advanced settings</span>
{isShowAdvanced ? (
<UpOutlined style={{ fontSize: '12px' }} />
) : (
<DownOutlined style={{ fontSize: '12px' }} />
)}
</div>
</div>
{isShowAdvanced && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Include schemas</div>
<Select
mode="tags"
value={editingDatabase.postgresql?.includeSchemas || []}
onChange={(values) => {
if (!editingDatabase.postgresql) return;
setEditingDatabase({
...editingDatabase,
postgresql: { ...editingDatabase.postgresql, includeSchemas: values },
});
}}
size="small"
className="max-w-[200px] grow"
placeholder="All schemas (default)"
tokenSeparators={[',']}
/>
</div>
)}
</>
)}

View File

@@ -57,6 +57,13 @@ export const ShowDatabaseSpecificDataComponent = ({ database }: Props) => {
<div className="min-w-[150px]">Use HTTPS</div>
<div>{database.postgresql?.isHttps ? 'Yes' : 'No'}</div>
</div>
{!!database.postgresql?.includeSchemas?.length && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Include schemas</div>
<div>{database.postgresql.includeSchemas.join(', ')}</div>
</div>
)}
</>
)}
</div>

View File

@@ -111,7 +111,6 @@ export const RestoresComponent = ({ database, backup }: Props) => {
setEditingDatabase({ ...database });
restore(database);
}}
isShowDbVersionHint={false}
/>
</>
);

View File

@@ -9,6 +9,7 @@ import {
} from '../../../../entity/storages';
import { ToastHelper } from '../../../../shared/toast';
import { EditAzureBlobStorageComponent } from './storages/EditAzureBlobStorageComponent';
import { EditFTPStorageComponent } from './storages/EditFTPStorageComponent';
import { EditGoogleDriveStorageComponent } from './storages/EditGoogleDriveStorageComponent';
import { EditNASStorageComponent } from './storages/EditNASStorageComponent';
import { EditS3StorageComponent } from './storages/EditS3StorageComponent';
@@ -39,6 +40,7 @@ export function EditStorageComponent({
const [isTestingConnection, setIsTestingConnection] = useState(false);
const [isTestConnectionSuccess, setIsTestConnectionSuccess] = useState(false);
const [connectionError, setConnectionError] = useState<string | undefined>();
const save = async () => {
if (!storage) return;
@@ -60,6 +62,7 @@ export function EditStorageComponent({
if (!storage) return;
setIsTestingConnection(true);
setConnectionError(undefined);
try {
await storageApi.testStorageConnectionDirect(storage);
@@ -69,7 +72,9 @@ export function EditStorageComponent({
description: 'Storage connection tested successfully',
});
} catch (e) {
alert((e as Error).message);
const errorMessage = (e as Error).message;
setConnectionError(errorMessage);
alert(errorMessage);
}
setIsTestingConnection(false);
@@ -82,6 +87,7 @@ export function EditStorageComponent({
storage.s3Storage = undefined;
storage.googleDriveStorage = undefined;
storage.azureBlobStorage = undefined;
storage.ftpStorage = undefined;
if (type === StorageType.LOCAL) {
storage.localStorage = {};
@@ -129,6 +135,17 @@ export function EditStorageComponent({
};
}
if (type === StorageType.FTP) {
storage.ftpStorage = {
host: '',
port: 21,
username: '',
password: '',
useSsl: false,
path: '',
};
}
setStorage(
JSON.parse(
JSON.stringify({
@@ -231,6 +248,19 @@ export function EditStorageComponent({
}
}
if (storage.type === StorageType.FTP) {
if (storage.id) {
return storage.ftpStorage?.host && storage.ftpStorage?.port && storage.ftpStorage?.username;
}
return (
storage.ftpStorage?.host &&
storage.ftpStorage?.port &&
storage.ftpStorage?.username &&
storage.ftpStorage?.password
);
}
return false;
};
@@ -267,6 +297,7 @@ export function EditStorageComponent({
{ label: 'Google Drive', value: StorageType.GOOGLE_DRIVE },
{ label: 'NAS', value: StorageType.NAS },
{ label: 'Azure Blob Storage', value: StorageType.AZURE_BLOB },
{ label: 'FTP', value: StorageType.FTP },
]}
onChange={(value) => {
setStorageType(value);
@@ -290,7 +321,9 @@ export function EditStorageComponent({
setUnsaved={() => {
setIsUnsaved(true);
setIsTestConnectionSuccess(false);
setConnectionError(undefined);
}}
connectionError={connectionError}
/>
)}
@@ -326,6 +359,17 @@ export function EditStorageComponent({
}}
/>
)}
{storage?.type === StorageType.FTP && (
<EditFTPStorageComponent
storage={storage}
setStorage={setStorage}
setUnsaved={() => {
setIsUnsaved(true);
setIsTestConnectionSuccess(false);
}}
/>
)}
</div>
<div className="mt-3 flex">

View File

@@ -0,0 +1,229 @@
import { DownOutlined, InfoCircleOutlined, UpOutlined } from '@ant-design/icons';
import { Checkbox, Input, InputNumber, Tooltip } from 'antd';
import { useState } from 'react';
import type { Storage } from '../../../../../entity/storages';
interface Props {
storage: Storage;
setStorage: (storage: Storage) => void;
setUnsaved: () => void;
}
export function EditFTPStorageComponent({ storage, setStorage, setUnsaved }: Props) {
const hasAdvancedValues = !!storage?.ftpStorage?.skipTlsVerify;
const [showAdvanced, setShowAdvanced] = useState(hasAdvancedValues);
return (
<>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Host</div>
<Input
value={storage?.ftpStorage?.host || ''}
onChange={(e) => {
if (!storage?.ftpStorage) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
host: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="ftp.example.com"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Port</div>
<InputNumber
value={storage?.ftpStorage?.port}
onChange={(value) => {
if (!storage?.ftpStorage || !value) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
port: value,
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
min={1}
max={65535}
placeholder="21"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Username</div>
<Input
value={storage?.ftpStorage?.username || ''}
onChange={(e) => {
if (!storage?.ftpStorage) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
username: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="username"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Password</div>
<Input.Password
value={storage?.ftpStorage?.password || ''}
onChange={(e) => {
if (!storage?.ftpStorage) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
password: e.target.value,
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="password"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Path</div>
<div className="flex items-center">
<Input
value={storage?.ftpStorage?.path || ''}
onChange={(e) => {
if (!storage?.ftpStorage) return;
let pathValue = e.target.value.trim();
if (pathValue.startsWith('/')) {
pathValue = pathValue.substring(1);
}
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
path: pathValue || undefined,
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="backups (optional)"
/>
<Tooltip
className="cursor-pointer"
title="Remote directory path for storing backups (optional)"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Use SSL/TLS</div>
<div className="flex items-center">
<Checkbox
checked={storage?.ftpStorage?.useSsl || false}
onChange={(e) => {
if (!storage?.ftpStorage) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
useSsl: e.target.checked,
},
});
setUnsaved();
}}
>
Enable FTPS
</Checkbox>
<Tooltip
className="cursor-pointer"
title="Use explicit TLS encryption (FTPS) for secure file transfer"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
<div className="mt-4 mb-3 flex items-center">
<div
className="flex cursor-pointer items-center text-sm text-blue-600 hover:text-blue-800"
onClick={() => setShowAdvanced(!showAdvanced)}
>
<span className="mr-2">Advanced settings</span>
{showAdvanced ? (
<UpOutlined style={{ fontSize: '12px' }} />
) : (
<DownOutlined style={{ fontSize: '12px' }} />
)}
</div>
</div>
{showAdvanced && (
<>
{storage?.ftpStorage?.useSsl && (
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Skip TLS verify</div>
<div className="flex items-center">
<Checkbox
checked={storage?.ftpStorage?.skipTlsVerify || false}
onChange={(e) => {
if (!storage?.ftpStorage) return;
setStorage({
...storage,
ftpStorage: {
...storage.ftpStorage,
skipTlsVerify: e.target.checked,
},
});
setUnsaved();
}}
>
Skip certificate verification
</Checkbox>
<Tooltip
className="cursor-pointer"
title="Skip TLS certificate verification. Enable this if your FTP server uses a self-signed certificate. Warning: this reduces security."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
)}
</>
)}
<div className="mb-5" />
</>
);
}

View File

@@ -1,6 +1,6 @@
import { DownOutlined, InfoCircleOutlined, UpOutlined } from '@ant-design/icons';
import { Checkbox, Input, Tooltip } from 'antd';
import { useState } from 'react';
import { useEffect, useState } from 'react';
import type { Storage } from '../../../../../entity/storages';
@@ -8,13 +8,27 @@ interface Props {
storage: Storage;
setStorage: (storage: Storage) => void;
setUnsaved: () => void;
connectionError?: string;
}
export function EditS3StorageComponent({ storage, setStorage, setUnsaved }: Props) {
export function EditS3StorageComponent({
storage,
setStorage,
setUnsaved,
connectionError,
}: Props) {
const hasAdvancedValues =
!!storage?.s3Storage?.s3Prefix || !!storage?.s3Storage?.s3UseVirtualHostedStyle;
!!storage?.s3Storage?.s3Prefix ||
!!storage?.s3Storage?.s3UseVirtualHostedStyle ||
!!storage?.s3Storage?.skipTLSVerify;
const [showAdvanced, setShowAdvanced] = useState(hasAdvancedValues);
useEffect(() => {
if (connectionError?.includes('failed to verify certificate')) {
setShowAdvanced(true);
}
}, [connectionError]);
return (
<>
<div className="mb-2 flex items-center">
@@ -226,6 +240,36 @@ export function EditS3StorageComponent({ storage, setStorage, setUnsaved }: Prop
</Tooltip>
</div>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[110px] sm:mb-0">Skip TLS verify</div>
<div className="flex items-center">
<Checkbox
checked={storage?.s3Storage?.skipTLSVerify || false}
onChange={(e) => {
if (!storage?.s3Storage) return;
setStorage({
...storage,
s3Storage: {
...storage.s3Storage,
skipTLSVerify: e.target.checked,
},
});
setUnsaved();
}}
>
Skip TLS
</Checkbox>
<Tooltip
className="cursor-pointer"
title="Skip TLS certificate verification. Enable this if your S3-compatible storage uses a self-signed certificate. Warning: this reduces security."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
</>
)}

View File

@@ -2,6 +2,7 @@ import { type Storage, StorageType } from '../../../../entity/storages';
import { getStorageLogoFromType } from '../../../../entity/storages/models/getStorageLogoFromType';
import { getStorageNameFromType } from '../../../../entity/storages/models/getStorageNameFromType';
import { ShowAzureBlobStorageComponent } from './storages/ShowAzureBlobStorageComponent';
import { ShowFTPStorageComponent } from './storages/ShowFTPStorageComponent';
import { ShowGoogleDriveStorageComponent } from './storages/ShowGoogleDriveStorageComponent';
import { ShowNASStorageComponent } from './storages/ShowNASStorageComponent';
import { ShowS3StorageComponent } from './storages/ShowS3StorageComponent';
@@ -44,6 +45,10 @@ export function ShowStorageComponent({ storage }: Props) {
<ShowAzureBlobStorageComponent storage={storage} />
)}
</div>
<div>
{storage?.type === StorageType.FTP && <ShowFTPStorageComponent storage={storage} />}
</div>
</div>
);
}

View File

@@ -0,0 +1,48 @@
import type { Storage } from '../../../../../entity/storages';
interface Props {
storage: Storage;
}
export function ShowFTPStorageComponent({ storage }: Props) {
return (
<>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Host</div>
{storage?.ftpStorage?.host || '-'}
</div>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Port</div>
{storage?.ftpStorage?.port || '-'}
</div>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Username</div>
{storage?.ftpStorage?.username || '-'}
</div>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Password</div>
{'*************'}
</div>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Path</div>
{storage?.ftpStorage?.path || '-'}
</div>
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Use SSL/TLS</div>
{storage?.ftpStorage?.useSsl ? 'Yes' : 'No'}
</div>
{storage?.ftpStorage?.useSsl && storage?.ftpStorage?.skipTlsVerify && (
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Skip TLS</div>
Enabled
</div>
)}
</>
);
}

View File

@@ -45,6 +45,13 @@ export function ShowS3StorageComponent({ storage }: Props) {
Enabled
</div>
)}
{storage?.s3Storage?.skipTLSVerify && (
<div className="mb-1 flex items-center">
<div className="min-w-[110px]">Skip TLS</div>
Enabled
</div>
)}
</>
);
}

View File

@@ -0,0 +1,8 @@
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
environment: 'node',
include: ['src/**/*.test.ts'],
},
});

470
package-lock.json generated
View File

@@ -1,470 +0,0 @@
{
"name": "postgresus",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"dependencies": {
"@types/recharts": "^1.8.29",
"recharts": "^3.2.0"
}
},
"node_modules/@reduxjs/toolkit": {
"version": "2.9.0",
"resolved": "https://registry.npmjs.org/@reduxjs/toolkit/-/toolkit-2.9.0.tgz",
"integrity": "sha512-fSfQlSRu9Z5yBkvsNhYF2rPS8cGXn/TZVrlwN1948QyZ8xMZ0JvP50S2acZNaf+o63u6aEeMjipFyksjIcWrog==",
"license": "MIT",
"dependencies": {
"@standard-schema/spec": "^1.0.0",
"@standard-schema/utils": "^0.3.0",
"immer": "^10.0.3",
"redux": "^5.0.1",
"redux-thunk": "^3.1.0",
"reselect": "^5.1.0"
},
"peerDependencies": {
"react": "^16.9.0 || ^17.0.0 || ^18 || ^19",
"react-redux": "^7.2.1 || ^8.1.3 || ^9.0.0"
},
"peerDependenciesMeta": {
"react": {
"optional": true
},
"react-redux": {
"optional": true
}
}
},
"node_modules/@standard-schema/spec": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.0.0.tgz",
"integrity": "sha512-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA==",
"license": "MIT"
},
"node_modules/@standard-schema/utils": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/@standard-schema/utils/-/utils-0.3.0.tgz",
"integrity": "sha512-e7Mew686owMaPJVNNLs55PUvgz371nKgwsc4vxE49zsODpJEnxgxRo2y/OKrqueavXgZNMDVj3DdHFlaSAeU8g==",
"license": "MIT"
},
"node_modules/@types/d3-array": {
"version": "3.2.2",
"resolved": "https://registry.npmjs.org/@types/d3-array/-/d3-array-3.2.2.tgz",
"integrity": "sha512-hOLWVbm7uRza0BYXpIIW5pxfrKe0W+D5lrFiAEYR+pb6w3N2SwSMaJbXdUfSEv+dT4MfHBLtn5js0LAWaO6otw==",
"license": "MIT"
},
"node_modules/@types/d3-color": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/@types/d3-color/-/d3-color-3.1.3.tgz",
"integrity": "sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A==",
"license": "MIT"
},
"node_modules/@types/d3-ease": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/@types/d3-ease/-/d3-ease-3.0.2.tgz",
"integrity": "sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA==",
"license": "MIT"
},
"node_modules/@types/d3-interpolate": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/@types/d3-interpolate/-/d3-interpolate-3.0.4.tgz",
"integrity": "sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA==",
"license": "MIT",
"dependencies": {
"@types/d3-color": "*"
}
},
"node_modules/@types/d3-path": {
"version": "1.0.11",
"resolved": "https://registry.npmjs.org/@types/d3-path/-/d3-path-1.0.11.tgz",
"integrity": "sha512-4pQMp8ldf7UaB/gR8Fvvy69psNHkTpD/pVw3vmEi8iZAB9EPMBruB1JvHO4BIq9QkUUd2lV1F5YXpMNj7JPBpw==",
"license": "MIT"
},
"node_modules/@types/d3-scale": {
"version": "4.0.9",
"resolved": "https://registry.npmjs.org/@types/d3-scale/-/d3-scale-4.0.9.tgz",
"integrity": "sha512-dLmtwB8zkAeO/juAMfnV+sItKjlsw2lKdZVVy6LRr0cBmegxSABiLEpGVmSJJ8O08i4+sGR6qQtb6WtuwJdvVw==",
"license": "MIT",
"dependencies": {
"@types/d3-time": "*"
}
},
"node_modules/@types/d3-shape": {
"version": "1.3.12",
"resolved": "https://registry.npmjs.org/@types/d3-shape/-/d3-shape-1.3.12.tgz",
"integrity": "sha512-8oMzcd4+poSLGgV0R1Q1rOlx/xdmozS4Xab7np0eamFFUYq71AU9pOCJEFnkXW2aI/oXdVYJzw6pssbSut7Z9Q==",
"license": "MIT",
"dependencies": {
"@types/d3-path": "^1"
}
},
"node_modules/@types/d3-time": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/@types/d3-time/-/d3-time-3.0.4.tgz",
"integrity": "sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g==",
"license": "MIT"
},
"node_modules/@types/d3-timer": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/@types/d3-timer/-/d3-timer-3.0.2.tgz",
"integrity": "sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw==",
"license": "MIT"
},
"node_modules/@types/react": {
"version": "19.1.13",
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.1.13.tgz",
"integrity": "sha512-hHkbU/eoO3EG5/MZkuFSKmYqPbSVk5byPFa3e7y/8TybHiLMACgI8seVYlicwk7H5K/rI2px9xrQp/C+AUDTiQ==",
"license": "MIT",
"dependencies": {
"csstype": "^3.0.2"
}
},
"node_modules/@types/recharts": {
"version": "1.8.29",
"resolved": "https://registry.npmjs.org/@types/recharts/-/recharts-1.8.29.tgz",
"integrity": "sha512-ulKklaVsnFIIhTQsQw226TnOibrddW1qUQNFVhoQEyY1Z7FRQrNecFCGt7msRuJseudzE9czVawZb17dK/aPXw==",
"license": "MIT",
"dependencies": {
"@types/d3-shape": "^1",
"@types/react": "*"
}
},
"node_modules/@types/use-sync-external-store": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/@types/use-sync-external-store/-/use-sync-external-store-0.0.6.tgz",
"integrity": "sha512-zFDAD+tlpf2r4asuHEj0XH6pY6i0g5NeAHPn+15wk3BV6JA69eERFXC1gyGThDkVa1zCyKr5jox1+2LbV/AMLg==",
"license": "MIT"
},
"node_modules/clsx": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz",
"integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/csstype": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/csstype/-/csstype-3.1.3.tgz",
"integrity": "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==",
"license": "MIT"
},
"node_modules/d3-array": {
"version": "3.2.4",
"resolved": "https://registry.npmjs.org/d3-array/-/d3-array-3.2.4.tgz",
"integrity": "sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg==",
"license": "ISC",
"dependencies": {
"internmap": "1 - 2"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-color": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-color/-/d3-color-3.1.0.tgz",
"integrity": "sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-ease": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-ease/-/d3-ease-3.0.1.tgz",
"integrity": "sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w==",
"license": "BSD-3-Clause",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-format": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-format/-/d3-format-3.1.0.tgz",
"integrity": "sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-interpolate": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-interpolate/-/d3-interpolate-3.0.1.tgz",
"integrity": "sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==",
"license": "ISC",
"dependencies": {
"d3-color": "1 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-path": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-path/-/d3-path-3.1.0.tgz",
"integrity": "sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/d3-scale": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-4.0.2.tgz",
"integrity": "sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==",
"license": "ISC",
"dependencies": {
"d3-array": "2.10.0 - 3",
"d3-format": "1 - 3",
"d3-interpolate": "1.2.0 - 3",
"d3-time": "2.1.1 - 3",
"d3-time-format": "2 - 4"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-shape": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/d3-shape/-/d3-shape-3.2.0.tgz",
"integrity": "sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA==",
"license": "ISC",
"dependencies": {
"d3-path": "^3.1.0"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-time": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/d3-time/-/d3-time-3.1.0.tgz",
"integrity": "sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q==",
"license": "ISC",
"dependencies": {
"d3-array": "2 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-time-format": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/d3-time-format/-/d3-time-format-4.1.0.tgz",
"integrity": "sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg==",
"license": "ISC",
"dependencies": {
"d3-time": "1 - 3"
},
"engines": {
"node": ">=12"
}
},
"node_modules/d3-timer": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/d3-timer/-/d3-timer-3.0.1.tgz",
"integrity": "sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/decimal.js-light": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/decimal.js-light/-/decimal.js-light-2.5.1.tgz",
"integrity": "sha512-qIMFpTMZmny+MMIitAB6D7iVPEorVw6YQRWkvarTkT4tBeSLLiHzcwj6q0MmYSFCiVpiqPJTJEYIrpcPzVEIvg==",
"license": "MIT"
},
"node_modules/es-toolkit": {
"version": "1.39.10",
"resolved": "https://registry.npmjs.org/es-toolkit/-/es-toolkit-1.39.10.tgz",
"integrity": "sha512-E0iGnTtbDhkeczB0T+mxmoVlT4YNweEKBLq7oaU4p11mecdsZpNWOglI4895Vh4usbQ+LsJiuLuI2L0Vdmfm2w==",
"license": "MIT",
"workspaces": [
"docs",
"benchmarks"
]
},
"node_modules/eventemitter3": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-5.0.1.tgz",
"integrity": "sha512-GWkBvjiSZK87ELrYOSESUYeVIc9mvLLf/nXalMOS5dYrgZq9o5OVkbZAVM06CVxYsCwH9BDZFPlQTlPA1j4ahA==",
"license": "MIT"
},
"node_modules/immer": {
"version": "10.1.3",
"resolved": "https://registry.npmjs.org/immer/-/immer-10.1.3.tgz",
"integrity": "sha512-tmjF/k8QDKydUlm3mZU+tjM6zeq9/fFpPqH9SzWmBnVVKsPBg/V66qsMwb3/Bo90cgUN+ghdVBess+hPsxUyRw==",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/immer"
}
},
"node_modules/internmap": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/internmap/-/internmap-2.0.3.tgz",
"integrity": "sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg==",
"license": "ISC",
"engines": {
"node": ">=12"
}
},
"node_modules/react": {
"version": "19.1.1",
"resolved": "https://registry.npmjs.org/react/-/react-19.1.1.tgz",
"integrity": "sha512-w8nqGImo45dmMIfljjMwOGtbmC/mk4CMYhWIicdSflH91J9TyCyczcPFXJzrZ/ZXcgGRFeP6BU0BEJTw6tZdfQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/react-dom": {
"version": "19.1.1",
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.1.1.tgz",
"integrity": "sha512-Dlq/5LAZgF0Gaz6yiqZCf6VCcZs1ghAJyrsu84Q/GT0gV+mCxbfmKNoGRKBYMJ8IEdGPqu49YWXD02GCknEDkw==",
"license": "MIT",
"peer": true,
"dependencies": {
"scheduler": "^0.26.0"
},
"peerDependencies": {
"react": "^19.1.1"
}
},
"node_modules/react-is": {
"version": "19.1.1",
"resolved": "https://registry.npmjs.org/react-is/-/react-is-19.1.1.tgz",
"integrity": "sha512-tr41fA15Vn8p4X9ntI+yCyeGSf1TlYaY5vlTZfQmeLBrFo3psOPX6HhTDnFNL9uj3EhP0KAQ80cugCl4b4BERA==",
"license": "MIT",
"peer": true
},
"node_modules/react-redux": {
"version": "9.2.0",
"resolved": "https://registry.npmjs.org/react-redux/-/react-redux-9.2.0.tgz",
"integrity": "sha512-ROY9fvHhwOD9ySfrF0wmvu//bKCQ6AeZZq1nJNtbDC+kk5DuSuNX/n6YWYF/SYy7bSba4D4FSz8DJeKY/S/r+g==",
"license": "MIT",
"dependencies": {
"@types/use-sync-external-store": "^0.0.6",
"use-sync-external-store": "^1.4.0"
},
"peerDependencies": {
"@types/react": "^18.2.25 || ^19",
"react": "^18.0 || ^19",
"redux": "^5.0.0"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"redux": {
"optional": true
}
}
},
"node_modules/recharts": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/recharts/-/recharts-3.2.0.tgz",
"integrity": "sha512-fX0xCgNXo6mag9wz3oLuANR+dUQM4uIlTYBGTGq9CBRgW/8TZPzqPGYs5NTt8aENCf+i1CI8vqxT1py8L/5J2w==",
"license": "MIT",
"dependencies": {
"@reduxjs/toolkit": "1.x.x || 2.x.x",
"clsx": "^2.1.1",
"decimal.js-light": "^2.5.1",
"es-toolkit": "^1.39.3",
"eventemitter3": "^5.0.1",
"immer": "^10.1.1",
"react-redux": "8.x.x || 9.x.x",
"reselect": "5.1.1",
"tiny-invariant": "^1.3.3",
"use-sync-external-store": "^1.2.2",
"victory-vendor": "^37.0.2"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0",
"react-dom": "^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0",
"react-is": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/redux": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/redux/-/redux-5.0.1.tgz",
"integrity": "sha512-M9/ELqF6fy8FwmkpnF0S3YKOqMyoWJ4+CS5Efg2ct3oY9daQvd/Pc71FpGZsVsbl3Cpb+IIcjBDUnnyBdQbq4w==",
"license": "MIT"
},
"node_modules/redux-thunk": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/redux-thunk/-/redux-thunk-3.1.0.tgz",
"integrity": "sha512-NW2r5T6ksUKXCabzhL9z+h206HQw/NJkcLm1GPImRQ8IzfXwRGqjVhKJGauHirT0DAuyy6hjdnMZaRoAcy0Klw==",
"license": "MIT",
"peerDependencies": {
"redux": "^5.0.0"
}
},
"node_modules/reselect": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-5.1.1.tgz",
"integrity": "sha512-K/BG6eIky/SBpzfHZv/dd+9JBFiS4SWV7FIujVyJRux6e45+73RaUHXLmIR1f7WOMaQ0U1km6qwklRQxpJJY0w==",
"license": "MIT"
},
"node_modules/scheduler": {
"version": "0.26.0",
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.26.0.tgz",
"integrity": "sha512-NlHwttCI/l5gCPR3D1nNXtWABUmBwvZpEQiD4IXSbIDq8BzLIK/7Ir5gTFSGZDUu37K5cMNp0hFtzO38sC7gWA==",
"license": "MIT",
"peer": true
},
"node_modules/tiny-invariant": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/tiny-invariant/-/tiny-invariant-1.3.3.tgz",
"integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==",
"license": "MIT"
},
"node_modules/use-sync-external-store": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.5.0.tgz",
"integrity": "sha512-Rb46I4cGGVBmjamjphe8L/UnvJD+uPPtTkNvX5mZgqdbavhI4EbgIWJiIHXJ8bc/i9EQGPRh4DwEURJ552Do0A==",
"license": "MIT",
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/victory-vendor": {
"version": "37.3.6",
"resolved": "https://registry.npmjs.org/victory-vendor/-/victory-vendor-37.3.6.tgz",
"integrity": "sha512-SbPDPdDBYp+5MJHhBCAyI7wKM3d5ivekigc2Dk2s7pgbZ9wIgIBYGVw4zGHBml/qTFbexrofXW6Gu4noGxrOwQ==",
"license": "MIT AND ISC",
"dependencies": {
"@types/d3-array": "^3.0.3",
"@types/d3-ease": "^3.0.0",
"@types/d3-interpolate": "^3.0.1",
"@types/d3-scale": "^4.0.2",
"@types/d3-shape": "^3.1.0",
"@types/d3-time": "^3.0.0",
"@types/d3-timer": "^3.0.0",
"d3-array": "^3.1.6",
"d3-ease": "^3.0.1",
"d3-interpolate": "^3.0.1",
"d3-scale": "^4.0.2",
"d3-shape": "^3.1.0",
"d3-time": "^3.0.0",
"d3-timer": "^3.0.1"
}
},
"node_modules/victory-vendor/node_modules/@types/d3-shape": {
"version": "3.1.7",
"resolved": "https://registry.npmjs.org/@types/d3-shape/-/d3-shape-3.1.7.tgz",
"integrity": "sha512-VLvUQ33C+3J+8p+Daf+nYSOsjB4GXp19/S/aGo60m9h1v6XaxjiT82lKVWJCfzhtuZ3yD7i/TPeC/fuKLLOSmg==",
"license": "MIT",
"dependencies": {
"@types/d3-path": "*"
}
}
}
}


@@ -1,6 +0,0 @@
-{
-  "dependencies": {
-    "@types/recharts": "^1.8.29",
-    "recharts": "^3.2.0"
-  }
-}