Mirror of https://github.com/databasus/databasus.git
Synced 2026-04-06 08:41:58 +02:00

Compare commits (45 commits)
| SHA1 |
|---|
| 12eea72392 |
| 75c88bac50 |
| ff1b6536bf |
| 06197f986d |
| fe72e9e0a6 |
| 640cceadbd |
| 80e573fcb3 |
| 35498d83f1 |
| 77ae8d1ac7 |
| 2f20845b3d |
| a3d3df4093 |
| 8db83d40d5 |
| 065ded37bd |
| 71e801debb |
| ffd4e3a27b |
| d2a9085591 |
| 6f0152b60c |
| 7007236f2f |
| db55cad310 |
| 25bd096c81 |
| 7e98dd578c |
| ba37b30e83 |
| 34b3f822e3 |
| 14700130b7 |
| de11ab8d8a |
| 06282bb435 |
| a3b263bbac |
| a956dccf7c |
| ce9fa18d58 |
| 281e185f21 |
| bb5b0064ea |
| da95bbb178 |
| cfe5993831 |
| fa0e3d1ce2 |
| d07085c462 |
| c89c1f9654 |
| 6cfc0ca79b |
| 5d27123bd7 |
| 79ca374bb6 |
| b3f1a6f7e5 |
| d521e2abc6 |
| 82eca7501b |
| 51866437fd |
| 244a56d1bb |
| 95c833b619 |
.github/workflows/ci-release.yml (vendored, 34 changes)
@@ -465,3 +465,37 @@ jobs:
           body: ${{ steps.changelog.outputs.changelog }}
           draft: false
           prerelease: false
+
+  publish-helm-chart:
+    runs-on: ubuntu-latest
+    needs: [determine-version, build-and-push]
+    if: ${{ needs.determine-version.outputs.should_release == 'true' }}
+    permissions:
+      contents: read
+      packages: write
+    steps:
+      - name: Check out code
+        uses: actions/checkout@v4
+
+      - name: Set up Helm
+        uses: azure/setup-helm@v4
+        with:
+          version: v3.14.0
+
+      - name: Log in to GHCR
+        run: echo "${{ secrets.GITHUB_TOKEN }}" | helm registry login ghcr.io -u ${{ github.actor }} --password-stdin
+
+      - name: Update Chart.yaml with release version
+        run: |
+          VERSION="${{ needs.determine-version.outputs.new_version }}"
+          sed -i "s/^version: .*/version: ${VERSION}/" deploy/helm/Chart.yaml
+          sed -i "s/^appVersion: .*/appVersion: \"v${VERSION}\"/" deploy/helm/Chart.yaml
+          cat deploy/helm/Chart.yaml
+
+      - name: Package Helm chart
+        run: helm package deploy/helm --destination .
+
+      - name: Push Helm chart to GHCR
+        run: |
+          VERSION="${{ needs.determine-version.outputs.new_version }}"
+          helm push postgresus-${VERSION}.tgz oci://ghcr.io/rostislavdugin/charts
.gitignore (vendored, 3 changes)
@@ -4,4 +4,5 @@ postgresus-data/
 pgdata/
 docker-compose.yml
 node_modules/
 .idea
+/articles
README.md (56 changes)
@@ -25,6 +25,8 @@
   <a href="https://postgresus.com" target="_blank"><strong>🌐 Postgresus website</strong></a>
 </p>
 
+<img src="assets/dashboard-dark.svg" alt="Postgresus Dark Dashboard" width="800" style="margin-bottom: 10px;"/>
+
 <img src="assets/dashboard.svg" alt="Postgresus Dashboard" width="800"/>
 
@@ -58,12 +60,12 @@
 - **SSL support**: Secure connections available
 - **Easy restoration**: One-click restore from any backup
 
-### 🔒 **Backup Encryption** <a href="https://postgresus.com/encryption">(docs)</a>
+### 🔒 **Enterprise-grade security** <a href="https://postgresus.com/security">(docs)</a>
 
 - **AES-256-GCM encryption**: Enterprise-grade protection for backup files
-- **Zero-trust storage**: Encrypted backups are useless so you can keep in shared storages like S3, Azure Blob Storage, etc.
-- **Optionality**: Encrypted backups are optional and can be enabled or disabled if you wish
-- **Download unencrypted**: You can still download unencrypted backups via the 'Download' button to use them in `pg_restore` or other tools.
+- **Zero-trust storage**: Backups are encrypted and they are useless to attackers, so you can keep them in shared storages like S3, Azure Blob Storage, etc.
+- **Encryption for secrets**: Any sensitive data is encrypted and never exposed, even in logs or error messages
+- **Read-only user**: Postgresus uses by default a read-only user for backups and never stores anything that can change your data
 
 ### 👥 **Suitable for Teams** <a href="https://postgresus.com/access-management">(docs)</a>
@@ -72,6 +74,12 @@
 - **Audit logs**: Track all system activities and changes made by users
 - **User roles**: Assign viewer, member, admin or owner roles within workspaces
 
+### 🎨 **UX-Friendly**
+
+- **Designer-polished UI**: Clean, intuitive interface crafted with attention to detail
+- **Dark & light themes**: Choose the look that suits your workflow
+- **Mobile adaptive**: Check your backups from anywhere on any device
+
 ### 🐳 **Self-Hosted & Secure**
 
 - **Docker-based**: Easy deployment and management
@@ -149,6 +157,46 @@ Then run:
 docker compose up -d
 ```
 
+### Option 4: Kubernetes with Helm
+
+For Kubernetes deployments, install directly from the OCI registry.
+
+**With ClusterIP + port-forward (development/testing):**
+
+```bash
+helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus \
+  -n postgresus --create-namespace
+```
+
+```bash
+kubectl port-forward svc/postgresus-service 4005:4005 -n postgresus
+# Access at http://localhost:4005
+```
+
+**With LoadBalancer (cloud environments):**
+
+```bash
+helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus \
+  -n postgresus --create-namespace \
+  --set service.type=LoadBalancer
+```
+
+```bash
+kubectl get svc postgresus-service -n postgresus
+# Access at http://<EXTERNAL-IP>:4005
+```
+
+**With Ingress (domain-based access):**
+
+```bash
+helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus \
+  -n postgresus --create-namespace \
+  --set ingress.enabled=true \
+  --set ingress.hosts[0].host=backup.example.com
+```
+
+For more options (NodePort, TLS, HTTPRoute for Gateway API), see the [Helm chart README](deploy/helm/README.md).
+
 ---
 
 ## 🚀 Usage
assets/dashboard-dark.svg (new file, 1 change)
File diff suppressed because one or more lines are too long (After: Size 537 KiB)
@@ -18,6 +18,7 @@ import (
 	backups_config "postgresus-backend/internal/features/backups/config"
 	"postgresus-backend/internal/features/databases"
 	"postgresus-backend/internal/features/disk"
+	"postgresus-backend/internal/features/encryption/secrets"
 	healthcheck_attempt "postgresus-backend/internal/features/healthcheck/attempt"
 	healthcheck_config "postgresus-backend/internal/features/healthcheck/config"
 	"postgresus-backend/internal/features/notifiers"
@@ -64,6 +65,12 @@ func main() {
 		os.Exit(1)
 	}
 
+	err = secrets.GetSecretKeyService().MigrateKeyFromDbToFileIfExist()
+	if err != nil {
+		log.Error("Failed to migrate secret key from database to file", "error", err)
+		os.Exit(1)
+	}
+
 	err = users_services.GetUserService().CreateInitialAdmin()
 	if err != nil {
 		log.Error("Failed to create initial admin", "error", err)
@@ -26,8 +26,9 @@ type EnvVariables struct {
 	EnvMode              env_utils.EnvMode `env:"ENV_MODE" required:"true"`
 	PostgresesInstallDir string            `env:"POSTGRES_INSTALL_DIR"`
 
-	DataFolder string
-	TempFolder string
+	DataFolder    string
+	TempFolder    string
+	SecretKeyPath string
 
 	TestGoogleDriveClientID     string `env:"TEST_GOOGLE_DRIVE_CLIENT_ID"`
 	TestGoogleDriveClientSecret string `env:"TEST_GOOGLE_DRIVE_CLIENT_SECRET"`
@@ -146,6 +147,7 @@ func loadEnvVariables() {
 	// (projectRoot/postgresus-data -> /postgresus-data)
 	env.DataFolder = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "backups")
 	env.TempFolder = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "temp")
+	env.SecretKeyPath = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "secret.key")
 
 	if env.IsTesting {
 		if env.TestPostgres12Port == "" {
@@ -2,20 +2,21 @@ package backups
 
 import (
 	"context"
 	"errors"
 	"sync"
 
 	"github.com/google/uuid"
 )
 
 type BackupContextManager struct {
-	mu          sync.RWMutex
-	cancelFuncs map[uuid.UUID]context.CancelFunc
+	mu               sync.RWMutex
+	cancelFuncs      map[uuid.UUID]context.CancelFunc
+	cancelledBackups map[uuid.UUID]bool
 }
 
 func NewBackupContextManager() *BackupContextManager {
 	return &BackupContextManager{
-		cancelFuncs: make(map[uuid.UUID]context.CancelFunc),
+		cancelFuncs:      make(map[uuid.UUID]context.CancelFunc),
+		cancelledBackups: make(map[uuid.UUID]bool),
 	}
 }
@@ -23,25 +24,37 @@ func (m *BackupContextManager) RegisterBackup(backupID uuid.UUID, cancelFunc con
 	m.mu.Lock()
 	defer m.mu.Unlock()
 	m.cancelFuncs[backupID] = cancelFunc
+	delete(m.cancelledBackups, backupID)
 }
 
 func (m *BackupContextManager) CancelBackup(backupID uuid.UUID) error {
 	m.mu.Lock()
 	defer m.mu.Unlock()
 
-	cancelFunc, exists := m.cancelFuncs[backupID]
-	if !exists {
-		return errors.New("backup is not in progress or already completed")
+	if m.cancelledBackups[backupID] {
+		return nil
 	}
 
-	cancelFunc()
-	delete(m.cancelFuncs, backupID)
+	cancelFunc, exists := m.cancelFuncs[backupID]
+	if exists {
+		cancelFunc()
+		delete(m.cancelFuncs, backupID)
+	}
+
+	m.cancelledBackups[backupID] = true
 
 	return nil
 }
 
+func (m *BackupContextManager) IsCancelled(backupID uuid.UUID) bool {
+	m.mu.RLock()
+	defer m.mu.RUnlock()
+	return m.cancelledBackups[backupID]
+}
+
 func (m *BackupContextManager) UnregisterBackup(backupID uuid.UUID) {
 	m.mu.Lock()
 	defer m.mu.Unlock()
 	delete(m.cancelFuncs, backupID)
+	delete(m.cancelledBackups, backupID)
 }
@@ -1,6 +1,7 @@
 package backups
 
 import (
+	"context"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -701,7 +702,7 @@ func createTestBackup(
 	dummyContent := []byte("dummy backup content for testing")
 	reader := strings.NewReader(string(dummyContent))
 	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
-	if err := storages[0].SaveFile(encryption.GetFieldEncryptor(), logger, backup.ID, reader); err != nil {
+	if err := storages[0].SaveFile(context.Background(), encryption.GetFieldEncryptor(), logger, backup.ID, reader); err != nil {
 		panic(fmt.Sprintf("Failed to create test backup file: %v", err))
 	}
@@ -1,17 +1,18 @@
 package backups
 
 import (
+	"time"
+
 	audit_logs "postgresus-backend/internal/features/audit_logs"
 	"postgresus-backend/internal/features/backups/backups/usecases"
 	backups_config "postgresus-backend/internal/features/backups/config"
 	"postgresus-backend/internal/features/databases"
+	encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
 	"postgresus-backend/internal/features/notifiers"
 	"postgresus-backend/internal/features/storages"
 	users_repositories "postgresus-backend/internal/features/users/repositories"
 	workspaces_services "postgresus-backend/internal/features/workspaces/services"
 	"postgresus-backend/internal/util/encryption"
 	"postgresus-backend/internal/util/logger"
-	"time"
 )
 
 var backupRepository = &BackupRepository{}
@@ -25,7 +26,7 @@ var backupService = &BackupService{
 	notifiers.GetNotifierService(),
 	notifiers.GetNotifierService(),
 	backups_config.GetBackupConfigService(),
-	users_repositories.GetSecretKeyRepository(),
+	encryption_secrets.GetSecretKeyService(),
 	encryption.GetFieldEncryptor(),
 	usecases.GetCreateBackupUsecase(),
 	logger.GetLogger(),
@@ -7,19 +7,20 @@ import (
 	"fmt"
 	"io"
 	"log/slog"
+	"slices"
+	"strings"
+	"time"
 
 	audit_logs "postgresus-backend/internal/features/audit_logs"
 	"postgresus-backend/internal/features/backups/backups/encryption"
 	backups_config "postgresus-backend/internal/features/backups/config"
 	"postgresus-backend/internal/features/databases"
+	encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
 	"postgresus-backend/internal/features/notifiers"
 	"postgresus-backend/internal/features/storages"
 	users_models "postgresus-backend/internal/features/users/models"
 	users_repositories "postgresus-backend/internal/features/users/repositories"
 	workspaces_services "postgresus-backend/internal/features/workspaces/services"
 	util_encryption "postgresus-backend/internal/util/encryption"
-	"slices"
-	"strings"
-	"time"
 
 	"github.com/google/uuid"
 )
@@ -31,7 +32,7 @@ type BackupService struct {
 	notifierService     *notifiers.NotifierService
 	notificationSender  NotificationSender
 	backupConfigService *backups_config.BackupConfigService
-	secretKeyRepo       *users_repositories.SecretKeyRepository
+	secretKeyService    *encryption_secrets.SecretKeyService
 	fieldEncryptor      util_encryption.FieldEncryptor
 
 	createBackupUseCase CreateBackupUsecase
@@ -274,7 +275,12 @@ func (s *BackupService) MakeBackup(databaseID uuid.UUID, isLastTry bool) {
 		errMsg := err.Error()
 
 		// Check if backup was cancelled (not due to shutdown)
-		if strings.Contains(errMsg, "backup cancelled") && !strings.Contains(errMsg, "shutdown") {
+		isCancelled := strings.Contains(errMsg, "backup cancelled") ||
+			strings.Contains(errMsg, "context canceled") ||
+			errors.Is(err, context.Canceled)
+		isShutdown := strings.Contains(errMsg, "shutdown")
+
+		if isCancelled && !isShutdown {
 			backup.Status = BackupStatusCanceled
 			backup.BackupDurationMs = time.Since(start).Milliseconds()
 			backup.BackupSizeMb = 0
@@ -628,7 +634,7 @@ func (s *BackupService) getBackupReader(backupID uuid.UUID) (io.ReadCloser, erro
 	}
 
 	// Get master key
-	masterKey, err := s.secretKeyRepo.GetSecretKey()
+	masterKey, err := s.secretKeyService.GetSecretKey()
 	if err != nil {
 		if closeErr := fileReader.Close(); closeErr != nil {
 			s.logger.Error("Failed to close file reader", "error", closeErr)
@@ -3,21 +3,22 @@ package backups
 
 import (
 	"context"
 	"errors"
+	"strings"
+	"testing"
+	"time"
 
 	usecases_postgresql "postgresus-backend/internal/features/backups/backups/usecases/postgresql"
 	backups_config "postgresus-backend/internal/features/backups/config"
 	"postgresus-backend/internal/features/databases"
+	encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
 	"postgresus-backend/internal/features/notifiers"
 	"postgresus-backend/internal/features/storages"
 	users_enums "postgresus-backend/internal/features/users/enums"
 	users_repositories "postgresus-backend/internal/features/users/repositories"
 	users_testing "postgresus-backend/internal/features/users/testing"
 	workspaces_services "postgresus-backend/internal/features/workspaces/services"
 	workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
 	"postgresus-backend/internal/util/encryption"
 	"postgresus-backend/internal/util/logger"
-	"strings"
-	"testing"
-	"time"
 
 	"github.com/google/uuid"
 	"github.com/stretchr/testify/assert"
@@ -56,7 +57,7 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
 		notifiers.GetNotifierService(),
 		mockNotificationSender,
 		backups_config.GetBackupConfigService(),
-		users_repositories.GetSecretKeyRepository(),
+		encryption_secrets.GetSecretKeyService(),
 		encryption.GetFieldEncryptor(),
 		&CreateFailedBackupUsecase{},
 		logger.GetLogger(),
@@ -104,7 +105,7 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
 		notifiers.GetNotifierService(),
 		mockNotificationSender,
 		backups_config.GetBackupConfigService(),
-		users_repositories.GetSecretKeyRepository(),
+		encryption_secrets.GetSecretKeyService(),
 		encryption.GetFieldEncryptor(),
 		&CreateSuccessBackupUsecase{},
 		logger.GetLogger(),
@@ -129,7 +130,7 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
 		notifiers.GetNotifierService(),
 		mockNotificationSender,
 		backups_config.GetBackupConfigService(),
-		users_repositories.GetSecretKeyRepository(),
+		encryption_secrets.GetSecretKeyService(),
 		encryption.GetFieldEncryptor(),
 		&CreateSuccessBackupUsecase{},
 		logger.GetLogger(),
@@ -19,8 +19,8 @@ import (
 	backups_config "postgresus-backend/internal/features/backups/config"
 	"postgresus-backend/internal/features/databases"
 	pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
+	encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
 	"postgresus-backend/internal/features/storages"
-	users_repositories "postgresus-backend/internal/features/users/repositories"
 	"postgresus-backend/internal/util/encryption"
 	"postgresus-backend/internal/util/tools"
@@ -34,16 +34,20 @@ const (
 	progressReportIntervalMB = 1.0
 	pgConnectTimeout         = 30
 	compressionLevel         = 5
 	defaultBackupLimit       = 1000
 	exitCodeAccessViolation  = -1073741819
 	exitCodeGenericError     = 1
 	exitCodeConnectionError  = 2
 )
 
 type CreatePostgresqlBackupUsecase struct {
-	logger         *slog.Logger
-	secretKeyRepo  *users_repositories.SecretKeyRepository
-	fieldEncryptor encryption.FieldEncryptor
+	logger           *slog.Logger
+	secretKeyService *encryption_secrets.SecretKeyService
+	fieldEncryptor   encryption.FieldEncryptor
 }
 
+type writeResult struct {
+	bytesWritten int
+	writeErr     error
+}
+
 // Execute creates a backup of the database
@@ -81,6 +85,11 @@ func (uc *CreatePostgresqlBackupUsecase) Execute(
 
 	args := uc.buildPgDumpArgs(pg)
 
+	decryptedPassword, err := uc.fieldEncryptor.Decrypt(db.ID, pg.Password)
+	if err != nil {
+		return nil, fmt.Errorf("failed to decrypt database password: %w", err)
+	}
+
 	return uc.streamToStorage(
 		ctx,
 		backupID,
@@ -92,7 +101,7 @@
 			config.GetEnv().PostgresesInstallDir,
 		),
 		args,
-		pg.Password,
+		decryptedPassword,
 		storage,
 		db,
 		backupProgressListener,
@@ -168,7 +177,7 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
 	// Start streaming into storage in its own goroutine
 	saveErrCh := make(chan error, 1)
 	go func() {
-		saveErr := storage.SaveFile(uc.fieldEncryptor, uc.logger, backupID, storageReader)
+		saveErr := storage.SaveFile(ctx, uc.fieldEncryptor, uc.logger, backupID, storageReader)
 		saveErrCh <- saveErr
 	}()
@@ -191,12 +200,10 @@
 		copyResultCh <- err
 	}()
 
-	// Wait for the copy to finish first, then the dump process
 	copyErr := <-copyResultCh
 	bytesWritten := <-bytesWrittenCh
 	waitErr := cmd.Wait()
 
+	// Check for shutdown or cancellation before finalizing
 	select {
 	case <-ctx.Done():
 		uc.cleanupOnCancellation(encryptionWriter, storageWriter, saveErrCh)
@@ -209,7 +216,6 @@
 		return nil, err
 	}
 
-	// Wait until storage ends reading
 	saveErr := <-saveErrCh
 	stderrOutput := <-stderrCh
@@ -263,7 +269,23 @@ func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
 
 		bytesRead, readErr := src.Read(buf)
 		if bytesRead > 0 {
-			bytesWritten, writeErr := dst.Write(buf[0:bytesRead])
+			writeResultCh := make(chan writeResult, 1)
+			go func() {
+				bytesWritten, writeErr := dst.Write(buf[0:bytesRead])
+				writeResultCh <- writeResult{bytesWritten, writeErr}
+			}()
+
+			var bytesWritten int
+			var writeErr error
+
+			select {
+			case <-ctx.Done():
+				return totalBytesWritten, fmt.Errorf("copy cancelled during write: %w", ctx.Err())
+			case result := <-writeResultCh:
+				bytesWritten = result.bytesWritten
+				writeErr = result.writeErr
+			}
+
 			if bytesWritten < 0 || bytesRead < bytesWritten {
 				bytesWritten = 0
 				if writeErr == nil {
@@ -350,6 +372,9 @@ func (uc *CreatePostgresqlBackupUsecase) createBackupContext(
 			select {
 			case <-ctx.Done():
 				return
+			case <-parentCtx.Done():
+				cancel()
+				return
 			case <-ticker.C:
 				if config.IsShouldShutdown() {
 					cancel()
@@ -413,7 +438,6 @@ func (uc *CreatePostgresqlBackupUsecase) setupPgEnvironment(
 		"PGCONNECT_TIMEOUT="+strconv.Itoa(pgConnectTimeout),
 		"LC_ALL=C.UTF-8",
 		"LANG=C.UTF-8",
-		"PGOPTIONS=--client-encoding=UTF8",
 	)
 
 	if shouldRequireSSL {
@@ -461,7 +485,7 @@ func (uc *CreatePostgresqlBackupUsecase) setupBackupEncryption(
 		return nil, nil, metadata, fmt.Errorf("failed to generate nonce: %w", err)
 	}
 
-	masterKey, err := uc.secretKeyRepo.GetSecretKey()
+	masterKey, err := uc.secretKeyService.GetSecretKey()
 	if err != nil {
 		return nil, nil, metadata, fmt.Errorf("failed to get master key: %w", err)
 	}
@@ -607,7 +631,6 @@ func (uc *CreatePostgresqlBackupUsecase) handleExitCode1NoStderr(
 			"PGCONNECT_TIMEOUT=" + strconv.Itoa(pgConnectTimeout),
 			"LC_ALL=C.UTF-8",
 			"LANG=C.UTF-8",
-			"PGOPTIONS=--client-encoding=UTF8",
 		},
 	)
@@ -715,11 +738,15 @@ func (uc *CreatePostgresqlBackupUsecase) createTempPgpassFile(
 		return "", nil
 	}
 
+	escapedHost := tools.EscapePgpassField(pgConfig.Host)
+	escapedUsername := tools.EscapePgpassField(pgConfig.Username)
+	escapedPassword := tools.EscapePgpassField(password)
+
 	pgpassContent := fmt.Sprintf("%s:%d:*:%s:%s",
-		pgConfig.Host,
+		escapedHost,
 		pgConfig.Port,
-		pgConfig.Username,
-		password,
+		escapedUsername,
+		escapedPassword,
 	)
 
 	tempDir, err := os.MkdirTemp("", "pgpass")
@@ -1,14 +1,14 @@
 package usecases_postgresql
 
 import (
-	users_repositories "postgresus-backend/internal/features/users/repositories"
+	"postgresus-backend/internal/features/encryption/secrets"
 	"postgresus-backend/internal/util/encryption"
 	"postgresus-backend/internal/util/logger"
 )
 
 var createPostgresqlBackupUsecase = &CreatePostgresqlBackupUsecase{
 	logger.GetLogger(),
-	users_repositories.GetSecretKeyRepository(),
+	secrets.GetSecretKeyService(),
 	encryption.GetFieldEncryptor(),
 }
@@ -26,7 +26,8 @@ func (c *DatabaseController) RegisterRoutes(router *gin.RouterGroup) {
 	router.POST("/databases/test-connection-direct", c.TestDatabaseConnectionDirect)
 	router.POST("/databases/:id/copy", c.CopyDatabase)
 	router.GET("/databases/notifier/:id/is-using", c.IsNotifierUsing)
+
+	router.POST("/databases/is-readonly", c.IsUserReadOnly)
+	router.POST("/databases/create-readonly-user", c.CreateReadOnlyUser)
 }
 
 // CreateDatabase
@@ -330,3 +331,76 @@ func (c *DatabaseController) CopyDatabase(ctx *gin.Context) {
 
 	ctx.JSON(http.StatusCreated, copiedDatabase)
 }
+
+// IsUserReadOnly
+// @Summary Check if database user is read-only
+// @Description Check if current database credentials have only read (SELECT) privileges
+// @Tags databases
+// @Accept json
+// @Produce json
+// @Security BearerAuth
+// @Param request body Database true "Database configuration to check"
+// @Success 200 {object} IsReadOnlyResponse
+// @Failure 400 {object} map[string]string
+// @Failure 401 {object} map[string]string
+// @Failure 403 {object} map[string]string
+// @Router /databases/is-readonly [post]
+func (c *DatabaseController) IsUserReadOnly(ctx *gin.Context) {
+	user, ok := users_middleware.GetUserFromContext(ctx)
+	if !ok {
+		ctx.JSON(http.StatusUnauthorized, gin.H{"error": "User not authenticated"})
+		return
+	}
+
+	var request Database
+	if err := ctx.ShouldBindJSON(&request); err != nil {
+		ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	isReadOnly, err := c.databaseService.IsUserReadOnly(user, &request)
+	if err != nil {
+		ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	ctx.JSON(http.StatusOK, IsReadOnlyResponse{IsReadOnly: isReadOnly})
+}
+
+// CreateReadOnlyUser
+// @Summary Create read-only database user
+// @Description Create a new PostgreSQL user with read-only privileges for backup operations
+// @Tags databases
+// @Accept json
+// @Produce json
+// @Security BearerAuth
+// @Param request body Database true "Database configuration to create user for"
+// @Success 200 {object} CreateReadOnlyUserResponse
+// @Failure 400 {object} map[string]string
+// @Failure 401 {object} map[string]string
+// @Failure 403 {object} map[string]string
+// @Router /databases/create-readonly-user [post]
+func (c *DatabaseController) CreateReadOnlyUser(ctx *gin.Context) {
+	user, ok := users_middleware.GetUserFromContext(ctx)
+	if !ok {
+		ctx.JSON(http.StatusUnauthorized, gin.H{"error": "User not authenticated"})
+		return
+	}
+
+	var request Database
+	if err := ctx.ShouldBindJSON(&request); err != nil {
+		ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	username, password, err := c.databaseService.CreateReadOnlyUser(user, &request)
+	if err != nil {
+		ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	ctx.JSON(http.StatusOK, CreateReadOnlyUserResponse{
+		Username: username,
+		Password: password,
+	})
+}
@@ -8,6 +8,7 @@ import (
 	"postgresus-backend/internal/util/encryption"
 	"postgresus-backend/internal/util/tools"
 	"regexp"
+	"strings"
 	"time"
 
 	"github.com/google/uuid"
@@ -18,7 +19,6 @@ type PostgresqlDatabase struct {
 	ID uuid.UUID `json:"id" gorm:"primaryKey;type:uuid;default:gen_random_uuid()"`
 
 	DatabaseID *uuid.UUID `json:"databaseId" gorm:"type:uuid;column:database_id"`
-	RestoreID  *uuid.UUID `json:"restoreId" gorm:"type:uuid;column:restore_id"`
 
 	Version tools.PostgresqlVersion `json:"version" gorm:"type:text;not null"`
@@ -106,6 +106,384 @@ func (p *PostgresqlDatabase) EncryptSensitiveFields(
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsUserReadOnly checks if the database user has read-only privileges.
|
||||
//
|
||||
// This method performs a comprehensive security check by examining:
|
||||
// - Role-level attributes (superuser, createrole, createdb)
|
||||
// - Database-level privileges (CREATE, TEMP)
|
||||
// - Table-level write permissions (INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER)
|
||||
//
|
||||
// A user is considered read-only only if they have ZERO write privileges
|
||||
// across all three levels. This ensures the database user follows the
|
||||
// principle of least privilege for backup operations.
|
||||
func (p *PostgresqlDatabase) IsUserReadOnly(
|
||||
ctx context.Context,
|
||||
logger *slog.Logger,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
databaseID uuid.UUID,
|
||||
) (bool, error) {
|
||||
password, err := decryptPasswordIfNeeded(p.Password, encryptor, databaseID)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to decrypt password: %w", err)
|
||||
}
|
||||
|
||||
connStr := buildConnectionStringForDB(p, *p.Database, password)
|
||||
|
||||
conn, err := pgx.Connect(ctx, connStr)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to connect to database: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if closeErr := conn.Close(ctx); closeErr != nil {
|
||||
logger.Error("Failed to close connection", "error", closeErr)
|
||||
}
|
||||
}()
|
||||
|
||||
// LEVEL 1: Check role-level attributes
|
||||
var isSuperuser, canCreateRole, canCreateDB bool
|
||||
err = conn.QueryRow(ctx, `
|
||||
SELECT
|
||||
rolsuper,
|
||||
rolcreaterole,
|
||||
rolcreatedb
|
||||
FROM pg_roles
|
||||
WHERE rolname = current_user
|
||||
	`).Scan(&isSuperuser, &canCreateRole, &canCreateDB)
	if err != nil {
		return false, fmt.Errorf("failed to check role attributes: %w", err)
	}

	if isSuperuser || canCreateRole || canCreateDB {
		return false, nil
	}

	// LEVEL 2: Check database-level privileges
	var canCreate, canTemp bool
	err = conn.QueryRow(ctx, `
		SELECT
			has_database_privilege(current_user, current_database(), 'CREATE') as can_create,
			has_database_privilege(current_user, current_database(), 'TEMP') as can_temp
	`).Scan(&canCreate, &canTemp)
	if err != nil {
		return false, fmt.Errorf("failed to check database privileges: %w", err)
	}

	if canCreate || canTemp {
		return false, nil
	}

	// LEVEL 2.5: Check schema-level CREATE privileges
	schemaRows, err := conn.Query(ctx, `
		SELECT DISTINCT nspname
		FROM pg_namespace n
		WHERE has_schema_privilege(current_user, n.nspname, 'CREATE')
		AND nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
	`)
	if err != nil {
		return false, fmt.Errorf("failed to check schema privileges: %w", err)
	}
	defer schemaRows.Close()

	// If user has CREATE privilege on any schema, they're not read-only
	if schemaRows.Next() {
		return false, nil
	}

	if err := schemaRows.Err(); err != nil {
		return false, fmt.Errorf("error iterating schema privileges: %w", err)
	}

	// LEVEL 3: Check table-level write permissions
	rows, err := conn.Query(ctx, `
		SELECT DISTINCT privilege_type
		FROM information_schema.role_table_grants
		WHERE grantee = current_user
		AND table_schema NOT IN ('pg_catalog', 'information_schema')
	`)
	if err != nil {
		return false, fmt.Errorf("failed to check table privileges: %w", err)
	}
	defer rows.Close()

	writePrivileges := map[string]bool{
		"INSERT":     true,
		"UPDATE":     true,
		"DELETE":     true,
		"TRUNCATE":   true,
		"REFERENCES": true,
		"TRIGGER":    true,
	}

	for rows.Next() {
		var privilege string
		if err := rows.Scan(&privilege); err != nil {
			return false, fmt.Errorf("failed to scan privilege: %w", err)
		}

		if writePrivileges[privilege] {
			return false, nil
		}
	}

	if err := rows.Err(); err != nil {
		return false, fmt.Errorf("error iterating privileges: %w", err)
	}

	return true, nil
}

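The LEVEL 3 step above reduces to classifying the privilege types returned from `information_schema.role_table_grants`. A minimal, self-contained sketch of that classification (the `writePrivileges` set mirrors the one in the patch; the sample grant lists are made up):

```go
package main

import "fmt"

// writePrivileges mirrors the set used in the LEVEL 3 check.
var writePrivileges = map[string]bool{
	"INSERT": true, "UPDATE": true, "DELETE": true,
	"TRUNCATE": true, "REFERENCES": true, "TRIGGER": true,
}

// hasWriteGrant reports whether any granted privilege implies write access.
func hasWriteGrant(grants []string) bool {
	for _, g := range grants {
		if writePrivileges[g] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasWriteGrant([]string{"SELECT"}))           // false
	fmt.Println(hasWriteGrant([]string{"SELECT", "INSERT"})) // true
}
```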
// CreateReadOnlyUser creates a new PostgreSQL user with read-only privileges.
//
// This method performs the following operations atomically in a single transaction:
//  1. Creates a PostgreSQL user with a UUID-based password
//  2. Grants CONNECT privilege on the database
//  3. Grants USAGE on all non-system schemas
//  4. Grants SELECT on all existing tables and sequences
//  5. Sets default privileges for future tables and sequences
//
// Security features:
//   - Username format: "postgresus-{8-char-uuid}" for uniqueness
//   - Password: Full UUID (36 characters) for strong entropy
//   - Transaction safety: All operations roll back on any failure
//   - Retry logic: Up to 3 attempts if a username collision occurs
//   - Pre-validation: Checks CREATEROLE privilege before starting the transaction
func (p *PostgresqlDatabase) CreateReadOnlyUser(
	ctx context.Context,
	logger *slog.Logger,
	encryptor encryption.FieldEncryptor,
	databaseID uuid.UUID,
) (string, string, error) {
	password, err := decryptPasswordIfNeeded(p.Password, encryptor, databaseID)
	if err != nil {
		return "", "", fmt.Errorf("failed to decrypt password: %w", err)
	}

	connStr := buildConnectionStringForDB(p, *p.Database, password)

	conn, err := pgx.Connect(ctx, connStr)
	if err != nil {
		return "", "", fmt.Errorf("failed to connect to database: %w", err)
	}
	defer func() {
		if closeErr := conn.Close(ctx); closeErr != nil {
			logger.Error("Failed to close connection", "error", closeErr)
		}
	}()

	// Pre-validate: Check if current user can create roles
	var canCreateRole, isSuperuser bool
	err = conn.QueryRow(ctx, `
		SELECT rolcreaterole, rolsuper
		FROM pg_roles
		WHERE rolname = current_user
	`).Scan(&canCreateRole, &isSuperuser)
	if err != nil {
		return "", "", fmt.Errorf("failed to check permissions: %w", err)
	}
	if !canCreateRole && !isSuperuser {
		return "", "", errors.New("current database user lacks CREATEROLE privilege")
	}

	// Retry logic for username collision
	maxRetries := 3
	for attempt := 0; attempt < maxRetries; attempt++ {
		username := fmt.Sprintf("postgresus-%s", uuid.New().String()[:8])
		newPassword := uuid.New().String()

		tx, err := conn.Begin(ctx)
		if err != nil {
			return "", "", fmt.Errorf("failed to begin transaction: %w", err)
		}

		success := false
		defer func() {
			if !success {
				if rollbackErr := tx.Rollback(ctx); rollbackErr != nil {
					logger.Error("Failed to rollback transaction", "error", rollbackErr)
				}
			}
		}()

		// Step 1: Create PostgreSQL user with LOGIN privilege
		_, err = tx.Exec(
			ctx,
			fmt.Sprintf(`CREATE USER "%s" WITH PASSWORD '%s' LOGIN`, username, newPassword),
		)
		if err != nil {
			// Retry only on a username collision; roll back the open
			// transaction first so the next attempt can begin a fresh one.
			if strings.Contains(err.Error(), "already exists") && attempt < maxRetries-1 {
				_ = tx.Rollback(ctx)
				continue
			}
			return "", "", fmt.Errorf("failed to create user: %w", err)
		}

		// Step 1.5: Revoke CREATE privilege from PUBLIC role on public schema
		// This is necessary because all PostgreSQL users inherit CREATE privilege on the
		// public schema through the PUBLIC role. This is a one-time operation that affects
		// the entire database, making it more secure by default.
		// Note: This only affects the public schema; other schemas are unaffected.
		_, err = tx.Exec(ctx, `REVOKE CREATE ON SCHEMA public FROM PUBLIC`)
		if err != nil {
			logger.Error("Failed to revoke CREATE on public from PUBLIC", "error", err)
			if !strings.Contains(err.Error(), "schema \"public\" does not exist") &&
				!strings.Contains(err.Error(), "permission denied") {
				return "", "", fmt.Errorf("failed to revoke CREATE from PUBLIC: %w", err)
			}
		}

		// Now revoke from the specific user as well (belt and suspenders)
		_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE CREATE ON SCHEMA public FROM "%s"`, username))
		if err != nil {
			logger.Error(
				"Failed to revoke CREATE on public schema from user",
				"error", err,
				"username", username,
			)
		}

		// Step 2: Grant database connection privilege and revoke TEMP
		_, err = tx.Exec(
			ctx,
			fmt.Sprintf(`GRANT CONNECT ON DATABASE %s TO "%s"`, *p.Database, username),
		)
		if err != nil {
			return "", "", fmt.Errorf("failed to grant connect privilege: %w", err)
		}

		// Revoke TEMP privilege from PUBLIC role (like CREATE on the public schema,
		// TEMP is granted to PUBLIC by default)
		_, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM PUBLIC`, *p.Database))
		if err != nil {
			logger.Warn("Failed to revoke TEMP from PUBLIC", "error", err)
		}

		// Also revoke from the specific user (belt and suspenders)
		_, err = tx.Exec(
			ctx,
			fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM "%s"`, *p.Database, username),
		)
		if err != nil {
			logger.Warn("Failed to revoke TEMP privilege", "error", err, "username", username)
		}

		// Step 3: Discover all user-created schemas
		rows, err := tx.Query(ctx, `
			SELECT schema_name
			FROM information_schema.schemata
			WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
		`)
		if err != nil {
			return "", "", fmt.Errorf("failed to get schemas: %w", err)
		}

		var schemas []string
		for rows.Next() {
			var schema string
			if err := rows.Scan(&schema); err != nil {
				rows.Close()
				return "", "", fmt.Errorf("failed to scan schema: %w", err)
			}
			schemas = append(schemas, schema)
		}
		rows.Close()

		if err := rows.Err(); err != nil {
			return "", "", fmt.Errorf("error iterating schemas: %w", err)
		}

		// Step 4: Grant USAGE on each schema and explicitly prevent CREATE
		for _, schema := range schemas {
			// Revoke CREATE specifically (handles inheritance from PUBLIC role)
			_, err = tx.Exec(
				ctx,
				fmt.Sprintf(`REVOKE CREATE ON SCHEMA "%s" FROM "%s"`, schema, username),
			)
			if err != nil {
				logger.Warn(
					"Failed to revoke CREATE on schema",
					"error", err,
					"schema", schema,
					"username", username,
				)
			}

			// Grant only USAGE (not CREATE)
			_, err = tx.Exec(
				ctx,
				fmt.Sprintf(`GRANT USAGE ON SCHEMA "%s" TO "%s"`, schema, username),
			)
			if err != nil {
				return "", "", fmt.Errorf("failed to grant usage on schema %s: %w", schema, err)
			}
		}

		// Step 5: Grant SELECT on ALL existing tables and sequences
		grantSelectSQL := fmt.Sprintf(`
			DO $$
			DECLARE
				schema_rec RECORD;
			BEGIN
				FOR schema_rec IN
					SELECT schema_name
					FROM information_schema.schemata
					WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
				LOOP
					EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);
					EXECUTE format('GRANT SELECT ON ALL SEQUENCES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);
				END LOOP;
			END $$;
		`, username, username)

		_, err = tx.Exec(ctx, grantSelectSQL)
		if err != nil {
			return "", "", fmt.Errorf("failed to grant select on tables: %w", err)
		}

		// Step 6: Set default privileges for FUTURE tables and sequences
		defaultPrivilegesSQL := fmt.Sprintf(`
			DO $$
			DECLARE
				schema_rec RECORD;
			BEGIN
				FOR schema_rec IN
					SELECT schema_name
					FROM information_schema.schemata
					WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
				LOOP
					EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %%I GRANT SELECT ON TABLES TO "%s"', schema_rec.schema_name);
					EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %%I GRANT SELECT ON SEQUENCES TO "%s"', schema_rec.schema_name);
				END LOOP;
			END $$;
		`, username, username)

		_, err = tx.Exec(ctx, defaultPrivilegesSQL)
		if err != nil {
			return "", "", fmt.Errorf("failed to set default privileges: %w", err)
		}

		// Step 7: Verify user creation before committing
		var verifyUsername string
		err = tx.QueryRow(ctx, `SELECT rolname FROM pg_roles WHERE rolname = $1`, username).
			Scan(&verifyUsername)
		if err != nil {
			return "", "", fmt.Errorf("failed to verify user creation: %w", err)
		}

		if err := tx.Commit(ctx); err != nil {
			return "", "", fmt.Errorf("failed to commit transaction: %w", err)
		}

		success = true
		logger.Info("Read-only user created successfully", "username", username)
		return username, newPassword, nil
	}

	return "", "", errors.New("failed to generate unique username after 3 attempts")
}

// testSingleDatabaseConnection tests connection to a specific database for pg_dump
func testSingleDatabaseConnection(
	logger *slog.Logger,

@@ -215,7 +593,8 @@ func buildConnectionStringForDB(p *PostgresqlDatabase, dbName string, password s
		sslMode = "require"
	}

	return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
	return fmt.Sprintf(
		"host=%s port=%d user=%s password=%s dbname=%s sslmode=%s default_query_exec_mode=simple_protocol standard_conforming_strings=on",
		p.Host,
		p.Port,
		p.Username,

@@ -0,0 +1,323 @@
package postgresql

import (
	"context"
	"fmt"
	"log/slog"
	"os"
	"strconv"
	"strings"
	"testing"

	"github.com/google/uuid"
	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
	"github.com/stretchr/testify/assert"

	"postgresus-backend/internal/config"
	"postgresus-backend/internal/util/tools"
)

func Test_IsUserReadOnly_AdminUser_ReturnsFalse(t *testing.T) {
	env := config.GetEnv()
	cases := []struct {
		name    string
		version string
		port    string
	}{
		{"PostgreSQL 12", "12", env.TestPostgres12Port},
		{"PostgreSQL 13", "13", env.TestPostgres13Port},
		{"PostgreSQL 14", "14", env.TestPostgres14Port},
		{"PostgreSQL 15", "15", env.TestPostgres15Port},
		{"PostgreSQL 16", "16", env.TestPostgres16Port},
		{"PostgreSQL 17", "17", env.TestPostgres17Port},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			container := connectToPostgresContainer(t, tc.port)
			defer container.DB.Close()

			pgModel := createPostgresModel(container)
			logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
			ctx := context.Background()

			isReadOnly, err := pgModel.IsUserReadOnly(ctx, logger, nil, uuid.New())
			assert.NoError(t, err)
			assert.False(t, isReadOnly, "Admin user should not be read-only")
		})
	}
}

func Test_CreateReadOnlyUser_UserCanReadButNotWrite(t *testing.T) {
	env := config.GetEnv()
	cases := []struct {
		name    string
		version string
		port    string
	}{
		{"PostgreSQL 12", "12", env.TestPostgres12Port},
		{"PostgreSQL 13", "13", env.TestPostgres13Port},
		{"PostgreSQL 14", "14", env.TestPostgres14Port},
		{"PostgreSQL 15", "15", env.TestPostgres15Port},
		{"PostgreSQL 16", "16", env.TestPostgres16Port},
		{"PostgreSQL 17", "17", env.TestPostgres17Port},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			container := connectToPostgresContainer(t, tc.port)
			defer container.DB.Close()

			_, err := container.DB.Exec(`
				DROP TABLE IF EXISTS readonly_test CASCADE;
				DROP TABLE IF EXISTS hack_table CASCADE;
				DROP TABLE IF EXISTS future_table CASCADE;
				CREATE TABLE readonly_test (
					id SERIAL PRIMARY KEY,
					data TEXT NOT NULL
				);
				INSERT INTO readonly_test (data) VALUES ('test1'), ('test2');
			`)
			assert.NoError(t, err)

			pgModel := createPostgresModel(container)
			logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
			ctx := context.Background()

			username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
			assert.NoError(t, err)
			assert.NotEmpty(t, username)
			assert.NotEmpty(t, password)
			assert.True(t, strings.HasPrefix(username, "postgresus-"))

			readOnlyModel := &PostgresqlDatabase{
				Version:  pgModel.Version,
				Host:     pgModel.Host,
				Port:     pgModel.Port,
				Username: username,
				Password: password,
				Database: pgModel.Database,
				IsHttps:  false,
			}

			isReadOnly, err := readOnlyModel.IsUserReadOnly(ctx, logger, nil, uuid.New())
			assert.NoError(t, err)
			assert.True(t, isReadOnly, "Created user should be read-only")

			readOnlyDSN := fmt.Sprintf(
				"host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
				container.Host,
				container.Port,
				username,
				password,
				container.Database,
			)
			readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
			assert.NoError(t, err)
			defer readOnlyConn.Close()

			var count int
			err = readOnlyConn.Get(&count, "SELECT COUNT(*) FROM readonly_test")
			assert.NoError(t, err)
			assert.Equal(t, 2, count)

			_, err = readOnlyConn.Exec("INSERT INTO readonly_test (data) VALUES ('should-fail')")
			assert.Error(t, err)
			assert.Contains(t, err.Error(), "permission denied")

			_, err = readOnlyConn.Exec("UPDATE readonly_test SET data = 'hacked' WHERE id = 1")
			assert.Error(t, err)
			assert.Contains(t, err.Error(), "permission denied")

			_, err = readOnlyConn.Exec("DELETE FROM readonly_test WHERE id = 1")
			assert.Error(t, err)
			assert.Contains(t, err.Error(), "permission denied")

			_, err = readOnlyConn.Exec("CREATE TABLE hack_table (id INT)")
			assert.Error(t, err)
			assert.Contains(t, err.Error(), "permission denied")

			// Clean up: Drop user with CASCADE to handle default privilege dependencies
			_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
			if err != nil {
				t.Logf("Warning: Failed to drop owned objects: %v", err)
			}

			_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
			assert.NoError(t, err)
		})
	}
}

func Test_ReadOnlyUser_FutureTables_HaveSelectPermission(t *testing.T) {
	env := config.GetEnv()
	container := connectToPostgresContainer(t, env.TestPostgres16Port)
	defer container.DB.Close()

	pgModel := createPostgresModel(container)
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
	ctx := context.Background()

	username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
	assert.NoError(t, err)

	_, err = container.DB.Exec(`
		CREATE TABLE future_table (
			id SERIAL PRIMARY KEY,
			data TEXT NOT NULL
		);
		INSERT INTO future_table (data) VALUES ('future_data');
	`)
	assert.NoError(t, err)

	readOnlyDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, username, password, container.Database)
	readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
	assert.NoError(t, err)
	defer readOnlyConn.Close()

	var data string
	err = readOnlyConn.Get(&data, "SELECT data FROM future_table LIMIT 1")
	assert.NoError(t, err)
	assert.Equal(t, "future_data", data)

	// Clean up: Drop user with CASCADE to handle default privilege dependencies
	_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
	if err != nil {
		t.Logf("Warning: Failed to drop owned objects: %v", err)
	}

	_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
	assert.NoError(t, err)
}

func Test_ReadOnlyUser_MultipleSchemas_AllAccessible(t *testing.T) {
	env := config.GetEnv()
	container := connectToPostgresContainer(t, env.TestPostgres16Port)
	defer container.DB.Close()

	_, err := container.DB.Exec(`
		CREATE SCHEMA IF NOT EXISTS schema_a;
		CREATE SCHEMA IF NOT EXISTS schema_b;
		CREATE TABLE schema_a.table_a (id INT, data TEXT);
		CREATE TABLE schema_b.table_b (id INT, data TEXT);
		INSERT INTO schema_a.table_a VALUES (1, 'data_a');
		INSERT INTO schema_b.table_b VALUES (2, 'data_b');
	`)
	assert.NoError(t, err)

	pgModel := createPostgresModel(container)
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
	ctx := context.Background()

	username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
	assert.NoError(t, err)

	readOnlyDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		container.Host, container.Port, username, password, container.Database)
	readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
	assert.NoError(t, err)
	defer readOnlyConn.Close()

	var dataA string
	err = readOnlyConn.Get(&dataA, "SELECT data FROM schema_a.table_a LIMIT 1")
	assert.NoError(t, err)
	assert.Equal(t, "data_a", dataA)

	var dataB string
	err = readOnlyConn.Get(&dataB, "SELECT data FROM schema_b.table_b LIMIT 1")
	assert.NoError(t, err)
	assert.Equal(t, "data_b", dataB)

	// Clean up: Drop user with CASCADE to handle default privilege dependencies
	_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
	if err != nil {
		t.Logf("Warning: Failed to drop owned objects: %v", err)
	}

	_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
	assert.NoError(t, err)
	_, err = container.DB.Exec(`DROP SCHEMA schema_a CASCADE; DROP SCHEMA schema_b CASCADE;`)
	assert.NoError(t, err)
}

type PostgresContainer struct {
	Host     string
	Port     int
	Username string
	Password string
	Database string
	DB       *sqlx.DB
}

func connectToPostgresContainer(t *testing.T, port string) *PostgresContainer {
	dbName := "testdb"
	password := "testpassword"
	username := "testuser"
	host := "localhost"

	portInt, err := strconv.Atoi(port)
	assert.NoError(t, err)

	dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		host, portInt, username, password, dbName)

	db, err := sqlx.Connect("postgres", dsn)
	assert.NoError(t, err)

	var versionStr string
	err = db.Get(&versionStr, "SELECT version()")
	assert.NoError(t, err)

	return &PostgresContainer{
		Host:     host,
		Port:     portInt,
		Username: username,
		Password: password,
		Database: dbName,
		DB:       db,
	}
}

func createPostgresModel(container *PostgresContainer) *PostgresqlDatabase {
	var versionStr string
	err := container.DB.Get(&versionStr, "SELECT version()")
	if err != nil {
		return nil
	}

	version := extractPostgresVersion(versionStr)

	return &PostgresqlDatabase{
		Version:  version,
		Host:     container.Host,
		Port:     container.Port,
		Username: container.Username,
		Password: container.Password,
		Database: &container.Database,
		IsHttps:  false,
	}
}

func extractPostgresVersion(versionStr string) tools.PostgresqlVersion {
	if strings.Contains(versionStr, "PostgreSQL 12") {
		return tools.GetPostgresqlVersionEnum("12")
	} else if strings.Contains(versionStr, "PostgreSQL 13") {
		return tools.GetPostgresqlVersionEnum("13")
	} else if strings.Contains(versionStr, "PostgreSQL 14") {
		return tools.GetPostgresqlVersionEnum("14")
	} else if strings.Contains(versionStr, "PostgreSQL 15") {
		return tools.GetPostgresqlVersionEnum("15")
	} else if strings.Contains(versionStr, "PostgreSQL 16") {
		return tools.GetPostgresqlVersionEnum("16")
	} else if strings.Contains(versionStr, "PostgreSQL 17") {
		return tools.GetPostgresqlVersionEnum("17")
	}

	return tools.GetPostgresqlVersionEnum("16")
}
10
backend/internal/features/databases/dto.go
Normal file
@@ -0,0 +1,10 @@
package databases

type CreateReadOnlyUserResponse struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

type IsReadOnlyResponse struct {
	IsReadOnly bool `json:"isReadOnly"`
}
@@ -1,6 +1,7 @@
package databases

import (
	"context"
	"errors"
	"fmt"
	"log/slog"
@@ -456,3 +457,148 @@ func (s *DatabaseService) OnBeforeWorkspaceDeletion(workspaceID uuid.UUID) error

	return nil
}

func (s *DatabaseService) IsUserReadOnly(
	user *users_models.User,
	database *Database,
) (bool, error) {
	var usingDatabase *Database

	if database.ID != uuid.Nil {
		existingDatabase, err := s.dbRepository.FindByID(database.ID)
		if err != nil {
			return false, err
		}

		if existingDatabase.WorkspaceID == nil {
			return false, errors.New("cannot check user for database without workspace")
		}

		canAccess, _, err := s.workspaceService.CanUserAccessWorkspace(
			*existingDatabase.WorkspaceID,
			user,
		)
		if err != nil {
			return false, err
		}
		if !canAccess {
			return false, errors.New("insufficient permissions to access this database")
		}

		if database.WorkspaceID != nil && *existingDatabase.WorkspaceID != *database.WorkspaceID {
			return false, errors.New("database does not belong to this workspace")
		}

		existingDatabase.Update(database)

		if err := existingDatabase.Validate(); err != nil {
			return false, err
		}

		usingDatabase = existingDatabase
	} else {
		if database.WorkspaceID != nil {
			canAccess, _, err := s.workspaceService.CanUserAccessWorkspace(*database.WorkspaceID, user)
			if err != nil {
				return false, err
			}
			if !canAccess {
				return false, errors.New("insufficient permissions to access this workspace")
			}
		}

		usingDatabase = database
	}

	if usingDatabase.Type != DatabaseTypePostgres {
		return false, errors.New("read-only check only supported for PostgreSQL databases")
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	return usingDatabase.Postgresql.IsUserReadOnly(
		ctx,
		s.logger,
		s.fieldEncryptor,
		usingDatabase.ID,
	)
}

func (s *DatabaseService) CreateReadOnlyUser(
	user *users_models.User,
	database *Database,
) (string, string, error) {
	var usingDatabase *Database

	if database.ID != uuid.Nil {
		existingDatabase, err := s.dbRepository.FindByID(database.ID)
		if err != nil {
			return "", "", err
		}

		if existingDatabase.WorkspaceID == nil {
			return "", "", errors.New("cannot create user for database without workspace")
		}

		canManage, err := s.workspaceService.CanUserManageDBs(*existingDatabase.WorkspaceID, user)
		if err != nil {
			return "", "", err
		}
		if !canManage {
			return "", "", errors.New("insufficient permissions to manage this database")
		}

		if database.WorkspaceID != nil && *existingDatabase.WorkspaceID != *database.WorkspaceID {
			return "", "", errors.New("database does not belong to this workspace")
		}

		existingDatabase.Update(database)

		if err := existingDatabase.Validate(); err != nil {
			return "", "", err
		}

		usingDatabase = existingDatabase
	} else {
		if database.WorkspaceID != nil {
			canManage, err := s.workspaceService.CanUserManageDBs(*database.WorkspaceID, user)
			if err != nil {
				return "", "", err
			}
			if !canManage {
				return "", "", errors.New("insufficient permissions to manage this workspace")
			}
		}

		usingDatabase = database
	}

	if usingDatabase.Type != DatabaseTypePostgres {
		return "", "", errors.New("read-only user creation only supported for PostgreSQL")
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	username, password, err := usingDatabase.Postgresql.CreateReadOnlyUser(
		ctx, s.logger, s.fieldEncryptor, usingDatabase.ID,
	)
	if err != nil {
		return "", "", err
	}

	if usingDatabase.WorkspaceID != nil {
		s.auditLogService.WriteAuditLog(
			fmt.Sprintf(
				"Read-only user created for database: %s (username: %s)",
				usingDatabase.Name,
				username,
			),
			&user.ID,
			usingDatabase.WorkspaceID,
		)
	}

	return username, password, nil
}

9
backend/internal/features/encryption/secrets/di.go
Normal file
@@ -0,0 +1,9 @@
package secrets

var secretKeyService = &SecretKeyService{
	nil,
}

func GetSecretKeyService() *SecretKeyService {
	return secretKeyService
}

1
backend/internal/features/encryption/secrets/model.go
Normal file
@@ -0,0 +1 @@
package secrets

73
backend/internal/features/encryption/secrets/service.go
Normal file
@@ -0,0 +1,73 @@
package secrets

import (
	"errors"
	"fmt"
	"os"

	"postgresus-backend/internal/config"
	user_models "postgresus-backend/internal/features/users/models"
	"postgresus-backend/internal/storage"

	"github.com/google/uuid"
	"gorm.io/gorm"
)

type SecretKeyService struct {
	cachedKey *string
}

func (s *SecretKeyService) MigrateKeyFromDbToFileIfExist() error {
	var secretKey user_models.SecretKey

	err := storage.GetDb().First(&secretKey).Error
	if err != nil {
		if errors.Is(err, gorm.ErrRecordNotFound) {
			return nil
		}
		return fmt.Errorf("failed to check for secret key in database: %w", err)
	}

	if secretKey.Secret == "" {
		return nil
	}

	secretKeyPath := config.GetEnv().SecretKeyPath
	if err := os.WriteFile(secretKeyPath, []byte(secretKey.Secret), 0600); err != nil {
		return fmt.Errorf("failed to write secret key to file: %w", err)
	}

	if err := storage.GetDb().Exec("DELETE FROM secret_keys").Error; err != nil {
		return fmt.Errorf("failed to delete secret key from database: %w", err)
	}

	return nil
}

func (s *SecretKeyService) GetSecretKey() (string, error) {
	if s.cachedKey != nil {
		return *s.cachedKey, nil
	}

	secretKeyPath := config.GetEnv().SecretKeyPath
	data, err := os.ReadFile(secretKeyPath)
	if err != nil {
		if os.IsNotExist(err) {
			newKey := s.generateNewSecretKey()
			if err := os.WriteFile(secretKeyPath, []byte(newKey), 0600); err != nil {
				return "", fmt.Errorf("failed to write new secret key: %w", err)
			}
			s.cachedKey = &newKey
			return newKey, nil
		}
		return "", fmt.Errorf("failed to read secret key file: %w", err)
	}

	key := string(data)
	s.cachedKey = &key
	return key, nil
}

func (s *SecretKeyService) generateNewSecretKey() string {
	return uuid.New().String() + uuid.New().String()
}

@@ -0,0 +1,28 @@
package email_notifier

import (
	"errors"
	"net/smtp"
)

type loginAuth struct {
	username, password string
}

func (a *loginAuth) Start(server *smtp.ServerInfo) (string, []byte, error) {
	return "LOGIN", []byte{}, nil
}

func (a *loginAuth) Next(fromServer []byte, more bool) ([]byte, error) {
	if more {
		switch string(fromServer) {
		case "Username:":
			return []byte(a.username), nil
		case "Password:":
			return []byte(a.password), nil
		default:
			return nil, errors.New("unknown LOGIN challenge: " + string(fromServer))
		}
	}
	return nil, nil
}

@@ -58,11 +58,10 @@ func (e *EmailNotifier) Validate(encryptor encryption.FieldEncryptor) error {

func (e *EmailNotifier) Send(
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	_ *slog.Logger,
	heading string,
	message string,
) error {
	// Decrypt SMTP password if provided
	var smtpPassword string
	if e.SMTPPassword != "" {
		decrypted, err := encryptor.Decrypt(e.NotifierID, e.SMTPPassword)

@@ -72,7 +71,6 @@ func (e *EmailNotifier) Send(
		smtpPassword = decrypted
	}

	// Compose email
	from := e.From
	if from == "" {
		from = e.SMTPUser

@@ -81,153 +79,13 @@ func (e *EmailNotifier) Send(
		}
	}

	to := []string{e.TargetEmail}

	// Format the email content
	subject := fmt.Sprintf("Subject: %s\r\n", heading)
	mime := fmt.Sprintf(
		"MIME-version: 1.0;\nContent-Type: %s; charset=\"%s\";\n\n",
		MIMETypeHTML,
		MIMECharsetUTF8,
	)
	body := message
	fromHeader := fmt.Sprintf("From: %s\r\n", from)
	toHeader := fmt.Sprintf("To: %s\r\n", e.TargetEmail)

	// Combine all parts of the email
	emailContent := []byte(fromHeader + toHeader + subject + mime + body)

	addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
	timeout := DefaultTimeout

	// Determine if authentication is required
	emailContent := e.buildEmailContent(heading, message, from)
	isAuthRequired := e.SMTPUser != "" && smtpPassword != ""

	// Handle different port scenarios
	if e.SMTPPort == ImplicitTLSPort {
		// Implicit TLS (port 465)
		// Set up TLS config
		tlsConfig := &tls.Config{
			ServerName: e.SMTPHost,
		}

		// Dial with timeout
		dialer := &net.Dialer{Timeout: timeout}
		conn, err := tls.DialWithDialer(dialer, "tcp", addr, tlsConfig)
		if err != nil {
			return fmt.Errorf("failed to connect to SMTP server: %w", err)
		}
		defer func() {
			_ = conn.Close()
		}()

		// Create SMTP client
		client, err := smtp.NewClient(conn, e.SMTPHost)
		if err != nil {
			return fmt.Errorf("failed to create SMTP client: %w", err)
		}
		defer func() {
			_ = client.Quit()
		}()

		// Set up authentication only if credentials are provided
		if isAuthRequired {
			auth := smtp.PlainAuth("", e.SMTPUser, smtpPassword, e.SMTPHost)
			if err := client.Auth(auth); err != nil {
				return fmt.Errorf("SMTP authentication failed: %w", err)
			}
		}

		// Set sender and recipients
		if err := client.Mail(from); err != nil {
			return fmt.Errorf("failed to set sender: %w", err)
		}
		for _, recipient := range to {
			if err := client.Rcpt(recipient); err != nil {
				return fmt.Errorf("failed to set recipient: %w", err)
			}
		}

		// Send the email body
		writer, err := client.Data()
		if err != nil {
			return fmt.Errorf("failed to get data writer: %w", err)
		}
		_, err = writer.Write(emailContent)
		if err != nil {
			return fmt.Errorf("failed to write email content: %w", err)
		}
		err = writer.Close()
		if err != nil {
			return fmt.Errorf("failed to close data writer: %w", err)
		}

		return nil
	} else {
		// STARTTLS (port 587) or other ports
		// Create a custom dialer with timeout
		dialer := &net.Dialer{Timeout: timeout}
		conn, err := dialer.Dial("tcp", addr)
		if err != nil {
			return fmt.Errorf("failed to connect to SMTP server: %w", err)
		}

		// Create client from connection
		client, err := smtp.NewClient(conn, e.SMTPHost)
		if err != nil {
			return fmt.Errorf("failed to create SMTP client: %w", err)
		}
		defer func() {
			_ = client.Quit()
		}()

		// Send email using the client
		if err := client.Hello(DefaultHelloName); err != nil {
			return fmt.Errorf("SMTP hello failed: %w", err)
		}

		// Start TLS if available
		if ok, _ := client.Extension("STARTTLS"); ok {
			if err := client.StartTLS(&tls.Config{ServerName: e.SMTPHost}); err != nil {
				return fmt.Errorf("STARTTLS failed: %w", err)
			}
		}

		// Authenticate only if credentials are provided
		if isAuthRequired {
			auth := smtp.PlainAuth("", e.SMTPUser, smtpPassword, e.SMTPHost)
			if err := client.Auth(auth); err != nil {
				return fmt.Errorf("SMTP authentication failed: %w", err)
			}
		}

		if err := client.Mail(from); err != nil {
			return fmt.Errorf("failed to set sender: %w", err)
		}

		for _, recipient := range to {
			if err := client.Rcpt(recipient); err != nil {
				return fmt.Errorf("failed to set recipient: %w", err)
			}
		}

		writer, err := client.Data()
		if err != nil {
			return fmt.Errorf("failed to get data writer: %w", err)
		}

		_, err = writer.Write(emailContent)
		if err != nil {
			return fmt.Errorf("failed to write email content: %w", err)
		}

		err = writer.Close()
		if err != nil {
			return fmt.Errorf("failed to close data writer: %w", err)
		}

		return client.Quit()
		return e.sendImplicitTLS(emailContent, from, smtpPassword, isAuthRequired)
	}
	return e.sendStartTLS(emailContent, from, smtpPassword, isAuthRequired)
}
func (e *EmailNotifier) HideSensitiveData() {

@@ -256,3 +114,166 @@ func (e *EmailNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor
	}
	return nil
}

func (e *EmailNotifier) buildEmailContent(heading, message, from string) []byte {
	subject := fmt.Sprintf("Subject: %s\r\n", heading)
	mime := fmt.Sprintf(
		"MIME-version: 1.0;\nContent-Type: %s; charset=\"%s\";\n\n",
		MIMETypeHTML,
		MIMECharsetUTF8,
	)
	fromHeader := fmt.Sprintf("From: %s\r\n", from)
	toHeader := fmt.Sprintf("To: %s\r\n", e.TargetEmail)
	return []byte(fromHeader + toHeader + subject + mime + message)
}

func (e *EmailNotifier) sendImplicitTLS(
	emailContent []byte,
	from string,
	password string,
	isAuthRequired bool,
) error {
	createClient := func() (*smtp.Client, func(), error) {
		return e.createImplicitTLSClient()
	}

	client, cleanup, err := e.authenticateWithRetry(createClient, password, isAuthRequired)
	if err != nil {
		return err
	}
	defer cleanup()

	return e.sendEmail(client, from, emailContent)
}

func (e *EmailNotifier) sendStartTLS(
	emailContent []byte,
	from string,
	password string,
	isAuthRequired bool,
) error {
	createClient := func() (*smtp.Client, func(), error) {
		return e.createStartTLSClient()
	}

	client, cleanup, err := e.authenticateWithRetry(createClient, password, isAuthRequired)
	if err != nil {
		return err
	}
	defer cleanup()

	return e.sendEmail(client, from, emailContent)
}

func (e *EmailNotifier) createImplicitTLSClient() (*smtp.Client, func(), error) {
	addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
	tlsConfig := &tls.Config{ServerName: e.SMTPHost}
	dialer := &net.Dialer{Timeout: DefaultTimeout}

	conn, err := tls.DialWithDialer(dialer, "tcp", addr, tlsConfig)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to connect to SMTP server: %w", err)
	}

	client, err := smtp.NewClient(conn, e.SMTPHost)
	if err != nil {
		_ = conn.Close()
		return nil, nil, fmt.Errorf("failed to create SMTP client: %w", err)
	}

	return client, func() { _ = client.Quit() }, nil
}

func (e *EmailNotifier) createStartTLSClient() (*smtp.Client, func(), error) {
	addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
	dialer := &net.Dialer{Timeout: DefaultTimeout}

	conn, err := dialer.Dial("tcp", addr)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to connect to SMTP server: %w", err)
	}

	client, err := smtp.NewClient(conn, e.SMTPHost)
	if err != nil {
		_ = conn.Close()
		return nil, nil, fmt.Errorf("failed to create SMTP client: %w", err)
	}

	if err := client.Hello(DefaultHelloName); err != nil {
		_ = client.Quit()
		_ = conn.Close()
		return nil, nil, fmt.Errorf("SMTP hello failed: %w", err)
	}

	if ok, _ := client.Extension("STARTTLS"); ok {
		if err := client.StartTLS(&tls.Config{ServerName: e.SMTPHost}); err != nil {
			_ = client.Quit()
			_ = conn.Close()
			return nil, nil, fmt.Errorf("STARTTLS failed: %w", err)
		}
	}

	return client, func() { _ = client.Quit() }, nil
}

func (e *EmailNotifier) authenticateWithRetry(
	createClient func() (*smtp.Client, func(), error),
	password string,
	isAuthRequired bool,
) (*smtp.Client, func(), error) {
	client, cleanup, err := createClient()
	if err != nil {
		return nil, nil, err
	}

	if !isAuthRequired {
		return client, cleanup, nil
	}

	// Try PLAIN auth first
	plainAuth := smtp.PlainAuth("", e.SMTPUser, password, e.SMTPHost)
	if err := client.Auth(plainAuth); err == nil {
		return client, cleanup, nil
	}

	// PLAIN auth failed, connection may be closed - recreate and try LOGIN auth
	cleanup()

	client, cleanup, err = createClient()
	if err != nil {
		return nil, nil, err
	}

	loginAuth := &loginAuth{username: e.SMTPUser, password: password}
	if err := client.Auth(loginAuth); err != nil {
		cleanup()
		return nil, nil, fmt.Errorf("SMTP authentication failed: %w", err)
	}

	return client, cleanup, nil
}

func (e *EmailNotifier) sendEmail(client *smtp.Client, from string, content []byte) error {
	if err := client.Mail(from); err != nil {
		return fmt.Errorf("failed to set sender: %w", err)
	}

	if err := client.Rcpt(e.TargetEmail); err != nil {
		return fmt.Errorf("failed to set recipient: %w", err)
	}

	writer, err := client.Data()
	if err != nil {
		return fmt.Errorf("failed to get data writer: %w", err)
	}

	if _, err = writer.Write(content); err != nil {
		return fmt.Errorf("failed to write email content: %w", err)
	}

	if err = writer.Close(); err != nil {
		return fmt.Errorf("failed to close data writer: %w", err)
	}

	return nil
}
@@ -70,6 +70,7 @@ func (s *SlackNotifier) Send(
		maxAttempts = 5
		defaultBackoff = 2 * time.Second // when Retry-After header missing
		backoffMultiplier = 1.5 // use exponential growth
		requestTimeout = 30 * time.Second
	)

	var (

@@ -77,6 +78,10 @@ func (s *SlackNotifier) Send(
		attempts = 0
	)

	client := &http.Client{
		Timeout: requestTimeout,
	}

	for {
		attempts++

@@ -92,7 +97,7 @@ func (s *SlackNotifier) Send(
		req.Header.Set("Content-Type", "application/json; charset=utf-8")
		req.Header.Set("Authorization", "Bearer "+botToken)

		resp, err := http.DefaultClient.Do(req)
		resp, err := client.Do(req)
		if err != nil {
			return fmt.Errorf("send slack message: %w", err)
		}
@@ -10,20 +10,57 @@ import (
	"net/http"
	"net/url"
	"postgresus-backend/internal/util/encryption"
	"strings"

	"github.com/google/uuid"
	"gorm.io/gorm"
)

type WebhookHeader struct {
	Key string `json:"key"`
	Value string `json:"value"`
}

type WebhookNotifier struct {
	NotifierID uuid.UUID `json:"notifierId" gorm:"primaryKey;column:notifier_id"`
	WebhookURL string `json:"webhookUrl" gorm:"not null;column:webhook_url"`
	WebhookMethod WebhookMethod `json:"webhookMethod" gorm:"not null;column:webhook_method"`
	BodyTemplate *string `json:"bodyTemplate" gorm:"column:body_template;type:text"`
	HeadersJSON string `json:"-" gorm:"column:headers;type:text"`

	Headers []WebhookHeader `json:"headers" gorm:"-"`
}

func (t *WebhookNotifier) TableName() string {
	return "webhook_notifiers"
}

func (t *WebhookNotifier) BeforeSave(_ *gorm.DB) error {
	if len(t.Headers) > 0 {
		data, err := json.Marshal(t.Headers)

		if err != nil {
			return err
		}

		t.HeadersJSON = string(data)
	} else {
		t.HeadersJSON = "[]"
	}

	return nil
}

func (t *WebhookNotifier) AfterFind(_ *gorm.DB) error {
	if t.HeadersJSON != "" {
		if err := json.Unmarshal([]byte(t.HeadersJSON), &t.Headers); err != nil {
			return err
		}
	}

	return nil
}

func (t *WebhookNotifier) Validate(encryptor encryption.FieldEncryptor) error {
	if t.WebhookURL == "" {
		return errors.New("webhook URL is required")

@@ -49,66 +86,9 @@ func (t *WebhookNotifier) Send(

	switch t.WebhookMethod {
	case WebhookMethodGET:
		reqURL := fmt.Sprintf("%s?heading=%s&message=%s",
			webhookURL,
			url.QueryEscape(heading),
			url.QueryEscape(message),
		)

		resp, err := http.Get(reqURL)
		if err != nil {
			return fmt.Errorf("failed to send GET webhook: %w", err)
		}
		defer func() {
			if cerr := resp.Body.Close(); cerr != nil {
				logger.Error("failed to close response body", "error", cerr)
			}
		}()

		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
			body, _ := io.ReadAll(resp.Body)
			return fmt.Errorf(
				"webhook GET returned status: %s, body: %s",
				resp.Status,
				string(body),
			)
		}

		return nil

		return t.sendGET(webhookURL, heading, message, logger)
	case WebhookMethodPOST:
		payload := map[string]string{
			"heading": heading,
			"message": message,
		}

		body, err := json.Marshal(payload)
		if err != nil {
			return fmt.Errorf("failed to marshal webhook payload: %w", err)
		}

		resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(body))
		if err != nil {
			return fmt.Errorf("failed to send POST webhook: %w", err)
		}

		defer func() {
			if cerr := resp.Body.Close(); cerr != nil {
				logger.Error("failed to close response body", "error", cerr)
			}
		}()

		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
			body, _ := io.ReadAll(resp.Body)
			return fmt.Errorf(
				"webhook POST returned status: %s, body: %s",
				resp.Status,
				string(body),
			)
		}

		return nil

		return t.sendPOST(webhookURL, heading, message, logger)
	default:
		return fmt.Errorf("unsupported webhook method: %s", t.WebhookMethod)
	}
@@ -120,15 +100,144 @@ func (t *WebhookNotifier) HideSensitiveData() {
func (t *WebhookNotifier) Update(incoming *WebhookNotifier) {
	t.WebhookURL = incoming.WebhookURL
	t.WebhookMethod = incoming.WebhookMethod
	t.BodyTemplate = incoming.BodyTemplate
	t.Headers = incoming.Headers
}

func (t *WebhookNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
	if t.WebhookURL != "" {
		encrypted, err := encryptor.Encrypt(t.NotifierID, t.WebhookURL)

		if err != nil {
			return fmt.Errorf("failed to encrypt webhook URL: %w", err)
		}

		t.WebhookURL = encrypted
	}

	return nil
}

func (t *WebhookNotifier) sendGET(webhookURL, heading, message string, logger *slog.Logger) error {
	reqURL := fmt.Sprintf("%s?heading=%s&message=%s",
		webhookURL,
		url.QueryEscape(heading),
		url.QueryEscape(message),
	)

	req, err := http.NewRequest(http.MethodGet, reqURL, nil)
	if err != nil {
		return fmt.Errorf("failed to create GET request: %w", err)
	}

	t.applyHeaders(req)

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return fmt.Errorf("failed to send GET webhook: %w", err)
	}

	defer func() {
		if cerr := resp.Body.Close(); cerr != nil {
			logger.Error("failed to close response body", "error", cerr)
		}
	}()

	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf(
			"webhook GET returned status: %s, body: %s",
			resp.Status,
			string(body),
		)
	}

	return nil
}

func (t *WebhookNotifier) sendPOST(webhookURL, heading, message string, logger *slog.Logger) error {
	body := t.buildRequestBody(heading, message)

	req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewReader(body))
	if err != nil {
		return fmt.Errorf("failed to create POST request: %w", err)
	}

	hasContentType := false

	for _, h := range t.Headers {
		if strings.EqualFold(h.Key, "Content-Type") {
			hasContentType = true
			break
		}
	}

	if !hasContentType {
		req.Header.Set("Content-Type", "application/json")
	}

	t.applyHeaders(req)

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return fmt.Errorf("failed to send POST webhook: %w", err)
	}

	defer func() {
		if cerr := resp.Body.Close(); cerr != nil {
			logger.Error("failed to close response body", "error", cerr)
		}
	}()

	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		respBody, _ := io.ReadAll(resp.Body)
		return fmt.Errorf(
			"webhook POST returned status: %s, body: %s",
			resp.Status,
			string(respBody),
		)
	}

	return nil
}

func (t *WebhookNotifier) buildRequestBody(heading, message string) []byte {
	if t.BodyTemplate != nil && *t.BodyTemplate != "" {
		result := *t.BodyTemplate
		result = strings.ReplaceAll(result, "{{heading}}", escapeJSONString(heading))
		result = strings.ReplaceAll(result, "{{message}}", escapeJSONString(message))
		return []byte(result)
	}

	payload := map[string]string{
		"heading": heading,
		"message": message,
	}
	body, _ := json.Marshal(payload)

	return body
}

func (t *WebhookNotifier) applyHeaders(req *http.Request) {
	for _, h := range t.Headers {
		if h.Key != "" {
			req.Header.Set(h.Key, h.Value)
		}
	}
}

func escapeJSONString(s string) string {
	b, err := json.Marshal(s)
	if err != nil || len(b) < 2 {
		escaped := strings.ReplaceAll(s, `\`, `\\`)
		escaped = strings.ReplaceAll(escaped, `"`, `\"`)
		escaped = strings.ReplaceAll(escaped, "\n", `\n`)
		escaped = strings.ReplaceAll(escaped, "\r", `\r`)
		escaped = strings.ReplaceAll(escaped, "\t", `\t`)
		return escaped
	}

	return string(b[1 : len(b)-1])
}
@@ -1,6 +1,7 @@
package restores

import (
	"context"
	"encoding/json"
	"fmt"
	"io"

@@ -340,7 +341,7 @@ func createTestBackup(
	dummyContent := []byte("dummy backup content for testing")
	reader := strings.NewReader(string(dummyContent))
	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
	if err := storages[0].SaveFile(fieldEncryptor, logger, backup.ID, reader); err != nil {
	if err := storages[0].SaveFile(context.Background(), fieldEncryptor, logger, backup.ID, reader); err != nil {
		panic(fmt.Sprintf("Failed to create test backup file: %v", err))
	}

@@ -2,7 +2,6 @@ package models

import (
	"postgresus-backend/internal/features/backups/backups"
	"postgresus-backend/internal/features/databases/databases/postgresql"
	"postgresus-backend/internal/features/restores/enums"
	"time"

@@ -16,8 +15,6 @@ type Restore struct {
	BackupID uuid.UUID `json:"backupId" gorm:"column:backup_id;type:uuid;not null"`
	Backup *backups.Backup

	Postgresql *postgresql.PostgresqlDatabase `json:"postgresql,omitempty" gorm:"foreignKey:RestoreID"`

	FailMessage *string `json:"failMessage" gorm:"column:fail_message"`

	RestoreDurationMs int64 `json:"restoreDurationMs" gorm:"column:restore_duration_ms;default:0"`

@@ -32,7 +32,6 @@ func (r *RestoreRepository) FindByBackupID(backupID uuid.UUID) ([]*models.Restor
	if err := storage.
		GetDb().
		Preload("Backup").
		Preload("Postgresql").
		Where("backup_id = ?", backupID).
		Order("created_at DESC").
		Find(&restores).Error; err != nil {

@@ -48,7 +47,6 @@ func (r *RestoreRepository) FindByID(id uuid.UUID) (*models.Restore, error) {
	if err := storage.
		GetDb().
		Preload("Backup").
		Preload("Postgresql").
		Where("id = ?", id).
		First(&restore).Error; err != nil {
		return nil, err

@@ -63,7 +61,6 @@ func (r *RestoreRepository) FindByStatus(status enums.RestoreStatus) ([]*models.
	if err := storage.
		GetDb().
		Preload("Backup").
		Preload("Postgresql").
		Where("status = ?", status).
		Order("created_at DESC").
		Find(&restores).Error; err != nil {
@@ -191,15 +191,9 @@ func (s *RestoreService) RestoreBackup(
		return err
	}

	// Set the RestoreID on the PostgreSQL database and save it
	if requestDTO.PostgresqlDatabase != nil {
		requestDTO.PostgresqlDatabase.RestoreID = &restore.ID
		restore.Postgresql = requestDTO.PostgresqlDatabase

		// Save the restore again to include the postgresql database
		if err := s.restoreRepository.Save(&restore); err != nil {
			return err
		}
	// Save the restore again to include the postgresql database
	if err := s.restoreRepository.Save(&restore); err != nil {
		return err
	}

	storage, err := s.storageService.GetStorageByID(backup.StorageID)

@@ -216,10 +210,15 @@ func (s *RestoreService) RestoreBackup(

	start := time.Now().UTC()

	restoringToDB := &databases.Database{
		Postgresql: requestDTO.PostgresqlDatabase,
	}

	err = s.restoreBackupUsecase.Execute(
		backupConfig,
		restore,
		database,
		restoringToDB,
		backup,
		storage,
	)
@@ -1,13 +1,13 @@
package usecases_postgresql

import (
	users_repositories "postgresus-backend/internal/features/users/repositories"
	"postgresus-backend/internal/features/encryption/secrets"
	"postgresus-backend/internal/util/logger"
)

var restorePostgresqlBackupUsecase = &RestorePostgresqlBackupUsecase{
	logger.GetLogger(),
	users_repositories.GetSecretKeyRepository(),
	secrets.GetSecretKeyService(),
}

func GetRestorePostgresqlBackupUsecase() *RestorePostgresqlBackupUsecase {

@@ -20,9 +20,9 @@ import (
	backups_config "postgresus-backend/internal/features/backups/config"
	"postgresus-backend/internal/features/databases"
	pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
	encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
	"postgresus-backend/internal/features/restores/models"
	"postgresus-backend/internal/features/storages"
	users_repositories "postgresus-backend/internal/features/users/repositories"
	util_encryption "postgresus-backend/internal/util/encryption"
	files_utils "postgresus-backend/internal/util/files"
	"postgresus-backend/internal/util/tools"

@@ -31,18 +31,19 @@ import (
)

type RestorePostgresqlBackupUsecase struct {
	logger *slog.Logger
	secretKeyRepo *users_repositories.SecretKeyRepository
	logger *slog.Logger
	secretKeyService *encryption_secrets.SecretKeyService
}

func (uc *RestorePostgresqlBackupUsecase) Execute(
	database *databases.Database,
	originalDB *databases.Database,
	restoringToDB *databases.Database,
	backupConfig *backups_config.BackupConfig,
	restore models.Restore,
	backup *backups.Backup,
	storage *storages.Storage,
) error {
	if database.Type != databases.DatabaseTypePostgres {
	if originalDB.Type != databases.DatabaseTypePostgres {
		return errors.New("database type not supported")
	}

@@ -54,7 +55,7 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
		backup.ID,
	)

	pg := restore.Postgresql
	pg := restoringToDB.Postgresql
	if pg == nil {
		return fmt.Errorf("postgresql configuration is required for restore")
	}

@@ -78,11 +79,12 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
		"--verbose", // Add verbose output to help with debugging
		"--clean", // Clean (drop) database objects before recreating them
		"--if-exists", // Use IF EXISTS when dropping objects
		"--no-owner",
		"--no-owner", // Skip restoring ownership
		"--no-acl", // Skip restoring access privileges (GRANT/REVOKE commands)
	}

	return uc.restoreFromStorage(
		database,
		originalDB,
		tools.GetPostgresqlExecutable(
			pg.Version,
			"pg_restore",

@@ -232,7 +234,7 @@ func (uc *RestorePostgresqlBackupUsecase) downloadBackupToTempFile(
	}

	// Get master key
	masterKey, err := uc.secretKeyRepo.GetSecretKey()
	masterKey, err := uc.secretKeyService.GetSecretKey()
	if err != nil {
		cleanupFunc()
		return "", nil, fmt.Errorf("failed to get master key for decryption: %w", err)

@@ -376,7 +378,6 @@ func (uc *RestorePostgresqlBackupUsecase) setupPgRestoreEnvironment(
	// Add encoding-related environment variables
	cmd.Env = append(cmd.Env, "LC_ALL=C.UTF-8")
	cmd.Env = append(cmd.Env, "LANG=C.UTF-8")
	cmd.Env = append(cmd.Env, "PGOPTIONS=--client-encoding=UTF8")

	shouldRequireSSL := pgConfig.IsHttps

@@ -562,11 +563,15 @@ func (uc *RestorePostgresqlBackupUsecase) createTempPgpassFile(
		return "", nil
	}

	escapedHost := tools.EscapePgpassField(pgConfig.Host)
	escapedUsername := tools.EscapePgpassField(pgConfig.Username)
	escapedPassword := tools.EscapePgpassField(password)

	pgpassContent := fmt.Sprintf("%s:%d:*:%s:%s",
		pgConfig.Host,
		escapedHost,
		pgConfig.Port,
		pgConfig.Username,
		password,
		escapedUsername,
		escapedPassword,
	)

	tempDir, err := os.MkdirTemp("", "pgpass")
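The change above routes host, username, and password through `tools.EscapePgpassField` before writing the `.pgpass` line, because libpq requires `:` and `\` inside a field to be backslash-escaped. The diff does not show that helper's body; the sketch below is a hypothetical equivalent implementing the documented libpq escaping rules, and its behavior may differ from the real helper in edge cases.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical equivalent of tools.EscapePgpassField: per the libpq .pgpass
// format, backslashes and colons inside a field must be backslash-escaped.
// Backslashes are escaped first so the added escape characters are not
// themselves re-escaped.
func escapePgpassField(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`)
	return strings.ReplaceAll(s, ":", `\:`)
}

func main() {
	host, user, password := "db.example.com", "app:user", `p\ss:word`
	line := fmt.Sprintf("%s:%d:*:%s:%s",
		escapePgpassField(host), 5432,
		escapePgpassField(user), escapePgpassField(password))
	fmt.Println(line)
	// db.example.com:5432:*:app\:user:p\\ss\:word
}
```

A password like `p\ss:word` written unescaped would shift the colon-separated fields and make the entry unusable.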
@@ -17,13 +17,15 @@ type RestoreBackupUsecase struct {
func (uc *RestoreBackupUsecase) Execute(
	backupConfig *backups_config.BackupConfig,
	restore models.Restore,
	database *databases.Database,
	originalDB *databases.Database,
	restoringToDB *databases.Database,
	backup *backups.Backup,
	storage *storages.Storage,
) error {
	if database.Type == databases.DatabaseTypePostgres {
	if originalDB.Type == databases.DatabaseTypePostgres {
		return uc.restorePostgresqlBackupUsecase.Execute(
			database,
			originalDB,
			restoringToDB,
			backupConfig,
			restore,
			backup,

@@ -1,6 +1,7 @@
package storages

import (
	"context"
	"io"
	"log/slog"
	"postgresus-backend/internal/util/encryption"

@@ -10,6 +11,7 @@ import (

type StorageFileSaver interface {
	SaveFile(
		ctx context.Context,
		encryptor encryption.FieldEncryptor,
		logger *slog.Logger,
		fileID uuid.UUID,

@@ -1,6 +1,7 @@
package storages

import (
	"context"
	"errors"
	"io"
	"log/slog"

@@ -30,12 +31,13 @@ type Storage struct {
}

func (s *Storage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	err := s.getSpecificStorage().SaveFile(encryptor, logger, fileID, file)
	err := s.getSpecificStorage().SaveFile(ctx, encryptor, logger, fileID, file)
	if err != nil {
		lastSaveError := err.Error()
		s.LastSaveError = &lastSaveError

@@ -167,6 +167,7 @@ func Test_Storage_BasicOperations(t *testing.T) {
	fileID := uuid.New()

	err = tc.storage.SaveFile(
		context.Background(),
		encryptor,
		logger.GetLogger(),
		fileID,

@@ -189,6 +190,7 @@ func Test_Storage_BasicOperations(t *testing.T) {

	fileID := uuid.New()
	err = tc.storage.SaveFile(
		context.Background(),
		encryptor,
		logger.GetLogger(),
		fileID,

@@ -238,7 +240,7 @@ func setupS3Container(ctx context.Context) (*S3Container, error) {
	secretKey := "testpassword"
	bucketName := "test-bucket"
	region := "us-east-1"
	endpoint := fmt.Sprintf("localhost:%s", env.TestMinioPort)
	endpoint := fmt.Sprintf("127.0.0.1:%s", env.TestMinioPort)

	// Create MinIO client and ensure bucket exists
	minioClient, err := minio.New(endpoint, &minio.Options{
@@ -3,19 +3,44 @@ package azure_blob_storage
import (
	"bytes"
	"context"
	"encoding/base64"
	"errors"
	"fmt"
	"io"
	"log/slog"
	"net"
	"net/http"
	"postgresus-backend/internal/util/encryption"
	"strings"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
	"github.com/google/uuid"
)

const (
	azureConnectTimeout = 30 * time.Second
	azureResponseTimeout = 30 * time.Second
	azureIdleConnTimeout = 90 * time.Second
	azureTLSHandshakeTimeout = 30 * time.Second

	// Chunk size for block blob uploads - 16MB provides good balance between
	// memory usage and upload efficiency. This creates backpressure to pg_dump
	// by only reading one chunk at a time and waiting for Azure to confirm receipt.
	azureChunkSize = 16 * 1024 * 1024
)

type readSeekCloser struct {
	*bytes.Reader
}

func (r *readSeekCloser) Close() error {
	return nil
}

type AuthMethod string

const (

@@ -39,27 +64,91 @@ func (s *AzureBlobStorage) TableName() string {
}

func (s *AzureBlobStorage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	select {
	case <-ctx.Done():
		return fmt.Errorf("upload cancelled before start: %w", ctx.Err())
	default:
	}

	client, err := s.getClient(encryptor)
	if err != nil {
		return err
	}

	blobName := s.buildBlobName(fileID.String())
	blockBlobClient := client.ServiceClient().
		NewContainerClient(s.ContainerName).
		NewBlockBlobClient(blobName)

	_, err = client.UploadStream(
		context.TODO(),
		s.ContainerName,
		blobName,
		file,
		nil,
	)
	var blockIDs []string
	blockNumber := 0
	buf := make([]byte, azureChunkSize)

	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("upload cancelled: %w", ctx.Err())
		default:
		}

		n, readErr := io.ReadFull(file, buf)

		if n == 0 && readErr == io.EOF {
			break
		}

		if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
			return fmt.Errorf("read error: %w", readErr)
		}

		blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%06d", blockNumber)))

		_, err := blockBlobClient.StageBlock(
			ctx,
			blockID,
			&readSeekCloser{bytes.NewReader(buf[:n])},
			nil,
		)
		if err != nil {
			select {
			case <-ctx.Done():
				return fmt.Errorf("upload cancelled: %w", ctx.Err())
			default:
				return fmt.Errorf("failed to stage block %d: %w", blockNumber, err)
			}
		}

		blockIDs = append(blockIDs, blockID)
		blockNumber++

		if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
			break
		}
	}

	if len(blockIDs) == 0 {
		_, err = client.UploadStream(
			ctx,
			s.ContainerName,
			blobName,
			bytes.NewReader([]byte{}),
			nil,
		)
		if err != nil {
			return fmt.Errorf("failed to upload empty blob: %w", err)
		}
		return nil
	}

	_, err = blockBlobClient.CommitBlockList(ctx, blockIDs, &blockblob.CommitBlockListOptions{})
	if err != nil {
		return fmt.Errorf("failed to upload blob to Azure: %w", err)
		return fmt.Errorf("failed to commit block list: %w", err)
	}

	return nil

@@ -253,6 +342,8 @@ func (s *AzureBlobStorage) getClient(encryptor encryption.FieldEncryptor) (*azbl
	var client *azblob.Client
	var err error

	clientOptions := s.buildClientOptions()

	switch s.AuthMethod {
	case AuthMethodConnectionString:
		connectionString, decryptErr := encryptor.Decrypt(s.StorageID, s.ConnectionString)

@@ -260,7 +351,7 @@
|
||||
return nil, fmt.Errorf("failed to decrypt Azure connection string: %w", decryptErr)
|
||||
}
|
||||
|
||||
client, err = azblob.NewClientFromConnectionString(connectionString, nil)
|
||||
client, err = azblob.NewClientFromConnectionString(connectionString, clientOptions)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf(
|
||||
"failed to create Azure Blob client from connection string: %w",
|
||||
@@ -279,7 +370,7 @@ func (s *AzureBlobStorage) getClient(encryptor encryption.FieldEncryptor) (*azbl
|
||||
return nil, fmt.Errorf("failed to create Azure shared key credential: %w", credErr)
|
||||
}
|
||||
|
||||
client, err = azblob.NewClientWithSharedKeyCredential(accountURL, credential, nil)
|
||||
client, err = azblob.NewClientWithSharedKeyCredential(accountURL, credential, clientOptions)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create Azure Blob client with shared key: %w", err)
|
||||
}
|
||||
@@ -290,6 +381,26 @@ func (s *AzureBlobStorage) getClient(encryptor encryption.FieldEncryptor) (*azbl
|
||||
return client, nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) buildClientOptions() *azblob.ClientOptions {
|
||||
transport := &http.Transport{
|
||||
DialContext: (&net.Dialer{
|
||||
Timeout: azureConnectTimeout,
|
||||
}).DialContext,
|
||||
TLSHandshakeTimeout: azureTLSHandshakeTimeout,
|
||||
ResponseHeaderTimeout: azureResponseTimeout,
|
||||
IdleConnTimeout: azureIdleConnTimeout,
|
||||
}
|
||||
|
||||
return &azblob.ClientOptions{
|
||||
ClientOptions: azcore.ClientOptions{
|
||||
Transport: &http.Client{Transport: transport},
|
||||
Retry: policy.RetryOptions{
|
||||
MaxRetries: 0,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) buildAccountURL() string {
|
||||
if s.Endpoint != "" {
|
||||
endpoint := s.Endpoint
|
||||
|
||||
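For context on the `%06d` block IDs staged above: Azure requires block IDs to be valid base64 and, within one blob, all uncommitted block IDs to have equal length, so a zero-padded counter is a simple valid scheme. A minimal sketch (the `blockID` helper name is ours, not from the repo):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// blockID mirrors the scheme used above: a fixed-width decimal counter,
// base64-encoded. Zero-padding with "%06d" keeps every ID the same length.
func blockID(n int) string {
	return base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%06d", n)))
}

func main() {
	fmt.Println(blockID(0), blockID(42))                       // MDAwMDAw MDAwMDQy
	fmt.Println(len(blockID(0)) == len(blockID(999999)))        // true
}
```

With 16MB chunks and fixed-width six-digit IDs, the scheme stays well under Azure's per-blob block count limit for typical dump sizes.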
@@ -7,6 +7,8 @@ import (
	"fmt"
	"io"
	"log/slog"
	"net"
	"net/http"
	"postgresus-backend/internal/util/encryption"
	"strings"
	"time"
@@ -16,9 +18,22 @@ import (
	"golang.org/x/oauth2/google"

	drive "google.golang.org/api/drive/v3"
	"google.golang.org/api/googleapi"
	"google.golang.org/api/option"
)

const (
	gdConnectTimeout      = 30 * time.Second
	gdResponseTimeout     = 30 * time.Second
	gdIdleConnTimeout     = 90 * time.Second
	gdTLSHandshakeTimeout = 30 * time.Second

	// Chunk size for Google Drive resumable uploads - 16MB provides a good balance
	// between memory usage and upload efficiency. Google Drive requires chunks
	// to be multiples of 256KB for resumable uploads.
	gdChunkSize = 16 * 1024 * 1024
)

type GoogleDriveStorage struct {
	StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
	ClientID  string    `json:"clientId" gorm:"not null;type:text;column:client_id"`
@@ -31,31 +46,44 @@ func (s *GoogleDriveStorage) TableName() string {
}

func (s *GoogleDriveStorage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
		filename := fileID.String()

		// Ensure the postgresus_backups folder exists
		folderID, err := s.ensureBackupsFolderExists(ctx, driveService)
		if err != nil {
			return fmt.Errorf("failed to create/find backups folder: %w", err)
		}

		// Delete any previous copy so we keep at most one object per logical file.
		_ = s.deleteByName(ctx, driveService, filename, folderID)

		fileMeta := &drive.File{
			Name:    filename,
			Parents: []string{folderID},
		}

		backpressureReader := &backpressureReader{
			reader:    file,
			ctx:       ctx,
			chunkSize: gdChunkSize,
			buf:       make([]byte, gdChunkSize),
		}

		_, err = driveService.Files.Create(fileMeta).
			Media(backpressureReader, googleapi.ChunkSize(gdChunkSize)).
			Context(ctx).
			Do()
		if err != nil {
			select {
			case <-ctx.Done():
				return fmt.Errorf("upload cancelled: %w", ctx.Err())
			default:
			}
			return fmt.Errorf("failed to upload file to Google Drive: %w", err)
		}

@@ -70,30 +98,85 @@ func (s *GoogleDriveStorage) SaveFile(
	})
}

type backpressureReader struct {
	reader     io.Reader
	ctx        context.Context
	chunkSize  int
	buf        []byte
	bufStart   int
	bufEnd     int
	totalBytes int64
	chunkCount int
}

func (r *backpressureReader) Read(p []byte) (n int, err error) {
	select {
	case <-r.ctx.Done():
		return 0, r.ctx.Err()
	default:
	}

	if r.bufStart >= r.bufEnd {
		r.chunkCount++

		bytesRead, readErr := io.ReadFull(r.reader, r.buf)
		if bytesRead > 0 {
			r.bufStart = 0
			r.bufEnd = bytesRead
		}

		if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
			return 0, readErr
		}

		if bytesRead == 0 && readErr == io.EOF {
			return 0, io.EOF
		}
	}

	n = copy(p, r.buf[r.bufStart:r.bufEnd])
	r.bufStart += n
	r.totalBytes += int64(n)

	if r.bufStart >= r.bufEnd {
		select {
		case <-r.ctx.Done():
			return n, r.ctx.Err()
		default:
		}
	}

	return n, nil
}

func (s *GoogleDriveStorage) GetFile(
	encryptor encryption.FieldEncryptor,
	fileID uuid.UUID,
) (io.ReadCloser, error) {
	var result io.ReadCloser
	err := s.withRetryOnAuth(
		context.Background(),
		encryptor,
		func(driveService *drive.Service) error {
			folderID, err := s.findBackupsFolder(driveService)
			if err != nil {
				return fmt.Errorf("failed to find backups folder: %w", err)
			}

			fileIDGoogle, err := s.lookupFileID(driveService, fileID.String(), folderID)
			if err != nil {
				return err
			}

			resp, err := driveService.Files.Get(fileIDGoogle).Download()
			if err != nil {
				return fmt.Errorf("failed to download file from Google Drive: %w", err)
			}

			result = resp.Body
			return nil
		},
	)

	return result, err
}
@@ -102,8 +185,8 @@ func (s *GoogleDriveStorage) DeleteFile(
	encryptor encryption.FieldEncryptor,
	fileID uuid.UUID,
) error {
	ctx := context.Background()
	return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
		folderID, err := s.findBackupsFolder(driveService)
		if err != nil {
			return fmt.Errorf("failed to find backups folder: %w", err)
@@ -142,8 +225,8 @@ func (s *GoogleDriveStorage) Validate(encryptor encryption.FieldEncryptor) error
}

func (s *GoogleDriveStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
	ctx := context.Background()
	return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
		testFilename := "test-connection-" + uuid.New().String()
		testData := []byte("test")

@@ -243,9 +326,16 @@ func (s *GoogleDriveStorage) Update(incoming *GoogleDriveStorage) {

// withRetryOnAuth executes the provided function with retry logic for authentication errors
func (s *GoogleDriveStorage) withRetryOnAuth(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	fn func(*drive.Service) error,
) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}

	driveService, err := s.getDriveService(encryptor)
	if err != nil {
		return err
@@ -253,6 +343,12 @@ func (s *GoogleDriveStorage) withRetryOnAuth(

	err = fn(driveService)
	if err != nil && s.isAuthError(err) {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		// Try to refresh token and retry once
		fmt.Printf("Google Drive auth error detected, attempting token refresh: %v\n", err)

@@ -422,7 +518,6 @@ func (s *GoogleDriveStorage) getDriveService(
		return nil, err
	}

	clientSecret, err := encryptor.Decrypt(s.StorageID, s.ClientSecret)
	if err != nil {
		return nil, fmt.Errorf("failed to decrypt Google Drive client secret: %w", err)
@@ -449,16 +544,16 @@ func (s *GoogleDriveStorage) getDriveService(

	tokenSource := cfg.TokenSource(ctx, &token)

	// Force token validation to ensure we're using the current token
	currentToken, err := tokenSource.Token()
	if err != nil {
		return nil, fmt.Errorf("failed to get current token: %w", err)
	}

	// Create a new token source with the validated token
	validatedTokenSource := oauth2.StaticTokenSource(currentToken)

	httpClient := s.buildHTTPClient(validatedTokenSource)

	driveService, err := drive.NewService(ctx, option.WithHTTPClient(httpClient))
	if err != nil {
		return nil, fmt.Errorf("unable to create Drive client: %w", err)
	}
@@ -466,6 +561,24 @@ func (s *GoogleDriveStorage) getDriveService(
	return driveService, nil
}

func (s *GoogleDriveStorage) buildHTTPClient(tokenSource oauth2.TokenSource) *http.Client {
	transport := &http.Transport{
		DialContext: (&net.Dialer{
			Timeout: gdConnectTimeout,
		}).DialContext,
		TLSHandshakeTimeout:   gdTLSHandshakeTimeout,
		ResponseHeaderTimeout: gdResponseTimeout,
		IdleConnTimeout:       gdIdleConnTimeout,
	}

	return &http.Client{
		Transport: &oauth2.Transport{
			Source: tokenSource,
			Base:   transport,
		},
	}
}

func (s *GoogleDriveStorage) lookupFileID(
	driveService *drive.Service,
	name string,
@@ -1,6 +1,8 @@
package local_storage

import (
	"context"
	"errors"
	"fmt"
	"io"
	"log/slog"
@@ -13,6 +15,13 @@ import (
	"github.com/google/uuid"
)

const (
	// Chunk size for local storage writes - 16MB provides a good balance between
	// memory usage and write efficiency. This creates backpressure to pg_dump
	// by only reading one chunk at a time and waiting for disk to confirm receipt.
	localChunkSize = 16 * 1024 * 1024
)

// LocalStorage uses ./postgresus_local_backups folder as a
// directory for backups and ./postgresus_local_temp folder as a
// directory for temp files
@@ -25,11 +34,18 @@ func (l *LocalStorage) TableName() string {
}

func (l *LocalStorage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}

	logger.Info("Starting to save file to local storage", "fileId", fileID.String())

	err := files_utils.EnsureDirectories([]string{
@@ -60,7 +76,7 @@ func (l *LocalStorage) SaveFile(
	}()

	logger.Debug("Copying file data to temp file", "fileId", fileID.String())
	_, err = copyWithContext(ctx, tempFile, file)
	if err != nil {
		logger.Error("Failed to write to temp file", "fileId", fileID.String(), "error", err)
		return fmt.Errorf("failed to write to temp file: %w", err)
@@ -175,3 +191,71 @@ func (l *LocalStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor)

func (l *LocalStorage) Update(incoming *LocalStorage) {
}

type writeResult struct {
	bytesWritten int
	writeErr     error
}

func copyWithContext(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
	buf := make([]byte, localChunkSize)
	var written int64

	for {
		select {
		case <-ctx.Done():
			return written, ctx.Err()
		default:
		}

		nr, readErr := io.ReadFull(src, buf)

		if nr == 0 && readErr == io.EOF {
			break
		}

		if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
			return written, readErr
		}

		writeResultCh := make(chan writeResult, 1)
		go func() {
			nw, writeErr := dst.Write(buf[0:nr])
			writeResultCh <- writeResult{nw, writeErr}
		}()

		var nw int
		var writeErr error

		select {
		case <-ctx.Done():
			return written, ctx.Err()
		case result := <-writeResultCh:
			nw = result.bytesWritten
			writeErr = result.writeErr
		}

		if nw < 0 || nr < nw {
			nw = 0
			if writeErr == nil {
				writeErr = errors.New("invalid write result")
			}
		}

		if writeErr != nil {
			return written, writeErr
		}

		if nr != nw {
			return written, io.ErrShortWrite
		}

		written += int64(nw)

		if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
			break
		}
	}

	return written, nil
}
@@ -1,6 +1,7 @@
package nas_storage

import (
	"context"
	"crypto/tls"
	"errors"
	"fmt"
@@ -16,6 +17,13 @@ import (
	"github.com/hirochachacha/go-smb2"
)

const (
	// Chunk size for NAS uploads - 16MB provides a good balance between
	// memory usage and upload efficiency. This creates backpressure to pg_dump
	// by only reading one chunk at a time and waiting for NAS to confirm receipt.
	nasChunkSize = 16 * 1024 * 1024
)

type NASStorage struct {
	StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
	Host      string    `json:"host" gorm:"not null;type:text;column:host"`
@@ -33,14 +41,21 @@ func (n *NASStorage) TableName() string {
}

func (n *NASStorage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}

	logger.Info("Starting to save file to NAS storage", "fileId", fileID.String(), "host", n.Host)

	session, err := n.createSessionWithContext(ctx, encryptor)
	if err != nil {
		logger.Error("Failed to create NAS session", "fileId", fileID.String(), "error", err)
		return fmt.Errorf("failed to create NAS session: %w", err)
@@ -121,7 +136,7 @@ func (n *NASStorage) SaveFile(
	}()

	logger.Debug("Copying file data to NAS", "fileId", fileID.String())
	_, err = copyWithContext(ctx, nasFile, file)
	if err != nil {
		logger.Error("Failed to write file to NAS", "fileId", fileID.String(), "error", err)
		return fmt.Errorf("failed to write file to NAS: %w", err)
@@ -290,20 +305,24 @@ func (n *NASStorage) Update(incoming *NASStorage) {
}

func (n *NASStorage) createSession(encryptor encryption.FieldEncryptor) (*smb2.Session, error) {
	return n.createSessionWithContext(context.Background(), encryptor)
}

func (n *NASStorage) createSessionWithContext(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
) (*smb2.Session, error) {
	conn, err := n.createConnectionWithContext(ctx)
	if err != nil {
		return nil, err
	}

	// Decrypt password before use
	password, err := encryptor.Decrypt(n.StorageID, n.Password)
	if err != nil {
		_ = conn.Close()
		return nil, fmt.Errorf("failed to decrypt NAS password: %w", err)
	}

	// Create SMB2 dialer
	d := &smb2.Dialer{
		Initiator: &smb2.NTLMInitiator{
			User: n.Username,
@@ -312,7 +331,6 @@ func (n *NASStorage) createSession(encryptor encryption.FieldEncryptor) (*smb2.S
		},
	}

	session, err := d.Dial(conn)
	if err != nil {
		_ = conn.Close()
@@ -322,34 +340,30 @@ func (n *NASStorage) createSession(encryptor encryption.FieldEncryptor) (*smb2.S
	return session, nil
}

func (n *NASStorage) createConnectionWithContext(ctx context.Context) (net.Conn, error) {
	address := net.JoinHostPort(n.Host, fmt.Sprintf("%d", n.Port))

	// Create connection with timeout
	dialer := &net.Dialer{
		Timeout: 30 * time.Second,
	}

	if n.UseSSL {
		// Use TLS connection
		tlsConfig := &tls.Config{
			ServerName:         n.Host,
			InsecureSkipVerify: false,
		}

		conn, err := tls.DialWithDialer(dialer, "tcp", address, tlsConfig)
		if err != nil {
			return nil, fmt.Errorf("failed to create SSL connection to %s: %w", address, err)
		}
		return conn, nil
	}

	conn, err := dialer.DialContext(ctx, "tcp", address)
	if err != nil {
		return nil, fmt.Errorf("failed to create connection to %s: %w", address, err)
	}
	return conn, nil
}

func (n *NASStorage) ensureDirectory(fs *smb2.Share, path string) error {
@@ -444,3 +458,71 @@ func (r *nasFileReader) Close() error {

	return nil
}

type writeResult struct {
	bytesWritten int
	writeErr     error
}

func copyWithContext(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
	buf := make([]byte, nasChunkSize)
	var written int64

	for {
		select {
		case <-ctx.Done():
			return written, ctx.Err()
		default:
		}

		nr, readErr := io.ReadFull(src, buf)

		if nr == 0 && readErr == io.EOF {
			break
		}

		if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
			return written, readErr
		}

		writeResultCh := make(chan writeResult, 1)
		go func() {
			nw, writeErr := dst.Write(buf[0:nr])
			writeResultCh <- writeResult{nw, writeErr}
		}()

		var nw int
		var writeErr error

		select {
		case <-ctx.Done():
			return written, ctx.Err()
		case result := <-writeResultCh:
			nw = result.bytesWritten
			writeErr = result.writeErr
		}

		if nw < 0 || nr < nw {
			nw = 0
			if writeErr == nil {
				writeErr = errors.New("invalid write result")
			}
		}

		if writeErr != nil {
			return written, writeErr
		}

		if nr != nw {
			return written, io.ErrShortWrite
		}

		written += int64(nw)

		if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
			break
		}
	}

	return written, nil
}
@@ -7,6 +7,8 @@ import (
	"fmt"
	"io"
	"log/slog"
	"net"
	"net/http"
	"postgresus-backend/internal/util/encryption"
	"strings"
	"time"
@@ -16,6 +18,18 @@ import (
	"github.com/minio/minio-go/v7/pkg/credentials"
)

const (
	s3ConnectTimeout      = 30 * time.Second
	s3ResponseTimeout     = 30 * time.Second
	s3IdleConnTimeout     = 90 * time.Second
	s3TLSHandshakeTimeout = 30 * time.Second

	// Chunk size for multipart uploads - 16MB provides a good balance between
	// memory usage and upload efficiency. This creates backpressure to pg_dump
	// by only reading one chunk at a time and waiting for S3 to confirm receipt.
	multipartChunkSize = 16 * 1024 * 1024
)

type S3Storage struct {
	StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
	S3Bucket  string    `json:"s3Bucket" gorm:"not null;type:text;column:s3_bucket"`
@@ -33,29 +47,123 @@ func (s *S3Storage) TableName() string {
}

func (s *S3Storage) SaveFile(
	ctx context.Context,
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	fileID uuid.UUID,
	file io.Reader,
) error {
	select {
	case <-ctx.Done():
		return fmt.Errorf("upload cancelled before start: %w", ctx.Err())
	default:
	}

	coreClient, err := s.getCoreClient(encryptor)
	if err != nil {
		return err
	}

	objectKey := s.buildObjectKey(fileID.String())

	uploadID, err := coreClient.NewMultipartUpload(
		ctx,
		s.S3Bucket,
		objectKey,
		minio.PutObjectOptions{},
	)
	if err != nil {
		return fmt.Errorf("failed to initiate multipart upload: %w", err)
	}

	var parts []minio.CompletePart
	partNumber := 1
	buf := make([]byte, multipartChunkSize)

	for {
		select {
		case <-ctx.Done():
			_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
			return fmt.Errorf("upload cancelled: %w", ctx.Err())
		default:
		}

		n, readErr := io.ReadFull(file, buf)

		if n == 0 && readErr == io.EOF {
			break
		}

		if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
			_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
			return fmt.Errorf("read error: %w", readErr)
		}

		part, err := coreClient.PutObjectPart(
			ctx,
			s.S3Bucket,
			objectKey,
			uploadID,
			partNumber,
			bytes.NewReader(buf[:n]),
			int64(n),
			minio.PutObjectPartOptions{},
		)
		if err != nil {
			_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)

			select {
			case <-ctx.Done():
				return fmt.Errorf("upload cancelled: %w", ctx.Err())
			default:
				return fmt.Errorf("failed to upload part %d: %w", partNumber, err)
			}
		}

		parts = append(parts, minio.CompletePart{
			PartNumber: partNumber,
			ETag:       part.ETag,
		})

		partNumber++

		if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
			break
		}
	}

	if len(parts) == 0 {
		_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)

		client, err := s.getClient(encryptor)
		if err != nil {
			return err
		}
		_, err = client.PutObject(
			ctx,
			s.S3Bucket,
			objectKey,
			bytes.NewReader([]byte{}),
			0,
			minio.PutObjectOptions{},
		)
		if err != nil {
			return fmt.Errorf("failed to upload empty file: %w", err)
		}
		return nil
	}

	_, err = coreClient.CompleteMultipartUpload(
		ctx,
		s.S3Bucket,
		objectKey,
		uploadID,
		parts,
		minio.PutObjectOptions{},
	)
	if err != nil {
		_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
		return fmt.Errorf("failed to complete multipart upload: %w", err)
	}

	return nil
@@ -252,8 +360,54 @@ func (s *S3Storage) buildObjectKey(fileName string) string {
}

func (s *S3Storage) getClient(encryptor encryption.FieldEncryptor) (*minio.Client, error) {
	endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, err := s.getClientParams(
		encryptor,
	)
	if err != nil {
		return nil, err
	}

	minioClient, err := minio.New(endpoint, &minio.Options{
		Creds:        credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure:       useSSL,
		Region:       s.S3Region,
		BucketLookup: bucketLookup,
		Transport:    transport,
	})
	if err != nil {
		return nil, fmt.Errorf("failed to initialize MinIO client: %w", err)
	}

	return minioClient, nil
}

func (s *S3Storage) getCoreClient(encryptor encryption.FieldEncryptor) (*minio.Core, error) {
	endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, err := s.getClientParams(
		encryptor,
	)
	if err != nil {
		return nil, err
	}

	coreClient, err := minio.NewCore(endpoint, &minio.Options{
		Creds:        credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure:       useSSL,
		Region:       s.S3Region,
		BucketLookup: bucketLookup,
		Transport:    transport,
	})
	if err != nil {
		return nil, fmt.Errorf("failed to initialize MinIO Core client: %w", err)
	}

	return coreClient, nil
}

func (s *S3Storage) getClientParams(
	encryptor encryption.FieldEncryptor,
) (endpoint string, useSSL bool, accessKey string, secretKey string, bucketLookup minio.BucketLookupType, transport *http.Transport, err error) {
	endpoint = s.S3Endpoint
	useSSL = true

	if strings.HasPrefix(endpoint, "http://") {
		useSSL = false
@@ -262,38 +416,33 @@ func (s *S3Storage) getClient(encryptor encryption.FieldEncryptor) (*minio.Clien
		endpoint = strings.TrimPrefix(endpoint, "https://")
	}

	// If no endpoint is provided, use the AWS S3 endpoint for the region
	if endpoint == "" {
		endpoint = fmt.Sprintf("s3.%s.amazonaws.com", s.S3Region)
	}

	// Decrypt credentials before use
	accessKey, err = encryptor.Decrypt(s.StorageID, s.S3AccessKey)
	if err != nil {
		return "", false, "", "", 0, nil, fmt.Errorf("failed to decrypt S3 access key: %w", err)
	}

	secretKey, err = encryptor.Decrypt(s.StorageID, s.S3SecretKey)
	if err != nil {
		return "", false, "", "", 0, nil, fmt.Errorf("failed to decrypt S3 secret key: %w", err)
	}

	// Configure bucket lookup strategy
	bucketLookup = minio.BucketLookupAuto
	if s.S3UseVirtualHostedStyle {
		bucketLookup = minio.BucketLookupDNS
	}

	transport = &http.Transport{
		DialContext: (&net.Dialer{
			Timeout: s3ConnectTimeout,
		}).DialContext,
		TLSHandshakeTimeout:   s3TLSHandshakeTimeout,
		ResponseHeaderTimeout: s3ResponseTimeout,
		IdleConnTimeout:       s3IdleConnTimeout,
	}

	return endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, nil
}
@@ -1,31 +1,36 @@
package tests

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "path/filepath"
    "postgresus-backend/internal/config"
    "postgresus-backend/internal/features/backups/backups"
    usecases_postgresql_backup "postgresus-backend/internal/features/backups/backups/usecases/postgresql"
    backups_config "postgresus-backend/internal/features/backups/config"
    "postgresus-backend/internal/features/databases"
    pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
    "postgresus-backend/internal/features/intervals"
    "postgresus-backend/internal/features/restores/models"
    usecases_postgresql_restore "postgresus-backend/internal/features/restores/usecases/postgresql"
    "postgresus-backend/internal/features/storages"
    local_storage "postgresus-backend/internal/features/storages/models/local"
    "postgresus-backend/internal/util/period"
    "postgresus-backend/internal/util/tools"
    "strconv"
    "testing"
    "time"

    "github.com/gin-gonic/gin"
    "github.com/google/uuid"
    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq"
    "github.com/stretchr/testify/assert"

    "postgresus-backend/internal/config"
    "postgresus-backend/internal/features/backups/backups"
    backups_config "postgresus-backend/internal/features/backups/config"
    "postgresus-backend/internal/features/databases"
    pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
    "postgresus-backend/internal/features/restores"
    restores_enums "postgresus-backend/internal/features/restores/enums"
    restores_models "postgresus-backend/internal/features/restores/models"
    "postgresus-backend/internal/features/storages"
    users_enums "postgresus-backend/internal/features/users/enums"
    users_testing "postgresus-backend/internal/features/users/testing"
    workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
    workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
    test_utils "postgresus-backend/internal/util/testing"
    "postgresus-backend/internal/util/tools"
)

const createAndFillTableQuery = `
@@ -61,7 +66,6 @@ type TestDataItem struct {
    CreatedAt time.Time `db:"created_at"`
}

// Main test functions for each PostgreSQL version
func Test_BackupAndRestorePostgresql_RestoreIsSuccesful(t *testing.T) {
    env := config.GetEnv()
    cases := []struct {
@@ -110,143 +114,7 @@ func Test_BackupAndRestorePostgresqlWithEncryption_RestoreIsSuccessful(t *testin
    }
}

func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, port string) {
    // Connect to pre-configured PostgreSQL container
    container, err := connectToPostgresContainer(pgVersion, port)
    assert.NoError(t, err)
    defer func() {
        if container.DB != nil {
            container.DB.Close()
        }
    }()

    _, err = container.DB.Exec(createAndFillTableQuery)
    assert.NoError(t, err)

    // Prepare data for backup
    backupID := uuid.New()
    pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)

    backupDb := &databases.Database{
        ID:   uuid.New(),
        Type: databases.DatabaseTypePostgres,
        Name: "Test Database",
        Postgresql: &pgtypes.PostgresqlDatabase{
            Version:  pgVersionEnum,
            Host:     container.Host,
            Port:     container.Port,
            Username: container.Username,
            Password: container.Password,
            Database: &container.Database,
            IsHttps:  false,
        },
    }

    storageID := uuid.New()
    backupConfig := &backups_config.BackupConfig{
        DatabaseID:       backupDb.ID,
        IsBackupsEnabled: true,
        StorePeriod:      period.PeriodDay,
        BackupInterval:   &intervals.Interval{Interval: intervals.IntervalDaily},
        StorageID:        &storageID,
        CpuCount:         1,
        Encryption:       backups_config.BackupEncryptionEncrypted,
    }

    storage := &storages.Storage{
        WorkspaceID:  uuid.New(),
        Type:         storages.StorageTypeLocal,
        Name:         "Test Storage",
        LocalStorage: &local_storage.LocalStorage{},
    }

    // Make backup
    progressTracker := func(completedMBs float64) {}
    metadata, err := usecases_postgresql_backup.GetCreatePostgresqlBackupUsecase().Execute(
        context.Background(),
        backupID,
        backupConfig,
        backupDb,
        storage,
        progressTracker,
    )
    assert.NoError(t, err)
    assert.NotNil(t, metadata)

    // Verify encryption metadata is set
    assert.Equal(t, backups_config.BackupEncryptionEncrypted, metadata.Encryption)
    assert.NotNil(t, metadata.EncryptionSalt)
    assert.NotNil(t, metadata.EncryptionIV)

    // Create new database
    newDBName := "restoreddb_encrypted"
    _, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
    assert.NoError(t, err)

    _, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
    assert.NoError(t, err)

    // Connect to the new database
    newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
        container.Host, container.Port, container.Username, container.Password, newDBName)
    newDB, err := sqlx.Connect("postgres", newDSN)
    assert.NoError(t, err)
    defer newDB.Close()

    // Setup data for restore with encryption metadata
    completedBackup := &backups.Backup{
        ID:             backupID,
        DatabaseID:     backupDb.ID,
        StorageID:      storage.ID,
        Status:         backups.BackupStatusCompleted,
        CreatedAt:      time.Now().UTC(),
        EncryptionSalt: metadata.EncryptionSalt,
        EncryptionIV:   metadata.EncryptionIV,
        Encryption:     metadata.Encryption,
    }

    restoreID := uuid.New()
    restore := models.Restore{
        ID:     restoreID,
        Backup: completedBackup,
        Postgresql: &pgtypes.PostgresqlDatabase{
            Version:  pgVersionEnum,
            Host:     container.Host,
            Port:     container.Port,
            Username: container.Username,
            Password: container.Password,
            Database: &newDBName,
            IsHttps:  false,
        },
    }

    // Restore the encrypted backup
    restoreBackupUC := usecases_postgresql_restore.GetRestorePostgresqlBackupUsecase()
    err = restoreBackupUC.Execute(backupDb, backupConfig, restore, completedBackup, storage)
    assert.NoError(t, err)

    // Verify restored table exists
    var tableExists bool
    err = newDB.Get(
        &tableExists,
        "SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'test_data')",
    )
    assert.NoError(t, err)
    assert.True(t, tableExists, "Table 'test_data' should exist in restored database")

    // Verify data integrity
    verifyDataIntegrity(t, container.DB, newDB)

    // Clean up the backup file after the test
    err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backupID.String()))
    if err != nil {
        t.Logf("Warning: Failed to delete backup file: %v", err)
    }
}

// Run a test for a specific PostgreSQL version
func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
    // Connect to pre-configured PostgreSQL container
    container, err := connectToPostgresContainer(pgVersion, port)
    assert.NoError(t, err)
    defer func() {
@@ -258,55 +126,30 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
    _, err = container.DB.Exec(createAndFillTableQuery)
    assert.NoError(t, err)

    // Prepare data for backup
    backupID := uuid.New()
    router := createTestRouter()
    user := users_testing.CreateTestUser(users_enums.UserRoleMember)
    workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", user, router)

    storage := storages.CreateTestStorage(workspace.ID)

    pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)

    backupDb := &databases.Database{
        ID:   uuid.New(),
        Type: databases.DatabaseTypePostgres,
        Name: "Test Database",
        Postgresql: &pgtypes.PostgresqlDatabase{
            Version:  pgVersionEnum,
            Host:     container.Host,
            Port:     container.Port,
            Username: container.Username,
            Password: container.Password,
            Database: &container.Database,
            IsHttps:  false,
        },
    }

    storageID := uuid.New()
    backupConfig := &backups_config.BackupConfig{
        DatabaseID:       backupDb.ID,
        IsBackupsEnabled: true,
        StorePeriod:      period.PeriodDay,
        BackupInterval:   &intervals.Interval{Interval: intervals.IntervalDaily},
        StorageID:        &storageID,
        CpuCount:         1,
    }

    storage := &storages.Storage{
        WorkspaceID:  uuid.New(),
        Type:         storages.StorageTypeLocal,
        Name:         "Test Storage",
        LocalStorage: &local_storage.LocalStorage{},
    }

    // Make backup
    progressTracker := func(completedMBs float64) {}
    _, err = usecases_postgresql_backup.GetCreatePostgresqlBackupUsecase().Execute(
        context.Background(),
        backupID,
        backupConfig,
        backupDb,
        storage,
        progressTracker,
    database := createDatabaseViaAPI(
        t, router, "Test Database", workspace.ID,
        pgVersionEnum, container.Host, container.Port,
        container.Username, container.Password, container.Database,
        user.Token,
    )
    assert.NoError(t, err)

    // Create new database
    enableBackupsViaAPI(
        t, router, database.ID, storage.ID,
        backups_config.BackupEncryptionNone, user.Token,
    )

    createBackupViaAPI(t, router, database.ID, user.Token)

    backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
    assert.Equal(t, backups.BackupStatusCompleted, backup.Status)

    newDBName := "restoreddb"
    _, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
    assert.NoError(t, err)
@@ -314,43 +157,22 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
    _, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
    assert.NoError(t, err)

    // Connect to the new database
    newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
        container.Host, container.Port, container.Username, container.Password, newDBName)
    newDB, err := sqlx.Connect("postgres", newDSN)
    assert.NoError(t, err)
    defer newDB.Close()

    // Setup data for restore
    completedBackup := &backups.Backup{
        ID:         backupID,
        DatabaseID: backupDb.ID,
        StorageID:  storage.ID,
        Status:     backups.BackupStatusCompleted,
        CreatedAt:  time.Now().UTC(),
    }
    createRestoreViaAPI(
        t, router, backup.ID, pgVersionEnum,
        container.Host, container.Port,
        container.Username, container.Password, newDBName,
        user.Token,
    )

    restoreID := uuid.New()
    restore := models.Restore{
        ID:     restoreID,
        Backup: completedBackup,
        Postgresql: &pgtypes.PostgresqlDatabase{
            Version:  pgVersionEnum,
            Host:     container.Host,
            Port:     container.Port,
            Username: container.Username,
            Password: container.Password,
            Database: &newDBName,
            IsHttps:  false,
        },
    }
    restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
    assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

    // Restore the backup
    restoreBackupUC := usecases_postgresql_restore.GetRestorePostgresqlBackupUsecase()
    err = restoreBackupUC.Execute(backupDb, backupConfig, restore, completedBackup, storage)
    assert.NoError(t, err)

    // Verify restored table exists
    var tableExists bool
    err = newDB.Get(
        &tableExists,
@@ -359,17 +181,329 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
    assert.NoError(t, err)
    assert.True(t, tableExists, "Table 'test_data' should exist in restored database")

    // Verify data integrity
    verifyDataIntegrity(t, container.DB, newDB)

    // Clean up the backup file after the test
    err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backupID.String()))
    err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
    if err != nil {
        t.Logf("Warning: Failed to delete backup file: %v", err)
    }

    test_utils.MakeDeleteRequest(
        t,
        router,
        "/api/v1/databases/"+database.ID.String(),
        "Bearer "+user.Token,
        http.StatusNoContent,
    )
    storages.RemoveTestStorage(storage.ID)
    workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, port string) {
    container, err := connectToPostgresContainer(pgVersion, port)
    assert.NoError(t, err)
    defer func() {
        if container.DB != nil {
            container.DB.Close()
        }
    }()

    _, err = container.DB.Exec(createAndFillTableQuery)
    assert.NoError(t, err)

    router := createTestRouter()
    user := users_testing.CreateTestUser(users_enums.UserRoleMember)
    workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", user, router)

    storage := storages.CreateTestStorage(workspace.ID)

    pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)
    database := createDatabaseViaAPI(
        t, router, "Test Database", workspace.ID,
        pgVersionEnum, container.Host, container.Port,
        container.Username, container.Password, container.Database,
        user.Token,
    )

    enableBackupsViaAPI(
        t, router, database.ID, storage.ID,
        backups_config.BackupEncryptionEncrypted, user.Token,
    )

    createBackupViaAPI(t, router, database.ID, user.Token)

    backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
    assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
    assert.Equal(t, backups_config.BackupEncryptionEncrypted, backup.Encryption)

    newDBName := "restoreddb_encrypted"
    _, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
    assert.NoError(t, err)

    _, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
    assert.NoError(t, err)

    newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
        container.Host, container.Port, container.Username, container.Password, newDBName)
    newDB, err := sqlx.Connect("postgres", newDSN)
    assert.NoError(t, err)
    defer newDB.Close()

    createRestoreViaAPI(
        t, router, backup.ID, pgVersionEnum,
        container.Host, container.Port,
        container.Username, container.Password, newDBName,
        user.Token,
    )

    restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
    assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)

    var tableExists bool
    err = newDB.Get(
        &tableExists,
        "SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'test_data')",
    )
    assert.NoError(t, err)
    assert.True(t, tableExists, "Table 'test_data' should exist in restored database")

    verifyDataIntegrity(t, container.DB, newDB)

    err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
    if err != nil {
        t.Logf("Warning: Failed to delete backup file: %v", err)
    }

    test_utils.MakeDeleteRequest(
        t,
        router,
        "/api/v1/databases/"+database.ID.String(),
        "Bearer "+user.Token,
        http.StatusNoContent,
    )
    storages.RemoveTestStorage(storage.ID)
    workspaces_testing.RemoveTestWorkspace(workspace, router)
}

func createTestRouter() *gin.Engine {
    router := workspaces_testing.CreateTestRouter(
        workspaces_controllers.GetWorkspaceController(),
        workspaces_controllers.GetMembershipController(),
        databases.GetDatabaseController(),
        backups_config.GetBackupConfigController(),
        backups.GetBackupController(),
        restores.GetRestoreController(),
    )
    return router
}

func waitForBackupCompletion(
    t *testing.T,
    router *gin.Engine,
    databaseID uuid.UUID,
    token string,
    timeout time.Duration,
) *backups.Backup {
    startTime := time.Now()
    pollInterval := 500 * time.Millisecond

    for {
        if time.Since(startTime) > timeout {
            t.Fatalf("Timeout waiting for backup completion after %v", timeout)
        }

        var response backups.GetBackupsResponse
        test_utils.MakeGetRequestAndUnmarshal(
            t,
            router,
            fmt.Sprintf("/api/v1/backups?database_id=%s&limit=1", databaseID.String()),
            "Bearer "+token,
            http.StatusOK,
            &response,
        )

        if len(response.Backups) > 0 {
            backup := response.Backups[0]
            if backup.Status == backups.BackupStatusCompleted {
                return backup
            }
            if backup.Status == backups.BackupStatusFailed {
                t.Fatalf("Backup failed: %v", backup.FailMessage)
            }
        }

        time.Sleep(pollInterval)
    }
}

func waitForRestoreCompletion(
    t *testing.T,
    router *gin.Engine,
    backupID uuid.UUID,
    token string,
    timeout time.Duration,
) *restores_models.Restore {
    startTime := time.Now()
    pollInterval := 500 * time.Millisecond

    for {
        if time.Since(startTime) > timeout {
            t.Fatalf("Timeout waiting for restore completion after %v", timeout)
        }

        var restores []*restores_models.Restore
        test_utils.MakeGetRequestAndUnmarshal(
            t,
            router,
            fmt.Sprintf("/api/v1/restores/%s", backupID.String()),
            "Bearer "+token,
            http.StatusOK,
            &restores,
        )

        for _, restore := range restores {
            if restore.Status == restores_enums.RestoreStatusCompleted {
                return restore
            }
            if restore.Status == restores_enums.RestoreStatusFailed {
                t.Fatalf("Restore failed: %v", restore.FailMessage)
            }
        }

        time.Sleep(pollInterval)
    }
}

func createDatabaseViaAPI(
    t *testing.T,
    router *gin.Engine,
    name string,
    workspaceID uuid.UUID,
    pgVersion tools.PostgresqlVersion,
    host string,
    port int,
    username string,
    password string,
    database string,
    token string,
) *databases.Database {
    request := databases.Database{
        Name:        name,
        WorkspaceID: &workspaceID,
        Type:        databases.DatabaseTypePostgres,
        Postgresql: &pgtypes.PostgresqlDatabase{
            Version:  pgVersion,
            Host:     host,
            Port:     port,
            Username: username,
            Password: password,
            Database: &database,
        },
    }

    w := workspaces_testing.MakeAPIRequest(
        router,
        "POST",
        "/api/v1/databases/create",
        "Bearer "+token,
        request,
    )

    if w.Code != http.StatusCreated {
        t.Fatalf("Failed to create database. Status: %d, Body: %s", w.Code, w.Body.String())
    }

    var createdDatabase databases.Database
    if err := json.Unmarshal(w.Body.Bytes(), &createdDatabase); err != nil {
        t.Fatalf("Failed to unmarshal database response: %v", err)
    }

    return &createdDatabase
}

func enableBackupsViaAPI(
    t *testing.T,
    router *gin.Engine,
    databaseID uuid.UUID,
    storageID uuid.UUID,
    encryption backups_config.BackupEncryption,
    token string,
) {
    var backupConfig backups_config.BackupConfig
    test_utils.MakeGetRequestAndUnmarshal(
        t,
        router,
        fmt.Sprintf("/api/v1/backup-configs/database/%s", databaseID.String()),
        "Bearer "+token,
        http.StatusOK,
        &backupConfig,
    )

    storage := &storages.Storage{ID: storageID}
    backupConfig.IsBackupsEnabled = true
    backupConfig.Storage = storage
    backupConfig.Encryption = encryption

    test_utils.MakePostRequest(
        t,
        router,
        "/api/v1/backup-configs/save",
        "Bearer "+token,
        backupConfig,
        http.StatusOK,
    )
}

func createBackupViaAPI(
    t *testing.T,
    router *gin.Engine,
    databaseID uuid.UUID,
    token string,
) {
    request := backups.MakeBackupRequest{DatabaseID: databaseID}
    test_utils.MakePostRequest(
        t,
        router,
        "/api/v1/backups",
        "Bearer "+token,
        request,
        http.StatusOK,
    )
}

func createRestoreViaAPI(
    t *testing.T,
    router *gin.Engine,
    backupID uuid.UUID,
    pgVersion tools.PostgresqlVersion,
    host string,
    port int,
    username string,
    password string,
    database string,
    token string,
) {
    request := restores.RestoreBackupRequest{
        PostgresqlDatabase: &pgtypes.PostgresqlDatabase{
            Version:  pgVersion,
            Host:     host,
            Port:     port,
            Username: username,
            Password: password,
            Database: &database,
        },
    }

    test_utils.MakePostRequest(
        t,
        router,
        fmt.Sprintf("/api/v1/restores/%s/restore", backupID.String()),
        "Bearer "+token,
        request,
        http.StatusOK,
    )
}

// verifyDataIntegrity compares data in the original and restored databases
func verifyDataIntegrity(t *testing.T, originalDB *sqlx.DB, restoredDB *sqlx.DB) {
    var originalData []TestDataItem
    var restoredData []TestDataItem
@@ -382,7 +516,6 @@ func verifyDataIntegrity(t *testing.T, originalDB *sqlx.DB, restoredDB *sqlx.DB)

    assert.Equal(t, len(originalData), len(restoredData), "Should have same number of rows")

    // Only compare data if both slices have elements (to avoid panic)
    if len(originalData) > 0 && len(restoredData) > 0 {
        for i := range originalData {
            assert.Equal(t, originalData[i].ID, restoredData[i].ID, "ID should match")

@@ -1,13 +1,8 @@
package users_repositories

var secretKeyRepository = &SecretKeyRepository{}
var userRepository = &UserRepository{}
var usersSettingsRepository = &UsersSettingsRepository{}

func GetSecretKeyRepository() *SecretKeyRepository {
    return secretKeyRepository
}

func GetUserRepository() *UserRepository {
    return userRepository
}

@@ -1,34 +0,0 @@
package users_repositories

import (
    "errors"
    user_models "postgresus-backend/internal/features/users/models"
    "postgresus-backend/internal/storage"

    "github.com/google/uuid"
    "gorm.io/gorm"
)

type SecretKeyRepository struct{}

func (r *SecretKeyRepository) GetSecretKey() (string, error) {
    var secretKey user_models.SecretKey

    if err := storage.GetDb().First(&secretKey).Error; err != nil {
        // create a new secret key if not found
        if errors.Is(err, gorm.ErrRecordNotFound) {
            newSecretKey := user_models.SecretKey{
                Secret: uuid.New().String() + uuid.New().String(),
            }
            if err := storage.GetDb().Create(&newSecretKey).Error; err != nil {
                return "", errors.New("failed to create new secret key")
            }

            return newSecretKey.Secret, nil
        }

        return "", err
    }

    return secretKey.Secret, nil
}
@@ -1,10 +1,13 @@
package users_services

import users_repositories "postgresus-backend/internal/features/users/repositories"
import (
    "postgresus-backend/internal/features/encryption/secrets"
    users_repositories "postgresus-backend/internal/features/users/repositories"
)

var userService = &UserService{
    users_repositories.GetUserRepository(),
    users_repositories.GetSecretKeyRepository(),
    secrets.GetSecretKeyService(),
    settingsService,
    nil,
}

@@ -17,6 +17,7 @@ import (
    "golang.org/x/oauth2/google"

    "postgresus-backend/internal/config"
    "postgresus-backend/internal/features/encryption/secrets"
    users_dto "postgresus-backend/internal/features/users/dto"
    users_enums "postgresus-backend/internal/features/users/enums"
    users_interfaces "postgresus-backend/internal/features/users/interfaces"
@@ -25,10 +26,10 @@ import (
)

type UserService struct {
    userRepository      *users_repositories.UserRepository
    secretKeyRepository *users_repositories.SecretKeyRepository
    settingsService     *SettingsService
    auditLogWriter      users_interfaces.AuditLogWriter
    userRepository   *users_repositories.UserRepository
    secretKeyService *secrets.SecretKeyService
    settingsService  *SettingsService
    auditLogWriter   users_interfaces.AuditLogWriter
}

func (s *UserService) SetAuditLogWriter(writer users_interfaces.AuditLogWriter) {
@@ -162,7 +163,7 @@ func (s *UserService) SignIn(
}

func (s *UserService) GetUserFromToken(token string) (*users_models.User, error) {
    secretKey, err := s.secretKeyRepository.GetSecretKey()
    secretKey, err := s.secretKeyService.GetSecretKey()
    if err != nil {
        return nil, fmt.Errorf("failed to get secret key: %w", err)
    }
@@ -221,7 +222,7 @@ func (s *UserService) GetUserFromToken(token string) (*users_models.User, error)
func (s *UserService) GenerateAccessToken(
    user *users_models.User,
) (*users_dto.SignInResponseDTO, error) {
    secretKey, err := s.secretKeyRepository.GetSecretKey()
    secretKey, err := s.secretKeyService.GetSecretKey()
    if err != nil {
        return nil, fmt.Errorf("failed to get secret key: %w", err)
    }

@@ -1,9 +1,9 @@
package encryption

import users_repositories "postgresus-backend/internal/features/users/repositories"
import "postgresus-backend/internal/features/encryption/secrets"

var fieldEncryptor = &SecretKeyFieldEncryptor{
    users_repositories.GetSecretKeyRepository(),
    secrets.GetSecretKeyService(),
}

func GetFieldEncryptor() FieldEncryptor {

@@ -8,17 +8,16 @@ import (
    "encoding/base64"
    "errors"
    "fmt"
    "postgresus-backend/internal/features/encryption/secrets"
    "strings"

    users_repositories "postgresus-backend/internal/features/users/repositories"

    "github.com/google/uuid"
)

const encryptedPrefix = "enc:"

type SecretKeyFieldEncryptor struct {
    secretKeyRepository *users_repositories.SecretKeyRepository
    secretKeyService *secrets.SecretKeyService
}

func (e *SecretKeyFieldEncryptor) Encrypt(itemID uuid.UUID, plaintext string) (string, error) {
@@ -30,7 +29,7 @@ func (e *SecretKeyFieldEncryptor) Encrypt(itemID uuid.UUID, plaintext string) (s
        return plaintext, nil
    }

    masterKey, err := e.secretKeyRepository.GetSecretKey()
    masterKey, err := e.secretKeyService.GetSecretKey()
    if err != nil {
        return "", fmt.Errorf("failed to get master key: %w", err)
    }
@@ -82,7 +81,7 @@ func (e *SecretKeyFieldEncryptor) Decrypt(itemID uuid.UUID, ciphertext string) (
        return "", fmt.Errorf("failed to decode ciphertext: %w", err)
    }

    masterKey, err := e.secretKeyRepository.GetSecretKey()
    masterKey, err := e.secretKeyService.GetSecretKey()
    if err != nil {
        return "", fmt.Errorf("failed to get master key: %w", err)
    }

@@ -6,6 +6,7 @@ import (
    "os"
    "path/filepath"
    "runtime"
    "strings"

    env_utils "postgresus-backend/internal/util/env"
)
@@ -151,6 +152,24 @@ func VerifyPostgresesInstallation(
    logger.Info("All PostgreSQL version-specific client tools verification completed successfully!")
}

// EscapePgpassField escapes special characters in a field value for .pgpass file format.
// According to PostgreSQL documentation, the .pgpass file format requires:
// - Backslash (\) must be escaped as \\
// - Colon (:) must be escaped as \:
// Additionally, newlines and carriage returns are removed to prevent format corruption.
func EscapePgpassField(field string) string {
    // Remove newlines and carriage returns that would break .pgpass format
    field = strings.ReplaceAll(field, "\r", "")
    field = strings.ReplaceAll(field, "\n", "")

    // Escape backslashes first (order matters!)
    // Then escape colons
    field = strings.ReplaceAll(field, "\\", "\\\\")
    field = strings.ReplaceAll(field, ":", "\\:")

    return field
}

func getPostgresqlBasePath(
    version PostgresqlVersion,
    envMode env_utils.EnvMode,

@@ -0,0 +1,28 @@
-- +goose Up
-- +goose StatementBegin

ALTER TABLE postgresql_databases
    DROP CONSTRAINT IF EXISTS fk_postgresql_databases_restore_id;

DROP INDEX IF EXISTS idx_postgresql_databases_restore_id;

ALTER TABLE postgresql_databases
    DROP COLUMN IF EXISTS restore_id;

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

ALTER TABLE postgresql_databases
    ADD COLUMN restore_id UUID;

CREATE INDEX idx_postgresql_databases_restore_id ON postgresql_databases (restore_id);

ALTER TABLE postgresql_databases
    ADD CONSTRAINT fk_postgresql_databases_restore_id
    FOREIGN KEY (restore_id)
    REFERENCES restores (id)
    ON DELETE CASCADE;

-- +goose StatementEnd
@@ -0,0 +1,18 @@
-- +goose Up
-- +goose StatementBegin

ALTER TABLE webhook_notifiers
    ADD COLUMN body_template TEXT,
    ADD COLUMN headers TEXT DEFAULT '[]';

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

ALTER TABLE webhook_notifiers
    DROP COLUMN body_template,
    DROP COLUMN headers;

-- +goose StatementEnd
|
||||
|
||||
23	deploy/helm/.helmignore	Normal file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
22	deploy/helm/Chart.yaml	Normal file
@@ -0,0 +1,22 @@
apiVersion: v2
name: postgresus
description: A Helm chart for Postgresus - PostgreSQL backup and management system
type: application
version: 0.0.0
appVersion: "latest"
keywords:
  - postgresql
  - backup
  - database
  - restore
home: https://github.com/RostislavDugin/postgresus

sources:
  - https://github.com/RostislavDugin/postgresus
  - https://github.com/RostislavDugin/postgresus/tree/main/deploy/helm

maintainers:
  - name: Rostislav Dugin
    url: https://github.com/RostislavDugin

icon: https://raw.githubusercontent.com/RostislavDugin/postgresus/main/frontend/public/logo.svg
211	deploy/helm/README.md	Normal file
@@ -0,0 +1,211 @@
# Postgresus Helm Chart

## Installation

Install directly from the OCI registry (no need to clone the repository):

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace
```

## Accessing Postgresus

By default, the chart creates a ClusterIP service. Use port-forward to access:

```bash
kubectl port-forward svc/postgresus-service 4005:4005 -n postgresus
```

Then open `http://localhost:4005` in your browser.

## Configuration

### Main Parameters

| Parameter          | Description        | Default Value               |
| ------------------ | ------------------ | --------------------------- |
| `namespace.create` | Create namespace   | `true`                      |
| `namespace.name`   | Namespace name     | `postgresus`                |
| `image.repository` | Docker image       | `rostislavdugin/postgresus` |
| `image.tag`        | Image tag          | `latest`                    |
| `image.pullPolicy` | Image pull policy  | `Always`                    |
| `replicaCount`     | Number of replicas | `1`                         |

### Service

| Parameter                  | Description             | Default Value |
| -------------------------- | ----------------------- | ------------- |
| `service.type`             | Service type            | `ClusterIP`   |
| `service.port`             | Service port            | `4005`        |
| `service.targetPort`       | Container port          | `4005`        |
| `service.headless.enabled` | Enable headless service | `true`        |

### Storage

| Parameter                      | Description               | Default Value          |
| ------------------------------ | ------------------------- | ---------------------- |
| `persistence.enabled`          | Enable persistent storage | `true`                 |
| `persistence.storageClassName` | Storage class             | `""` (cluster default) |
| `persistence.accessMode`       | Access mode               | `ReadWriteOnce`        |
| `persistence.size`             | Storage size              | `10Gi`                 |
| `persistence.mountPath`        | Mount path                | `/postgresus-data`     |

### Resources

| Parameter                   | Description    | Default Value |
| --------------------------- | -------------- | ------------- |
| `resources.requests.memory` | Memory request | `1Gi`         |
| `resources.requests.cpu`    | CPU request    | `500m`        |
| `resources.limits.memory`   | Memory limit   | `1Gi`         |
| `resources.limits.cpu`      | CPU limit      | `500m`        |

## External Access Options

### Option 1: Port Forward (Default)

Best for development or quick access:

```bash
kubectl port-forward svc/postgresus-service 4005:4005 -n postgresus
```

Access at `http://localhost:4005`

### Option 2: NodePort

For direct access via node IP:

```yaml
# nodeport-values.yaml
service:
  type: NodePort
  port: 4005
  targetPort: 4005
  nodePort: 30080
```

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace -f nodeport-values.yaml
```

Access at `http://<NODE-IP>:30080`

### Option 3: LoadBalancer

For cloud environments with load balancer support:

```yaml
# loadbalancer-values.yaml
service:
  type: LoadBalancer
  port: 80
  targetPort: 4005
```

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace -f loadbalancer-values.yaml
```

Get the external IP:

```bash
kubectl get svc -n postgresus
```

Access at `http://<EXTERNAL-IP>`

### Option 4: Ingress

For domain-based access with TLS:

```yaml
# ingress-values.yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: backup.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: backup-example-com-tls
      hosts:
        - backup.example.com
```

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace -f ingress-values.yaml
```

### Option 5: HTTPRoute (Gateway API)

For clusters using Istio, Envoy Gateway, Cilium, or other Gateway API implementations:

```yaml
# httproute-values.yaml
route:
  enabled: true
  hostnames:
    - backup.example.com
  parentRefs:
    - name: my-gateway
      namespace: istio-system
```

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace -f httproute-values.yaml
```

## Ingress Configuration

| Parameter               | Description       | Default Value            |
| ----------------------- | ----------------- | ------------------------ |
| `ingress.enabled`       | Enable Ingress    | `false`                  |
| `ingress.className`     | Ingress class     | `nginx`                  |
| `ingress.hosts[0].host` | Hostname          | `postgresus.example.com` |
| `ingress.tls`           | TLS configuration | `[]`                     |

## HTTPRoute Configuration

| Parameter          | Description             | Default Value                  |
| ------------------ | ----------------------- | ------------------------------ |
| `route.enabled`    | Enable HTTPRoute        | `false`                        |
| `route.apiVersion` | Gateway API version     | `gateway.networking.k8s.io/v1` |
| `route.hostnames`  | Hostnames for the route | `["postgresus.example.com"]`   |
| `route.parentRefs` | Gateway references      | `[]`                           |

## Health Checks

| Parameter                | Description            | Default Value |
| ------------------------ | ---------------------- | ------------- |
| `livenessProbe.enabled`  | Enable liveness probe  | `true`        |
| `readinessProbe.enabled` | Enable readiness probe | `true`        |

## Custom Storage Size

```yaml
# storage-values.yaml
persistence:
  size: 50Gi
  storageClassName: "fast-ssd"
```

```bash
helm install postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus --create-namespace -f storage-values.yaml
```

## Upgrade

```bash
helm upgrade postgresus oci://ghcr.io/rostislavdugin/charts/postgresus -n postgresus
```

## Uninstall

```bash
helm uninstall postgresus -n postgresus
```
72	deploy/helm/templates/_helpers.tpl	Normal file
@@ -0,0 +1,72 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "postgresus.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "postgresus.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "postgresus.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "postgresus.labels" -}}
helm.sh/chart: {{ include "postgresus.chart" . }}
{{ include "postgresus.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "postgresus.selectorLabels" -}}
app.kubernetes.io/name: {{ include "postgresus.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app: postgresus
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "postgresus.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "postgresus.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{/*
Namespace
*/}}
{{- define "postgresus.namespace" -}}
{{- if .Values.namespace.create }}
{{- .Values.namespace.name }}
{{- else }}
{{- .Release.Namespace }}
{{- end }}
{{- end }}
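The `postgresus.fullname` helper above reuses the release name when it already contains the chart name, otherwise joins the two, then truncates to 63 characters (the Kubernetes DNS-label limit behind Helm's `trunc 63`) and trims a trailing `-`. A TypeScript sketch of that rule (the function name is illustrative):

```typescript
// Sketch of the postgresus.fullname template logic: reuse the release
// name when it already contains the chart name, otherwise "release-chart",
// then truncate to 63 chars (DNS label limit) and trim one trailing "-".
function fullname(releaseName: string, chartName: string): string {
  const base = releaseName.includes(chartName)
    ? releaseName
    : `${releaseName}-${chartName}`;
  return base.slice(0, 63).replace(/-$/, '');
}
```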
35	deploy/helm/templates/httproute.yaml	Normal file
@@ -0,0 +1,35 @@
{{- if .Values.route.enabled -}}
apiVersion: {{ .Values.route.apiVersion }}
kind: {{ .Values.route.kind }}
metadata:
  name: {{ template "postgresus.fullname" . }}
  annotations: {{ toYaml .Values.route.annotations | nindent 4 }}
  labels:
    app.kubernetes.io/component: "app"
    {{- include "postgresus.labels" . | nindent 4 }}
spec:
  {{- with .Values.route.parentRefs }}
  parentRefs:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.route.hostnames }}
  hostnames:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  rules:
    - backendRefs:
        - name: {{ template "postgresus.fullname" . }}-service
          port: {{ .Values.service.port }}
      {{- with .Values.route.filters }}
      filters:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.route.matches }}
      matches:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.route.timeouts }}
      timeouts:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
42	deploy/helm/templates/ingress.yaml	Normal file
@@ -0,0 +1,42 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "postgresus.fullname" . }}-ingress
  namespace: {{ include "postgresus.namespace" . }}
  labels:
    {{- include "postgresus.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "postgresus.fullname" $ }}-service
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}
8	deploy/helm/templates/namespace.yaml	Normal file
@@ -0,0 +1,8 @@
{{- if .Values.namespace.create }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace.name }}
  labels:
    {{- include "postgresus.labels" . | nindent 4 }}
{{- end }}
36	deploy/helm/templates/service.yaml	Normal file
@@ -0,0 +1,36 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "postgresus.fullname" . }}-service
  namespace: {{ include "postgresus.namespace" . }}
  labels:
    {{- include "postgresus.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "postgresus.selectorLabels" . | nindent 4 }}
---
{{- if .Values.service.headless.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "postgresus.fullname" . }}-headless
  namespace: {{ include "postgresus.namespace" . }}
  labels:
    {{- include "postgresus.labels" . | nindent 4 }}
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "postgresus.selectorLabels" . | nindent 4 }}
{{- end }}
84	deploy/helm/templates/statefulset.yaml	Normal file
@@ -0,0 +1,84 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "postgresus.fullname" . }}
  namespace: {{ include "postgresus.namespace" . }}
  labels:
    {{- include "postgresus.labels" . | nindent 4 }}
spec:
  serviceName: {{ include "postgresus.fullname" . }}-headless
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "postgresus.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "postgresus.selectorLabels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          volumeMounts:
            - name: postgresus-storage
              mountPath: {{ .Values.persistence.mountPath }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.livenessProbe.enabled }}
          livenessProbe:
            httpGet:
              {{- toYaml .Values.livenessProbe.httpGet | nindent 14 }}
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
          {{- end }}
          {{- if .Values.readinessProbe.enabled }}
          readinessProbe:
            httpGet:
              {{- toYaml .Values.readinessProbe.httpGet | nindent 14 }}
            initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
          {{- end }}
  {{- if .Values.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: postgresus-storage
      spec:
        accessModes:
          - {{ .Values.persistence.accessMode }}
        {{- if .Values.persistence.storageClassName }}
        storageClassName: {{ .Values.persistence.storageClassName }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.persistence.size }}
  {{- end }}
  updateStrategy:
    {{- toYaml .Values.updateStrategy | nindent 4 }}
106	deploy/helm/values.yaml	Normal file
@@ -0,0 +1,106 @@
# Default values for postgresus

# Namespace configuration
namespace:
  create: true
  name: postgresus

# Image configuration
image:
  repository: rostislavdugin/postgresus
  tag: latest
  pullPolicy: Always

# StatefulSet configuration
replicaCount: 1

# Service configuration
service:
  type: ClusterIP
  port: 4005 # Service port
  targetPort: 4005 # Internal container port
  # Headless service for StatefulSet
  headless:
    enabled: true

# Resource limits and requests
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "500m"

# Persistent storage configuration
persistence:
  enabled: true
  # Storage class name. Leave empty to use cluster default.
  # Examples: "longhorn", "standard", "gp2", etc.
  storageClassName: ""
  accessMode: ReadWriteOnce
  size: 10Gi
  # Mount path in container
  mountPath: /postgresus-data

# Ingress configuration (disabled by default - using LoadBalancer instead)
ingress:
  enabled: false
  className: nginx
  annotations: {}
  hosts:
    - host: postgresus.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []

# HTTPRoute configuration for Gateway API
route:
  enabled: false
  apiVersion: gateway.networking.k8s.io/v1
  kind: HTTPRoute
  annotations: {}
  hostnames:
    - postgresus.example.com
  parentRefs: []
  filters: []
  matches: []
  timeouts: {}

# Health checks configuration
# Note: The application only has /api/v1/system/health endpoint
livenessProbe:
  enabled: true
  httpGet:
    path: /api/v1/system/health
    port: 4005
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  enabled: true
  httpGet:
    path: /api/v1/system/health
    port: 4005
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

# StatefulSet update strategy
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 0

# Pod labels and annotations
podLabels: {}
podAnnotations: {}

# Node selector, tolerations and affinity
nodeSelector: {}
tolerations: []
affinity: {}
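The default resources above set requests equal to limits for both CPU and memory, which gives the pod the Kubernetes "Guaranteed" QoS class (assuming every container in the pod does the same). A small sketch of that property; the `Resources` shape below is illustrative, not part of the chart's values schema:

```typescript
// Sketch: requests == limits for every resource key of a container is
// the condition for the Kubernetes "Guaranteed" QoS class. The Resources
// shape here is an assumption for illustration only.
interface Resources {
  requests: Record<string, string>;
  limits: Record<string, string>;
}

function isGuaranteed(r: Resources): boolean {
  const keys = new Set([...Object.keys(r.requests), ...Object.keys(r.limits)]);
  return [...keys].every((k) => r.requests[k] !== undefined && r.requests[k] === r.limits[k]);
}
```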
@@ -3,7 +3,10 @@
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/logo.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta
      name="viewport"
      content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"
    />
    <meta name="robots" content="noindex" />
    <title>Postgresus - PostgreSQL backups</title>
@@ -1,4 +1,4 @@
import { App as AntdApp, ConfigProvider } from 'antd';
import { App as AntdApp, ConfigProvider, theme } from 'antd';
import { useEffect, useState } from 'react';
import { BrowserRouter, Route } from 'react-router';
import { Routes } from 'react-router';
@@ -7,10 +7,12 @@ import { userApi } from './entity/users';
import { AuthPageComponent } from './pages/AuthPageComponent';
import { OAuthCallbackPage } from './pages/OAuthCallbackPage';
import { OauthStorageComponent } from './pages/OauthStorageComponent';
import { ThemeProvider, useTheme } from './shared/theme';
import { MainScreenComponent } from './widgets/main/MainScreenComponent';

function App() {
function AppContent() {
  const [isAuthorized, setIsAuthorized] = useState(false);
  const { resolvedTheme } = useTheme();

  useEffect(() => {
    const isAuthorized = userApi.isAuthorized();
@@ -24,6 +26,7 @@ function App() {
  return (
    <ConfigProvider
      theme={{
        algorithm: resolvedTheme === 'dark' ? theme.darkAlgorithm : theme.defaultAlgorithm,
        token: {
          colorPrimary: '#155dfc', // Tailwind blue-600
        },
@@ -45,4 +48,12 @@ function App() {
  );
}

function App() {
  return (
    <ThemeProvider>
      <AppContent />
    </ThemeProvider>
  );
}

export default App;
@@ -1,7 +1,9 @@
import { getApplicationServer } from '../../../constants';
import RequestOptions from '../../../shared/api/RequestOptions';
import { apiHelper } from '../../../shared/api/apiHelper';
import type { CreateReadOnlyUserResponse } from '../model/CreateReadOnlyUserResponse';
import type { Database } from '../model/Database';
import type { IsReadOnlyResponse } from '../model/IsReadOnlyResponse';

export const databaseApi = {
  async createDatabase(database: Database) {
@@ -85,4 +87,22 @@ export const databaseApi = {
      )
      .then((res) => res.isUsing);
  },

  async isUserReadOnly(database: Database) {
    const requestOptions: RequestOptions = new RequestOptions();
    requestOptions.setBody(JSON.stringify(database));
    return apiHelper.fetchPostJson<IsReadOnlyResponse>(
      `${getApplicationServer()}/api/v1/databases/is-readonly`,
      requestOptions,
    );
  },

  async createReadOnlyUser(database: Database) {
    const requestOptions: RequestOptions = new RequestOptions();
    requestOptions.setBody(JSON.stringify(database));
    return apiHelper.fetchPostJson<CreateReadOnlyUserResponse>(
      `${getApplicationServer()}/api/v1/databases/create-readonly-user`,
      requestOptions,
    );
  },
};
@@ -4,3 +4,5 @@ export { DatabaseType } from './model/DatabaseType';
export { Period } from './model/Period';
export { type PostgresqlDatabase } from './model/postgresql/PostgresqlDatabase';
export { PostgresqlVersion } from './model/postgresql/PostgresqlVersion';
export { type IsReadOnlyResponse } from './model/IsReadOnlyResponse';
export { type CreateReadOnlyUserResponse } from './model/CreateReadOnlyUserResponse';
@@ -0,0 +1,4 @@
export interface CreateReadOnlyUserResponse {
  username: string;
  password: string;
}
@@ -0,0 +1,3 @@
export interface IsReadOnlyResponse {
  isReadOnly: boolean;
}
@@ -9,6 +9,7 @@ export type { TelegramNotifier } from './models/telegram/TelegramNotifier';
export { validateTelegramNotifier } from './models/telegram/validateTelegramNotifier';

export type { WebhookNotifier } from './models/webhook/WebhookNotifier';
export type { WebhookHeader } from './models/webhook/WebhookHeader';
export { validateWebhookNotifier } from './models/webhook/validateWebhookNotifier';
export { WebhookMethod } from './models/webhook/WebhookMethod';
@@ -0,0 +1,4 @@
export interface WebhookHeader {
  key: string;
  value: string;
}
@@ -1,6 +1,9 @@
import type { WebhookHeader } from './WebhookHeader';
import type { WebhookMethod } from './WebhookMethod';

export interface WebhookNotifier {
  webhookUrl: string;
  webhookMethod: WebhookMethod;
  bodyTemplate?: string;
  headers?: WebhookHeader[];
}
@@ -281,6 +281,163 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
    return () => container.removeEventListener('scroll', handleScroll);
  }, [hasMore, isLoadingMore, currentLimit, scrollContainerRef]);

  const renderStatus = (status: BackupStatus, record: Backup) => {
    if (status === BackupStatus.FAILED) {
      return (
        <Tooltip title="Click to see error details">
          <div
            className="flex cursor-pointer items-center text-red-600 underline"
            onClick={() => setShowingBackupError(record)}
          >
            <ExclamationCircleOutlined className="mr-2" style={{ fontSize: 16 }} />
            <div>Failed</div>
          </div>
        </Tooltip>
      );
    }

    if (status === BackupStatus.COMPLETED) {
      return (
        <div className="flex items-center text-green-600">
          <CheckCircleOutlined className="mr-2" style={{ fontSize: 16 }} />
          <div>Successful</div>
          {record.encryption === BackupEncryption.ENCRYPTED && (
            <Tooltip title="Encrypted">
              <LockOutlined className="ml-1" style={{ fontSize: 14 }} />
            </Tooltip>
          )}
        </div>
      );
    }

    if (status === BackupStatus.DELETED) {
      return (
        <div className="flex items-center text-gray-600">
          <DeleteOutlined className="mr-2" style={{ fontSize: 16 }} />
          <div>Deleted</div>
        </div>
      );
    }

    if (status === BackupStatus.IN_PROGRESS) {
      return (
        <div className="flex items-center font-bold text-blue-600">
          <SyncOutlined spin />
          <span className="ml-2">In progress</span>
        </div>
      );
    }

    if (status === BackupStatus.CANCELED) {
      return (
        <div className="flex items-center text-gray-600">
          <CloseCircleOutlined className="mr-2" style={{ fontSize: 16 }} />
          <div>Canceled</div>
        </div>
      );
    }

    return <span className="font-bold">{status}</span>;
  };

  const renderActions = (record: Backup) => {
    return (
      <div className="flex gap-2 text-lg">
        {record.status === BackupStatus.IN_PROGRESS && isCanManageDBs && (
          <div className="flex gap-2">
            {cancellingBackupId === record.id ? (
              <SyncOutlined spin />
            ) : (
              <Tooltip title="Cancel backup">
                <CloseCircleOutlined
                  className="cursor-pointer"
                  onClick={() => {
                    if (cancellingBackupId) return;
                    cancelBackup(record.id);
                  }}
                  style={{ color: '#ff0000', opacity: cancellingBackupId ? 0.2 : 1 }}
                />
              </Tooltip>
            )}
          </div>
        )}

        {record.status === BackupStatus.COMPLETED && (
          <div className="flex gap-2">
            {deletingBackupId === record.id ? (
              <SyncOutlined spin />
            ) : (
              <>
                {isCanManageDBs && (
                  <Tooltip title="Delete backup">
                    <DeleteOutlined
                      className="cursor-pointer"
                      onClick={() => {
                        if (deletingBackupId) return;
                        setDeleteConfimationId(record.id);
                      }}
                      style={{ color: '#ff0000', opacity: deletingBackupId ? 0.2 : 1 }}
                    />
                  </Tooltip>
                )}

                <Tooltip title="Restore from backup">
                  <CloudUploadOutlined
                    className="cursor-pointer"
                    onClick={() => {
                      setShowingRestoresBackupId(record.id);
                    }}
                    style={{
                      color: '#155dfc',
                    }}
                  />
                </Tooltip>

                <Tooltip title="Download backup file. It can be restored manually via pg_restore (from custom format)">
                  {downloadingBackupId === record.id ? (
                    <SyncOutlined spin style={{ color: '#155dfc' }} />
                  ) : (
                    <DownloadOutlined
                      className="cursor-pointer"
                      onClick={() => {
                        if (downloadingBackupId) return;
                        setDownloadingBackupId(record.id);
                      }}
                      style={{
                        opacity: downloadingBackupId ? 0.2 : 1,
                        color: '#155dfc',
                      }}
                    />
                  )}
                </Tooltip>
              </>
            )}
          </div>
        )}
      </div>
    );
  };

  const formatSize = (sizeMb: number) => {
    if (sizeMb >= 1024) {
      const sizeGb = sizeMb / 1024;
      return `${Number(sizeGb.toFixed(2)).toLocaleString()} GB`;
    }
    return `${Number(sizeMb?.toFixed(2)).toLocaleString()} MB`;
  };

  const formatDuration = (durationMs: number) => {
    const hours = Math.floor(durationMs / 3600000);
    const minutes = Math.floor((durationMs % 3600000) / 60000);
    const seconds = Math.floor((durationMs % 60000) / 1000);

    if (hours > 0) {
      return `${hours}h ${minutes}m ${seconds}s`;
    }

    return `${minutes}m ${seconds}s`;
  };

  const columns: ColumnsType<Backup> = [
    {
      title: 'Created at',
@@ -289,7 +446,9 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
      render: (createdAt: string) => (
        <div>
          {dayjs.utc(createdAt).local().format(getUserTimeFormat().format)} <br />
          <span className="text-gray-500">({dayjs.utc(createdAt).local().fromNow()})</span>
          <span className="text-gray-500 dark:text-gray-400">
            ({dayjs.utc(createdAt).local().fromNow()})
          </span>
        </div>
      ),
      sorter: (a, b) => dayjs(a.createdAt).unix() - dayjs(b.createdAt).unix(),
@@ -299,66 +458,7 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
    {
      title: 'Status',
      dataIndex: 'status',
      key: 'status',
      render: (status: BackupStatus, record: Backup) => {
        if (status === BackupStatus.FAILED) {
          return (
            <Tooltip title="Click to see error details">
              <div
                className="flex cursor-pointer items-center text-red-600 underline"
                onClick={() => setShowingBackupError(record)}
              >
                <ExclamationCircleOutlined className="mr-2" style={{ fontSize: 16 }} />

                <div>Failed</div>
              </div>
            </Tooltip>
          );
        }

        if (status === BackupStatus.COMPLETED) {
          return (
            <div className="flex items-center text-green-600">
              <CheckCircleOutlined className="mr-2" style={{ fontSize: 16 }} />
              <div>Successful</div>

              {record.encryption === BackupEncryption.ENCRYPTED && (
                <Tooltip title="Encrypted">
                  <LockOutlined className="ml-1" style={{ fontSize: 14 }} />
                </Tooltip>
              )}
            </div>
          );
        }

        if (status === BackupStatus.DELETED) {
          return (
            <div className="flex items-center text-gray-600">
              <DeleteOutlined className="mr-2" style={{ fontSize: 16 }} />
              <div>Deleted</div>
            </div>
          );
        }

        if (status === BackupStatus.IN_PROGRESS) {
          return (
            <div className="flex items-center font-bold text-blue-600">
              <SyncOutlined spin />
              <span className="ml-2">In progress</span>
            </div>
          );
        }

        if (status === BackupStatus.CANCELED) {
          return (
            <div className="flex items-center text-gray-600">
              <CloseCircleOutlined className="mr-2" style={{ fontSize: 16 }} />
              <div>Canceled</div>
            </div>
          );
        }

        return <span className="font-bold">{status}</span>;
      },
      render: (status: BackupStatus, record: Backup) => renderStatus(status, record),
      filters: [
        {
          value: BackupStatus.IN_PROGRESS,
@@ -398,112 +498,20 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
      dataIndex: 'backupSizeMb',
      key: 'backupSizeMb',
      width: 150,
      render: (sizeMb: number) => {
        if (sizeMb >= 1024) {
          const sizeGb = sizeMb / 1024;
          return `${Number(sizeGb.toFixed(2)).toLocaleString()} GB`;
        }
        return `${Number(sizeMb?.toFixed(2)).toLocaleString()} MB`;
      },
      render: (sizeMb: number) => formatSize(sizeMb),
    },
    {
      title: 'Duration',
      dataIndex: 'backupDurationMs',
      key: 'backupDurationMs',
      width: 150,
      render: (durationMs: number) => {
        const hours = Math.floor(durationMs / 3600000);
        const minutes = Math.floor((durationMs % 3600000) / 60000);
|
||||
const seconds = Math.floor((durationMs % 60000) / 1000);
|
||||
|
||||
if (hours > 0) {
|
||||
return `${hours}h ${minutes}m ${seconds}s`;
|
||||
}
|
||||
|
||||
return `${minutes}m ${seconds}s`;
|
||||
},
|
||||
render: (durationMs: number) => formatDuration(durationMs),
|
||||
},
|
||||
{
|
||||
title: 'Actions',
|
||||
dataIndex: '',
|
||||
key: '',
|
||||
render: (_, record: Backup) => {
|
||||
return (
|
||||
<div className="flex gap-2 text-lg">
|
||||
{record.status === BackupStatus.IN_PROGRESS && isCanManageDBs && (
|
||||
<div className="flex gap-2">
|
||||
{cancellingBackupId === record.id ? (
|
||||
<SyncOutlined spin />
|
||||
) : (
|
||||
<Tooltip title="Cancel backup">
|
||||
<CloseCircleOutlined
|
||||
className="cursor-pointer"
|
||||
onClick={() => {
|
||||
if (cancellingBackupId) return;
|
||||
cancelBackup(record.id);
|
||||
}}
|
||||
style={{ color: '#ff0000', opacity: cancellingBackupId ? 0.2 : 1 }}
|
||||
/>
|
||||
</Tooltip>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{record.status === BackupStatus.COMPLETED && (
|
||||
<div className="flex gap-2">
|
||||
{deletingBackupId === record.id ? (
|
||||
<SyncOutlined spin />
|
||||
) : (
|
||||
<>
|
||||
{isCanManageDBs && (
|
||||
<Tooltip title="Delete backup">
|
||||
<DeleteOutlined
|
||||
className="cursor-pointer"
|
||||
onClick={() => {
|
||||
if (deletingBackupId) return;
|
||||
setDeleteConfimationId(record.id);
|
||||
}}
|
||||
style={{ color: '#ff0000', opacity: deletingBackupId ? 0.2 : 1 }}
|
||||
/>
|
||||
</Tooltip>
|
||||
)}
|
||||
|
||||
<Tooltip title="Restore from backup">
|
||||
<CloudUploadOutlined
|
||||
className="cursor-pointer"
|
||||
onClick={() => {
|
||||
setShowingRestoresBackupId(record.id);
|
||||
}}
|
||||
style={{
|
||||
color: '#155dfc',
|
||||
}}
|
||||
/>
|
||||
</Tooltip>
|
||||
|
||||
<Tooltip title="Download backup file. It can be restored manually via pg_restore (from custom format)">
|
||||
{downloadingBackupId === record.id ? (
|
||||
<SyncOutlined spin style={{ color: '#155dfc' }} />
|
||||
) : (
|
||||
<DownloadOutlined
|
||||
className="cursor-pointer"
|
||||
onClick={() => {
|
||||
if (downloadingBackupId) return;
|
||||
setDownloadingBackupId(record.id);
|
||||
}}
|
||||
style={{
|
||||
opacity: downloadingBackupId ? 0.2 : 1,
|
||||
color: '#155dfc',
|
||||
}}
|
||||
/>
|
||||
)}
|
||||
</Tooltip>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
},
|
||||
render: (_, record: Backup) => renderActions(record),
|
||||
},
|
||||
];
|
||||
|
||||
@@ -516,11 +524,11 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="mt-5 w-full rounded-md bg-white p-5 shadow">
|
||||
<h2 className="text-xl font-bold">Backups</h2>
|
||||
<div className="mt-5 w-full rounded-md bg-white p-3 shadow md:p-5 dark:bg-gray-800">
|
||||
<h2 className="text-lg font-bold md:text-xl dark:text-white">Backups</h2>
|
||||
|
||||
{!isBackupConfigLoading && !backupConfig?.isBackupsEnabled && (
|
||||
<div className="text-red-600">
|
||||
<div className="text-sm text-red-600 md:text-base">
|
||||
Scheduled backups are disabled (you can enable it back in the backup configuration)
|
||||
</div>
|
||||
)}
|
||||
@@ -535,30 +543,98 @@ export const BackupsComponent = ({ database, isCanManageDBs, scrollContainerRef
|
||||
disabled={isMakeBackupRequestLoading}
|
||||
loading={isMakeBackupRequestLoading}
|
||||
>
|
||||
Make backup right now
|
||||
<span className="md:hidden">Backup now</span>
|
||||
<span className="hidden md:inline">Make backup right now</span>
|
||||
</Button>
|
||||
</div>
|
||||
|
||||
<div className="mt-5 max-w-[850px]">
|
||||
<Table
|
||||
bordered
|
||||
columns={columns}
|
||||
dataSource={backups}
|
||||
rowKey="id"
|
||||
loading={isBackupsLoading}
|
||||
size="small"
|
||||
pagination={false}
|
||||
/>
|
||||
{isLoadingMore && (
|
||||
<div className="mt-2 flex justify-center">
|
||||
<Spin />
|
||||
</div>
|
||||
)}
|
||||
{!hasMore && backups.length > 0 && (
|
||||
<div className="mt-2 text-center text-gray-500">
|
||||
All backups loaded ({totalBackups} total)
|
||||
</div>
|
||||
)}
|
||||
<div className="mt-5 w-full md:max-w-[850px]">
|
||||
{/* Mobile card view */}
|
||||
<div className="md:hidden">
|
||||
{isBackupsLoading ? (
|
||||
<div className="flex justify-center py-8">
|
||||
<Spin />
|
||||
</div>
|
||||
) : (
|
||||
<div>
|
||||
{backups.map((backup) => (
|
||||
<div
|
||||
key={backup.id}
|
||||
className="mb-2 rounded-lg border border-gray-200 bg-white p-4 shadow-sm dark:border-gray-700 dark:bg-gray-800"
|
||||
>
|
||||
<div className="space-y-3">
|
||||
<div className="flex items-start justify-between">
|
||||
<div>
|
||||
<div className="text-xs text-gray-500 dark:text-gray-400">Created at</div>
|
||||
<div className="text-sm font-medium">
|
||||
{dayjs.utc(backup.createdAt).local().format(getUserTimeFormat().format)}
|
||||
</div>
|
||||
<div className="text-xs text-gray-500 dark:text-gray-400">
|
||||
({dayjs.utc(backup.createdAt).local().fromNow()})
|
||||
</div>
|
||||
</div>
|
||||
<div>{renderStatus(backup.status, backup)}</div>
|
||||
</div>
|
||||
|
||||
<div className="grid grid-cols-2 gap-4">
|
||||
<div>
|
||||
<div className="text-xs text-gray-500 dark:text-gray-400">Size</div>
|
||||
<div className="text-sm font-medium">{formatSize(backup.backupSizeMb)}</div>
|
||||
</div>
|
||||
<div>
|
||||
<div className="text-xs text-gray-500 dark:text-gray-400">Duration</div>
|
||||
<div className="text-sm font-medium">
|
||||
{formatDuration(backup.backupDurationMs)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-end border-t border-gray-200 pt-3">
|
||||
{renderActions(backup)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{isLoadingMore && (
|
||||
<div className="mt-3 flex justify-center">
|
||||
<Spin />
|
||||
</div>
|
||||
)}
|
||||
{!hasMore && backups.length > 0 && (
|
||||
<div className="mt-3 text-center text-sm text-gray-500 dark:text-gray-400">
|
||||
All backups loaded ({totalBackups} total)
|
||||
</div>
|
||||
)}
|
||||
{!isBackupsLoading && backups.length === 0 && (
|
||||
<div className="py-8 text-center text-gray-500 dark:text-gray-400">No backups yet</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
{/* Desktop table view */}
|
||||
<div className="hidden md:block">
|
||||
<Table
|
||||
bordered
|
||||
columns={columns}
|
||||
dataSource={backups}
|
||||
rowKey="id"
|
||||
loading={isBackupsLoading}
|
||||
size="small"
|
||||
pagination={false}
|
||||
/>
|
||||
{isLoadingMore && (
|
||||
<div className="mt-2 flex justify-center">
|
||||
<Spin />
|
||||
</div>
|
||||
)}
|
||||
{!hasMore && backups.length > 0 && (
|
||||
<div className="mt-2 text-center text-gray-500 dark:text-gray-400">
|
||||
All backups loaded ({totalBackups} total)
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{deleteConfimationId && (
|
||||
|
||||
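The hunks above extract the table's inline size and duration renderers into shared `formatSize`/`formatDuration` helpers so the new mobile card view can reuse them. The extracted logic, taken directly from the diff, works standalone (the surrounding component wiring is omitted):

```typescript
// formatSize: values of 1 GB and above are shown in GB; both branches
// round to two decimals and strip trailing zeros via Number(...).
const formatSize = (sizeMb: number): string => {
  if (sizeMb >= 1024) {
    const sizeGb = sizeMb / 1024;
    return `${Number(sizeGb.toFixed(2)).toLocaleString()} GB`;
  }
  return `${Number(sizeMb?.toFixed(2)).toLocaleString()} MB`;
};

// formatDuration: each unit is truncated (not rounded), and the hours
// segment is dropped entirely when it is zero.
const formatDuration = (durationMs: number): string => {
  const hours = Math.floor(durationMs / 3600000);
  const minutes = Math.floor((durationMs % 3600000) / 60000);
  const seconds = Math.floor((durationMs % 60000) / 1000);

  if (hours > 0) {
    return `${hours}h ${minutes}m ${seconds}s`;
  }

  return `${minutes}m ${seconds}s`;
};
```

For example, `formatSize(2048)` yields `2 GB` and `formatDuration(3723000)` yields `1h 2m 3s`.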
@@ -204,8 +204,8 @@ export const EditBackupConfigComponent = ({

return (
<div>
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Backups enabled</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Backups enabled</div>
<Switch
checked={backupConfig.isBackupsEnabled}
onChange={(checked) => {
@@ -217,13 +217,13 @@ export const EditBackupConfigComponent = ({

{backupConfig.isBackupsEnabled && (
<>
<div className="mt-4 mb-1 flex w-full items-center">
<div className="min-w-[150px]">Backup interval</div>
<div className="mt-4 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Backup interval</div>
<Select
value={backupInterval?.interval}
onChange={(v) => saveInterval({ interval: v })}
size="small"
className="max-w-[200px] grow"
className="w-full max-w-[200px] grow"
options={[
{ label: 'Hourly', value: IntervalType.HOURLY },
{ label: 'Daily', value: IntervalType.DAILY },
@@ -234,8 +234,8 @@ export const EditBackupConfigComponent = ({
</div>

{backupInterval?.interval === IntervalType.WEEKLY && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Backup weekday</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Backup weekday</div>
<Select
value={displayedWeekday}
onChange={(localWeekday) => {
@@ -244,15 +244,15 @@ export const EditBackupConfigComponent = ({
saveInterval({ weekday: getUtcWeekday(localWeekday, ref) });
}}
size="small"
className="max-w-[200px] grow"
className="w-full max-w-[200px] grow"
options={weekdayOptions}
/>
</div>
)}

{backupInterval?.interval === IntervalType.MONTHLY && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Backup day of month</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Backup day of month</div>
<InputNumber
min={1}
max={31}
@@ -263,21 +263,21 @@ export const EditBackupConfigComponent = ({
saveInterval({ dayOfMonth: getUtcDayOfMonth(localDom, ref) });
}}
size="small"
className="max-w-[200px] grow"
className="w-full max-w-[200px] grow"
/>
</div>
)}

{backupInterval?.interval !== IntervalType.HOURLY && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Backup time of day</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Backup time of day</div>
<TimePicker
value={localTime}
format={timeFormat.format}
use12Hours={timeFormat.use12Hours}
allowClear={false}
size="small"
className="max-w-[200px] grow"
className="w-full max-w-[200px] grow"
onChange={(t) => {
if (!t) return;
const patch: Partial<Interval> = { timeOfDay: t.utc().format('HH:mm') };
@@ -295,156 +295,168 @@ export const EditBackupConfigComponent = ({
</div>
)}

<div className="mt-4 mb-1 flex w-full items-center">
<div className="min-w-[150px]">Retry backup if failed</div>
<Switch
size="small"
checked={backupConfig.isRetryIfFailed}
onChange={(checked) => updateBackupConfig({ isRetryIfFailed: checked })}
/>

<Tooltip
className="cursor-pointer"
title="Automatically retry failed backups. Backups can fail due to network failures, storage issues or temporary database unavailability."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>

{backupConfig.isRetryIfFailed && (
<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Max failed tries count</div>
<InputNumber
min={1}
max={10}
value={backupConfig.maxFailedTriesCount}
onChange={(value) => updateBackupConfig({ maxFailedTriesCount: value || 1 })}
<div className="mt-4 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Retry backup if failed</div>
<div className="flex items-center">
<Switch
size="small"
className="max-w-[200px] grow"
checked={backupConfig.isRetryIfFailed}
onChange={(checked) => updateBackupConfig({ isRetryIfFailed: checked })}
/>

<Tooltip
className="cursor-pointer"
title="Maximum number of retry attempts for failed backups. You will receive a notification when all tries have failed."
title="Automatically retry failed backups. Backups can fail due to network failures, storage issues or temporary database unavailability."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
)}

<div className="mt-5 mb-1 flex w-full items-center">
<div className="min-w-[150px]">CPU count</div>
<InputNumber
min={1}
max={16}
value={backupConfig.cpuCount}
onChange={(value) => updateBackupConfig({ cpuCount: value || 1 })}
size="small"
className="max-w-[200px] grow"
/>

<Tooltip
className="cursor-pointer"
title="Number of CPU cores to use for restore processing. Higher values may speed up restores, but use more resources."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>

<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Store period</div>
<Select
value={backupConfig.storePeriod}
onChange={(v) => updateBackupConfig({ storePeriod: v })}
size="small"
className="max-w-[200px] grow"
options={[
{ label: '1 day', value: Period.DAY },
{ label: '1 week', value: Period.WEEK },
{ label: '1 month', value: Period.MONTH },
{ label: '3 months', value: Period.THREE_MONTH },
{ label: '6 months', value: Period.SIX_MONTH },
{ label: '1 year', value: Period.YEAR },
{ label: '2 years', value: Period.TWO_YEARS },
{ label: '3 years', value: Period.THREE_YEARS },
{ label: '4 years', value: Period.FOUR_YEARS },
{ label: '5 years', value: Period.FIVE_YEARS },
{ label: 'Forever', value: Period.FOREVER },
]}
/>
{backupConfig.isRetryIfFailed && (
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Max failed tries count</div>
<div className="flex items-center">
<InputNumber
min={1}
max={10}
value={backupConfig.maxFailedTriesCount}
onChange={(value) => updateBackupConfig({ maxFailedTriesCount: value || 1 })}
size="small"
className="w-full max-w-[200px] grow"
/>

<Tooltip
className="cursor-pointer"
title="How long to keep the backups? Make sure you have enough storage space."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="Maximum number of retry attempts for failed backups. You will receive a notification when all tries have failed."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
)}

<div className="mt-5 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">CPU count</div>
<div className="flex items-center">
<InputNumber
min={1}
max={16}
value={backupConfig.cpuCount}
onChange={(value) => updateBackupConfig({ cpuCount: value || 1 })}
size="small"
className="w-full max-w-[200px] grow"
/>

<Tooltip
className="cursor-pointer"
title="Number of CPU cores to use for restore processing. Higher values may speed up restores, but use more resources."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Store period</div>
<div className="flex items-center">
<Select
value={backupConfig.storePeriod}
onChange={(v) => updateBackupConfig({ storePeriod: v })}
size="small"
className="w-full max-w-[200px] grow"
options={[
{ label: '1 day', value: Period.DAY },
{ label: '1 week', value: Period.WEEK },
{ label: '1 month', value: Period.MONTH },
{ label: '3 months', value: Period.THREE_MONTH },
{ label: '6 months', value: Period.SIX_MONTH },
{ label: '1 year', value: Period.YEAR },
{ label: '2 years', value: Period.TWO_YEARS },
{ label: '3 years', value: Period.THREE_YEARS },
{ label: '4 years', value: Period.FOUR_YEARS },
{ label: '5 years', value: Period.FIVE_YEARS },
{ label: 'Forever', value: Period.FOREVER },
]}
/>

<Tooltip
className="cursor-pointer"
title="How long to keep the backups? Make sure you have enough storage space."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

<div className="mb-3" />
</>
)}

<div className="mt-2 mb-1 flex w-full items-center">
<div className="min-w-[150px]">Storage</div>
<Select
value={backupConfig.storage?.id}
onChange={(storageId) => {
if (storageId.includes('create-new-storage')) {
setShowCreateStorage(true);
return;
}
<div className="mt-2 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Storage</div>
<div className="flex w-full items-center">
<Select
value={backupConfig.storage?.id}
onChange={(storageId) => {
if (storageId.includes('create-new-storage')) {
setShowCreateStorage(true);
return;
}

const selectedStorage = storages.find((s) => s.id === storageId);
updateBackupConfig({ storage: selectedStorage });
const selectedStorage = storages.find((s) => s.id === storageId);
updateBackupConfig({ storage: selectedStorage });

if (backupConfig.storage?.id) {
setIsShowWarn(true);
}
}}
size="small"
className="mr-2 max-w-[200px] grow"
options={[
...storages.map((s) => ({ label: s.name, value: s.id })),
{ label: 'Create new storage', value: 'create-new-storage' },
]}
placeholder="Select storage"
/>

{backupConfig.storage?.type && (
<img
src={getStorageLogoFromType(backupConfig.storage.type)}
alt="storageIcon"
className="ml-1 h-4 w-4"
if (backupConfig.storage?.id) {
setIsShowWarn(true);
}
}}
size="small"
className="mr-2 max-w-[200px] grow"
options={[
...storages.map((s) => ({ label: s.name, value: s.id })),
{ label: 'Create new storage', value: 'create-new-storage' },
]}
placeholder="Select storage"
/>
)}

{backupConfig.storage?.type && (
<img
src={getStorageLogoFromType(backupConfig.storage.type)}
alt="storageIcon"
className="ml-1 h-4 w-4"
/>
)}
</div>
</div>

<div className="mb-1 flex w-full items-center">
<div className="min-w-[150px]">Encryption</div>
<Select
value={backupConfig.encryption}
onChange={(v) => updateBackupConfig({ encryption: v })}
size="small"
className="max-w-[200px] grow"
options={[
{ label: 'None', value: BackupEncryption.NONE },
{ label: 'Encrypt backup files', value: BackupEncryption.ENCRYPTED },
]}
/>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Encryption</div>
<div className="flex items-center">
<Select
value={backupConfig.encryption}
onChange={(v) => updateBackupConfig({ encryption: v })}
size="small"
className="w-full max-w-[200px] grow"
options={[
{ label: 'None', value: BackupEncryption.NONE },
{ label: 'Encrypt backup files', value: BackupEncryption.ENCRYPTED },
]}
/>

<Tooltip
className="cursor-pointer"
title="If backup is encrypted, backup files in your storage (S3, local, etc.) cannot be used directly. You can restore backups through Postgresus or download them unencrypted via the 'Download' button."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="If backup is encrypted, backup files in your storage (S3, local, etc.) cannot be used directly. You can restore backups through Postgresus or download them unencrypted via the 'Download' button."
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

{backupConfig.isBackupsEnabled && (
<>
<div className="mt-4 mb-1 flex w-full items-start">
<div className="mt-1 min-w-[150px]">Notifications</div>
<div className="mt-4 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-start">
<div className="mt-0 mb-1 min-w-[150px] sm:mt-1 sm:mb-0">Notifications</div>
<div className="flex flex-col space-y-2">
<Checkbox
checked={backupConfig.sendNotificationsOn.includes(
@@ -517,7 +529,7 @@ export const EditBackupConfigComponent = ({
open={isShowCreateStorage}
onCancel={() => setShowCreateStorage(false)}
>
<div className="my-3 max-w-[275px] text-gray-500">
<div className="my-3 max-w-[275px] text-gray-500 dark:text-gray-400">
Storage - is a place where backups will be stored (local disk, S3, Google Drive, etc.)
</div>
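In the interval hunks above, the component stores the schedule in UTC: the locally selected weekday or day of month is passed through helpers such as `getUtcWeekday(localWeekday, ref)` before saving, because a backup scheduled near midnight local time can fall on a different day in UTC. The sketch below is a hypothetical illustration of that shift, not the project's actual helper (the real one takes a dayjs reference value; here the timezone offset is an explicit parameter to keep the example deterministic):

```typescript
// Hypothetical sketch of a local→UTC weekday conversion.
// weekday: 0 = Sunday … 6 = Saturday; offsetMinutes: minutes east of UTC.
const toUtcWeekday = (
  localWeekday: number,
  localTime: string, // scheduled time of day, 'HH:mm', in local time
  offsetMinutes: number,
): number => {
  const [h, m] = localTime.split(':').map(Number);
  const minutesLocal = h * 60 + m;
  const minutesUtc = minutesLocal - offsetMinutes;
  // Crossing midnight in either direction moves the schedule to the
  // previous or next weekday.
  const dayShift = Math.floor(minutesUtc / (24 * 60));
  return (((localWeekday + dayShift) % 7) + 7) % 7;
};
```

For example, a Monday 00:30 backup in a UTC+2 timezone lands on Sunday 22:30 UTC, so the stored weekday shifts back by one day.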
@@ -9,6 +9,7 @@ import {
databaseApi,
} from '../../../entity/databases';
import { EditBackupConfigComponent } from '../../backups';
import { CreateReadOnlyComponent } from './edit/CreateReadOnlyComponent';
import { EditDatabaseBaseInfoComponent } from './edit/EditDatabaseBaseInfoComponent';
import { EditDatabaseNotifiersComponent } from './edit/EditDatabaseNotifiersComponent';
import { EditDatabaseSpecificDataComponent } from './edit/EditDatabaseSpecificDataComponent';
@@ -41,9 +42,9 @@ export const CreateDatabaseComponent = ({ workspaceId, onCreated, onClose }: Pro
sendNotificationsOn: [],
} as Database);

const [step, setStep] = useState<'base-info' | 'db-settings' | 'backup-config' | 'notifiers'>(
'base-info',
);
const [step, setStep] = useState<
'base-info' | 'db-settings' | 'create-readonly-user' | 'backup-config' | 'notifiers'
>('base-info');

const createDatabase = async (database: Database, backupConfig: BackupConfig) => {
setIsCreating(true);
@@ -97,12 +98,25 @@ export const CreateDatabaseComponent = ({ workspaceId, onCreated, onClose }: Pro
isSaveToApi={false}
onSaved={(database) => {
setDatabase({ ...database });
setStep('backup-config');
setStep('create-readonly-user');
}}
/>
);
}

if (step === 'create-readonly-user') {
return (
<CreateReadOnlyComponent
database={database}
onReadOnlyUserUpdated={(database) => {
setDatabase({ ...database });
}}
onGoBack={() => setStep('db-settings')}
onContinue={() => setStep('backup-config')}
/>
);
}

if (step === 'backup-config') {
return (
<EditBackupConfigComponent
@@ -29,7 +29,7 @@ export const DatabaseCardComponent = ({

return (
<div
className={`mb-3 cursor-pointer rounded p-3 shadow ${selectedDatabaseId === database.id ? 'bg-blue-100' : 'bg-white'}`}
className={`mb-3 cursor-pointer rounded p-3 shadow ${selectedDatabaseId === database.id ? 'bg-blue-100 dark:bg-blue-800' : 'bg-white dark:bg-gray-800'}`}
onClick={() => setSelectedDatabaseId(database.id)}
>
<div className="flex">
@@ -49,7 +49,7 @@ export const DatabaseCardComponent = ({
</div>

{storage && (
<div className="text-sm text-gray-500">
<div className="text-sm text-gray-500 dark:text-gray-400">
<span>Storage: </span>
<span className="inline-flex items-center">
{storage.name}{' '}
@@ -65,11 +65,13 @@ export const DatabaseCardComponent = ({
)}

{database.lastBackupTime && (
<div className="text-gray-500">Last backup {dayjs(database.lastBackupTime).fromNow()}</div>
<div className="text-gray-500 dark:text-gray-400">
Last backup {dayjs(database.lastBackupTime).fromNow()}
</div>
)}

{database.lastBackupErrorMessage && (
<div className="mt-1 flex items-center text-sm text-red-600 underline">
<div className="mt-1 flex items-center text-sm text-red-600 underline dark:text-red-400">
<InfoCircleOutlined className="mr-1" style={{ color: 'red' }} />
Has backup error
</div>
@@ -51,14 +51,14 @@ export const DatabaseComponent = ({
>
<div className="flex">
<div
className={`mr-2 cursor-pointer rounded-tl-md rounded-tr-md px-6 py-2 ${currentTab === 'config' ? 'bg-white' : 'bg-gray-200'}`}
className={`mr-2 cursor-pointer rounded-tl-md rounded-tr-md px-6 py-2 ${currentTab === 'config' ? 'bg-white dark:bg-gray-800' : 'bg-gray-200 dark:bg-gray-700'}`}
onClick={() => setCurrentTab('config')}
>
Config
</div>

<div
className={`mr-2 cursor-pointer rounded-tl-md rounded-tr-md px-6 py-2 ${currentTab === 'backups' ? 'bg-white' : 'bg-gray-200'}`}
className={`mr-2 cursor-pointer rounded-tl-md rounded-tr-md px-6 py-2 ${currentTab === 'backups' ? 'bg-white dark:bg-gray-800' : 'bg-gray-200 dark:bg-gray-700'}`}
onClick={() => setCurrentTab('backups')}
>
Backups
@@ -147,9 +147,9 @@ export const DatabaseConfigComponent = ({
};

return (
<div className="w-full rounded-tr-md rounded-br-md rounded-bl-md bg-white p-5 shadow">
<div className="w-full rounded-tr-md rounded-br-md rounded-bl-md bg-white p-3 shadow sm:p-5 dark:bg-gray-800">
{!isEditName ? (
<div className="mb-5 flex items-center text-2xl font-bold">
<div className="mb-5 flex items-center text-xl font-bold sm:text-2xl">
{database.name}

{isCanManageDBs && (
@@ -162,7 +162,7 @@ export const DatabaseConfigComponent = ({
<div>
<div className="flex items-center">
<Input
className="max-w-[250px]"
className="max-w-full sm:max-w-[250px]"
value={editDatabase?.name}
onChange={(e) => {
if (!editDatabase) return;
@@ -174,7 +174,7 @@ export const DatabaseConfigComponent = ({
size="large"
/>

<div className="ml-1 flex items-center">
<div className="ml-1 flex flex-shrink-0 items-center">
<Button
type="text"
className="flex h-6 w-6 items-center justify-center p-0"
@@ -184,7 +184,7 @@ export const DatabaseConfigComponent = ({
setEditDatabase(undefined);
}}
>
<CloseOutlined className="text-gray-500" />
<CloseOutlined className="text-gray-500 dark:text-gray-400" />
</Button>
</div>
</div>
@@ -204,7 +204,7 @@ export const DatabaseConfigComponent = ({
)}

{database.lastBackupErrorMessage && (
<div className="max-w-[400px] rounded border border-red-600 px-3 py-3">
<div className="mb-4 max-w-full rounded border border-red-600 px-3 py-3 sm:max-w-[400px]">
<div className="mt-1 flex items-center text-sm font-bold text-red-600">
<InfoCircleOutlined className="mr-2" style={{ color: 'red' }} />
Last backup error
@@ -216,7 +216,7 @@ export const DatabaseConfigComponent = ({
{database.lastBackupErrorMessage}
</div>

<div className="mt-3 text-sm text-gray-500">
<div className="mt-3 text-sm text-gray-500 dark:text-gray-400">
To clean this error (choose any):
<ul>
<li>- test connection via button below (even if you updated settings);</li>
@@ -226,8 +226,8 @@ export const DatabaseConfigComponent = ({
</div>
)}

<div className="flex flex-wrap gap-10">
<div className="w-[400px]">
<div className="flex flex-col gap-6 lg:flex-row lg:flex-wrap lg:gap-10">
<div className="w-full lg:w-[400px]">
<div className="mt-5 flex items-center font-bold">
<div>Database settings</div>

@@ -260,7 +260,7 @@ export const DatabaseConfigComponent = ({
</div>
</div>

<div className="w-[400px]">
<div className="w-full lg:w-[400px]">
<div className="mt-5 flex items-center font-bold">
<div>Backup config</div>

@@ -299,8 +299,8 @@ export const DatabaseConfigComponent = ({
</div>
</div>

<div className="flex flex-wrap gap-10">
<div className="w-[400px]">
<div className="flex flex-col gap-6 lg:flex-row lg:flex-wrap lg:gap-10">
<div className="w-full lg:w-[400px]">
<div className="mt-5 flex items-center font-bold">
<div>Healthcheck settings</div>

@@ -328,7 +328,7 @@ export const DatabaseConfigComponent = ({
</div>
</div>

<div className="w-[400px]">
<div className="w-full lg:w-[400px]">
<div className="mt-5 flex items-center font-bold">
<div>Notifiers settings</div>

@@ -366,11 +366,10 @@ export const DatabaseConfigComponent = ({
</div>

{!isEditDatabaseSpecificDataSettings && (
<div className="mt-10">
<div className="mt-10 flex flex-col gap-2 sm:flex-row sm:gap-0">
<Button
type="primary"
className="mr-1"
ghost
className="w-full sm:mr-1 sm:w-auto"
onClick={testConnection}
loading={isTestingConnection}
disabled={isTestingConnection}
@@ -380,8 +379,7 @@ export const DatabaseConfigComponent = ({

<Button
type="primary"
className="mr-1"
ghost
className="w-full sm:mr-1 sm:w-auto"
onClick={copyDatabase}
loading={isCopying}
disabled={isCopying}
@@ -391,6 +389,7 @@ export const DatabaseConfigComponent = ({

<Button
type="primary"
className="w-full sm:w-auto"
danger
onClick={() => setIsShowRemoveConfirm(true)}
ghost
@@ -4,6 +4,7 @@ import { useEffect, useState } from 'react';
import { databaseApi } from '../../../entity/databases';
import type { Database } from '../../../entity/databases';
import type { WorkspaceResponse } from '../../../entity/workspaces';
import { useIsMobile } from '../../../shared/hooks';
import { CreateDatabaseComponent } from './CreateDatabaseComponent';
import { DatabaseCardComponent } from './DatabaseCardComponent';
import { DatabaseComponent } from './DatabaseComponent';
@@ -17,6 +18,7 @@ interface Props {
const SELECTED_DATABASE_STORAGE_KEY = 'selectedDatabaseId';

export const DatabasesComponent = ({ contentHeight, workspace, isCanManageDBs }: Props) => {
const isMobile = useIsMobile();
const [isLoading, setIsLoading] = useState(true);
const [databases, setDatabases] = useState<Database[]>([]);
const [searchQuery, setSearchQuery] = useState('');
@@ -44,7 +46,8 @@ export const DatabasesComponent = ({ contentHeight, workspace, isCanManageDBs }:
setDatabases(databases);
if (selectDatabaseId) {
updateSelectedDatabaseId(selectDatabaseId);
} else if (!selectedDatabaseId && !isSilent) {
} else if (!selectedDatabaseId && !isSilent && !isMobile) {
// On desktop, auto-select a database; on mobile, keep it unselected to show the list first
const savedDatabaseId = localStorage.getItem(
`${SELECTED_DATABASE_STORAGE_KEY}_${workspace.id}`,
);
@@ -87,66 +90,86 @@ export const DatabasesComponent = ({ contentHeight, workspace, isCanManageDBs }:
database.name.toLowerCase().includes(searchQuery.toLowerCase()),
);

// On mobile, show either the list or the database details
const showDatabaseList = !isMobile || !selectedDatabaseId;
const showDatabaseDetails = selectedDatabaseId && (!isMobile || selectedDatabaseId);

return (
<>
<div className="flex grow">
<div
className="mx-3 w-[250px] min-w-[250px] overflow-y-auto pr-2"
style={{ height: contentHeight }}
>
{databases.length >= 5 && (
<>
{isCanManageDBs && addDatabaseButton}
{showDatabaseList && (
<div
className="w-full overflow-y-auto md:mx-3 md:w-[250px] md:min-w-[250px] md:pr-2"
style={{ height: contentHeight }}
>
{databases.length >= 5 && (
<>
{isCanManageDBs && addDatabaseButton}

<div className="mb-2">
<input
placeholder="Search database"
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="w-full border-b border-gray-300 p-1 text-gray-500 outline-none"
/>
</div>
</>
)}

{filteredDatabases.length > 0
? filteredDatabases.map((database) => (
<DatabaseCardComponent
key={database.id}
database={database}
selectedDatabaseId={selectedDatabaseId}
setSelectedDatabaseId={updateSelectedDatabaseId}
/>
))
: searchQuery && (
<div className="mb-4 text-center text-sm text-gray-500">
No databases found matching "{searchQuery}"
<div className="mb-2">
<input
placeholder="Search database"
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="w-full border-b border-gray-300 p-1 text-gray-500 outline-none dark:text-gray-400"
/>
</div>
)}
</>
)}

{databases.length < 5 && isCanManageDBs && addDatabaseButton}
{filteredDatabases.length > 0
? filteredDatabases.map((database) => (
<DatabaseCardComponent
key={database.id}
database={database}
selectedDatabaseId={selectedDatabaseId}
setSelectedDatabaseId={updateSelectedDatabaseId}
/>
))
: searchQuery && (
<div className="mb-4 text-center text-sm text-gray-500 dark:text-gray-400">
No databases found matching "{searchQuery}"
</div>
)}

<div className="mx-3 text-center text-xs text-gray-500">
Database - is a thing we are backing up
{databases.length < 5 && isCanManageDBs && addDatabaseButton}

<div className="mx-3 text-center text-xs text-gray-500 dark:text-gray-400">
Database - is a thing we are backing up
</div>
</div>
</div>
)}

{selectedDatabaseId && (
<DatabaseComponent
contentHeight={contentHeight}
databaseId={selectedDatabaseId}
onDatabaseChanged={() => {
loadDatabases();
}}
onDatabaseDeleted={() => {
const remainingDatabases = databases.filter(
(database) => database.id !== selectedDatabaseId,
);
updateSelectedDatabaseId(remainingDatabases[0]?.id);
loadDatabases();
}}
isCanManageDBs={isCanManageDBs}
/>
{showDatabaseDetails && (
<div className="flex w-full flex-col md:flex-1">
{isMobile && (
<div className="mb-2">
<Button
type="default"
onClick={() => updateSelectedDatabaseId(undefined)}
className="w-full"
>
← Back to databases
</Button>
</div>
)}

<DatabaseComponent
contentHeight={isMobile ? contentHeight - 50 : contentHeight}
databaseId={selectedDatabaseId}
onDatabaseChanged={() => {
loadDatabases();
}}
onDatabaseDeleted={() => {
const remainingDatabases = databases.filter(
(database) => database.id !== selectedDatabaseId,
);
updateSelectedDatabaseId(remainingDatabases[0]?.id);
loadDatabases();
}}
isCanManageDBs={isCanManageDBs}
/>
</div>
)}
</div>

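The reworked `DatabasesComponent` above restores the previously selected database id from localStorage and falls back to the first database when the saved id is stale. That resolution step, embedded inline in `loadDatabases`, can be sketched as a pure helper (the name `resolveSelectedId` is illustrative, not from the diff):

```typescript
interface HasId {
  id: string;
}

// Resolve which item to select after a reload: prefer the id saved in
// localStorage if it still exists in the freshly loaded list, otherwise
// fall back to the first item (or undefined for an empty list).
function resolveSelectedId(savedId: string | null, items: HasId[]): string | undefined {
  if (savedId && items.some((item) => item.id === savedId)) {
    return savedId;
  }
  return items[0]?.id;
}
```

Note that the component additionally skips this auto-selection entirely on mobile (`!isMobile`), so the list screen is shown first.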
@@ -0,0 +1,170 @@
import { Button, Modal, Spin } from 'antd';
import { useEffect, useState } from 'react';

import { type Database, databaseApi } from '../../../../entity/databases';

interface Props {
database: Database;
onReadOnlyUserUpdated: (database: Database) => void;

onGoBack: () => void;
onContinue: () => void;
}

export const CreateReadOnlyComponent = ({
database,
onReadOnlyUserUpdated,
onGoBack,
onContinue,
}: Props) => {
const [isCheckingReadOnlyUser, setIsCheckingReadOnlyUser] = useState(false);
const [isCreatingReadOnlyUser, setIsCreatingReadOnlyUser] = useState(false);
const [isShowSkipConfirmation, setShowSkipConfirmation] = useState(false);

const checkReadOnlyUser = async (): Promise<boolean> => {
try {
const response = await databaseApi.isUserReadOnly(database);
return response.isReadOnly;
} catch (e) {
alert((e as Error).message);
return false;
}
};

const createReadOnlyUser = async () => {
setIsCreatingReadOnlyUser(true);

try {
const response = await databaseApi.createReadOnlyUser(database);
database.postgresql!.username = response.username;
database.postgresql!.password = response.password;
onReadOnlyUserUpdated(database);
onContinue();
} catch (e) {
alert((e as Error).message);
}

setIsCreatingReadOnlyUser(false);
};

const handleSkip = () => {
setShowSkipConfirmation(true);
};

const handleSkipConfirmed = () => {
setShowSkipConfirmation(false);
onContinue();
};

useEffect(() => {
const run = async () => {
setIsCheckingReadOnlyUser(true);

const isReadOnly = await checkReadOnlyUser();
if (isReadOnly) {
// already has a read-only user
onContinue();
}

setIsCheckingReadOnlyUser(false);
};
run();
}, []);

if (isCheckingReadOnlyUser) {
return (
<div className="flex items-center">
<Spin />
<span className="ml-3">Checking read-only user...</span>
</div>
);
}

return (
<div>
<div className="mb-5">
<p className="mb-3 text-lg font-bold">Create a read-only user for Postgresus?</p>

<p className="mb-2">
A read-only user is a PostgreSQL user with limited permissions that can only read data
from your database, not modify it. This is recommended for backup operations because:
</p>

<ul className="mb-2 ml-5 list-disc">
<li>it prevents accidental data modifications during backup</li>
<li>it follows the principle of least privilege</li>
<li>it's a security best practice</li>
</ul>

<p className="mb-2">
Postgresus enforces enterprise-grade security (
<a
href="https://postgresus.com/security"
target="_blank"
rel="noreferrer"
className="!text-blue-600 dark:!text-blue-400"
>
read the details here
</a>
). However, it is not possible to be protected from every possible risk.
</p>

<p className="mt-3">
<b>A read-only user lets you avoid storing credentials with write access at all</b>. Even
in the worst case of a breach, nobody will be able to corrupt your data.
</p>
</div>

<div className="mt-5 flex">
<Button className="mr-auto" type="primary" ghost onClick={() => onGoBack()}>
Back
</Button>

<Button className="mr-2 ml-auto" danger ghost onClick={handleSkip}>
Skip
</Button>

<Button
type="primary"
onClick={createReadOnlyUser}
loading={isCreatingReadOnlyUser}
disabled={isCreatingReadOnlyUser}
>
Yes, create read-only user
</Button>
</div>

<Modal
title="Skip read-only user creation?"
open={isShowSkipConfirmation}
onCancel={() => setShowSkipConfirmation(false)}
footer={null}
width={450}
>
<div className="mb-5">
<p className="mb-2">Are you sure you want to skip creating a read-only user?</p>

<p className="mb-2">
Using a user with full permissions for backups is not recommended and may pose security
risks. Postgresus highly recommends that you do not skip this step.
</p>

<p>
100% protection is never possible. Even a 0.01% risk of a full compromise is worth
guarding against, so it is better to follow the secure path with a read-only user.
</p>
</div>

<div className="flex justify-end">
<Button className="mr-2" danger onClick={handleSkipConfirmed}>
Yes, I accept risks
</Button>

<Button type="primary" onClick={() => setShowSkipConfirmation(false)}>
Let's continue with the secure way
</Button>
</div>
</Modal>
</div>
);
};
@@ -93,7 +93,7 @@ export const EditDatabaseNotifiersComponent = ({

return (
<div>
<div className="mb-5 max-w-[275px] text-gray-500">
<div className="mb-5 max-w-[275px] text-gray-500 dark:text-gray-400">
Notifier - is a place where notifications will be sent (email, Slack, Telegram, etc.)
<br />
<br />
@@ -162,7 +162,7 @@ export const EditDatabaseNotifiersComponent = ({
open={isShowCreateNotifier}
onCancel={() => setShowCreateNotifier(false)}
>
<div className="my-3 max-w-[275px] text-gray-500">
<div className="my-3 max-w-[275px] text-gray-500 dark:text-gray-400">
Notifier - is a place where notifications will be sent (email, Slack, Telegram, etc.)
</div>

@@ -48,10 +48,12 @@ export const EditDatabaseSpecificDataComponent = ({

const [isConnectionTested, setIsConnectionTested] = useState(false);
const [isTestingConnection, setIsTestingConnection] = useState(false);
const [isConnectionFailed, setIsConnectionFailed] = useState(false);

const testConnection = async () => {
if (!editingDatabase) return;
setIsTestingConnection(true);
setIsConnectionFailed(false);

try {
await databaseApi.testDatabaseConnectionDirect(editingDatabase);
@@ -61,6 +63,7 @@ export const EditDatabaseSpecificDataComponent = ({
description: 'You can continue with the next step',
});
} catch (e) {
setIsConnectionFailed(true);
alert((e as Error).message);
}

@@ -89,6 +92,7 @@ export const EditDatabaseSpecificDataComponent = ({
setIsSaving(false);
setIsConnectionTested(false);
setIsTestingConnection(false);
setIsConnectionFailed(false);

setEditingDatabase({ ...database });
}, [database]);
@@ -177,12 +181,13 @@ export const EditDatabaseSpecificDataComponent = ({
{isLocalhostDb && (
<div className="mb-1 flex">
<div className="min-w-[150px]" />
<div className="max-w-[200px] text-xs text-gray-500">
<div className="max-w-[200px] text-xs text-gray-500 dark:text-gray-400">
Please{' '}
<a
href="https://postgresus.com/faq#how-to-backup-localhost"
target="_blank"
rel="noreferrer"
className="!text-blue-600 dark:!text-blue-400"
>
read this document
</a>{' '}
@@ -326,6 +331,13 @@ export const EditDatabaseSpecificDataComponent = ({
</Button>
)}
</div>

{isConnectionFailed && (
<div className="mt-3 text-sm text-gray-500 dark:text-gray-400">
If your database uses an IP whitelist, make sure the Postgresus server IP is added to the
allowed list.
</div>
)}
</div>
);
};
@@ -23,7 +23,7 @@ export const ShowDatabaseNotifiersComponent = ({ database }: Props) => {
</div>
))
) : (
<div className="text-gray-500">No notifiers configured</div>
<div className="text-gray-500 dark:text-gray-400">No notifiers configured</div>
)}
</div>
</div>

@@ -79,9 +79,12 @@ export const HealthckeckAttemptsComponent = ({ database }: Props) => {

useEffect(() => {
let interval: number | null = null;
let isCancelled = false;

setIsHealthcheckConfigLoading(true);
healthcheckConfigApi.getHealthcheckConfig(database.id).then((healthcheckConfig) => {
if (isCancelled) return;

setIsHealthcheckConfigLoading(false);

if (healthcheckConfig.isHealthcheckEnabled) {
@@ -93,17 +96,18 @@ export const HealthckeckAttemptsComponent = ({ database }: Props) => {
if (period === 'today') {
interval = setInterval(() => {
loadHealthcheckAttempts(false);
}, 60_000); // 5 seconds
}, 60_000);
}
}
});

return () => {
isCancelled = true;
if (interval) {
clearInterval(interval);
}
};
}, [period]);
}, [database.id, period]);

if (isHealthcheckConfigLoading) {
return (
@@ -118,16 +122,16 @@ export const HealthckeckAttemptsComponent = ({ database }: Props) => {
}

return (
<div className="w-full rounded-tr-md rounded-br-md rounded-bl-md bg-white p-5 shadow">
<h2 className="text-xl font-bold">Healthcheck attempts</h2>
<div className="w-full rounded-tr-md rounded-br-md rounded-bl-md bg-white p-3 shadow sm:p-5 dark:bg-gray-800">
<h2 className="text-lg font-bold sm:text-xl">Healthcheck attempts</h2>

<div className="mt-4 flex items-center gap-2">
<span className="mr-2 text-sm font-medium">Period</span>
<div className="mt-3 flex flex-col gap-2 sm:mt-4 sm:flex-row sm:items-center">
<span className="text-sm font-medium sm:mr-2">Period</span>
<Select
size="small"
value={period}
onChange={(value) => setPeriod(value)}
style={{ width: 120 }}
className="w-full sm:w-[120px]"
options={[
{ value: 'today', label: 'Today' },
{ value: '7d', label: '7 days' },
@@ -137,7 +141,7 @@ export const HealthckeckAttemptsComponent = ({ database }: Props) => {
/>
</div>

<div className="mt-5" />
<div className="mt-4 sm:mt-5" />

{isLoading ? (
<div className="flex justify-center">
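The `HealthckeckAttemptsComponent` effect above guards its async result with an `isCancelled` flag, so a response arriving after the effect's cleanup (e.g. when `database.id` or `period` changes) is ignored. Stripped of the React specifics, the pattern can be sketched as a small helper (illustrative name, not from the diff):

```typescript
// Illustrative sketch of the cancellation-flag pattern: the flag lives in
// the closure, so flipping it via cancel() makes late deliveries no-ops.
function makeCancellable<T>(onResult: (value: T) => void): {
  deliver: (value: T) => void;
  cancel: () => void;
} {
  let isCancelled = false;
  return {
    deliver: (value: T) => {
      if (isCancelled) return; // result arrived after cleanup: ignore it
      onResult(value);
    },
    cancel: () => {
      isCancelled = true;
    },
  };
}
```

In the component, `cancel()` corresponds to the cleanup function returned from `useEffect`, which also clears the polling interval.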
@@ -41,31 +41,31 @@ export const ShowHealthcheckConfigComponent = ({ databaseId }: Props) => {
<div className="space-y-4">
<div className="mb-1 flex items-center">
<div className="min-w-[180px]">Is health check enabled</div>
<div className="w-[250px]">{healthcheckConfig.isHealthcheckEnabled ? 'Yes' : 'No'}</div>
<div>{healthcheckConfig.isHealthcheckEnabled ? 'Yes' : 'No'}</div>
</div>

{healthcheckConfig.isHealthcheckEnabled && (
<>
<div className="mb-1 flex items-center">
<div className="min-w-[180px]">Notify when unavailable</div>
<div className="w-[250px]">
<div className="lg:w-[200px]">
{healthcheckConfig.isSentNotificationWhenUnavailable ? 'Yes' : 'No'}
</div>
</div>

<div className="mb-1 flex items-center">
<div className="min-w-[180px]">Check interval (minutes)</div>
<div className="w-[250px]">{healthcheckConfig.intervalMinutes}</div>
<div className="lg:w-[200px]">{healthcheckConfig.intervalMinutes}</div>
</div>

<div className="mb-1 flex items-center">
<div className="min-w-[180px]">Attempts before down</div>
<div className="w-[250px]">{healthcheckConfig.attemptsBeforeConcideredAsDown}</div>
<div className="lg:w-[200px]">{healthcheckConfig.attemptsBeforeConcideredAsDown}</div>
</div>

<div className="mb-1 flex items-center">
<div className="min-w-[180px]">Store attempts (days)</div>
<div className="w-[250px]">{healthcheckConfig.storeAttemptsDays}</div>
<div className="lg:w-[200px]">{healthcheckConfig.storeAttemptsDays}</div>
</div>
</>
)}

@@ -17,13 +17,13 @@ export const NotifierCardComponent = ({
}: Props) => {
return (
<div
className={`mb-3 cursor-pointer rounded p-3 shadow ${selectedNotifierId === notifier.id ? 'bg-blue-100' : 'bg-white'}`}
className={`mb-3 cursor-pointer rounded p-3 shadow ${selectedNotifierId === notifier.id ? 'bg-blue-100 dark:bg-blue-800' : 'bg-white dark:bg-gray-800'}`}
onClick={() => setSelectedNotifierId(notifier.id)}
>
<div className="mb-1 font-bold">{notifier.name}</div>

<div className="flex items-center">
<div className="text-sm text-gray-500">
<div className="text-sm text-gray-500 dark:text-gray-400">
Notify to {getNotifierNameFromType(notifier.notifierType)}
</div>

@@ -35,7 +35,7 @@ export const NotifierCardComponent = ({
</div>

{notifier.lastSendError && (
<div className="mt-1 flex items-center text-sm text-red-600 underline">
<div className="mt-1 flex items-center text-sm text-red-600 underline dark:text-red-400">
<InfoCircleOutlined className="mr-1" style={{ color: 'red' }} />
Has send error
</div>

@@ -124,7 +124,7 @@ export const NotifierComponent = ({

return (
<div className="w-full">
<div className="grow overflow-y-auto rounded bg-white p-5 shadow">
<div className="grow overflow-y-auto rounded bg-white p-5 shadow dark:bg-gray-800">
{!notifier ? (
<div className="mt-10 flex justify-center">
<Spin />
@@ -166,7 +166,7 @@ export const NotifierComponent = ({
setEditNotifier(undefined);
}}
>
<CloseOutlined className="text-gray-500" />
<CloseOutlined className="text-gray-500 dark:text-gray-400" />
</Button>
</div>
</div>
@@ -198,7 +198,7 @@ export const NotifierComponent = ({
{notifier.lastSendError}
</div>

<div className="mt-3 text-sm break-words whitespace-pre-wrap text-gray-500">
<div className="mt-3 text-sm break-words whitespace-pre-wrap text-gray-500 dark:text-gray-400">
To clean this error (choose any):
<ul>
<li>
@@ -246,7 +246,6 @@ export const NotifierComponent = ({
<Button
type="primary"
className="mr-1"
ghost
onClick={sendTestNotification}
loading={isSendingTestNotification}
disabled={isSendingTestNotification}

@@ -4,6 +4,7 @@ import { useEffect, useState } from 'react';
import { notifierApi } from '../../../entity/notifiers';
import type { Notifier } from '../../../entity/notifiers';
import type { WorkspaceResponse } from '../../../entity/workspaces';
import { useIsMobile } from '../../../shared/hooks';
import { NotifierCardComponent } from './NotifierCardComponent';
import { NotifierComponent } from './NotifierComponent';
import { EditNotifierComponent } from './edit/EditNotifierComponent';
@@ -14,21 +15,47 @@ interface Props {
isCanManageNotifiers: boolean;
}

const SELECTED_NOTIFIER_STORAGE_KEY = 'selectedNotifierId';

export const NotifiersComponent = ({ contentHeight, workspace, isCanManageNotifiers }: Props) => {
const isMobile = useIsMobile();
const [isLoading, setIsLoading] = useState(true);
const [notifiers, setNotifiers] = useState<Notifier[]>([]);
const [searchQuery, setSearchQuery] = useState('');

const [isShowAddNotifier, setIsShowAddNotifier] = useState(false);
const [selectedNotifierId, setSelectedNotifierId] = useState<string | undefined>(undefined);
const loadNotifiers = () => {
setIsLoading(true);

const updateSelectedNotifierId = (notifierId: string | undefined) => {
setSelectedNotifierId(notifierId);
if (notifierId) {
localStorage.setItem(`${SELECTED_NOTIFIER_STORAGE_KEY}_${workspace.id}`, notifierId);
} else {
localStorage.removeItem(`${SELECTED_NOTIFIER_STORAGE_KEY}_${workspace.id}`);
}
};

const loadNotifiers = (isSilent = false, selectNotifierId?: string) => {
if (!isSilent) {
setIsLoading(true);
}

notifierApi
.getNotifiers(workspace.id)
.then((notifiers) => {
setNotifiers(notifiers);
if (!selectedNotifierId) {
setSelectedNotifierId(notifiers[0]?.id);
if (selectNotifierId) {
updateSelectedNotifierId(selectNotifierId);
} else if (!selectedNotifierId && !isSilent && !isMobile) {
// On desktop, auto-select a notifier; on mobile, keep it unselected to show the list first
const savedNotifierId = localStorage.getItem(
`${SELECTED_NOTIFIER_STORAGE_KEY}_${workspace.id}`,
);
const notifierToSelect =
savedNotifierId && notifiers.some((n) => n.id === savedNotifierId)
? savedNotifierId
: notifiers[0]?.id;
updateSelectedNotifierId(notifierToSelect);
}
})
.catch((e) => alert(e.message))
@@ -37,6 +64,12 @@ export const NotifiersComponent = ({ contentHeight, workspace, isCanManageNotifi

useEffect(() => {
loadNotifiers();

const interval = setInterval(() => {
loadNotifiers(true);
}, 5 * 60_000);

return () => clearInterval(interval);
}, []);

if (isLoading) {
@@ -53,45 +86,89 @@ export const NotifiersComponent = ({ contentHeight, workspace, isCanManageNotifi
</Button>
);

const filteredNotifiers = notifiers.filter((notifier) =>
notifier.name.toLowerCase().includes(searchQuery.toLowerCase()),
);

// On mobile, show either the list or the notifier details
const showNotifierList = !isMobile || !selectedNotifierId;
const showNotifierDetails = selectedNotifierId && (!isMobile || selectedNotifierId);

return (
<>
<div className="flex grow">
<div
className="mx-3 w-[250px] min-w-[250px] overflow-y-auto"
style={{ height: contentHeight }}
>
{notifiers.length >= 5 && isCanManageNotifiers && addNotifierButton}
{showNotifierList && (
<div
className="w-full overflow-y-auto md:mx-3 md:w-[250px] md:min-w-[250px] md:pr-2"
style={{ height: contentHeight }}
>
{notifiers.length >= 5 && (
<>
{isCanManageNotifiers && addNotifierButton}

{notifiers.map((notifier) => (
<NotifierCardComponent
key={notifier.id}
notifier={notifier}
selectedNotifierId={selectedNotifierId}
setSelectedNotifierId={setSelectedNotifierId}
/>
))}
<div className="mb-2">
<input
placeholder="Search notifier"
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="w-full border-b border-gray-300 p-1 text-gray-500 outline-none dark:text-gray-400"
/>
</div>
</>
)}

{notifiers.length < 5 && isCanManageNotifiers && addNotifierButton}
{filteredNotifiers.length > 0
? filteredNotifiers.map((notifier) => (
<NotifierCardComponent
key={notifier.id}
notifier={notifier}
selectedNotifierId={selectedNotifierId}
setSelectedNotifierId={updateSelectedNotifierId}
/>
))
: searchQuery && (
<div className="mb-4 text-center text-sm text-gray-500 dark:text-gray-400">
No notifiers found matching "{searchQuery}"
</div>
)}

<div className="mx-3 text-center text-xs text-gray-500">
Notifier - is a place where notifications will be sent (email, Slack, Telegram, etc.)
{notifiers.length < 5 && isCanManageNotifiers && addNotifierButton}

<div className="mx-3 text-center text-xs text-gray-500 dark:text-gray-400">
Notifier - is a place where notifications will be sent (email, Slack, Telegram, etc.)
</div>
</div>
</div>
)}

{selectedNotifierId && (
<NotifierComponent
notifierId={selectedNotifierId}
onNotifierChanged={() => {
loadNotifiers();
}}
onNotifierDeleted={() => {
loadNotifiers();
setSelectedNotifierId(
notifiers.filter((notifier) => notifier.id !== selectedNotifierId)[0]?.id,
);
}}
isCanManageNotifiers={isCanManageNotifiers}
/>
{showNotifierDetails && (
<div className="flex w-full flex-col md:flex-1">
{isMobile && (
<div className="mb-2">
<Button
type="default"
onClick={() => updateSelectedNotifierId(undefined)}
className="w-full"
>
← Back to notifiers
</Button>
</div>
)}

<NotifierComponent
notifierId={selectedNotifierId}
onNotifierChanged={() => {
loadNotifiers();
}}
onNotifierDeleted={() => {
const remainingNotifiers = notifiers.filter(
(notifier) => notifier.id !== selectedNotifierId,
);
updateSelectedNotifierId(remainingNotifiers[0]?.id);
loadNotifiers();
}}
isCanManageNotifiers={isCanManageNotifiers}
/>
</div>
)}
</div>

@@ -102,7 +179,7 @@ export const NotifiersComponent = ({ contentHeight, workspace, isCanManageNotifi
open={isShowAddNotifier}
onCancel={() => setIsShowAddNotifier(false)}
>
<div className="my-3 max-w-[250px] text-gray-500">
<div className="my-3 max-w-[250px] text-gray-500 dark:text-gray-400">
Notifier - is a place where notifications will be sent (email, Slack, Telegram, etc.)
</div>

@@ -111,8 +188,8 @@ export const NotifiersComponent = ({ contentHeight, workspace, isCanManageNotifi
isShowName
isShowClose={false}
onClose={() => setIsShowAddNotifier(false)}
onChanged={() => {
loadNotifiers();
onChanged={(notifier) => {
loadNotifiers(false, notifier.id);
setIsShowAddNotifier(false);
}}
/>

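The search inputs added to both `DatabasesComponent` and `NotifiersComponent` filter by a case-insensitive substring match on the item name. Extracted as a standalone helper (illustrative name, assuming only that items carry a `name` field), the pattern is:

```typescript
interface Named {
  name: string;
}

// Case-insensitive substring filter, mirroring the inline
// `name.toLowerCase().includes(searchQuery.toLowerCase())` expression.
function filterByName<T extends Named>(items: T[], query: string): T[] {
  const normalized = query.toLowerCase();
  return items.filter((item) => item.name.toLowerCase().includes(normalized));
}
```

An empty query matches every item, which is why the components only render the "No … found matching" hint when `searchQuery` is non-empty.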
@@ -119,6 +119,7 @@ export function EditNotifierComponent({
notifier.webhookNotifier = {
webhookUrl: '',
webhookMethod: WebhookMethod.POST,
headers: [],
};
}

@@ -208,8 +209,8 @@ export function EditNotifierComponent({
return (
<div>
{isShowName && (
<div className="mb-1 flex items-center">
<div className="min-w-[130px]">Name</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Name</div>

<Input
value={notifier?.name || ''}
@@ -224,28 +225,30 @@ export function EditNotifierComponent({
</div>
)}

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Type</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Type</div>

<Select
value={notifier?.notifierType}
options={[
{ label: 'Telegram', value: NotifierType.TELEGRAM },
{ label: 'Email', value: NotifierType.EMAIL },
{ label: 'Webhook', value: NotifierType.WEBHOOK },
{ label: 'Slack', value: NotifierType.SLACK },
{ label: 'Discord', value: NotifierType.DISCORD },
{ label: 'Teams', value: NotifierType.TEAMS },
]}
onChange={(value) => {
setNotifierType(value);
setIsUnsaved(true);
}}
size="small"
className="w-full max-w-[250px]"
/>
<div className="flex items-center">
<Select
value={notifier?.notifierType}
options={[
{ label: 'Telegram', value: NotifierType.TELEGRAM },
{ label: 'Email', value: NotifierType.EMAIL },
{ label: 'Webhook', value: NotifierType.WEBHOOK },
{ label: 'Slack', value: NotifierType.SLACK },
{ label: 'Discord', value: NotifierType.DISCORD },
{ label: 'Teams', value: NotifierType.TEAMS },
]}
onChange={(value) => {
setNotifierType(value);
setIsUnsaved(true);
}}
size="small"
className="w-[250px] max-w-[250px]"
/>

<img src={getNotifierLogoFromType(notifier?.notifierType)} className="ml-2 h-4 w-4" />
<img src={getNotifierLogoFromType(notifier?.notifierType)} className="ml-2 h-4 w-4" />
</div>
</div>

<div className="mt-5" />

@@ -11,32 +11,29 @@ interface Props {
export function EditDiscordNotifierComponent({ notifier, setNotifier, setUnsaved }: Props) {
return (
<>
<div className="flex">
<div className="w-[130px] max-w-[130px] min-w-[130px] pr-3">Channel webhook URL</div>

<div className="w-[250px]">
<Input
value={notifier?.discordNotifier?.channelWebhookUrl || ''}
onChange={(e) => {
if (!notifier?.discordNotifier) return;
setNotifier({
...notifier,
discordNotifier: {
...notifier.discordNotifier,
channelWebhookUrl: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full"
placeholder="1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZ"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Channel webhook URL</div>
<Input
value={notifier?.discordNotifier?.channelWebhookUrl || ''}
onChange={(e) => {
if (!notifier?.discordNotifier) return;
setNotifier({
...notifier,
discordNotifier: {
...notifier.discordNotifier,
channelWebhookUrl: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZ"
/>
</div>

<div className="ml-[130px] max-w-[250px]">
<div className="mt-1 text-xs text-gray-500">
<div className="max-w-[250px] sm:ml-[150px]">
<div className="mt-1 text-xs text-gray-500 dark:text-gray-400">
<strong>How to get Discord webhook URL:</strong>
<br />
<br />
@@ -12,34 +12,39 @@ interface Props {
export function EditEmailNotifierComponent({ notifier, setNotifier, setUnsaved }: Props) {
return (
<>
<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Target email</div>
<Input
value={notifier?.emailNotifier?.targetEmail || ''}
onChange={(e) => {
if (!notifier?.emailNotifier) return;
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Target email</div>
<div className="flex items-center">
<Input
value={notifier?.emailNotifier?.targetEmail || ''}
onChange={(e) => {
if (!notifier?.emailNotifier) return;

setNotifier({
...notifier,
emailNotifier: {
...notifier.emailNotifier,
targetEmail: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="example@gmail.com"
/>
setNotifier({
...notifier,
emailNotifier: {
...notifier.emailNotifier,
targetEmail: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="example@gmail.com"
/>

<Tooltip className="cursor-pointer" title="The email where you want to receive the message">
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="The email where you want to receive the message"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">SMTP host</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">SMTP host</div>
<Input
value={notifier?.emailNotifier?.smtpHost || ''}
onChange={(e) => {
@@ -60,8 +65,8 @@ export function EditEmailNotifierComponent({ notifier, setNotifier, setUnsaved }
/>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">SMTP port</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">SMTP port</div>
<Input
type="number"
value={notifier?.emailNotifier?.smtpPort || ''}
@@ -83,8 +88,8 @@ export function EditEmailNotifierComponent({ notifier, setNotifier, setUnsaved }
/>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">SMTP user</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">SMTP user</div>
<Input
value={notifier?.emailNotifier?.smtpUser || ''}
onChange={(e) => {
@@ -105,8 +110,8 @@ export function EditEmailNotifierComponent({ notifier, setNotifier, setUnsaved }
/>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">SMTP password</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">SMTP password</div>
<Input
type="password"
value={notifier?.emailNotifier?.smtpPassword || ''}
@@ -128,33 +133,35 @@ export function EditEmailNotifierComponent({ notifier, setNotifier, setUnsaved }
/>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">From</div>
<Input
value={notifier?.emailNotifier?.from || ''}
onChange={(e) => {
if (!notifier?.emailNotifier) return;
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">From</div>
<div className="flex items-center">
<Input
value={notifier?.emailNotifier?.from || ''}
onChange={(e) => {
if (!notifier?.emailNotifier) return;

setNotifier({
...notifier,
emailNotifier: {
...notifier.emailNotifier,
from: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="example@example.com"
/>
setNotifier({
...notifier,
emailNotifier: {
...notifier.emailNotifier,
from: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="example@example.com"
/>

<Tooltip
className="cursor-pointer"
title="Optional. Email address to use as sender. If empty, will use SMTP user or auto-generate from host"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="Optional. Email address to use as sender. If empty, will use SMTP user or auto-generate from host"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
</>
);
@@ -11,7 +11,7 @@ interface Props {
export function EditSlackNotifierComponent({ notifier, setNotifier, setUnsaved }: Props) {
return (
<>
<div className="mb-1 ml-[130px] max-w-[200px]" style={{ lineHeight: 1 }}>
<div className="mb-1 max-w-[250px] sm:ml-[150px]" style={{ lineHeight: 1 }}>
<a
className="text-xs !text-blue-600"
href="https://postgresus.com/notifiers/slack"
@@ -22,54 +22,48 @@ export function EditSlackNotifierComponent({ notifier, setNotifier, setUnsaved }
</a>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Bot token</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Bot token</div>
<Input
value={notifier?.slackNotifier?.botToken || ''}
onChange={(e) => {
if (!notifier?.slackNotifier) return;

<div className="w-[250px]">
<Input
value={notifier?.slackNotifier?.botToken || ''}
onChange={(e) => {
if (!notifier?.slackNotifier) return;

setNotifier({
...notifier,
slackNotifier: {
...notifier.slackNotifier,
botToken: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full"
placeholder="xoxb-..."
/>
</div>
setNotifier({
...notifier,
slackNotifier: {
...notifier.slackNotifier,
botToken: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="xoxb-..."
/>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Target chat ID</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Target chat ID</div>
<Input
value={notifier?.slackNotifier?.targetChatId || ''}
onChange={(e) => {
if (!notifier?.slackNotifier) return;

<div className="w-[250px]">
<Input
value={notifier?.slackNotifier?.targetChatId || ''}
onChange={(e) => {
if (!notifier?.slackNotifier) return;

setNotifier({
...notifier,
slackNotifier: {
...notifier.slackNotifier,
targetChatId: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full"
placeholder="C1234567890"
/>
</div>
setNotifier({
...notifier,
slackNotifier: {
...notifier.slackNotifier,
targetChatId: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="C1234567890"
/>
</div>
</>
);
@@ -27,7 +27,7 @@ export function EditTeamsNotifierComponent({ notifier, setNotifier, setUnsaved }

return (
<>
<div className="mb-1 ml-[130px] max-w-[200px]" style={{ lineHeight: 1 }}>
<div className="mb-1 max-w-[250px] sm:ml-[150px]" style={{ lineHeight: 1 }}>
<a
className="text-xs !text-blue-600"
href="https://postgresus.com/notifiers/teams"
@@ -38,25 +38,24 @@ export function EditTeamsNotifierComponent({ notifier, setNotifier, setUnsaved }
</a>
</div>

<div className="flex items-center">
<div className="w-[130px] min-w-[130px]">Power Automate URL</div>

<div className="w-[250px]">
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Power Automate URL</div>
<div className="flex items-center">
<Input
value={value}
onChange={onChange}
size="small"
className="w-full"
className="w-full max-w-[250px]"
placeholder="https://prod-00.westeurope.logic.azure.com:443/workflows/....."
/>
</div>

<Tooltip
className="cursor-pointer"
title="HTTP endpoint from your Power Automate flow (When an HTTP request is received)"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="HTTP endpoint from your Power Automate flow (When an HTTP request is received)"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>
</>
);
@@ -27,31 +27,28 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave

return (
<>
<div className="flex items-center">
<div className="w-[130px] min-w-[130px]">Bot token</div>

<div className="w-[250px]">
<Input
value={notifier?.telegramNotifier?.botToken || ''}
onChange={(e) => {
if (!notifier?.telegramNotifier) return;
setNotifier({
...notifier,
telegramNotifier: {
...notifier.telegramNotifier,
botToken: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full"
placeholder="1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZ"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Bot token</div>
<Input
value={notifier?.telegramNotifier?.botToken || ''}
onChange={(e) => {
if (!notifier?.telegramNotifier) return;
setNotifier({
...notifier,
telegramNotifier: {
...notifier.telegramNotifier,
botToken: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full max-w-[250px]"
placeholder="1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZ"
/>
</div>

<div className="mb-1 ml-[130px]">
<div className="mb-1 sm:ml-[150px]">
<a
className="text-xs !text-blue-600"
href="https://www.siteguarding.com/en/how-to-get-telegram-bot-api-token"
@@ -62,10 +59,9 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave
</a>
</div>

<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Target chat ID</div>

<div className="w-[250px]">
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Target chat ID</div>
<div className="flex items-center">
<Input
value={notifier?.telegramNotifier?.targetChatId || ''}
onChange={(e) => {
@@ -81,20 +77,20 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave
setUnsaved();
}}
size="small"
className="w-full"
className="w-full max-w-[250px]"
placeholder="-1001234567890"
/>
</div>

<Tooltip
className="cursor-pointer"
title="The chat where you want to receive the message (it can be your private chat or a group)"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="The chat where you want to receive the message (it can be your private chat or a group)"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

<div className="ml-[130px] max-w-[250px]">
<div className="max-w-[250px] sm:ml-[150px]">
{!isShowHowToGetChatId ? (
<div
className="mt-1 cursor-pointer text-xs text-blue-600"
@@ -103,7 +99,7 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave
How to get Telegram chat ID?
</div>
) : (
<div className="mt-1 text-xs text-gray-500">
<div className="mt-1 text-xs text-gray-500 dark:text-gray-400">
To get your chat ID, message{' '}
<a href="https://t.me/getmyid_bot" target="_blank" rel="noreferrer">
@getmyid_bot
@@ -120,42 +116,42 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave
)}
</div>

<div className="mt-4 mb-1 flex items-center">
<div className="w-[130px] min-w-[130px] break-all">Send to group topic</div>
<div className="mt-4 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Send to group topic</div>
<div className="flex items-center">
<Switch
checked={notifier?.telegramNotifier?.isSendToThreadEnabled || false}
onChange={(checked) => {
if (!notifier?.telegramNotifier) return;

<Switch
checked={notifier?.telegramNotifier?.isSendToThreadEnabled || false}
onChange={(checked) => {
if (!notifier?.telegramNotifier) return;
setNotifier({
...notifier,
telegramNotifier: {
...notifier.telegramNotifier,
isSendToThreadEnabled: checked,
// Clear thread ID if disabling
threadId: checked ? notifier.telegramNotifier.threadId : undefined,
},
});
setUnsaved();
}}
size="small"
/>

setNotifier({
...notifier,
telegramNotifier: {
...notifier.telegramNotifier,
isSendToThreadEnabled: checked,
// Clear thread ID if disabling
threadId: checked ? notifier.telegramNotifier.threadId : undefined,
},
});
setUnsaved();
}}
size="small"
/>

<Tooltip
className="cursor-pointer"
title="Enable this to send messages to a specific thread in a group chat"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="Enable this to send messages to a specific thread in a group chat"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

{notifier?.telegramNotifier?.isSendToThreadEnabled && (
<>
<div className="mb-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Thread ID</div>

<div className="w-[250px]">
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Thread ID</div>
<div className="flex items-center">
<Input
value={notifier?.telegramNotifier?.threadId?.toString() || ''}
onChange={(e) => {
@@ -174,23 +170,23 @@ export function EditTelegramNotifierComponent({ notifier, setNotifier, setUnsave
setUnsaved();
}}
size="small"
className="w-full"
className="w-full max-w-[250px]"
placeholder="3"
type="number"
min="1"
/>
</div>

<Tooltip
className="cursor-pointer"
title="The ID of the thread where messages should be sent"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
<Tooltip
className="cursor-pointer"
title="The ID of the thread where messages should be sent"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>
</div>

<div className="ml-[130px] max-w-[250px]">
<div className="mt-1 text-xs text-gray-500">
<div className="max-w-[250px] sm:ml-[150px]">
<div className="mt-1 text-xs text-gray-500 dark:text-gray-400">
To get the thread ID, go to the thread in your Telegram group, tap on the thread name
at the top, then tap “Thread Info”. Copy the thread link and take the last
number from the URL.
@@ -1,7 +1,8 @@
import { InfoCircleOutlined } from '@ant-design/icons';
import { Input, Select, Tooltip } from 'antd';
import { DeleteOutlined, InfoCircleOutlined, PlusOutlined } from '@ant-design/icons';
import { Button, Input, Select, Tooltip } from 'antd';
import { useMemo } from 'react';

import type { Notifier } from '../../../../../entity/notifiers';
import type { Notifier, WebhookHeader } from '../../../../../entity/notifiers';
import { WebhookMethod } from '../../../../../entity/notifiers/models/webhook/WebhookMethod';

interface Props {
@@ -10,87 +11,241 @@ interface Props {
setUnsaved: () => void;
}

const DEFAULT_BODY_TEMPLATE = `{
"heading": "{{heading}}",
"message": "{{message}}"
}`;

function validateJsonTemplate(template: string): string | null {
if (!template.trim()) {
return null; // Empty is valid (will use default)
}

// Replace placeholders with valid JSON strings before parsing
const testJson = template.replace(/\{\{heading\}\}/g, 'test').replace(/\{\{message\}\}/g, 'test');

try {
JSON.parse(testJson);
return null;
} catch (e) {
if (e instanceof SyntaxError) {
return 'Invalid JSON format';
}
return 'Invalid JSON';
}
}

export function EditWebhookNotifierComponent({ notifier, setNotifier, setUnsaved }: Props) {
const headers = notifier?.webhookNotifier?.headers || [];
const bodyTemplate = notifier?.webhookNotifier?.bodyTemplate || '';

const jsonError = useMemo(() => validateJsonTemplate(bodyTemplate), [bodyTemplate]);

const updateWebhookNotifier = (updates: Partial<typeof notifier.webhookNotifier>) => {
setNotifier({
...notifier,
webhookNotifier: {
...(notifier.webhookNotifier || { webhookUrl: '', webhookMethod: WebhookMethod.POST }),
...updates,
},
});
setUnsaved();
};

const addHeader = () => {
updateWebhookNotifier({
headers: [...headers, { key: '', value: '' }],
});
};

const updateHeader = (index: number, field: 'key' | 'value', value: string) => {
const newHeaders = [...headers];
newHeaders[index] = { ...newHeaders[index], [field]: value };
updateWebhookNotifier({ headers: newHeaders });
};

const removeHeader = (index: number) => {
const newHeaders = headers.filter((_, i) => i !== index);
updateWebhookNotifier({ headers: newHeaders });
};

return (
<>
<div className="flex items-center">
<div className="w-[130px] min-w-[130px]">Webhook URL</div>

<div className="w-[250px]">
<Input
value={notifier?.webhookNotifier?.webhookUrl || ''}
onChange={(e) => {
setNotifier({
...notifier,
webhookNotifier: {
...(notifier.webhookNotifier || { webhookMethod: WebhookMethod.POST }),
webhookUrl: e.target.value.trim(),
},
});
setUnsaved();
}}
size="small"
className="w-full"
placeholder="https://example.com/webhook"
/>
</div>
<div className="mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Webhook URL</div>
<Input
value={notifier?.webhookNotifier?.webhookUrl || ''}
onChange={(e) => {
updateWebhookNotifier({ webhookUrl: e.target.value.trim() });
}}
size="small"
className="w-full max-w-[250px]"
placeholder="https://example.com/webhook"
/>
</div>

<div className="mt-1 flex items-center">
<div className="w-[130px] min-w-[130px]">Method</div>

<div className="w-[250px]">
<div className="mt-1 mb-1 flex w-full flex-col items-start sm:flex-row sm:items-center">
<div className="mb-1 min-w-[150px] sm:mb-0">Method</div>
<div className="flex items-center">
<Select
value={notifier?.webhookNotifier?.webhookMethod || WebhookMethod.POST}
onChange={(value) => {
setNotifier({
...notifier,
webhookNotifier: {
...(notifier.webhookNotifier || { webhookUrl: '' }),
webhookMethod: value,
},
});
setUnsaved();
updateWebhookNotifier({ webhookMethod: value });
}}
size="small"
className="w-full"
className="w-[100px] max-w-[250px]"
options={[
{ value: WebhookMethod.POST, label: 'POST' },
{ value: WebhookMethod.GET, label: 'GET' },
]}
/>
</div>

<Tooltip
className="cursor-pointer"
title="The HTTP method that will be used to call the webhook"
>
<InfoCircleOutlined className="ml-2" style={{ color: 'gray' }} />
</Tooltip>
</div>

<div className="mt-3 mb-1 flex w-full flex-col items-start">
<div className="mb-1 flex items-center">
<span className="min-w-[150px]">
Custom headers{' '}
<Tooltip title="Add custom HTTP headers to the webhook request (e.g., Authorization, X-API-Key)">
<InfoCircleOutlined className="ml-1" style={{ color: 'gray' }} />
</Tooltip>
</span>
</div>

<div className="w-full max-w-[500px]">
{headers.map((header: WebhookHeader, index: number) => (
<div key={index} className="mb-1 flex items-center gap-2">
<Input
value={header.key}
onChange={(e) => updateHeader(index, 'key', e.target.value)}
size="small"
style={{ width: 150, flexShrink: 0 }}
placeholder="Header name"
/>
<Input
value={header.value}
onChange={(e) => updateHeader(index, 'value', e.target.value)}
size="small"
style={{ flex: 1, minWidth: 0 }}
placeholder="Header value"
/>
<Button
type="text"
danger
size="small"
icon={<DeleteOutlined />}
onClick={() => removeHeader(index)}
/>
</div>
))}

<Button
type="dashed"
size="small"
icon={<PlusOutlined />}
onClick={addHeader}
className="mt-1"
>
Add header
</Button>
</div>
</div>

{notifier?.webhookNotifier?.webhookMethod === WebhookMethod.POST && (
<div className="mt-3 mb-1 flex w-full flex-col items-start">
<div className="mb-1 flex items-center">
<span className="min-w-[150px]">Body template </span>
</div>

<div className="mb-2 text-xs text-gray-500 dark:text-gray-400">
<span className="mr-4">
<code className="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-700">
{'{{heading}}'}
</code>{' '}
— notification title
</span>
<span>
<code className="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-700">
{'{{message}}'}
</code>{' '}
— notification message
</span>
</div>

<Input.TextArea
value={bodyTemplate}
onChange={(e) => {
updateWebhookNotifier({ bodyTemplate: e.target.value });
}}
className="w-full max-w-[500px] font-mono text-xs"
rows={6}
placeholder={DEFAULT_BODY_TEMPLATE}
status={jsonError ? 'error' : undefined}
/>
{jsonError && <div className="mt-1 text-xs text-red-500">{jsonError}</div>}
</div>
)}

{notifier?.webhookNotifier?.webhookUrl && (
<div className="mt-3">
<div className="mb-1">Example request</div>
<div className="mt-4">
<div className="mb-1 font-medium">Example request</div>

{notifier?.webhookNotifier?.webhookMethod === WebhookMethod.GET && (
<div className="rounded bg-gray-100 p-2 px-3 text-sm break-all">
GET {notifier?.webhookNotifier?.webhookUrl}?heading=✅ Backup completed for
database&message=Backup completed successfully in 2m 17s.\nCompressed backup size:
1.7GB
<div className="rounded bg-gray-100 p-2 px-3 text-sm break-all dark:bg-gray-800">
<div className="font-semibold text-blue-600 dark:text-blue-400">GET</div>
<div className="mt-1">
{notifier?.webhookNotifier?.webhookUrl}
{
'?heading=✅ Backup completed for database "my-database" (workspace "Production")&message=Backup completed successfully in 1m 23s.%0ACompressed backup size: 256.00 MB'
}
</div>
{headers.length > 0 && (
<div className="mt-2 border-t border-gray-200 pt-2 dark:border-gray-600">
<div className="text-xs font-semibold text-gray-500 dark:text-gray-400">
Headers:
</div>
{headers
.filter((h) => h.key)
.map((h, i) => (
<div key={i} className="text-xs">
{h.key}: {h.value || '(empty)'}
</div>
))}
</div>
)}
</div>
)}

{notifier?.webhookNotifier?.webhookMethod === WebhookMethod.POST && (
<div className="rounded bg-gray-100 p-2 px-3 font-mono text-sm break-all whitespace-pre-line">
{`POST ${notifier?.webhookNotifier?.webhookUrl}
Content-Type: application/json

{
"heading": "✅ Backup completed for database",
"message": "Backup completed successfully in 2m 17s.\\nCompressed backup size: 1.7GB"
}
`}
<div className="rounded bg-gray-100 p-2 px-3 font-mono text-sm break-words whitespace-pre-wrap dark:bg-gray-800">
<div className="font-semibold text-blue-600 dark:text-blue-400">
POST {notifier?.webhookNotifier?.webhookUrl}
</div>
<div className="mt-1 text-gray-600 dark:text-gray-400">
{headers.find((h) => h.key.toLowerCase() === 'content-type')
? ''
: 'Content-Type: application/json'}
{headers
.filter((h) => h.key)
.map((h) => `\n${h.key}: ${h.value}`)
.join('')}
</div>
<div className="mt-2 break-words whitespace-pre-wrap">
{notifier?.webhookNotifier?.bodyTemplate
? notifier.webhookNotifier.bodyTemplate
.replace(
'{{heading}}',
'✅ Backup completed for database "my-database" (workspace "Production")',
)
.replace(
'{{message}}',
'Backup completed successfully in 1m 23s.\\nCompressed backup size: 256.00 MB',
)
: `{
"heading": "✅ Backup completed for database "my-database" (workspace "My workspace")",
"message": "Backup completed successfully in 1m 23s. Compressed backup size: 256.00 MB"
}`}
</div>
</div>
)}
</div>
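The webhook hunk above introduces `validateJsonTemplate`, which checks the user's body template by swapping the `{{heading}}` and `{{message}}` placeholders for dummy strings and then letting `JSON.parse` decide whether what remains is valid JSON. A minimal standalone sketch of that technique (the function body and error strings mirror the diff; the usage lines at the bottom are illustrative):

```typescript
// Validate a webhook body template: substitute the placeholders with
// dummy values, then try to parse the result as JSON.
function validateJsonTemplate(template: string): string | null {
  if (!template.trim()) {
    return null; // an empty template is valid (the default body is used)
  }
  const testJson = template
    .replace(/\{\{heading\}\}/g, 'test')
    .replace(/\{\{message\}\}/g, 'test');
  try {
    JSON.parse(testJson);
    return null; // parses cleanly, so there is no error to report
  } catch (e) {
    return e instanceof SyntaxError ? 'Invalid JSON format' : 'Invalid JSON';
  }
}

// The default template from the diff passes validation,
// while a template with a trailing comma is rejected.
console.log(validateJsonTemplate('{ "heading": "{{heading}}", "message": "{{message}}" }'));
console.log(validateJsonTemplate('{ "heading": "{{heading}}", }'));
```

Substituting before parsing matters because the raw placeholders (`{{heading}}` unquoted inside a value would be invalid) should not themselves fail validation; only the surrounding JSON structure is being checked.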
@@ -10,7 +10,7 @@ export function ShowDiscordNotifierComponent({ notifier }: Props) {
|
||||
<div className="flex">
|
||||
<div className="max-w-[110px] min-w-[110px] pr-3">Channel webhook URL</div>
|
||||
|
||||
<div className="w-[250px]">{notifier.webhookNotifier?.webhookUrl.slice(0, 10)}*******</div>
|
||||
<div>{notifier.webhookNotifier?.webhookUrl.slice(0, 10)}*******</div>
|
||||
</div>
|
||||
</>
|
||||
);
|
||||
|
||||
@@ -10,7 +10,7 @@ export function ShowSlackNotifierComponent({ notifier }: Props) {
|
||||
<div className="flex items-center">
|
||||
<div className="min-w-[110px]">Bot token</div>
|
||||
|
||||
<div className="w-[250px]">*********</div>
|
||||
<div>*********</div>
|
||||
</div>
|
||||
|
||||
<div className="mb-1 flex items-center">
|
||||
|
||||
@@ -18,7 +18,7 @@ export function ShowTeamsNotifierComponent({ notifier }: Props) {
|
||||
<>
|
||||
<div className="flex items-center">
|
||||
<div className="min-w-[110px]">Power Automate URL: </div>
|
||||
<div className="w-[250px] break-all">
|
||||
<div className="w-[50px] break-all md:w-[250px]">
|
||||
{url ? (
|
||||
<>
|
||||
<span title={url}>{display}</span>
|
||||
|
||||
@@ -10,7 +10,7 @@ export function ShowTelegramNotifierComponent({ notifier }: Props) {
<div className="flex items-center">
<div className="min-w-[110px]">Bot token</div>

- <div className="w-[250px]">*********</div>
+ <div>*********</div>
</div>

<div className="mb-1 flex items-center">

@@ -1,22 +1,50 @@
- import type { Notifier } from '../../../../../entity/notifiers';
+ import type { Notifier, WebhookHeader } from '../../../../../entity/notifiers';
+ import { WebhookMethod } from '../../../../../entity/notifiers';

interface Props {
notifier: Notifier;
}

export function ShowWebhookNotifierComponent({ notifier }: Props) {
+ const headers = notifier?.webhookNotifier?.headers || [];
+ const hasHeaders = headers.filter((h: WebhookHeader) => h.key).length > 0;
+
return (
<>
<div className="flex items-center">
<div className="min-w-[110px]">Webhook URL</div>

- <div className="w-[250px]">{notifier?.webhookNotifier?.webhookUrl || '-'}</div>
+ <div className="max-w-[350px] truncate">{notifier?.webhookNotifier?.webhookUrl || '-'}</div>
</div>

<div className="mt-1 mb-1 flex items-center">
<div className="min-w-[110px]">Method</div>
<div>{notifier?.webhookNotifier?.webhookMethod || '-'}</div>
</div>

+ {hasHeaders && (
+ <div className="mt-1 mb-1 flex items-start">
+ <div className="min-w-[110px]">Headers</div>
+ <div className="flex flex-col text-sm">
+ {headers
+ .filter((h: WebhookHeader) => h.key)
+ .map((h: WebhookHeader, i: number) => (
+ <div key={i} className="text-gray-600">
+ <span className="font-medium">{h.key}:</span> {h.value || '(empty)'}
+ </div>
+ ))}
+ </div>
+ </div>
+ )}
+
+ {notifier?.webhookNotifier?.webhookMethod === WebhookMethod.POST &&
+ notifier?.webhookNotifier?.bodyTemplate && (
+ <div className="mt-1 mb-1 flex items-start">
+ <div className="min-w-[110px]">Body Template</div>
+ <div className="max-w-[350px] rounded bg-gray-50 p-2 font-mono text-xs whitespace-pre-wrap">
+ {notifier.webhookNotifier.bodyTemplate}
+ </div>
+ </div>
+ )}
</>
);
}

@@ -1,6 +1,6 @@
- import { ExclamationCircleOutlined, SyncOutlined } from '@ant-design/icons';
+ import { CopyOutlined, ExclamationCircleOutlined, SyncOutlined } from '@ant-design/icons';
import { CheckCircleOutlined } from '@ant-design/icons';
- import { Button, Modal, Spin, Tooltip } from 'antd';
+ import { App, Button, Modal, Spin, Tooltip } from 'antd';
import dayjs from 'dayjs';
import { useEffect, useRef, useState } from 'react';

@@ -16,11 +16,14 @@ interface Props {
}

export const RestoresComponent = ({ database, backup }: Props) => {
+ const { message } = App.useApp();
+
const [editingDatabase, setEditingDatabase] = useState<Database>({
...database,
postgresql: database.postgresql
? ({
...database.postgresql,
username: undefined,
host: undefined,
port: undefined,
password: undefined,

@@ -208,7 +211,7 @@ export const RestoresComponent = ({ database, backup }: Props) => {
<div className="w-[75px] min-w-[75px]">Duration</div>
<div>
<div>{duration}</div>
- <div className="mt-2 text-xs text-gray-500">
+ <div className="mt-2 text-xs text-gray-500 dark:text-gray-400">
Expected restoration time usually 3x-5x longer than the backup duration
(sometimes less, sometimes more depending on data type)
<br />

@@ -231,9 +234,21 @@ export const RestoresComponent = ({ database, backup }: Props) => {
title="Restore error details"
open={!!showingRestoreError}
onCancel={() => setShowingRestoreError(undefined)}
- footer={null}
+ footer={
+ <Button
+ icon={<CopyOutlined />}
+ onClick={() => {
+ navigator.clipboard.writeText(showingRestoreError.failMessage || '');
+ message.success('Error message copied to clipboard');
+ }}
+ >
+ Copy
+ </Button>
+ }
>
- <div className="text-sm">{showingRestoreError.failMessage}</div>
+ <div className="overflow-y-auto text-sm whitespace-pre-wrap" style={{ height: '400px' }}>
+ {showingRestoreError.failMessage}
+ </div>
</Modal>
)}
</div>

@@ -7,6 +7,7 @@ import { useCallback, useEffect, useRef, useState } from 'react';
import { auditLogApi } from '../../../entity/audit-logs/api/auditLogApi';
import type { AuditLog } from '../../../entity/audit-logs/model/AuditLog';
import type { GetAuditLogsRequest } from '../../../entity/audit-logs/model/GetAuditLogsRequest';
+ import { useIsMobile } from '../../../shared/hooks';
import { getUserTimeFormat } from '../../../shared/time';

interface Props {

@@ -15,6 +16,7 @@

export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Props) {
const { message } = App.useApp();
+ const isMobile = useIsMobile();
const [auditLogs, setAuditLogs] = useState<AuditLog[]>([]);
const [isLoading, setIsLoading] = useState(true);
const [isLoadingMore, setIsLoadingMore] = useState(false);

@@ -103,7 +105,7 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
render: (_, record: AuditLog) => {
if (!record.userEmail && !record.userName) {
return (
- <span className="inline-block rounded-full bg-gray-100 px-1.5 py-0.5 text-xs font-medium text-gray-600">
+ <span className="inline-block rounded-full bg-gray-100 px-1.5 py-0.5 text-xs font-medium text-gray-600 dark:bg-gray-700 dark:text-gray-300">
System
</span>
);

@@ -114,7 +116,7 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
: record.userEmail;

return (
- <span className="inline-block rounded-full bg-blue-100 px-1.5 py-0.5 text-xs font-medium text-blue-800">
+ <span className="inline-block rounded-full bg-blue-100 px-1.5 py-0.5 text-xs font-medium text-blue-800 dark:bg-blue-900 dark:text-blue-200">
{displayText}
</span>
);

@@ -124,7 +126,9 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
title: 'Message',
dataIndex: 'message',
key: 'message',
- render: (message: string) => <span className="text-xs text-gray-900">{message}</span>,
+ render: (message: string) => (
+ <span className="text-xs text-gray-900 dark:text-gray-100">{message}</span>
+ ),
},
{
title: 'Workspace',

@@ -134,7 +138,9 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
render: (workspaceId: string | undefined) => (
<span
className={`inline-block rounded-full px-1.5 py-0.5 text-xs font-medium ${
- workspaceId ? 'bg-blue-100 text-blue-800' : 'bg-gray-100 text-gray-600'
+ workspaceId
+ ? 'bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200'
+ : 'bg-gray-100 text-gray-600 dark:bg-gray-700 dark:text-gray-300'
}`}
>
{workspaceId || '-'}

@@ -150,7 +156,7 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
const date = dayjs(createdAt);
const timeFormat = getUserTimeFormat();
return (
- <span className="text-xs text-gray-700">
+ <span className="text-xs text-gray-700 dark:text-gray-300">
{`${date.format(timeFormat.format)} (${date.fromNow()})`}
</span>
);

@@ -158,11 +164,57 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
},
];

+ const renderAuditLogCard = (log: AuditLog) => {
+ const date = dayjs(log.createdAt);
+ const timeFormat = getUserTimeFormat();
+
+ const getUserDisplay = () => {
+ if (!log.userEmail && !log.userName) {
+ return (
+ <span className="inline-block rounded-full bg-gray-100 px-1.5 py-0.5 text-xs font-medium text-gray-600 dark:bg-gray-700 dark:text-gray-300">
+ System
+ </span>
+ );
+ }
+
+ const displayText = log.userName ? `${log.userName} (${log.userEmail})` : log.userEmail;
+
+ return (
+ <span className="inline-block rounded-full bg-blue-100 px-1.5 py-0.5 text-xs font-medium text-blue-800 dark:bg-blue-900 dark:text-blue-200">
+ {displayText}
+ </span>
+ );
+ };
+
+ return (
+ <div
+ key={log.id}
+ className="mb-3 rounded-lg border border-gray-200 bg-white p-3 shadow-sm dark:border-gray-700 dark:bg-gray-800"
+ >
+ <div className="flex items-start justify-between">
+ <div className="flex-1">{getUserDisplay()}</div>
+ <div className="text-right text-xs text-gray-500 dark:text-gray-400">
+ <div>{date.format(timeFormat.format)}</div>
+ <div className="text-gray-400 dark:text-gray-500">{date.fromNow()}</div>
+ </div>
+ </div>
+ <div className="mt-2 text-sm text-gray-900 dark:text-gray-100">{log.message}</div>
+ {log.workspaceName && (
+ <div className="mt-2">
+ <span className="inline-block rounded-full bg-blue-100 px-1.5 py-0.5 text-xs font-medium text-blue-800 dark:bg-blue-900 dark:text-blue-200">
+ {log.workspaceName}
+ </span>
+ </div>
+ )}
+ </div>
+ );
+ };
+
return (
<div className="max-w-[1200px]">
<div className="mb-4 flex items-center justify-between">
- <h2 className="text-xl font-bold">Audit Logs</h2>
- <div className="text-sm text-gray-500">
+ <h2 className="text-xl font-bold dark:text-white">Audit Logs</h2>
+ <div className="text-sm text-gray-500 dark:text-gray-400">
{isLoading ? (
<Spin indicator={<LoadingOutlined spin />} />
) : (

@@ -175,27 +227,37 @@ export function AuditLogsComponent({ scrollContainerRef: externalScrollRef }: Pr
<div className="flex h-64 items-center justify-center">
<Spin indicator={<LoadingOutlined spin />} size="large" />
</div>
) : auditLogs.length === 0 ? (
<div className="flex h-32 items-center justify-center text-gray-500 dark:text-gray-400">
No audit logs found.
</div>
) : (
<>
- <Table
- columns={columns}
- dataSource={auditLogs}
- pagination={false}
- rowKey="id"
- size="small"
- className="mb-4"
- />
+ {isMobile ? (
+ <div>{auditLogs.map(renderAuditLogCard)}</div>
+ ) : (
+ <Table
+ columns={columns}
+ dataSource={auditLogs}
+ pagination={false}
+ rowKey="id"
+ size="small"
+ className="mb-4"
+ />
+ )}

{isLoadingMore && (
<div className="flex justify-center py-4">
<Spin indicator={<LoadingOutlined spin />} />
- <span className="ml-2 text-sm text-gray-500">Loading more logs...</span>
+ <span className="ml-2 text-sm text-gray-500 dark:text-gray-400">
+ Loading more logs...
+ </span>
</div>
)}

{!hasMore && auditLogs.length > 0 && (
- <div className="py-4 text-center text-sm text-gray-500">
- All logs loaded ({total} total)
+ <div className="py-4 text-center text-sm text-gray-500 dark:text-gray-400">
+ All logs loaded ({auditLogs.length} total)
</div>
)}
</>

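The user-badge rule above now appears twice, once in the table column renderer and once in the mobile card renderer. The shared logic reduces to a small pure function; a sketch under the field names shown in the diff (`userDisplayText` itself is hypothetical, not part of this codebase):

```typescript
// Hypothetical extraction of the duplicated user-label rule above:
// no name and no email -> "System"; name present -> "Name (email)";
// otherwise just the email.
interface AuditLogUserFields {
  userName?: string;
  userEmail?: string;
}

function userDisplayText(log: AuditLogUserFields): string {
  if (!log.userEmail && !log.userName) {
    return 'System';
  }
  return log.userName ? `${log.userName} (${log.userEmail})` : (log.userEmail ?? '');
}
```

Factoring it out this way would keep the table and card views from drifting apart as the display rule evolves.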
Some files were not shown because too many files have changed in this diff