Mirror of https://github.com/databasus/databasus.git (synced 2026-04-06 00:32:03 +02:00)

Compare commits

53 Commits
75c88bac50
ff1b6536bf
06197f986d
fe72e9e0a6
640cceadbd
80e573fcb3
35498d83f1
77ae8d1ac7
2f20845b3d
a3d3df4093
8db83d40d5
065ded37bd
71e801debb
ffd4e3a27b
d2a9085591
6f0152b60c
7007236f2f
db55cad310
25bd096c81
7e98dd578c
ba37b30e83
34b3f822e3
14700130b7
de11ab8d8a
06282bb435
a3b263bbac
a956dccf7c
ce9fa18d58
281e185f21
bb5b0064ea
da95bbb178
cfe5993831
fa0e3d1ce2
d07085c462
c89c1f9654
6cfc0ca79b
5d27123bd7
79ca374bb6
b3f1a6f7e5
d521e2abc6
82eca7501b
51866437fd
244a56d1bb
95c833b619
878fad5747
6ff3096695
b4b514c2d5
da0fec6624
408675023a
0bc93389cc
c8e6aea6e1
981ad21471
177a9c782c
.github/workflows/ci-release.yml (vendored): 5 changes
@@ -137,6 +137,8 @@ jobs:
# testing S3
TEST_MINIO_PORT=9000
TEST_MINIO_CONSOLE_PORT=9001
# testing Azure Blob
TEST_AZURITE_BLOB_PORT=10000
# testing NAS
TEST_NAS_PORT=7006
# testing Telegram
@@ -165,6 +167,9 @@ jobs:
# Wait for MinIO
timeout 60 bash -c 'until nc -z localhost 9000; do sleep 2; done'

# Wait for Azurite
timeout 60 bash -c 'until nc -z localhost 10000; do sleep 2; done'

- name: Create data and temp directories
run: |
# Create directories that are used for backups and restore
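The two `timeout 60 bash -c 'until nc -z localhost <port>; do sleep 2; done'` steps above simply poll a TCP port until the MinIO and Azurite test containers accept connections. For reference, a minimal Go sketch of the same readiness check; the helper name, the 60-second budget and the hard-coded ports are illustrative, not taken from the repository:

```go
// Minimal sketch of the CI readiness loop above, in Go (illustrative only).
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls a TCP address until it accepts a connection or the timeout expires.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second) // same 2-second backoff as the workflow's shell loop
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// Ports mirror TEST_MINIO_PORT=9000 and TEST_AZURITE_BLOB_PORT=10000 from the workflow.
	for _, addr := range []string{"localhost:9000", "localhost:10000"} {
		if err := waitForPort(addr, 60*time.Second); err != nil {
			panic(err)
		}
	}
}
```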
.gitignore (vendored): 3 changes
@@ -4,4 +4,5 @@ postgresus-data/
pgdata/
docker-compose.yml
node_modules/
.idea
.idea
/articles
README.md: 48 changes
@@ -25,6 +25,8 @@
<a href="https://postgresus.com" target="_blank"><strong>🌐 Postgresus website</strong></a>
</p>

<img src="assets/dashboard-dark.svg" alt="Postgresus Dark Dashboard" width="800" style="margin-bottom: 10px;"/>

<img src="assets/dashboard.svg" alt="Postgresus Dashboard" width="800"/>

@@ -40,13 +42,13 @@
- **Precise timing**: run backups at specific times (e.g., 4 AM during low traffic)
- **Smart compression**: 4-8x space savings with balanced compression (~20% overhead)

### 🗄️ **Multiple Storage Destinations** <a href="https://postgresus.com/storages">(docs)</a>
### 🗄️ **Multiple Storage Destinations** <a href="https://postgresus.com/storages">(view supported)</a>

- **Local storage**: Keep backups on your VPS/server
- **Cloud storage**: S3, Cloudflare R2, Google Drive, NAS, Dropbox and more
- **Secure**: All data stays under your control

### 📱 **Smart Notifications** <a href="https://postgresus.com/notifiers">(docs)</a>
### 📱 **Smart Notifications** <a href="https://postgresus.com/notifiers">(view supported)</a>

- **Multiple channels**: Email, Telegram, Slack, Discord, webhooks
- **Real-time updates**: Success and failure notifications
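To illustrate the storage-destination list above: a hedged sketch of pushing a finished backup archive to any S3-compatible endpoint with the `github.com/minio/minio-go/v7` client that already appears in `go.mod`. The endpoint, credentials, bucket and object names are placeholders; this is not the project's actual storage code.

```go
// Illustrative upload of a backup archive to S3-compatible storage (MinIO, S3, R2, ...).
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and credentials.
	client, err := minio.New("s3.example.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Upload a compressed dump produced by a backup run.
	info, err := client.FPutObject(context.Background(),
		"postgresus-backups",          // bucket (placeholder)
		"daily/db-2026-04-06.dump.gz", // object key (placeholder)
		"/tmp/db-2026-04-06.dump.gz",  // local file (placeholder)
		minio.PutObjectOptions{ContentType: "application/gzip"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes", info.Size)
}
```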
@@ -58,6 +60,13 @@
- **SSL support**: Secure connections available
- **Easy restoration**: One-click restore from any backup

### 🔒 **Enterprise-grade security** <a href="https://postgresus.com/security">(docs)</a>

- **AES-256-GCM encryption**: Enterprise-grade protection for backup files
- **Zero-trust storage**: Backups are encrypted and useless to attackers, so you can keep them in shared storage like S3, Azure Blob Storage, etc.
- **Encryption for secrets**: Any sensitive data is encrypted and never exposed, even in logs or error messages
- **Read-only user**: Postgresus uses a read-only user for backups by default and never stores anything that can change your data
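For context on the AES-256-GCM bullet above, a minimal standard-library sketch of sealing a buffer with AES-256-GCM; it illustrates the cipher mode only and is not the Postgresus implementation (key handling, chunking and file formats are out of scope here):

```go
// Minimal AES-256-GCM sealing sketch (illustrative, not project code).
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encrypt seals plaintext with a 32-byte key and prepends the random nonce.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, err := encrypt(key, []byte("backup chunk"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes sealed (nonce + ciphertext + tag)\n", len(sealed))
}
```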
### 👥 **Suitable for Teams** <a href="https://postgresus.com/access-management">(docs)</a>

- **Workspaces**: Group databases, notifiers and storages for different projects or teams
@@ -65,6 +74,12 @@
- **Audit logs**: Track all system activities and changes made by users
- **User roles**: Assign viewer, member, admin or owner roles within workspaces

### 🎨 **UX-Friendly**

- **Designer-polished UI**: Clean, intuitive interface crafted with attention to detail
- **Dark & light themes**: Choose the look that suits your workflow
- **Mobile adaptive**: Check your backups from anywhere on any device

### 🐳 **Self-Hosted & Secure**

- **Docker-based**: Easy deployment and management
@@ -142,6 +157,35 @@ Then run:
docker compose up -d
```

### Option 4: Kubernetes with Helm

For Kubernetes deployments, use the official Helm chart.

**Step 1:** Clone the repository:

```bash
git clone https://github.com/RostislavDugin/postgresus.git
cd postgresus
```

**Step 2:** Install with Helm:

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace
```

**Step 3:** Get the external IP:

```bash
kubectl get svc -n postgresus
```

Access Postgresus at `http://<EXTERNAL-IP>` (port 80).

To customize the installation (e.g., storage size, NodePort instead of LoadBalancer), see the [Helm chart README](deploy/helm/README.md) for all configuration options.

The default configuration uses a LoadBalancer service, but predefined values for Ingress and HTTPRoute are provided as well.

---

## 🚀 Usage
assets/dashboard-dark.svg (new file): 1 change
File diff suppressed because one or more lines are too long
After: Size 537 KiB
SVG markup diff (hunks @@ -15,12 +15,12 @@, @@ -786,21 +786,23 @@, @@ -866,5 +868,8 @@); inline vector path data not reproduced.
Before: Size 914 KiB | After: Size 913 KiB
File diff suppressed because one or more lines are too long
Before: Size 23 KiB | After: Size 34 KiB
@@ -31,4 +31,6 @@ TEST_MINIO_CONSOLE_PORT=9001
TEST_NAS_PORT=7006
# testing Telegram
TEST_TELEGRAM_BOT_TOKEN=
TEST_TELEGRAM_CHAT_ID=
TEST_TELEGRAM_CHAT_ID=
# testing Azure Blob Storage
TEST_AZURITE_BLOB_PORT=10000
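A hypothetical helper (not taken from the repository) showing how test code might read the `TEST_*` variables above, falling back to the defaults listed in the file:

```go
// Hypothetical test-config helper for the TEST_* variables above (illustrative only).
package main

import (
	"fmt"
	"os"
)

// envOr returns the environment value if set, otherwise the given fallback.
func envOr(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	minioPort := envOr("TEST_MINIO_PORT", "9000")
	azuritePort := envOr("TEST_AZURITE_BLOB_PORT", "10000")
	fmt.Printf("MinIO on :%s, Azurite Blob on :%s\n", minioPort, azuritePort)
}
```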
@@ -18,6 +18,7 @@ import (
backups_config "postgresus-backend/internal/features/backups/config"
"postgresus-backend/internal/features/databases"
"postgresus-backend/internal/features/disk"
"postgresus-backend/internal/features/encryption/secrets"
healthcheck_attempt "postgresus-backend/internal/features/healthcheck/attempt"
healthcheck_config "postgresus-backend/internal/features/healthcheck/config"
"postgresus-backend/internal/features/notifiers"
@@ -64,6 +65,12 @@ func main() {
os.Exit(1)
}

err = secrets.GetSecretKeyService().MigrateKeyFromDbToFileIfExist()
if err != nil {
log.Error("Failed to migrate secret key from database to file", "error", err)
os.Exit(1)
}

err = users_services.GetUserService().CreateInitialAdmin()
if err != nil {
log.Error("Failed to create initial admin", "error", err)
@@ -31,6 +31,14 @@ services:
container_name: test-minio
command: server /data --console-address ":9001"

# Test Azurite container
test-azurite:
image: mcr.microsoft.com/azure-storage/azurite
ports:
- "${TEST_AZURITE_BLOB_PORT:-10000}:10000"
container_name: test-azurite
command: azurite-blob --blobHost 0.0.0.0

# Test PostgreSQL containers
test-postgres-12:
image: postgres:12
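Tying the `test-azurite` service above to the `azblob` dependency added in `go.mod` below: a hedged sketch of uploading a blob to a local Azurite instance. The account name and key are Azurite's published development defaults; the container and blob names are placeholders, and this is not the project's test code.

```go
// Illustrative smoke test against the local Azurite container (not project code).
package main

import (
	"context"
	"log"
	"strings"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Azurite's well-known development account and key, blob endpoint on port 10000.
	connStr := "DefaultEndpointsProtocol=http;" +
		"AccountName=devstoreaccount1;" +
		"AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;" +
		"BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"

	client, err := azblob.NewClientFromConnectionString(connStr, nil)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	// The container may already exist between runs; log and continue in that case.
	if _, err := client.CreateContainer(ctx, "test-backups", nil); err != nil {
		log.Printf("create container: %v", err)
	}
	if _, err := client.UploadStream(ctx, "test-backups", "smoke.txt",
		strings.NewReader("hello azurite"), nil); err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded smoke.txt")
}
```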
@@ -3,6 +3,8 @@ module postgresus-backend
go 1.23.3

require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/gin-contrib/cors v1.7.5
github.com/gin-contrib/gzip v1.2.3
github.com/gin-gonic/gin v1.10.0
@@ -15,16 +17,18 @@ require (
github.com/lib/pq v1.10.9
github.com/minio/minio-go/v7 v7.0.92
github.com/shirou/gopsutil/v4 v4.25.5
github.com/stretchr/testify v1.10.0
github.com/stretchr/testify v1.11.1
github.com/swaggo/files v1.0.1
github.com/swaggo/gin-swagger v1.6.0
github.com/swaggo/swag v1.16.4
golang.org/x/crypto v0.39.0
golang.org/x/crypto v0.41.0
golang.org/x/time v0.12.0
gorm.io/driver/postgres v1.5.11
gorm.io/gorm v1.26.1
)

require github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect

require (
cloud.google.com/go/auth v0.16.2 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
@@ -99,12 +103,12 @@ require (
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
golang.org/x/arch v0.17.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/net v0.43.0 // indirect
golang.org/x/oauth2 v0.30.0
golang.org/x/sync v0.15.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.26.0 // indirect
golang.org/x/tools v0.33.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
golang.org/x/tools v0.35.0 // indirect
google.golang.org/api v0.239.0
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
@@ -6,6 +6,18 @@ cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeO
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0 h1:KpMC6LFL7mqpExyMC9jVOYRiVhLmamjeZfRsUpB7l4s=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0/go.mod h1:J7MUC/wtRpfGVbQ5sIItY5/FuVWmvzlY21WAOfQnq/I=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
@@ -80,6 +92,8 @@ github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
@@ -131,6 +145,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
@@ -159,6 +175,8 @@ github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c h1:dAMKvw0MlJT1GshSTtih8C2gDs04w8dReiOGXrGLNoY=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
@@ -180,8 +198,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/swaggo/files v1.0.1 h1:J1bVJ4XHZNq0I46UU90611i9/YzdrF7x92oX1ig5IdE=
github.com/swaggo/files v1.0.1/go.mod h1:0qXmMNH6sXNf+73t65aKeB+ApmgxdnkQzVTAj2uaMUg=
github.com/swaggo/gin-swagger v1.6.0 h1:y8sxvQ3E20/RCyrXeFfg60r6H0Z+SwpTjMYsMm+zy8M=
@@ -216,25 +234,25 @@ golang.org/x/arch v0.17.0/go.mod h1:bdwinDaKcfZUGpH09BB7ZmOfhalA8lQdzl62l8gGWsk=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -247,8 +265,8 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -257,15 +275,15 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
|
||||
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
|
||||
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.239.0 h1:2hZKUnFZEy81eugPs4e2XzIJ5SOwQg0G82bpXD65Puo=
|
||||
google.golang.org/api v0.239.0/go.mod h1:cOVEm2TpdAGHL2z+UwyS+kmlGr3bVWQQ6sYEqkKje50=
|
||||
|
||||
@@ -26,8 +26,9 @@ type EnvVariables struct {
	EnvMode              env_utils.EnvMode `env:"ENV_MODE" required:"true"`
	PostgresesInstallDir string            `env:"POSTGRES_INSTALL_DIR"`

	DataFolder    string
	TempFolder    string
	SecretKeyPath string

	TestGoogleDriveClientID     string `env:"TEST_GOOGLE_DRIVE_CLIENT_ID"`
	TestGoogleDriveClientSecret string `env:"TEST_GOOGLE_DRIVE_CLIENT_SECRET"`
@@ -44,6 +45,8 @@ type EnvVariables struct {
	TestMinioPort        string `env:"TEST_MINIO_PORT"`
	TestMinioConsolePort string `env:"TEST_MINIO_CONSOLE_PORT"`

	TestAzuriteBlobPort string `env:"TEST_AZURITE_BLOB_PORT"`

	TestNASPort string `env:"TEST_NAS_PORT"`

	// oauth
@@ -144,6 +147,7 @@ func loadEnvVariables() {
	// (projectRoot/postgresus-data -> /postgresus-data)
	env.DataFolder = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "backups")
	env.TempFolder = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "temp")
	env.SecretKeyPath = filepath.Join(filepath.Dir(backendRoot), "postgresus-data", "secret.key")

	if env.IsTesting {
		if env.TestPostgres12Port == "" {
@@ -184,6 +188,11 @@ func loadEnvVariables() {
			os.Exit(1)
		}

		if env.TestAzuriteBlobPort == "" {
			log.Error("TEST_AZURITE_BLOB_PORT is empty")
			os.Exit(1)
		}

		if env.TestNASPort == "" {
			log.Error("TEST_NAS_PORT is empty")
			os.Exit(1)
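Azurite is the local Azure Blob Storage emulator, so this variable plays the same role for the Azure Blob tests that TEST_MINIO_PORT plays for the S3 tests. A minimal sketch of how a storage test might turn the port into a blob endpoint; the helper name is hypothetical and `devstoreaccount1` is only Azurite's well-known default development account, neither appears in this diff:

```go
// hypothetical test helper: build the local Azurite blob endpoint from the env var
package storages_test

import (
	"fmt"
	"os"
)

func azuriteBlobEndpoint() string {
	port := os.Getenv("TEST_AZURITE_BLOB_PORT") // e.g. "10000"
	// "devstoreaccount1" is Azurite's default development account name
	return fmt.Sprintf("http://127.0.0.1:%s/devstoreaccount1", port)
}
```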
@@ -5,6 +5,7 @@ import (
|
||||
"postgresus-backend/internal/config"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/period"
|
||||
"time"
|
||||
)
|
||||
@@ -131,7 +132,8 @@ func (s *BackupBackgroundService) cleanOldBackups() error {
|
||||
continue
|
||||
}
|
||||
|
||||
err = storage.DeleteFile(backup.ID)
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
err = storage.DeleteFile(encryptor, backup.ID)
|
||||
if err != nil {
|
||||
s.logger.Error("Failed to delete backup file", "backupId", backup.ID, "error", err)
|
||||
}
|
||||
|
||||
@@ -2,20 +2,21 @@ package backups
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"sync"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
type BackupContextManager struct {
|
||||
mu sync.RWMutex
|
||||
cancelFuncs map[uuid.UUID]context.CancelFunc
|
||||
mu sync.RWMutex
|
||||
cancelFuncs map[uuid.UUID]context.CancelFunc
|
||||
cancelledBackups map[uuid.UUID]bool
|
||||
}
|
||||
|
||||
func NewBackupContextManager() *BackupContextManager {
|
||||
return &BackupContextManager{
|
||||
cancelFuncs: make(map[uuid.UUID]context.CancelFunc),
|
||||
cancelFuncs: make(map[uuid.UUID]context.CancelFunc),
|
||||
cancelledBackups: make(map[uuid.UUID]bool),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -23,25 +24,37 @@ func (m *BackupContextManager) RegisterBackup(backupID uuid.UUID, cancelFunc con
|
||||
m.mu.Lock()
|
||||
defer m.mu.Unlock()
|
||||
m.cancelFuncs[backupID] = cancelFunc
|
||||
delete(m.cancelledBackups, backupID)
|
||||
}
|
||||
|
||||
func (m *BackupContextManager) CancelBackup(backupID uuid.UUID) error {
|
||||
m.mu.Lock()
|
||||
defer m.mu.Unlock()
|
||||
|
||||
cancelFunc, exists := m.cancelFuncs[backupID]
|
||||
if !exists {
|
||||
return errors.New("backup is not in progress or already completed")
|
||||
if m.cancelledBackups[backupID] {
|
||||
return nil
|
||||
}
|
||||
|
||||
cancelFunc()
|
||||
delete(m.cancelFuncs, backupID)
|
||||
cancelFunc, exists := m.cancelFuncs[backupID]
|
||||
if exists {
|
||||
cancelFunc()
|
||||
delete(m.cancelFuncs, backupID)
|
||||
}
|
||||
|
||||
m.cancelledBackups[backupID] = true
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *BackupContextManager) IsCancelled(backupID uuid.UUID) bool {
|
||||
m.mu.RLock()
|
||||
defer m.mu.RUnlock()
|
||||
return m.cancelledBackups[backupID]
|
||||
}
|
||||
|
||||
func (m *BackupContextManager) UnregisterBackup(backupID uuid.UUID) {
|
||||
m.mu.Lock()
|
||||
defer m.mu.Unlock()
|
||||
delete(m.cancelFuncs, backupID)
|
||||
delete(m.cancelledBackups, backupID)
|
||||
}
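The separate cancelledBackups map is what makes CancelBackup idempotent and safe to call even before the worker has registered its cancel func. A minimal usage sketch, assuming the same call order MakeBackup uses further down in this diff:

```go
// sketch: lifecycle of a cancellable backup using the manager above
func exampleCancelLifecycle(mgr *BackupContextManager, backupID uuid.UUID) {
	ctx, cancel := context.WithCancel(context.Background())
	_ = ctx // the worker passes ctx down to pg_dump / storage writes

	mgr.RegisterBackup(backupID, cancel) // also clears any stale cancelled flag
	defer mgr.UnregisterBackup(backupID) // cleans up both maps when the worker exits

	// called from elsewhere, e.g. an HTTP cancel endpoint:
	_ = mgr.CancelBackup(backupID) // idempotent: the rewritten version always returns nil

	if mgr.IsCancelled(backupID) {
		// the worker can poll this flag and stop streaming early
	}
}
```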
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package backups
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
@@ -26,6 +27,7 @@ import (
|
||||
users_testing "postgresus-backend/internal/features/users/testing"
|
||||
workspaces_models "postgresus-backend/internal/features/workspaces/models"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
test_utils "postgresus-backend/internal/util/testing"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
)
|
||||
@@ -524,7 +526,7 @@ func Test_CancelBackup_InProgressBackup_SuccessfullyCancelled(t *testing.T) {
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Register a cancellable context for the backup
|
||||
GetBackupService().backupContextMgr.RegisterBackup(backup.ID, func() {})
|
||||
GetBackupService().backupContextManager.RegisterBackup(backup.ID, func() {})
|
||||
|
||||
resp := test_utils.MakePostRequest(
|
||||
t,
|
||||
@@ -700,7 +702,7 @@ func createTestBackup(
|
||||
dummyContent := []byte("dummy backup content for testing")
|
||||
reader := strings.NewReader(string(dummyContent))
|
||||
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
|
||||
if err := storages[0].SaveFile(logger, backup.ID, reader); err != nil {
|
||||
if err := storages[0].SaveFile(context.Background(), encryption.GetFieldEncryptor(), logger, backup.ID, reader); err != nil {
|
||||
panic(fmt.Sprintf("Failed to create test backup file: %v", err))
|
||||
}
|
||||
|
||||
|
||||
@@ -1,15 +1,18 @@
|
||||
package backups
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
"postgresus-backend/internal/features/backups/backups/usecases"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/features/notifiers"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/logger"
|
||||
"time"
|
||||
)
|
||||
|
||||
var backupRepository = &BackupRepository{}
|
||||
@@ -23,6 +26,8 @@ var backupService = &BackupService{
|
||||
notifiers.GetNotifierService(),
|
||||
notifiers.GetNotifierService(),
|
||||
backups_config.GetBackupConfigService(),
|
||||
encryption_secrets.GetSecretKeyService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
usecases.GetCreateBackupUsecase(),
|
||||
logger.GetLogger(),
|
||||
[]BackupRemoveListener{},
|
||||
|
||||
@@ -1,5 +1,10 @@
 package backups

import (
	"io"

	"postgresus-backend/internal/features/backups/backups/encryption"
)

type GetBackupsRequest struct {
	DatabaseID string `form:"database_id" binding:"required"`
	Limit      int    `form:"limit"`
@@ -12,3 +17,12 @@ type GetBackupsResponse struct {
	Limit  int `json:"limit"`
	Offset int `json:"offset"`
}

type decryptionReaderCloser struct {
	*encryption.DecryptionReader
	baseReader io.ReadCloser
}

func (r *decryptionReaderCloser) Close() error {
	return r.baseReader.Close()
}
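The wrapper exists because DecryptionReader only implements io.Reader, while GetBackupFile promises an io.ReadCloser whose Close releases the underlying storage stream. A sketch of how the service wires it up, mirroring getBackupReader later in this diff:

```go
// sketch: keep Close() pointed at the storage stream after wrapping it for decryption
dr, err := encryption.NewDecryptionReader(fileReader, masterKey, backup.ID, salt, iv)
if err != nil {
	return nil, err
}
return &decryptionReaderCloser{DecryptionReader: dr, baseReader: fileReader}, nil
```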
@@ -0,0 +1,156 @@
|
||||
package encryption
|
||||
|
||||
import (
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
type DecryptionReader struct {
|
||||
baseReader io.Reader
|
||||
cipher cipher.AEAD
|
||||
buffer []byte
|
||||
nonce []byte
|
||||
chunkIndex uint64
|
||||
headerRead bool
|
||||
eof bool
|
||||
}
|
||||
|
||||
func NewDecryptionReader(
|
||||
baseReader io.Reader,
|
||||
masterKey string,
|
||||
backupID uuid.UUID,
|
||||
salt []byte,
|
||||
nonce []byte,
|
||||
) (*DecryptionReader, error) {
|
||||
if len(salt) != SaltLen {
|
||||
return nil, fmt.Errorf("salt must be %d bytes, got %d", SaltLen, len(salt))
|
||||
}
|
||||
if len(nonce) != NonceLen {
|
||||
return nil, fmt.Errorf("nonce must be %d bytes, got %d", NonceLen, len(nonce))
|
||||
}
|
||||
|
||||
derivedKey, err := DeriveBackupKey(masterKey, backupID, salt)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to derive backup key: %w", err)
|
||||
}
|
||||
|
||||
block, err := aes.NewCipher(derivedKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create cipher: %w", err)
|
||||
}
|
||||
|
||||
aesgcm, err := cipher.NewGCM(block)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create GCM: %w", err)
|
||||
}
|
||||
|
||||
reader := &DecryptionReader{
|
||||
baseReader,
|
||||
aesgcm,
|
||||
make([]byte, 0),
|
||||
nonce,
|
||||
0,
|
||||
false,
|
||||
false,
|
||||
}
|
||||
|
||||
if err := reader.readAndValidateHeader(salt, nonce); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return reader, nil
|
||||
}
|
||||
|
||||
func (r *DecryptionReader) Read(p []byte) (n int, err error) {
|
||||
for len(r.buffer) < len(p) && !r.eof {
|
||||
if err := r.readAndDecryptChunk(); err != nil {
|
||||
if err == io.EOF {
|
||||
r.eof = true
|
||||
break
|
||||
}
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
if len(r.buffer) == 0 {
|
||||
return 0, io.EOF
|
||||
}
|
||||
|
||||
n = copy(p, r.buffer)
|
||||
r.buffer = r.buffer[n:]
|
||||
|
||||
return n, nil
|
||||
}
|
||||
|
||||
func (r *DecryptionReader) readAndValidateHeader(expectedSalt, expectedNonce []byte) error {
|
||||
header := make([]byte, HeaderLen)
|
||||
|
||||
if _, err := io.ReadFull(r.baseReader, header); err != nil {
|
||||
return fmt.Errorf("failed to read header: %w", err)
|
||||
}
|
||||
|
||||
magic := string(header[0:MagicBytesLen])
|
||||
if magic != MagicBytes {
|
||||
return fmt.Errorf("invalid magic bytes: expected %s, got %s", MagicBytes, magic)
|
||||
}
|
||||
|
||||
salt := header[MagicBytesLen : MagicBytesLen+SaltLen]
|
||||
nonce := header[MagicBytesLen+SaltLen : MagicBytesLen+SaltLen+NonceLen]
|
||||
|
||||
if string(salt) != string(expectedSalt) {
|
||||
return fmt.Errorf("salt mismatch in file header")
|
||||
}
|
||||
|
||||
if string(nonce) != string(expectedNonce) {
|
||||
return fmt.Errorf("nonce mismatch in file header")
|
||||
}
|
||||
|
||||
r.headerRead = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *DecryptionReader) readAndDecryptChunk() error {
|
||||
lengthBuf := make([]byte, 4)
|
||||
if _, err := io.ReadFull(r.baseReader, lengthBuf); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
chunkLen := binary.BigEndian.Uint32(lengthBuf)
|
||||
if chunkLen == 0 || chunkLen > ChunkSize+16 {
|
||||
return fmt.Errorf("invalid chunk length: %d", chunkLen)
|
||||
}
|
||||
|
||||
encrypted := make([]byte, chunkLen)
|
||||
if _, err := io.ReadFull(r.baseReader, encrypted); err != nil {
|
||||
return fmt.Errorf("failed to read encrypted chunk: %w", err)
|
||||
}
|
||||
|
||||
chunkNonce := r.generateChunkNonce()
|
||||
|
||||
decrypted, err := r.cipher.Open(nil, chunkNonce, encrypted, nil)
|
||||
if err != nil {
|
||||
return fmt.Errorf(
|
||||
"failed to decrypt chunk (authentication failed - file may be corrupted or tampered): %w",
|
||||
err,
|
||||
)
|
||||
}
|
||||
|
||||
r.buffer = append(r.buffer, decrypted...)
|
||||
r.chunkIndex++
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *DecryptionReader) generateChunkNonce() []byte {
|
||||
chunkNonce := make([]byte, NonceLen)
|
||||
copy(chunkNonce, r.nonce)
|
||||
|
||||
binary.BigEndian.PutUint64(chunkNonce[4:], r.chunkIndex)
|
||||
|
||||
return chunkNonce
|
||||
}
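Each chunk gets its own 12-byte nonce: the first 4 bytes come from the random per-backup base nonce and the last 8 bytes are the big-endian chunk counter, so the same AEAD key never sees a repeated nonce within one backup, and chunks must be read back in order. A worked example with illustrative values:

```go
// illustrative only: nonce for chunk #3 given base nonce aa bb cc dd 00 ... 00
base := []byte{0xaa, 0xbb, 0xcc, 0xdd, 0, 0, 0, 0, 0, 0, 0, 0} // NonceLen = 12
chunkNonce := make([]byte, len(base))
copy(chunkNonce, base)
binary.BigEndian.PutUint64(chunkNonce[4:], 3) // bytes 4..11 hold the chunk index
// chunkNonce == aa bb cc dd 00 00 00 00 00 00 00 03
```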
|
||||
@@ -0,0 +1,147 @@
|
||||
package encryption
|
||||
|
||||
import (
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
type EncryptionWriter struct {
|
||||
baseWriter io.Writer
|
||||
cipher cipher.AEAD
|
||||
buffer []byte
|
||||
nonce []byte
|
||||
salt []byte
|
||||
chunkIndex uint64
|
||||
headerWritten bool
|
||||
}
|
||||
|
||||
func NewEncryptionWriter(
|
||||
baseWriter io.Writer,
|
||||
masterKey string,
|
||||
backupID uuid.UUID,
|
||||
salt []byte,
|
||||
nonce []byte,
|
||||
) (*EncryptionWriter, error) {
|
||||
if len(salt) != SaltLen {
|
||||
return nil, fmt.Errorf("salt must be %d bytes, got %d", SaltLen, len(salt))
|
||||
}
|
||||
if len(nonce) != NonceLen {
|
||||
return nil, fmt.Errorf("nonce must be %d bytes, got %d", NonceLen, len(nonce))
|
||||
}
|
||||
|
||||
derivedKey, err := DeriveBackupKey(masterKey, backupID, salt)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to derive backup key: %w", err)
|
||||
}
|
||||
|
||||
block, err := aes.NewCipher(derivedKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create cipher: %w", err)
|
||||
}
|
||||
|
||||
aesgcm, err := cipher.NewGCM(block)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create GCM: %w", err)
|
||||
}
|
||||
|
||||
writer := &EncryptionWriter{
|
||||
baseWriter: baseWriter,
|
||||
cipher: aesgcm,
|
||||
buffer: make([]byte, 0, ChunkSize),
|
||||
nonce: nonce,
|
||||
chunkIndex: 0,
|
||||
headerWritten: false,
|
||||
salt: salt, // Store salt for lazy header writing
|
||||
}
|
||||
|
||||
return writer, nil
|
||||
}
|
||||
|
||||
func (w *EncryptionWriter) Write(p []byte) (n int, err error) {
|
||||
// Write header on first write (lazy initialization)
|
||||
if !w.headerWritten {
|
||||
if err := w.writeHeader(w.salt, w.nonce); err != nil {
|
||||
return 0, fmt.Errorf("failed to write header: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
n = len(p)
|
||||
w.buffer = append(w.buffer, p...)
|
||||
|
||||
for len(w.buffer) >= ChunkSize {
|
||||
chunk := w.buffer[:ChunkSize]
|
||||
if err := w.encryptAndWriteChunk(chunk); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
w.buffer = w.buffer[ChunkSize:]
|
||||
}
|
||||
|
||||
return n, nil
|
||||
}
|
||||
|
||||
func (w *EncryptionWriter) Close() error {
|
||||
// Write header if it hasn't been written yet (in case Close is called without any writes)
|
||||
if !w.headerWritten {
|
||||
if err := w.writeHeader(w.salt, w.nonce); err != nil {
|
||||
return fmt.Errorf("failed to write header: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(w.buffer) > 0 {
|
||||
if err := w.encryptAndWriteChunk(w.buffer); err != nil {
|
||||
return err
|
||||
}
|
||||
w.buffer = nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (w *EncryptionWriter) writeHeader(salt, nonce []byte) error {
|
||||
header := make([]byte, HeaderLen)
|
||||
|
||||
copy(header[0:MagicBytesLen], []byte(MagicBytes))
|
||||
copy(header[MagicBytesLen:MagicBytesLen+SaltLen], salt)
|
||||
copy(header[MagicBytesLen+SaltLen:MagicBytesLen+SaltLen+NonceLen], nonce)
|
||||
|
||||
_, err := w.baseWriter.Write(header)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write header: %w", err)
|
||||
}
|
||||
|
||||
w.headerWritten = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (w *EncryptionWriter) encryptAndWriteChunk(chunk []byte) error {
|
||||
chunkNonce := w.generateChunkNonce()
|
||||
|
||||
encrypted := w.cipher.Seal(nil, chunkNonce, chunk, nil)
|
||||
|
||||
lengthBuf := make([]byte, 4)
|
||||
binary.BigEndian.PutUint32(lengthBuf, uint32(len(encrypted)))
|
||||
|
||||
if _, err := w.baseWriter.Write(lengthBuf); err != nil {
|
||||
return fmt.Errorf("failed to write chunk length: %w", err)
|
||||
}
|
||||
|
||||
if _, err := w.baseWriter.Write(encrypted); err != nil {
|
||||
return fmt.Errorf("failed to write encrypted chunk: %w", err)
|
||||
}
|
||||
|
||||
w.chunkIndex++
|
||||
return nil
|
||||
}
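On disk, everything after the fixed header is a sequence of independently authenticated frames; each frame is a 4-byte big-endian length followed by the GCM output for one chunk (the plaintext chunk plus a 16-byte authentication tag). A layout sketch based on the constants defined in this package:

```go
// stream layout written by EncryptionWriter (sizes follow the package constants)
//
//   header : MagicBytes | salt | nonce | reserved
//   frame 0: len (4, big-endian) | ciphertext (<= ChunkSize) + GCM tag (16)
//   frame 1: len (4, big-endian) | ciphertext + tag
//   ...
//   the final frame is usually shorter; the reader rejects len > ChunkSize+16
```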
|
||||
|
||||
func (w *EncryptionWriter) generateChunkNonce() []byte {
|
||||
chunkNonce := make([]byte, NonceLen)
|
||||
copy(chunkNonce, w.nonce)
|
||||
|
||||
binary.BigEndian.PutUint64(chunkNonce[4:], w.chunkIndex)
|
||||
|
||||
return chunkNonce
|
||||
}
|
||||
@@ -0,0 +1,387 @@
|
||||
package encryption
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func Test_EncryptDecryptRoundTrip_ReturnsOriginalData(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
originalData := []byte(
|
||||
"This is a test backup data that should be encrypted and then decrypted successfully.",
|
||||
)
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
n, err := writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, len(originalData), n)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted := make([]byte, len(originalData))
|
||||
n, err = io.ReadFull(reader, decrypted)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, len(originalData), n)
|
||||
assert.Equal(t, originalData, decrypted)
|
||||
}
|
||||
|
||||
func Test_EncryptDecryptRoundTrip_LargeData_WorksCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
originalData := make([]byte, 100*1024)
|
||||
_, err = rand.Read(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
n, err := writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, len(originalData), n)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, originalData, decrypted)
|
||||
}
|
||||
|
||||
func Test_EncryptionWriter_MultipleWrites_CombinesCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
part1 := []byte("First part of data. ")
|
||||
part2 := []byte("Second part of data. ")
|
||||
part3 := []byte("Third part of data.")
|
||||
expectedData := append(append(part1, part2...), part3...)
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(part1)
|
||||
require.NoError(t, err)
|
||||
_, err = writer.Write(part2)
|
||||
require.NoError(t, err)
|
||||
_, err = writer.Write(part3)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, expectedData, decrypted)
|
||||
}
|
||||
|
||||
func Test_DecryptionReader_InvalidHeader_ReturnsError(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
invalidHeader := make([]byte, HeaderLen)
|
||||
copy(invalidHeader, []byte("INVALID!"))
|
||||
|
||||
invalidData := bytes.NewBuffer(invalidHeader)
|
||||
|
||||
_, err = NewDecryptionReader(invalidData, masterKey, backupID, salt, nonce)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "invalid magic bytes")
|
||||
}
|
||||
|
||||
func Test_DecryptionReader_TamperedData_ReturnsError(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
originalData := []byte("This data will be tampered with.")
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
encryptedBytes := encrypted.Bytes()
|
||||
if len(encryptedBytes) > HeaderLen+10 {
|
||||
encryptedBytes[HeaderLen+10] ^= 0xFF
|
||||
}
|
||||
|
||||
tamperedBuffer := bytes.NewBuffer(encryptedBytes)
|
||||
|
||||
reader, err := NewDecryptionReader(tamperedBuffer, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = io.ReadAll(reader)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "authentication failed")
|
||||
}
|
||||
|
||||
func Test_DeriveBackupKey_SameInputs_ReturnsSameKey(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
|
||||
key1, err := DeriveBackupKey(masterKey, backupID, salt)
|
||||
require.NoError(t, err)
|
||||
|
||||
key2, err := DeriveBackupKey(masterKey, backupID, salt)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, key1, key2)
|
||||
}
|
||||
|
||||
func Test_DeriveBackupKey_DifferentInputs_ReturnsDifferentKeys(t *testing.T) {
|
||||
masterKey1 := uuid.New().String() + uuid.New().String()
|
||||
masterKey2 := uuid.New().String() + uuid.New().String()
|
||||
backupID1 := uuid.New()
|
||||
backupID2 := uuid.New()
|
||||
salt1, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
salt2, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
|
||||
key1, err := DeriveBackupKey(masterKey1, backupID1, salt1)
|
||||
require.NoError(t, err)
|
||||
|
||||
key2, err := DeriveBackupKey(masterKey2, backupID1, salt1)
|
||||
require.NoError(t, err)
|
||||
assert.NotEqual(t, key1, key2)
|
||||
|
||||
key3, err := DeriveBackupKey(masterKey1, backupID2, salt1)
|
||||
require.NoError(t, err)
|
||||
assert.NotEqual(t, key1, key3)
|
||||
|
||||
key4, err := DeriveBackupKey(masterKey1, backupID1, salt2)
|
||||
require.NoError(t, err)
|
||||
assert.NotEqual(t, key1, key4)
|
||||
}
|
||||
|
||||
func Test_EncryptionWriter_PartialChunk_HandledCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
smallData := []byte("Small data less than chunk size")
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(smallData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, smallData, decrypted)
|
||||
}
|
||||
|
||||
func Test_GenerateSalt_ReturnsCorrectLength(t *testing.T) {
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, SaltLen, len(salt))
|
||||
}
|
||||
|
||||
func Test_GenerateSalt_GeneratesUniqueSalts(t *testing.T) {
|
||||
salt1, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
|
||||
salt2, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.NotEqual(t, salt1, salt2)
|
||||
}
|
||||
|
||||
func Test_GenerateNonce_ReturnsCorrectLength(t *testing.T) {
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, NonceLen, len(nonce))
|
||||
}
|
||||
|
||||
func Test_GenerateNonce_GeneratesUniqueNonces(t *testing.T) {
|
||||
nonce1, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
nonce2, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.NotEqual(t, nonce1, nonce2)
|
||||
}
|
||||
|
||||
func Test_DecryptionReader_WrongMasterKey_ReturnsError(t *testing.T) {
|
||||
masterKey1 := uuid.New().String() + uuid.New().String()
|
||||
masterKey2 := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
originalData := []byte("Secret data")
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey1, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey2, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = io.ReadAll(reader)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "authentication failed")
|
||||
}
|
||||
|
||||
func Test_EncryptionWriter_EmptyData_WorksCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, 0, len(decrypted))
|
||||
}
|
||||
|
||||
func Test_EncryptionWriter_MultipleChunks_WorksCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
dataSize := ChunkSize*3 + 1000
|
||||
originalData := make([]byte, dataSize)
|
||||
_, err = rand.Read(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
decrypted, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, originalData, decrypted)
|
||||
}
|
||||
|
||||
func Test_DecryptionReader_SmallReads_WorksCorrectly(t *testing.T) {
|
||||
masterKey := uuid.New().String() + uuid.New().String()
|
||||
backupID := uuid.New()
|
||||
salt, err := GenerateSalt()
|
||||
require.NoError(t, err)
|
||||
nonce, err := GenerateNonce()
|
||||
require.NoError(t, err)
|
||||
|
||||
originalData := []byte("This is test data that will be read in small chunks.")
|
||||
|
||||
var encrypted bytes.Buffer
|
||||
writer, err := NewEncryptionWriter(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = writer.Write(originalData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writer.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
reader, err := NewDecryptionReader(&encrypted, masterKey, backupID, salt, nonce)
|
||||
require.NoError(t, err)
|
||||
|
||||
var decrypted []byte
|
||||
buf := make([]byte, 5)
|
||||
for {
|
||||
n, err := reader.Read(buf)
|
||||
if n > 0 {
|
||||
decrypted = append(decrypted, buf[:n]...)
|
||||
}
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
assert.Equal(t, originalData, decrypted)
|
||||
}
|
||||
@@ -0,0 +1,52 @@
package encryption

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"github.com/google/uuid"
	"golang.org/x/crypto/pbkdf2"
)

const (
	MagicBytes       = "PGRSUS01"
	MagicBytesLen    = 8
	SaltLen          = 32
	NonceLen         = 12
	ReservedLen      = 12
	HeaderLen        = MagicBytesLen + SaltLen + NonceLen + ReservedLen
	ChunkSize        = 32 * 1024
	PBKDF2Iterations = 100000
)
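For reference, these constants fix the file header at 64 bytes and bound every frame length the reader will accept:

```go
// HeaderLen = MagicBytesLen + SaltLen + NonceLen + ReservedLen
//           = 8 + 32 + 12 + 12 = 64 bytes
// max frame = ChunkSize + GCM tag = 32*1024 + 16 bytes
```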

func DeriveBackupKey(masterKey string, backupID uuid.UUID, salt []byte) ([]byte, error) {
	if masterKey == "" {
		return nil, fmt.Errorf("master key cannot be empty")
	}
	if len(salt) != SaltLen {
		return nil, fmt.Errorf("salt must be %d bytes", SaltLen)
	}

	keyMaterial := []byte(masterKey + backupID.String())

	derivedKey := pbkdf2.Key(keyMaterial, salt, PBKDF2Iterations, 32, sha256.New)

	return derivedKey, nil
}
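Each backup therefore gets its own AES-256 key: PBKDF2-HMAC-SHA256 over the master key concatenated with the backup UUID, a random 32-byte salt, and 100,000 iterations. Shown only to make the parameters explicit, this is the equivalent direct call:

```go
// equivalent to DeriveBackupKey(masterKey, backupID, salt) above
key := pbkdf2.Key([]byte(masterKey+backupID.String()), salt, PBKDF2Iterations, 32, sha256.New)
```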

func GenerateSalt() ([]byte, error) {
	salt := make([]byte, SaltLen)
	if _, err := rand.Read(salt); err != nil {
		return nil, fmt.Errorf("failed to generate salt: %w", err)
	}
	return salt, nil
}

func GenerateNonce() ([]byte, error) {
	nonce := make([]byte, NonceLen)
	if _, err := rand.Read(nonce); err != nil {
		return nil, fmt.Errorf("failed to generate nonce: %w", err)
	}
	return nonce, nil
}
@@ -3,6 +3,7 @@ package backups
|
||||
import (
|
||||
"context"
|
||||
|
||||
usecases_postgresql "postgresus-backend/internal/features/backups/backups/usecases/postgresql"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
"postgresus-backend/internal/features/notifiers"
|
||||
@@ -29,7 +30,7 @@ type CreateBackupUsecase interface {
|
||||
backupProgressListener func(
|
||||
completedMBs float64,
|
||||
),
|
||||
) error
|
||||
) (*usecases_postgresql.BackupMetadata, error)
|
||||
}
|
||||
|
||||
type BackupRemoveListener interface {
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package backups
|
||||
|
||||
import (
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
@@ -19,5 +20,9 @@ type Backup struct {
|
||||
|
||||
BackupDurationMs int64 `json:"backupDurationMs" gorm:"column:backup_duration_ms;default:0"`
|
||||
|
||||
EncryptionSalt *string `json:"-" gorm:"column:encryption_salt"`
|
||||
EncryptionIV *string `json:"-" gorm:"column:encryption_iv"`
|
||||
Encryption backups_config.BackupEncryption `json:"encryption" gorm:"column:encryption;type:text;not null;default:'NONE'"`
|
||||
|
||||
CreatedAt time.Time `json:"createdAt" gorm:"column:created_at"`
|
||||
}
|
||||
|
||||
@@ -2,20 +2,25 @@ package backups
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"log/slog"
|
||||
"slices"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
"postgresus-backend/internal/features/backups/backups/encryption"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/features/notifiers"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
users_models "postgresus-backend/internal/features/users/models"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"slices"
|
||||
"strings"
|
||||
"time"
|
||||
util_encryption "postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -27,6 +32,8 @@ type BackupService struct {
|
||||
notifierService *notifiers.NotifierService
|
||||
notificationSender NotificationSender
|
||||
backupConfigService *backups_config.BackupConfigService
|
||||
secretKeyService *encryption_secrets.SecretKeyService
|
||||
fieldEncryptor util_encryption.FieldEncryptor
|
||||
|
||||
createBackupUseCase CreateBackupUsecase
|
||||
|
||||
@@ -34,9 +41,9 @@ type BackupService struct {
|
||||
|
||||
backupRemoveListeners []BackupRemoveListener
|
||||
|
||||
workspaceService *workspaces_services.WorkspaceService
|
||||
auditLogService *audit_logs.AuditLogService
|
||||
backupContextMgr *BackupContextManager
|
||||
workspaceService *workspaces_services.WorkspaceService
|
||||
auditLogService *audit_logs.AuditLogService
|
||||
backupContextManager *BackupContextManager
|
||||
}
|
||||
|
||||
func (s *BackupService) AddBackupRemoveListener(listener BackupRemoveListener) {
|
||||
@@ -253,10 +260,10 @@ func (s *BackupService) MakeBackup(databaseID uuid.UUID, isLastTry bool) {
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
s.backupContextMgr.RegisterBackup(backup.ID, cancel)
|
||||
defer s.backupContextMgr.UnregisterBackup(backup.ID)
|
||||
s.backupContextManager.RegisterBackup(backup.ID, cancel)
|
||||
defer s.backupContextManager.UnregisterBackup(backup.ID)
|
||||
|
||||
err = s.createBackupUseCase.Execute(
|
||||
backupMetadata, err := s.createBackupUseCase.Execute(
|
||||
ctx,
|
||||
backup.ID,
|
||||
backupConfig,
|
||||
@@ -268,7 +275,12 @@ func (s *BackupService) MakeBackup(databaseID uuid.UUID, isLastTry bool) {
|
||||
errMsg := err.Error()
|
||||
|
||||
// Check if backup was cancelled (not due to shutdown)
|
||||
if strings.Contains(errMsg, "backup cancelled") && !strings.Contains(errMsg, "shutdown") {
|
||||
isCancelled := strings.Contains(errMsg, "backup cancelled") ||
|
||||
strings.Contains(errMsg, "context canceled") ||
|
||||
errors.Is(err, context.Canceled)
|
||||
isShutdown := strings.Contains(errMsg, "shutdown")
|
||||
|
||||
if isCancelled && !isShutdown {
|
||||
backup.Status = BackupStatusCanceled
|
||||
backup.BackupDurationMs = time.Since(start).Milliseconds()
|
||||
backup.BackupSizeMb = 0
|
||||
@@ -280,7 +292,7 @@ func (s *BackupService) MakeBackup(databaseID uuid.UUID, isLastTry bool) {
|
||||
// Delete partial backup from storage
|
||||
storage, storageErr := s.storageService.GetStorageByID(backup.StorageID)
|
||||
if storageErr == nil {
|
||||
if deleteErr := storage.DeleteFile(backup.ID); deleteErr != nil {
|
||||
if deleteErr := storage.DeleteFile(s.fieldEncryptor, backup.ID); deleteErr != nil {
|
||||
s.logger.Error(
|
||||
"Failed to delete partial backup file",
|
||||
"backupId",
|
||||
@@ -326,6 +338,13 @@ func (s *BackupService) MakeBackup(databaseID uuid.UUID, isLastTry bool) {
|
||||
backup.Status = BackupStatusCompleted
|
||||
backup.BackupDurationMs = time.Since(start).Milliseconds()
|
||||
|
||||
// Update backup with encryption metadata if provided
|
||||
if backupMetadata != nil {
|
||||
backup.EncryptionSalt = backupMetadata.EncryptionSalt
|
||||
backup.EncryptionIV = backupMetadata.EncryptionIV
|
||||
backup.Encryption = backupMetadata.Encryption
|
||||
}
|
||||
|
||||
if err := s.backupRepository.Save(backup); err != nil {
|
||||
s.logger.Error("Failed to save backup", "error", err)
|
||||
return
|
||||
@@ -366,6 +385,11 @@ func (s *BackupService) SendBackupNotification(
|
||||
return
|
||||
}
|
||||
|
||||
workspace, err := s.workspaceService.GetWorkspaceByID(*database.WorkspaceID)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
for _, notifier := range database.Notifiers {
|
||||
if !slices.Contains(
|
||||
backupConfig.SendNotificationsOn,
|
||||
@@ -377,9 +401,17 @@ func (s *BackupService) SendBackupNotification(
|
||||
title := ""
|
||||
switch notificationType {
|
||||
case backups_config.NotificationBackupFailed:
|
||||
title = fmt.Sprintf("❌ Backup failed for database \"%s\"", database.Name)
|
||||
title = fmt.Sprintf(
|
||||
"❌ Backup failed for database \"%s\" (workspace \"%s\")",
|
||||
database.Name,
|
||||
workspace.Name,
|
||||
)
|
||||
case backups_config.NotificationBackupSuccess:
|
||||
title = fmt.Sprintf("✅ Backup completed for database \"%s\"", database.Name)
|
||||
title = fmt.Sprintf(
|
||||
"✅ Backup completed for database \"%s\" (workspace \"%s\")",
|
||||
database.Name,
|
||||
workspace.Name,
|
||||
)
|
||||
}
|
||||
|
||||
message := ""
|
||||
@@ -450,7 +482,7 @@ func (s *BackupService) CancelBackup(
|
||||
return errors.New("backup is not in progress")
|
||||
}
|
||||
|
||||
if err := s.backupContextMgr.CancelBackup(backupID); err != nil {
|
||||
if err := s.backupContextManager.CancelBackup(backupID); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -496,11 +528,6 @@ func (s *BackupService) GetBackupFile(
|
||||
return nil, errors.New("insufficient permissions to download backup for this database")
|
||||
}
|
||||
|
||||
storage, err := s.storageService.GetStorageByID(backup.StorageID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
s.auditLogService.WriteAuditLog(
|
||||
fmt.Sprintf(
|
||||
"Backup file downloaded for database: %s (ID: %s)",
|
||||
@@ -511,7 +538,7 @@ func (s *BackupService) GetBackupFile(
|
||||
database.WorkspaceID,
|
||||
)
|
||||
|
||||
return storage.GetFile(backup.ID)
|
||||
return s.getBackupReader(backupID)
|
||||
}
|
||||
|
||||
func (s *BackupService) deleteBackup(backup *Backup) error {
|
||||
@@ -526,7 +553,7 @@ func (s *BackupService) deleteBackup(backup *Backup) error {
|
||||
return err
|
||||
}
|
||||
|
||||
err = storage.DeleteFile(backup.ID)
|
||||
err = storage.DeleteFile(s.fieldEncryptor, backup.ID)
|
||||
if err != nil {
|
||||
		// we do not return an error here, because cleanup is sometimes performed
		// before an unavailable storage is removed or changed - therefore we should
|
||||
@@ -566,3 +593,91 @@ func (s *BackupService) deleteDbBackups(databaseID uuid.UUID) error {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// getBackupReader returns a reader for the backup file.
// If the backup is encrypted, the reader is wrapped with a DecryptionReader.
|
||||
func (s *BackupService) getBackupReader(backupID uuid.UUID) (io.ReadCloser, error) {
|
||||
backup, err := s.backupRepository.FindByID(backupID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to find backup: %w", err)
|
||||
}
|
||||
|
||||
storage, err := s.storageService.GetStorageByID(backup.StorageID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get storage: %w", err)
|
||||
}
|
||||
|
||||
fileReader, err := storage.GetFile(s.fieldEncryptor, backup.ID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get backup file: %w", err)
|
||||
}
|
||||
|
||||
// If not encrypted, return raw reader
|
||||
if backup.Encryption == backups_config.BackupEncryptionNone {
|
||||
s.logger.Info("Returning non-encrypted backup", "backupId", backupID)
|
||||
return fileReader, nil
|
||||
}
|
||||
|
||||
// Decrypt on-the-fly for encrypted backups
|
||||
if backup.Encryption != backups_config.BackupEncryptionEncrypted {
|
||||
if err := fileReader.Close(); err != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", err)
|
||||
}
|
||||
return nil, fmt.Errorf("unsupported encryption type: %s", backup.Encryption)
|
||||
}
|
||||
|
||||
if backup.EncryptionSalt == nil || backup.EncryptionIV == nil {
|
||||
if err := fileReader.Close(); err != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", err)
|
||||
}
|
||||
return nil, fmt.Errorf("backup marked as encrypted but missing encryption metadata")
|
||||
}
|
||||
|
||||
// Get master key
|
||||
masterKey, err := s.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
if closeErr := fileReader.Close(); closeErr != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", closeErr)
|
||||
}
|
||||
return nil, fmt.Errorf("failed to get master key: %w", err)
|
||||
}
|
||||
|
||||
// Decode salt and IV
|
||||
salt, err := base64.StdEncoding.DecodeString(*backup.EncryptionSalt)
|
||||
if err != nil {
|
||||
if closeErr := fileReader.Close(); closeErr != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", closeErr)
|
||||
}
|
||||
return nil, fmt.Errorf("failed to decode salt: %w", err)
|
||||
}
|
||||
|
||||
iv, err := base64.StdEncoding.DecodeString(*backup.EncryptionIV)
|
||||
if err != nil {
|
||||
if closeErr := fileReader.Close(); closeErr != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", closeErr)
|
||||
}
|
||||
return nil, fmt.Errorf("failed to decode IV: %w", err)
|
||||
}
|
||||
|
||||
// Wrap with decrypting reader
|
||||
decryptionReader, err := encryption.NewDecryptionReader(
|
||||
fileReader,
|
||||
masterKey,
|
||||
backup.ID,
|
||||
salt,
|
||||
iv,
|
||||
)
|
||||
if err != nil {
|
||||
if closeErr := fileReader.Close(); closeErr != nil {
|
||||
s.logger.Error("Failed to close file reader", "error", closeErr)
|
||||
}
|
||||
return nil, fmt.Errorf("failed to create decrypting reader: %w", err)
|
||||
}
|
||||
|
||||
s.logger.Info("Returning encrypted backup with decryption", "backupId", backupID)
|
||||
|
||||
return &decryptionReaderCloser{
|
||||
decryptionReader,
|
||||
fileReader,
|
||||
}, nil
|
||||
}
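Because getBackupReader hides the encryption branch behind a plain io.ReadCloser, download code can stream the result without knowing whether decryption is happening. A minimal consumer sketch; the handler shape is an assumption and not part of this change:

```go
// hypothetical consumer: stream the ReadCloser returned by GetBackupFile to a client
func streamBackup(w http.ResponseWriter, reader io.ReadCloser, backupID uuid.UUID) {
	defer reader.Close()
	w.Header().Set("Content-Disposition",
		fmt.Sprintf(`attachment; filename="%s.dump"`, backupID))
	if _, err := io.Copy(w, reader); err != nil {
		// the client may have disconnected mid-download; just log it
		log.Printf("failed to stream backup %s: %v", backupID, err)
	}
}
```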
|
||||
|
||||
@@ -3,17 +3,22 @@ package backups
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
usecases_postgresql "postgresus-backend/internal/features/backups/backups/usecases/postgresql"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/features/notifiers"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
users_enums "postgresus-backend/internal/features/users/enums"
|
||||
users_testing "postgresus-backend/internal/features/users/testing"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/logger"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/stretchr/testify/assert"
|
||||
@@ -52,11 +57,13 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
|
||||
notifiers.GetNotifierService(),
|
||||
mockNotificationSender,
|
||||
backups_config.GetBackupConfigService(),
|
||||
encryption_secrets.GetSecretKeyService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
&CreateFailedBackupUsecase{},
|
||||
logger.GetLogger(),
|
||||
[]BackupRemoveListener{},
|
||||
nil, // workspaceService
|
||||
nil, // auditLogService
|
||||
workspaces_services.GetWorkspaceService(),
|
||||
nil,
|
||||
NewBackupContextManager(),
|
||||
}
|
||||
|
||||
@@ -98,11 +105,13 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
|
||||
notifiers.GetNotifierService(),
|
||||
mockNotificationSender,
|
||||
backups_config.GetBackupConfigService(),
|
||||
encryption_secrets.GetSecretKeyService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
&CreateSuccessBackupUsecase{},
|
||||
logger.GetLogger(),
|
||||
[]BackupRemoveListener{},
|
||||
nil, // workspaceService
|
||||
nil, // auditLogService
|
||||
workspaces_services.GetWorkspaceService(),
|
||||
nil,
|
||||
NewBackupContextManager(),
|
||||
}
|
||||
|
||||
@@ -121,11 +130,13 @@ func Test_BackupExecuted_NotificationSent(t *testing.T) {
|
||||
notifiers.GetNotifierService(),
|
||||
mockNotificationSender,
|
||||
backups_config.GetBackupConfigService(),
|
||||
encryption_secrets.GetSecretKeyService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
&CreateSuccessBackupUsecase{},
|
||||
logger.GetLogger(),
|
||||
[]BackupRemoveListener{},
|
||||
nil, // workspaceService
|
||||
nil, // auditLogService
|
||||
workspaces_services.GetWorkspaceService(),
|
||||
nil,
|
||||
NewBackupContextManager(),
|
||||
}
|
||||
|
||||
@@ -170,9 +181,9 @@ func (uc *CreateFailedBackupUsecase) Execute(
|
||||
backupProgressListener func(
|
||||
completedMBs float64,
|
||||
),
|
||||
) error {
|
||||
) (*usecases_postgresql.BackupMetadata, error) {
|
||||
backupProgressListener(10) // Assume we completed 10MB
|
||||
return errors.New("backup failed")
|
||||
return nil, errors.New("backup failed")
|
||||
}
|
||||
|
||||
type CreateSuccessBackupUsecase struct {
|
||||
@@ -187,7 +198,11 @@ func (uc *CreateSuccessBackupUsecase) Execute(
|
||||
backupProgressListener func(
|
||||
completedMBs float64,
|
||||
),
|
||||
) error {
|
||||
) (*usecases_postgresql.BackupMetadata, error) {
|
||||
backupProgressListener(10) // Assume we completed 10MB
|
||||
return nil
|
||||
return &usecases_postgresql.BackupMetadata{
|
||||
EncryptionSalt: nil,
|
||||
EncryptionIV: nil,
|
||||
Encryption: backups_config.BackupEncryptionNone,
|
||||
}, nil
|
||||
}
|
||||
|
||||
@@ -15,7 +15,7 @@ type CreateBackupUsecase struct {
|
||||
CreatePostgresqlBackupUsecase *usecases_postgresql.CreatePostgresqlBackupUsecase
|
||||
}
|
||||
|
||||
// Execute creates a backup of the database and returns the backup size in MB
|
||||
// Execute creates a backup of the database and returns the backup metadata
|
||||
func (uc *CreateBackupUsecase) Execute(
|
||||
ctx context.Context,
|
||||
backupID uuid.UUID,
|
||||
@@ -25,7 +25,7 @@ func (uc *CreateBackupUsecase) Execute(
|
||||
backupProgressListener func(
|
||||
completedMBs float64,
|
||||
),
|
||||
) error {
|
||||
) (*usecases_postgresql.BackupMetadata, error) {
|
||||
if database.Type == databases.DatabaseTypePostgres {
|
||||
return uc.CreatePostgresqlBackupUsecase.Execute(
|
||||
ctx,
|
||||
@@ -37,5 +37,5 @@ func (uc *CreateBackupUsecase) Execute(
|
||||
)
|
||||
}
|
||||
|
||||
return errors.New("database type not supported")
|
||||
return nil, errors.New("database type not supported")
|
||||
}
|
||||
|
||||
@@ -2,6 +2,7 @@ package usecases_postgresql
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
@@ -14,17 +15,39 @@ import (
|
||||
"time"
|
||||
|
||||
"postgresus-backend/internal/config"
|
||||
backup_encryption "postgresus-backend/internal/features/backups/backups/encryption"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
|
||||
encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
const (
|
||||
backupTimeout = 23 * time.Hour
|
||||
shutdownCheckInterval = 1 * time.Second
|
||||
copyBufferSize = 32 * 1024
|
||||
progressReportIntervalMB = 1.0
|
||||
pgConnectTimeout = 30
|
||||
compressionLevel = 5
|
||||
exitCodeAccessViolation = -1073741819
|
||||
exitCodeGenericError = 1
|
||||
exitCodeConnectionError = 2
|
||||
)
|
||||
|
||||
type CreatePostgresqlBackupUsecase struct {
|
||||
logger *slog.Logger
|
||||
logger *slog.Logger
|
||||
secretKeyService *encryption_secrets.SecretKeyService
|
||||
fieldEncryptor encryption.FieldEncryptor
|
||||
}
|
||||
|
||||
type writeResult struct {
|
||||
bytesWritten int
|
||||
writeErr error
|
||||
}
|
||||
|
||||
// Execute creates a backup of the database
|
||||
@@ -37,7 +60,7 @@ func (uc *CreatePostgresqlBackupUsecase) Execute(
|
||||
backupProgressListener func(
|
||||
completedMBs float64,
|
||||
),
|
||||
) error {
|
||||
) (*BackupMetadata, error) {
|
||||
uc.logger.Info(
|
||||
"Creating PostgreSQL backup via pg_dump custom format",
|
||||
"databaseId",
|
||||
@@ -47,38 +70,24 @@ func (uc *CreatePostgresqlBackupUsecase) Execute(
|
||||
)
|
||||
|
||||
if !backupConfig.IsBackupsEnabled {
|
||||
return fmt.Errorf("backups are not enabled for this database: \"%s\"", db.Name)
|
||||
return nil, fmt.Errorf("backups are not enabled for this database: \"%s\"", db.Name)
|
||||
}
|
||||
|
||||
pg := db.Postgresql
|
||||
|
||||
if pg == nil {
|
||||
return fmt.Errorf("postgresql database configuration is required for pg_dump backups")
|
||||
return nil, fmt.Errorf("postgresql database configuration is required for pg_dump backups")
|
||||
}
|
||||
|
||||
if pg.Database == nil || *pg.Database == "" {
|
||||
return fmt.Errorf("database name is required for pg_dump backups")
|
||||
return nil, fmt.Errorf("database name is required for pg_dump backups")
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"-Fc", // custom format with built-in compression
|
||||
"--no-password", // Use environment variable for password, prevent prompts
|
||||
"-h", pg.Host,
|
||||
"-p", strconv.Itoa(pg.Port),
|
||||
"-U", pg.Username,
|
||||
"-d", *pg.Database,
|
||||
"--verbose", // Add verbose output to help with debugging
|
||||
}
|
||||
args := uc.buildPgDumpArgs(pg)
|
||||
|
||||
// Use zstd compression level 5 for PostgreSQL 16+ (better compression and speed)
|
||||
// Fall back to gzip compression level 5 for older versions (12-15)
|
||||
if pg.Version == tools.PostgresqlVersion12 || pg.Version == tools.PostgresqlVersion13 ||
|
||||
pg.Version == tools.PostgresqlVersion14 || pg.Version == tools.PostgresqlVersion15 {
|
||||
args = append(args, "-Z", "5")
|
||||
uc.logger.Info("Using gzip compression level 5 (zstd not available)", "version", pg.Version)
|
||||
} else {
|
||||
args = append(args, "--compress=zstd:5")
|
||||
uc.logger.Info("Using zstd compression level 5", "version", pg.Version)
|
||||
decryptedPassword, err := uc.fieldEncryptor.Decrypt(db.ID, pg.Password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt database password: %w", err)
|
||||
}
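The inline argument building shown as removed above now lives in a buildPgDumpArgs helper whose body is not part of this hunk; based on the removed code, the resulting invocation looks roughly like this (zstd compression is only used on PostgreSQL 16+, older versions fall back to gzip via -Z 5):

```go
// hedged reconstruction of the final pg_dump invocation, mirroring the removed inline code
//
//   PG 16+  : pg_dump -Fc --no-password -h <host> -p <port> -U <user> -d <db> --verbose --compress=zstd:5
//   PG 12-15: pg_dump -Fc --no-password -h <host> -p <port> -U <user> -d <db> --verbose -Z 5
```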
|
||||
|
||||
return uc.streamToStorage(
|
||||
@@ -92,7 +101,7 @@ func (uc *CreatePostgresqlBackupUsecase) Execute(
|
||||
config.GetEnv().PostgresesInstallDir,
|
||||
),
|
||||
args,
|
||||
pg.Password,
|
||||
decryptedPassword,
|
||||
storage,
|
||||
db,
|
||||
backupProgressListener,
|
||||
@@ -110,36 +119,15 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
|
||||
storage *storages.Storage,
|
||||
db *databases.Database,
|
||||
backupProgressListener func(completedMBs float64),
|
||||
) error {
|
||||
) (*BackupMetadata, error) {
|
||||
uc.logger.Info("Streaming PostgreSQL backup to storage", "pgBin", pgBin, "args", args)
|
||||
|
||||
	// if the backup does not fit into 23 hours, Postgresus
	// is likely not suitable for a database of that size
|
||||
ctx, cancel := context.WithTimeout(parentCtx, 23*time.Hour)
|
||||
ctx, cancel := uc.createBackupContext(parentCtx)
|
||||
defer cancel()
|
||||
|
||||
// Monitor for shutdown and cancel context if needed
|
||||
go func() {
|
||||
ticker := time.NewTicker(1 * time.Second)
|
||||
defer ticker.Stop()
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-ticker.C:
|
||||
if config.IsShouldShutdown() {
|
||||
cancel()
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
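The 23-hour timeout and the shutdown watcher shown as removed above were extracted into createBackupContext; its body is not visible in this diff, so the following is only a sketch reconstructed from the removed inline code and the new backupTimeout / shutdownCheckInterval constants:

```go
// hedged sketch of the extracted helper (not shown in this diff)
func (uc *CreatePostgresqlBackupUsecase) createBackupContext(
	parentCtx context.Context,
) (context.Context, context.CancelFunc) {
	// a backup that cannot finish within backupTimeout (23h) is abandoned
	ctx, cancel := context.WithTimeout(parentCtx, backupTimeout)

	go func() {
		ticker := time.NewTicker(shutdownCheckInterval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if config.IsShouldShutdown() {
					cancel() // abort the backup when the app is shutting down
					return
				}
			}
		}
	}()

	return ctx, cancel
}
```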
|
||||
|
||||
// Create temporary .pgpass file as a more reliable alternative to PGPASSWORD
|
||||
pgpassFile, err := uc.createTempPgpassFile(db.Postgresql, password)
|
||||
pgpassFile, err := uc.setupPgpassFile(db.Postgresql, password)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create temporary .pgpass file: %w", err)
|
||||
return nil, err
|
||||
}
|
||||
defer func() {
|
||||
if pgpassFile != "" {
|
||||
@@ -147,87 +135,21 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
|
||||
}
|
||||
}()
|
||||
|
||||
// Verify .pgpass file was created successfully
|
||||
if pgpassFile == "" {
|
||||
return fmt.Errorf("temporary .pgpass file was not created")
|
||||
}
|
||||
|
||||
// Verify .pgpass file was created correctly
|
||||
if info, err := os.Stat(pgpassFile); err == nil {
|
||||
uc.logger.Info("Temporary .pgpass file created successfully",
|
||||
"pgpassFile", pgpassFile,
|
||||
"size", info.Size(),
|
||||
"mode", info.Mode(),
|
||||
)
|
||||
} else {
|
||||
return fmt.Errorf("failed to verify .pgpass file: %w", err)
|
||||
}
|
||||
|
||||
cmd := exec.CommandContext(ctx, pgBin, args...)
|
||||
uc.logger.Info("Executing PostgreSQL backup command", "command", cmd.String())
|
||||
|
||||
// Start with system environment variables to preserve Windows PATH, SystemRoot, etc.
|
||||
cmd.Env = os.Environ()
|
||||
|
||||
// Use the .pgpass file for authentication
|
||||
cmd.Env = append(cmd.Env, "PGPASSFILE="+pgpassFile)
|
||||
uc.logger.Info("Using temporary .pgpass file for authentication", "pgpassFile", pgpassFile)
|
||||
|
||||
// Debug password setup (without exposing the actual password)
|
||||
uc.logger.Info("Setting up PostgreSQL environment",
|
||||
"passwordLength", len(password),
|
||||
"passwordEmpty", password == "",
|
||||
"pgBin", pgBin,
|
||||
"usingPgpassFile", true,
|
||||
"parallelJobs", backupConfig.CpuCount,
|
||||
)
|
||||
|
||||
// Add PostgreSQL-specific environment variables
|
||||
cmd.Env = append(cmd.Env, "PGCLIENTENCODING=UTF8")
|
||||
cmd.Env = append(cmd.Env, "PGCONNECT_TIMEOUT=30")
|
||||
|
||||
// Add encoding-related environment variables to handle character encoding issues
|
||||
cmd.Env = append(cmd.Env, "LC_ALL=C.UTF-8")
|
||||
cmd.Env = append(cmd.Env, "LANG=C.UTF-8")
|
||||
|
||||
// Add PostgreSQL-specific encoding settings
|
||||
cmd.Env = append(cmd.Env, "PGOPTIONS=--client-encoding=UTF8")
|
||||
|
||||
shouldRequireSSL := db.Postgresql.IsHttps
|
||||
|
||||
// Require SSL when explicitly configured
|
||||
if shouldRequireSSL {
|
||||
cmd.Env = append(cmd.Env, "PGSSLMODE=require")
|
||||
uc.logger.Info("Using required SSL mode", "configuredHttps", db.Postgresql.IsHttps)
|
||||
} else {
|
||||
// SSL not explicitly required, but prefer it if available
|
||||
cmd.Env = append(cmd.Env, "PGSSLMODE=prefer")
|
||||
uc.logger.Info("Using preferred SSL mode", "configuredHttps", db.Postgresql.IsHttps)
|
||||
}
|
||||
|
||||
// Set other SSL parameters to avoid certificate issues
|
||||
cmd.Env = append(cmd.Env, "PGSSLCERT=") // No client certificate
|
||||
cmd.Env = append(cmd.Env, "PGSSLKEY=") // No client key
|
||||
cmd.Env = append(cmd.Env, "PGSSLROOTCERT=") // No root certificate verification
|
||||
cmd.Env = append(cmd.Env, "PGSSLCRL=") // No certificate revocation list
|
||||
|
||||
// Verify executable exists and is accessible
|
||||
if _, err := exec.LookPath(pgBin); err != nil {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL executable not found or not accessible: %s - %w",
|
||||
pgBin,
|
||||
err,
|
||||
)
|
||||
if err := uc.setupPgEnvironment(cmd, pgpassFile, db.Postgresql.IsHttps, password, backupConfig.CpuCount, pgBin); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
pgStdout, err := cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("stdout pipe: %w", err)
|
||||
return nil, fmt.Errorf("stdout pipe: %w", err)
|
||||
}
|
||||
|
||||
pgStderr, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("stderr pipe: %w", err)
|
||||
return nil, fmt.Errorf("stderr pipe: %w", err)
|
||||
}
|
||||
|
||||
// Capture stderr in a separate goroutine to ensure we don't miss any error output
|
||||
@@ -237,23 +159,31 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
|
||||
stderrCh <- stderrOutput
|
||||
}()
|
||||
|
||||
// A pipe connecting pg_dump output → storage
|
||||
storageReader, storageWriter := io.Pipe()
|
||||
|
||||
// Create a counting writer to track bytes
|
||||
countingWriter := &CountingWriter{writer: storageWriter}
|
||||
finalWriter, encryptionWriter, backupMetadata, err := uc.setupBackupEncryption(
|
||||
backupID,
|
||||
backupConfig,
|
||||
storageWriter,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
countingWriter := &CountingWriter{writer: finalWriter}
|
||||
|
||||
// The backup ID becomes the object key / filename in storage
|
||||
|
||||
// Start streaming into storage in its own goroutine
|
||||
saveErrCh := make(chan error, 1)
|
||||
go func() {
|
||||
saveErrCh <- storage.SaveFile(uc.logger, backupID, storageReader)
|
||||
saveErr := storage.SaveFile(ctx, uc.fieldEncryptor, uc.logger, backupID, storageReader)
|
||||
saveErrCh <- saveErr
|
||||
}()
|
||||
|
||||
// Start pg_dump
|
||||
if err = cmd.Start(); err != nil {
|
||||
return fmt.Errorf("start %s: %w", filepath.Base(pgBin), err)
|
||||
return nil, fmt.Errorf("start %s: %w", filepath.Base(pgBin), err)
|
||||
}
|
||||
|
||||
// Copy pg output directly to storage with shutdown checks
|
||||
@@ -270,37 +200,22 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
|
||||
copyResultCh <- err
|
||||
}()
|
||||
|
||||
// Wait for the copy to finish first, then the dump process
|
||||
copyErr := <-copyResultCh
|
||||
bytesWritten := <-bytesWrittenCh
|
||||
waitErr := cmd.Wait()
|
||||
|
||||
// Check for shutdown or cancellation before finalizing
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if pipeWriter, ok := countingWriter.writer.(*io.PipeWriter); ok {
|
||||
if err := pipeWriter.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close counting writer", "error", err)
|
||||
}
|
||||
}
|
||||
|
||||
<-saveErrCh // Wait for storage to finish
|
||||
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
uc.cleanupOnCancellation(encryptionWriter, storageWriter, saveErrCh)
|
||||
return nil, uc.checkCancellationReason()
|
||||
default:
|
||||
}
|
||||
|
||||
// Close the pipe writer to signal end of data
|
||||
if pipeWriter, ok := countingWriter.writer.(*io.PipeWriter); ok {
|
||||
if err := pipeWriter.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close counting writer", "error", err)
|
||||
}
|
||||
if err := uc.closeWriters(encryptionWriter, storageWriter); err != nil {
|
||||
<-saveErrCh
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Wait until the storage side finishes reading
|
||||
saveErr := <-saveErrCh
|
||||
stderrOutput := <-stderrCh
|
||||
|
||||
@@ -312,149 +227,34 @@ func (uc *CreatePostgresqlBackupUsecase) streamToStorage(
|
||||
|
||||
switch {
|
||||
case waitErr != nil:
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
default:
|
||||
if err := uc.checkCancellation(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Enhanced error handling for PostgreSQL connection and SSL issues
|
||||
stderrStr := string(stderrOutput)
|
||||
errorMsg := fmt.Sprintf(
|
||||
"%s failed: %v – stderr: %s",
|
||||
filepath.Base(pgBin),
|
||||
waitErr,
|
||||
stderrStr,
|
||||
)
|
||||
|
||||
// Check for specific PostgreSQL error patterns
|
||||
if exitErr, ok := waitErr.(*exec.ExitError); ok {
|
||||
exitCode := exitErr.ExitCode()
|
||||
|
||||
// Enhanced debugging for exit status 1 with empty stderr
|
||||
if exitCode == 1 && strings.TrimSpace(stderrStr) == "" {
|
||||
uc.logger.Error("pg_dump failed with exit status 1 but no stderr output",
|
||||
"pgBin", pgBin,
|
||||
"args", args,
|
||||
"env_vars", []string{
|
||||
"PGCLIENTENCODING=UTF8",
|
||||
"PGCONNECT_TIMEOUT=30",
|
||||
"LC_ALL=C.UTF-8",
|
||||
"LANG=C.UTF-8",
|
||||
"PGOPTIONS=--client-encoding=UTF8",
|
||||
},
|
||||
)
|
||||
|
||||
errorMsg = fmt.Sprintf(
|
||||
"%s failed with exit status 1 but provided no error details. "+
|
||||
"This often indicates: "+
|
||||
"1) Connection timeout or refused connection, "+
|
||||
"2) Authentication failure with incorrect credentials, "+
|
||||
"3) Database does not exist, "+
|
||||
"4) Network connectivity issues, "+
|
||||
"5) PostgreSQL server not running. "+
|
||||
"Command executed: %s %s",
|
||||
filepath.Base(pgBin),
|
||||
pgBin,
|
||||
strings.Join(args, " "),
|
||||
)
|
||||
} else if exitCode == -1073741819 { // 0xC0000005 in decimal
|
||||
uc.logger.Error("PostgreSQL tool crashed with access violation",
|
||||
"pgBin", pgBin,
|
||||
"args", args,
|
||||
"exitCode", fmt.Sprintf("0x%X", uint32(exitCode)),
|
||||
)
|
||||
|
||||
errorMsg = fmt.Sprintf(
|
||||
"%s crashed with access violation (0xC0000005). This may indicate incompatible PostgreSQL version, corrupted installation, or connection issues. stderr: %s",
|
||||
filepath.Base(pgBin),
|
||||
stderrStr,
|
||||
)
|
||||
} else if exitCode == 1 || exitCode == 2 {
|
||||
// Check for common connection and authentication issues
|
||||
if containsIgnoreCase(stderrStr, "pg_hba.conf") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL connection rejected by server configuration (pg_hba.conf). The server may not allow connections from your IP address or may require different authentication settings. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
} else if containsIgnoreCase(stderrStr, "no password supplied") || containsIgnoreCase(stderrStr, "fe_sendauth") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL authentication failed - no password supplied. "+
|
||||
"PGPASSWORD environment variable may not be working correctly on this system. "+
|
||||
"Password length: %d, Password empty: %v. "+
|
||||
"Consider using a .pgpass file as an alternative. stderr: %s",
|
||||
len(password),
|
||||
password == "",
|
||||
stderrStr,
|
||||
)
|
||||
} else if containsIgnoreCase(stderrStr, "ssl") && containsIgnoreCase(stderrStr, "connection") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL SSL connection failed. The server may require SSL encryption or have SSL configuration issues. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
} else if containsIgnoreCase(stderrStr, "connection") && containsIgnoreCase(stderrStr, "refused") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL connection refused. Check if the server is running and accessible from your network. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
} else if containsIgnoreCase(stderrStr, "authentication") || containsIgnoreCase(stderrStr, "password") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL authentication failed. Check username and password. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
} else if containsIgnoreCase(stderrStr, "timeout") {
|
||||
errorMsg = fmt.Sprintf(
|
||||
"PostgreSQL connection timeout. The server may be unreachable or overloaded. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return errors.New(errorMsg)
|
||||
return nil, uc.buildPgDumpErrorMessage(waitErr, stderrOutput, pgBin, args, password)
|
||||
case copyErr != nil:
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
default:
|
||||
if err := uc.checkCancellation(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return fmt.Errorf("copy to storage: %w", copyErr)
|
||||
return nil, fmt.Errorf("copy to storage: %w", copyErr)
|
||||
case saveErr != nil:
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
default:
|
||||
if err := uc.checkCancellation(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return fmt.Errorf("save to storage: %w", saveErr)
|
||||
return nil, fmt.Errorf("save to storage: %w", saveErr)
|
||||
}
|
||||
|
||||
return nil
|
||||
return &backupMetadata, nil
|
||||
}
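The `CountingWriter` wrapped around `finalWriter` above is defined elsewhere in the repository; only its `writer` field is visible in this diff. A minimal form consistent with that usage (an `io.Writer` wrapper that accumulates bytes written) might look like the sketch below — the counter field name is an assumption for illustration only.

```go
package usecases_postgresql

import "io"

// Sketch only: a byte-counting io.Writer wrapper consistent with how
// CountingWriter is used above. The real type lives elsewhere in the repo;
// the bytesWritten field name is assumed.
type countingWriterSketch struct {
	writer       io.Writer
	bytesWritten int64
}

func (w *countingWriterSketch) Write(p []byte) (int, error) {
	n, err := w.writer.Write(p)
	w.bytesWritten += int64(n)
	return n, err
}
```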
|
||||
|
||||
// copyWithShutdownCheck copies data from src to dst while checking for shutdown
|
||||
func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
|
||||
ctx context.Context,
|
||||
dst io.Writer,
|
||||
src io.Reader,
|
||||
backupProgressListener func(completedMBs float64),
|
||||
) (int64, error) {
|
||||
buf := make([]byte, 32*1024) // 32KB buffer
|
||||
buf := make([]byte, copyBufferSize)
|
||||
var totalBytesWritten int64
|
||||
|
||||
// Progress reporting interval - report every 1MB of data
|
||||
var lastReportedMB float64
|
||||
const reportIntervalMB = 1.0
|
||||
|
||||
for {
|
||||
select {
|
||||
@@ -469,7 +269,23 @@ func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
|
||||
|
||||
bytesRead, readErr := src.Read(buf)
|
||||
if bytesRead > 0 {
|
||||
bytesWritten, writeErr := dst.Write(buf[0:bytesRead])
|
||||
writeResultCh := make(chan writeResult, 1)
|
||||
go func() {
|
||||
bytesWritten, writeErr := dst.Write(buf[0:bytesRead])
|
||||
writeResultCh <- writeResult{bytesWritten, writeErr}
|
||||
}()
|
||||
|
||||
var bytesWritten int
|
||||
var writeErr error
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return totalBytesWritten, fmt.Errorf("copy cancelled during write: %w", ctx.Err())
|
||||
case result := <-writeResultCh:
|
||||
bytesWritten = result.bytesWritten
|
||||
writeErr = result.writeErr
|
||||
}
|
||||
|
||||
if bytesWritten < 0 || bytesRead < bytesWritten {
|
||||
bytesWritten = 0
|
||||
if writeErr == nil {
|
||||
@@ -487,12 +303,9 @@ func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
|
||||
|
||||
totalBytesWritten += int64(bytesWritten)
|
||||
|
||||
// Report progress based on total size
|
||||
if backupProgressListener != nil {
|
||||
currentSizeMB := float64(totalBytesWritten) / (1024 * 1024)
|
||||
|
||||
// Only report if we've written at least 1MB more data than last report
|
||||
if currentSizeMB >= lastReportedMB+reportIntervalMB {
|
||||
if currentSizeMB >= lastReportedMB+progressReportIntervalMB {
|
||||
backupProgressListener(currentSizeMB)
|
||||
lastReportedMB = currentSizeMB
|
||||
}
|
||||
@@ -503,7 +316,6 @@ func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
|
||||
if readErr != io.EOF {
|
||||
return totalBytesWritten, readErr
|
||||
}
|
||||
|
||||
break
|
||||
}
|
||||
}
|
||||
@@ -511,12 +323,413 @@ func (uc *CreatePostgresqlBackupUsecase) copyWithShutdownCheck(
|
||||
return totalBytesWritten, nil
|
||||
}
|
||||
|
||||
// containsIgnoreCase checks if a string contains a substring, ignoring case
|
||||
func containsIgnoreCase(str, substr string) bool {
|
||||
return strings.Contains(strings.ToLower(str), strings.ToLower(substr))
|
||||
func (uc *CreatePostgresqlBackupUsecase) buildPgDumpArgs(pg *pgtypes.PostgresqlDatabase) []string {
|
||||
args := []string{
|
||||
"-Fc",
|
||||
"--no-password",
|
||||
"-h", pg.Host,
|
||||
"-p", strconv.Itoa(pg.Port),
|
||||
"-U", pg.Username,
|
||||
"-d", *pg.Database,
|
||||
"--verbose",
|
||||
}
|
||||
|
||||
compressionArgs := uc.getCompressionArgs(pg.Version)
|
||||
return append(args, compressionArgs...)
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) getCompressionArgs(
|
||||
version tools.PostgresqlVersion,
|
||||
) []string {
|
||||
if uc.isOlderPostgresVersion(version) {
|
||||
uc.logger.Info("Using gzip compression level 5 (zstd not available)", "version", version)
|
||||
return []string{"-Z", strconv.Itoa(compressionLevel)}
|
||||
}
|
||||
|
||||
uc.logger.Info("Using zstd compression level 5", "version", version)
|
||||
return []string{fmt.Sprintf("--compress=zstd:%d", compressionLevel)}
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) isOlderPostgresVersion(
|
||||
version tools.PostgresqlVersion,
|
||||
) bool {
|
||||
return version == tools.PostgresqlVersion12 ||
|
||||
version == tools.PostgresqlVersion13 ||
|
||||
version == tools.PostgresqlVersion14 ||
|
||||
version == tools.PostgresqlVersion15
|
||||
}
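For illustration, the argument lists produced by `buildPgDumpArgs` for an older (12–15) versus a newer (16+) server differ only in the compression flag. The standalone sketch below prints both variants; the host, user, and database names are placeholders.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Placeholder connection details; only the compression flag differs by version.
	base := []string{
		"-Fc", "--no-password",
		"-h", "db.example.com", "-p", "5432",
		"-U", "backup_user", "-d", "app_db",
		"--verbose",
	}

	gzipArgs := append(append([]string{}, base...), "-Z", "5")           // PostgreSQL 12-15
	zstdArgs := append(append([]string{}, base...), "--compress=zstd:5") // PostgreSQL 16+

	fmt.Println("pg_dump", strings.Join(gzipArgs, " "))
	fmt.Println("pg_dump", strings.Join(zstdArgs, " "))
}
```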
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) createBackupContext(
|
||||
parentCtx context.Context,
|
||||
) (context.Context, context.CancelFunc) {
|
||||
ctx, cancel := context.WithTimeout(parentCtx, backupTimeout)
|
||||
|
||||
go func() {
|
||||
ticker := time.NewTicker(shutdownCheckInterval)
|
||||
defer ticker.Stop()
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-parentCtx.Done():
|
||||
cancel()
|
||||
return
|
||||
case <-ticker.C:
|
||||
if config.IsShouldShutdown() {
|
||||
cancel()
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
return ctx, cancel
|
||||
}
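Several named constants referenced in the refactored code (`backupTimeout`, `shutdownCheckInterval`, `copyBufferSize`, `progressReportIntervalMB`, `compressionLevel`, `pgConnectTimeout`) are defined outside this diff. The values below are assumptions taken from the inline literals the refactor replaced (23-hour timeout, 1-second shutdown polling, 32 KB copy buffer, 1 MB progress interval, level-5 compression, 30-second connect timeout).

```go
// Assumed values, mirroring the inline literals removed by this refactor;
// the authoritative definitions live elsewhere in the package.
const (
	backupTimeout            = 23 * time.Hour
	shutdownCheckInterval    = 1 * time.Second
	copyBufferSize           = 32 * 1024 // bytes
	progressReportIntervalMB = 1.0
	compressionLevel         = 5
	pgConnectTimeout         = 30 // seconds, exported as PGCONNECT_TIMEOUT
)
```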
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) setupPgpassFile(
|
||||
pgConfig *pgtypes.PostgresqlDatabase,
|
||||
password string,
|
||||
) (string, error) {
|
||||
pgpassFile, err := uc.createTempPgpassFile(pgConfig, password)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create temporary .pgpass file: %w", err)
|
||||
}
|
||||
|
||||
if pgpassFile == "" {
|
||||
return "", fmt.Errorf("temporary .pgpass file was not created")
|
||||
}
|
||||
|
||||
if info, err := os.Stat(pgpassFile); err == nil {
|
||||
uc.logger.Info("Temporary .pgpass file created successfully",
|
||||
"pgpassFile", pgpassFile,
|
||||
"size", info.Size(),
|
||||
"mode", info.Mode(),
|
||||
)
|
||||
} else {
|
||||
return "", fmt.Errorf("failed to verify .pgpass file: %w", err)
|
||||
}
|
||||
|
||||
return pgpassFile, nil
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) setupPgEnvironment(
|
||||
cmd *exec.Cmd,
|
||||
pgpassFile string,
|
||||
shouldRequireSSL bool,
|
||||
password string,
|
||||
cpuCount int,
|
||||
pgBin string,
|
||||
) error {
|
||||
cmd.Env = os.Environ()
|
||||
cmd.Env = append(cmd.Env, "PGPASSFILE="+pgpassFile)
|
||||
|
||||
uc.logger.Info("Using temporary .pgpass file for authentication", "pgpassFile", pgpassFile)
|
||||
uc.logger.Info("Setting up PostgreSQL environment",
|
||||
"passwordLength", len(password),
|
||||
"passwordEmpty", password == "",
|
||||
"pgBin", pgBin,
|
||||
"usingPgpassFile", true,
|
||||
"parallelJobs", cpuCount,
|
||||
)
|
||||
|
||||
cmd.Env = append(cmd.Env,
|
||||
"PGCLIENTENCODING=UTF8",
|
||||
"PGCONNECT_TIMEOUT="+strconv.Itoa(pgConnectTimeout),
|
||||
"LC_ALL=C.UTF-8",
|
||||
"LANG=C.UTF-8",
|
||||
)
|
||||
|
||||
if shouldRequireSSL {
|
||||
cmd.Env = append(cmd.Env, "PGSSLMODE=require")
|
||||
uc.logger.Info("Using required SSL mode", "configuredHttps", shouldRequireSSL)
|
||||
} else {
|
||||
cmd.Env = append(cmd.Env, "PGSSLMODE=prefer")
|
||||
uc.logger.Info("Using preferred SSL mode", "configuredHttps", shouldRequireSSL)
|
||||
}
|
||||
|
||||
cmd.Env = append(cmd.Env,
|
||||
"PGSSLCERT=",
|
||||
"PGSSLKEY=",
|
||||
"PGSSLROOTCERT=",
|
||||
"PGSSLCRL=",
|
||||
)
|
||||
|
||||
if _, err := exec.LookPath(pgBin); err != nil {
|
||||
return fmt.Errorf("PostgreSQL executable not found or not accessible: %s - %w", pgBin, err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) setupBackupEncryption(
|
||||
backupID uuid.UUID,
|
||||
backupConfig *backups_config.BackupConfig,
|
||||
storageWriter io.WriteCloser,
|
||||
) (io.Writer, *backup_encryption.EncryptionWriter, BackupMetadata, error) {
|
||||
metadata := BackupMetadata{}
|
||||
|
||||
if backupConfig.Encryption != backups_config.BackupEncryptionEncrypted {
|
||||
metadata.Encryption = backups_config.BackupEncryptionNone
|
||||
uc.logger.Info("Encryption disabled for backup", "backupId", backupID)
|
||||
return storageWriter, nil, metadata, nil
|
||||
}
|
||||
|
||||
salt, err := backup_encryption.GenerateSalt()
|
||||
if err != nil {
|
||||
return nil, nil, metadata, fmt.Errorf("failed to generate salt: %w", err)
|
||||
}
|
||||
|
||||
nonce, err := backup_encryption.GenerateNonce()
|
||||
if err != nil {
|
||||
return nil, nil, metadata, fmt.Errorf("failed to generate nonce: %w", err)
|
||||
}
|
||||
|
||||
masterKey, err := uc.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
return nil, nil, metadata, fmt.Errorf("failed to get master key: %w", err)
|
||||
}
|
||||
|
||||
encWriter, err := backup_encryption.NewEncryptionWriter(
|
||||
storageWriter,
|
||||
masterKey,
|
||||
backupID,
|
||||
salt,
|
||||
nonce,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, nil, metadata, fmt.Errorf("failed to create encrypting writer: %w", err)
|
||||
}
|
||||
|
||||
saltBase64 := base64.StdEncoding.EncodeToString(salt)
|
||||
nonceBase64 := base64.StdEncoding.EncodeToString(nonce)
|
||||
metadata.EncryptionSalt = &saltBase64
|
||||
metadata.EncryptionIV = &nonceBase64
|
||||
metadata.Encryption = backups_config.BackupEncryptionEncrypted
|
||||
|
||||
uc.logger.Info("Encryption enabled for backup", "backupId", backupID)
|
||||
return encWriter, encWriter, metadata, nil
|
||||
}
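The restore/decryption path is not part of this diff. As a sketch, recovering the raw salt and nonce from the stored `BackupMetadata` simply reverses the base64 encoding performed above; the helper name below is hypothetical and the decryption reader itself is not shown.

```go
// Hypothetical helper: decode the base64 salt and nonce stored in BackupMetadata
// before handing them to the (not shown here) decryption reader.
func decodeEncryptionMetadata(m BackupMetadata) (salt, nonce []byte, err error) {
	if m.Encryption != backups_config.BackupEncryptionEncrypted ||
		m.EncryptionSalt == nil || m.EncryptionIV == nil {
		return nil, nil, fmt.Errorf("backup is not encrypted or metadata is incomplete")
	}
	if salt, err = base64.StdEncoding.DecodeString(*m.EncryptionSalt); err != nil {
		return nil, nil, fmt.Errorf("decode salt: %w", err)
	}
	if nonce, err = base64.StdEncoding.DecodeString(*m.EncryptionIV); err != nil {
		return nil, nil, fmt.Errorf("decode nonce: %w", err)
	}
	return salt, nonce, nil
}
```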
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) cleanupOnCancellation(
|
||||
encryptionWriter *backup_encryption.EncryptionWriter,
|
||||
storageWriter io.WriteCloser,
|
||||
saveErrCh chan error,
|
||||
) {
|
||||
if encryptionWriter != nil {
|
||||
go func() {
|
||||
if closeErr := encryptionWriter.Close(); closeErr != nil {
|
||||
uc.logger.Error(
|
||||
"Failed to close encrypting writer during cancellation",
|
||||
"error",
|
||||
closeErr,
|
||||
)
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
if err := storageWriter.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close pipe writer during cancellation", "error", err)
|
||||
}
|
||||
|
||||
<-saveErrCh
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) closeWriters(
|
||||
encryptionWriter *backup_encryption.EncryptionWriter,
|
||||
storageWriter io.WriteCloser,
|
||||
) error {
|
||||
encryptionCloseErrCh := make(chan error, 1)
|
||||
if encryptionWriter != nil {
|
||||
go func() {
|
||||
closeErr := encryptionWriter.Close()
|
||||
if closeErr != nil {
|
||||
uc.logger.Error("Failed to close encrypting writer", "error", closeErr)
|
||||
}
|
||||
encryptionCloseErrCh <- closeErr
|
||||
}()
|
||||
} else {
|
||||
encryptionCloseErrCh <- nil
|
||||
}
|
||||
|
||||
encryptionCloseErr := <-encryptionCloseErrCh
|
||||
if encryptionCloseErr != nil {
|
||||
if err := storageWriter.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close pipe writer after encryption error", "error", err)
|
||||
}
|
||||
return fmt.Errorf("failed to close encryption writer: %w", encryptionCloseErr)
|
||||
}
|
||||
|
||||
if err := storageWriter.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close pipe writer", "error", err)
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) checkCancellation(ctx context.Context) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) checkCancellationReason() error {
|
||||
if config.IsShouldShutdown() {
|
||||
return fmt.Errorf("backup cancelled due to shutdown")
|
||||
}
|
||||
return fmt.Errorf("backup cancelled")
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) buildPgDumpErrorMessage(
|
||||
waitErr error,
|
||||
stderrOutput []byte,
|
||||
pgBin string,
|
||||
args []string,
|
||||
password string,
|
||||
) error {
|
||||
stderrStr := string(stderrOutput)
|
||||
errorMsg := fmt.Sprintf("%s failed: %v – stderr: %s", filepath.Base(pgBin), waitErr, stderrStr)
|
||||
|
||||
exitErr, ok := waitErr.(*exec.ExitError)
|
||||
if !ok {
|
||||
return errors.New(errorMsg)
|
||||
}
|
||||
|
||||
exitCode := exitErr.ExitCode()
|
||||
|
||||
if exitCode == exitCodeGenericError && strings.TrimSpace(stderrStr) == "" {
|
||||
return uc.handleExitCode1NoStderr(pgBin, args)
|
||||
}
|
||||
|
||||
if exitCode == exitCodeAccessViolation {
|
||||
return uc.handleAccessViolation(pgBin, stderrStr)
|
||||
}
|
||||
|
||||
if exitCode == exitCodeGenericError || exitCode == exitCodeConnectionError {
|
||||
return uc.handleConnectionErrors(stderrStr, password)
|
||||
}
|
||||
|
||||
return errors.New(errorMsg)
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) handleExitCode1NoStderr(
|
||||
pgBin string,
|
||||
args []string,
|
||||
) error {
|
||||
uc.logger.Error("pg_dump failed with exit status 1 but no stderr output",
|
||||
"pgBin", pgBin,
|
||||
"args", args,
|
||||
"env_vars", []string{
|
||||
"PGCLIENTENCODING=UTF8",
|
||||
"PGCONNECT_TIMEOUT=" + strconv.Itoa(pgConnectTimeout),
|
||||
"LC_ALL=C.UTF-8",
|
||||
"LANG=C.UTF-8",
|
||||
},
|
||||
)
|
||||
|
||||
return fmt.Errorf(
|
||||
"%s failed with exit status 1 but provided no error details. "+
|
||||
"This often indicates: "+
|
||||
"1) Connection timeout or refused connection, "+
|
||||
"2) Authentication failure with incorrect credentials, "+
|
||||
"3) Database does not exist, "+
|
||||
"4) Network connectivity issues, "+
|
||||
"5) PostgreSQL server not running. "+
|
||||
"Command executed: %s %s",
|
||||
filepath.Base(pgBin),
|
||||
pgBin,
|
||||
strings.Join(args, " "),
|
||||
)
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) handleAccessViolation(
|
||||
pgBin string,
|
||||
stderrStr string,
|
||||
) error {
|
||||
uc.logger.Error("PostgreSQL tool crashed with access violation",
|
||||
"pgBin", pgBin,
|
||||
"exitCode", "0xC0000005",
|
||||
)
|
||||
|
||||
return fmt.Errorf(
|
||||
"%s crashed with access violation (0xC0000005). "+
|
||||
"This may indicate incompatible PostgreSQL version, corrupted installation, or connection issues. "+
|
||||
"stderr: %s",
|
||||
filepath.Base(pgBin),
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
func (uc *CreatePostgresqlBackupUsecase) handleConnectionErrors(
|
||||
stderrStr string,
|
||||
password string,
|
||||
) error {
|
||||
if containsIgnoreCase(stderrStr, "pg_hba.conf") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL connection rejected by server configuration (pg_hba.conf). "+
|
||||
"The server may not allow connections from your IP address or may require different authentication settings. "+
|
||||
"stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
if containsIgnoreCase(stderrStr, "no password supplied") ||
|
||||
containsIgnoreCase(stderrStr, "fe_sendauth") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL authentication failed - no password supplied. "+
|
||||
"PGPASSWORD environment variable may not be working correctly on this system. "+
|
||||
"Password length: %d, Password empty: %v. "+
|
||||
"Consider using a .pgpass file as an alternative. "+
|
||||
"stderr: %s",
|
||||
len(password),
|
||||
password == "",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
if containsIgnoreCase(stderrStr, "ssl") && containsIgnoreCase(stderrStr, "connection") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL SSL connection failed. "+
|
||||
"The server may require SSL encryption or have SSL configuration issues. "+
|
||||
"stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
if containsIgnoreCase(stderrStr, "connection") && containsIgnoreCase(stderrStr, "refused") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL connection refused. "+
|
||||
"Check if the server is running and accessible from your network. "+
|
||||
"stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
if containsIgnoreCase(stderrStr, "authentication") ||
|
||||
containsIgnoreCase(stderrStr, "password") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL authentication failed. Check username and password. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
if containsIgnoreCase(stderrStr, "timeout") {
|
||||
return fmt.Errorf(
|
||||
"PostgreSQL connection timeout. The server may be unreachable or overloaded. stderr: %s",
|
||||
stderrStr,
|
||||
)
|
||||
}
|
||||
|
||||
return fmt.Errorf("PostgreSQL connection or authentication error. stderr: %s", stderrStr)
|
||||
}
|
||||
|
||||
// createTempPgpassFile creates a temporary .pgpass file with the given password
|
||||
func (uc *CreatePostgresqlBackupUsecase) createTempPgpassFile(
|
||||
pgConfig *pgtypes.PostgresqlDatabase,
|
||||
password string,
|
||||
@@ -525,14 +738,17 @@ func (uc *CreatePostgresqlBackupUsecase) createTempPgpassFile(
|
||||
return "", nil
|
||||
}
|
||||
|
||||
escapedHost := tools.EscapePgpassField(pgConfig.Host)
|
||||
escapedUsername := tools.EscapePgpassField(pgConfig.Username)
|
||||
escapedPassword := tools.EscapePgpassField(password)
|
||||
|
||||
pgpassContent := fmt.Sprintf("%s:%d:*:%s:%s",
|
||||
pgConfig.Host,
|
||||
escapedHost,
|
||||
pgConfig.Port,
|
||||
pgConfig.Username,
|
||||
password,
|
||||
escapedUsername,
|
||||
escapedPassword,
|
||||
)
|
||||
|
||||
// It always creates a unique directory like /tmp/pgpass-1234567890
|
||||
tempDir, err := os.MkdirTemp("", "pgpass")
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create temporary directory: %w", err)
|
||||
@@ -546,3 +762,7 @@ func (uc *CreatePostgresqlBackupUsecase) createTempPgpassFile(
|
||||
|
||||
return pgpassFile, nil
|
||||
}
|
||||
|
||||
func containsIgnoreCase(str, substr string) bool {
|
||||
return strings.Contains(strings.ToLower(str), strings.ToLower(substr))
|
||||
}
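A note on the escaping introduced above: PostgreSQL's `.pgpass` format separates fields with `:`, and the documented rule is that literal `:` and `\` inside a field must be backslash-escaped. The sketch below illustrates that rule; it is not the project's `tools.EscapePgpassField` implementation, whose source is outside this diff.

```go
// Illustration of .pgpass field escaping (PostgreSQL rule: escape '\' and ':').
// Not the actual tools.EscapePgpassField implementation.
func escapePgpassField(field string) string {
	field = strings.ReplaceAll(field, `\`, `\\`)
	return strings.ReplaceAll(field, ":", `\:`)
}

// Example line produced for host "db.example.com", port 5432, user "backup",
// password `p:ss\word`:
//   db.example.com:5432:*:backup:p\:ss\\word
```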
|
||||
|
||||
@@ -1,11 +1,15 @@
|
||||
package usecases_postgresql
|
||||
|
||||
import (
|
||||
"postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/logger"
|
||||
)
|
||||
|
||||
var createPostgresqlBackupUsecase = &CreatePostgresqlBackupUsecase{
|
||||
logger.GetLogger(),
|
||||
secrets.GetSecretKeyService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
}
|
||||
|
||||
func GetCreatePostgresqlBackupUsecase() *CreatePostgresqlBackupUsecase {
|
||||
|
||||
@@ -0,0 +1,15 @@
|
||||
package usecases_postgresql
|
||||
|
||||
import backups_config "postgresus-backend/internal/features/backups/config"
|
||||
|
||||
type EncryptionMetadata struct {
|
||||
Salt string
|
||||
IV string
|
||||
Encryption backups_config.BackupEncryption
|
||||
}
|
||||
|
||||
type BackupMetadata struct {
|
||||
EncryptionSalt *string
|
||||
EncryptionIV *string
|
||||
Encryption backups_config.BackupEncryption
|
||||
}
|
||||
@@ -20,15 +20,15 @@ func (c *BackupConfigController) RegisterRoutes(router *gin.RouterGroup) {
|
||||
|
||||
// SaveBackupConfig
|
||||
// @Summary Save backup configuration
|
||||
// @Description Save or update backup configuration for a database
|
||||
// @Description Save or update backup configuration for a database. Encryption can be set to NONE (no encryption) or ENCRYPTED (AES-256-GCM encryption).
|
||||
// @Tags backup-configs
|
||||
// @Accept json
|
||||
// @Produce json
|
||||
// @Param request body BackupConfig true "Backup configuration data"
|
||||
// @Success 200 {object} BackupConfig
|
||||
// @Failure 400
|
||||
// @Failure 401
|
||||
// @Failure 500
|
||||
// @Param request body BackupConfig true "Backup configuration data (encryption field: NONE or ENCRYPTED)"
|
||||
// @Success 200 {object} BackupConfig "Returns the saved backup configuration including encryption settings"
|
||||
// @Failure 400 {object} map[string]string "Invalid encryption value or other validation errors"
|
||||
// @Failure 401 {object} map[string]string "User not authenticated"
|
||||
// @Failure 500 {object} map[string]string "Internal server error"
|
||||
// @Router /backup-configs/save [post]
|
||||
func (c *BackupConfigController) SaveBackupConfig(ctx *gin.Context) {
|
||||
user, ok := users_middleware.GetUserFromContext(ctx)
|
||||
@@ -57,14 +57,14 @@ func (c *BackupConfigController) SaveBackupConfig(ctx *gin.Context) {
|
||||
|
||||
// GetBackupConfigByDbID
|
||||
// @Summary Get backup configuration by database ID
|
||||
// @Description Get backup configuration for a specific database
|
||||
// @Description Get backup configuration for a specific database including encryption settings (NONE or ENCRYPTED)
|
||||
// @Tags backup-configs
|
||||
// @Produce json
|
||||
// @Param id path string true "Database ID"
|
||||
// @Success 200 {object} BackupConfig
|
||||
// @Failure 400
|
||||
// @Failure 401
|
||||
// @Failure 404
|
||||
// @Success 200 {object} BackupConfig "Returns backup configuration with encryption field"
|
||||
// @Failure 400 {object} map[string]string "Invalid database ID"
|
||||
// @Failure 401 {object} map[string]string "User not authenticated"
|
||||
// @Failure 404 {object} map[string]string "Backup configuration not found"
|
||||
// @Router /backup-configs/database/{id} [get]
|
||||
func (c *BackupConfigController) GetBackupConfigByDbID(ctx *gin.Context) {
|
||||
user, ok := users_middleware.GetUserFromContext(ctx)
|
||||
|
||||
@@ -368,6 +368,86 @@ func Test_IsStorageUsing_PermissionsEnforced(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func Test_SaveBackupConfig_WithEncryptionNone_ConfigSaved(t *testing.T) {
|
||||
router := createTestRouter()
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
database := createTestDatabaseViaAPI("Test Database", workspace.ID, owner.Token, router)
|
||||
|
||||
timeOfDay := "04:00"
|
||||
request := BackupConfig{
|
||||
DatabaseID: database.ID,
|
||||
IsBackupsEnabled: true,
|
||||
StorePeriod: period.PeriodWeek,
|
||||
BackupInterval: &intervals.Interval{
|
||||
Interval: intervals.IntervalDaily,
|
||||
TimeOfDay: &timeOfDay,
|
||||
},
|
||||
SendNotificationsOn: []BackupNotificationType{
|
||||
NotificationBackupFailed,
|
||||
},
|
||||
CpuCount: 2,
|
||||
IsRetryIfFailed: true,
|
||||
MaxFailedTriesCount: 3,
|
||||
Encryption: BackupEncryptionNone,
|
||||
}
|
||||
|
||||
var response BackupConfig
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/backup-configs/save",
|
||||
"Bearer "+owner.Token,
|
||||
request,
|
||||
http.StatusOK,
|
||||
&response,
|
||||
)
|
||||
|
||||
assert.Equal(t, database.ID, response.DatabaseID)
|
||||
assert.Equal(t, BackupEncryptionNone, response.Encryption)
|
||||
}
|
||||
|
||||
func Test_SaveBackupConfig_WithEncryptionEncrypted_ConfigSaved(t *testing.T) {
|
||||
router := createTestRouter()
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
database := createTestDatabaseViaAPI("Test Database", workspace.ID, owner.Token, router)
|
||||
|
||||
timeOfDay := "04:00"
|
||||
request := BackupConfig{
|
||||
DatabaseID: database.ID,
|
||||
IsBackupsEnabled: true,
|
||||
StorePeriod: period.PeriodWeek,
|
||||
BackupInterval: &intervals.Interval{
|
||||
Interval: intervals.IntervalDaily,
|
||||
TimeOfDay: &timeOfDay,
|
||||
},
|
||||
SendNotificationsOn: []BackupNotificationType{
|
||||
NotificationBackupFailed,
|
||||
},
|
||||
CpuCount: 2,
|
||||
IsRetryIfFailed: true,
|
||||
MaxFailedTriesCount: 3,
|
||||
Encryption: BackupEncryptionEncrypted,
|
||||
}
|
||||
|
||||
var response BackupConfig
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/backup-configs/save",
|
||||
"Bearer "+owner.Token,
|
||||
request,
|
||||
http.StatusOK,
|
||||
&response,
|
||||
)
|
||||
|
||||
assert.Equal(t, database.ID, response.DatabaseID)
|
||||
assert.Equal(t, BackupEncryptionEncrypted, response.Encryption)
|
||||
}
|
||||
|
||||
func createTestDatabaseViaAPI(
|
||||
name string,
|
||||
workspaceID uuid.UUID,
|
||||
|
||||
@@ -6,3 +6,10 @@ const (
|
||||
NotificationBackupFailed BackupNotificationType = "BACKUP_FAILED"
|
||||
NotificationBackupSuccess BackupNotificationType = "BACKUP_SUCCESS"
|
||||
)
|
||||
|
||||
type BackupEncryption string
|
||||
|
||||
const (
|
||||
BackupEncryptionNone BackupEncryption = "NONE"
|
||||
BackupEncryptionEncrypted BackupEncryption = "ENCRYPTED"
|
||||
)
|
||||
|
||||
@@ -31,6 +31,8 @@ type BackupConfig struct {
|
||||
MaxFailedTriesCount int `json:"maxFailedTriesCount" gorm:"column:max_failed_tries_count;type:int;not null"`
|
||||
|
||||
CpuCount int `json:"cpuCount" gorm:"type:int;not null"`
|
||||
|
||||
Encryption BackupEncryption `json:"encryption" gorm:"column:encryption;type:text;not null;default:'NONE'"`
|
||||
}
|
||||
|
||||
func (h *BackupConfig) TableName() string {
|
||||
@@ -88,6 +90,11 @@ func (b *BackupConfig) Validate() error {
|
||||
return errors.New("max failed tries count must be greater than 0")
|
||||
}
|
||||
|
||||
if b.Encryption != "" && b.Encryption != BackupEncryptionNone &&
|
||||
b.Encryption != BackupEncryptionEncrypted {
|
||||
return errors.New("encryption must be NONE or ENCRYPTED")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -103,5 +110,6 @@ func (b *BackupConfig) Copy(newDatabaseID uuid.UUID) *BackupConfig {
|
||||
IsRetryIfFailed: b.IsRetryIfFailed,
|
||||
MaxFailedTriesCount: b.MaxFailedTriesCount,
|
||||
CpuCount: b.CpuCount,
|
||||
Encryption: b.Encryption,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -171,6 +171,7 @@ func (s *BackupConfigService) initializeDefaultConfig(
|
||||
CpuCount: 1,
|
||||
IsRetryIfFailed: true,
|
||||
MaxFailedTriesCount: 3,
|
||||
Encryption: BackupEncryptionNone,
|
||||
})
|
||||
|
||||
return err
|
||||
|
||||
@@ -26,7 +26,8 @@ func (c *DatabaseController) RegisterRoutes(router *gin.RouterGroup) {
|
||||
router.POST("/databases/test-connection-direct", c.TestDatabaseConnectionDirect)
|
||||
router.POST("/databases/:id/copy", c.CopyDatabase)
|
||||
router.GET("/databases/notifier/:id/is-using", c.IsNotifierUsing)
|
||||
|
||||
router.POST("/databases/is-readonly", c.IsUserReadOnly)
|
||||
router.POST("/databases/create-readonly-user", c.CreateReadOnlyUser)
|
||||
}
|
||||
|
||||
// CreateDatabase
|
||||
@@ -330,3 +331,76 @@ func (c *DatabaseController) CopyDatabase(ctx *gin.Context) {
|
||||
|
||||
ctx.JSON(http.StatusCreated, copiedDatabase)
|
||||
}
|
||||
|
||||
// IsUserReadOnly
|
||||
// @Summary Check if database user is read-only
|
||||
// @Description Check if current database credentials have only read (SELECT) privileges
|
||||
// @Tags databases
|
||||
// @Accept json
|
||||
// @Produce json
|
||||
// @Security BearerAuth
|
||||
// @Param request body Database true "Database configuration to check"
|
||||
// @Success 200 {object} IsReadOnlyResponse
|
||||
// @Failure 400 {object} map[string]string
|
||||
// @Failure 401 {object} map[string]string
|
||||
// @Failure 403 {object} map[string]string
|
||||
// @Router /databases/is-readonly [post]
|
||||
func (c *DatabaseController) IsUserReadOnly(ctx *gin.Context) {
|
||||
user, ok := users_middleware.GetUserFromContext(ctx)
|
||||
if !ok {
|
||||
ctx.JSON(http.StatusUnauthorized, gin.H{"error": "User not authenticated"})
|
||||
return
|
||||
}
|
||||
|
||||
var request Database
|
||||
if err := ctx.ShouldBindJSON(&request); err != nil {
|
||||
ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
isReadOnly, err := c.databaseService.IsUserReadOnly(user, &request)
|
||||
if err != nil {
|
||||
ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
ctx.JSON(http.StatusOK, IsReadOnlyResponse{IsReadOnly: isReadOnly})
|
||||
}
|
||||
|
||||
// CreateReadOnlyUser
|
||||
// @Summary Create read-only database user
|
||||
// @Description Create a new PostgreSQL user with read-only privileges for backup operations
|
||||
// @Tags databases
|
||||
// @Accept json
|
||||
// @Produce json
|
||||
// @Security BearerAuth
|
||||
// @Param request body Database true "Database configuration to create user for"
|
||||
// @Success 200 {object} CreateReadOnlyUserResponse
|
||||
// @Failure 400 {object} map[string]string
|
||||
// @Failure 401 {object} map[string]string
|
||||
// @Failure 403 {object} map[string]string
|
||||
// @Router /databases/create-readonly-user [post]
|
||||
func (c *DatabaseController) CreateReadOnlyUser(ctx *gin.Context) {
|
||||
user, ok := users_middleware.GetUserFromContext(ctx)
|
||||
if !ok {
|
||||
ctx.JSON(http.StatusUnauthorized, gin.H{"error": "User not authenticated"})
|
||||
return
|
||||
}
|
||||
|
||||
var request Database
|
||||
if err := ctx.ShouldBindJSON(&request); err != nil {
|
||||
ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
username, password, err := c.databaseService.CreateReadOnlyUser(user, &request)
|
||||
if err != nil {
|
||||
ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
ctx.JSON(http.StatusOK, CreateReadOnlyUserResponse{
|
||||
Username: username,
|
||||
Password: password,
|
||||
})
|
||||
}
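As a client-side usage sketch for the new endpoint: the `/api/v1` prefix and bearer-token header below follow the conventions seen in the tests in this diff, the request body is the same Database JSON used by the other database endpoints, and the lowercase response keys are assumptions since `CreateReadOnlyUserResponse` is defined outside this diff.

```go
// Hedged client-side sketch for POST /api/v1/databases/create-readonly-user.
func createReadOnlyUser(baseURL, token string, databaseJSON []byte) (username, password string, err error) {
	req, err := http.NewRequest(
		http.MethodPost,
		baseURL+"/api/v1/databases/create-readonly-user",
		bytes.NewReader(databaseJSON),
	)
	if err != nil {
		return "", "", err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", "", fmt.Errorf("unexpected status: %s", resp.Status)
	}

	// Field names are assumed; the actual CreateReadOnlyUserResponse type is not in this diff.
	var out struct {
		Username string `json:"username"`
		Password string `json:"password"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", "", err
	}
	return out.Username, out.Password, nil
}
```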
|
||||
|
||||
@@ -16,6 +16,7 @@ import (
|
||||
users_testing "postgresus-backend/internal/features/users/testing"
|
||||
workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
test_utils "postgresus-backend/internal/util/testing"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
)
|
||||
@@ -769,6 +770,71 @@ func createTestDatabaseViaAPI(
|
||||
return &database
|
||||
}
|
||||
|
||||
func Test_CreateDatabase_PasswordIsEncryptedInDB(t *testing.T) {
|
||||
router := createTestRouter()
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
testDbName := "test_db"
|
||||
plainPassword := "my-super-secret-password-123"
|
||||
request := Database{
|
||||
Name: "Test Database",
|
||||
WorkspaceID: &workspace.ID,
|
||||
Type: DatabaseTypePostgres,
|
||||
Postgresql: &postgresql.PostgresqlDatabase{
|
||||
Version: tools.PostgresqlVersion16,
|
||||
Host: "localhost",
|
||||
Port: 5432,
|
||||
Username: "postgres",
|
||||
Password: plainPassword,
|
||||
Database: &testDbName,
|
||||
},
|
||||
}
|
||||
|
||||
var createdDatabase Database
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/databases/create",
|
||||
"Bearer "+owner.Token,
|
||||
request,
|
||||
http.StatusCreated,
|
||||
&createdDatabase,
|
||||
)
|
||||
|
||||
repository := &DatabaseRepository{}
|
||||
databaseFromDB, err := repository.FindByID(createdDatabase.ID)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, databaseFromDB)
|
||||
assert.NotNil(t, databaseFromDB.Postgresql)
|
||||
|
||||
assert.True(
|
||||
t,
|
||||
strings.HasPrefix(databaseFromDB.Postgresql.Password, "enc:"),
|
||||
"Password should be encrypted in database with 'enc:' prefix, got: %s",
|
||||
databaseFromDB.Postgresql.Password,
|
||||
)
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
decryptedPassword, err := encryptor.Decrypt(
|
||||
databaseFromDB.ID,
|
||||
databaseFromDB.Postgresql.Password,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, plainPassword, decryptedPassword,
|
||||
"Decrypted password should match original plaintext password")
|
||||
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/databases/"+createdDatabase.ID.String(),
|
||||
"Bearer "+owner.Token,
|
||||
http.StatusNoContent,
|
||||
)
|
||||
|
||||
workspaces_testing.RemoveTestWorkspace(workspace, router)
|
||||
}
|
||||
|
||||
func Test_DatabaseSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
@@ -815,7 +881,15 @@ func Test_DatabaseSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, database *Database) {
|
||||
assert.Equal(t, "original-password-secret", database.Postgresql.Password)
|
||||
// Verify password is encrypted
|
||||
assert.True(t, strings.HasPrefix(database.Postgresql.Password, "enc:"),
|
||||
"Password should be encrypted in database")
|
||||
|
||||
// Verify it can be decrypted back to original
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
decrypted, err := encryptor.Decrypt(database.ID, database.Postgresql.Password)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-password-secret", decrypted)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, database *Database) {
|
||||
assert.Equal(t, "", database.Postgresql.Password)
|
||||
|
||||
@@ -5,9 +5,10 @@ import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"log/slog"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
"regexp"
|
||||
"slices"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
@@ -18,7 +19,6 @@ type PostgresqlDatabase struct {
|
||||
ID uuid.UUID `json:"id" gorm:"primaryKey;type:uuid;default:gen_random_uuid()"`
|
||||
|
||||
DatabaseID *uuid.UUID `json:"databaseId" gorm:"type:uuid;column:database_id"`
|
||||
RestoreID *uuid.UUID `json:"restoreId" gorm:"type:uuid;column:restore_id"`
|
||||
|
||||
Version tools.PostgresqlVersion `json:"version" gorm:"type:text;not null"`
|
||||
|
||||
@@ -59,11 +59,15 @@ func (p *PostgresqlDatabase) Validate() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *PostgresqlDatabase) TestConnection(logger *slog.Logger) error {
|
||||
func (p *PostgresqlDatabase) TestConnection(
|
||||
logger *slog.Logger,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
databaseID uuid.UUID,
|
||||
) error {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
|
||||
defer cancel()
|
||||
|
||||
return testSingleDatabaseConnection(logger, ctx, p)
|
||||
return testSingleDatabaseConnection(logger, ctx, p, encryptor, databaseID)
|
||||
}
|
||||
|
||||
func (p *PostgresqlDatabase) HideSensitiveData() {
|
||||
@@ -87,19 +91,420 @@ func (p *PostgresqlDatabase) Update(incoming *PostgresqlDatabase) {
|
||||
}
|
||||
}
|
||||
|
||||
func (p *PostgresqlDatabase) EncryptSensitiveFields(
|
||||
databaseID uuid.UUID,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
) error {
|
||||
if p.Password != "" {
|
||||
encrypted, err := encryptor.Encrypt(databaseID, p.Password)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
p.Password = encrypted
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsUserReadOnly checks if the database user has read-only privileges.
|
||||
//
|
||||
// This method performs a comprehensive security check by examining:
|
||||
// - Role-level attributes (superuser, createrole, createdb)
|
||||
// - Database-level privileges (CREATE, TEMP)
|
||||
// - Table-level write permissions (INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER)
|
||||
//
|
||||
// A user is considered read-only only if they have ZERO write privileges
|
||||
// across all three levels. This ensures the database user follows the
|
||||
// principle of least privilege for backup operations.
|
||||
func (p *PostgresqlDatabase) IsUserReadOnly(
|
||||
ctx context.Context,
|
||||
logger *slog.Logger,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
databaseID uuid.UUID,
|
||||
) (bool, error) {
|
||||
password, err := decryptPasswordIfNeeded(p.Password, encryptor, databaseID)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to decrypt password: %w", err)
|
||||
}
|
||||
|
||||
connStr := buildConnectionStringForDB(p, *p.Database, password)
|
||||
|
||||
conn, err := pgx.Connect(ctx, connStr)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to connect to database: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if closeErr := conn.Close(ctx); closeErr != nil {
|
||||
logger.Error("Failed to close connection", "error", closeErr)
|
||||
}
|
||||
}()
|
||||
|
||||
// LEVEL 1: Check role-level attributes
|
||||
var isSuperuser, canCreateRole, canCreateDB bool
|
||||
err = conn.QueryRow(ctx, `
|
||||
SELECT
|
||||
rolsuper,
|
||||
rolcreaterole,
|
||||
rolcreatedb
|
||||
FROM pg_roles
|
||||
WHERE rolname = current_user
|
||||
`).Scan(&isSuperuser, &canCreateRole, &canCreateDB)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to check role attributes: %w", err)
|
||||
}
|
||||
|
||||
if isSuperuser || canCreateRole || canCreateDB {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// LEVEL 2: Check database-level privileges
|
||||
var canCreate, canTemp bool
|
||||
err = conn.QueryRow(ctx, `
|
||||
SELECT
|
||||
has_database_privilege(current_user, current_database(), 'CREATE') as can_create,
|
||||
has_database_privilege(current_user, current_database(), 'TEMP') as can_temp
|
||||
`).Scan(&canCreate, &canTemp)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to check database privileges: %w", err)
|
||||
}
|
||||
|
||||
if canCreate || canTemp {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// LEVEL 2.5: Check schema-level CREATE privileges
|
||||
schemaRows, err := conn.Query(ctx, `
|
||||
SELECT DISTINCT nspname
|
||||
FROM pg_namespace n
|
||||
WHERE has_schema_privilege(current_user, n.nspname, 'CREATE')
|
||||
AND nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
|
||||
`)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to check schema privileges: %w", err)
|
||||
}
|
||||
defer schemaRows.Close()
|
||||
|
||||
// If user has CREATE privilege on any schema, they're not read-only
|
||||
if schemaRows.Next() {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
if err := schemaRows.Err(); err != nil {
|
||||
return false, fmt.Errorf("error iterating schema privileges: %w", err)
|
||||
}
|
||||
|
||||
// LEVEL 3: Check table-level write permissions
|
||||
rows, err := conn.Query(ctx, `
|
||||
SELECT DISTINCT privilege_type
|
||||
FROM information_schema.role_table_grants
|
||||
WHERE grantee = current_user
|
||||
AND table_schema NOT IN ('pg_catalog', 'information_schema')
|
||||
`)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to check table privileges: %w", err)
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
writePrivileges := map[string]bool{
|
||||
"INSERT": true,
|
||||
"UPDATE": true,
|
||||
"DELETE": true,
|
||||
"TRUNCATE": true,
|
||||
"REFERENCES": true,
|
||||
"TRIGGER": true,
|
||||
}
|
||||
|
||||
for rows.Next() {
|
||||
var privilege string
|
||||
if err := rows.Scan(&privilege); err != nil {
|
||||
return false, fmt.Errorf("failed to scan privilege: %w", err)
|
||||
}
|
||||
|
||||
if writePrivileges[privilege] {
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
if err := rows.Err(); err != nil {
|
||||
return false, fmt.Errorf("error iterating privileges: %w", err)
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// CreateReadOnlyUser creates a new PostgreSQL user with read-only privileges.
|
||||
//
|
||||
// This method performs the following operations atomically in a single transaction:
|
||||
// 1. Creates a PostgreSQL user with a UUID-based password
|
||||
// 2. Grants CONNECT privilege on the database
|
||||
// 3. Grants USAGE on all non-system schemas
|
||||
// 4. Grants SELECT on all existing tables and sequences
|
||||
// 5. Sets default privileges for future tables and sequences
|
||||
//
|
||||
// Security features:
|
||||
// - Username format: "postgresus-{8-char-uuid}" for uniqueness
|
||||
// - Password: Full UUID (36 characters) for strong entropy
|
||||
// - Transaction safety: All operations rollback on any failure
|
||||
// - Retry logic: Up to 3 attempts if username collision occurs
|
||||
// - Pre-validation: Checks CREATEROLE privilege before starting transaction
|
||||
func (p *PostgresqlDatabase) CreateReadOnlyUser(
|
||||
ctx context.Context,
|
||||
logger *slog.Logger,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
databaseID uuid.UUID,
|
||||
) (string, string, error) {
|
||||
password, err := decryptPasswordIfNeeded(p.Password, encryptor, databaseID)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to decrypt password: %w", err)
|
||||
}
|
||||
|
||||
connStr := buildConnectionStringForDB(p, *p.Database, password)
|
||||
|
||||
conn, err := pgx.Connect(ctx, connStr)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to connect to database: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if closeErr := conn.Close(ctx); closeErr != nil {
|
||||
logger.Error("Failed to close connection", "error", closeErr)
|
||||
}
|
||||
}()
|
||||
|
||||
// Pre-validate: Check if current user can create roles
|
||||
var canCreateRole, isSuperuser bool
|
||||
err = conn.QueryRow(ctx, `
|
||||
SELECT rolcreaterole, rolsuper
|
||||
FROM pg_roles
|
||||
WHERE rolname = current_user
|
||||
`).Scan(&canCreateRole, &isSuperuser)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to check permissions: %w", err)
|
||||
}
|
||||
if !canCreateRole && !isSuperuser {
|
||||
return "", "", errors.New("current database user lacks CREATEROLE privilege")
|
||||
}
|
||||
|
||||
// Retry logic for username collision
|
||||
maxRetries := 3
|
||||
for attempt := 0; attempt < maxRetries; attempt++ {
|
||||
username := fmt.Sprintf("postgresus-%s", uuid.New().String()[:8])
|
||||
newPassword := uuid.New().String()
|
||||
|
||||
tx, err := conn.Begin(ctx)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to begin transaction: %w", err)
|
        }

        success := false
        defer func() {
            if !success {
                if rollbackErr := tx.Rollback(ctx); rollbackErr != nil {
                    logger.Error("Failed to rollback transaction", "error", rollbackErr)
                }
            }
        }()

        // Step 1: Create PostgreSQL user with LOGIN privilege
        _, err = tx.Exec(
            ctx,
            fmt.Sprintf(`CREATE USER "%s" WITH PASSWORD '%s' LOGIN`, username, newPassword),
        )
        if err != nil {
            if err.Error() != "" && attempt < maxRetries-1 {
                continue
            }
            return "", "", fmt.Errorf("failed to create user: %w", err)
        }

        // Step 1.5: Revoke CREATE privilege from PUBLIC role on public schema
        // This is necessary because all PostgreSQL users inherit CREATE privilege on the
        // public schema through the PUBLIC role. This is a one-time operation that affects
        // the entire database, making it more secure by default.
        // Note: This only affects the public schema; other schemas are unaffected.
        _, err = tx.Exec(ctx, `REVOKE CREATE ON SCHEMA public FROM PUBLIC`)
        if err != nil {
            logger.Error("Failed to revoke CREATE on public from PUBLIC", "error", err)
            if !strings.Contains(err.Error(), "schema \"public\" does not exist") &&
                !strings.Contains(err.Error(), "permission denied") {
                return "", "", fmt.Errorf("failed to revoke CREATE from PUBLIC: %w", err)
            }
        }

        // Now revoke from the specific user as well (belt and suspenders)
        _, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE CREATE ON SCHEMA public FROM "%s"`, username))
        if err != nil {
            logger.Error(
                "Failed to revoke CREATE on public schema from user",
                "error",
                err,
                "username",
                username,
            )
        }

        // Step 2: Grant database connection privilege and revoke TEMP
        _, err = tx.Exec(
            ctx,
            fmt.Sprintf(`GRANT CONNECT ON DATABASE %s TO "%s"`, *p.Database, username),
        )
        if err != nil {
            return "", "", fmt.Errorf("failed to grant connect privilege: %w", err)
        }

        // Revoke TEMP privilege from PUBLIC role (like CREATE on public schema, TEMP is granted to PUBLIC by default)
        _, err = tx.Exec(ctx, fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM PUBLIC`, *p.Database))
        if err != nil {
            logger.Warn("Failed to revoke TEMP from PUBLIC", "error", err)
        }

        // Also revoke from the specific user (belt and suspenders)
        _, err = tx.Exec(
            ctx,
            fmt.Sprintf(`REVOKE TEMP ON DATABASE %s FROM "%s"`, *p.Database, username),
        )
        if err != nil {
            logger.Warn("Failed to revoke TEMP privilege", "error", err, "username", username)
        }

        // Step 3: Discover all user-created schemas
        rows, err := tx.Query(ctx, `
            SELECT schema_name
            FROM information_schema.schemata
            WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
        `)
        if err != nil {
            return "", "", fmt.Errorf("failed to get schemas: %w", err)
        }

        var schemas []string
        for rows.Next() {
            var schema string
            if err := rows.Scan(&schema); err != nil {
                rows.Close()
                return "", "", fmt.Errorf("failed to scan schema: %w", err)
            }
            schemas = append(schemas, schema)
        }
        rows.Close()

        if err := rows.Err(); err != nil {
            return "", "", fmt.Errorf("error iterating schemas: %w", err)
        }

        // Step 4: Grant USAGE on each schema and explicitly prevent CREATE
        for _, schema := range schemas {
            // Revoke CREATE specifically (handles inheritance from PUBLIC role)
            _, err = tx.Exec(
                ctx,
                fmt.Sprintf(`REVOKE CREATE ON SCHEMA "%s" FROM "%s"`, schema, username),
            )
            if err != nil {
                logger.Warn(
                    "Failed to revoke CREATE on schema",
                    "error",
                    err,
                    "schema",
                    schema,
                    "username",
                    username,
                )
            }

            // Grant only USAGE (not CREATE)
            _, err = tx.Exec(
                ctx,
                fmt.Sprintf(`GRANT USAGE ON SCHEMA "%s" TO "%s"`, schema, username),
            )
            if err != nil {
                return "", "", fmt.Errorf("failed to grant usage on schema %s: %w", schema, err)
            }
        }

        // Step 5: Grant SELECT on ALL existing tables and sequences
        grantSelectSQL := fmt.Sprintf(`
            DO $$
            DECLARE
                schema_rec RECORD;
            BEGIN
                FOR schema_rec IN
                    SELECT schema_name
                    FROM information_schema.schemata
                    WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
                LOOP
                    EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);
                    EXECUTE format('GRANT SELECT ON ALL SEQUENCES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);
                END LOOP;
            END $$;
        `, username, username)

        _, err = tx.Exec(ctx, grantSelectSQL)
        if err != nil {
            return "", "", fmt.Errorf("failed to grant select on tables: %w", err)
        }

        // Step 6: Set default privileges for FUTURE tables and sequences
        defaultPrivilegesSQL := fmt.Sprintf(`
            DO $$
            DECLARE
                schema_rec RECORD;
            BEGIN
                FOR schema_rec IN
                    SELECT schema_name
                    FROM information_schema.schemata
                    WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
                LOOP
                    EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %%I GRANT SELECT ON TABLES TO "%s"', schema_rec.schema_name);
                    EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %%I GRANT SELECT ON SEQUENCES TO "%s"', schema_rec.schema_name);
                END LOOP;
            END $$;
        `, username, username)

        _, err = tx.Exec(ctx, defaultPrivilegesSQL)
        if err != nil {
            return "", "", fmt.Errorf("failed to set default privileges: %w", err)
        }

        // Step 7: Verify user creation before committing
        var verifyUsername string
        err = tx.QueryRow(ctx, fmt.Sprintf(`SELECT rolname FROM pg_roles WHERE rolname = '%s'`, username)).
            Scan(&verifyUsername)
        if err != nil {
            return "", "", fmt.Errorf("failed to verify user creation: %w", err)
        }

        if err := tx.Commit(ctx); err != nil {
            return "", "", fmt.Errorf("failed to commit transaction: %w", err)
        }

        success = true
        logger.Info("Read-only user created successfully", "username", username)
        return username, newPassword, nil
    }

    return "", "", errors.New("failed to generate unique username after 3 attempts")
}

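One subtlety in Steps 5 and 6 above: the DO $$ blocks are first passed through fmt.Sprintf, so the %%I sequences survive as %I placeholders for PostgreSQL's format() function, which quotes each schema name as an identifier at run time. A minimal sketch of that two-stage substitution (the username below is a made-up example, not a value from the repository):

    // Sketch only: fmt.Sprintf collapses %% to %, leaving %I for PostgreSQL's format().
    stmt := fmt.Sprintf(
        `EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %%I TO "%s"', schema_rec.schema_name);`,
        "postgresus-read-1234", // hypothetical generated username
    )
    // stmt now reads:
    //   EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %I TO "postgresus-read-1234"', schema_rec.schema_name);
    // and format() later substitutes %I with the safely quoted schema name.
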
// testSingleDatabaseConnection tests connection to a specific database for pg_dump
func testSingleDatabaseConnection(
    logger *slog.Logger,
    ctx context.Context,
    postgresDb *PostgresqlDatabase,
    encryptor encryption.FieldEncryptor,
    databaseID uuid.UUID,
) error {
    // For single database backup, we need to connect to the specific database
    if postgresDb.Database == nil || *postgresDb.Database == "" {
        return errors.New("database name is required for single database backup (pg_dump)")
    }

    // Decrypt password if needed
    password, err := decryptPasswordIfNeeded(postgresDb.Password, encryptor, databaseID)
    if err != nil {
        return fmt.Errorf("failed to decrypt password: %w", err)
    }

    // Build connection string for the specific database
-   connStr := buildConnectionStringForDB(postgresDb, *postgresDb.Database)
+   connStr := buildConnectionStringForDB(postgresDb, *postgresDb.Database, password)

    // Test connection
    conn, err := pgx.Connect(ctx, connStr)
@@ -182,116 +587,30 @@ func testBasicOperations(ctx context.Context, conn *pgx.Conn, dbName string) err
}

// buildConnectionStringForDB builds connection string for specific database
-func buildConnectionStringForDB(p *PostgresqlDatabase, dbName string) string {
+func buildConnectionStringForDB(p *PostgresqlDatabase, dbName string, password string) string {
    sslMode := "disable"
    if p.IsHttps {
        sslMode = "require"
    }

-   return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
+   return fmt.Sprintf(
+       "host=%s port=%d user=%s password=%s dbname=%s sslmode=%s default_query_exec_mode=simple_protocol standard_conforming_strings=on",
        p.Host,
        p.Port,
        p.Username,
-       p.Password,
+       password,
        dbName,
        sslMode,
    )
}

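For orientation, a rough sketch of the DSN the new three-argument signature produces; the host, port and credentials below are placeholder values, not taken from the repository:

    // Assumes the buildConnectionStringForDB shown above; all values are made up.
    db := &PostgresqlDatabase{Host: "db.internal", Port: 5432, Username: "reader", IsHttps: true}
    connStr := buildConnectionStringForDB(db, "appdb", "s3cr3t")
    // connStr:
    //   host=db.internal port=5432 user=reader password=s3cr3t dbname=appdb sslmode=require
    //   default_query_exec_mode=simple_protocol standard_conforming_strings=on
    // default_query_exec_mode is interpreted by pgx; standard_conforming_strings is passed
    // to the server as a session parameter.
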
-func (p *PostgresqlDatabase) InstallExtensions(extensions []tools.PostgresqlExtension) error {
-   if len(extensions) == 0 {
-       return nil
+func decryptPasswordIfNeeded(
+   password string,
+   encryptor encryption.FieldEncryptor,
+   databaseID uuid.UUID,
+) (string, error) {
+   if encryptor == nil {
+       return password, nil
    }

-   if p.Database == nil || *p.Database == "" {
-       return errors.New("database name is required for installing extensions")
-   }
-
-   ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
-   defer cancel()
-
-   // Build connection string for the specific database
-   connStr := buildConnectionStringForDB(p, *p.Database)
-
-   // Connect to database
-   conn, err := pgx.Connect(ctx, connStr)
-   if err != nil {
-       return fmt.Errorf("failed to connect to database '%s': %w", *p.Database, err)
-   }
-   defer func() {
-       if closeErr := conn.Close(ctx); closeErr != nil {
-           fmt.Println("failed to close connection: %w", closeErr)
-       }
-   }()
-
-   // Check which extensions are already installed
-   installedExtensions, err := p.getInstalledExtensions(ctx, conn)
-   if err != nil {
-       return fmt.Errorf("failed to check installed extensions: %w", err)
-   }
-
-   // Install missing extensions
-   for _, extension := range extensions {
-       if contains(installedExtensions, string(extension)) {
-           continue // Extension already installed
-       }
-
-       if err := p.installExtension(ctx, conn, string(extension)); err != nil {
-           return fmt.Errorf("failed to install extension '%s': %w", extension, err)
-       }
-   }
-
-   return nil
-}
-
-// getInstalledExtensions queries the database for currently installed extensions
-func (p *PostgresqlDatabase) getInstalledExtensions(
-   ctx context.Context,
-   conn *pgx.Conn,
-) ([]string, error) {
-   query := "SELECT extname FROM pg_extension"
-
-   rows, err := conn.Query(ctx, query)
-   if err != nil {
-       return nil, fmt.Errorf("failed to query installed extensions: %w", err)
-   }
-   defer rows.Close()
-
-   var extensions []string
-   for rows.Next() {
-       var extname string
-
-       if err := rows.Scan(&extname); err != nil {
-           return nil, fmt.Errorf("failed to scan extension name: %w", err)
-       }
-
-       extensions = append(extensions, extname)
-   }
-
-   if err := rows.Err(); err != nil {
-       return nil, fmt.Errorf("error iterating over extension rows: %w", err)
-   }
-
-   return extensions, nil
-}
-
-// installExtension installs a single PostgreSQL extension
-func (p *PostgresqlDatabase) installExtension(
-   ctx context.Context,
-   conn *pgx.Conn,
-   extensionName string,
-) error {
-   query := fmt.Sprintf("CREATE EXTENSION IF NOT EXISTS %s", extensionName)
-
-   _, err := conn.Exec(ctx, query)
-   if err != nil {
-       return fmt.Errorf("failed to execute CREATE EXTENSION: %w", err)
-   }
-
-   return nil
-}
-
-// contains checks if a string slice contains a specific string
-func contains(slice []string, item string) bool {
-   return slices.Contains(slice, item)
+   return encryptor.Decrypt(databaseID, password)
}

@@ -0,0 +1,323 @@
|
||||
package postgresql
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log/slog"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/jmoiron/sqlx"
|
||||
_ "github.com/lib/pq"
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
)
|
||||
|
||||
func Test_IsUserReadOnly_AdminUser_ReturnsFalse(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
container := connectToPostgresContainer(t, tc.port)
|
||||
defer container.DB.Close()
|
||||
|
||||
pgModel := createPostgresModel(container)
|
||||
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
ctx := context.Background()
|
||||
|
||||
isReadOnly, err := pgModel.IsUserReadOnly(ctx, logger, nil, uuid.New())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, isReadOnly, "Admin user should not be read-only")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_CreateReadOnlyUser_UserCanReadButNotWrite(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
container := connectToPostgresContainer(t, tc.port)
|
||||
defer container.DB.Close()
|
||||
|
||||
_, err := container.DB.Exec(`
|
||||
DROP TABLE IF EXISTS readonly_test CASCADE;
|
||||
DROP TABLE IF EXISTS hack_table CASCADE;
|
||||
DROP TABLE IF EXISTS future_table CASCADE;
|
||||
CREATE TABLE readonly_test (
|
||||
id SERIAL PRIMARY KEY,
|
||||
data TEXT NOT NULL
|
||||
);
|
||||
INSERT INTO readonly_test (data) VALUES ('test1'), ('test2');
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
|
||||
pgModel := createPostgresModel(container)
|
||||
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
ctx := context.Background()
|
||||
|
||||
username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
|
||||
assert.NoError(t, err)
|
||||
assert.NotEmpty(t, username)
|
||||
assert.NotEmpty(t, password)
|
||||
assert.True(t, strings.HasPrefix(username, "postgresus-"))
|
||||
|
||||
readOnlyModel := &PostgresqlDatabase{
|
||||
Version: pgModel.Version,
|
||||
Host: pgModel.Host,
|
||||
Port: pgModel.Port,
|
||||
Username: username,
|
||||
Password: password,
|
||||
Database: pgModel.Database,
|
||||
IsHttps: false,
|
||||
}
|
||||
|
||||
isReadOnly, err := readOnlyModel.IsUserReadOnly(ctx, logger, nil, uuid.New())
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, isReadOnly, "Created user should be read-only")
|
||||
|
||||
readOnlyDSN := fmt.Sprintf(
|
||||
"host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host,
|
||||
container.Port,
|
||||
username,
|
||||
password,
|
||||
container.Database,
|
||||
)
|
||||
readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
|
||||
assert.NoError(t, err)
|
||||
defer readOnlyConn.Close()
|
||||
|
||||
var count int
|
||||
err = readOnlyConn.Get(&count, "SELECT COUNT(*) FROM readonly_test")
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 2, count)
|
||||
|
||||
_, err = readOnlyConn.Exec("INSERT INTO readonly_test (data) VALUES ('should-fail')")
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
|
||||
_, err = readOnlyConn.Exec("UPDATE readonly_test SET data = 'hacked' WHERE id = 1")
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
|
||||
_, err = readOnlyConn.Exec("DELETE FROM readonly_test WHERE id = 1")
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
|
||||
_, err = readOnlyConn.Exec("CREATE TABLE hack_table (id INT)")
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
|
||||
// Clean up: Drop user with CASCADE to handle default privilege dependencies
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to drop owned objects: %v", err)
|
||||
}
|
||||
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ReadOnlyUser_FutureTables_HaveSelectPermission(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
container := connectToPostgresContainer(t, env.TestPostgres16Port)
|
||||
defer container.DB.Close()
|
||||
|
||||
pgModel := createPostgresModel(container)
|
||||
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
ctx := context.Background()
|
||||
|
||||
username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
|
||||
assert.NoError(t, err)
|
||||
|
||||
_, err = container.DB.Exec(`
|
||||
CREATE TABLE future_table (
|
||||
id SERIAL PRIMARY KEY,
|
||||
data TEXT NOT NULL
|
||||
);
|
||||
INSERT INTO future_table (data) VALUES ('future_data');
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
|
||||
readOnlyDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host, container.Port, username, password, container.Database)
|
||||
readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
|
||||
assert.NoError(t, err)
|
||||
defer readOnlyConn.Close()
|
||||
|
||||
var data string
|
||||
err = readOnlyConn.Get(&data, "SELECT data FROM future_table LIMIT 1")
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "future_data", data)
|
||||
|
||||
// Clean up: Drop user with CASCADE to handle default privilege dependencies
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to drop owned objects: %v", err)
|
||||
}
|
||||
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func Test_ReadOnlyUser_MultipleSchemas_AllAccessible(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
container := connectToPostgresContainer(t, env.TestPostgres16Port)
|
||||
defer container.DB.Close()
|
||||
|
||||
_, err := container.DB.Exec(`
|
||||
CREATE SCHEMA IF NOT EXISTS schema_a;
|
||||
CREATE SCHEMA IF NOT EXISTS schema_b;
|
||||
CREATE TABLE schema_a.table_a (id INT, data TEXT);
|
||||
CREATE TABLE schema_b.table_b (id INT, data TEXT);
|
||||
INSERT INTO schema_a.table_a VALUES (1, 'data_a');
|
||||
INSERT INTO schema_b.table_b VALUES (2, 'data_b');
|
||||
`)
|
||||
assert.NoError(t, err)
|
||||
|
||||
pgModel := createPostgresModel(container)
|
||||
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
ctx := context.Background()
|
||||
|
||||
username, password, err := pgModel.CreateReadOnlyUser(ctx, logger, nil, uuid.New())
|
||||
assert.NoError(t, err)
|
||||
|
||||
readOnlyDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host, container.Port, username, password, container.Database)
|
||||
readOnlyConn, err := sqlx.Connect("postgres", readOnlyDSN)
|
||||
assert.NoError(t, err)
|
||||
defer readOnlyConn.Close()
|
||||
|
||||
var dataA string
|
||||
err = readOnlyConn.Get(&dataA, "SELECT data FROM schema_a.table_a LIMIT 1")
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "data_a", dataA)
|
||||
|
||||
var dataB string
|
||||
err = readOnlyConn.Get(&dataB, "SELECT data FROM schema_b.table_b LIMIT 1")
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "data_b", dataB)
|
||||
|
||||
// Clean up: Drop user with CASCADE to handle default privilege dependencies
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP OWNED BY "%s" CASCADE`, username))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to drop owned objects: %v", err)
|
||||
}
|
||||
|
||||
_, err = container.DB.Exec(fmt.Sprintf(`DROP USER IF EXISTS "%s"`, username))
|
||||
assert.NoError(t, err)
|
||||
_, err = container.DB.Exec(`DROP SCHEMA schema_a CASCADE; DROP SCHEMA schema_b CASCADE;`)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
type PostgresContainer struct {
|
||||
Host string
|
||||
Port int
|
||||
Username string
|
||||
Password string
|
||||
Database string
|
||||
DB *sqlx.DB
|
||||
}
|
||||
|
||||
func connectToPostgresContainer(t *testing.T, port string) *PostgresContainer {
|
||||
dbName := "testdb"
|
||||
password := "testpassword"
|
||||
username := "testuser"
|
||||
host := "localhost"
|
||||
|
||||
portInt, err := strconv.Atoi(port)
|
||||
assert.NoError(t, err)
|
||||
|
||||
dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
host, portInt, username, password, dbName)
|
||||
|
||||
db, err := sqlx.Connect("postgres", dsn)
|
||||
assert.NoError(t, err)
|
||||
|
||||
var versionStr string
|
||||
err = db.Get(&versionStr, "SELECT version()")
|
||||
assert.NoError(t, err)
|
||||
|
||||
return &PostgresContainer{
|
||||
Host: host,
|
||||
Port: portInt,
|
||||
Username: username,
|
||||
Password: password,
|
||||
Database: dbName,
|
||||
DB: db,
|
||||
}
|
||||
}
|
||||
|
||||
func createPostgresModel(container *PostgresContainer) *PostgresqlDatabase {
|
||||
var versionStr string
|
||||
err := container.DB.Get(&versionStr, "SELECT version()")
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
version := extractPostgresVersion(versionStr)
|
||||
|
||||
return &PostgresqlDatabase{
|
||||
Version: version,
|
||||
Host: container.Host,
|
||||
Port: container.Port,
|
||||
Username: container.Username,
|
||||
Password: container.Password,
|
||||
Database: &container.Database,
|
||||
IsHttps: false,
|
||||
}
|
||||
}
|
||||
|
||||
func extractPostgresVersion(versionStr string) tools.PostgresqlVersion {
|
||||
if strings.Contains(versionStr, "PostgreSQL 12") {
|
||||
return tools.GetPostgresqlVersionEnum("12")
|
||||
} else if strings.Contains(versionStr, "PostgreSQL 13") {
|
||||
return tools.GetPostgresqlVersionEnum("13")
|
||||
} else if strings.Contains(versionStr, "PostgreSQL 14") {
|
||||
return tools.GetPostgresqlVersionEnum("14")
|
||||
} else if strings.Contains(versionStr, "PostgreSQL 15") {
|
||||
return tools.GetPostgresqlVersionEnum("15")
|
||||
} else if strings.Contains(versionStr, "PostgreSQL 16") {
|
||||
return tools.GetPostgresqlVersionEnum("16")
|
||||
} else if strings.Contains(versionStr, "PostgreSQL 17") {
|
||||
return tools.GetPostgresqlVersionEnum("17")
|
||||
}
|
||||
|
||||
return tools.GetPostgresqlVersionEnum("16")
|
||||
}
|
||||
@@ -5,6 +5,7 @@ import (
    "postgresus-backend/internal/features/notifiers"
    users_services "postgresus-backend/internal/features/users/services"
    workspaces_services "postgresus-backend/internal/features/workspaces/services"
+   "postgresus-backend/internal/util/encryption"
    "postgresus-backend/internal/util/logger"
)

@@ -19,6 +20,7 @@ var databaseService = &DatabaseService{
    []DatabaseCopyListener{},
    workspaces_services.GetWorkspaceService(),
    audit_logs.GetAuditLogService(),
+   encryption.GetFieldEncryptor(),
}

var databaseController = &DatabaseController{

backend/internal/features/databases/dto.go (new file, 10 lines)
@@ -0,0 +1,10 @@
package databases

type CreateReadOnlyUserResponse struct {
    Username string `json:"username"`
    Password string `json:"password"`
}

type IsReadOnlyResponse struct {
    IsReadOnly bool `json:"isReadOnly"`
}
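Assuming standard encoding/json marshalling of the structs above, a create-user response would look roughly like this (field values are invented placeholders):

    // Sketch: requires "encoding/json"; values are made up.
    resp := CreateReadOnlyUserResponse{Username: "postgresus-read-1234", Password: "generated-secret"}
    out, _ := json.Marshal(resp)
    // string(out) == `{"username":"postgresus-read-1234","password":"generated-secret"}`
    // IsReadOnlyResponse marshals analogously as {"isReadOnly":true}
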
@@ -2,6 +2,7 @@ package databases
import (
    "log/slog"
+   "postgresus-backend/internal/util/encryption"

    "github.com/google/uuid"
)

@@ -11,7 +12,11 @@ type DatabaseValidator interface {
}

type DatabaseConnector interface {
-   TestConnection(logger *slog.Logger) error
+   TestConnection(
+       logger *slog.Logger,
+       encryptor encryption.FieldEncryptor,
+       databaseID uuid.UUID,
+   ) error

    HideSensitiveData()
}

@@ -5,6 +5,7 @@ import (
    "log/slog"
    "postgresus-backend/internal/features/databases/databases/postgresql"
    "postgresus-backend/internal/features/notifiers"
+   "postgresus-backend/internal/util/encryption"
    "time"

    "github.com/google/uuid"
@@ -56,14 +57,24 @@ func (d *Database) ValidateUpdate(old, new Database) error {
    return nil
}

-func (d *Database) TestConnection(logger *slog.Logger) error {
-   return d.getSpecificDatabase().TestConnection(logger)
+func (d *Database) TestConnection(
+   logger *slog.Logger,
+   encryptor encryption.FieldEncryptor,
+) error {
+   return d.getSpecificDatabase().TestConnection(logger, encryptor, d.ID)
}

func (d *Database) HideSensitiveData() {
    d.getSpecificDatabase().HideSensitiveData()
}

+func (d *Database) EncryptSensitiveFields(encryptor encryption.FieldEncryptor) error {
+   if d.Postgresql != nil {
+       return d.Postgresql.EncryptSensitiveFields(d.ID, encryptor)
+   }
+   return nil
+}
+
func (d *Database) Update(incoming *Database) {
    d.Name = incoming.Name
    d.Type = incoming.Type

@@ -1,6 +1,7 @@
|
||||
package databases
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"log/slog"
|
||||
@@ -11,6 +12,7 @@ import (
|
||||
"postgresus-backend/internal/features/notifiers"
|
||||
users_models "postgresus-backend/internal/features/users/models"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -26,6 +28,7 @@ type DatabaseService struct {
|
||||
|
||||
workspaceService *workspaces_services.WorkspaceService
|
||||
auditLogService *audit_logs.AuditLogService
|
||||
fieldEncryptor encryption.FieldEncryptor
|
||||
}
|
||||
|
||||
func (s *DatabaseService) AddDbCreationListener(
|
||||
@@ -65,6 +68,10 @@ func (s *DatabaseService) CreateDatabase(
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := database.EncryptSensitiveFields(s.fieldEncryptor); err != nil {
|
||||
return nil, fmt.Errorf("failed to encrypt sensitive fields: %w", err)
|
||||
}
|
||||
|
||||
database, err = s.dbRepository.Save(database)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -118,6 +125,10 @@ func (s *DatabaseService) UpdateDatabase(
|
||||
return err
|
||||
}
|
||||
|
||||
if err := existingDatabase.EncryptSensitiveFields(s.fieldEncryptor); err != nil {
|
||||
return fmt.Errorf("failed to encrypt sensitive fields: %w", err)
|
||||
}
|
||||
|
||||
_, err = s.dbRepository.Save(existingDatabase)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -250,7 +261,7 @@ func (s *DatabaseService) TestDatabaseConnection(
|
||||
return errors.New("insufficient permissions to test connection for this database")
|
||||
}
|
||||
|
||||
err = database.TestConnection(s.logger)
|
||||
err = database.TestConnection(s.logger, s.fieldEncryptor)
|
||||
if err != nil {
|
||||
lastSaveError := err.Error()
|
||||
database.LastBackupErrorMessage = &lastSaveError
|
||||
@@ -294,7 +305,7 @@ func (s *DatabaseService) TestDatabaseConnectionDirect(
|
||||
usingDatabase = database
|
||||
}
|
||||
|
||||
return usingDatabase.TestConnection(s.logger)
|
||||
return usingDatabase.TestConnection(s.logger, s.fieldEncryptor)
|
||||
}
|
||||
|
||||
func (s *DatabaseService) GetDatabaseByID(
|
||||
@@ -446,3 +457,148 @@ func (s *DatabaseService) OnBeforeWorkspaceDeletion(workspaceID uuid.UUID) error
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *DatabaseService) IsUserReadOnly(
|
||||
user *users_models.User,
|
||||
database *Database,
|
||||
) (bool, error) {
|
||||
var usingDatabase *Database
|
||||
|
||||
if database.ID != uuid.Nil {
|
||||
existingDatabase, err := s.dbRepository.FindByID(database.ID)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
if existingDatabase.WorkspaceID == nil {
|
||||
return false, errors.New("cannot check user for database without workspace")
|
||||
}
|
||||
|
||||
canAccess, _, err := s.workspaceService.CanUserAccessWorkspace(
|
||||
*existingDatabase.WorkspaceID,
|
||||
user,
|
||||
)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if !canAccess {
|
||||
return false, errors.New("insufficient permissions to access this database")
|
||||
}
|
||||
|
||||
if database.WorkspaceID != nil && *existingDatabase.WorkspaceID != *database.WorkspaceID {
|
||||
return false, errors.New("database does not belong to this workspace")
|
||||
}
|
||||
|
||||
existingDatabase.Update(database)
|
||||
|
||||
if err := existingDatabase.Validate(); err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
usingDatabase = existingDatabase
|
||||
} else {
|
||||
if database.WorkspaceID != nil {
|
||||
canAccess, _, err := s.workspaceService.CanUserAccessWorkspace(*database.WorkspaceID, user)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if !canAccess {
|
||||
return false, errors.New("insufficient permissions to access this workspace")
|
||||
}
|
||||
}
|
||||
|
||||
usingDatabase = database
|
||||
}
|
||||
|
||||
if usingDatabase.Type != DatabaseTypePostgres {
|
||||
return false, errors.New("read-only check only supported for PostgreSQL databases")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
|
||||
defer cancel()
|
||||
|
||||
return usingDatabase.Postgresql.IsUserReadOnly(
|
||||
ctx,
|
||||
s.logger,
|
||||
s.fieldEncryptor,
|
||||
usingDatabase.ID,
|
||||
)
|
||||
}
|
||||
|
||||
func (s *DatabaseService) CreateReadOnlyUser(
|
||||
user *users_models.User,
|
||||
database *Database,
|
||||
) (string, string, error) {
|
||||
var usingDatabase *Database
|
||||
|
||||
if database.ID != uuid.Nil {
|
||||
existingDatabase, err := s.dbRepository.FindByID(database.ID)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
|
||||
if existingDatabase.WorkspaceID == nil {
|
||||
return "", "", errors.New("cannot create user for database without workspace")
|
||||
}
|
||||
|
||||
canManage, err := s.workspaceService.CanUserManageDBs(*existingDatabase.WorkspaceID, user)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
if !canManage {
|
||||
return "", "", errors.New("insufficient permissions to manage this database")
|
||||
}
|
||||
|
||||
if database.WorkspaceID != nil && *existingDatabase.WorkspaceID != *database.WorkspaceID {
|
||||
return "", "", errors.New("database does not belong to this workspace")
|
||||
}
|
||||
|
||||
existingDatabase.Update(database)
|
||||
|
||||
if err := existingDatabase.Validate(); err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
|
||||
usingDatabase = existingDatabase
|
||||
} else {
|
||||
if database.WorkspaceID != nil {
|
||||
canManage, err := s.workspaceService.CanUserManageDBs(*database.WorkspaceID, user)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
if !canManage {
|
||||
return "", "", errors.New("insufficient permissions to manage this workspace")
|
||||
}
|
||||
}
|
||||
|
||||
usingDatabase = database
|
||||
}
|
||||
|
||||
if usingDatabase.Type != DatabaseTypePostgres {
|
||||
return "", "", errors.New("read-only user creation only supported for PostgreSQL")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
username, password, err := usingDatabase.Postgresql.CreateReadOnlyUser(
|
||||
ctx, s.logger, s.fieldEncryptor, usingDatabase.ID,
|
||||
)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
|
||||
if usingDatabase.WorkspaceID != nil {
|
||||
s.auditLogService.WriteAuditLog(
|
||||
fmt.Sprintf(
|
||||
"Read-only user created for database: %s (username: %s)",
|
||||
usingDatabase.Name,
|
||||
username,
|
||||
),
|
||||
&user.ID,
|
||||
usingDatabase.WorkspaceID,
|
||||
)
|
||||
}
|
||||
|
||||
return username, password, nil
|
||||
}

backend/internal/features/encryption/secrets/di.go (new file, 9 lines)
@@ -0,0 +1,9 @@
package secrets

var secretKeyService = &SecretKeyService{
    nil,
}

func GetSecretKeyService() *SecretKeyService {
    return secretKeyService
}

backend/internal/features/encryption/secrets/model.go (new file, 1 line)
@@ -0,0 +1 @@
package secrets

backend/internal/features/encryption/secrets/service.go (new file, 73 lines)
@@ -0,0 +1,73 @@
package secrets

import (
    "errors"
    "fmt"
    "os"

    "postgresus-backend/internal/config"
    user_models "postgresus-backend/internal/features/users/models"
    "postgresus-backend/internal/storage"

    "github.com/google/uuid"
    "gorm.io/gorm"
)

type SecretKeyService struct {
    cachedKey *string
}

func (s *SecretKeyService) MigrateKeyFromDbToFileIfExist() error {
    var secretKey user_models.SecretKey

    err := storage.GetDb().First(&secretKey).Error
    if err != nil {
        if errors.Is(err, gorm.ErrRecordNotFound) {
            return nil
        }
        return fmt.Errorf("failed to check for secret key in database: %w", err)
    }

    if secretKey.Secret == "" {
        return nil
    }

    secretKeyPath := config.GetEnv().SecretKeyPath
    if err := os.WriteFile(secretKeyPath, []byte(secretKey.Secret), 0600); err != nil {
        return fmt.Errorf("failed to write secret key to file: %w", err)
    }

    if err := storage.GetDb().Exec("DELETE FROM secret_keys").Error; err != nil {
        return fmt.Errorf("failed to delete secret key from database: %w", err)
    }

    return nil
}

func (s *SecretKeyService) GetSecretKey() (string, error) {
    if s.cachedKey != nil {
        return *s.cachedKey, nil
    }

    secretKeyPath := config.GetEnv().SecretKeyPath
    data, err := os.ReadFile(secretKeyPath)
    if err != nil {
        if os.IsNotExist(err) {
            newKey := s.generateNewSecretKey()
            if err := os.WriteFile(secretKeyPath, []byte(newKey), 0600); err != nil {
                return "", fmt.Errorf("failed to write new secret key: %w", err)
            }
            s.cachedKey = &newKey
            return newKey, nil
        }
        return "", fmt.Errorf("failed to read secret key file: %w", err)
    }

    key := string(data)
    s.cachedKey = &key
    return key, nil
}

func (s *SecretKeyService) generateNewSecretKey() string {
    return uuid.New().String() + uuid.New().String()
}
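A rough usage sketch for the service above; where exactly it is wired in at startup is an assumption, not taken from the repository:

    // Hypothetical startup wiring inside the backend; error handling kept minimal.
    svc := secrets.GetSecretKeyService()

    // Move a legacy key out of the secret_keys table into the key file, if one exists.
    if err := svc.MigrateKeyFromDbToFileIfExist(); err != nil {
        log.Fatalf("secret key migration failed: %v", err)
    }

    // First call reads (or creates) the key file and caches it in memory.
    key, err := svc.GetSecretKey()
    if err != nil {
        log.Fatalf("cannot load secret key: %v", err)
    }
    _ = key // presumably handed to the field encryptor; that wiring is not shown here
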
@@ -453,70 +453,6 @@ func Test_CrossWorkspaceSecurity_CannotAccessNotifierFromAnotherWorkspace(t *tes
|
||||
workspaces_testing.RemoveTestWorkspace(workspace2, router)
|
||||
}
|
||||
|
||||
func createRouter() *gin.Engine {
|
||||
gin.SetMode(gin.TestMode)
|
||||
router := gin.New()
|
||||
|
||||
v1 := router.Group("/api/v1")
|
||||
protected := v1.Group("").Use(users_middleware.AuthMiddleware(users_services.GetUserService()))
|
||||
|
||||
if routerGroup, ok := protected.(*gin.RouterGroup); ok {
|
||||
GetNotifierController().RegisterRoutes(routerGroup)
|
||||
workspaces_controllers.GetWorkspaceController().RegisterRoutes(routerGroup)
|
||||
workspaces_controllers.GetMembershipController().RegisterRoutes(routerGroup)
|
||||
}
|
||||
|
||||
audit_logs.SetupDependencies()
|
||||
|
||||
return router
|
||||
}
|
||||
|
||||
func createNewNotifier(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Notifier " + uuid.New().String(),
|
||||
NotifierType: NotifierTypeWebhook,
|
||||
WebhookNotifier: &webhook_notifier.WebhookNotifier{
|
||||
WebhookURL: "https://webhook.site/test-" + uuid.New().String(),
|
||||
WebhookMethod: webhook_notifier.WebhookMethodPOST,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func createTelegramNotifier(workspaceID uuid.UUID) *Notifier {
|
||||
env := config.GetEnv()
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Telegram Notifier " + uuid.New().String(),
|
||||
NotifierType: NotifierTypeTelegram,
|
||||
TelegramNotifier: &telegram_notifier.TelegramNotifier{
|
||||
BotToken: env.TestTelegramBotToken,
|
||||
TargetChatID: env.TestTelegramChatID,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func verifyNotifierData(t *testing.T, expected *Notifier, actual *Notifier) {
|
||||
assert.Equal(t, expected.Name, actual.Name)
|
||||
assert.Equal(t, expected.NotifierType, actual.NotifierType)
|
||||
assert.Equal(t, expected.WorkspaceID, actual.WorkspaceID)
|
||||
}
|
||||
|
||||
func deleteNotifier(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
notifierID, workspaceID uuid.UUID,
|
||||
token string,
|
||||
) {
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/notifiers/%s", notifierID.String()),
|
||||
"Bearer "+token,
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
@@ -553,7 +489,13 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "original-bot-token-12345", notifier.TelegramNotifier.BotToken)
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.TelegramNotifier.BotToken),
|
||||
"BotToken should be encrypted in DB",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.TelegramNotifier.BotToken)
|
||||
assert.Equal(t, "original-bot-token-12345", decrypted)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "", notifier.TelegramNotifier.BotToken)
|
||||
@@ -592,7 +534,13 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "original-password-secret", notifier.EmailNotifier.SMTPPassword)
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.EmailNotifier.SMTPPassword),
|
||||
"SMTPPassword should be encrypted in DB",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.EmailNotifier.SMTPPassword)
|
||||
assert.Equal(t, "original-password-secret", decrypted)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "", notifier.EmailNotifier.SMTPPassword)
|
||||
@@ -625,7 +573,13 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "xoxb-original-slack-token", notifier.SlackNotifier.BotToken)
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.SlackNotifier.BotToken),
|
||||
"BotToken should be encrypted in DB",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.SlackNotifier.BotToken)
|
||||
assert.Equal(t, "xoxb-original-slack-token", decrypted)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "", notifier.SlackNotifier.BotToken)
|
||||
@@ -656,11 +610,17 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(
|
||||
assert.True(
|
||||
t,
|
||||
"https://discord.com/api/webhooks/123/original-token",
|
||||
isEncrypted(notifier.DiscordNotifier.ChannelWebhookURL),
|
||||
"WebhookURL should be encrypted in DB",
|
||||
)
|
||||
decrypted := decryptField(
|
||||
t,
|
||||
notifier.ID,
|
||||
notifier.DiscordNotifier.ChannelWebhookURL,
|
||||
)
|
||||
assert.Equal(t, "https://discord.com/api/webhooks/123/original-token", decrypted)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.Equal(t, "", notifier.DiscordNotifier.ChannelWebhookURL)
|
||||
@@ -691,10 +651,16 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.TeamsNotifier.WebhookURL),
|
||||
"WebhookURL should be encrypted in DB",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.TeamsNotifier.WebhookURL)
|
||||
assert.Equal(
|
||||
t,
|
||||
"https://outlook.office.com/webhook/original-token",
|
||||
notifier.TeamsNotifier.WebhookURL,
|
||||
decrypted,
|
||||
)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, notifier *Notifier) {
|
||||
@@ -813,3 +779,263 @@ func Test_NotifierSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_CreateNotifier_AllSensitiveFieldsEncryptedInDB(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
createNotifier func(workspaceID uuid.UUID) *Notifier
|
||||
verifySensitiveEncryption func(t *testing.T, notifier *Notifier)
|
||||
}{
|
||||
{
|
||||
name: "Telegram Notifier - BotToken encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Telegram",
|
||||
NotifierType: NotifierTypeTelegram,
|
||||
TelegramNotifier: &telegram_notifier.TelegramNotifier{
|
||||
BotToken: "plain-telegram-token-123",
|
||||
TargetChatID: "123456789",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.TelegramNotifier.BotToken),
|
||||
"BotToken should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.TelegramNotifier.BotToken)
|
||||
assert.Equal(t, "plain-telegram-token-123", decrypted)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Email Notifier - SMTPPassword encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Email",
|
||||
NotifierType: NotifierTypeEmail,
|
||||
EmailNotifier: &email_notifier.EmailNotifier{
|
||||
TargetEmail: "test@example.com",
|
||||
SMTPHost: "smtp.example.com",
|
||||
SMTPPort: 587,
|
||||
SMTPUser: "user@example.com",
|
||||
SMTPPassword: "plain-smtp-password-456",
|
||||
From: "noreply@example.com",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.EmailNotifier.SMTPPassword),
|
||||
"SMTPPassword should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.EmailNotifier.SMTPPassword)
|
||||
assert.Equal(t, "plain-smtp-password-456", decrypted)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Slack Notifier - BotToken encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Slack",
|
||||
NotifierType: NotifierTypeSlack,
|
||||
SlackNotifier: &slack_notifier.SlackNotifier{
|
||||
BotToken: "plain-slack-token-789",
|
||||
TargetChatID: "C0123456789",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.SlackNotifier.BotToken),
|
||||
"BotToken should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.SlackNotifier.BotToken)
|
||||
assert.Equal(t, "plain-slack-token-789", decrypted)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Discord Notifier - WebhookURL encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Discord",
|
||||
NotifierType: NotifierTypeDiscord,
|
||||
DiscordNotifier: &discord_notifier.DiscordNotifier{
|
||||
ChannelWebhookURL: "https://discord.com/api/webhooks/123/abc",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.DiscordNotifier.ChannelWebhookURL),
|
||||
"WebhookURL should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(
|
||||
t,
|
||||
notifier.ID,
|
||||
notifier.DiscordNotifier.ChannelWebhookURL,
|
||||
)
|
||||
assert.Equal(t, "https://discord.com/api/webhooks/123/abc", decrypted)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Teams Notifier - WebhookURL encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Teams",
|
||||
NotifierType: NotifierTypeTeams,
|
||||
TeamsNotifier: &teams_notifier.TeamsNotifier{
|
||||
WebhookURL: "https://outlook.office.com/webhook/test123",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.TeamsNotifier.WebhookURL),
|
||||
"WebhookURL should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.TeamsNotifier.WebhookURL)
|
||||
assert.Equal(t, "https://outlook.office.com/webhook/test123", decrypted)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Webhook Notifier - WebhookURL encrypted",
|
||||
createNotifier: func(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Webhook",
|
||||
NotifierType: NotifierTypeWebhook,
|
||||
WebhookNotifier: &webhook_notifier.WebhookNotifier{
|
||||
WebhookURL: "https://webhook.example.com/test456",
|
||||
WebhookMethod: webhook_notifier.WebhookMethodPOST,
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveEncryption: func(t *testing.T, notifier *Notifier) {
|
||||
assert.True(
|
||||
t,
|
||||
isEncrypted(notifier.WebhookNotifier.WebhookURL),
|
||||
"WebhookURL should be encrypted",
|
||||
)
|
||||
decrypted := decryptField(t, notifier.ID, notifier.WebhookNotifier.WebhookURL)
|
||||
assert.Equal(t, "https://webhook.example.com/test456", decrypted)
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
router := createRouter()
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
// Create notifier via API (plaintext credentials)
|
||||
var createdNotifier Notifier
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/notifiers",
|
||||
"Bearer "+owner.Token,
|
||||
tc.createNotifier(workspace.ID),
|
||||
http.StatusOK,
|
||||
&createdNotifier,
|
||||
)
|
||||
|
||||
// Read from DB directly (bypass service layer)
|
||||
repository := &NotifierRepository{}
|
||||
notifierFromDB, err := repository.FindByID(createdNotifier.ID)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify encryption
|
||||
tc.verifySensitiveEncryption(t, notifierFromDB)
|
||||
|
||||
// Cleanup
|
||||
deleteNotifier(t, router, createdNotifier.ID, workspace.ID, owner.Token)
|
||||
workspaces_testing.RemoveTestWorkspace(workspace, router)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func createRouter() *gin.Engine {
|
||||
gin.SetMode(gin.TestMode)
|
||||
router := gin.New()
|
||||
|
||||
v1 := router.Group("/api/v1")
|
||||
protected := v1.Group("").Use(users_middleware.AuthMiddleware(users_services.GetUserService()))
|
||||
|
||||
if routerGroup, ok := protected.(*gin.RouterGroup); ok {
|
||||
GetNotifierController().RegisterRoutes(routerGroup)
|
||||
workspaces_controllers.GetWorkspaceController().RegisterRoutes(routerGroup)
|
||||
workspaces_controllers.GetMembershipController().RegisterRoutes(routerGroup)
|
||||
}
|
||||
|
||||
audit_logs.SetupDependencies()
|
||||
|
||||
return router
|
||||
}
|
||||
|
||||
func createNewNotifier(workspaceID uuid.UUID) *Notifier {
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Notifier " + uuid.New().String(),
|
||||
NotifierType: NotifierTypeWebhook,
|
||||
WebhookNotifier: &webhook_notifier.WebhookNotifier{
|
||||
WebhookURL: "https://webhook.site/test-" + uuid.New().String(),
|
||||
WebhookMethod: webhook_notifier.WebhookMethodPOST,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func createTelegramNotifier(workspaceID uuid.UUID) *Notifier {
|
||||
env := config.GetEnv()
|
||||
return &Notifier{
|
||||
WorkspaceID: workspaceID,
|
||||
Name: "Test Telegram Notifier " + uuid.New().String(),
|
||||
NotifierType: NotifierTypeTelegram,
|
||||
TelegramNotifier: &telegram_notifier.TelegramNotifier{
|
||||
BotToken: env.TestTelegramBotToken,
|
||||
TargetChatID: env.TestTelegramChatID,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func verifyNotifierData(t *testing.T, expected *Notifier, actual *Notifier) {
|
||||
assert.Equal(t, expected.Name, actual.Name)
|
||||
assert.Equal(t, expected.NotifierType, actual.NotifierType)
|
||||
assert.Equal(t, expected.WorkspaceID, actual.WorkspaceID)
|
||||
}
|
||||
|
||||
func deleteNotifier(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
notifierID, workspaceID uuid.UUID,
|
||||
token string,
|
||||
) {
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/notifiers/%s", notifierID.String()),
|
||||
"Bearer "+token,
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
func isEncrypted(value string) bool {
|
||||
return len(value) > 4 && value[:4] == "enc:"
|
||||
}
|
||||
|
||||
func decryptField(t *testing.T, notifierID uuid.UUID, encryptedValue string) string {
|
||||
encryptor := GetNotifierService().fieldEncryptor
|
||||
decrypted, err := encryptor.Decrypt(notifierID, encryptedValue)
|
||||
assert.NoError(t, err)
|
||||
return decrypted
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ package notifiers
import (
    audit_logs "postgresus-backend/internal/features/audit_logs"
    workspaces_services "postgresus-backend/internal/features/workspaces/services"
+   "postgresus-backend/internal/util/encryption"
    "postgresus-backend/internal/util/logger"
)

@@ -12,6 +13,7 @@ var notifierService = &NotifierService{
    logger.GetLogger(),
    workspaces_services.GetWorkspaceService(),
    audit_logs.GetAuditLogService(),
+   encryption.GetFieldEncryptor(),
}
var notifierController = &NotifierController{
    notifierService,

@@ -1,11 +1,21 @@
package notifiers

-import "log/slog"
+import (
+   "log/slog"
+   "postgresus-backend/internal/util/encryption"
+)

type NotificationSender interface {
-   Send(logger *slog.Logger, heading string, message string) error
+   Send(
+       encryptor encryption.FieldEncryptor,
+       logger *slog.Logger,
+       heading string,
+       message string,
+   ) error

-   Validate() error
+   Validate(encryptor encryption.FieldEncryptor) error

    HideSensitiveData()

+   EncryptSensitiveData(encryptor encryption.FieldEncryptor) error
}

@@ -9,6 +9,7 @@ import (
|
||||
teams_notifier "postgresus-backend/internal/features/notifiers/models/teams"
|
||||
telegram_notifier "postgresus-backend/internal/features/notifiers/models/telegram"
|
||||
webhook_notifier "postgresus-backend/internal/features/notifiers/models/webhook"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -33,16 +34,21 @@ func (n *Notifier) TableName() string {
|
||||
return "notifiers"
|
||||
}
|
||||
|
||||
func (n *Notifier) Validate() error {
|
||||
func (n *Notifier) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if n.Name == "" {
|
||||
return errors.New("name is required")
|
||||
}
|
||||
|
||||
return n.getSpecificNotifier().Validate()
|
||||
return n.getSpecificNotifier().Validate(encryptor)
|
||||
}
|
||||
|
||||
func (n *Notifier) Send(logger *slog.Logger, heading string, message string) error {
|
||||
err := n.getSpecificNotifier().Send(logger, heading, message)
|
||||
func (n *Notifier) Send(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
heading string,
|
||||
message string,
|
||||
) error {
|
||||
err := n.getSpecificNotifier().Send(encryptor, logger, heading, message)
|
||||
|
||||
if err != nil {
|
||||
lastSendError := err.Error()
|
||||
@@ -58,6 +64,10 @@ func (n *Notifier) HideSensitiveData() {
|
||||
n.getSpecificNotifier().HideSensitiveData()
|
||||
}
|
||||
|
||||
func (n *Notifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
return n.getSpecificNotifier().EncryptSensitiveData(encryptor)
|
||||
}
|
||||
|
||||
func (n *Notifier) Update(incoming *Notifier) {
|
||||
n.Name = incoming.Name
|
||||
n.NotifierType = incoming.NotifierType
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
"io"
|
||||
"log/slog"
|
||||
"net/http"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -21,7 +22,7 @@ func (d *DiscordNotifier) TableName() string {
|
||||
return "discord_notifiers"
|
||||
}
|
||||
|
||||
func (d *DiscordNotifier) Validate() error {
|
||||
func (d *DiscordNotifier) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if d.ChannelWebhookURL == "" {
|
||||
return errors.New("webhook URL is required")
|
||||
}
|
||||
@@ -29,7 +30,17 @@ func (d *DiscordNotifier) Validate() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *DiscordNotifier) Send(logger *slog.Logger, heading string, message string) error {
|
||||
func (d *DiscordNotifier) Send(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
heading string,
|
||||
message string,
|
||||
) error {
|
||||
webhookURL, err := encryptor.Decrypt(d.NotifierID, d.ChannelWebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt webhook URL: %w", err)
|
||||
}
|
||||
|
||||
fullMessage := heading
|
||||
if message != "" {
|
||||
fullMessage = fmt.Sprintf("%s\n\n%s", heading, message)
|
||||
@@ -44,7 +55,7 @@ func (d *DiscordNotifier) Send(logger *slog.Logger, heading string, message stri
|
||||
return fmt.Errorf("failed to marshal Discord payload: %w", err)
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("POST", d.ChannelWebhookURL, bytes.NewReader(jsonPayload))
|
||||
req, err := http.NewRequest("POST", webhookURL, bytes.NewReader(jsonPayload))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
@@ -81,3 +92,14 @@ func (d *DiscordNotifier) Update(incoming *DiscordNotifier) {
|
||||
d.ChannelWebhookURL = incoming.ChannelWebhookURL
|
||||
}
|
||||
}
|
||||
|
||||
func (d *DiscordNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
if d.ChannelWebhookURL != "" {
|
||||
encrypted, err := encryptor.Encrypt(d.NotifierID, d.ChannelWebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt webhook URL: %w", err)
|
||||
}
|
||||
d.ChannelWebhookURL = encrypted
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -0,0 +1,28 @@
package email_notifier

import (
    "errors"
    "net/smtp"
)

type loginAuth struct {
    username, password string
}

func (a *loginAuth) Start(server *smtp.ServerInfo) (string, []byte, error) {
    return "LOGIN", []byte{}, nil
}

func (a *loginAuth) Next(fromServer []byte, more bool) ([]byte, error) {
    if more {
        switch string(fromServer) {
        case "Username:":
            return []byte(a.username), nil
        case "Password:":
            return []byte(a.password), nil
        default:
            return nil, errors.New("unknown LOGIN challenge: " + string(fromServer))
        }
    }
    return nil, nil
}
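The file above only defines the AUTH LOGIN mechanism; a hedged sketch of how it could be handed to net/smtp, inside the same email_notifier package, for servers that reject PLAIN (host and credentials below are placeholders):

    // Sketch: loginAuth implements smtp.Auth, so it can stand in for smtp.PlainAuth
    // on providers that only advertise AUTH LOGIN.
    auth := &loginAuth{username: "user@example.com", password: "app-password"}
    err := smtp.SendMail(
        "smtp.example.com:587",
        auth,
        "user@example.com",
        []string{"ops@example.com"},
        []byte("Subject: test\r\n\r\nhello"),
    )
    if err != nil {
        fmt.Printf("send failed: %v\n", err) // the real Send path wraps errors with fmt.Errorf
    }
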
@@ -7,6 +7,7 @@ import (
|
||||
"log/slog"
|
||||
"net"
|
||||
"net/smtp"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
@@ -34,7 +35,7 @@ func (e *EmailNotifier) TableName() string {
|
||||
return "email_notifiers"
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) Validate() error {
|
||||
func (e *EmailNotifier) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if e.TargetEmail == "" {
|
||||
return errors.New("target email is required")
|
||||
}
|
||||
@@ -55,8 +56,21 @@ func (e *EmailNotifier) Validate() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) Send(logger *slog.Logger, heading string, message string) error {
|
||||
// Compose email
|
||||
func (e *EmailNotifier) Send(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
_ *slog.Logger,
|
||||
heading string,
|
||||
message string,
|
||||
) error {
|
||||
var smtpPassword string
|
||||
if e.SMTPPassword != "" {
|
||||
decrypted, err := encryptor.Decrypt(e.NotifierID, e.SMTPPassword)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt SMTP password: %w", err)
|
||||
}
|
||||
smtpPassword = decrypted
|
||||
}
|
||||
|
||||
from := e.From
|
||||
if from == "" {
|
||||
from = e.SMTPUser
|
||||
@@ -65,152 +79,13 @@ func (e *EmailNotifier) Send(logger *slog.Logger, heading string, message string
|
||||
}
|
||||
}
|
||||
|
||||
to := []string{e.TargetEmail}
|
||||
emailContent := e.buildEmailContent(heading, message, from)
|
||||
isAuthRequired := e.SMTPUser != "" && smtpPassword != ""
|
||||
|
||||
// Format the email content
|
||||
subject := fmt.Sprintf("Subject: %s\r\n", heading)
|
||||
mime := fmt.Sprintf(
|
||||
"MIME-version: 1.0;\nContent-Type: %s; charset=\"%s\";\n\n",
|
||||
MIMETypeHTML,
|
||||
MIMECharsetUTF8,
|
||||
)
|
||||
body := message
|
||||
fromHeader := fmt.Sprintf("From: %s\r\n", from)
|
||||
|
||||
// Combine all parts of the email
|
||||
emailContent := []byte(fromHeader + subject + mime + body)
|
||||
|
||||
addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
|
||||
timeout := DefaultTimeout
|
||||
|
||||
// Determine if authentication is required
|
||||
isAuthRequired := e.SMTPUser != "" && e.SMTPPassword != ""
|
||||
|
||||
// Handle different port scenarios
|
||||
if e.SMTPPort == ImplicitTLSPort {
|
||||
// Implicit TLS (port 465)
|
||||
// Set up TLS config
|
||||
tlsConfig := &tls.Config{
|
||||
ServerName: e.SMTPHost,
|
||||
}
|
||||
|
||||
// Dial with timeout
|
||||
dialer := &net.Dialer{Timeout: timeout}
|
||||
conn, err := tls.DialWithDialer(dialer, "tcp", addr, tlsConfig)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to connect to SMTP server: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = conn.Close()
|
||||
}()
|
||||
|
||||
// Create SMTP client
|
||||
client, err := smtp.NewClient(conn, e.SMTPHost)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create SMTP client: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = client.Quit()
|
||||
}()
|
||||
|
||||
// Set up authentication only if credentials are provided
|
||||
if isAuthRequired {
|
||||
auth := smtp.PlainAuth("", e.SMTPUser, e.SMTPPassword, e.SMTPHost)
|
||||
if err := client.Auth(auth); err != nil {
|
||||
return fmt.Errorf("SMTP authentication failed: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Set sender and recipients
|
||||
if err := client.Mail(from); err != nil {
|
||||
return fmt.Errorf("failed to set sender: %w", err)
|
||||
}
|
||||
for _, recipient := range to {
|
||||
if err := client.Rcpt(recipient); err != nil {
|
||||
return fmt.Errorf("failed to set recipient: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Send the email body
|
||||
writer, err := client.Data()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to get data writer: %w", err)
|
||||
}
|
||||
_, err = writer.Write(emailContent)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write email content: %w", err)
|
||||
}
|
||||
err = writer.Close()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to close data writer: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
} else {
|
||||
// STARTTLS (port 587) or other ports
|
||||
// Create a custom dialer with timeout
|
||||
dialer := &net.Dialer{Timeout: timeout}
|
||||
conn, err := dialer.Dial("tcp", addr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to connect to SMTP server: %w", err)
|
||||
}
|
||||
|
||||
// Create client from connection
|
||||
client, err := smtp.NewClient(conn, e.SMTPHost)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create SMTP client: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = client.Quit()
|
||||
}()
|
||||
|
||||
// Send email using the client
|
||||
if err := client.Hello(DefaultHelloName); err != nil {
|
||||
return fmt.Errorf("SMTP hello failed: %w", err)
|
||||
}
|
||||
|
||||
// Start TLS if available
|
||||
if ok, _ := client.Extension("STARTTLS"); ok {
|
||||
if err := client.StartTLS(&tls.Config{ServerName: e.SMTPHost}); err != nil {
|
||||
return fmt.Errorf("STARTTLS failed: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Authenticate only if credentials are provided
|
||||
if isAuthRequired {
|
||||
auth := smtp.PlainAuth("", e.SMTPUser, e.SMTPPassword, e.SMTPHost)
|
||||
if err := client.Auth(auth); err != nil {
|
||||
return fmt.Errorf("SMTP authentication failed: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if err := client.Mail(from); err != nil {
|
||||
return fmt.Errorf("failed to set sender: %w", err)
|
||||
}
|
||||
|
||||
for _, recipient := range to {
|
||||
if err := client.Rcpt(recipient); err != nil {
|
||||
return fmt.Errorf("failed to set recipient: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
writer, err := client.Data()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to get data writer: %w", err)
|
||||
}
|
||||
|
||||
_, err = writer.Write(emailContent)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write email content: %w", err)
|
||||
}
|
||||
|
||||
err = writer.Close()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to close data writer: %w", err)
|
||||
}
|
||||
|
||||
return client.Quit()
|
||||
return e.sendImplicitTLS(emailContent, from, smtpPassword, isAuthRequired)
|
||||
}
|
||||
return e.sendStartTLS(emailContent, from, smtpPassword, isAuthRequired)
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) HideSensitiveData() {
|
||||
@@ -228,3 +103,177 @@ func (e *EmailNotifier) Update(incoming *EmailNotifier) {
|
||||
e.SMTPPassword = incoming.SMTPPassword
|
||||
}
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
if e.SMTPPassword != "" {
|
||||
encrypted, err := encryptor.Encrypt(e.NotifierID, e.SMTPPassword)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt SMTP password: %w", err)
|
||||
}
|
||||
e.SMTPPassword = encrypted
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) buildEmailContent(heading, message, from string) []byte {
|
||||
subject := fmt.Sprintf("Subject: %s\r\n", heading)
|
||||
mime := fmt.Sprintf(
|
||||
"MIME-version: 1.0;\nContent-Type: %s; charset=\"%s\";\n\n",
|
||||
MIMETypeHTML,
|
||||
MIMECharsetUTF8,
|
||||
)
|
||||
fromHeader := fmt.Sprintf("From: %s\r\n", from)
|
||||
toHeader := fmt.Sprintf("To: %s\r\n", e.TargetEmail)
|
||||
return []byte(fromHeader + toHeader + subject + mime + message)
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) sendImplicitTLS(
|
||||
emailContent []byte,
|
||||
from string,
|
||||
password string,
|
||||
isAuthRequired bool,
|
||||
) error {
|
||||
createClient := func() (*smtp.Client, func(), error) {
|
||||
return e.createImplicitTLSClient()
|
||||
}
|
||||
|
||||
client, cleanup, err := e.authenticateWithRetry(createClient, password, isAuthRequired)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cleanup()
|
||||
|
||||
return e.sendEmail(client, from, emailContent)
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) sendStartTLS(
|
||||
emailContent []byte,
|
||||
from string,
|
||||
password string,
|
||||
isAuthRequired bool,
|
||||
) error {
|
||||
createClient := func() (*smtp.Client, func(), error) {
|
||||
return e.createStartTLSClient()
|
||||
}
|
||||
|
||||
client, cleanup, err := e.authenticateWithRetry(createClient, password, isAuthRequired)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cleanup()
|
||||
|
||||
return e.sendEmail(client, from, emailContent)
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) createImplicitTLSClient() (*smtp.Client, func(), error) {
|
||||
addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
|
||||
tlsConfig := &tls.Config{ServerName: e.SMTPHost}
|
||||
dialer := &net.Dialer{Timeout: DefaultTimeout}
|
||||
|
||||
conn, err := tls.DialWithDialer(dialer, "tcp", addr, tlsConfig)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("failed to connect to SMTP server: %w", err)
|
||||
}
|
||||
|
||||
client, err := smtp.NewClient(conn, e.SMTPHost)
|
||||
if err != nil {
|
||||
_ = conn.Close()
|
||||
return nil, nil, fmt.Errorf("failed to create SMTP client: %w", err)
|
||||
}
|
||||
|
||||
return client, func() { _ = client.Quit() }, nil
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) createStartTLSClient() (*smtp.Client, func(), error) {
|
||||
addr := net.JoinHostPort(e.SMTPHost, fmt.Sprintf("%d", e.SMTPPort))
|
||||
dialer := &net.Dialer{Timeout: DefaultTimeout}
|
||||
|
||||
conn, err := dialer.Dial("tcp", addr)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("failed to connect to SMTP server: %w", err)
|
||||
}
|
||||
|
||||
client, err := smtp.NewClient(conn, e.SMTPHost)
|
||||
if err != nil {
|
||||
_ = conn.Close()
|
||||
return nil, nil, fmt.Errorf("failed to create SMTP client: %w", err)
|
||||
}
|
||||
|
||||
if err := client.Hello(DefaultHelloName); err != nil {
|
||||
_ = client.Quit()
|
||||
_ = conn.Close()
|
||||
return nil, nil, fmt.Errorf("SMTP hello failed: %w", err)
|
||||
}
|
||||
|
||||
if ok, _ := client.Extension("STARTTLS"); ok {
|
||||
if err := client.StartTLS(&tls.Config{ServerName: e.SMTPHost}); err != nil {
|
||||
_ = client.Quit()
|
||||
_ = conn.Close()
|
||||
return nil, nil, fmt.Errorf("STARTTLS failed: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return client, func() { _ = client.Quit() }, nil
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) authenticateWithRetry(
|
||||
createClient func() (*smtp.Client, func(), error),
|
||||
password string,
|
||||
isAuthRequired bool,
|
||||
) (*smtp.Client, func(), error) {
|
||||
client, cleanup, err := createClient()
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
if !isAuthRequired {
|
||||
return client, cleanup, nil
|
||||
}
|
||||
|
||||
// Try PLAIN auth first
|
||||
plainAuth := smtp.PlainAuth("", e.SMTPUser, password, e.SMTPHost)
|
||||
if err := client.Auth(plainAuth); err == nil {
|
||||
return client, cleanup, nil
|
||||
}
|
||||
|
||||
// PLAIN auth failed, connection may be closed - recreate and try LOGIN auth
|
||||
cleanup()
|
||||
|
||||
client, cleanup, err = createClient()
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
loginAuth := &loginAuth{username: e.SMTPUser, password: password}
|
||||
if err := client.Auth(loginAuth); err != nil {
|
||||
cleanup()
|
||||
return nil, nil, fmt.Errorf("SMTP authentication failed: %w", err)
|
||||
}
|
||||
|
||||
return client, cleanup, nil
|
||||
}
|
||||
|
||||
func (e *EmailNotifier) sendEmail(client *smtp.Client, from string, content []byte) error {
|
||||
if err := client.Mail(from); err != nil {
|
||||
return fmt.Errorf("failed to set sender: %w", err)
|
||||
}
|
||||
|
||||
if err := client.Rcpt(e.TargetEmail); err != nil {
|
||||
return fmt.Errorf("failed to set recipient: %w", err)
|
||||
}
|
||||
|
||||
writer, err := client.Data()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to get data writer: %w", err)
|
||||
}
|
||||
|
||||
if _, err = writer.Write(content); err != nil {
|
||||
return fmt.Errorf("failed to write email content: %w", err)
|
||||
}
|
||||
|
||||
if err = writer.Close(); err != nil {
|
||||
return fmt.Errorf("failed to close data writer: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
	"io"
	"log/slog"
	"net/http"
	"postgresus-backend/internal/util/encryption"
	"strconv"
	"strings"
	"time"
@@ -23,7 +24,7 @@ type SlackNotifier struct {

func (s *SlackNotifier) TableName() string { return "slack_notifiers" }

func (s *SlackNotifier) Validate() error {
func (s *SlackNotifier) Validate(encryptor encryption.FieldEncryptor) error {
	if s.BotToken == "" {
		return errors.New("bot token is required")
	}
@@ -43,7 +44,16 @@ func (s *SlackNotifier) Validate() error {
	return nil
}

func (s *SlackNotifier) Send(logger *slog.Logger, heading, message string) error {
func (s *SlackNotifier) Send(
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	heading, message string,
) error {
	botToken, err := encryptor.Decrypt(s.NotifierID, s.BotToken)
	if err != nil {
		return fmt.Errorf("failed to decrypt bot token: %w", err)
	}

	full := fmt.Sprintf("*%s*", heading)

	if message != "" {
@@ -60,6 +70,7 @@ func (s *SlackNotifier) Send(logger *slog.Logger, heading, message string) error
		maxAttempts       = 5
		defaultBackoff    = 2 * time.Second // when Retry-After header missing
		backoffMultiplier = 1.5             // use exponential growth
		requestTimeout    = 30 * time.Second
	)

	var (
@@ -67,6 +78,10 @@ func (s *SlackNotifier) Send(logger *slog.Logger, heading, message string) error
		attempts = 0
	)

	client := &http.Client{
		Timeout: requestTimeout,
	}

	for {
		attempts++

@@ -80,9 +95,9 @@ func (s *SlackNotifier) Send(logger *slog.Logger, heading, message string) error
		}

		req.Header.Set("Content-Type", "application/json; charset=utf-8")
		req.Header.Set("Authorization", "Bearer "+s.BotToken)
		req.Header.Set("Authorization", "Bearer "+botToken)

		resp, err := http.DefaultClient.Do(req)
		resp, err := client.Do(req)
		if err != nil {
			return fmt.Errorf("send slack message: %w", err)
		}
@@ -144,3 +159,14 @@ func (s *SlackNotifier) Update(incoming *SlackNotifier) {
		s.BotToken = incoming.BotToken
	}
}

func (s *SlackNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
	if s.BotToken != "" {
		encrypted, err := encryptor.Encrypt(s.NotifierID, s.BotToken)
		if err != nil {
			return fmt.Errorf("failed to encrypt bot token: %w", err)
		}
		s.BotToken = encrypted
	}
	return nil
}
|
||||
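The Validate/Send/EncryptSensitiveData changes in these notifiers all lean on the encryption.FieldEncryptor argument; from the call sites it appears to be a small interface keyed by the owning record's UUID. A rough sketch, inferred from usage only (the actual definition lives in internal/util/encryption and may differ):

// Inferred from usage above; illustrative only.
type FieldEncryptor interface {
	Encrypt(ownerID uuid.UUID, plaintext string) (string, error)
	Decrypt(ownerID uuid.UUID, ciphertext string) (string, error)
}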
@@ -8,6 +8,7 @@ import (
|
||||
"log/slog"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -21,11 +22,17 @@ func (TeamsNotifier) TableName() string {
|
||||
return "teams_notifiers"
|
||||
}
|
||||
|
||||
func (n *TeamsNotifier) Validate() error {
|
||||
func (n *TeamsNotifier) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if n.WebhookURL == "" {
|
||||
return errors.New("webhook_url is required")
|
||||
}
|
||||
u, err := url.Parse(n.WebhookURL)
|
||||
|
||||
webhookURL, err := encryptor.Decrypt(n.NotifierID, n.WebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt webhook URL: %w", err)
|
||||
}
|
||||
|
||||
u, err := url.Parse(webhookURL)
|
||||
if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
|
||||
return errors.New("invalid webhook_url")
|
||||
}
|
||||
@@ -33,8 +40,8 @@ func (n *TeamsNotifier) Validate() error {
|
||||
}
|
||||
|
||||
type cardAttachment struct {
|
||||
ContentType string `json:"contentType"`
|
||||
Content interface{} `json:"content"`
|
||||
ContentType string `json:"contentType"`
|
||||
Content any `json:"content"`
|
||||
}
|
||||
|
||||
type payload struct {
|
||||
@@ -43,11 +50,20 @@ type payload struct {
|
||||
Attachments []cardAttachment `json:"attachments,omitempty"`
|
||||
}
|
||||
|
||||
func (n *TeamsNotifier) Send(logger *slog.Logger, heading, message string) error {
|
||||
if err := n.Validate(); err != nil {
|
||||
func (n *TeamsNotifier) Send(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
heading, message string,
|
||||
) error {
|
||||
if err := n.Validate(encryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
webhookURL, err := encryptor.Decrypt(n.NotifierID, n.WebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt webhook URL: %w", err)
|
||||
}
|
||||
|
||||
card := map[string]any{
|
||||
"type": "AdaptiveCard",
|
||||
"version": "1.4",
|
||||
@@ -71,7 +87,7 @@ func (n *TeamsNotifier) Send(logger *slog.Logger, heading, message string) error
|
||||
}
|
||||
|
||||
body, _ := json.Marshal(p)
|
||||
req, err := http.NewRequest(http.MethodPost, n.WebhookURL, bytes.NewReader(body))
|
||||
req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -104,3 +120,14 @@ func (n *TeamsNotifier) Update(incoming *TeamsNotifier) {
|
||||
n.WebhookURL = incoming.WebhookURL
|
||||
}
|
||||
}
|
||||
|
||||
func (n *TeamsNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
if n.WebhookURL != "" {
|
||||
encrypted, err := encryptor.Encrypt(n.NotifierID, n.WebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt webhook URL: %w", err)
|
||||
}
|
||||
n.WebhookURL = encrypted
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
	"log/slog"
	"net/http"
	"net/url"
	"postgresus-backend/internal/util/encryption"
	"strconv"
	"strings"

@@ -24,7 +25,7 @@ func (t *TelegramNotifier) TableName() string {
	return "telegram_notifiers"
}

func (t *TelegramNotifier) Validate() error {
func (t *TelegramNotifier) Validate(encryptor encryption.FieldEncryptor) error {
	if t.BotToken == "" {
		return errors.New("bot token is required")
	}
@@ -36,13 +37,23 @@ func (t *TelegramNotifier) Validate() error {
	return nil
}

func (t *TelegramNotifier) Send(logger *slog.Logger, heading string, message string) error {
func (t *TelegramNotifier) Send(
	encryptor encryption.FieldEncryptor,
	logger *slog.Logger,
	heading string,
	message string,
) error {
	botToken, err := encryptor.Decrypt(t.NotifierID, t.BotToken)
	if err != nil {
		return fmt.Errorf("failed to decrypt bot token: %w", err)
	}

	fullMessage := heading
	if message != "" {
		fullMessage = fmt.Sprintf("%s\n\n%s", heading, message)
	}

	apiURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", t.BotToken)
	apiURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", botToken)

	data := url.Values{}
	data.Set("chat_id", t.TargetChatID)
@@ -93,3 +104,14 @@ func (t *TelegramNotifier) Update(incoming *TelegramNotifier) {
		t.BotToken = incoming.BotToken
	}
}

func (t *TelegramNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
	if t.BotToken != "" {
		encrypted, err := encryptor.Encrypt(t.NotifierID, t.BotToken)
		if err != nil {
			return fmt.Errorf("failed to encrypt bot token: %w", err)
		}
		t.BotToken = encrypted
	}
	return nil
}
|
||||
@@ -9,21 +9,59 @@ import (
|
||||
"log/slog"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"strings"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
type WebhookHeader struct {
|
||||
Key string `json:"key"`
|
||||
Value string `json:"value"`
|
||||
}
|
||||
|
||||
type WebhookNotifier struct {
|
||||
NotifierID uuid.UUID `json:"notifierId" gorm:"primaryKey;column:notifier_id"`
|
||||
WebhookURL string `json:"webhookUrl" gorm:"not null;column:webhook_url"`
|
||||
WebhookMethod WebhookMethod `json:"webhookMethod" gorm:"not null;column:webhook_method"`
|
||||
BodyTemplate *string `json:"bodyTemplate" gorm:"column:body_template;type:text"`
|
||||
HeadersJSON string `json:"-" gorm:"column:headers;type:text"`
|
||||
|
||||
Headers []WebhookHeader `json:"headers" gorm:"-"`
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) TableName() string {
|
||||
return "webhook_notifiers"
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) Validate() error {
|
||||
func (t *WebhookNotifier) BeforeSave(_ *gorm.DB) error {
|
||||
if len(t.Headers) > 0 {
|
||||
data, err := json.Marshal(t.Headers)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.HeadersJSON = string(data)
|
||||
} else {
|
||||
t.HeadersJSON = "[]"
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) AfterFind(_ *gorm.DB) error {
|
||||
if t.HeadersJSON != "" {
|
||||
if err := json.Unmarshal([]byte(t.HeadersJSON), &t.Headers); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if t.WebhookURL == "" {
|
||||
return errors.New("webhook URL is required")
|
||||
}
|
||||
@@ -35,69 +73,22 @@ func (t *WebhookNotifier) Validate() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) Send(logger *slog.Logger, heading string, message string) error {
|
||||
func (t *WebhookNotifier) Send(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
heading string,
|
||||
message string,
|
||||
) error {
|
||||
webhookURL, err := encryptor.Decrypt(t.NotifierID, t.WebhookURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt webhook URL: %w", err)
|
||||
}
|
||||
|
||||
switch t.WebhookMethod {
|
||||
case WebhookMethodGET:
|
||||
reqURL := fmt.Sprintf("%s?heading=%s&message=%s",
|
||||
t.WebhookURL,
|
||||
url.QueryEscape(heading),
|
||||
url.QueryEscape(message),
|
||||
)
|
||||
|
||||
resp, err := http.Get(reqURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to send GET webhook: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if cerr := resp.Body.Close(); cerr != nil {
|
||||
logger.Error("failed to close response body", "error", cerr)
|
||||
}
|
||||
}()
|
||||
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf(
|
||||
"webhook GET returned status: %s, body: %s",
|
||||
resp.Status,
|
||||
string(body),
|
||||
)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
||||
return t.sendGET(webhookURL, heading, message, logger)
|
||||
case WebhookMethodPOST:
|
||||
payload := map[string]string{
|
||||
"heading": heading,
|
||||
"message": message,
|
||||
}
|
||||
|
||||
body, err := json.Marshal(payload)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal webhook payload: %w", err)
|
||||
}
|
||||
|
||||
resp, err := http.Post(t.WebhookURL, "application/json", bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to send POST webhook: %w", err)
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if cerr := resp.Body.Close(); cerr != nil {
|
||||
logger.Error("failed to close response body", "error", cerr)
|
||||
}
|
||||
}()
|
||||
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf(
|
||||
"webhook POST returned status: %s, body: %s",
|
||||
resp.Status,
|
||||
string(body),
|
||||
)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
||||
return t.sendPOST(webhookURL, heading, message, logger)
|
||||
default:
|
||||
return fmt.Errorf("unsupported webhook method: %s", t.WebhookMethod)
|
||||
}
|
||||
@@ -109,4 +100,144 @@ func (t *WebhookNotifier) HideSensitiveData() {
|
||||
func (t *WebhookNotifier) Update(incoming *WebhookNotifier) {
|
||||
t.WebhookURL = incoming.WebhookURL
|
||||
t.WebhookMethod = incoming.WebhookMethod
|
||||
t.BodyTemplate = incoming.BodyTemplate
|
||||
t.Headers = incoming.Headers
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
if t.WebhookURL != "" {
|
||||
encrypted, err := encryptor.Encrypt(t.NotifierID, t.WebhookURL)
|
||||
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt webhook URL: %w", err)
|
||||
}
|
||||
|
||||
t.WebhookURL = encrypted
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) sendGET(webhookURL, heading, message string, logger *slog.Logger) error {
|
||||
reqURL := fmt.Sprintf("%s?heading=%s&message=%s",
|
||||
webhookURL,
|
||||
url.QueryEscape(heading),
|
||||
url.QueryEscape(message),
|
||||
)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, reqURL, nil)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create GET request: %w", err)
|
||||
}
|
||||
|
||||
t.applyHeaders(req)
|
||||
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to send GET webhook: %w", err)
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if cerr := resp.Body.Close(); cerr != nil {
|
||||
logger.Error("failed to close response body", "error", cerr)
|
||||
}
|
||||
}()
|
||||
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf(
|
||||
"webhook GET returned status: %s, body: %s",
|
||||
resp.Status,
|
||||
string(body),
|
||||
)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) sendPOST(webhookURL, heading, message string, logger *slog.Logger) error {
|
||||
body := t.buildRequestBody(heading, message)
|
||||
|
||||
req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create POST request: %w", err)
|
||||
}
|
||||
|
||||
hasContentType := false
|
||||
|
||||
for _, h := range t.Headers {
|
||||
if strings.EqualFold(h.Key, "Content-Type") {
|
||||
hasContentType = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !hasContentType {
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
}
|
||||
|
||||
t.applyHeaders(req)
|
||||
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to send POST webhook: %w", err)
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if cerr := resp.Body.Close(); cerr != nil {
|
||||
logger.Error("failed to close response body", "error", cerr)
|
||||
}
|
||||
}()
|
||||
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
respBody, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf(
|
||||
"webhook POST returned status: %s, body: %s",
|
||||
resp.Status,
|
||||
string(respBody),
|
||||
)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) buildRequestBody(heading, message string) []byte {
|
||||
if t.BodyTemplate != nil && *t.BodyTemplate != "" {
|
||||
result := *t.BodyTemplate
|
||||
result = strings.ReplaceAll(result, "{{heading}}", escapeJSONString(heading))
|
||||
result = strings.ReplaceAll(result, "{{message}}", escapeJSONString(message))
|
||||
return []byte(result)
|
||||
}
|
||||
|
||||
payload := map[string]string{
|
||||
"heading": heading,
|
||||
"message": message,
|
||||
}
|
||||
body, _ := json.Marshal(payload)
|
||||
|
||||
return body
|
||||
}
|
||||
|
||||
func (t *WebhookNotifier) applyHeaders(req *http.Request) {
|
||||
for _, h := range t.Headers {
|
||||
if h.Key != "" {
|
||||
req.Header.Set(h.Key, h.Value)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func escapeJSONString(s string) string {
|
||||
b, err := json.Marshal(s)
|
||||
if err != nil || len(b) < 2 {
|
||||
escaped := strings.ReplaceAll(s, `\`, `\\`)
|
||||
escaped = strings.ReplaceAll(escaped, `"`, `\"`)
|
||||
escaped = strings.ReplaceAll(escaped, "\n", `\n`)
|
||||
escaped = strings.ReplaceAll(escaped, "\r", `\r`)
|
||||
escaped = strings.ReplaceAll(escaped, "\t", `\t`)
|
||||
return escaped
|
||||
}
|
||||
|
||||
return string(b[1 : len(b)-1])
|
||||
}
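As a usage note for the template path above (values invented for illustration): buildRequestBody substitutes {{heading}} and {{message}} only after JSON-escaping them via escapeJSONString, so a user-supplied template stays valid JSON.

// Illustrative only, not part of the diff.
tpl := `{"text": "{{heading}} - {{message}}"}`
n := &WebhookNotifier{BodyTemplate: &tpl}
body := n.buildRequestBody(`Backup "prod" failed`, "connection refused")
// body: {"text": "Backup \"prod\" failed - connection refused"}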
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
users_models "postgresus-backend/internal/features/users/models"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -17,6 +18,7 @@ type NotifierService struct {
|
||||
logger *slog.Logger
|
||||
workspaceService *workspaces_services.WorkspaceService
|
||||
auditLogService *audit_logs.AuditLogService
|
||||
fieldEncryptor encryption.FieldEncryptor
|
||||
}
|
||||
|
||||
func (s *NotifierService) SaveNotifier(
|
||||
@@ -46,7 +48,11 @@ func (s *NotifierService) SaveNotifier(
|
||||
|
||||
existingNotifier.Update(notifier)
|
||||
|
||||
if err := existingNotifier.Validate(); err != nil {
|
||||
if err := existingNotifier.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := existingNotifier.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -63,7 +69,11 @@ func (s *NotifierService) SaveNotifier(
|
||||
} else {
|
||||
notifier.WorkspaceID = workspaceID
|
||||
|
||||
if err := notifier.Validate(); err != nil {
|
||||
if err := notifier.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := notifier.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -175,7 +185,7 @@ func (s *NotifierService) SendTestNotification(
|
||||
return errors.New("insufficient permissions to test notifier in this workspace")
|
||||
}
|
||||
|
||||
err = notifier.Send(s.logger, "Test message", "This is a test message")
|
||||
err = notifier.Send(s.fieldEncryptor, s.logger, "Test message", "This is a test message")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -205,16 +215,24 @@ func (s *NotifierService) SendTestNotificationToNotifier(
|
||||
|
||||
existingNotifier.Update(notifier)
|
||||
|
||||
if err := existingNotifier.Validate(); err != nil {
|
||||
if err := existingNotifier.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := existingNotifier.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
usingNotifier = existingNotifier
|
||||
} else {
|
||||
if err := notifier.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
usingNotifier = notifier
|
||||
}
|
||||
|
||||
return usingNotifier.Send(s.logger, "Test message", "This is a test message")
|
||||
return usingNotifier.Send(s.fieldEncryptor, s.logger, "Test message", "This is a test message")
|
||||
}
|
||||
|
||||
func (s *NotifierService) SendNotification(
|
||||
@@ -233,7 +251,7 @@ func (s *NotifierService) SendNotification(
|
||||
return
|
||||
}
|
||||
|
||||
err = notifiedFromDb.Send(s.logger, title, message)
|
||||
err = notifiedFromDb.Send(s.fieldEncryptor, s.logger, title, message)
|
||||
if err != nil {
|
||||
errMsg := err.Error()
|
||||
notifiedFromDb.LastSendError = &errMsg
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package restores
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
@@ -29,6 +30,7 @@ import (
|
||||
workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
|
||||
workspaces_models "postgresus-backend/internal/features/workspaces/models"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
util_encryption "postgresus-backend/internal/util/encryption"
|
||||
test_utils "postgresus-backend/internal/util/testing"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
)
|
||||
@@ -309,6 +311,7 @@ func createTestBackup(
|
||||
database *databases.Database,
|
||||
owner *users_dto.SignInResponseDTO,
|
||||
) *backups.Backup {
|
||||
fieldEncryptor := util_encryption.GetFieldEncryptor()
|
||||
userService := users_services.GetUserService()
|
||||
user, err := userService.GetUserFromToken(owner.Token)
|
||||
if err != nil {
|
||||
@@ -338,7 +341,7 @@ func createTestBackup(
|
||||
dummyContent := []byte("dummy backup content for testing")
|
||||
reader := strings.NewReader(string(dummyContent))
|
||||
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
|
||||
if err := storages[0].SaveFile(logger, backup.ID, reader); err != nil {
|
||||
if err := storages[0].SaveFile(context.Background(), fieldEncryptor, logger, backup.ID, reader); err != nil {
|
||||
panic(fmt.Sprintf("Failed to create test backup file: %v", err))
|
||||
}
|
||||
|
||||
|
||||
@@ -2,7 +2,6 @@ package models
|
||||
|
||||
import (
|
||||
"postgresus-backend/internal/features/backups/backups"
|
||||
"postgresus-backend/internal/features/databases/databases/postgresql"
|
||||
"postgresus-backend/internal/features/restores/enums"
|
||||
"time"
|
||||
|
||||
@@ -16,8 +15,6 @@ type Restore struct {
|
||||
BackupID uuid.UUID `json:"backupId" gorm:"column:backup_id;type:uuid;not null"`
|
||||
Backup *backups.Backup
|
||||
|
||||
Postgresql *postgresql.PostgresqlDatabase `json:"postgresql,omitempty" gorm:"foreignKey:RestoreID"`
|
||||
|
||||
FailMessage *string `json:"failMessage" gorm:"column:fail_message"`
|
||||
|
||||
RestoreDurationMs int64 `json:"restoreDurationMs" gorm:"column:restore_duration_ms;default:0"`
|
||||
|
||||
@@ -32,7 +32,6 @@ func (r *RestoreRepository) FindByBackupID(backupID uuid.UUID) ([]*models.Restor
|
||||
if err := storage.
|
||||
GetDb().
|
||||
Preload("Backup").
|
||||
Preload("Postgresql").
|
||||
Where("backup_id = ?", backupID).
|
||||
Order("created_at DESC").
|
||||
Find(&restores).Error; err != nil {
|
||||
@@ -48,7 +47,6 @@ func (r *RestoreRepository) FindByID(id uuid.UUID) (*models.Restore, error) {
|
||||
if err := storage.
|
||||
GetDb().
|
||||
Preload("Backup").
|
||||
Preload("Postgresql").
|
||||
Where("id = ?", id).
|
||||
First(&restore).Error; err != nil {
|
||||
return nil, err
|
||||
@@ -63,7 +61,6 @@ func (r *RestoreRepository) FindByStatus(status enums.RestoreStatus) ([]*models.
|
||||
if err := storage.
|
||||
GetDb().
|
||||
Preload("Backup").
|
||||
Preload("Postgresql").
|
||||
Where("status = ?", status).
|
||||
Order("created_at DESC").
|
||||
Find(&restores).Error; err != nil {
|
||||
|
||||
@@ -191,15 +191,9 @@ func (s *RestoreService) RestoreBackup(
|
||||
return err
|
||||
}
|
||||
|
||||
// Set the RestoreID on the PostgreSQL database and save it
|
||||
if requestDTO.PostgresqlDatabase != nil {
|
||||
requestDTO.PostgresqlDatabase.RestoreID = &restore.ID
|
||||
restore.Postgresql = requestDTO.PostgresqlDatabase
|
||||
|
||||
// Save the restore again to include the postgresql database
|
||||
if err := s.restoreRepository.Save(&restore); err != nil {
|
||||
return err
|
||||
}
|
||||
// Save the restore again to include the postgresql database
|
||||
if err := s.restoreRepository.Save(&restore); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
storage, err := s.storageService.GetStorageByID(backup.StorageID)
|
||||
@@ -216,10 +210,15 @@ func (s *RestoreService) RestoreBackup(
|
||||
|
||||
start := time.Now().UTC()
|
||||
|
||||
restoringToDB := &databases.Database{
|
||||
Postgresql: requestDTO.PostgresqlDatabase,
|
||||
}
|
||||
|
||||
err = s.restoreBackupUsecase.Execute(
|
||||
backupConfig,
|
||||
restore,
|
||||
database,
|
||||
restoringToDB,
|
||||
backup,
|
||||
storage,
|
||||
)
|
||||
|
||||
@@ -1,11 +1,13 @@
|
||||
package usecases_postgresql
|
||||
|
||||
import (
|
||||
"postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/util/logger"
|
||||
)
|
||||
|
||||
var restorePostgresqlBackupUsecase = &RestorePostgresqlBackupUsecase{
|
||||
logger.GetLogger(),
|
||||
secrets.GetSecretKeyService(),
|
||||
}
|
||||
|
||||
func GetRestorePostgresqlBackupUsecase() *RestorePostgresqlBackupUsecase {
|
||||
|
||||
@@ -2,6 +2,7 @@ package usecases_postgresql
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
@@ -15,11 +16,14 @@ import (
|
||||
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/features/backups/backups"
|
||||
"postgresus-backend/internal/features/backups/backups/encryption"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
|
||||
encryption_secrets "postgresus-backend/internal/features/encryption/secrets"
|
||||
"postgresus-backend/internal/features/restores/models"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
util_encryption "postgresus-backend/internal/util/encryption"
|
||||
files_utils "postgresus-backend/internal/util/files"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
|
||||
@@ -27,17 +31,19 @@ import (
|
||||
)
|
||||
|
||||
type RestorePostgresqlBackupUsecase struct {
|
||||
logger *slog.Logger
|
||||
logger *slog.Logger
|
||||
secretKeyService *encryption_secrets.SecretKeyService
|
||||
}
|
||||
|
||||
func (uc *RestorePostgresqlBackupUsecase) Execute(
|
||||
database *databases.Database,
|
||||
originalDB *databases.Database,
|
||||
restoringToDB *databases.Database,
|
||||
backupConfig *backups_config.BackupConfig,
|
||||
restore models.Restore,
|
||||
backup *backups.Backup,
|
||||
storage *storages.Storage,
|
||||
) error {
|
||||
if database.Type != databases.DatabaseTypePostgres {
|
||||
if originalDB.Type != databases.DatabaseTypePostgres {
|
||||
return errors.New("database type not supported")
|
||||
}
|
||||
|
||||
@@ -49,7 +55,7 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
|
||||
backup.ID,
|
||||
)
|
||||
|
||||
pg := restore.Postgresql
|
||||
pg := restoringToDB.Postgresql
|
||||
if pg == nil {
|
||||
return fmt.Errorf("postgresql configuration is required for restore")
|
||||
}
|
||||
@@ -73,11 +79,12 @@ func (uc *RestorePostgresqlBackupUsecase) Execute(
|
||||
"--verbose", // Add verbose output to help with debugging
|
||||
"--clean", // Clean (drop) database objects before recreating them
|
||||
"--if-exists", // Use IF EXISTS when dropping objects
|
||||
"--no-owner",
|
||||
"--no-owner", // Skip restoring ownership
|
||||
"--no-acl", // Skip restoring access privileges (GRANT/REVOKE commands)
|
||||
}
|
||||
|
||||
return uc.restoreFromStorage(
|
||||
database,
|
||||
originalDB,
|
||||
tools.GetPostgresqlExecutable(
|
||||
pg.Version,
|
||||
"pg_restore",
|
||||
@@ -202,18 +209,67 @@ func (uc *RestorePostgresqlBackupUsecase) downloadBackupToTempFile(
|
||||
backup.ID,
|
||||
"tempFile",
|
||||
tempBackupFile,
|
||||
"encrypted",
|
||||
backup.Encryption == backups_config.BackupEncryptionEncrypted,
|
||||
)
|
||||
backupReader, err := storage.GetFile(backup.ID)
|
||||
fieldEncryptor := util_encryption.GetFieldEncryptor()
|
||||
rawReader, err := storage.GetFile(fieldEncryptor, backup.ID)
|
||||
if err != nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("failed to get backup file from storage: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if err := backupReader.Close(); err != nil {
|
||||
if err := rawReader.Close(); err != nil {
|
||||
uc.logger.Error("Failed to close backup reader", "error", err)
|
||||
}
|
||||
}()
|
||||
|
||||
// Create a reader that handles decryption if needed
|
||||
var backupReader io.Reader = rawReader
|
||||
if backup.Encryption == backups_config.BackupEncryptionEncrypted {
|
||||
// Validate encryption metadata
|
||||
if backup.EncryptionSalt == nil || backup.EncryptionIV == nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("backup is encrypted but missing encryption metadata")
|
||||
}
|
||||
|
||||
// Get master key
|
||||
masterKey, err := uc.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("failed to get master key for decryption: %w", err)
|
||||
}
|
||||
|
||||
// Decode salt and IV from base64
|
||||
salt, err := base64.StdEncoding.DecodeString(*backup.EncryptionSalt)
|
||||
if err != nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("failed to decode encryption salt: %w", err)
|
||||
}
|
||||
|
||||
iv, err := base64.StdEncoding.DecodeString(*backup.EncryptionIV)
|
||||
if err != nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("failed to decode encryption IV: %w", err)
|
||||
}
|
||||
|
||||
// Create decryption reader
|
||||
decryptReader, err := encryption.NewDecryptionReader(
|
||||
rawReader,
|
||||
masterKey,
|
||||
backup.ID,
|
||||
salt,
|
||||
iv,
|
||||
)
|
||||
if err != nil {
|
||||
cleanupFunc()
|
||||
return "", nil, fmt.Errorf("failed to create decryption reader: %w", err)
|
||||
}
|
||||
|
||||
backupReader = decryptReader
|
||||
uc.logger.Info("Using decryption for encrypted backup", "backupId", backup.ID)
|
||||
}
|
||||
|
||||
// Create temporary backup file
|
||||
tempFile, err := os.Create(tempBackupFile)
|
||||
if err != nil {
|
||||
@@ -322,7 +378,6 @@ func (uc *RestorePostgresqlBackupUsecase) setupPgRestoreEnvironment(
|
||||
// Add encoding-related environment variables
|
||||
cmd.Env = append(cmd.Env, "LC_ALL=C.UTF-8")
|
||||
cmd.Env = append(cmd.Env, "LANG=C.UTF-8")
|
||||
cmd.Env = append(cmd.Env, "PGOPTIONS=--client-encoding=UTF8")
|
||||
|
||||
shouldRequireSSL := pgConfig.IsHttps
|
||||
|
||||
@@ -508,11 +563,15 @@ func (uc *RestorePostgresqlBackupUsecase) createTempPgpassFile(
|
||||
return "", nil
|
||||
}
|
||||
|
||||
escapedHost := tools.EscapePgpassField(pgConfig.Host)
|
||||
escapedUsername := tools.EscapePgpassField(pgConfig.Username)
|
||||
escapedPassword := tools.EscapePgpassField(password)
|
||||
|
||||
pgpassContent := fmt.Sprintf("%s:%d:*:%s:%s",
|
||||
pgConfig.Host,
|
||||
escapedHost,
|
||||
pgConfig.Port,
|
||||
pgConfig.Username,
|
||||
password,
|
||||
escapedUsername,
|
||||
escapedPassword,
|
||||
)
|
||||
|
||||
tempDir, err := os.MkdirTemp("", "pgpass")
|
||||
|
||||
@@ -17,13 +17,15 @@ type RestoreBackupUsecase struct {
|
||||
func (uc *RestoreBackupUsecase) Execute(
|
||||
backupConfig *backups_config.BackupConfig,
|
||||
restore models.Restore,
|
||||
database *databases.Database,
|
||||
originalDB *databases.Database,
|
||||
restoringToDB *databases.Database,
|
||||
backup *backups.Backup,
|
||||
storage *storages.Storage,
|
||||
) error {
|
||||
if database.Type == databases.DatabaseTypePostgres {
|
||||
if originalDB.Type == databases.DatabaseTypePostgres {
|
||||
return uc.restorePostgresqlBackupUsecase.Execute(
|
||||
database,
|
||||
originalDB,
|
||||
restoringToDB,
|
||||
backupConfig,
|
||||
restore,
|
||||
backup,
|
||||
|
||||
@@ -3,10 +3,14 @@ package storages
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
|
||||
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
|
||||
local_storage "postgresus-backend/internal/features/storages/models/local"
|
||||
nas_storage "postgresus-backend/internal/features/storages/models/nas"
|
||||
s3_storage "postgresus-backend/internal/features/storages/models/s3"
|
||||
users_enums "postgresus-backend/internal/features/users/enums"
|
||||
users_middleware "postgresus-backend/internal/features/users/middleware"
|
||||
@@ -14,6 +18,7 @@ import (
|
||||
users_testing "postgresus-backend/internal/features/users/testing"
|
||||
workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
test_utils "postgresus-backend/internal/util/testing"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
@@ -438,6 +443,386 @@ func Test_CrossWorkspaceSecurity_CannotAccessStorageFromAnotherWorkspace(t *test
|
||||
workspaces_testing.RemoveTestWorkspace(workspace2, router)
|
||||
}
|
||||
|
||||
func Test_StorageSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
storageType StorageType
|
||||
createStorage func(workspaceID uuid.UUID) *Storage
|
||||
updateStorage func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage
|
||||
verifySensitiveData func(t *testing.T, storage *Storage)
|
||||
verifyHiddenData func(t *testing.T, storage *Storage)
|
||||
}{
|
||||
{
|
||||
name: "S3 Storage",
|
||||
storageType: StorageTypeS3,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeS3,
|
||||
Name: "Test S3 Storage",
|
||||
S3Storage: &s3_storage.S3Storage{
|
||||
S3Bucket: "test-bucket",
|
||||
S3Region: "us-east-1",
|
||||
S3AccessKey: "original-access-key",
|
||||
S3SecretKey: "original-secret-key",
|
||||
S3Endpoint: "https://s3.amazonaws.com",
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeS3,
|
||||
Name: "Updated S3 Storage",
|
||||
S3Storage: &s3_storage.S3Storage{
|
||||
S3Bucket: "updated-bucket",
|
||||
S3Region: "us-west-2",
|
||||
S3AccessKey: "",
|
||||
S3SecretKey: "",
|
||||
S3Endpoint: "https://s3.us-west-2.amazonaws.com",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.True(t, strings.HasPrefix(storage.S3Storage.S3AccessKey, "enc:"),
|
||||
"S3AccessKey should be encrypted with 'enc:' prefix")
|
||||
assert.True(t, strings.HasPrefix(storage.S3Storage.S3SecretKey, "enc:"),
|
||||
"S3SecretKey should be encrypted with 'enc:' prefix")
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
accessKey, err := encryptor.Decrypt(storage.ID, storage.S3Storage.S3AccessKey)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-access-key", accessKey)
|
||||
|
||||
secretKey, err := encryptor.Decrypt(storage.ID, storage.S3Storage.S3SecretKey)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-secret-key", secretKey)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.S3Storage.S3AccessKey)
|
||||
assert.Equal(t, "", storage.S3Storage.S3SecretKey)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Local Storage",
|
||||
storageType: StorageTypeLocal,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeLocal,
|
||||
Name: "Test Local Storage",
|
||||
LocalStorage: &local_storage.LocalStorage{},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeLocal,
|
||||
Name: "Updated Local Storage",
|
||||
LocalStorage: &local_storage.LocalStorage{},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "NAS Storage",
|
||||
storageType: StorageTypeNAS,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeNAS,
|
||||
Name: "Test NAS Storage",
|
||||
NASStorage: &nas_storage.NASStorage{
|
||||
Host: "nas.example.com",
|
||||
Port: 445,
|
||||
Share: "backups",
|
||||
Username: "testuser",
|
||||
Password: "original-password",
|
||||
UseSSL: false,
|
||||
Domain: "WORKGROUP",
|
||||
Path: "/test",
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeNAS,
|
||||
Name: "Updated NAS Storage",
|
||||
NASStorage: &nas_storage.NASStorage{
|
||||
Host: "nas2.example.com",
|
||||
Port: 445,
|
||||
Share: "backups2",
|
||||
Username: "testuser2",
|
||||
Password: "",
|
||||
UseSSL: true,
|
||||
Domain: "WORKGROUP2",
|
||||
Path: "/test2",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.True(t, strings.HasPrefix(storage.NASStorage.Password, "enc:"),
|
||||
"Password should be encrypted with 'enc:' prefix")
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
password, err := encryptor.Decrypt(storage.ID, storage.NASStorage.Password)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-password", password)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.NASStorage.Password)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Azure Blob Storage (Connection String)",
|
||||
storageType: StorageTypeAzureBlob,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeAzureBlob,
|
||||
Name: "Test Azure Blob Storage",
|
||||
AzureBlobStorage: &azure_blob_storage.AzureBlobStorage{
|
||||
AuthMethod: azure_blob_storage.AuthMethodConnectionString,
|
||||
ConnectionString: "original-connection-string",
|
||||
ContainerName: "test-container",
|
||||
Endpoint: "",
|
||||
Prefix: "backups/",
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeAzureBlob,
|
||||
Name: "Updated Azure Blob Storage",
|
||||
AzureBlobStorage: &azure_blob_storage.AzureBlobStorage{
|
||||
AuthMethod: azure_blob_storage.AuthMethodConnectionString,
|
||||
ConnectionString: "",
|
||||
ContainerName: "updated-container",
|
||||
Endpoint: "https://custom.blob.core.windows.net",
|
||||
Prefix: "backups2/",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.True(t, strings.HasPrefix(storage.AzureBlobStorage.ConnectionString, "enc:"),
|
||||
"ConnectionString should be encrypted with 'enc:' prefix")
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
connectionString, err := encryptor.Decrypt(
|
||||
storage.ID,
|
||||
storage.AzureBlobStorage.ConnectionString,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-connection-string", connectionString)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.AzureBlobStorage.ConnectionString)
|
||||
assert.Equal(t, "", storage.AzureBlobStorage.AccountKey)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Azure Blob Storage (Account Key)",
|
||||
storageType: StorageTypeAzureBlob,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeAzureBlob,
|
||||
Name: "Test Azure Blob with Account Key",
|
||||
AzureBlobStorage: &azure_blob_storage.AzureBlobStorage{
|
||||
AuthMethod: azure_blob_storage.AuthMethodAccountKey,
|
||||
AccountName: "testaccount",
|
||||
AccountKey: "original-account-key",
|
||||
ContainerName: "test-container",
|
||||
Endpoint: "",
|
||||
Prefix: "backups/",
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeAzureBlob,
|
||||
Name: "Updated Azure Blob with Account Key",
|
||||
AzureBlobStorage: &azure_blob_storage.AzureBlobStorage{
|
||||
AuthMethod: azure_blob_storage.AuthMethodAccountKey,
|
||||
AccountName: "updatedaccount",
|
||||
AccountKey: "",
|
||||
ContainerName: "updated-container",
|
||||
Endpoint: "https://custom.blob.core.windows.net",
|
||||
Prefix: "backups2/",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.True(t, strings.HasPrefix(storage.AzureBlobStorage.AccountKey, "enc:"),
|
||||
"AccountKey should be encrypted with 'enc:' prefix")
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
accountKey, err := encryptor.Decrypt(
|
||||
storage.ID,
|
||||
storage.AzureBlobStorage.AccountKey,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-account-key", accountKey)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.AzureBlobStorage.ConnectionString)
|
||||
assert.Equal(t, "", storage.AzureBlobStorage.AccountKey)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Google Drive Storage",
|
||||
storageType: StorageTypeGoogleDrive,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeGoogleDrive,
|
||||
Name: "Test Google Drive Storage",
|
||||
GoogleDriveStorage: &google_drive_storage.GoogleDriveStorage{
|
||||
ClientID: "original-client-id",
|
||||
ClientSecret: "original-client-secret",
|
||||
TokenJSON: `{"access_token":"ya29.test-access-token","token_type":"Bearer","expiry":"2030-12-31T23:59:59Z","refresh_token":"1//test-refresh-token"}`,
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeGoogleDrive,
|
||||
Name: "Updated Google Drive Storage",
|
||||
GoogleDriveStorage: &google_drive_storage.GoogleDriveStorage{
|
||||
ClientID: "updated-client-id",
|
||||
ClientSecret: "",
|
||||
TokenJSON: "",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.True(t, strings.HasPrefix(storage.GoogleDriveStorage.ClientSecret, "enc:"),
|
||||
"ClientSecret should be encrypted with 'enc:' prefix")
|
||||
assert.True(t, strings.HasPrefix(storage.GoogleDriveStorage.TokenJSON, "enc:"),
|
||||
"TokenJSON should be encrypted with 'enc:' prefix")
|
||||
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
clientSecret, err := encryptor.Decrypt(
|
||||
storage.ID,
|
||||
storage.GoogleDriveStorage.ClientSecret,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "original-client-secret", clientSecret)
|
||||
|
||||
tokenJSON, err := encryptor.Decrypt(
|
||||
storage.ID,
|
||||
storage.GoogleDriveStorage.TokenJSON,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(
|
||||
t,
|
||||
`{"access_token":"ya29.test-access-token","token_type":"Bearer","expiry":"2030-12-31T23:59:59Z","refresh_token":"1//test-refresh-token"}`,
|
||||
tokenJSON,
|
||||
)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.GoogleDriveStorage.ClientSecret)
|
||||
assert.Equal(t, "", storage.GoogleDriveStorage.TokenJSON)
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
router := createRouter()
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
// Phase 1: Create storage with sensitive data
|
||||
initialStorage := tc.createStorage(workspace.ID)
|
||||
var createdStorage Storage
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/storages",
|
||||
"Bearer "+owner.Token,
|
||||
*initialStorage,
|
||||
http.StatusOK,
|
||||
&createdStorage,
|
||||
)
|
||||
|
||||
assert.NotEmpty(t, createdStorage.ID)
|
||||
assert.Equal(t, initialStorage.Name, createdStorage.Name)
|
||||
|
||||
// Phase 2: Verify sensitive data is encrypted in repository after creation
|
||||
repository := &StorageRepository{}
|
||||
storageFromDBAfterCreate, err := repository.FindByID(createdStorage.ID)
|
||||
assert.NoError(t, err)
|
||||
tc.verifySensitiveData(t, storageFromDBAfterCreate)
|
||||
|
||||
// Phase 3: Read via service - sensitive data should be hidden
|
||||
var retrievedStorage Storage
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/storages/%s", createdStorage.ID.String()),
|
||||
"Bearer "+owner.Token,
|
||||
http.StatusOK,
|
||||
&retrievedStorage,
|
||||
)
|
||||
|
||||
tc.verifyHiddenData(t, &retrievedStorage)
|
||||
assert.Equal(t, initialStorage.Name, retrievedStorage.Name)
|
||||
|
||||
// Phase 4: Update with non-sensitive changes only (sensitive fields empty)
|
||||
updatedStorage := tc.updateStorage(workspace.ID, createdStorage.ID)
|
||||
var updateResponse Storage
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/storages",
|
||||
"Bearer "+owner.Token,
|
||||
*updatedStorage,
|
||||
http.StatusOK,
|
||||
&updateResponse,
|
||||
)
|
||||
|
||||
// Verify non-sensitive fields were updated
|
||||
assert.Equal(t, updatedStorage.Name, updateResponse.Name)
|
||||
|
||||
// Phase 5: Retrieve directly from repository to verify sensitive data preservation
|
||||
storageFromDB, err := repository.FindByID(createdStorage.ID)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify original sensitive data is still present in DB
|
||||
tc.verifySensitiveData(t, storageFromDB)
|
||||
|
||||
// Verify non-sensitive fields were updated in DB
|
||||
assert.Equal(t, updatedStorage.Name, storageFromDB.Name)
|
||||
|
||||
// Additional verification: Check via GET that data is still hidden
|
||||
var finalRetrieved Storage
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/storages/%s", createdStorage.ID.String()),
|
||||
"Bearer "+owner.Token,
|
||||
http.StatusOK,
|
||||
&finalRetrieved,
|
||||
)
|
||||
tc.verifyHiddenData(t, &finalRetrieved)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func createRouter() *gin.Engine {
|
||||
gin.SetMode(gin.TestMode)
|
||||
router := gin.New()
|
||||
@@ -485,158 +870,3 @@ func deleteStorage(
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
func Test_StorageSensitiveDataLifecycle_AllTypes(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
storageType StorageType
|
||||
createStorage func(workspaceID uuid.UUID) *Storage
|
||||
updateStorage func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage
|
||||
verifySensitiveData func(t *testing.T, storage *Storage)
|
||||
verifyHiddenData func(t *testing.T, storage *Storage)
|
||||
}{
|
||||
{
|
||||
name: "S3 Storage",
|
||||
storageType: StorageTypeS3,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeS3,
|
||||
Name: "Test S3 Storage",
|
||||
S3Storage: &s3_storage.S3Storage{
|
||||
S3Bucket: "test-bucket",
|
||||
S3Region: "us-east-1",
|
||||
S3AccessKey: "original-access-key",
|
||||
S3SecretKey: "original-secret-key",
|
||||
S3Endpoint: "https://s3.amazonaws.com",
|
||||
},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeS3,
|
||||
Name: "Updated S3 Storage",
|
||||
S3Storage: &s3_storage.S3Storage{
|
||||
S3Bucket: "updated-bucket",
|
||||
S3Region: "us-west-2",
|
||||
S3AccessKey: "",
|
||||
S3SecretKey: "",
|
||||
S3Endpoint: "https://s3.us-west-2.amazonaws.com",
|
||||
},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "original-access-key", storage.S3Storage.S3AccessKey)
|
||||
assert.Equal(t, "original-secret-key", storage.S3Storage.S3SecretKey)
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
assert.Equal(t, "", storage.S3Storage.S3AccessKey)
|
||||
assert.Equal(t, "", storage.S3Storage.S3SecretKey)
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Local Storage",
|
||||
storageType: StorageTypeLocal,
|
||||
createStorage: func(workspaceID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeLocal,
|
||||
Name: "Test Local Storage",
|
||||
LocalStorage: &local_storage.LocalStorage{},
|
||||
}
|
||||
},
|
||||
updateStorage: func(workspaceID uuid.UUID, storageID uuid.UUID) *Storage {
|
||||
return &Storage{
|
||||
ID: storageID,
|
||||
WorkspaceID: workspaceID,
|
||||
Type: StorageTypeLocal,
|
||||
Name: "Updated Local Storage",
|
||||
LocalStorage: &local_storage.LocalStorage{},
|
||||
}
|
||||
},
|
||||
verifySensitiveData: func(t *testing.T, storage *Storage) {
|
||||
},
|
||||
verifyHiddenData: func(t *testing.T, storage *Storage) {
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
owner := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
router := createRouter()
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", owner, router)
|
||||
|
||||
// Phase 1: Create storage with sensitive data
|
||||
initialStorage := tc.createStorage(workspace.ID)
|
||||
var createdStorage Storage
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/storages",
|
||||
"Bearer "+owner.Token,
|
||||
*initialStorage,
|
||||
http.StatusOK,
|
||||
&createdStorage,
|
||||
)
|
||||
|
||||
assert.NotEmpty(t, createdStorage.ID)
|
||||
assert.Equal(t, initialStorage.Name, createdStorage.Name)
|
||||
|
||||
// Phase 2: Read via service - sensitive data should be hidden
|
||||
var retrievedStorage Storage
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/storages/%s", createdStorage.ID.String()),
|
||||
"Bearer "+owner.Token,
|
||||
http.StatusOK,
|
||||
&retrievedStorage,
|
||||
)
|
||||
|
||||
tc.verifyHiddenData(t, &retrievedStorage)
|
||||
assert.Equal(t, initialStorage.Name, retrievedStorage.Name)
|
||||
|
||||
// Phase 3: Update with non-sensitive changes only (sensitive fields empty)
|
||||
updatedStorage := tc.updateStorage(workspace.ID, createdStorage.ID)
|
||||
var updateResponse Storage
|
||||
test_utils.MakePostRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/storages",
|
||||
"Bearer "+owner.Token,
|
||||
*updatedStorage,
|
||||
http.StatusOK,
|
||||
&updateResponse,
|
||||
)
|
||||
|
||||
// Verify non-sensitive fields were updated
|
||||
assert.Equal(t, updatedStorage.Name, updateResponse.Name)
|
||||
|
||||
// Phase 4: Retrieve directly from repository to verify sensitive data preservation
|
||||
repository := &StorageRepository{}
|
||||
storageFromDB, err := repository.FindByID(createdStorage.ID)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify original sensitive data is still present in DB
|
||||
tc.verifySensitiveData(t, storageFromDB)
|
||||
|
||||
// Verify non-sensitive fields were updated in DB
|
||||
assert.Equal(t, updatedStorage.Name, storageFromDB.Name)
|
||||
|
||||
// Additional verification: Check via GET that data is still hidden
|
||||
var finalRetrieved Storage
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/storages/%s", createdStorage.ID.String()),
|
||||
"Bearer "+owner.Token,
|
||||
http.StatusOK,
|
||||
&finalRetrieved,
|
||||
)
|
||||
tc.verifyHiddenData(t, &finalRetrieved)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ package storages
|
||||
import (
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
)
|
||||
|
||||
var storageRepository = &StorageRepository{}
|
||||
@@ -10,6 +11,7 @@ var storageService = &StorageService{
|
||||
storageRepository,
|
||||
workspaces_services.GetWorkspaceService(),
|
||||
audit_logs.GetAuditLogService(),
|
||||
encryption.GetFieldEncryptor(),
|
||||
}
|
||||
var storageController = &StorageController{
|
||||
storageService,
|
||||
|
||||
@@ -7,4 +7,5 @@ const (
|
||||
StorageTypeS3 StorageType = "S3"
|
||||
StorageTypeGoogleDrive StorageType = "GOOGLE_DRIVE"
|
||||
StorageTypeNAS StorageType = "NAS"
|
||||
StorageTypeAzureBlob StorageType = "AZURE_BLOB"
|
||||
)
|
||||
|
||||
@@ -1,22 +1,32 @@
|
||||
package storages
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"log/slog"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
type StorageFileSaver interface {
|
||||
SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader) error
|
||||
SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error
|
||||
|
||||
GetFile(fileID uuid.UUID) (io.ReadCloser, error)
|
||||
GetFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) (io.ReadCloser, error)
|
||||
|
||||
DeleteFile(fileID uuid.UUID) error
|
||||
DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error
|
||||
|
||||
Validate() error
|
||||
Validate(encryptor encryption.FieldEncryptor) error
|
||||
|
||||
TestConnection() error
|
||||
TestConnection(encryptor encryption.FieldEncryptor) error
|
||||
|
||||
HideSensitiveData()
|
||||
|
||||
EncryptSensitiveData(encryptor encryption.FieldEncryptor) error
|
||||
}
|
||||
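The interface above now threads an `encryption.FieldEncryptor` through every storage operation, and the wiring hunk earlier obtains it via `encryption.GetFieldEncryptor()`. The encryptor itself is not part of this diff; judging only from the call sites (`Encrypt`/`Decrypt` keyed by the storage ID, both returning `(string, error)`), its shape is roughly the sketch below. This is an inference for readability, not the actual definition in `internal/util/encryption`, which may differ.

```go
// Sketch of the FieldEncryptor shape as inferred from the call sites in this diff.
// The real interface lives in internal/util/encryption and may have a different form.
package encryption

import "github.com/google/uuid"

type FieldEncryptor interface {
	// Encrypt returns ciphertext for a field value, keyed to the owning storage ID.
	Encrypt(id uuid.UUID, plaintext string) (string, error)
	// Decrypt reverses Encrypt for the same storage ID.
	Decrypt(id uuid.UUID, ciphertext string) (string, error)
}
```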
|
||||
@@ -1,13 +1,16 @@
|
||||
package storages
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"io"
|
||||
"log/slog"
|
||||
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
|
||||
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
|
||||
local_storage "postgresus-backend/internal/features/storages/models/local"
|
||||
nas_storage "postgresus-backend/internal/features/storages/models/nas"
|
||||
s3_storage "postgresus-backend/internal/features/storages/models/s3"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -24,10 +27,17 @@ type Storage struct {
|
||||
S3Storage *s3_storage.S3Storage `json:"s3Storage" gorm:"foreignKey:StorageID"`
|
||||
GoogleDriveStorage *google_drive_storage.GoogleDriveStorage `json:"googleDriveStorage" gorm:"foreignKey:StorageID"`
|
||||
NASStorage *nas_storage.NASStorage `json:"nasStorage" gorm:"foreignKey:StorageID"`
|
||||
AzureBlobStorage *azure_blob_storage.AzureBlobStorage `json:"azureBlobStorage" gorm:"foreignKey:StorageID"`
|
||||
}
|
||||
|
||||
func (s *Storage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader) error {
|
||||
err := s.getSpecificStorage().SaveFile(logger, fileID, file)
|
||||
func (s *Storage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
err := s.getSpecificStorage().SaveFile(ctx, encryptor, logger, fileID, file)
|
||||
if err != nil {
|
||||
lastSaveError := err.Error()
|
||||
s.LastSaveError = &lastSaveError
|
||||
@@ -39,15 +49,18 @@ func (s *Storage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *Storage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
return s.getSpecificStorage().GetFile(fileID)
|
||||
func (s *Storage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
return s.getSpecificStorage().GetFile(encryptor, fileID)
|
||||
}
|
||||
|
||||
func (s *Storage) DeleteFile(fileID uuid.UUID) error {
|
||||
return s.getSpecificStorage().DeleteFile(fileID)
|
||||
func (s *Storage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
|
||||
return s.getSpecificStorage().DeleteFile(encryptor, fileID)
|
||||
}
|
||||
|
||||
func (s *Storage) Validate() error {
|
||||
func (s *Storage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if s.Type == "" {
|
||||
return errors.New("storage type is required")
|
||||
}
|
||||
@@ -56,17 +69,21 @@ func (s *Storage) Validate() error {
|
||||
return errors.New("storage name is required")
|
||||
}
|
||||
|
||||
return s.getSpecificStorage().Validate()
|
||||
return s.getSpecificStorage().Validate(encryptor)
|
||||
}
|
||||
|
||||
func (s *Storage) TestConnection() error {
|
||||
return s.getSpecificStorage().TestConnection()
|
||||
func (s *Storage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
return s.getSpecificStorage().TestConnection(encryptor)
|
||||
}
|
||||
|
||||
func (s *Storage) HideSensitiveData() {
|
||||
s.getSpecificStorage().HideSensitiveData()
|
||||
}
|
||||
|
||||
func (s *Storage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
return s.getSpecificStorage().EncryptSensitiveData(encryptor)
|
||||
}
|
||||
|
||||
func (s *Storage) Update(incoming *Storage) {
|
||||
s.Name = incoming.Name
|
||||
s.Type = incoming.Type
|
||||
@@ -88,6 +105,10 @@ func (s *Storage) Update(incoming *Storage) {
|
||||
if s.NASStorage != nil && incoming.NASStorage != nil {
|
||||
s.NASStorage.Update(incoming.NASStorage)
|
||||
}
|
||||
case StorageTypeAzureBlob:
|
||||
if s.AzureBlobStorage != nil && incoming.AzureBlobStorage != nil {
|
||||
s.AzureBlobStorage.Update(incoming.AzureBlobStorage)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -101,6 +122,8 @@ func (s *Storage) getSpecificStorage() StorageFileSaver {
|
||||
return s.GoogleDriveStorage
|
||||
case StorageTypeNAS:
|
||||
return s.NASStorage
|
||||
case StorageTypeAzureBlob:
|
||||
return s.AzureBlobStorage
|
||||
default:
|
||||
panic("invalid storage type: " + string(s.Type))
|
||||
}
|
||||
|
||||
@@ -8,15 +8,18 @@ import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"postgresus-backend/internal/config"
|
||||
azure_blob_storage "postgresus-backend/internal/features/storages/models/azure_blob"
|
||||
google_drive_storage "postgresus-backend/internal/features/storages/models/google_drive"
|
||||
local_storage "postgresus-backend/internal/features/storages/models/local"
|
||||
nas_storage "postgresus-backend/internal/features/storages/models/nas"
|
||||
s3_storage "postgresus-backend/internal/features/storages/models/s3"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"postgresus-backend/internal/util/logger"
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
|
||||
"github.com/google/uuid"
|
||||
"github.com/minio/minio-go/v7"
|
||||
"github.com/minio/minio-go/v7/pkg/credentials"
|
||||
@@ -32,6 +35,15 @@ type S3Container struct {
|
||||
region string
|
||||
}
|
||||
|
||||
type AzuriteContainer struct {
|
||||
endpoint string
|
||||
accountName string
|
||||
accountKey string
|
||||
containerNameKey string
|
||||
containerNameStr string
|
||||
connectionString string
|
||||
}
|
||||
|
||||
func Test_Storage_BasicOperations(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
@@ -41,6 +53,10 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
s3Container, err := setupS3Container(ctx)
|
||||
require.NoError(t, err, "Failed to setup S3 container")
|
||||
|
||||
// Setup Azurite connection
|
||||
azuriteContainer, err := setupAzuriteContainer(ctx)
|
||||
require.NoError(t, err, "Failed to setup Azurite container")
|
||||
|
||||
// Setup test file
|
||||
testFilePath, err := setupTestFile()
|
||||
require.NoError(t, err, "Failed to setup test file")
|
||||
@@ -88,6 +104,26 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
Path: "test-files",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "AzureBlobStorage_AccountKey",
|
||||
storage: &azure_blob_storage.AzureBlobStorage{
|
||||
StorageID: uuid.New(),
|
||||
AuthMethod: azure_blob_storage.AuthMethodAccountKey,
|
||||
AccountName: azuriteContainer.accountName,
|
||||
AccountKey: azuriteContainer.accountKey,
|
||||
ContainerName: azuriteContainer.containerNameKey,
|
||||
Endpoint: azuriteContainer.endpoint,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "AzureBlobStorage_ConnectionString",
|
||||
storage: &azure_blob_storage.AzureBlobStorage{
|
||||
StorageID: uuid.New(),
|
||||
AuthMethod: azure_blob_storage.AuthMethodConnectionString,
|
||||
ConnectionString: azuriteContainer.connectionString,
|
||||
ContainerName: azuriteContainer.containerNameStr,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// Add Google Drive storage test only if environment variables are available
|
||||
@@ -112,13 +148,15 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
encryptor := encryption.GetFieldEncryptor()
|
||||
|
||||
t.Run("Test_TestConnection_ConnectionSucceeds", func(t *testing.T) {
|
||||
err := tc.storage.TestConnection()
|
||||
err := tc.storage.TestConnection(encryptor)
|
||||
assert.NoError(t, err, "TestConnection should succeed")
|
||||
})
|
||||
|
||||
t.Run("Test_TestValidation_ValidationSucceeds", func(t *testing.T) {
|
||||
err := tc.storage.Validate()
|
||||
err := tc.storage.Validate(encryptor)
|
||||
assert.NoError(t, err, "Validate should succeed")
|
||||
})
|
||||
|
||||
@@ -128,10 +166,16 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
|
||||
fileID := uuid.New()
|
||||
|
||||
err = tc.storage.SaveFile(logger.GetLogger(), fileID, bytes.NewReader(fileData))
|
||||
err = tc.storage.SaveFile(
|
||||
context.Background(),
|
||||
encryptor,
|
||||
logger.GetLogger(),
|
||||
fileID,
|
||||
bytes.NewReader(fileData),
|
||||
)
|
||||
require.NoError(t, err, "SaveFile should succeed")
|
||||
|
||||
file, err := tc.storage.GetFile(fileID)
|
||||
file, err := tc.storage.GetFile(encryptor, fileID)
|
||||
assert.NoError(t, err, "GetFile should succeed")
|
||||
defer file.Close()
|
||||
|
||||
@@ -145,13 +189,19 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
require.NoError(t, err, "Should be able to read test file")
|
||||
|
||||
fileID := uuid.New()
|
||||
err = tc.storage.SaveFile(logger.GetLogger(), fileID, bytes.NewReader(fileData))
|
||||
err = tc.storage.SaveFile(
|
||||
context.Background(),
|
||||
encryptor,
|
||||
logger.GetLogger(),
|
||||
fileID,
|
||||
bytes.NewReader(fileData),
|
||||
)
|
||||
require.NoError(t, err, "SaveFile should succeed")
|
||||
|
||||
err = tc.storage.DeleteFile(fileID)
|
||||
err = tc.storage.DeleteFile(encryptor, fileID)
|
||||
assert.NoError(t, err, "DeleteFile should succeed")
|
||||
|
||||
file, err := tc.storage.GetFile(fileID)
|
||||
file, err := tc.storage.GetFile(encryptor, fileID)
|
||||
assert.Error(t, err, "GetFile should fail for non-existent file")
|
||||
if file != nil {
|
||||
file.Close()
|
||||
@@ -161,7 +211,7 @@ func Test_Storage_BasicOperations(t *testing.T) {
|
||||
t.Run("Test_TestDeleteNonExistentFile_DoesNotError", func(t *testing.T) {
|
||||
// Try to delete a non-existent file
|
||||
nonExistentID := uuid.New()
|
||||
err := tc.storage.DeleteFile(nonExistentID)
|
||||
err := tc.storage.DeleteFile(encryptor, nonExistentID)
|
||||
assert.NoError(t, err, "DeleteFile should not error for non-existent file")
|
||||
})
|
||||
})
|
||||
@@ -190,7 +240,7 @@ func setupS3Container(ctx context.Context) (*S3Container, error) {
|
||||
secretKey := "testpassword"
|
||||
bucketName := "test-bucket"
|
||||
region := "us-east-1"
|
||||
endpoint := fmt.Sprintf("localhost:%s", env.TestMinioPort)
|
||||
endpoint := fmt.Sprintf("127.0.0.1:%s", env.TestMinioPort)
|
||||
|
||||
// Create MinIO client and ensure bucket exists
|
||||
minioClient, err := minio.New(endpoint, &minio.Options{
|
||||
@@ -230,8 +280,59 @@ func setupS3Container(ctx context.Context) (*S3Container, error) {
|
||||
}, nil
|
||||
}
|
||||
|
||||
func setupAzuriteContainer(ctx context.Context) (*AzuriteContainer, error) {
|
||||
env := config.GetEnv()
|
||||
|
||||
accountName := "devstoreaccount1"
|
||||
// well-known Azurite development account key; not a real secret
|
||||
accountKey := "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
|
||||
serviceURL := fmt.Sprintf("http://127.0.0.1:%s/%s", env.TestAzuriteBlobPort, accountName)
|
||||
containerNameKey := "test-container-key"
|
||||
containerNameStr := "test-container-connstr"
|
||||
|
||||
// Build explicit connection string for Azurite
|
||||
connectionString := fmt.Sprintf(
|
||||
"DefaultEndpointsProtocol=http;AccountName=%s;AccountKey=%s;BlobEndpoint=http://127.0.0.1:%s/%s",
|
||||
accountName,
|
||||
accountKey,
|
||||
env.TestAzuriteBlobPort,
|
||||
accountName,
|
||||
)
|
||||
|
||||
// Create client using connection string to set up containers
|
||||
client, err := azblob.NewClientFromConnectionString(connectionString, nil)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create azblob client: %w", err)
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// Create container for account key auth
|
||||
_, err = client.CreateContainer(ctx, containerNameKey, nil)
|
||||
if err != nil {
|
||||
// Container might already exist, that's okay
|
||||
}
|
||||
|
||||
// Create container for connection string auth
|
||||
_, err = client.CreateContainer(ctx, containerNameStr, nil)
|
||||
if err != nil {
|
||||
// Container might already exist, that's okay
|
||||
}
|
||||
|
||||
return &AzuriteContainer{
|
||||
endpoint: serviceURL,
|
||||
accountName: accountName,
|
||||
accountKey: accountKey,
|
||||
containerNameKey: containerNameKey,
|
||||
containerNameStr: containerNameStr,
|
||||
connectionString: connectionString,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func validateEnvVariables(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
assert.NotEmpty(t, env.TestMinioPort, "TEST_MINIO_PORT is empty")
|
||||
assert.NotEmpty(t, env.TestAzuriteBlobPort, "TEST_AZURITE_BLOB_PORT is empty")
|
||||
assert.NotEmpty(t, env.TestNASPort, "TEST_NAS_PORT is empty")
|
||||
}
|
||||
|
||||
414	backend/internal/features/storages/models/azure_blob/model.go	Normal file
@@ -0,0 +1,414 @@
|
||||
package azure_blob_storage
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"log/slog"
|
||||
"net"
|
||||
"net/http"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
|
||||
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
|
||||
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
|
||||
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
const (
|
||||
azureConnectTimeout = 30 * time.Second
|
||||
azureResponseTimeout = 30 * time.Second
|
||||
azureIdleConnTimeout = 90 * time.Second
|
||||
azureTLSHandshakeTimeout = 30 * time.Second
|
||||
|
||||
// Chunk size for block blob uploads - 16MB provides good balance between
|
||||
// memory usage and upload efficiency. This creates backpressure to pg_dump
|
||||
// by only reading one chunk at a time and waiting for Azure to confirm receipt.
|
||||
azureChunkSize = 16 * 1024 * 1024
|
||||
)
|
||||
|
||||
type readSeekCloser struct {
|
||||
*bytes.Reader
|
||||
}
|
||||
|
||||
func (r *readSeekCloser) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
type AuthMethod string
|
||||
|
||||
const (
|
||||
AuthMethodConnectionString AuthMethod = "CONNECTION_STRING"
|
||||
AuthMethodAccountKey AuthMethod = "ACCOUNT_KEY"
|
||||
)
|
||||
|
||||
type AzureBlobStorage struct {
|
||||
StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
|
||||
AuthMethod AuthMethod `json:"authMethod" gorm:"not null;type:text;column:auth_method"`
|
||||
ConnectionString string `json:"connectionString" gorm:"type:text;column:connection_string"`
|
||||
AccountName string `json:"accountName" gorm:"type:text;column:account_name"`
|
||||
AccountKey string `json:"accountKey" gorm:"type:text;column:account_key"`
|
||||
ContainerName string `json:"containerName" gorm:"not null;type:text;column:container_name"`
|
||||
Endpoint string `json:"endpoint" gorm:"type:text;column:endpoint"`
|
||||
Prefix string `json:"prefix" gorm:"type:text;column:prefix"`
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) TableName() string {
|
||||
return "azure_blob_storages"
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled before start: %w", ctx.Err())
|
||||
default:
|
||||
}
|
||||
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
blobName := s.buildBlobName(fileID.String())
|
||||
blockBlobClient := client.ServiceClient().
|
||||
NewContainerClient(s.ContainerName).
|
||||
NewBlockBlobClient(blobName)
|
||||
|
||||
var blockIDs []string
|
||||
blockNumber := 0
|
||||
buf := make([]byte, azureChunkSize)
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled: %w", ctx.Err())
|
||||
default:
|
||||
}
|
||||
|
||||
n, readErr := io.ReadFull(file, buf)
|
||||
|
||||
if n == 0 && readErr == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
|
||||
return fmt.Errorf("read error: %w", readErr)
|
||||
}
|
||||
|
||||
blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%06d", blockNumber)))
|
||||
|
||||
_, err := blockBlobClient.StageBlock(
|
||||
ctx,
|
||||
blockID,
|
||||
&readSeekCloser{bytes.NewReader(buf[:n])},
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled: %w", ctx.Err())
|
||||
default:
|
||||
return fmt.Errorf("failed to stage block %d: %w", blockNumber, err)
|
||||
}
|
||||
}
|
||||
|
||||
blockIDs = append(blockIDs, blockID)
|
||||
blockNumber++
|
||||
|
||||
if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if len(blockIDs) == 0 {
|
||||
_, err = client.UploadStream(
|
||||
ctx,
|
||||
s.ContainerName,
|
||||
blobName,
|
||||
bytes.NewReader([]byte{}),
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to upload empty blob: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
_, err = blockBlobClient.CommitBlockList(ctx, blockIDs, &blockblob.CommitBlockListOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to commit block list: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
blobName := s.buildBlobName(fileID.String())
|
||||
|
||||
response, err := client.DownloadStream(
|
||||
context.TODO(),
|
||||
s.ContainerName,
|
||||
blobName,
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to download blob from Azure: %w", err)
|
||||
}
|
||||
|
||||
return response.Body, nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
blobName := s.buildBlobName(fileID.String())
|
||||
|
||||
_, err = client.DeleteBlob(
|
||||
context.TODO(),
|
||||
s.ContainerName,
|
||||
blobName,
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
var respErr *azcore.ResponseError
|
||||
if errors.As(err, &respErr) && respErr.StatusCode == 404 {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("failed to delete blob from Azure: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if s.ContainerName == "" {
|
||||
return errors.New("container name is required")
|
||||
}
|
||||
|
||||
switch s.AuthMethod {
|
||||
case AuthMethodConnectionString:
|
||||
if s.ConnectionString == "" {
|
||||
return errors.New(
|
||||
"connection string is required when using CONNECTION_STRING auth method",
|
||||
)
|
||||
}
|
||||
case AuthMethodAccountKey:
|
||||
if s.AccountName == "" {
|
||||
return errors.New("account name is required when using ACCOUNT_KEY auth method")
|
||||
}
|
||||
if s.AccountKey == "" {
|
||||
return errors.New("account key is required when using ACCOUNT_KEY auth method")
|
||||
}
|
||||
default:
|
||||
return fmt.Errorf("invalid auth method: %s", s.AuthMethod)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
|
||||
defer cancel()
|
||||
|
||||
containerClient := client.ServiceClient().NewContainerClient(s.ContainerName)
|
||||
_, err = containerClient.GetProperties(ctx, nil)
|
||||
if err != nil {
|
||||
var respErr *azcore.ResponseError
|
||||
if errors.As(err, &respErr) {
|
||||
if respErr.StatusCode == 404 {
|
||||
return fmt.Errorf("container '%s' does not exist", s.ContainerName)
|
||||
}
|
||||
}
|
||||
if errors.Is(err, context.DeadlineExceeded) {
|
||||
return errors.New("failed to connect to Azure Blob Storage. Please check params")
|
||||
}
|
||||
return fmt.Errorf("failed to connect to Azure Blob Storage: %w", err)
|
||||
}
|
||||
|
||||
testBlobName := s.buildBlobName(uuid.New().String() + "-test")
|
||||
testData := []byte("test connection")
|
||||
|
||||
_, err = client.UploadStream(
|
||||
ctx,
|
||||
s.ContainerName,
|
||||
testBlobName,
|
||||
bytes.NewReader(testData),
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to upload test blob to Azure: %w", err)
|
||||
}
|
||||
|
||||
_, err = client.DeleteBlob(
|
||||
ctx,
|
||||
s.ContainerName,
|
||||
testBlobName,
|
||||
nil,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to delete test blob from Azure: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) HideSensitiveData() {
|
||||
s.ConnectionString = ""
|
||||
s.AccountKey = ""
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
var err error
|
||||
|
||||
if s.ConnectionString != "" {
|
||||
s.ConnectionString, err = encryptor.Encrypt(s.StorageID, s.ConnectionString)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt Azure connection string: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if s.AccountKey != "" {
|
||||
s.AccountKey, err = encryptor.Encrypt(s.StorageID, s.AccountKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt Azure account key: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) Update(incoming *AzureBlobStorage) {
|
||||
s.AuthMethod = incoming.AuthMethod
|
||||
s.ContainerName = incoming.ContainerName
|
||||
s.Endpoint = incoming.Endpoint
|
||||
|
||||
if incoming.ConnectionString != "" {
|
||||
s.ConnectionString = incoming.ConnectionString
|
||||
}
|
||||
|
||||
if incoming.AccountName != "" {
|
||||
s.AccountName = incoming.AccountName
|
||||
}
|
||||
|
||||
if incoming.AccountKey != "" {
|
||||
s.AccountKey = incoming.AccountKey
|
||||
}
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) buildBlobName(fileName string) string {
|
||||
if s.Prefix == "" {
|
||||
return fileName
|
||||
}
|
||||
|
||||
prefix := s.Prefix
|
||||
prefix = strings.TrimPrefix(prefix, "/")
|
||||
|
||||
if !strings.HasSuffix(prefix, "/") {
|
||||
prefix = prefix + "/"
|
||||
}
|
||||
|
||||
return prefix + fileName
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) getClient(encryptor encryption.FieldEncryptor) (*azblob.Client, error) {
|
||||
var client *azblob.Client
|
||||
var err error
|
||||
|
||||
clientOptions := s.buildClientOptions()
|
||||
|
||||
switch s.AuthMethod {
|
||||
case AuthMethodConnectionString:
|
||||
connectionString, decryptErr := encryptor.Decrypt(s.StorageID, s.ConnectionString)
|
||||
if decryptErr != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt Azure connection string: %w", decryptErr)
|
||||
}
|
||||
|
||||
client, err = azblob.NewClientFromConnectionString(connectionString, clientOptions)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf(
|
||||
"failed to create Azure Blob client from connection string: %w",
|
||||
err,
|
||||
)
|
||||
}
|
||||
case AuthMethodAccountKey:
|
||||
accountKey, decryptErr := encryptor.Decrypt(s.StorageID, s.AccountKey)
|
||||
if decryptErr != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt Azure account key: %w", decryptErr)
|
||||
}
|
||||
|
||||
accountURL := s.buildAccountURL()
|
||||
credential, credErr := azblob.NewSharedKeyCredential(s.AccountName, accountKey)
|
||||
if credErr != nil {
|
||||
return nil, fmt.Errorf("failed to create Azure shared key credential: %w", credErr)
|
||||
}
|
||||
|
||||
client, err = azblob.NewClientWithSharedKeyCredential(accountURL, credential, clientOptions)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create Azure Blob client with shared key: %w", err)
|
||||
}
|
||||
default:
|
||||
return nil, fmt.Errorf("unsupported auth method: %s", s.AuthMethod)
|
||||
}
|
||||
|
||||
return client, nil
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) buildClientOptions() *azblob.ClientOptions {
|
||||
transport := &http.Transport{
|
||||
DialContext: (&net.Dialer{
|
||||
Timeout: azureConnectTimeout,
|
||||
}).DialContext,
|
||||
TLSHandshakeTimeout: azureTLSHandshakeTimeout,
|
||||
ResponseHeaderTimeout: azureResponseTimeout,
|
||||
IdleConnTimeout: azureIdleConnTimeout,
|
||||
}
|
||||
|
||||
return &azblob.ClientOptions{
|
||||
ClientOptions: azcore.ClientOptions{
|
||||
Transport: &http.Client{Transport: transport},
|
||||
Retry: policy.RetryOptions{
|
||||
MaxRetries: 0,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (s *AzureBlobStorage) buildAccountURL() string {
|
||||
if s.Endpoint != "" {
|
||||
endpoint := s.Endpoint
|
||||
if !strings.HasPrefix(endpoint, "http://") && !strings.HasPrefix(endpoint, "https://") {
|
||||
endpoint = "https://" + endpoint
|
||||
}
|
||||
return endpoint
|
||||
}
|
||||
|
||||
return fmt.Sprintf("https://%s.blob.core.windows.net/", s.AccountName)
|
||||
}
|
||||
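For reference, the chunked upload that `SaveFile` performs above (stage fixed-size blocks, then commit the block list) can be reproduced standalone with the same Azure SDK calls this file already uses. A minimal sketch follows; the connection string source, container name, and blob name are placeholders, and the `readSeekCloser` wrapper mirrors the one defined in the model.

```go
// Standalone sketch of the StageBlock/CommitBlockList upload pattern used above.
// Assumes AZURE_STORAGE_CONNECTION_STRING is set and the container "backups" exists.
package main

import (
	"bytes"
	"context"
	"encoding/base64"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

type readSeekCloser struct{ *bytes.Reader }

func (readSeekCloser) Close() error { return nil }

func main() {
	const chunkSize = 16 * 1024 * 1024 // same 16MB block size as the model

	connStr := os.Getenv("AZURE_STORAGE_CONNECTION_STRING")
	client, err := azblob.NewClientFromConnectionString(connStr, nil)
	if err != nil {
		log.Fatal(err)
	}

	blockBlob := client.ServiceClient().
		NewContainerClient("backups").
		NewBlockBlobClient("example-blob")

	ctx := context.Background()
	var src io.Reader = os.Stdin // e.g. pg_dump piped into the program
	buf := make([]byte, chunkSize)
	var blockIDs []string

	for i := 0; ; i++ {
		n, readErr := io.ReadFull(src, buf)
		if n > 0 {
			// Block IDs must be base64-encoded and of equal length.
			id := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%06d", i)))
			if _, err := blockBlob.StageBlock(ctx, id, &readSeekCloser{bytes.NewReader(buf[:n])}, nil); err != nil {
				log.Fatal(err)
			}
			blockIDs = append(blockIDs, id)
		}
		if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
			break
		}
		if readErr != nil {
			log.Fatal(readErr)
		}
	}

	if _, err := blockBlob.CommitBlockList(ctx, blockIDs, nil); err != nil {
		log.Fatal(err)
	}
}
```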
@@ -7,6 +7,9 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log/slog"
|
||||
"net"
|
||||
"net/http"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -15,9 +18,22 @@ import (
|
||||
"golang.org/x/oauth2/google"
|
||||
|
||||
drive "google.golang.org/api/drive/v3"
|
||||
"google.golang.org/api/googleapi"
|
||||
"google.golang.org/api/option"
|
||||
)
|
||||
|
||||
const (
|
||||
gdConnectTimeout = 30 * time.Second
|
||||
gdResponseTimeout = 30 * time.Second
|
||||
gdIdleConnTimeout = 90 * time.Second
|
||||
gdTLSHandshakeTimeout = 30 * time.Second
|
||||
|
||||
// Chunk size for Google Drive resumable uploads - 16MB provides good balance
|
||||
// between memory usage and upload efficiency. Google Drive requires chunks
|
||||
// to be multiples of 256KB for resumable uploads.
|
||||
gdChunkSize = 16 * 1024 * 1024
|
||||
)
|
||||
|
||||
type GoogleDriveStorage struct {
|
||||
StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
|
||||
ClientID string `json:"clientId" gorm:"not null;type:text;column:client_id"`
|
||||
@@ -30,30 +46,44 @@ func (s *GoogleDriveStorage) TableName() string {
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
return s.withRetryOnAuth(func(driveService *drive.Service) error {
|
||||
ctx := context.Background()
|
||||
return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
|
||||
filename := fileID.String()
|
||||
|
||||
// Ensure the postgresus_backups folder exists
|
||||
folderID, err := s.ensureBackupsFolderExists(ctx, driveService)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create/find backups folder: %w", err)
|
||||
}
|
||||
|
||||
// Delete any previous copy so we keep at most one object per logical file.
|
||||
_ = s.deleteByName(ctx, driveService, filename, folderID) // ignore "not found"
|
||||
_ = s.deleteByName(ctx, driveService, filename, folderID)
|
||||
|
||||
fileMeta := &drive.File{
|
||||
Name: filename,
|
||||
Parents: []string{folderID},
|
||||
}
|
||||
|
||||
_, err = driveService.Files.Create(fileMeta).Media(file).Context(ctx).Do()
|
||||
backpressureReader := &backpressureReader{
|
||||
reader: file,
|
||||
ctx: ctx,
|
||||
chunkSize: gdChunkSize,
|
||||
buf: make([]byte, gdChunkSize),
|
||||
}
|
||||
|
||||
_, err = driveService.Files.Create(fileMeta).
|
||||
Media(backpressureReader, googleapi.ChunkSize(gdChunkSize)).
|
||||
Context(ctx).
|
||||
Do()
|
||||
if err != nil {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled: %w", ctx.Err())
|
||||
default:
|
||||
}
|
||||
return fmt.Errorf("failed to upload file to Google Drive: %w", err)
|
||||
}
|
||||
|
||||
@@ -68,34 +98,95 @@ func (s *GoogleDriveStorage) SaveFile(
|
||||
})
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
type backpressureReader struct {
|
||||
reader io.Reader
|
||||
ctx context.Context
|
||||
chunkSize int
|
||||
buf []byte
|
||||
bufStart int
|
||||
bufEnd int
|
||||
totalBytes int64
|
||||
chunkCount int
|
||||
}
|
||||
|
||||
func (r *backpressureReader) Read(p []byte) (n int, err error) {
|
||||
select {
|
||||
case <-r.ctx.Done():
|
||||
return 0, r.ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
if r.bufStart >= r.bufEnd {
|
||||
r.chunkCount++
|
||||
|
||||
bytesRead, readErr := io.ReadFull(r.reader, r.buf)
|
||||
if bytesRead > 0 {
|
||||
r.bufStart = 0
|
||||
r.bufEnd = bytesRead
|
||||
}
|
||||
|
||||
if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
|
||||
return 0, readErr
|
||||
}
|
||||
|
||||
if bytesRead == 0 && readErr == io.EOF {
|
||||
return 0, io.EOF
|
||||
}
|
||||
}
|
||||
|
||||
n = copy(p, r.buf[r.bufStart:r.bufEnd])
|
||||
r.bufStart += n
|
||||
r.totalBytes += int64(n)
|
||||
|
||||
if r.bufStart >= r.bufEnd {
|
||||
select {
|
||||
case <-r.ctx.Done():
|
||||
return n, r.ctx.Err()
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
return n, nil
|
||||
}
|
||||
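`backpressureReader` above exists so that a cancelled context stops a Google Drive resumable upload between chunk reads instead of blocking until the HTTP layer gives up. That cancellation behaviour is easy to check in isolation; the sketch below uses a simplified stand-in reader (`ctxReader` is illustrative, not the real type).

```go
// Minimal illustration of a context-aware reader: once the context is cancelled,
// every subsequent Read fails fast with ctx.Err().
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
)

type ctxReader struct {
	ctx context.Context
	r   io.Reader
}

func (c *ctxReader) Read(p []byte) (int, error) {
	select {
	case <-c.ctx.Done():
		return 0, c.ctx.Err()
	default:
		return c.r.Read(p)
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	r := &ctxReader{ctx: ctx, r: bytes.NewReader(make([]byte, 1<<20))}

	buf := make([]byte, 64*1024)
	if _, err := r.Read(buf); err != nil {
		fmt.Println("unexpected:", err) // first read succeeds
	}

	cancel()
	_, err := r.Read(buf)
	fmt.Println(err) // context canceled
}
```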
|
||||
func (s *GoogleDriveStorage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
var result io.ReadCloser
|
||||
err := s.withRetryOnAuth(func(driveService *drive.Service) error {
|
||||
folderID, err := s.findBackupsFolder(driveService)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to find backups folder: %w", err)
|
||||
}
|
||||
err := s.withRetryOnAuth(
|
||||
context.Background(),
|
||||
encryptor,
|
||||
func(driveService *drive.Service) error {
|
||||
folderID, err := s.findBackupsFolder(driveService)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to find backups folder: %w", err)
|
||||
}
|
||||
|
||||
fileIDGoogle, err := s.lookupFileID(driveService, fileID.String(), folderID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fileIDGoogle, err := s.lookupFileID(driveService, fileID.String(), folderID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
resp, err := driveService.Files.Get(fileIDGoogle).Download()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to download file from Google Drive: %w", err)
|
||||
}
|
||||
resp, err := driveService.Files.Get(fileIDGoogle).Download()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to download file from Google Drive: %w", err)
|
||||
}
|
||||
|
||||
result = resp.Body
|
||||
return nil
|
||||
})
|
||||
result = resp.Body
|
||||
return nil
|
||||
},
|
||||
)
|
||||
|
||||
return result, err
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
return s.withRetryOnAuth(func(driveService *drive.Service) error {
|
||||
ctx := context.Background()
|
||||
func (s *GoogleDriveStorage) DeleteFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) error {
|
||||
ctx := context.Background()
|
||||
return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
|
||||
folderID, err := s.findBackupsFolder(driveService)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to find backups folder: %w", err)
|
||||
@@ -105,7 +196,7 @@ func (s *GoogleDriveStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
})
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) Validate() error {
|
||||
func (s *GoogleDriveStorage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
switch {
|
||||
case s.ClientID == "":
|
||||
return errors.New("client ID is required")
|
||||
@@ -115,7 +206,12 @@ func (s *GoogleDriveStorage) Validate() error {
|
||||
return errors.New("token JSON is required")
|
||||
}
|
||||
|
||||
// Also validate that the token JSON contains a refresh token
|
||||
// Skip JSON validation if token is already encrypted
|
||||
if strings.HasPrefix(s.TokenJSON, "enc:") {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Validate that the token JSON contains a refresh token
|
||||
var token oauth2.Token
|
||||
if err := json.Unmarshal([]byte(s.TokenJSON), &token); err != nil {
|
||||
return fmt.Errorf("invalid token JSON format: %w", err)
|
||||
@@ -128,9 +224,9 @@ func (s *GoogleDriveStorage) Validate() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) TestConnection() error {
|
||||
return s.withRetryOnAuth(func(driveService *drive.Service) error {
|
||||
ctx := context.Background()
|
||||
func (s *GoogleDriveStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
ctx := context.Background()
|
||||
return s.withRetryOnAuth(ctx, encryptor, func(driveService *drive.Service) error {
|
||||
testFilename := "test-connection-" + uuid.New().String()
|
||||
testData := []byte("test")
|
||||
|
||||
@@ -196,6 +292,26 @@ func (s *GoogleDriveStorage) HideSensitiveData() {
|
||||
s.TokenJSON = ""
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
var err error
|
||||
|
||||
if s.ClientSecret != "" {
|
||||
s.ClientSecret, err = encryptor.Encrypt(s.StorageID, s.ClientSecret)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt Google Drive client secret: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if s.TokenJSON != "" {
|
||||
s.TokenJSON, err = encryptor.Encrypt(s.StorageID, s.TokenJSON)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt Google Drive token JSON: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) Update(incoming *GoogleDriveStorage) {
|
||||
s.ClientID = incoming.ClientID
|
||||
|
||||
@@ -209,18 +325,34 @@ func (s *GoogleDriveStorage) Update(incoming *GoogleDriveStorage) {
|
||||
}
|
||||
|
||||
// withRetryOnAuth executes the provided function with retry logic for authentication errors
|
||||
func (s *GoogleDriveStorage) withRetryOnAuth(fn func(*drive.Service) error) error {
|
||||
driveService, err := s.getDriveService()
|
||||
func (s *GoogleDriveStorage) withRetryOnAuth(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fn func(*drive.Service) error,
|
||||
) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
driveService, err := s.getDriveService(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = fn(driveService)
|
||||
if err != nil && s.isAuthError(err) {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
// Try to refresh token and retry once
|
||||
fmt.Printf("Google Drive auth error detected, attempting token refresh: %v\n", err)
|
||||
|
||||
if refreshErr := s.refreshToken(); refreshErr != nil {
|
||||
if refreshErr := s.refreshToken(encryptor); refreshErr != nil {
|
||||
// If refresh fails, return a more helpful error message
|
||||
if strings.Contains(refreshErr.Error(), "invalid_grant") ||
|
||||
strings.Contains(refreshErr.Error(), "refresh token") {
|
||||
@@ -237,7 +369,7 @@ func (s *GoogleDriveStorage) withRetryOnAuth(fn func(*drive.Service) error) erro
|
||||
fmt.Printf("Token refresh successful, retrying operation\n")
|
||||
|
||||
// Get new service with refreshed token
|
||||
driveService, err = s.getDriveService()
|
||||
driveService, err = s.getDriveService(encryptor)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create service after token refresh: %w", err)
|
||||
}
|
||||
@@ -268,13 +400,24 @@ func (s *GoogleDriveStorage) isAuthError(err error) bool {
|
||||
}
|
||||
|
||||
// refreshToken refreshes the OAuth2 token and updates the TokenJSON field
|
||||
func (s *GoogleDriveStorage) refreshToken() error {
|
||||
if err := s.Validate(); err != nil {
|
||||
func (s *GoogleDriveStorage) refreshToken(encryptor encryption.FieldEncryptor) error {
|
||||
if err := s.Validate(encryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Decrypt credentials before use
|
||||
clientSecret, err := encryptor.Decrypt(s.StorageID, s.ClientSecret)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt Google Drive client secret: %w", err)
|
||||
}
|
||||
|
||||
tokenJSON, err := encryptor.Decrypt(s.StorageID, s.TokenJSON)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decrypt Google Drive token JSON: %w", err)
|
||||
}
|
||||
|
||||
var token oauth2.Token
|
||||
if err := json.Unmarshal([]byte(s.TokenJSON), &token); err != nil {
|
||||
if err := json.Unmarshal([]byte(tokenJSON), &token); err != nil {
|
||||
return fmt.Errorf("invalid token JSON: %w", err)
|
||||
}
|
||||
|
||||
@@ -289,12 +432,12 @@ func (s *GoogleDriveStorage) refreshToken() error {
|
||||
token.Expiry)
|
||||
|
||||
// Debug: Print the full token JSON structure (sensitive data masked)
|
||||
fmt.Printf("Original token JSON structure: %s\n", maskSensitiveData(s.TokenJSON))
|
||||
fmt.Printf("Original token JSON structure: %s\n", maskSensitiveData(tokenJSON))
|
||||
|
||||
ctx := context.Background()
|
||||
cfg := &oauth2.Config{
|
||||
ClientID: s.ClientID,
|
||||
ClientSecret: s.ClientSecret,
|
||||
ClientSecret: clientSecret,
|
||||
Endpoint: google.Endpoint,
|
||||
Scopes: []string{"https://www.googleapis.com/auth/drive.file"},
|
||||
}
|
||||
@@ -330,7 +473,7 @@ func (s *GoogleDriveStorage) refreshToken() error {
|
||||
newToken.RefreshToken = token.RefreshToken
|
||||
}
|
||||
|
||||
// Update the stored token JSON
|
||||
// Update the stored token JSON (keep as plaintext in memory, encryption happens on save)
|
||||
newTokenJSON, err := json.Marshal(newToken)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal refreshed token: %w", err)
|
||||
@@ -368,13 +511,25 @@ func truncateString(s string, maxLen int) string {
|
||||
return s[:maxLen]
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) getDriveService() (*drive.Service, error) {
|
||||
if err := s.Validate(); err != nil {
|
||||
func (s *GoogleDriveStorage) getDriveService(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
) (*drive.Service, error) {
|
||||
if err := s.Validate(encryptor); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
clientSecret, err := encryptor.Decrypt(s.StorageID, s.ClientSecret)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt Google Drive client secret: %w", err)
|
||||
}
|
||||
|
||||
tokenJSON, err := encryptor.Decrypt(s.StorageID, s.TokenJSON)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt Google Drive token JSON: %w", err)
|
||||
}
|
||||
|
||||
var token oauth2.Token
|
||||
if err := json.Unmarshal([]byte(s.TokenJSON), &token); err != nil {
|
||||
if err := json.Unmarshal([]byte(tokenJSON), &token); err != nil {
|
||||
return nil, fmt.Errorf("invalid token JSON: %w", err)
|
||||
}
|
||||
|
||||
@@ -382,23 +537,23 @@ func (s *GoogleDriveStorage) getDriveService() (*drive.Service, error) {
|
||||
|
||||
cfg := &oauth2.Config{
|
||||
ClientID: s.ClientID,
|
||||
ClientSecret: s.ClientSecret,
|
||||
ClientSecret: clientSecret,
|
||||
Endpoint: google.Endpoint,
|
||||
Scopes: []string{"https://www.googleapis.com/auth/drive.file"},
|
||||
}
|
||||
|
||||
tokenSource := cfg.TokenSource(ctx, &token)
|
||||
|
||||
// Force token validation to ensure we're using the current token
|
||||
currentToken, err := tokenSource.Token()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get current token: %w", err)
|
||||
}
|
||||
|
||||
// Create a new token source with the validated token
|
||||
validatedTokenSource := oauth2.StaticTokenSource(currentToken)
|
||||
|
||||
driveService, err := drive.NewService(ctx, option.WithTokenSource(validatedTokenSource))
|
||||
httpClient := s.buildHTTPClient(validatedTokenSource)
|
||||
|
||||
driveService, err := drive.NewService(ctx, option.WithHTTPClient(httpClient))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to create Drive client: %w", err)
|
||||
}
|
||||
@@ -406,6 +561,24 @@ func (s *GoogleDriveStorage) getDriveService() (*drive.Service, error) {
|
||||
return driveService, nil
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) buildHTTPClient(tokenSource oauth2.TokenSource) *http.Client {
|
||||
transport := &http.Transport{
|
||||
DialContext: (&net.Dialer{
|
||||
Timeout: gdConnectTimeout,
|
||||
}).DialContext,
|
||||
TLSHandshakeTimeout: gdTLSHandshakeTimeout,
|
||||
ResponseHeaderTimeout: gdResponseTimeout,
|
||||
IdleConnTimeout: gdIdleConnTimeout,
|
||||
}
|
||||
|
||||
return &http.Client{
|
||||
Transport: &oauth2.Transport{
|
||||
Source: tokenSource,
|
||||
Base: transport,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (s *GoogleDriveStorage) lookupFileID(
|
||||
driveService *drive.Service,
|
||||
name string,
|
||||
|
||||
@@ -1,17 +1,27 @@
|
||||
package local_storage
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"log/slog"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
files_utils "postgresus-backend/internal/util/files"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
const (
|
||||
// Chunk size for local storage writes - 16MB provides good balance between
|
||||
// memory usage and write efficiency. This creates backpressure to pg_dump
|
||||
// by only reading one chunk at a time and waiting for disk to confirm receipt.
|
||||
localChunkSize = 16 * 1024 * 1024
|
||||
)
|
||||
|
||||
// LocalStorage uses ./postgresus_local_backups folder as a
|
||||
// directory for backups and ./postgresus_local_temp folder as a
|
||||
// directory for temp files
|
||||
@@ -23,7 +33,19 @@ func (l *LocalStorage) TableName() string {
|
||||
return "local_storages"
|
||||
}
|
||||
|
||||
func (l *LocalStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader) error {
|
||||
func (l *LocalStorage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
logger.Info("Starting to save file to local storage", "fileId", fileID.String())
|
||||
|
||||
err := files_utils.EnsureDirectories([]string{
|
||||
@@ -54,7 +76,7 @@ func (l *LocalStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.R
|
||||
}()
|
||||
|
||||
logger.Debug("Copying file data to temp file", "fileId", fileID.String())
|
||||
_, err = io.Copy(tempFile, file)
|
||||
_, err = copyWithContext(ctx, tempFile, file)
|
||||
if err != nil {
|
||||
logger.Error("Failed to write to temp file", "fileId", fileID.String(), "error", err)
|
||||
return fmt.Errorf("failed to write to temp file: %w", err)
|
||||
@@ -107,7 +129,10 @@ func (l *LocalStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.R
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LocalStorage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
func (l *LocalStorage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
filePath := filepath.Join(config.GetEnv().DataFolder, fileID.String())
|
||||
|
||||
if _, err := os.Stat(filePath); os.IsNotExist(err) {
|
||||
@@ -122,7 +147,7 @@ func (l *LocalStorage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
return file, nil
|
||||
}
|
||||
|
||||
func (l *LocalStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
func (l *LocalStorage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
|
||||
filePath := filepath.Join(config.GetEnv().DataFolder, fileID.String())
|
||||
|
||||
if _, err := os.Stat(filePath); os.IsNotExist(err) {
|
||||
@@ -136,11 +161,11 @@ func (l *LocalStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LocalStorage) Validate() error {
|
||||
func (l *LocalStorage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LocalStorage) TestConnection() error {
|
||||
func (l *LocalStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
testFile := filepath.Join(config.GetEnv().TempFolder, "test_connection")
|
||||
f, err := os.Create(testFile)
|
||||
if err != nil {
|
||||
@@ -160,5 +185,77 @@ func (l *LocalStorage) TestConnection() error {
|
||||
func (l *LocalStorage) HideSensitiveData() {
|
||||
}
|
||||
|
||||
func (l *LocalStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LocalStorage) Update(incoming *LocalStorage) {
|
||||
}
|
||||
|
||||
type writeResult struct {
|
||||
bytesWritten int
|
||||
writeErr error
|
||||
}
|
||||
|
||||
func copyWithContext(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
|
||||
buf := make([]byte, localChunkSize)
|
||||
var written int64
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return written, ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
nr, readErr := io.ReadFull(src, buf)
|
||||
|
||||
if nr == 0 && readErr == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
|
||||
return written, readErr
|
||||
}
|
||||
|
||||
writeResultCh := make(chan writeResult, 1)
|
||||
go func() {
|
||||
nw, writeErr := dst.Write(buf[0:nr])
|
||||
writeResultCh <- writeResult{nw, writeErr}
|
||||
}()
|
||||
|
||||
var nw int
|
||||
var writeErr error
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return written, ctx.Err()
|
||||
case result := <-writeResultCh:
|
||||
nw = result.bytesWritten
|
||||
writeErr = result.writeErr
|
||||
}
|
||||
|
||||
if nw < 0 || nr < nw {
|
||||
nw = 0
|
||||
if writeErr == nil {
|
||||
writeErr = errors.New("invalid write result")
|
||||
}
|
||||
}
|
||||
|
||||
if writeErr != nil {
|
||||
return written, writeErr
|
||||
}
|
||||
|
||||
if nr != nw {
|
||||
return written, io.ErrShortWrite
|
||||
}
|
||||
|
||||
written += int64(nw)
|
||||
|
||||
if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return written, nil
|
||||
}
|
||||
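Running each `Write` in its own goroutine is what lets `copyWithContext` above return promptly when the context is cancelled while a disk or NAS write is stalled, at the cost of the abandoned goroutine finishing in the background. A self-contained sketch of that pattern with a deliberately slow writer; all names here are illustrative and not part of the codebase.

```go
// Why the write runs in a goroutine: cancellation must not wait for a stuck destination.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"time"
)

// slowWriter simulates a stalled destination (e.g. an unresponsive NAS share).
type slowWriter struct{}

func (slowWriter) Write(p []byte) (int, error) {
	time.Sleep(5 * time.Second)
	return len(p), nil
}

// copyCancellable copies src to dst one chunk at a time, abandoning a stuck
// write as soon as the context is done (the same idea as copyWithContext).
func copyCancellable(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
	buf := make([]byte, 4*1024)
	var written int64
	for {
		n, readErr := src.Read(buf)
		if n > 0 {
			done := make(chan error, 1)
			go func() {
				_, err := dst.Write(buf[:n])
				done <- err
			}()
			select {
			case <-ctx.Done():
				// Return immediately; the write goroutine finishes on its own.
				return written, ctx.Err()
			case err := <-done:
				if err != nil {
					return written, err
				}
				written += int64(n)
			}
		}
		if readErr == io.EOF {
			return written, nil
		}
		if readErr != nil {
			return written, readErr
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
	defer cancel()

	n, err := copyCancellable(ctx, slowWriter{}, bytes.NewReader([]byte("backup data")))
	fmt.Println(n, err) // 0 context deadline exceeded
}
```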
|
||||
@@ -1,6 +1,7 @@
|
||||
package nas_storage
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
@@ -8,6 +9,7 @@ import (
|
||||
"log/slog"
|
||||
"net"
|
||||
"path/filepath"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -15,6 +17,13 @@ import (
|
||||
"github.com/hirochachacha/go-smb2"
|
||||
)
|
||||
|
||||
const (
|
||||
// Chunk size for NAS uploads - 16MB provides good balance between
|
||||
// memory usage and upload efficiency. This creates backpressure to pg_dump
|
||||
// by only reading one chunk at a time and waiting for NAS to confirm receipt.
|
||||
nasChunkSize = 16 * 1024 * 1024
|
||||
)
|
||||
|
||||
type NASStorage struct {
|
||||
StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
|
||||
Host string `json:"host" gorm:"not null;type:text;column:host"`
|
||||
@@ -31,10 +40,22 @@ func (n *NASStorage) TableName() string {
|
||||
return "nas_storages"
|
||||
}
|
||||
|
||||
func (n *NASStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader) error {
|
||||
func (n *NASStorage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
logger.Info("Starting to save file to NAS storage", "fileId", fileID.String(), "host", n.Host)
|
||||
|
||||
session, err := n.createSession()
|
||||
session, err := n.createSessionWithContext(ctx, encryptor)
|
||||
if err != nil {
|
||||
logger.Error("Failed to create NAS session", "fileId", fileID.String(), "error", err)
|
||||
return fmt.Errorf("failed to create NAS session: %w", err)
|
||||
@@ -115,7 +136,7 @@ func (n *NASStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Rea
|
||||
}()
|
||||
|
||||
logger.Debug("Copying file data to NAS", "fileId", fileID.String())
|
||||
_, err = io.Copy(nasFile, file)
|
||||
_, err = copyWithContext(ctx, nasFile, file)
|
||||
if err != nil {
|
||||
logger.Error("Failed to write file to NAS", "fileId", fileID.String(), "error", err)
|
||||
return fmt.Errorf("failed to write file to NAS: %w", err)
|
||||
@@ -131,8 +152,11 @@ func (n *NASStorage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Rea
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
session, err := n.createSession()
|
||||
func (n *NASStorage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
session, err := n.createSession(encryptor)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create NAS session: %w", err)
|
||||
}
|
||||
@@ -168,8 +192,8 @@ func (n *NASStorage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
session, err := n.createSession()
|
||||
func (n *NASStorage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
|
||||
session, err := n.createSession(encryptor)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create NAS session: %w", err)
|
||||
}
|
||||
@@ -202,7 +226,7 @@ func (n *NASStorage) DeleteFile(fileID uuid.UUID) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) Validate() error {
|
||||
func (n *NASStorage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if n.Host == "" {
|
||||
return errors.New("NAS host is required")
|
||||
}
|
||||
@@ -219,12 +243,11 @@ func (n *NASStorage) Validate() error {
|
||||
return errors.New("NAS port must be between 1 and 65535")
|
||||
}
|
||||
|
||||
// Test the configuration by creating a session
|
||||
return n.TestConnection()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) TestConnection() error {
|
||||
session, err := n.createSession()
|
||||
func (n *NASStorage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
session, err := n.createSession(encryptor)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to connect to NAS: %w", err)
|
||||
}
|
||||
@@ -255,6 +278,18 @@ func (n *NASStorage) HideSensitiveData() {
|
||||
n.Password = ""
|
||||
}
|
||||
|
||||
func (n *NASStorage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
if n.Password != "" {
|
||||
encrypted, err := encryptor.Encrypt(n.StorageID, n.Password)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt NAS password: %w", err)
|
||||
}
|
||||
n.Password = encrypted
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) Update(incoming *NASStorage) {
|
||||
n.Host = incoming.Host
|
||||
n.Port = incoming.Port
|
||||
@@ -269,23 +304,33 @@ func (n *NASStorage) Update(incoming *NASStorage) {
|
||||
}
|
||||
}
|
||||
|
||||
func (n *NASStorage) createSession() (*smb2.Session, error) {
|
||||
// Create connection with timeout
|
||||
conn, err := n.createConnection()
|
||||
func (n *NASStorage) createSession(encryptor encryption.FieldEncryptor) (*smb2.Session, error) {
|
||||
return n.createSessionWithContext(context.Background(), encryptor)
|
||||
}
|
||||
|
||||
func (n *NASStorage) createSessionWithContext(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
) (*smb2.Session, error) {
|
||||
conn, err := n.createConnectionWithContext(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Create SMB2 dialer
|
||||
password, err := encryptor.Decrypt(n.StorageID, n.Password)
|
||||
if err != nil {
|
||||
_ = conn.Close()
|
||||
return nil, fmt.Errorf("failed to decrypt NAS password: %w", err)
|
||||
}
|
||||
|
||||
d := &smb2.Dialer{
|
||||
Initiator: &smb2.NTLMInitiator{
|
||||
User: n.Username,
|
||||
Password: n.Password,
|
||||
Password: password,
|
||||
Domain: n.Domain,
|
||||
},
|
||||
}
|
||||
|
||||
// Create session
|
||||
session, err := d.Dial(conn)
|
||||
if err != nil {
|
||||
_ = conn.Close()
|
||||
@@ -295,34 +340,30 @@ func (n *NASStorage) createSession() (*smb2.Session, error) {
|
||||
return session, nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) createConnection() (net.Conn, error) {
|
||||
func (n *NASStorage) createConnectionWithContext(ctx context.Context) (net.Conn, error) {
|
||||
address := net.JoinHostPort(n.Host, fmt.Sprintf("%d", n.Port))
|
||||
|
||||
// Create connection with timeout
|
||||
dialer := &net.Dialer{
|
||||
Timeout: 10 * time.Second,
|
||||
Timeout: 30 * time.Second,
|
||||
}
|
||||
|
||||
if n.UseSSL {
|
||||
// Use TLS connection
|
||||
tlsConfig := &tls.Config{
|
||||
ServerName: n.Host,
|
||||
InsecureSkipVerify: false, // Change to true if you want to skip cert verification
|
||||
InsecureSkipVerify: false,
|
||||
}
|
||||
|
||||
conn, err := tls.DialWithDialer(dialer, "tcp", address, tlsConfig)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create SSL connection to %s: %w", address, err)
|
||||
}
|
||||
return conn, nil
|
||||
} else {
|
||||
// Use regular TCP connection
|
||||
conn, err := dialer.Dial("tcp", address)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create connection to %s: %w", address, err)
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
conn, err := dialer.DialContext(ctx, "tcp", address)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create connection to %s: %w", address, err)
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
func (n *NASStorage) ensureDirectory(fs *smb2.Share, path string) error {
|
||||
@@ -417,3 +458,71 @@ func (r *nasFileReader) Close() error {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
type writeResult struct {
|
||||
bytesWritten int
|
||||
writeErr error
|
||||
}
|
||||
|
||||
func copyWithContext(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
|
||||
buf := make([]byte, nasChunkSize)
|
||||
var written int64
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return written, ctx.Err()
|
||||
default:
|
||||
}
|
||||
|
||||
nr, readErr := io.ReadFull(src, buf)
|
||||
|
||||
if nr == 0 && readErr == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
|
||||
return written, readErr
|
||||
}
|
||||
|
||||
writeResultCh := make(chan writeResult, 1)
|
||||
go func() {
|
||||
nw, writeErr := dst.Write(buf[0:nr])
|
||||
writeResultCh <- writeResult{nw, writeErr}
|
||||
}()
|
||||
|
||||
var nw int
|
||||
var writeErr error
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return written, ctx.Err()
|
||||
case result := <-writeResultCh:
|
||||
nw = result.bytesWritten
|
||||
writeErr = result.writeErr
|
||||
}
|
||||
|
||||
if nw < 0 || nr < nw {
|
||||
nw = 0
|
||||
if writeErr == nil {
|
||||
writeErr = errors.New("invalid write result")
|
||||
}
|
||||
}
|
||||
|
||||
if writeErr != nil {
|
||||
return written, writeErr
|
||||
}
|
||||
|
||||
if nr != nw {
|
||||
return written, io.ErrShortWrite
|
||||
}
|
||||
|
||||
written += int64(nw)
|
||||
|
||||
if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return written, nil
|
||||
}
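
Not part of the diff: a minimal usage sketch of how the chunked copy above throttles pg_dump when SaveFile reads from an io.Pipe. The helper name and pg_dump arguments are assumptions for illustration; only copyWithContext and SaveFile come from this change, and the snippet assumes the usual imports (context, io, log/slog, os/exec, github.com/google/uuid).

func streamDumpToNAS(ctx context.Context, nas *NASStorage, enc encryption.FieldEncryptor, logger *slog.Logger) error {
	pr, pw := io.Pipe()
	// Hypothetical pg_dump invocation; the real command is built elsewhere in the codebase.
	cmd := exec.CommandContext(ctx, "pg_dump", "--format=custom", "exampledb")
	cmd.Stdout = pw
	go func() {
		// Close the write end when pg_dump exits so the NAS reader sees EOF (or the error).
		pw.CloseWithError(cmd.Run())
	}()
	// SaveFile copies one 16MB chunk at a time, so pg_dump blocks on the pipe
	// until the NAS confirms each chunk.
	return nas.SaveFile(ctx, enc, logger, uuid.New(), pr)
}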
|
||||
|
||||
@@ -7,6 +7,9 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log/slog"
|
||||
"net"
|
||||
"net/http"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -15,6 +18,18 @@ import (
|
||||
"github.com/minio/minio-go/v7/pkg/credentials"
|
||||
)
|
||||
|
||||
const (
|
||||
s3ConnectTimeout = 30 * time.Second
|
||||
s3ResponseTimeout = 30 * time.Second
|
||||
s3IdleConnTimeout = 90 * time.Second
|
||||
s3TLSHandshakeTimeout = 30 * time.Second
|
||||
|
||||
// Chunk size for multipart uploads - 16MB provides good balance between
|
||||
// memory usage and upload efficiency. This creates backpressure to pg_dump
|
||||
// by only reading one chunk at a time and waiting for S3 to confirm receipt.
|
||||
multipartChunkSize = 16 * 1024 * 1024
|
||||
)
|
||||
|
||||
type S3Storage struct {
|
||||
StorageID uuid.UUID `json:"storageId" gorm:"primaryKey;type:uuid;column:storage_id"`
|
||||
S3Bucket string `json:"s3Bucket" gorm:"not null;type:text;column:s3_bucket"`
|
||||
@@ -22,44 +37,153 @@ type S3Storage struct {
|
||||
S3AccessKey string `json:"s3AccessKey" gorm:"not null;type:text;column:s3_access_key"`
|
||||
S3SecretKey string `json:"s3SecretKey" gorm:"not null;type:text;column:s3_secret_key"`
|
||||
S3Endpoint string `json:"s3Endpoint" gorm:"type:text;column:s3_endpoint"`
|
||||
|
||||
S3Prefix string `json:"s3Prefix" gorm:"type:text;column:s3_prefix"`
|
||||
S3UseVirtualHostedStyle bool `json:"s3UseVirtualHostedStyle" gorm:"default:false;column:s3_use_virtual_hosted_style"`
|
||||
}
|
||||
|
||||
func (s *S3Storage) TableName() string {
|
||||
return "s3_storages"
|
||||
}
|
||||
|
||||
func (s *S3Storage) SaveFile(logger *slog.Logger, fileID uuid.UUID, file io.Reader) error {
|
||||
client, err := s.getClient()
|
||||
func (s *S3Storage) SaveFile(
|
||||
ctx context.Context,
|
||||
encryptor encryption.FieldEncryptor,
|
||||
logger *slog.Logger,
|
||||
fileID uuid.UUID,
|
||||
file io.Reader,
|
||||
) error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled before start: %w", ctx.Err())
|
||||
default:
|
||||
}
|
||||
|
||||
coreClient, err := s.getCoreClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Upload the file using MinIO client with streaming (size = -1 for unknown size)
|
||||
_, err = client.PutObject(
|
||||
context.TODO(),
|
||||
objectKey := s.buildObjectKey(fileID.String())
|
||||
|
||||
uploadID, err := coreClient.NewMultipartUpload(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
fileID.String(),
|
||||
file,
|
||||
-1,
|
||||
objectKey,
|
||||
minio.PutObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to upload file to S3: %w", err)
|
||||
return fmt.Errorf("failed to initiate multipart upload: %w", err)
|
||||
}
|
||||
|
||||
var parts []minio.CompletePart
|
||||
partNumber := 1
|
||||
buf := make([]byte, multipartChunkSize)
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
|
||||
return fmt.Errorf("upload cancelled: %w", ctx.Err())
|
||||
default:
|
||||
}
|
||||
|
||||
n, readErr := io.ReadFull(file, buf)
|
||||
|
||||
if n == 0 && readErr == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
if readErr != nil && readErr != io.EOF && readErr != io.ErrUnexpectedEOF {
|
||||
_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
|
||||
return fmt.Errorf("read error: %w", readErr)
|
||||
}
|
||||
|
||||
part, err := coreClient.PutObjectPart(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
objectKey,
|
||||
uploadID,
|
||||
partNumber,
|
||||
bytes.NewReader(buf[:n]),
|
||||
int64(n),
|
||||
minio.PutObjectPartOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return fmt.Errorf("upload cancelled: %w", ctx.Err())
|
||||
default:
|
||||
return fmt.Errorf("failed to upload part %d: %w", partNumber, err)
|
||||
}
|
||||
}
|
||||
|
||||
parts = append(parts, minio.CompletePart{
|
||||
PartNumber: partNumber,
|
||||
ETag: part.ETag,
|
||||
})
|
||||
|
||||
partNumber++
|
||||
|
||||
if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if len(parts) == 0 {
|
||||
_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
|
||||
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = client.PutObject(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
objectKey,
|
||||
bytes.NewReader([]byte{}),
|
||||
0,
|
||||
minio.PutObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to upload empty file: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
_, err = coreClient.CompleteMultipartUpload(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
objectKey,
|
||||
uploadID,
|
||||
parts,
|
||||
minio.PutObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
_ = coreClient.AbortMultipartUpload(ctx, s.S3Bucket, objectKey, uploadID)
|
||||
return fmt.Errorf("failed to complete multipart upload: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
client, err := s.getClient()
|
||||
func (s *S3Storage) GetFile(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
fileID uuid.UUID,
|
||||
) (io.ReadCloser, error) {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
objectKey := s.buildObjectKey(fileID.String())
|
||||
|
||||
object, err := client.GetObject(
|
||||
context.TODO(),
|
||||
s.S3Bucket,
|
||||
fileID.String(),
|
||||
objectKey,
|
||||
minio.GetObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
@@ -84,17 +208,19 @@ func (s *S3Storage) GetFile(fileID uuid.UUID) (io.ReadCloser, error) {
|
||||
return object, nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) DeleteFile(fileID uuid.UUID) error {
|
||||
client, err := s.getClient()
|
||||
func (s *S3Storage) DeleteFile(encryptor encryption.FieldEncryptor, fileID uuid.UUID) error {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
objectKey := s.buildObjectKey(fileID.String())
|
||||
|
||||
// Delete the object using MinIO client
|
||||
err = client.RemoveObject(
|
||||
context.TODO(),
|
||||
s.S3Bucket,
|
||||
fileID.String(),
|
||||
objectKey,
|
||||
minio.RemoveObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
@@ -104,7 +230,7 @@ func (s *S3Storage) DeleteFile(fileID uuid.UUID) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) Validate() error {
|
||||
func (s *S3Storage) Validate(encryptor encryption.FieldEncryptor) error {
|
||||
if s.S3Bucket == "" {
|
||||
return errors.New("S3 bucket is required")
|
||||
}
|
||||
@@ -115,17 +241,11 @@ func (s *S3Storage) Validate() error {
|
||||
return errors.New("S3 secret key is required")
|
||||
}
|
||||
|
||||
// Try to create a client to validate the configuration
|
||||
_, err := s.getClient()
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid S3 configuration: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) TestConnection() error {
|
||||
client, err := s.getClient()
|
||||
func (s *S3Storage) TestConnection(encryptor encryption.FieldEncryptor) error {
|
||||
client, err := s.getClient(encryptor)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -150,6 +270,7 @@ func (s *S3Storage) TestConnection() error {
|
||||
|
||||
// Test write and delete permissions by uploading and removing a small test file
|
||||
testFileID := uuid.New().String() + "-test"
|
||||
testObjectKey := s.buildObjectKey(testFileID)
|
||||
testData := []byte("test connection")
|
||||
testReader := bytes.NewReader(testData)
|
||||
|
||||
@@ -157,7 +278,7 @@ func (s *S3Storage) TestConnection() error {
|
||||
_, err = client.PutObject(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
testFileID,
|
||||
testObjectKey,
|
||||
testReader,
|
||||
int64(len(testData)),
|
||||
minio.PutObjectOptions{},
|
||||
@@ -170,7 +291,7 @@ func (s *S3Storage) TestConnection() error {
|
||||
err = client.RemoveObject(
|
||||
ctx,
|
||||
s.S3Bucket,
|
||||
testFileID,
|
||||
testObjectKey,
|
||||
minio.RemoveObjectOptions{},
|
||||
)
|
||||
if err != nil {
|
||||
@@ -185,10 +306,31 @@ func (s *S3Storage) HideSensitiveData() {
|
||||
s.S3SecretKey = ""
|
||||
}
|
||||
|
||||
func (s *S3Storage) EncryptSensitiveData(encryptor encryption.FieldEncryptor) error {
|
||||
var err error
|
||||
|
||||
if s.S3AccessKey != "" {
|
||||
s.S3AccessKey, err = encryptor.Encrypt(s.StorageID, s.S3AccessKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt S3 access key: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if s.S3SecretKey != "" {
|
||||
s.S3SecretKey, err = encryptor.Encrypt(s.StorageID, s.S3SecretKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt S3 secret key: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) Update(incoming *S3Storage) {
|
||||
s.S3Bucket = incoming.S3Bucket
|
||||
s.S3Region = incoming.S3Region
|
||||
s.S3Endpoint = incoming.S3Endpoint
|
||||
s.S3UseVirtualHostedStyle = incoming.S3UseVirtualHostedStyle
|
||||
|
||||
if incoming.S3AccessKey != "" {
|
||||
s.S3AccessKey = incoming.S3AccessKey
|
||||
@@ -197,11 +339,75 @@ func (s *S3Storage) Update(incoming *S3Storage) {
|
||||
if incoming.S3SecretKey != "" {
|
||||
s.S3SecretKey = incoming.S3SecretKey
|
||||
}
|
||||
|
||||
// we do not allow to change the prefix after creation,
|
||||
// otherwise we will have to migrate all the data to the new prefix
|
||||
}
|
||||
|
||||
func (s *S3Storage) getClient() (*minio.Client, error) {
|
||||
endpoint := s.S3Endpoint
|
||||
useSSL := true
|
||||
func (s *S3Storage) buildObjectKey(fileName string) string {
|
||||
if s.S3Prefix == "" {
|
||||
return fileName
|
||||
}
|
||||
|
||||
prefix := s.S3Prefix
|
||||
prefix = strings.TrimPrefix(prefix, "/")
|
||||
|
||||
if !strings.HasSuffix(prefix, "/") {
|
||||
prefix = prefix + "/"
|
||||
}
|
||||
|
||||
return prefix + fileName
|
||||
}
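
For illustration only (the prefix and file name are made up), the normalization above behaves like this:

// Sketch of buildObjectKey with an assumed prefix.
s := &S3Storage{S3Prefix: "/backups/prod"}
key := s.buildObjectKey("3d1c0a6e-ffcd-4e1b-9f9f-2b6f0c9a1e55")
// key == "backups/prod/3d1c0a6e-ffcd-4e1b-9f9f-2b6f0c9a1e55"
// With an empty S3Prefix the file name is returned unchanged.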
|
||||
|
||||
func (s *S3Storage) getClient(encryptor encryption.FieldEncryptor) (*minio.Client, error) {
|
||||
endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, err := s.getClientParams(
|
||||
encryptor,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minioClient, err := minio.New(endpoint, &minio.Options{
|
||||
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
|
||||
Secure: useSSL,
|
||||
Region: s.S3Region,
|
||||
BucketLookup: bucketLookup,
|
||||
Transport: transport,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to initialize MinIO client: %w", err)
|
||||
}
|
||||
|
||||
return minioClient, nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) getCoreClient(encryptor encryption.FieldEncryptor) (*minio.Core, error) {
|
||||
endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, err := s.getClientParams(
|
||||
encryptor,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
coreClient, err := minio.NewCore(endpoint, &minio.Options{
|
||||
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
|
||||
Secure: useSSL,
|
||||
Region: s.S3Region,
|
||||
BucketLookup: bucketLookup,
|
||||
Transport: transport,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to initialize MinIO Core client: %w", err)
|
||||
}
|
||||
|
||||
return coreClient, nil
|
||||
}
|
||||
|
||||
func (s *S3Storage) getClientParams(
|
||||
encryptor encryption.FieldEncryptor,
|
||||
) (endpoint string, useSSL bool, accessKey string, secretKey string, bucketLookup minio.BucketLookupType, transport *http.Transport, err error) {
|
||||
endpoint = s.S3Endpoint
|
||||
useSSL = true
|
||||
|
||||
if strings.HasPrefix(endpoint, "http://") {
|
||||
useSSL = false
|
||||
@@ -210,20 +416,33 @@ func (s *S3Storage) getClient() (*minio.Client, error) {
|
||||
endpoint = strings.TrimPrefix(endpoint, "https://")
|
||||
}
|
||||
|
||||
// If no endpoint is provided, use the AWS S3 endpoint for the region
|
||||
if endpoint == "" {
|
||||
endpoint = fmt.Sprintf("s3.%s.amazonaws.com", s.S3Region)
|
||||
}
|
||||
|
||||
// Initialize the MinIO client
|
||||
minioClient, err := minio.New(endpoint, &minio.Options{
|
||||
Creds: credentials.NewStaticV4(s.S3AccessKey, s.S3SecretKey, ""),
|
||||
Secure: useSSL,
|
||||
Region: s.S3Region,
|
||||
})
|
||||
accessKey, err = encryptor.Decrypt(s.StorageID, s.S3AccessKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to initialize MinIO client: %w", err)
|
||||
return "", false, "", "", 0, nil, fmt.Errorf("failed to decrypt S3 access key: %w", err)
|
||||
}
|
||||
|
||||
return minioClient, nil
|
||||
secretKey, err = encryptor.Decrypt(s.StorageID, s.S3SecretKey)
|
||||
if err != nil {
|
||||
return "", false, "", "", 0, nil, fmt.Errorf("failed to decrypt S3 secret key: %w", err)
|
||||
}
|
||||
|
||||
bucketLookup = minio.BucketLookupAuto
|
||||
if s.S3UseVirtualHostedStyle {
|
||||
bucketLookup = minio.BucketLookupDNS
|
||||
}
|
||||
|
||||
transport = &http.Transport{
|
||||
DialContext: (&net.Dialer{
|
||||
Timeout: s3ConnectTimeout,
|
||||
}).DialContext,
|
||||
TLSHandshakeTimeout: s3TLSHandshakeTimeout,
|
||||
ResponseHeaderTimeout: s3ResponseTimeout,
|
||||
IdleConnTimeout: s3IdleConnTimeout,
|
||||
}
|
||||
|
||||
return endpoint, useSSL, accessKey, secretKey, bucketLookup, transport, nil
|
||||
}
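
As a hedged summary of the endpoint handling (example values only; the "http://" trimming is assumed to mirror the "https://" case shown above):

// Illustrative outcomes of getClientParams:
//   S3Endpoint "http://minio.local:9000"     -> endpoint "minio.local:9000", useSSL false
//   S3Endpoint "https://s3.example.com"      -> endpoint "s3.example.com",   useSSL true
//   S3Endpoint "" with S3Region "eu-west-1"  -> endpoint "s3.eu-west-1.amazonaws.com", useSSL true
// S3UseVirtualHostedStyle switches bucket lookup from BucketLookupAuto to BucketLookupDNS,
// and the returned transport carries the connect/response/idle/TLS handshake timeouts defined above.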
|
||||
|
||||
@@ -30,17 +30,21 @@ func (r *StorageRepository) Save(storage *Storage) (*Storage, error) {
|
||||
if storage.NASStorage != nil {
|
||||
storage.NASStorage.StorageID = storage.ID
|
||||
}
|
||||
case StorageTypeAzureBlob:
|
||||
if storage.AzureBlobStorage != nil {
|
||||
storage.AzureBlobStorage.StorageID = storage.ID
|
||||
}
|
||||
}
|
||||
|
||||
if storage.ID == uuid.Nil {
|
||||
if err := tx.Create(storage).
|
||||
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage").
|
||||
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage").
|
||||
Error; err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err := tx.Save(storage).
|
||||
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage").
|
||||
Omit("LocalStorage", "S3Storage", "GoogleDriveStorage", "NASStorage", "AzureBlobStorage").
|
||||
Error; err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -75,6 +79,13 @@ func (r *StorageRepository) Save(storage *Storage) (*Storage, error) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
case StorageTypeAzureBlob:
|
||||
if storage.AzureBlobStorage != nil {
|
||||
storage.AzureBlobStorage.StorageID = storage.ID // Ensure ID is set
|
||||
if err := tx.Save(storage.AzureBlobStorage).Error; err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -96,6 +107,7 @@ func (r *StorageRepository) FindByID(id uuid.UUID) (*Storage, error) {
|
||||
Preload("S3Storage").
|
||||
Preload("GoogleDriveStorage").
|
||||
Preload("NASStorage").
|
||||
Preload("AzureBlobStorage").
|
||||
Where("id = ?", id).
|
||||
First(&s).Error; err != nil {
|
||||
return nil, err
|
||||
@@ -113,6 +125,7 @@ func (r *StorageRepository) FindByWorkspaceID(workspaceID uuid.UUID) ([]*Storage
|
||||
Preload("S3Storage").
|
||||
Preload("GoogleDriveStorage").
|
||||
Preload("NASStorage").
|
||||
Preload("AzureBlobStorage").
|
||||
Where("workspace_id = ?", workspaceID).
|
||||
Order("name ASC").
|
||||
Find(&storages).Error; err != nil {
|
||||
@@ -150,6 +163,12 @@ func (r *StorageRepository) Delete(s *Storage) error {
|
||||
return err
|
||||
}
|
||||
}
|
||||
case StorageTypeAzureBlob:
|
||||
if s.AzureBlobStorage != nil {
|
||||
if err := tx.Delete(s.AzureBlobStorage).Error; err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Delete the main storage
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
audit_logs "postgresus-backend/internal/features/audit_logs"
|
||||
users_models "postgresus-backend/internal/features/users/models"
|
||||
workspaces_services "postgresus-backend/internal/features/workspaces/services"
|
||||
"postgresus-backend/internal/util/encryption"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
@@ -15,6 +16,7 @@ type StorageService struct {
|
||||
storageRepository *StorageRepository
|
||||
workspaceService *workspaces_services.WorkspaceService
|
||||
auditLogService *audit_logs.AuditLogService
|
||||
fieldEncryptor encryption.FieldEncryptor
|
||||
}
|
||||
|
||||
func (s *StorageService) SaveStorage(
|
||||
@@ -44,7 +46,11 @@ func (s *StorageService) SaveStorage(
|
||||
|
||||
existingStorage.Update(storage)
|
||||
|
||||
if err := existingStorage.Validate(); err != nil {
|
||||
if err := existingStorage.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := existingStorage.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -61,7 +67,11 @@ func (s *StorageService) SaveStorage(
|
||||
} else {
|
||||
storage.WorkspaceID = workspaceID
|
||||
|
||||
if err := storage.Validate(); err != nil {
|
||||
if err := storage.EncryptSensitiveData(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := storage.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -174,7 +184,7 @@ func (s *StorageService) TestStorageConnection(
|
||||
return errors.New("insufficient permissions to test storage in this workspace")
|
||||
}
|
||||
|
||||
err = storage.TestConnection()
|
||||
err = storage.TestConnection(s.fieldEncryptor)
|
||||
if err != nil {
|
||||
lastSaveError := err.Error()
|
||||
storage.LastSaveError = &lastSaveError
|
||||
@@ -207,7 +217,7 @@ func (s *StorageService) TestStorageConnectionDirect(
|
||||
|
||||
existingStorage.Update(storage)
|
||||
|
||||
if err := existingStorage.Validate(); err != nil {
|
||||
if err := existingStorage.Validate(s.fieldEncryptor); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -216,7 +226,7 @@ func (s *StorageService) TestStorageConnectionDirect(
|
||||
usingStorage = storage
|
||||
}
|
||||
|
||||
return usingStorage.TestConnection()
|
||||
return usingStorage.TestConnection(s.fieldEncryptor)
|
||||
}
|
||||
|
||||
func (s *StorageService) GetStorageByID(
|
||||
|
||||
@@ -1,31 +1,36 @@
|
||||
package tests
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/features/backups/backups"
|
||||
usecases_postgresql_backup "postgresus-backend/internal/features/backups/backups/usecases/postgresql"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
|
||||
"postgresus-backend/internal/features/intervals"
|
||||
"postgresus-backend/internal/features/restores/models"
|
||||
usecases_postgresql_restore "postgresus-backend/internal/features/restores/usecases/postgresql"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
local_storage "postgresus-backend/internal/features/storages/models/local"
|
||||
"postgresus-backend/internal/util/period"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/google/uuid"
|
||||
"github.com/jmoiron/sqlx"
|
||||
_ "github.com/lib/pq"
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/features/backups/backups"
|
||||
backups_config "postgresus-backend/internal/features/backups/config"
|
||||
"postgresus-backend/internal/features/databases"
|
||||
pgtypes "postgresus-backend/internal/features/databases/databases/postgresql"
|
||||
"postgresus-backend/internal/features/restores"
|
||||
restores_enums "postgresus-backend/internal/features/restores/enums"
|
||||
restores_models "postgresus-backend/internal/features/restores/models"
|
||||
"postgresus-backend/internal/features/storages"
|
||||
users_enums "postgresus-backend/internal/features/users/enums"
|
||||
users_testing "postgresus-backend/internal/features/users/testing"
|
||||
workspaces_controllers "postgresus-backend/internal/features/workspaces/controllers"
|
||||
workspaces_testing "postgresus-backend/internal/features/workspaces/testing"
|
||||
test_utils "postgresus-backend/internal/util/testing"
|
||||
"postgresus-backend/internal/util/tools"
|
||||
)
|
||||
|
||||
const createAndFillTableQuery = `
|
||||
@@ -61,7 +66,6 @@ type TestDataItem struct {
|
||||
CreatedAt time.Time `db:"created_at"`
|
||||
}
|
||||
|
||||
// Main test functions for each PostgreSQL version
|
||||
func Test_BackupAndRestorePostgresql_RestoreIsSuccesful(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
@@ -79,17 +83,38 @@ func Test_BackupAndRestorePostgresql_RestoreIsSuccesful(t *testing.T) {
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
tc := tc // capture loop variable
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel() // Enable parallel execution
|
||||
t.Parallel()
|
||||
testBackupRestoreForVersion(t, tc.version, tc.port)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Run a test for a specific PostgreSQL version
|
||||
func Test_BackupAndRestorePostgresqlWithEncryption_RestoreIsSuccessful(t *testing.T) {
|
||||
env := config.GetEnv()
|
||||
cases := []struct {
|
||||
name string
|
||||
version string
|
||||
port string
|
||||
}{
|
||||
{"PostgreSQL 12", "12", env.TestPostgres12Port},
|
||||
{"PostgreSQL 13", "13", env.TestPostgres13Port},
|
||||
{"PostgreSQL 14", "14", env.TestPostgres14Port},
|
||||
{"PostgreSQL 15", "15", env.TestPostgres15Port},
|
||||
{"PostgreSQL 16", "16", env.TestPostgres16Port},
|
||||
{"PostgreSQL 17", "17", env.TestPostgres17Port},
|
||||
{"PostgreSQL 18", "18", env.TestPostgres18Port},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
testBackupRestoreWithEncryptionForVersion(t, tc.version, tc.port)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
|
||||
// Connect to pre-configured PostgreSQL container
|
||||
container, err := connectToPostgresContainer(pgVersion, port)
|
||||
assert.NoError(t, err)
|
||||
defer func() {
|
||||
@@ -101,55 +126,30 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
|
||||
_, err = container.DB.Exec(createAndFillTableQuery)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Prepare data for backup
|
||||
backupID := uuid.New()
|
||||
router := createTestRouter()
|
||||
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", user, router)
|
||||
|
||||
storage := storages.CreateTestStorage(workspace.ID)
|
||||
|
||||
pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)
|
||||
|
||||
backupDb := &databases.Database{
|
||||
ID: uuid.New(),
|
||||
Type: databases.DatabaseTypePostgres,
|
||||
Name: "Test Database",
|
||||
Postgresql: &pgtypes.PostgresqlDatabase{
|
||||
Version: pgVersionEnum,
|
||||
Host: container.Host,
|
||||
Port: container.Port,
|
||||
Username: container.Username,
|
||||
Password: container.Password,
|
||||
Database: &container.Database,
|
||||
IsHttps: false,
|
||||
},
|
||||
}
|
||||
|
||||
storageID := uuid.New()
|
||||
backupConfig := &backups_config.BackupConfig{
|
||||
DatabaseID: backupDb.ID,
|
||||
IsBackupsEnabled: true,
|
||||
StorePeriod: period.PeriodDay,
|
||||
BackupInterval: &intervals.Interval{Interval: intervals.IntervalDaily},
|
||||
StorageID: &storageID,
|
||||
CpuCount: 1,
|
||||
}
|
||||
|
||||
storage := &storages.Storage{
|
||||
WorkspaceID: uuid.New(),
|
||||
Type: storages.StorageTypeLocal,
|
||||
Name: "Test Storage",
|
||||
LocalStorage: &local_storage.LocalStorage{},
|
||||
}
|
||||
|
||||
// Make backup
|
||||
progressTracker := func(completedMBs float64) {}
|
||||
err = usecases_postgresql_backup.GetCreatePostgresqlBackupUsecase().Execute(
|
||||
context.Background(),
|
||||
backupID,
|
||||
backupConfig,
|
||||
backupDb,
|
||||
storage,
|
||||
progressTracker,
|
||||
database := createDatabaseViaAPI(
|
||||
t, router, "Test Database", workspace.ID,
|
||||
pgVersionEnum, container.Host, container.Port,
|
||||
container.Username, container.Password, container.Database,
|
||||
user.Token,
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Create new database
|
||||
enableBackupsViaAPI(
|
||||
t, router, database.ID, storage.ID,
|
||||
backups_config.BackupEncryptionNone, user.Token,
|
||||
)
|
||||
|
||||
createBackupViaAPI(t, router, database.ID, user.Token)
|
||||
|
||||
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
|
||||
|
||||
newDBName := "restoreddb"
|
||||
_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
@@ -157,43 +157,22 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
|
||||
_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Connect to the new database
|
||||
newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host, container.Port, container.Username, container.Password, newDBName)
|
||||
newDB, err := sqlx.Connect("postgres", newDSN)
|
||||
assert.NoError(t, err)
|
||||
defer newDB.Close()
|
||||
|
||||
// Setup data for restore
|
||||
completedBackup := &backups.Backup{
|
||||
ID: backupID,
|
||||
DatabaseID: backupDb.ID,
|
||||
StorageID: storage.ID,
|
||||
Status: backups.BackupStatusCompleted,
|
||||
CreatedAt: time.Now().UTC(),
|
||||
}
|
||||
createRestoreViaAPI(
|
||||
t, router, backup.ID, pgVersionEnum,
|
||||
container.Host, container.Port,
|
||||
container.Username, container.Password, newDBName,
|
||||
user.Token,
|
||||
)
|
||||
|
||||
restoreID := uuid.New()
|
||||
restore := models.Restore{
|
||||
ID: restoreID,
|
||||
Backup: completedBackup,
|
||||
Postgresql: &pgtypes.PostgresqlDatabase{
|
||||
Version: pgVersionEnum,
|
||||
Host: container.Host,
|
||||
Port: container.Port,
|
||||
Username: container.Username,
|
||||
Password: container.Password,
|
||||
Database: &newDBName,
|
||||
IsHttps: false,
|
||||
},
|
||||
}
|
||||
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
|
||||
|
||||
// Restore the backup
|
||||
restoreBackupUC := usecases_postgresql_restore.GetRestorePostgresqlBackupUsecase()
|
||||
err = restoreBackupUC.Execute(backupDb, backupConfig, restore, completedBackup, storage)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify restored table exists
|
||||
var tableExists bool
|
||||
err = newDB.Get(
|
||||
&tableExists,
|
||||
@@ -202,17 +181,329 @@ func testBackupRestoreForVersion(t *testing.T, pgVersion string, port string) {
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, tableExists, "Table 'test_data' should exist in restored database")
|
||||
|
||||
// Verify data integrity
|
||||
verifyDataIntegrity(t, container.DB, newDB)
|
||||
|
||||
// Clean up the backup file after the test
|
||||
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backupID.String()))
|
||||
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to delete backup file: %v", err)
|
||||
}
|
||||
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/databases/"+database.ID.String(),
|
||||
"Bearer "+user.Token,
|
||||
http.StatusNoContent,
|
||||
)
|
||||
storages.RemoveTestStorage(storage.ID)
|
||||
workspaces_testing.RemoveTestWorkspace(workspace, router)
|
||||
}
|
||||
|
||||
func testBackupRestoreWithEncryptionForVersion(t *testing.T, pgVersion string, port string) {
|
||||
container, err := connectToPostgresContainer(pgVersion, port)
|
||||
assert.NoError(t, err)
|
||||
defer func() {
|
||||
if container.DB != nil {
|
||||
container.DB.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
_, err = container.DB.Exec(createAndFillTableQuery)
|
||||
assert.NoError(t, err)
|
||||
|
||||
router := createTestRouter()
|
||||
user := users_testing.CreateTestUser(users_enums.UserRoleMember)
|
||||
workspace := workspaces_testing.CreateTestWorkspace("Test Workspace", user, router)
|
||||
|
||||
storage := storages.CreateTestStorage(workspace.ID)
|
||||
|
||||
pgVersionEnum := tools.GetPostgresqlVersionEnum(pgVersion)
|
||||
database := createDatabaseViaAPI(
|
||||
t, router, "Test Database", workspace.ID,
|
||||
pgVersionEnum, container.Host, container.Port,
|
||||
container.Username, container.Password, container.Database,
|
||||
user.Token,
|
||||
)
|
||||
|
||||
enableBackupsViaAPI(
|
||||
t, router, database.ID, storage.ID,
|
||||
backups_config.BackupEncryptionEncrypted, user.Token,
|
||||
)
|
||||
|
||||
createBackupViaAPI(t, router, database.ID, user.Token)
|
||||
|
||||
backup := waitForBackupCompletion(t, router, database.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, backups.BackupStatusCompleted, backup.Status)
|
||||
assert.Equal(t, backups_config.BackupEncryptionEncrypted, backup.Encryption)
|
||||
|
||||
newDBName := "restoreddb_encrypted"
|
||||
_, err = container.DB.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
|
||||
_, err = container.DB.Exec(fmt.Sprintf("CREATE DATABASE %s;", newDBName))
|
||||
assert.NoError(t, err)
|
||||
|
||||
newDSN := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
|
||||
container.Host, container.Port, container.Username, container.Password, newDBName)
|
||||
newDB, err := sqlx.Connect("postgres", newDSN)
|
||||
assert.NoError(t, err)
|
||||
defer newDB.Close()
|
||||
|
||||
createRestoreViaAPI(
|
||||
t, router, backup.ID, pgVersionEnum,
|
||||
container.Host, container.Port,
|
||||
container.Username, container.Password, newDBName,
|
||||
user.Token,
|
||||
)
|
||||
|
||||
restore := waitForRestoreCompletion(t, router, backup.ID, user.Token, 5*time.Minute)
|
||||
assert.Equal(t, restores_enums.RestoreStatusCompleted, restore.Status)
|
||||
|
||||
var tableExists bool
|
||||
err = newDB.Get(
|
||||
&tableExists,
|
||||
"SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'test_data')",
|
||||
)
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, tableExists, "Table 'test_data' should exist in restored database")
|
||||
|
||||
verifyDataIntegrity(t, container.DB, newDB)
|
||||
|
||||
err = os.Remove(filepath.Join(config.GetEnv().DataFolder, backup.ID.String()))
|
||||
if err != nil {
|
||||
t.Logf("Warning: Failed to delete backup file: %v", err)
|
||||
}
|
||||
|
||||
test_utils.MakeDeleteRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/databases/"+database.ID.String(),
|
||||
"Bearer "+user.Token,
|
||||
http.StatusNoContent,
|
||||
)
|
||||
storages.RemoveTestStorage(storage.ID)
|
||||
workspaces_testing.RemoveTestWorkspace(workspace, router)
|
||||
}
|
||||
|
||||
func createTestRouter() *gin.Engine {
|
||||
router := workspaces_testing.CreateTestRouter(
|
||||
workspaces_controllers.GetWorkspaceController(),
|
||||
workspaces_controllers.GetMembershipController(),
|
||||
databases.GetDatabaseController(),
|
||||
backups_config.GetBackupConfigController(),
|
||||
backups.GetBackupController(),
|
||||
restores.GetRestoreController(),
|
||||
)
|
||||
return router
|
||||
}
|
||||
|
||||
func waitForBackupCompletion(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
databaseID uuid.UUID,
|
||||
token string,
|
||||
timeout time.Duration,
|
||||
) *backups.Backup {
|
||||
startTime := time.Now()
|
||||
pollInterval := 500 * time.Millisecond
|
||||
|
||||
for {
|
||||
if time.Since(startTime) > timeout {
|
||||
t.Fatalf("Timeout waiting for backup completion after %v", timeout)
|
||||
}
|
||||
|
||||
var response backups.GetBackupsResponse
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/backups?database_id=%s&limit=1", databaseID.String()),
|
||||
"Bearer "+token,
|
||||
http.StatusOK,
|
||||
&response,
|
||||
)
|
||||
|
||||
if len(response.Backups) > 0 {
|
||||
backup := response.Backups[0]
|
||||
if backup.Status == backups.BackupStatusCompleted {
|
||||
return backup
|
||||
}
|
||||
if backup.Status == backups.BackupStatusFailed {
|
||||
t.Fatalf("Backup failed: %v", backup.FailMessage)
|
||||
}
|
||||
}
|
||||
|
||||
time.Sleep(pollInterval)
|
||||
}
|
||||
}
|
||||
|
||||
func waitForRestoreCompletion(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
backupID uuid.UUID,
|
||||
token string,
|
||||
timeout time.Duration,
|
||||
) *restores_models.Restore {
|
||||
startTime := time.Now()
|
||||
pollInterval := 500 * time.Millisecond
|
||||
|
||||
for {
|
||||
if time.Since(startTime) > timeout {
|
||||
t.Fatalf("Timeout waiting for restore completion after %v", timeout)
|
||||
}
|
||||
|
||||
var restores []*restores_models.Restore
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/restores/%s", backupID.String()),
|
||||
"Bearer "+token,
|
||||
http.StatusOK,
|
||||
&restores,
|
||||
)
|
||||
|
||||
for _, restore := range restores {
|
||||
if restore.Status == restores_enums.RestoreStatusCompleted {
|
||||
return restore
|
||||
}
|
||||
if restore.Status == restores_enums.RestoreStatusFailed {
|
||||
t.Fatalf("Restore failed: %v", restore.FailMessage)
|
||||
}
|
||||
}
|
||||
|
||||
time.Sleep(pollInterval)
|
||||
}
|
||||
}
|
||||
|
||||
func createDatabaseViaAPI(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
name string,
|
||||
workspaceID uuid.UUID,
|
||||
pgVersion tools.PostgresqlVersion,
|
||||
host string,
|
||||
port int,
|
||||
username string,
|
||||
password string,
|
||||
database string,
|
||||
token string,
|
||||
) *databases.Database {
|
||||
request := databases.Database{
|
||||
Name: name,
|
||||
WorkspaceID: &workspaceID,
|
||||
Type: databases.DatabaseTypePostgres,
|
||||
Postgresql: &pgtypes.PostgresqlDatabase{
|
||||
Version: pgVersion,
|
||||
Host: host,
|
||||
Port: port,
|
||||
Username: username,
|
||||
Password: password,
|
||||
Database: &database,
|
||||
},
|
||||
}
|
||||
|
||||
w := workspaces_testing.MakeAPIRequest(
|
||||
router,
|
||||
"POST",
|
||||
"/api/v1/databases/create",
|
||||
"Bearer "+token,
|
||||
request,
|
||||
)
|
||||
|
||||
if w.Code != http.StatusCreated {
|
||||
t.Fatalf("Failed to create database. Status: %d, Body: %s", w.Code, w.Body.String())
|
||||
}
|
||||
|
||||
var createdDatabase databases.Database
|
||||
if err := json.Unmarshal(w.Body.Bytes(), &createdDatabase); err != nil {
|
||||
t.Fatalf("Failed to unmarshal database response: %v", err)
|
||||
}
|
||||
|
||||
return &createdDatabase
|
||||
}
|
||||
|
||||
func enableBackupsViaAPI(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
databaseID uuid.UUID,
|
||||
storageID uuid.UUID,
|
||||
encryption backups_config.BackupEncryption,
|
||||
token string,
|
||||
) {
|
||||
var backupConfig backups_config.BackupConfig
|
||||
test_utils.MakeGetRequestAndUnmarshal(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/backup-configs/database/%s", databaseID.String()),
|
||||
"Bearer "+token,
|
||||
http.StatusOK,
|
||||
&backupConfig,
|
||||
)
|
||||
|
||||
storage := &storages.Storage{ID: storageID}
|
||||
backupConfig.IsBackupsEnabled = true
|
||||
backupConfig.Storage = storage
|
||||
backupConfig.Encryption = encryption
|
||||
|
||||
test_utils.MakePostRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/backup-configs/save",
|
||||
"Bearer "+token,
|
||||
backupConfig,
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
func createBackupViaAPI(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
databaseID uuid.UUID,
|
||||
token string,
|
||||
) {
|
||||
request := backups.MakeBackupRequest{DatabaseID: databaseID}
|
||||
test_utils.MakePostRequest(
|
||||
t,
|
||||
router,
|
||||
"/api/v1/backups",
|
||||
"Bearer "+token,
|
||||
request,
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
func createRestoreViaAPI(
|
||||
t *testing.T,
|
||||
router *gin.Engine,
|
||||
backupID uuid.UUID,
|
||||
pgVersion tools.PostgresqlVersion,
|
||||
host string,
|
||||
port int,
|
||||
username string,
|
||||
password string,
|
||||
database string,
|
||||
token string,
|
||||
) {
|
||||
request := restores.RestoreBackupRequest{
|
||||
PostgresqlDatabase: &pgtypes.PostgresqlDatabase{
|
||||
Version: pgVersion,
|
||||
Host: host,
|
||||
Port: port,
|
||||
Username: username,
|
||||
Password: password,
|
||||
Database: &database,
|
||||
},
|
||||
}
|
||||
|
||||
test_utils.MakePostRequest(
|
||||
t,
|
||||
router,
|
||||
fmt.Sprintf("/api/v1/restores/%s/restore", backupID.String()),
|
||||
"Bearer "+token,
|
||||
request,
|
||||
http.StatusOK,
|
||||
)
|
||||
}
|
||||
|
||||
// verifyDataIntegrity compares data in the original and restored databases
|
||||
func verifyDataIntegrity(t *testing.T, originalDB *sqlx.DB, restoredDB *sqlx.DB) {
|
||||
var originalData []TestDataItem
|
||||
var restoredData []TestDataItem
|
||||
@@ -225,7 +516,6 @@ func verifyDataIntegrity(t *testing.T, originalDB *sqlx.DB, restoredDB *sqlx.DB)
|
||||
|
||||
assert.Equal(t, len(originalData), len(restoredData), "Should have same number of rows")
|
||||
|
||||
// Only compare data if both slices have elements (to avoid panic)
|
||||
if len(originalData) > 0 && len(restoredData) > 0 {
|
||||
for i := range originalData {
|
||||
assert.Equal(t, originalData[i].ID, restoredData[i].ID, "ID should match")
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
package users_models
|
||||
|
||||
type SecretKey struct {
|
||||
Secret string `gorm:"column:secret"`
|
||||
Secret string `gorm:"column:secret" json:"-"`
|
||||
}
|
||||
|
||||
func (SecretKey) TableName() string {
|
||||
|
||||
12
backend/internal/features/users/repositories/di.go
Normal file
12
backend/internal/features/users/repositories/di.go
Normal file
@@ -0,0 +1,12 @@
|
||||
package users_repositories
|
||||
|
||||
var userRepository = &UserRepository{}
|
||||
var usersSettingsRepository = &UsersSettingsRepository{}
|
||||
|
||||
func GetUserRepository() *UserRepository {
|
||||
return userRepository
|
||||
}
|
||||
|
||||
func GetUsersSettingsRepository() *UsersSettingsRepository {
|
||||
return usersSettingsRepository
|
||||
}
|
||||
@@ -1,36 +0,0 @@
|
||||
package users_repositories
|
||||
|
||||
import (
|
||||
"errors"
|
||||
user_models "postgresus-backend/internal/features/users/models"
|
||||
"postgresus-backend/internal/storage"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
type SecretKeyRepository struct{}
|
||||
|
||||
func (r *SecretKeyRepository) GetSecretKey() (string, error) {
|
||||
var secretKey user_models.SecretKey
|
||||
|
||||
if err := storage.
|
||||
GetDb().
|
||||
First(&secretKey).Error; err != nil {
|
||||
// create a new secret key if not found
|
||||
if errors.Is(err, gorm.ErrRecordNotFound) {
|
||||
newSecretKey := user_models.SecretKey{
|
||||
Secret: uuid.New().String() + uuid.New().String(),
|
||||
}
|
||||
if err := storage.GetDb().Create(&newSecretKey).Error; err != nil {
|
||||
return "", errors.New("failed to create new secret key")
|
||||
}
|
||||
|
||||
return newSecretKey.Secret, nil
|
||||
}
|
||||
|
||||
return "", err
|
||||
}
|
||||
|
||||
return secretKey.Secret, nil
|
||||
}
|
||||
@@ -1,25 +1,22 @@
|
||||
package users_services
|
||||
|
||||
import (
|
||||
user_repositories "postgresus-backend/internal/features/users/repositories"
|
||||
"postgresus-backend/internal/features/encryption/secrets"
|
||||
users_repositories "postgresus-backend/internal/features/users/repositories"
|
||||
)
|
||||
|
||||
var secretKeyRepository = &user_repositories.SecretKeyRepository{}
|
||||
var userRepository = &user_repositories.UserRepository{}
|
||||
var usersSettingsRepository = &user_repositories.UsersSettingsRepository{}
|
||||
|
||||
var userService = &UserService{
|
||||
userRepository,
|
||||
secretKeyRepository,
|
||||
users_repositories.GetUserRepository(),
|
||||
secrets.GetSecretKeyService(),
|
||||
settingsService,
|
||||
nil,
|
||||
}
|
||||
var settingsService = &SettingsService{
|
||||
usersSettingsRepository,
|
||||
users_repositories.GetUsersSettingsRepository(),
|
||||
nil,
|
||||
}
|
||||
var managementService = &UserManagementService{
|
||||
userRepository,
|
||||
users_repositories.GetUserRepository(),
|
||||
nil,
|
||||
}
|
||||
|
||||
|
||||
@@ -17,6 +17,7 @@ import (
|
||||
"golang.org/x/oauth2/google"
|
||||
|
||||
"postgresus-backend/internal/config"
|
||||
"postgresus-backend/internal/features/encryption/secrets"
|
||||
users_dto "postgresus-backend/internal/features/users/dto"
|
||||
users_enums "postgresus-backend/internal/features/users/enums"
|
||||
users_interfaces "postgresus-backend/internal/features/users/interfaces"
|
||||
@@ -25,10 +26,10 @@ import (
|
||||
)
|
||||
|
||||
type UserService struct {
|
||||
userRepository *users_repositories.UserRepository
|
||||
secretKeyRepository *users_repositories.SecretKeyRepository
|
||||
settingsService *SettingsService
|
||||
auditLogWriter users_interfaces.AuditLogWriter
|
||||
userRepository *users_repositories.UserRepository
|
||||
secretKeyService *secrets.SecretKeyService
|
||||
settingsService *SettingsService
|
||||
auditLogWriter users_interfaces.AuditLogWriter
|
||||
}
|
||||
|
||||
func (s *UserService) SetAuditLogWriter(writer users_interfaces.AuditLogWriter) {
|
||||
@@ -162,7 +163,7 @@ func (s *UserService) SignIn(
|
||||
}
|
||||
|
||||
func (s *UserService) GetUserFromToken(token string) (*users_models.User, error) {
|
||||
secretKey, err := s.secretKeyRepository.GetSecretKey()
|
||||
secretKey, err := s.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get secret key: %w", err)
|
||||
}
|
||||
@@ -221,7 +222,7 @@ func (s *UserService) GetUserFromToken(token string) (*users_models.User, error)
|
||||
func (s *UserService) GenerateAccessToken(
|
||||
user *users_models.User,
|
||||
) (*users_dto.SignInResponseDTO, error) {
|
||||
secretKey, err := s.secretKeyRepository.GetSecretKey()
|
||||
secretKey, err := s.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get secret key: %w", err)
|
||||
}
|
||||
@@ -309,15 +310,6 @@ func (s *UserService) ChangeUserPasswordByEmail(email string, newPassword string
|
||||
}
|
||||
|
||||
func (s *UserService) ChangeUserPassword(userID uuid.UUID, newPassword string) error {
|
||||
user, err := s.userRepository.GetUserByID(userID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to get user: %w", err)
|
||||
}
|
||||
|
||||
if !user.HasPassword() {
|
||||
return errors.New("user has no password set")
|
||||
}
|
||||
|
||||
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(newPassword), bcrypt.DefaultCost)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to hash new password: %w", err)
|
||||
|
||||
@@ -309,7 +309,7 @@ func (s *WorkspaceService) GetAllWorkspaces() ([]*workspaces_models.Workspace, e
|
||||
return s.workspaceRepository.GetAllWorkspaces()
|
||||
}
|
||||
|
||||
func (s *WorkspaceService) GetWorkspaceByIDInternal(
|
||||
func (s *WorkspaceService) GetWorkspaceByID(
|
||||
workspaceID uuid.UUID,
|
||||
) (*workspaces_models.Workspace, error) {
|
||||
return s.workspaceRepository.GetWorkspaceByID(workspaceID)
|
||||
|
||||
11
backend/internal/util/encryption/di.go
Normal file
@@ -0,0 +1,11 @@
|
||||
package encryption
|
||||
|
||||
import "postgresus-backend/internal/features/encryption/secrets"
|
||||
|
||||
var fieldEncryptor = &SecretKeyFieldEncryptor{
|
||||
secrets.GetSecretKeyService(),
|
||||
}
|
||||
|
||||
func GetFieldEncryptor() FieldEncryptor {
|
||||
return fieldEncryptor
|
||||
}
|
||||
15
backend/internal/util/encryption/field_encryptor.go
Normal file
@@ -0,0 +1,15 @@
|
||||
package encryption
|
||||
|
||||
import "github.com/google/uuid"
|
||||
|
||||
type FieldEncryptor interface {
|
||||
// Encrypt encrypts a plaintext string and returns an encrypted string.
|
||||
// If the string is already encrypted, returns it as-is.
|
||||
// Empty strings are returned unchanged.
|
||||
Encrypt(itemID uuid.UUID, plaintext string) (string, error)
|
||||
|
||||
// Decrypt decrypts an encrypted string and returns a plaintext string.
|
||||
// If the string is not encrypted, returns it as-is.
|
||||
// Empty strings are returned unchanged.
|
||||
Decrypt(itemID uuid.UUID, ciphertext string) (string, error)
|
||||
}
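
Not part of the diff: a short usage sketch of the interface, mirroring the encrypt-on-save / decrypt-on-use flow in the storage changes above (storageID and password are assumed variables):

enc := encryption.GetFieldEncryptor()

// Encrypt is idempotent: an already-encrypted value is returned as-is,
// so calling it again before every save is safe.
cipher, err := enc.Encrypt(storageID, password)
if err != nil {
	return err
}

// Decrypt passes plaintext values through unchanged, so rows created
// before this change keep working.
plain, err := enc.Decrypt(storageID, cipher)
if err != nil {
	return err
}
_ = plain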
|
||||
120
backend/internal/util/encryption/secret_key_field_encryptor.go
Normal file
@@ -0,0 +1,120 @@
|
||||
package encryption
|
||||
|
||||
import (
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"crypto/hmac"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"postgresus-backend/internal/features/encryption/secrets"
|
||||
"strings"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
const encryptedPrefix = "enc:"
|
||||
|
||||
type SecretKeyFieldEncryptor struct {
|
||||
secretKeyService *secrets.SecretKeyService
|
||||
}
|
||||
|
||||
func (e *SecretKeyFieldEncryptor) Encrypt(itemID uuid.UUID, plaintext string) (string, error) {
|
||||
if plaintext == "" {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
if e.isEncrypted(plaintext) {
|
||||
return plaintext, nil
|
||||
}
|
||||
|
||||
masterKey, err := e.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to get master key: %w", err)
|
||||
}
|
||||
|
||||
block, err := aes.NewCipher([]byte(masterKey)[:32])
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create cipher: %w", err)
|
||||
}
|
||||
|
||||
gcm, err := cipher.NewGCM(block)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create GCM: %w", err)
|
||||
}
|
||||
|
||||
nonce := e.deriveNonce(itemID, masterKey, gcm.NonceSize())
|
||||
|
||||
ciphertext := gcm.Seal(nil, nonce, []byte(plaintext), nil)
|
||||
|
||||
nonceBase64 := base64.StdEncoding.EncodeToString(nonce)
|
||||
ciphertextBase64 := base64.StdEncoding.EncodeToString(ciphertext)
|
||||
|
||||
return fmt.Sprintf("%s%s:%s", encryptedPrefix, nonceBase64, ciphertextBase64), nil
|
||||
}
|
||||
|
||||
func (e *SecretKeyFieldEncryptor) Decrypt(itemID uuid.UUID, ciphertext string) (string, error) {
|
||||
if ciphertext == "" {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
if !e.isEncrypted(ciphertext) {
|
||||
return ciphertext, nil
|
||||
}
|
||||
|
||||
parts := strings.SplitN(ciphertext, ":", 3)
|
||||
if len(parts) != 3 {
|
||||
return "", errors.New("invalid encrypted format")
|
||||
}
|
||||
|
||||
nonceBase64 := parts[1]
|
||||
ciphertextBase64 := parts[2]
|
||||
|
||||
nonce, err := base64.StdEncoding.DecodeString(nonceBase64)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to decode nonce: %w", err)
|
||||
}
|
||||
|
||||
encryptedData, err := base64.StdEncoding.DecodeString(ciphertextBase64)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to decode ciphertext: %w", err)
|
||||
}
|
||||
|
||||
masterKey, err := e.secretKeyService.GetSecretKey()
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to get master key: %w", err)
|
||||
}
|
||||
|
||||
block, err := aes.NewCipher([]byte(masterKey)[:32])
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create cipher: %w", err)
|
||||
}
|
||||
|
||||
gcm, err := cipher.NewGCM(block)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create GCM: %w", err)
|
||||
}
|
||||
|
||||
plaintext, err := gcm.Open(nil, nonce, encryptedData, nil)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to decrypt: %w", err)
|
||||
}
|
||||
|
||||
return string(plaintext), nil
|
||||
}
|
||||
|
||||
func (e *SecretKeyFieldEncryptor) isEncrypted(value string) bool {
|
||||
return strings.HasPrefix(value, encryptedPrefix)
|
||||
}
|
||||
|
||||
func (e *SecretKeyFieldEncryptor) deriveNonce(
|
||||
itemID uuid.UUID,
|
||||
masterKey string,
|
||||
nonceSize int,
|
||||
) []byte {
|
||||
h := hmac.New(sha256.New, []byte(masterKey))
|
||||
h.Write(itemID[:])
|
||||
hash := h.Sum(nil)
|
||||
return hash[:nonceSize]
|
||||
}
|
||||
@@ -0,0 +1,120 @@
package encryption

import (
    "testing"

    "github.com/google/uuid"
    "github.com/stretchr/testify/assert"
)

func Test_Encrypt_Decrypt_RoundTrip(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()
    plaintext := "my-secret-password"

    encrypted, err := encryptor.Encrypt(itemID, plaintext)
    assert.NoError(t, err)
    assert.NotEmpty(t, encrypted)
    assert.NotEqual(t, plaintext, encrypted)
    assert.Contains(t, encrypted, "enc:")

    decrypted, err := encryptor.Decrypt(itemID, encrypted)
    assert.NoError(t, err)
    assert.Equal(t, plaintext, decrypted)
}

func Test_Encrypt_EmptyString_ReturnsEmpty(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()

    encrypted, err := encryptor.Encrypt(itemID, "")
    assert.NoError(t, err)
    assert.Empty(t, encrypted)
}

func Test_Decrypt_EmptyString_ReturnsEmpty(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()

    decrypted, err := encryptor.Decrypt(itemID, "")
    assert.NoError(t, err)
    assert.Empty(t, decrypted)
}

func Test_Decrypt_PlaintextValue_ReturnsAsIs(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()
    plaintext := "not-encrypted-password"

    decrypted, err := encryptor.Decrypt(itemID, plaintext)
    assert.NoError(t, err)
    assert.Equal(t, plaintext, decrypted)
}

func Test_Encrypt_DetectsAlreadyEncryptedFormat(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()
    alreadyEncrypted := "enc:nonce:ciphertext"

    result, err := encryptor.Encrypt(itemID, alreadyEncrypted)
    assert.NoError(t, err)
    assert.Equal(t, alreadyEncrypted, result)
}

func Test_Encrypt_SamePlaintext_DifferentItemIDs_ProducesDifferentCiphertext(t *testing.T) {
    encryptor := GetFieldEncryptor()
    plaintext := "shared-secret"
    itemID1 := uuid.New()
    itemID2 := uuid.New()

    encrypted1, err := encryptor.Encrypt(itemID1, plaintext)
    assert.NoError(t, err)

    encrypted2, err := encryptor.Encrypt(itemID2, plaintext)
    assert.NoError(t, err)

    assert.NotEqual(t, encrypted1, encrypted2)

    decrypted1, err := encryptor.Decrypt(itemID1, encrypted1)
    assert.NoError(t, err)
    assert.Equal(t, plaintext, decrypted1)

    decrypted2, err := encryptor.Decrypt(itemID2, encrypted2)
    assert.NoError(t, err)
    assert.Equal(t, plaintext, decrypted2)
}

func Test_Encrypt_AlreadyEncrypted_ReturnsAsIs(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()
    plaintext := "my-password"

    encrypted1, err := encryptor.Encrypt(itemID, plaintext)
    assert.NoError(t, err)

    encrypted2, err := encryptor.Encrypt(itemID, encrypted1)
    assert.NoError(t, err)

    assert.Equal(t, encrypted1, encrypted2)
}

func Test_Decrypt_MalformedData_ReturnsError(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()

    _, err := encryptor.Decrypt(itemID, "enc:invalid")
    assert.Error(t, err)

    _, err = encryptor.Decrypt(itemID, "enc:invalid:invalid-base64")
    assert.Error(t, err)
}

func Test_EncryptedFormat_ContainsPrefix(t *testing.T) {
    encryptor := GetFieldEncryptor()
    itemID := uuid.New()
    plaintext := "test-secret"

    encrypted, err := encryptor.Encrypt(itemID, plaintext)
    assert.NoError(t, err)
    assert.Contains(t, encrypted, "enc:")
}
@@ -6,6 +6,7 @@ import (
    "os"
    "path/filepath"
    "runtime"
    "strings"

    env_utils "postgresus-backend/internal/util/env"
)
@@ -151,6 +152,24 @@ func VerifyPostgresesInstallation(
    logger.Info("All PostgreSQL version-specific client tools verification completed successfully!")
}

// EscapePgpassField escapes special characters in a field value for .pgpass file format.
// According to PostgreSQL documentation, the .pgpass file format requires:
// - Backslash (\) must be escaped as \\
// - Colon (:) must be escaped as \:
// Additionally, newlines and carriage returns are removed to prevent format corruption.
func EscapePgpassField(field string) string {
    // Remove newlines and carriage returns that would break .pgpass format
    field = strings.ReplaceAll(field, "\r", "")
    field = strings.ReplaceAll(field, "\n", "")

    // Escape backslashes first (order matters!)
    // Then escape colons
    field = strings.ReplaceAll(field, "\\", "\\\\")
    field = strings.ReplaceAll(field, ":", "\\:")

    return field
}

func getPostgresqlBasePath(
    version PostgresqlVersion,
    envMode env_utils.EnvMode,
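A `.pgpass` entry is five colon-separated fields (`hostname:port:database:username:password`), which is why backslashes and colons inside any field must be escaped. A small illustrative sketch of using the helper above when building such a line (the surrounding code is an assumption, not the repository's implementation; only the escaping rules match `EscapePgpassField`):

```go
package main

import (
    "fmt"
    "strings"
)

// escapePgpassField repeats the escaping rules of EscapePgpassField above:
// strip newlines and carriage returns, escape backslashes first, then colons.
func escapePgpassField(field string) string {
    field = strings.ReplaceAll(field, "\r", "")
    field = strings.ReplaceAll(field, "\n", "")
    field = strings.ReplaceAll(field, "\\", "\\\\")
    field = strings.ReplaceAll(field, ":", "\\:")
    return field
}

func main() {
    // A password containing both the .pgpass separator and a backslash.
    password := `p:a\ss`

    line := fmt.Sprintf("%s:%s:%s:%s:%s",
        "localhost", "5432", "mydb", "postgres", escapePgpassField(password))

    fmt.Println(line) // localhost:5432:mydb:postgres:p\:a\\ss
}
```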
@@ -0,0 +1,17 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE s3_storages
    ADD COLUMN s3_prefix TEXT;

ALTER TABLE s3_storages
    ADD COLUMN s3_use_virtual_hosted_style BOOLEAN NOT NULL DEFAULT FALSE;
-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin
ALTER TABLE s3_storages
    DROP COLUMN s3_use_virtual_hosted_style;

ALTER TABLE s3_storages
    DROP COLUMN s3_prefix;
-- +goose StatementEnd
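For context, these two columns typically control how objects are addressed in S3-compatible storage: an optional key prefix prepended to every uploaded object, and whether the bucket name goes into the hostname (virtual-hosted style) or into the path. The sketch below illustrates that distinction only; it is not the repository's upload code, and the names are made up:

```go
package main

import (
    "fmt"
    "path"
)

// objectURL shows how a key prefix and the addressing style usually affect
// the final object URL for S3-compatible storage.
func objectURL(endpoint, bucket, prefix, key string, virtualHosted bool) string {
    fullKey := path.Join(prefix, key) // e.g. "backups/db1.dump"
    if virtualHosted {
        // Bucket becomes part of the host: my-bucket.s3.amazonaws.com/...
        return fmt.Sprintf("https://%s.%s/%s", bucket, endpoint, fullKey)
    }
    // Path style keeps the bucket in the path: s3.amazonaws.com/my-bucket/...
    return fmt.Sprintf("https://%s/%s/%s", endpoint, bucket, fullKey)
}

func main() {
    fmt.Println(objectURL("s3.amazonaws.com", "my-bucket", "backups", "db1.dump", true))
    fmt.Println(objectURL("s3.amazonaws.com", "my-bucket", "backups", "db1.dump", false))
}
```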
backend/migrations/20251116195618_add_azure_blob_storage.sql (new file, 28 lines)
@@ -0,0 +1,28 @@
-- +goose Up
-- +goose StatementBegin

CREATE TABLE azure_blob_storages (
    storage_id UUID PRIMARY KEY,
    auth_method TEXT NOT NULL,
    connection_string TEXT,
    account_name TEXT,
    account_key TEXT,
    container_name TEXT NOT NULL,
    endpoint TEXT,
    prefix TEXT
);

ALTER TABLE azure_blob_storages
    ADD CONSTRAINT fk_azure_blob_storages_storage
        FOREIGN KEY (storage_id)
            REFERENCES storages (id)
            ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

DROP TABLE IF EXISTS azure_blob_storages;

-- +goose StatementEnd
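The columns suggest two authentication shapes: a full connection string, or an account name plus account key (with an optional custom endpoint, for example a local Azurite emulator). As an illustration of the first shape only, this is the standard Azure Storage connection-string format; the helper name is hypothetical:

```go
package main

import "fmt"

// connectionString assembles the standard Azure Storage connection string
// from an account name and key.
func connectionString(accountName, accountKey string) string {
    return fmt.Sprintf(
        "DefaultEndpointsProtocol=https;AccountName=%s;AccountKey=%s;EndpointSuffix=core.windows.net",
        accountName, accountKey,
    )
}

func main() {
    fmt.Println(connectionString("mystorageaccount", "<base64-account-key>"))
}
```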
@@ -0,0 +1,25 @@
-- +goose Up
-- +goose StatementBegin

ALTER TABLE backup_configs
    ADD COLUMN encryption TEXT NOT NULL DEFAULT 'NONE';

ALTER TABLE backups
    ADD COLUMN encryption_salt TEXT,
    ADD COLUMN encryption_iv TEXT,
    ADD COLUMN encryption TEXT NOT NULL DEFAULT 'NONE';

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

ALTER TABLE backups
    DROP COLUMN IF EXISTS encryption,
    DROP COLUMN IF EXISTS encryption_iv,
    DROP COLUMN IF EXISTS encryption_salt;

ALTER TABLE backup_configs
    DROP COLUMN IF EXISTS encryption;

-- +goose StatementEnd
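Storing a per-backup salt and IV next to an encryption mode is the usual layout when the file key is derived from a passphrase and each backup gets fresh randomness. Purely as a sketch of that pattern (not the repository's actual scheme, key-derivation parameters, or column encoding):

```go
package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
)

// randomHex returns n random bytes as a hex string, the kind of value that
// could be stored in TEXT columns such as encryption_salt / encryption_iv.
func randomHex(n int) (string, error) {
    buf := make([]byte, n)
    if _, err := rand.Read(buf); err != nil {
        return "", err
    }
    return hex.EncodeToString(buf), nil
}

func main() {
    salt, _ := randomHex(16) // per-backup salt for key derivation
    iv, _ := randomHex(16)   // per-backup IV for the cipher
    fmt.Println("salt:", salt)
    fmt.Println("iv:", iv)
}
```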
@@ -0,0 +1,28 @@
-- +goose Up
-- +goose StatementBegin

ALTER TABLE postgresql_databases
    DROP CONSTRAINT IF EXISTS fk_postgresql_databases_restore_id;

DROP INDEX IF EXISTS idx_postgresql_databases_restore_id;

ALTER TABLE postgresql_databases
    DROP COLUMN IF EXISTS restore_id;

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

ALTER TABLE postgresql_databases
    ADD COLUMN restore_id UUID;

CREATE INDEX idx_postgresql_databases_restore_id ON postgresql_databases (restore_id);

ALTER TABLE postgresql_databases
    ADD CONSTRAINT fk_postgresql_databases_restore_id
        FOREIGN KEY (restore_id)
            REFERENCES restores (id)
            ON DELETE CASCADE;

-- +goose StatementEnd
@@ -0,0 +1,18 @@
-- +goose Up
-- +goose StatementBegin

ALTER TABLE webhook_notifiers
    ADD COLUMN body_template TEXT,
    ADD COLUMN headers TEXT DEFAULT '[]';

-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin

ALTER TABLE webhook_notifiers
    DROP COLUMN body_template,
    DROP COLUMN headers;

-- +goose StatementEnd
deploy/helm/.helmignore (new file, 23 lines)
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
deploy/helm/Chart.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v2
name: postgresus
description: A Helm chart for Postgresus - PostgreSQL backup and management system
type: application
version: 1.0.0
appVersion: "v1.45.3"
keywords:
  - postgresql
  - backup
  - database
  - restore
home: https://github.com/RostislavDugin/postgresus
deploy/helm/README.md (new file, 190 lines)
@@ -0,0 +1,190 @@
# Postgresus Helm Chart

## Installation

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace
```

After installation, get the external IP:

```bash
kubectl get svc -n postgresus
```

Access Postgresus at `http://<EXTERNAL-IP>` (port 80).

## Configuration

### Main Parameters

| Parameter          | Description        | Default Value               |
| ------------------ | ------------------ | --------------------------- |
| `namespace.create` | Create namespace   | `true`                      |
| `namespace.name`   | Namespace name     | `postgresus`                |
| `image.repository` | Docker image       | `rostislavdugin/postgresus` |
| `image.tag`        | Image tag          | `latest`                    |
| `image.pullPolicy` | Image pull policy  | `Always`                    |
| `replicaCount`     | Number of replicas | `1`                         |

### Resources

| Parameter                   | Description    | Default Value |
| --------------------------- | -------------- | ------------- |
| `resources.requests.memory` | Memory request | `1Gi`         |
| `resources.requests.cpu`    | CPU request    | `500m`        |
| `resources.limits.memory`   | Memory limit   | `1Gi`         |
| `resources.limits.cpu`      | CPU limit      | `500m`        |

### Storage

| Parameter                      | Description               | Default Value          |
| ------------------------------ | ------------------------- | ---------------------- |
| `persistence.enabled`          | Enable persistent storage | `true`                 |
| `persistence.storageClassName` | Storage class             | `""` (cluster default) |
| `persistence.accessMode`       | Access mode               | `ReadWriteOnce`        |
| `persistence.size`             | Storage size              | `10Gi`                 |
| `persistence.mountPath`        | Mount path                | `/postgresus-data`     |

### Service

| Parameter                  | Description             | Default Value  |
| -------------------------- | ----------------------- | -------------- |
| `service.type`             | Service type            | `LoadBalancer` |
| `service.port`             | External port           | `80`           |
| `service.targetPort`       | Container port          | `4005`         |
| `service.headless.enabled` | Enable headless service | `true`         |

### Traffic Exposure (3 Options)

The chart supports 3 ways to expose Postgresus:

| Method                      | Use Case                                      | Default  |
| --------------------------- | --------------------------------------------- | -------- |
| **LoadBalancer/NodePort**   | Simple cloud clusters                         | Enabled  |
| **Ingress**                 | Traditional nginx/traefik ingress controllers | Disabled |
| **HTTPRoute (Gateway API)** | Modern gateways (Istio, Envoy, Cilium)        | Disabled |

#### Ingress

| Parameter               | Description       | Default Value            |
| ----------------------- | ----------------- | ------------------------ |
| `ingress.enabled`       | Enable Ingress    | `false`                  |
| `ingress.className`     | Ingress class     | `nginx`                  |
| `ingress.hosts[0].host` | Hostname          | `postgresus.example.com` |
| `ingress.tls`           | TLS configuration | `[]`                     |

#### HTTPRoute (Gateway API)

| Parameter           | Description             | Default Value                  |
| ------------------- | ----------------------- | ------------------------------ |
| `route.enabled`     | Enable HTTPRoute        | `false`                        |
| `route.apiVersion`  | Gateway API version     | `gateway.networking.k8s.io/v1` |
| `route.hostnames`   | Hostnames for the route | `["postgresus.example.com"]`   |
| `route.parentRefs`  | Gateway references      | `[]`                           |
| `route.annotations` | Route annotations       | `{}`                           |

### Health Checks

| Parameter                | Description            | Default Value |
| ------------------------ | ---------------------- | ------------- |
| `livenessProbe.enabled`  | Enable liveness probe  | `true`        |
| `readinessProbe.enabled` | Enable readiness probe | `true`        |

## Examples

### Basic Installation (LoadBalancer on port 80)

Default installation exposes Postgresus via LoadBalancer on port 80:

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace
```

Access via `http://<EXTERNAL-IP>`

### Using NodePort

If your cluster doesn't support LoadBalancer:

```yaml
# nodeport-values.yaml
service:
  type: NodePort
  port: 80
  targetPort: 4005
  nodePort: 30080
```

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace -f nodeport-values.yaml
```

Access via `http://<NODE-IP>:30080`

### Enable Ingress with HTTPS

For domain-based access with TLS:

```yaml
# ingress-values.yaml
service:
  type: ClusterIP
  port: 4005
  targetPort: 4005

ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: backup.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: backup-example-com-tls
      hosts:
        - backup.example.com
```

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace -f ingress-values.yaml
```

### HTTPRoute (Gateway API)

For clusters using Istio, Envoy Gateway, Cilium, or other Gateway API implementations:

```yaml
# httproute-values.yaml
service:
  type: ClusterIP

route:
  enabled: true
  hostnames:
    - backup.example.com
  parentRefs:
    - name: my-gateway
      namespace: istio-system
```

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace -f httproute-values.yaml
```

### Custom Storage Size

```yaml
# storage-values.yaml
persistence:
  size: 50Gi
  storageClassName: "fast-ssd"
```

```bash
helm install postgresus ./deploy/helm -n postgresus --create-namespace -f storage-values.yaml
```
deploy/helm/templates/_helpers.tpl (new file, 72 lines)
@@ -0,0 +1,72 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "postgresus.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "postgresus.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "postgresus.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "postgresus.labels" -}}
helm.sh/chart: {{ include "postgresus.chart" . }}
{{ include "postgresus.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "postgresus.selectorLabels" -}}
app.kubernetes.io/name: {{ include "postgresus.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app: postgresus
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "postgresus.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "postgresus.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{/*
Namespace
*/}}
{{- define "postgresus.namespace" -}}
{{- if .Values.namespace.create }}
{{- .Values.namespace.name }}
{{- else }}
{{- .Release.Namespace }}
{{- end }}
{{- end }}
deploy/helm/templates/httproute.yaml (new file, 35 lines)
@@ -0,0 +1,35 @@
{{- if .Values.route.enabled -}}
apiVersion: {{ .Values.route.apiVersion }}
kind: {{ .Values.route.kind }}
metadata:
  name: {{ template "postgresus.fullname" . }}
  annotations: {{ toYaml .Values.route.annotations | nindent 4 }}
  labels:
    app.kubernetes.io/component: "app"
    {{- include "postgresus.labels" . | nindent 4 }}
spec:
  {{- with .Values.route.parentRefs }}
  parentRefs:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.route.hostnames }}
  hostnames:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  rules:
    - backendRefs:
        - name: {{ template "postgresus.fullname" . }}-service
          port: {{ .Values.service.port }}
      {{- with .Values.route.filters }}
      filters:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.route.matches }}
      matches:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.route.timeouts }}
      timeouts:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
Some files were not shown because too many files have changed in this diff.