Compare commits

...

64 Commits

Author SHA1 Message Date
gyptazy
8630333e4b feature: Add Helm chart support for ProxLB
Fixes: #281
2025-08-19 06:27:38 +02:00
gyptazy
7bd9a9b038 Merge pull request #282 from MaoMaoCake/main
Feat: Add support for helm
2025-08-18 19:31:14 +02:00
maomaocake
16651351de Implemented changes from comments 2025-08-18 12:46:29 +07:00
maomaocake
63805f1f50 Added support for resource limits and requests 2025-08-17 16:11:18 +07:00
maomaocake
c0ff1b5273 Added support for new user defined labels 2025-08-17 15:59:17 +07:00
maomaocake
07f8596fc5 Helm Chart 2025-08-17 15:41:55 +07:00
Florian
affbe433f9 Merge pull request #280 from gyptazy/docs/fix-ipv6-example-syntax
docs: Fix IPv6 address syntax in given examples
2025-08-06 16:42:13 +02:00
Florian Paul Azim Hoberg
7bda22e754 docs: Fix IPv6 address syntax in given examples 2025-08-06 16:38:23 +02:00
Florian
253dcf8eb9 Merge pull request #274 from gyptazy/docs/273-proxmox-9-compatibility
docs: Add compatibility matrix of tested Proxmox versions
2025-07-20 12:37:20 +02:00
gyptazy
6212d23268 docs: Add compatibility matrix of tested Proxmox versions
Fixes: #273
2025-07-20 12:36:25 +02:00
Florian
cf8c06393f Merge pull request #270 from gyptazy/docs/258-extend-doc-of-cluster-node-names-usage
docs: Adjust docs regarding maintenance node hostnames
2025-07-17 11:53:33 +02:00
Florian Paul Azim Hoberg
5c23fd3433 docs: Adjust docs regarding maintenance node hostnames
Fixes: #258
2025-07-17 11:52:08 +02:00
Florian
0fb732fc8c Merge pull request #269 from gyptazy/fix/268-evaluate-balancing-types
fix: Fix balancing evaluation of guest types (e.g., VM or CT).
2025-07-17 11:44:29 +02:00
Florian Paul Azim Hoberg
f36d96c72a fix: Fix balancing evaluation of guest types (e.g., VM or CT).
Fixes: #268
2025-07-17 11:41:00 +02:00
Florian
9cc03717ef Merge pull request #267 from gyptazy/release/1.1.5
release: Create release 1.1.5
2025-07-14 11:13:12 +02:00
Florian Paul Azim Hoberg
4848887ccc release: Create release 1.1.5
Fixes: #266
2025-07-14 11:08:38 +02:00
Gombócz Márton
04476feeaf Docs/custom api port (#264)
* docs(README.md): added description and examples of port declaration in hosts
* docs(docs/03_configuration): updated hosts description about using ports
2025-07-10 10:33:44 +02:00
Florian
b3765bf0ae Merge pull request #261 from gyptazy/feature/260-custom-api-ports
feature: Allow custom (instead of static tcp/8006) API ports for API hosts
2025-07-10 09:19:34 +02:00
Florian Paul Azim Hoberg
806b728a14 feature: Allow custom (instead of static tcp/8006) API ports for API hosts.
Fixes: #260
2025-07-08 17:39:29 +02:00
Florian
2c34ec91b1 Merge pull request #257 from gyptazy/release/prepare-1.1.5
release: Create release 1.1.5 beta 1
2025-06-29 11:08:15 +02:00
gyptazy
08b746a53b release: Create release 1.1.5 beta 1 2025-06-27 16:50:17 +02:00
Florian
615e2f5608 Merge pull request #256 from gyptazy/release/1.1.4
release: Create release 1.1.4
2025-06-27 16:40:53 +02:00
gyptazy
fa1e1ad8a3 release: Create release 1.1.4
Fixes: #254
2025-06-27 16:23:31 +02:00
pmarasse
c78def3919 Fix loglevels (#255)
* Modified some loglevels to make output lighter at INFO level

Co-authored-by: Philippe MARASSE <philippe@marasse.fr>
2025-06-27 15:10:57 +02:00
Florian
54c53b9860 Merge pull request #253 from gyptazy/feature/245-add-guest-pinning-to-group-of-nodes
feature: Allow pinning of guests to a group of nodes
2025-06-26 13:59:44 +02:00
Florian Paul Azim Hoberg
1fe8f703cc feature: Allow pinning of guests to a group of nodes
* You can now simply define multiple tags with plb_pin_node names
    where nodes are being evaluated and the one with the lowest
    resource usage will be taken.

Fixes: #245
2025-06-26 13:54:05 +02:00
Florian
7ba806abf7 Merge pull request #252 from gyptazy/fix/248-dry-run-with-deactivated-balancing
Fix an issue where balancing was performed in combination with deactivated balancing and dry-run mode
2025-06-24 10:09:10 +02:00
Florian Paul Azim Hoberg
6b2e120739 Fix: Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode
Fixes: #248
2025-06-24 10:06:28 +02:00
Florian
e4103df326 Merge pull request #251 from gyptazy/prepare/1.1.4
release: Prepare release 1.1.4 beta
2025-06-24 10:00:27 +02:00
Florian Paul Azim Hoberg
f2acd4efa6 release: Prepare release 1.1.4 beta 2025-06-24 09:56:06 +02:00
Florian
f4ed8d9928 Merge pull request #247 from gyptazy/fix/readme-1.1.3
fix: Adjust readme for Container image version of release 1.1.3.
2025-06-19 09:32:13 +02:00
gyptazy
ba74254b93 fix: Adjust readme for Container image version of release 1.1.3. 2025-06-19 09:31:31 +02:00
Florian
792a0f3820 Merge pull request #246 from gyptazy/release/1.1.3
release: Prepare release 1.1.3
2025-06-19 09:30:31 +02:00
gyptazy
b766041c4c release: Prepare release 1.1.3
Fixes: #242
2025-06-19 09:25:56 +02:00
Florian
a31e41f839 Merge pull request #243 from gyptazy/feature/241-make-amount-of-parallel-migrations-configureable
feature: Make the amount of parallel migrations configurable
2025-06-10 18:46:39 +02:00
gyptazy
7cb5a31b89 feature: Make the amount of parallel migrations configurable
Fixes: #241
2025-06-05 16:12:47 +02:00
Florian
617d0a3ae3 Merge pull request #240 from gyptazy/feature/239-add-optional-wait-time-until-service-starts
feature: Add optional wait time before service action.
2025-06-04 16:49:24 +02:00
gyptazy
db3a3b77fc feature: Add optional wait time before service action.
Fixes: #239
2025-06-01 16:06:42 +02:00
Florian
5a9643275a Merge pull request #237 from gyptazy/feature/94-balance-cpu-by-average-consumption
feature: Use the average CPU consumption of a guest within the last 60 minutes instead of the current CPU usage
2025-05-29 12:01:13 +02:00
Florian
60d1e333aa Merge pull request #238 from gyptazy/feature/189-add-reload-function
feature: Add reload (SIGHUP) function to ProxLB to reload the configuration.
2025-05-29 12:00:42 +02:00
gyptazy
96dc435cf6 feature: Add reload (SIGHUP) function to ProxLB to reload the configuration.
Fixes: #189
2025-05-24 09:56:20 +02:00
gyptazy
263b08b53a feature: Add reload method to ProxLB systemd file
Fixes: #189
2025-05-24 09:19:42 +02:00
gyptazy
89102d517e feature: Use the average CPU consumption of a guest within the last 60 minutes instead of the current CPU usage
- Using the current CPU consumption of a guest object is too volatile and does not represent
    the real usage. Therefore, we use the average consumption of the cpu values within the
    last 60 minutes.

Thanks-to: @philslab-ninja
Fixes: #94
2025-05-24 09:17:14 +02:00
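The averaging approach from the commit above can be sketched as: instead of reading the instantaneous `cpu` value of a guest, fetch the per-minute samples of the last hour (Proxmox exposes such history via its RRD data endpoints) and use their mean. The sample format below is an assumption for illustration.

```python
def average_cpu(samples):
    """Return the mean CPU usage over the given samples (e.g. last 60 minutes).

    `samples` is a list of dicts as returned by an RRD-style endpoint,
    e.g. [{"cpu": 0.10}, {"cpu": 0.90}, ...]; entries without a cpu
    value (guest was down for that minute) are skipped.
    """
    values = [s["cpu"] for s in samples if s.get("cpu") is not None]
    if not values:
        return 0.0
    return sum(values) / len(values)
```

A short CPU spike in one sample then barely moves the balancing decision, which is exactly the volatility problem the commit describes.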
Florian
845af4abc8 Merge pull request #236 from gyptazy/prepare/dev1.1.3beta
development: Adjust beta release 1.1.3
2025-05-22 13:42:05 +02:00
gyptazy
3e02403598 development: Adjust beta release 1.1.3 2025-05-22 06:58:53 +02:00
Florian
0b0d569877 Merge pull request #235 from gyptazy/feature/232-align-proxmox-ha-maintenance-mode
feature: Align maintenance mode with Proxmox HA maintenance mode
2025-05-22 06:55:32 +02:00
Florian Paul Azim Hoberg
1cbda2e2f9 feature: Align maintenance mode with Proxmox HA maintenance mode
Fixes: #232
2025-05-21 18:19:50 +02:00
gyptazy
b6febf1933 feature: Add action to create multiarch container 2025-05-20 19:57:04 +02:00
Florian
53a6d2a459 Merge pull request #233 from gyptazy/feature/231-arm64-container
feature: Add workflows to build container images for AMD64 + ARM64 architecture
2025-05-20 12:11:03 +02:00
Florian Paul Azim Hoberg
6c82ce010b feature: Add workflows to build container images for AMD64 + ARM64 architecture
Fixes: #231
2025-05-20 12:06:22 +02:00
Florian
4b8b73e468 Merge pull request #228 from gyptazy/release/1.1.2-readme
docs: Update readme with new image version
2025-05-13 08:26:48 +02:00
Florian Paul Azim Hoberg
a75729dd6a docs: Update readme with new image version 2025-05-13 08:26:03 +02:00
Florian
b8792a87af Merge pull request #227 from gyptazy/release/1.1.2
release: Create release 1.1.2
2025-05-13 08:18:23 +02:00
Florian Paul Azim Hoberg
c1261a2d3c release: Create release 1.1.2
Fixes: #226
2025-05-13 08:13:43 +02:00
Florian
0035f57738 Merge pull request #223 from gyptazy/fix/222-extend-debug-messages
fix: Force type cast guest cpu count to int where in some corner cases a str got returned.
2025-05-08 16:23:10 +02:00
gyptazy
b372d361e7 fix: Force type cast guest cpu count to int where in some corner cases a str got returned.
Fixes: #222
2025-05-03 08:53:56 +02:00
Florian
1e096e1aae Merge pull request #221 from gyptazy/fix/137-systemd-unit-file
fix: Adjust the systemd unit file to run after the network target on non PVE nodes
2025-04-26 08:43:33 +02:00
gyptazy
420d669236 fix: Adjust the systemd unit file to run after the network target on non PVE nodes
Fixes: #137
2025-04-26 08:42:24 +02:00
Florian
24aa6aabc6 Merge pull request #220 from gyptazy/feature/157-add-retry-proxmox-api
feature: Add a retry mechanism when connecting to the Proxmox API
2025-04-24 13:49:55 +02:00
Florian Paul Azim Hoberg
5a9a4af532 feature: Add a retry mechanism when connecting to the Proxmox API
Fixes: #157
2025-04-24 13:29:41 +02:00
Florian
50f93e5f59 Merge pull request #219 from gyptazy/feature/218-add-1-to-1-relations-guest-hypervisor
feature: Add possibility to pin guests to a specific hypervisor node.
2025-04-24 13:01:44 +02:00
Florian Paul Azim Hoberg
33784f60b4 feature: Add possibility to pin guests to a specific hypervisor node.
Fixes: #218
2025-04-24 08:54:58 +02:00
Florian
9a261aa781 Merge pull request #213 from gyptazy/prepare/release-v1.1.2
release: Prepare release v1.1.2
2025-04-19 20:14:12 +02:00
gyptazy
366d5bc264 release: Prepare release v1.1.2 2025-04-19 20:10:49 +02:00
44 changed files with 849 additions and 114 deletions

View File

@@ -0,0 +1,2 @@
fixed:
- Fix systemd unit file to run after network on non PVE nodes (by @robertdahlem) [#137]

View File

@@ -0,0 +1,2 @@
added:
- Add a configurable retry mechanism when connecting to the Proxmox API (by @gyptazy) [#157]

View File

@@ -0,0 +1,2 @@
added:
- Add 1-to-1 relationships between guest and hypervisor node to pin a guest on a node (by @gyptazy) [#218]

View File

@@ -0,0 +1,2 @@
fixed:
- Force type cast cpu count of guests to int for some corner cases where a str got returned (by @gyptazy). [#222]

View File

@@ -0,0 +1 @@
date: 2025-05-13

View File

@@ -0,0 +1,2 @@
added:
- Add reload (SIGHUP) function to ProxLB to reload the configuration (by @gyptazy). [#189]

View File

@@ -0,0 +1,2 @@
fixed:
- Align maintenance mode with Proxmox HA maintenance mode (by @gyptazy). [#232]

View File

@@ -0,0 +1,2 @@
added:
- Add optional wait time parameter to delay execution until the service takes action (by @gyptazy). [#239]

View File

@@ -0,0 +1,2 @@
added:
- Make the amount of parallel migrations configurable (by @gyptazy). [#241]

View File

@@ -0,0 +1,2 @@
changed:
- Use the average CPU consumption of a guest within the last 60 minutes instead of the current CPU usage (by @philslab-ninja & @gyptazy). [#94]

View File

@@ -0,0 +1 @@
date: 2025-06-19

View File

@@ -0,0 +1,2 @@
added:
- Allow pinning of guests to a group of nodes (@gyptazy). [#245]

View File

@@ -0,0 +1,2 @@
fixed:
- Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode (@gyptazy). [#248]

View File

@@ -0,0 +1,2 @@
fixed:
- Modified log levels to make output lighter at INFO level (@pmarasse) [#255]

View File

@@ -0,0 +1 @@
date: 2025-06-27

View File

@@ -0,0 +1,2 @@
added:
- Allow custom API ports instead of fixed tcp/8006 (@gyptazy). [#260]

View File

@@ -0,0 +1 @@
date: 2025-07-14

View File

@@ -0,0 +1,2 @@
fixed:
- Fix balancing evaluation of guest types (e.g., VM or CT) (@gyptazy). [#268]

View File

@@ -0,0 +1 @@
date: TBD

View File

@@ -0,0 +1,26 @@
name: "Build Container Image: AMD64"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build amd64 image and save as tar
        run: |
          docker buildx build \
            --platform linux/amd64 \
            --load \
            -t proxlb-image:amd64 \
            .
          docker save proxlb-image:amd64 -o proxlb_image_amd64.tar
      - name: Upload Docker image artifact
        uses: actions/upload-artifact@v4
        with:
          name: proxlb-image-amd64
          path: proxlb_image_amd64.tar

View File

@@ -0,0 +1,26 @@
name: "Build Container Image: ARM64"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build arm64 image and save as tar
        run: |
          docker buildx build \
            --platform linux/arm64 \
            --load \
            -t proxlb-image:arm64 \
            .
          docker save proxlb-image:arm64 -o proxlb_image_arm64.tar
      - name: Upload Docker image artifact
        uses: actions/upload-artifact@v4
        with:
          name: proxlb-image-arm64
          path: proxlb_image_arm64.tar

View File

@@ -0,0 +1,23 @@
name: "Build Container Image: Multiarch"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build multi-arch image and save as tar
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --output type=tar,dest=proxlb_image_multiarch.tar \
            .
      - name: Upload Docker image artifact
        uses: actions/upload-artifact@v4
        with:
          name: proxlb-image-multiarch
          path: proxlb_image_multiarch.tar

View File

@@ -5,6 +5,56 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.1.5] - 2025-07-14
### Added
- Allow custom API ports instead of fixed tcp/8006 (@gyptazy). [#260]
## [1.1.4] - 2025-06-27
### Added
- Allow pinning of guests to a group of nodes (@gyptazy). [#245]
### Fixed
- Modified log levels to make output lighter at INFO level (@pmarasse) [#255]
- Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode (@gyptazy). [#248]
## [1.1.3] - 2025-06-19
### Added
- Add reload (SIGHUP) function to ProxLB to reload the configuration (by @gyptazy). [#189]
- Add optional wait time parameter to delay execution until the service takes action (by @gyptazy). [#239]
- Make the amount of parallel migrations configurable (by @gyptazy). [#241]
### Changed
- Use the average CPU consumption of a guest within the last 60 minutes instead of the current CPU usage (by @philslab-ninja & @gyptazy). [#94]
### Fixed
- Align maintenance mode with Proxmox HA maintenance mode (by @gyptazy). [#232]
## [1.1.2] - 2025-05-13
### Added
- Add a configurable retry mechanism when connecting to the Proxmox API (by @gyptazy) [#157]
- Add 1-to-1 relationships between guest and hypervisor node to pin a guest on a node (by @gyptazy) [#218]
### Fixed
- Force type cast cpu count of guests to int for some corner cases where a str got returned (by @gyptazy). [#222]
- Fix systemd unit file to run after network on non PVE nodes (by @robertdahlem) [#137]
## [1.1.1] - 2025-04-20
### Added

View File

@@ -21,6 +21,7 @@
1. [Affinity Rules](#affinity-rules)
2. [Anti-Affinity Rules](#anti-affinity-rules)
3. [Ignore VMs](#ignore-vms)
4. [Pin VMs to Hypervisor Nodes](#pin-vms-to-hypervisor-nodes)
7. [Maintenance](#maintenance)
8. [Misc](#misc)
1. [Bugs](#bugs)
@@ -76,6 +77,10 @@ Before starting any migrations, ProxLB validates that rebalancing actions are ne
## Installation
### Requirements / Dependencies
* Proxmox
* Proxmox 7.x
* Proxmox 8.x
* Proxmox 9.x (Beta 1 tested)
* Python3.x
* proxmoxer
* requests
@@ -160,6 +165,10 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
| Version | Image |
|------|:------:|
| latest | cr.gyptazy.com/proxlb/proxlb:latest |
| v1.1.5 | cr.gyptazy.com/proxlb/proxlb:v1.1.5 |
| v1.1.4 | cr.gyptazy.com/proxlb/proxlb:v1.1.4 |
| v1.1.3 | cr.gyptazy.com/proxlb/proxlb:v1.1.3 |
| v1.1.2 | cr.gyptazy.com/proxlb/proxlb:v1.1.2 |
| v1.1.1 | cr.gyptazy.com/proxlb/proxlb:v1.1.1 |
| v1.1.0 | cr.gyptazy.com/proxlb/proxlb:v1.1.0 |
| v1.0.6 | cr.gyptazy.com/proxlb/proxlb:v1.0.6 |
@@ -236,21 +245,24 @@ The following options can be set in the configuration file `proxlb.yaml`:
| Section | Option | Sub Option | Example | Type | Description |
|---------|:------:|:----------:|:-------:|:----:|:-----------:|
| `proxmox_api` | | | | | |
| | hosts | | ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe'] | `List` | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. |
| | hosts | | ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe', 'virt01.example.com:443', '[fc00::1]', '[fc00::1]:443', 'fc00::1:8006'] | `List` | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. You can specify custom ports. In case of IPv6 without brackets, the port is considered to be after the last colon. |
| | user | | root@pam | `Str` | Username for the API. |
| | pass | | FooBar | `Str` | Password for the API. (Recommended: Use API token authorization!) |
| | token_id | | proxlb | `Str` | Token ID of the user for the API. |
| | token_secret | | 430e308f-1337-1337-beef-1337beefcafe | `Str` | Secret of the token ID for the API. |
| | ssl_verification | | True | `Bool` | Validate SSL certificates (1) or ignore (0). [values: `1` (default), `0`] |
| | timeout | | 10 | `Int` | Timeout for the Proxmox API in sec. |
| | retries | | 1 | `Int` | How often a connection attempt to the defined API host should be performed. |
| | wait_time | | 1 | `Int` | How many seconds should be waited before performing another connection attempt to the API host. |
| `proxmox_cluster` | | | | | |
| | maintenance_nodes | | ['virt66.example.com'] | `List` | A list of Proxmox nodes that are defined to be in a maintenance. |
| | maintenance_nodes | | ['virt66.example.com'] | `List` | A list of Proxmox nodes that are defined to be in a maintenance. (must be the same node names as used within the cluster) |
| | ignore_nodes | | [] | `List` | A list of Proxmox nodes that are defined to be ignored. |
| | overprovisioning | | False | `Bool` | Avoids balancing when nodes would become overprovisioned. |
| `balancing` | | | | | |
| | enable | | True | `Bool` | Enables the guest balancing.|
| | enforce_affinity | | True | `Bool` | Enforcing affinity/anti-affinity rules but balancing might become worse. |
| | parallel | | False | `Bool` | If guests should be moved in parallel or sequentially.|
| | parallel_jobs | | 5 | `Int` | The amount of parallel jobs when migrating guests. (default: `5`)|
| | live | | True | `Bool` | If guests should be moved live or shutdown.|
| | with_local_disks | | True | `Bool` | If balancing of guests should include local disks.|
| | balance_types | | ['vm', 'ct'] | `List` | Defines the types of guests that should be honored. [values: `vm`, `ct`]|
@@ -263,19 +275,27 @@ The following options can be set in the configuration file `proxlb.yaml`:
| | `schedule` | | | `Dict` | Schedule config block for rebalancing. |
| | | interval | 12 | `Int` | How often rebalancing should occur in daemon mode.|
| | | format | hours | `Str` | Sets the time format. [values: `hours` (default), `minutes`]|
| | `delay` | | | `Dict` | Schedule config block for an optional delay until the service starts. |
| | | enable | False | `Bool` | If a delay time should be validated.|
| | | time | 1 | `Int` | Delay time until the service starts after the initial execution.|
| | | format | hours | `Str` | Sets the time format. [values: `hours` (default), `minutes`]|
| | log_level | | INFO | `Str` | Defines the default log level that should be logged. [values: `INFO` (default), `WARNING`, `CRITICAL`, `DEBUG`] |
An example of the configuration file looks like:
```
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
user: root@pam
#pass: crazyPassw0rd!
token_id: proxlb
token_secret: 430e308f-1337-1337-beef-1337beefcafe
pass: crazyPassw0rd!
# API Token method
# token_id: proxlb
# token_secret: 430e308f-1337-1337-beef-1337beefcafe
ssl_verification: True
timeout: 10
# API Connection retries
# retries: 1
# wait_time: 1
proxmox_cluster:
maintenance_nodes: ['virt66.example.com']
@@ -299,6 +319,10 @@ service:
schedule:
interval: 12
format: hours
delay:
enable: False
time: 1
format: hours
log_level: INFO
```
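The `retries` / `wait_time` options shown (commented out) in the config example above describe a simple retry loop over the host list. A minimal sketch, assuming a `connect` callable as a stand-in for the real API client; the function name and error handling are illustrative, not ProxLB's actual code:

```python
import time

def connect_with_retries(connect, hosts, retries=1, wait_time=1):
    """Try each API host in order; retry each up to `retries` times,
    sleeping `wait_time` seconds between attempts. Returns the first
    successful connection, mirroring 'first reachable host wins'."""
    last_error = None
    for host in hosts:
        for attempt in range(retries):
            try:
                return connect(host)
            except ConnectionError as err:
                last_error = err
                if attempt < retries - 1:
                    time.sleep(wait_time)
    raise ConnectionError(f"no API host reachable: {last_error}")
```

This also illustrates the API loadbalancing behaviour: a dead first host only costs `retries * wait_time` seconds before the next host in the list is tried.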
@@ -344,7 +368,7 @@ As a result, ProxLB will try to place the VMs with the `plb_anti_affinity_ntp` t
**Note:** While this ensures that ProxLB tries to distribute these VMs across different physical hosts within the Proxmox cluster, this may not always work. If more guests are attached to the group than there are nodes in the cluster, they still need to run somewhere. If this case occurs, the next node with the most free resources will be selected.
### Ignore VMs / CTs
### Ignore VMs
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-ignore-vm-movement.jpg"/> Guests, such as VMs or CTs, can also be completely ignored. This means they won't be affected by any migration (even when (anti-)affinity rules are enforced). To ensure a proper resource evaluation, these guests are still collected and evaluated but simply skipped for balancing actions. There is also a permission aspect to the implementation: while ProxLB's configuration file may be locked down by restrictive file permissions and only be read- and writeable by the Proxmox administrators, users and groups may still want to decide on their own that their systems shouldn't be moved. Therefore, these users can simply set a specific tag on the guest object - just like the (anti-)affinity rules.
To define a guest to be ignored from the balancing, users assign a tag with the prefix `plb_ignore_$TAG`:
@@ -358,6 +382,22 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
**Note:** Ignored guests are really ignored. Even by enforcing affinity rules this guest will be ignored.
### Pin VMs to Specific Hypervisor Nodes
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be useful when physical hardware that is not generally available within the cluster is attached to a node.
To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
#### Example for Screenshot
```
plb_pin_node03
```
As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
You can also repeat this step multiple times with different node names to create a group of allowed hosts on which the guest may be served. In this case, ProxLB takes the node with the lowest used resources, according to the defined balancing values, from this group.
**Note:** The given node names from the tag are validated. This means ProxLB validates whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
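The tag evaluation described above can be sketched as follows. The `plb_pin_` prefix comes from the docs; the usage metric is a simplified stand-in for ProxLB's real balancing values:

```python
PIN_PREFIX = "plb_pin_"

def allowed_nodes(tags, cluster_usage):
    """Return the pinning candidates for a guest, lowest usage first.

    `tags` are the guest's tags; `cluster_usage` maps node name to its
    current resource usage (lower is better). Node names that are not
    part of the cluster are dropped, mirroring ProxLB's validation.
    """
    pinned = {t[len(PIN_PREFIX):] for t in tags if t.startswith(PIN_PREFIX)}
    valid = pinned & set(cluster_usage)
    return sorted(valid, key=lambda node: cluster_usage[node])
```

An empty result means no valid pin exists, so the regular balancing process takes over, as the note above describes.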
## Maintenance
<img src="https://cdn.gyptazy.com/images/proxlb-rebalancing-demo.gif"/>

View File

@@ -1,5 +1,5 @@
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
user: root@pam
pass: crazyPassw0rd!
# API Token method
@@ -7,6 +7,9 @@ proxmox_api:
# token_secret: 430e308f-1337-1337-beef-1337beefcafe
ssl_verification: True
timeout: 10
# API Connection retries
# retries: 1
# wait_time: 1
proxmox_cluster:
maintenance_nodes: ['virt66.example.com']
@@ -17,6 +20,9 @@ balancing:
enable: True
enforce_affinity: False
parallel: False
# If running parallel jobs, you can define
# the amount of parallel jobs (default: 5)
parallel_jobs: 1
live: True
with_local_disks: True
balance_types: ['vm', 'ct']
@@ -30,4 +36,8 @@ service:
schedule:
interval: 12
format: hours
delay:
enable: False
time: 1
format: hours
log_level: INFO
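The `parallel` / `parallel_jobs` options in the config diff above amount to a bounded worker pool. A minimal sketch, with `migrate` as a stand-in for the real migration call:

```python
from concurrent.futures import ThreadPoolExecutor

def run_migrations(migrate, guests, parallel=False, parallel_jobs=5):
    """Run one migration per guest, either sequentially or with at most
    `parallel_jobs` migrations in flight, mirroring the config options."""
    if not parallel:
        return [migrate(guest) for guest in guests]
    with ThreadPoolExecutor(max_workers=parallel_jobs) as pool:
        # map() preserves the input order of guests in the results.
        return list(pool.map(migrate, guests))
```

Capping the pool size matters here: each migration can move local disks over the network, so an unbounded pool could saturate the cluster links.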

debian/changelog
View File

@@ -1,3 +1,36 @@
proxlb (1.1.5) stable; urgency=medium

  * Allow custom API ports instead of fixed tcp/8006. (Closes: #260)

 -- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Mon, 14 Jul 2025 11:07:34 +0000

proxlb (1.1.4) stable; urgency=medium

  * Allow pinning of guests to a group of nodes. (Closes: #245)
  * Modified log levels to make output lighter at INFO level. (Closes: #255)
  * Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode. (Closes: #248)

 -- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Fri, 27 Jun 2025 16:22:58 +0000

proxlb (1.1.3) stable; urgency=medium

  * Add reload (SIGHUP) function to ProxLB to reload the configuration. (Closes: #189)
  * Add optional wait time parameter to delay execution until the service takes action. (Closes: #239)
  * Make the amount of parallel migrations configurable. (Closes: #241)
  * Use the average CPU consumption of a guest within the last 60 minutes instead of the current CPU usage. (Closes: #94)
  * Align maintenance mode with Proxmox HA maintenance mode. (Closes: #232)

 -- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Thu, 19 Jun 2025 09:10:43 +0000

proxlb (1.1.2) stable; urgency=medium

  * Add a configurable retry mechanism when connecting to the Proxmox API. (Closes: #157)
  * Add 1-to-1 relationships between guest and hypervisor node to pin a guest on a node. (Closes: #218)
  * Force type cast cpu count of guests to int for some corner cases where a str got returned. (Closes: #222)
  * Fix systemd unit file to run after network on non PVE nodes. (Closes: #137)

 -- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Mon, 13 May 2025 18:12:04 +0000
proxlb (1.1.1) stable; urgency=medium
  * Fix tag evaluation for VMs for being ignored for further balancing. (Closes: #163)
@@ -12,7 +45,7 @@ proxlb (1.1.1) stable; urgency=medium
* Fix the guest type relationship in the logs when a migration job failed. (Closes: #204)
* Requery a guest if that running guest reports 0 cpu usage. (Closes: #200)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Tue, 1 Apr 2025 18:55:02 +0000
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Sat, 20 Apr 2025 20:55:02 +0000
proxlb (1.1.0) stable; urgency=medium

View File

@@ -11,6 +11,7 @@
2. [Anti-Affinity Rules](#anti-affinity-rules)
3. [Affinity / Anti-Affinity Enforcing](#affinity--anti-affinity-enforcing)
4. [Ignore VMs](#ignore-vms)
5. [Pin VMs to Hypervisor Nodes](#pin-vms-to-hypervisor-nodes)
2. [API Loadbalancing](#api-loadbalancing)
3. [Ignore Host-Nodes or Guests](#ignore-host-nodes-or-guests)
4. [IPv6 Support](#ipv6-support)
@@ -18,6 +19,7 @@
6. [Parallel Migrations](#parallel-migrations)
7. [Run as a Systemd-Service](#run-as-a-systemd-service)
8. [SSL Self-Signed Certificates](#ssl-self-signed-certificates)
9. [Node Maintenances](#node-maintenances)
## Authentication / User Accounts / Permissions
### Authentication
@@ -124,12 +126,32 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
**Note:** Ignored guests are really ignored. Even by enforcing affinity rules this guest will be ignored.
### Pin VMs to Specific Hypervisor Nodes
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be useful when physical hardware that is not generally available within the cluster is attached to a node.
To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
#### Example for Screenshot
```
plb_pin_node03
```
As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
You can also repeat this step multiple times with different node names to create a group of allowed hosts on which the guest may be served. In this case, ProxLB takes the node with the lowest used resources, according to the defined balancing values, from this group.
**Note:** The given node names from the tag are validated. This means ProxLB validates whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
### API Loadbalancing
ProxLB supports API loadbalancing, where one or more host objects can be defined as a list. This ensures that you can operate ProxLB without further changes even when one or more nodes are offline or in maintenance. When defining multiple hosts, the first reachable one will be picked.
ProxLB supports API loadbalancing, where one or more host objects can be defined as a list. This ensures that you can operate ProxLB without further changes even when one or more nodes are offline or in maintenance. When defining multiple hosts, the first reachable one will be picked. You can specify custom ports in the list. There are four ways of defining hosts with ports:
1. Hostname or IPv4 without port (in this case the default 8006 will be used)
2. Hostname or IPv4 with port
3. IPv6 in brackets with optional port
4. IPv6 without brackets; in this case the port is assumed to be after the last colon
```
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe', 'virt01.example.com:443', '[fc00::1]', '[fc00::1]:443', 'fc00::1:8006']
```
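The four rules above can be sketched as a small parser. This is a heuristic illustration of the documented behaviour, not ProxLB's actual code:

```python
DEFAULT_PORT = 8006

def split_host_port(entry):
    """Split a host list entry into (host, port) following the four rules."""
    if entry.startswith("["):            # rule 3: bracketed IPv6, optional port
        host, _, rest = entry[1:].partition("]")
        port = int(rest[1:]) if rest.startswith(":") else DEFAULT_PORT
        return host, port
    if entry.count(":") > 1:             # rule 4: bare IPv6, port after last colon
        host, _, port = entry.rpartition(":")
        return host, int(port)
    if ":" in entry:                     # rule 2: hostname/IPv4 with port
        host, _, port = entry.partition(":")
        return host, int(port)
    return entry, DEFAULT_PORT           # rule 1: hostname/IPv4 without port
```

Note that rule 4 is inherently ambiguous, which is why the bracketed form of rule 3 is the safer way to write IPv6 hosts.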
### Ignore Host-Nodes or Guests
@@ -192,4 +214,25 @@ proxmox_api:
ssl_verification: False
```
*Note: Disabling SSL certificate validation is not recommended.*
*Note: Disabling SSL certificate validation is not recommended.*
### Node Maintenances
To exclude specific nodes from receiving any new workloads during the balancing process, the `maintenance_nodes` configuration option can be used. This option allows administrators to define a list of nodes that are currently undergoing maintenance or should otherwise not be used for running virtual machines or containers.
```yaml
maintenance_nodes:
- virt66.example.com
```
which can also be written as:
```yaml
maintenance_nodes: ['virt66.example.com']
```
The `maintenance_nodes` key must be defined as a list, even if it only includes a single node. Each entry in the list must exactly match the node name as it is known within the Proxmox VE cluster. Do not use IP addresses, alternative DNS names, or aliases; only the actual cluster node names are valid. Once a node is marked as being in maintenance mode:
* No new workloads will be balanced or migrated onto it.
* Any existing workloads currently running on the node will be migrated away in accordance with the configured balancing strategies, assuming resources on other nodes allow.
This feature is particularly useful during planned maintenance, upgrades, or troubleshooting, ensuring that services continue to run with minimal disruption while the specified node is being worked on.
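The two maintenance behaviours described above can be sketched as two filters. This is a simplification of the real balancing logic; the function names are illustrative:

```python
def migration_targets(nodes, maintenance_nodes, ignore_nodes=()):
    """Nodes that may receive workloads: everything not in maintenance
    and not ignored. Names must match the cluster's node names exactly."""
    blocked = set(maintenance_nodes) | set(ignore_nodes)
    return [n for n in nodes if n not in blocked]

def guests_to_evacuate(guest_locations, maintenance_nodes):
    """Guests currently running on a maintenance node must be moved away
    (resources on the remaining nodes permitting)."""
    drained = set(maintenance_nodes)
    return [g for g, node in guest_locations.items() if node in drained]
```

Because the filters compare raw strings, an IP address or alias in `maintenance_nodes` would silently match nothing, which is why the exact cluster node names are required.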

helm/proxlb/Chart.yaml
View File

@@ -0,0 +1,24 @@
apiVersion: v2
name: proxlb
description: A Helm chart for self-hosted ProxLB
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: "1.1.5"
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1.1.5"


@@ -0,0 +1,13 @@
{{- define "proxlb.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{ define "proxlb.labels" }}
app.kubernetes.io/name: {{ .Release.Name }}
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: proxlb
{{- if .Values.labels }}
{{ toYaml .Values.labels }}
{{- end }}
{{ end }}


@@ -0,0 +1,11 @@
{{- if .Values.configmap.create }}
apiVersion: v1
kind: ConfigMap
metadata:
name: proxlb-config
labels:
{{- include "proxlb.labels" . | nindent 4 }}
data:
proxlb.yaml: |
{{ toYaml .Values.configmap.config | indent 4 }}
{{ end }}


@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
{{- include "proxlb.labels" . | nindent 4 }}
spec:
replicas: 1 # Number of replicas cannot be more than 1
selector:
matchLabels:
{{- include "proxlb.labels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "proxlb.labels" . | nindent 8 }}
spec:
{{- with .Values.image.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
# not interacting with the k8s cluster
automountServiceAccountToken: False
containers:
- name: proxlb
image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
{{- if .Values.extraArgs.dryRun }}
- --dry-run
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/proxlb/proxlb.yaml
subPath: proxlb.yaml
{{ if .Values.resources }}
resources:
{{ with .Values.resources }}
{{ toYaml . | nindent 10 }}
{{ end }}
{{ end }}
volumes:
- name: config
configMap:
name: proxlb-config

helm/proxlb/values.yaml

@@ -0,0 +1,60 @@
image:
registry: cr.gyptazy.com
repository: proxlb/proxlb
tag: v1.1.5
pullPolicy: IfNotPresent
imagePullSecrets: [ ]
resources:
limits:
cpu: "1000m"
memory: "2Gi"
requests:
cpu: "100m"
memory: "100Mi"
labels: {}
extraArgs:
dryRun: false
configmap:
create: true
config:
proxmox_api:
hosts: []
      # Can be either a user or a token
# user: ""
# pass: ""
# token_id: ""
# token_secret: ""
ssl_verification: True
timeout: 10
proxmox_cluster:
maintenance_nodes: [ ]
ignore_nodes: [ ]
overprovisioning: True
balancing:
enable: True
enforce_affinity: False
parallel: False
      # If running parallel jobs, you can define
      # the number of parallel jobs (default: 5)
parallel_jobs: 1
live: True
with_local_disks: True
balance_types: [ 'vm', 'ct' ]
max_job_validation: 1800
balanciness: 5
method: memory
mode: used
service:
daemon: True
schedule:
interval: 12
format: "hours"
delay:
enable: False
time: 1
format: "hours"
log_level: INFO


@@ -1,6 +1,12 @@
#!/usr/bin/env bash
VERSION="1.1.1"
VERSION="1.1.4"
# ProxLB
sed -i "s/^__version__ = .*/__version__ = \"$VERSION\"/" "proxlb/utils/version.py"
sed -i "s/version=\"[0-9]*\.[0-9]*\.[0-9]*\"/version=\"$VERSION\"/" setup.py
# Helm Chart
sed -i "s/^version: .*/version: \"$VERSION\"/" helm/proxlb/Chart.yaml
sed -i "s/^appVersion: .*/appVersion: \"v$VERSION\"/" helm/proxlb/Chart.yaml
echo "OK: Versions have been successfully set to $VERSION"


@@ -13,6 +13,7 @@ __license__ = "GPL-3.0"
import logging
import signal
from utils.logger import SystemdLogger
from utils.cli_parser import CliParser
from utils.config_parser import ConfigParser
@@ -32,6 +33,9 @@ def main():
# Initialize logging handler
logger = SystemdLogger(level=logging.INFO)
# Signal handler for SIGHUP
signal.signal(signal.SIGHUP, Helper.handler_sighup)
# Parses arguments passed from the CLI
cli_parser = CliParser()
cli_args = cli_parser.parse_args()
@@ -44,6 +48,9 @@ def main():
# Update log level from config and fallback to INFO if not defined
logger.set_log_level(proxlb_config.get('service', {}).get('log_level', 'INFO'))
# Validate of an optional service delay
Helper.get_service_delay(proxlb_config)
# Connect to Proxmox API & create API object
proxmox_api = ProxmoxApi(proxlb_config)
@@ -51,6 +58,15 @@ def main():
proxlb_config["proxmox_api"]["pass"] = "********"
while True:
# Validate if reload signal was sent during runtime
# and reload the ProxLB configuration and adjust log level
if Helper.proxlb_reload:
logger.info("Reloading ProxLB configuration.")
proxlb_config = config_parser.get_config()
logger.set_log_level(proxlb_config.get('service', {}).get('log_level', 'INFO'))
Helper.proxlb_reload = False
# Get all required objects from the Proxmox cluster
meta = {"meta": proxlb_config}
nodes = Nodes.get_nodes(proxmox_api, proxlb_config)
@@ -71,8 +87,9 @@ def main():
Helper.log_node_metrics(proxlb_data, init=False)
# Perform balancing actions via Proxmox API
if not cli_args.dry_run or not proxlb_data["meta"]["balancing"].get("enable", False):
Balancing(proxmox_api, proxlb_data)
if proxlb_data["meta"]["balancing"].get("enable", False):
if not cli_args.dry_run:
Balancing(proxmox_api, proxlb_data)
# Validate if the JSON output should be
# printed to stdout


@@ -12,6 +12,7 @@ __license__ = "GPL-3.0"
import proxmoxer
import time
from itertools import islice
from utils.logger import SystemdLogger
from typing import Dict, Any
@@ -48,34 +49,80 @@ class Balancing:
Initializes the Balancing class with the provided ProxLB data.
Args:
proxlb_data (dict): The data required for balancing VMs and CTs.
proxmox_api (object): The Proxmox API client instance used to interact with the Proxmox cluster.
proxlb_data (dict): A dictionary containing data related to the ProxLB load balancing configuration.
"""
for guest_name, guest_meta in proxlb_data["guests"].items():
def chunk_dict(data, size):
"""
Splits a dictionary into chunks of a specified size.
Args:
data (dict): The dictionary to be split into chunks.
size (int): The size of each chunk.
Yields:
dict: A chunk of the original dictionary with the specified size.
"""
logger.debug("Starting: chunk_dict.")
it = iter(data.items())
for chunk in range(0, len(data), size):
yield dict(islice(it, size))
# Check if the guest's target is not the same as the current node
if guest_meta["node_current"] != guest_meta["node_target"]:
# Check if the guest is not ignored and perform the balancing
# operation based on the guest type
if not guest_meta["ignore"]:
guest_id = guest_meta["id"]
guest_node_current = guest_meta["node_current"]
guest_node_target = guest_meta["node_target"]
# Validate if balancing should be performed in parallel or sequentially.
# If parallel balancing is enabled, set the number of parallel jobs.
parallel_jobs = proxlb_data["meta"]["balancing"].get("parallel_jobs", 5)
if not proxlb_data["meta"]["balancing"].get("parallel", False):
parallel_jobs = 1
logger.debug("Balancing: Parallel balancing is disabled. Running sequentially.")
else:
logger.debug(f"Balancing: Parallel balancing is enabled. Running with {parallel_jobs} parallel jobs.")
# VM Balancing
if guest_meta["type"] == "vm":
self.exec_rebalancing_vm(proxmox_api, proxlb_data, guest_name)
for chunk in chunk_dict(proxlb_data["guests"], parallel_jobs):
jobs_to_wait = []
# CT Balancing
elif guest_meta["type"] == "ct":
self.exec_rebalancing_ct(proxmox_api, proxlb_data, guest_name)
for guest_name, guest_meta in chunk.items():
# Check if the guest's target is not the same as the current node
if guest_meta["node_current"] != guest_meta["node_target"]:
# Check if the guest is not ignored and perform the balancing
# operation based on the guest type
if not guest_meta["ignore"]:
job_id = None
# VM Balancing
if guest_meta["type"] == "vm":
if 'vm' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug(f"Balancing: Balancing for guest {guest_name} of type VM started.")
job_id = self.exec_rebalancing_vm(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
f"Balancing: Balancing for guest {guest_name} will not be performed. "
"Guest is of type VM which is not included in allowed balancing types.")
# CT Balancing
elif guest_meta["type"] == "ct":
if 'ct' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug(f"Balancing: Balancing for guest {guest_name} of type CT started.")
job_id = self.exec_rebalancing_ct(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
f"Balancing: Balancing for guest {guest_name} will not be performed. "
"Guest is of type CT which is not included in allowed balancing types.")
# Just in case we get a new type of guest in the future
else:
logger.critical(f"Balancing: Got unexpected guest type: {guest_meta['type']}. Cannot process guest: {guest_meta['name']}.")
if job_id:
jobs_to_wait.append((guest_name, guest_meta["node_current"], job_id))
# Just in case we get a new type of guest in the future
else:
logger.critical(f"Balancing: Got unexpected guest type: {guest_meta['type']}. Cannot process guest: {guest_meta['name']}.")
logger.debug(f"Balancing: Guest {guest_name} is ignored and will not be rebalanced.")
else:
logger.debug(f"Balancing: Guest {guest_name} is ignored and will not be rebalanced.")
else:
logger.debug(f"Balancing: Guest {guest_name} is already on the target node {guest_meta['node_target']} and will not be rebalanced.")
logger.debug(f"Balancing: Guest {guest_name} is already on the target node {guest_meta['node_target']} and will not be rebalanced.")
# Wait for all jobs in the current chunk to complete
for guest_name, node, job_id in jobs_to_wait:
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, node, job_id)
def exec_rebalancing_vm(self, proxmox_api: any, proxlb_data: Dict[str, Any], guest_name: str) -> None:
"""
@@ -114,13 +161,13 @@ class Balancing:
}
try:
logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type VM.")
logger.info(f"Balancing: Starting to migrate VM guest {guest_name} from {guest_node_current} to {guest_node_target}.")
job_id = proxmox_api.nodes(guest_node_current).qemu(guest_id).migrate().post(**migration_options)
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, guest_node_current, job_id)
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors. Please check if resource is locked or similar.")
logger.debug(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors: {proxmox_api_error}")
logger.debug("Finished: exec_rebalancing_vm.")
return job_id
def exec_rebalancing_ct(self, proxmox_api: any, proxlb_data: Dict[str, Any], guest_name: str) -> None:
"""
@@ -143,13 +190,13 @@ class Balancing:
guest_node_target = proxlb_data["guests"][guest_name]["node_target"]
try:
logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type CT.")
logger.info(f"Balancing: Starting to migrate CT guest {guest_name} from {guest_node_current} to {guest_node_target}.")
job_id = proxmox_api.nodes(guest_node_current).lxc(guest_id).migrate().post(target=guest_node_target, restart=1)
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, guest_node_current, job_id)
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors. Please check if resource is locked or similar.")
logger.debug(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors: {proxmox_api_error}")
logger.debug("Finished: exec_rebalancing_ct.")
return job_id
def get_rebalancing_job_status(self, proxmox_api: any, proxlb_data: Dict[str, Any], guest_name: str, guest_current_node: str, job_id: int, retry_counter: int = 1) -> bool:
"""
@@ -167,35 +214,32 @@ class Balancing:
bool: True if the job completed successfully, False otherwise.
"""
logger.debug("Starting: get_rebalancing_job_status.")
# Parallel migrations can take a huge time and create a higher load, if not defined by an
# operator we will use a sequential mode by default
if not proxlb_data["meta"]["balancing"].get("parallel", False):
job = proxmox_api.nodes(guest_current_node).tasks(job_id).status().get()
job = proxmox_api.nodes(guest_current_node).tasks(job_id).status().get()
# Watch job id until it finalizes
if job["status"] == "running":
# Do not hammer the API while
# watching the job status
time.sleep(10)
retry_counter += 1
# Watch job id until it finalizes
if job["status"] == "running":
# Do not hammer the API while
# watching the job status
time.sleep(10)
retry_counter += 1
# Run recursion until we hit the soft-limit of maximum migration time for a guest
if retry_counter < proxlb_data["meta"]["balancing"].get("max_job_validation", 1800):
logger.debug(f"Balancing: Job ID {job_id} (guest: {guest_name}) for migration is still running... (Run: {retry_counter})")
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, guest_current_node, job_id, retry_counter)
else:
logger.warning(f"Balancing: Job ID {job_id} (guest: {guest_name}) for migration took too long. Please check manually.")
logger.debug("Finished: get_rebalancing_job_status.")
return False
# Run recursion until we hit the soft-limit of maximum migration time for a guest
if retry_counter < proxlb_data["meta"]["balancing"].get("max_job_validation", 1800):
logger.debug(f"Balancing: Job ID {job_id} (guest: {guest_name}) for migration is still running... (Run: {retry_counter})")
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, guest_current_node, job_id, retry_counter)
else:
logger.warning(f"Balancing: Job ID {job_id} (guest: {guest_name}) for migration took too long. Please check manually.")
logger.debug("Finished: get_rebalancing_job_status.")
return False
# Validate job output for errors when finished
if job["status"] == "stopped":
# Validate job output for errors when finished
if job["status"] == "stopped":
if job["exitstatus"] == "OK":
logger.debug(f"Balancing: Job ID {job_id} (guest: {guest_name}) completed successfully.")
logger.debug("Finished: get_rebalancing_job_status.")
return True
else:
logger.critical(f"Balancing: Job ID {job_id} (guest: {guest_name}) went into an error! Please check manually.")
logger.debug("Finished: get_rebalancing_job_status.")
return False
if job["exitstatus"] == "OK":
logger.debug(f"Balancing: Job ID {job_id} (guest: {guest_name}) completed successfully.")
logger.debug("Finished: get_rebalancing_job_status.")
return True
else:
logger.critical(f"Balancing: Job ID {job_id} (guest: {guest_name}) went into an error! Please check manually.")
logger.debug("Finished: get_rebalancing_job_status.")
return False
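The `chunk_dict` helper introduced in this diff drives the parallel migration batches. A standalone sketch (guest data made up for illustration) shows how a guest dictionary is split into chunks of at most `parallel_jobs` entries:

```python
from itertools import islice

def chunk_dict(data, size):
    # Yield successive dict chunks of at most `size` items each.
    it = iter(data.items())
    for _ in range(0, len(data), size):
        yield dict(islice(it, size))

# Seven guests balanced with parallel_jobs = 3 yield batches of 3, 3 and 1.
guests = {f"vm{i:02d}": {"id": 100 + i} for i in range(7)}
chunks = list(chunk_dict(guests, 3))
print([len(c) for c in chunks])  # [3, 3, 1]
```

Each chunk is migrated concurrently, and the loop waits for all job IDs in a chunk before starting the next one.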


@@ -82,6 +82,7 @@ class Calculations:
guest_node_current = proxlb_data["guests"][guest_name]["node_current"]
# Update Hardware assignments
# Update assigned values for the current node
logger.debug(f"set_node_assignment of guest {guest_name} on node {guest_node_current} with cpu_total: {proxlb_data['guests'][guest_name]['cpu_total']}, memory_total: {proxlb_data['guests'][guest_name]['memory_total']}, disk_total: {proxlb_data['guests'][guest_name]['disk_total']}.")
proxlb_data["nodes"][guest_node_current]["cpu_assigned"] += proxlb_data["guests"][guest_name]["cpu_total"]
proxlb_data["nodes"][guest_node_current]["memory_assigned"] += proxlb_data["guests"][guest_name]["memory_total"]
proxlb_data["nodes"][guest_node_current]["disk_assigned"] += proxlb_data["guests"][guest_name]["disk_total"]
@@ -128,7 +129,7 @@ class Calculations:
logger.debug("Finished: get_balanciness.")
@staticmethod
def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False) -> Dict[str, Any]:
def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False, guest_node_relation_list: list = []) -> Dict[str, Any]:
"""
Get the name of the Proxmox node in the cluster with the most free resources based on
the user defined method (e.g.: memory) and mode (e.g.: used).
@@ -137,6 +138,8 @@ class Calculations:
proxlb_data (Dict[str, Any]): The data holding all content of all objects.
return_node (bool): The indicator to simply return the best node for further
assignments.
guest_node_relation_list (list): A list of nodes that have a tag on the given
guest relationship for pinning.
Returns:
Dict[str, Any]: Updated meta data section of the node with the most free resources that should
@@ -145,8 +148,15 @@ class Calculations:
logger.debug("Starting: get_most_free_node.")
proxlb_data["meta"]["balancing"]["balance_next_node"] = ""
# Do not include nodes that are marked in 'maintenance'
# Filter and exclude nodes that are in maintenance mode
filtered_nodes = [node for node in proxlb_data["nodes"].values() if not node["maintenance"]]
# Filter and include nodes that given by a relationship between guest and node. This is only
# used if the guest has a relationship to a node defined by "pin" tags.
if len(guest_node_relation_list) > 0:
filtered_nodes = [node for node in proxlb_data["nodes"].values() if node["name"] in guest_node_relation_list]
# Filter by the defined methods and modes for balancing
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
@@ -225,6 +235,7 @@ class Calculations:
for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
Calculations.val_anti_affinity(proxlb_data, guest_name)
Calculations.val_node_relationships(proxlb_data, guest_name)
Calculations.update_node_resources(proxlb_data)
logger.debug("Finished: relocate_guests.")
@@ -278,6 +289,39 @@ class Calculations:
logger.debug("Finished: val_anti_affinity.")
@staticmethod
def val_node_relationships(proxlb_data: Dict[str, Any], guest_name: str):
"""
Validates and assigns guests to nodes based on defined relationships based on tags.
Parameters:
proxlb_data (Dict[str, Any]): The data holding all content of all objects.
guest_name (str): The name of the guest to be validated and assigned a node.
Returns:
None
"""
logger.debug("Starting: val_node_relationships.")
proxlb_data["guests"][guest_name]["processed"] = True
if len(proxlb_data["guests"][guest_name]["node_relationships"]) > 0:
logger.debug(f"Guest '{guest_name}' has relationships defined to node(s): {','.join(proxlb_data['guests'][guest_name]['node_relationships'])}. Pinning to node.")
# Get the node with the most free resources of the group
guest_node_relation_list = proxlb_data["guests"][guest_name]["node_relationships"]
Calculations.get_most_free_node(proxlb_data, False, guest_node_relation_list)
# Validate if the specified node name is really part of the cluster
if proxlb_data["meta"]["balancing"]["balance_next_node"] in proxlb_data["nodes"].keys():
logger.debug(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} is a known hypervisor node in the cluster.")
else:
logger.warning(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} but this node name is not known in the cluster!")
else:
logger.debug(f"Guest '{guest_name}' does not have any specific node relationships.")
logger.debug("Finished: val_node_relationships.")
@staticmethod
def update_node_resources(proxlb_data):
"""


@@ -63,21 +63,10 @@ class Guests:
for guest in proxmox_api.nodes(node).qemu.get():
if guest['status'] == 'running':
# If the balancing method is set to cpu, we need to wait for the guest to report
# cpu usage. This is important for the balancing process to ensure that we do not
# wait for a guest for an infinite time.
if meta["meta"]["balancing"]["method"] == "cpu":
retry_counter = 0
while guest['cpu'] == 0 and retry_counter < 10:
guest = proxmox_api.nodes(node).qemu(guest['vmid']).status.current.get()
logger.debug(f"Guest {guest['name']} (type VM) is reporting {guest['cpu']} cpu usage on retry {retry_counter}.")
retry_counter += 1
time.sleep(1)
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = guest['cpus']
guests['guests'][guest['name']]['cpu_used'] = guest['cpu'] * guest['cpus']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
@@ -90,7 +79,10 @@ class Guests:
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['type'] = 'vm'
logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
else:
logger.debug(f'Metric for VM {guest["name"]} ignored because VM is not running.')
@@ -101,8 +93,8 @@ class Guests:
if guest['status'] == 'running':
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = guest['cpus']
guests['guests'][guest['name']]['cpu_used'] = guest['cpu']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
@@ -115,9 +107,47 @@ class Guests:
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['type'] = 'ct'
logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")
else:
logger.debug(f'Metric for CT {guest["name"]} ignored because CT is not running.')
logger.debug("Finished: get_guests.")
return guests
@staticmethod
def get_guest_cpu_usage(proxmox_api, node_name: str, vm_id: int, vm_name: str) -> float:
"""
Retrieve the average CPU usage of a guest instance (VM/CT) over the past hour.
This method queries the Proxmox VE API for RRD (Round-Robin Database) data
related to CPU usage of a specific guest instance and calculates the average CPU usage
over the last hour using the "AVERAGE" consolidation function.
Args:
proxmox_api: An instance of the Proxmox API client.
node_name (str): The name of the Proxmox node hosting the VM.
vm_id (int): The unique identifier of the guest instance (VM/CT).
vm_name (str): The name of the guest instance (VM/CT).
Returns:
float: The average CPU usage as a fraction (0.0 to 1.0) over the past hour.
Returns 0.0 if no data is available.
"""
logger.debug("Starting: get_guest_cpu_usage.")
time.sleep(0.1)
try:
logger.debug(f"Getting RRD data for guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="AVERAGE")
except Exception:
logger.error(f"Failed to retrieve RRD data for guest: {vm_name} (ID: {vm_id}) on node: {node_name}. Using 0.0 as CPU usage.")
logger.debug("Finished: get_guest_cpu_usage.")
return 0.0
cpu_usage = sum(entry.get("cpu", 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
logger.debug(f"CPU RRD data for guest: {vm_name}: {cpu_usage}")
logger.debug("Finished: get_guest_cpu_usage.")
return cpu_usage
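The averaging step at the end of `get_guest_cpu_usage` can be reproduced in isolation. This sketch adds an empty-series guard (an assumption on my part; the diff divides by `len(guest_data_rrd)` directly) so a guest with no RRD samples yields 0.0:

```python
def average_cpu(rrd_entries):
    # Mean of the "cpu" field over the RRD samples; entries without the
    # field count as 0.0. The empty-series guard is an addition here.
    if not rrd_entries:
        return 0.0
    return sum(e.get("cpu", 0.0) for e in rrd_entries) / len(rrd_entries)

samples = [{"cpu": 0.10}, {"cpu": 0.30}, {}]
print(average_cpu(samples))  # ≈ 0.1333
```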


@@ -84,14 +84,14 @@ class Nodes:
nodes["nodes"][node["node"]]["disk_used_percent"] = nodes["nodes"][node["node"]]["disk_used"] / node["maxdisk"] * 100
# Evaluate if node should be set to maintenance mode
if Nodes.set_node_maintenance(proxlb_config, node["node"]):
if Nodes.set_node_maintenance(proxmox_api, proxlb_config, node["node"]):
nodes["nodes"][node["node"]]["maintenance"] = True
logger.debug("Finished: get_nodes.")
return nodes
@staticmethod
def set_node_maintenance(proxlb_config: Dict[str, Any], node_name: str) -> Dict[str, Any]:
def set_node_maintenance(proxmox_api, proxlb_config: Dict[str, Any], node_name: str) -> Dict[str, Any]:
"""
Set nodes to maintenance mode based on the provided configuration.
@@ -99,6 +99,7 @@ class Nodes:
based on the configuration provided in proxlb_config.
Args:
proxmox_api (any): The Proxmox API client instance.
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration, including maintenance nodes.
node_name: (str): The current node name within the outer iteration.
@@ -107,11 +108,24 @@ class Nodes:
"""
logger.debug("Starting: set_node_maintenance.")
# Evaluate maintenance mode by config
if proxlb_config.get("proxmox_cluster", None).get("maintenance_nodes", None) is not None:
if len(proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", [])) > 0:
if node_name in proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", []):
logger.warning(f"Node: {node_name} has been set to maintenance mode.")
logger.info(f"Node: {node_name} has been set to maintenance mode (by ProxLB config).")
return True
else:
logger.debug(f"Node: {node_name} is not in maintenance mode by ProxLB config.")
# Evaluate maintenance mode by Proxmox HA
for ha_element in proxmox_api.cluster.ha.status.current.get():
if ha_element.get("status"):
if "maintenance mode" in ha_element.get("status"):
if ha_element.get("node") == node_name:
logger.info(f"Node: {node_name} has been set to maintenance mode (by Proxmox HA API).")
return True
else:
logger.debug(f"Node: {node_name} is not in maintenance mode by Proxmox HA API.")
logger.debug("Finished: set_node_maintenance.")
@@ -135,7 +149,7 @@ class Nodes:
if proxlb_config.get("proxmox_cluster", None).get("ignore_nodes", None) is not None:
if len(proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", [])) > 0:
if node_name in proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", []):
logger.warning(f"Node: {node_name} has been set to be ignored. Not adding node!")
logger.info(f"Node: {node_name} has been set to be ignored. Not adding node!")
return True
logger.debug("Finished: set_node_ignore.")


@@ -151,3 +151,30 @@ class Tags:
logger.debug("Finished: get_ignore.")
return ignore_tag
@staticmethod
def get_node_relationships(tags: List[str]) -> List[str]:
"""
Get a node relationship tag for a guest from the Proxmox cluster by the API to pin
a guest to a node.
This method retrieves a relationship tag between a guest and a specific
hypervisor node to pin the guest to a specific node (e.g., for licensing reasons).
Args:
tags (List): A list holding all defined tags for a given guest.
Returns:
List[str]: The related hypervisor node names.
"""
logger.debug("Starting: get_node_relationships.")
node_relationship_tags = []
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_pin_"):
node_relationship_tag = tag.replace("plb_pin_", "")
node_relationship_tags.append(node_relationship_tag)
logger.debug("Finished: get_node_relationships.")
return node_relationship_tags
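A self-contained sketch of the `plb_pin_<node>` tag convention (sample tags are made up; the real method lives on the `Tags` class):

```python
def get_node_relationships(tags):
    # Collect target node names from tags of the form "plb_pin_<node>".
    nodes = []
    for tag in tags:
        if tag.startswith("plb_pin_"):
            nodes.append(tag[len("plb_pin_"):])
    return nodes

print(get_node_relationships(["plb_pin_virt66", "backup"]))  # ['virt66']
```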


@@ -10,6 +10,7 @@ __license__ = "GPL-3.0"
import json
import uuid
import re
import sys
import time
import utils.version
@@ -40,6 +41,8 @@ class Helper:
get_daemon_mode(proxlb_config: Dict[str, Any]) -> None:
Checks if the daemon mode is active and handles the scheduling accordingly.
"""
proxlb_reload = False
def __init__(self):
"""
Initializes the general Helper class.
@@ -143,6 +146,39 @@ class Helper:
logger.debug("Finished: get_daemon_mode.")
@staticmethod
def get_service_delay(proxlb_config: Dict[str, Any]) -> None:
"""
Checks if a start up delay for the service is defined and waits to proceed until
the time is up.
Parameters:
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration.
Returns:
None
"""
logger.debug("Starting: get_service_delay.")
if proxlb_config.get("service", {}).get("delay", {}).get("enable", False):
# Convert hours to seconds
if proxlb_config["service"]["delay"].get("format", "hours") == "hours":
sleep_seconds = proxlb_config.get("service", {}).get("delay", {}).get("time", 1) * 3600
# Convert minutes to seconds
elif proxlb_config["service"]["delay"].get("format", "hours") == "minutes":
sleep_seconds = proxlb_config.get("service", {}).get("delay", {}).get("time", 60) * 60
else:
logger.error("Invalid format for service delay. Please use 'hours' or 'minutes'.")
sys.exit(1)
logger.info(f"Service delay active: First run in: {proxlb_config.get('service', {}).get('delay', {}).get('time', 1)} {proxlb_config['service']['delay'].get('format', 'hours')}.")
time.sleep(sleep_seconds)
else:
logger.debug("Service delay not active. Proceeding without delay.")
logger.debug("Finished: get_service_delay.")
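The hours/minutes conversion inside `get_service_delay` boils down to a small pure function, sketched here with hypothetical names:

```python
def delay_seconds(time_value, fmt):
    # Convert a configured start-up delay to seconds; the config schema
    # only allows "hours" or "minutes" as the format.
    if fmt == "hours":
        return time_value * 3600
    if fmt == "minutes":
        return time_value * 60
    raise ValueError("format must be 'hours' or 'minutes'")

print(delay_seconds(1, "hours"))     # 3600
print(delay_seconds(90, "minutes"))  # 5400
```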
@staticmethod
def print_json(proxlb_config: Dict[str, Any], print_json: bool = False) -> None:
"""
@@ -162,3 +198,70 @@ class Helper:
print(json.dumps(filtered_data, indent=4))
logger.debug("Finished: print_json.")
@staticmethod
def handler_sighup(signum, frame):
"""
Signal handler for SIGHUP.
This method is triggered when the process receives a SIGHUP signal.
It sets the `proxlb_reload` class variable to True to indicate that
configuration should be reloaded in the main loop.
Args:
signum (int): The signal number (expected to be signal.SIGHUP).
frame (frame object): Current stack frame (unused but required by signal handler signature).
"""
logger.debug("Starting: handle_sighup.")
logger.debug("Got SIGHUP signal. Reloading...")
Helper.proxlb_reload = True
logger.debug("Finished: handle_sighup.")
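The SIGHUP-driven reload pattern can be demonstrated standalone (Unix only; `Reloader` is a hypothetical stand-in for the `Helper` class):

```python
import os
import signal

class Reloader:
    # Mirrors the Helper.proxlb_reload class flag from the diff.
    reload_requested = False

    @staticmethod
    def handle_sighup(signum, frame):
        # Only flip a flag here; the main loop performs the actual reload.
        Reloader.reload_requested = True

signal.signal(signal.SIGHUP, Reloader.handle_sighup)
os.kill(os.getpid(), signal.SIGHUP)  # simulate `kill -HUP <pid>`
print(Reloader.reload_requested)  # True
```

Keeping the handler this small is deliberate: doing real work inside a signal handler is unsafe, so the flag is checked at the top of the balancing loop instead.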
@staticmethod
def get_host_port_from_string(host_object):
"""
Parses a string containing a host (IPv4, IPv6, or hostname) and an optional port, and returns a tuple of (host, port).
Supported formats:
- Hostname or IPv4 without port: "example.com" or "192.168.0.1"
- Hostname or IPv4 with port: "example.com:8006" or "192.168.0.1:8006"
- IPv6 in brackets with optional port: "[fc00::1]" or "[fc00::1]:8006"
- IPv6 without brackets, port is assumed after last colon: "fc00::1:8006"
If no port is specified, port 8006 is used as the default.
Args:
host_object (str): A string representing a host with or without a port.
Returns:
tuple: A tuple (host: str, port: int)
"""
logger.debug("Starting: get_host_port_from_string.")
# IPv6 (with or without port, written in brackets)
match = re.match(r'^\[(.+)\](?::(\d+))?$', host_object)
if match:
host = match.group(1)
port = int(match.group(2)) if match.group(2) else 8006
return host, port
# Count colons to identify IPv6 addresses without brackets
colon_count = host_object.count(':')
# IPv4 or hostname without port
if colon_count == 0:
return host_object, 8006
# IPv4 or hostname with port
elif colon_count == 1:
host, port = host_object.split(':')
return host, int(port)
# IPv6 (with or without port, assume last colon is port)
else:
parts = host_object.rsplit(':', 1)
try:
port = int(parts[1])
return parts[0], port
except ValueError:
return host_object, 8006
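The parsing rules of `get_host_port_from_string` can be exercised standalone (the logic is transcribed from the diff into a hypothetical free function):

```python
import re

def host_port(host_object, default_port=8006):
    # Bracketed IPv6, optionally with a port: "[fc00::1]" or "[fc00::1]:8006"
    m = re.match(r'^\[(.+)\](?::(\d+))?$', host_object)
    if m:
        return m.group(1), int(m.group(2)) if m.group(2) else default_port
    colons = host_object.count(':')
    if colons == 0:   # bare hostname or IPv4
        return host_object, default_port
    if colons == 1:   # hostname or IPv4 with port
        host, port = host_object.split(':')
        return host, int(port)
    # Unbracketed IPv6: assume the text after the last colon is the port
    # if it parses as an integer, otherwise keep the whole string as host.
    head, tail = host_object.rsplit(':', 1)
    try:
        return head, int(tail)
    except ValueError:
        return host_object, default_port

print(host_port("example.com"))        # ('example.com', 8006)
print(host_port("192.168.0.1:8006"))   # ('192.168.0.1', 8006)
print(host_port("[fc00::1]:8006"))     # ('fc00::1', 8006)
```

One caveat worth noting: an unbracketed IPv6 address whose last group is numeric, such as `fc00::1` without a port, is misread as host `fc00:` with port 1, which is why the docs examples use the bracketed form.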


@@ -20,18 +20,20 @@ except ImportError:
PROXMOXER_PRESENT = False
import random
import socket
import sys
try:
import requests
REQUESTS_PRESENT = True
except ImportError:
REQUESTS_PRESENT = False
import sys
import time
try:
import urllib3
URLLIB3_PRESENT = True
except ImportError:
URLLIB3_PRESENT = False
from typing import Dict, Any
from utils.helper import Helper
from utils.logger import SystemdLogger
@@ -141,7 +143,7 @@ class ProxmoxApi:
logger.debug("Finished: validate_config.")
def api_connect_get_hosts(self, proxmox_api_endpoints: list) -> str:
def api_connect_get_hosts(self, proxlb_config, proxmox_api_endpoints: list) -> str:
"""
Perform a connectivity test to determine a working host for the Proxmox API.
@@ -152,6 +154,7 @@ class ProxmoxApi:
are found, one is chosen at random to distribute the load across the cluster.
Args:
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration.
proxmox_api_endpoints (list): A list of Proxmox API endpoints to test.
Returns:
@@ -175,26 +178,30 @@ class ProxmoxApi:
logger.critical(f"No proxmox_api hosts are defined.")
sys.exit(1)
# Get a suitable Proxmox API endpoint. If only a single endpoint is
# defined, return it immediately; if it does not work, urllib will
# raise an exception during the connection attempt.
if len(proxmox_api_endpoints) == 1:
return proxmox_api_endpoints[0]
# If we have multiple Proxmox API endpoints, we need to check each one by
# doing a connection attempt for IPv4 and IPv6. If we find a working one,
# we return that one. This allows us to define multiple endpoints in a cluster.
validated_api_hosts = []
for host in proxmox_api_endpoints:
validated = self.test_api_proxmox_host(host)
if validated:
validated_api_hosts.append(validated)
# Get the configured retry count and wait time for connecting to the
# Proxmox API, falling back to a default of 1 for each.
api_connection_retries = proxlb_config["proxmox_api"].get("retries", 1)
api_connection_wait_time = proxlb_config["proxmox_api"].get("wait_time", 1)
for api_connection_attempt in range(api_connection_retries):
validated_api_host, api_port = self.test_api_proxmox_host(host)
if validated_api_host:
validated_api_hosts.append(validated_api_host)
break
else:
logger.warning(f"Attempt {api_connection_attempt + 1}/{api_connection_retries} failed for host {host}. Retrying in {api_connection_wait_time} seconds...")
time.sleep(api_connection_wait_time)
if len(validated_api_hosts) > 0:
# Choose a random host to distribute the load across the cluster
# as a simple load balancing mechanism.
return random.choice(validated_api_hosts)
return random.choice(validated_api_hosts), api_port
logger.critical("No valid Proxmox API hosts found.")
print("No valid Proxmox API hosts found.")
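The retry-and-pick logic introduced by this hunk can be sketched as a standalone function (`choose_api_endpoint` and `probe` are hypothetical names used only for illustration; the real probe is `test_api_proxmox_host`):

```python
import random
import time

def choose_api_endpoint(endpoints, probe, retries=1, wait_time=1):
    """Sketch: retry each endpoint, then pick a random working one.

    `probe` is a callable returning (host, port) on success or None,
    mirroring what test_api_proxmox_host does for IPv4/IPv6.
    """
    validated = []
    for host in endpoints:
        for _ in range(retries):
            result = probe(host)
            if result:
                validated.append(result)
                break
            # Wait before the next attempt, mirroring wait_time handling.
            time.sleep(wait_time)
    if not validated:
        raise RuntimeError("No valid Proxmox API hosts found.")
    # Pick a random validated endpoint as a simple load-balancing step.
    return random.choice(validated)
```

With one dead and one live endpoint, only the live one survives validation, so the random choice is deterministic in that case.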
@@ -222,6 +229,10 @@ class ProxmoxApi:
"""
logger.debug("Starting: test_api_proxmox_host.")
# Check for custom ports in API hosts, which might indicate
# that an external load balancer is used.
host, port = Helper.get_host_port_from_string(host)
# Try resolving DNS to IP and log non-resolvable ones
try:
ip = socket.getaddrinfo(host, None, socket.AF_UNSPEC)
@@ -233,12 +244,12 @@ class ProxmoxApi:
for address_type in ip:
if address_type[0] == socket.AF_INET:
logger.debug(f"{host} is type ipv4.")
if self.test_api_proxmox_host_ipv4(host):
return host
if self.test_api_proxmox_host_ipv4(host, port):
return host, port
elif address_type[0] == socket.AF_INET6:
logger.debug(f"{host} is type ipv6.")
if self.test_api_proxmox_host_ipv6(host):
return host
if self.test_api_proxmox_host_ipv6(host, port):
return host, port
else:
return False
@@ -307,7 +318,7 @@ class ProxmoxApi:
sock.close()
logger.warning(f"Host {host} is unreachable on IPv6 for tcp/{port}.")
logger.debug("Finished: test_api_proxmox_host_ipv4.")
logger.debug("Finished: test_api_proxmox_host_ipv6.")
return False
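Both per-family probes boil down to a timed TCP connect. A generic sketch, assuming the same connect-and-check pattern (`tcp_reachable` is a hypothetical helper, not ProxLB's API):

```python
import socket

def tcp_reachable(host, port, timeout=2.0, family=socket.AF_INET):
    """Sketch: return True if a TCP connection to (host, port) succeeds."""
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        # connect_ex returns 0 on success instead of raising.
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()
```

For IPv6 the same pattern applies with `family=socket.AF_INET6`, which is exactly the split between the `_ipv4` and `_ipv6` variants above.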
def test_api_user_permissions(self, proxmox_api: any):
@@ -372,7 +383,7 @@ class ProxmoxApi:
self.validate_config(proxlb_config)
# Get a valid Proxmox API endpoint
proxmox_api_endpoint = self.api_connect_get_hosts(proxlb_config.get("proxmox_api", {}).get("hosts", []))
proxmox_api_endpoint, proxmox_api_port = self.api_connect_get_hosts(proxlb_config, proxlb_config.get("proxmox_api", {}).get("hosts", []))
# Disable warnings for SSL certificate validation
if not proxlb_config.get("proxmox_api").get("ssl_verification", True):
@@ -386,6 +397,7 @@ class ProxmoxApi:
if proxlb_config.get("proxmox_api").get("token_secret", False):
proxmox_api = proxmoxer.ProxmoxAPI(
proxmox_api_endpoint,
port=proxmox_api_port,
user=proxlb_config.get("proxmox_api").get("user", True),
token_name=proxlb_config.get("proxmox_api").get("token_id", True),
token_value=proxlb_config.get("proxmox_api").get("token_secret", True),
@@ -395,6 +407,7 @@ class ProxmoxApi:
else:
proxmox_api = proxmoxer.ProxmoxAPI(
proxmox_api_endpoint,
port=proxmox_api_port,
user=proxlb_config.get("proxmox_api").get("user", True),
password=proxlb_config.get("proxmox_api").get("pass", True),
verify_ssl=proxlb_config.get("proxmox_api").get("ssl_verification", True),
@@ -414,6 +427,5 @@ class ProxmoxApi:
sys.exit(2)
logger.info(f"API connection to host {proxmox_api_endpoint} succeeded.")
logger.debug("Finished: api_connect.")
return proxmox_api


@@ -3,5 +3,5 @@ __app_desc__ = "A DRS alike loadbalancer for Proxmox clusters."
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
__version__ = "1.1.1"
__version__ = "1.1.5"
__url__ = "https://github.com/gyptazy/ProxLB"


@@ -1,11 +1,13 @@
[Unit]
Description=ProxLB - A loadbalancer for Proxmox clusters
After=pveproxy.service
Wants=pveproxy.service
After=network-online.target pveproxy.service
Wants=network-online.target pveproxy.service
[Service]
ExecStart=python3 /usr/lib/python3/dist-packages/proxlb/main.py -c /etc/proxlb/proxlb.yaml
User=plb
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
[Install]
WantedBy=multi-user.target
WantedBy=multi-user.target


@@ -2,7 +2,7 @@ from setuptools import setup
setup(
name="proxlb",
version="1.1.1",
version="1.1.5",
description="A DRS alike loadbalancer for Proxmox clusters.",
long_description="An advanced DRS alike loadbalancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
author="Florian Paul Azim Hoberg",