Compare commits

...

54 Commits

Author SHA1 Message Date
gyptazy
30e3b66be9 feature(balancing): Add an optional threshold in percent for balancing
Fixes: #342
2025-10-23 12:10:01 +02:00
gyptazy
b9be405194 Merge pull request #345 from gyptazy/fix/version
release: Set release version 1.1.9
2025-10-23 11:47:12 +02:00
gyptazy
ac108f2abe release: Set release version 1.1.9 2025-10-23 11:45:59 +02:00
gyptazy
02b43d3ef7 Merge pull request #339 from gyptazy/feature/337-add-pressure-based-balancing
feature(balancing): Add pressure (memory, cpu, disk) based balancing
2025-10-23 11:41:04 +02:00
Florian Paul Azim Hoberg
581d6d480b feature(balancing): Add pressure (memory, cpu, disk) based balancing
Fixes: #337
2025-10-23 11:32:25 +02:00
gyptazy
5b395b7f15 Merge pull request #334 from gyptazy/docs/changelog-docker-nonroot
docs: Add missing changelog fragment for container changes
2025-10-09 09:26:30 +02:00
gyptazy
7d94c52883 docs: Add missing changelog fragment for container changes
Fixes: #317
2025-10-09 09:25:07 +02:00
gyptazy
7d19788be1 Merge pull request #333 from gyptazy/release/prepare-1.1.8
release: Prepare release 1.1.8
2025-10-09 09:13:17 +02:00
gyptazy
0bbc5992ca release: Prepare release 1.1.8
Fixes: #332
2025-10-09 09:05:17 +02:00
gyptazy
a4a5d9e68a Merge pull request #331 from gyptazy/fix/329-add-proxlb-prefix-log
fix(logging): Add a static ProxLB prefix to the log output when used by journal handler
2025-10-08 18:43:11 +02:00
gyptazy
af98ee8d5b fix(logging): Add a static ProxLB prefix to the log output when used by journal handler
Fixes: #329
2025-10-08 18:39:02 +02:00
gyptazy
afc93f7b21 docs: Fix typo in readme 2025-09-26 12:14:48 +02:00
gyptazy
bc6d8c8509 Merge pull request #325 from gyptazy/doc/324-adjust-readme-docs-chapter
docs: Adjust the order and presence of the documentation in the readme.
2025-09-26 12:13:17 +02:00
gyptazy
6d50f32486 docs: Adjust the order and presence of the documentation in the readme.
Fixes: #324
2025-09-26 12:12:28 +02:00
gyptazy
5fe49a9dc1 Merge pull request #323 from gyptazy/docs/fix-img-links
docs: Fix broken images
2025-09-25 16:44:55 +02:00
gyptazy
fca1d1211c docs: Fix broken images 2025-09-25 16:43:52 +02:00
no ack hosting ab
36388d9429 Secure Dockerfile and application (#317)
fix: Run ProxLB in container as plb user instead of root
* Run ProxLB as plb user
* Run within a venv
2025-09-25 14:41:47 +02:00
gyptazy
3f424e9e6d Merge pull request #320 from gyptazy/docs/enterprise-support
docs: Add enterprise support list
2025-09-25 10:37:42 +02:00
Florian Paul Azim Hoberg
44a733aed3 docs: Add enterprise support list 2025-09-25 10:35:21 +02:00
gyptazy
2f44ff48a0 Merge pull request #319 from gyptazy/fix/318-conntrack-state-migrations-pve8
fix(balancing): Fix API errors when using conntrack aware migration with older PVE versions
2025-09-25 08:40:57 +02:00
Florian Paul Azim Hoberg
7b6db9cfdd fix(balancing): Fix API errors when using conntrack aware migration with older PVE versions.
Fixes: #318
2025-09-25 06:59:54 +02:00
gyptazy
8c473b416c docs: Adjust description 2025-09-23 20:34:04 +02:00
gyptazy
51c8afe5c5 docs: Update description 2025-09-23 20:30:20 +02:00
gyptazy
a8a154abde Merge pull request #316 from gyptazy/fix/missing-1.1.7-version
release: Replace missing 1.1.7 version
2025-09-19 10:12:53 +02:00
Florian Paul Azim Hoberg
554a3eaf72 release: Replace missing 1.1.7 version 2025-09-19 10:11:54 +02:00
gyptazy
0b35987403 Merge pull request #315 from gyptazy/release/create-release-1.7
release: Add release 1.1.7 information
2025-09-19 10:08:42 +02:00
Florian Paul Azim Hoberg
d93048db69 release: Add release 1.1.7 information
Fixes: #314
2025-09-19 07:54:01 +02:00
gyptazy
2aba7dbe23 Merge pull request #313 from gyptazy/adjust-style
fix: Add missing new lines
2025-09-19 07:41:30 +02:00
Florian Paul Azim Hoberg
ba388dfd7c fix: Add missing new lines 2025-09-19 07:39:42 +02:00
gyptazy
5aa8257d40 Merge pull request #311 from gyptazy/fix/310-guest-object-names-not-being-evaluated-in-log
fix: Fix guest object names are not being evaluated in debug log.
2025-09-15 09:53:26 +02:00
Florian Paul Azim Hoberg
99fefe20bf fix: Fix guest object names are not being evaluated in debug log.
Fixes: #310
2025-09-15 09:48:59 +02:00
gyptazy
b9fb3a60e1 Merge pull request #309 from gyptazy/fix/308-fix-validating-migration-job-ids
fix: Fix crash when validating absent migration job ids
2025-09-15 09:29:00 +02:00
Florian Paul Azim Hoberg
88b3288eb7 fix: Fix crash when validating absent migration job ids
Fixes: #308
2025-09-15 09:26:01 +02:00
gyptazy
fa0113f112 Merge pull request #307 from gyptazy/feature/304-catch-keyboard-interrupt
Add graceful shutdown for SIGINT (e.g., CTRL + C abort).
2025-09-15 07:30:30 +02:00
Florian Paul Azim Hoberg
0039ae9093 Add graceful shutdown for SIGINT (e.g., CTRL + C abort).
Fixes: #304
2025-09-15 07:26:03 +02:00
gyptazy
e3bbf31fdd Merge pull request #306 from gyptazy/feature/305-add-conntrack-options-for-migrations
feature: Add conntrack state aware migrations of VMs.
2025-09-15 07:10:30 +02:00
Florian Paul Azim Hoberg
bf393c6bbf feature: Add conntrack state aware migrations of VMs.
Fixes: #305
2025-09-15 06:59:41 +02:00
gyptazy
7e5b72cfc7 docs: Adjust container version to 1.1.6.1 2025-09-04 17:39:43 +02:00
gyptazy
0ba76f80f3 Merge pull request #302 from gyptazy/fix/296-vm
fix: Validate for node presence when pinning VMs to avoid crashing
2025-09-04 17:33:47 +02:00
gyptazy
b48ff9d677 fix: Validate for node presence when pinning VMs to avoid crashing
Fixes: #296
2025-09-04 17:31:24 +02:00
gyptazy
b5c11af474 Merge pull request #301 from gyptazy/gyptazy-patch-1
Set image version to 1.1.6
2025-09-04 08:34:49 +02:00
gyptazy
af2992747d Set image version to 1.1.6 2025-09-04 08:34:23 +02:00
gyptazy
fb8dc40c16 Merge pull request #300 from gyptazy/release/1.1.6
release: Create release 1.1.6
2025-09-04 07:55:38 +02:00
Florian Paul Azim Hoberg
34f1de8367 release: Create release 1.1.6
Fixes: #297
2025-09-04 07:53:29 +02:00
gyptazy
0e992e99de Merge pull request #298 from gyptazy/fix/296-validate-node-presence-when-pinning
fix: Validate for node presence when pinning guests to avoid crashing
2025-08-29 11:33:29 +02:00
gyptazy
f5d073dc02 fix: Validate for node presence when pinning guests to avoid crashing
Fixes: #296
Fixes: #271
2025-08-29 11:26:15 +02:00
gyptazy
70ba1f2dfc Merge pull request #295 from MiBUl-eu/main
fix: overprovisioning first node if anti_affinity group has only one member
2025-08-28 17:16:28 +02:00
MiBUl-eu
c9855f1991 Added changeloge and fixed typo 2025-08-26 10:34:48 +00:00
MiBUl-eu
9bd29158b9 fix: overprovisioning first node if anti_affinity group has only one member 2025-08-25 10:11:45 +02:00
gyptazy
1ff0c5d96e Merge pull request #293 from gyptazy/feature/290-validate-token-input-from-config-file
feature: Add validation for provided API user token id to avoid confusions
2025-08-25 08:11:56 +02:00
Florian Paul Azim Hoberg
3eb4038723 feature: Add validation for provided API user token id to avoid confusions
Fixes: #290
2025-08-25 08:07:44 +02:00
gyptazy
47e7dd3c56 Merge pull request #292 from gyptazy/fix/291-catch-stack-trace-when-user-account-is-not-given-or-wrong
fix(proxmox-api): Fix stacktrace output when validating permissions on non existing users in Proxmox
2025-08-25 07:58:32 +02:00
Florian Paul Azim Hoberg
bb8cf9033d fix(proxmox-api): Fix stacktrace output when validating permissions on non existing users in Proxmox
Fixes: #291
2025-08-25 07:55:02 +02:00
gyptazy
756b4efcbd Merge pull request #288 from gyptazy/feature/281-helm-chart-versioning
feature: Add Helm chart support for ProxLB
2025-08-19 06:28:31 +02:00
38 changed files with 853 additions and 130 deletions


@@ -0,0 +1,2 @@
added:
- Add validation for provided API user token id to avoid confusions (@gyptazy). [#291]


@@ -0,0 +1,2 @@
fixed:
- Fix stacktrace output when validating permissions on non existing users in Proxmox (@gyptazy). [#291]


@@ -0,0 +1,3 @@
fixed:
- Fix Overprovisioning first node if anti_affinity_group has only one member (@MiBUl-eu). [#295]


@@ -0,0 +1,3 @@
fixed:
- Validate for node presence when pinning guests to avoid crashing (@gyptazy). [#296]


@@ -1 +1 @@
date: TBD
date: 2025-09-04


@@ -0,0 +1,2 @@
added:
- Add graceful shutdown for SIGINT (e.g., CTRL + C abort). (@gyptazy). [#304]


@@ -0,0 +1,2 @@
added:
- Add conntrack state aware migrations of VMs (@gyptazy). [#305]


@@ -0,0 +1,2 @@
fixed:
- Fix crash when validating absent migration job ids. (@gyptazy). [#308]


@@ -0,0 +1,2 @@
fixed:
- Fix guest object names are not being evaluated in debug log. (@gyptazy). [#310]


@@ -0,0 +1 @@
date: 2025-09-19


@@ -0,0 +1,3 @@
changed:
- Container image does not run as root anymore (@mikaelkrantz945). [#317]
- Container image uses venv for running ProxLB (@mikaelkrantz945). [#317]


@@ -0,0 +1,2 @@
fixed:
- Fix API errors when using conntrack aware migration with older PVE versions (@gyptazy). [#318]


@@ -0,0 +1,2 @@
fixed:
- Add a static ProxLB prefix to the log output when used by journal handler (@gyptazy). [#329]


@@ -0,0 +1 @@
date: 2025-10-09


@@ -0,0 +1,5 @@
added:
- Add pressure (PSI) based balancing for memory, cpu, disk (req. PVE9 or greater) (@gyptazy). [#337]
- Pressure (PSI) based balancing for nodes
- Pressure (PSI) based balancing for guests
- Add PVE version evaluation


@@ -0,0 +1,2 @@
added:
- Add an optional memory balancing threshold (@gyptazy). [#342]


@@ -0,0 +1 @@
date: TBD


@@ -6,6 +6,48 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.1.8] - 2025-10-09
### Fixed
- Fix API errors when using conntrack aware migration with older PVE versions (@gyptazy). [#318]
- Add a static ProxLB prefix to the log output when used by journal handler (@gyptazy). [#329]
### Changed
- Container image does not run as root anymore (@mikaelkrantz945). [#317]
- Container image uses venv for running ProxLB (@mikaelkrantz945). [#317]
## [1.1.7] - 2025-09-19
### Added
- Add conntrack state aware migrations of VMs (@gyptazy). [#305]
- Add graceful shutdown for SIGINT (e.g., CTRL + C abort). (@gyptazy). [#304]
### Fixed
- Fix crash when validating absent migration job ids. (@gyptazy). [#308]
- Fix guest object names are not being evaluated in debug log. (@gyptazy). [#310]
## [1.1.6.1] - 2025-09-04
### Fixed
- Validate for node presence when pinning VMs to avoid crashing (@gyptazy). [#296]
## [1.1.6] - 2025-09-04
### Added
- Add validation for provided API user token id to avoid confusions (@gyptazy). [#291]
### Fixed
- Fix stacktrace output when validating permissions on non existing users in Proxmox (@gyptazy). [#291]
- Fix Overprovisioning first node if anti_affinity_group has only one member (@MiBUl-eu). [#295]
- Validate for node presence when pinning guests to avoid crashing (@gyptazy). [#296]
- Fix balancing evaluation of guest types (e.g., VM or CT) (@gyptazy). [#268]
## [1.1.5] - 2025-07-14
### Added


@@ -9,20 +9,33 @@ LABEL org.label-schema.vendor="gyptazy"
LABEL org.label-schema.url="https://proxlb.de"
LABEL org.label-schema.vcs-url="https://github.com/gyptazy/ProxLB"
# Install Python3
RUN apk add --no-cache python3 py3-pip
# --- Step 1 (root): system deps, user, dirs ---
RUN apk add --no-cache python3 py3-pip \
&& addgroup -S plb \
&& adduser -S -G plb -h /home/plb plb \
&& mkdir -p /app/conf /opt/venv \
&& chown -R plb:plb /app /home/plb /opt/venv
# Create a directory for the app
WORKDIR /app
# Copy the python program from the current directory to /app
COPY proxlb /app/proxlb
# Copy only requirements first for better layer caching
COPY --chown=plb:plb requirements.txt /app/requirements.txt
# Copy requirements to the container
COPY requirements.txt /app/requirements.txt
# --- Step 2 (appuser): venv + deps + code ---
USER plb
# Install dependencies in the virtual environment
RUN pip install --break-system-packages -r /app/requirements.txt
# Create venv owned by appuser and put it on PATH
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:${PATH}"
# Set the entry point to use the virtual environment's python
ENTRYPOINT ["/usr/bin/python3", "/app/proxlb/main.py"]
# Install Python dependencies into the venv (no PEP 668 issues)
RUN pip install --no-cache-dir -r /app/requirements.txt
# Copy application code (owned by appuser)
COPY --chown=plb:plb proxlb /app/proxlb
# Optional: placeholder config so a bind-mount can override cleanly
RUN touch /app/conf/proxlb.yaml
# Run as non-root using venv Python
ENTRYPOINT ["/opt/venv/bin/python", "/app/proxlb/main.py"]

README.md

@@ -1,5 +1,5 @@
# ProxLB - (Re)Balance VM Workloads in Proxmox Clusters
<img align="left" src="https://cdn.gyptazy.com/images/Prox-LB-logo.jpg"/>
<img align="left" src="https://cdn.gyptazy.com/img/ProxLB.jpg"/>
<br>
<p float="center"><img src="https://img.shields.io/github/license/gyptazy/ProxLB"/><img src="https://img.shields.io/github/contributors/gyptazy/ProxLB"/><img src="https://img.shields.io/github/last-commit/gyptazy/ProxLB/main"/><img src="https://img.shields.io/github/issues-raw/gyptazy/ProxLB"/><img src="https://img.shields.io/github/issues-pr/gyptazy/ProxLB"/></p>
@@ -8,31 +8,32 @@
1. [Introduction](#introduction)
2. [Features](#features)
3. [How does it work?](#how-does-it-work)
4. [Installation](#installation)
4. [Documentation](#documentation)
5. [Installation](#installation)
1. [Requirements / Dependencies](#requirements--dependencies)
2. [Debian Package](#debian-package)
4. [Container / Docker](#container--docker)
5. [Source](#source)
5. [Usage / Configuration](#usage--configuration)
6. [Usage / Configuration](#usage--configuration)
1. [GUI Integration](#gui-integration)
2. [Proxmox HA Integration](#proxmox-ha-integration)
3. [Options](#options)
6. [Affinity & Anti-Affinity Rules](#affinity--anti-affinity-rules)
7. [Affinity & Anti-Affinity Rules](#affinity--anti-affinity-rules)
1. [Affinity Rules](#affinity-rules)
2. [Anti-Affinity Rules](#anti-affinity-rules)
3. [Ignore VMs](#ignore-vms)
4. [Pin VMs to Hypervisor Nodes](#pin-vms-to-hypervisor-nodes)
7. [Maintenance](#maintenance)
8. [Misc](#misc)
8. [Maintenance](#maintenance)
9. [Misc](#misc)
1. [Bugs](#bugs)
2. [Contributing](#contributing)
3. [Documentation](#documentation)
4. [Support](#support)
9. [Author(s)](#authors)
3. [Support](#support)
4. [Enterprise-Support](#enterprise-support)
10. [Author(s)](#authors)
## Introduction
ProxLB is an advanced load balancing solution specifically designed for Proxmox clusters, addressing the absence of a Dynamic Resource Scheduler (DRS) that is familiar to VMware users. As a third-party solution, ProxLB enhances the management and efficiency of Proxmox clusters by intelligently distributing workloads across available nodes. Workloads can be balanced by different times like the guest's memory, CPU or disk usage or their assignment to avoid overprovisioning and ensuring resources.
ProxLB is an advanced load balancing solution specifically designed for Proxmox clusters, addressing the absence of an intelligent and more advanced resource scheduler. As a third-party solution, ProxLB enhances the management and efficiency of Proxmox clusters by intelligently distributing workloads across available nodes. Workloads can be balanced by different metrics, such as the guest's memory, CPU, or disk usage, or by their assigned resources, to avoid overprovisioning and to ensure resource availability.
One of the key advantages of ProxLB is that it is fully open-source and free, making it accessible for anyone to use, modify, and contribute to. This ensures transparency and fosters community-driven improvements. ProxLB supports filtering and ignoring specific nodes and guests through configuration files and API calls, providing administrators with the flexibility to tailor the load balancing behavior to their specific needs.
@@ -43,7 +44,7 @@ ProxLB can also return the best next node for guest placement, which can be inte
Overall, ProxLB significantly enhances resource management by intelligently distributing workloads, reducing downtime through its maintenance mode, and providing improved flexibility with affinity and anti-affinity rules. Its seamless integration with CI/CD tools and reliance on the Proxmox API make it a robust and secure solution for optimizing Proxmox cluster performance.
### Video of Migration
<img src="https://cdn.gyptazy.com/images/proxlb-rebalancing-demo.gif"/>
<img src="https://cdn.gyptazy.com/img/proxlb-rebalancing-demo.gif"/>
## Features
ProxLB's key features include automatic rebalancing of VMs and CTs across a Proxmox cluster based on memory, CPU, and local disk usage, while identifying optimal nodes for automation. It supports maintenance mode, affinity rules, and seamless Proxmox API integration with ACL support, offering flexible usage as a one-time operation, a daemon, or through the Proxmox Web GUI.
@@ -53,6 +54,10 @@ ProxLB's key features are by enabling automatic rebalancing of VMs and CTs acros
* Memory
* Disk (only local storage)
* CPU
* Rebalance by different modes:
* Used resources
* Assigned resources
* PSI (Pressure) of resources
* Get best nodes for further automation
* Supported Guest Types
* VMs
@@ -74,13 +79,16 @@ ProxLB is a load-balancing system designed to optimize the distribution of virtu
Before starting any migrations, ProxLB validates that rebalancing actions are necessary and beneficial. Depending on the selected balancing mode — such as CPU, memory, or disk — it creates a balancing matrix. This matrix sorts the VMs by their maximum used or assigned resources, identifying the VM with the highest usage. ProxLB then places this VM on the node with the most free resources in the selected balancing type. This process runs recursively until the operator-defined Balanciness is achieved. Balancing can be defined for the used or max. assigned resources of VMs/CTs.
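The selection loop described above can be sketched as follows. This is a simplified illustration, not ProxLB's actual implementation: the node/guest dictionaries, field names, and the percentage-based delta check are assumptions made for the example.

```python
# Simplified sketch of the balancing matrix described above (illustration only,
# not ProxLB's real code). Nodes and guests are plain dicts; "used" stands for
# the metric selected by the balancing method (memory, cpu, or disk).

def balance(nodes, guests, balanciness=10):
    """Recursively move the heaviest guest to the freest node until the
    usage delta between the busiest and the idlest node is within bounds."""
    migrations = []
    while True:
        busiest = max(nodes, key=lambda n: n["used"] / n["total"])
        freest = min(nodes, key=lambda n: n["used"] / n["total"])
        delta = (busiest["used"] / busiest["total"]
                 - freest["used"] / freest["total"]) * 100
        if busiest is freest or delta <= balanciness:
            break  # Balanciness reached: stop migrating.
        # Pick the guest with the highest usage on the busiest node.
        candidates = [g for g in guests if g["node"] == busiest["name"]]
        if not candidates:
            break
        guest = max(candidates, key=lambda g: g["used"])
        guest["node"] = freest["name"]
        busiest["used"] -= guest["used"]
        freest["used"] += guest["used"]
        migrations.append((guest["name"], busiest["name"], freest["name"]))
    return migrations
```

The same loop works for the `assigned` mode by swapping the usage metric for the allocated resources.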
## Documentation
This `README.md` doesn't contain all information and only highlights the most important facts. Extended information, such as API permissions, creating a dedicated user, best practices for running ProxLB, and much more, can be found in the [docs/](https://github.com/gyptazy/ProxLB/tree/main/docs) directory. Please consult the documentation before creating issues.
## Installation
### Requirements / Dependencies
* Proxmox
* Proxmox 7.x
* Proxmox 8.x
* Proxmox 9.x (Beta 1 tested)
* Proxmox 9.x
* Python3.x
* proxmoxer
* requests
@@ -134,7 +142,7 @@ wget -O /etc/apt/trusted.gpg.d/proxlb.asc https://repo.gyptazy.com/repository.gp
#### Debian Packages (.deb files)
If you do not want to use the repository you can also find the debian packages as a .deb file on gyptazy's CDN at:
* https://cdn.gyptazy.com/files/os/debian/proxlb/
* https://cdn.gyptazy.com/debian/
Afterwards, you can simply install the package by running:
```bash
@@ -165,6 +173,11 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
| Version | Image |
|------|:------:|
| latest | cr.gyptazy.com/proxlb/proxlb:latest |
| v1.1.9 | cr.gyptazy.com/proxlb/proxlb:v1.1.9 |
| v1.1.8 | cr.gyptazy.com/proxlb/proxlb:v1.1.8 |
| v1.1.7 | cr.gyptazy.com/proxlb/proxlb:v1.1.7 |
| v1.1.6.1 | cr.gyptazy.com/proxlb/proxlb:v1.1.6.1 |
| v1.1.6 | cr.gyptazy.com/proxlb/proxlb:v1.1.6 |
| v1.1.5 | cr.gyptazy.com/proxlb/proxlb:v1.1.5 |
| v1.1.4 | cr.gyptazy.com/proxlb/proxlb:v1.1.4 |
| v1.1.3 | cr.gyptazy.com/proxlb/proxlb:v1.1.3 |
@@ -222,7 +235,7 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
Running ProxLB is straightforward and versatile, as it only requires `Python3` and the `proxmoxer` library. This means ProxLB can be executed directly on a Proxmox node or on dedicated systems such as Debian, RedHat, or even FreeBSD, provided that the Proxmox API is accessible from the client running ProxLB. ProxLB can also run inside a container - Docker or LXC - the choice is simply up to you.
### GUI Integration
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-GUI-integration.jpg"/> ProxLB can also be accessed through the Proxmox Web UI by installing the optional `pve-proxmoxlb-service-ui` package, which depends on the proxlb package. For full Web UI integration, this package must be installed on all nodes within the cluster. Once installed, a new menu item - `Rebalancing`, appears in the cluster level under the HA section. Once installed, it offers two key functionalities:
<img align="left" src="https://cdn.gyptazy.com/img/rebalance-ui.jpg"/> ProxLB can also be accessed through the Proxmox Web UI by installing the optional `pve-proxmoxlb-service-ui` package, which depends on the proxlb package. For full Web UI integration, this package must be installed on all nodes within the cluster. Once installed, a new menu item, `Rebalancing`, appears at the cluster level under the HA section and offers two key functionalities:
* Rebalancing VM workloads
* Migrate VM workloads away from a defined node (e.g. maintenance preparation)
@@ -265,11 +278,14 @@ The following options can be set in the configuration file `proxlb.yaml`:
| | parallel_jobs | | 5 | `Int` | The number of parallel jobs when migrating guests. (default: `5`)|
| | live | | True | `Bool` | If guests should be moved live or shut down.|
| | with_local_disks | | True | `Bool` | If balancing of guests should include local disks.|
| | with_conntrack_state | | True | `Bool` | If balancing of guests should include the conntrack state.|
| | balance_types | | ['vm', 'ct'] | `List` | Defines the types of guests that should be honored. [values: `vm`, `ct`]|
| | max_job_validation | | 1800 | `Int` | How long a job validation may take in seconds. (default: 1800) |
| | balanciness | | 10 | `Int` | The maximum delta of resource usage between the nodes with the highest and lowest usage. |
| | memory_threshold | | 75 | `Int` | The maximum threshold (in percent) that needs to be hit to perform balancing actions. (Optional) |
| | method | | memory | `Str` | The balancing method that should be used. [values: `memory` (default), `cpu`, `disk`]|
| | mode | | used | `Str` | The balancing mode that should be used. [values: `used` (default), `assigned`] |
| | mode | | used | `Str` | The balancing mode that should be used. [values: `used` (default), `assigned`, `psi` (pressure)] |
| | psi | | { nodes: { memory: { pressure_full: 0.20, pressure_some: 0.20, pressure_spikes: 1.00 } } } | `Dict` | A dict of PSI based thresholds for nodes and guests |
| `service` | | | | | |
| | daemon | | True | `Bool` | If daemon mode should be activated. |
| | `schedule` | | | `Dict` | Schedule config block for rebalancing. |
@@ -308,11 +324,42 @@ balancing:
parallel: False
live: True
with_local_disks: True
with_conntrack_state: True
balance_types: ['vm', 'ct']
max_job_validation: 1800
memory_threshold: 75
balanciness: 5
method: memory
mode: used
# # PSI thresholds only apply when using mode 'psi'
# # PSI based balancing is currently in beta and req. PVE >= 9
# psi:
# nodes:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# guests:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
service:
daemon: True
@@ -343,7 +390,7 @@ ProxLB provides an advanced mechanism to define affinity and anti-affinity rules
ProxLB implements affinity and anti-affinity rules through a tag-based system within the Proxmox web interface. Each guest (virtual machine or container) can be assigned specific tags, which then dictate its placement behavior. This method maintains a streamlined and secure approach to managing VM relationships while preserving Proxmox's inherent permission model.
### Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data.
To define an affinity rule which keeps all guests assigned to this tag together on a node, users assign a tag with the prefix `plb_affinity_$TAG`:
@@ -355,7 +402,7 @@ plb_affinity_talos
As a result, ProxLB will attempt to place all VMs with the `plb_affinity_web` tag on the same host (see also the attached screenshot with the same node).
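Tags can be assigned in the Proxmox web UI or with the `qm` CLI on a cluster node. The VMID and tag value below are illustrative; the command is printed rather than executed so the sketch runs outside a PVE node:

```bash
# Example: tag VMID 101 so ProxLB keeps it together with other
# 'plb_affinity_web' guests. VMID and tag value are placeholders.
# Note: 'qm set --tags' replaces the guest's whole tag list, so
# include any existing tags in the value.
VMID=101
TAG="plb_affinity_web"
echo qm set "$VMID" --tags "$TAG"   # drop 'echo' to run it for real
```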
### Anti-Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure.
To define an anti-affinity rule that ensures to not move systems within this group to the same node, users assign a tag with the prefix:
@@ -369,7 +416,7 @@ As a result, ProxLB will try to place the VMs with the `plb_anti_affinity_ntp` t
**Note:** While ProxLB tries to distribute these VMs across different physical hosts within the Proxmox cluster, this may not always be possible. If the group contains more guests than there are nodes in the cluster, some guests must still run somewhere; in that case, the node with the most free resources is selected.
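The fallback described in the note above can be sketched as a small selection function. This is an illustration, not ProxLB's actual code; the node structure and field names are assumptions.

```python
# Sketch of the anti-affinity fallback (illustrative only, not ProxLB's code).
# Prefer a node not already hosting a member of the same anti-affinity group;
# if every node hosts one, fall back to the node with the most free resources.

def pick_node(nodes, group_nodes):
    """nodes: list of {'name': str, 'free': int}; group_nodes: set of node
    names already hosting guests of the same anti-affinity group."""
    unused = [n for n in nodes if n["name"] not in group_nodes]
    candidates = unused or nodes  # fallback: every node hosts a member
    return max(candidates, key=lambda n: n["free"])["name"]
```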
### Ignore VMs
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-ignore-vm-movement.jpg"/> Guests, such as VMs or CTs, can also be completely ignored. This means they won't be affected by any migration (even when (anti-)affinity rules are enforced). To ensure a proper resource evaluation, these guests are still collected and evaluated but simply skipped for balancing actions. Another aspect is the implementation: while ProxLB's configuration file may be tightly restricted by file permissions, it is only readable and writable by the Proxmox administrators. However, there may be users and groups who want to decide on their own that their systems shouldn't be moved. Therefore, these users can simply set a specific tag on the guest object - just like with the (anti-)affinity rules.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-ignore-vm-movement.jpg"/> Guests, such as VMs or CTs, can also be completely ignored. This means they won't be affected by any migration (even when (anti-)affinity rules are enforced). To ensure a proper resource evaluation, these guests are still collected and evaluated but simply skipped for balancing actions. Another aspect is the implementation: while ProxLB's configuration file may be tightly restricted by file permissions, it is only readable and writable by the Proxmox administrators. However, there may be users and groups who want to decide on their own that their systems shouldn't be moved. Therefore, these users can simply set a specific tag on the guest object - just like with the (anti-)affinity rules.
To define a guest to be ignored from the balancing, users assign a tag with the prefix `plb_ignore_$TAG`:
@@ -383,7 +430,7 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
**Note:** Ignored guests are really ignored. Even by enforcing affinity rules this guest will be ignored.
### Pin VMs to Specific Hypervisor Nodes
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be relevant when physical hardware is attached to a node that is not generally available within the cluster.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be relevant when physical hardware is attached to a node that is not generally available within the cluster.
To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
@@ -399,8 +446,6 @@ You can also repeat this step multiple times for different node names to create
**Note:** The node names given in the tag are validated. This means ProxLB checks whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
## Maintenance
<img src="https://cdn.gyptazy.com/images/proxlb-rebalancing-demo.gif"/>
The `maintenance_nodes` option allows operators to designate one or more Proxmox nodes for maintenance mode. When a node is set to maintenance, no new guest workloads will be assigned to it, and all existing workloads will be migrated to other available nodes within the cluster. This process ensures that (anti)-affinity rules and resource availability are respected, preventing disruptions while maintaining optimal performance across the infrastructure.
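A minimal configuration fragment for the option described above might look like the following. The node names are placeholders, and the exact nesting of `maintenance_nodes` within `proxlb.yaml` is an assumption; consult the [docs/](https://github.com/gyptazy/ProxLB/tree/main/docs) directory for the authoritative schema.

```yaml
# Hypothetical fragment for proxlb.yaml: nodes listed here are drained.
# Existing guests migrate off and no new guests are placed on them.
# Node names and nesting are illustrative; check docs/ for the real schema.
maintenance_nodes:
  - node01
  - node02
```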
### Adding / Removing Nodes from Maintenance
@@ -418,9 +463,6 @@ Bugs can be reported via the GitHub issue tracker [here](https://github.com/gypt
### Contributing
Feel free to add further documentation, to adjust existing content, or to contribute code. Please follow the style guide and naming conventions. You can find more in our [CONTRIBUTING.md](https://github.com/gyptazy/ProxLB/blob/main/CONTRIBUTING.md) file.
### Documentation
You can also find additional and more detailed documentation within the [docs/](https://github.com/gyptazy/ProxLB/tree/main/docs) directory.
### Support
If you need assistance or have any questions, we offer support through our dedicated [chat room](https://matrix.to/#/#proxlb:gyptazy.com) in Matrix or [Discord](https://discord.gg/JemGu7WbfQ). Join our community for real-time help, advice, and discussions. The Matrix and Discord rooms are bridged to ensure the communication is not split - feel free to join whichever fits you best!
@@ -435,5 +477,18 @@ Connect with us in our dedicated chat room for immediate support and live intera
**Note:** Please always keep in mind that this is a one-man show project without any further help. This includes coding, testing, packaging and all the infrastructure around it to keep this project up and running.
### Enterprise-Support
Running critical infrastructure in an enterprise environment often comes with requirements that go far beyond functionality alone. Enterprises typically expect predictable service levels, defined escalation paths, and guaranteed response times. In many cases, organizations also demand 24x7 support availability to ensure that their systems remain stable and resilient, even under unexpected circumstances.
As the creator and maintainer of ProxLB, I operate as a one-man project. While I am continuously working to improve the software, I cannot provide the type of enterprise-grade support that large organizations may require. To address this need, several companies have stepped in to offer professional services around ProxLB in Proxmox VE clusters.
Below is a list of organizations currently known to provide enterprise-level support for ProxLB. If your business relies on ProxLB in production and you require more than community-based support, these providers may be a good fit for your needs:
| Company| Country | Web |
|------|:------:|:------:|
| credativ | DE | [credativ.de](https://www.credativ.de/en/portfolio/support/proxmox-virtualization/) |
*Note: If you provide support for ProxLB, feel free to create a PR with your addition.*
### Author(s)
* Florian Paul Azim Hoberg @gyptazy (https://gyptazy.com)


@@ -25,11 +25,41 @@ balancing:
parallel_jobs: 1
live: True
with_local_disks: True
balance_types: ['vm', 'ct']
max_job_validation: 1800
balanciness: 5
method: memory
mode: used
with_conntrack_state: True
balance_types: ['vm', 'ct'] # 'vm' | 'ct'
max_job_validation: 1800 # Maximum time (in seconds) a job validation may take
memory_threshold: 75 # Optional: usage threshold (in percent) a node must exceed before balancing is triggered
balanciness: 5 # Maximum delta of resource usage between highest and lowest usage node
method: memory # 'memory' | 'cpu' | 'disk'
mode: used # 'assigned' | 'used' | 'psi'
# # PSI thresholds only apply when using mode 'psi'
# psi:
# nodes:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# guests:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
service:
daemon: True

debian/changelog

@@ -1,3 +1,42 @@
proxlb (1.1.9) stable; urgency=medium
* Add pressure (PSI) based balancing for memory, cpu, disk (req. PVE9 or greater). (Closes: #339)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Thu, 22 Oct 2025 09:04:13 +0200
proxlb (1.1.8) stable; urgency=medium
* Fix API errors when using conntrack aware migration with older PVE version. (Closes: #318)
* Add a static ProxLB prefix to the log output when used by journal handler. (Closes: #329)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Thu, 09 Oct 2025 09:04:13 +0200
proxlb (1.1.7) stable; urgency=medium
* Add conntrack state aware migrations of VMs. (Closes: #305)
* Add graceful shutdown for SIGINT command. (Closes: #304)
* Fix crash when validating absent migration job ids. (Closes: #308)
* Fix guest object names are not being evaluated in debug log. (Closes: #310)
* Note: Have a great Dutch Proxmox Day 2025!
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Thu, 04 Sep 2025 19:23:51 +0000
proxlb (1.1.6.1) stable; urgency=medium
* Validate for node presence when pinning VMs to avoid crashing. (Closes: #296)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Thu, 04 Sep 2025 19:23:51 +0000
proxlb (1.1.6) stable; urgency=medium
* Add validation for provided API user token id to avoid confusions. (Closes: #291)
* Fix stacktrace output when validating permissions on non-existing users in Proxmox. (Closes: #291)
* Fix overprovisioning of the first node if an anti_affinity_group has only one member. (Closes: #295)
* Validate for node presence when pinning guests to avoid crashing. (Closes: #296)
* Fix balancing evaluation of guest types (e.g., VM or CT). (Closes: #268)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Thu, 04 Sep 2025 05:12:19 +0000
proxlb (1.1.5) stable; urgency=medium
* Allow custom API ports instead of fixed tcp/8006. (Closes: #260)

debian/control

@@ -8,5 +8,5 @@ Build-Depends: debhelper-compat (= 13), dh-python, python3-all, python3-setuptoo
Package: proxlb
Architecture: all
Depends: ${python3:Depends}, ${misc:Depends}, python3-requests, python3-urllib3, python3-proxmoxer, python3-yaml
Description: A DRS alike Load Balancer for Proxmox Clusters
An advanced DRS alike loadbalancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.
Description: An advanced resource scheduler and load balancer for Proxmox clusters
An advanced resource scheduler and load balancer for Proxmox clusters that also supports maintenance mode and affinity/anti-affinity rules.


@@ -20,6 +20,10 @@
7. [Run as a Systemd-Service](#run-as-a-systemd-service)
8. [SSL Self-Signed Certificates](#ssl-self-signed-certificates)
9. [Node Maintenances](#node-maintenances)
10. [Balancing Methods](#balancing-methods)
1. [Used Resources](#used-resources)
2. [Assigned Resources](#assigned-resources)
3. [Pressure (PSI) based Resources](#pressure-psi-based-resources)
## Authentication / User Accounts / Permissions
### Authentication
@@ -235,4 +239,115 @@ The maintenance_nodes key must be defined as a list, even if it only includes a
* No new workloads will be balanced or migrated onto it.
* Any existing workloads currently running on the node will be migrated away in accordance with the configured balancing strategies, assuming resources on other nodes allow.
This feature is particularly useful during planned maintenance, upgrades, or troubleshooting, ensuring that services continue to run with minimal disruption while the specified node is being worked on.
## 10. Balancing Methods
ProxLB provides multiple balancing modes that define *how* resources are evaluated and compared during cluster balancing.
Each mode reflects a different strategy for determining load and distributing guests (VMs or containers) between nodes.
Depending on your environment, provisioning strategy, and performance goals, you can choose between:
| Mode | Description | Typical Use Case |
|------|--------------|------------------|
| `used` | Uses the *actual runtime resource usage* (e.g. CPU, memory, disk). | Dynamic or lab environments with frequent workload changes and tolerance for overprovisioning. |
| `assigned` | Uses the *statically defined resource allocations* from guest configurations. | Production or SLA-driven clusters that require guaranteed resources and predictable performance. |
| `psi` | Uses Linux *Pressure Stall Information (PSI)* metrics to evaluate real system contention and pressure. | Advanced clusters that require pressure-aware decisions for proactive rebalancing. |
### 10.1 Used Resources
When **mode: `used`** is configured, ProxLB evaluates the *real usage metrics* of guest objects (VMs and CTs).
It collects the current CPU, memory, and disk usage directly from the Proxmox API to determine the *actual consumption* of each guest and node.
This mode is ideal for **dynamic environments** where workloads frequently change and **overprovisioning is acceptable**. It provides the most reactive balancing behavior, since decisions are based on live usage instead of static assignment.
Typical scenarios include:
- Production environments to distribute workloads across the nodes.
- Test or development clusters with frequent VM changes.
- Clusters where resource spikes are short-lived.
- Environments where slight resource contention is tolerable.
#### Example Configuration
```yaml
balancing:
mode: used
```
### 10.2 Assigned Resources
When **mode: `assigned`** is configured, ProxLB evaluates the *provisioned or allocated resources* of each guest (VM or CT) instead of their runtime usage.
It uses data such as **CPU cores**, **memory limits**, and **disk allocations** defined in Proxmox to calculate how much of each node's capacity is reserved.
This mode is ideal for **production clusters** where:
- Overcommitment is *not allowed or only minimally tolerated*.
- Each node's workload is planned based on the assigned capacities.
- Administrators want predictable resource distribution aligned with provisioning policies.
Unlike the `used` mode, `assigned` focuses purely on the *declared configuration* of guests and remains stable even if actual usage varies temporarily.
Typical scenarios include:
- Enterprise environments with SLA or QoS requirements.
- Clusters where workloads are sized deterministically.
- Situations where consistent node utilization and capacity awareness are crucial.
#### Example Configuration
```yaml
balancing:
mode: assigned
```
### 10.3 Pressure (PSI) based Resources
> [!IMPORTANT]
> PSI based balancing is still in beta! If you find any bugs, please raise an issue including metrics of all nodes and affected guests. You can provide metrics directly from PVE or Grafana (via node_exporter or pve_exporter).
When **mode: `psi`** is configured, ProxLB uses the **Linux Pressure Stall Information (PSI)** interface to measure the *real-time pressure* on system resources such as **CPU**, **memory**, and **disk I/O**.
Unlike the `used` or `assigned` modes, which rely on static or average metrics, PSI provides *direct insight into how often and how long tasks are stalled* because of insufficient resources.
This enables ProxLB to make **proactive balancing decisions** — moving workloads *before* performance degradation becomes visible to the user.
**IMPORTANT**: Predictively distributing workloads is risky and might not result in the expected state. Therefore, ProxLB migrates only a single instance every 60 minutes to obtain fresh real metrics and to validate whether further changes are required. Keep in mind that migrations are also costly and should be avoided as much as possible.
PSI metrics are available for both **nodes** and **guest objects**, allowing fine-grained balancing decisions:
- **Node-level PSI:** Detects cluster nodes under systemic load or contention.
- **Guest-level PSI:** Identifies individual guests suffering from memory, CPU, or I/O stalls.
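As background, the kernel exposes these metrics under `/proc/pressure/` (e.g. `/proc/pressure/memory`). A minimal, standalone parser for that file format - independent of ProxLB's own implementation, which reads the values via the Proxmox API - might look like:

```python
def parse_psi(text: str) -> dict:
    """Parse Linux PSI output, e.g. the contents of /proc/pressure/memory.

    Each line looks like:
      some avg10=0.12 avg60=0.08 avg300=0.01 total=123456
    Returns {"some": {"avg10": 0.12, ...}, "full": {...}}.
    """
    metrics = {}
    for line in text.strip().splitlines():
        kind, *fields = line.split()
        metrics[kind] = {
            key: float(value) for key, value in (field.split("=") for field in fields)
        }
    return metrics


sample = (
    "some avg10=0.12 avg60=0.08 avg300=0.01 total=123456\n"
    "full avg10=0.04 avg60=0.02 avg300=0.00 total=45678\n"
)
print(parse_psi(sample)["some"]["avg10"])  # → 0.12
```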
### PSI Metrics Explained
Each monitored resource defines three pressure thresholds:
| Key | Description |
|-----|--------------|
| `pressure_some` | Indicates partial stall conditions where some tasks are waiting for a resource. |
| `pressure_full` | Represents complete stall conditions where *all* tasks are blocked waiting for a resource. |
| `pressure_spikes` | Defines short-term burst conditions that may signal saturation spikes. |
These thresholds are expressed in **percentages** and represent how much time the kernel reports stalls over specific averaging windows (e.g. 10s, 60s, 300s).
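Putting the three thresholds together, the hot/not-hot decision (as applied per node and per guest) can be sketched as follows; the function name and argument order are illustrative, not ProxLB's API:

```python
def is_hot(pressure_full: float, pressure_some: float,
           pressure_spikes: float, thresholds: dict) -> bool:
    """A resource is 'hot' when sustained pressure exceeds both the
    'full' and 'some' thresholds, or when short-term spikes alone
    exceed the spike threshold."""
    sustained = (pressure_full >= thresholds["pressure_full"]
                 and pressure_some >= thresholds["pressure_some"])
    spiking = pressure_spikes >= thresholds["pressure_spikes"]
    return sustained or spiking


t = {"pressure_full": 0.20, "pressure_some": 0.20, "pressure_spikes": 1.00}
print(is_hot(0.25, 0.30, 0.10, t))  # → True (sustained pressure)
print(is_hot(0.05, 0.30, 0.10, t))  # → False
```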
### Example Configuration
```yaml
balancing:
mode: psi
psi:
nodes:
memory:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
cpu:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
disk:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
guests:
memory:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
cpu:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
disk:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
```


@@ -1,24 +1,6 @@
apiVersion: v3
name: proxlb
description: A Helm chart for self-hosted ProxLB
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: "1.1.5"
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1.1.5"
version: "1.1.9"
appVersion: "v1.1.9"


@@ -1,7 +1,7 @@
image:
registry: cr.gyptazy.com
repository: proxlb/proxlb
tag: v1.1.5
tag: v1.1.9
pullPolicy: IfNotPresent
imagePullSecrets: [ ]
@@ -43,6 +43,7 @@ configmap:
parallel_jobs: 1
live: True
with_local_disks: True
with_conntrack_state: True
balance_types: [ 'vm', 'ct' ]
max_job_validation: 1800
balanciness: 5
@@ -57,4 +58,4 @@ configmap:
enable: False
time: 1
format: "hours"
log_level: INFO
log_level: INFO


@@ -1,5 +1,5 @@
#!/usr/bin/env bash
VERSION="1.1.4"
VERSION="1.1.9"
# ProxLB
sed -i "s/^__version__ = .*/__version__ = \"$VERSION\"/" "proxlb/utils/version.py"
@@ -8,5 +8,6 @@ sed -i "s/version=\"[0-9]*\.[0-9]*\.[0-9]*\"/version=\"$VERSION\"/" setup.py
# Helm Chart
sed -i "s/^version: .*/version: \"$VERSION\"/" helm/proxlb/Chart.yaml
sed -i "s/^appVersion: .*/appVersion: \"v$VERSION\"/" helm/proxlb/Chart.yaml
sed -i "s/^tag: .*/tag: \"v$VERSION\"/" helm/proxlb/values.yaml
echo "OK: Versions have been successfully set to $VERSION"


@@ -33,8 +33,9 @@ def main():
# Initialize logging handler
logger = SystemdLogger(level=logging.INFO)
# Signal handler for SIGHUP
# Initialize handlers
signal.signal(signal.SIGHUP, Helper.handler_sighup)
signal.signal(signal.SIGINT, Helper.handler_sigint)
# Parses arguments passed from the CLI
cli_parser = CliParser()
@@ -80,6 +81,8 @@ def main():
# Update the initial node resource assignments
# by the previously created groups.
Calculations.set_node_assignments(proxlb_data)
Calculations.set_node_hot(proxlb_data)
Calculations.set_guest_hot(proxlb_data)
Calculations.get_most_free_node(proxlb_data, cli_args.best_node)
Calculations.relocate_guests_on_maintenance_nodes(proxlb_data)
Calculations.get_balanciness(proxlb_data)


@@ -91,7 +91,7 @@ class Balancing:
# VM Balancing
if guest_meta["type"] == "vm":
if 'vm' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug("Balancing: Balancing for guest {guest_name} of type VM started.")
logger.debug(f"Balancing: Balancing for guest {guest_name} of type VM started.")
job_id = self.exec_rebalancing_vm(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
@@ -101,7 +101,7 @@ class Balancing:
# CT Balancing
elif guest_meta["type"] == "ct":
if 'ct' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug("Balancing: Balancing for guest {guest_name} of type CT started.")
logger.debug(f"Balancing: Balancing for guest {guest_name} of type CT started.")
job_id = self.exec_rebalancing_ct(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
@@ -122,7 +122,8 @@ class Balancing:
# Wait for all jobs in the current chunk to complete
for guest_name, node, job_id in jobs_to_wait:
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, node, job_id)
if job_id:
self.get_rebalancing_job_status(proxmox_api, proxlb_data, guest_name, node, job_id)
def exec_rebalancing_vm(self, proxmox_api: any, proxlb_data: Dict[str, Any], guest_name: str) -> None:
"""
@@ -143,6 +144,7 @@ class Balancing:
guest_id = proxlb_data["guests"][guest_name]["id"]
guest_node_current = proxlb_data["guests"][guest_name]["node_current"]
guest_node_target = proxlb_data["guests"][guest_name]["node_target"]
job_id = None
if proxlb_data["meta"]["balancing"].get("live", True):
online_migration = 1
@@ -155,17 +157,23 @@ class Balancing:
with_local_disks = 0
migration_options = {
'target': {guest_node_target},
'target': guest_node_target,
'online': online_migration,
'with-local-disks': with_local_disks
'with-local-disks': with_local_disks,
}
# Conntrack state aware migrations are not supported in older
# PVE versions, so we should not add it by default.
if proxlb_data["meta"]["balancing"].get("with_conntrack_state", True):
migration_options['with-conntrack-state'] = 1
try:
logger.info(f"Balancing: Starting to migrate VM guest {guest_name} from {guest_node_current} to {guest_node_target}.")
job_id = proxmox_api.nodes(guest_node_current).qemu(guest_id).migrate().post(**migration_options)
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors. Please check if resource is locked or similar.")
logger.debug(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors: {proxmox_api_error}")
logger.debug("Finished: exec_rebalancing_vm.")
return job_id
@@ -188,6 +196,7 @@ class Balancing:
guest_id = proxlb_data["guests"][guest_name]["id"]
guest_node_current = proxlb_data["guests"][guest_name]["node_current"]
guest_node_target = proxlb_data["guests"][guest_name]["node_target"]
job_id = None
try:
logger.info(f"Balancing: Starting to migrate CT guest {guest_name} from {guest_node_current} to {guest_node_target}.")
@@ -195,6 +204,7 @@ class Balancing:
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors. Please check if resource is locked or similar.")
logger.debug(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors: {proxmox_api_error}")
logger.debug("Finished: exec_rebalancing_ct.")
return job_id


@@ -80,7 +80,7 @@ class Calculations:
for guest_name in group_meta["guests"]:
guest_node_current = proxlb_data["guests"][guest_name]["node_current"]
# Update Hardware assignments
# Update resource assignments
# Update assigned values for the current node
logger.debug(f"set_node_assignment of guest {guest_name} on node {guest_node_current} with cpu_total: {proxlb_data['guests'][guest_name]['cpu_total']}, memory_total: {proxlb_data['guests'][guest_name]['memory_total']}, disk_total: {proxlb_data['guests'][guest_name]['disk_total']}.")
proxlb_data["nodes"][guest_node_current]["cpu_assigned"] += proxlb_data["guests"][guest_name]["cpu_total"]
@@ -93,6 +93,83 @@ class Calculations:
logger.debug("Finished: set_node_assignments.")
def set_node_hot(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Evaluates node 'full' pressure metrics for memory, cpu, and disk
against defined thresholds and sets <metric>_pressure_hot = True
when a node is considered HOT.
Returns the modified proxlb_data dict.
"""
logger.debug("Starting: set_node_hot.")
balancing_cfg = proxlb_data.get("meta", {}).get("balancing", {})
thresholds = balancing_cfg.get("psi_thresholds", balancing_cfg.get("psi", {}).get("nodes", {}))
nodes = proxlb_data.get("nodes", {})
for node_name, node in nodes.items():
if node.get("maintenance"):
continue
if node.get("ignore"):
continue
# PSI metrics are only available on Proxmox VE 9.0 and higher.
if proxlb_data["meta"]["balancing"].get("mode", "used") == "psi":
if tuple(map(int, proxlb_data["nodes"][node["name"]]["pve_version"].split('.'))) < (9, 0):
logger.critical(f"Proxmox node {node['name']} runs Proxmox VE version {proxlb_data['nodes'][node['name']]['pve_version']}."
" PSI metrics require Proxmox VE 9.0 or higher. Balancing deactivated!")
for metric, threshold in thresholds.items():
pressure_full = node.get(f"{metric}_pressure_full_percent", 0.0)
pressure_some = node.get(f"{metric}_pressure_some_percent", 0.0)
pressure_spikes = node.get(f"{metric}_pressure_full_spikes_percent", 0.0)
is_hot = (pressure_full >= threshold["pressure_full"] and pressure_some >= threshold["pressure_some"]) or (pressure_spikes >= threshold["pressure_spikes"])
if is_hot:
logger.debug(f"Set node {node['name']} as hot based on {metric} pressure metrics.")
proxlb_data["nodes"][node["name"]][f"{metric}_pressure_hot"] = True
proxlb_data["nodes"][node["name"]]["pressure_hot"] = True
else:
logger.debug(f"Node {node['name']} is not hot based on {metric} pressure metrics.")
logger.debug("Finished: set_node_hot.")
return proxlb_data
def set_guest_hot(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Evaluates guest 'full' pressure metrics for memory, cpu, and disk
against defined thresholds and sets <metric>_pressure_hot = True
when a guest is considered HOT.
Returns the modified proxlb_data dict.
"""
logger.debug("Starting: set_guest_hot.")
balancing_cfg = proxlb_data.get("meta", {}).get("balancing", {})
thresholds = balancing_cfg.get("psi_thresholds", balancing_cfg.get("psi", {}).get("guests", {}))
guests = proxlb_data.get("guests", {})
for guest_name, guest in guests.items():
if guest.get("ignore"):
continue
for metric, threshold in thresholds.items():
pressure_full = guest.get(f"{metric}_pressure_full_percent", 0.0)
pressure_some = guest.get(f"{metric}_pressure_some_percent", 0.0)
pressure_spikes = guest.get(f"{metric}_pressure_full_spikes_percent", 0.0)
is_hot = (pressure_full >= threshold["pressure_full"] and pressure_some >= threshold["pressure_some"]) or (pressure_spikes >= threshold["pressure_spikes"])
if is_hot:
logger.debug(f"Set guest {guest['name']} as hot based on {metric} pressure metrics.")
proxlb_data["guests"][guest["name"]][f"{metric}_pressure_hot"] = True
proxlb_data["guests"][guest["name"]]["pressure_hot"] = True
else:
logger.debug(f"Guest {guest['name']} is not hot based on {metric} pressure metrics.")
logger.debug("Finished: set_guest_hot.")
return proxlb_data
@staticmethod
def get_balanciness(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
@@ -113,7 +190,66 @@ class Calculations:
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
balanciness = proxlb_data["meta"]["balancing"].get("balanciness", 10)
method_value = [node_meta[f"{method}_{mode}_percent"] for node_meta in proxlb_data["nodes"].values()]
if mode in ("assigned", "used"):
method_value = [node_meta[f"{method}_{mode}_percent"] for node_meta in proxlb_data["nodes"].values()]
if proxlb_data["meta"]["balancing"].get(f"{method}_threshold", None):
threshold = proxlb_data["meta"]["balancing"].get(f"{method}_threshold")
highest_usage_node = max(proxlb_data["nodes"].values(), key=lambda x: x[f"{method}_{mode}_percent"])
highest_node_value = highest_usage_node[f"{method}_{mode}_percent"]
if highest_node_value >= threshold:
logger.debug(f"Guest balancing is required. Highest {method} usage node {highest_usage_node['name']} is above the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. Highest {method} usage node {highest_usage_node['name']} is below the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = False
else:
logger.debug(f"No {method} threshold defined for balancing. Skipping threshold check.")
elif mode == "psi":
method_value = [node_meta[f"{method}_pressure_full_spikes_percent"] for node_meta in proxlb_data["nodes"].values()]
any_node_hot = any(node.get(f"{method}_pressure_hot", False) for node in proxlb_data["nodes"].values())
any_guest_hot = any(guest.get(f"{method}_pressure_hot", False) for guest in proxlb_data["guests"].values())
if any_node_hot:
logger.debug(f"Guest balancing is required. A node is marked as HOT based on {method} pressure metrics.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. No node is marked as HOT based on {method} pressure metrics.")
if any_guest_hot:
logger.debug(f"Guest balancing is required. A guest is marked as HOT based on {method} pressure metrics.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. No guest is marked as HOT based on {method} pressure metrics.")
return proxlb_data
else:
logger.critical(f"Unknown balancing mode: {mode} provided. Cannot get balanciness.")
sys.exit(1)
method_value_highest = max(method_value)
method_value_lowest = min(method_value)
@@ -159,7 +295,23 @@ class Calculations:
# Filter by the defined methods and modes for balancing
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
if mode in ("assigned", "used"):
logger.debug(f"Get best node for balancing by {mode} {method} resources.")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
elif mode == "psi":
logger.debug(f"Get best node for balancing by pressure of {method} resources.")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_pressure_full_spikes_percent"])
else:
logger.critical(f"Unknown balancing mode: {mode} provided. Cannot get best node.")
sys.exit(1)
proxlb_data["meta"]["balancing"]["balance_reason"] = 'resources'
proxlb_data["meta"]["balancing"]["balance_next_node"] = lowest_usage_node["name"]
@@ -188,7 +340,7 @@ class Calculations:
Returns:
None
"""
logger.debug("Starting: get_most_free_node.")
logger.debug("Starting: relocate_guests_on_maintenance_nodes.")
proxlb_data["meta"]["balancing"]["balance_next_guest"] = ""
for guest_name in proxlb_data["groups"]["maintenance"]:
@@ -199,7 +351,7 @@ class Calculations:
Calculations.update_node_resources(proxlb_data)
logger.warning(f"Warning: Balancing may not be perfect because guest {guest_name} was located on a node which is in maintenance mode.")
logger.debug("Finished: get_most_free_node.")
logger.debug("Finished: relocate_guests_on_maintenance_nodes.")
@staticmethod
def relocate_guests(proxlb_data: Dict[str, Any]):
@@ -233,7 +385,26 @@ class Calculations:
Calculations.get_most_free_node(proxlb_data)
for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
if mode == 'psi':
logger.debug(f"Evaluating guest relocation based on {mode} mode.")
method = proxlb_data["meta"]["balancing"].get("method", "memory")
processed_guests_psi = proxlb_data["meta"]["balancing"].setdefault("processed_guests_psi", [])
unprocessed_guests_psi = [guest for guest in proxlb_data["guests"].values() if guest["name"] not in processed_guests_psi]
# Filter by the defined methods and modes for balancing
highest_usage_guest = max(unprocessed_guests_psi, key=lambda x: x[f"{method}_pressure_full_spikes_percent"])
# Append guest to the psi based processed list of guests
if highest_usage_guest["name"] == guest_name and guest_name not in proxlb_data["meta"]["balancing"]["processed_guests_psi"]:
proxlb_data["meta"]["balancing"]["processed_guests_psi"].append(guest_name)
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
else:
logger.debug(f"Evaluating guest relocation based on {mode} mode.")
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
Calculations.val_anti_affinity(proxlb_data, guest_name)
Calculations.val_node_relationships(proxlb_data, guest_name)
Calculations.update_node_resources(proxlb_data)
@@ -266,23 +437,28 @@ class Calculations:
if guest_name in proxlb_data["groups"]["anti_affinity"][group_name]['guests'] and not proxlb_data["guests"][guest_name]["processed"]:
logger.debug(f"Anti-Affinity: Guest: {guest_name} is included in anti-affinity group: {group_name}.")
# Iterate over all available nodes
for node_name in proxlb_data["nodes"].keys():
# Check if the group has only one member. If so skip new guest node assignment.
if proxlb_data["groups"]["anti_affinity"][group_name]["counter"] > 1:
logger.debug("Anti-Affinity: Group has more than 1 member.")
# Iterate over all available nodes
for node_name in proxlb_data["nodes"].keys():
# Only select node if it was not used before and is not in a
# maintenance mode. Afterwards, add it to the list of already
# used nodes for the current anti-affinity group
if node_name not in proxlb_data["groups"]["anti_affinity"][group_name]["used_nodes"]:
# Only select node if it was not used before and is not in a
# maintenance mode. Afterwards, add it to the list of already
# used nodes for the current anti-affinity group
if node_name not in proxlb_data["groups"]["anti_affinity"][group_name]["used_nodes"]:
if not proxlb_data["nodes"][node_name]["maintenance"]:
# If the node has not been used yet, we assign this node to the guest
proxlb_data["meta"]["balancing"]["balance_next_node"] = node_name
proxlb_data["groups"]["anti_affinity"][group_name]["used_nodes"].append(node_name)
logger.debug(f"Node: {node_name} marked as used for anti-affinity group: {group_name} with guest {guest_name}")
break
if not proxlb_data["nodes"][node_name]["maintenance"]:
# If the node has not been used yet, we assign this node to the guest
proxlb_data["meta"]["balancing"]["balance_next_node"] = node_name
proxlb_data["groups"]["anti_affinity"][group_name]["used_nodes"].append(node_name)
logger.debug(f"Node: {node_name} marked as used for anti-affinity group: {group_name} with guest {guest_name}")
break
else:
logger.critical(f"Node: {node_name} was already used for anti-affinity group: {group_name}. (Tried for guest: {guest_name})")
else:
logger.critical(f"Node: {node_name} was already used for anti-affinity group: {group_name}. (Tried for guest: {guest_name})")
else:
logger.debug("Anti-Affinity: Group has fewer than 2 members. Skipping node calculation for the group.")
else:
logger.debug(f"Guest: {guest_name} is not included in anti-affinity group: {group_name}. Skipping.")
@@ -343,6 +519,11 @@ class Calculations:
"""
logger.debug("Starting: update_node_resources.")
guest_name = proxlb_data["meta"]["balancing"]["balance_next_guest"]
if guest_name == "":
logger.debug("No guest defined to update node resources for.")
return
node_current = proxlb_data["guests"][guest_name]["node_current"]
node_target = proxlb_data["meta"]["balancing"]["balance_next_node"]


@@ -62,24 +62,39 @@ class Guests:
# resource metrics for rebalancing to ensure that we do not overprovision the node.
for guest in proxmox_api.nodes(node).qemu.get():
if guest['status'] == 'running':
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', None)
guests['guests'][guest['name']]['cpu_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some')
guests['guests'][guest['name']]['cpu_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full')
guests['guests'][guest['name']]['cpu_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_hot'] = False
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['memory_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some')
guests['guests'][guest['name']]['memory_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full')
guests['guests'][guest['name']]['memory_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some', spikes=True)
guests['guests'][guest['name']]['memory_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full', spikes=True)
guests['guests'][guest['name']]['memory_pressure_hot'] = False
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
guests['guests'][guest['name']]['disk_used'] = guest['disk']
guests['guests'][guest['name']]['disk_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some')
guests['guests'][guest['name']]['disk_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full')
guests['guests'][guest['name']]['disk_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some', spikes=True)
guests['guests'][guest['name']]['disk_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full', spikes=True)
guests['guests'][guest['name']]['disk_pressure_hot'] = False
guests['guests'][guest['name']]['id'] = guest['vmid']
guests['guests'][guest['name']]['node_current'] = node
guests['guests'][guest['name']]['node_target'] = node
guests['guests'][guest['name']]['processed'] = False
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'vm')
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes)
guests['guests'][guest['name']]['type'] = 'vm'
logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
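The per-guest record assembled key by key above could also be built as a single dict literal, which makes the record's shape easier to read at a glance. A minimal sketch (function name hypothetical, RRD-derived metrics stubbed out):

```python
def build_guest_record(guest, node, guest_type):
    """Sketch: assemble the per-guest record in one dict literal instead of
    repeated key-by-key assignments (pressure metrics omitted here)."""
    return {
        "name": guest["name"],
        "id": guest["vmid"],
        "cpu_total": int(guest["cpus"]),
        "memory_total": guest["maxmem"],
        "disk_total": guest["maxdisk"],
        "node_current": node,
        "node_target": node,   # target starts out equal to the current node
        "processed": False,
        "pressure_hot": False,
        "type": guest_type,
    }

record = build_guest_record({"name": "web01", "vmid": 101, "cpus": 4,
                             "maxmem": 8 << 30, "maxdisk": 32 << 30}, "pve01", "vm")
print(record["cpu_total"], record["node_target"])
```

The dict-literal form also makes it harder to forget a key when the VM and CT branches are kept in sync.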
@@ -94,20 +109,36 @@ class Guests:
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', None)
guests['guests'][guest['name']]['cpu_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some')
guests['guests'][guest['name']]['cpu_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full')
guests['guests'][guest['name']]['cpu_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_hot'] = False
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['memory_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some')
guests['guests'][guest['name']]['memory_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full')
guests['guests'][guest['name']]['memory_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some', spikes=True)
guests['guests'][guest['name']]['memory_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full', spikes=True)
guests['guests'][guest['name']]['memory_pressure_hot'] = False
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
guests['guests'][guest['name']]['disk_used'] = guest['disk']
guests['guests'][guest['name']]['disk_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some')
guests['guests'][guest['name']]['disk_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full')
guests['guests'][guest['name']]['disk_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some', spikes=True)
guests['guests'][guest['name']]['disk_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full', spikes=True)
guests['guests'][guest['name']]['disk_pressure_hot'] = False
guests['guests'][guest['name']]['id'] = guest['vmid']
guests['guests'][guest['name']]['node_current'] = node
guests['guests'][guest['name']]['node_target'] = node
guests['guests'][guest['name']]['processed'] = False
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'ct')
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes)
guests['guests'][guest['name']]['type'] = 'ct'
logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")
@@ -118,36 +149,55 @@ class Guests:
return guests
@staticmethod
def get_guest_cpu_usage(proxmox_api, node_name: str, vm_id: int, vm_name: str) -> float:
def get_guest_rrd_data(proxmox_api, node_name: str, vm_id: int, vm_name: str, object_name: str, object_type: str, spikes=False) -> float:
"""
Retrieve the average CPU usage of a guest instance (VM/CT) over the past hour.
This method queries the Proxmox VE API for RRD (Round-Robin Database) data
related to CPU usage of a specific guest instance and calculates the average CPU usage
over the last hour using the "AVERAGE" consolidation function.
Retrieves the RRD data metrics for a specific resource (CPU, memory, disk) of a guest VM or CT.
Args:
proxmox_api: An instance of the Proxmox API client.
node_name (str): The name of the Proxmox node hosting the VM.
vm_id (int): The unique identifier of the guest instance (VM/CT).
vm_name (str): The name of the guest instance (VM/CT).
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node hosting the guest.
vm_id (int): The ID of the guest VM or CT.
vm_name (str): The name of the guest VM or CT.
object_name (str): The resource type to query (e.g., 'cpu', 'memory', 'disk').
object_type (str, optional): The pressure type ('some', 'full') or None for average usage.
spikes (bool, optional): Whether to consider spikes in the calculation. Defaults to False.
Returns:
float: The average CPU usage as a fraction (0.0 to 1.0) over the past hour.
Returns 0.0 if no data is available.
float: The calculated average (or, with spikes=True, maximum) usage value for the specified resource.
"""
logger.debug("Finished: get_guest_cpu_usage.")
logger.debug("Starting: get_guest_rrd_data.")
time.sleep(0.1)
try:
logger.debug(f"Getting RRD data for guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="AVERAGE")
if spikes:
logger.debug(f"Getting spike RRD data for {object_name} from guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="MAX")
else:
logger.debug(f"Getting average RRD data for {object_name} from guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="AVERAGE")
except Exception:
logger.error(f"Failed to retrieve RRD data for guest: {vm_name} (ID: {vm_id}) on node: {node_name}. Using 0.0 as CPU usage.")
logger.debug("Finished: get_guest_cpu_usage.")
return 0.0
logger.error(f"Failed to retrieve RRD data for guest: {vm_name} (ID: {vm_id}) on node: {node_name}. Using 0.0 as value.")
logger.debug("Finished: get_guest_rrd_data.")
return 0.0
cpu_usage = sum(entry.get("cpu", 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
logger.debug(f"CPU RRD data for guest: {vm_name}: {cpu_usage}")
logger.debug("Finished: get_guest_cpu_usage.")
return cpu_usage
if object_type:
lookup_key = f"pressure{object_name}{object_type}"
if spikes:
# RRD data is collected every minute, so we look at the last 6 entries
# and take the maximum value to represent the spike
logger.debug(f"Getting RRD data (spike: {spikes}) of pressure for {object_name} {object_type} from guest: {vm_name}.")
rrd_data_value = [row.get(lookup_key) for row in guest_data_rrd if row.get(lookup_key) is not None]
rrd_data_value = max(rrd_data_value[-6:], default=0.0)
else:
# Calculate the average value from the RRD data entries
logger.debug(f"Getting RRD data (spike: {spikes}) of pressure for {object_name} {object_type} from guest: {vm_name}.")
rrd_data_value = sum(entry.get(lookup_key, 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
else:
logger.debug(f"Getting RRD data of cpu usage from guest: {vm_name}.")
rrd_data_value = sum(entry.get("cpu", 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
logger.debug(f"RRD data (spike: {spikes}) for {object_name} from guest: {vm_name}: {rrd_data_value}")
logger.debug("Finished: get_guest_rrd_data.")
return rrd_data_value
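The spike-vs-average aggregation used above (and in the analogous node-level method) can be sketched standalone. This is a simplified illustration, not the project's code; it assumes one RRD row per minute, so the last 6 rows cover roughly the last 6 minutes:

```python
def aggregate_rrd(rows, key, spikes=False):
    """Reduce a list of RRD rows to one value for the given key:
    the maximum of the last 6 samples (spike detection), or the
    plain average over all rows (default)."""
    if spikes:
        # Drop rows where the key is missing, then take the max of the
        # most recent 6 samples; 0.0 when no data is available.
        values = [row.get(key) for row in rows if row.get(key) is not None]
        return max(values[-6:], default=0.0)
    # Average over all rows, treating missing keys as 0.0.
    return sum(row.get(key, 0.0) for row in rows) / len(rows)

rows = [{"pressurecpusome": v} for v in (0.1, 0.2, 0.9, 0.2, 0.1, 0.1, 0.1, 0.1)]
print(aggregate_rrd(rows, "pressurecpusome"))               # average over all rows
print(aggregate_rrd(rows, "pressurecpusome", spikes=True))  # max of the last 6 samples
```

Using MAX as the RRD consolidation function for the spike path (as the method above does) keeps short pressure bursts visible that an hourly average would smooth away.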

View File

@@ -21,6 +21,7 @@ __copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
import time
from typing import Dict, Any
from utils.logger import SystemdLogger
@@ -60,6 +61,8 @@ class Nodes:
if node["status"] == "online" and not Nodes.set_node_ignore(proxlb_config, node["node"]):
nodes["nodes"][node["node"]] = {}
nodes["nodes"][node["node"]]["name"] = node["node"]
nodes["nodes"][node["node"]]["pve_version"] = Nodes.get_node_pve_version(proxmox_api, node["node"])
nodes["nodes"][node["node"]]["pressure_hot"] = False
nodes["nodes"][node["node"]]["maintenance"] = False
nodes["nodes"][node["node"]]["cpu_total"] = node["maxcpu"]
nodes["nodes"][node["node"]]["cpu_assigned"] = 0
@@ -68,6 +71,11 @@ class Nodes:
nodes["nodes"][node["node"]]["cpu_assigned_percent"] = nodes["nodes"][node["node"]]["cpu_assigned"] / nodes["nodes"][node["node"]]["cpu_total"] * 100
nodes["nodes"][node["node"]]["cpu_free_percent"] = nodes["nodes"][node["node"]]["cpu_free"] / node["maxcpu"] * 100
nodes["nodes"][node["node"]]["cpu_used_percent"] = nodes["nodes"][node["node"]]["cpu_used"] / node["maxcpu"] * 100
nodes["nodes"][node["node"]]["cpu_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "some")
nodes["nodes"][node["node"]]["cpu_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "full")
nodes["nodes"][node["node"]]["cpu_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "some", spikes=True)
nodes["nodes"][node["node"]]["cpu_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "full", spikes=True)
nodes["nodes"][node["node"]]["cpu_pressure_hot"] = False
nodes["nodes"][node["node"]]["memory_total"] = node["maxmem"]
nodes["nodes"][node["node"]]["memory_assigned"] = 0
nodes["nodes"][node["node"]]["memory_used"] = node["mem"]
@@ -75,6 +83,11 @@ class Nodes:
nodes["nodes"][node["node"]]["memory_assigned_percent"] = nodes["nodes"][node["node"]]["memory_assigned"] / nodes["nodes"][node["node"]]["memory_total"] * 100
nodes["nodes"][node["node"]]["memory_free_percent"] = nodes["nodes"][node["node"]]["memory_free"] / node["maxmem"] * 100
nodes["nodes"][node["node"]]["memory_used_percent"] = nodes["nodes"][node["node"]]["memory_used"] / node["maxmem"] * 100
nodes["nodes"][node["node"]]["memory_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "some")
nodes["nodes"][node["node"]]["memory_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "full")
nodes["nodes"][node["node"]]["memory_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "some", spikes=True)
nodes["nodes"][node["node"]]["memory_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "full", spikes=True)
nodes["nodes"][node["node"]]["memory_pressure_hot"] = False
nodes["nodes"][node["node"]]["disk_total"] = node["maxdisk"]
nodes["nodes"][node["node"]]["disk_assigned"] = 0
nodes["nodes"][node["node"]]["disk_used"] = node["disk"]
@@ -82,11 +95,17 @@ class Nodes:
nodes["nodes"][node["node"]]["disk_assigned_percent"] = nodes["nodes"][node["node"]]["disk_assigned"] / nodes["nodes"][node["node"]]["disk_total"] * 100
nodes["nodes"][node["node"]]["disk_free_percent"] = nodes["nodes"][node["node"]]["disk_free"] / node["maxdisk"] * 100
nodes["nodes"][node["node"]]["disk_used_percent"] = nodes["nodes"][node["node"]]["disk_used"] / node["maxdisk"] * 100
nodes["nodes"][node["node"]]["disk_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "some")
nodes["nodes"][node["node"]]["disk_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "full")
nodes["nodes"][node["node"]]["disk_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "some", spikes=True)
nodes["nodes"][node["node"]]["disk_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "full", spikes=True)
nodes["nodes"][node["node"]]["disk_pressure_hot"] = False
# Evaluate if node should be set to maintenance mode
if Nodes.set_node_maintenance(proxmox_api, proxlb_config, node["node"]):
nodes["nodes"][node["node"]]["maintenance"] = True
logger.debug(f"Node metrics collected: {nodes}")
logger.debug("Finished: get_nodes.")
return nodes
@@ -153,3 +172,83 @@ class Nodes:
return True
logger.debug("Finished: set_node_ignore.")
@staticmethod
def get_node_rrd_data(proxmox_api, node_name: str, object_name: str, object_type: str, spikes=False) -> float:
"""
Retrieves the RRD data metrics for a specific resource (CPU, memory, disk) of a node.
Args:
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node hosting the guest.
object_name (str): The resource type to query (e.g., 'cpu', 'memory', 'disk').
object_type (str, optional): The pressure type ('some', 'full') or None for average usage.
spikes (bool, optional): Whether to consider spikes in the calculation. Defaults to False.
Returns:
float: The calculated average (or, with spikes=True, maximum) usage value for the specified resource.
"""
logger.debug("Starting: get_node_rrd_data.")
time.sleep(0.1)
try:
if spikes:
logger.debug(f"Getting spike RRD data for {object_name} from node: {node_name}.")
node_data_rrd = proxmox_api.nodes(node_name).rrddata.get(timeframe="hour", cf="MAX")
else:
logger.debug(f"Getting average RRD data for {object_name} from node: {node_name}.")
node_data_rrd = proxmox_api.nodes(node_name).rrddata.get(timeframe="hour", cf="AVERAGE")
except Exception:
logger.error(f"Failed to retrieve RRD data for node: {node_name}. Using 0.0 as value.")
logger.debug("Finished: get_node_rrd_data.")
return 0.0
lookup_key = f"pressure{object_name}{object_type}"
if spikes:
# RRD data is collected every minute, so we look at the last 6 entries
# and take the maximum value to represent the spike
rrd_data_value = [row.get(lookup_key) for row in node_data_rrd if row.get(lookup_key) is not None]
rrd_data_value = max(rrd_data_value[-6:], default=0.0)
else:
# Calculate the average value from the RRD data entries
rrd_data_value = sum(entry.get(lookup_key, 0.0) for entry in node_data_rrd) / len(node_data_rrd)
logger.debug(f"RRD data (spike: {spikes}) for {object_name} from node: {node_name}: {rrd_data_value}")
logger.debug("Finished: get_node_rrd_data.")
return rrd_data_value
@staticmethod
def get_node_pve_version(proxmox_api, node_name: str) -> str:
"""
Return the Proxmox VE (PVE) version for a given node by querying the Proxmox API.
This function calls proxmox_api.nodes(node_name).version.get() and extracts the
'version' field (a version string such as "8.2.4") from the returned mapping.
Args:
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node to query.
Returns:
str: The PVE version string for the specified node.
Raises:
Exception: If the proxmox_api call fails, returns an unexpected structure, or the
'version' field is missing or cannot be converted to float. Callers should
handle or propagate exceptions as appropriate.
"""
logger.debug("Starting: get_node_pve_version.")
time.sleep(0.1)
try:
logger.debug(f"Trying to get PVE version for node: {node_name}.")
version = proxmox_api.nodes(node_name).version.get()
except Exception:
logger.error(f"Failed to get PVE version for node: {node_name}.")
raise
logger.debug(f"Got version {version['version']} for node {node_name}.")
logger.debug("Finished: get_node_pve_version.")
return version["version"]

View File

@@ -12,7 +12,9 @@ __license__ = "GPL-3.0"
import time
from typing import List
from typing import Dict, Any
from utils.logger import SystemdLogger
from utils.helper import Helper
logger = SystemdLogger()
@@ -153,7 +155,7 @@ class Tags:
return ignore_tag
@staticmethod
def get_node_relationships(tags: List[str]) -> str:
def get_node_relationships(tags: List[str], nodes: Dict[str, Any]) -> str:
"""
Get a node relationship tag for a guest from the Proxmox cluster by the API to pin
a guest to a node.
@@ -163,6 +165,7 @@ class Tags:
Args:
tags (List): A list holding all defined tags for a given guest.
nodes (Dict): A dictionary holding all available nodes in the cluster.
Returns:
Str: The related hypervisor node name.
@@ -174,7 +177,13 @@ class Tags:
for tag in tags:
if tag.startswith("plb_pin"):
node_relationship_tag = tag.replace("plb_pin_", "")
node_relationship_tags.append(node_relationship_tag)
# Validate if the node to pin is present in the cluster
if Helper.validate_node_presence(node_relationship_tag, nodes):
logger.info(f"Tag {node_relationship_tag} is valid! Defined node exists in the cluster.")
node_relationship_tags.append(node_relationship_tag)
else:
logger.warning(f"Tag {node_relationship_tag} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
logger.debug("Finished: get_node_relationships.")
return node_relationship_tags
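The pin-tag handling above (strip the `plb_pin_` prefix, then keep only pins whose target node exists in the cluster) can be condensed into a short standalone sketch. Function name and sample data are hypothetical:

```python
def parse_pin_tags(tags, cluster):
    """Sketch of the pin-tag handling: extract plb_pin_<node> tags and
    keep only pins whose target node actually exists in the cluster."""
    pins = []
    for tag in tags:
        if tag.startswith("plb_pin_"):
            node = tag.replace("plb_pin_", "")
            # Silently dropping unknown nodes here; the real code logs a warning.
            if node in cluster["nodes"]:
                pins.append(node)
    return pins

cluster = {"nodes": {"pve01": {}, "pve02": {}}}
print(parse_pin_tags(["plb_pin_pve01", "plb_pin_pve99", "web"], cluster))
```

Validating pins against the live node list is what the new `nodes` parameter enables: a typo in a tag degrades to a logged warning instead of an impossible placement.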

View File

@@ -16,6 +16,7 @@ import time
import utils.version
from utils.logger import SystemdLogger
from typing import Dict, Any
from types import FrameType
logger = SystemdLogger()
@@ -200,7 +201,7 @@ class Helper:
logger.debug("Finished: print_json.")
@staticmethod
def handler_sighup(signum, frame):
def handler_sighup(signum: int, frame: FrameType) -> None:
"""
Signal handler for SIGHUP.
@@ -217,6 +218,23 @@ class Helper:
Helper.proxlb_reload = True
logger.debug("Finished: handle_sighup.")
@staticmethod
def handler_sigint(signum: int, frame: FrameType) -> None:
"""
Signal handler for SIGINT (triggered by Ctrl+C).
Args:
signum (int): The signal number (e.g., SIGINT).
frame (FrameType): The current stack frame when the signal was received.
Returns:
None
"""
exit_message = "ProxLB has been successfully terminated by user."
logger.debug(exit_message)
print(f"\n {exit_message}")
sys.exit(0)
@staticmethod
def get_host_port_from_string(host_object):
"""
@@ -265,3 +283,27 @@ class Helper:
return parts[0], port
except ValueError:
return host_object, 8006
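The host/port parsing shown above can be sketched as a small self-contained helper. This is an illustration of the same idea (split on the last colon, fall back to the Proxmox API default port 8006), not the project's exact implementation; the function name is hypothetical:

```python
def split_host_port(host_object, default_port=8006):
    """Sketch: split 'host:port' into (host, port); fall back to the
    Proxmox API default port 8006 when no valid port is given."""
    parts = host_object.rsplit(":", 1)
    if len(parts) == 2:
        try:
            return parts[0], int(parts[1])
        except ValueError:
            # Not a numeric port (e.g. a bare IPv6 address segment).
            pass
    return host_object, default_port

print(split_host_port("pve01.example.com:443"))
print(split_host_port("pve01.example.com"))
```

Note that bare IPv6 addresses would need bracket handling (`[::1]:8006`); the sketch, like the helper above, assumes hostname or IPv4 input.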
@staticmethod
def validate_node_presence(node: str, nodes: Dict[str, Any]) -> bool:
"""
Validates whether a given node exists in the provided cluster nodes dictionary.
Args:
node (str): The name of the node to validate.
nodes (Dict[str, Any]): A dictionary containing cluster information.
Must include a "nodes" key mapping to a dict of available nodes.
Returns:
bool: True if the node exists in the cluster, False otherwise.
"""
logger.debug("Starting: validate_node_presence.")
if node in nodes["nodes"]:
logger.info(f"Node {node} found in cluster. Applying pinning.")
logger.debug("Finished: validate_node_presence.")
return True
else:
logger.warning(f"Node {node} not found in cluster. Not applying pinning!")
logger.debug("Finished: validate_node_presence.")
return False

View File

@@ -88,7 +88,7 @@ class SystemdLogger:
# logging is preferred.
if SYSTEMD_PRESENT:
# Add a JournalHandler for systemd integration
handler = JournalHandler()
handler = JournalHandler(SYSLOG_IDENTIFIER="ProxLB")
else:
# Add a stdout handler as a fallback
handler = logging.StreamHandler(sys.stdout)

View File

@@ -135,6 +135,14 @@ class ProxmoxApi:
proxlb_credentials = proxlb_config["proxmox_api"]
present_auth_pass = "pass" in proxlb_credentials
present_auth_secret = "token_secret" in proxlb_credentials
token_id = proxlb_credentials.get("token_id", None)
if token_id:
non_allowed_chars = ["@", "!"]
for char in non_allowed_chars:
if char in token_id:
logger.error("Wrong user/token format defined. User and token id must be split! Please see: https://github.com/gyptazy/ProxLB/blob/main/docs/03_configuration.md#required-permissions-for-a-user")
sys.exit(1)
if present_auth_pass and present_auth_secret:
logger.critical("Username/password and API token authentication are mutually exclusive. Please use only one!")
@@ -336,7 +344,15 @@ class ProxmoxApi:
permissions_available = []
# Get the permissions for the current user/token from API
permissions = proxmox_api.access.permissions.get()
try:
permissions = proxmox_api.access.permissions.get()
except proxmoxer.core.ResourceException as api_error:
if "no such user" in str(api_error):
logger.error("Authentication to Proxmox API not possible: User not known - please check your username and config file.")
sys.exit(1)
else:
logger.error(f"Proxmox API error: {api_error}")
sys.exit(1)
# Get all available permissions of the current user/token
for path, permission in permissions.items():
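The token-id format check added in this file rejects `@` and `!` because a full token reference like `user@realm!tokenid` must be passed to the API client as separate user and token-id fields. A minimal sketch of that check (function name hypothetical):

```python
def valid_token_id(token_id):
    """Sketch: the user ('user@realm') and the token id are configured as
    separate fields, so the realm separator '@' and the token separator '!'
    must not appear inside the token id itself."""
    return not any(char in token_id for char in ("@", "!"))

print(valid_token_id("proxlb"))             # well-formed token id
print(valid_token_id("proxlb@pam!mytoken")) # full token reference: rejected
```

Rejecting the combined form early gives the user a configuration hint instead of an opaque authentication failure from the API.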

View File

@@ -1,7 +1,7 @@
__app_name__ = "ProxLB"
__app_desc__ = "A DRS alike loadbalancer for Proxmox clusters."
__app_desc__ = "An advanced resource scheduler and load balancer for Proxmox clusters."
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
__version__ = "1.1.5"
__version__ = "1.1.9"
__url__ = "https://github.com/gyptazy/ProxLB"

View File

@@ -2,9 +2,9 @@ from setuptools import setup
setup(
name="proxlb",
version="1.1.5",
description="A DRS alike loadbalancer for Proxmox clusters.",
long_description="An advanced DRS alike loadbalancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
version="1.1.9",
description="An advanced resource scheduler and load balancer for Proxmox clusters.",
long_description="An advanced resource scheduler and load balancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
author="Florian Paul Azim Hoberg",
author_email="gyptazy@gyptazy.com",
maintainer="Florian Paul Azim Hoberg",