Merge pull request #344 from gyptazy/release/1.1.9b

Release/1.1.9b
This commit is contained in:
gyptazy
2025-10-27 16:47:48 +01:00
committed by GitHub
19 changed files with 852 additions and 72 deletions

View File

@@ -0,0 +1,5 @@
added:
- Add pressure (PSI) based balancing for memory, cpu, disk (req. PVE9 or greater) (@gyptazy). [#337]
- Pressure (PSI) based balancing for nodes
- Pressure (PSI) based balancing for guests
- Add PVE version evaluation

View File

@@ -0,0 +1,2 @@
added:
- Add an optional memory balancing threshold (@gyptazy). [#342]

View File

@@ -0,0 +1,2 @@
added:
- Add affinity/anti-affinity support by pools (@gyptazy). [#343]

View File

@@ -0,0 +1 @@
date: TBD

100
README.md
View File

@@ -54,6 +54,10 @@ ProxLB's key features are by enabling automatic rebalancing of VMs and CTs acros
* Memory
* Disk (only local storage)
* CPU
* Rebalance by different modes:
* Used resources
* Assigned resources
* PSI (Pressure) of resources
* Get best nodes for further automation
* Supported Guest Types
* VMs
@@ -169,6 +173,7 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
| Version | Image |
|------|:------:|
| latest | cr.gyptazy.com/proxlb/proxlb:latest |
| v1.1.9 | cr.gyptazy.com/proxlb/proxlb:v1.1.9 |
| v1.1.8 | cr.gyptazy.com/proxlb/proxlb:v1.1.8 |
| v1.1.7 | cr.gyptazy.com/proxlb/proxlb:v1.1.7 |
| v1.1.6.1 | cr.gyptazy.com/proxlb/proxlb:v1.1.6.1 |
@@ -277,8 +282,11 @@ The following options can be set in the configuration file `proxlb.yaml`:
| | balance_types | | ['vm', 'ct'] | `List` | Defines the types of guests that should be honored. [values: `vm`, `ct`]|
| | max_job_validation | | 1800 | `Int` | How long a job validation may take in seconds. (default: 1800) |
| | balanciness | | 10 | `Int` | The maximum delta of resource usage between the nodes with the highest and the lowest usage. |
| | memory_threshold | | 75 | `Int` | The threshold (in percent) that must be reached before balancing actions are performed. (Optional) |
| | method | | memory | `Str` | The balancing method that should be used. [values: `memory` (default), `cpu`, `disk`]|
| | mode | | used | `Str` | The balancing mode that should be used. [values: `used` (default), `assigned`] |
| | mode | | used | `Str` | The balancing mode that should be used. [values: `used` (default), `assigned`, `psi` (pressure)] |
| | psi | | { nodes: { memory: { pressure_full: 0.20, pressure_some: 0.20, pressure_spikes: 1.00 }}} | `Dict` | A dict of PSI based thresholds for nodes and guests |
| | pools | | pools: { dev: { type: affinity }, de-nbg01-db: { type: anti-affinity }} | `Dict` | A dict of pool names and their type for creating affinity/anti-affinity rules |
| `service` | | | | | |
| | daemon | | True | `Bool` | If daemon mode should be activated. |
| | `schedule` | | | `Dict` | Schedule config block for rebalancing. |
@@ -320,9 +328,47 @@ balancing:
with_conntrack_state: True
balance_types: ['vm', 'ct']
max_job_validation: 1800
memory_threshold: 75
balanciness: 5
method: memory
mode: used
# # PSI thresholds only apply when using mode 'psi'
# # PSI based balancing is currently in beta and req. PVE >= 9
# psi:
# nodes:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# guests:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
pools:
dev:
type: affinity
de-nbg01-db:
type: anti-affinity
pin:
- virt66
- virt77
service:
daemon: True
@@ -353,19 +399,33 @@ ProxLB provides an advanced mechanism to define affinity and anti-affinity rules
ProxLB implements affinity and anti-affinity rules through a tag-based system within the Proxmox web interface. Each guest (virtual machine or container) can be assigned specific tags, which then dictate its placement behavior. This method maintains a streamlined and secure approach to managing VM relationships while preserving Proxmox's inherent permission model.
### Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data. In general, there are two ways to manage affinity rules:
#### Affinity Rules by Tags
To define an affinity rule which keeps all guests assigned to this tag together on a node, users assign a tag with the prefix `plb_affinity_$TAG`:
#### Example for Screenshot
```
plb_affinity_talos
```
As a result, ProxLB will attempt to place all VMs with the `plb_affinity_talos` tag on the same host (see also the attached screenshot with the same node).
### Anti-Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure.
#### Affinity Rules by Pools
Another approach is to use pools in Proxmox. This way, the rules can also easily be combined with other pool-based resources such as backup jobs. However, with this approach you need to adjust the ProxLB config file to your needs. Within the `balancing` section you can create a dict of pools, including the pool name and the affinity type. Please see the example for further details:
**Example Config**
```
balancing:
[...]
pools: # Optional: Define affinity/anti-affinity rules per pool
dev: # Pool name: dev
type: affinity # Type: affinity (keeping VMs together)
pin: # Pin VMs to Nodes
- virt77 # Pinning to 'virt77' which is maybe an older system for dev labs
```
### Anti-Affinity Rules by Tags
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure. In general, there are two ways to manage anti-affinity rules:
To define an anti-affinity rule that ensures systems within this group are not moved to the same node, users assign a tag with the prefix:
@@ -376,6 +436,19 @@ plb_anti_affinity_ntp
As a result, ProxLB will try to place the VMs with the `plb_anti_affinity_ntp` tag on different hosts (see also the attached screenshot with the different nodes).
#### Anti-Affinity Rules by Pools
Another approach is to use pools in Proxmox. This way, the rules can also easily be combined with other pool-based resources such as backup jobs. However, with this approach you need to adjust the ProxLB config file to your needs. Within the `balancing` section you can create a dict of pools, including the pool name and the affinity type. Please see the example for further details:
**Example Config**
```
balancing:
[...]
pools: # Optional: Define affinity/anti-affinity rules per pool
de-nbg01-db: # Pool name: de-nbg01-db
type: anti-affinity # Type: anti-affinity (spreading VMs apart)
```
**Note:** While this ensures that ProxLB tries to distribute these VMs across different physical hosts within the Proxmox cluster, this may not always be possible. If more guests are attached to the group than there are nodes in the cluster, they still need to run somewhere. In that case, the node with the most free resources will be selected.
### Ignore VMs
@@ -395,6 +468,7 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
### Pin VMs to Specific Hypervisor Nodes
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This can be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be useful when physical hardware is attached to a node that is not generally available within the cluster.
#### Pinning VMs to (a) specific Hypervisor Node(s) by Tag
To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
#### Example for Screenshot
@@ -404,6 +478,22 @@ plb_pin_node03
As a result, ProxLB will pin the guest `dev-vm01` to the node `node03`.
#### Pinning VMs to (a) specific Hypervisor Node(s) by Pools
Besides the tag approach, you can also pin a resource group to a specific hypervisor or group of hypervisors by defining a `pin` key of type list.
**Example Config**
```
balancing:
[...]
pools: # Optional: Define affinity/anti-affinity rules per pool
dev: # Pool name: dev
type: affinity # Type: affinity (keeping VMs together)
pin: # Pin VMs to Nodes
- virt77 # Pinning to 'virt77' which is maybe an older system for dev labs
```
You can also repeat this step multiple times with different node names to create a group of allowed hosts on which the guest may be served. In this case, ProxLB takes the node with the lowest used resources, according to the defined balancing values, from this group.
**Note:** The given node names from the tag are validated. This means ProxLB validates whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
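For illustration, this validation and fallback behavior can be sketched as follows (a minimal sketch; the function and variable names are assumptions, not ProxLB's actual implementation):
```
def filter_valid_pin_targets(pin_targets: list[str], cluster_nodes: set[str]) -> list[str]:
    # Keep only pin targets that are real members of the cluster. Unknown or
    # unavailable node names are dropped so that the guest simply falls back
    # to the regular balancing process and keeps running.
    valid = [node for node in pin_targets if node in cluster_nodes]
    if not valid:
        print("No valid pin target found; falling back to regular balancing.")
    return valid
```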

View File

@@ -26,11 +26,48 @@ balancing:
live: True
with_local_disks: True
with_conntrack_state: True
balance_types: ['vm', 'ct']
max_job_validation: 1800
balanciness: 5
method: memory
mode: used
balance_types: ['vm', 'ct'] # 'vm' | 'ct'
max_job_validation: 1800 # Maximum time (in seconds) a job validation may take
memory_threshold: 75 # Optional: Maximum threshold (in percent) to trigger balancing actions
balanciness: 5 # Maximum delta of resource usage between highest and lowest usage node
method: memory # 'memory' | 'cpu' | 'disk'
mode: used # 'assigned' | 'used' | 'psi'
# # PSI thresholds only apply when using mode 'psi'
# psi:
# nodes:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# guests:
# memory:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# cpu:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
# disk:
# pressure_full: 0.20
# pressure_some: 0.20
# pressure_spikes: 1.00
pools: # Optional: Define affinity/anti-affinity rules per pool
dev: # Pool name: dev
type: affinity # Type: affinity (keeping VMs together)
de-nbg01-db: # Pool name: de-nbg01-db
type: anti-affinity # Type: anti-affinity (spreading VMs apart)
pin: # Define a pinning of guests to specific node(s)
- virt66
- virt77
service:
daemon: True

8
debian/changelog vendored
View File

@@ -1,3 +1,11 @@
proxlb (1.1.9) stable; urgency=medium
* Add pressure (PSI) based balancing for memory, cpu, disk (req. PVE9 or greater). (Closes: #339)
* Add (memory) threshold for nodes before running balancing. (Closes: #342)
* Add affinity/anti-affinity support by pools. (Closes: #343)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Thu, 22 Oct 2025 09:04:13 +0200
proxlb (1.1.8) stable; urgency=medium
* Fix API errors when using conntrack aware migration with older PVE version. (Closes: #318)

View File

@@ -20,6 +20,10 @@
7. [Run as a Systemd-Service](#run-as-a-systemd-service)
8. [SSL Self-Signed Certificates](#ssl-self-signed-certificates)
9. [Node Maintenances](#node-maintenances)
10. [Balancing Methods](#balancing-methods)
1. [Used Resources](#used-resources)
2. [Assigned Resources](#assigned-resources)
3. [Pressure (PSI) based Resources](#pressure-psi-based-resources)
## Authentication / User Accounts / Permissions
### Authentication
@@ -76,8 +80,8 @@ ProxLB provides an advanced mechanism to define affinity and anti-affinity rules
ProxLB implements affinity and anti-affinity rules through a tag-based system within the Proxmox web interface. Each guest (virtual machine or container) can be assigned specific tags, which then dictate its placement behavior. This method maintains a streamlined and secure approach to managing VM relationships while preserving Proxmox's inherent permission model.
#### Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data.
#### Affinity Rules by Tags
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-affinity-rules.jpg"/> Affinity rules are used to group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication, such as clustered databases or application servers that frequently exchange data.
To define an affinity rule which keeps all guests assigned to this tag together on a node, users assign a tag with the prefix `plb_affinity_$TAG`:
@@ -88,8 +92,20 @@ plb_affinity_talos
As a result, ProxLB will attempt to place all VMs with the `plb_affinity_talos` tag on the same host (see also the attached screenshot with the same node).
#### Anti-Affinity Rules
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure.
#### Affinity Rules by Pools
Another approach is to use pools in Proxmox. This way, the rules can also easily be combined with other pool-based resources such as backup jobs. However, with this approach you need to adjust the ProxLB config file to your needs. Within the `balancing` section you can create a dict of pools, including the pool name and the affinity type. Please see the example for further details:
**Example Config**
```
balancing:
[...]
pools: # Optional: Define affinity/anti-affinity rules per pool
dev: # Pool name: dev
type: affinity # Type: affinity (keeping VMs together)
```
#### Anti-Affinity Rules by Tags
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-anti-affinity-rules.jpg"/> Conversely, anti-affinity rules ensure that designated VMs do not run on the same physical host. This is particularly useful for high-availability setups, where redundancy is crucial. Ensuring that critical services are distributed across multiple hosts reduces the risk of a single point of failure.
To define an anti-affinity rule that ensures systems within this group are not moved to the same node, users assign a tag with the prefix:
@@ -102,6 +118,18 @@ As a result, ProxLB will try to place the VMs with the `plb_anti_affinity_ntp` t
**Note:** While this ensures that ProxLB tries to distribute these VMs across different physical hosts within the Proxmox cluster, this may not always be possible. If more guests are attached to the group than there are nodes in the cluster, they still need to run somewhere. In that case, the node with the most free resources will be selected.
#### Anti-Affinity Rules by Pools
Another approach is to use pools in Proxmox. This way, the rules can also easily be combined with other pool-based resources such as backup jobs. However, with this approach you need to adjust the ProxLB config file to your needs. Within the `balancing` section you can create a dict of pools, including the pool name and the affinity type. Please see the example for further details:
**Example Config**
```
balancing:
[...]
pools: # Optional: Define affinity/anti-affinity rules per pool
de-nbg01-db: # Pool name: de-nbg01-db
type: anti-affinity # Type: anti-affinity (spreading VMs apart)
```
### Affinity / Anti-Affinity Enforcing
When a cluster is already balanced and does not require further adjustments, enabling the enforce_affinity parameter ensures that affinity and anti-affinity rules are still respected. This parameter prioritizes the placement of guest objects according to these rules, even if it leads to slight resource imbalances or increased migration overhead. Regularly reviewing and updating these rules, along with monitoring cluster performance, helps maintain optimal performance and reliability. By carefully managing these aspects, you can create a cluster environment that meets your specific needs and maintains a good balance of resources.
@@ -113,7 +141,7 @@ balancing:
*Note: This may have impacts on the cluster. Depending on the created group matrix, the result may also be an unbalanced cluster.*
### Ignore VMs / CTs
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-ignore-vm-movement.jpg"/> Guests, such as VMs or CTs, can also be completely ignored. This means, they won't be affected by any migration (even when (anti-)affinity rules are enforced). To ensure a proper resource evaluation, these guests are still collected and evaluated but simply skipped for balancing actions. Another thing is the implementation. While ProxLB might have a very restricted configuration file including the file permissions, this file is only read- and writeable by the Proxmox administrators. However, we might have user and groups who want to define on their own that their systems shouldn't be moved. Therefore, these users can simpy set a specific tag to the guest object - just like the (anti)affinity rules.
<img align="left" src="https://cdn.gyptazy.com/img/proxlb-ignore-vm-movement.jpg"/> Guests, such as VMs or CTs, can also be completely ignored. This means they won't be affected by any migration (even when (anti-)affinity rules are enforced). To ensure a proper resource evaluation, these guests are still collected and evaluated but simply skipped for balancing actions. Another aspect is the implementation: while the ProxLB configuration file may be tightly restricted, including its file permissions, it is only readable and writable by the Proxmox administrators. However, there may be users and groups who want to define on their own that their systems shouldn't be moved. Therefore, these users can simply set a specific tag on the guest object - just like with the (anti-)affinity rules.
To define a guest to be ignored from the balancing, users assign a tag with the prefix `plb_ignore_$TAG`:
@@ -235,4 +263,115 @@ The maintenance_nodes key must be defined as a list, even if it only includes a
* No new workloads will be balanced or migrated onto it.
* Any existing workloads currently running on the node will be migrated away in accordance with the configured balancing strategies, assuming resources on other nodes allow.
This feature is particularly useful during planned maintenance, upgrades, or troubleshooting, ensuring that services continue to run with minimal disruption while the specified node is being worked on.
This feature is particularly useful during planned maintenance, upgrades, or troubleshooting, ensuring that services continue to run with minimal disruption while the specified node is being worked on.
## 10. Balancing Methods
ProxLB provides multiple balancing modes that define *how* resources are evaluated and compared during cluster balancing.
Each mode reflects a different strategy for determining load and distributing guests (VMs or containers) between nodes.
Depending on your environment, provisioning strategy, and performance goals, you can choose between:
| Mode | Description | Typical Use Case |
|------|--------------|------------------|
| `used` | Uses the *actual runtime resource usage* (e.g. CPU, memory, disk). | Dynamic or lab environments with frequent workload changes and tolerance for overprovisioning. |
| `assigned` | Uses the *statically defined resource allocations* from guest configurations. | Production or SLA-driven clusters that require guaranteed resources and predictable performance. |
| `psi` | Uses Linux *Pressure Stall Information (PSI)* metrics to evaluate real system contention and pressure. | Advanced clusters that require pressure-aware decisions for proactive rebalancing. |
### 10.1 Used Resources
When **mode: `used`** is configured, ProxLB evaluates the *real usage metrics* of guest objects (VMs and CTs).
It collects the current CPU, memory, and disk usage directly from the Proxmox API to determine the *actual consumption* of each guest and node.
This mode is ideal for **dynamic environments** where workloads frequently change and **overprovisioning is acceptable**. It provides the most reactive balancing behavior, since decisions are based on live usage instead of static assignment.
Typical scenarios include:
- Production environments to distribute workloads across the nodes.
- Test or development clusters with frequent VM changes.
- Clusters where resource spikes are short-lived.
- Environments where slight resource contention is tolerable.
#### Example Configuration
```yaml
balancing:
mode: used
```
### 10.2 Assigned Resources
When **mode: `assigned`** is configured, ProxLB evaluates the *provisioned or allocated resources* of each guest (VM or CT) instead of their runtime usage.
It uses data such as **CPU cores**, **memory limits**, and **disk allocations** defined in Proxmox to calculate how much of each node's capacity is reserved.
This mode is ideal for **production clusters** where:
- Overcommitment is *not allowed or only minimally tolerated*.
- Each node's workload is planned based on the assigned capacities.
- Administrators want predictable resource distribution aligned with provisioning policies.
Unlike the `used` mode, `assigned` focuses purely on the *declared configuration* of guests and remains stable even if actual usage varies temporarily.
Typical scenarios include:
- Enterprise environments with SLA or QoS requirements.
- Clusters where workloads are sized deterministically.
- Situations where consistent node utilization and capacity awareness are crucial.
#### Example Configuration
```yaml
balancing:
mode: assigned
```
### 10.3 Pressure (PSI) based Resources
> [!IMPORTANT]
> PSI based balancing is still in beta! If you find any bugs, please raise an issue including metrics of all nodes and affected guests. You can provide metrics directly from PVE or Grafana (via node_exporter or pve_exporter).
When **mode: `psi`** is configured, ProxLB uses the **Linux Pressure Stall Information (PSI)** interface to measure the *real-time pressure* on system resources such as **CPU**, **memory**, and **disk I/O**.
Unlike the `used` or `assigned` modes, which rely on static or average metrics, PSI provides *direct insight into how often and how long tasks are stalled* because of insufficient resources.
This enables ProxLB to make **proactive balancing decisions** — moving workloads *before* performance degradation becomes visible to the user.
**IMPORTANT**: Predictively distributing workloads is risky and might not result in the expected state. Therefore, ProxLB migrates only a single instance every 60 minutes to obtain fresh real-world metrics and to validate whether further changes are required. Keep in mind that migrations are also costly and should be avoided as much as possible.
PSI metrics are available for both **nodes** and **guest objects**, allowing fine-grained balancing decisions:
- **Node-level PSI:** Detects cluster nodes under systemic load or contention.
- **Guest-level PSI:** Identifies individual guests suffering from memory, CPU, or I/O stalls.
### PSI Metrics Explained
Each monitored resource defines three pressure thresholds:
| Key | Description |
|-----|--------------|
| `pressure_some` | Indicates partial stall conditions where some tasks are waiting for a resource. |
| `pressure_full` | Represents complete stall conditions where *all* tasks are blocked waiting for a resource. |
| `pressure_spikes` | Defines short-term burst conditions that may signal saturation spikes. |
These thresholds are expressed in **percentages** and represent how much time the kernel reports stalls over specific averaging windows (e.g. 5s, 10s, 60s).
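Expressed as code, the per-resource evaluation mirrors the `is_hot` check in `Calculations.set_node_hot` (a simplified sketch of the rule, not the full implementation):
```python
def is_hot(pressure_full: float, pressure_some: float,
           pressure_spikes: float, threshold: dict) -> bool:
    # HOT when both the 'full' and 'some' averages exceed their thresholds,
    # or when the short-term spike value alone exceeds its threshold.
    return (
        pressure_full >= threshold["pressure_full"]
        and pressure_some >= threshold["pressure_some"]
    ) or pressure_spikes >= threshold["pressure_spikes"]
```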
### Example Configuration
```yaml
balancing:
mode: psi
psi:
nodes:
memory:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
cpu:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
disk:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
guests:
memory:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
cpu:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
disk:
pressure_full: 0.20
pressure_some: 0.20
pressure_spikes: 1.00
```

View File

@@ -2,5 +2,5 @@ apiVersion: v3
name: proxlb
description: A Helm chart for self-hosted ProxLB
type: application
version: "1.1.8"
appVersion: "v1.1.8"
version: "1.1.9"
appVersion: "v1.1.9"

View File

@@ -1,7 +1,7 @@
image:
registry: cr.gyptazy.com
repository: proxlb/proxlb
tag: v1.1.8
tag: v1.1.9
pullPolicy: IfNotPresent
imagePullSecrets: [ ]

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
VERSION="1.1.8"
VERSION="1.1.9"
# ProxLB
sed -i "s/^__version__ = .*/__version__ = \"$VERSION\"/" "proxlb/utils/version.py"

View File

@@ -23,6 +23,7 @@ from models.guests import Guests
from models.groups import Groups
from models.calculations import Calculations
from models.balancing import Balancing
from models.pools import Pools
from utils.helper import Helper
@@ -71,16 +72,19 @@ def main():
# Get all required objects from the Proxmox cluster
meta = {"meta": proxlb_config}
nodes = Nodes.get_nodes(proxmox_api, proxlb_config)
guests = Guests.get_guests(proxmox_api, nodes, meta)
pools = Pools.get_pools(proxmox_api)
guests = Guests.get_guests(proxmox_api, pools, nodes, meta, proxlb_config)
groups = Groups.get_groups(guests, nodes)
# Merge obtained objects from the Proxmox cluster for further usage
proxlb_data = {**meta, **nodes, **guests, **groups}
proxlb_data = {**meta, **nodes, **guests, **pools, **groups}
Helper.log_node_metrics(proxlb_data)
# Update the initial node resource assignments
# by the previously created groups.
Calculations.set_node_assignments(proxlb_data)
Calculations.set_node_hot(proxlb_data)
Calculations.set_guest_hot(proxlb_data)
Calculations.get_most_free_node(proxlb_data, cli_args.best_node)
Calculations.relocate_guests_on_maintenance_nodes(proxlb_data)
Calculations.get_balanciness(proxlb_data)

View File

@@ -80,7 +80,7 @@ class Calculations:
for guest_name in group_meta["guests"]:
guest_node_current = proxlb_data["guests"][guest_name]["node_current"]
# Update Hardware assignments
# Update resource assignments
# Update assigned values for the current node
logger.debug(f"set_node_assignment of guest {guest_name} on node {guest_node_current} with cpu_total: {proxlb_data['guests'][guest_name]['cpu_total']}, memory_total: {proxlb_data['guests'][guest_name]['memory_total']}, disk_total: {proxlb_data['guests'][guest_name]['disk_total']}.")
proxlb_data["nodes"][guest_node_current]["cpu_assigned"] += proxlb_data["guests"][guest_name]["cpu_total"]
@@ -93,6 +93,83 @@ class Calculations:
logger.debug("Finished: set_node_assignments.")
def set_node_hot(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Evaluates node 'full' pressure metrics for memory, cpu, and io
against defined thresholds and sets <metric>_pressure_hot = True
when a node is considered HOT.
Returns the modified proxlb_data dict.
"""
logger.debug("Starting: set_node_hot.")
balancing_cfg = proxlb_data.get("meta", {}).get("balancing", {})
thresholds = balancing_cfg.get("psi_thresholds", balancing_cfg.get("psi", {}).get("nodes", {}))
nodes = proxlb_data.get("nodes", {})
for node_name, node in nodes.items():
if node.get("maintenance"):
continue
if node.get("ignore"):
continue
# PSI metrics are only available on Proxmox VE 9.0 and higher.
if proxlb_data["meta"]["balancing"].get("mode", "used") == "psi":
if tuple(map(int, proxlb_data["nodes"][node["name"]]["pve_version"].split('.'))) < tuple(map(int, "9.0".split('.'))):
logger.critical(f"Proxmox node {node['name']} runs Proxmox VE version {proxlb_data['nodes'][node['name']]['pve_version']}."
" PSI metrics require Proxmox VE 9.0 or higher. Balancing deactivated!")
for metric, threshold in thresholds.items():
pressure_full = node.get(f"{metric}_pressure_full_percent", 0.0)
pressure_some = node.get(f"{metric}_pressure_some_percent", 0.0)
pressure_spikes = node.get(f"{metric}_pressure_full_spikes_percent", 0.0)
is_hot = (pressure_full >= threshold["pressure_full"] and pressure_some >= threshold["pressure_some"]) or (pressure_spikes >= threshold["pressure_spikes"])
if is_hot:
logger.debug(f"Set node {node["name"]} as hot based on {metric} pressure metrics.")
proxlb_data["nodes"][node["name"]][f"{metric}_pressure_hot"] = True
proxlb_data["nodes"][node["name"]][f"pressure_hot"] = True
else:
logger.debug(f"Node {node["name"]} is not hot based on {metric} pressure metrics.")
logger.debug("Finished: set_node_hot.")
return proxlb_data
def set_guest_hot(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Evaluates guest 'full' pressure metrics for memory, cpu, and io
against defined thresholds and sets <metric>_pressure_hot = True
when a guest is considered HOT.
Returns the modified proxlb_data dict.
"""
logger.debug("Starting: set_guest_hot.")
balancing_cfg = proxlb_data.get("meta", {}).get("balancing", {})
thresholds = balancing_cfg.get("psi_thresholds", balancing_cfg.get("psi", {}).get("guests", {}))
guests = proxlb_data.get("guests", {})
for guest_name, guest in guests.items():
if guest.get("ignore"):
continue
for metric, threshold in thresholds.items():
pressure_full = guest.get(f"{metric}_pressure_full_percent", 0.0)
pressure_some = guest.get(f"{metric}_pressure_some_percent", 0.0)
pressure_spikes = guest.get(f"{metric}_pressure_full_spikes_percent", 0.0)
is_hot = (pressure_full >= threshold["pressure_full"] and pressure_some >= threshold["pressure_some"]) or (pressure_spikes >= threshold["pressure_spikes"])
if is_hot:
logger.debug(f"Set guest {guest["name"]} as hot based on {metric} pressure metrics.")
proxlb_data["guests"][guest["name"]][f"{metric}_pressure_hot"] = True
proxlb_data["guests"][guest["name"]][f"pressure_hot"] = True
else:
logger.debug(f"guest {guest["name"]} is not hot based on {metric} pressure metrics.")
logger.debug("Finished: set_guest_hot.")
return proxlb_data
@staticmethod
def get_balanciness(proxlb_data: Dict[str, Any]) -> Dict[str, Any]:
"""
@@ -113,7 +190,66 @@ class Calculations:
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
balanciness = proxlb_data["meta"]["balancing"].get("balanciness", 10)
method_value = [node_meta[f"{method}_{mode}_percent"] for node_meta in proxlb_data["nodes"].values()]
if mode == "assigned":
method_value = [node_meta[f"{method}_{mode}_percent"] for node_meta in proxlb_data["nodes"].values()]
if proxlb_data["meta"]["balancing"].get(f"{method}_threshold", None):
threshold = proxlb_data["meta"]["balancing"].get(f"{method}_threshold")
highest_usage_node = max(proxlb_data["nodes"].values(), key=lambda x: x[f"{method}_{mode}_percent"])
highest_node_value = highest_usage_node[f"{method}_{mode}_percent"]
if highest_node_value >= threshold:
logger.debug(f"Guest balancing is required. Highest {method} usage node {highest_usage_node['name']} is above the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. Highest {method} usage node {highest_usage_node['name']} is below the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = False
else:
logger.debug(f"No {method} threshold defined for balancing. Skipping threshold check.")
elif mode == "used":
method_value = [node_meta[f"{method}_{mode}_percent"] for node_meta in proxlb_data["nodes"].values()]
if proxlb_data["meta"]["balancing"].get(f"{method}_threshold", None):
threshold = proxlb_data["meta"]["balancing"].get(f"{method}_threshold")
highest_usage_node = max(proxlb_data["nodes"].values(), key=lambda x: x[f"{method}_{mode}_percent"])
highest_node_value = highest_usage_node[f"{method}_{mode}_percent"]
if highest_node_value >= threshold:
logger.debug(f"Guest balancing is required. Highest {method} usage node {highest_usage_node['name']} is above the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. Highest {method} usage node {highest_usage_node['name']} is below the defined threshold of {threshold}% with a value of {highest_node_value}%.")
proxlb_data["meta"]["balancing"]["balance"] = False
else:
logger.debug(f"No {method} threshold defined for balancing. Skipping threshold check.")
elif mode == "psi":
method_value = [node_meta[f"{method}_pressure_full_spikes_percent"] for node_meta in proxlb_data["nodes"].values()]
any_node_hot = any(node.get(f"{method}_pressure_hot", False) for node in proxlb_data["nodes"].values())
any_guest_hot = any(guest.get(f"{method}_pressure_hot", False) for guest in proxlb_data["guests"].values())
if any_node_hot:
logger.debug(f"Guest balancing is required. A node is marked as HOT based on {method} pressure metrics.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. No node is marked as HOT based on {method} pressure metrics.")
if any_guest_hot:
logger.debug(f"Guest balancing is required. A guest is marked as HOT based on {method} pressure metrics.")
proxlb_data["meta"]["balancing"]["balance"] = True
else:
logger.debug(f"Guest balancing is ok. No guest is marked as HOT based on {method} pressure metrics.")
return proxlb_data
else:
logger.critical(f"Unknown balancing mode: {mode} provided. Cannot get balanciness.")
sys.exit(1)
method_value_highest = max(method_value)
method_value_lowest = min(method_value)
@@ -159,7 +295,23 @@ class Calculations:
# Filter by the defined methods and modes for balancing
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
if mode == "assigned":
logger.debug(f"Get best node for balancing by assigned {method} resources.")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
elif mode == "used":
logger.debug(f"Get best node for balancing by used {method} resources.")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
elif mode == "psi":
logger.debug(f"Get best node for balancing by pressure of {method} resources.")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_pressure_full_spikes_percent"])
else:
logger.critical(f"Unknown balancing mode: {mode} provided. Cannot get best node.")
sys.exit(1)
proxlb_data["meta"]["balancing"]["balance_reason"] = 'resources'
proxlb_data["meta"]["balancing"]["balance_next_node"] = lowest_usage_node["name"]
@@ -188,7 +340,7 @@ class Calculations:
Returns:
None
"""
logger.debug("Starting: get_most_free_node.")
logger.debug("Starting: relocate_guests_on_maintenance_nodes.")
proxlb_data["meta"]["balancing"]["balance_next_guest"] = ""
for guest_name in proxlb_data["groups"]["maintenance"]:
@@ -199,7 +351,7 @@ class Calculations:
Calculations.update_node_resources(proxlb_data)
logger.warning(f"Warning: Balancing may not be perfect because guest {guest_name} was located on a node which is in maintenance mode.")
logger.debug("Finished: get_most_free_node.")
logger.debug("Finished: relocate_guests_on_maintenance_nodes.")
@staticmethod
def relocate_guests(proxlb_data: Dict[str, Any]):
@@ -233,7 +385,26 @@ class Calculations:
Calculations.get_most_free_node(proxlb_data)
for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
if mode == 'psi':
logger.debug(f"Evaluating guest relocation based on {mode} mode.")
method = proxlb_data["meta"]["balancing"].get("method", "memory")
processed_guests_psi = proxlb_data["meta"]["balancing"].setdefault("processed_guests_psi", [])
unprocessed_guests_psi = [guest for guest in proxlb_data["guests"].values() if guest["name"] not in processed_guests_psi]
# Filter by the defined methods and modes for balancing
highest_usage_guest = max(unprocessed_guests_psi, key=lambda x: x[f"{method}_pressure_full_spikes_percent"])
# Append guest to the psi based processed list of guests
if highest_usage_guest["name"] == guest_name and guest_name not in proxlb_data["meta"]["balancing"]["processed_guests_psi"]:
proxlb_data["meta"]["balancing"]["processed_guests_psi"].append(guest_name)
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
else:
logger.debug(f"Evaluating guest relocation based on {mode} mode.")
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
Calculations.val_anti_affinity(proxlb_data, guest_name)
Calculations.val_node_relationships(proxlb_data, guest_name)
Calculations.update_node_resources(proxlb_data)
@@ -348,6 +519,11 @@ class Calculations:
"""
logger.debug("Starting: update_node_resources.")
guest_name = proxlb_data["meta"]["balancing"]["balance_next_guest"]
if guest_name == "":
logger.debug("No guest defined to update node resources for.")
return
node_current = proxlb_data["guests"][guest_name]["node_current"]
node_target = proxlb_data["meta"]["balancing"]["balance_next_node"]

View File

@@ -10,6 +10,7 @@ __license__ = "GPL-3.0"
from typing import Dict, Any
from utils.logger import SystemdLogger
from models.pools import Pools
from models.tags import Tags
import time
@@ -35,7 +36,7 @@ class Guests:
"""
@staticmethod
def get_guests(proxmox_api: any, nodes: Dict[str, Any], meta: Dict[str, Any]) -> Dict[str, Any]:
def get_guests(proxmox_api: any, pools: Dict[str, Any], nodes: Dict[str, Any], meta: Dict[str, Any], proxlb_config: Dict[str, Any]) -> Dict[str, Any]:
"""
Get metrics of all guests in a Proxmox cluster.
@@ -46,6 +47,8 @@ class Guests:
Args:
proxmox_api (any): The Proxmox API client instance.
pools (Dict[str, Any]): A dictionary containing information about the pools in the Proxmox cluster.
nodes (Dict[str, Any]): A dictionary containing information about the nodes in the Proxmox cluster.
meta (Dict[str, Any]): A dictionary containing metadata information.
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration.
Returns:
Dict[str, Any]: A dictionary containing metrics and information for all running guests.
@@ -62,24 +65,40 @@ class Guests:
# resource metrics for rebalancing to ensure that we do not overprovision the node.
for guest in proxmox_api.nodes(node).qemu.get():
if guest['status'] == 'running':
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', None)
guests['guests'][guest['name']]['cpu_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some')
guests['guests'][guest['name']]['cpu_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full')
guests['guests'][guest['name']]['cpu_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_hot'] = False
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['memory_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some')
guests['guests'][guest['name']]['memory_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full')
guests['guests'][guest['name']]['memory_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some', spikes=True)
guests['guests'][guest['name']]['memory_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full', spikes=True)
guests['guests'][guest['name']]['memory_pressure_hot'] = False
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
guests['guests'][guest['name']]['disk_used'] = guest['disk']
guests['guests'][guest['name']]['disk_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some')
guests['guests'][guest['name']]['disk_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full')
guests['guests'][guest['name']]['disk_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some', spikes=True)
guests['guests'][guest['name']]['disk_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full', spikes=True)
guests['guests'][guest['name']]['disk_pressure_hot'] = False
guests['guests'][guest['name']]['id'] = guest['vmid']
guests['guests'][guest['name']]['node_current'] = node
guests['guests'][guest['name']]['node_target'] = node
guests['guests'][guest['name']]['processed'] = False
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'vm')
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['pools'] = Pools.get_pools_for_guest(guest['name'], pools)
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes)
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['type'] = 'vm'
logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
@@ -94,20 +113,37 @@ class Guests:
guests['guests'][guest['name']] = {}
guests['guests'][guest['name']]['name'] = guest['name']
guests['guests'][guest['name']]['cpu_total'] = int(guest['cpus'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_cpu_usage(proxmox_api, node, guest['vmid'], guest['name'])
guests['guests'][guest['name']]['cpu_used'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', None)
guests['guests'][guest['name']]['cpu_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some')
guests['guests'][guest['name']]['cpu_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full')
guests['guests'][guest['name']]['cpu_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'some', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'cpu', 'full', spikes=True)
guests['guests'][guest['name']]['cpu_pressure_hot'] = False
guests['guests'][guest['name']]['memory_total'] = guest['maxmem']
guests['guests'][guest['name']]['memory_used'] = guest['mem']
guests['guests'][guest['name']]['memory_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some')
guests['guests'][guest['name']]['memory_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full')
guests['guests'][guest['name']]['memory_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'some', spikes=True)
guests['guests'][guest['name']]['memory_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'memory', 'full', spikes=True)
guests['guests'][guest['name']]['memory_pressure_hot'] = False
guests['guests'][guest['name']]['disk_total'] = guest['maxdisk']
guests['guests'][guest['name']]['disk_used'] = guest['disk']
guests['guests'][guest['name']]['disk_pressure_some_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some')
guests['guests'][guest['name']]['disk_pressure_full_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full')
guests['guests'][guest['name']]['disk_pressure_some_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'some', spikes=True)
guests['guests'][guest['name']]['disk_pressure_full_spikes_percent'] = Guests.get_guest_rrd_data(proxmox_api, node, guest['vmid'], guest['name'], 'disk', 'full', spikes=True)
guests['guests'][guest['name']]['disk_pressure_hot'] = False
guests['guests'][guest['name']]['id'] = guest['vmid']
guests['guests'][guest['name']]['node_current'] = node
guests['guests'][guest['name']]['node_target'] = node
guests['guests'][guest['name']]['processed'] = False
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'ct')
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['pools'] = Pools.get_pools_for_guest(guest['name'], pools)
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes)
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['type'] = 'ct'
logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")
@@ -118,36 +154,55 @@ class Guests:
return guests
@staticmethod
def get_guest_cpu_usage(proxmox_api, node_name: str, vm_id: int, vm_name: str) -> float:
def get_guest_rrd_data(proxmox_api, node_name: str, vm_id: int, vm_name: str, object_name: str, object_type: str, spikes=False) -> float:
"""
Retrieve the average CPU usage of a guest instance (VM/CT) over the past hour.
This method queries the Proxmox VE API for RRD (Round-Robin Database) data
related to CPU usage of a specific guest instance and calculates the average CPU usage
over the last hour using the "AVERAGE" consolidation function.
Retrieves the RRD data metrics for a specific resource (CPU, memory, disk) of a guest VM or CT.
Args:
proxmox_api: An instance of the Proxmox API client.
node_name (str): The name of the Proxmox node hosting the VM.
vm_id (int): The unique identifier of the guest instance (VM/CT).
vm_name (str): The name of the guest instance (VM/CT).
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node hosting the guest.
vm_id (int): The ID of the guest VM or CT.
vm_name (str): The name of the guest VM or CT.
object_name (str): The resource type to query (e.g., 'cpu', 'memory', 'disk').
object_type (str, optional): The pressure type ('some', 'full') or None for average usage.
spikes (bool, optional): Whether to consider spikes in the calculation. Defaults to False.
Returns:
float: The average CPU usage as a fraction (0.0 to 1.0) over the past hour.
Returns 0.0 if no data is available.
float: The calculated average usage value for the specified resource.
"""
logger.debug("Finished: get_guest_cpu_usage.")
logger.debug("Starting: get_guest_rrd_data.")
time.sleep(0.1)
try:
logger.debug(f"Getting RRD dara for guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="AVERAGE")
if spikes:
logger.debug(f"Getting spike RRD data for {object_name} from guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="MAX")
else:
logger.debug(f"Getting average RRD data for {object_name} from guest: {vm_name}.")
guest_data_rrd = proxmox_api.nodes(node_name).qemu(vm_id).rrddata.get(timeframe="hour", cf="AVERAGE")
except Exception:
logger.error(f"Failed to retrieve RRD data for guest: {vm_name} (ID: {vm_id}) on node: {node_name}. Using 0.0 as CPU usage.")
logger.debug("Finished: get_guest_cpu_usage.")
return 0.0
logger.error(f"Failed to retrieve RRD data for guest: {vm_name} (ID: {vm_id}) on node: {node_name}. Using 0.0 as value.")
logger.debug("Finished: get_guest_rrd_data.")
return float(0.0)
cpu_usage = sum(entry.get("cpu", 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
logger.debug(f"CPU RRD data for guest: {vm_name}: {cpu_usage}")
logger.debug("Finished: get_guest_cpu_usage.")
return cpu_usage
if object_type:
lookup_key = f"pressure{object_name}{object_type}"
if spikes:
# RRD data is collected every minute, so we look at the last 6 entries
# and take the maximum value to represent the spike
logger.debug(f"Getting RRD data (spike: {spikes}) of pressure for {object_name} {object_type} from guest: {vm_name}.")
rrd_data_value = [row.get(lookup_key) for row in guest_data_rrd if row.get(lookup_key) is not None]
rrd_data_value = max(rrd_data_value[-6:], default=0.0)
else:
# Calculate the average value from the RRD data entries
logger.debug(f"Getting RRD data (spike: {spikes}) of pressure for {object_name} {object_type} from guest: {vm_name}.")
rrd_data_value = sum(entry.get(lookup_key, 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
else:
logger.debug(f"Getting RRD data of cpu usage from guest: {vm_name}.")
rrd_data_value = sum(entry.get("cpu", 0.0) for entry in guest_data_rrd) / len(guest_data_rrd)
logger.debug(f"RRD data (spike: {spikes}) for {object_name} from guest: {vm_name}: {rrd_data_value}")
logger.debug("Finished: get_guest_rrd_data.")
return rrd_data_value

View File

@@ -21,6 +21,7 @@ __copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
import time
from typing import Dict, Any
from utils.logger import SystemdLogger
@@ -47,6 +48,7 @@ class Nodes:
Args:
proxmox_api (any): The Proxmox API client instance.
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration.
nodes (Dict[str, Any]): A dictionary containing information about the nodes in the Proxmox cluster.
Returns:
@@ -60,6 +62,8 @@ class Nodes:
if node["status"] == "online" and not Nodes.set_node_ignore(proxlb_config, node["node"]):
nodes["nodes"][node["node"]] = {}
nodes["nodes"][node["node"]]["name"] = node["node"]
nodes["nodes"][node["node"]]["pve_version"] = Nodes.get_node_pve_version(proxmox_api, node["node"])
nodes["nodes"][node["node"]]["pressure_hot"] = False
nodes["nodes"][node["node"]]["maintenance"] = False
nodes["nodes"][node["node"]]["cpu_total"] = node["maxcpu"]
nodes["nodes"][node["node"]]["cpu_assigned"] = 0
@@ -68,6 +72,11 @@ class Nodes:
nodes["nodes"][node["node"]]["cpu_assigned_percent"] = nodes["nodes"][node["node"]]["cpu_assigned"] / nodes["nodes"][node["node"]]["cpu_total"] * 100
nodes["nodes"][node["node"]]["cpu_free_percent"] = nodes["nodes"][node["node"]]["cpu_free"] / node["maxcpu"] * 100
nodes["nodes"][node["node"]]["cpu_used_percent"] = nodes["nodes"][node["node"]]["cpu_used"] / node["maxcpu"] * 100
nodes["nodes"][node["node"]]["cpu_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "some")
nodes["nodes"][node["node"]]["cpu_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "full")
nodes["nodes"][node["node"]]["cpu_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "some", spikes=True)
nodes["nodes"][node["node"]]["cpu_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "cpu", "full", spikes=True)
nodes["nodes"][node["node"]]["cpu_pressure_hot"] = False
nodes["nodes"][node["node"]]["memory_total"] = node["maxmem"]
nodes["nodes"][node["node"]]["memory_assigned"] = 0
nodes["nodes"][node["node"]]["memory_used"] = node["mem"]
@@ -75,6 +84,11 @@ class Nodes:
nodes["nodes"][node["node"]]["memory_assigned_percent"] = nodes["nodes"][node["node"]]["memory_assigned"] / nodes["nodes"][node["node"]]["memory_total"] * 100
nodes["nodes"][node["node"]]["memory_free_percent"] = nodes["nodes"][node["node"]]["memory_free"] / node["maxmem"] * 100
nodes["nodes"][node["node"]]["memory_used_percent"] = nodes["nodes"][node["node"]]["memory_used"] / node["maxmem"] * 100
nodes["nodes"][node["node"]]["memory_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "some")
nodes["nodes"][node["node"]]["memory_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "full")
nodes["nodes"][node["node"]]["memory_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "some", spikes=True)
nodes["nodes"][node["node"]]["memory_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "memory", "full", spikes=True)
nodes["nodes"][node["node"]]["memory_pressure_hot"] = False
nodes["nodes"][node["node"]]["disk_total"] = node["maxdisk"]
nodes["nodes"][node["node"]]["disk_assigned"] = 0
nodes["nodes"][node["node"]]["disk_used"] = node["disk"]
@@ -82,11 +96,17 @@ class Nodes:
nodes["nodes"][node["node"]]["disk_assigned_percent"] = nodes["nodes"][node["node"]]["disk_assigned"] / nodes["nodes"][node["node"]]["disk_total"] * 100
nodes["nodes"][node["node"]]["disk_free_percent"] = nodes["nodes"][node["node"]]["disk_free"] / node["maxdisk"] * 100
nodes["nodes"][node["node"]]["disk_used_percent"] = nodes["nodes"][node["node"]]["disk_used"] / node["maxdisk"] * 100
nodes["nodes"][node["node"]]["disk_pressure_some_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "some")
nodes["nodes"][node["node"]]["disk_pressure_full_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "full")
nodes["nodes"][node["node"]]["disk_pressure_some_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "some", spikes=True)
nodes["nodes"][node["node"]]["disk_pressure_full_spikes_percent"] = Nodes.get_node_rrd_data(proxmox_api, node["node"], "disk", "full", spikes=True)
nodes["nodes"][node["node"]]["disk_pressure_hot"] = False
# Evaluate if node should be set to maintenance mode
if Nodes.set_node_maintenance(proxmox_api, proxlb_config, node["node"]):
nodes["nodes"][node["node"]]["maintenance"] = True
logger.debug(f"Node metrics collected: {nodes}")
logger.debug("Finished: get_nodes.")
return nodes
@@ -153,3 +173,83 @@ class Nodes:
return True
logger.debug("Finished: set_node_ignore.")
@staticmethod
def get_node_rrd_data(proxmox_api, node_name: str, object_name: str, object_type: str, spikes=False) -> float:
"""
Retrieves the rrd data metrics for a specific resource (CPU, memory, disk) of a node.
Args:
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node hosting the guest.
object_name (str): The resource type to query (e.g., 'cpu', 'memory', 'disk').
object_type (str, optional): The pressure type ('some', 'full') or None for average usage.
spikes (bool, optional): Whether to consider spikes in the calculation. Defaults to False.
Returns:
float: The calculated average usage value for the specified resource.
"""
logger.debug("Starting: get_node_rrd_data.")
time.sleep(0.1)
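# Brief pause between successive Proxmox API requests (simple client-side throttling)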
try:
if spikes:
logger.debug(f"Getting spike RRD data for {object_name} from node: {node_name}.")
node_data_rrd = proxmox_api.nodes(node_name).rrddata.get(timeframe="hour", cf="MAX")
else:
logger.debug(f"Getting average RRD data for {object_name} from node: {node_name}.")
node_data_rrd = proxmox_api.nodes(node_name).rrddata.get(timeframe="hour", cf="AVERAGE")
except Exception:
logger.error(f"Failed to retrieve RRD data for guest: {node_name}. Using 0.0 as value.")
logger.debug("Finished: get_node_rrd_data.")
return 0.0
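# RRD rows expose PSI fields as 'pressure<resource><type>', e.g. 'pressurecpusome'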
lookup_key = f"pressure{object_name}{object_type}"
if spikes:
# RRD data is collected every minute, so we look at the last 6 entries
# and take the maximum value to represent the spike
rrd_data_values = [row.get(lookup_key) for row in node_data_rrd if row.get(lookup_key) is not None]
rrd_data_value = max(rrd_data_values[-6:], default=0.0)
else:
# Calculate the average across all RRD entries that provide the key,
# guarding against an empty result set to avoid a ZeroDivisionError
rrd_data_values = [row.get(lookup_key) for row in node_data_rrd if row.get(lookup_key) is not None]
rrd_data_value = sum(rrd_data_values) / len(rrd_data_values) if rrd_data_values else 0.0
logger.debug(f"RRD data (spike: {spikes}) for {object_name} from node: {node_name}: {rrd_data_value}")
logger.debug("Finished: get_node_rrd_data.")
return rrd_data_value
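# Usage sketch (hypothetical node name):
#   Nodes.get_node_rrd_data(api, "pve01", "cpu", "some")                   # hourly average
#   Nodes.get_node_rrd_data(api, "pve01", "memory", "full", spikes=True)   # recent peak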
@staticmethod
def get_node_pve_version(proxmox_api, node_name: str) -> str:
"""
Return the Proxmox VE (PVE) version for a given node by querying the Proxmox API.
This function calls proxmox_api.nodes(node_name).version.get() and extracts the
'version' field from the returned mapping (a release string such as '9.0.3').
Args:
proxmox_api (Any): The Proxmox API client instance.
node_name (str): The name of the node to query.
Returns:
str: The PVE version for the specified node. Returns '0.0' if the query fails.
"""
logger.debug("Starting: get_node_pve_version.")
time.sleep(0.1)
try:
logger.debug(f"Trying to get PVE version for node: {node_name}.")
version = proxmox_api.nodes(node_name).version.get()
except Exception:
logger.error(f"Failed to get PVE version for node: {node_name}. Using '0.0' as value.")
logger.debug("Finished: get_node_pve_version.")
return "0.0"
logger.debug(f"Got version {version['version']} for node {node_name}.")
logger.debug("Finished: get_node_pve_version.")
return version["version"]

111
proxlb/models/pools.py Normal file
View File

@@ -0,0 +1,111 @@
"""
The Pools class retrieves all present pools defined on a Proxmox cluster
including the child objects.
"""
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
from typing import Dict, Any, List
from utils.logger import SystemdLogger
from models.tags import Tags
import time
logger = SystemdLogger()
class Pools:
"""
The Pools class retrieves all present pools defined on a Proxmox cluster
including the child objects.
Methods:
__init__:
Initializes the Pools class.
get_pools(proxmox_api: any) -> Dict[str, Any]:
Retrieve pool definitions and membership from the Proxmox cluster.
Returns a dict with a top-level "pools" mapping each poolid to
{"name": <poolid>, "members": [<member_names>...]}.
This method does not collect per-member metrics or perform node filtering.
"""
def __init__(self):
"""
Initializes the Pools class with the provided ProxLB data.
"""
@staticmethod
def get_pools(proxmox_api: Any) -> Dict[str, Any]:
"""
Retrieve all pools and their members from a Proxmox cluster.
Queries the Proxmox API for pool definitions and returns a dictionary
containing each pool's id/name and a list of its member VM/CT names.
This function does not perform per-member metric collection or node
filtering — it only gathers pool membership information.
Args:
proxmox_api (Any): Proxmox API client instance.
Returns:
Dict[str, Any]: Dictionary with a top-level "pools" key mapping poolid
to {"name": <poolid>, "members": [<member_names>...]}.
"""
logger.debug("Starting: get_pools.")
pools = {"pools": {}}
# Pool objects: iterate over all pools in the cluster.
# We keep pool members even if their nodes are ignored so resource accounting
# for rebalancing remains correct and we avoid overprovisioning nodes.
for pool in proxmox_api.pools.get():
logger.debug(f"Got pool: {pool['poolid']}")
pools['pools'][pool['poolid']] = {}
pools['pools'][pool['poolid']]['name'] = pool['poolid']
pools['pools'][pool['poolid']]['members'] = []
# Fetch pool details and collect member names
pool_details = proxmox_api.pools(pool['poolid']).get()
for member in pool_details.get("members", []):
logger.debug(f"Got member: {member['name']} for pool: {pool['poolid']}")
pools['pools'][pool['poolid']]['members'].append(member["name"])
logger.debug("Finished: get_pools.")
return pools
@staticmethod
def get_pools_for_guest(guest_name: str, pools: Dict[str, Any]) -> List[str]:
"""
Return the list of pool names that include the given guest.
Args:
guest_name (str): Name of the VM or CT to look up.
pools (Dict[str, Any]): Pools structure as returned by get_pools(),
expected to contain a top-level "pools" mapping each poolid to
{"name": <poolid>, "members": [<member_names>...]}.
Returns:
list[str]: Names of pools the guest is a member of (empty list if none).
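Example:
Pools.get_pools_for_guest("vm01", pools)  # -> ["dev"] (hypothetical names)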
"""
logger.debug("Starting: get_pools_for_guests.")
guest_pools = []
for pool in pools.items():
for pool_id, pool_data in pool[1].items():
if type(pool_data) is dict:
pool_name = pool_data.get("name", "")
pool_name_members = pool_data.get("members", [])
if guest_name in pool_name_members:
logger.debug(f"Guest: {guest_name} is member of Pool: {pool_name}.")
guest_pools.append(pool_name)
else:
logger.debug(f"Guest: {guest_name} is NOT member of Pool: {pool_name}.")
else:
logger.debug(f"Pool data for pool_id {pool_id} is not a dict: {pool_data}")
logger.debug("Finished: get_pools_for_guests.")
return guest_pools

View File

@@ -80,15 +80,18 @@ class Tags:
return tags
@staticmethod
def get_affinity_groups(tags: List[str]) -> List[str]:
def get_affinity_groups(tags: List[str], pools: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
"""
Get affinity tags for a guest from the Proxmox cluster by the API.
This method retrieves all tags for a given guest and evaluates the
affinity tags which are required during the balancing calculations.
This method evaluates all tags of a given guest, as well as its pool
memberships, to determine the affinity groups that are required during
the balancing calculations.
Args:
tags (List): A list holding all defined tags for a given guest.
pools (List): A list holding all defined pools for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
List: A list including all affinity tags for the given guest.
@@ -99,21 +102,36 @@ class Tags:
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_affinity"):
logger.debug(f"Adding affinity group for tag {tag}.")
affinity_tags.append(tag)
else:
logger.debug(f"Skipping affinity group for tag {tag}.")
if len(pools) > 0:
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
if proxlb_config['balancing']['pools'][pool].get('type', None) == 'affinity':
logger.debug(f"Adding affinity group for pool {pool}.")
affinity_tags.append(pool)
else:
logger.debug(f"Skipping affinity group for pool {pool}.")
logger.debug("Finished: get_affinity_groups.")
return affinity_tags
@staticmethod
def get_anti_affinity_groups(tags: List[str]) -> List[str]:
def get_anti_affinity_groups(tags: List[str], pools: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
"""
Get anti-affinity tags for a guest from the Proxmox cluster by the API.
This method retrieves all tags for a given guest and evaluates the
anti-affinity tags which are required during the balancing calculations.
This method evaluates all tags of a given guest, as well as its pool
memberships, to determine the anti-affinity groups that are required
during the balancing calculations.
Args:
tags (List): A list holding all defined tags for a given guest.
pools (List): A list holding all defined pools for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
List: A list including all anti-affinity tags for the given guest.
@@ -124,7 +142,19 @@ class Tags:
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_anti_affinity"):
logger.debug(f"Adding anti-affinity group for tag {tag}.")
anti_affinity_tags.append(tag)
else:
logger.debug(f"Skipping anti-affinity group for tag {tag}.")
if len(pools) > 0:
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
if proxlb_config['balancing']['pools'][pool].get('type', None) == 'anti-affinity':
logger.debug(f"Adding anti-affinity group for pool {pool}.")
anti_affinity_tags.append(pool)
else:
logger.debug(f"Skipping anti-affinity group for pool {pool}.")
logger.debug("Finished: get_anti_affinity_groups.")
return anti_affinity_tags
@@ -155,10 +185,10 @@ class Tags:
return ignore_tag
@staticmethod
def get_node_relationships(tags: List[str], nodes: Dict[str, Any]) -> str:
def get_node_relationships(tags: List[str], nodes: Dict[str, Any], pools: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
"""
Get a node relationship tag for a guest from the Proxmox cluster by the API to pin
a guest to a node.
a guest to a node, either by tags or by pools defined in the ProxLB configuration.
This method retrieves a relationship tag between a guest and a specific
hypervisor node to pin the guest to a specific node (e.g., for licensing reason).
@@ -166,24 +196,44 @@ class Tags:
Args:
tags (List): A list holding all defined tags for a given guest.
nodes (Dict): A dictionary holding all available nodes in the cluster.
pools (List): A list holding all defined pools for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
Str: The related hypervisor node name.
List[str]: The related hypervisor node name(s).
"""
logger.debug("Starting: get_node_relationships.")
node_relationship_tags = []
if len(tags) > 0:
logger.debug("Validating node pinning by tags.")
for tag in tags:
if tag.startswith("plb_pin"):
node_relationship_tag = tag.replace("plb_pin_", "")
# Validate if the node to pin is present in the cluster
if Helper.validate_node_presence(node_relationship_tag, nodes):
logger.info(f"Tag {node_relationship_tag} is valid! Defined node exists in the cluster.")
logger.debug(f"Tag {node_relationship_tag} is valid! Defined node exists in the cluster.")
logger.debug(f"Setting node relationship because of tag {tag} to {node_relationship_tag}.")
node_relationship_tags.append(node_relationship_tag)
else:
logger.warning(f"Tag {node_relationship_tag} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
if len(pools) > 0:
logger.debug("Validating node pinning by pools.")
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
node = proxlb_config['balancing']['pools'][pool].get('pin', None)
# Validate if the node to pin is present in the cluster
if Helper.validate_node_presence(node, nodes):
logger.debug(f"Pool pinning tag {node} is valid! Defined node exists in the cluster.")
logger.debug(f"Setting node relationship because of pool {pool} to {node}.")
node_relationship_tags.append(node)
else:
logger.warning(f"Pool pinning tag {node} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
else:
logger.debug(f"Skipping pinning for pool {pool}. Pool is not defined in ProxLB configuration.")
logger.debug("Finished: get_node_relationships.")
return node_relationship_tags
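# Usage sketch (hypothetical): a guest tagged 'plb_pin_node01' that is also in a
# pool pinned to 'node02' yields ['node01', 'node02'].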

View File

@@ -3,5 +3,5 @@ __app_desc__ = "An advanced resource scheduler and load balancer for Proxmox clu
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
__version__ = "1.1.8"
__version__ = "1.1.9"
__url__ = "https://github.com/gyptazy/ProxLB"

View File

@@ -2,7 +2,7 @@ from setuptools import setup
setup(
name="proxlb",
version="1.1.8",
version="1.1.9",
description="An advanced resource scheduler and load balancer for Proxmox clusters.",
long_description="An advanced resource scheduler and load balancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
author="Florian Paul Azim Hoberg",