Compare commits

...

16 Commits

Author SHA1 Message Date
gyptazy
09b5b83c24 fix: HA affinity/anti-affinity rules can only be evaluated on PVE9+ nodes
Fixes: #391
2025-12-14 09:34:21 +01:00
gyptazy
8d61ccfbb1 Merge pull request #399 from gyptazy/fix/395-fix-non-existent-pool-stacktrace
fix: Fix crashing on non-existent pools
2025-12-13 20:06:16 +01:00
gyptazy
b39c13e2a5 fix: Fix crashing on non-existent pools
Fixes: #395
2025-12-13 20:05:11 +01:00
gyptazy
8e759b778c Merge pull request #396 from gyptazy/fix/395_fix_pool_based_node_pinning
fix: Fixed pool and ha-rules based node pinning of guests.
2025-12-12 08:04:23 +01:00
Florian Paul Azim Hoberg
22406e3628 fix: Fixed pool and ha-rules based node pinning of guests.
* Fixed pool based node pinning (@gyptazy). [#395]
  * Add support for Proxmox's native HA (node-affinity) rules for pinning guests to nodes (@gyptazy). [#391]

Fixes: #395
Fixes: #391
2025-12-11 14:43:44 +01:00
gyptazy
e7f5d5142e Merge pull request #392 from gyptazy/pipeline/dynamic-versioning-packages
pipeline: Generate snapshot package
2025-12-10 12:51:37 +01:00
Florian Paul Azim Hoberg
48d621a06d pipeline: Generate snapshot package 2025-12-10 12:42:28 +01:00
Florian Paul Azim Hoberg
c133ef1aee feature: Add support for Proxmox's native HA (affinity/anti-affinity) rules.
* Add support for native rules of the affinity/anti-affinity types in Proxmox VE
  * Streamline affinity/anti-affinity rules by Tags, Pools and native Proxmox rules

Fixes: #391
2025-12-10 09:11:28 +01:00
gyptazy
9ea04f904d Merge pull request #388 from gyptazy/feature/387-select-balancing-workloads-by-size
feature: Add possibility to sort and select balancing workloads by smaller/larger guest objects
2025-12-08 15:52:25 +01:00
Florian Paul Azim Hoberg
5101202f72 feature: Add possibility to sort and select balancing workloads by smaller/larger guest objects
- Allows operators to select whether larger or smaller workloads should be migrated first

Fixes: #387
2025-12-08 15:44:38 +01:00
gyptazy
929390b288 Merge pull request #386 from gyptazy/docs/385-proxmox-offline-mirror-repo-support
docs: Add documentation about offline repo mirror and proxmox-offline-mirror support
2025-12-06 16:12:37 +01:00
gyptazy
d4560c3af4 docs: Add documentation about offline repo mirror and proxmox-offline-mirror support
* Offline mirror support (air-gapped envs)
* Add new full Debian repository

Fixes: #385
2025-12-06 12:26:15 +01:00
gyptazy
55c885194e Merge pull request #382 from gyptazy/fix/275-add-overprovisioning-safety-guard
fix(calculations): Add safety guard to avoid overprovisioning of nodes by memory
2025-12-06 11:19:38 +01:00
gyptazy
3d9f0eb85e fix(calculations): Add safety guard to avoid overprovisioning of nodes by memory.
Fixes: #275
2025-12-02 09:59:51 +01:00
gyptazy
490fb55ee1 Merge pull request #376 from Thalagyrt/patch-affinity-rebalance
Fix enforce_affinity boolean inversion
2025-11-27 08:41:05 +01:00
James Riley
a70330d4c3 Fix enforce_affinity boolean inversion
During runs in which affinity checks determine balancing actions,
there was a small error in a boolean calculation that caused
ProxLB to always rebalance, as it exited the verification loop with
a failure the first time it saw a VM that actually passed affinity
checks.
2025-11-26 07:06:28 -07:00
15 changed files with 333 additions and 22 deletions

View File

@@ -0,0 +1,2 @@
fixed:
- Fixed missing overprovisioning safety guard to avoid node overprovisioning (@gyptazy). [#275]

View File

@@ -0,0 +1,2 @@
fixed:
- Fixed affinity matrix pre-validation by inverting validations (@Thalagyrt). [#335]

View File

@@ -0,0 +1,2 @@
added:
- Add possibility to sort and select balancing workloads by smaller/larger guest objects (@gyptazy). [#387]

View File

@@ -0,0 +1,3 @@
feature:
- Add support for Proxmox's native HA (affinity/anti-affinity) rules (@gyptazy). [#391]
- Add support for Proxmox's native HA (node-affinity) rules for pinning guests to nodes (@gyptazy). [#391]

View File

@@ -0,0 +1,2 @@
fixed:
- Fixed pool based node pinning (@gyptazy). [#395]

View File

@@ -40,6 +40,19 @@ jobs:
# Install dependencies
apt-get update && \
apt-get install -y python3 python3-setuptools debhelper dh-python python3-pip python3-stdeb python3-proxmoxer python3-requests python3-urllib3 devscripts python3-all && \
# Get base version from source code
BASE_VERSION=\$(grep __version__ proxlb/utils/version.py | awk '{print \$3}' | tr -d '\"')
echo \"Base version: \$BASE_VERSION\"
# Build full version with timestamp
FULL_VERSION=\"\${BASE_VERSION}+$(date +%Y%m%d%H%M)\"
echo \"Full version: \$FULL_VERSION\"
# Update debian/changelog with new version
dch --force-bad-version -v \"\$FULL_VERSION\" \
\"Automated GitHub Actions build on $(date -u +'%Y-%m-%d %H:%M UTC').\" && \
# Build package using stdeb / setuptools
# python3 setup.py --command-packages=stdeb.command bdist_deb && \
# Build native package

View File

@@ -120,6 +120,8 @@ systemctl start proxlb
Afterwards, ProxLB runs in the background and balances your cluster according to your defined balancing method (default: memory).
**Note**: If you want to use ProxLB with the proxmox-offline-mirror or any other APT mirror tool that does not support the flat repository architecture, please see the [docs](https://github.com/gyptazy/ProxLB/blob/main/docs/02_installation.md#Repo-Mirror-and-Proxmox-Offline-Mirror-Support) for how to add this by using ProxLB's full repository.
#### Details
ProxLB provides two different repositories:
* https://repo.gyptazy.com/stable (only stable release)
@@ -287,6 +289,7 @@ The following options can be set in the configuration file `proxlb.yaml`:
| | memory_threshold | | 75 | `Int` | The maximum threshold (in percent) that needs to be hit to perform balancing actions. (Optional) |
| | method | | memory | `Str` | The balancing method that should be used. [values: `memory` (default), `cpu`, `disk`]|
| | mode | | used | `Str` | The balancing mode that should be used. [values: `used` (default), `assigned`, `psi` (pressure)] |
| | balance_larger_guests_first | | False | `Bool` | Option to prefer larger/smaller guests first |
| | psi | | { nodes: { memory: { pressure_full: 0.20, pressure_some: 0.20, pressure_spikes: 1.00 }}} | `Dict` | A dict of PSI based thresholds for nodes and guests |
| | pools | | pools: { dev: { type: affinity }, de-nbg01-db: { type: anti-affinity }} | `Dict` | A dict of pool names and their type for creating affinity/anti-affinity rules |
| `service` | | | | | |
@@ -334,6 +337,7 @@ balancing:
balanciness: 5
method: memory
mode: used
balance_larger_guests_first: False
# # PSI thresholds only apply when using mode 'psi'
# # PSI based balancing is currently in beta and req. PVE >= 9
# psi:
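The option documented above can be reduced to a plain sort over the balancing candidates; this is only a sketch of the behavior, with hypothetical group names and memory figures (the real implementation sorts affinity groups inside `Calculations`):

```python
# Minimal sketch of balance_larger_guests_first: hypothetical group data,
# 'larger_first' mirrors the config option and simply flips the sort order.
groups = {
    "web": {"memory_used": 2048},
    "db": {"memory_used": 8192},
    "cache": {"memory_used": 512},
}

def order_groups(groups: dict, larger_first: bool = False) -> list:
    """Return group names sorted by used memory (ascending by default)."""
    return sorted(groups, key=lambda g: groups[g]["memory_used"], reverse=larger_first)

print(order_groups(groups))                      # smallest guests first
print(order_groups(groups, larger_first=True))   # largest guests first
```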

View File

@@ -32,6 +32,7 @@ balancing:
balanciness: 5 # Maximum delta of resource usage between highest and lowest usage node
method: memory # 'memory' | 'cpu' | 'disk'
mode: used # 'assigned' | 'used' | 'psi'
balance_larger_guests_first: False # Option to prioritize balancing of larger or smaller guests first
# # PSI thresholds only apply when using mode 'psi'
# psi:
# nodes:

View File

@@ -6,6 +6,7 @@
- [Quick-Start](#quick-start)
- [Details](#details)
- [Debian Packages (.deb files)](#debian-packages-deb-files)
- [Repo Mirror and Proxmox Offline Mirror Support](#repo-mirror-and-proxmox-offline-mirror-support)
- [RedHat Package](#redhat-package)
- [Container Images / Docker](#container-images--docker)
- [Overview of Images](#overview-of-images)
@@ -83,6 +84,27 @@ vi /etc/proxlb/proxlb.yaml
systemctl start proxlb
```
#### Repo Mirror and Proxmox Offline Mirror Support
ProxLB uses the flat mirror style for its Debian repository. Unfortunately, not all offline-mirror applications support this layout. A known example is the official *proxmox-offline-mirror*, which is unable to handle flat repositories (see also: [#385](https://github.com/gyptazy/ProxLB/issues/385)).
Therefore, we currently operate and support both repository layouts so that nobody is forced to switch to the new repository. As a result, you can simply use this repository:
```
deb https://repo.gyptazy.com/proxlb stable main
```
**Example Config for proxmox-offline-mirror:**
An example configuration would look like this:
```
mirror: proxlb
architectures amd64
base-dir /var/lib/proxmox-offline-mirror/mirrors/
key-path /etc/apt/trusted.gpg.d/proxlb.asc
repository deb https://repo.gyptazy.com/proxlb stable main
sync true
verify true
```
### RedHat Package
There's currently no official support for RedHat-based systems. However, there's a dummy .rpm package for such systems in the pipeline, which can be found here:
* https://github.com/gyptazy/ProxLB/actions/workflows/20-pipeline-build-rpm-package.yml

View File

@@ -25,6 +25,7 @@ from models.groups import Groups
from models.calculations import Calculations
from models.balancing import Balancing
from models.pools import Pools
from models.ha_rules import HaRules
from utils.helper import Helper
@@ -73,12 +74,14 @@ def main():
# Get all required objects from the Proxmox cluster
meta = {"meta": proxlb_config}
nodes = Nodes.get_nodes(proxmox_api, proxlb_config)
meta = Features.validate_any_non_pve9_node(meta, nodes)
pools = Pools.get_pools(proxmox_api)
guests = Guests.get_guests(proxmox_api, pools, nodes, meta, proxlb_config)
ha_rules = HaRules.get_ha_rules(proxmox_api, meta)
guests = Guests.get_guests(proxmox_api, pools, ha_rules, nodes, meta, proxlb_config)
groups = Groups.get_groups(guests, nodes)
# Merge obtained objects from the Proxmox cluster for further usage
proxlb_data = {**meta, **nodes, **guests, **pools, **groups}
proxlb_data = {**meta, **nodes, **guests, **pools, **ha_rules, **groups}
Helper.log_node_metrics(proxlb_data)
# Validate usable features by PVE versions

View File

@@ -377,7 +377,19 @@ class Calculations:
if proxlb_data["meta"]["balancing"].get("enforce_affinity", False):
logger.debug("Balancing of guests will be performed. Reason: enforce affinity balancing")
for group_name in proxlb_data["groups"]["affinity"]:
# Sort guests by used memory
# Allows processing larger guests first or smaller guests first
larger_first = proxlb_data.get("meta", {}).get("balancing", {}).get("balance_larger_guests_first", False)
if larger_first:
logger.debug("Larger guests will be processed first. (Sorting descending by memory used)")
else:
logger.debug("Smaller guests will be processed first. (Sorting ascending by memory used)")
sorted_guest_usage_groups = sorted(proxlb_data["groups"]["affinity"], key=lambda g: proxlb_data["groups"]["affinity"][g]["memory_used"], reverse=larger_first)
# Iterate over all affinity groups
for group_name in sorted_guest_usage_groups:
# We get initially the node with the most free resources and then
# migrate all guests within the group to that node to ensure the
@@ -387,6 +399,10 @@ class Calculations:
for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
if not Calculations.validate_node_resources(proxlb_data, guest_name):
logger.warning(f"Skipping relocation of guest {guest_name} due to insufficient resources on target node {proxlb_data['meta']['balancing']['balance_next_node']}. This might affect affinity group {group_name}.")
continue
if mode == 'psi':
logger.debug(f"Evaluating guest relocation based on {mode} mode.")
method = proxlb_data["meta"]["balancing"].get("method", "memory")
@@ -605,7 +621,7 @@ class Calculations:
logger.debug(f"Affinity for guest {guest} is {'valid' if balancing_state_affinity else 'NOT valid'}")
logger.debug(f"Anti-affinity for guest {guest} is {'valid' if balancing_state_anti_affinity else 'NOT valid'}")
balancing_ok = not balancing_state_affinity or not balancing_state_anti_affinity
balancing_ok = balancing_state_affinity and balancing_state_anti_affinity
if balancing_ok:
logger.debug(f"Rebalancing based on affinity/anti-affinity map is not required.")
@@ -707,3 +723,40 @@ class Calculations:
return False
return True
@staticmethod
def validate_node_resources(proxlb_data: Dict[str, Any], guest_name: str) -> bool:
"""
Validate that the target node has sufficient resources to host the specified guest.
This function checks whether the target node, determined by the balancing logic,
has enough free resources available to accommodate the guest (currently only memory is compared).
Args:
proxlb_data (Dict[str, Any]): A dictionary containing the complete ProxLB state including:
- "nodes": Dictionary with node resource information
- "guests": Dictionary with guest resource requirements
- "meta": Dictionary with balancing information including target node
guest_name (str): The name of the guest to validate resources for
Returns:
bool: True if the target node has sufficient resources, False otherwise
"""
logger.debug("Starting: validate_node_resources.")
node_target = proxlb_data["meta"]["balancing"]["balance_next_node"]
node_memory_free = proxlb_data["nodes"][node_target]["memory_free"]
node_cpu_free = proxlb_data["nodes"][node_target]["cpu_free"]
node_disk_free = proxlb_data["nodes"][node_target]["disk_free"]
guest_memory_required = proxlb_data["guests"][guest_name]["memory_used"]
guest_cpu_required = proxlb_data["guests"][guest_name]["cpu_used"]
guest_disk_required = proxlb_data["guests"][guest_name]["disk_used"]
if guest_memory_required < node_memory_free:
logger.debug(f"Node '{node_target}' has sufficient resources for guest '{guest_name}'.")
logger.debug("Finished: validate_node_resources.")
return True
else:
logger.debug(f"Node '{node_target}' lacks sufficient resources for guest '{guest_name}'.")
logger.debug("Finished: validate_node_resources.")
return False
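The guard added in `validate_node_resources` boils down to a memory-headroom comparison (CPU and disk figures are gathered but not yet compared in the diff above). A minimal sketch with made-up numbers:

```python
def has_memory_headroom(node_memory_free: int, guest_memory_used: int) -> bool:
    """Sketch of the overprovisioning safety guard: allow the migration only
    if the guest's used memory fits into the target node's free memory."""
    return guest_memory_used < node_memory_free

# Hypothetical figures in bytes
print(has_memory_headroom(8 * 1024**3, 2 * 1024**3))  # True: guest fits
print(has_memory_headroom(1 * 1024**3, 2 * 1024**3))  # False: would overprovision
```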

View File

@@ -88,3 +88,37 @@ class Features:
proxlb_data["meta"]["balancing"]["enable"] = False
logger.debug("Finished: validate_available_features.")
@staticmethod
def validate_any_non_pve9_node(meta: any, nodes: any) -> dict:
"""
Validate if any node in the cluster is running Proxmox VE < 9.0.0 and update meta accordingly.
This function inspects the cluster node versions and sets a flag in meta indicating whether
any node is running a Proxmox VE version older than 9.0.0.
Args:
meta (dict): Metadata structure that will be updated with cluster version information.
nodes (dict): Cluster nodes mapping whose values contain 'pve_version' strings.
Returns:
dict: The updated meta dictionary with 'cluster_non_pve9' flag set to True or False.
Side effects:
- Mutates meta["meta"]["cluster_non_pve9"] based on node versions.
- Emits debug log messages.
Notes:
- Version comparison uses semantic version parsing; defaults to "0.0.0" if pve_version is missing.
"""
logger.debug("Starting: validate_any_non_pve9_node.")
any_non_pve9_node = any(version.parse(node.get("pve_version", "0.0.0")) < version.parse("9.0.0") for node in nodes.get("nodes", {}).values())
if any_non_pve9_node:
meta["meta"]["cluster_non_pve9"] = True
logger.debug("Finished: validate_any_non_pve9_node. Result: True")
else:
meta["meta"]["cluster_non_pve9"] = False
logger.debug("Finished: validate_any_non_pve9_node. Result: False")
return meta
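The PVE 9 gate above can be sketched without external dependencies by comparing version tuples; note the real code uses `version.parse` from the `packaging` library instead, and the node data below is hypothetical:

```python
def parse_pve_version(v: str) -> tuple:
    """Parse 'major.minor.patch' into a comparable tuple; missing parts default to 0."""
    parts = (v.split(".") + ["0", "0", "0"])[:3]
    return tuple(int(p) for p in parts)

def any_non_pve9(nodes: dict) -> bool:
    """True if any node reports a PVE version below 9.0.0
    (a missing version defaults to '0.0.0' and therefore counts as old)."""
    return any(
        parse_pve_version(node.get("pve_version", "0.0.0")) < (9, 0, 0)
        for node in nodes.get("nodes", {}).values()
    )

# Hypothetical cluster: a single PVE 8 node disables HA-rule support cluster-wide
mixed = {"nodes": {"node01": {"pve_version": "9.0.3"}, "node02": {"pve_version": "8.4.1"}}}
print(any_non_pve9(mixed))  # True
```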

View File

@@ -11,6 +11,7 @@ __license__ = "GPL-3.0"
from typing import Dict, Any
from utils.logger import SystemdLogger
from models.pools import Pools
from models.ha_rules import HaRules
from models.tags import Tags
import time
@@ -36,7 +37,7 @@ class Guests:
"""
@staticmethod
def get_guests(proxmox_api: any, pools: Dict[str, Any], nodes: Dict[str, Any], meta: Dict[str, Any], proxlb_config: Dict[str, Any]) -> Dict[str, Any]:
def get_guests(proxmox_api: any, pools: Dict[str, Any], ha_rules: Dict[str, Any], nodes: Dict[str, Any], meta: Dict[str, Any], proxlb_config: Dict[str, Any]) -> Dict[str, Any]:
"""
Get metrics of all guests in a Proxmox cluster.
@@ -46,6 +47,8 @@ class Guests:
Args:
proxmox_api (any): The Proxmox API client instance.
pools (Dict[str, Any]): A dictionary containing information about the pools in the Proxmox cluster.
ha_rules (Dict[str, Any]): A dictionary containing information about the HA rules in the Proxmox cluster.
nodes (Dict[str, Any]): A dictionary containing information about the nodes in the Proxmox cluster.
meta (Dict[str, Any]): A dictionary containing metadata information.
proxlb_config (Dict[str, Any]): A dictionary containing the ProxLB configuration.
@@ -95,10 +98,11 @@ class Guests:
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'vm')
guests['guests'][guest['name']]['pools'] = Pools.get_pools_for_guest(guest['name'], pools)
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['ha_rules'] = HaRules.get_ha_rules_for_guest(guest['name'], ha_rules, guest['vmid'])
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['type'] = 'vm'
logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
@@ -140,10 +144,11 @@ class Guests:
guests['guests'][guest['name']]['pressure_hot'] = False
guests['guests'][guest['name']]['tags'] = Tags.get_tags_from_guests(proxmox_api, node, guest['vmid'], 'ct')
guests['guests'][guest['name']]['pools'] = Pools.get_pools_for_guest(guest['name'], pools)
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['ha_rules'] = HaRules.get_ha_rules_for_guest(guest['name'], ha_rules, guest['vmid'])
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'], guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], proxlb_config)
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'], nodes, guests['guests'][guest['name']]['pools'], guests['guests'][guest['name']]['ha_rules'], proxlb_config)
guests['guests'][guest['name']]['type'] = 'ct'
logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")

proxlb/models/ha_rules.py (new file, 126 lines)
View File

@@ -0,0 +1,126 @@
"""
The HaRules class retrieves all HA rules defined on a Proxmox cluster
including their affinity settings and member resources.
"""
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
from typing import Dict, Any
from utils.logger import SystemdLogger
logger = SystemdLogger()
class HaRules:
"""
The HaRules class retrieves all HA rules defined on a Proxmox cluster,
including their affinity/anti-affinity settings and member resources, and translates
them into a ProxLB-usable format.
Methods:
__init__:
Initializes the HaRules class.
get_ha_rules(proxmox_api: any) -> Dict[str, Any]:
Retrieve HA rule definitions from the Proxmox cluster.
Returns a dict with a top-level "ha_rules" mapping each rule id to
{"rule": <rule_id>, "type": <affinity_type>, "members": [<resource_ids>...]}.
Converts affinity settings to descriptive format (affinity or anti-affinity).
"""
def __init__(self):
"""
Initializes the HA Rules class with the provided ProxLB data.
"""
@staticmethod
def get_ha_rules(proxmox_api: any, meta: dict) -> Dict[str, Any]:
"""
Retrieve all HA rules from a Proxmox cluster.
Queries the Proxmox API for HA rule definitions and returns a dictionary
containing each rule's id, affinity type, and member resources (VM/CT IDs).
This function processes rule affinity settings and converts them to a more
descriptive format (affinity or anti-affinity).
Args:
proxmox_api (any): Proxmox API client instance.
meta (dict): The metadata dictionary containing cluster information.
Returns:
Dict[str, Any]: Dictionary with a top-level "ha_rules" key mapping rule id
to {"rule": <rule_id>, "type": <affinity_type>, "members": [<resource_ids>...]}.
"""
logger.debug("Starting: get_ha_rules.")
ha_rules = {"ha_rules": {}}
# Skip fetching HA rules if any node runs a version below Proxmox VE 9, as they are unsupported there
if meta["meta"]["cluster_non_pve9"]:
logger.debug("Skipping HA rule retrieval: nodes below Proxmox VE 9 detected.")
return ha_rules
else:
logger.debug("Cluster running Proxmox VE 9 or newer, proceeding with HA rule retrieval.")
for rule in proxmox_api.cluster.ha.rules.get():
# Skip disabled rules (disable key exists AND is truthy)
if rule.get("disable", 0):
logger.debug(f"Skipping ha-rule: {rule['rule']} of type {rule['type']} affecting guests: {rule['resources']}. Rule is disabled.")
continue
# Create a resource list containing the VM and CT IDs that are part of this
# HA rule by splitting on commas and dropping empty entries
resources_list_guests = [int(r.split(":")[1]) for r in rule["resources"].split(",") if r.strip()]
# Convert the affinity field to a more descriptive type
if rule.get("affinity", None) == "negative":
affinity_type = "anti-affinity"
else:
affinity_type = "affinity"
# Create affected nodes list
resources_list_nodes = []
if rule.get("nodes", None):
resources_list_nodes = [n for n in rule["nodes"].split(",") if n]
# Create the ha_rule element
ha_rules['ha_rules'][rule['rule']] = {}
ha_rules['ha_rules'][rule['rule']]['rule'] = rule['rule']
ha_rules['ha_rules'][rule['rule']]['type'] = affinity_type
ha_rules['ha_rules'][rule['rule']]['nodes'] = resources_list_nodes
ha_rules['ha_rules'][rule['rule']]['members'] = resources_list_guests
logger.debug(f"Got ha-rule: {rule['rule']} as type {affinity_type} affecting guests: {rule['resources']}")
logger.debug("Finished: get_ha_rules.")
return ha_rules
@staticmethod
def get_ha_rules_for_guest(guest_name: str, ha_rules: Dict[str, Any], vm_id: int) -> Dict[str, Any]:
"""
Return the list of HA rules that include the given guest.
Args:
guest_name (str): Name of the VM or CT to look up.
ha_rules (Dict[str, Any]): HA rules structure as returned by get_ha_rules(),
expected to contain a top-level "ha_rules" mapping each rule id to
{"rule": <rule_id>, "type": <affinity_type>, "members": [<resource_ids>...]}.
vm_id (int): VM or CT ID of the guest.
Returns:
list: IDs of HA rules the guest is a member of (empty list if none).
"""
logger.debug("Starting: get_ha_rules_for_guest.")
guest_ha_rules = []
for rule in ha_rules["ha_rules"].values():
if vm_id in rule.get("members", []):
logger.debug(f"Guest: {guest_name} (VMID: {vm_id}) is member of HA Rule: {rule['rule']}.")
guest_ha_rules.append(rule)
else:
logger.debug(f"Guest: {guest_name} (VMID: {vm_id}) is NOT member of HA Rule: {rule['rule']}.")
logger.debug("Finished: get_ha_rules_for_guest.")
return guest_ha_rules
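The translation performed by `get_ha_rules` can be sketched on a single raw rule as returned by `/cluster/ha/rules`; the field values below are illustrative, not taken from a real cluster:

```python
def translate_rule(rule: dict) -> dict:
    """Sketch of converting one raw HA rule into ProxLB's format:
    'vm:101,ct:102' -> [101, 102]; affinity 'negative' -> 'anti-affinity'."""
    members = [int(r.split(":")[1]) for r in rule["resources"].split(",") if r.strip()]
    affinity_type = "anti-affinity" if rule.get("affinity") == "negative" else "affinity"
    nodes = [n for n in rule.get("nodes", "").split(",") if n]
    return {"rule": rule["rule"], "type": affinity_type, "nodes": nodes, "members": members}

# Hypothetical anti-affinity rule spanning one VM and one CT
raw = {"rule": "keep-apart", "affinity": "negative", "resources": "vm:101,ct:102", "nodes": ""}
print(translate_rule(raw))
```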

View File

@@ -80,7 +80,7 @@ class Tags:
return tags
@staticmethod
def get_affinity_groups(tags: List[str], pools: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
def get_affinity_groups(tags: List[str], pools: List[str], ha_rules: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
"""
Get affinity tags for a guest from the Proxmox cluster by the API.
@@ -91,6 +91,7 @@ class Tags:
Args:
tags (List): A list holding all defined tags for a given guest.
pools (List): A list holding all defined pools for a given guest.
ha_rules (List): A list holding all defined ha_rules for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
@@ -99,6 +100,7 @@ class Tags:
logger.debug("Starting: get_affinity_groups.")
affinity_tags = []
# Tag based affinity groups
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_affinity"):
@@ -107,6 +109,7 @@ class Tags:
else:
logger.debug(f"Skipping affinity group for tag {tag}.")
# Pool based affinity groups
if len(pools) > 0:
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
@@ -116,11 +119,18 @@ class Tags:
else:
logger.debug(f"Skipping affinity group for pool {pool}.")
# HA rule based affinity groups
if len(ha_rules) > 0:
for ha_rule in ha_rules:
if ha_rule.get('type', None) == 'affinity':
logger.debug(f"Adding affinity group for ha-rule {ha_rule}.")
affinity_tags.append(ha_rule['rule'])
logger.debug("Finished: get_affinity_groups.")
return affinity_tags
@staticmethod
def get_anti_affinity_groups(tags: List[str], pools: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
def get_anti_affinity_groups(tags: List[str], pools: List[str], ha_rules: List[str], proxlb_config: Dict[str, Any]) -> List[str]:
"""
Get anti-affinity tags for a guest from the Proxmox cluster by the API.
@@ -131,6 +141,7 @@ class Tags:
Args:
tags (List): A list holding all defined tags for a given guest.
pools (List): A list holding all defined pools for a given guest.
ha_rules (List): A list holding all defined ha_rules for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
@@ -139,6 +150,7 @@ class Tags:
logger.debug("Starting: get_anti_affinity_groups.")
anti_affinity_tags = []
# Tag based anti-affinity groups
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_anti_affinity"):
@@ -147,6 +159,7 @@ class Tags:
else:
logger.debug(f"Skipping anti-affinity group for tag {tag}.")
# Pool based anti-affinity groups
if len(pools) > 0:
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
@@ -156,6 +169,13 @@ class Tags:
else:
logger.debug(f"Skipping anti-affinity group for pool {pool}.")
# HA rule based anti-affinity groups
if len(ha_rules) > 0:
for ha_rule in ha_rules:
if ha_rule.get('type', None) == 'anti-affinity':
logger.debug(f"Adding anti-affinity group for ha-rule {ha_rule}.")
anti_affinity_tags.append(ha_rule['rule'])
logger.debug("Finished: get_anti_affinity_groups.")
return anti_affinity_tags
@@ -185,7 +205,7 @@ class Tags:
return ignore_tag
@staticmethod
def get_node_relationships(tags: List[str], nodes: Dict[str, Any], pools: List[str], proxlb_config: Dict[str, Any]) -> str:
def get_node_relationships(tags: List[str], nodes: Dict[str, Any], pools: List[str], ha_rules: List[str], proxlb_config: Dict[str, Any]) -> str:
"""
Get a node relationship tag for a guest from the Proxmox cluster by the API to pin
a guest to a node or by defined pools from ProxLB configuration.
@@ -197,6 +217,7 @@ class Tags:
tags (List): A list holding all defined tags for a given guest.
nodes (Dict): A dictionary holding all available nodes in the cluster.
pools (List): A list holding all defined pools for a given guest.
ha_rules (List): A list holding all defined ha_rules for a given guest.
proxlb_config (Dict): A dict holding the ProxLB configuration.
Returns:
@@ -205,6 +226,7 @@ class Tags:
logger.debug("Starting: get_node_relationships.")
node_relationship_tags = []
# Tag based node relationship
if len(tags) > 0:
logger.debug("Validating node pinning by tags.")
for tag in tags:
@@ -219,21 +241,38 @@ class Tags:
else:
logger.warning(f"Tag {node_relationship_tag} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
# Pool based node relationship
if len(pools) > 0:
logger.debug("Validating node pinning by pools.")
for pool in pools:
if pool in (proxlb_config['balancing'].get('pools') or {}):
node = proxlb_config['balancing']['pools'][pool].get('pin', None)
# Validate if the node to pin is present in the cluster
if Helper.validate_node_presence(node, nodes):
logger.debug(f"Pool pinning tag {node} is valid! Defined node exists in the cluster.")
logger.debug(f"Setting node relationship because of pool {pool} to {node}.")
node_relationship_tags.append(node)
else:
logger.warning(f"Pool pinning tag {node} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
pool_nodes = proxlb_config['balancing']['pools'][pool].get('pin', None)
for node in pool_nodes if pool_nodes is not None else []:
# Validate if the node to pin is present in the cluster
if Helper.validate_node_presence(node, nodes):
logger.debug(f"Pool pinning tag {node} is valid! Defined node exists in the cluster.")
logger.debug(f"Setting node relationship because of pool {pool} to {node}.")
node_relationship_tags.append(node)
else:
logger.warning(f"Pool pinning tag {node} is invalid! Defined node does not exist in the cluster. Not applying pinning.")
else:
logger.debug(f"Skipping pinning for pool {pool}. Pool is not defined in ProxLB configuration.")
# HA rule based node relationship
if len(ha_rules) > 0:
logger.debug("Validating node pinning by ha-rules.")
for ha_rule in ha_rules:
if len(ha_rule.get("nodes", [])) > 0:
if ha_rule.get("type", None) == "affinity":
logger.debug(f"ha-rule {ha_rule['rule']} is of type affinity.")
for node in ha_rule["nodes"]:
logger.debug(f"Adding {node} as node relationship because of ha-rule {ha_rule['rule']}.")
node_relationship_tags.append(node)
else:
logger.debug(f"ha-rule {ha_rule['rule']} is of type anti-affinity. Skipping node relationship addition.")
logger.debug("Finished: get_node_relationships.")
return node_relationship_tags
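The ha-rule branch above boils down to collecting node names from affinity rules that actually carry a node list, while anti-affinity rules are skipped. A sketch with hypothetical rules:

```python
def pin_nodes_from_ha_rules(ha_rules: list) -> list:
    """Sketch: collect node-pinning targets from HA rules — only rules of
    type 'affinity' that carry a non-empty node list contribute pins."""
    pins = []
    for rule in ha_rules:
        if rule.get("nodes") and rule.get("type") == "affinity":
            pins.extend(rule["nodes"])
    return pins

rules = [
    {"rule": "pin-web", "type": "affinity", "nodes": ["node01", "node02"]},
    {"rule": "spread-db", "type": "anti-affinity", "nodes": ["node03"]},
]
print(pin_nodes_from_ha_rules(rules))  # ['node01', 'node02']
```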