mirror of
https://github.com/gyptazy/ProxLB.git
synced 2026-04-06 04:41:58 +02:00
Compare commits
9 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 615e2f5608 | |
| | fa1e1ad8a3 | |
| | c78def3919 | |
| | 54c53b9860 | |
| | 1fe8f703cc | |
| | 7ba806abf7 | |
| | 6b2e120739 | |
| | e4103df326 | |
| | f2acd4efa6 | |
```diff
@@ -0,0 +1,2 @@
+added:
+- Allow pinning of guests to a group of nodes (@gyptazy). [#245]
```
```diff
@@ -0,0 +1,2 @@
+fixed:
+- Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode (@gyptazy). [#248]
```
`.changelogs/1.1.4/255_fix_loglevels.yml` (new file, 2 lines)
```diff
@@ -0,0 +1,2 @@
+fixed:
+- Modified log levels to make output lighter at INFO level (@pmarasse) [#255]
```
`.changelogs/1.1.4/release_meta.yml` (new file, 1 line)
```diff
@@ -0,0 +1 @@
+date: 2025-06-27
```
`CHANGELOG.md` (14 lines changed)
```diff
@@ -5,7 +5,19 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [1.1.3] - 2025-06-29
+## [1.1.4] - 2025-06-27
+
+### Added
+
+- Allow pinning of guests to a group of nodes (@gyptazy). [#245]
+
+### Fixed
+
+- Modified log levels to make output lighter at INFO level (@pmarasse) [#255]
+- Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode (@gyptazy). [#248]
+
+
 ## [1.1.3] - 2025-06-19
 
 ### Added
 
```
```diff
@@ -161,6 +161,7 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
 | Version | Image |
 |------|:------:|
 | latest | cr.gyptazy.com/proxlb/proxlb:latest |
+| v1.1.4 | cr.gyptazy.com/proxlb/proxlb:v1.1.4 |
 | v1.1.3 | cr.gyptazy.com/proxlb/proxlb:v1.1.3 |
 | v1.1.2 | cr.gyptazy.com/proxlb/proxlb:v1.1.2 |
 | v1.1.1 | cr.gyptazy.com/proxlb/proxlb:v1.1.1 |
```
```diff
@@ -377,7 +378,7 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
 **Note:** Ignored guests are really ignored. Even when enforcing affinity rules, this guest will be ignored.
 
 ### Pin VMs to Specific Hypervisor Nodes
-<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It might also be interesting when some physical hardware is attached to a node that is not generally available within the cluster.
+<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It might also be interesting when some physical hardware is attached to a node that is not generally available within the cluster.
 
 To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
 
@@ -388,6 +389,8 @@ plb_pin_node03
 
 As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
 
+You can also repeat this step multiple times with different node names to create a group of allowed hosts on which the guest may be served. In this case, ProxLB picks the node with the lowest used resources (according to the defined balancing values) from this group.
+
+**Note:** The given node names from the tag are validated. This means ProxLB checks whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
 
 ## Maintenance
 
```
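The pinning flow described above can be sketched end-to-end. This is a hypothetical simulation, not ProxLB's actual API: the function name, the node-usage mapping, and the usage values are made up for illustration.

```python
# Sketch of the documented pinning flow: pin tags on a guest form a group of
# allowed nodes, unknown node names are dropped (validation), and the
# least-used known node from the group is chosen.

def resolve_pin_target(tags, nodes):
    """tags: guest tags; nodes: {node_name: memory_used_percent}."""
    # Collect node names from all 'plb_pin_' tags.
    group = [t.removeprefix("plb_pin_") for t in tags if t.startswith("plb_pin_")]
    # Keep only names that are really part of the cluster (validation step).
    known = [n for n in group if n in nodes]
    if not known:
        return None  # fall back to regular balancing, guest keeps running
    # Pick the node with the lowest used resources from the group.
    return min(known, key=lambda n: nodes[n])

nodes = {"node01": 70.0, "node02": 35.0, "node03": 55.0}
tags = ["plb_pin_node02", "plb_pin_node03", "plb_pin_node99"]
print(resolve_pin_target(tags, nodes))  # node02 (lowest usage; node99 is unknown and ignored)
```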
`debian/changelog` (vendored, 8 lines changed)
```diff
@@ -1,3 +1,11 @@
+proxlb (1.1.4) stable; urgency=medium
+
+  * Allow pinning of guests to a group of nodes. (Closes: #245)
+  * Modified log levels to make output lighter at INFO level. (Closes: #255)
+  * Fixed an issue where balancing was performed in combination with deactivated balancing and dry-run mode. (Closes: #248)
+
+ -- Florian Paul Azim Hoberg <gyptazy@gyptazy.com>  Fri, 27 Jun 2025 16:22:58 +0000
+
 proxlb (1.1.3) stable; urgency=medium
 
   * Add reload (SIGHUP) function to ProxLB to reload the configuration. (Closes: #189)
```
```diff
@@ -137,6 +137,8 @@ plb_pin_node03
 
 As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
 
+You can also repeat this step multiple times with different node names to create a group of allowed hosts on which the guest may be served. In this case, ProxLB picks the node with the lowest used resources (according to the defined balancing values) from this group.
+
+**Note:** The given node names from the tag are validated. This means ProxLB checks whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues to use the regular processes to make sure the guest keeps running.
 
 ### API Loadbalancing
 
```
```diff
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-VERSION="1.1.2b"
+VERSION="1.1.4"
 
 sed -i "s/^__version__ = .*/__version__ = \"$VERSION\"/" "proxlb/utils/version.py"
 sed -i "s/version=\"[0-9]*\.[0-9]*\.[0-9]*\"/version=\"$VERSION\"/" setup.py
```
```diff
@@ -87,8 +87,9 @@ def main():
     Helper.log_node_metrics(proxlb_data, init=False)
 
     # Perform balancing actions via Proxmox API
-    if not cli_args.dry_run or not proxlb_data["meta"]["balancing"].get("enable", False):
-        Balancing(proxmox_api, proxlb_data)
+    if proxlb_data["meta"]["balancing"].get("enable", False):
+        if not cli_args.dry_run:
+            Balancing(proxmox_api, proxlb_data)
 
     # Validate if the JSON output should be
     # printed to stdout
```
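This change is the #248 fix: with the old condition, `not enable` is true whenever balancing is disabled, so `Balancing` still ran in that case, even in dry-run mode. The corrected gate requires balancing to be enabled AND the run not to be a dry run. A minimal truth-table check of both gates (function names are illustrative, not ProxLB's):

```python
def old_gate(dry_run, enable):
    # Pre-fix condition: fires whenever balancing is disabled, even in dry-run.
    return (not dry_run) or (not enable)

def new_gate(dry_run, enable):
    # Post-fix condition: balancing must be enabled AND this must not be a dry run.
    return enable and not dry_run

# The problematic case from #248: balancing disabled + dry-run requested.
print(old_gate(dry_run=True, enable=False))  # True  -> migrations were performed
print(new_gate(dry_run=True, enable=False))  # False -> nothing is migrated
```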
```diff
@@ -149,7 +149,7 @@ class Balancing:
         }
 
         try:
-            logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type VM.")
+            logger.info(f"Balancing: Starting to migrate VM guest {guest_name} from {guest_node_current} to {guest_node_target}.")
             job_id = proxmox_api.nodes(guest_node_current).qemu(guest_id).migrate().post(**migration_options)
         except proxmoxer.core.ResourceException as proxmox_api_error:
             logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors. Please check if resource is locked or similar.")
```
```diff
@@ -178,7 +178,7 @@ class Balancing:
         guest_node_target = proxlb_data["guests"][guest_name]["node_target"]
 
         try:
-            logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type CT.")
+            logger.info(f"Balancing: Starting to migrate CT guest {guest_name} from {guest_node_current} to {guest_node_target}.")
             job_id = proxmox_api.nodes(guest_node_current).lxc(guest_id).migrate().post(target=guest_node_target, restart=1)
         except proxmoxer.core.ResourceException as proxmox_api_error:
             logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors. Please check if resource is locked or similar.")
```
```diff
@@ -129,7 +129,7 @@ class Calculations:
         logger.debug("Finished: get_balanciness.")
 
     @staticmethod
-    def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False) -> Dict[str, Any]:
+    def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False, guest_node_relation_list: list = []) -> Dict[str, Any]:
         """
         Get the name of the Proxmox node in the cluster with the most free resources based on
         the user defined method (e.g.: memory) and mode (e.g.: used).
@@ -138,6 +138,8 @@ class Calculations:
             proxlb_data (Dict[str, Any]): The data holding all content of all objects.
             return_node (bool): The indicator to simply return the best node for further
                                 assignments.
+            guest_node_relation_list (list): A list of nodes that have a tag on the given
+                                             guest relationship for pinning.
 
         Returns:
             Dict[str, Any]: Updated meta data section of the node with the most free resources that should
@@ -146,8 +148,15 @@ class Calculations:
         logger.debug("Starting: get_most_free_node.")
         proxlb_data["meta"]["balancing"]["balance_next_node"] = ""
 
-        # Do not include nodes that are marked in 'maintenance'
+        # Filter and exclude nodes that are in maintenance mode
         filtered_nodes = [node for node in proxlb_data["nodes"].values() if not node["maintenance"]]
 
+        # Filter and include nodes that are given by a relationship between guest and node. This is
+        # only used if the guest has a relationship to a node defined by "pin" tags.
+        if len(guest_node_relation_list) > 0:
+            filtered_nodes = [node for node in proxlb_data["nodes"].values() if node["name"] in guest_node_relation_list]
+
         # Filter by the defined methods and modes for balancing
         method = proxlb_data["meta"]["balancing"].get("method", "memory")
         mode = proxlb_data["meta"]["balancing"].get("mode", "used")
         lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
```
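The selection logic above reduces to a `min()` over a dynamically built metric key, `f"{method}_{mode}_percent"`. A simplified sketch with hypothetical node records (field names mirror the diff, the helper name is made up):

```python
# Simplified sketch of get_most_free_node's selection: exclude maintenance
# nodes, optionally restrict to a pin group, then take the node with the
# minimum '<method>_<mode>_percent' metric.

def most_free_node(nodes, method="memory", mode="used", pin_group=None):
    candidates = [n for n in nodes if not n["maintenance"]]
    if pin_group:
        # As in the diff, the pin-group filter rebuilds the list from ALL
        # nodes, so it replaces (rather than intersects with) the maintenance filter.
        candidates = [n for n in nodes if n["name"] in pin_group]
    return min(candidates, key=lambda n: n[f"{method}_{mode}_percent"])["name"]

nodes = [
    {"name": "virt01", "maintenance": False, "memory_used_percent": 62.0},
    {"name": "virt02", "maintenance": True,  "memory_used_percent": 10.0},
    {"name": "virt03", "maintenance": False, "memory_used_percent": 41.0},
]
print(most_free_node(nodes))                        # virt03 (virt02 is in maintenance)
print(most_free_node(nodes, pin_group=["virt01"]))  # virt01
```

Two observations on the diff itself: the pin-group list comprehension does not re-apply the maintenance filter, and `guest_node_relation_list: list = []` uses a mutable default argument, which is a well-known Python pitfall (harmless here since the list is never mutated).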
```diff
@@ -226,7 +235,7 @@ class Calculations:
             for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
                 proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
                 Calculations.val_anti_affinity(proxlb_data, guest_name)
-                Calculations.val_node_relationship(proxlb_data, guest_name)
+                Calculations.val_node_relationships(proxlb_data, guest_name)
                 Calculations.update_node_resources(proxlb_data)
 
         logger.debug("Finished: relocate_guests.")
```
```diff
@@ -281,7 +290,7 @@ class Calculations:
         logger.debug("Finished: val_anti_affinity.")
 
     @staticmethod
-    def val_node_relationship(proxlb_data: Dict[str, Any], guest_name: str):
+    def val_node_relationships(proxlb_data: Dict[str, Any], guest_name: str):
         """
         Validates and assigns guests to nodes based on defined relationships based on tags.
 
@@ -292,24 +301,26 @@ class Calculations:
         Returns:
             None
         """
-        logger.debug("Starting: val_node_relationship.")
+        logger.debug("Starting: val_node_relationships.")
         proxlb_data["guests"][guest_name]["processed"] = True
 
-        if proxlb_data["guests"][guest_name]["node_relationship"]:
-            logger.info(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']}. Pinning to node.")
+        if len(proxlb_data["guests"][guest_name]["node_relationships"]) > 0:
+            logger.debug(f"Guest '{guest_name}' has relationships defined to node(s): {','.join(proxlb_data['guests'][guest_name]['node_relationships'])}. Pinning to node.")
+
+            # Get the node with the most free resources of the group
+            guest_node_relation_list = proxlb_data["guests"][guest_name]["node_relationships"]
+            Calculations.get_most_free_node(proxlb_data, False, guest_node_relation_list)
 
             # Validate if the specified node name is really part of the cluster
-            if proxlb_data['guests'][guest_name]['node_relationship'] in proxlb_data["nodes"].keys():
-                logger.info(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']} is a known hypervisor node in the cluster.")
-                # Pin the guest to the specified hypervisor node.
-                proxlb_data["meta"]["balancing"]["balance_next_node"] = proxlb_data['guests'][guest_name]['node_relationship']
+            if proxlb_data["meta"]["balancing"]["balance_next_node"] in proxlb_data["nodes"].keys():
+                logger.debug(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} is a known hypervisor node in the cluster.")
             else:
-                logger.warning(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']} but this node name is not known in the cluster!")
+                logger.warning(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} but this node name is not known in the cluster!")
 
         else:
-            logger.info(f"Guest '{guest_name}' does not have any specific node relationships.")
+            logger.debug(f"Guest '{guest_name}' does not have any specific node relationships.")
 
-        logger.debug("Finished: val_node_relationship.")
+        logger.debug("Finished: val_node_relationships.")
 
     @staticmethod
     def update_node_resources(proxlb_data):
```
```diff
@@ -79,7 +79,7 @@ class Guests:
             guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
-            guests['guests'][guest['name']]['node_relationship'] = Tags.get_node_relationship(guests['guests'][guest['name']]['tags'])
+            guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['type'] = 'vm'
 
             logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
```
```diff
@@ -107,7 +107,7 @@ class Guests:
             guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
-            guests['guests'][guest['name']]['node_relationship'] = Tags.get_node_relationship(guests['guests'][guest['name']]['tags'])
+            guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
             guests['guests'][guest['name']]['type'] = 'ct'
 
             logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")
```
```diff
@@ -112,7 +112,7 @@ class Nodes:
         if proxlb_config.get("proxmox_cluster", None).get("maintenance_nodes", None) is not None:
             if len(proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", [])) > 0:
                 if node_name in proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", []):
-                    logger.warning(f"Node: {node_name} has been set to maintenance mode (by ProxLB config).")
+                    logger.info(f"Node: {node_name} has been set to maintenance mode (by ProxLB config).")
                     return True
                 else:
                     logger.debug(f"Node: {node_name} is not in maintenance mode by ProxLB config.")
@@ -122,7 +122,7 @@ class Nodes:
             if ha_element.get("status"):
                 if "maintenance mode" in ha_element.get("status"):
                     if ha_element.get("node") == node_name:
-                        logger.warning(f"Node: {node_name} has been set to maintenance mode (by Proxmox HA API).")
+                        logger.info(f"Node: {node_name} has been set to maintenance mode (by Proxmox HA API).")
                         return True
                 else:
                     logger.debug(f"Node: {node_name} is not in maintenance mode by Proxmox HA API.")
@@ -149,7 +149,7 @@ class Nodes:
         if proxlb_config.get("proxmox_cluster", None).get("ignore_nodes", None) is not None:
             if len(proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", [])) > 0:
                 if node_name in proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", []):
-                    logger.warning(f"Node: {node_name} has been set to be ignored. Not adding node!")
+                    logger.info(f"Node: {node_name} has been set to be ignored. Not adding node!")
                     return True
 
         logger.debug("Finished: set_node_ignore.")
```
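A side note on the config lookups in these hunks: the outer guard uses `.get("proxmox_cluster", None).get(...)`, which raises `AttributeError` when the `proxmox_cluster` section is missing entirely (the `None` default has no `.get`), while the inner checks use the safer `{}` default. A minimal sketch of the safe form (function name hypothetical, not ProxLB's API):

```python
def in_maintenance(proxlb_config, node_name):
    # Chain .get() with {} / [] defaults so a missing 'proxmox_cluster'
    # section yields an empty list instead of raising AttributeError.
    maint = proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", [])
    return node_name in maint

print(in_maintenance({"proxmox_cluster": {"maintenance_nodes": ["virt02"]}}, "virt02"))  # True
print(in_maintenance({}, "virt02"))  # False (missing section -> empty default)
```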
```diff
@@ -153,7 +153,7 @@ class Tags:
         return ignore_tag
 
     @staticmethod
-    def get_node_relationship(tags: List[str]) -> str:
+    def get_node_relationships(tags: List[str]) -> str:
         """
         Get a node relationship tag for a guest from the Proxmox cluster by the API to pin
         a guest to a node.
@@ -167,13 +167,14 @@ class Tags:
         Returns:
             Str: The related hypervisor node name.
         """
-        logger.debug("Starting: get_node_relationship.")
-        node_relationship_tag = False
+        logger.debug("Starting: get_node_relationships.")
+        node_relationship_tags = []
 
         if len(tags) > 0:
             for tag in tags:
                 if tag.startswith("plb_pin"):
                     node_relationship_tag = tag.replace("plb_pin_", "")
+                    node_relationship_tags.append(node_relationship_tag)
 
-        logger.debug("Finished: get_node_relationship.")
-        return node_relationship_tag
+        logger.debug("Finished: get_node_relationships.")
+        return node_relationship_tags
```
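The renamed tag helper now accumulates every `plb_pin` match instead of keeping only the last one (note that the return annotation in the diff is still `-> str`, although a list is now returned). A compact standalone equivalent of the new behavior:

```python
from typing import List

def get_node_relationships(tags: List[str]) -> List[str]:
    # Collect every pin tag, stripping the 'plb_pin_' prefix as in the diff.
    # Note: startswith("plb_pin") also matches tags like "plb_pinX"; only the
    # exact "plb_pin_" prefix is removed by replace().
    return [t.replace("plb_pin_", "") for t in tags if t.startswith("plb_pin")]

print(get_node_relationships(["plb_pin_node01", "web", "plb_pin_node02"]))
# ['node01', 'node02']
print(get_node_relationships([]))  # []
```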
```diff
@@ -3,5 +3,5 @@ __app_desc__ = "A DRS alike loadbalancer for Proxmox clusters."
 __author__ = "Florian Paul Azim Hoberg <gyptazy>"
 __copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
 __license__ = "GPL-3.0"
-__version__ = "1.1.3"
+__version__ = "1.1.4"
 __url__ = "https://github.com/gyptazy/ProxLB"
```
`setup.py` (2 lines changed)
```diff
@@ -2,7 +2,7 @@ from setuptools import setup
 
 setup(
     name="proxlb",
-    version="1.1.3",
+    version="1.1.4",
     description="A DRS alike loadbalancer for Proxmox clusters.",
     long_description="An advanced DRS alike loadbalancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
     author="Florian Paul Azim Hoberg",
```