Compare commits

...

30 Commits

Author SHA1 Message Date
gyptazy
8630333e4b feature: Add Helm chart support for ProxLB
Fixes: #281
2025-08-19 06:27:38 +02:00
gyptazy
7bd9a9b038 Merge pull request #282 from MaoMaoCake/main
Feat: Add support for helm
2025-08-18 19:31:14 +02:00
maomaocake
16651351de Implemented changes from comments 2025-08-18 12:46:29 +07:00
maomaocake
63805f1f50 Added support for resource limits and requests 2025-08-17 16:11:18 +07:00
maomaocake
c0ff1b5273 Added support for new user defined labels 2025-08-17 15:59:17 +07:00
maomaocake
07f8596fc5 Helm Chart 2025-08-17 15:41:55 +07:00
Florian
affbe433f9 Merge pull request #280 from gyptazy/docs/fix-ipv6-example-syntax
docs: Fix IPv6 address syntax in given examples
2025-08-06 16:42:13 +02:00
Florian Paul Azim Hoberg
7bda22e754 docs: Fix IPv6 address syntax in given examples 2025-08-06 16:38:23 +02:00
Florian
253dcf8eb9 Merge pull request #274 from gyptazy/docs/273-proxmox-9-compatibility
docs: Add compatibility matrix of tested Proxmox versions
2025-07-20 12:37:20 +02:00
gyptazy
6212d23268 docs: Add compatibility matrix of tested Proxmox versions
Fixes: #273
2025-07-20 12:36:25 +02:00
Florian
cf8c06393f Merge pull request #270 from gyptazy/docs/258-extend-doc-of-cluster-node-names-usage
docs: Adjust docs regarding maintenance node hostnames
2025-07-17 11:53:33 +02:00
Florian Paul Azim Hoberg
5c23fd3433 docs: Adjust docs regarding maintenance node hostnames
Fixes: #258
2025-07-17 11:52:08 +02:00
Florian
0fb732fc8c Merge pull request #269 from gyptazy/fix/268-evaluate-balancing-types
fix: Fix balancing evaluation of guest types (e.g., VM or CT).
2025-07-17 11:44:29 +02:00
Florian Paul Azim Hoberg
f36d96c72a fix: Fix balancing evaluation of guest types (e.g., VM or CT).
Fixes: #268
2025-07-17 11:41:00 +02:00
Florian
9cc03717ef Merge pull request #267 from gyptazy/release/1.1.5
release: Create release 1.1.5
2025-07-14 11:13:12 +02:00
Florian Paul Azim Hoberg
4848887ccc release: Create release 1.1.5
Fixes: #266
2025-07-14 11:08:38 +02:00
Gombócz Márton
04476feeaf Docs/custom api port (#264)
* docs(README.md): added description and examples of port declaration in hosts
* docs(docs/03_configuration): updated hosts description about using ports
2025-07-10 10:33:44 +02:00
Florian
b3765bf0ae Merge pull request #261 from gyptazy/feature/260-custom-api-ports
feature: Allow custom (instead of static tcp/8006) API ports for API hosts
2025-07-10 09:19:34 +02:00
Florian Paul Azim Hoberg
806b728a14 feature: Allow custom (instead of static tcp/8006) API ports for API hosts.
Fixes: #260
2025-07-08 17:39:29 +02:00
Florian
2c34ec91b1 Merge pull request #257 from gyptazy/release/prepare-1.1.5
release: Create release 1.1.5 beta 1
2025-06-29 11:08:15 +02:00
gyptazy
08b746a53b release: Create release 1.1.5 beta 1 2025-06-27 16:50:17 +02:00
Florian
615e2f5608 Merge pull request #256 from gyptazy/release/1.1.4
release: Create release 1.1.4
2025-06-27 16:40:53 +02:00
gyptazy
fa1e1ad8a3 release: Create release 1.1.4
Fixes: #254
2025-06-27 16:23:31 +02:00
pmarasse
c78def3919 Fix loglevels (#255)
* Modified some loglevels to make output lighter at INFO level

Co-authored-by: Philippe MARASSE <philippe@marasse.fr>
2025-06-27 15:10:57 +02:00
Florian
54c53b9860 Merge pull request #253 from gyptazy/feature/245-add-guest-pinning-to-group-of-nodes
feature: Allow pinning of guests to a group of nodes
2025-06-26 13:59:44 +02:00
Florian Paul Azim Hoberg
1fe8f703cc feature: Allow pinning of guests to a group of nodes
* You can now simply define multiple tags with plb_pin_node names
    where nodes are being evaluated and the one with the lowest
    resource usage will be taken.

Fixes: #245
2025-06-26 13:54:05 +02:00
Florian
7ba806abf7 Merge pull request #252 from gyptazy/fix/248-dry-run-with-deactivated-balancing
Fix an issue where balancing was performed in combination of deactivated balancing and dry-run mode
2025-06-24 10:09:10 +02:00
Florian Paul Azim Hoberg
6b2e120739 Fix: Fixed an issue where balancing was performed in combination of deactivated balancing and dry-run mode
Fixes: #248
2025-06-24 10:06:28 +02:00
Florian
e4103df326 Merge pull request #251 from gyptazy/prepare/1.1.4
release: Prepare release 1.1.4 beta
2025-06-24 10:00:27 +02:00
Florian Paul Azim Hoberg
f2acd4efa6 release: Prepare release 1.1.4 beta 2025-06-24 09:56:06 +02:00
29 changed files with 375 additions and 53 deletions

View File

@@ -0,0 +1,2 @@
added:
- Allow pinning of guests to a group of nodes (@gyptazy). [#245]

View File

@@ -0,0 +1,2 @@
fixed:
- Fixed an issue where balancing was performed in combination of deactivated balancing and dry-run mode (@gyptazy). [#248]

View File

@@ -0,0 +1,2 @@
fixed:
- Modified log levels to make output lighter at INFO level (@pmarasse). [#255]

View File

@@ -0,0 +1 @@
date: 2025-06-27

View File

@@ -0,0 +1,2 @@
added:
- Allow custom API ports instead of fixed tcp/8006 (@gyptazy). [#260]

View File

@@ -0,0 +1 @@
date: 2025-07-14

View File

@@ -0,0 +1,2 @@
fixed:
- Fix balancing evaluation of guest types (e.g., VM or CT) (@gyptazy). [#268]

View File

@@ -0,0 +1 @@
date: TBD

View File

@@ -5,7 +5,27 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.1.3] - 2025-06-29
## [1.1.5] - 2025-07-14
### Added
- Allow custom API ports instead of fixed tcp/8006 (@gyptazy). [#260]
## [1.1.4] - 2025-06-27
### Added
- Allow pinning of guests to a group of nodes (@gyptazy). [#245]
### Fixed
- Modified log levels to make output lighter at INFO level (@pmarasse). [#255]
- Fixed an issue where balancing was performed in combination of deactivated balancing and dry-run mode (@gyptazy). [#248]
## [1.1.3] - 2025-06-19
### Added

View File

@@ -77,6 +77,10 @@ Before starting any migrations, ProxLB validates that rebalancing actions are ne
## Installation
### Requirements / Dependencies
* Proxmox
* Proxmox 7.x
* Proxmox 8.x
* Proxmox 9.x (Beta 1 tested)
* Python3.x
* proxmoxer
* requests
@@ -161,6 +165,8 @@ docker run -it --rm -v $(pwd)/proxlb.yaml:/etc/proxlb/proxlb.yaml proxlb
| Version | Image |
|------|:------:|
| latest | cr.gyptazy.com/proxlb/proxlb:latest |
| v1.1.5 | cr.gyptazy.com/proxlb/proxlb:v1.1.5 |
| v1.1.4 | cr.gyptazy.com/proxlb/proxlb:v1.1.4 |
| v1.1.3 | cr.gyptazy.com/proxlb/proxlb:v1.1.3 |
| v1.1.2 | cr.gyptazy.com/proxlb/proxlb:v1.1.2 |
| v1.1.1 | cr.gyptazy.com/proxlb/proxlb:v1.1.1 |
@@ -239,7 +245,7 @@ The following options can be set in the configuration file `proxlb.yaml`:
| Section | Option | Sub Option | Example | Type | Description |
|---------|:------:|:----------:|:-------:|:----:|:-----------:|
| `proxmox_api` | | | | | |
| | hosts | | ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe'] | `List` | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. |
| | hosts | | ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe', 'virt01.example.com:443', '[fc00::1]', '[fc00::1]:443', 'fc00::1:8006'] | `List` | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. Custom ports can be specified. For IPv6 addresses without brackets, the port is assumed to be after the last colon. |
| | user | | root@pam | `Str` | Username for the API. |
| | pass | | FooBar | `Str` | Password for the API. (Recommended: Use API token authorization!) |
| | token_id | | proxlb | `Str` | Token ID of the user for the API. |
@@ -249,7 +255,7 @@ The following options can be set in the configuration file `proxlb.yaml`:
| | retries | | 1 | `Int` | How often a connection attempt to the defined API host should be performed. |
| | wait_time | | 1 | `Int` | How many seconds should be waited before performing another connection attempt to the API host. |
| `proxmox_cluster` | | | | | |
| | maintenance_nodes | | ['virt66.example.com'] | `List` | A list of Proxmox nodes that are defined to be in a maintenance. |
| | maintenance_nodes | | ['virt66.example.com'] | `List` | A list of Proxmox nodes that are defined to be in maintenance (must use the same node names as known within the cluster). |
| | ignore_nodes | | [] | `List` | A list of Proxmox nodes that are defined to be ignored. |
| | overprovisioning | | False | `Bool` | Avoids balancing when nodes would become overprovisioned. |
| `balancing` | | | | | |
@@ -279,7 +285,7 @@ The following options can be set in the configuration file `proxlb.yaml`:
An example of the configuration file looks like:
```
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
user: root@pam
pass: crazyPassw0rd!
# API Token method
@@ -377,7 +383,7 @@ As a result, ProxLB will not migrate this guest with the `plb_ignore_dev` tag to
**Note:** Ignored guests are really ignored. Even by enforcing affinity rules this guest will be ignored.
### Pin VMs to Specific Hypervisor Nodes
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific nodes in the cluster. This might be usefull when running applications with some special licensing requirements that are only fulfilled on certain nodes. It might also be interesting, when some physical hardware is attached to a node, that is not available in general within the cluster.
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-tag-node-pinning.jpg"/> Guests, such as VMs or CTs, can also be pinned to specific (and multiple) nodes in the cluster. This might be useful when running applications with special licensing requirements that are only fulfilled on certain nodes. It can also be interesting when some physical hardware is attached to a node that is not generally available within the cluster.
To pin a guest to a specific cluster node, users assign a tag with the prefix `plb_pin_$nodename` to the desired guest:
@@ -388,6 +394,8 @@ plb_pin_node03
As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
You can also repeat this step multiple times for different node names to create a group of allowed hosts on which the guest may run. In this case, ProxLB takes the node with the lowest used resources (according to the defined balancing values) from this group.
**Note:** The given node names from the tag are validated. This means ProxLB checks whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues with the regular processes to make sure the guest keeps running.
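The pin-group behavior described above can be sketched as follows; this is a standalone illustration (not ProxLB's actual code), and the node names and usage percentages are made up:

```python
# Minimal sketch of pin-group balancing: collect plb_pin_* tags, validate
# them against the cluster's node list, and pick the valid node with the
# lowest resource usage. Unknown node names are silently dropped.

def pick_pinned_node(tags, cluster_usage):
    """Return the least-used node among the guest's pin tags, or None."""
    pinned = [t.replace("plb_pin_", "") for t in tags if t.startswith("plb_pin_")]
    # Ignore tags that do not match a real cluster node name.
    valid = [node for node in pinned if node in cluster_usage]
    if not valid:
        return None
    return min(valid, key=lambda node: cluster_usage[node])

cluster_usage = {"virt01": 72.5, "virt02": 38.0, "virt03": 55.1}  # % memory used
tags = ["plb_pin_virt02", "plb_pin_virt03", "plb_pin_virt99"]  # virt99 is unknown
print(pick_pinned_node(tags, cluster_usage))  # -> virt02
```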
## Maintenance

View File

@@ -1,5 +1,5 @@
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
user: root@pam
pass: crazyPassw0rd!
# API Token method

14
debian/changelog vendored
View File

@@ -1,3 +1,17 @@
proxlb (1.1.5) stable; urgency=medium
* Allow custom API ports instead of fixed tcp/8006. (Closes: #260)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Mon, 14 Jul 2025 11:07:34 +0000
proxlb (1.1.4) stable; urgency=medium
* Allow pinning of guests to a group of nodes. (Closes: #245)
* Modified log levels to make output lighter at INFO level. (Closes: #255)
* Fixed an issue where balancing was performed in combination of deactivated balancing and dry-run mode. (Closes: #248)
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Fri, 27 Jun 2025 16:22:58 +0000
proxlb (1.1.3) stable; urgency=medium
* Add relaod (SIGHUP) function to ProxLB to reload the configuration. (Closes: #189)

View File

@@ -19,6 +19,7 @@
6. [Parallel Migrations](#parallel-migrations)
7. [Run as a Systemd-Service](#run-as-a-systemd-service)
8. [SSL Self-Signed Certificates](#ssl-self-signed-certificates)
9. [Node Maintenances](#node-maintenances)
## Authentication / User Accounts / Permissions
### Authentication
@@ -137,14 +138,20 @@ plb_pin_node03
As a result, ProxLB will pin the guest `dev-vm01` to the node `virt03`.
You can also repeat this step multiple times for different node names to create a group of allowed hosts on which the guest may run. In this case, ProxLB takes the node with the lowest used resources (according to the defined balancing values) from this group.
**Note:** The given node names from the tag are validated. This means ProxLB checks whether the given node name is really part of the cluster. In case of a wrongly defined or unavailable node name, it continues with the regular processes to make sure the guest keeps running.
### API Loadbalancing
ProxLB supports API loadbalancing, where one or more host objects can be defined as a list. This ensures, that you can even operator ProxLB without further changes when one or more nodes are offline or in a maintenance. When defining multiple hosts, the first reachable one will be picked.
ProxLB supports API loadbalancing, where one or more host objects can be defined as a list. This ensures that you can operate ProxLB without further changes even when one or more nodes are offline or in maintenance. When defining multiple hosts, the first reachable one will be picked. You can specify custom ports in the list. There are four ways of defining hosts with ports:
1. Hostname or IPv4 without port (in this case the default 8006 will be used)
2. Hostname or IPv4 with port
3. IPv6 in brackets with optional port
4. IPv6 without brackets; in this case the port is assumed to be after the last colon
```
proxmox_api:
hosts: ['virt01.example.com', '10.10.10.10', 'fe01::bad:code::cafe']
hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe', 'virt01.example.com:443', '[fc00::1]', '[fc00::1]:443', 'fc00::1:8006']
```
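The "first reachable host" behavior can be sketched like this; `probe` is a hypothetical stand-in for the real API connection test:

```python
# Minimal sketch of API loadbalancing host selection: try the configured
# hosts in order and use the first one that answers.

def first_reachable(hosts, probe):
    """Return the first host for which probe(host) succeeds, else None."""
    for host in hosts:
        if probe(host):
            return host
    return None

# Pretend only the second host answers.
up = {"virt02.example.com"}
hosts = ["virt01.example.com", "virt02.example.com", "10.10.10.10"]
print(first_reachable(hosts, lambda h: h in up))  # -> virt02.example.com
```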
### Ignore Host-Nodes or Guests
@@ -207,4 +214,25 @@ proxmox_api:
ssl_verification: False
```
*Note: Disabling SSL certificate validation is not recommended.*
*Note: Disabling SSL certificate validation is not recommended.*
### Node Maintenances
To exclude specific nodes from receiving any new workloads during the balancing process, the `maintenance_nodes` configuration option can be used. This option allows administrators to define a list of nodes that are currently undergoing maintenance or should otherwise not be used for running virtual machines or containers.
```yaml
maintenance_nodes:
- virt66.example.com
```
which can also be written as:
```yaml
maintenance_nodes: ['virt66.example.com']
```
The `maintenance_nodes` key must be defined as a list, even if it only includes a single node. Each entry in the list must exactly match the node name as it is known within the Proxmox VE cluster. Do not use IP addresses, alternative DNS names, or aliases; only the actual cluster node names are valid. Once a node is marked as being in maintenance mode:
* No new workloads will be balanced or migrated onto it.
* Any existing workloads currently running on the node will be migrated away in accordance with the configured balancing strategies, assuming resources on other nodes allow.
This feature is particularly useful during planned maintenance, upgrades, or troubleshooting, ensuring that services continue to run with minimal disruption while the specified node is being worked on.
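The effect of `maintenance_nodes` on target selection can be sketched as follows (a simplified illustration, not the actual ProxLB code):

```python
# Sketch: maintenance nodes are removed from the set of possible
# migration targets before any balancing decision is made.

def balancing_candidates(nodes, maintenance_nodes):
    """Exclude maintenance nodes from the possible migration targets."""
    blocked = set(maintenance_nodes)
    return [node for node in nodes if node not in blocked]

nodes = ["virt01.example.com", "virt02.example.com", "virt66.example.com"]
print(balancing_candidates(nodes, ["virt66.example.com"]))
# -> ['virt01.example.com', 'virt02.example.com']
```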

24
helm/proxlb/Chart.yaml Normal file
View File

@@ -0,0 +1,24 @@
apiVersion: v2
name: proxlb
description: A Helm chart for self-hosted ProxLB
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: "1.1.5"
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1.1.5"

View File

@@ -0,0 +1,13 @@
{{- define "proxlb.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{ define "proxlb.labels" }}
app.kubernetes.io/name: {{ .Release.Name }}
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: proxlb
{{- if .Values.labels }}
{{ toYaml .Values.labels }}
{{- end }}
{{ end }}

View File

@@ -0,0 +1,11 @@
{{- if .Values.configmap.create }}
apiVersion: v1
kind: ConfigMap
metadata:
name: proxlb-config
labels:
{{- include "proxlb.labels" . | nindent 4 }}
data:
proxlb.yaml: |
{{ toYaml .Values.configmap.config | indent 4 }}
{{ end }}

View File

@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
{{- include "proxlb.labels" . | nindent 4 }}
spec:
replicas: 1 # Number of replicas cannot be more than 1
selector:
matchLabels:
{{- include "proxlb.labels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "proxlb.labels" . | nindent 8 }}
spec:
{{- with .Values.image.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
# not interacting with the k8s cluster
automountServiceAccountToken: False
containers:
- name: proxlb
image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
{{- if .Values.extraArgs.dryRun }}
- --dry-run
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/proxlb/proxlb.yaml
subPath: proxlb.yaml
{{ if .Values.resources }}
resources:
{{ with .Values.resources }}
{{ toYaml . | nindent 10 }}
{{ end }}
{{ end }}
volumes:
- name: config
configMap:
name: proxlb-config

60
helm/proxlb/values.yaml Normal file
View File

@@ -0,0 +1,60 @@
image:
registry: cr.gyptazy.com
repository: proxlb/proxlb
tag: v1.1.5
pullPolicy: IfNotPresent
imagePullSecrets: [ ]
resources:
limits:
cpu: "1000m"
memory: "2Gi"
requests:
cpu: "100m"
memory: "100Mi"
labels: {}
extraArgs:
dryRun: false
configmap:
create: true
config:
proxmox_api:
hosts: []
# Can be either a user or a token
# user: ""
# pass: ""
# token_id: ""
# token_secret: ""
ssl_verification: True
timeout: 10
proxmox_cluster:
maintenance_nodes: [ ]
ignore_nodes: [ ]
overprovisioning: True
balancing:
enable: True
enforce_affinity: False
parallel: False
# If running parallel jobs, you can define
# the number of parallel jobs (default: 5)
parallel_jobs: 1
live: True
with_local_disks: True
balance_types: [ 'vm', 'ct' ]
max_job_validation: 1800
balanciness: 5
method: memory
mode: used
service:
daemon: True
schedule:
interval: 12
format: "hours"
delay:
enable: False
time: 1
format: "hours"
log_level: INFO

View File

@@ -1,6 +1,12 @@
#!/usr/bin/env bash
VERSION="1.1.2b"
VERSION="1.1.4"
# ProxLB
sed -i "s/^__version__ = .*/__version__ = \"$VERSION\"/" "proxlb/utils/version.py"
sed -i "s/version=\"[0-9]*\.[0-9]*\.[0-9]*\"/version=\"$VERSION\"/" setup.py
# Helm Chart
sed -i "s/^version: .*/version: \"$VERSION\"/" helm/proxlb/Chart.yaml
sed -i "s/^appVersion: .*/appVersion: \"v$VERSION\"/" helm/proxlb/Chart.yaml
echo "OK: Versions have been successfully set to $VERSION"

View File

@@ -87,8 +87,9 @@ def main():
Helper.log_node_metrics(proxlb_data, init=False)
# Perform balancing actions via Proxmox API
if not cli_args.dry_run or not proxlb_data["meta"]["balancing"].get("enable", False):
Balancing(proxmox_api, proxlb_data)
if proxlb_data["meta"]["balancing"].get("enable", False):
if not cli_args.dry_run:
Balancing(proxmox_api, proxlb_data)
# Validate if the JSON output should be
# printed to stdout
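The corrected control flow boils down to a single predicate; here is a minimal sketch of the logic, with the old buggy condition shown for comparison:

```python
# Sketch of the dry-run fix: balancing actions must only run when
# balancing is enabled AND dry-run mode is off.

def should_execute_balancing(balancing_enabled, dry_run):
    """Balancing runs only when enabled and not in dry-run mode."""
    return balancing_enabled and not dry_run

# The old condition fired even with balancing disabled, as long as
# dry-run mode was active (the bug described in #248):
old_condition = lambda enabled, dry: (not dry) or (not enabled)
assert old_condition(False, True) is True               # bug: would have balanced
assert should_execute_balancing(False, True) is False   # fixed behavior
```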

View File

@@ -90,11 +90,23 @@ class Balancing:
# VM Balancing
if guest_meta["type"] == "vm":
job_id = self.exec_rebalancing_vm(proxmox_api, proxlb_data, guest_name)
if 'vm' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug(f"Balancing: Balancing for guest {guest_name} of type VM started.")
job_id = self.exec_rebalancing_vm(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
f"Balancing: Balancing for guest {guest_name} will not be performed. "
"Guest is of type VM which is not included in allowed balancing types.")
# CT Balancing
elif guest_meta["type"] == "ct":
job_id = self.exec_rebalancing_ct(proxmox_api, proxlb_data, guest_name)
if 'ct' in proxlb_data["meta"]["balancing"].get("balance_types", []):
logger.debug(f"Balancing: Balancing for guest {guest_name} of type CT started.")
job_id = self.exec_rebalancing_ct(proxmox_api, proxlb_data, guest_name)
else:
logger.debug(
f"Balancing: Balancing for guest {guest_name} will not be performed. "
"Guest is of type CT which is not included in allowed balancing types.")
# Just in case we get a new type of guest in the future
else:
@@ -149,7 +161,7 @@ class Balancing:
}
try:
logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type VM.")
logger.info(f"Balancing: Starting to migrate VM guest {guest_name} from {guest_node_current} to {guest_node_target}.")
job_id = proxmox_api.nodes(guest_node_current).qemu(guest_id).migrate().post(**migration_options)
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type VM due to some Proxmox errors. Please check if resource is locked or similar.")
@@ -178,7 +190,7 @@ class Balancing:
guest_node_target = proxlb_data["guests"][guest_name]["node_target"]
try:
logger.debug(f"Balancing: Starting to migrate guest {guest_name} of type CT.")
logger.info(f"Balancing: Starting to migrate CT guest {guest_name} from {guest_node_current} to {guest_node_target}.")
job_id = proxmox_api.nodes(guest_node_current).lxc(guest_id).migrate().post(target=guest_node_target, restart=1)
except proxmoxer.core.ResourceException as proxmox_api_error:
logger.critical(f"Balancing: Failed to migrate guest {guest_name} of type CT due to some Proxmox errors. Please check if resource is locked or similar.")
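The `balance_types` gate introduced in this hunk can be reduced to a small predicate (a standalone sketch, not the actual method):

```python
# Sketch of the balance_types check: a guest is only balanced when its
# type ('vm' or 'ct') appears in the configured balance_types list.

def allowed_by_balance_types(guest_type, balancing_config):
    """Return True when the guest's type is enabled for balancing."""
    return guest_type in balancing_config.get("balance_types", [])

config = {"balance_types": ["vm"]}  # only VMs, no containers
assert allowed_by_balance_types("vm", config) is True
assert allowed_by_balance_types("ct", config) is False
assert allowed_by_balance_types("vm", {}) is False  # missing key -> nothing balanced
```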

View File

@@ -129,7 +129,7 @@ class Calculations:
logger.debug("Finished: get_balanciness.")
@staticmethod
def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False) -> Dict[str, Any]:
def get_most_free_node(proxlb_data: Dict[str, Any], return_node: bool = False, guest_node_relation_list: list = []) -> Dict[str, Any]:
"""
Get the name of the Proxmox node in the cluster with the most free resources based on
the user defined method (e.g.: memory) and mode (e.g.: used).
@@ -138,6 +138,8 @@ class Calculations:
proxlb_data (Dict[str, Any]): The data holding all content of all objects.
return_node (bool): The indicator to simply return the best node for further
assignments.
guest_node_relation_list (list): A list of nodes that have a tag on the given
guest relationship for pinning.
Returns:
Dict[str, Any]: Updated meta data section of the node with the most free resources that should
@@ -146,8 +148,15 @@ class Calculations:
logger.debug("Starting: get_most_free_node.")
proxlb_data["meta"]["balancing"]["balance_next_node"] = ""
# Do not include nodes that are marked in 'maintenance'
# Filter and exclude nodes that are in maintenance mode
filtered_nodes = [node for node in proxlb_data["nodes"].values() if not node["maintenance"]]
# Filter and include nodes that given by a relationship between guest and node. This is only
# used if the guest has a relationship to a node defined by "pin" tags.
if len(guest_node_relation_list) > 0:
filtered_nodes = [node for node in proxlb_data["nodes"].values() if node["name"] in guest_node_relation_list]
# Filter by the defined methods and modes for balancing
method = proxlb_data["meta"]["balancing"].get("method", "memory")
mode = proxlb_data["meta"]["balancing"].get("mode", "used")
lowest_usage_node = min(filtered_nodes, key=lambda x: x[f"{method}_{mode}_percent"])
@@ -226,7 +235,7 @@ class Calculations:
for guest_name in proxlb_data["groups"]["affinity"][group_name]["guests"]:
proxlb_data["meta"]["balancing"]["balance_next_guest"] = guest_name
Calculations.val_anti_affinity(proxlb_data, guest_name)
Calculations.val_node_relationship(proxlb_data, guest_name)
Calculations.val_node_relationships(proxlb_data, guest_name)
Calculations.update_node_resources(proxlb_data)
logger.debug("Finished: relocate_guests.")
@@ -281,7 +290,7 @@ class Calculations:
logger.debug("Finished: val_anti_affinity.")
@staticmethod
def val_node_relationship(proxlb_data: Dict[str, Any], guest_name: str):
def val_node_relationships(proxlb_data: Dict[str, Any], guest_name: str):
"""
Validates and assigns guests to nodes based on defined relationships based on tags.
@@ -292,24 +301,26 @@ class Calculations:
Returns:
None
"""
logger.debug("Starting: val_node_relationship.")
logger.debug("Starting: val_node_relationships.")
proxlb_data["guests"][guest_name]["processed"] = True
if proxlb_data["guests"][guest_name]["node_relationship"]:
logger.info(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']}. Pinning to node.")
if len(proxlb_data["guests"][guest_name]["node_relationships"]) > 0:
logger.debug(f"Guest '{guest_name}' has relationships defined to node(s): {','.join(proxlb_data['guests'][guest_name]['node_relationships'])}. Pinning to node.")
# Get the node with the most free resources of the group
guest_node_relation_list = proxlb_data["guests"][guest_name]["node_relationships"]
Calculations.get_most_free_node(proxlb_data, False, guest_node_relation_list)
# Validate if the specified node name is really part of the cluster
if proxlb_data['guests'][guest_name]['node_relationship'] in proxlb_data["nodes"].keys():
logger.info(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']} is a known hypervisor node in the cluster.")
# Pin the guest to the specified hypervisor node.
proxlb_data["meta"]["balancing"]["balance_next_node"] = proxlb_data['guests'][guest_name]['node_relationship']
if proxlb_data["meta"]["balancing"]["balance_next_node"] in proxlb_data["nodes"].keys():
logger.debug(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} is a known hypervisor node in the cluster.")
else:
logger.warning(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['guests'][guest_name]['node_relationship']} but this node name is not known in the cluster!")
logger.warning(f"Guest '{guest_name}' has a specific relationship defined to node: {proxlb_data['meta']['balancing']['balance_next_node']} but this node name is not known in the cluster!")
else:
logger.info(f"Guest '{guest_name}' does not have any specific node relationships.")
logger.debug(f"Guest '{guest_name}' does not have any specific node relationships.")
logger.debug("Finished: val_node_relationship.")
logger.debug("Finished: val_node_relationships.")
@staticmethod
def update_node_resources(proxlb_data):
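The node selection with the new `guest_node_relation_list` filter can be sketched standalone (illustrative node data; note that, mirroring the hunk above, the pin filter replaces rather than narrows the maintenance filter):

```python
# Sketch of get_most_free_node: filter out maintenance nodes, optionally
# restrict to pinned nodes, then take the node with the lowest
# "<method>_<mode>_percent" value.

def most_free_node(nodes, method="memory", mode="used", pinned=None):
    """Return the name of the node with the most free resources."""
    candidates = [n for n in nodes.values() if not n["maintenance"]]
    if pinned:
        # Pin filter is built from all nodes, as in the hunk above.
        candidates = [n for n in nodes.values() if n["name"] in pinned]
    return min(candidates, key=lambda n: n[f"{method}_{mode}_percent"])["name"]

nodes = {
    "virt01": {"name": "virt01", "maintenance": False, "memory_used_percent": 70},
    "virt02": {"name": "virt02", "maintenance": False, "memory_used_percent": 30},
    "virt03": {"name": "virt03", "maintenance": True,  "memory_used_percent": 10},
}
assert most_free_node(nodes) == "virt02"                  # virt03 excluded: maintenance
assert most_free_node(nodes, pinned=["virt01"]) == "virt01"
```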

View File

@@ -79,7 +79,7 @@ class Guests:
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationship'] = Tags.get_node_relationship(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['type'] = 'vm'
logger.debug(f"Resources of Guest {guest['name']} (type VM) added: {guests['guests'][guest['name']]}")
@@ -107,7 +107,7 @@ class Guests:
guests['guests'][guest['name']]['affinity_groups'] = Tags.get_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['anti_affinity_groups'] = Tags.get_anti_affinity_groups(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['ignore'] = Tags.get_ignore(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationship'] = Tags.get_node_relationship(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['node_relationships'] = Tags.get_node_relationships(guests['guests'][guest['name']]['tags'])
guests['guests'][guest['name']]['type'] = 'ct'
logger.debug(f"Resources of Guest {guest['name']} (type CT) added: {guests['guests'][guest['name']]}")

View File

@@ -112,7 +112,7 @@ class Nodes:
if proxlb_config.get("proxmox_cluster", None).get("maintenance_nodes", None) is not None:
if len(proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", [])) > 0:
if node_name in proxlb_config.get("proxmox_cluster", {}).get("maintenance_nodes", []):
logger.warning(f"Node: {node_name} has been set to maintenance mode (by ProxLB config).")
logger.info(f"Node: {node_name} has been set to maintenance mode (by ProxLB config).")
return True
else:
logger.debug(f"Node: {node_name} is not in maintenance mode by ProxLB config.")
@@ -122,7 +122,7 @@ class Nodes:
if ha_element.get("status"):
if "maintenance mode" in ha_element.get("status"):
if ha_element.get("node") == node_name:
logger.warning(f"Node: {node_name} has been set to maintenance mode (by Proxmox HA API).")
logger.info(f"Node: {node_name} has been set to maintenance mode (by Proxmox HA API).")
return True
else:
logger.debug(f"Node: {node_name} is not in maintenance mode by Proxmox HA API.")
@@ -149,7 +149,7 @@ class Nodes:
if proxlb_config.get("proxmox_cluster", None).get("ignore_nodes", None) is not None:
if len(proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", [])) > 0:
if node_name in proxlb_config.get("proxmox_cluster", {}).get("ignore_nodes", []):
logger.warning(f"Node: {node_name} has been set to be ignored. Not adding node!")
logger.info(f"Node: {node_name} has been set to be ignored. Not adding node!")
return True
logger.debug("Finished: set_node_ignore.")

View File

@@ -153,7 +153,7 @@ class Tags:
return ignore_tag
@staticmethod
def get_node_relationship(tags: List[str]) -> str:
def get_node_relationships(tags: List[str]) -> List[str]:
"""
Get the node relationship tags for a guest from the Proxmox cluster by the API to pin
a guest to one or more nodes.
@@ -167,13 +167,14 @@ class Tags:
Returns:
List[str]: The related hypervisor node names.
"""
logger.debug("Starting: get_node_relationship.")
node_relationship_tag = False
logger.debug("Starting: get_node_relationships.")
node_relationship_tags = []
if len(tags) > 0:
for tag in tags:
if tag.startswith("plb_pin"):
node_relationship_tag = tag.replace("plb_pin_", "")
node_relationship_tags.append(node_relationship_tag)
logger.debug("Finished: get_node_relationship.")
return node_relationship_tag
logger.debug("Finished: get_node_relationships.")
return node_relationship_tags
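For illustration, a standalone copy of the tag extraction shown above (same prefix matching and stripping as the hunk):

```python
# Sketch: collect all plb_pin_* tags from a guest's tag list and strip
# the prefix, yielding the list of pinned node names.

def get_node_relationships(tags):
    """Return the node names encoded in plb_pin_* tags."""
    return [tag.replace("plb_pin_", "") for tag in tags if tag.startswith("plb_pin")]

print(get_node_relationships(["plb_pin_virt01", "plb_pin_virt02", "prod"]))
# -> ['virt01', 'virt02']
```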

View File

@@ -10,6 +10,7 @@ __license__ = "GPL-3.0"
import json
import uuid
import re
import sys
import time
import utils.version
@@ -214,4 +215,53 @@ class Helper:
logger.debug("Starting: handle_sighup.")
logger.debug("Got SIGHUP signal. Reloading...")
Helper.proxlb_reload = True
- logger.debug("Starting: handle_sighup.")
+ logger.debug("Finished: handle_sighup.")
+ @staticmethod
+ def get_host_port_from_string(host_object):
+ """
+ Parses a string containing a host (IPv4, IPv6, or hostname) and an optional port, and returns a tuple of (host, port).
+ Supported formats:
+ - Hostname or IPv4 without port: "example.com" or "192.168.0.1"
+ - Hostname or IPv4 with port: "example.com:8006" or "192.168.0.1:8006"
+ - IPv6 in brackets with optional port: "[fc00::1]" or "[fc00::1]:8006"
+ - IPv6 without brackets, port is assumed after last colon: "fc00::1:8006"
+ If no port is specified, port 8006 is used as the default.
+ Args:
+ host_object (str): A string representing a host with or without a port.
+ Returns:
+ tuple: A tuple (host: str, port: int)
+ """
+ logger.debug("Starting: get_host_port_from_string.")
+ # IPv6 (with or without port, written in brackets)
+ match = re.match(r'^\[(.+)\](?::(\d+))?$', host_object)
+ if match:
+ host = match.group(1)
+ port = int(match.group(2)) if match.group(2) else 8006
+ return host, port
+ # Count colons to identify IPv6 addresses without brackets
+ colon_count = host_object.count(':')
+ # IPv4 or hostname without port
+ if colon_count == 0:
+ return host_object, 8006
+ # IPv4 or hostname with port
+ elif colon_count == 1:
+ host, port = host_object.split(':')
+ return host, int(port)
+ # IPv6 (with or without port, assume last colon is port)
+ else:
+ parts = host_object.rsplit(':', 1)
+ try:
+ port = int(parts[1])
+ return parts[0], port
+ except ValueError:
+ return host_object, 8006
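The parsing rules from the docstring above can be exercised standalone. This is a self-contained sketch of the same logic (the `default_port` parameter is an addition for illustration; the diff hard-codes 8006):

```python
import re

def get_host_port_from_string(host_object: str, default_port: int = 8006):
    """Parse 'host[:port]' for hostnames, IPv4 and IPv6; fall back to the Proxmox API port."""
    # IPv6 in brackets, optionally followed by :port, e.g. "[fc00::1]:8006"
    match = re.match(r'^\[(.+)\](?::(\d+))?$', host_object)
    if match:
        return match.group(1), int(match.group(2)) if match.group(2) else default_port
    colon_count = host_object.count(':')
    if colon_count == 0:  # hostname or IPv4 without port
        return host_object, default_port
    if colon_count == 1:  # hostname or IPv4 with port
        host, port = host_object.split(':')
        return host, int(port)
    # Bare IPv6: treat everything after the last colon as the port if it is numeric.
    head, _, tail = host_object.rpartition(':')
    try:
        return head, int(tail)
    except ValueError:
        return host_object, default_port
```

Example: `get_host_port_from_string("[fc00::1]:443")` gives `("fc00::1", 443)`, and `"example.com"` gives `("example.com", 8006)`. Note one caveat inherent to the heuristic: a bracket-less IPv6 address without a port whose last group is numeric (e.g. `"fc00::1"`) is misread as host `"fc00:"` with port `1`, so the bracketed form is the safe way to write IPv6 hosts.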


@@ -33,6 +33,7 @@ try:
except ImportError:
URLLIB3_PRESENT = False
from typing import Dict, Any
+ from utils.helper import Helper
from utils.logger import SystemdLogger
@@ -189,9 +190,9 @@ class ProxmoxApi:
api_connection_wait_time = proxlb_config["proxmox_api"].get("wait_time", 1)
for api_connection_attempt in range(api_connection_retries):
- validated = self.test_api_proxmox_host(host)
- if validated:
- validated_api_hosts.append(validated)
+ validated_api_host, api_port = self.test_api_proxmox_host(host)
+ if validated_api_host:
+ validated_api_hosts.append(validated_api_host)
break
else:
logger.warning(f"Attempt {api_connection_attempt + 1}/{api_connection_retries} failed for host {host}. Retrying in {api_connection_wait_time} seconds...")
@@ -200,7 +201,7 @@ class ProxmoxApi:
if len(validated_api_hosts) > 0:
# Choose a random host to distribute the load across the cluster
# as a simple load balancing mechanism.
- return random.choice(validated_api_hosts)
+ return random.choice(validated_api_hosts), api_port
logger.critical("No valid Proxmox API hosts found.")
print("No valid Proxmox API hosts found.")
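The retry-and-pick logic in the hunk above can be sketched standalone (`pick_api_endpoint` and `validate_host` are hypothetical names standing in for `api_connect_get_hosts` and `test_api_proxmox_host`):

```python
import random
import time

def pick_api_endpoint(hosts, validate_host, retries=3, wait_time=0):
    """Validate each configured host with retries; return (host, port) of a random healthy one."""
    validated = []
    port = None
    for host in hosts:
        for attempt in range(retries):
            candidate, candidate_port = validate_host(host)
            if candidate:
                validated.append(candidate)
                port = candidate_port
                break
            time.sleep(wait_time)  # back off before the next attempt
    if validated:
        # Picking a random healthy endpoint spreads API load across cluster nodes.
        return random.choice(validated), port
    return None, None
```

Note that, as in the diff, the returned port is the one recorded for the most recently validated host rather than for the randomly chosen one, which only matters when hosts listen on different ports.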
@@ -228,6 +229,10 @@ class ProxmoxApi:
"""
logger.debug("Starting: test_api_proxmox_host.")
+ # Validate for custom ports in API hosts which might indicate
+ # that an external loadbalancer will be used.
+ host, port = Helper.get_host_port_from_string(host)
# Try resolving DNS to IP and log non-resolvable ones
try:
ip = socket.getaddrinfo(host, None, socket.AF_UNSPEC)
@@ -239,12 +244,12 @@ class ProxmoxApi:
for address_type in ip:
if address_type[0] == socket.AF_INET:
logger.debug(f"{host} is type ipv4.")
- if self.test_api_proxmox_host_ipv4(host):
- return host
+ if self.test_api_proxmox_host_ipv4(host, port):
+ return host, port
elif address_type[0] == socket.AF_INET6:
logger.debug(f"{host} is type ipv6.")
- if self.test_api_proxmox_host_ipv6(host):
- return host
+ if self.test_api_proxmox_host_ipv6(host, port):
+ return host, port
else:
return False
@@ -378,7 +383,7 @@ class ProxmoxApi:
self.validate_config(proxlb_config)
# Get a valid Proxmox API endpoint
- proxmox_api_endpoint = self.api_connect_get_hosts(proxlb_config, proxlb_config.get("proxmox_api", {}).get("hosts", []))
+ proxmox_api_endpoint, proxmox_api_port = self.api_connect_get_hosts(proxlb_config, proxlb_config.get("proxmox_api", {}).get("hosts", []))
# Disable warnings for SSL certificate validation
if not proxlb_config.get("proxmox_api").get("ssl_verification", True):
@@ -392,6 +397,7 @@ class ProxmoxApi:
if proxlb_config.get("proxmox_api").get("token_secret", False):
proxmox_api = proxmoxer.ProxmoxAPI(
proxmox_api_endpoint,
+ port=proxmox_api_port,
user=proxlb_config.get("proxmox_api").get("user", True),
token_name=proxlb_config.get("proxmox_api").get("token_id", True),
token_value=proxlb_config.get("proxmox_api").get("token_secret", True),
@@ -401,6 +407,7 @@ class ProxmoxApi:
else:
proxmox_api = proxmoxer.ProxmoxAPI(
proxmox_api_endpoint,
+ port=proxmox_api_port,
user=proxlb_config.get("proxmox_api").get("user", True),
password=proxlb_config.get("proxmox_api").get("pass", True),
verify_ssl=proxlb_config.get("proxmox_api").get("ssl_verification", True),
@@ -420,6 +427,5 @@ class ProxmoxApi:
sys.exit(2)
logger.info(f"API connection to host {proxmox_api_endpoint} succeeded.")
logger.debug("Finished: api_connect.")
return proxmox_api


@@ -3,5 +3,5 @@ __app_desc__ = "A DRS alike loadbalancer for Proxmox clusters."
__author__ = "Florian Paul Azim Hoberg <gyptazy>"
__copyright__ = "Copyright (C) 2025 Florian Paul Azim Hoberg (@gyptazy)"
__license__ = "GPL-3.0"
- __version__ = "1.1.3"
+ __version__ = "1.1.5"
__url__ = "https://github.com/gyptazy/ProxLB"


@@ -2,7 +2,7 @@ from setuptools import setup
setup(
name="proxlb",
- version="1.1.3",
+ version="1.1.5",
description="A DRS alike loadbalancer for Proxmox clusters.",
long_description="An advanced DRS alike loadbalancer for Proxmox clusters that also supports maintenance modes and affinity/anti-affinity rules.",
author="Florian Paul Azim Hoberg",