Compare commits

...

27 Commits

Author SHA1 Message Date
Florian
5542b9bc6c Merge pull request #116 from gyptazy/release/prepare-release-1.0.5
release: Create release 1.0.5
2024-10-30 17:13:06 +01:00
gyptazy
16c5ee4d74 release: Create release 1.0.5 2024-10-30 17:04:14 +01:00
Alex Shut
21a73b71df Allow migration from local disks (#113)
Add parameter `with-local-disks=1` to allow migration from local disks.
2024-10-19 08:17:10 +02:00
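The commit above adds a `with-local-disks=1` parameter to permit migrating VMs that have local disks. A minimal sketch of how such a migration payload might be assembled — only the `with-local-disks` parameter name comes from the commit; the helper function and surrounding structure are illustrative, not ProxLB's actual code:

```python
# Sketch (illustrative, not ProxLB's implementation): build a Proxmox
# VM migration payload. Proxmox API parameters use integer booleans,
# so True/False are converted to 1/0.
def build_migration_params(target_node, online=True, with_local_disks=False):
    params = {"target": target_node, "online": int(online)}
    if with_local_disks:
        # Opt in to moving local disks along with the VM.
        params["with-local-disks"] = 1
    return params

params = build_migration_params("node02", with_local_disks=True)
print(params)
```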
Elliot Nevills
d3c055cbad fix: bug to allow log_verbosity to be set to DEBUG, WARN, and WARNING (#98)
* fixed bug to allow log_verbosity to be set to DEBUG, WARN, and WARNING

* removed WARN logging verbosity option

* adjusted for logging verbosity options

* add to changelog

* added fix log verbosity issue to changelog
2024-10-18 21:21:35 +02:00
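The fix above allows `log_verbosity` to accept `DEBUG` and `WARNING` (with the `WARN` alias dropped per the follow-up commit). A sketch of the kind of validation this implies, using the stdlib `logging` level names — the function and set names are illustrative, not ProxLB's actual code:

```python
import logging

# Sketch (illustrative): map allowed log_verbosity config values onto
# stdlib logging levels; anything else is rejected with a clear error.
ALLOWED = {"DEBUG", "INFO", "WARNING", "CRITICAL"}

def resolve_verbosity(value):
    value = value.upper()
    if value not in ALLOWED:
        raise ValueError(f"log_verbosity must be one of {sorted(ALLOWED)}")
    # logging.DEBUG, logging.WARNING, etc. are module attributes.
    return getattr(logging, value)

assert resolve_verbosity("debug") == logging.DEBUG
```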
Florian
24b7d2c860 Merge pull request #111 from gyptazy/fix/106-fix-maintenance-node-eval
fix: Fix offline node evaluation & maintenance compare of different type objects
2024-10-18 21:20:54 +02:00
gyptazy
6e87e2d478 fix: Fix offline node evaluation & maintenance compare of different type objects
- Fix node (and its objects) evaluation when not reachable (e.g., maintenance).
- Fix evaluation of maintenance mode where comparing list & string resulted in a crash (by @glitchvern).
- Set ProxLB version to 1.0.5b

Fixes: #106
Fixes: #107
Contributed-by: @glitchvern
2024-10-17 13:32:47 +02:00
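The list-vs-string crash described in the commit above is the classic case of a config value arriving either as a comma separated string or as an already-split list. A sketch of the normalization that avoids it — names are illustrative, not ProxLB's actual code:

```python
# Sketch (illustrative): accept maintenance_nodes as either a comma
# separated string or a list, always returning a clean list so that
# later membership checks never compare a list against a string.
def normalize_nodes(value):
    if isinstance(value, str):
        return [n.strip() for n in value.split(",") if n.strip()]
    return list(value)

assert normalize_nodes("node03, node04") == ["node03", "node04"]
assert normalize_nodes(["node03"]) == ["node03"]
```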
Florian
2593b87d3f Merge pull request #105 from gyptazy/fix/104-adjust-docs-parallel-migration
fix(docs): Change docs to make bool usage in configs more clear.
2024-10-16 08:48:56 +02:00
gyptazy
6310262e97 fix(docs): Change docs to make bool usage in configs more clear.
Fixes: #104
2024-10-16 08:19:16 +02:00
Florian
38712e90a3 Update issue templates 2024-10-11 14:49:43 +02:00
Florian
c2b2f62462 Update issue templates 2024-10-11 14:47:22 +02:00
Florian
adde04639e Update issue templates 2024-10-11 14:44:00 +02:00
Florian
a4b1f4af24 Merge pull request #96 from gyptazy/release/1.0.4
release: Create stable release 1.0.4
2024-10-11 12:37:44 +02:00
Florian Paul Azim Hoberg
55c714a888 release: Create stable release 1.0.4
Fixes: #95
2024-10-11 12:30:10 +02:00
Florian
3cd631db20 Merge pull request #93 from gyptazy/fix/75-fix-cpu-balancing-calculations
fix: Fix CPU balancing where calculations are done in float instead of int.
2024-10-11 08:35:28 +02:00
Florian Paul Azim Hoberg
d44da076cc fix: Fix CPU balancing where calculations are done in float instead of int.
By: @glitchvern
Fixes: #75
2024-10-11 08:25:12 +02:00
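The fix above keeps CPU balancing math in float rather than truncating to int. A toy illustration (not ProxLB's code) of why truncation hides the imbalance the balancer needs to see:

```python
# Toy illustration: integer division truncates a CPU usage ratio to 0,
# making a loaded node look idle; float division preserves the signal.
used_cores, total_cores = 3, 8
int_ratio = used_cores // total_cores   # 0 -> node appears idle
float_ratio = used_cores / total_cores  # 0.375 -> actual load

assert int_ratio == 0
assert float_ratio == 0.375
```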
Florian
95e8fc5737 Merge pull request #92 from gyptazy/feature/91-make-api-timeout-configureable
feature: Add feature to make API timeout configurable
2024-10-10 19:35:57 +02:00
Florian Paul Azim Hoberg
50a9e91633 feature: Add feature to make API timeout configurable
Fixes: #91
2024-10-10 19:33:21 +02:00
Florian
cca4c454dd Merge pull request #90 from gyptazy/feature/add-version-output
feature: Add version output as cli arg
2024-10-10 10:36:17 +02:00
Florian Paul Azim Hoberg
17c9c98bbc feature: Add version output as cli arg
Fixes: #89
2024-10-10 10:34:19 +02:00
Florian
486acad44f Merge pull request #87 from gyptazy/feature/58-add-maintenance-mode
feature: Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster.
2024-10-07 17:30:40 +02:00
Florian Paul Azim Hoberg
f73261e68c feature: Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster.
Fixes: #58
Fixes: #84
2024-10-07 17:28:50 +02:00
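The maintenance mode added above evacuates a node by moving its workloads to the remaining cluster members. A sketch of the core target-selection step — picking the non-maintenance node with the most free resources; the function and the free-memory heuristic are illustrative, not ProxLB's actual implementation:

```python
# Sketch (illustrative): given a map of node -> free memory and a set
# of nodes in maintenance, pick an evacuation target among the rest.
def pick_target(nodes, maintenance):
    candidates = {n: free for n, free in nodes.items() if n not in maintenance}
    if not candidates:
        raise RuntimeError("no eligible target nodes outside maintenance")
    return max(candidates, key=candidates.get)

nodes = {"node01": 16, "node02": 64, "node03": 32}
assert pick_target(nodes, maintenance={"node01"}) == "node02"
```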
Florian
464644def8 Merge pull request #82 from gyptazy/fix/81-adjust-infrastructure
fix(docs): Fix outdated documentation regarding the infrastructure.
2024-09-25 08:21:43 +02:00
Florian Paul Azim Hoberg
93b7894a6f fix(docs): Fix outdated documentation regarding the infrastructure.
Fixes: #81
2024-09-25 08:00:19 +02:00
Florian Paul Azim Hoberg
d53a6f695f fix: Run storage balancing only on supported shared storages and when really needed.
Fixes: #79
2024-09-13 08:37:32 +02:00
Florian
029ec31ad9 Merge pull request #77 from gyptazy/release/72-create-release-1.0.3
release: Prepare ProxLB release 1.0.3
2024-09-11 16:50:42 +02:00
Florian Paul Azim Hoberg
045159eb8d release: Prepare ProxLB release 1.0.3 2024-09-11 16:21:50 +02:00
Florian
3415e0ccec Merge pull request #76 from gyptazy/docs/74-adjust-master-only-docs
docs: Fix documentation for the master_only parameter placed in the wrong config section.
2024-09-11 15:34:55 +02:00
25 changed files with 491 additions and 199 deletions


@@ -1 +1 @@
date: TBD
date: 2024-09-12


@@ -0,0 +1,2 @@
added:
- Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster. [#58]


@@ -0,0 +1,2 @@
fixed:
- Fix CPU balancing where calculations are done in float instead of int. (by @glitchvern) [#75]


@@ -0,0 +1,3 @@
changed:
- Run storage balancing only on supported shared storages. [#79]
- Run storage balancing only when needed to save time. [#79]


@@ -0,0 +1,2 @@
fixed:
- Fix documentation for the underlying infrastructure. [#81]


@@ -0,0 +1,2 @@
added:
- Add version output cli arg. [#89]


@@ -0,0 +1,2 @@
added:
- Add feature to make API timeout configurable. [#91]


@@ -0,0 +1 @@
date: 2024-10-11


@@ -0,0 +1,2 @@
changed:
- Change docs to make bool usage in configs more clear (by @gyptazy). [#104]


@@ -0,0 +1,2 @@
fixed:
- Fix evaluation of maintenance mode where comparing list & string resulted in a crash (by @glitchvern). [#106]


@@ -0,0 +1,2 @@
fixed:
- Fix node (and its objects) evaluation when not reachable, e.g., maintenance (by @gyptazy). [#107]


@@ -0,0 +1,2 @@
fixed:
- Fix migration from local disks (by @greenlogles). [#113]


@@ -0,0 +1,2 @@
fixed:
- Fix allowed values (add DEBUG, WARNING) for log verbosity (by @gyptazy). [#98]


@@ -0,0 +1 @@
date: 2024-10-30


@@ -0,0 +1 @@
date: TBD

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,27 @@
---
name: Bug report
about: Create a bug report
title: "`Bug`:"
labels: bug, needs-analysis
assignees: ''
---
## General
<!-- Describe the bug from a high-level perspective. -->
## Weighting
Score: <!-- Define a score from 0-10 (10 = highest, most urgent) -->
## Config
<!-- Attach the ProxLB configuration for further analysis. Please take care NOT to publish your API credentials! -->
## Log
<!-- Attach the ProxLB debug log for further analysis. Please take care NOT to publish your API credentials! -->
## Meta
Please provide some more information about your setup. This includes where you obtained ProxLB (e.g., as a `.deb` file, from the repository, or as a container image) and also which version you're running in which mode. You can obtain the version in use from your image version, your local repository information, or by running `proxlb -v`.
Version: <!-- DEFINE_VERSION -->
Installed from: <!-- DEFINE_INSTALL_SOURCE -->
Running as: <!-- Container, local on Proxmox, local on all Proxmox, dedicated -->


@@ -0,0 +1,14 @@
---
name: Feature request
about: Create a new request for a missing feature
title: "`Feature`: "
labels: feature, needs-analysis
assignees: ''
---
## General
<!-- Describe the feature idea from a high-level perspective. -->
## Details
<!-- Provide some more details about the new feature request and provide examples. -->


@@ -6,34 +6,96 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0.5] - 2024-10-30
### Changed
- Change docs to make bool usage in configs more clear. [#104]
### Fixed
- Fix migration from local disks (by @greenlogles). [#113]
- Fix allowed values (add DEBUG, WARNING) for log verbosity. [#98]
- Fix node (and its objects) evaluation when not reachable (e.g., maintenance). [#107]
- Fix evaluation of maintenance mode where comparing list & string resulted in a crash (by @glitchvern). [#106]
## [1.0.4] - 2024-10-11
### Added
- Add feature to make API timeout configurable. [#91]
- Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster. [#58]
- Add version output cli arg. [#89]
### Changed
- Run storage balancing only on supported shared storages. [#79]
- Run storage balancing only when needed to save time. [#79]
### Fixed
- Fix CPU balancing where calculations are done in float instead of int. (by @glitchvern) [#75]
- Fix documentation for the underlying infrastructure. [#81]
## [1.0.3] - 2024-09-12
### Added
- Add storage balancing function. [#51]
- Add a convert function to cast all bool alike options from configparser to bools. [#53]
- Add config parser options for future features. [#53]
- Add a config version schema that must be supported by ProxLB. [#53]
- Add doc how to add dedicated user for authentication. (by @Dulux-Oz)
- Add feature to allow the API hosts being provided as a comma separated list. [#60]
- Add cli arg `-b` to return the next best node for next VM/CT placement. [#8]
### Changed
- Improve the underlying code base for future implementations. [#53]
- Provide a more reasonable output when HA services are not active in a Proxmox cluster. [#68]
### Fixed
- Fixed `master_only` function by inverting the condition.
- Improved the overall validation and error handling. [#64]
- Fix bug in the `proxlb.conf` in the vm_balancing section.
- Fix anti-affinity rules not evaluating a new and different node. [#67]
- Fix documentation for the master_only parameter placed in the wrong config section. [#74]
- Fix handling of unset `ignore_nodes` and `ignore_vms` resulted in an attribute error. [#71]
## [1.0.2] - 2024-08-13
### Added
- Add option to run migration in parallel or sequentially. [#41]
- Add option to run ProxLB only on the Proxmox's master node in the cluster (regarding the HA feature). [#40]
- Add option to run migrations in parallel or sequentially. [#41]
### Changed
- Fix daemon timer to use hours instead of minutes. [#45]
### Fixed
- Fix CMake packaging for Debian package to avoid overwriting the config file. [#49]
- Fix wonky code style.
## [1.0.0] - 2024-08-01
### Added
- Add feature to prevent VMs from being relocated by defining a wildcard pattern. [#7]
- Add feature to make log verbosity configurable [#17].
- Add option_mode to rebalance by node's free resources in percent (instead of bytes). [#29]
- Add option to rebalance by assigned VM resources to avoid over provisioning. [#16]
- Add Docker/Podman support. [#10 by @daanbosch]
- Add exclude grouping feature to rebalance VMs from being located together to new nodes. [#4]
- Add feature to prevent VMs from being relocated by defining the 'plb_ignore_vm' tag. [#7]
- Add dry-run support to see what kind of rebalancing would be done. [#6]
- Add LXC/Container integration. [#27]
- Add exclude grouping feature to rebalance VMs from being located together to new nodes. [#4]
- Add dry-run support to see what kind of rebalancing would be done. [#6]
- Add Docker/Podman support. [#10 by @daanbosch]
- Add feature to prevent VMs from being relocated by defining a wildcard pattern. [#7]
- Add feature to prevent VMs from being relocated by defining the 'plb_ignore_vm' tag. [#7]
- Add include grouping feature to rebalance VMs bundled to new nodes. [#3]
- Add option to rebalance by assigned VM resources to avoid overprovisioning. [#16]
- Add feature to make log verbosity configurable [#17].
### Changed


@@ -29,7 +29,7 @@ Before submitting a pull request, ensure that your changes successfully perform t
1. **Install pytest if you haven't already:**
```sh
pip install fake8
pip install flake8
```
2. **Run the linting:**
@@ -118,4 +118,4 @@ By participating in this project, you agree to abide by our [Code of Conduct](CO
If you need help or have any questions, feel free to reach out by creating an issue or by joining our [discussion forum](https://github.com/gyptazy/proxlb/discussions). You can also refer to our [documentation](https://github.com/gyptazy/ProxLB/tree/main/docs) for more information about the project or join our [chat room](https://matrix.to/#/#proxlb:gyptazy.ch) in Matrix.
Thank you for contributing to ProxLB! Together, we can enhance the efficiency and performance of Proxmox clusters.
Thank you for contributing to ProxLB! Together, we can enhance the efficiency and performance of Proxmox clusters.


@@ -1,5 +1,5 @@
# ProxLB - (Re)Balance VM Workloads in Proxmox Clusters
<img align="left" src="https://cdn.gyptazy.ch/images/Prox-LB-logo.jpg"/>
<img align="left" src="https://cdn.gyptazy.com/images/Prox-LB-logo.jpg"/>
<br>
<p float="center"><img src="https://img.shields.io/github/license/gyptazy/ProxLB"/><img src="https://img.shields.io/github/contributors/gyptazy/ProxLB"/><img src="https://img.shields.io/github/last-commit/gyptazy/ProxLB/main"/><img src="https://img.shields.io/github/issues-raw/gyptazy/ProxLB"/><img src="https://img.shields.io/github/issues-pr/gyptazy/ProxLB"/></p>
@@ -37,6 +37,8 @@
- [Downloads](#downloads)
- [Packages](#packages)
- [Repository](#repository)
- [Stable Releases](#stable-releases)
- [Beta/Testing Releases](#betatesting-releases)
- [Container Images (Docker/Podman)](#container-images-dockerpodman)
- [Misc](#misc)
- [Bugs](#bugs)
@@ -55,7 +57,7 @@ Intelligent rebalancing is a key feature of ProxLB where it re-balances VMs base
Automated rebalancing reduces the need for manual actions, allowing operators to focus on other critical tasks, thereby increasing operational efficiency.
### Video of Migration
<img src="https://cdn.gyptazy.ch/images/proxlb-rebalancing-demo.gif"/>
<img src="https://cdn.gyptazy.com/images/proxlb-rebalancing-demo.gif"/>
## Features
* Rebalance VMs/CTs in the cluster by:
@@ -108,37 +110,40 @@ The following options can be set in the `proxlb.conf` file:
| Section | Option | Example | Description |
|------|:------:|:------:|:------:|
| `proxmox` | api_host | hypervisor01.gyptazy.ch | Host or IP address (or comma separated list) of the remote Proxmox API. |
| `proxmox` | api_host | hypervisor01.gyptazy.com | Host or IP address (or comma separated list) of the remote Proxmox API. |
| | api_user | root@pam | Username for the API. |
| | api_pass | FooBar | Password for the API. |
| | verify_ssl | 1 | Validate SSL certificates (1) or ignore (0). (default: 1) |
| | verify_ssl | 1 | Validate SSL certificates (1) or ignore (0). (default: 1, type: bool) |
| | timeout | 10 | Timeout for the Proxmox API in sec. (default: 10) |
| `vm_balancing` | enable | 1 | Enables VM/CT balancing. |
| | method | memory | Defines the balancing method (default: memory) where you can use `memory`, `disk` or `cpu`. |
| | mode | used | Rebalance by `used` resources (efficiency) or `assigned` (avoid overprovisioning) resources. (default: used)|
| | mode_option | byte | Rebalance by node's resources in `bytes` or `percent`. (default: bytes) |
| | type | vm | Rebalance only `vm` (virtual machines), `ct` (containers) or `all` (virtual machines & containers). (default: vm)|
| | balanciness | 10 | Maximum percentage by which the lowest and highest resource consumption across nodes may differ before rebalancing. (default: 10) |
| | parallel_migrations | 1 | Defines if migrations should be done in parallel or sequentially. (default: 1) |
| | parallel_migrations | 1 | Defines if migrations should be done in parallel or sequentially. (default: 1, type: bool) |
| | maintenance_nodes | dummynode03,dummynode04 | Defines a comma separated list of nodes to set them into maintenance mode and move VMs/CTs to other nodes. |
| | ignore_nodes | dummynode01,dummynode02,test* | Defines a comma separated list of nodes to exclude. |
| | ignore_vms | testvm01,testvm02 | Defines a comma separated list of VMs to exclude. (`*` as suffix wildcard or tags are also supported) |
| `storage_balancing` | enable | 0 | Enables storage balancing. |
| | balanciness | 10 | Maximum percentage by which the lowest and highest storage consumption may differ before rebalancing. (default: 10) |
| | parallel_migrations | 1 | Defines if migrations should be done in parallel or sequentially. (default: 1) |
| `update_service` | enable | 0 | Enables the automated update service (rolling updates). |
| | parallel_migrations | 1 | Defines if migrations should be done in parallel or sequentially. (default: 1, type: bool) |
| `update_service` | enable | 0 | Enables the automated update service (rolling updates). (default: 0, type: bool) |
| `api` | enable | 0 | Enables the ProxLB API. |
| `service`| daemon | 1 | Run as a daemon (1) or one-shot (0). (default: 1) |
| `service`| daemon | 1 | Run as a daemon (1) or one-shot (0). (default: 1, type: bool) |
| | schedule | 24 | Rebalancing interval in hours. (default: 24) |
| | master_only | 0 | Defines if this should only be performed on the cluster master node (1) or not (0). (default: 0) |
| | log_verbosity | INFO | Defines the log level (default: CRITICAL) where you can use `INFO`, `WARN` or `CRITICAL` |
| | master_only | 0 | Defines if this should only be performed on the cluster master node (1) or not (0). (default: 0, type: bool) |
| | log_verbosity | INFO | Defines the log level (default: CRITICAL) where you can use `DEBUG`, `INFO`, `WARNING` or `CRITICAL` |
| | config_version | 3 | Defines the current config version schema for ProxLB |
An example of the configuration file looks like:
```
[proxmox]
api_host: hypervisor01.gyptazy.ch
api_host: hypervisor01.gyptazy.com
api_user: root@pam
api_pass: FooBar
verify_ssl: 1
timeout: 10
[vm_balancing]
enable: 1
method: memory
@@ -154,6 +159,7 @@ balanciness: 10
# Enable parallel migrations. If set to 0 it will wait for completed migrations
# before starting next migration.
parallel_migrations: 1
maintenance_nodes: dummynode03,dummynode04
ignore_nodes: dummynode01,dummynode02
ignore_vms: testvm01,testvm02
[storage_balancing]
@@ -172,7 +178,7 @@ config_version: 3
```
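The bool-style options above (`verify_ssl`, `daemon`, `master_only`, and friends) and the `key: value` layout parse cleanly with Python's stdlib `configparser`, whose `getboolean()` accepts the documented `1`/`0` style. A minimal sketch under those assumptions (the host name below is a placeholder, not a real endpoint):

```python
import configparser

# Sketch: reading a proxlb.conf-style snippet. configparser supports
# the "key: value" delimiter used above, and getboolean() maps
# "1"/"0" (also "yes"/"no", "true"/"false") to real Python bools.
cfg_text = """
[proxmox]
api_host: hypervisor01.example.com
verify_ssl: 1
timeout: 10
[service]
daemon: 1
master_only: 0
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

assert cfg.getboolean("proxmox", "verify_ssl") is True
assert cfg.getboolean("service", "master_only") is False
assert cfg.getint("proxmox", "timeout") == 10
```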
#### Notes
* If running ProxLB on more than one Proxmox node you can set `api_host` to a comma-separated list of each node's IP address or hostname. (Example: `api_host: node01.gyptazy.ch,node02.gyptazy.ch,node03.gyptazy.ch`)
* If running ProxLB on more than one Proxmox node you can set `api_host` to a comma-separated list of each node's IP address or hostname. (Example: `api_host: node01.gyptazy.com,node02.gyptazy.com,node03.gyptazy.com`)
* The `verify_ssl` parameter can switch between the mode to verify trusted remote certificates. Keep in mind, that even local ones are **not** trusted by default and need to be imported to the truststore.
* Even when using only the `vm_balancing` mode, ensure to have the other sections listed in your config:
```
@@ -193,6 +199,8 @@ The following options and parameters are currently supported:
| -d | --dry-run | Performs a dry-run without doing any actions. | Unset |
| -j | --json | Returns a JSON of the VM movement. | Unset |
| -b | --best-node | Returns the best next node for a VM/CT placement (useful for further usage with Terraform/Ansible). | Unset |
| -m | --maintenance | Sets node(s) to maintenance mode & moves workloads away. | Unset |
| -v | --version | Returns the ProxLB version on stdout. | Unset |
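The CLI surface in the table above can be sketched with stdlib `argparse`; the flag names come from the table, while the argparse wiring (and treating `-m` as a simple switch) is illustrative rather than ProxLB's actual implementation:

```python
import argparse

# Sketch (illustrative): the proxlb flags from the options table.
parser = argparse.ArgumentParser(prog="proxlb")
parser.add_argument("-d", "--dry-run", action="store_true",
                    help="perform a dry-run without doing any actions")
parser.add_argument("-j", "--json", action="store_true",
                    help="return a JSON of the VM movement")
parser.add_argument("-b", "--best-node", action="store_true",
                    help="return the best next node for a VM/CT placement")
parser.add_argument("-m", "--maintenance", action="store_true",
                    help="set node(s) to maintenance mode & move workloads away")
parser.add_argument("-v", "--version", action="version",
                    version="proxlb 1.0.5")

args = parser.parse_args(["--dry-run", "--json"])
assert args.dry_run and args.json and not args.best_node
```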
### Balancing
#### General
@@ -231,13 +239,13 @@ Limitations:
### Affinity Rules / Grouping Relationships
#### Affinity (Stay Together)
<img align="left" src="https://cdn.gyptazy.ch/images/plb-rebalancing-include-balance-group.jpg"/> Access the Proxmox Web UI by opening your web browser and navigating to your Proxmox VE web interface, then log in with your credentials. Navigate to the VM you want to tag by selecting it from the left-hand navigation panel. Click on the "Options" tab to view the VM's options, then select "Edit" or "Add" (depending on whether you are editing an existing tag or adding a new one). In the tag field, enter plb_include_ followed by your unique identifier, for example, plb_include_group1. Save the changes to apply the tag to the VM. Repeat these steps for each VM that should be included in the group.
<img align="left" src="https://cdn.gyptazy.com/images/plb-rebalancing-include-balance-group.jpg"/> Access the Proxmox Web UI by opening your web browser and navigating to your Proxmox VE web interface, then log in with your credentials. Navigate to the VM you want to tag by selecting it from the left-hand navigation panel. Click on the "Options" tab to view the VM's options, then select "Edit" or "Add" (depending on whether you are editing an existing tag or adding a new one). In the tag field, enter plb_include_ followed by your unique identifier, for example, plb_include_group1. Save the changes to apply the tag to the VM. Repeat these steps for each VM that should be included in the group.
#### Anti-Affinity (Keep Apart)
<img align="left" src="https://cdn.gyptazy.ch/images/plb-rebalancing-exclude-balance-group.jpg"/> Access the Proxmox Web UI by opening your web browser and navigating to your Proxmox VE web interface, then log in with your credentials. Navigate to the VM you want to tag by selecting it from the left-hand navigation panel. Click on the "Options" tab to view the VM's options, then select "Edit" or "Add" (depending on whether you are editing an existing tag or adding a new one). In the tag field, enter plb_exclude_ followed by your unique identifier, for example, plb_exclude_critical. Save the changes to apply the tag to the VM. Repeat these steps for each VM that should be excluded from being on the same node.
<img align="left" src="https://cdn.gyptazy.com/images/plb-rebalancing-exclude-balance-group.jpg"/> Access the Proxmox Web UI by opening your web browser and navigating to your Proxmox VE web interface, then log in with your credentials. Navigate to the VM you want to tag by selecting it from the left-hand navigation panel. Click on the "Options" tab to view the VM's options, then select "Edit" or "Add" (depending on whether you are editing an existing tag or adding a new one). In the tag field, enter plb_exclude_ followed by your unique identifier, for example, plb_exclude_critical. Save the changes to apply the tag to the VM. Repeat these steps for each VM that should be excluded from being on the same node.
#### Ignore VMs (Tag Style)
<img align="left" src="https://cdn.gyptazy.ch/images/plb-rebalancing-ignore-vm.jpg"/> In Proxmox, you can ensure that certain VMs are ignored during the rebalancing process by setting a specific tag within the Proxmox Web UI, rather than solely relying on configurations in the ProxLB config file. This can be achieved by adding the tag 'plb_ignore_vm' to the VM. Once this tag is applied, the VM will be excluded from any further rebalancing operations, simplifying the management process.
<img align="left" src="https://cdn.gyptazy.com/images/plb-rebalancing-ignore-vm.jpg"/> In Proxmox, you can ensure that certain VMs are ignored during the rebalancing process by setting a specific tag within the Proxmox Web UI, rather than solely relying on configurations in the ProxLB config file. This can be achieved by adding the tag 'plb_ignore_vm' to the VM. Once this tag is applied, the VM will be excluded from any further rebalancing operations, simplifying the management process.
### Systemd
When installing via a Linux distribution package (such as a .deb or .rpm file), a systemd unit file is shipped along. The default configuration file will be sourced from `/etc/proxlb/proxlb.conf`.
@@ -254,7 +262,7 @@ A manual installation is possible and also supports BSD based systems. Proxmox R
The executable must be able to read the config file, if no dedicated config file is given by the `-c` argument, PLB tries to read it from `/etc/proxlb/proxlb.conf`.
### Proxmox GUI Integration
<img align="left" src="https://cdn.gyptazy.ch/images/proxlb-GUI-integration.jpg"/> PLB can also be directly be used from the Proxmox Web UI by installing the optional package `pve-proxmoxlb-service-ui` package which has a dependency on the `proxlb` package. For the Web UI integration, it requires to be installed (in addition) on the nodes on the cluster. Afterwards, a new menu item is present in the HA chapter called `Rebalancing`. This chapter provides two possibilities:
<img align="left" src="https://cdn.gyptazy.com/images/proxlb-GUI-integration.jpg"/> PLB can also be directly be used from the Proxmox Web UI by installing the optional package `pve-proxmoxlb-service-ui` package which has a dependency on the `proxlb` package. For the Web UI integration, it requires to be installed (in addition) on the nodes on the cluster. Afterwards, a new menu item is present in the HA chapter called `Rebalancing`. This chapter provides two possibilities:
* Rebalancing VM workloads
* Migrate VM workloads away from a defined node (e.g. maintenance preparation)
@@ -262,8 +270,8 @@ The executable must be able to read the config file, if no dedicated config file
The easiest way to get started is by using the ready-to-use packages that I provide on my CDN and to run it on a Linux Debian based system. This can also be one of the Proxmox nodes itself.
```
wget https://cdn.gyptazy.ch/files/amd64/debian/proxlb/proxlb_1.0.2_amd64.deb
dpkg -i proxlb_1.0.2_amd64.deb
wget https://cdn.gyptazy.com/files/os/debian/proxlb/proxlb_1.0.5_amd64.deb
dpkg -i proxlb_1.0.5_amd64.deb
# Adjust your config
vi /etc/proxlb/proxlb.conf
systemctl restart proxlb
@@ -316,48 +324,56 @@ Here you can find some overviews of references for and about the ProxLB (PLB):
| Description | Link |
|------|:------:|
| General introduction into ProxLB | https://gyptazy.ch/blog/proxlb-rebalancing-vm-workloads-across-nodes-in-proxmox-clusters/ |
| Howto install and use ProxLB on Debian to rebalance vm workloads in a Proxmox cluster | https://gyptazy.ch/howtos/howto-install-and-use-proxlb-to-rebalance-vm-workloads-across-nodes-in-proxmox-clusters/ |
| General introduction into ProxLB | https://gyptazy.com/blog/proxlb-rebalancing-vm-workloads-across-nodes-in-proxmox-clusters/ |
| Howto install and use ProxLB on Debian to rebalance vm workloads in a Proxmox cluster | https://gyptazy.com/howtos/howto-install-and-use-proxlb-to-rebalance-vm-workloads-across-nodes-in-proxmox-clusters/ |
## Downloads
ProxLB can be obtained in many different ways, depending on which use case you prefer. You can simply copy the code from GitHub, use the prebuilt packages for Debian or RedHat based systems, use a repository to keep ProxLB always up to date, or simply use a container image for Docker/Podman.
### Packages
Ready to use packages can be found at:
* https://cdn.gyptazy.ch/files/amd64/debian/proxlb/
* https://cdn.gyptazy.ch/files/amd64/ubuntu/proxlb/
* https://cdn.gyptazy.ch/files/amd64/redhat/proxlb/
* https://cdn.gyptazy.ch/files/amd64/freebsd/proxlb/
* https://cdn.gyptazy.com/files/os/debian/proxlb/
* https://cdn.gyptazy.com/files/os/ubuntu/proxlb/
* https://cdn.gyptazy.com/files/os/redhat/proxlb/
### Repository
Debian based systems can also use the repository by adding the following line to their apt sources:
#### Stable Releases
```
deb https://repo.gyptazy.ch/ /
deb https://repo.gyptazy.com/stable /
```
The Repository's GPG key can be found at: `https://repo.gyptazy.ch/repo/KEY.gpg`
#### Beta/Testing Releases
```
deb https://repo.gyptazy.com/testing /
```
The Repository's GPG key can be found at: `https://repo.gyptazy.com/repository.gpg`
You can also simply import it by running:
```
# KeyID: DEB76ADF7A0BAADB51792782FD6A7A70C11226AA
# SHA256: 5e44fffa09c747886ee37cc6e9e7eaf37c6734443cc648eaf0a9241a89084383 KEY.gpg
# KeyID: 17169F23F9F71A14AD49EDADDB51D3EB01824F4C
# UID: gyptazy Solutions Repository <contact@gyptazy.com>
# SHA256: 52c267e6f4ec799d40cdbdb29fa518533ac7942dab557fa4c217a76f90d6b0f3 repository.gpg
wget -O /etc/apt/trusted.gpg.d/proxlb.asc https://repo.gyptazy.ch/repo/KEY.gpg
wget -O /etc/apt/trusted.gpg.d/proxlb.asc https://repo.gyptazy.com/repository.gpg
```
*Note: The defined repositories `repo.gyptazy.ch` and `repo.proxlb.de` are the same!*
*Note: The defined repositories `repo.gyptazy.com` and `repo.proxlb.de` are the same!*
### Container Images (Docker/Podman)
Container Images for Podman, Docker etc., can be found at:
| Version | Image |
|------|:------:|
| latest | cr.gyptazy.ch/proxlb/proxlb:latest |
| v1.0.2 | cr.gyptazy.ch/proxlb/proxlb:v1.0.2 |
| v1.0.0 | cr.gyptazy.ch/proxlb/proxlb:v1.0.0 |
| v0.9.9 | cr.gyptazy.ch/proxlb/proxlb:v0.9.9 |
| latest | cr.gyptazy.com/proxlb/proxlb:latest |
| v1.0.5 | cr.gyptazy.com/proxlb/proxlb:v1.0.5 |
| v1.0.4 | cr.gyptazy.com/proxlb/proxlb:v1.0.4 |
| v1.0.3 | cr.gyptazy.com/proxlb/proxlb:v1.0.3 |
| v1.0.2 | cr.gyptazy.com/proxlb/proxlb:v1.0.2 |
| v1.0.0 | cr.gyptazy.com/proxlb/proxlb:v1.0.0 |
| v0.9.9 | cr.gyptazy.com/proxlb/proxlb:v0.9.9 |
## Misc
### Bugs
@@ -370,13 +386,13 @@ Feel free to add further documentation, to adjust already existing one or to con
You can also find additional and more detailed documentation within the [docs/](https://github.com/gyptazy/ProxLB/tree/main/docs) directory.
### Support
If you need assistance or have any questions, we offer support through our dedicated [chat room](https://matrix.to/#/#proxlb:gyptazy.ch) in Matrix and on Reddit. Join our community for real-time help, advice, and discussions. Connect with us in our dedicated chat room for immediate support and live interaction with other users and developers. You can also visit our [GitHub Community](https://github.com/gyptazy/ProxLB/discussions/) to post your queries, share your experiences, and get support from fellow community members and moderators. You may also just open directly an issue [here](https://github.com/gyptazy/ProxLB/issues) on GitHub. We are here to help and ensure you have the best experience possible.
If you need assistance or have any questions, we offer support through our dedicated [chat room](https://matrix.to/#/#proxlb:gyptazy.com) in Matrix and on Reddit. Join our community for real-time help, advice, and discussions. Connect with us in our dedicated chat room for immediate support and live interaction with other users and developers. You can also visit our [GitHub Community](https://github.com/gyptazy/ProxLB/discussions/) to post your queries, share your experiences, and get support from fellow community members and moderators. You may also just open directly an issue [here](https://github.com/gyptazy/ProxLB/issues) on GitHub. We are here to help and ensure you have the best experience possible.
| Support Channel | Link |
|------|:------:|
| Matrix | [#proxlb:gyptazy.ch](https://matrix.to/#/#proxlb:gyptazy.ch) |
| Matrix | [#proxlb:gyptazy.com](https://matrix.to/#/#proxlb:gyptazy.com) |
| GitHub Community | [GitHub Community](https://github.com/gyptazy/ProxLB/discussions/)
| GitHub | [ProxLB GitHub](https://github.com/gyptazy/ProxLB/issues) |
### Author(s)
* Florian Paul Azim Hoberg @gyptazy (https://gyptazy.ch)
* Florian Paul Azim Hoberg @gyptazy (https://gyptazy.com)


@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.16)
project(proxmox-rebalancing-service VERSION 1.0.2)
project(proxmox-rebalancing-service VERSION 1.0.5)
install(PROGRAMS ../proxlb DESTINATION /bin)
install(FILES ../proxlb.conf DESTINATION /etc/proxlb)
@@ -9,8 +9,8 @@ install(FILES proxlb.service DESTINATION /etc/systemd/system)
set(CPACK_PACKAGE_NAME "proxlb")
set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/../LICENSE")
set(CPACK_RESOURCE_FILE_README "${CMAKE_CURRENT_SOURCE_DIR}/../README.md")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "Florian Paul Azim <gyptazy> Hoberg <gyptazy@gyptazy.ch>")
set(CPACK_PACKAGE_CONTACT "Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "Florian Paul Azim <gyptazy> Hoberg <gyptazy@gyptazy.com>")
set(CPACK_PACKAGE_CONTACT "Florian Paul Azim Hoberg <gyptazy@gyptazy.com>")
set(CPACK_PACKAGE_VENDOR "gyptazy")
# RPM packaging


@@ -1,3 +1,45 @@
proxlb (1.0.5) unstable; urgency=low
* Fix migration from local disks.
* Fix allowed values (add DEBUG, WARNING) for log verbosity.
* Fix node (and its objects) evaluation when not reachable (e.g., maintenance).
* Fix evaluation of maintenance mode where comparing list & string resulted in a crash.
* Change docs to make bool usage in configs clearer.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Wed, 30 Oct 2024 17:02:31 +0100
proxlb (1.0.4) unstable; urgency=low
* Add feature to make API timeout configurable.
* Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster.
* Add version output cli arg.
* Run storage balancing only on supported shared storages.
* Run storage balancing only when needed to save time.
* Fix CPU balancing where calculations are done in float instead of int. (by @glitchvern)
* Fix documentation for the underlying infrastructure.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Fri, 11 Oct 2024 06:14:13 +0200
proxlb (1.0.3) unstable; urgency=low
* Add a convert function to cast all bool alike options from configparser to bools.
* Add config parser options for future features.
* Add a config version schema that must be supported by ProxLB.
* Add feature to allow the API hosts to be provided as a comma-separated list.
* Add storage balancing function.
* Add doc on how to add a dedicated user for authentication. (by @Dulux-Oz)
* Add cli arg `-b` to return the next best node for the next VM/CT placement.
* Fix some wonky code styles.
* Provide a more reasonable output when HA services are not active in a Proxmox cluster.
* Improve the underlying code base for future implementations.
* Fix documentation for the master_only parameter placed in the wrong config section.
* Fixed `master_only` function by inverting the condition.
* Improved the overall validation and error handling.
* Fix bug in the `proxlb.conf` in the vm_balancing section.
* Fix handling of unset `ignore_nodes` and `ignore_vms` resulted in an attribute error.
* Fix anti-affinity rules not evaluating a new and different node.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Wed, 11 Sep 2024 17:31:03 +0200
proxlb (1.0.2) unstable; urgency=low
* Add option to run migration in parallel or sequentially.
@@ -6,16 +48,16 @@ proxlb (1.0.2) unstable; urgency=low
* Fix CMake packaging for Debian package to avoid overwriting the config file.
* Fix some wonky code styles.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.ch> Tue, 13 Aug 2024 17:28:14 +0200
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Tue, 13 Aug 2024 17:28:14 +0200
proxlb (1.0.0) unstable; urgency=low
* Initial release of ProxLB.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.ch> Thu, 01 Aug 2024 17:04:12 +0200
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Thu, 01 Aug 2024 17:04:12 +0200
proxlb (0.9.0) unstable; urgency=low
* Initial development release of ProxLB as a tech preview.
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.ch> Sun, 07 Jul 2024 05:38:41 +0200
-- Florian Paul Azim Hoberg <gyptazy@gyptazy.com> Sun, 07 Jul 2024 05:38:41 +0200


@@ -1,11 +1,44 @@
* Tue Aug 13 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>
* Wed Oct 30 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Fix migration from local disks.
- Fix allowed values (add DEBUG, WARNING) for log verbosity.
- Fix node (and its objects) evaluation when not reachable (e.g., maintenance).
- Fix evaluation of maintenance mode where comparing list & string resulted in a crash.
- Change docs to make bool usage in configs clearer.
* Fri Oct 11 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Add feature to make API timeout configurable.
- Add maintenance mode to evacuate a node and move workloads to other nodes in the cluster.
- Add version output cli arg.
- Run storage balancing only on supported shared storages.
- Run storage balancing only when needed to save time.
- Fix CPU balancing where calculations are done in float instead of int. (by @glitchvern)
- Fix documentation for the underlying infrastructure.
* Wed Sep 12 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Add a convert function to cast all bool alike options from configparser to bools.
- Add config parser options for future features.
- Add a config version schema that must be supported by ProxLB.
- Add feature to allow the API hosts to be provided as a comma-separated list.
- Add storage balancing function.
- Add doc on how to add a dedicated user for authentication. (by @Dulux-Oz)
- Add cli arg `-b` to return the next best node for the next VM/CT placement.
- Fix some wonky code styles.
- Provide a more reasonable output when HA services are not active in a Proxmox cluster.
- Improve the underlying code base for future implementations.
- Fix documentation for the master_only parameter placed in the wrong config section.
- Fixed `master_only` function by inverting the condition.
- Improved the overall validation and error handling.
- Fix bug in the `proxlb.conf` in the vm_balancing section.
- Fix handling of unset `ignore_nodes` and `ignore_vms` resulted in an attribute error.
- Fix anti-affinity rules not evaluating a new and different node.
* Tue Aug 13 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Add option to run migration in parallel or sequentially.
- Add option to run ProxLB only on a Proxmox cluster master (req. HA feature).
- Fixed daemon timer to use hours instead of minutes.
- Fixed some wonky code styles.
* Thu Aug 01 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>
* Thu Aug 01 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Initial release of ProxLB.
* Sun Jul 07 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>
* Sun Jul 07 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.com>
- Initial development release of ProxLB as a tech preview.

proxlb

@@ -42,9 +42,9 @@ import urllib3
# Constants
__appname__ = "ProxLB"
__version__ = "1.0.3b"
__version__ = "1.0.5"
__config_version__ = 3
__author__ = "Florian Paul Azim Hoberg <gyptazy@gyptazy.ch> @gyptazy"
__author__ = "Florian Paul Azim Hoberg <gyptazy@gyptazy.com> @gyptazy"
__errors__ = False
@@ -190,7 +190,7 @@ def __validate_config_content(proxlb_config):
'vm_balancing_mode_option': ['bytes', 'percent'],
'vm_balancing_type': ['vm', 'ct', 'all'],
'storage_balancing_method': ['disk_space'],
'log_verbosity': ['INFO', 'CRITICAL']
'log_verbosity': ['DEBUG', 'INFO', 'WARNING', 'CRITICAL']
}
for string_val in validate_string_options:
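The widened `log_verbosity` whitelist above can be exercised with a small standalone check. This is only a sketch of the kind of whitelist validation ProxLB performs; the `check_option` helper is illustrative and not part of ProxLB's API.

```python
# Illustrative whitelist validation; mirrors the allowed values from the
# 1.0.5 diff but is not ProxLB's actual validator.
VALIDATE_STRING_OPTIONS = {
    'log_verbosity': ['DEBUG', 'INFO', 'WARNING', 'CRITICAL'],
}

def check_option(name, value):
    """Return True if value is allowed for the given option name."""
    allowed = VALIDATE_STRING_OPTIONS.get(name)
    return allowed is None or value in allowed

print(check_option('log_verbosity', 'DEBUG'))  # True after the 1.0.5 fix
print(check_option('log_verbosity', 'WARN'))   # False: WARN was removed
```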
@@ -204,13 +204,22 @@ def __validate_config_content(proxlb_config):
def initialize_args():
""" Initialize given arguments for ProxLB. """
argparser = argparse.ArgumentParser(description='ProxLB')
argparser.add_argument('-c', '--config', type=str, help='Path to config file.', required=False)
argparser.add_argument('-d', '--dry-run', help='Perform a dry-run without doing any actions.', action='store_true', required=False)
argparser.add_argument('-j', '--json', help='Return a JSON of the VM movement.', action='store_true', required=False)
argparser.add_argument('-b', '--best-node', help='Returns the best next node.', action='store_true', required=False)
argparser.add_argument('-c', '--config', help='Path to config file', type=str, required=False)
argparser.add_argument('-d', '--dry-run', help='Perform a dry-run without doing any actions.', action='store_true', required=False)
argparser.add_argument('-j', '--json', help='Return a JSON of the VM movement.', action='store_true', required=False)
argparser.add_argument('-b', '--best-node', help='Returns the best next node.', action='store_true', required=False)
argparser.add_argument('-m', '--maintenance', help='Sets node to maintenance mode & moves workloads away.', type=str, required=False)
argparser.add_argument('-v', '--version', help='Returns the current ProxLB version.', action='store_true', required=False)
return argparser.parse_args()
def proxlb_output_version():
""" Print ProxLB version information on CLI. """
print(f'{__appname__} version {__version__}\nRequired config version: >= {__config_version__}')
print('ProxLB support: https://github.com/gyptazy/ProxLB\nDeveloper: gyptazy.com')
sys.exit(0)
def initialize_config_path(app_args):
""" Initialize path to ProxLB config file. """
info_prefix = 'Info: [config]:'
@@ -238,6 +247,7 @@ def initialize_config_options(config_path):
proxlb_config['proxmox_api_user'] = config['proxmox']['api_user']
proxlb_config['proxmox_api_pass'] = config['proxmox']['api_pass']
proxlb_config['proxmox_api_ssl_v'] = config['proxmox']['verify_ssl']
proxlb_config['proxmox_api_timeout'] = config['proxmox'].get('timeout', 10)
# VM Balancing
proxlb_config['vm_balancing_enable'] = config['vm_balancing'].get('enable', 1)
proxlb_config['vm_balancing_method'] = config['vm_balancing'].get('method', 'memory')
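The new `proxmox_api_timeout` option above falls back to 10 seconds when the `timeout` key is absent from the `[proxmox]` section. A minimal sketch with `configparser` (the sample host and user values are illustrative):

```python
import configparser

# Sketch of how the optional timeout key is read with a default of 10
# seconds; section and key names follow the diff above.
config = configparser.ConfigParser()
config.read_string("""
[proxmox]
api_host = proxmox01.example.com
api_user = root@pam
""")

timeout = config['proxmox'].get('timeout', 10)
print(timeout)  # 10 — falls back to the default when the key is absent
```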
@@ -246,6 +256,7 @@ def initialize_config_options(config_path):
proxlb_config['vm_balancing_type'] = config['vm_balancing'].get('type', 'vm')
proxlb_config['vm_balanciness'] = config['vm_balancing'].get('balanciness', 10)
proxlb_config['vm_parallel_migrations'] = config['vm_balancing'].get('parallel_migrations', 1)
proxlb_config['vm_maintenance_nodes'] = config['vm_balancing'].get('maintenance_nodes', '')
proxlb_config['vm_ignore_nodes'] = config['vm_balancing'].get('ignore_nodes', '')
proxlb_config['vm_ignore_vms'] = config['vm_balancing'].get('ignore_vms', '')
proxlb_config['vm_enforce_affinity_groups'] = config['vm_balancing'].get('enforce_affinity_groups', 1)
@@ -313,7 +324,7 @@ def validate_config_minimum_version(proxlb_config):
logging.info(f'{info_prefix} ProxLB config version {proxlb_config["config_version"]} is fine. Required: {__config_version__}.')
def api_connect(proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v):
def api_connect(proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v, proxmox_api_timeout):
""" Connect and authenticate to the Proxmox remote API. """
error_prefix = 'Error: [api-connection]:'
warn_prefix = 'Warning: [api-connection]:'
@@ -327,7 +338,7 @@ def api_connect(proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_ap
proxmox_api_host = __api_connect_get_host(proxmox_api_host)
try:
api_object = proxmoxer.ProxmoxAPI(proxmox_api_host, user=proxmox_api_user, password=proxmox_api_pass, verify_ssl=proxmox_api_ssl_v)
api_object = proxmoxer.ProxmoxAPI(proxmox_api_host, user=proxmox_api_user, password=proxmox_api_pass, verify_ssl=proxmox_api_ssl_v, timeout=int(proxmox_api_timeout))
except proxmoxer.backends.https.AuthenticationError as proxmox_api_error:
logging.critical(f'{error_prefix} Provided credentials do not work: {proxmox_api_error}')
sys.exit(2)
@@ -366,6 +377,7 @@ def __api_connect_get_host(proxmox_api_host):
def __api_connect_test_ipv4_host(proxmox_api_host, port):
""" Validate if a given host on the IPv4 management address is reachable. """
error_prefix = 'Error: [api-connect-test-host]:'
info_prefix = 'Info: [api-connect-test-host]:'
proxmox_connection_timeout = 2
@@ -386,6 +398,7 @@ def __api_connect_test_ipv4_host(proxmox_api_host, port):
def __api_connect_test_ipv6_host(proxmox_api_host, port):
""" Validate if a given host on the IPv6 management address is reachable. """
error_prefix = 'Error: [api-connect-test-host]:'
info_prefix = 'Info: [api-connect-test-host]:'
proxmox_connection_timeout = 2
@@ -461,21 +474,24 @@ def validate_cluster_master(cluster_master):
return True
def get_node_statistics(api_object, ignore_nodes):
def get_node_statistics(api_object, ignore_nodes, maintenance_nodes):
""" Get statistics of cpu, memory and disk for each node in the cluster. """
info_prefix = 'Info: [node-statistics]:'
node_statistics = {}
ignore_nodes_list = ignore_nodes.split(',')
info_prefix = 'Info: [node-statistics]:'
node_statistics = {}
ignore_nodes_list = ignore_nodes.split(',')
maintenance_nodes_list = maintenance_nodes.split(',')
for node in api_object.nodes.get():
if node['status'] == 'online' and node['node'] not in ignore_nodes_list:
if node['status'] == 'online':
node_statistics[node['node']] = {}
node_statistics[node['node']]['maintenance'] = False
node_statistics[node['node']]['ignore'] = False
node_statistics[node['node']]['cpu_total'] = node['maxcpu']
node_statistics[node['node']]['cpu_assigned'] = node['cpu']
node_statistics[node['node']]['cpu_assigned'] = 0
node_statistics[node['node']]['cpu_assigned_percent'] = int((node_statistics[node['node']]['cpu_assigned']) / int(node_statistics[node['node']]['cpu_total']) * 100)
node_statistics[node['node']]['cpu_assigned_percent_last_run'] = 0
node_statistics[node['node']]['cpu_used'] = 0
node_statistics[node['node']]['cpu_free'] = int(node['maxcpu']) - int(node['cpu'])
node_statistics[node['node']]['cpu_used'] = node['cpu']
node_statistics[node['node']]['cpu_free'] = (node['maxcpu']) - (node['cpu'] * node['maxcpu'])
node_statistics[node['node']]['cpu_free_percent'] = int((node_statistics[node['node']]['cpu_free']) / int(node['maxcpu']) * 100)
node_statistics[node['node']]['cpu_free_percent_last_run'] = 0
node_statistics[node['node']]['memory_total'] = node['maxmem']
@@ -496,6 +512,15 @@ def get_node_statistics(api_object, ignore_nodes):
node_statistics[node['node']]['disk_free_percent_last_run'] = 0
logging.info(f'{info_prefix} Added node {node["node"]}.')
# Update node specific vars
if node['node'] in maintenance_nodes_list:
node_statistics[node['node']]['maintenance'] = True
logging.info(f'{info_prefix} Maintenance mode: {node["node"]} is set to maintenance mode.')
if node['node'] in ignore_nodes_list:
node_statistics[node['node']]['ignore'] = True
logging.info(f'{info_prefix} Ignore Node: {node["node"]} is set to be ignored.')
logging.info(f'{info_prefix} Created node statistics.')
return node_statistics
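The reworked `get_node_statistics()` above no longer skips ignored nodes outright; it now records per-node `maintenance` and `ignore` flags from the comma-separated lists. A standalone sketch of that flagging, with illustrative node names and a simplified dict layout:

```python
# Sketch of the maintenance/ignore flagging introduced in
# get_node_statistics(); not ProxLB's actual function.
def flag_nodes(nodes, ignore_nodes, maintenance_nodes):
    ignore_list = ignore_nodes.split(',')
    maintenance_list = maintenance_nodes.split(',')
    stats = {}
    for name in nodes:
        stats[name] = {'maintenance': name in maintenance_list,
                       'ignore': name in ignore_list}
    return stats

stats = flag_nodes(['node01', 'node02', 'node03'], 'node03', 'node02')
print(stats['node02'])  # {'maintenance': True, 'ignore': False}
```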
@@ -518,133 +543,136 @@ def get_vm_statistics(api_object, ignore_vms, balancing_type):
for node in api_object.nodes.get():
# Add all virtual machines if type is vm or all.
if balancing_type == 'vm' or balancing_type == 'all':
for vm in api_object.nodes(node['node']).qemu.get():
# Get VM/CT objects only when the node is online and reachable.
if node['status'] == 'online':
# Get the VM tags from API.
vm_tags = __get_vm_tags(api_object, node, vm['vmid'], 'vm')
if vm_tags is not None:
group_include, group_exclude, vm_ignore = __get_proxlb_groups(vm_tags)
# Add all virtual machines if type is vm or all.
if balancing_type == 'vm' or balancing_type == 'all':
for vm in api_object.nodes(node['node']).qemu.get():
# Get wildcard match for VMs to ignore if a wildcard pattern was
# previously found. Wildcards may slow down the task when using
# many patterns in the ignore list. Therefore, run this only if
# a wildcard pattern was found. We also do not need to validate
# this if the VM is already being ignored by a defined tag.
if vm_ignore_wildcard and not vm_ignore:
vm_ignore = __check_vm_name_wildcard_pattern(vm['name'], ignore_vms_list)
# Get the VM tags from API.
vm_tags = __get_vm_tags(api_object, node, vm['vmid'], 'vm')
if vm_tags is not None:
group_include, group_exclude, vm_ignore = __get_proxlb_groups(vm_tags)
if vm['status'] == 'running' and vm['name'] not in ignore_vms_list and not vm_ignore:
vm_statistics[vm['name']] = {}
vm_statistics[vm['name']]['group_include'] = group_include
vm_statistics[vm['name']]['group_exclude'] = group_exclude
vm_statistics[vm['name']]['cpu_total'] = vm['cpus']
vm_statistics[vm['name']]['cpu_used'] = vm['cpu']
vm_statistics[vm['name']]['memory_total'] = vm['maxmem']
vm_statistics[vm['name']]['memory_used'] = vm['mem']
vm_statistics[vm['name']]['disk_total'] = vm['maxdisk']
vm_statistics[vm['name']]['disk_used'] = vm['disk']
vm_statistics[vm['name']]['vmid'] = vm['vmid']
vm_statistics[vm['name']]['node_parent'] = node['node']
vm_statistics[vm['name']]['node_rebalance'] = node['node']
vm_statistics[vm['name']]['storage'] = {}
vm_statistics[vm['name']]['type'] = 'vm'
# Get wildcard match for VMs to ignore if a wildcard pattern was
# previously found. Wildcards may slow down the task when using
# many patterns in the ignore list. Therefore, run this only if
# a wildcard pattern was found. We also do not need to validate
# this if the VM is already being ignored by a defined tag.
if vm_ignore_wildcard and not vm_ignore:
vm_ignore = __check_vm_name_wildcard_pattern(vm['name'], ignore_vms_list)
# Get disk details of the related object.
_vm_details = api_object.nodes(node['node']).qemu(vm['vmid']).config.get()
logging.info(f'{info_prefix} Getting disk information for vm {vm["name"]}.')
if vm['status'] == 'running' and vm['name'] not in ignore_vms_list and not vm_ignore:
vm_statistics[vm['name']] = {}
vm_statistics[vm['name']]['group_include'] = group_include
vm_statistics[vm['name']]['group_exclude'] = group_exclude
vm_statistics[vm['name']]['cpu_total'] = vm['cpus']
vm_statistics[vm['name']]['cpu_used'] = vm['cpu']
vm_statistics[vm['name']]['memory_total'] = vm['maxmem']
vm_statistics[vm['name']]['memory_used'] = vm['mem']
vm_statistics[vm['name']]['disk_total'] = vm['maxdisk']
vm_statistics[vm['name']]['disk_used'] = vm['disk']
vm_statistics[vm['name']]['vmid'] = vm['vmid']
vm_statistics[vm['name']]['node_parent'] = node['node']
vm_statistics[vm['name']]['node_rebalance'] = node['node']
vm_statistics[vm['name']]['storage'] = {}
vm_statistics[vm['name']]['type'] = 'vm'
for vm_detail_key, vm_detail_value in _vm_details.items():
# vm_detail_key_validator = re.sub('\d+$', '', vm_detail_key)
vm_detail_key_validator = re.sub(r'\d+$', '', vm_detail_key)
# Get disk details of the related object.
_vm_details = api_object.nodes(node['node']).qemu(vm['vmid']).config.get()
logging.info(f'{info_prefix} Getting disk information for vm {vm["name"]}.')
if vm_detail_key_validator in _vm_details_storage_allowed:
vm_statistics[vm['name']]['storage'][vm_detail_key] = {}
match = re.match(r'([^:]+):[^/]+/(.+),iothread=\d+,size=(\d+G)', _vm_details[vm_detail_key])
for vm_detail_key, vm_detail_value in _vm_details.items():
# vm_detail_key_validator = re.sub('\d+$', '', vm_detail_key)
vm_detail_key_validator = re.sub(r'\d+$', '', vm_detail_key)
# Create an efficient match group and split the strings to assign them to the storage information.
if match:
_volume = match.group(1)
_disk_name = match.group(2)
_disk_size = match.group(3)
if vm_detail_key_validator in _vm_details_storage_allowed:
vm_statistics[vm['name']]['storage'][vm_detail_key] = {}
match = re.match(r'([^:]+):[^/]+/(.+),iothread=\d+,size=(\d+G)', _vm_details[vm_detail_key])
vm_statistics[vm['name']]['storage'][vm_detail_key]['name'] = _disk_name
vm_statistics[vm['name']]['storage'][vm_detail_key]['device_name'] = vm_detail_key
vm_statistics[vm['name']]['storage'][vm_detail_key]['volume'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_parent'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_rebalance'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['size'] = _disk_size[:-1]
logging.info(f'{info_prefix} Added disk for {vm["name"]}: Name {_disk_name} on volume {_volume} with size {_disk_size}.')
else:
logging.info(f'{info_prefix} No (or unsupported) disk(s) for {vm["name"]} found.')
# Create an efficient match group and split the strings to assign them to the storage information.
if match:
_volume = match.group(1)
_disk_name = match.group(2)
_disk_size = match.group(3)
logging.info(f'{info_prefix} Added vm {vm["name"]}.')
vm_statistics[vm['name']]['storage'][vm_detail_key]['name'] = _disk_name
vm_statistics[vm['name']]['storage'][vm_detail_key]['device_name'] = vm_detail_key
vm_statistics[vm['name']]['storage'][vm_detail_key]['volume'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_parent'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_rebalance'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['size'] = _disk_size[:-1]
logging.info(f'{info_prefix} Added disk for {vm["name"]}: Name {_disk_name} on volume {_volume} with size {_disk_size}.')
else:
logging.info(f'{info_prefix} No (or unsupported) disk(s) for {vm["name"]} found.')
# Add all containers if type is ct or all.
if balancing_type == 'ct' or balancing_type == 'all':
for vm in api_object.nodes(node['node']).lxc.get():
logging.info(f'{info_prefix} Added vm {vm["name"]}.')
logging.warning(f'{warn_prefix} Rebalancing on LXC containers (CT) always requires them to shut down.')
logging.warning(f'{warn_prefix} {vm["name"]} is from type CT and cannot be live migrated!')
# Get the VM tags from API.
vm_tags = __get_vm_tags(api_object, node, vm['vmid'], 'ct')
if vm_tags is not None:
group_include, group_exclude, vm_ignore = __get_proxlb_groups(vm_tags)
# Add all containers if type is ct or all.
if balancing_type == 'ct' or balancing_type == 'all':
for vm in api_object.nodes(node['node']).lxc.get():
# Get wildcard match for VMs to ignore if a wildcard pattern was
# previously found. Wildcards may slow down the task when using
# many patterns in the ignore list. Therefore, run this only if
# a wildcard pattern was found. We also do not need to validate
# this if the VM is already being ignored by a defined tag.
if vm_ignore_wildcard and not vm_ignore:
vm_ignore = __check_vm_name_wildcard_pattern(vm['name'], ignore_vms_list)
logging.warning(f'{warn_prefix} Rebalancing on LXC containers (CT) always requires them to shut down.')
logging.warning(f'{warn_prefix} {vm["name"]} is from type CT and cannot be live migrated!')
# Get the VM tags from API.
vm_tags = __get_vm_tags(api_object, node, vm['vmid'], 'ct')
if vm_tags is not None:
group_include, group_exclude, vm_ignore = __get_proxlb_groups(vm_tags)
if vm['status'] == 'running' and vm['name'] not in ignore_vms_list and not vm_ignore:
vm_statistics[vm['name']] = {}
vm_statistics[vm['name']]['group_include'] = group_include
vm_statistics[vm['name']]['group_exclude'] = group_exclude
vm_statistics[vm['name']]['cpu_total'] = vm['cpus']
vm_statistics[vm['name']]['cpu_used'] = vm['cpu']
vm_statistics[vm['name']]['memory_total'] = vm['maxmem']
vm_statistics[vm['name']]['memory_used'] = vm['mem']
vm_statistics[vm['name']]['disk_total'] = vm['maxdisk']
vm_statistics[vm['name']]['disk_used'] = vm['disk']
vm_statistics[vm['name']]['vmid'] = vm['vmid']
vm_statistics[vm['name']]['node_parent'] = node['node']
vm_statistics[vm['name']]['node_rebalance'] = node['node']
vm_statistics[vm['name']]['storage'] = {}
vm_statistics[vm['name']]['type'] = 'ct'
# Get wildcard match for VMs to ignore if a wildcard pattern was
# previously found. Wildcards may slow down the task when using
# many patterns in the ignore list. Therefore, run this only if
# a wildcard pattern was found. We also do not need to validate
# this if the VM is already being ignored by a defined tag.
if vm_ignore_wildcard and not vm_ignore:
vm_ignore = __check_vm_name_wildcard_pattern(vm['name'], ignore_vms_list)
# Get disk details of the related object.
_vm_details = api_object.nodes(node['node']).lxc(vm['vmid']).config.get()
logging.info(f'{info_prefix} Getting disk information for vm {vm["name"]}.')
if vm['status'] == 'running' and vm['name'] not in ignore_vms_list and not vm_ignore:
vm_statistics[vm['name']] = {}
vm_statistics[vm['name']]['group_include'] = group_include
vm_statistics[vm['name']]['group_exclude'] = group_exclude
vm_statistics[vm['name']]['cpu_total'] = vm['cpus']
vm_statistics[vm['name']]['cpu_used'] = vm['cpu']
vm_statistics[vm['name']]['memory_total'] = vm['maxmem']
vm_statistics[vm['name']]['memory_used'] = vm['mem']
vm_statistics[vm['name']]['disk_total'] = vm['maxdisk']
vm_statistics[vm['name']]['disk_used'] = vm['disk']
vm_statistics[vm['name']]['vmid'] = vm['vmid']
vm_statistics[vm['name']]['node_parent'] = node['node']
vm_statistics[vm['name']]['node_rebalance'] = node['node']
vm_statistics[vm['name']]['storage'] = {}
vm_statistics[vm['name']]['type'] = 'ct'
for vm_detail_key, vm_detail_value in _vm_details.items():
# vm_detail_key_validator = re.sub('\d+$', '', vm_detail_key)
vm_detail_key_validator = re.sub(r'\d+$', '', vm_detail_key)
# Get disk details of the related object.
_vm_details = api_object.nodes(node['node']).lxc(vm['vmid']).config.get()
logging.info(f'{info_prefix} Getting disk information for vm {vm["name"]}.')
if vm_detail_key_validator in _vm_details_storage_allowed:
vm_statistics[vm['name']]['storage'][vm_detail_key] = {}
match = re.match(r'(?P<volume>[^:]+):(?P<disk_name>[^,]+),size=(?P<disk_size>\S+)', _vm_details[vm_detail_key])
for vm_detail_key, vm_detail_value in _vm_details.items():
# vm_detail_key_validator = re.sub('\d+$', '', vm_detail_key)
vm_detail_key_validator = re.sub(r'\d+$', '', vm_detail_key)
# Create an efficient match group and split the strings to assign them to the storage information.
if match:
_volume = match.group(1)
_disk_name = match.group(2)
_disk_size = match.group(3)
if vm_detail_key_validator in _vm_details_storage_allowed:
vm_statistics[vm['name']]['storage'][vm_detail_key] = {}
match = re.match(r'(?P<volume>[^:]+):(?P<disk_name>[^,]+),size=(?P<disk_size>\S+)', _vm_details[vm_detail_key])
vm_statistics[vm['name']]['storage'][vm_detail_key]['name'] = _disk_name
vm_statistics[vm['name']]['storage'][vm_detail_key]['device_name'] = vm_detail_key
vm_statistics[vm['name']]['storage'][vm_detail_key]['volume'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_parent'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_rebalance'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['size'] = _disk_size[:-1]
logging.info(f'{info_prefix} Added disk for {vm["name"]}: Name {_disk_name} on volume {_volume} with size {_disk_size}.')
else:
logging.info(f'{info_prefix} No disks for {vm["name"]} found.')
# Create an efficient match group and split the strings to assign them to the storage information.
if match:
_volume = match.group(1)
_disk_name = match.group(2)
_disk_size = match.group(3)
logging.info(f'{info_prefix} Added vm {vm["name"]}.')
vm_statistics[vm['name']]['storage'][vm_detail_key]['name'] = _disk_name
vm_statistics[vm['name']]['storage'][vm_detail_key]['device_name'] = vm_detail_key
vm_statistics[vm['name']]['storage'][vm_detail_key]['volume'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_parent'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['storage_rebalance'] = _volume
vm_statistics[vm['name']]['storage'][vm_detail_key]['size'] = _disk_size[:-1]
logging.info(f'{info_prefix} Added disk for {vm["name"]}: Name {_disk_name} on volume {_volume} with size {_disk_size}.')
else:
logging.info(f'{info_prefix} No disks for {vm["name"]} found.')
logging.info(f'{info_prefix} Added vm {vm["name"]}.')
logging.info(f'{info_prefix} Created VM statistics.')
return vm_statistics
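The QEMU disk pattern used in `get_vm_statistics()` splits a Proxmox volume ID into storage, disk name, and size. A sketch applying that regex to a sample string (the volume ID below is illustrative, in Proxmox's `storage:vmid/file` form):

```python
import re

# The QEMU disk pattern from the hunk above, applied to a sample value.
disk_pattern = r'([^:]+):[^/]+/(.+),iothread=\d+,size=(\d+G)'
sample = 'nfs01:100/vm-100-disk-0.qcow2,iothread=1,size=32G'

match = re.match(disk_pattern, sample)
if match:
    volume, disk_name, disk_size = match.groups()
    print(volume, disk_name, disk_size)  # nfs01 vm-100-disk-0.qcow2 32G
```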
@@ -680,6 +708,7 @@ def update_node_statistics(node_statistics, vm_statistics):
def get_storage_statistics(api_object):
""" Get statistics of all storage in the cluster. """
info_prefix = 'Info: [storage-statistics]:'
storage_whitelist = ['nfs']
storage_statistics = {}
for node in api_object.nodes.get():
@@ -688,7 +717,7 @@ def get_storage_statistics(api_object):
# Only add enabled and active storage repositories that might be suitable for further
# storage balancing.
if storage['enabled'] and storage['active'] and storage['shared']:
if storage['enabled'] and storage['active'] and storage['shared'] and storage['type'] in storage_whitelist:
storage_statistics[storage['storage']] = {}
storage_statistics[storage['storage']]['name'] = storage['storage']
storage_statistics[storage['storage']]['total'] = storage['total']
@@ -802,11 +831,17 @@ def balancing_vm_calculations(balancing_method, balancing_mode, balancing_mode_o
__validate_vm_statistics(vm_statistics)
rebalance = __validate_balanciness(balanciness, balancing_method, balancing_mode, node_statistics)
# Run rebalancing calculations.
if rebalance:
# Get most used/assigned resources of the VM and the most free or less allocated node.
resources_vm_most_used, processed_vms = __get_most_used_resources_vm(balancing_method, balancing_mode, vm_statistics, processed_vms)
resources_node_most_free = __get_most_free_resources_node(balancing_method, balancing_mode, balancing_mode_option, node_statistics)
# If most used vm is on most free node then skip it and get another one.
while resources_vm_most_used[1]['node_parent'] == resources_node_most_free[0] and len(processed_vms) < len(vm_statistics):
resources_vm_most_used, processed_vms = __get_most_used_resources_vm(balancing_method, balancing_mode, vm_statistics, processed_vms)
logging.debug(f'{info_prefix} processed {len(processed_vms)} out of {len(vm_statistics)} vms.')
# Update resource statistics for VMs and nodes.
node_statistics, vm_statistics = __update_vm_resource_statistics(resources_vm_most_used, resources_node_most_free,
vm_statistics, node_statistics, balancing_method, balancing_mode)
@@ -823,14 +858,42 @@ def balancing_vm_calculations(balancing_method, balancing_mode, balancing_mode_o
logging.info(f'{info_prefix} Best next node for VM & CT placement: {best_next_node[0]}')
sys.exit(0)
# # Honour groupings for include and exclude groups for rebalancing VMs.
# node_statistics, vm_statistics = __get_vm_tags_include_groups(vm_statistics, node_statistics, balancing_method, balancing_mode)
# node_statistics, vm_statistics = __get_vm_tags_exclude_groups(vm_statistics, node_statistics, balancing_method, balancing_mode)
logging.info(f'{info_prefix} Balancing calculations done.')
return node_statistics, vm_statistics
def balancing_vm_maintenance(proxlb_config, app_args, node_statistics, vm_statistics):
""" Calculate re-balancing of VMs that need to be moved away from maintenance nodes. """
info_prefix = 'Info: [rebalancing-maintenance-vm-calculator]:'
maintenance_nodes_list = proxlb_config['vm_maintenance_nodes'].split(',')
nodes_present = list(node_statistics.keys())
balancing_method = proxlb_config['vm_balancing_method']
balancing_mode = proxlb_config['vm_balancing_mode']
balancing_mode_option = proxlb_config['vm_balancing_mode_option']
# Merge maintenance nodes from config and cli args.
if app_args.maintenance is not None:
logging.info(f'{info_prefix} Maintenance nodes from CLI arg and config will be merged.')
maintenance_nodes_list = maintenance_nodes_list + app_args.maintenance.split(',')
# Ensure that only existing nodes in the cluster will be used.
if len(proxlb_config['vm_maintenance_nodes']) > 1:
maintenance_nodes_list = set(maintenance_nodes_list) & set(nodes_present)
logging.info(f'{info_prefix} Maintenance mode for the following hosts defined: {maintenance_nodes_list}')
else:
logging.info(f'{info_prefix} No nodes for maintenance mode defined.')
return node_statistics, vm_statistics
for node_name in maintenance_nodes_list:
node_vms = list(filter(lambda item: item[0] if item[1]['node_parent'] == node_name else [], vm_statistics.items()))
# Update resource statistics for VMs and nodes.
for vm in node_vms:
resources_node_most_free = __get_most_free_resources_node(balancing_method, balancing_mode, balancing_mode_option, node_statistics)
node_statistics, vm_statistics = __update_vm_resource_statistics(vm, resources_node_most_free, vm_statistics, node_statistics, balancing_method, balancing_mode)
return node_statistics, vm_statistics
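The merge step in `balancing_vm_maintenance()` above combines maintenance nodes from the config and the `--maintenance` CLI argument, then intersects with the nodes actually present in the cluster. A sketch with illustrative node names:

```python
# Sketch of the maintenance-node merge: config list + CLI list, then
# keep only nodes that exist in the cluster (names are illustrative).
config_nodes = 'node02,node05'          # vm_maintenance_nodes from config
cli_nodes = 'node03'                    # value of --maintenance
nodes_present = ['node01', 'node02', 'node03']

maintenance = config_nodes.split(',') + cli_nodes.split(',')
maintenance = set(maintenance) & set(nodes_present)
print(sorted(maintenance))  # ['node02', 'node03'] — node05 is not in the cluster
```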
def __validate_balancing_method(balancing_method):
""" Validate for valid and supported balancing method. """
error_prefix = 'Error: [balancing-method-validation]:'
@@ -894,8 +957,9 @@ def __validate_balanciness(balanciness, balancing_method, balancing_mode, node_s
return False
# Add node information to resource list.
node_resource_percent_list.append(int(node_info[f'{balancing_method}_{node_resource_selector}_percent']))
logging.debug(f'{info_prefix} Node: {node_name} with values: {node_info}')
if not node_statistics[node_name]['maintenance']:
node_resource_percent_list.append(int(node_info[f'{balancing_method}_{node_resource_selector}_percent']))
logging.debug(f'{info_prefix} Node: {node_name} with values: {node_info}')
# Create a sorted list of the delta + balanciness between the node resources.
node_resource_percent_list_sorted = sorted(node_resource_percent_list)
@@ -934,11 +998,11 @@ def __get_most_free_resources_node(balancing_method, balancing_mode, balancing_m
# Return the node information based on the balancing mode.
if balancing_mode == 'used' and balancing_mode_option == 'bytes':
node = max(node_statistics.items(), key=lambda item: item[1][f'{balancing_method}_free'] if not item[1]['maintenance'] else -float('inf'))
if balancing_mode == 'used' and balancing_mode_option == 'percent':
node = max(node_statistics.items(), key=lambda item: item[1][f'{balancing_method}_free_percent'] if not item[1]['maintenance'] else -float('inf'))
if balancing_mode == 'assigned':
node = min(node_statistics.items(), key=lambda item: item[1][f'{balancing_method}_assigned'] if not item[1]['maintenance'] and (item[1][f'{balancing_method}_assigned_percent'] > 0 or item[1][f'{balancing_method}_assigned_percent'] < 100) else -float('inf'))
logging.info(f'{info_prefix} {node}')
return node
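The hunk above keeps maintenance nodes out of the `max()`/`min()` selection by mapping them to `-float('inf')` in the key function, so they can never win the comparison. A self-contained sketch of that sentinel pattern with hypothetical node statistics:

```python
# Sketch of the -inf sentinel pattern used above: a maintenance node is
# mapped to -inf in the key function, so max() can never select it even
# if it has the most free resources.
node_statistics = {
    'node01': {'memory_free': 4,  'maintenance': False},
    'node02': {'memory_free': 32, 'maintenance': True},   # would win, but is in maintenance
    'node03': {'memory_free': 16, 'maintenance': False},
}

best = max(
    node_statistics.items(),
    key=lambda item: item[1]['memory_free'] if not item[1]['maintenance'] else -float('inf'),
)
```

Here `best` is the `('node03', …)` entry: `node02` has the most free memory but is excluded by the sentinel.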
@@ -1125,7 +1189,8 @@ def __run_vm_rebalancing(api_object, _vm_vm_statistics, app_args, parallel_migra
# Migrate type VM (live migration).
if value['type'] == 'vm':
logging.info(f'{info_prefix} Rebalancing VM {vm} from node {value["node_parent"]} to node {value["node_rebalance"]}.')
options = {'target': value['node_rebalance'], 'online': 1, 'with-local-disks': 1}
job_id = api_object.nodes(value['node_parent']).qemu(value['vmid']).migrate().post(**options)
# Migrate type CT (requires restart of container).
if value['type'] == 'ct':
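The migration call above packs its parameters into a dict and unpacks them with `**` because `with-local-disks` contains a hyphen and therefore cannot be written as a literal Python keyword argument. A minimal sketch of that workaround, with a stand-in function in place of the real API call:

```python
# Hyphenated Proxmox API parameters such as 'with-local-disks' are not valid
# Python identifiers, so `post(with-local-disks=1)` is a syntax error.
# Packing them into a dict and unpacking with ** works around this.
def migrate(**params):
    # Stand-in for api_object.nodes(...).qemu(...).migrate().post(...);
    # simply echoes the parameters it received.
    return params

options = {'target': 'node02', 'online': 1, 'with-local-disks': 1}
result = migrate(**options)
```

The same pattern applies to any API parameter whose name is not a valid Python identifier.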
@@ -1449,6 +1514,8 @@ def main():
# Initialize PAS.
initialize_logger('CRITICAL')
app_args = initialize_args()
if app_args.version:
proxlb_output_version()
config_path = initialize_config_path(app_args)
pre_validations(config_path)
@@ -1461,7 +1528,7 @@ def main():
while True:
# API Authentication.
api_object = api_connect(proxlb_config['proxmox_api_host'], proxlb_config['proxmox_api_user'], proxlb_config['proxmox_api_pass'], proxlb_config['proxmox_api_ssl_v'], proxlb_config['proxmox_api_timeout'])
# Get master node of cluster and ensure that ProxLB is only performed on the
# cluster master node to avoid ongoing rebalancing.
@@ -1472,16 +1539,20 @@ def main():
validate_daemon(proxlb_config['daemon'], proxlb_config['schedule'])
continue
# Get metrics & statistics for vms and nodes.
if proxlb_config['vm_balancing_enable'] or proxlb_config['storage_balancing_enable'] or app_args.best_node:
node_statistics = get_node_statistics(api_object, proxlb_config['vm_ignore_nodes'], proxlb_config['vm_maintenance_nodes'])
vm_statistics = get_vm_statistics(api_object, proxlb_config['vm_ignore_vms'], proxlb_config['vm_balancing_type'])
node_statistics = update_node_statistics(node_statistics, vm_statistics)
# Obtaining metrics for the storage may take longer times and is not needed for VM/CT balancing.
# We can save time by skipping this when not really needed.
if proxlb_config['storage_balancing_enable']:
storage_statistics = get_storage_statistics(api_object)
# Execute VM/CT balancing sub-routines.
if proxlb_config['vm_balancing_enable'] or app_args.best_node:
node_statistics, vm_statistics = balancing_vm_calculations(proxlb_config['vm_balancing_method'], proxlb_config['vm_balancing_mode'], proxlb_config['vm_balancing_mode_option'], node_statistics, vm_statistics, proxlb_config['vm_balanciness'], app_args, rebalance=False, processed_vms=[])
node_statistics, vm_statistics = balancing_vm_maintenance(proxlb_config, app_args, node_statistics, vm_statistics)
node_statistics, vm_statistics = balancing_vm_affinity_groups(node_statistics, vm_statistics, proxlb_config['vm_balancing_method'], proxlb_config['vm_balancing_mode'])
vm_output_statistics = run_rebalancing(api_object, vm_statistics, app_args, proxlb_config['vm_parallel_migrations'], 'vm')


@@ -7,6 +7,7 @@ verify_ssl: 1
enable: 1
method: memory
mode: used
maintenance_nodes: dummynode03,dummynode04
ignore_nodes: dummynode01,dummynode02
ignore_vms: testvm01,testvm02
[storage_balancing]