mirror of
https://github.com/gyptazy/ProxLB.git
synced 2026-04-05 20:31:57 +02:00
release(0.9.9): Prepare public development release 0.9.9
2
.changelogs/0.9.0/1_initial_release.yaml
Normal file
@@ -0,0 +1,2 @@
added:
- Development release of ProxLB.
1
.changelogs/0.9.0/release_meta.yml
Normal file
@@ -0,0 +1 @@
date: 2024-02-01
2
.changelogs/0.9.9/1_initial_release.yaml
Normal file
@@ -0,0 +1,2 @@
added:
- Initial public development release of ProxLB.
1
.changelogs/0.9.9/release_meta.yml
Normal file
@@ -0,0 +1 @@
date: 2024-07-06
2
.gitignore
vendored
Normal file
@@ -0,0 +1,2 @@
packaging/changelog-fragments-creator/
dev/
20
CHANGELOG.md
Normal file
@@ -0,0 +1,20 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [0.9.9] - 2024-07-06

### Added

- Initial public development release of ProxLB.


## [0.9.0] - 2024-02-01

### Added

- Development release of ProxLB.
172
README.md
@@ -1,2 +1,170 @@
# proxnode-balancer
Balancing VM workloads across nodes in a Proxmox cluster.
# ProxLB - (Re)Balance VM Workloads in Proxmox Clusters
<img align="left" src="https://cdn.gyptazy.ch/images/Prox-LB-logo.jpg"/>
<br>

<p float="center"><img src="https://img.shields.io/github/license/gyptazy/ProxLB"/><img src="https://img.shields.io/github/contributors/gyptazy/ProxLB"/><img src="https://img.shields.io/github/last-commit/gyptazy/ProxLB/main"/><img src="https://img.shields.io/github/issues-raw/gyptazy/ProxLB"/><img src="https://img.shields.io/github/issues-pr/gyptazy/ProxLB"/></p>


## Table of Content
* Introduction
* Video of Migration
* Features
* Usage
* Dependencies
* Options
* Parameters
* Systemd
* Manual
* Proxmox GUI Integration
* Quick Start
* Motivation
* References
* Packages
* Misc
* Bugs
* Contributing
* Author(s)

## Introduction
`ProxLB` (PLB) is an advanced tool designed to enhance the efficiency and performance of Proxmox clusters by optimizing the distribution of virtual machines (VMs) across the cluster nodes using the Proxmox API. ProxLB gathers and analyzes a comprehensive set of resource metrics from both the cluster nodes and the running VMs. These metrics include CPU usage, memory consumption, and disk utilization, specifically focusing on local disk resources.

PLB collects resource usage data from each node in the Proxmox cluster, including CPU, (local) disk and memory utilization. Additionally, it gathers resource usage statistics from all running VMs, ensuring a granular understanding of the cluster's workload distribution.

Intelligent rebalancing is a key feature of ProxLB: it rebalances VMs based on their memory, disk or CPU usage, ensuring that no node is overburdened while others remain underutilized. By distributing resources evenly, PLB helps prevent any single node from becoming a performance bottleneck, improving the reliability and stability of the cluster. Efficient rebalancing also leads to better utilization of available resources, potentially reducing the need for additional hardware investments and lowering operational costs.

Automated rebalancing reduces the need for manual actions, allowing operators to focus on other critical tasks, thereby increasing operational efficiency.
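The core placement idea described above can be sketched in a few lines of Python. This is a simplified illustration only, not ProxLB's actual implementation; the node names and free-memory numbers are made up:

```python
def pick_target_node(vm_usage, nodes_free):
    """Best-fit selection: choose the node whose free resource leaves the
    smallest non-negative slack after placing the VM; None if nothing fits."""
    best_node, best_slack = None, float("inf")
    for node, free in nodes_free.items():
        slack = free - vm_usage
        if 0 <= slack < best_slack:
            best_node, best_slack = node, slack
    return best_node

# Free memory per node in MiB (made-up numbers).
nodes_free = {"node01": 8000, "node02": 2000, "node03": 4000}
print(pick_target_node(3000, nodes_free))  # node03 leaves the least slack
```

Applying this repeatedly, largest VM first, keeps every node's free headroom as even as possible.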
### Video of Migration
<img src="https://cdn.gyptazy.ch/images/proxlb-rebalancing-demo.gif"/>

## Features
* Rebalance the cluster by:
  * Memory
  * Disk (only local storage)
  * CPU
* Performing
  * Periodically
  * One-shot solution
* Filter
  * Exclude nodes
  * Exclude virtual machines
* Migrate VM workloads away (e.g. maintenance preparation)
* Fully based on Proxmox API
* Usage
  * One-Shot (one-shot)
  * Periodically (daemon)
  * Proxmox Web GUI Integration (optional)

## Usage
Running PLB is easy: it runs almost everywhere since it only depends on `Python3` and the `proxmoxer` library. Therefore, it can run directly on a Proxmox node or on dedicated systems like Debian, RedHat, or even FreeBSD, as long as the API is reachable by the client running PLB.

### Dependencies
* Python3
* proxmoxer (Python module)

### Options
The following options can be set in the `proxlb.conf` file:

| Option | Example | Description |
|------|:------:|:------:|
| api_host | hypervisor01.gyptazy.ch | Host or IP address of the remote Proxmox API. |
| api_user | root@pam | Username for the API. |
| api_pass | FooBar | Password for the API. |
| verify_ssl | 1 | Validate SSL certificates (1) or ignore them (0). (default: 1) |
| method | memory | Defines the balancing method: `memory`, `disk` or `cpu`. (default: memory) |
| ignore_nodes | dummynode01,dummynode02 | Defines a comma-separated list of nodes to exclude. |
| ignore_vms | testvm01,testvm02 | Defines a comma-separated list of VMs to exclude. |
| daemon | 1 | Run as a daemon (1) or one-shot (0). (default: 1) |
| schedule | 24 | Interval between rebalancing runs in hours. (default: 24) |

An example of the configuration file looks like:
```
[proxmox]
api_host: hypervisor01.gyptazy.ch
api_user: root@pam
api_pass: FooBar
verify_ssl: 1
[balancing]
method: memory
ignore_nodes: dummynode01,dummynode02
ignore_vms: testvm01,testvm02
[service]
daemon: 1
```

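Programmatically, such a file can be read with Python's standard `configparser`, falling back to the documented defaults when an option is missing. A self-contained sketch, where the inline string stands in for `/etc/proxlb/proxlb.conf`:

```python
import configparser

config = configparser.ConfigParser()
# In ProxLB this would be config.read('/etc/proxlb/proxlb.conf'); a string
# stands in here so the sketch is self-contained.
config.read_string("""
[proxmox]
api_host: hypervisor01.gyptazy.ch
verify_ssl: 1
[balancing]
method: memory
[service]
daemon: 1
""")

method = config['balancing'].get('method', 'memory')      # memory, disk or cpu
schedule = int(config['service'].get('schedule', 24))     # hours between runs
verify_ssl = bool(int(config['proxmox']['verify_ssl']))   # 1 -> True, 0 -> False
print(method, schedule, verify_ssl)
```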
### Parameters
The following options and parameters are currently supported:

| Option | Long Option | Description | Default |
|------|:------:|------:|------:|
| -c | --config | Path to a config file. | /etc/proxlb/proxlb.conf |

### Systemd
When installed from a Linux distribution package (such as a .deb or .rpm file), ProxLB ships with a systemd unit file. The default configuration file is sourced from `/etc/proxlb/proxlb.conf`.

| Unit Name | Options |
|------|:------:|
| proxlb | start, stop, status, restart |

### Manual
A manual installation is possible and also supports BSD-based systems. ProxLB mainly relies on two files:
* proxlb (Python executable)
* proxlb.conf (config file)

The executable must be able to read the config file. If no dedicated config file is given by the `-c` argument, PLB tries to read it from `/etc/proxlb/proxlb.conf`.

### Proxmox GUI Integration
<img align="left" src="https://cdn.gyptazy.ch/images/proxlb-GUI-integration.jpg"/> PLB can also be used directly from the Proxmox Web UI by installing the optional `pve-proxmoxlb-service-ui` package, which depends on the `proxlb` package. For the Web UI integration, it needs to be installed (in addition) on the nodes of the cluster. Afterwards, a new menu item called `Rebalancing` is present in the HA chapter. It provides two possibilities:
* Rebalancing VM workloads
* Migrate VM workloads away from a defined node (e.g. maintenance preparation)

### Quick Start
The easiest way to get started is to use the ready-to-use packages provided on my CDN and to run ProxLB on a Debian-based Linux system. This can also be one of the Proxmox nodes itself.

```
wget https://cdn.gyptazy.ch/files/amd64/debian/proxlb/proxlb_0.9.9_amd64.deb
dpkg -i proxlb_0.9.9_amd64.deb
# Adjust your config
vi /etc/proxlb/proxlb.conf
systemctl restart proxlb
systemctl status proxlb
```

### Logging
ProxLB uses a `SystemdHandler` for logging. All log messages are written to the systemd unit's journal and can be inspected via `journalctl`.
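The handler's behavior can be sketched as follows. This is a simplified illustration of the idea (writing into a string buffer instead of stdout; the full implementation ships in the `proxlb` executable later in this commit): each record is prefixed with an sd-daemon priority tag such as `<6>` for info or `<3>` for error, which journald maps to the matching log level.

```python
import io
import logging

class MiniSystemdHandler(logging.Handler):
    """Prefix each record with an sd-daemon priority tag (<3> err, <6> info)."""
    PREFIX = {logging.ERROR: "<3> ", logging.WARNING: "<4> ", logging.INFO: "<6> "}

    def __init__(self, stream):
        super().__init__()
        self.stream = stream

    def emit(self, record):
        # Default formatter renders just the message text.
        self.stream.write(self.PREFIX.get(record.levelno, "<7> ") + self.format(record) + "\n")

buf = io.StringIO()
logger = logging.getLogger("proxlb-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(MiniSystemdHandler(buf))
logger.info("rebalancing done")
print(buf.getvalue(), end="")  # <6> rebalancing done
```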
## Motivation
As a developer managing a cluster of virtual machines for my projects, I often encountered the challenge of resource imbalance. Nodes within the cluster would become unevenly loaded, with some nodes being overburdened while others remained underutilized. This imbalance led to inefficiencies, performance bottlenecks, and increased operational costs. Frustrated by the lack of an adequate solution to address this issue, I decided to develop ProxLB (PLB) to ensure better resource distribution across my clusters.

My primary motivation for creating PLB stemmed from my work on my BoxyBSD project, where I consistently faced the difficulty of keeping nodes balanced while running various VM workloads, but also from my personal clusters. The absence of an efficient rebalancing mechanism made it challenging to achieve optimal performance and stability. Recognizing the necessity for a tool that could gather and analyze resource metrics from both the cluster nodes and the running VMs, I embarked on developing ProxLB.

PLB collects detailed resource usage data from each node in a Proxmox cluster, including CPU load, memory usage, and local disk space utilization. It also gathers comprehensive statistics from all running VMs, providing a granular understanding of the workload distribution. With this data, PLB intelligently redistributes VMs based on memory usage, local disk usage, and CPU usage. This ensures that no single node is overburdened, storage resources are evenly distributed, and the computational load is balanced, enhancing overall cluster performance.

As an advocate of the open-source philosophy, I believe in the power of community and collaboration. By sharing solutions like PLB, I aim to contribute to the collective knowledge and tools available to developers facing similar challenges. Open source fosters innovation, transparency, and mutual support, enabling developers to build on each other's work and create better solutions together.

Developing PLB was driven by a desire to solve a real problem I faced in my projects. However, the spirit behind this effort was to provide a valuable resource to the community. By open-sourcing PLB, I hope to help other developers manage their clusters more efficiently, optimize their resource usage, and reduce operational costs. Sharing this solution aligns with the core principles of open source, where the goal is not only to solve individual problems but also to contribute to the broader ecosystem.

## References
Here you can find an overview of references for and about ProxLB (PLB):

| Description | Link |
|------|:------:|
| General introduction into ProxLB | https://gyptazy.ch/blog/proxlb-rebalancing-vm-workloads-across-nodes-in-proxmox-clusters/ |
| How to install and use ProxLB on Debian to rebalance VM workloads in a Proxmox cluster | https://gyptazy.ch/howtos/howto-install-and-use-proxlb-to-rebalance-vm-workloads-across-nodes-in-proxmox-clusters/ |

## Packages
Ready-to-use packages can be found at:
* https://cdn.gyptazy.ch/files/amd64/debian/proxlb/
* https://cdn.gyptazy.ch/files/amd64/ubuntu/proxlb/
* https://cdn.gyptazy.ch/files/amd64/redhat/proxlb/
* https://cdn.gyptazy.ch/files/amd64/freebsd/proxlb/

## Misc
### Bugs
Bugs can be reported via the GitHub issue tracker [here](https://github.com/gyptazy/ProxLB/issues). You may also report bugs via email or submit PRs to fix them on your own. See also the Contributing chapter.

### Contributing
Feel free to add further documentation, to adjust existing documentation, or to contribute code. Please follow the style guide and naming conventions.

### Author(s)
* Florian Paul Azim Hoberg @gyptazy (https://gyptazy.ch)

0
docs/01_Installation.md
Normal file
0
docs/02_Configuration.md
Normal file
23
docs/03_FAQ.md
Normal file
@@ -0,0 +1,23 @@
## FAQ

### Could not import all dependencies
ProxLB requires the Python library `proxmoxer`, which can be installed from most
system repositories. If you encounter the following error message, you simply need to install it.

```
# systemctl status proxlb
x proxlb.service - Proxmox Rebalancing Service
     Loaded: loaded (/etc/systemd/system/proxlb.service; static)
     Active: failed (Result: exit-code) since Sat 2024-07-06 10:25:16 UTC; 1s ago
   Duration: 239ms
    Process: 7285 ExecStart=/usr/bin/proxlb -c /etc/proxlb/proxlb.conf (code=exited, status=2)
   Main PID: 7285 (code=exited, status=2)
        CPU: 129ms

Jul 06 10:25:16 build01 systemd[1]: Started proxlb.service - ProxLB.
Jul 06 10:25:16 build01 proxlb[7285]: proxlb: Error: [python-imports]: Could not import all dependencies. Please install "proxmoxer".
```

On Debian/Ubuntu: `apt-get install python3-proxmoxer`
If the package is not provided by your system's repositories, you can also install it by running `pip3 install proxmoxer`.
16
packaging/01_package.sh
Executable file
@@ -0,0 +1,16 @@
#!/bin/bash
apt-get install rpm cmake git make

git clone https://github.com/gyptazy/changelog-fragments-creator.git
./changelog-fragments-creator/changelog-creator -f ../.changelogs/ -o ../CHANGELOG.md
mkdir packages
mkdir build
cd build
cmake ..
cpack -G DEB .
cpack -G RPM .
cp *.deb ../packages
cp *.rpm ../packages
cd ..
rm -rf build
echo "Packages created. Packages can be found in directory: packages"
4
packaging/02_changelog_only.sh
Executable file
@@ -0,0 +1,4 @@
#!/bin/bash
git clone https://github.com/gyptazy/changelog-fragments-creator.git
./changelog-fragments-creator/changelog-creator -f ../.changelogs/ -o ../CHANGELOG.md
echo "Created changelog file"
41
packaging/CMakeLists.txt
Normal file
@@ -0,0 +1,41 @@
cmake_minimum_required(VERSION 3.16)
project(proxmox-rebalancing-service VERSION 0.9.9)

install(PROGRAMS ../proxlb DESTINATION /bin)
install(FILES ../proxlb.conf DESTINATION /etc/proxlb)
install(FILES proxlb.service DESTINATION /etc/systemd/system)

# General
set(CPACK_PACKAGE_NAME "proxlb")
set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/../LICENSE")
set(CPACK_RESOURCE_FILE_README "${CMAKE_CURRENT_SOURCE_DIR}/../README.md")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "Florian Paul Azim <gyptazy> Hoberg <gyptazy@gyptazy.ch>")
set(CPACK_PACKAGE_CONTACT "Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>")
set(CPACK_PACKAGE_VENDOR "gyptazy")

# RPM packaging
set(CPACK_PACKAGE_VERSION ${CMAKE_PROJECT_VERSION})
set(CPACK_GENERATOR "RPM")
set(CPACK_RPM_PACKAGE_ARCHITECTURE "amd64")
set(CPACK_RPM_PACKAGE_SUMMARY "ProxLB Rebalancing VM workloads within Proxmox clusters.")
set(CPACK_RPM_PACKAGE_DESCRIPTION "ProxLB Rebalancing VM workloads within Proxmox clusters.")
set(CPACK_RPM_CHANGELOG_FILE "${CMAKE_CURRENT_SOURCE_DIR}/changelog_redhat")
set(CPACK_PACKAGE_RELEASE 1)
set(CPACK_RPM_PACKAGE_LICENSE "GPL 3.0")
set(CPACK_RPM_PACKAGE_REQUIRES "python >= 3.2.0")

# DEB packaging
set(CPACK_DEBIAN_FILE_NAME DEB-DEFAULT)
set(CPACK_DEBIAN_PACKAGE_ARCHITECTURE "amd64")
set(CPACK_DEBIAN_PACKAGE_SUMMARY "ProxLB Rebalancing VM workloads within Proxmox clusters.")
set(CPACK_DEBIAN_PACKAGE_DESCRIPTION "ProxLB Rebalancing VM workloads within Proxmox clusters.")
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_CURRENT_SOURCE_DIR}/changelog_debian")
set(CPACK_DEBIAN_PACKAGE_DEPENDS "python3")
set(CPACK_DEBIAN_PACKAGE_LICENSE "GPL 3.0")


# Install
set(CPACK_PACKAGING_INSTALL_PREFIX ${CMAKE_INSTALL_PREFIX})
# Note: this overrides the CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA value set above.
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_CURRENT_SOURCE_DIR}/postinst")
set(CPACK_RPM_POST_INSTALL_SCRIPT_FILE "${CMAKE_CURRENT_SOURCE_DIR}/postinst")
include(CPack)
14
packaging/README.md
Normal file
@@ -0,0 +1,14 @@
## Build packages
Building the packages requires `cmake` as well as the deb and rpm packaging tools.
To build the packages, simply run the following commands:

```
mkdir build
cd build
cmake ..
cpack -G RPM .
cpack -G DEB .
```

When running on Debian/Ubuntu you can directly call `01_package.sh`
to create your own packages.
5
packaging/changelog_debian
Normal file
@@ -0,0 +1,5 @@
proxlb (0.9.0) unstable; urgency=low

  * Initial release of ProxLB.

 -- Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>  Sun, 07 Jul 2024 05:38:41 -0200
2
packaging/changelog_redhat
Normal file
@@ -0,0 +1,2 @@
* Sun Jul 07 2024 Florian Paul Azim Hoberg <gyptazy@gyptazy.ch>
- Initial release of ProxLB.
5
packaging/postinst
Normal file
@@ -0,0 +1,5 @@
#!/bin/bash
useradd -m plb
chown plb:plb /etc/proxlb/proxlb.conf
chmod 600 /etc/proxlb/proxlb.conf
systemctl daemon-reload
6
packaging/proxlb.service
Normal file
@@ -0,0 +1,6 @@
[Unit]
Description=ProxLB - Rebalance VM workloads

[Service]
ExecStart=/usr/bin/proxlb -c /etc/proxlb/proxlb.conf
User=plb
409
proxlb
Executable file
@@ -0,0 +1,409 @@
#!/usr/bin/env python3

# ProxLB
# ProxLB (re)balances VM workloads across nodes in Proxmox clusters.
# ProxLB obtains current metrics from all nodes within the cluster for
# further auto balancing by memory, disk or cpu and rebalances the VMs
# over all available nodes in a cluster by having an equal resource usage.
# Copyright (C) 2024 Florian Paul Azim Hoberg @gyptazy <gyptazy@gyptazy.ch>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

import argparse
import configparser
import logging
import os
try:
    import proxmoxer
    _imports = True
except ImportError as error:
    _imports = False
import requests
import sys
import time
import urllib3


# Constants
__appname__ = "ProxLB"
__version__ = "0.9.9"
__author__ = "Florian Paul Azim Hoberg <gyptazy@gyptazy.ch> @gyptazy"
__errors__ = False


# Classes
## Logging class
class SystemdHandler(logging.Handler):
    """ Class to handle logging options. """
    PREFIX = {
        logging.CRITICAL: "<2> " + __appname__ + ": ",
        logging.ERROR: "<3> " + __appname__ + ": ",
        logging.WARNING: "<4> " + __appname__ + ": ",
        logging.INFO: "<6> " + __appname__ + ": ",
        logging.DEBUG: "<7> " + __appname__ + ": ",
        logging.NOTSET: "<7> " + __appname__ + ": ",
    }

    def __init__(self, stream=sys.stdout):
        self.stream = stream
        logging.Handler.__init__(self)

    def emit(self, record):
        try:
            msg = self.PREFIX[record.levelno] + self.format(record) + "\n"
            self.stream.write(msg)
            self.stream.flush()
        except Exception:
            self.handleError(record)


# Functions
def initialize_logger(log_level, log_handler):
    """ Initialize ProxLB logging handler. """
    info_prefix = 'Info: [logger]:'

    root_logger = logging.getLogger()
    root_logger.setLevel(log_level)
    root_logger.addHandler(SystemdHandler())
    logging.info(f'{info_prefix} Logger got initialized.')


def pre_validations(config_path):
    """ Run pre-validations as sanity checks. """
    info_prefix = 'Info: [pre-validations]:'

    __validate_imports()
    __validate_config_file(config_path)
    logging.info(f'{info_prefix} All pre-validations done.')


def post_validations():
    """ Run post-validations as sanity checks. """
    error_prefix = 'Error: [post-validations]:'
    info_prefix = 'Info: [post-validations]:'

    if __errors__:
        logging.critical(f'{error_prefix} Not all post-validations succeeded. Please validate!')
    else:
        logging.info(f'{info_prefix} All post-validations succeeded.')


def validate_daemon(daemon, schedule):
    """ Validate if ProxLB runs as a daemon. """
    info_prefix = 'Info: [daemon]:'

    if bool(int(daemon)):
        logging.info(f'{info_prefix} Running in daemon mode. Next run in {schedule} hours.')
        # Schedule is given in hours, therefore convert it to seconds.
        time.sleep(int(schedule) * 3600)
    else:
        logging.info(f'{info_prefix} Not running in daemon mode. Quitting.')
        sys.exit(0)


def __validate_imports():
    """ Validate if all Python imports succeeded. """
    error_prefix = 'Error: [python-imports]:'
    info_prefix = 'Info: [python-imports]:'

    if not _imports:
        logging.critical(f'{error_prefix} Could not import all dependencies. Please install "proxmoxer".')
        sys.exit(2)
    else:
        logging.info(f'{info_prefix} All required dependencies were imported.')


def __validate_config_file(config_path):
    """ Validate that the given config file exists. """
    error_prefix = 'Error: [config]:'
    info_prefix = 'Info: [config]:'

    if not os.path.isfile(config_path):
        logging.critical(f'{error_prefix} Could not find config file in: {config_path}.')
        sys.exit(2)
    else:
        logging.info(f'{info_prefix} Configuration file loaded from: {config_path}.')


def initialize_args():
    """ Initialize given arguments for ProxLB. """
    argparser = argparse.ArgumentParser(description='ProxLB')
    argparser.add_argument('-c', '--config', type=str, help='Path to config file.')
    return argparser.parse_args()


def initialize_config_path(app_args):
    """ Initialize path to ProxLB config file. """
    info_prefix = 'Info: [config]:'

    config_path = app_args.config
    if app_args.config is None:
        config_path = '/etc/proxlb/proxlb.conf'
        logging.info(f'{info_prefix} No config file provided. Falling back to: {config_path}.')
    else:
        logging.info(f'{info_prefix} Using config file: {config_path}.')
    return config_path


def initialize_config_options(config_path):
    """ Read configuration from given config file for ProxLB. """
    error_prefix = 'Error: [config]:'
    info_prefix = 'Info: [config]:'

    try:
        config = configparser.ConfigParser()
        config.read(config_path)
        # Proxmox config
        proxmox_api_host = config['proxmox']['api_host']
        proxmox_api_user = config['proxmox']['api_user']
        proxmox_api_pass = config['proxmox']['api_pass']
        proxmox_api_ssl_v = config['proxmox']['verify_ssl']
        # Balancing
        balancing_method = config['balancing'].get('method', 'memory')
        ignore_nodes = config['balancing'].get('ignore_nodes', None)
        ignore_vms = config['balancing'].get('ignore_vms', None)
        # Service
        daemon = config['service'].get('daemon', 1)
        schedule = config['service'].get('schedule', 24)
    except configparser.NoSectionError:
        logging.critical(f'{error_prefix} Could not find the required section.')
        sys.exit(2)
    except configparser.ParsingError:
        logging.critical(f'{error_prefix} Unable to parse the config file.')
        sys.exit(2)
    except KeyError:
        logging.critical(f'{error_prefix} Could not find the required options in config file.')
        sys.exit(2)

    logging.info(f'{info_prefix} Configuration file loaded.')
    return proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v, balancing_method, \
        ignore_nodes, ignore_vms, daemon, schedule


def api_connect(proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v):
    """ Connect and authenticate to the Proxmox remote API. """
    error_prefix = 'Error: [api-connection]:'
    warn_prefix = 'Warning: [api-connection]:'
    info_prefix = 'Info: [api-connection]:'
    proxmox_api_ssl_v = bool(int(proxmox_api_ssl_v))

    if not proxmox_api_ssl_v:
        requests.packages.urllib3.disable_warnings()
        logging.warning(f'{warn_prefix} API connection does not verify SSL certificate.')

    try:
        api_object = proxmoxer.ProxmoxAPI(proxmox_api_host, user=proxmox_api_user, password=proxmox_api_pass, verify_ssl=proxmox_api_ssl_v)
    except urllib3.exceptions.NameResolutionError:
        logging.critical(f'{error_prefix} Could not resolve the given host: {proxmox_api_host}.')
        sys.exit(2)
    except requests.exceptions.ConnectTimeout:
        logging.critical(f'{error_prefix} Connection time out to host: {proxmox_api_host}.')
        sys.exit(2)
    except requests.exceptions.SSLError:
        logging.critical(f'{error_prefix} SSL certificate verification failed for host: {proxmox_api_host}.')
        sys.exit(2)

    logging.info(f'{info_prefix} API connection succeeded to host: {proxmox_api_host}.')
    return api_object


def get_node_statistics(api_object, ignore_nodes):
    """ Get statistics of cpu, memory and disk for each node in the cluster. """
    info_prefix = 'Info: [node-statistics]:'
    node_statistics = {}
    # Guard against an unset option: ignore_nodes may be None.
    ignore_nodes_list = ignore_nodes.split(',') if ignore_nodes else []

    for node in api_object.nodes.get():
        if node['status'] == 'online' and node['node'] not in ignore_nodes_list:
            node_statistics[node['node']] = {}
            node_statistics[node['node']]['cpu_total'] = node['maxcpu']
            node_statistics[node['node']]['cpu_used'] = node['cpu']
            node_statistics[node['node']]['cpu_free'] = int(node['maxcpu']) - int(node['cpu'])
            node_statistics[node['node']]['memory_total'] = node['maxmem']
            node_statistics[node['node']]['memory_used'] = node['mem']
            node_statistics[node['node']]['memory_free'] = int(node['maxmem']) - int(node['mem'])
            node_statistics[node['node']]['disk_total'] = node['maxdisk']
            node_statistics[node['node']]['disk_used'] = node['disk']
            node_statistics[node['node']]['disk_free'] = int(node['maxdisk']) - int(node['disk'])
            logging.info(f'{info_prefix} Added node {node["node"]}.')

    logging.info(f'{info_prefix} Created node statistics.')
    return node_statistics


def get_vm_statistics(api_object, ignore_vms):
    """ Get statistics of cpu, memory and disk for each vm in the cluster. """
    info_prefix = 'Info: [vm-statistics]:'
    vm_statistics = {}
    # Guard against an unset option: ignore_vms may be None.
    ignore_vms_list = ignore_vms.split(',') if ignore_vms else []

    for node in api_object.nodes.get():
        for vm in api_object.nodes(node['node']).qemu.get():
            if vm['status'] == 'running' and vm['name'] not in ignore_vms_list:
                vm_statistics[vm['name']] = {}
                vm_statistics[vm['name']]['cpu_total'] = vm['cpus']
                vm_statistics[vm['name']]['cpu_used'] = vm['cpu']
                vm_statistics[vm['name']]['memory_total'] = vm['maxmem']
                vm_statistics[vm['name']]['memory_used'] = vm['mem']
                vm_statistics[vm['name']]['disk_total'] = vm['maxdisk']
                vm_statistics[vm['name']]['disk_used'] = vm['disk']
                vm_statistics[vm['name']]['vmid'] = vm['vmid']
                vm_statistics[vm['name']]['node_parent'] = node['node']
                # Rebalancing node will be overwritten after calculations.
                # If the vm stays on the node, it will be removed at a
                # later time.
                vm_statistics[vm['name']]['node_rebalance'] = node['node']
                logging.info(f'{info_prefix} Added vm {vm["name"]}.')

    logging.info(f'{info_prefix} Created VM statistics.')
    return vm_statistics


def balancing_calculations(balancing_method, node_statistics, vm_statistics):
|
||||
""" Calculate re-balancing of VMs on present nodes across the cluster. """
|
||||
error_prefix = 'Error: [rebalancing-calculator]:'
|
||||
info_prefix = 'Info: [rebalancing-calculator]:'
|
||||
|
||||
if balancing_method not in ['memory', 'disk', 'cpu']:
|
||||
logging.error(f'{error_prefix} Invalid balancing method: {balancing_method}')
|
||||
sys.exit(2)
|
||||
return node_statistics, vm_statistics
|
||||
|
||||
sorted_vms = sorted(vm_statistics.items(), key=lambda item: item[1][f'{balancing_method}_used'], reverse=True)
|
||||
logging.info(f'{info_prefix} Balancing will be done for {balancing_method} efficiency.')
|
||||
|
||||
total_resource_free = sum(node_info[f'{balancing_method}_free'] for node_info in node_statistics.values())
|
||||
total_resource_used = sum(vm_info[f'{balancing_method}_used'] for vm_info in vm_statistics.values())
|
||||
|
||||
if total_resource_used > total_resource_free:
|
||||
logging.error(f'{error_prefix} Not enough {balancing_method} resources to accommodate all VMs.')
|
||||
return node_statistics, vm_statistics
|
||||
|
||||
    # Round-robin initial distribution to ensure each node gets at least one VM.
    nodes = list(node_statistics.items())
    node_count = len(nodes)
    node_index = 0

    for vm_name, vm_info in sorted_vms:
        assigned = False
        for _ in range(node_count):
            node_name, node_info = nodes[node_index]
            if vm_info[f'{balancing_method}_used'] <= node_info[f'{balancing_method}_free']:
                vm_info['node_rebalance'] = node_name
                node_info[f'{balancing_method}_free'] -= vm_info[f'{balancing_method}_used']
                assigned = True
                node_index = (node_index + 1) % node_count
                break
            node_index = (node_index + 1) % node_count

        if not assigned:
            logging.error(f'{error_prefix} VM {vm_name} with {balancing_method} usage {vm_info[f"{balancing_method}_used"]} cannot fit into any node.')

    # Assign any remaining VMs using a best-fit strategy.
    while True:
        unassigned_vms = [vm for vm in vm_statistics.items() if 'node_rebalance' not in vm[1]]
        if not unassigned_vms:
            break

        progress = False
        for vm_name, vm_info in unassigned_vms:
            best_node_name = None
            best_node_info = None
            min_resource_diff = float('inf')

            for node_name, node_info in node_statistics.items():
                resource_free = node_info[f'{balancing_method}_free']
                resource_diff = resource_free - vm_info[f'{balancing_method}_used']
                if 0 <= resource_diff < min_resource_diff:
                    min_resource_diff = resource_diff
                    best_node_name = node_name
                    best_node_info = node_info

            if best_node_name is not None:
                vm_info['node_rebalance'] = best_node_name
                best_node_info[f'{balancing_method}_free'] -= vm_info[f'{balancing_method}_used']
                progress = True
            else:
                logging.error(f'{error_prefix} VM {vm_name} with {balancing_method} usage {vm_info[f"{balancing_method}_used"]} cannot fit into any node.')

        # Break if no VM could be placed in this pass; otherwise VMs that fit
        # nowhere would keep this loop spinning forever.
        if not progress:
            break

    # Remove VMs where 'node_rebalance' is the same as 'node_parent' since they
    # do not need to be migrated.
    vms_to_remove = [vm_name for vm_name, vm_info in vm_statistics.items() if 'node_rebalance' in vm_info and vm_info['node_rebalance'] == vm_info.get('node_parent')]
    for vm_name in vms_to_remove:
        del vm_statistics[vm_name]

    logging.info(f'{info_prefix} Balancing calculations done.')
    return node_statistics, vm_statistics


def __get_node_most_free_values(balancing_method, node_statistics):
    """ Get and return the node with the most free resources for the defined balancing method. """
    if balancing_method == 'memory':
        return max(node_statistics.items(), key=lambda item: item[1]['memory_free'])
    if balancing_method == 'disk':
        return max(node_statistics.items(), key=lambda item: item[1]['disk_free'])
    if balancing_method == 'cpu':
        return max(node_statistics.items(), key=lambda item: item[1]['cpu_free'])


def run_vm_rebalancing(api_object, vm_statistics_rebalanced):
    """ Run rebalancing of VMs to their new nodes in the cluster. """
    # Declare the module-level error flag; without `global` the assignment
    # below would only create a local variable.
    global __errors__
    error_prefix = 'Error: [rebalancing-executor]:'
    info_prefix = 'Info: [rebalancing-executor]:'

    logging.info(f'{info_prefix} Starting to rebalance VMs to their new nodes.')
    for vm, value in vm_statistics_rebalanced.items():
        try:
            logging.info(f'{info_prefix} Rebalancing vm {vm} from node {value["node_parent"]} to node {value["node_rebalance"]}.')
            api_object.nodes(value['node_parent']).qemu(value['vmid']).migrate().post(target=value['node_rebalance'], online=1)
        except proxmoxer.core.ResourceException as error_resource:
            __errors__ = True
            logging.critical(f'{error_prefix} {error_resource}')


def main():
    """ Run ProxLB for balancing VM workloads across a Proxmox cluster. """
    # Initialize ProxLB.
    initialize_logger('CRITICAL', SystemdHandler())
    app_args = initialize_args()
    config_path = initialize_config_path(app_args)
    pre_validations(config_path)

    # Parse the global config.
    proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v, balancing_method, \
        ignore_nodes, ignore_vms, daemon, schedule = initialize_config_options(config_path)

    while True:
        # API authentication.
        api_object = api_connect(proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v)

        # Get metrics & statistics for nodes and VMs.
        node_statistics = get_node_statistics(api_object, ignore_nodes)
        vm_statistics = get_vm_statistics(api_object, ignore_vms)

        # Calculate rebalancing of VMs.
        node_statistics_rebalanced, vm_statistics_rebalanced = balancing_calculations(balancing_method, node_statistics, vm_statistics)

        # Rebalance VMs to their new nodes within the cluster.
        run_vm_rebalancing(api_object, vm_statistics_rebalanced)

        # Validate for any errors.
        post_validations()

        # Run once or keep running in daemon mode according to the schedule.
        validate_daemon(daemon, schedule)


if __name__ == '__main__':
    main()
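The best-fit pass in `balancing_calculations` above can be exercised in isolation. The sketch below is illustrative only (the `best_fit` helper and the sample numbers are hypothetical, not part of proxlb.py): it places each VM on the node whose remaining free resource after placement would be smallest, the same "smallest leftover wins" rule as the `min_resource_diff` loop.

```python
def best_fit(vms, nodes):
    """Place each VM on the node whose leftover capacity after placement is smallest."""
    placement = {}
    # Largest consumers first, mirroring the sorted_vms ordering in ProxLB.
    for vm_name, used in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        # Keep only nodes the VM fits on, ranked by leftover capacity.
        candidates = [(free - used, name) for name, free in nodes.items() if free >= used]
        if not candidates:
            continue  # Fits nowhere; ProxLB logs an error in this case.
        _, best = min(candidates)
        placement[vm_name] = best
        nodes[best] -= used  # Reserve the capacity on the chosen node.
    return placement


print(best_fit({'vm1': 2000, 'vm2': 3000}, {'node1': 8000, 'node2': 7000}))
# → {'vm2': 'node2', 'vm1': 'node2'}
```

Note how best-fit packs both VMs onto node2: it prefers the tighter fit, leaving node1's larger capacity free for future VMs.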
12
proxlb.conf
Normal file
@@ -0,0 +1,12 @@
[proxmox]
api_host: hypervisor01.gyptazy.ch
api_user: root@pam
api_pass: FooBar
verify_ssl: 1
[balancing]
method: memory
ignore_nodes: dummynode01,dummynode02
ignore_vms: testvm01,testvm02
[service]
daemon: 1
schedule: 24
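The options above are plain INI syntax, which Python's standard `configparser` reads directly (it accepts `:` as well as `=` as the key/value delimiter). A minimal parsing sketch — the sample values are inlined here only for illustration, and ProxLB's actual `initialize_config_options` may handle them differently:

```python
import configparser

# Sample ProxLB-style configuration (values are illustrative).
SAMPLE = """
[proxmox]
api_host: hypervisor01.gyptazy.ch
api_user: root@pam
api_pass: FooBar
verify_ssl: 1

[balancing]
method: memory
ignore_nodes: dummynode01,dummynode02
ignore_vms: testvm01,testvm02

[service]
daemon: 1
schedule: 24
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

method = config.get('balancing', 'method', fallback='memory')
ignore_nodes = [n for n in config.get('balancing', 'ignore_nodes', fallback='').split(',') if n]
schedule = config.getint('service', 'schedule', fallback=24)
print(method, ignore_nodes, schedule)
# → memory ['dummynode01', 'dummynode02'] 24
```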
1
tests/README.md
Normal file
@@ -0,0 +1 @@
## Unit Tests
175
tests/tests.py
Normal file
@@ -0,0 +1,175 @@
import argparse
import configparser
import logging
import os
import sys
import unittest
from unittest.mock import patch, MagicMock

# Double-underscore names are imported under aliases: referencing them
# unaliased inside the test class would trigger Python's name mangling.
from proxlb import (
    initialize_logger,
    pre_validations,
    post_validations,
    validate_daemon,
    __validate_imports as validate_imports,
    __validate_config_file as validate_config_file,
    initialize_args,
    initialize_config_path,
    initialize_config_options,
    api_connect,
    get_node_statistics,
    get_vm_statistics,
    balancing_calculations,
    __get_node_most_free_values as get_node_most_free_values,
    run_vm_rebalancing,
    SystemdHandler,
)


class TestProxLB(unittest.TestCase):

    def test_initialize_logger(self):
        with patch('logging.getLogger') as mock_get_logger, patch('logging.Handler'):
            mock_logger = MagicMock()
            mock_get_logger.return_value = mock_logger
            initialize_logger(logging.DEBUG, SystemdHandler())
            mock_logger.setLevel.assert_called_with(logging.DEBUG)
            self.assertTrue(mock_logger.addHandler.called)

    def test_pre_validations(self):
        with patch('proxlb.__validate_imports') as mock_validate_imports, patch('proxlb.__validate_config_file') as mock_validate_config_file:
            pre_validations('/path/to/config')
            self.assertTrue(mock_validate_imports.called)
            mock_validate_config_file.assert_called_with('/path/to/config')

    def test_post_validations(self):
        # Patch the module-level error flag; assigning a global in this
        # module would not affect proxlb's own state.
        with patch('proxlb.__errors__', False), patch('logging.critical') as mock_critical, patch('logging.info') as mock_info:
            post_validations()
            self.assertTrue(mock_info.called)
            self.assertFalse(mock_critical.called)

        with patch('proxlb.__errors__', True), patch('logging.critical') as mock_critical, patch('logging.info'):
            post_validations()
            self.assertTrue(mock_critical.called)

    def test_validate_daemon(self):
        with patch('logging.info') as mock_info, patch('time.sleep') as mock_sleep, patch('sys.exit') as mock_exit:
            validate_daemon(1, 1)
            self.assertTrue(mock_info.called)
            self.assertTrue(mock_sleep.called)

            validate_daemon(0, 1)
            self.assertTrue(mock_exit.called)

    def test_validate_imports(self):
        with patch('proxlb._imports', True), patch('logging.critical') as mock_critical, patch('logging.info') as mock_info, patch('sys.exit') as mock_exit:
            validate_imports()
            self.assertTrue(mock_info.called)
            self.assertFalse(mock_exit.called)
            self.assertFalse(mock_critical.called)

        with patch('proxlb._imports', False), patch('logging.critical') as mock_critical, patch('logging.info'), patch('sys.exit') as mock_exit:
            validate_imports()
            self.assertTrue(mock_critical.called)
            self.assertTrue(mock_exit.called)

    def test_validate_config_file(self):
        with patch('os.path.isfile', return_value=True), patch('logging.critical') as mock_critical, patch('logging.info') as mock_info, patch('sys.exit') as mock_exit:
            validate_config_file('/path/to/config')
            self.assertTrue(mock_info.called)
            self.assertFalse(mock_exit.called)
            self.assertFalse(mock_critical.called)

        with patch('os.path.isfile', return_value=False), patch('logging.critical') as mock_critical, patch('logging.info'), patch('sys.exit') as mock_exit:
            validate_config_file('/path/to/config')
            self.assertTrue(mock_critical.called)
            self.assertTrue(mock_exit.called)

    @patch('argparse.ArgumentParser.parse_args', return_value=argparse.Namespace(config='/path/to/config'))
    def test_initialize_args(self, mock_parse_args):
        args = initialize_args()
        self.assertEqual(args.config, '/path/to/config')

    def test_initialize_config_path(self):
        app_args = MagicMock(config='/path/to/config')
        with patch('logging.info') as mock_info:
            config_path = initialize_config_path(app_args)
            self.assertEqual(config_path, '/path/to/config')
            self.assertTrue(mock_info.called)

        app_args.config = None
        with patch('logging.info') as mock_info:
            config_path = initialize_config_path(app_args)
            self.assertEqual(config_path, '/etc/proxlb/proxlb.conf')
            self.assertTrue(mock_info.called)

    @patch('configparser.ConfigParser.read', side_effect=lambda x: setattr(configparser.ConfigParser(), 'proxmox', {'api_host': 'host', 'api_user': 'user', 'api_pass': 'pass', 'verify_ssl': '0'}))
    def test_initialize_config_options(self, mock_read):
        with patch('logging.info') as mock_info, patch('sys.exit') as mock_exit:
            config_path = '/path/to/config'
            proxmox_api_host, proxmox_api_user, proxmox_api_pass, proxmox_api_ssl_v, balancing_method, ignore_nodes, ignore_vms, daemon, schedule = initialize_config_options(config_path)
            self.assertEqual(proxmox_api_host, 'host')
            self.assertEqual(proxmox_api_user, 'user')
            self.assertEqual(proxmox_api_pass, 'pass')
            self.assertEqual(proxmox_api_ssl_v, '0')
            self.assertTrue(mock_info.called)
            self.assertFalse(mock_exit.called)

    @patch('proxmoxer.ProxmoxAPI')
    def test_api_connect(self, mock_proxmox_api):
        with patch('requests.packages.urllib3.disable_warnings') as mock_disable_warnings, patch('logging.warning') as mock_warning, patch('logging.info') as mock_info:
            proxmox_api_ssl_v = 0
            api_object = api_connect('host', 'user', 'pass', proxmox_api_ssl_v)
            self.assertTrue(mock_disable_warnings.called)
            self.assertTrue(mock_warning.called)
            self.assertTrue(mock_info.called)
            self.assertTrue(mock_proxmox_api.called)

    def test_get_node_statistics(self):
        mock_api_object = MagicMock()
        mock_api_object.nodes.get.return_value = [{'node': 'node1', 'status': 'online', 'maxcpu': 100, 'cpu': 50, 'maxmem': 1000, 'mem': 500, 'maxdisk': 10000, 'disk': 5000}]
        node_statistics = get_node_statistics(mock_api_object, '')
        self.assertIn('node1', node_statistics)
        self.assertEqual(node_statistics['node1']['cpu_total'], 100)
        self.assertEqual(node_statistics['node1']['cpu_used'], 50)
        self.assertEqual(node_statistics['node1']['memory_total'], 1000)
        self.assertEqual(node_statistics['node1']['memory_used'], 500)
        self.assertEqual(node_statistics['node1']['disk_total'], 10000)
        self.assertEqual(node_statistics['node1']['disk_used'], 5000)

    def test_get_vm_statistics(self):
        mock_api_object = MagicMock()
        mock_api_object.nodes.get.return_value = [{'node': 'node1', 'status': 'online'}]
        mock_api_object.nodes().qemu.get.return_value = [{'name': 'vm1', 'status': 'running', 'cpus': 4, 'cpu': 2, 'maxmem': 8000, 'mem': 4000, 'maxdisk': 20000, 'disk': 10000, 'vmid': 101}]
        vm_statistics = get_vm_statistics(mock_api_object, '')
        self.assertIn('vm1', vm_statistics)
        self.assertEqual(vm_statistics['vm1']['cpu_total'], 4)
        self.assertEqual(vm_statistics['vm1']['cpu_used'], 2)
        self.assertEqual(vm_statistics['vm1']['memory_total'], 8000)
        self.assertEqual(vm_statistics['vm1']['memory_used'], 4000)
        self.assertEqual(vm_statistics['vm1']['disk_total'], 20000)
        self.assertEqual(vm_statistics['vm1']['disk_used'], 10000)
        self.assertEqual(vm_statistics['vm1']['vmid'], 101)
        self.assertEqual(vm_statistics['vm1']['node_parent'], 'node1')

    def test_balancing_calculations(self):
        node_statistics = {
            'node1': {'cpu_free': 80, 'memory_free': 8000, 'disk_free': 80000},
            'node2': {'cpu_free': 70, 'memory_free': 7000, 'disk_free': 70000}
        }
        vm_statistics = {
            'vm1': {'cpu_used': 20, 'memory_used': 2000, 'disk_used': 20000, 'node_parent': 'node1'},
            'vm2': {'cpu_used': 30, 'memory_used': 3000, 'disk_used': 30000, 'node_parent': 'node1'}
        }
        with patch('logging.info') as mock_info, patch('logging.error') as mock_error:
            node_statistics_rebalanced, vm_statistics_rebalanced = balancing_calculations('memory', node_statistics, vm_statistics)
            self.assertTrue(mock_info.called)
            self.assertFalse(mock_error.called)
            self.assertEqual(vm_statistics_rebalanced['vm1']['node_rebalance'], 'node2')
            # vm2 is assigned back to its parent node and is therefore
            # dropped from the rebalancing result.
            self.assertNotIn('vm2', vm_statistics_rebalanced)


if __name__ == '__main__':
    unittest.main()