In this section:
There are several VM operating parameters that can be set to improve system throughput for one or more VMs installed on a KVM Host. Some VM operating parameters are set on the KVM Host and can be modified at any time, even while the VM is running; others are set on the VM itself and can only be configured while the VM is shut down.
The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not exhaustive. Refer to the documentation provided by your Linux OS and KVM Host vendors. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance; see the Red Hat Virtualization Tuning and Optimization Guide.
The Installer tool allocates to each VM the required system resources available on the KVM Host. Provisioning a VM with more resources than it requires decreases the performance of that VM, as well as of other VMs sharing the same KVM Host.
On operating systems that provide the tuned-adm utility (most recent Red Hat derived OSes), the virtual-host profile is very beneficial for guest VM responsiveness.
To check the active profile (the output should be virtual-host):
# tuned-adm active
To set the profile:
# tuned-adm profile virtual-host
Alternatively, write the profile name directly:
# echo virtual-host > /etc/tuned/active_profile
The virt-manager tool for Linux provides a GUI for tuning VM parameters such as the amount of RAM, the number and type of CPUs, CPU pinning, and other operating parameters. To use the virt-manager application, you need a Linux OS running on a local client that supports graphical applications. To modify the operating parameters of a VM, use virt-manager on your local client to connect to the remote KVM Host. The virt-manager application cannot run directly within a VM (no X11 support) on a KVM Host.
If virt-manager is not available, all VM tuning modifications are performed by editing the VM configuration file using virsh. The virsh application is an interactive shell that can be used in scripts or as a standalone application and is the main interface for managing virsh 'guest domains' (VMs). It lists all operating VMs it is currently connected to, and can be used to create, pause, or shut down a VM.
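As a sketch of typical virsh usage (the VM name dsc-vm1 is a hypothetical placeholder; the wrapper only prints each command so the sequence can be reviewed before running it against a real KVM Host):

```shell
# Dry-run wrapper: prints each virsh command instead of executing it.
# Remove the wrapper (call virsh directly) to run the commands for real.
VM_NAME="dsc-vm1"                    # hypothetical VM name; use your own
virsh_cmd() { echo "+ virsh $*"; }

virsh_cmd list --all                 # list all defined VMs and their states
virsh_cmd start    "$VM_NAME"        # boot the VM
virsh_cmd suspend  "$VM_NAME"        # pause the VM in place
virsh_cmd resume   "$VM_NAME"        # resume a paused VM
virsh_cmd shutdown "$VM_NAME"        # graceful (ACPI) shutdown
```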
The following figure shows virt-manager running on a local Linux desktop OS and remotely managing three VMs installed on two physical KVM Hosts.
Virt-manager Connected To KVM Hosts And Managing Three VMs
The Installer tool assigns the first four CPU cores to a VM when it is created. A second VM created on the same KVM Host requires the same resource allocation as the first VM.
CPU Pinning is the assignment or binding of a virtual CPU core (vCPU) to a physical CPU core on the KVM Host. For maximum performance, it is recommended that you assign each vCPU on each VM to a different physical CPU and Non-uniform Memory Access (NUMA) domain on the KVM Host.
To modify vCPU parameters, the VM must be shut down.
If SS7 and Diameter run on the same VM, eight CPU cores must be assigned to the VM; that is, four CPU cores are added to the four assigned by default when the VM was created.
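The resize described above can be sketched with virsh as follows (a dry run that only prints the commands; dsc-vm1 is a hypothetical VM name, and the VM must be shut down before vCPU changes take effect):

```shell
VM_NAME="dsc-vm1"                    # hypothetical VM name; use your own
virsh_cmd() { echo "+ virsh $*"; }   # dry run: print instead of execute

virsh_cmd shutdown "$VM_NAME"                       # vCPU changes need the VM down
virsh_cmd setvcpus "$VM_NAME" 8 --maximum --config  # raise the vCPU ceiling
virsh_cmd setvcpus "$VM_NAME" 8 --config            # set the active vCPU count
virsh_cmd start    "$VM_NAME"                       # boot with 8 vCPUs
```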
Proper performance of the DSC requires full cores and not hyperthreaded cores. When pinning, use only Processor IDs that represent actual cores, not hyperthreaded cores.
If you do not statically pin vCPUs to physical CPUs, your system may incur a significant degradation in performance.
Open the VM using virt-manager to view and set the VM's CPU parameters.
The CPU Pinning option is set in virt-manager's Processor Configuration screen, as shown in the following figure.
Virt-manager's Processor Configuration Screen - CPU Pinning Option
You can also use the following command to pin a VM's vCPU x to physical CPU y:
# virsh vcpupin <VM_name> x y --live --config
Using the KVM GUI software to set the CPU pinning is prone to errors. Certain versions of the virt-manager software do not provide full support for the CPU pinning function. It is recommended that you verify CPU pinning on your system using the Linux system command (virsh) in a console window, for example:
# virsh vcpupin <VM_name>
VCPU: CPU Affinity
----------------------------------
   0: 12
   1: 13
   2: 14
   3: 15
   4: 16
   5: 17
In the above example, the output indicates six vCPUs of <VM_name> are each assigned to one physical CPU device. The CPU mapping begins with vCPU_0 pinned to physical CPU_12; and continues to vCPU_5 pinned to physical CPU_17.
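The six pinnings above can be applied in a loop. This sketch only prints the commands; the VM name dsc-vm1 and the 12-17 physical CPU range are assumptions taken from the example output:

```shell
VM_NAME="dsc-vm1"   # hypothetical VM name; use your own
# Pin vCPU i to physical CPU 12+i, matching the affinity shown above.
# Remove the echo to execute the commands for real.
for i in 0 1 2 3 4 5; do
    pcpu=$((12 + i))
    echo "virsh vcpupin ${VM_NAME} ${i} ${pcpu} --live --config"
done
```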
A VM consists of several processes (threads) on the Host other than the vCPU processes. Setting vCPU pinning (see Assigning a VM (vCPU) to a Physical CPU Device) locks the vCPU processes; setting the emulator pinning constrains the remaining processes.
For example, the following command pins the VM's emulator threads to CPUs 0-3:
# virsh emulatorpin <VM_name> --cpulist 0-3 --config --current
In a multi-processor system, NUMA allows some CPUs to access system resources faster than other CPUs. In a Linux OS, system resources are split up into NUMA domains.
NUMA domains are used in virt-manager to set the CPU Pinning option of the VM.
To identify the NUMA domains on the KVM Host, run the lscpu command at the system prompt. For example, on a KVM Host running a Red Hat Linux OS, the command returns the following:
Example:
bash# lscpu
NUMA node0 CPU(s): 0-5,12-17
NUMA node1 CPU(s): 6-11,18-23
In the above example, Processor IDs 12-17 represent the hyperthreaded siblings of CPUs 0-5. For more details, see the section on Hyperthreading.
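Since pinning should use only full cores, one way to select them is to keep the first logical CPU listed for each physical core. This is a sketch, assuming util-linux's lscpu is available on the KVM Host:

```shell
# lscpu -p emits one "CPU,Core,Node" line per logical CPU.
# Keep the first logical CPU seen for each core ID, skipping the
# hyperthread siblings; the printed CPU numbers are safe pinning targets.
lscpu -p=CPU,CORE,NODE | grep -v '^#' | awk -F, '!seen[$2]++ { print $1 }'
```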
A VM uses special features available on a physical CPU device if it is allowed to identify all CPU resources (and their features) on the KVM Host. Host CPU visibility can be enabled in virt-manager using the 'Copy Host CPU Configuration' button available in the Processor Configuration screen. The following figure shows virt-manager's Processor Configuration screen and the location of the 'Copy host CPU configuration' button.
The VM CPU and emulator pinning can be configured at install time using the installation file (refer to The VM Configuration File).
Hyperthreading adds 15-25% to the performance of a CPU core. Processor IDs that represent hyperthreaded siblings are not considered full CPU cores.
To identify CPUs with Hyperthreading, log in to the KVM Host and execute the Bash command 'cat /proc/cpuinfo'. The command returns information about the physical CPUs and CPU cores available on the KVM Host. Each logical CPU in the listing is identified by a Processor ID (command output format: processor : n). CPU cores that belong to the same physical CPU appear in the listing with the same physical ID.
Two entries with different Processor IDs but the same physical ID and core ID are the Linux representation of a single CPU core with Hyperthreading capability.
If a CPU core supports Hyperthreading (and the KVM Host provides Hyperthreading support), its hyperthreaded sibling displays as a separate logical processor in the 'cat /proc/cpuinfo' command listing.
For example, the following 'cat /proc/cpuinfo' listing shows two CPUs with six cores each, all of which support hyperthreading. Processors 0 and 12 represent a CPU core and its hyperthreaded sibling.
Example:
bash# cat /proc/cpuinfo | grep -e "^processor" -e "^physical" -e "^core id"
processor   : 0
physical id : 0
core id     : 0
processor   : 1
physical id : 0
core id     : 1
[snip]
processor   : 12
physical id : 0
core id     : 0
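The grouping described above can also be extracted with a short awk sketch over /proc/cpuinfo; two processors printed under the same physical id/core id key are a core and its hyperthread sibling:

```shell
# For each logical processor, print "physical_id/core_id -> processor n".
# /proc/cpuinfo lists the fields in the order: processor, physical id,
# core id; lines sharing the same key are hyperthread siblings.
awk -F': *' '
    /^processor/   { proc = $2 }
    /^physical id/ { phys = $2 }
    /^core id/     { print phys "/" $2 " -> processor " proc }
' /proc/cpuinfo | sort
```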
Recent CPU devices allow CPU Frequency Scaling to reduce power consumption when only partial CPU resources are required. Under heavy traffic (or traffic bursts) conditions, the DSC SWe VM performs better (that is, maintains a higher throughput with less latency) if CPU Frequency Scaling is disabled and the CPU cores are permitted to run at full speed all the time.
To disable CPU Frequency Scaling, run the following Bash command on the KVM Host for each CPU core running a VM:
Example for cpu0 and cpu1:
bash# echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
bash# echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
On systems running a Red Hat Linux OS, the above command sets the CPU Frequency Scaling option; it can be applied at runtime for specific Processor IDs.
Setting the performance scaling governor forces the CPU cores to operate at full speed at all times, which increases power consumption in comparison to the more conservative scaling governors.
The scaling governor setting does not persist across KVM Host reboots. If the KVM Host is running a Red Hat OS variant, this setting can be made persistent by adding the above line(s) to /etc/rc.d/rc.local. For further details, consult the support documentation for the Linux OS running on the KVM Host.
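To cover every CPU core without listing each one, the echo can be wrapped in a loop. This sketch is also suitable for /etc/rc.d/rc.local; it silently skips CPUs that do not expose cpufreq controls and requires root privileges to take effect:

```shell
# Set the performance governor on every CPU that exposes cpufreq controls.
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    if [ -w "$g" ]; then
        echo performance > "$g"
    fi
done
```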
This section presents some general recommendations; however, the list is not exhaustive. For more information on VM performance tuning, consult your Linux OS and KVM Host vendor documentation.
When modifying VM tuning parameters, the measured effect on performance varies, depending on the Linux OS running on the KVM Host and network infrastructure. The following VM tuning parameters are suggested: