There are several VM operating parameters that can be set to improve system throughput for one or more VMs installed on a KVM Host. Some VM operating parameters are set on the KVM Host and can be modified at any time while the VM Instance is running, while others are set on the VM and can only be configured when the VM Instance is shut down.
The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not exhaustive. Refer to the documentation provided by your Linux OS and KVM Host vendors. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance. Refer to the Red Hat Virtualization Tuning and Optimization Guide.
To performance-tune the VM instance, you must log on to the host system as the root user.
Sonus recommends applying the BIOS settings in the table below to all the Sonus VMs for optimum performance:
| BIOS Parameter | Setting | Comments |
|---|---|---|
| CPU power management | Balanced | Sonus recommends Maximum Performance |
| Intel Hyper-Threading | Enabled | |
| Intel Turbo Boost | Enabled | |
| Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization |
All server BIOS settings are different, but in general the following guidelines apply:
To determine the host system's processor and CPU details, perform the following steps:
Execute the following command to list the host's logical CPUs and how they map to physical cores:
lscpu -p
The command executes with the following output:
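The output below is an illustrative sketch of lscpu -p output, not taken from any particular server; the actual CPU, core, socket, and NUMA node values depend on your hardware (in this illustration, CPU 4 is the hyper-threading sibling of CPU 0):

# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
0,0,0,0,,0,0,0,0
1,1,0,0,,1,1,1,1
2,2,0,0,,2,2,2,2
3,3,0,0,,3,3,3,3
4,0,0,0,,0,0,0,0
5,1,0,0,,1,1,1,1
6,2,0,0,,2,2,2,2
7,3,0,0,,3,3,3,3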
The first column lists the logical CPU number of a CPU as used by the Linux kernel. The second column lists the core ID; this information can be used for vCPU pinning.
CPU pinning ensures that a VM gets CPU time only from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.
To update the pinning information in the KVM configuration XML file:
Ensure that no two VM instances are allocated with the same physical cores on the host system.
Enter the following command.
virsh
The command executes with the following output:
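When virsh starts, the interactive prompt is displayed, typically as follows:

Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #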
Enter the following command to edit the VM instance:
virsh # edit <KVM_instance_name>
Search for the vcpu placement attribute.
Enter CPU pinning information as shown:
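The following is a minimal sketch for a guest with 4 vCPUs pinned to host CPUs 0-3; the vcpu count and cpuset values are illustrative and must be adjusted to your own guest size and core mapping from lscpu -p:

<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>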
Ensure that no two VM instances have the same physical core affinity. For example, if VM1 has affinity to 0, 1, 2, 3, then no other VM should be pinned to 0, 1, 2, 3, 8, 9, 10, or 11, as these CPUs belong to physical cores assigned to VM1. Also, all other VM instances running on the same host must be assigned affinity; otherwise, the VMs without affinity might impact the performance of the VMs that have affinity.
Enter the following command to save and exit the XML file.
:wq
Even if the Copy host CPU configuration option was selected while creating a VM instance, the host configuration may not be copied to the VM instance. To resolve this issue, edit the cpu mode to host-passthrough using a virsh command on the host system.
To edit VM's CPU mode:
Enter the following command.
virsh
The command executes with the following output:
Enter the following command to edit the VM instance:
edit <KVM_instance_name>
Search for the cpu mode attribute.
Replace the cpu mode attribute with the following:
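A minimal sketch of the replacement, assuming the 1-socket, 4-core, 1-thread topology used in the example below:

<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>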
The topology details entered must be the same as the topology details specified while creating the VM instance. For example, if the topology was set to 1 socket, 4 cores, and 1 thread, the same values must be entered in this XML file.
Enter the following command to save and exit the XML file.
:wq
Enter the following command to start the VM instance.
start <KVM_instance_name>
Enter the following command to verify the host CPU configuration on the VM instance:
cat /proc/cpuinfo
The command executes with the following output.
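An illustrative excerpt of the expected output (values shown are examples only); the model name should now match the host CPU rather than a generic QEMU model:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz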
To increase the transmit queue length to 4096:
By default, the transmit queue length is set to 500.
Execute the following command to list the available interfaces:
virsh
The virsh prompt is displayed.
Execute the following command.
domiflist <VM_instance_name>
The list of active interfaces is displayed.
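Representative output is shown below; the interface names, types, and MAC addresses are illustrative and will differ on your system:

Interface  Type       Source     Model       MAC
-------------------------------------------------------
macvtap4   direct     em1        virtio      52:54:00:xx:xx:xx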
Execute the following command to increase the transmit queue length of the tap interface:
ifconfig <interface_name> txqueuelen <length>
where interface_name is the name of the interface whose queue length you want to change, and length is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096.
Execute the following command to check the interface's queue length:
ifconfig <interface_name>
The command executes with the following output.
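Illustrative output showing the updated queue length (interface name, MAC address, and counters are examples only):

macvtap4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 52:54:00:xx:xx:xx  txqueuelen 4096  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        TX packets 0  bytes 0 (0.0 B)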
Apply the settings below to all Sonus VMs installed on the host.
Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. If one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of operating system. However, the scanning process adds overhead, which may cause applications to run slower, which is not desirable. The SBC SWe requires KSM to be turned off.
The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.
# echo 0 > /sys/kernel/mm/ksm/run
# echo "KSM_ENABLED=0" > /etc/default/qemu-kvm
Once KSM is turned off, it is important to verify that there is still sufficient memory on the hypervisor. When pages are not merged, memory usage may increase and lead to swapping, which impacts performance negatively.
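For example, the current KSM state and the available memory can be checked as follows (a run value of 0 indicates KSM is off):

# cat /sys/kernel/mm/ksm/run
0
# free -h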
Host pinning isolates the physical cores where guest VMs are hosted from the physical cores where Linux host processes/services run, so that host-level Linux services do not impact VM performance. In this example, physical core 0 (logical CPUs 0 and 36) and physical core 1 (logical CPUs 1 and 37) are reserved for Linux host processes.
The CPUAffinity option in /etc/systemd/system.conf sets the CPU affinity for systemd by default, as well as for everything it launches, unless a unit's .service file overrides the CPUAffinity setting with its own value. Configure the CPUAffinity option in /etc/systemd/system.conf.
Execute the following command:
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2699.984
BogoMIPS:              4604.99
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71
To dedicate physical CPUs 0 and 1 to host processing, set CPUAffinity to 0 1 36 37 in /etc/systemd/system.conf, and then restart the system.
CPUAffinity=0 1 36 37
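After the restart, the affinity applied to systemd (PID 1) can be checked with taskset; it should report only the reserved CPUs, for example:

# taskset -cp 1
pid 1's current affinity list: 0,1,36,37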
Mount the HugeTLB filesystem on the host.
mkdir -p /hugepages
Add the following line to the /etc/fstab file:
hugetlbfs /hugepages hugetlbfs defaults 0 0
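The new entry can then be mounted and checked without a reboot; an illustrative check (mount options may differ on your kernel):

# mount -a
# mount | grep hugepages
hugetlbfs on /hugepages type hugetlbfs (rw,relatime)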
Configure the number of 2 MB hugepages to match the vRAM requirement of the VM being hosted:
cat /etc/sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.nr_hugepages = 25000 (assuming a 24G VM)
vm.hugetlb_shm_group = 36
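The settings can be applied with sysctl -p (allocating a large hugepage pool may still require the restart described below if memory is fragmented), and the resulting pool can be checked from /proc/meminfo; an illustrative check, assuming the 25000-page example above:

# sysctl -p
# grep HugePages /proc/meminfo
HugePages_Total:   25000
HugePages_Free:    25000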
Add the memoryBacking element shown below to your instance XML file using 'virsh edit <instanceName>':
<domain type='kvm' id='3'>
  <name>RENGALIVM01</name>
  <uuid>f1bae5a2-d26e-4fc0-b472-3638743def9a</uuid>
  <memory unit='KiB'>25165824</memory>
  <currentMemory unit='KiB'>25165824</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
The above example pins the VM's hugepage memory to NUMA node 0; similarly, to host a second VM on NUMA node 1, use nodeset='1'.
Restart the host.
To verify, get the PID of the VM and execute the command below to check that the VM memory is allocated from a single NUMA node:
numastat -p <vmpid>
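Illustrative output for a 24G VM whose memory comes from NUMA node 0 (the PID and values are examples only; nearly all memory should be reported under a single node):

Per-node process memory usage (in MBs) for PID 12345 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                     24576.00            0.00        24576.00
Heap                        31.50            0.00           31.50
Stack                        0.07            0.00            0.07
Private                    125.20            2.10          127.30
----------------  --------------- --------------- ---------------
Total                    24732.77            2.10        24734.87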
Execute the command below to disable flow control on the interfaces attached to the SWe VM.
ethtool -A <interface name> rx off tx off autoneg off
The interface name depends on your actual configuration.
Example:
ethtool -A p4p3 rx off tx off autoneg off
ethtool -A p4p4 rx off tx off autoneg off
ethtool -A em3 rx off tx off autoneg off
ethtool -A em4 rx off tx off autoneg off
Refer to the RHEL documentation to make the NIC ethtool settings persistent (applied automatically at boot).
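On RHEL, one common approach is to add an ETHTOOL_OPTS entry to the interface's ifcfg file; a sketch, assuming the p4p3 interface from the example above:

# /etc/sysconfig/network-scripts/ifcfg-p4p3
ETHTOOL_OPTS="-A p4p3 rx off tx off autoneg off"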