Overview
The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not intended to be all-inclusive.
Refer to the documentation provided by your Linux OS and KVM host vendors for complete details. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance. Refer to the Red Hat Virtualization Tuning and Optimization Guide for details.
For performance tuning procedures on a VM instance, log on to the host system as the root user.
General Recommendations
Recommended BIOS Settings
Ribbon recommends the following BIOS settings in the host for optimum performance.
For GPU transcoding, ensure all power supplies are plugged into the server.
Procedure
Set CPU Frequency on the Host
The CPU frequency setting determines the operating clock speed of the processor and, in turn, the system performance. Red Hat offers a set of built-in tuning profiles and a tool called tuned-adm that helps in configuring the required tuning profile.
Ribbon recommends applying the throughput-performance tuning profile, which allows the processor to operate at maximum frequency.
- Determine the active tuning profile:
# tuned-adm active
Current active profile: powersave
- Apply the throughput-performance tuning profile:
# tuned-adm profile throughput-performance
This configuration is persistent across reboots and takes effect immediately. There is no need to reboot the host after configuring this tuning profile.
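To optionally confirm that the profile took effect, you can check the scaling governor the kernel reports for a CPU (this assumes the standard cpufreq sysfs interface is available, which is typical on RHEL/CentOS hosts); it should read "performance" when the throughput-performance profile is active:
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor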
Perform NUMA Pinning for the VM
Use the procedure below to accomplish NUMA pinning for the VM.
You can skip NUMA pinning for virtual pkt interfaces.
Determine the number of NUMA nodes on the host server.
[root@srvr3320 ~]# lscpu | grep NUMA
NUMA node(s):          2
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
[root@srvr3320 ~]#
In this example, there are two NUMA nodes on the server.
- Find out which NUMA node the SR-IOV-enabled PF is connected to. VFs from this PF will be allocated to the SBC VM for pkt interfaces.
Obtain the bus-info of the PF interface using the command ethtool -i <PF interface name>.
[root@srvr3320 ~]# ethtool -i ens4f0
driver: igb
version: 5.6.0-k
firmware-version: 1.52.0
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
[root@srvr3320 ~]#
Identify the NUMA node of the PCI device using cat /sys/bus/pci/devices/<PCI device>/numa_node.
[root@srvr3320 ~]# cat /sys/bus/pci/devices/0000\:81\:00.0/numa_node
1
Repeat the previous step for other SR-IOV interfaces from which you plan to connect VFs.
Note: Make sure that all PCI devices are connected to the same NUMA node.
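The two lookups above can be combined in a small shell loop that prints the NUMA node of every PF at once; the interface names below are examples and must be replaced with your actual PF interfaces:
for pf in ens4f0 ens4f1; do
    # Extract the PCI bus-info reported by ethtool for this PF
    bus=$(ethtool -i "$pf" | awk '/bus-info/ {print $2}')
    # Print the NUMA node the PCI device is attached to
    echo "$pf -> NUMA node $(cat /sys/bus/pci/devices/$bus/numa_node)"
done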
Once the NUMA node is identified, set the <numatune> element of the SBC VM in the VM XML file.
<numatune>
  <memory mode='preferred' nodeset='1'/>
</numatune>
Determine Host Processor and CPU Details
To determine the host system's processor and CPU details, and how logical CPUs map to cores, sockets, and NUMA nodes, enter the following command:
lscpu -p
[root@srvr3320 ~]# lscpu -p
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
0,0,0,0,,0,0,0,0
1,1,0,0,,1,1,1,0
2,2,0,0,,2,2,2,0
3,3,0,0,,3,3,3,0
4,4,0,0,,4,4,4,0
5,5,0,0,,5,5,5,0
6,6,0,0,,6,6,6,0
7,7,0,0,,7,7,7,0
8,8,1,1,,8,8,8,1
9,9,1,1,,9,9,9,1
10,10,1,1,,10,10,10,1
11,11,1,1,,11,11,11,1
12,12,1,1,,12,12,12,1
13,13,1,1,,13,13,13,1
14,14,1,1,,14,14,14,1
15,15,1,1,,15,15,15,1
16,0,0,0,,0,0,0,0
17,1,0,0,,1,1,1,0
18,2,0,0,,2,2,2,0
19,3,0,0,,3,3,3,0
20,4,0,0,,4,4,4,0
21,5,0,0,,5,5,5,0
22,6,0,0,,6,6,6,0
23,7,0,0,,7,7,7,0
24,8,1,1,,8,8,8,1
25,9,1,1,,9,9,9,1
26,10,1,1,,10,10,10,1
27,11,1,1,,11,11,11,1
28,12,1,1,,12,12,12,1
29,13,1,1,,13,13,13,1
30,14,1,1,,14,14,14,1
31,15,1,1,,15,15,15,1
[root@srvr3320 ~]#
The first column lists the logical CPU number of a CPU as used by the Linux kernel. The second column lists the core number; logical CPUs that share a core number are sibling hyper-threads of the same physical core. Use this information for vCPU pinning.
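As an optional cross-check (assuming the standard sysfs CPU topology layout), the sibling hyper-threads of each physical core can also be listed directly:
# cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -u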
Ensure Persistent CPU Pinning
CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.
- Ensure that no two VM instances are allocated the same physical cores on the host system.
- Ensure that all VMs hosted on the physical server are pinned. Do not mix pinned and unpinned VMs because this will cause all VMs to get treated as if they are unpinned.
- To create vCPU-to-hyper-thread pinning, pin consecutive vCPUs to sibling threads (logical CPUs) of the same physical core. Identify the sibling threads from the output of the lscpu -p command on the host.
- Do not include the 0th physical core of the host in pinning, because most host management/kernel threads are spawned on the 0th core by default.
Use the following steps to update the pinning information in the KVM configuration XML file:
- Shut down the VM instance.
Start virsh.
[root@kujo ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
Edit the VM instance:
virsh # edit <KVM_instance_name>
Search for the vcpu placement attribute. Make sure the vCPUs are pinned to the correct NUMA node CPUs.
Ribbon recommends reserving the siblings of one core on each NUMA node for host processes (do not use them for the VM). Since the PCI device is connected to NUMA node1 (as determined in step 2.b of the NUMA pinning procedure), you must pin the vCPUs of the VM to CPU siblings in NUMA node1. Skip the first physical core siblings, 8 and 24, and pin the rest.
<vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
<cputune>
  <vcpupin vcpu="0" cpuset="9"/>
  <vcpupin vcpu="1" cpuset="25"/>
  <vcpupin vcpu="2" cpuset="10"/>
  <vcpupin vcpu="3" cpuset="26"/>
</cputune>
As the CPU architecture example shows, you must pin vCPUs to sibling threads (i.e., the two hyper-threads coming from the same physical core). The second column in the example shows the physical core number.
Note: Because Sub-NUMA Clustering is disabled in the BIOS, each socket represents one NUMA node; in this case, socket 0 is NUMA node0 and socket 1 is NUMA node1. Make sure that all vCPUs are pinned to the same NUMA node and do not cross the NUMA boundary.
Tip: Ensure that no two VM instances have the same physical core affinity. For example, if VM1 is assigned an affinity of 9,25,10,26, no other VM should be pinned to these cores. To assign CPU pinning to other VMs, use the other available cores on the host, leaving the first two logical cores per NUMA node for the host (as described in Perform Host Pinning).
Also, assign CPU affinity to all other VM instances running on the same host; otherwise, the VMs without affinity may impact the performance of the VMs that have affinity.
Save and exit the XML file.
:wq
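After the VM instance is started again, you can optionally verify the effective pinning from the virsh prompt on the host:
virsh # vcpupin <KVM_instance_name>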
Edit VM CPU Mode
Ribbon recommends setting the CPU mode to host-model using a virsh command on the host system.
Use the following steps to edit the VM CPU mode:
- Shut down the VM instance.
Start virsh.
virsh
The virsh prompt displays.
Edit the VM instance:
edit <KVM_instance_name>
Search for the cpu mode attribute and edit it; a reference snippet is shown after this procedure.
Tip: Ensure the topology details entered are identical to the topology details set while creating the VM instance. For example, if the topology was set to 1 socket, 2 cores, and 2 threads, enter the same details in this XML file.
Save and exit the XML file.
:wq
Start the VM instance.
start <KVM_instance_name>
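For reference, the resulting <cpu> element looks like the following snippet (it also appears in the recap XML later in this section); the topology values are examples and must match the topology configured for your VM:
<cpu mode='host-model'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>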
Increase the Transmit Queue Length for virt-io Interfaces
This section is applicable only to virt-io based interfaces. Ribbon recommends increasing the Transmit Queue Length of host tap interfaces to 4096 for better performance. By default, the Transmit Queue Length is set to 500.
To increase the Transmit Queue Length to 4096:
Start virsh:
virsh
The virsh prompt displays.
Identify the available interfaces.
domiflist <VM_instance_name>
The list of active interfaces displays.
Increase the Transmit Queue Lengths for the tap interfaces.
ifconfig <interface_name> txqueuelen <length>
The interface_name is the name of the interface you want to change, and length is the new queue length. For example:
ifconfig macvtap4 txqueuelen 4096
Verify the value of the interface length.
ifconfig <interface_name>
- To make this setting persistent across reboots, do the following:
Modify/create the 60-tap.rules file and add the KERNEL rule shown below:
# vim /etc/udev/rules.d/60-tap.rules
KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 4096"
Reload the udev rules:
# udevadm control --reload-rules
Apply the rules to already created interfaces.
# udevadm trigger --attr-match=subsystem=net
Reboot the host.
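After the reboot, you can optionally confirm the queue length on a tap interface (the interface name below is an example):
# ip link show macvtap4 | grep qlen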
Stop Kernel Same-page Merging (KSM)
Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. When one of the merged copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of the operating system. However, the scanning process adds overhead that may cause applications to run slower, which is not desirable.
To turn off KSM in the host:
Deactivate KSM by stopping the ksmtuned and ksm services as shown below. This does not persist across reboots.
# systemctl stop ksm
# systemctl stop ksmtuned
Disable KSM persistently as shown below:
# systemctl disable ksm
# systemctl disable ksmtuned
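To optionally confirm that KSM is no longer running, check its run flag in sysfs; a value of 0 indicates that KSM is stopped:
# cat /sys/kernel/mm/ksm/run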
Perform Host Pinning
To avoid performance impact on VMs due to host-level Linux services, host pinning isolates the physical cores where a guest VM is hosted from the physical cores where the Linux host processes/services run. Ribbon recommends leaving one physical core per CPU socket for host processes.
In this example, physical core 0 (logical CPUs 0 and 16) and physical core 8 (logical CPUs 8 and 24) are reserved for Linux host processes.
The CPUAffinity option in /etc/systemd/system.conf sets the CPU affinity for systemd by default, as well as for everything it launches, unless a unit's .service file overrides the CPUAffinity setting with its own value.
Configure the CPUAffinity option in /etc/systemd/system.conf:
[root@srvr3320 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Model name:            Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
Stepping:              7
CPU MHz:               1782.128
CPU max MHz:           2100.0000
CPU min MHz:           1200.0000
BogoMIPS:              4190.19
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
To dedicate physical cores 0 and 8 (logical CPUs 0, 8, 16, and 24) to host processing, specify CPUAffinity as 0 8 16 24 in the file /etc/systemd/system.conf:
CPUAffinity=0 8 16 24
Restart the system.
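After the restart, you can optionally confirm that systemd (PID 1) is restricted to the reserved CPUs:
# taskset -cp 1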
Using <emulatorpin> Tag
The <emulatorpin> tag specifies which host physical CPUs the emulator (a subset of a domain, not including vCPUs) is pinned to. The <emulatorpin> tag provides a method of setting a precise affinity for emulator thread processes. As a result, vhost threads run on the same subset of physical CPUs and memory and thus benefit from cache locality.
<cputune>
  <emulatorpin cpuset="11,27"/>
</cputune>
The <emulatorpin> tag is required in order to pin the virtio network traffic to a different core than the VM vCPUs. This greatly reduces the steal percentage seen inside the VMs.
Ribbon recommends pinning the emulatorpin cpuset to host CPU siblings on the same NUMA node as the VM memory. If no CPUs are left on that NUMA node, you can also pin it to the other NUMA node.
Back VMs with 1G Hugepages
Ribbon recommends backing its VMs with 1G hugepages for performance reasons. Configure hugepages on the host at boot time to minimize memory fragmentation. If the host OS does not support the recommended 1G hugepage size, configure hugepages of size 2M in place of 1G.
The number of hugepages is decided based on the total memory available on the host. Ribbon recommends configuring 80-90% of total memory as hugepage memory and leaving the rest as normal Linux memory.
Configure the hugepage size as 1G and the number of hugepages by appending the default_hugepagesz, hugepagesz, and hugepages parameters to the kernel command line options in /etc/default/grub. In the example below, the host has a total of 256G of memory, of which 200G is configured as hugepages.
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=200 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
Regenerate the GRUB2 configuration as shown below:
If your system uses BIOS firmware, issue the command:
# grub2-mkconfig -o /boot/grub2/grub.cfg
If your system uses UEFI firmware, issue the command:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Add the following lines to your instance XML file using virsh edit <instanceName>.
Note: Make sure that the PCI device (SR-IOV), vCPUs, and VM memory all come from the same NUMA node. For virtual pkt interfaces, ensure that the vCPUs and memory come from the same NUMA node.
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<memoryBacking>
  <hugepages>
    <page size='1048576' unit='KiB' nodeset='1'/>
  </hugepages>
</memoryBacking>
This example pins the VM memory to NUMA node1. To host a second VM on the other NUMA node, use the appropriate NUMA node value in nodeset='<NUMA node>'.
Restart the host.
Obtain the PID of the VM:
ps -eaf | grep qemu | grep -i <vm_name>
Verify VM memory is received from a single NUMA node:
numastat -p <vmpid>
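To optionally confirm the hugepage allocation on the host after the reboot, check /proc/meminfo and the per-node counters in sysfs:
# grep HugePages /proc/meminfo
# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages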
Disable Flow Control
Perform the following steps to disable flow control.
This setting is optional and depends on NIC capability. Not all NICs allow you to modify the flow control parameters. If the NICs support it, Ribbon recommends disabling flow control to avoid head-of-line blocking issues.
To disable flow control:
- Log in to the system as the root user.
- Disable flow control for interfaces attached to the SWe VM.
Tip: Use the <interface name> from the actual configuration.
ethtool -A <interface name> rx off tx off autoneg off
Example:
ethtool -A p4p3 rx off tx off autoneg off
ethtool -A p4p4 rx off tx off autoneg off
ethtool -A em3 rx off tx off autoneg off
ethtool -A em4 rx off tx off autoneg off
To make the setting persistent:
The network service in CentOS/Red Hat can make the setting persistent. The script /etc/sysconfig/network-scripts/ifup-post checks for the existence of /sbin/ifup-local and, if it exists, runs it with the interface name as a parameter (e.g. /sbin/ifup-local eth0).
Steps:
- Create this file using touch /sbin/ifup-local
- Make it executable using chmod +x /sbin/ifup-local
- Set the file's SELinux context using chcon --reference /sbin/ifup /sbin/ifup-local
- Open the file in an editor.
Here is an example of a simple script to apply the same settings to all interfaces (except lo):
#!/bin/bash
if [ -n "$1" ]; then
    if [ "$1" != "lo" ]; then
        /sbin/ethtool -A $1 rx off tx off autoneg off
    fi
fi
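To optionally verify the current flow control (pause) settings on an interface, use:
ethtool -a <interface name>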
Recap of Changes in the KVM Configuration XML File
Below is an example KVM configuration XML file that includes all of the above changes. Apply the changed values as described in the preceding sections.
<domain type='kvm' id='1'>
<name>ISBC_SWE_VM</name>
<uuid>6c8b18c6-f633-4847-b1a3-a4f97bd5c14a</uuid>
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='1'/>
</hugepages>
</memoryBacking>
<numatune>
<memory mode='preferred' nodeset="1"/>
</numatune>
<vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
<cputune>
<vcpupin vcpu="0" cpuset="9"/>
<vcpupin vcpu="1" cpuset="25"/>
<vcpupin vcpu="2" cpuset="10"/>
<vcpupin vcpu="3" cpuset="26"/>
<emulatorpin cpuset='11,27'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<topology sockets='1' cores='2' threads='2' />
</cpu>
...
</domain>
Tune Interrupt Requests (IRQs)
This section applies only to virt-io-based packet interfaces. Virt-IO networking works by sending interrupts on the host core. SBC VM performance can be impacted if frequent processing interruptions occur on any core of the VM. To avoid this, the affinity of the IRQs for a virtio-based packet interface should be different from the cores assigned to the SBC VM.
The /proc/interrupts file lists the number of interrupts per CPU, per I/O device. IRQs have an associated "affinity" property, smp_affinity, that defines which CPU cores are allowed to run the interrupt service routine (ISR) for that IRQ. Refer to the distribution guidelines of the host OS for the exact steps to locate and specify the IRQ affinity settings for a device.
External Reference: https://access.redhat.com/solutions/2144921
OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations
Follow the OpenStack-recommended performance settings for the host and guest. Refer to VNF Performance Tuning for details.
Make sure that physical network adapters, Poll Mode Driver (PMD) threads, and pinned CPUs for the instance are all on the same NUMA node. This is mandatory for optimal performance.
PMD threads are the threads that do the heavy lifting for userspace switching. They perform tasks such as continuous polling of input ports for packets, classifying packets once received, and executing actions on the packets once they are classified.
- Set the queue size for virtio interfaces to 1024 by updating the Director template.
NovaComputeExtraConfig:
  - nova::compute::libvirt::tx_queue_size: '"1024"'
  - nova::compute::libvirt::rx_queue_size: '"1024"'
- Configure the following dpdk parameters in host ovs-dpdk:
- Make sure two pairs of Rx/Tx queues are configured for host dpdk interfaces.
To validate, issue the following command during ovs-dpdk bring-up:
ovs-vsctl get Interface dpdk0 options
For background details, see http://docs.openvswitch.org/en/latest/howto/dpdk/
- Enable per-port memory, which means each port uses a separate mem-pool for receiving packets instead of the default shared mem-pool:
ovs-vsctl set Open_vSwitch . other_config:per-port-memory=true
- Configure 4096 MB of hugepage memory on each socket:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
- Make sure to spawn the appropriate number of PMD threads so that each port/queue can be serviced by a particular PMD thread. The PMD threads must be pinned to dedicated cores/hyper-threads that are on the same NUMA node as the network adapter and the guest, are isolated from the kernel, and are not used by the guest for any other purpose. Set pmd-cpu-mask accordingly.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x40001004000100
The example above sets PMD threads to run on logical CPUs 8, 26, 36, and 54 (two physical cores; CPUs 8-36 and 26-54 are sibling hyper-threads).
- Restart ovs-vswitchd after the changes:
systemctl status ovs-vswitchd
systemctl restart ovs-vswitchd
- The port and Rx queue assignment to PMD threads is crucial for optimal performance. See http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/ for more details. The affinity is a CSV list of <queue_id>:<core_id> pairs that needs to be set for each port.
ovs-vsctl set interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26"
ovs-vsctl set interface vhub89b3d58-4f other_config:pmd-rxq-affinity="0:36"
ovs-vsctl set interface vhu6d3f050e-de other_config:pmd-rxq-affinity="1:54"
In the example above, the PMD thread on core 8 reads queue 0 and the PMD thread on core 26 reads queue 1 of the dpdk0 interface.
Alternatively, you can use the default assignment of port/Rx queues to PMD threads and enable the auto-load-balance option so that OVS redistributes Rx queues across PMD threads based on load.
ovs-vsctl set open_vswitch . other_config:pmd-auto-lb="true"
ovs-appctl dpif-netdev/pmd-rxq-rebalance
Troubleshooting
- To check the port/Rx queue distribution among PMD threads, enter the command:
ovs-appctl dpif-netdev/pmd-rxq-show
- To check the PMD thread stats (actual CPU usage), use the command below and check the "processing cycles" and "idle cycles" counters:
ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show
- To check packet drops on host dpdk interfaces, use the command below and check the rx_dropped/tx_dropped counters:
watch -n 1 'ovs-vsctl get interface dpdk0 statistics|sed -e "s/,/\n/g" -e "s/[\",\{,\}, ]//g" -e "s/=/ =\u21d2 /g"'
For additional details on troubleshooting performance issues and packet drops in an ovs-dpdk environment, refer to the following page:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/ovs-dpdk_end_to_end_troubleshooting_guide/validating_an_ovs_dpdk_deployment#find_the_ovs_dpdk_port_physical_nic_mapping_configured_by_os_net_config
Benchmarking
Setup details:
- Platform: RHOSP13
- Host OS: RHEL7.5
- Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
- 1 Provider Network configured for Management Interface
- 1 Provider Network configured for HA Interface
- OVS+DPDK enabled for packet interfaces (pkt0 and pkt1)
- 2 pairs of Rx/Tx queues in host dpdk interfaces
- 1 Rx/Tx queue in guest virtio interface
- 4 PMD threads pinned to 4 hyper-threads (i.e. using 2 physical cores)
Guest Details:
- SSBC - 8vcpu/18GB RAM/100GB HDD
- MSBC - 10vcpu/20GB RAM/100 GB HDD
Benchmarking was performed in a D-SBC setup with up to 30k pass-through sessions using the recommendations described in this document.
You may require additional cores for PMD threads for higher session counts.
External References
https://docs.openvswitch.org/en/latest/howto/dpdk/
https://docs.openvswitch.org/en/latest/topics/dpdk/pmd/