Panel | ||||
---|---|---|---|---|
In this section:
|
There are several VM operating parameters that can be set to improve system throughput for a single VM or multiple VMs installed on a KVM host. Some VM operating parameters are set on the KVM host and can be modified at any time, even while the VM instance is running, while others are set on the VM and can only be configured when the VM instance is shut down.
The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are considered general guidelines and are not intended to be exhaustive. Refer to the documentation provided by your Linux OS and KVM host vendors for complete details. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance; refer to the Red Hat Virtualization Tuning and Optimization Guide for details.
Info | ||
---|---|---|
| ||
To perform performance tuning procedures on a VM instance, you must log on to the host system as the root user. |
Excerpt |
---|
General Recommendations |
The following general recommendations apply to all platforms where SBC SWe is deployed:
|
|
|
|
|
|
|
Spacevars | ||
---|---|---|
|
Ribbon recommends applying the BIOS settings in the following table on all hosts running VMs for optimum performance:
Caption | ||||
---|---|---|---|---|
| ||||
|
|
|
|
|
|
For hardware virtualization, all server BIOS settings are different, but in general the following guidelines apply:
Check the current configuration of the CPU frequency setting using the following command on the host system.
Code Block |
---|
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor |
The CPU frequency setting must be set to performance to improve VNF performance. Use the following command on the host system:
Code Block |
---|
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor |
Info | ||
---|---|---|
| ||
For GPU transcoding, ensure all power supplies are plugged into the server. |
The CPU frequency setting determines the operating clock speed of the processor and, in turn, the system performance. Red Hat offers a set of built-in tuning profiles and a tool called tuned-adm that helps in configuring the required tuning profile.
Ribbon recommends applying the throughput-performance
tuning profile, which allows the processor to operate at maximum frequency.
Determine the active tuning profile:
Code Block |
---|
# tuned-adm active
Current active profile: powersave |
Apply the throughput-performance tuning profile:
Code Block |
---|
# tuned-adm profile throughput-performance |
This configuration is persistent across reboots, and takes effect immediately. There is no need to reboot the host after configuring this tuning profile.
Use the procedure below to accomplish NUMA pinning for the VM.
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governorInfo | ||
---|---|---|
| ||
You can skip NUMA pinning for virtual pkt interfaces. |
Determine the number of NUMA nodes on the host server.
Code Block |
---|
[root@srvr3320 ~]# lscpu | grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
[root@srvr3320 ~]# |
In this example, there are two NUMA nodes on the server.
Obtain the bus-info of the PF interface using the command ethtool -i <PF interface name>
.
Code Block |
---|
[root@srvr3320 ~]# ethtool -i ens4f0
driver: igb
version: 5.6.0-k
firmware-version: 1.52.0
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
[root@srvr3320 ~]# |
Anchor 2.b 2.b
Identify the NUMA node of the PCI device using cat /sys/bus/pci/devices/<PCI device>/numa_node
.
Code Block |
---|
[root@srvr3320 ~]# cat /sys/bus/pci/devices/0000\:81\:00.0/numa_node
1 |
Repeat the previous step for other SR-IOV interfaces from which you plan to connect VFs.
Info | ||
---|---|---|
| ||
Make sure that all PCI devices are connected to the same NUMA node. |
Once the NUMA node is identified, set the <numatune> element for the SBC VM in the VM XML file.
Code Block |
---|
<numatune>
<memory mode='preferred' nodeset="1"/>
</numatune> |
To determine the host system's processor and CPU details, enter the following command to see how many vCPUs are assigned to host CPUs:
Code Block |
---|
lscpu -p |
Anchor | ||||
---|---|---|---|---|
|
Code Block | ||
---|---|---|
| ||
[root@srvr3320 ~]# lscpu -p
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
0,0,0,0,,0,0,0,0
1,1,0,0,,1,1,1,0
2,2,0,0,,2,2,2,0
3,3,0,0,,3,3,3,0
4,4,0,0,,4,4,4,0
5,5,0,0,,5,5,5,0
6,6,0,0,,6,6,6,0
7,7,0,0,,7,7,7,0
8,8,1,1,,8,8,8,1
9,9,1,1,,9,9,9,1
10,10,1,1,,10,10,10,1
11,11,1,1,,11,11,11,1
12,12,1,1,,12,12,12,1
13,13,1,1,,13,13,13,1
14,14,1,1,,14,14,14,1
15,15,1,1,,15,15,15,1
16,0,0,0,,0,0,0,0
17,1,0,0,,1,1,1,0
18,2,0,0,,2,2,2,0
19,3,0,0,,3,3,3,0
20,4,0,0,,4,4,4,0
21,5,0,0,,5,5,5,0
22,6,0,0,,6,6,6,0
23,7,0,0,,7,7,7,0
24,8,1,1,,8,8,8,1
25,9,1,1,,9,9,9,1
26,10,1,1,,10,10,10,1
27,11,1,1,,11,11,11,1
28,12,1,1,,12,12,12,1
29,13,1,1,,13,13,13,1
30,14,1,1,,14,14,14,1
31,15,1,1,,15,15,15,1
[root@srvr3320 ~]#
|
The first column lists the logical CPU number of a CPU used by the Linux kernel. The second column lists the logical core number -- use this information for vCPU pinning.
CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.
Info | ||
---|---|---|
| ||
|
Use the following steps to update the pinning information in the guest VM's KVM configuration XML file:
Start virsh.
Code Block | ||
---|---|---|
| ||
virsh
[root@kujo ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # |
Check the list of running instances:
Code Block | ||
---|---|---|
| ||
virsh # list --all
Id Name State
----------------------------------------------------
2 ISBC_SWE_VM running |
Edit the VM instance:
Code Block | ||
---|---|---|
| ||
virsh # edit <KVM_instance_name> |
Search for the vcpu placement
attribute.
Panel | ||
---|---|---|
| ||
|
Make sure the vCPUs are pinned to the correct NUMA node CPUs.
Ribbon recommends reserving the first physical core's siblings of each NUMA node for host processes (do not use them for the VM). Since the PCI device is connected to NUMA node1 (as determined in step 2.b of the NUMA pinning procedure), you must pin the vCPUs of the VM to CPU siblings in NUMA node1.
Skip the first physical core siblings, 8 and 24, and pin the rest.
Code Block |
---|
<vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
<cputune>
<vcpupin vcpu="0" cpuset="9"/>
<vcpupin vcpu="1" cpuset="25"/>
<vcpupin vcpu="2" cpuset="10"/>
<vcpupin vcpu="3" cpuset="26"/>
</cputune> |
As the CPU architecture example shows, you must pin the cores to their siblings (i.e., the two hyperthreads coming from the same physical core). The second column in the example shows the physical core number.
Info | ||
---|---|---|
| ||
Because Sub-NUMA Clustering is disabled in the BIOS, each socket represents one NUMA node; in this case, socket 0 is NUMA node0 and socket 1 is NUMA node1. Make sure that all the vCPUs are pinned to the same NUMA node and do not cross the NUMA boundary. |
Tip | ||
---|---|---|
| ||
Ensure that no two VM instances have the same physical core affinity. For example, if VM1 is assigned an affinity of 9,25,10,26, no other VM should be pinned to these cores. To assign CPU pinning to other VMs, use the other available cores on the host, leaving the first two logical core siblings per NUMA node for the host (as described in Perform Host Pinning). Also, assign affinity to all other VM instances running on the same host; otherwise, the VMs without affinity may impact the performance of VMs that have affinity. |
Save and exit the XML file.
Code Block |
---|
:wq |
Start the VM instance.
Code Block | ||
---|---|---|
| ||
virsh # start <KVM_instance_name> |
Info | ||
---|---|---|
| ||
If you require additional changes to the XML file (such as those described below), you can hold off on restarting until all changes are made. |
Ribbon recommends setting the VM CPU mode to host-model using a virsh command in the host system. Use the following steps to edit the VM CPU mode:
Start virsh.
Code Block | ||
---|---|---|
| ||
virsh
[root@kujo ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # |
Check the list of running instances:
Code Block | ||
---|---|---|
| ||
virsh # list --all
Id Name State
----------------------------------------------------
2 ISBC_SWE_VM running |
Edit the VM instance:
Code Block | ||
---|---|---|
| ||
virsh # edit <KVM_instance_name> |
Locate the <cpu mode='custom'>
attribute in the default configuration.
Panel | ||
---|---|---|
| ||
|
Replace the entire CPU mode content shown above with the content below, which contains the proper CPU topology of the VM. To identify the proper topology for your VM instance, use sockets=1 (as the VM has a single NUMA node), threads=2 (since the VM supports hyperthreading), and cores=<number of vCPUs for the VM>/2.
Panel | ||
---|---|---|
| ||
|
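The original panel content is not reproduced here. As an illustration only (assuming a 4-vCPU VM, which matches the 1-socket, 2-core, 2-thread example in the note below), the replacement CPU mode section might look like this:
Code Block |
---|
<cpu mode='host-model'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu> |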
Info | ||
---|---|---|
| ||
Ensure that the topology details entered above exactly match the topology details set while creating the VM instance (the number of cores equals the number of vCPUs allocated to the VM divided by 2). For example, if the VM instance topology is set to 1 socket, 2 cores, and 2 threads, enter the identical details in this XML file. |
Save and exit the XML file.
Code Block |
---|
:wq |
Start the VM instance.
Code Block | ||
---|---|---|
| ||
virsh # start <KVM_instance_name> |
Info | ||
---|---|---|
| ||
If you require additional changes to the XML file (such as those described below), you can hold off on restarting until all changes are made. |
Increase the Transmit Queue Length for virt-io Interfaces
This section is applicable only for virt-io based interfaces.
Spacevars | ||
---|---|---|
|
To increase the Transmit Queue Length to 4096:
Start virsh:
Code Block | ||
---|---|---|
| ||
virsh
[root@kujo ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # |
Identify the available interfaces.
Code Block | ||
---|---|---|
| ||
domiflist <VM_instance_name> |
The list of active interfaces displays.
Panel | ||
---|---|---|
| ||
|
Increase the Transmit Queue Lengths for the tap interfaces.
Code Block | ||
---|---|---|
| ||
ifconfig <interface_name> txqueuelen <length> |
The interface_name
is the name of the interface you want to change, and length
is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096
.
Verify the new transmit queue length of the interface.
Code Block | ||
---|---|---|
| ||
ifconfig <interface_name> |
Example output:
Panel | ||
---|---|---|
| ||
|
To make this setting persistent across reboots, do the following:
Create or modify the /etc/udev/rules.d/60-tap.rules file and add the following KERNEL rule: KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 4096"
Code Block |
---|
# vim /etc/udev/rules.d/60-tap.rules
KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 4096"
# udevadm control --reload-rules |
Apply the rules to already created interfaces.
Code Block |
---|
# udevadm trigger --attr-match=subsystem=net |
Reboot the host.
Code Block |
---|
reboot |
Info | ||
---|---|---|
| ||
If you require additional changes, you can hold off on rebooting until all changes are made. |
Kernel Same-page Merging (KSM) is a technology that finds common memory pages inside a Linux system and merges the pages to save memory resources. When one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests are running with the same level of the operating system. However, the scanning process introduces overhead that may cause applications to run slower, which is not desirable.
To turn off KSM in the host:
Deactivate KSM by stopping the ksmtuned
and ksm
services, as shown below. This does not persist across reboots.
Code Block |
---|
# systemctl stop ksm
# systemctl stop ksmtuned |
Disable KSM persistently as shown below:
Code Block |
---|
# systemctl disable ksm
# systemctl disable ksmtuned |
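Optionally, you can confirm that KSM is not scanning by reading the KSM run flag in sysfs (a standard Linux path, not specific to this procedure); a value of 0 indicates KSM is stopped:
Code Block |
---|
# cat /sys/kernel/mm/ksm/run
0 |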
To avoid performance impact on VMs due to host-level Linux services, host pinning isolates physical cores where a guest VM is hosted from physical cores where the Linux host processes/services run.
Spacevars | ||
---|---|---|
|
In this example, physical core 0 (logical CPUs 0 and 16) and physical core 8 (logical CPUs 8 and 24) each represent the first core in a CPU socket and are reserved for Linux host processes.
Info | ||
---|---|---|
| ||
The |
Configure the CPUAffinity
option in /etc/systemd/system.conf
. To get the first core siblings of each socket, use lscpu
as shown below, or any other equivalent command.
As shown below, 0
and 16
are the first core siblings on NUMA node0 CPU(s)
, and 8
and 24
are the first core siblings on NUMA node1 CPU(s)
.
Panel | ||
---|---|---|
| ||
|
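For reference, the relevant lscpu output for the example host used earlier in this procedure (the CPU ranges below match the NUMA pinning example above) is:
Code Block |
---|
# lscpu | grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31 |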
To dedicate the physical CPUs 0 and 8 for host processing, specify CPUAffinity as 0 8 16 24 in the file /etc/systemd/system.conf
.
Code Block |
---|
CPUAffinity=0 8 16 24 |
Reboot the system.
Code Block |
---|
reboot |
The <emulatorpin>
tag specifies to which host physical CPUs the emulator (a subset of a domain, not including vCPUs) is pinned. The <emulatorpin>
tag provides a method of setting a precise affinity to emulator thread processes. As a result, vhost threads run on the same subset of physical CPUs and memory, thus benefit from cache locality.
In the above example, the VM is pinned to core siblings 9,25,10,26 on NUMA node1, and 8 and 24 on NUMA node1 are reserved for host-level services, so you can pin the emulator thread to any free core siblings in the same NUMA node, such as 11 and 27, as shown below.
Code Block | ||
---|---|---|
| ||
<cputune>
<emulatorpin cpuset="11,27"/>
</cputune> |
The <emulatorpin>
tag is required in order to pin the virtio network traffic to a different core than the VM vCPUs. This greatly reduces the CPU steal percentage seen inside the VMs.
Info | ||||
---|---|---|---|---|
| ||||
|
Spacevars | ||
---|---|---|
|
The number of hugepages is decided based on the total memory available on the host. Ribbon recommends configuring 80-90% of the total memory as hugepage memory and leaving the rest as normal Linux memory.
Configure the host server hugepage size for 1G hugepages by appending the following line to the kernel command line options in /etc/default/grub. In the example below, the host has a total of 256G memory, out of which 200G is configured as hugepages.
Code Block |
---|
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=200 rhgb quiet"
GRUB_DISABLE_RECOVERY="true" |
Regenerate the GRUB2 configuration as shown below:
If your system uses BIOS firmware, issue the command:
Code Block |
---|
# grub2-mkconfig -o /boot/grub2/grub.cfg |
If your system uses UEFI firmware, issue the command:
Code Block |
---|
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg |
Tip | ||
---|---|---|
| ||
A simple method to find out whether you are running UEFI or BIOS is to check for the presence of the folder /sys/firmware/efi. Enter the ls command shown below. If you get the error "No such file or directory", the system is booted in BIOS mode; otherwise it is booted in UEFI mode. |
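For example (this is the standard check on Linux hosts; the directory exists only on UEFI-booted systems):
Code Block |
---|
# ls /sys/firmware/efi |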
Add the following lines to your instance XML file using virsh edit <instanceName> to allow the hypervisor to back the VM with hugepage memory.
Info | ||
---|---|---|
| ||
Make sure that the PCI device (SR-IOV), vCPU, and VM memory come from the same NUMA node. For virtual pkt interfaces, ensure that the vCPU and memory come from the same NUMA node. |
Code Block |
---|
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='1'/>
</hugepages>
</memoryBacking> |
Tip |
---|
This example pins the VM to NUMA node1. To host a second VM on the other NUMA node, use the appropriate NUMA node value in nodeset='<NUMA node>'. |
Restart the host.
Code Block |
---|
reboot |
Info | ||
---|---|---|
| ||
If you require additional changes, you can hold off on rebooting until all changes are made. |
Obtain the PID of the VM:
Code Block |
---|
ps -eaf | grep qemu | grep -i <vm_name> |
Verify VM memory is received from a single NUMA node:
Code Block |
---|
numastat -p <vmpid> |
Perform the following steps to disable flow control.
Info | ||||
---|---|---|---|---|
| ||||
This setting is optional and depends on NIC capability. Not all NICs allow you to modify the flow control parameters. If it is supported by your NICs, disabling flow control is recommended.
|
To disable flow control:
Log on to the host system as the root user.
Disable flow control for interfaces attached to the SWe VM.
Tip | ||
---|---|---|
| ||
Use the |
Code Block |
---|
ethtool -A <interface name> rx off tx off autoneg off |
Code Block | ||
---|---|---|
| ||
ethtool -A p4p3 rx off tx off autoneg off
ethtool -A p4p4 rx off tx off autoneg off
ethtool -A em3 rx off tx off autoneg off
ethtool -A em4 rx off tx off autoneg off |
To make the setting persistent:
The network service in CentOS/RedHat has the ability to make the setting persistent. The script /etc/sysconfig/network-scripts/ifup-post
checks for the existence of /sbin/ifup-local
. If it exists, the script runs it with the interface name as a parameter (e.g. /sbin/ifup-local eth0
).
Perform the following steps:
Create this file using the touch
command:
Code Block |
---|
touch /sbin/ifup-local |
Make the file executable using the chmod
command:
Code Block |
---|
chmod +x /sbin/ifup-local |
Set the file's SELinux context using the chcon
command:
Code Block |
---|
chcon --reference /sbin/ifup /sbin/ifup-local |
Open the file in an editor.
Here is an example of a simple script to apply the same settings to all interfaces (except lo):
Code Block |
---|
#!/bin/bash
if [ -n "$1" ]; then
if [ "$1" != "lo" ];then
/sbin/ethtool -A $1 rx off tx off autoneg off
fi
fi |
Use the following example KVM configuration XML file to verify all of the values (highlighted in red) that you changed in the aforementioned performance tuning steps.
Panel | ||||||
---|---|---|---|---|---|---|
| ||||||
|
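The full example file is not reproduced here. The following condensed sketch is assembled from the fragments shown earlier in this procedure (memory size, CPU numbers, and NUMA node values match the examples above and will differ in your environment):
Code Block |
---|
<domain type='kvm'>
  <!-- other elements unchanged -->
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB' nodeset='1'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='9'/>
    <vcpupin vcpu='1' cpuset='25'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='26'/>
    <emulatorpin cpuset='11,27'/>
  </cputune>
  <numatune>
    <memory mode='preferred' nodeset='1'/>
  </numatune>
  <cpu mode='host-model'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <!-- remaining elements unchanged -->
</domain> |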
This section applies only to virt-io-based packet interfaces. Virt-IO networking works by sending interrupts on the host core. SBC VM performance can be impacted if frequent processing interruptions occur on any core of the VM. To avoid this, the affinity of the IRQs for a virtio-based packet interface should be different from the cores assigned to the SBC VM.
The /proc/interrupts
file lists the number of interrupts per CPU, per I/O device. IRQs have an associated "affinity" property, smp_affinity, that defines which CPU cores are allowed to run the interrupt service routine (ISR) for that IRQ. Refer to the distribution guidelines of the host OS for the exact steps to locate and specify the IRQ affinity settings for a device.
External Reference: https://access.redhat.com/solutions/2144921
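The exact device names and IRQ numbers depend on your host. As a generic illustration of the standard Linux mechanism (the <irq_number> and <cpu_list> values are placeholders; note that the irqbalance service, if running, may override manual changes):
Code Block |
---|
# grep virtio /proc/interrupts
# cat /proc/irq/<irq_number>/smp_affinity_list
# echo <cpu_list> > /proc/irq/<irq_number>/smp_affinity_list |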
Span |
---|
Validate your configuration changes using the steps below.
CPU frequency on the Host: Determine the active tuning profile:
Code Block |
---|
# tuned-adm active
Current active profile: throughput-performance |
Verify the NUMA node of the SR-IOV device, as described in the section "Perform NUMA Pinning for the VM".
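For example, re-run the check from the NUMA pinning procedure (the PCI address and output below are from the earlier example):
Code Block |
---|
# cat /sys/bus/pci/devices/0000\:81\:00.0/numa_node
1 |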
Check to ensure all vCPU pinnings match what was previously assigned:
Code Block |
---|
virsh # vcpupin 2
VCPU: CPU Affinity
--------------------------
0: 9
1: 25
2: 10
3: 26 |
KSM settings: Validate that KSM is disabled:
Code Block |
---|
# systemctl list-unit-files | grep disabled | grep ksm
ksm.service disabled
ksmtuned.service disabled |
Host pinning: Check the CPUAffinity set on the host to ensure the CPU numbers match what you assigned earlier.
Code Block |
---|
# cat /etc/systemd/system.conf | grep CPUAffinity
CPUAffinity=0 8 16 24 |
Flow control:
Check the physical interfaces on the host to ensure that flow control is disabled.
Code Block |
---|
# ethtool -a <interface name>
Pause parameters for <interface name>:
Autonegotiate: off
RX: off
TX: off |
Overall VM settings: Verify all other VM instance settings remain intact after the final reboot.
Code Block |
---|
# virsh
# edit <instance name> |
Span |
---|
Include Page | ||||
---|---|---|---|---|
|
Pagebreak |
---|
To determine the host system's processor and CPU details, perform the following steps:
Execute the following command to determine how many vCPUs are assigned to host CPUs:
Code Block |
---|
lscpu -p |
The command provides the following output:
Caption | ||||
---|---|---|---|---|
| ||||
The first column lists the logical CPU number of a CPU as used by the Linux kernel. The second column lists the logical core number; use this information for vCPU pinning.
CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.
Info | ||
---|---|---|
| ||
|
To update the pinning information in the KVM configuration XML file:
Enter the following command.
Code Block | ||
---|---|---|
| ||
virsh |
The command provides the following output:
Caption | ||||
---|---|---|---|---|
| ||||
Enter the following command to edit the VM instance:
Code Block | ||
---|---|---|
| ||
virsh # edit <KVM_instance_name> |
Search for the vcpu placement
attribute.
Caption | ||||
---|---|---|---|---|
| ||||
Enter CPU pinning information as shown:
Caption | ||||
---|---|---|---|---|
| ||||
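The original caption content is not reproduced here. The general form of the pinning entries (the same structure shown earlier in this document; the CPU numbers below are illustrative only, so choose cores per the note that follows) is:
Code Block |
---|
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune> |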
Info | ||
---|---|---|
| ||
Ensure that no two VM instances have the same physical core affinity. For example, if VM1 has an affinity of 0,1,2,3, then no other VM should be pinned to 0,1,2,3, 8,9,10, or 11, as these CPUs belong to physical cores assigned to VM1. Also, all other VM instances running on the same host must be assigned affinity; otherwise, the VMs without affinity might impact the performance of VMs that have affinity. |
Enter the following command to save and exit the XML file.
Code Block |
---|
:wq |
Even if the Copy host CPU configuration was selected while creating a VM instance, the host configuration may not be copied on the VM instance. To resolve this issue, you must edit the CPU mode to host-passthrough
using a virsh
command in the host system.
To edit the VM CPU mode:
Enter the following command.
Code Block |
---|
virsh |
The command provides the following output:
Caption | ||||
---|---|---|---|---|
| ||||
Enter the following command to edit the VM instance:
Code Block | ||
---|---|---|
| ||
edit <KVM_instance_name> |
Search for the cpu mode
attribute.
Caption | ||||
---|---|---|---|---|
| ||||
Replace the cpu mode
attribute with the following:
Caption | ||||
---|---|---|---|---|
| ||||
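The original caption content is not reproduced here. Based on the topology example in the note below (1 socket, 4 cores, 1 thread), the replacement might look like the following; adjust the values to match your VM instance:
Code Block |
---|
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu> |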
Info | ||
---|---|---|
| ||
The topology details entered must be the same as the topology details set while creating the VM instance. For example, if the topology was set to 1 socket, 4 cores, and 1 thread, the same values must be entered in this XML file. |
Enter the following command to save and exit the XML file.
Code Block |
---|
:wq |
Enter the following command to start the VM instance.
Code Block | ||
---|---|---|
| ||
start <KVM_instance_name> |
Enter the following command to verify the host CPU configuration on the VM instance:
Code Block | ||
---|---|---|
| ||
cat /proc/cpuinfo |
The command provides the following output.
Caption | ||||
---|---|---|---|---|
| ||||
To increase the transmit queue length to 4096:
Info | ||
---|---|---|
| ||
By default, the transmit queue length is set to 500. |
Execute the following command to identify the available interfaces:
Code Block | ||
---|---|---|
| ||
virsh |
The virsh
prompt is displayed.
Execute the following command.
Code Block | ||
---|---|---|
| ||
domiflist <VM_instance_name> |
The list of active interfaces is displayed.
Caption | ||||
---|---|---|---|---|
| ||||
Execute the following command to increase the transmit queue length of the tap interface.
Code Block | ||
---|---|---|
| ||
ifconfig <interface_name> txqueuelen <length> |
where interface_name
is the name of the interface you want to change, and length
is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096
.
Execute the following command to verify the new transmit queue length of the interface.
Code Block | ||
---|---|---|
| ||
ifconfig <interface_name> |
The command provides the following output.
Caption | ||||
---|---|---|---|---|
| ||||
Apply the following settings to all VMs installed on the host.
Kernel Same-page Merging (KSM) is a technology that finds common memory pages inside a Linux system and merges the pages to save memory resources. When one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial where multiple guests are running with the same level of the operating system. However, the scanning process introduces overhead that may cause applications to run slower, which is not desirable. The SBC SWe requires that KSM be turned off.
The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.
Code Block |
---|
# echo 0 >/sys/kernel/mm/ksm/run
# echo "KSM_ENABLED=0" > /etc/default/qemu-kvm |
Once KSM is turned off, it is important to verify that there is still sufficient memory on the hypervisor. When the pages are not merged, it may increase memory usage and lead to swapping that negatively impacts performance.
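One quick way to check the available memory on the hypervisor (a standard Linux command, shown here only as a suggestion) is:
Code Block |
---|
# free -h |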
Host pinning isolates the physical cores where a guest VM is hosted from the physical cores where Linux host processes/services run, to avoid performance impact on VMs due to host-level Linux services. In this example, physical core 0 (logical CPUs 0 and 36) and physical core 1 (logical CPUs 1 and 37) are reserved for Linux host processes.
The CPUAffinity
option in /etc/systemd/system.conf
sets affinity to systemd
by default, as well as for everything it launches, unless their .service
file overrides the CPUAffinity
setting with its own value. Configure the CPUAffinity
option in /etc/systemd/system.conf
.
Execute the following command:
Code Block |
---|
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2699.984
BogoMIPS: 4604.99
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71
|
To dedicate the physical CPUs 0 and 1 for host processing in /etc/systemd/system.conf
, add CPUAffinity
as 0 1 36 37. Restart the system.
Code Block |
---|
CPUAffinity=0 1 36 37 |
Mount the HugeTLB filesystem on the host.
Code Block |
---|
mkdir -p /hugepages |
Add the following line in the /etc/fstab
file.
Code Block |
---|
hugetlbfs /hugepages hugetlbfs defaults 0 0 |
Configure the number of 2M hugepages equal to the vRAM requirement for hosting a VM:
Code Block |
---|
cat /etc/sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.nr_hugepages = 25000 (assuming a 24G VM)
vm.hugetlb_shm_group = 36 |
Add lines in your instance XML file using virsh edit <instanceName>:
Code Block |
---|
<domain type='kvm' id='3'>
<name>RENGALIVM01</name>
<uuid>f1bae5a2-d26e-4fc0-b472-3638743def9a</uuid>
<memory unit='KiB'>25165824</memory>
<currentMemory unit='KiB'>25165824</currentMemory>
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='0'/>
</hugepages>
</memoryBacking> |
Info | ||
---|---|---|
| ||
The previous example pins the VM on NUMA node 0. For hosting a second VM on NUMA node 1, use nodeset = ‘1’ |
Restart the host.
To verify, get the PID for the VM and execute the following command to check that VM memory is received from a single NUMA node:
Code Block |
---|
numastat -p <vmpid> |
Log on to the host system as the root user.
Execute the following command to disable flow control for interfaces attached to the SWe VM.
Code Block |
---|
ethtool -A <interface name> rx off tx off autoneg off |
Info | ||
---|---|---|
| ||
Use the |
Example:
ethtool -A p4p3 rx off tx off autoneg off
ethtool -A p4p4 rx off tx off autoneg off
ethtool -A em3 rx off tx off autoneg off
ethtool -A em4 rx off tx off autoneg off
Info | ||
---|---|---|
| ||
Refer to the RHEL site for information on how to make NIC |