

Panel

In this section:

Table of Contents
maxLevel4


The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not exhaustive. Refer to the documentation provided by your Linux OS and KVM host vendors. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance; refer to the Red Hat Virtualization Tuning and Optimization Guide for details.

Info
titleNote

For performance tuning procedures on a VM instance, log on to the host system as the root user.


Excerpt

General Recommendations

  • Ensure the number of vCPUs in an instance is always an even number (4, 6, 8, and so on), since hyper-threaded vCPUs are used.
  • For best performance, confine a single instance to a single NUMA node. Performance degrades if an instance spans multiple NUMA nodes.
  • Ensure the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. Doing so reduces remote-node memory access, which in turn improves performance.


Recommended BIOS Settings

Spacevars
0company
 recommends the following BIOS settings in the host for optimum performance.

Caption
0Table
1Recommended BIOS Settings


BIOS Parameter                                  Setting
CPU power management / Power Regulator          Maximum performance or Static High Performance
Intel Hyper-Threading                           Enabled
Intel Turbo Boost                               Enabled
Intel VT-x (Virtualization Technology)          Enabled
Thermal Configuration                           Optimal Cooling or Maximum Cooling
Minimum Processor Idle Power Core C-state       No C-states
Minimum Processor Idle Power Package C-state    No C-states
Energy Performance BIAS                         Max Performance
Sub-NUMA Clustering                             Disabled
HW Prefetcher                                   Disabled
SR-IOV                                          Enabled
Intel® VT-d                                     Enabled


Info
titleNote

For GPU transcoding, ensure all power supplies are plugged into the server.

CPU Frequency Setting on the Host

The CPU frequency setting determines the operating clock speed of the processor and, in turn, the system performance. Red Hat offers a set of built-in tuning profiles and a tool called tuned-adm that helps configure the required tuning profile.

Ribbon recommends applying the throughput-performance tuning profile, which makes the processor operate at maximum frequency.

  • Find out the active tuning profile:

# tuned-adm active

Current active profile: powersave

  • Apply the throughput-performance tuning profile:

# tuned-adm profile throughput-performance

This configuration is persistent across reboots and takes effect immediately. There is no need to reboot the host after configuring this tuning profile.

Processor and CPU Details

To determine the host system's processor and CPU details, perform the following steps:

  1. Execute the following command to view the host CPU topology (logical CPUs, cores, and sockets):

    Code Block
    lscpu -p

    The command provides the following output:

    Caption
    0Figure
    1CPU Architecture

    The first column lists the logical CPU number as used by the Linux kernel. The second column lists the core ID the logical CPU belongs to; use this information for vCPU pinning.
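The mapping can also be extracted programmatically. The following is a hypothetical helper (not part of the product tooling) that groups the logical CPUs reported by lscpu -p by physical core, so that sibling hyper-threads can be identified for pinning:

```python
from collections import defaultdict

def siblings_by_core(lscpu_p_output):
    """Map (socket, core) -> list of logical CPUs (sibling hyper-threads).

    Parses the CSV produced by `lscpu -p`, whose first three fields are
    CPU, Core, and Socket; header lines start with '#'.
    """
    cores = defaultdict(list)
    for line in lscpu_p_output.splitlines():
        if not line or line.startswith("#"):
            continue  # skip the comment header lscpu -p emits
        cpu, core, socket = line.split(",")[:3]
        cores[(int(socket), int(core))].append(int(cpu))
    return dict(cores)

# Illustrative `lscpu -p` output for a 1-socket, 2-core, 2-thread host:
sample = """\
# CPU,Core,Socket,Node
0,0,0,0
1,1,0,0
2,0,0,0
3,1,0,0"""
print(siblings_by_core(sample))
# -> {(0, 0): [0, 2], (0, 1): [1, 3]}
```

Consecutive vCPUs would then be pinned to the sibling lists, for example vCPUs 0 and 1 to logical CPUs 0 and 2 of the same physical core.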

Persistent CPU Pinning

CPU pinning ensures that a VM gets CPU time only from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.

Info
titleNote:
  • Ensure that no two VM instances are allocated the same physical cores on the host system.
  • Ensure that all the VMs hosted on the physical server are pinned.
  • To create vCPU to hyper-thread pinning, pin consecutive vCPUs to sibling threads (logical cores) of the same physical core. Identify the logical core/sibling threads from the output returned by the command lscpu on the host.
  • Do not include the 0th physical core of the host in pinning, because most host management/kernel threads are spawned on the 0th core by default.

 Use the following steps to update the pinning information in the KVM configuration XML file:

  1. Shut down the VM instance.
  2. Enter the following command.

    Code Block
    languagenone
    virsh

    The command provides the following output:

    Caption
    0Figure
    1virsh Prompt


  3. Enter the following command to edit the VM instance:

    Code Block
    languagenone
    virsh # edit <KVM_instance_name>


  4. Search for the vcpu placement attribute.

    Caption
    0Figure
    1vCPU Placement Attribute


  5. Enter CPU pinning information as shown below:

    Caption
    0Figure
    1CPU Pinning Information


    Tip
    titleTip

    Ensure that no two VM instances have the same physical core affinity. For example, if VM1 has affinity of 0,1,2,3 assigned, then ensure no VM is pinned to 0,1,2,3,8,9,10 or 11 as these CPUs belong to the physical core assigned to VM1. Also, assign all other VM instances running on the same host with affinity; otherwise the VMs without affinity may impact the performance of VMs that have affinity.


  6. Enter the following command to save and exit the XML file.

    Code Block
    :wq
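
    The pinning information entered in step 5 takes the form of a cputune element in the instance XML. The fragment below is an illustrative sketch only; the vCPU count and host CPU IDs are assumptions and must match your own lscpu topology (here, vCPUs 0-3 are pinned to the sibling threads of physical cores 1 and 2 on a host whose second siblings start at logical CPU 36):

    ```xml
    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- vCPUs 0,1 pinned to the sibling threads of physical core 1 -->
      <vcpupin vcpu='0' cpuset='1'/>
      <vcpupin vcpu='1' cpuset='37'/>
      <!-- vCPUs 2,3 pinned to the sibling threads of physical core 2 -->
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='38'/>
    </cputune>
    ```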


CPU Mode Configuration

Spacevars
0company
 recommends setting the CPU mode to host-model using a virsh command on the host system.

Use the following steps to edit the VM CPU mode:

  1. Shut down the VM instance.
  2. Enter the following command.

    Code Block
    virsh

    The following output displays:

    Caption
    0Figure
    1virsh Prompt


  3. Enter the following command to edit the VM instance:

    Code Block
    languagenone
    edit <KVM_instance_name>


  4. Search for the cpu mode attribute.

    Caption
    0Figure
    1cpu mode


  5. Edit the cpu mode attribute with the following:

    Caption
    0Figure
    1Editing CPU Mode


    Tip
    titleTip

    Ensure the topology details entered are identical to the topology details set while creating the VM instance. For example, if the topology was set to 1 socket, 2 cores and 2 threads, enter the same details in this XML file.


  6. Enter the following command to save and exit the XML file.

    Code Block
    :wq


  7. Enter the following command to start the VM instance.

    Code Block
    languagenone
    start <KVM_instance_name>
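
    After step 5, the cpu element in the instance XML will look similar to the following sketch. The topology values here are assumptions for illustration; they must match the socket/core/thread topology set when the VM instance was created:

    ```xml
    <cpu mode='host-model'>
      <!-- topology must mirror the VM's creation-time settings -->
      <topology sockets='1' cores='2' threads='2'/>
    </cpu>
    ```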


Increasing the Transmit Queue Length for virt-io Interfaces

This section is applicable only for virt-io based interfaces. 

Spacevars
0company
 recommends increasing the transmit queue length of host tap interfaces to 4096 for better performance.

By default, the transmit queue length is set to 500. To increase the transmit queue length to 4096, use the following procedure:

  1. Execute the following command to identify the available interfaces:

    Code Block
    languagenone
    virsh

    The virsh prompt displays.

  2. Execute the following command.

    Code Block
    languagenone
    domiflist <VM_instance_name>

    The list of active interfaces displays.

    Caption
    0Figure
    1Active Interfaces List


  3. Execute the following command to increase the transmit queue lengths for the tap interfaces.

    Code Block
    languagenone
    ifconfig <interface_name> txqueuelen <length>

    The interface_name is the name of the interface you want to change, and length is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096.

  4. Execute the following command to verify the value of the interface length.

    Code Block
    languagenone
    ifconfig <interface_name>

    The command provides the following output.

    Caption
    0Figure
    1Interface Information
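
    Note that a queue length set with ifconfig does not survive a reboot. One possible way to make it persistent (an assumption, not a product-documented step; verify against your host distribution) is a udev rule that applies the value as the tap/macvtap interfaces are created:

    ```
    # /etc/udev/rules.d/71-net-txqueuelen.rules (hypothetical file name)
    # Set txqueuelen 4096 on macvtap interfaces as they are added
    SUBSYSTEM=="net", ACTION=="add", KERNEL=="macvtap*", ATTR{tx_queue_len}="4096"
    ```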


Kernel Same-page Merging (KSM) Settings

Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. When one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of the operating system. However, the scanning process adds overhead that can slow applications, which is not desirable.

Turn off KSM in the host.

Deactivate KSM by stopping the ksmtuned and the ksm services as shown below. This does not persist across reboots.

Code Block
# systemctl stop ksm
# systemctl stop ksmtuned

Disable KSM persistently as shown below:

Code Block
# systemctl disable ksm
# systemctl disable ksmtuned

Host Pinning

To avoid performance impact on VMs due to host-level Linux services, host pinning isolates physical cores where a guest VM is hosted from physical cores where the Linux host processes/services run. 

Spacevars
0company
 recommends leaving one physical core per processor socket for host processes.

In this example, physical core 0 (logical CPUs 0 and 36) and physical core 1 (logical CPUs 1 and 37) are reserved for Linux host processes.

The CPUAffinity option in /etc/systemd/system.conf sets the affinity of systemd by default, as well as of everything it launches, unless a unit's .service file overrides the CPUAffinity setting with its own value. Configure the CPUAffinity option in /etc/systemd/system.conf.

Execute the following command:

Code Block
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2699.984
BogoMIPS:              4604.99
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71

To dedicate physical cores 0 and 1 to host processing, specify CPUAffinity as 0 1 36 37 in /etc/systemd/system.conf, as shown below, and restart the system.

Code Block
CPUAffinity=0 1 36 37
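The affinity list above can be derived from the topology: with 2 sockets × 18 cores = 36 physical cores and 2 threads per core, the second hyper-thread sibling of logical CPU n is n + 36 on this host. A small hypothetical helper (an illustration, not product tooling):

```python
def host_affinity(reserved_cores, total_physical_cores):
    """Logical CPUs (both hyper-thread siblings) of the physical cores
    reserved for host processes.

    Assumes the enumeration shown by lscpu above, where the second
    sibling of physical core n is logical CPU n + total_physical_cores.
    """
    cpus = []
    for core in reserved_cores:
        cpus.extend([core, core + total_physical_cores])
    return sorted(cpus)

# Reserve physical cores 0 and 1 on the 36-core host above:
print("CPUAffinity=" + " ".join(map(str, host_affinity([0, 1], 36))))
# -> CPUAffinity=0 1 36 37
```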

Back VMs with 1G Hugepages

Spacevars
0company
 recommends backing its VMs with 1G hugepages for performance reasons. Configure hugepages on the host at boot time to minimize memory fragmentation. If the host OS does not support the recommended 1G hugepage size, configure hugepages of size 2M in place of 1G.

The number of hugepages is decided based on the total memory available on the host. 

Spacevars
0company
 recommends configuring 80-90% of total memory as hugepage memory and leaving the rest as normal Linux memory.
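
As a hypothetical illustration of that sizing rule (the function and its defaults are assumptions, not product tooling):

```python
def hugepage_count(total_mem_gib, fraction=0.8, page_size_gib=1):
    """Number of hugepages covering `fraction` of total host memory,
    rounded down to whole pages."""
    return int(total_mem_gib * fraction // page_size_gib)

# A 256G host at 80% yields about 204 x 1G pages; the GRUB example in
# step 1 below rounds this down to 200.
print(hugepage_count(256))
# -> 204
```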

  1. Configure the hugepage size as 1G and the number of hugepages by appending the following options to the kernel command line in /etc/default/grub. In the example below, the host has a total of 256G of memory, of which 200G is configured as hugepages.

    Code Block
    GRUB_TIMEOUT=5
    
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    
    GRUB_DEFAULT=saved
    
    GRUB_DISABLE_SUBMENU=true
    
    GRUB_TERMINAL_OUTPUT="console"
    
    GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=200 rhgb quiet"
    
    GRUB_DISABLE_RECOVERY="true"


  2. Regenerate the GRUB2 configuration as shown below: 

    1. If your system uses BIOS firmware, execute the following command:

      Code Block
      # grub2-mkconfig -o /boot/grub2/grub.cfg


    2. On a system with UEFI firmware, execute the following command: 

      Code Block
      # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg


  3. Add the following lines to your instance XML file using virsh edit <instanceName>. The example below is for a 32G instance:

    Code Block
    <memory unit='KiB'>33554432</memory>
    
    <currentMemory unit='KiB'>33554432</currentMemory>
    
    <memoryBacking>
    
    <hugepages>
    
    <page size='1048576' unit='KiB' nodeset='0'/>
    
    </hugepages>
    
    </memoryBacking>


    Tip

     The previous example pins the VM on NUMA node 0. For hosting a second VM on NUMA node 1, use nodeset='1'.


  4. Restart the host.

  5. Obtain the PID of the VM from the following command:

    Code Block
    ps -eaf | grep qemu | grep -i <vm_name>


  6. Execute the following command to verify VM memory is received from a single NUMA node:

    Code Block
    numastat -p  <vmpid>


Disable Flow Control

Use the following steps to disable flow control:

Info
titleNote

This setting is optional and depends on NIC capability. Not all NICs allow you to modify the flow control parameters. If the NICs support it,

Spacevars
0company
 recommends disabling flow control to avoid head-of-line blocking issues.


  1. Log in to the system as the root user.
  2. Execute the following command to disable flow control for interfaces attached to the SWe VM.

    Code Block
    ethtool -A <interface name> rx off tx off autoneg off  


    Tip
    titleTip

    Use the <interface name> from the actual configuration.

    Example:

    ethtool -A p4p3 rx off tx off autoneg off
    ethtool -A p4p4 rx off tx off autoneg off
    ethtool -A em3 rx off tx off autoneg off
    ethtool -A em4 rx off tx off autoneg off

Tuning Interrupt Requests (IRQs)

This section applies only to virt-io-based packet interfaces. Virt-IO networking works by sending interrupts on host cores. SBC VM performance can be impacted if frequent interrupt processing occurs on any of the cores assigned to the VM. To avoid this, set the affinity of the IRQs for virtio-based packet interfaces to cores other than those assigned to the SBC VM.

The /proc/interrupts file lists the number of interrupts per CPU, per I/O device. IRQs have an associated "affinity" property, "smp_affinity," that defines which CPU cores are allowed to execute the interrupt service routine (ISR) for that IRQ. Refer to the distribution guidelines of the host OS for the exact steps to locate and specify the IRQ affinity settings for a device.

External Reference: https://access.redhat.com/solutions/2144921
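
smp_affinity is a hexadecimal bitmask in which bit i enables CPU i to service the IRQ. As a hypothetical illustration (not a product utility), the mask for a given set of host cores can be computed as follows:

```python
def smp_affinity_mask(cpus):
    """Hex bitmask for /proc/irq/<n>/smp_affinity: bit i set means
    CPU i may execute the interrupt service routine."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Confine an IRQ to host CPUs 0, 1, 36, and 37 (e.g. the cores reserved
# for the host), keeping it off the cores pinned to the SBC VM:
print(smp_affinity_mask([0, 1, 36, 37]))
# -> 3000000003
```

The resulting value would be written to /proc/irq/&lt;IRQ&gt;/smp_affinity as root; note that on hosts with more than 32 CPUs the kernel displays the mask in comma-separated 32-bit groups.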


Include Page
_OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations
_OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations

Pagebreak