Panel

In this section:

Table of Contents
maxLevel4


Overview

The following sections provide VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not intended to be all-inclusive.

Refer to the documentation provided by your Linux OS and KVM host vendors for complete details. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance. Refer to the Red Hat Virtualization Tuning and Optimization Guide for details.

Info
titleNote

For performance tuning procedures on a VM instance, log in to the host system as the root user.


Excerpt

General Recommendations

  • Ensure the number of hyper-threaded vCPUs in an instance is always even (4, 6, 8, and so on).
  • For best performance, make sure a single instance is confined to a single NUMA node. Performance degradation occurs if an instance spans multiple NUMA nodes.
  • Ensure the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. Doing so reduces remote node memory access which, in turn, improves performance.


Recommended BIOS Settings

Ribbon recommends the following BIOS settings in the host for optimum performance.

Caption
0Table
1Recommended BIOS Settings


BIOS Parameter                                   Setting
-----------------------------------------------  ----------------------------------------------
CPU power management (Power Regulator)           Maximum performance or Static High Performance
Intel Hyper-Threading                            Enabled
Intel Turbo Boost                                Enabled
Intel VT-x (Virtualization Technology)           Enabled
Thermal Configuration                            Optimal Cooling or Maximum Cooling
Minimum Processor Idle Power Core C-state        No C-states
Minimum Processor Idle Power Package C-state     No C-states
Energy Performance BIAS                          Max Performance
Sub-NUMA Clustering                              Disabled
HW Prefetcher                                    Disabled
SRIOV                                            Enabled
Intel® VT-d                                      Enabled



Info
titleNote

For GPU transcoding, ensure all power supplies are plugged into the server.

Procedure

Set CPU Frequency on the Host

The CPU frequency setting determines the operating clock speed of the processor and, in turn, the system performance. Red Hat offers a set of built-in tuning profiles and a tool called tuned-adm that helps configure the required tuning profile.

Ribbon recommends applying the throughput-performance tuning profile, which allows the processor to operate at maximum frequency.

  1. Determine the active tuning profile:

    Code Block
    # tuned-adm active

    
    Current active profile: powersave


  2. Apply the throughput-performance tuning profile:

    Code Block
    # tuned-adm profile throughput-performance


This configuration is persistent across reboots, and takes effect immediately. There is no need to reboot the host after configuring this tuning profile.
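
As an optional check (assuming the host exposes the cpufreq interface under /sys, which depends on the CPU frequency driver in use), you can confirm that the active profile selected a performance-oriented governor:

Code Block
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
performance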

Perform NUMA Pinning for the VM

Use the procedure below to accomplish NUMA pinning for the VM.

Info
titleNote

You can skip NUMA pinning for virtual pkt interfaces.


  1. Determine the number of NUMA nodes on the host server.

    Code Block
    [root@srvr3320 ~]# lscpu | grep NUMA
    NUMA node(s):          2
    NUMA node0 CPU(s):     0-7,16-23
    NUMA node1 CPU(s):     8-15,24-31
    [root@srvr3320 ~]#

    In this example, there are two NUMA nodes on the server.

  2. Find out which NUMA node the SR-IOV-enabled PF is connected to; VFs from this PF are allocated to the SBC VM for pkt interfaces.

    1. Obtain the bus-info of the PF interface using the command ethtool -i <PF interface name>.

      Code Block
      [root@srvr3320 ~]# ethtool -i ens4f0
      driver: igb
      version: 5.6.0-k
      firmware-version: 1.52.0
      expansion-rom-version:
      bus-info: 0000:81:00.0
      supports-statistics: yes
      supports-test: yes
      supports-eeprom-access: yes
      supports-register-dump: yes
      supports-priv-flags: yes
      [root@srvr3320 ~]#

      Anchor
      2.b
      2.b

    2. Identify the NUMA node of the PCI device using cat /sys/bus/pci/devices/<PCI device>/numa_node.

      Code Block
      [root@srvr3320 ~]# cat /sys/bus/pci/devices/0000\:81\:00.0/numa_node
      1


  3. Repeat the previous step for the other SR-IOV PFs from which you plan to allocate VFs (see the example check after the note below).

    Info
    titleNote

    Make sure that all PCI devices are connected to the same NUMA node.
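
    For example, a short shell loop prints the NUMA node of each candidate PF in one pass. This is a minimal sketch; the PCI addresses shown are illustrative, so substitute the bus-info values reported by ethtool -i for your own PFs.

    Code Block
    # for dev in 0000:81:00.0 0000:81:00.1; do echo "$dev -> NUMA node $(cat /sys/bus/pci/devices/$dev/numa_node)"; done
    0000:81:00.0 -> NUMA node 1
    0000:81:00.1 -> NUMA node 1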


  4. Once the NUMA node is identified, set the <numatune> element of the SBC VM in the VM XML file, as shown below.

    Code Block
    <numatune>
        <memory mode='preferred' nodeset="1"/>
    </numatune>
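
    The <numatune> element is added while the VM is shut down, using the same virsh editing workflow used throughout this document (a brief sketch; <KVM_instance_name> is the domain name shown by virsh list --all):

    Code Block
    virsh # shutdown <KVM_instance_name>
    virsh # edit <KVM_instance_name>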


Determine Host Processor and CPU Details

To determine the host system's processor and CPU details, and to see how the logical CPUs map to cores, sockets, and NUMA nodes, enter the following command:

Code Block
lscpu -p

Anchor
CPU Architecture Example
CPU Architecture Example

Code Block
titleCPU Architecture Example
[root@srvr3320 ~]# lscpu -p
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
0,0,0,0,,0,0,0,0
1,1,0,0,,1,1,1,0
2,2,0,0,,2,2,2,0
3,3,0,0,,3,3,3,0
4,4,0,0,,4,4,4,0
5,5,0,0,,5,5,5,0
6,6,0,0,,6,6,6,0
7,7,0,0,,7,7,7,0
8,8,1,1,,8,8,8,1
9,9,1,1,,9,9,9,1
10,10,1,1,,10,10,10,1
11,11,1,1,,11,11,11,1
12,12,1,1,,12,12,12,1
13,13,1,1,,13,13,13,1
14,14,1,1,,14,14,14,1
15,15,1,1,,15,15,15,1
16,0,0,0,,0,0,0,0
17,1,0,0,,1,1,1,0
18,2,0,0,,2,2,2,0
19,3,0,0,,3,3,3,0
20,4,0,0,,4,4,4,0
21,5,0,0,,5,5,5,0
22,6,0,0,,6,6,6,0
23,7,0,0,,7,7,7,0
24,8,1,1,,8,8,8,1
25,9,1,1,,9,9,9,1
26,10,1,1,,10,10,10,1
27,11,1,1,,11,11,11,1
28,12,1,1,,12,12,12,1
29,13,1,1,,13,13,13,1
30,14,1,1,,14,14,14,1
31,15,1,1,,15,15,15,1
[root@srvr3320 ~]#


The first column lists the logical CPU number of a CPU as used by the Linux kernel. The second column lists the physical core number; use this information for vCPU pinning.
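
If you prefer to read the sibling pairs directly, the kernel also exposes them in sysfs. The following sketch prints each logical CPU together with its hyper-thread sibling; the output shown is truncated and corresponds to the example host above, where logical CPUs 0 and 16 share physical core 0.

Code Block
# for cpu in /sys/devices/system/cpu/cpu[0-9]*; do echo "$(basename $cpu): $(cat $cpu/topology/thread_siblings_list)"; done | sort -V | head -4
cpu0: 0,16
cpu1: 1,17
cpu2: 2,18
cpu3: 3,19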

Ensure Persistent CPU Pinning

CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.

Info
titleNote:
  • Ensure that no two VM instances are allocated the same physical cores on the host system.
  • Ensure that all the VMs hosted on the physical server are pinned.
  • To create vCPU to hyper-thread pinning, pin consecutive vCPUs to the sibling threads (the hyper-threaded logical CPUs) of the same physical core. Identify the sibling threads of each physical core from the output of the lscpu -p command on the host.
    (Hyper-threading is when a physical processor core allows its resources to be allocated as multiple logical processors. Hyper-thread pinning is when VM vCPUs are pinned to individual hyper-threaded logical CPUs.)
  • Do not include the 0th physical core of the host in pinning, because most host management/kernel threads are spawned on the 0th core by default.

Use the following steps to update the pinning information in the guest VM's KVM configuration XML file:

  1. Shut down the VM instance.

  2. Start virsh.

    Code Block
    languagenone
    virsh
    [root@kujo ~]# virsh
    Welcome to virsh, the virtualization interactive terminal.
    
    Type:   'help' for help with commands
    		'quit' to quit
    
    virsh  #



    3. Check the list of running instances:

      Code Block
      languagenone
      virsh # list --all
      
       Id   Name           State
      ----------------------------------------------------
       2    ISBC_SWE_VM    running

    4. Edit the VM instance:

      Code Block
      languagenone
      virsh # edit <KVM_instance_name>


    5. Search for the vcpu placement attribute.

      Panel
      bgColortransparent

      <domain type='kvm'>
        <name>ISBC_SWE_VM</name>
        <uuid>c31953dc-726d-4725-a405-9f446696add5</uuid>
        <memory unit='KiB'>33554432</memory>
        <currentMemory unit='KiB'>33554432</currentMemory>
        <vcpu placement='static'>4</vcpu>
        <os>
          <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>


    6. Make sure the vCPUs are pinned to the correct NUMA node CPUs.

      Ribbon recommends reserving the first core's siblings of each NUMA node for host processes (do not use them for the VM). Since the PCI device is connected to NUMA node1 (as determined in step 2.b of the NUMA pinning procedure), pin the vCPUs of the VM to CPU siblings in NUMA node1.

      1. Skip the first physical core siblings, 8 and 24, and pin the rest.

        Code Block
        <vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
        <cputune>
                   <vcpupin vcpu="0" cpuset="9"/>
                   <vcpupin vcpu="1" cpuset="25"/>
                   <vcpupin vcpu="2" cpuset="10"/>
                   <vcpupin vcpu="3" cpuset="26"/>
        </cputune>


        As the CPU Architecture Example shows, you must pin vCPUs to sibling logical CPUs (that is, the two hyper-threads coming from the same physical core). The second column in the example shows the physical core number.

        Info
        titleNote

        Because Sub-NUMA Clustering is disabled in the BIOS, each socket represents one NUMA node: socket 0 is NUMA node0 and socket 1 is NUMA node1. Make sure that all the vCPUs are pinned to the same NUMA node and do not cross the NUMA boundary.


        Tip
        titleTip

        Ensure that no two VM instances have the same physical core affinity. For example, if VM1 is assigned an affinity of 9,25,10,26, no other VM should be pinned to those cores. To assign CPU pinning to other VMs, use the other available cores on the host, leaving the first two logical cores per NUMA node (as described in Perform Host Pinning) for the host.

        Also, assign affinity to all other VM instances running on the same host; otherwise, the VMs without affinity may impact the performance of VMs that have affinity.


    7. Save and exit the XML file.

      Code Block
      :wq


    8. Start the VM instance.

      Code Block
      languagenone
      virsh # start <KVM_instance_name>


      Info
      titleNote

      If you require additional changes to the XML file (such as those described below), you can hold off on restarting until all changes are made.


    Edit VM CPU Mode

    Ribbon recommends setting the CPU mode to host-model using a virsh command on the host system.

    Use the following steps to edit the VM CPU mode:

    1. Shut down the VM instance.

    2. Start virsh.

      Code Block
      languagenone
      virsh
      [root@kujo ~]# virsh
      Welcome to virsh, the virtualization interactive terminal.
      
      Type:   'help' for help with commands
      		'quit' to quit
      
      virsh  #


    3. Check the list of running instances:

      Code Block
      languagenone
      virsh # list --all
      
      Id   Name           State
      ----------------------------------------------------
      2    ISBC_SWE_VM    running


    4. Edit the VM instance:

      Code Block
      languagenone
      virsh # edit <KVM_instance_name>


    5. Locate the <cpu mode='custom'> attribute in the default configuration.

      Panel
      bgColortransparent

      <cpu mode='custom' match='exact' check='partial'>
        <model fallback='allow'>SandyBridge</model>
        <vendor>Intel</vendor>
        <feature policy='require' name='pbe'/>
        <feature policy='require' name='tm2'/>
        <feature policy='require' name='est'/>
        <feature policy='require' name='vmx'/>
        <feature policy='require' name='osxsave'/>
        <feature policy='require' name='smx'/>
        <feature policy='require' name='ss'/>
        <feature policy='require' name='ds'/>
        <feature policy='require' name='vme'/>
        <feature policy='require' name='dtes64'/>
        <feature policy='require' name='ht'/>
        <feature policy='require' name='dca'/>
        <feature policy='require' name='pcid'/>
        <feature policy='require' name='tm'/>
        <feature policy='require' name='pdcm'/>
        <feature policy='require' name='pdpe1gb'/>
        <feature policy='require' name='ds_cpl'/>
        <feature policy='require' name='xtpr'/>
        <feature policy='require' name='acpi'/>
        <feature policy='require' name='monitor'/>
      </cpu>


    6. Replace the entire CPU mode content shown above with the following content, which contains the proper CPU topology of the VM. To identify the proper topology for your VM instance, use sockets=1 (as the VM has a single NUMA node), threads=2 (since the VM supports hyper-threading), and cores=<number of vCPUs for the VM>/2.

      Panel
      bgColortransparent

      <cpu mode='host-model'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>


      Info
      titleNote

      Ensure the topology details entered above exactly match the topology set while creating the VM instance (the number of cores equals the number of vCPUs allocated to the VM divided by 2).

      For example, if the VM instance topology is set to 1 socket, 2 cores, and 2 threads, enter the identical details in this XML file.
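
      Once the VM is running, a quick way to confirm that the guest sees the intended topology is to run lscpu inside the guest (a hedged check; for the sockets=1, cores=2, threads=2 example above, the relevant lines should read as follows):

      Code Block
      # lscpu | grep -E 'Thread\(s\)|Core\(s\)|Socket\(s\)'
      Thread(s) per core:    2
      Core(s) per socket:    2
      Socket(s):             1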


    7. Save and exit the XML file.

      Code Block
      :wq


    8. Start the VM instance.

      Code Block
      languagenone
      virsh # start <KVM_instance_name>


      Info
      titleNote

      If you require additional changes to the XML file (such as those described below), you can hold off on restarting until all changes are made.


    Increase the Transmit Queue Length for virt-io Interfaces

    This section is applicable only for virt-io based interfaces. 

    Ribbon recommends increasing the Transmit Queue Length of host tap interfaces to 4096 for better performance. By default, the Transmit Queue Length is set to 500.
    (Tap interfaces are the logical netdevices that libvirt creates on the host server for the guest VM.)

    To increase the Transmit Queue Length to 4096:

    1. Start virsh:

      Code Block
      languagenone
      virsh
      [root@kujo ~]# virsh
      Welcome to virsh, the virtualization interactive terminal.
      
      Type:   'help' for help with commands
      		'quit' to quit
      
      virsh  #


    2. Identify the available interfaces.

      Code Block
      languagenone
      domiflist <VM_instance_name>


      The list of active interfaces displays.

      Panel
      bgColortransparent

      virsh # domiflist ISBC_SWE_VM

      Interface  Type     Source    Model     MAC
      -------------------------------------------------------
      macvtap4   direct   eno1      virtio    52:54:00:e5:e8:9f
      macvtap5   direct   eno1      virtio    52:54:00:2b:43:9b
      macvtap6   direct   ens3fl    virtio    52:54:00:aa:89:38
      macvtap7   direct   ens3f0    virtio    52:54:00:b6:60:76


      virsh #


    3. Increase the Transmit Queue Lengths for the tap interfaces.

      Code Block
      languagenone
      ifconfig <interface_name> txqueuelen <length>


      The interface_name is the name of the interface you want to change, and length is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096.
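
      If the host uses the iproute2 tools rather than net-tools (the ifconfig command), an equivalent pair of commands is shown below; the interface name is taken from the example above.

      Code Block
      # ip link set dev macvtap4 txqueuelen 4096
      # ip link show dev macvtap4 | grep -o 'qlen [0-9]*'
      qlen 4096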

    4. Verify the value of the interface length.

      Code Block
      languagenone
      ifconfig <interface_name>


      Example output:

      Panel
      bgColortransparent

      [root@kujo ~]# ifconfig macvtap4 txqueuelen 4096
      [root@kujo ~]# ifconfig macvtap5 txqueuelen 4096
      [root@kujo ~]# ifconfig macvtap6 txqueuelen 4096
      [root@kujo ~]# ifconfig macvtap7 txqueuelen 4096
      [root@kujo ~]# ifconfig macvtap4
      macvtap4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
              inet6 fe80::5054:ff:fee5:e89f prefixlen 64 scopeid 0x20<link>
              ether 52:54:00:e5:e8:9f txqueuelen 4096 (Ethernet)
              RX packets 2547441 bytes 177005232 (168.8 MiB)
              RX errors 260 dropped 260 overruns 0 frame 0
              TX packets 50573 bytes 17987512 (17.1 MiB)
              TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    5. To make this setting persistent across the reboot, do the following:

      1. Modify or create the 60-tap.rules file to add the following KERNEL rule, then reload the udev rules: KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 4096"

        Code Block
        # vim /etc/udev/rules.d/60-tap.rules
        KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 4096"
        # udevadm control --reload-rules


      2. Apply the rules to already created interfaces.

        Code Block
        # udevadm trigger --attr-match=subsystem=net


      3. Reboot the host.

        Code Block
        reboot


        Info
        titleNote

        If you require additional changes, you can hold off on rebooting until all changes are made.


    Stop Kernel Same-page Merging (KSM)

    Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. If one of the merged copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same operating system level. However, the scanning process adds overhead that may cause applications to run slower, which is not desirable.

    To turn off KSM in the host:

    1. Deactivate KSM by stopping the ksmtuned and ksm services, as shown below. This does not persist across reboots.

      Code Block
      # systemctl stop ksm
      # systemctl stop ksmtuned


    2. Disable KSM persistently as shown below:

      Code Block
      # systemctl disable ksm
      # systemctl disable ksmtuned
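
      Optionally, confirm that KSM is inactive. When the ksm service is stopped, the kernel interface typically reports 0 (not running):

      Code Block
      # cat /sys/kernel/mm/ksm/run
      0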


    Perform Host Pinning

    To avoid performance impact on VMs due to host-level Linux services, host pinning isolates physical cores where a guest VM is hosted from physical cores where the Linux host processes/services run. 

    Ribbon recommends leaving the first physical core of each CPU socket, along with its siblings, for the host processes.

    In this example, core 0 (logical CPUs 0 and 16) and core 8 (logical CPUs 8 and 24) each represent the first core in a CPU socket and are reserved for Linux host processes.

    Info
    titleNote

    The CPUAffinity option in /etc/systemd/system.conf sets the affinity of systemd by default, as well as of everything it launches, unless a service's .service file overrides the CPUAffinity setting with its own value.


    1. Identify the first core siblings of each socket using lscpu, as shown below, or any other equivalent command; these are the CPUs you will list in the CPUAffinity option in /etc/systemd/system.conf.
      As shown below, 0 and 16 are the first core's siblings on NUMA node0, and 8 and 24 are the first core's siblings on NUMA node1.

      Panel
      bgColortransparent

      [root@srvr3320 ~]# lscpu
      Architecture: x86_64
      CPU op-mode(s): 32-bit, 64-bit
      Byte Order: Little Endian
      CPU(s): 32
      On-line CPU(s) list: 0-31
      Thread(s) per core: 2
      Core(s) per socket: 8
      Socket(s): 2
      NUMA node(s): 2
      Vendor ID: GenuineIntel
      CPU family: 6
      Model: 45
      Model name: Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
      Stepping: 7
      CPU MHz: 1782.128
      CPU max MHz: 2100.0000
      CPU min MHz: 1200.0000
      BogoMIPS: 4190.19
      Virtualization: VT-x
      L1d cache: 32K
      L1i cache: 32K
      L2 cache: 256K
      L3 cache: 20480K
      NUMA node0 CPU(s): 0-7,16-23
      NUMA node1 CPU(s): 8-15,24-31


    2. To dedicate the physical CPUs 0 and 8 for host processing, specify CPUAffinity as 0 8 16 24 in the file /etc/systemd/system.conf.

      Code Block
      CPUAffinity=0 8 16 24


    3. Reboot the system.

      Code Block
      reboot
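
      After the reboot, you can confirm that systemd (PID 1) picked up the expected affinity; the output below assumes the CPUAffinity value configured above:

      Code Block
      # taskset -cp 1
      pid 1's current affinity list: 0,8,16,24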


    Using <emulatorpin> Tag

    The <emulatorpin> tag specifies to which host physical CPUs the emulator (a subset of a domain, not including vCPUs) is pinned. The <emulatorpin> tag provides a method of setting a precise affinity to emulator thread processes. As a result, vhost threads run on the same subset of physical CPUs and memory, thus benefit from cache locality. 

    In the above example, the VM vCPUs are pinned to core siblings 9,25,10,26 on NUMA node1, and CPUs 8 and 24 on NUMA node1 are reserved for host-level services, so you can pin the emulator thread to any free core siblings on the same NUMA node, such as 11 and 27, as shown below.

    Code Block
    titleExample
    <cputune>
            <emulatorpin cpuset="11,27"/>
    </cputune>


    The <emulatorpin> tag is required in order to isolate virtio network traffic processing to different cores than the VM vCPUs. This greatly reduces the CPU steal percentage seen inside the VMs.

    Info
    titleNote

    Ribbon recommends pinning the emulatorpin cpuset to host CPU siblings on the same NUMA node as the VM memory. If no CPUs are left on that NUMA node, you can also pin it to the other NUMA node.
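
    You can check the emulator thread placement at runtime with virsh, as sketched below; the domain name follows the earlier example, and the exact output format may vary between libvirt versions.

    Code Block
    virsh # emulatorpin ISBC_SWE_VM
    emulator: CPU Affinity
    ----------------------------------
           *: 11,27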


    Back Up VMs with 1G hugepages

    Ribbon recommends backing the VMs with 1G hugepages to boost performance. Configure hugepages in the host at boot time to minimize memory fragmentation. If the host OS does not support the recommended 1G hugepage size, configure hugepages of size 2M in place of 1G.

    The number of hugepages is determined based on the total memory available on the host. Ribbon recommends configuring 80-90% of total memory as hugepage memory and leaving the rest as normal Linux memory.

    1. Configure the host server hugepage size for 1G hugepages by appending the hugepage parameters to the kernel command-line options (GRUB_CMDLINE_LINUX) in /etc/default/grub. In the example below, the host has a total of 256G of memory, of which 200G is configured as hugepages.

      Code Block
      GRUB_TIMEOUT=5
      
      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
      
      GRUB_DEFAULT=saved
      
      GRUB_DISABLE_SUBMENU=true
      
      GRUB_TERMINAL_OUTPUT="console"
      
      GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=200 rhgb quiet"
      
      GRUB_DISABLE_RECOVERY="true"


    2. Regenerate the GRUB2 configuration as shown below: 

      1. If your system uses BIOS firmware, issue the command:

        Code Block
        # grub2-mkconfig -o /boot/grub2/grub.cfg


      2. If your system uses UEFI firmware, issue the command: 

        Code Block
        # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg


        Tip
        titleTip

        A simple method to find out if you are running UEFI or BIOS is to check for the presence of the folder /sys/firmware/efi

        Enter the ls command:

          # ls -l /sys/firmware/efi

        If you get the error "ls: cannot access /sys/firmware/efi: No such file or directory", your system is using BIOS firmware. If the folder is present, your system is using UEFI firmware.


    3. Add the following lines in your instance XML file using virsh edit <instanceName> to allow the hypervisor to back the VM with hugepage memory.

      Info
      titleNote

      Make sure that the PCI device (SR-IOV), vCPUs, and VM memory come from the same NUMA node. For virtual pkt interfaces, ensure that the vCPUs and memory come from the same NUMA node.


      Code Block
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
          <hugepages>
      	<page size='1048576' unit='KiB' nodeset='1'/>
          </hugepages>
      </memoryBacking>


      Tip

      This example pins the VM memory on NUMA node1. To host a second VM on the other NUMA node, use the appropriate NUMA node value in nodeset='<NUMA node>'.


    4. Restart the host.

      Code Block
      reboot


      Info
      titleNote

      If you require additional changes, you can hold off on rebooting until all changes are made.


    5. Obtain the PID of the VM:

      Code Block
      ps -eaf | grep qemu | grep -i <vm_name>


    6. Verify VM memory is received from a single NUMA node:

      Code Block
      numastat -p  <vmpid>
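
    You can also confirm the overall hugepage allocation on the host. The lines below are an abbreviated, illustrative extract for the 1G page size and 200 hugepages configured above (the free count reflects the 32 GiB consumed by the example VM):

    Code Block
    # grep -i huge /proc/meminfo
    HugePages_Total:     200
    HugePages_Free:      168
    Hugepagesize:    1048576 kB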


    Disable Flow Control

    Perform the following steps to disable flow control.

    Info
    titleNote

    This setting is optional and depends on NIC capability; not all NICs allow you to modify the flow control parameters. If the NICs support it, Ribbon recommends disabling flow control to avoid head-of-line blocking issues.

    To disable flow control:

    1. Log in to the host system as the root user.

    2. Disable flow control for interfaces attached to the SWe VM.

      Tip
      titleTip

      Use the <interface name> from the actual configuration.


      Code Block
      ethtool -A <interface name> rx off tx off autoneg off  


      Code Block
      titleExample
      ethtool -A p4p3 rx off tx off autoneg off
      ethtool -A p4p4 rx off tx off autoneg off
      ethtool -A em3 rx off tx off autoneg off
      ethtool -A em4 rx off tx off autoneg off


    To make the setting persistent:

    The network service in CentOS/Red Hat can make the setting persistent. The script /etc/sysconfig/network-scripts/ifup-post checks for the existence of /sbin/ifup-local. If it exists, the script runs it with the interface name as a parameter (for example, /sbin/ifup-local eth0).

    Perform the following steps:

    1. Create this file using the  touch command:

      Code Block
      touch /sbin/ifup-local


    2. Make the file executable using the chmod command:

      Code Block
      chmod +x /sbin/ifup-local


    3. Set the file's SELinux context using the chcon command: 

      Code Block
      chcon --reference /sbin/ifup /sbin/ifup-local




    4. Open the file in an editor.

      Here is an example of a simple script to apply the same settings to all interfaces (except lo):

      Code Block
      #!/bin/bash
      if [ -n "$1" ]; then
          if [ "$1" != "lo" ];then
              /sbin/ethtool -A $1 rx off tx off autoneg off
          fi
      fi


    Recap of Changes in the KVM Configuration XML File

    Use the following example KVM configuration XML file to verify all of the values you changed in the preceding performance tuning steps.


    Panel
    bgColortransparent
    titleBGColor#f5f5f5
    titleExample

    <domain type='kvm' id='1'>
      <name>ISBC_SWE_VM</name>
      <uuid>6c8b18c6-f633-4847-b1a3-a4f97bd5c14a</uuid>
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <hugepages>
          <page size='1048576' unit='KiB' nodeset='1'/>
        </hugepages>
      </memoryBacking>
      <numatune>
        <memory mode='preferred' nodeset="1"/>
      </numatune>
      <vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
      <cputune>
        <vcpupin vcpu="0" cpuset="9"/>
        <vcpupin vcpu="1" cpuset="25"/>
        <vcpupin vcpu="2" cpuset="10"/>
        <vcpupin vcpu="3" cpuset="26"/>
        <emulatorpin cpuset='11,27'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-model'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
    ...
    </domain>



    Tune Interrupt Requests (IRQs)

    This section applies only to virt-io-based packet interfaces. Virt-IO networking works by sending interrupts on the host core. SBC VM performance can be impacted if frequent processing interruptions occur on any core of the VM. To avoid this, the affinity of the IRQs for a virtio-based packet interface should be different from the cores assigned to the SBC VM.

    The /proc/interrupts file lists the number of interrupts per CPU, per I/O device. IRQs have an associated "affinity" property, "smp_affinity," that defines which CPU cores are allowed to run the interrupt service routine (ISR) for that IRQ. Refer to the distribution guidelines of the host OS for the exact steps to locate and specify the IRQ affinity settings for a device.
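
    A minimal sketch of the approach is shown below; the IRQ number, interface name, and target cores are illustrative only, and the host OS documentation remains the authoritative reference (note that the irqbalance service, if running, may later rewrite these values). The idea is to find the IRQs of the NIC backing the virt-io interface in /proc/interrupts and then restrict them, via smp_affinity_list, to host cores that are not assigned to the SBC VM.

    Code Block
    # grep ens3f0 /proc/interrupts                    # IRQs of the NIC backing the virt-io interface (first column)
    # echo 12,28 > /proc/irq/45/smp_affinity_list     # example: let only host cores 12 and 28 service IRQ 45
    # cat /proc/irq/45/smp_affinity_list
    12,28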

    External Reference: https://access.redhat.com/solutions/2144921


    Span



    Validate Changes

    Validate your configuration changes using the steps below.

    1. BIOS settings: If changed, ensure BIOS settings are set properly (changes will become effective after system startup). 

    2. CPU frequency on the Host: Determine the active tuning profile:

      Code Block
      # tuned-adm active
      Current active profile: throughput-performance


    3. NUMA and CPU pinning: Validate pinning and other memory settings.

      1. Verify the NUMA node of the SR-IOV device, as described in the section "Perform NUMA Pinning for the VM".

      2. Check to ensure all vCPU pinnings match what was previously assigned:

        Code Block
        virsh # vcpupin 2
        VCPU: CPU Affinity
        --------------------------
        0: 9
        1: 25
        2: 10
        3: 26


    4. TXqueuelen value: Verify the TXqueuelen value matches the example output in step 4 of "Increase the Transmit Queue Length for virt-io Interfaces".

    5. KSM settings: Validate that KSM is disabled:

      Code Block
      # systemctl list-unit-files | grep disabled | grep ksm
      ksm.service                                   disabled
      ksmtuned.service                              disabled


    6. Host pinning: Check the CPUAffinity set on the host to ensure the CPU numbers match what you assigned earlier.

      Code Block
      # cat /etc/systemd/system.conf | grep CPUAffinity
      CPUAffinity=0 8 16 24


    7. Flow control:

      1. Check the physical interfaces on the host to ensure that flow control is disabled.

        Code Block
        # ethtool -a <interface name>
        Pause parameters for <interface name>:
        Autonegotiate:  off
        RX:             off
        TX:             off


      2. If flow control is not disabled, go back to Disable Flow Control to perform the steps.

    8. Overall VM settings: Verify all other VM instance settings remain intact after the final reboot.

      Code Block
      # virsh
      virsh # edit <instance name>
    Panel
    bgColortransparent
    titleExample

    <domain type='kvm' id='1'>
      <name>ISBC_SWE_VM</name>
      <uuid>6c8b18c6-f633-4847-b1a3-a4f97bd5c14a</uuid>
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <hugepages>
          <page size='1048576' unit='KiB' nodeset='1'/>
        </hugepages>
      </memoryBacking>
      <numatune>
        <memory mode='preferred' nodeset="1"/>
      </numatune>
      <vcpu placement='static' cpuset='9,25,10,26'>4</vcpu>
      <cputune>
        <vcpupin vcpu="0" cpuset="9"/>
        <vcpupin vcpu="1" cpuset="25"/>
        <vcpupin vcpu="2" cpuset="10"/>
        <vcpupin vcpu="3" cpuset="26"/>
        <emulatorpin cpuset='11,27'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-model'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
    ...
    </domain>


    Include Page
    _OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations
    _OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations

    Pagebreak