
Panel

In this section:

Table of Contents
maxLevel4



There are VM operating parameters you can set to improve system throughput for one or more VMs installed on a KVM host. Some VM operating parameters are set on the KVM host and can be modified at any time, even while the VM instance is running, while others are set on the VM and can only be configured when the VM instance is shut down.

The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not exhaustive; refer to the documentation provided by your Linux OS and KVM host vendors. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance. Refer to the Red Hat Virtualization Tuning and Optimization Guide for details.

Info
titleNote:

For performance tuning procedures on a VM instance, you must log on to the host system as the root user.


Excerpt

General Recommendations

The following general recommendations apply to any platform where SBC SWe is deployed:

  • Ensure the number of vCPUs in an instance is always an even number (4, 6, 8, and so on), as hyper-threaded vCPUs are used.
  • For best performance, make sure a single instance is confined to a single NUMA node. Performance degradation occurs if an instance spans multiple NUMA nodes.
  • Make sure that the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. In the case of a dual NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA node and the associated NICs of each instance connected to their respective NUMA nodes.
  • To optimize performance, configure memory equally on both NUMA nodes. For example, if a dual NUMA node server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node. Doing so reduces remote node memory access, which in turn helps improve performance.
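
One way to confirm that memory is balanced across NUMA nodes is to inspect the host topology with numactl. This is an illustrative check, assuming the numactl package is installed on the host; it is not part of the formal procedure.

Code Block
# numactl --hardware | grep size

Each node should report approximately the same memory size.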

Recommended BIOS Settings

Spacevars
0company
 recommends applying the BIOS settings in the following table on the host for optimum performance:

Caption
0Table
1Recommended BIOS Settings


BIOS Parameter | Setting | Comments
CPU power management/Power Regulator | Maximum performance or Static High Performance |
Intel Hyper-Threading | Enabled |
Intel Turbo Boost | Enabled |
Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization

All server BIOS settings are different, but in general, the following guidelines apply:

  • Set power profiles to maximum performance.
  • Thermal Configuration: Optimal Cooling or Maximum Cooling
  • Minimum Processor Idle Power Core C-state: No C-states
  • Minimum Processor Idle Power Package C-state: No C-states
  • Energy Performance BIAS: Max Performance
  • Sub-NUMA Clustering: Disabled
  • HW Prefetcher: Disabled
  • SRIOV: Enabled
  • Intel® VT-d: Enabled




    Info
    titleNote

    For GPU transcoding, ensure that all power supplies are plugged into the server.

    CPU Frequency Setting on the Host

    Check the current configuration of the CPU frequency setting using the following command on the host system.

    Code Block
    # cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    The CPU frequency setting must be set to performance to improve VNF performance. Use the following command on the host system:

    Code Block
    # echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    The CPU frequency setting determines the operating clock speed of the processor and, in turn, the system performance. Red Hat offers a set of built-in tuning profiles and a tool called tuned-adm that helps configure the required tuning profile.

    Ribbon recommends applying the throughput-performance tuning profile, which makes the processor operate at maximum frequency.

    • Find the active tuning profile:

      Code Block
      # tuned-adm active
      Current active profile: powersave

    • Apply the throughput-performance tuning profile:

      Code Block
      # tuned-adm profile throughput-performance

    This configuration is persistent across reboots and takes effect immediately. There is no need to reboot the host after configuring this tuning profile.

    Info
    titleNote
    You must ensure that the settings persist across reboots.
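
    One way to confirm that the profile is re-applied after a reboot is to make sure the tuned service starts at boot and then re-check the active profile. This is an illustrative check, assuming a systemd-based host with the tuned package installed; it is not part of the formal procedure.

    Code Block
    # systemctl enable --now tuned
    # tuned-adm active
    Current active profile: throughput-performance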

    Processor and CPU Details

    To determine the host system's processor and CPU details, perform the following steps:

    1. Execute the following command to determine how many vCPUs are assigned to host CPUs:

      Code Block
      lscpu -p

      The command provides the following output:

      Caption
      0Figure
      1CPU Architecture

      The first column lists the logical CPU number of a CPU as used by the Linux kernel. The second column lists the logical core number; use this information for vCPU pinning.
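
      As an additional illustrative check (not part of the original procedure), the sibling hyper-threads of each physical core can also be read directly from sysfs. On the example host shown later in this section, logical CPU 0 and logical CPU 36 are siblings of the same physical core:

      Code Block
      # cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
      0,36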

    Persistent CPU Pinning

    CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning is performed on each logical CPU of the guest VM against each core ID in the host system. The CPU pinning information is lost every time the VM instance is shut down or restarted. To avoid entering the pinning information again, update the KVM configuration XML file on the host system.

    Info
    titleNote:
    • Ensure that no two VM instances are allocated the same physical cores on the host system.
    • Ensure that all the VMs hosted on the physical server are pinned.
    • To create vCPU to hyper-thread pinning, pin consecutive vCPUs to sibling threads (logical cores) of the same physical core. Identify the logical core/sibling threads from the output returned by the lscpu command on the host.
    • Do not include the 0th physical core of the host in pinning. This is recommended because most host management/kernel threads are spawned on the 0th core by default.

    Use the following steps to update the pinning information in the KVM configuration XML file:

    1. Shut down the VM instance.
    2. Enter the following command.

      Code Block
      languagenone
      virsh

      The command provides the following output:

      Caption
      0Figure
      1virsh Prompt


    3. Enter the following command to edit the VM instance:

      Code Block
      languagenone
      virsh # edit <KVM_instance_name>


    4. Search for the vcpu placement attribute.

      Caption
      0Figure
      1vCPU Placement Attribute


    5. Enter CPU pinning information as shown below:

      Caption
      0Figure
      1CPU Pinning Information
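
      The fragment below is a minimal sketch of what the pinning entries can look like, assuming a 4-vCPU guest pinned to two physical cores of the example host described later in this section (where logical CPUs 2 and 38 are siblings of one physical core, and 4 and 40 of another). Adjust the vcpu count and cpuset values to your own host topology.

      Code Block
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='38'/>
        <vcpupin vcpu='2' cpuset='4'/>
        <vcpupin vcpu='3' cpuset='40'/>
      </cputune>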


      Tip
      titleTip

      Ensure that no two VM instances have the same physical core affinity. For example, if VM1 has affinity of 0,1,2,3 assigned, then ensure no VM is pinned to 0,1,2,3,8,9,10, or 11, as these CPUs belong to the physical cores assigned to VM1. Also, assign affinity to all other VM instances running on the same host; otherwise the VMs without affinity may impact the performance of the VMs that have affinity.


    6. Enter the following command to save and exit the XML file.

      Code Block
      :wq


    CPU Mode Configuration

    Spacevars
    0company
     recommends setting the CPU mode to host-passthrough. Even if the Copy host CPU configuration option was selected while creating a VM instance, the host configuration may not be copied to the VM instance. To resolve this issue, edit the CPU mode using a virsh command on the host system.

    Use the following steps to edit the VM CPU mode:

    1. Shut down the VM instance.
    2. Enter the following command.

      Code Block
      virsh

      The following output displays:

      Caption
      0Figure
      1virsh Prompt


    3. Enter the following command to edit the VM instance:

      Code Block
      languagenone
      edit <KVM_instance_name>


    4. Search for the cpu mode attribute.

      Caption
      0Figure
      1cpu mode


    5. Edit the cpu mode attribute with the following:

      Caption
      0Figure
      1Editing CPU Mode
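
      The fragment below is a minimal sketch of the edited attribute, assuming a guest that was created with 1 socket, 2 cores, and 2 threads; match the topology values to your own instance.

      Code Block
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>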



      Tip
      titleTip

      Ensure the topology details entered are identical to the topology details that were set while creating the VM instance. For example, if the topology was set to 1 socket, 2 cores, and 2 threads, enter the same details in this XML file.


    6. Enter the following command to save and exit the XML file.

      Code Block
      :wq


    7. Enter the following command to start the VM instance.

      Code Block
      languagenone
      start <KVM_instance_name>

      Enter the following command to verify the host CPU configuration on the VM instance:

      Code Block
      languagenone
      cat /proc/cpuinfo

      The command provides the following output.

      Caption
      0Figure
      1Verifying CPU Configuration
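
      As a quick illustrative check (not part of the original procedure), the processor model reported inside the guest should match the host processor when host-passthrough is in effect:

      Code Block
      # grep "model name" /proc/cpuinfo | uniq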


    Increasing the Transmit Queue Length for virt-io Interfaces

    This section is applicable only for virt-io based interfaces. 

    Spacevars
    0company
     recommends increasing the transmit queue length of host tap interfaces to 4096 for better performance.

    By default, the transmit queue length is set to 500. To increase the transmit queue length to 4096, use the following procedure:

    1. Execute the following command to identify the available interfaces:

      Code Block
      languagenone
      virsh

      The virsh prompt displays.

    2. Execute the following command.

      Code Block
      languagenone
      domiflist <VM_instance_name>

      The list of active interfaces displays.

      Caption
      0Figure
      1Active Interfaces List


    3. Execute the following command to increase the transmit queue lengths for the tap interfaces.

      Code Block
      languagenone
      ifconfig <interface_name> txqueuelen <length>

      The interface_name is the name of the interface you want to change, and length is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096.
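
      On hosts where ifconfig is not available, the same change can be made with the iproute2 ip tool. This is an equivalent sketch, not part of the original procedure:

      Code Block
      # ip link set dev macvtap4 txqueuelen 4096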

    4. Execute the following command to verify the value of the interface length.

      Code Block
      languagenone
      ifconfig <interface_name>

      The command provides the following output.

      Caption
      0Figure
      1Interface Information


    Kernel Same-page Merging (KSM) Settings

    Apply the following settings to all VMs installed on the host.

    Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges the pages to save memory resources. If one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of the operating system. However, the scanning process adds overhead that may cause applications to run slower, which is not desirable. The SBC SWe requires that KSM is turned off. The sample commands below use systemd services; use the syntax that corresponds to your host operating system.

    Turn off KSM in the host.

    Deactivate KSM by stopping the ksmtuned and the ksm services as shown below. This does not persist across reboots.

    Code Block
    # systemctl stop ksm
    # systemctl stop ksmtuned

    Disable KSM persistently as shown below:

    Code Block
    # systemctl disable ksm
    # systemctl disable ksmtuned

    Once KSM is turned off, it is important to verify that there is still sufficient memory on the hypervisor. When the pages are not merged, it may increase memory usage and lead to swapping that negatively impacts performance.
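
    The following illustrative check (not part of the original procedure) confirms that KSM is off and shows how much memory remains free on the hypervisor; a value of 0 in the run file means KSM is disabled:

    Code Block
    # cat /sys/kernel/mm/ksm/run
    0
    # free -h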

    Host Pinning

    To avoid performance impact on VMs due to host-level Linux services, host pinning isolates physical cores where a guest VM is hosted from physical cores where the Linux host processes/services run. 

    Spacevars
    0company
     recommends leaving one physical core per CPU socket for host processes.

    In this example, core 0 (core 0 and core 36 are its logical cores) and core 1 (core 1 and core 37 are its logical cores) are reserved for Linux host processes.

    The CPUAffinity option in /etc/systemd/system.conf sets the CPU affinity for systemd by default, as well as for everything it launches, unless a unit's .service file overrides the CPUAffinity setting with its own value. Configure the CPUAffinity option in /etc/systemd/system.conf.

    Execute the following command:

    Code Block
    lscpu
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                72
    On-line CPU(s) list:   0-71
    Thread(s) per core:    2
    Core(s) per socket:    18
    Socket(s):             2
    NUMA node(s):          2
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 79
    Model name:            Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
    Stepping:              1
    CPU MHz:               2699.984
    BogoMIPS:              4604.99
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              46080K
    NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
    NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71
    
    

    To dedicate physical cores 0 and 1 to host processing, specify CPUAffinity as 0 1 36 37 in the file /etc/systemd/system.conf, as shown below, and then restart the system.

    Code Block
    CPUAffinity=0 1 36 37
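
    After the restart, one illustrative way to confirm the setting (not part of the original procedure) is to check the affinity of PID 1, which should report the reserved cores 0, 1, 36, and 37:

    Code Block
    # taskset -cp 1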

    Back Up VMs with 1G Hugepages

    Spacevars
    0company
     recommends backing VMs with 1G hugepages for performance reasons. Configure hugepages on the host at boot time to minimize memory fragmentation. If the host OS does not support the recommended 1G hugepage size, configure hugepages of size 2M in place of 1G.

    The number of hugepages is determined by the total memory available on the host.

    Spacevars
    0company
    recommends configuring 80-90% of the total memory as hugepage memory and leaving the rest as normal Linux memory.

    1. Configure the hugepage size as 1G and the number of hugepages by appending the parameters shown below to the kernel command-line options in /etc/default/grub. In the example below, the host has a total of 256G of memory, of which 200G is configured as hugepages.

      Code Block
      GRUB_TIMEOUT=5
      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
      GRUB_DEFAULT=saved
      GRUB_DISABLE_SUBMENU=true
      GRUB_TERMINAL_OUTPUT="console"
      GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=200 rhgb quiet"
      GRUB_DISABLE_RECOVERY="true"

    2. Regenerate the GRUB2 configuration as shown below: 

      1. If your system uses BIOS firmware, execute the following command:

        Code Block
        # grub2-mkconfig -o /boot/grub2/grub.cfg


      2. On a system with UEFI firmware, execute the following command: 

        Code Block
        # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg


    3. Add lines in your instance XML file using virsh edit <instanceName>. The example below is for a 32G instance:

      Code Block
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <hugepages>
          <page size='1048576' unit='KiB' nodeset='0'/>
        </hugepages>
      </memoryBacking>


      Tip
      titleTip

      The previous example pins the VM on NUMA node 0. For hosting a second VM on NUMA node 1, use nodeset='1'.


    4. Restart the host.

    5. Obtain the PID of the VM from the following command:

      Code Block
      ps -eaf | grep qemu | grep -i <vm_name>


    6. Execute the following command to verify that VM memory is received from a single NUMA node:

      Code Block
      numastat -p <vmpid>
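
    After the host restarts, an additional illustrative check (not part of the original steps) is to confirm that the 1G hugepage pool was allocated, using /proc/meminfo:

    Code Block
    # grep Huge /proc/meminfo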


    Disable Flow Control

    Use the following steps to disable flow control:

    Info
    titleNote

    This setting is optional and depends on NIC capability. Not all NICs allow you to modify the flow control parameters. If the NICs support it,

    Spacevars
    0company
     recommends disabling flow control to avoid head-of-line blocking issues.


    1. Log in to the system as the root user.
    2. Execute the following command to disable flow control for interfaces attached to the SWe VM.

      Code Block
      ethtool -A <interface name> rx off tx off autoneg off  


      Tip
      titleTip

      Use the <interface name> from the actual configuration.

      Example:

      ethtool -A p4p3 rx off tx off autoneg off
      ethtool -A p4p4 rx off tx off autoneg off
      ethtool -A em3 rx off tx off autoneg off
      ethtool -A em4 rx off tx off autoneg off

      Info
      titleNote:
      Refer to the RHEL site for information on how to make NIC ethtool settings persistent (applied automatically at boot).
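
      Optionally, the current flow control settings of an interface can be displayed with a read-only query before and after the change. This is an illustrative check, not part of the original procedure:

      Code Block
      # ethtool -a p4p3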

    Tuning Interrupt Requests (IRQs)

    This section applies only to virt-io-based packet interfaces. Virt-IO networking works by sending interrupts on the host core. SBC VM performance can be impacted if frequent interrupt processing occurs on any core of the VM. To avoid this, the affinity of the IRQs for a virtio-based packet interface should be different from the cores assigned to the SBC VM.

    The /proc/interrupts file lists the number of interrupts per CPU, per I/O device. IRQs have an associated "affinity" property, "smp_affinity," that defines which CPU cores are allowed to execute the interrupt service routine (ISR) for that IRQ. Refer to the distribution guidelines of the host OS for the exact steps to locate and specify the IRQ affinity settings for a device.

    External Reference: https://access.redhat.com/solutions/2144921
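
    As an illustrative sketch only (IRQ names and numbers vary by host, and <irq_number> and <host_core_list> are placeholders), the virtio interrupts can be located in /proc/interrupts and their affinity inspected or changed through the per-IRQ smp_affinity_list files:

    Code Block
    # grep virtio /proc/interrupts
    # cat /proc/irq/<irq_number>/smp_affinity_list
    # echo <host_core_list> > /proc/irq/<irq_number>/smp_affinity_list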

    Include Page
    _OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations
    _OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations

    Pagebreak