
There are several VM operating parameters that can be set to improve system throughput for one or more VMs installed on a KVM host. Some VM operating parameters are set on the KVM host and can be modified while the VM instance is running, while others are set on the VM and can only be configured when the VM instance is shut down.

The following sections contain VM performance tuning recommendations to improve system performance. These recommendations are general guidelines and are not exhaustive; refer to the documentation provided by your Linux OS and KVM host vendors. For example, Red Hat provides extensive documentation on using virt-manager and optimizing VM performance; refer to the Red Hat Virtualization Tuning and Optimization Guide.

Note:

For performance tuning procedures on a VM instance, you must log on to the host system as the root user.

General Recommendations

The following general recommendations apply to all platforms where SBC SWe is deployed:

  • The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
  • For best performance, deploy only a single instance on a NUMA node. Performance degrades if you host more than one instance on a NUMA node or if a single instance spans multiple NUMA nodes.
  • Make sure that the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. On a dual-NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA node and the associated NICs of each instance connected to their respective NUMA nodes.
  • To optimize performance, configure memory equally on both NUMA nodes. For example, if a dual-NUMA-node server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node.
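To check which CPUs and how much memory each NUMA node on the host has, one quick check (assuming the numactl package is installed) is:

numactl --hardware

The output lists the CPUs and the total and free memory per NUMA node.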

Recommended BIOS Settings

Ribbon recommends applying the BIOS settings in the following table on the host servers where SBC SWe VMs are deployed for optimum performance:

BIOS Parameter                            Setting     Comments
CPU power management                      Balanced    Ribbon recommends Maximum Performance
Intel Hyper-Threading                     Enabled
Intel Turbo Boost                         Enabled
Intel VT-x (Virtualization Technology)    Enabled     For hardware virtualization

 

All server BIOS settings are different, but in general the following guidelines apply:

  • Set power profiles to maximum performance
  • Set thermal configurations to optimal cooling
  • Disable HW prefetcher

 

CPU Frequency Setting on the Host

Check the current configuration of the CPU frequency setting using the following command on the host system.

# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

The CPU frequency setting must be set to performance to improve VNF performance. Use the following command on the host system:

# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Note

You must ensure that the governor setting persists across reboots.
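One way to keep the governor setting persistent, assuming a systemd-based host (the unit name below is illustrative), is a one-shot service that reapplies the setting at boot:

# cat /etc/systemd/system/cpufreq-performance.service
[Unit]
Description=Set CPU frequency scaling governor to performance

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target

# systemctl enable cpufreq-performance.service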

 

Processor and CPU Details

To determine the host system's processor and CPU details, perform the following steps:

  1. Execute the following command to view how the host's logical CPUs map to physical cores:

    lscpu -p

    The first column of the output lists the logical CPU number as used by the Linux kernel. The second column lists the core ID that the logical CPU belongs to; use this mapping for vCPU pinning.
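    For reference, the output looks similar to the following (illustrative values; hyper-thread siblings share the same Core ID):

    # CPU,Core,Socket,Node,,L1d,L1i,L2,L3
    0,0,0,0,,0,0,0,0
    1,1,1,1,,1,1,1,1
    2,2,0,0,,2,2,2,2
    3,3,1,1,,3,3,3,3
    ...
    36,0,0,0,,0,0,0,0
    37,1,1,1,,1,1,1,1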

Persistent CPU Pinning

CPU pinning ensures that a VM only gets CPU time from a specific CPU or set of CPUs. Pinning maps each virtual CPU of the guest VM to a core ID on the host system. CPU pinning information applied at run time is lost every time the VM instance is shut down or restarted. To avoid re-entering the pinning information, update the KVM configuration XML file on the host system.

Note:
  • Ensure that no two VM instances are allocated the same physical cores on the host system.
  • Ensure that all the VMs hosted on the physical server are pinned.
  • To create vCPU to hyper-thread pinning, pin consecutive vCPUs to sibling threads (logical cores) of the same physical core. The logical core/sibling threads can be identified from the output returned by the command lscpu on the host.
  • Do not include the 0th physical core of the host in pinning. This is recommended because most host management/kernel threads are spawned on the 0th core by default.

 To update the pinning information in the KVM configuration XML file:

  1. Shut down the VM instance.
  2. Enter the following command.

    virsh

    The virsh prompt is displayed.

  3. Enter the following command to edit the VM instance:

    virsh # edit <KVM_instance_name>
  4. Search for the vcpu placement attribute.


  5. Enter the CPU pinning information for each vCPU, as shown in the example after this procedure.

    Tip

    Ensure that no two VM instances have the same physical core affinity. For example, if VM1 has an affinity of 0, 1, 2, 3, then no other VM should be pinned to 0, 1, 2, 3, 8, 9, 10, or 11, because these logical CPUs belong to physical cores assigned to VM1. Also, all other VM instances running on the same host must be assigned an affinity; otherwise, the VMs without affinity might impact the performance of the VMs that have affinity.

  6. Enter the following command to save and exit the XML file.

    :wq
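A minimal sketch of the resulting pinning XML, assuming a 4-vCPU guest pinned to two physical cores on NUMA node 0 whose hyper-thread sibling pairs are 2/38 and 4/40 (illustrative core IDs; confirm the actual sibling pairs with lscpu -p on your host):

<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='38'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='40'/>
</cputune>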

CPU Mode Configuration

Even if the Copy host CPU configuration option was selected while creating a VM instance, the host configuration may not be copied to the VM instance. To resolve this issue, edit the CPU mode to host-passthrough using a virsh command on the host system.

To edit the VM CPU mode:

  1. Shut down the VM instance.
  2. Enter the following command.

    virsh

    The virsh prompt is displayed.

  3. Enter the following command to edit the VM instance:

    edit <KVM_instance_name>
  4. Search for the cpu mode attribute.


  5. Replace the cpu mode attribute with host-passthrough, as shown in the example after this procedure.

    Tip

    The topology details entered must be the same as the topology specified while creating the VM instance.

    For example, if the topology was set to 1 socket, 4 cores and 1 thread the same must be entered in this XML file.

  6. Enter the following command to save and exit the XML file.

    :wq
  7. Enter the following command to start the VM instance.

    start <KVM_instance_name>
  8. Enter the following command to verify the host CPU configuration on the VM instance:

    cat /proc/cpuinfo

    The output should show the host's CPU model and flags, confirming that the host CPU configuration was passed through to the VM.
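A sketch of the edited cpu element, assuming a guest topology of 1 socket, 4 cores, and 1 thread (adjust the values to match the topology used when the instance was created):

<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>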

Increasing the Transmit Queue Length

To increase the transmit queue length to 4096:

Note:

By default, the transmit queue length is set to 500.

  1. Execute the following command to open the virsh prompt:

    virsh

    The virsh prompt is displayed.

  2. Execute the following command to list the interfaces attached to the VM instance.

    domiflist <VM_instance_name>

    The list of active interfaces is displayed.


  3. Execute the following command to increase the transmit queue length of the tap interface.

    ifconfig <interface_name> txqueuelen <length>

    where interface_name is the name of the interface you want to change, and length is the new queue length. For example, ifconfig macvtap4 txqueuelen 4096.

  4. Execute the following command to verify the transmit queue length of the interface.

    ifconfig <interface_name>

    The output displays the interface details, including the updated transmit queue length (txqueuelen) value.
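On hosts where ifconfig is unavailable, the equivalent iproute2 commands are (a sketch, reusing the illustrative interface name from the example above):

ip link set dev macvtap4 txqueuelen 4096
ip link show dev macvtap4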

Kernel Same-page Merging (KSM) Settings

Apply the following settings on every host where SBC SWe VMs are installed.

Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. If one of the merged copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of the operating system. However, the scanning process adds overhead that may slow down applications, which is not desirable. The SBC SWe requires that KSM be turned off.

The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.

# echo 0 >/sys/kernel/mm/ksm/run
# echo "KSM_ENABLED=0" > /etc/default/qemu-kvm

Once KSM is turned off, it is important to verify that there is still sufficient memory on the hypervisor. When pages are no longer merged, memory usage may increase and lead to swapping, which negatively impacts performance.
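To confirm that KSM is disabled and that the hypervisor still has adequate free memory, a quick check is (a value of 0 indicates KSM is off):

# cat /sys/kernel/mm/ksm/run
0
# free -h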

Host Pinning

Host pinning isolates the physical cores where a guest VM is hosted from the physical cores where Linux host processes/services run, so that host-level Linux services do not impact VM performance. In this example, physical core 0 (logical CPUs 0 and 36) and physical core 1 (logical CPUs 1 and 37) are reserved for Linux host processes.

The CPUAffinity option in /etc/systemd/system.conf sets the CPU affinity for systemd and for everything it launches, unless a service's .service file overrides CPUAffinity with its own value. Configure the CPUAffinity option in /etc/systemd/system.conf.

Execute the following command:

lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2699.984
BogoMIPS:              4604.99
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71

To dedicate physical cores 0 and 1 to host processing, set CPUAffinity to 0 1 36 37 in /etc/systemd/system.conf and restart the system.

CPUAffinity=0 1 36 37
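After the reboot, one way to confirm that the affinity took effect is to check the affinity of systemd (PID 1); the output shown is illustrative:

taskset -cp 1
pid 1's current affinity list: 0,1,36,37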

Backing VMs with Hugepages

  1. Create a mount point for the HugeTLB filesystem on the host.

    mkdir -p /hugepages
  2. Add the following line to the /etc/fstab file.

    hugetlbfs    /hugepages    hugetlbfs    defaults    0 0
  3. Configure the number of 2M hugepages equal to the vRAM requirement for hosting a VM:

    cat /etc/sysctl.conf
    # System default settings live in /usr/lib/sysctl.d/00-system.conf.
    # To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
    #
    # For more information, see sysctl.conf(5) and sysctl.d(5).
    # Assuming a 24G VM:
    vm.nr_hugepages = 25000
    vm.hugetlb_shm_group = 36
  4. Add lines in your instance XML file using virsh edit <instanceName>:

    <domain type='kvm' id='3'>
      <name>RENGALIVM01</name>
      <uuid>f1bae5a2-d26e-4fc0-b472-3638743def9a</uuid>
      <memory unit='KiB'>25165824</memory>
      <currentMemory unit='KiB'>25165824</currentMemory>
      <memoryBacking>
       <hugepages>
          <page size='2048' unit='KiB' nodeset='0'/>  
        </hugepages>
      </memoryBacking>
    Tip

    The previous example pins the VM's memory to NUMA node 0. For hosting a second VM on NUMA node 1, use nodeset='1'.

  5. Restart the host.

  6. To verify, get the PID for the VM and execute the following command to check that VM memory is received from a single NUMA node:

    numastat -p  <vmpid>
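To confirm the hugepage pool itself after the restart, check /proc/meminfo; HugePages_Total should reflect the vm.nr_hugepages setting:

grep Huge /proc/meminfo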

Disable Flow Control

  1. Log into the system as the root user.
  2. Execute the following command to disable flow control for interfaces attached to the SWe VM.

    ethtool -A <interface name> rx off tx off autoneg off  
    Tip

    Use the <interface name> from the actual configuration.

    Example:

    ethtool -A p4p3 rx off tx off autoneg off
    ethtool -A p4p4 rx off tx off autoneg off
    ethtool -A em3 rx off tx off autoneg off
    ethtool -A em4 rx off tx off autoneg off

     

    Note:

    Refer to the RHEL site for information on how to make NIC ethtool settings persistent (apply automatically at boot).
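    To verify that flow control is disabled on an interface, query the pause parameters (reusing an interface name from the example above):

    ethtool -a p4p3

    The RX, TX, and autonegotiation pause parameters should all report off.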

     

     
