
The sections below describe the best possible performance and scale for a given virtual machine resource profile:

...

 

Table: Recommended BIOS Settings

 

BIOS Parameter                         | Setting  | Comments
CPU power management                   | Balanced | Sonus recommends Maximum Performance
Intel Hyper-Threading                  | Enabled  |
Intel Turbo Boost                      | Enabled  |
SR-IOV                                 | Enabled  |
Intel VT-x (Virtualization Technology) | Enabled  | For hardware virtualization
All server BIOS settings are different, but in general the following guidelines apply:
  • Set power profiles to maximum performance
  • Set thermal configurations to Optimal cooling
  • Disable HW prefetcher


...


...

BIOS Setting Recommendations for HP DL380p Gen8 Server
BIOS Parameter        | Recommended Setting | Default Value
HP Power Profile      | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Optimal Cooling     | Optimal Cooling
HW Prefetchers        | Disabled            | Enabled

 

CPU Pinning Overview

Apply the settings below to all Nova compute hosts in the pinned host aggregate.

Table: Nova Compute Hosts

 



...

 

Applies to:      | Configuration
S-SBC            | 3.b
M-SBC            | 3.b
T-SBC            | 3.b
SBC Configurator | 3.a

From the hypervisor's perspective, a virtual machine appears as a single process that should be scheduled on the available CPUs. By design, the hypervisor can schedule it on a different processor from one clock cycle to the next. While this is acceptable in environments where the hypervisor is allowed to over-commit, it contradicts the requirements of real-time applications. Hence, Sonus requires CPU pinning to prevent applications from sharing a core.
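As an illustration (a sketch only; the instance name and core numbers are placeholders), the effect of pinning can be observed on the compute host with the libvirt tools:

Code Block
# List running guests, then show the vCPU-to-physical-CPU affinity of one.
# A pinned guest shows each vCPU bound to a single host core, while an
# unpinned guest shows every vCPU allowed on all host cores.
virsh list --all
virsh vcpupin <instance-name>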

Figure: CPU with Unpinned Applications

Figure: CPU with Pinned Applications

 

 

...

To enable CPU pinning, execute the following steps on every compute host where CPU pinning is to be enabled:

  1. To retrieve the NUMA topology for the node, execute the below command:

    Code Block
    # lscpu  | grep NUMA
    NUMA node(s):          2
    NUMA node0 CPU(s):     0-11,24-35
    NUMA node1 CPU(s):     12-23,36-47
    Note

    In this case, there are two Intel Sockets with 12 cores each; configured for Hyper-Threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, etc. (The pairs are also known as thread siblings).
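    As a quick check (a sketch; the CPU numbers shown depend on the host topology), the thread-sibling pairing can be confirmed through sysfs:

    Code Block
    # Logical CPUs sharing a physical core are listed together,
    # e.g. "0,24" for CPU 0 on the topology above
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list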

  2. Add the following code at the end of /etc/default/grub so that the system reserves hugepages for use by VMs (and not by the host operating system):

    Code Block
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256"
    Note

    For RHEL-based host OS, Red Hat recommends omitting the isolcpus reservation configuration.

    The number of hugepages depends on how many VM instances are created on the host, multiplied by the memory size of each instance. The hugepagesz value should be the maximum hugepage size supported by the kernel in use.
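    As a worked example (a sketch assuming two 128 GiB instances are planned on this host), 2 x 128 = 256 hugepages of 1 GiB each are required. After rebooting, the allocation can be verified with:

    Code Block
    # Verify the hugepage pool reserved at boot
    grep -i huge /proc/meminfo
    # Expect HugePages_Total to match the configured count (256 in this
    # example) and Hugepagesize to report 1048576 kB for 1 GiB pages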

  3. A pin set limits KVM to placing guests on a subset of the physical cores and thread siblings. Omitting some cores from the pin set ensures that there are dedicated cores for the OpenStack processes and applications. The pin set also ensures that KVM guests never use more than one thread per core, leaving the additional thread for shared KVM/OpenStack processes. This mechanism can boost the performance of non-threaded guest applications by allowing the host OS to schedule closely related host OS processes on the same core as the guest OS (e.g. virtio processes). The following example builds on the CPU and NUMA topology shown in Step 1 (above):

    • For a Hyper-Threaded host: add the CPU pin set list to vcpu_pin_set in the [DEFAULT] section of /etc/nova/nova.conf:

      Code Block
      vcpu_pin_set=2-11,14-23,26-35,38-47

      For compute nodes servicing VMs that can run on a hyper-threaded host, the CPU pin set includes all thread siblings except the cores that are carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26-11/35 on NUMA node 0, and cores/threads 14/38-23/47 on NUMA node 1.

  4. Update the boot record and reboot the compute node.
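    On a RHEL-based host this is typically done as follows (a sketch; the grub.cfg path differs on UEFI systems, e.g. /boot/efi/EFI/redhat/grub.cfg):

    Code Block
    # Regenerate the boot record so the new kernel arguments take effect
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot
    # After the reboot, confirm the hugepage arguments are active
    cat /proc/cmdline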

  5. Configure the Nova Scheduler to use NUMA Topology and Aggregate Instance Extra Specs on Nova Controller Hosts:

...

    • scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

      In addition, to support SR-IOV, enable the PciPassthroughFilter and restart the openstack-nova-scheduler service.

      Code Block
      systemctl restart openstack-nova-scheduler.service

      With CPU pinning now enabled, Nova must be configured to use it. See the section below for a method that uses a combination of host aggregates and Nova flavor keys.
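      For example (a sketch; the aggregate name and compute host name are placeholders), pinned compute hosts can be grouped into a host aggregate whose pinned=true metadata the AggregateInstanceExtraSpecsFilter can match against flavor extra specs:

      Code Block
      % nova aggregate-create pinned-cpu-hosts
      % nova aggregate-set-metadata pinned-cpu-hosts pinned=true
      % nova aggregate-add-host pinned-cpu-hosts compute-01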

...

Sonus supports either host-passthrough or host-model for non-S/M/T-SBC instances; this includes the SBC Configurator.

...
S/M/T-SBC Instances

The CPU model defines the CPU flags and the CPU architecture that are exposed from the host processor to the guest. Modify the nova.conf file located at /etc/nova/nova.conf. Sonus recommends setting CPU Mode to host-passthrough for SBC instances so that every detail of the host CPU is exposed to the SBC SWe. The host-model setting impacts how CPU L2/L3 cache information is communicated to the guest OS, since the libvirt emulated CPU does not accurately represent the L2 and L3 CPU hardware caches to the guest OS. In performance testing of the Sonus SBC, Sonus has seen significant performance degradation that goes beyond a simple reduction in capacity: signaling latency and jitter fall outside acceptable limits even at modest loads.

This setting is defined in /etc/nova/nova.conf:

[libvirt]
virt_type = kvm 

cpu_mode = host-passthrough

This change is made in /etc/nova/nova-compute.conf:

[libvirt]
virt_type = kvm
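After editing the file, restart the compute service so the new CPU mode takes effect (a sketch; the service name assumes a Red Hat-packaged OpenStack compute node, and the instance name is a placeholder):

Code Block
# Apply the libvirt CPU mode change on the compute host
systemctl restart openstack-nova-compute.service
# For a guest booted after the change, the host CPU model should be
# visible in the libvirt domain definition
virsh dumpxml <instance-name> | grep -A 2 "<cpu"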



CPU Frequency Setting in the Compute Host

Check the current configuration of the CPU frequency setting using the following command on the host system.

Code Block
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Set the CPU frequency governor to performance to improve VNF performance. Use the following command on the host system:

Code Block
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Note

Ensure that the above setting persists across reboots.
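One way to make the setting persistent (a sketch; assumes the kernel-tools/cpupower and tuned packages are available, and other mechanisms such as a custom systemd unit are equally valid):

Code Block
# Set the performance governor immediately and confirm it
cpupower frequency-set -g performance
cpupower frequency-info --policy
# On RHEL-based hosts, a performance-oriented tuned profile keeps an
# equivalent setting across reboots
tuned-adm profile throughput-performance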


Removal of CPU and Memory Over Commit

...

VNF              | CPU-Pinning                | Hyper-Threading Flavor Setting
S-SBC            | Required                   | Supported
M-SBC            | Required                   | Supported
T-SBC            | Required                   | Supported
SBC-Configurator | Supported but not required | Supported

...

VNF              | CPU-Pinning (hw:cpu_policy=dedicated) | Hyper-Threading Flavor Setting | RAM*     | Disk   | Cores / vCPUs
S-SBC            | Pinned                                | Yes                            | 128 GiB* | 100 GB | 20 / 40
M-SBC            | Pinned                                | Yes                            | 32 GiB*  | 100 GB | 10 / 20
SBC-Configurator | Pinned                                | Yes                            | 16 GiB*  | 80 GB  | 2 / 4

...

  • hw:cpu_policy=dedicated: This setting enables CPU pinning.

  • hw:cpu_thread_policy=prefer: This setting allocates each vCPU on thread siblings of physical CPUs.

  • hw:numa_nodes: This setting defines how the host processor cores are spread over the host NUMA nodes. When set to 1, it ensures that the cores are not spread across more than one NUMA node, guaranteeing the performance benefit of a single node; otherwise Nova is free to split the cores between the available NUMA nodes.

  • hw:cpu_max_sockets: This setting defines how KVM exposes the sockets and cores to the guest. Without this setting, KVM always exposes a socket for every core, with each socket having one core. This requires a mapping in the host virtualization layer to convert the topology, resulting in a measurable performance degradation. That overhead can be avoided by accurately matching the advertised cpu_sockets to the requested host numa_nodes. Using the *_max_* variant ensures that the value cannot be overridden by image metadata supplied by tenant-level users.

EMS SWe and PSX SWe Flavor Examples

...

To create an S-SBC SWe flavor with 128 GiB RAM and 100 GB of hard disk based on 2 NUMA nodes of 20 vCPUs each (that is, 40 vCPUs for the S-SBC), enter the following Nova commands from the OpenStack CLI.

Code Block
% nova flavor-create Sonus-SSBC auto 131072 100 40
% nova flavor-key Sonus-SSBC set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key Sonus-SSBC set hw:cpu_max_sockets=2
% nova flavor-key Sonus-SSBC set hw:mem_page_size=2048
% nova flavor-key Sonus-SSBC set hw:numa_nodes=2
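To have the scheduler place this flavor only on hosts in the pinned host aggregate (a sketch; assumes an aggregate carrying pinned=true metadata as described earlier), the matching extra spec can be added to the same flavor:

Code Block
% nova flavor-key Sonus-SSBC set aggregate_instance_extra_specs:pinned=true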

...

...
