This section describes the best possible performance and scale for a given virtual machine resource profile.
Real-time applications have stringent requirements with respect to jitter, latency, quality of service, and packet loss. Migrating real-time applications to an all-software environment requires deterministic response to failures and deterministic performance from the hypervisor scheduler and the host operating system (OS). Although OpenStack continually addresses carrier-grade performance, scalability, resiliency, manageability, modularity, and interoperability, some fine-tuning is still required to achieve maximum scale and reliable performance for the SBC SWe and ancillary applications. This page defines the areas that can be fine-tuned in OpenStack/KVM environments.
The Ribbon SBC SWe requires a reservation of CPU, memory, and hard disk resources in virtual machines, in addition to certain performance tuning parameters, for any production deployment that supports more than 100 concurrent sessions.
Info: The OpenStack infrastructure supports I/O (PCIe) based NUMA scheduling, as referenced here.
(Content excerpted from the KVM Performance Tuning page.)
Ribbon recommends applying the BIOS settings in the table below to all Nova compute hosts running the Ribbon VMs for optimum performance:
- S-SBC
- M-SBC
- T-SBC
- I-SBC
- SBC Configurator
Table: Recommended BIOS Settings
BIOS Parameter | Recommended Setting | Comments
CPU power management | Balanced | Ribbon recommends Maximum Performance
Intel Hyper-Threading, Turbo Boost, SR-IOV (virtualization technology) | Enabled | For hardware virtualization
Table: BIOS Setting Recommendations for HP DL380p Gen8 Server
BIOS Parameter | Recommended Setting | Default Value
HP Power Profile | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Optimal Cooling | Optimal Cooling
HW Prefetchers | Disabled | Enabled
Apply the settings below to all Nova compute hosts in the pinned host aggregate.
Table: Nova Compute Hosts
Applies to: | Configuration
S-SBC | 3.b
M-SBC | 3.b
T-SBC | 3.b
I-SBC | 3.b
SBC Configurator | 3.a
From the hypervisor's perspective, a virtual machine appears as a single process that must be scheduled on the available CPUs. By design, a hypervisor may schedule a virtual machine's vCPUs on different physical processors from one scheduling cycle to the next. While this is acceptable in environments where the hypervisor is allowed to over-commit, it contradicts the requirements of real-time applications. Hence, Ribbon requires CPU pinning to prevent applications from sharing a core.
Figure: CPU with Unpinned Applications
Figure: CPU with Pinned Applications
By default, virtual CPUs are not assigned to specific host CPUs, but Ribbon requires CPU pinning to meet the requirements of real-time media traffic. The primary reason for pinning Ribbon instances is to prevent other workloads (including those of the host OS) from introducing significant jitter into media processing. Unpinned workloads can also cause significant message queuing delays and buffer overflows at higher call rates. Instances with pinned CPUs cannot use the CPUs of another pinned instance; this prevents resource contention and improves processor cache efficiency by reserving physical cores. Host aggregate filters or availability zones can be used to select compute hosts for pinned and non-pinned instances. OpenStack clearly states that pinned instances must be separated from unpinned instances, as the latter will not respect the resourcing requirements of the former.
To enable CPU pinning, execute the following steps on every compute host:
Retrieve the NUMA topology for the node using the command below:
Code Block:
# lscpu | grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Info: In this case, there are two Intel sockets with 12 cores each, configured for hyper-threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, and so on. (The pairs are also known as thread siblings.)
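If it is unclear which logical CPUs are thread siblings on a given host, they can be confirmed from the topology information exposed by Linux; the commands below are a quick check only (output varies by host):
Code Block:
# lscpu --extended=CPU,CORE,SOCKET,NODE
# grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list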
Add the following to the end of /etc/default/grub:
Code Block:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256"
Info: The number of hugepages depends on how many VM instances are created on the host, multiplied by the memory size of each instance. The hugepagesz value should be the maximum hugepage size supported by the kernel in use.
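As a worked example of that sizing rule (the instance counts and sizes here are hypothetical, not a recommendation): hosting two 32 GiB instances and one 128 GiB instance requires at least 192 x 1 GiB hugepages, so hugepages=192 plus any headroom you choose. After rebooting, the reservation can be confirmed from /proc/meminfo:
Code Block:
# grep Huge /proc/meminfo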
- A pin set limits KVM to placing guests on a subset of the physical cores and thread siblings
The vcpu_pin_set property defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs). Omitting some cores from the pin set ensures that an instance's virtual CPUs do not occupy the cores that are meant for the OpenStack processes and other host applications. This ensures a clear segregation of host and guest processes, resulting in better performance of the guest instances.
This also boosts the performance of non-threaded guest applications by allowing the host OS to schedule closely related host OS processes (for example, virtio processes) on the same core as the guest OS.
Info: To realize better performance, do not configure isolcpus.
The following example builds on the CPU and NUMA topology shown in step 1 (above). For a hyper-threading host, add the CPU pin set list to vcpu_pin_set in the [DEFAULT] section of /etc/nova/nova.conf:
Code Block:
vcpu_pin_set=2-11,14-23,26-35,38-47
For compute nodes servicing VMs that can run on hyper-threaded hosts, the CPU pin set includes all thread siblings except for the cores that are carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25, 12/36, and 13/37 to the host OS. VMs use cores/threads 2/26 through 11/35 on NUMA node 0, and cores/threads 14/38 through 23/47 on NUMA node 1.
Update the boot record and reboot the compute node.
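The command that regenerates the boot record depends on the distribution; the examples below are common defaults and should be verified against your install:
Code Block:
# update-grub                              (Debian/Ubuntu)
# grub2-mkconfig -o /boot/grub2/grub.cfg   (RHEL/CentOS)
# reboot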
Configure the Nova scheduler to use NUMA topology and aggregate instance extra specs on the Nova controller hosts:
On each node where the OpenStack compute scheduler (openstack-nova-scheduler) runs, edit the nova.conf file located at /etc/nova/nova.conf. Add the AggregateInstanceExtraSpecsFilter and NUMATopologyFilter values to the list of scheduler_default_filters. These filters are used to segregate the compute nodes that can be used for CPU pinning from those that cannot, and to apply NUMA-aware scheduling rules when launching instances:
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
In addition, to support SR-IOV, enable the PciPassthroughFilter and restart the openstack-nova-scheduler service.
Code Block:
systemctl restart openstack-nova-scheduler.service
With CPU pinning enabled, Nova must be configured to use it. See the section below for a method that uses a combination of host aggregates and Nova flavor keys.
CPU Model Setting
Apply the following settings to all Nova compute hosts where Ribbon VMs are installed.
Applies to:
- EMS
- PSX-M
- PSX-Replica
- S-SBC
- M-SBC
- T-SBC
- SBC Configurator
The CPU model defines the CPU flags and the CPU architecture that are exposed from the host processor to the guest.
Ribbon supports either host-passthrough or host-model for non-S/M/T-SBC instances, including the SBC Configurator.
Modify the nova.conf file located at /etc/nova/nova.conf.
Ribbon recommends setting the CPU mode to host-model for SBC instances so that every detail of the host CPU is visible to the SBC SWe.
This setting is defined in /etc/nova/nova.conf:
[libvirt]
virt_type = kvm
cpu_mode = host-model
This change is made in /etc/nova/nova-compute.conf:
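The exact layout of nova-compute.conf can vary between distributions; a minimal sketch of the equivalent entry, assuming the same [libvirt] section applies in this file, is:
[libvirt]
virt_type = kvm
cpu_mode = host-model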
Check the current configuration of the CPU frequency setting using the following command on the host system.
Code Block:
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
The CPU frequency governor must be set to performance to improve VNF performance. Use the following command on the host system:
Code Block:
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor |
Info: You must ensure that the above setting persists across reboots.
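One possible way to make the governor setting persistent (shown as a sketch only; the unit name is arbitrary and your distribution may provide its own mechanism, such as cpufrequtils or tuned) is a small systemd service:
Code Block:
# cat /etc/systemd/system/cpu-performance.service
[Unit]
Description=Set CPU frequency scaling governor to performance

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable cpu-performance.service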
Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:
- S-SBC
- M-SBC
- T-SBC
- I-SBC
- SBC Configurator
The default over-subscription ratios are 1:16 for CPU and 1:1.5 for memory. Modify the nova.conf file located at /etc/nova/nova.conf, and change the default settings of cpu_allocation_ratio and ram_allocation_ratio to 1.0 (1:1) for resource reservation.
Code Block:
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
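Note that changes to /etc/nova/nova.conf take effect only after the compute service is restarted; the service name below assumes Red Hat-style packaging and may differ on your distribution:
Code Block:
# systemctl restart openstack-nova-compute.service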
Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:
- S-SBC
- M-SBC
- T-SBC
- I-SBC
- SBC Configurator
While using the centralized 1:1 HA mode with virtual NICs (virtio), OpenStack creates tap devices for each port on the guest VM. The Tx queue length of the tap devices, which defines the queue between OVS and the VM instance, is set to 500 by default. This value is too low and increases the possibility of packet drops at the tap device. Set the Tx queue length to a higher value to increase performance and reliability; use a value that matches your performance requirements.
The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.
Code Block:
Modify the 60-tap.rules file and add the KERNEL line shown below:
# vim /etc/udev/rules.d/60-tap.rules
KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 1000"
# udevadm control --reload-rules
Apply the rules to interfaces that have already been created:
# udevadm trigger --attr-match=subsystem=net
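To confirm that the larger queue length has been applied (a quick sanity check; tap device names vary per instance):
Code Block:
# ip -o link show | grep tap
# cat /sys/class/net/tap*/tx_queue_len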
Kernel Same-page Merging (KSM) Settings
Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:
- S-SBC
- M-SBC
- T-SBC
- I-SBC
- SBC Configurator
Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges them to save memory resources. In the event that one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests are running with the same level of the operating system. However, the scanning process incurs an overhead that may cause applications to run slower, which is not desirable. The SBC SWe requires that KSM be turned off.
The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.
Code Block:
# echo 0 >/sys/kernel/mm/ksm/run
# echo "KSM_ENABLED=0" > /etc/default/qemu-kvm |
Once KSM is turned off, it is important to verify that there is still sufficient memory on the hypervisor. When pages are no longer merged, memory usage may increase and lead to swapping, which negatively impacts performance.
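A quick way to confirm that KSM is disabled and to review the remaining memory headroom (interpret the output against your own capacity plan):
Code Block:
# cat /sys/kernel/mm/ksm/run
# free -h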
Hyper-threading is designed to use idle resources on Intel processors. A physical core is split into two logical cores to create parallel threads, and each logical core has its own architectural state. The actual performance gain from using hyper-threading depends on the amount of idle resources on the physical CPU.
Hyper-threading is shown in the diagram below.
Figure: Hyperthreading Support
Warning: This feature is applicable only to a distributed SBC on an OpenStack platform.
Hyper-threading should be enabled in the BIOS for all Ribbon VNF elements.
Table: VNF CPU Pinning and Hyper-threading Support
VNF | CPU-Pinning | Hyper-Threading Flavor Setting
S-SBC | Required | Required
M-SBC | Required | Required
T-SBC | Required | Required
I-SBC | Required | Required
SBC Configurator | Supported but not required | Supported
Table: VNF Tested Configurations
VNF | CPU-Pinning (hw:cpu_policy=dedicated) | Hyper-Threading Flavor Setting | RAM* | Disk | Cores / vCPUs
S-SBC | Pinned | Yes | 128 GiB* | 100 GB | 20 / 40
M-SBC | Pinned | Yes | 32 GiB* | 100 GB | 10 / 20
SBC Configurator | | | 16 GiB* | 80 GB | 2 / 4
*Memory values are rounded to the next power of 2 to prevent memory fragmentation in the Nova compute scheduler.
A few methods exist to influence VM placement in OpenStack environments. The method described in this section segregates Nova compute nodes into discrete host aggregates and uses the Nova flavor key aggregate_instance_extra_specs so that specific flavors use specific host aggregates. For this to work, all flavors must specify a host aggregate. This is accomplished by first assigning all existing flavors to a "normal" host aggregate, and then assigning only the Nova compute hosts configured for non-hyper-threading to a "Pin-Isolate" host aggregate.
From the Openstack CLI, create the host aggregates and assign compute hosts:
Code Block:
% nova aggregate-create Active-Pin-Isolate
% nova aggregate-set-metadata Active-Pin-Isolate Active-Pin-Isolate=true
% nova aggregate-add-host Active-Pin-Isolate {first nova compute host in aggregate}
{repeat for each compute host to be added to this aggregate}
% nova aggregate-create Active
% nova aggregate-set-metadata Active Active=true
% nova aggregate-add-host Active {first nova compute host in aggregate}
{repeat for each compute host to be added to this aggregate}
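The aggregate membership and metadata can then be verified (nova CLI syntax, matching the commands above):
Code Block:
% nova aggregate-list
% nova aggregate-details Active-Pin-Isolate
% nova aggregate-details Active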
Info: Ensure that all existing flavors in the entire stack specify the hyper-threaded aggregate by using the "aggregate_instance_extra_specs:Active"="true" metadata parameter. Otherwise, flavors can get scheduled on the hosts with pinning, and the non-pinned VMs will not respect the pinned isolation.
From the Openstack CLI, assign all existing flavors to the non-pinned host aggregate:
Code Block:
% for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; \
do nova flavor-key ${FLAVOR} set \
"aggregate_instance_extra_specs:Active"="true"; \
done
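Following the same pattern, any flavor that must land on the pinned hosts would instead reference the pinned aggregate's metadata key; a sketch using a placeholder flavor name:
Code Block:
% nova flavor-key {pinned flavor name} set "aggregate_instance_extra_specs:Active-Pin-Isolate"="true"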
The flavor definitions listed below include the following extra specs:
hw:cpu_policy=dedicated: This setting enables CPU pinning.
hw:cpu_thread_policy=prefer: This setting allocates each vCPU on thread siblings of physical CPUs.
hw:numa_nodes: This setting defines how the host processor cores are spread over the host NUMA nodes. When this is set to 1, the cores are not spread over more than one NUMA node; otherwise, Nova is free to split the cores between the available NUMA nodes.
hw:cpu_max_sockets: This setting defines how KVM exposes the sockets and cores to the guest. Without this setting, KVM always exposes a socket for every core, with each socket having one core. This requires a mapping in the host virtualization layer to convert the topology, resulting in a measurable performance degradation. That overhead can be avoided by accurately matching the advertised cpu_sockets to the requested host numa_nodes. Using the *_max_* variant ensures that the value cannot be overridden in the image metadata supplied by tenant-level users.
To create an M-SBC SWe flavor with 20 vCPUs, 32 GiB of RAM, and 100 GB of hard disk, enter the following Nova commands from the Openstack CLI.
Code Block:
% nova flavor-create SBC-SK-CM-01P auto 32768 100 20
% nova flavor-key SBC-SK-CM-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key SBC-SK-CM-01P set hw:cpu_max_sockets=1
% nova flavor-key SBC-SK-CM-01P set hw:mem_page_size=1048576
% nova flavor-key SBC-SK-CM-01P set hw:numa_nodes=1
To create an S-SBC SWe flavor with 128 GiB of RAM and 100 GB of hard disk based on 2 x NUMA nodes of 20 vCPUs each (that is, 40 vCPUs for the S-SBC), enter the following Nova commands from the Openstack CLI.
Code Block:
% nova flavor-create SBC-SK-CS-01P auto 131072 100 40
% nova flavor-key SBC-SK-CS-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key SBC-SK-CS-01P set hw:cpu_max_sockets=2
% nova flavor-key SBC-SK-CS-01P set hw:mem_page_size=1048576
% nova flavor-key SBC-SK-CS-01P set hw:numa_nodes=2
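To confirm that the extra specs were applied to a flavor (for example, the S-SBC flavor created above):
Code Block:
% nova flavor-show SBC-SK-CS-01P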
Unlike the default setting, numa_mempolicy=preferred, the NUMA memory allocation policy used here is "strict", which forces the kernel to allocate memory only from the local NUMA node where processes are scheduled. If memory on one of the NUMA nodes is exhausted for any reason, the kernel cannot allocate memory from another NUMA node, even when memory is available on that node. With this in mind, using the default setting would have a negative impact on applications such as the S-SBC. This setting is described in the link below:
https://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html
Host Aggregate Based Pinning Flavor Specification Reference: http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
OpenStack Flavor Specification Reference: http://docs.openstack.org/admin-guide/compute-flavors.html
OpenStack CPU Topologies Reference: http://docs.openstack.org/admin-guide/compute-cpu-topologies.html