The sections below describe the best possible performance and scale for a given virtual machine resource profile:
...
BIOS Parameter | Setting | Comments |
---|---|---|
CPU power management | Balanced | Sonus recommends Maximum Performance |
Intel Hyper-Threading | Enabled | |
Intel Turbo Boost | Enabled | |
SR-IOV | Enabled | |
Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization |
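After applying the BIOS settings above, their effect can be spot-checked from the host OS through the CPU flags. The helper function and the sample flags line below are illustrative only; on a real host, read the flags with `grep -m1 '^flags' /proc/cpuinfo`.

```shell
# has_flag FLAG FLAGS_LINE -> succeeds if FLAG appears in the flags line
has_flag() {
    case " $2 " in *" $1 "*) return 0 ;; *) return 1 ;; esac
}

# Illustrative flags line; on a real host use: grep -m1 '^flags' /proc/cpuinfo
sample="fpu vme de pse vmx ht aes"

has_flag vmx "$sample" && echo "VT-x visible to the host"
has_flag ht  "$sample" && echo "Hyper-Threading flag present"
```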
...
...
Apply the settings below to all Nova compute hosts in the pinned host aggregate.
Applies to: | Configuration |
---|---|
S-SBC | 3.b |
M-SBC | 3.b |
T-SBC | 3.b |
SBC Configurator | 3.a |
...
From the hypervisor's perspective, a virtual machine appears as a single process to be scheduled on the available CPUs. By design, a hypervisor may schedule each clock cycle on a different processor. While this is acceptable in environments where the hypervisor is allowed to over-commit, it contradicts the requirements of real-time applications. Hence, Sonus requires CPU pinning to prevent applications from sharing a core.
...
To enable CPU pinning, execute the following steps on every compute host where CPU pinning is to be enabled:
To retrieve the NUMA topology for the node, execute the following command:
Code Block |
---|
# lscpu | grep NUMA
NUMA node(s):          2
NUMA node0 CPU(s):     0-11,24-35
NUMA node1 CPU(s):     12-23,36-47 |
Note |
---|
In this case, there are two Intel sockets with 12 cores each, configured for Hyper-Threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, etc. (the pairs are also known as thread siblings). |
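The sibling pairing can be expressed as a small helper function. This is a sketch for this specific 48-CPU topology only; on a live host, verify the pairing with `cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list`.

```shell
# For this topology (48 logical CPUs, siblings offset by 24), the thread
# sibling of logical CPU n is n+24 (or n-24 for the upper half).
sibling_of() {
    if [ "$1" -lt 24 ]; then echo $(( $1 + 24 )); else echo $(( $1 - 24 )); fi
}

echo "CPU 0  pairs with CPU $(sibling_of 0)"    # 0/24
echo "CPU 12 pairs with CPU $(sibling_of 12)"   # 12/36
```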
When the following line is added at the end of /etc/default/grub, the system reserves these hugepages for use by VMs (and not by the host operating system):
Code Block |
---|
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256" |
Note |
---|
The number of hugepages depends on the number of VM instances created on this host, multiplied by the memory size of each instance. The hugepagesz value should be the largest hugepage size supported by the kernel in use. |
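As a worked example of that sizing rule (the instance count and sizes here are hypothetical, not from the source):

```shell
# Hypothetical host running two 128 GiB instances with hugepagesz=1G:
vm_count=2
vm_mem_gib=128
page_gib=1
hugepages=$(( vm_count * vm_mem_gib / page_gib ))
echo "hugepages=$hugepages"   # consistent with hugepages=256 in the grub line above
```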
For a Hyper-Threading host: add the CPU pin set list to vcpu_pin_set in the [DEFAULT] section of /etc/nova/nova.conf:
Code Block |
---|
vcpu_pin_set=2-11,14-23,26-35,38-47 |
For compute nodes servicing VMs that can run on a Hyper-Threaded host, the CPU pin set includes all thread siblings except for the cores that are carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26-11/35 on NUMA node 0, and cores/threads 14/38-23/47 on NUMA node 1.
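The pin set can be derived mechanically from the NUMA topology by excluding the host-OS cores and their thread siblings. This is a sketch for the 48-CPU example in this section:

```shell
# Reserve physical cores 0,1 (NUMA node 0) and 12,13 (node 1) for the host OS;
# the pin set is every remaining logical CPU, including thread siblings.
host_cores="0 1 12 13"
pin=""
for cpu in $(seq 0 47); do
    core=$(( cpu % 24 ))            # siblings n and n+24 share a physical core
    case " $host_cores " in
        *" $core "*) ;;             # carved out for the host OS
        *) pin="${pin:+$pin,}$cpu" ;;
    esac
done
echo "$pin"                         # expands to 2-11,14-23,26-35,38-47
```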
Update the boot record and reboot the compute node.
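On a RHEL-based compute host, updating the boot record typically looks like the following (an illustrative sketch; this assumes GRUB2 with BIOS boot, and the grub.cfg path differs on UEFI systems):

```
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
```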
Configure the Nova Scheduler to use NUMA Topology and Aggregate Instance Extra Specs on Nova Controller Hosts:
...
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
In addition, to support SR-IOV, enable the PciPassthroughFilter and restart the openstack-nova-scheduler service.
Code Block |
---|
systemctl restart openstack-nova-scheduler.service |
With CPU pinning now enabled, Nova must be configured to use it. The section below describes a method that uses a combination of a host aggregate and Nova flavor keys.
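As a sketch of that method (the aggregate name, host name, and metadata key below are illustrative, not from the source): create a host aggregate for the pinned compute hosts, tag it with metadata, and add a matching key to the flavor so the AggregateInstanceExtraSpecsFilter schedules pinned instances only onto those hosts.

```
% nova aggregate-create pinned-hosts
% nova aggregate-set-metadata pinned-hosts pinned=true
% nova aggregate-add-host pinned-hosts compute-1.example.com
% nova flavor-key Sonus-SSBC set aggregate_instance_extra_specs:pinned=true
```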
...
Sonus supports either host-passthrough or host-model for non-S/M/T-SBC instances; this includes the SBC Configurator.
...
The CPU model defines the CPU flags and the CPU architecture that are exposed from the host processor to the guest. Modify the nova.conf file located at /etc/nova/nova.conf. Sonus recommends setting the CPU mode to host-passthrough for SBC instances so that every detail of the host CPU is visible to the SBC SWe. The host-model setting impacts how CPU L2/L3 cache information is communicated to the guest OS, since the libvirt-emulated CPU does not accurately represent the L2 and L3 CPU hardware caches to the guest OS. In performance testing of the Sonus SBC, Sonus has seen significant performance degradation that goes beyond a simple reduction in capacity: signaling latency and jitter fall outside acceptable limits even at modest loads.
This setting is defined in /etc/nova/nova.conf:
cpu_mode = host-passthrough |
---|
This change is made in /etc/nova/nova-compute.conf:
[libvirt] |
---|
Check the current configuration of the CPU frequency setting using the following command on the host system.
Code Block |
---|
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor |
The CPU frequency setting must be set to performance to improve the VNF performance. Use the following command on the host system:
Code Block |
---|
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor |
Note |
---|
You must ensure that the above settings persist across reboots. |
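One way to make the governor setting persistent is a small boot-time unit. This is a sketch assuming a systemd-based host; the unit file name is hypothetical, and distributions that ship tuned or cpupower offer equivalent mechanisms.

```
# /etc/systemd/system/cpufreq-performance.service  (hypothetical unit name)
[Unit]
Description=Set CPU frequency governor to performance

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable cpufreq-performance.service`.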
...
VNF | CPU-Pinning | Hyper-Threading Flavor Setting |
---|---|---|
S-SBC | Required | Supported |
M-SBC | Required | Supported |
T-SBC | Required | Supported |
SBC-Configurator | Supported but not required | Supported |
...
VNF | CPU-Pinning (hw:cpu_policy=dedicated) | Hyper-Threading | RAM* | Disk | Cores / vCPUs |
---|---|---|---|---|---|
S-SBC | Pinned | Yes | 128 GiB* | 100 GB | 20 / 40 |
M-SBC | Pinned | Yes | 32 GiB* | 100 GB | 10 / 20 |
SBC-Configurator | Pinned | Yes | 16 GiB* | 80 GB | 2 / 4 |
...
hw:cpu_max_sockets: This setting defines how KVM exposes the sockets and cores to the guest. Without this setting, KVM always exposes a socket for every core; each socket having one core. This requires a mapping in the host virtualization layer to convert the topology resulting in a measurable performance degradation. That performance overhead can be avoided by accurately matching the advertised cpu_sockets to the requested host numa_nodes. Using the *_max_* variable ensures that the value cannot be overridden in the image metadata supplied by tenant level users.
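The resulting guest topology can be sanity-checked with simple arithmetic (a sketch; the values match the S-SBC flavor used in this document):

```shell
vcpus=40            # S-SBC flavor: 40 vCPUs
max_sockets=2       # hw:cpu_max_sockets=2, matching hw:numa_nodes=2
threads=2           # Hyper-Threaded host: 2 threads per core
cores_per_socket=$(( vcpus / max_sockets / threads ))
echo "guest topology: ${max_sockets} sockets x ${cores_per_socket} cores x ${threads} threads"
```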
...
To create an S-SBC SWe flavor with 128 GiB RAM and 100 GB of hard disk based on 2 NUMA nodes of 20 vCPUs each (40 vCPUs total for the S-SBC), enter the following Nova commands from the OpenStack CLI.
Code Block |
---|
% nova flavor-create Sonus-SSBC auto 131072 100 40
% nova flavor-key Sonus-SSBC set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key Sonus-SSBC set hw:cpu_max_sockets=2
% nova flavor-key Sonus-SSBC set hw:mem_page_size=2048
% nova flavor-key Sonus-SSBC set hw:numa_nodes=2 |
...
Host Aggregate Based Pinning Flavor Specification Reference: http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
OpenStack Flavor Specification Reference: http://docs.openstack.org/admin-guide/compute-flavors.html
OpenStack CPU Topologies Reference: http://docs.openstack.org/admin-guide/compute-cpu-topologies.html
...