...
To retrieve the NUMA topology for the node, run the following command:
Code Block |
---|
# lscpu | grep NUMA
NUMA node(s):          2
NUMA node0 CPU(s):     0-11,24-35
NUMA node1 CPU(s):     12-23,36-47 |
Note |
---|
In this case, there are two Intel sockets with 12 cores each, configured for Hyper-Threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, and so on. (The pairs are also known as thread siblings.) |
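To confirm which logical CPUs are thread siblings on a given physical core, sysfs can be queried directly (a quick check, assuming a Linux host; cpu0 is used as an illustration):
Code Block |
---|
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0,24 |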
When the following line is added at the end of /etc/default/grub, the system reserves memory as 1 GiB hugepages for use by VMs (rather than by the host operating system):
Code Block |
---|
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256" |
Note |
---|
The number of hugepages depends on how many VM instances are created on this host, multiplied by the memory size of each instance. The hugepagesz value should be the maximum hugepage size supported by the kernel in use. |
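As an illustration of this arithmetic (the instance mix is hypothetical): a host intended to run one S-SBC instance (64 GiB) and one M-SBC instance (32 GiB) needs at least 96 hugepages of 1 GiB each, plus any headroom reserved for growth:
Code Block |
---|
# 64 GiB (S-SBC) + 32 GiB (M-SBC) = 96 x 1 GiB hugepages
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=96" |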
For a Hyper-Threaded host, add the CPU pin set list to vcpu_pin_set in /etc/nova/nova.conf:
Code Block |
---|
vcpu_pin_set=2-11,14-23,26-35,38-47 |
For compute nodes servicing VMs that can run on a Hyper-Threaded host, the CPU pin set includes all thread siblings except for the cores carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26 through 11/35 on NUMA node 0, and 14/38 through 23/47 on NUMA node 1.
Update the boot record and reboot the compute node.
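On a RHEL-style host this is typically done with grub2-mkconfig (a sketch; the output path differs between BIOS and UEFI systems and between distributions):
Code Block |
---|
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot |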
Configure the Nova Scheduler to use NUMA Topology and Aggregate Instance Extra Specs on Nova Controller Hosts:
...
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
To also support SR-IOV, include the PciPassthroughFilter (as in the list above), then restart the openstack-nova-scheduler service:
Code Block |
---|
systemctl restart openstack-nova-scheduler.service |
With CPU pinning now enabled, Nova must be configured to use it. The section below describes a method using a combination of host aggregates and Nova flavor keys.
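The flavor keys used later in this section (aggregate_instance_extra_specs:Active=true) assume a host aggregate whose metadata carries the matching key. The following is a sketch using the legacy nova CLI; the aggregate name, availability zone, and host name are illustrative:
Code Block |
---|
#Create and tag an aggregate for pinned workloads, then add a compute host
% nova aggregate-create pinned-hosts nova
% nova aggregate-set-metadata pinned-hosts Active=true
% nova aggregate-add-host pinned-hosts compute-0 |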
...
Hyper-Threading is designed to use "idle resources" on Intel processors. A physical core is split into two logical cores that run parallel threads, and each logical core maintains its own architectural state. The actual performance gain from Hyper-Threading depends on the amount of idle resources on the physical CPU.
Hyper-Threading on an SBC SWe instance has yet to show a quantifiable performance gain for a given number of cores; Sonus is in the process of assessing this across various call flows. Performance should never drop below the values obtained without Hyper-Threading for the same number of cores, and may increase, but additional engineering work is required to qualify that there are no negative impacts.
...
Warning |
---|
This feature is applicable only to a Distributed SBC on an OpenStack platform. |
Hyper-Threading can be enabled in the BIOS for all Sonus NFV elements.
VNF | CPU-Pinning | Hyper-Threading Flavor Setting |
---|---|---|
S-SBC | Required | Not Supported (Support is pending further research and development) |
MM-SBC | Required | Not Supported (Support is pending further research and development) |
T-SBC | Required | Not Supported (Support is pending further research and development) |
SBC-Configurator | Supported but not required | Supported |
...
VNF | CPU-Pinning (hw:cpu_policy=dedicated) | Hyper-Threading | RAM* | Disk | Cores / vCPUs
---|---|---|---|---|---
S-SBC | Pinned | Yes | 64 GiB (65,536 MB)* | 100 GB | 20 / 40
M-SBC | Pinned | Yes | 32 GiB (32,768 MB)* | 100 GB | 10 / 20
SBC-Configurator | Pinned | Yes | 16 GiB (16,384 MB)* | 80 GB | 2 / 4
*Memory values are rounded to the next power of 2 to prevent memory fragmentation in the Nova compute scheduler.
...
hw:cpu_max_sockets: This setting defines how KVM exposes sockets and cores to the guest. Without it, KVM exposes a socket for every core (each socket having one core), which requires a mapping in the host virtualization layer to convert the topology and results in measurable performance degradation. That overhead can be avoided by accurately matching the advertised cpu_sockets to the requested host numa_nodes. Using the *_max_* variant ensures the value cannot be overridden in image metadata supplied by tenant-level users.
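One way to sanity-check the resulting topology, assuming a Linux guest: boot an instance from a flavor that sets hw:cpu_max_sockets and run lscpu inside the guest. The output below is illustrative for a 2-socket flavor:
Code Block |
---|
# lscpu | grep 'Socket(s)'
Socket(s):             2 |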
Code Block |
---|
...
#EMS
% nova flavor-create EMS-SK-E-01P auto 16384 60 8
% nova flavor-key EMS-SK-E-01P set aggregate_instance_extra_specs:Active=true hw:cpu_policy=dedicated |
...
#PSX Master
% nova flavor-create PSX-SK-PM-01P auto 65536 180 20
% nova flavor-key PSX-SK-PM-01P set aggregate_instance_extra_specs:Active=true hw:cpu_policy=dedicated |
...
#SBC Configurator
% nova flavor-create SBC-SK-C-01P auto 16384 80 4
% nova flavor-key SBC-SK-C-01P set aggregate_instance_extra_specs:Active=true hw:cpu_policy=dedicated |
...
#PSX Replica as SRv6 Proxy
% nova flavor-create PSX-SK-SRV6-01P auto 32768 180 16
% nova flavor-key PSX-SK-SRV6-01P set aggregate_instance_extra_specs:Active=true hw:cpu_policy=dedicated
% nova flavor-key PSX-SK-SRV6-01P set hw:numa_nodes=1 hw:cpu_max_sockets=1 |
...
#PSX Replica as D+
% nova flavor-create PSX-SK-CD-01P auto 32768 180 16
% nova flavor-key PSX-SK-CD-01P set aggregate_instance_extra_specs:Active=true hw:cpu_policy=dedicated
% nova flavor-key PSX-SK-CD-01P set hw:numa_nodes=1 hw:cpu_max_sockets=1 |
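To confirm that the extra specs were applied, the flavor can be inspected with nova flavor-show (legacy nova CLI):
Code Block |
---|
% nova flavor-show PSX-SK-CD-01P |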
...
To create an M-SBC SWe flavor with 20 vCPUs, 32 GiB of RAM, and 100 GB of hard disk, enter the following Nova commands from the OpenStack CLI.
Code Block |
---|
% nova flavor-create Sonus-MSBC auto 32768 100 20
% nova flavor-key Sonus-MSBC set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key Sonus-MSBC set hw:cpu_max_sockets=1
% nova flavor-key Sonus-MSBC set hw:mem_page_size=2048
% nova flavor-key Sonus-MSBC set hw:numa_nodes=1 |
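Once the flavor exists, an instance can be booted from it in the usual way. A sketch, with an illustrative image name, network ID, and instance name:
Code Block |
---|
% nova boot --flavor Sonus-MSBC --image sbc-swe-image --nic net-id=<mgmt-net-uuid> msbc-01 |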
To create an S-SBC SWe flavor with 64 GiB of RAM and 100 GB of hard disk based on 2 NUMA nodes of 20 vCPUs each (40 vCPUs in total), enter the following Nova commands from the OpenStack CLI.
Code Block |
---|
% nova flavor-create Sonus-SSBC auto 65536 100 40
% nova flavor-key Sonus-SSBC set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key Sonus-SSBC set hw:cpu_max_sockets=2
% nova flavor-key Sonus-SSBC set hw:mem_page_size=2048
% nova flavor-key Sonus-SSBC set hw:numa_nodes=2 |
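After an instance boots from a pinned flavor, the pinning can be verified on the compute host through libvirt (assuming a KVM/libvirt hypervisor; the domain name is illustrative). Run without a CPU list, virsh vcpupin prints the current vCPU-to-pCPU mapping:
Code Block |
---|
# virsh vcpupin instance-00000001 |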
By default (that is, when numa_mempolicy=preferred is not set), the NUMA memory allocation policy is "strict", which forces the kernel to allocate memory only from the local NUMA node where its processes are scheduled. If memory on one NUMA node is exhausted for any reason, the kernel cannot allocate memory from another NUMA node, even when memory is available on that node. With this in mind, using the default setting would negatively impact applications like the S-SBC. The setting is discussed in the references at the end of this section.
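The difference between the two kernel policies can be illustrated on any Linux host with numactl (myapp is a placeholder command): --membind corresponds to the strict behavior and fails when the bound node is exhausted, while --preferred falls back to other nodes:
Code Block |
---|
# numactl --membind=0 myapp
# numactl --preferred=0 myapp |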
Host Aggregate Based Pinning Flavor Specification Reference: http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
OpenStack Flavor Specification Reference: http://docs.openstack.org/admin-guide/compute-flavors.html
OpenStack CPU Topologies Reference: http://docs.openstack.org/admin-guide/compute-cpu-topologies.html
...