...
To retrieve the NUMA topology for the node, run the following command:
Code Block:
# lscpu | grep NUMA
NUMA node(s):        2
NUMA node0 CPU(s):   0-11,24-35
NUMA node1 CPU(s):   12-23,36-47
Note:
In this case, there are two Intel sockets with 12 cores each, configured for Hyper-Threading. Logical CPUs are paired on physical cores in the pattern 0/24, 1/25, etc. (The pairs are also known as thread siblings.)
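To confirm which logical CPUs share a physical core, the extended lscpu output can be used; this is a minimal sketch, and the reported values will vary by host:
Code Block:
# List each logical CPU with its NUMA node, socket, and physical core;
# logical CPUs that report the same CORE value are thread siblings
lscpu -e=CPU,NODE,SOCKET,CORE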
The following line must be added at the end of /etc/default/grub:
Code Block:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256"
Note:
The number of hugepages depends on how many VM instances are created on this host, multiplied by the memory size of each instance. The hugepagesz should be the maximum hugepage size supported by the kernel in use.
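As an illustrative calculation (the instance count and memory size below are assumptions, not values from this guide), a host intended to run eight 16 GB instances would need 128 hugepages of 1 GB each:
Code Block:
# 8 instances x 16 GB per instance = 128 GB of guest memory
# With hugepagesz=1G, reserve 128 hugepages:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=128"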
For a Hyper-Threading host: add the CPU pin set list to vcpu_pin_set in the [DEFAULT] section of /etc/nova/nova.conf:
Code Block:
vcpu_pin_set=2-11,14-23,26-35,38-47
For compute nodes servicing VMs which can run on a hyper-threaded host, the CPU pin set includes all thread siblings except for the cores which are carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26-11/35 on NUMA node 0, and cores/threads 14/38-23/47 on NUMA node 1.
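As a quick sanity check of the pairing described above, the thread siblings of the host-reserved cores can be read from sysfs; this is a minimal sketch, with CPU numbers following the example topology:
Code Block:
# Print the thread siblings of each core reserved for the host OS
for cpu in 0 1 12 13; do
    echo -n "cpu$cpu siblings: "
    cat /sys/devices/system/cpu/cpu$cpu/topology/thread_siblings_list
done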
Update the boot record and reboot the compute node.
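A minimal sketch of this step, assuming a GRUB2-based host (the grub2-mkconfig output path differs between distributions, and some use update-grub instead):
Code Block:
# Regenerate the GRUB configuration so the hugepage settings take effect
grub2-mkconfig -o /boot/grub2/grub.cfg
# Reboot the compute node to apply the new kernel command line
reboot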
Configure the Nova Scheduler to use NUMA Topology and Aggregate Instance Extra Specs on Nova Controller Hosts:
...
Note:
You must ensure that the above settings persist across reboots.
...
...