...

  1. To retrieve the NUMA topology for the node, run the following command:

    Code Block
    # lscpu  | grep NUMA
    NUMA node(s):          2
    NUMA node0 CPU(s):     0-11,24-35
    NUMA node1 CPU(s):     12-23,36-47
    Note

    In this case, there are two Intel sockets with 12 cores each, configured for Hyper-Threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, etc. (the pairs are also known as thread siblings).
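
    The pairing can be confirmed from sysfs (e.g., `cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list` prints `0,24` on this topology), or reproduced from the fixed offset between siblings. A minimal sketch for the example topology above (the helper name `sibling_of` and the offset of 24 are illustrative, specific to this 2-socket/24-core layout):

```shell
# On this topology the HT sibling of CPU N is CPU N + 24
# (24 physical cores across both sockets).
CORES=24
sibling_of() { echo "$(( $1 + CORES ))"; }

pair="0/$(sibling_of 0) 1/$(sibling_of 1) 12/$(sibling_of 12)"
echo "example sibling pairs: ${pair}"
```

    On a live host, prefer reading sysfs, since the sibling offset varies with CPU model and BIOS settings.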

  2. The following code must be added at the end of /etc/default/grub:

    Code Block
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256"
    Note

    The number of hugepages depends on how many VM instances will be created on this host, multiplied by the memory size of each instance. The hugepagesz value should be the maximum hugepage size supported by the kernel in use.
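
    As a worked example of that sizing rule (the VM count and per-VM memory below are hypothetical, not taken from this document):

```shell
# Hypothetical deployment: 16 guests at 16 GiB each.
VM_COUNT=16
VM_MEM_GIB=16

# With 1 GiB hugepages, the page count equals the total guest RAM in GiB.
HUGEPAGES=$((VM_COUNT * VM_MEM_GIB))
echo "hugepagesz=1G hugepages=${HUGEPAGES}"
```

    This reproduces the `hugepages=256` value used in the GRUB line above.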

  3. A pin set limits KVM to placing guests on a subset of the physical cores and thread siblings. Omitting some cores from the pin set ensures that there are dedicated cores for the OpenStack processes and applications. The pin set ensures that KVM guests never use more than one thread per core, leaving the other thread for shared KVM/OpenStack processes. This mechanism boosts the performance of non-threaded guest applications by allowing the host OS to schedule closely related host OS processes (e.g., virtio processes) on the same core as the guest OS. The following example builds on the CPU and NUMA topology shown in Step 1 (above):

    • For Hyper-Threading Host: Add the CPU pin set list to vcpu_pin_set in default section of /etc/nova/nova.conf:

      Code Block
      vcpu_pin_set=2-11,14-23,26-35,38-47

      For compute nodes servicing VMs that can run on a hyper-threaded host, the CPU pin set includes all thread siblings except for the cores that are carved out and dedicated to the host OS. The resulting pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26-11/35 on NUMA node 0 and cores/threads 14/38-23/47 on NUMA node 1.
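
    The pin-set string above can be derived mechanically from the guest physical-core ranges and the sibling offset. A sketch under the Step 1 topology (variable names are illustrative; adjust the reserved cores and offset for other hosts):

```shell
# Physical-core ranges left for guests after reserving 0/1 (node 0)
# and 12/13 (node 1) for the host OS.
guest="2-11 14-23"
off=24                      # HT sibling offset from Step 1

phys=""; sib=""
for r in $guest; do
  lo=${r%-*}; hi=${r#*-}
  phys="${phys}${phys:+,}${r}"                       # physical-core ranges
  sib="${sib}${sib:+,}$((lo+off))-$((hi+off))"       # matching sibling ranges
done
echo "vcpu_pin_set=${phys},${sib}"
```

    The output matches the `vcpu_pin_set=2-11,14-23,26-35,38-47` value shown above.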

  4. Update the boot record and reboot the compute node.
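
    On RHEL/CentOS-style hosts (consistent with the tooling shown above) this is typically done with grub2-mkconfig; Debian/Ubuntu hosts use update-grub instead, and UEFI systems write to a different output path. A hedged sketch, not a verbatim procedure from this document:

```shell
# Regenerate the GRUB configuration so the new kernel command line
# takes effect (BIOS boot path shown), then reboot.
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

# After reboot, confirm the hugepages were allocated:
grep HugePages_Total /proc/meminfo
```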

  5. Configure the Nova Scheduler to use NUMA Topology and Aggregate Instance Extra Specs on Nova Controller Hosts:
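
    The exact configuration for this step is release-specific and is covered below; as an illustrative sketch only, Nova releases of this vintage enable scheduler filters in the [DEFAULT] section of /etc/nova/nova.conf on the controller, and the two filters named in this step would appear in that list (the surrounding filter names here are assumptions, not taken from this document):

```ini
[DEFAULT]
# Illustrative filter list; verify against your release's defaults.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
```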

...

Note

You must ensure that the above settings persist across reboots.

...

Removal of CPU and Memory Over Commit

...