Panel

In this section:

Table of Contents
maxLevel4



This section describes the best possible performance and scale for a given virtual machine resource profile.

Purpose

Real-time applications have stringent requirements with respect to jitter, latency, quality of service, and packet loss. Migrating real-time applications to an all-software environment requires deterministic performance and deterministic response to failures from the scheduler of the hypervisor and the host operating system (OS).

Some fine-tuning can be done to achieve maximum scale and reliable performance for the SBC SWe and ancillary applications. This page defines the areas that can be fine-tuned in OpenStack/KVM environments.

The Ribbon SBC SWe requires a reservation of CPU, memory, and hard disk resources in virtual machines, in addition to implementing certain performance tuning parameters, for any production deployment that supports more than 100 concurrent sessions.

Info
titleNote

The OpenStack infrastructure supports I/O (PCIe) based NUMA scheduling, as referenced here.

Recommendations

Excerpt Include
KVM Performance Tuning
KVM Performance Tuning
nopaneltrue

Recommended BIOS Settings

Ribbon recommends applying the BIOS settings in the table below to all Nova compute hosts running the following Ribbon VMs for optimum performance:

  • S-SBC
  • M-SBC
  • T-SBC
  • I-SBC



Table: Recommended BIOS Settings

BIOS Parameter | Setting | Comments
CPU power management | Balanced | Ribbon recommends Maximum Performance
Intel hyper-threading | Enabled |
Intel turbo boost | Enabled |
VT-d | Enabled |
Intel VT-x (virtualization technology) | Enabled | For hardware virtualization

All server BIOS settings are different, but in general the following guidelines apply:


Table: BIOS Setting Recommendations for HP DL380p Gen8 Server

BIOS Parameter | Recommended Setting | Default Value
HP Power Profile | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Optimal Cooling | Optimal Cooling
HW Prefetchers | Disabled | Enabled


CPU Pinning Overview

Apply the settings below to all Nova compute hosts in the pinned host aggregate.

Table: Nova Compute Hosts

Applies to | Configuration
S-SBC | 3.b
M-SBC | 3.b
T-SBC | 3.b
I-SBC | 3.b


From the hypervisor's perspective, a virtual machine appears as a single process that can be scheduled on any of the available CPUs. By design, a hypervisor may schedule a VM on a different processor from one clock cycle to the next. While this is acceptable in environments where the hypervisor is allowed to over-commit, it contradicts the requirements of real-time applications. Hence, Ribbon requires CPU pinning to prevent applications from sharing a core.

Figure: CPU with Unpinned Applications

Figure: CPU with Pinned Applications


By default, virtual CPUs are not pinned to host CPUs, but Ribbon requires CPU pinning to meet the requirements of real-time media traffic. The primary reason for pinning is to prevent other workloads, including those of the host OS, from causing significant jitter in media processing. Without pinning, significant message queuing delays and buffer overflows can also occur at higher call rates.

Instances with pinned CPUs cannot use the CPUs of another pinned instance. This prevents resource contention and improves processor cache efficiency by reserving physical cores. Host aggregate filters or availability zones can be used to select compute hosts for pinned and non-pinned instances. OpenStack clearly states that pinned instances must be separated from unpinned instances, as the latter will not respect the resourcing requirements of the former.

To enable CPU pinning, execute the following steps on every compute host:

  1. To retrieve the NUMA topology for the node, execute the following command:

    Code Block
    # lscpu  | grep NUMA
    NUMA node(s):          2
    NUMA node0 CPU(s):     0-11,24-35
    NUMA node1 CPU(s):     12-23,36-47


    Tip
    titleTip

    In this case, there are two Intel sockets with 12 cores each, configured for hyper-threading. CPUs are paired on physical cores in the pattern 0/24, 1/25, and so on. (The pairs are also known as thread siblings.)
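
    The thread-sibling pairing can be confirmed from sysfs; a quick check such as the following (the CPU numbers are examples taken from the topology above) shows which logical CPUs share a physical core:

    Code Block
    # cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
    0,24
    # cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
    1,25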


  2. Add the following line at the end of /etc/default/grub:

    Code Block
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=256"


    Tip
    titleTip

    The number of hugepages depends on how many VM instances are created on this host, multiplied by the memory size of each instance. The hugepagesz value should be the maximum huge page size supported by the kernel in use.
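
    After the boot-record update and reboot described in the later steps, the huge page allocation can be verified from /proc/meminfo; a sketch, with illustrative output for the values above:

    Code Block
    # grep Huge /proc/meminfo
    HugePages_Total:     256
    HugePages_Free:      256
    Hugepagesize:    1048576 kB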


  3. The vcpu_pin_set property defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs). Omitting some cores from the pin set ensures that instance vCPUs do not occupy the cores that are reserved for OpenStack processes and other host applications. This ensures a clear segregation of host and guest processes, resulting in better performance of the guest instances.

    This mechanism also boosts the performance of non-threaded guest applications by allowing the host OS to schedule closely related host OS processes (e.g. virtio processes) on the same core as the guest OS.

    Info

    To realize better performance, do not configure isolcpus.

    The following example builds on the CPU and NUMA topology shown in step 1 (above):

    • For a hyper-threading host: Add the CPU pin set list to vcpu_pin_set in the default section of /etc/nova/nova.conf:

      Code Block
      vcpu_pin_set=2-11,14-23,26-35,38-47

    • For compute nodes servicing VMs that can run on hyper-threaded hosts, the CPU pin set includes all thread siblings except for the cores that are carved out and dedicated to the host OS. The resulting CPU pin set in the example dedicates cores/threads 0/24, 1/25 and 12/36, 13/37 to the host OS. VMs use cores/threads 2/26-11/35 on NUMA node 0, and cores/threads 14/38-23/47 on NUMA node 1.

  4. Update the boot record and reboot the compute node. For example:
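
    The exact command depends on the host distribution; for example, on a RHEL/CentOS-based compute host (an illustrative sketch, not the only method):

    Code Block
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # reboot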

  5. Configure the Nova scheduler to use NUMA topology and aggregate instance extra specs on the Nova controller hosts:

On each node where the OpenStack compute scheduler (openstack-nova-scheduler) runs, edit the nova.conf file located at /etc/nova/nova.conf. Add the AggregateInstanceExtraSpecsFilter and NUMATopologyFilter values to the list of scheduler_default_filters. These filters are used to segregate the compute nodes that can be used for CPU pinning from those that cannot, and to apply NUMA-aware scheduling rules when launching instances:

    scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

    In addition, to support SR-IOV, enable the PciPassthroughFilter and restart the openstack-nova-scheduler service.

    Code Block
    systemctl restart openstack-nova-scheduler.service

    With CPU pinning enabled, Nova must be configured to use it. See the section below for a method that uses a combination of host aggregates and Nova flavor keys.


Pagebreak

Anchor
model
model
CPU Model Setting

Apply the following settings to all Nova compute hosts where Ribbon VMs are installed.

Applies to:
EMS
PSX-M
PSX-Replica
S-SBC
M-SBC
T-SBC

The CPU model defines the CPU flags and CPU architecture that are exposed from the host processor to the guest.

Non-S/M/T/I-SBC Instances

Ribbon supports either host-passthrough or host-model for non-S/M/T/I-SBC instances.

S/M/T/I-SBC Instances

Modify the nova.conf file located at /etc/nova/nova.conf. Ribbon recommends setting the CPU mode to host-model for SBC instances.

This setting is defined in /etc/nova/nova.conf:

[libvirt]
virt_type = kvm 

cpu_mode = host-model

This change is made in /etc/nova/nova-compute.conf:

[libvirt]
virt_type = kvm
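
Once an SBC instance is running on the host, the CPU mode actually applied to the guest can be cross-checked from the libvirt domain XML; a verification sketch (the instance name below is an example):

Code Block
# virsh list --all
# virsh dumpxml instance-0000000a | grep -A2 "<cpu"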

Anchor
overcommit
overcommit

CPU Frequency Setting in the Compute Host

Check the current configuration of the CPU frequency setting using the following command on the host system.

Code Block
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

The CPU frequency setting must be set to performance to improve VNF performance. Use the following command on the host system:

Code Block
# echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor


Info
titleNote

Ensure that the above settings persist across reboots.
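
One possible way to keep the governor setting persistent is a small systemd unit that reapplies it at boot; this is an illustrative sketch (the unit name and approach are examples, not a Ribbon-mandated method):

Code Block
# cat /etc/systemd/system/cpu-performance.service
[Unit]
Description=Set CPU scaling governor to performance
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable --now cpu-performance.service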



Removal of CPU and Memory Over Commit

Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:

  • S-SBC
  • M-SBC
  • T-SBC
  • I-SBC


The default over-commit settings are 1:16 for CPU and 1:1.5 for memory. Modify the nova.conf file located at /etc/nova/nova.conf, and change the default settings of cpu_allocation_ratio and ram_allocation_ratio to 1:1 for resource reservation.

Code Block
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
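
After changing the allocation ratios, the Nova compute service must be restarted to pick up the new values; for example, on a systemd-managed host (the service name may vary by distribution):

Code Block
# systemctl restart openstack-nova-compute.service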

Adjusting the Tx Queue Length of a Tap Device

Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:

  • S-SBC
  • M-SBC
  • T-SBC
  • I-SBC


While using 1:1 HA mode with virtual NICs (virtio), OpenStack creates tap devices for each port on the guest VM. The Tx queue length of the tap devices, which defines the queue between OVS and the VM instance, is set to 500 by default. A queue length of 500 is too low and increases the possibility of packet drops at the tap device. Set the Tx queue length to a higher value to improve performance and reliability; use a value that matches your performance requirements.

The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.

Code Block
Modify /etc/udev/rules.d/60-tap.rules and add the KERNEL rule shown below:
# vim /etc/udev/rules.d/60-tap.rules
KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 1000"
# udevadm control --reload-rules
Apply the rules to interfaces that have already been created:
# udevadm trigger --attr-match=subsystem=net
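
To confirm the new queue length on an existing tap device, the value can be read back from sysfs; a verification sketch (the tap interface name below is an example):

Code Block
# cat /sys/class/net/tap0a1b2c3d-4e/tx_queue_len
1000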

Kernel Same-page Merging (KSM) Settings

Apply the following settings to all Nova compute hosts where Ribbon VMs are installed:

  • S-SBC
  • M-SBC
  • T-SBC
  • I-SBC


Kernel same-page merging (KSM) is a technology that finds common memory pages inside a Linux system and merges the pages to save memory resources. If one of the copies is updated, a new copy is created, so the function is transparent to the processes on the system. For hypervisors, KSM is highly beneficial when multiple guests run the same level of the operating system. However, the scanning process introduces overhead that may cause applications to run slower. The SBC SWe requires that KSM be turned off.

The sample commands below are for Ubuntu 4.4; use the syntax that corresponds to your operating system.

Code Block
# echo 0 >/sys/kernel/mm/ksm/run
# echo "KSM_ENABLED=0" > /etc/default/qemu-kvm

Once KSM is turned off, verify that there is still sufficient memory on the hypervisor. When pages are no longer merged, memory usage may increase and lead to swapping, which negatively impacts performance.
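
A quick way to confirm that KSM is off and to check the remaining memory headroom on the hypervisor (a verification sketch; a value of 0 for ksm/run indicates KSM is disabled):

Code Block
# cat /sys/kernel/mm/ksm/run
0
# free -h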

Hyper-threading Support

Hyper-threading is designed to use idle resources on Intel processors. A physical core is split into two logical cores, creating parallel threads. Each logical core has its own architectural state. The actual performance gains from using hyper-threading depend on the amount of idle resources on the physical CPU. Hyper-threading is shown in the diagram below.

Hyper-threading on an SBC SWe instance has yet to show a quantifiable performance gain for a given number of cores, and Ribbon is in the process of assessing this on various call flows. Performance should never drop below the values obtained without hyper-threading for the same number of cores and could increase, but additional engineering work is needed to qualify that there are no negative impacts.
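
Whether hyper-threading is active on a host can be read from the CPU topology; a sketch with example output for the 2-socket, 12-core-per-socket host used earlier on this page:

Code Block
# lscpu | grep -E "^(Thread|Core|Socket)"
Thread(s) per core:    2
Core(s) per socket:    12
Socket(s):             2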

Figure: Hyper-threading Support


Ribbon VNF CPU Pinning and Hyper-threading Support

Hyper-threading should be enabled in the BIOS for all Ribbon VNF elements.

Table: VNF CPU Pinning and Hyper-threading Support

VNF | CPU-Pinning | Hyper-Threading Flavor Setting
S-SBC | Required | Required
M-SBC | Required | Required
T-SBC | Required | Required
I-SBC | Required | Required


Ribbon VNF Tested Configurations

Table: VNF Tested Configurations

VNF | CPU-Pinning (hw:cpu_policy=dedicated) | Hyper-Threading Flavor Setting | RAM* | Disk | Cores / vCPUs
S-SBC | Pinned | Yes | 128 GiB* | 100 GB | 16 / 32
M-SBC (3:1 instances) | Pinned | Yes | 32 GiB* | 100 GB | 8 / 16

*Memory values are rounded up to the next power of 2 to prevent memory fragmentation in the Nova compute scheduler.


Host-Aggregate Method for SMP VM Placement

A few methods exist to influence VM placement in OpenStack environments. The method described in this section segregates Nova compute nodes into discrete host aggregates and uses the Nova flavor key aggregate_instance_extra_specs so that specific flavors use specific host aggregates. For this to work, all flavors must specify a host aggregate. This is accomplished by first assigning all existing flavors to a "normal" host aggregate, then assigning only the Nova compute hosts configured for non-hyper-threading to a "Pin-Isolate" host aggregate.

Code Block
From the Openstack CLI, create the host aggregates and assign compute hosts:

% nova aggregate-create Active-Pin-Isolate
% nova aggregate-set-metadata Active-Pin-Isolate Active-Pin-Isolate=true
% nova aggregate-add-host Active-Pin-Isolate {first nova compute host in aggregate}
    {repeat for each compute host to be added to this aggregate}

% nova aggregate-create Active
% nova aggregate-set-metadata Active Active=true
% nova aggregate-add-host Active {first nova compute host in aggregate}
    {repeat for each compute host to be added to this aggregate}
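
To verify the aggregates, their member hosts, and their metadata, the same legacy nova CLI used above can be queried; a verification sketch:

Code Block
% nova aggregate-list
% nova aggregate-details Active-Pin-Isolate
% nova aggregate-details Active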



Info
titleNote

Ensure that all existing flavors in the entire stack specify the hyper-threaded aggregate by using the "aggregate_instance_extra_specs:Active"="true" metadata parameter. Otherwise, flavors can get scheduled on the hosts reserved for pinning, and the non-pinned VMs will not respect the pinned isolation.


From the Openstack CLI, assign all existing flavors to the non-pinned host aggregate:

Code Block
% for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; \
    do nova flavor-key ${FLAVOR} set \
        "aggregate_instance_extra_specs:Active"="true"; \
    done


Example Flavor Definitions

The flavor definitions listed below include the following extra specs:

  • hw:cpu_policy=dedicated: This setting enables CPU pinning.

  • hw:cpu_thread_policy=prefer: This setting allocates each vCPU on thread siblings of physical CPUs.

  • hw:numa_nodes: This setting defines how the host processor cores are spread over the host NUMA nodes. When set to 1, it ensures that the cores are not spread over more than one NUMA node, guaranteeing the performance benefit of a single NUMA node; otherwise Nova would be free to split the cores between the available NUMA nodes.

  • hw:cpu_max_sockets: This setting defines how KVM exposes the sockets and cores to the guest. Without this setting, KVM always exposes a socket for every core, with each socket having one core. This requires a mapping in the host virtualization layer to convert the topology, resulting in a measurable performance degradation. That overhead can be avoided by accurately matching the advertised cpu_sockets to the requested host numa_nodes. Using the *_max_* variant ensures that the value cannot be overridden in the image metadata supplied by tenant-level users.

SBC SWe Flavor Example

To create an M-SBC SWe flavor with 20 vCPUs, 32 GiB of RAM, and 100 GB of hard disk, enter the following Nova commands from the Openstack CLI.

Code Block
% nova flavor-create SBC-SK-CM-01P auto 32768 100 20
% nova flavor-key SBC-SK-CM-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key SBC-SK-CM-01P set hw:cpu_max_sockets=1
% nova flavor-key SBC-SK-CM-01P set hw:mem_page_size=1048576
% nova flavor-key SBC-SK-CM-01P set hw:numa_nodes=1
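
To review the resulting flavor and confirm its extra specs, the flavor can be displayed with the same CLI; a verification sketch:

Code Block
% nova flavor-show SBC-SK-CM-01P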

To create an S-SBC SWe flavor with 128 GiB of RAM and 100 GB of hard disk, spread over 2 NUMA nodes of 20 vCPUs each (that is, 40 vCPUs for the S-SBC), enter the following Nova commands from the Openstack CLI.

Code Block
% nova flavor-create SBC-SK-CS-01P auto 131072 100 40
% nova flavor-key SBC-SK-CS-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer
% nova flavor-key SBC-SK-CS-01P set hw:cpu_max_sockets=2
% nova flavor-key SBC-SK-CS-01P set hw:mem_page_size=1048576
% nova flavor-key SBC-SK-CS-01P set hw:numa_nodes=2

Regarding the numa_mempolicy setting: when the NUMA memory allocation policy is set to "strict", the kernel is forced to allocate memory only from the local NUMA node where the processes are scheduled. If memory on one of the NUMA nodes is exhausted for any reason, the kernel cannot allocate memory from another NUMA node even when memory is available on that node. With this in mind, a strict policy would have a negative impact on applications like the S-SBC; for this reason, numa_mempolicy=preferred is the preferred setting. This behavior is described in the link below:

https://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html

References


OVS-DPDK Virtio Interfaces - Performance Tuning Recommendations

  1. Follow the OpenStack-recommended performance settings for the host and guest: refer to VNF Performance Tuning for details.

  2. Make sure that physical network adapters, PMD threads, and pinned CPUs for the instance are all on the same NUMA node. This is mandatory for optimal performance.

  3. Set the queue size for virtio interfaces to 1024 by updating the Director template.

    1. NovaComputeExtraConfig: - nova::compute::libvirt::tx_queue_size: '"1024"'

    2. NovaComputeExtraConfig: - nova::compute::libvirt::rx_queue_size: '"1024"'


  4. Configure the following dpdk parameters in host ovs-dpdk:

    1. Make sure two pairs of Rx/Tx queues are configured for the host dpdk interfaces; this can be validated using the following command (see the example after this step for setting the queue count):

      ovs-vsctl get Interface dpdk0 options

      This needs to be done during ovs-dpdk bring-up. For background details, see http://docs.openvswitch.org/en/latest/howto/dpdk/
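
      If only one Rx queue is currently configured, the queue count on a DPDK port can be set with ovs-vsctl; a sketch using the dpdk0 port named above (typically applied during ovs-dpdk bring-up):

      ovs-vsctl set Interface dpdk0 options:n_rxq=2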


    2. Enable per-port memory, which means each port will use separate mem-pool for receiving packets, instead of using a default shared mem-pool:

      ovs-vsctl set Open_vSwitch . other_config:per-port-memory=true

    3. Configure 4096 MB of huge page memory on each socket:
        
      ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096


    4. Make sure to spawn the appropriate number of pmd threads so that each port/queue is serviced by a particular pmd thread. The pmd threads must be pinned to dedicated cores/hyper-threads that are on the same NUMA node as the network adapter and the guest, are isolated from the kernel, and are not used by the guest for any other purpose. Set pmd-cpu-mask accordingly.

      ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x40001004000100

      The example above runs pmd threads on hyper-threads 8, 26, 36, and 54, i.e. two physical cores (8/36 and 26/54 are sibling hyper-threads).

    5. Restart ovs-vswitchd after the changes and confirm that it is running:

      systemctl restart ovs-vswitchd
      systemctl status ovs-vswitchd

  5. The port and Rx queue assignment to pmd threads is crucial for optimal performance. Follow http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/ for more details. The affinity is a csv list of <queue_id>:<core_id> which needs to be set for each port. 

    ovs-vsctl set interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26" 

    ovs-vsctl set interface vhub89b3d58-4f other_config:pmd-rxq-affinity="0:36"

    ovs-vsctl set interface vhu6d3f050e-de other_config:pmd-rxq-affinity="1:54"

    In the example above, the pmd thread on core 8 will read queue 0 and pmd thread on core 26 will read queue 1 of dpdk0 interface.

    Alternatively, you can use the default assignment of ports/Rx queues to pmd threads and enable the auto-load-balance option so that OVS redistributes the Rx queues across pmd threads based on load.

    ovs-vsctl set open_vswitch . other_config:pmd-auto-lb="true"

    ovs-appctl dpif-netdev/pmd-rxq-rebalance


  6. For better performance, pin the emulator threads to dedicated CPUs (outside of the guest vCPUs) to avoid %steal in the guest. To achieve this, update the hw:emulator_threads_policy setting to isolate in the flavor definition, for example:
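
    For example, the policy can be applied with a Nova flavor key (the flavor name below is a placeholder):

      % nova flavor-key <flavor-name> set hw:emulator_threads_policy=isolate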

Troubleshooting:

  1. To check the port/Rx queue distribution among pmd threads, use the command below:

    ovs-appctl dpif-netdev/pmd-rxq-show

  2. To check the pmd thread statistics (actual CPU usage), use the command below and check the "processing cycles" and "idle cycles" values:

    ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show


  3. To check for packet drops on the host dpdk interfaces, use the command below and check the rx_dropped/tx_dropped counters:

    watch -n 1 'ovs-vsctl get interface dpdk0 statistics|sed -e "s/,/\n/g" -e "s/[\",\{,\}, ]//g" -e "s/=/ =\u21d2 /g"'

     

  4. For more details on troubleshooting performance issues and packet drops in an ovs-dpdk environment, refer to the following page:

    https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/ovs-dpdk_end_to_end_troubleshooting_guide/validating_an_ovs_dpdk_deployment#find_the_ovs_dpdk_port_physical_nic_mapping_configured_by_os_net_config

Benchmarking:

Setup details:

  • Platform: RHOSP13
  • Host OS: RHEL7.5
  • Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 1 Provider Network configured for Management Interface
  • 1 Provider Network configured for HA Interface
  • OVS+DPDK enabled for packet interfaces (pkt0 and pkt1)
  • 2 pairs of Rx/Tx queues on the host dpdk interfaces
  • 1 Rx/Tx queue in guest virtio interface
  • 4 pmd threads pinned to 4 hyper threads (i.e. using up 2 physical cores)


Guest Details:

  • SSBC - 8vcpu/18GB RAM/100GB HDD
  • MSBC - 10vcpu/20GB RAM/100 GB HDD
Info

Benchmarking has been tested in a D-SBC setup with up to 30k pass-through sessions using the recommendations described in this document. Beyond this number, additional cores for pmd threads may be required.

External References:

 http://docs.openvswitch.org/en/latest/howto/dpdk/

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/ovs-dpdk_end_to_end_troubleshooting_guide/index

http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/

Code Block
From the Openstack CLI, create flavors that will be placed on the non-hyper-threaded Nova compute hosts:

C-SBC Signaling
% nova flavor-create SBC-SK-CS-01P auto 65536 80 20
% nova flavor-key SBC-SK-CS-01P set aggregate_instance_extra_specs:Active-Pin-Isolate=true 
% nova flavor-key SBC-SK-CS-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=isolate 
% nova flavor-key SBC-SK-CS-01P set hw:mem_page_size=1048576
% nova flavor-key SBC-SK-CS-01P set hw:numa_nodes=2 hw:cpu_max_sockets=2
 
C-SBC Media
% nova flavor-create SBC-SK-CM-01P auto 32768 80 10
% nova flavor-key SBC-SK-CM-01P set aggregate_instance_extra_specs:Active-Pin-Isolate=true
% nova flavor-key SBC-SK-CM-01P set hw:cpu_policy=dedicated hw:cpu_thread_policy=isolate
% nova flavor-key SBC-SK-CM-01P set hw:numa_nodes=1 hw:cpu_max_sockets=1
% nova flavor-key SBC-SK-CM-01P set hw:mem_page_size=1048576


Pagebreak