...

  1. Follow the OpenStack recommended performance settings for host and guest; refer to VNF Performance Tuning for details.

  2. Make sure that the physical network adapters, the PMD threads, and the instance's pinned CPUs are all on the same NUMA node. This is mandatory for optimal performance.
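
    As a quick sanity check, the NUMA node of a physical adapter can be read from sysfs (the interface name eno1 below is only an example):

      cat /sys/class/net/eno1/device/numa_node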

  3. Set the queue size for virtio interfaces to 1024 by updating the Director template.

    Both keys go under NovaComputeExtraConfig in the template:

      NovaComputeExtraConfig:
        nova::compute::libvirt::tx_queue_size: '"1024"'
        nova::compute::libvirt::rx_queue_size: '"1024"'
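
    Once the overcloud is redeployed, the queue sizes can be verified in the instance's libvirt domain XML (the instance name below is only a placeholder):

      virsh dumpxml instance-00000001 | grep queue_size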


  4. Configure the following DPDK parameters in the host OVS-DPDK:

    1. Make sure two pairs of Rx/Tx queues are configured for the host DPDK interfaces, which can be validated using the following command:

      ovs-vsctl get Interface dpdk0 options

      This needs to be done during ovs-dpdk bring-up. For background details, see http://docs.openvswitch.org/en/latest/howto/dpdk/
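
      For reference, the Rx queue count on a physical DPDK port is controlled by the n_rxq option (shown here for dpdk0; OVS sizes the Tx queues automatically based on the number of PMD threads):

        ovs-vsctl set Interface dpdk0 options:n_rxq=2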


    2. Enable per-port memory, so that each port uses a separate mempool for receiving packets instead of the default shared mempool:

      ovs-vsctl set Open_vSwitch . other_config:per-port-memory=true
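
      The value can be read back to confirm it is in place:

        ovs-vsctl get Open_vSwitch . other_config:per-port-memory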

    3. Configure 4096 MB of huge page memory on each NUMA socket:
        
      ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
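
      To confirm the host has huge pages available on each NUMA node (the path below assumes 1 GB pages; use hugepages-2048kB for 2 MB pages):

        grep -i HugePages /proc/meminfo
        cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages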


    4. Spawn enough PMD threads that each port/queue can be serviced by its own PMD thread. Pin the PMD threads to dedicated cores/hyper-threads that are on the same NUMA node as the network adapter and the guest, are isolated from the kernel, and are not used by the guest for any other purpose. Set pmd-cpu-mask accordingly.

      ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x40001004000100

      The example mask above runs PMD threads on logical CPUs 8, 26, 36, and 54, i.e. two physical cores (8/36 and 26/54 are sibling hyper-threads).
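
      To confirm the PMD threads landed on the intended cores:

        ovs-appctl dpif-netdev/pmd-stats-show | grep 'pmd thread'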

    5. Restart ovs-vswitchd after the changes and confirm that it is active:

      systemctl restart ovs-vswitchd
      systemctl status ovs-vswitchd

  5. The assignment of ports and Rx queues to PMD threads is crucial for optimal performance. See http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/ for details. The affinity is a CSV list of <queue_id>:<core_id> pairs that needs to be set for each port.

    ovs-vsctl set interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26" 

    ovs-vsctl set interface vhub89b3d58-4f other_config:pmd-rxq-affinity="0:36"

    ovs-vsctl set interface vhu6d3f050e-de other_config:pmd-rxq-affinity="1:54"

    In the example above, the PMD thread on core 8 reads queue 0 of dpdk0 and the PMD thread on core 26 reads queue 1, while the two vhost-user queues are pinned to the threads on cores 36 and 54.
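
    The resulting port/queue-to-thread assignment can be inspected with:

      ovs-appctl dpif-netdev/pmd-rxq-show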

    Alternatively, you can keep the default assignment of ports/Rx queues to PMD threads and enable the auto-load-balance option, so that OVS redistributes Rx queues across PMD threads based on measured load:

    ovs-vsctl set open_vswitch . other_config:pmd-auto-lb="true"

    ovs-appctl dpif-netdev/pmd-rxq-rebalance

    The ovs-appctl command above triggers an immediate rebalance rather than waiting for the next load-based reassignment.


  6. For better performance, pin the emulator threads to dedicated host CPUs (outside of the guest's vCPUs) to avoid %steal inside the guest. To achieve this, set hw:emulator_threads_policy to isolate in the flavor.
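
    For example, assuming a flavor named vnf.large (the name is only a placeholder):

      openstack flavor set vnf.large --property hw:emulator_threads_policy=isolate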

...