DO NOT SHARE THESE DOCS WITH CUSTOMERS!
This is an LA release that will only be provided to a select number of PLM-sanctioned customers (PDFs only). Contact PLM for details.
Follow the OpenStack-recommended performance settings for the host and guest; refer to VNF Performance Tuning for details.
Make sure that the physical network adapters, Poll Mode Driver (PMD) threads, and pinned CPUs for the instance are all on the same NUMA node. This is mandatory for optimal performance.
PMD threads are the threads that do the heavy lifting for userspace switching. They perform tasks such as continuous polling of input ports for packets, classifying packets once received, and executing actions on the packets once they are classified.
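To confirm the alignment, check which NUMA node the physical NIC belongs to and how the host CPUs map to NUMA nodes. The PCI address below is a placeholder; take the real one from the dpdk-devargs option of the DPDK port:
ovs-vsctl get Interface dpdk0 options:dpdk-devargs
cat /sys/bus/pci/devices/0000:05:00.0/numa_node
lscpu | grep -i numa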
Increase the virtio Tx/Rx queue sizes to 1024 through the Nova compute hieradata:
NovaComputeExtraConfig:
  nova::compute::libvirt::tx_queue_size: '"1024"'
  nova::compute::libvirt::rx_queue_size: '"1024"'
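To verify that the larger queue sizes were applied to an instance, the libvirt domain XML on the compute node can be inspected (the domain name below is a placeholder):
virsh dumpxml instance-00000001 | grep queue_size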
OVS-DPDK bring-up:
To check the DPDK options currently configured on a physical DPDK port:
ovs-vsctl get Interface dpdk0 options
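In a TripleO deployment the DPDK ports are normally created by os-net-config, but for reference, a physical DPDK port is added to an OVS bridge roughly as follows (the bridge name, PCI address, and queue count below are placeholders, not values from this setup):
ovs-vsctl add-port br-link0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:05:00.0 options:n_rxq=2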
Enable per-port memory so that each port gets its own mempool instead of a shared one:
ovs-vsctl set Open_vSwitch . other_config:per-port-memory=true
Pre-allocate hugepage memory for DPDK on each NUMA socket (4096 MB per socket in this example):
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
Pin the PMD threads to specific host cores with a CPU bitmask:
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x40001004000100
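The pmd-cpu-mask value is a hexadecimal bitmap of host cores, where bit N selects core N. The mask above has bits 8, 26, 36, and 54 set, matching the cores used in the Rx-queue affinity examples below. A mask like this can be built with plain shell arithmetic, for example:
printf '0x%x\n' $(( (1<<8) | (1<<26) | (1<<36) | (1<<54) ))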
Check the ovs-vswitchd service and restart it so that the new settings take effect:
systemctl status ovs-vswitchd
systemctl restart ovs-vswitchd
Optionally, pin individual port Rx queues to specific PMD cores:
ovs-vsctl set interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26"
ovs-vsctl set interface vhub89b3d58-4f other_config:pmd-rxq-affinity="0:36"
ovs-vsctl set interface vhu6d3f050e-de other_config:pmd-rxq-affinity="1:54"
In the example above, the PMD thread on core 8 reads queue 0 and the PMD thread on core 26 reads queue 1 of the dpdk0 interface.
Alternatively, you can keep the default assignment of port Rx queues to PMD threads and enable the auto-load-balance option so that OVS reassigns Rx queues across the PMD threads based on their load:
ovs-vsctl set open_vswitch . other_config:pmd-auto-lb="true"
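Depending on the OVS release, the automatic rebalancing can be tuned further through other_config; for example, newer versions accept a rebalance interval in minutes (option availability varies by version, so treat this as an illustration only):
ovs-vsctl set open_vswitch . other_config:pmd-auto-lb-rebal-interval="5"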
Trigger an immediate rebalance of Rx queues across the PMD threads:
ovs-appctl dpif-netdev/pmd-rxq-rebalance
Show the current port/Rx-queue to PMD assignment:
ovs-appctl dpif-netdev/pmd-rxq-show
Measure PMD utilization over a 10-second window:
ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show
To check for packet drops on the host DPDK interfaces, use the command below and watch the rx_dropped/tx_dropped counters:
watch -n 1 'ovs-vsctl get interface dpdk0 statistics|sed -e "s/,/\n/g" -e "s/[\",\{,\}, ]//g" -e "s/=/ =\u21d2 /g"'
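The same statistics can be read for the vhost-user ports facing the guest (the interface name below is one of the examples used earlier); a growing tx_dropped counter on a vhu port typically means the guest is not draining its virtio queues fast enough:
ovs-vsctl get interface vhub89b3d58-4f statistics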
For additional details on troubleshooting performance issues and packet drops in an OVS-DPDK environment, refer to:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/ovs-dpdk_end_to_end_troubleshooting_guide/validating_an_ovs_dpdk_deployment#find_the_ovs_dpdk_port_physical_nic_mapping_configured_by_os_net_config
Setup details:
Guest Details:
Benchmarking was performed in a D-SBC setup with up to 30k pass-through sessions using the recommendations described in this document.
Higher session counts may require additional cores for PMD threads.
For more information on OVS-DPDK and PMD threads, see:
https://docs.openvswitch.org/en/latest/howto/dpdk/
https://docs.openvswitch.org/en/latest/topics/dpdk/pmd/