The information in this page was obtained from the open source Data Plane Development Kit (DPDK) website and formatted to meet Ribbon quality standards. This information is current as of November, 2020. For the latest information, please visit the DPDK website.
This document describes the required configuration of the Cisco® Systems Inc. VIC Ethernet NICs (also referred to below as vNICs). If you are running, or wish to run, the SBC SWe software application on Cisco UCS servers using Cisco VIC adapters, the following topics are relevant. This document guides you through the UCS-related configuration and explains the special handling that UCS requires.
The following configurations are required on top of the basic Cisco UCS configuration to ensure the SBC SWe comes up properly.
This document is intended for use by personnel with Cisco UCS configuration knowledge.
The maximum number of receive queues (RQs), work queues (WQs) and completion queues (CQs) are configurable on a per vNIC basis through the Cisco UCS Manager (CIMC or UCSM).
Configure these values as follows:
For example: if the application requires 3 Rx queues and 3 Tx queues, configure the vNIC to have at least 3 WQs, 6 RQs (3 pairs), and 6 CQs (3 for use by the WQs, plus 3 for use by the 3 pairs of RQs).
Likewise, the numbers of receive and transmit descriptors are configurable on a per-vNIC basis via the UCS Manager. Ensure they are greater than, or equal to, the nb_rx_desc and nb_tx_desc parameters expected for use in the calls to rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup(), respectively. An application requesting more descriptors than the configured size is limited to that size.
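The queue and descriptor requirements above can be sketched with DPDK's ethdev API. This is a minimal, hedged example, not the SBC SWe's actual initialization code; the descriptor counts and the 3 Rx / 3 Tx queue split are illustrative and must stay within the WQ/RQ ring sizes configured for the vNIC in CIMC/UCSM.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_RXQ 3          /* requires >= 6 RQs and matching CQs on the vNIC */
#define NB_TXQ 3          /* requires >= 3 WQs and matching CQs on the vNIC */
#define NB_RX_DESC 1024   /* must not exceed the vNIC RQ size set in UCSM */
#define NB_TX_DESC 1024   /* must not exceed the vNIC WQ size set in UCSM */

static int setup_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
    struct rte_eth_conf port_conf = {0};
    int ret;

    /* Request 3 Rx and 3 Tx queues, matching the vNIC resources above. */
    ret = rte_eth_dev_configure(port_id, NB_RXQ, NB_TXQ, &port_conf);
    if (ret < 0)
        return ret;

    for (uint16_t q = 0; q < NB_RXQ; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, NB_RX_DESC,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mb_pool);
        if (ret < 0)
            return ret;
    }
    for (uint16_t q = 0; q < NB_TXQ; q++) {
        ret = rte_eth_tx_queue_setup(port_id, q, NB_TX_DESC,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL);
        if (ret < 0)
            return ret;
    }
    return rte_eth_dev_start(port_id);
}
```

If NB_RX_DESC or NB_TX_DESC exceeds what the vNIC was provisioned with, the ENIC PMD silently caps the queue at the provisioned size, as described above.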
Unless there is a lack of resources due to creating many vNICs, it is recommended to set the WQ and RQ sizes to the maximum value. This gives the application the greatest amount of flexibility in its queue configuration.
Since the introduction of Rx scatter, for performance reasons, this PMD uses two RQs on the vNIC per receive queue in DPDK. One RQ holds descriptors for the start of a packet, and the second RQ holds the descriptors for the rest of the fragments of a packet. This means that the nb_rx_desc parameter to rte_eth_rx_queue_setup() can be greater than 4,096. The exact amount depends on the size of the mbufs being used for receives and on the MTU.
For example: If the mbuf size is 2,048 and the MTU is 9,000, then receiving a full size packet will take 5 descriptors, 1 from the start-of-packet queue, and 4 from the second queue. Assuming that the RQ size was set to the maximum of 4,096, then the application can specify up to 1,024 + 4,096 as the nb_rx_desc parameter to rte_eth_rx_queue_setup().
Configure at least one interrupt per vNIC interface in the UCS manager, regardless of the number of receive/transmit queues. The ENIC PMD uses this interrupt to get information about link status and errors in the fast path.
In addition to the interrupt for link status and errors, when using Rx queue interrupts, increase the number of configured interrupts so that there is at least one interrupt for each Rx queue. For example, if the app uses 3 Rx queues and wants to use per-queue interrupts, configure 4 (3 + 1) interrupts.
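In DPDK, per-queue Rx interrupts are requested through the port configuration. The fragment below is a hedged sketch of that setting (not code from this product); it assumes the vNIC was provisioned in UCSM with one interrupt per Rx queue plus the extra link-status/error interrupt described above.

```c
#include <rte_ethdev.h>

/* Request per-Rx-queue interrupts when configuring the port. */
static struct rte_eth_conf port_conf = {
    .intr_conf = {
        .rxq = 1,   /* enable Rx queue interrupt support */
    },
};

/* After rte_eth_dev_configure(), queue setup, and rte_eth_dev_start(),
 * an application arms the interrupt on a given queue before waiting:
 *
 *     rte_eth_dev_rx_intr_enable(port_id, queue_id);
 */
```

If fewer interrupts are provisioned on the vNIC than queues requesting them, per-queue interrupt setup fails even though polling-mode receive still works.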
In order to fully utilize RSS in DPDK, enable all RSS related settings in CIMC or UCSM. These include the following items listed under Receive Side Scaling: TCP, IPv4, TCP-IPv4, IPv6, TCP-IPv6, IPv6 Extension, TCP-IPv6 Extension.
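On the DPDK side, the matching step is to request those hash types in the port's RSS configuration. The fragment below is a sketch using the ETH_RSS_* flag names from DPDK 20.11-era headers (the era this page documents); the exact set should mirror whatever was enabled under Receive Side Scaling in CIMC/UCSM.

```c
#include <rte_ethdev.h>

/* RSS hash types corresponding to the CIMC/UCSM settings listed above:
 * TCP, IPv4, TCP-IPv4, IPv6, TCP-IPv6, IPv6 Extension, TCP-IPv6 Extension. */
static struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_hf = ETH_RSS_IPV4 |
                      ETH_RSS_NONFRAG_IPV4_TCP |
                      ETH_RSS_IPV6 |
                      ETH_RSS_NONFRAG_IPV6_TCP |
                      ETH_RSS_IPV6_EX |
                      ETH_RSS_IPV6_TCP_EX,
        },
    },
};
```

Hash types requested here but disabled in CIMC/UCSM will not take effect, so the two configurations should be kept in sync.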
SR-IOV mode is only supported in a KVM hypervisor.
UCS blade servers configured with dynamic vNIC connection policies in UCSM are capable of supporting SR-IOV. SR-IOV virtual functions (VFs) are specialized vNICs, distinct from regular Ethernet vNICs. These VFs can be directly assigned to virtual machines (VMs) as ‘passthrough’ devices.
In UCS, SR-IOV VFs require the use of the Cisco Virtual Machine Fabric Extender (VM-FEX), which gives the VM a dedicated interface on the Fabric Interconnect (FI). Layer 2 switching is done at the FI. This may eliminate the requirement for software switching on the host to route intra-host VM traffic.
Please refer to the Cisco page for information on configuring SR-IOV adapter policies and port profiles using UCSM. Once the policies are in place and the host OS is rebooted, VFs are visible on the host, similar to the example below:
# lspci | grep Cisco | grep Ethernet
0d:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
0d:00.1 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.2 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.3 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.4 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.5 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.6 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
0d:00.7 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
Enable the Intel IOMMU on the host, install KVM and libvirt, and reboot again, as required. Then, using libvirt, create a VM instance with an assigned device. Below is an example interface block (part of the domain configuration XML) that adds the host VF 0d:00:01 to the VM. The attribute profileid='pp-vlan-25' indicates the port profile that has been configured in UCSM.
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:ac:ff:b6'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x0d' slot='0x00' function='0x1'/>
  </source>
  <virtualport type='802.1Qbh'>
    <parameters profileid='pp-vlan-25'/>
  </virtualport>
</interface>
Alternatively, you can configure this in a separate file using the network keyword. These methods are described in the libvirt documentation for Network XML format.
When the VM instance is started, libvirt will bind the host VF to vfio, complete provisioning on the FI and bring up the link.
It is not possible to use a VF directly from the host because it is not fully provisioned until libvirt brings up the VM that it is assigned to.
In the VM instance, the VF is now visible (VF 00:04.0 is seen on the VM instance and is available for binding to a DPDK driver).
# lspci | grep Ether
00:04.0 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
Follow the normal DPDK install procedure to bind the VF to either igb_uio or vfio in non-IOMMU mode.
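As an illustration of that step, the commands below bind the VF to vfio-pci with DPDK's usertools script. This is a sketch, not product-specific procedure: the PCI address 00:04.0 is the one from the lspci output above, and the path to dpdk-devbind.py depends on where DPDK is installed.

```shell
# Load the vfio-pci kernel module (use igb_uio instead if preferred).
modprobe vfio-pci

# Bind the VF seen inside the VM to vfio-pci.
./usertools/dpdk-devbind.py --bind=vfio-pci 00:04.0

# Confirm the binding took effect.
./usertools/dpdk-devbind.py --status
```

In a VM without a virtual IOMMU, vfio's no-IOMMU mode (or igb_uio) is typically required, as noted above.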
Pass-through does not require SR-IOV. If VM-FEX is not desired, create as many regular vNICs as necessary and assign them to VMs as pass-through devices. Since these vNICs are not SR-IOV VFs, using them as pass-through devices does not require libvirt, port profiles, and VM-FEX.