SR-IOV enables a single root function (for example, a single Ethernet port) to appear as multiple, separate, physical devices. A physical device with SR-IOV capabilities can be configured to appear in the PCI configuration space as multiple functions, each with its own configuration space, complete with Base Address Registers (BARs).
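
A quick way to confirm that a device exposes SR-IOV is to inspect its PCI capability list on the host. This is an optional sketch, assuming the Physical Function sits at PCI address 01:00.0, as in the Intel I350 example used later in this section.

Code Block
# Show the SR-IOV capability of the device at 01:00.0 (requires root)
lspci -s 01:00.0 -vvv | grep -A 3 "SR-IOV"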

SR-IOV uses two PCI function types:
  • Physical Functions (PFs) are full PCIe devices that include the SR-IOV capabilities. Physical Functions are discovered, managed, and configured as normal PCI devices. Physical Functions configure and manage the SR-IOV functionality by assigning Virtual Functions.
  • Virtual Functions (VFs) are simple PCIe functions that only process I/O. Each Virtual Function is derived from a Physical Function. The number of Virtual Functions a device may have is limited by the device hardware. A single Ethernet port, the Physical Device, may map to many Virtual Functions that can be shared with guests.

...

Each Virtual Function can only be mapped once, as Virtual Functions require real hardware resources. A guest can have multiple Virtual Functions. A Virtual Function appears to an operating system as a network card, in the same way as a normal network card would.

The SR-IOV drivers are implemented in the kernel. The core implementation is contained in the PCI subsystem, but there must also be driver support for both the Physical Function (PF) and Virtual Function (VF) devices. With an SR-IOV capable device, one can allocate VFs from a PF. The VFs appear as PCI devices which are backed on the physical PCI device by resources such as queues and register sets, as the sketch below illustrates.
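
Once Virtual Functions have been allocated (as described in the Procedure below), this backing relationship is visible in sysfs on the host. This is an illustrative sketch, assuming eth3 is the SR-IOV capable interface used in the examples that follow.

Code Block
# Each virtfn* symlink points to the PCI device of one Virtual Function
ls -l /sys/class/net/eth3/device/virtfn*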


Info

For optimal performance, refer to the Performance Tuning of VMs page before proceeding with the information in this section.


Prerequisites

  • Install Linux Vanilla flavor 7.2.
  • Install an SR-IOV supported PCI card.
  • Enable VT-d / IOMMU in the BIOS.
  • Configure the Linux Vanilla flavor kernel to support IOMMU.

To configure the kernel to support IOMMU, perform the following steps:
  1. Log on to the host as the root user.
  2. Locate the GRUB defaults file, which is /etc/default/grub or /etc/sysconfig/grub depending on the distribution. In this example, the file is /etc/default/grub.

    Code Block
    cat /etc/default/grub
    cat /etc/sysconfig/grub


  3. Edit the /etc/default/grub file.

    Code Block
    vi /etc/default/grub


  4. Add intel_iommu=on at the end of the GRUB_CMDLINE_LINUX line, as shown in the following example; the GRUB configuration file is regenerated in the next step.

    Code Block
    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_DEFAULT=saved
    GRUB_DISABLE_SUBMENU=true
    GRUB_TERMINAL_OUTPUT="console"
    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap  rhgb quiet intel_iommu=on"
    GRUB_DISABLE_RECOVERY="true"


  5. To generate an updated GRUB configuration file, choose one of the following options depending on the type of host server:
    1. For BIOS-based host servers (which support partitions smaller than 2 TB), run the following command:

      Code Block
      grub2-mkconfig -o /boot/grub2/grub.cfg

      OR

    2. For UEFI-based machines (which support partitions larger than 2 TB), run the following command:

      Code Block
      grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg


    In either case, the output is similar to the following:

    Code Block
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
    Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
    Found linux image: /boot/vmlinuz-0-rescue-5766a48bb7bf4084832185345c6dcaa7
    Found initrd image: /boot/initramfs-0-rescue-5766a48bb7bf4084832185345c6dcaa7.img


  6. Reboot the system.
  7. To verify that intel_iommu=on was added to the kernel command line, run the following command (an additional IOMMU check follows the output):

    Code Block
    cat /proc/cmdline
    
    BOOT_IMAGE=/vmlinuz-3.10.0-514.el7.x86_64 
    root=UUID=9cfce69c-8519-4440-bc06-5a6e4f7f2248 ro crashkernel=auto rhgb 
    quiet LANG=en_US.UTF-8 intel_iommu=on
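
    Optionally, you can also confirm that the kernel actually initialized the IOMMU. This is a hedged additional check; the exact message wording varies by kernel version.

    Code Block
    dmesg | grep -i -e DMAR -e IOMMU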



Procedure

  1. Log on to the host as the root user.

  2. To determine the maximum number of Virtual Functions a Physical Function can support, run the following command (a companion check follows the code block):

    Code Block
    cat /sys/class/net/eth3/device/sriov_totalvfs
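
    To check how many Virtual Functions are currently allocated (as opposed to the supported maximum), the same sysfs directory exposes sriov_numvfs. This is an optional companion check using the example interface eth3:

    Code Block
    cat /sys/class/net/eth3/device/sriov_numvfs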


  3. On the host machine (compute node), create the required number of Virtual Functions through the PCI sysfs interface by running the following command.
    In this example, eight Virtual Functions are created; choose the total number based on your requirements. The command is appended to /etc/rc.local so that the Virtual Functions are recreated at every boot (to apply the setting immediately, see the sketch after the code block).

    Code Block
    echo "echo '8' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local 


  4. Run the following command to make the rc.local script executable:

    Code Block
    chmod +x /etc/rc.d/rc.local
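
    On systemd-based hosts (such as RHEL/CentOS 7), the executable bit is what allows the rc-local service to run this script at boot. As an optional check, assuming a systemd distribution, you can inspect the service state:

    Code Block
    systemctl status rc-local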


  5. Reboot the system.

  6. To verify that the devices exist and to prepare them for attachment to a VM:

    1. To list the newly added Virtual Functions attached to the network device, run the following command (a filtered variant follows the output):
      Example: Intel I350

      Code Block
      lspci | grep I350
      
      01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.4 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.5 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.6 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.7 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:10.0 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.1 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.2 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.3 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.4 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.5 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.6 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.7 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
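
      If the listing is long, you can narrow it to Virtual Functions only. This is an optional convenience variant of the same check:

      Code Block
      lspci | grep -i "Virtual Function"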


    2. To filter the devices on PCI bus 01 (the Intel I350 in this example) from the list of available host devices, run the following command:

      Code Block
      virsh nodedev-list | grep 01
      pci_0000_01_00_0
      pci_0000_01_00_1
      pci_0000_01_00_2
      pci_0000_01_00_3
      pci_0000_01_00_4
      pci_0000_01_00_5
      pci_0000_01_00_6
      pci_0000_01_00_7
      pci_0000_01_10_0
      pci_0000_01_10_1
      pci_0000_01_10_2
      pci_0000_01_10_3
      pci_0000_01_10_4
      pci_0000_01_10_5
      pci_0000_01_10_6
      pci_0000_01_10_7


  7. To get device details:

    Info

    Follow steps 6 and 7 for other packet interfaces as well.


    1. Here, pci_0000_01_00_0 is one of the Physical Functions, and pci_0000_01_10_0 is the first corresponding Virtual Function for that Physical Function.

    2. To get detailed output for these devices, run the following command for the Virtual Function (a sketch for the Physical Function follows the output):


      Code Block
      virsh nodedev-dumpxml pci_0000_01_10_0
      <device>
        <name>pci_0000_01_10_0</name>
        <path>/sys/devices/pci0000:00/0000:00:03.0/0000:01:10.0</path>
        <parent>pci_0000_00_03_0</parent>
        <driver>
          <name>igbvf</name>
        </driver>
        <capability type='pci'>
          <domain>0</domain>
          <bus>1</bus>
          <slot>16</slot>
          <function>0</function>
          <product id='0x1520'>I350 Ethernet Controller Virtual Function</product>
          <vendor id='0x8086'>Intel Corporation</vendor>
          <capability type='phys_function'>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
          </capability>
          <iommuGroup number='94'>
            <address domain='0x0000' bus='0x01' slot='0x10' function='0x2'/>
          </iommuGroup>
          <numa node='0'/>
          <pci-express>
            <link validity='cap' port='0' speed='5' width='4'/>
            <link validity='sta' width='0'/>
          </pci-express>
        </capability>
      </device>
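
      To view the mapping from the Physical Function side, you can dump the corresponding PF device as well; its output includes a capability element of type virt_functions listing the addresses of all associated Virtual Functions (output omitted here):

      Code Block
      virsh nodedev-dumpxml pci_0000_01_00_0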



      The following example adds the Virtual Function pci_0000_01_10_0 to the virtual machine. Note that the bus, slot, and function parameters of the Virtual Function are required for adding the device. Copy these parameters into a temporary XML file, such as /tmp/new-interface.xml (an equivalent hexadecimal form follows the code block).

      Code Block
      <interface type='hostdev' managed='yes'>
       <source>
        <address type='pci' domain='0' bus='1' slot='16' function='0'/>
       </source>
      </interface>
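
      libvirt accepts these address attributes in decimal (as above) or in hexadecimal with a 0x prefix, which is the notation the nodedev-dumpxml output uses. The following sketch defines the same interface in hexadecimal form:

      Code Block
      <interface type='hostdev' managed='yes'>
       <source>
        <address type='pci' domain='0x0000' bus='0x01' slot='0x10' function='0x0'/>
       </source>
      </interface>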


    3. Before adding the Virtual Functions to the virtual machine, detach them from the host machine:

      Code Block
      virsh nodedev-detach pci_0000_01_10_0
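
      If you later need to return a Virtual Function to the host (for example, after removing it from the guest), the reverse operation is available. This is an optional sketch:

      Code Block
      virsh nodedev-reattach pci_0000_01_10_0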


  8. Before you proceed with the next step, create the virtual machine. For details, refer to Creating a New SBC SWe Instance on KVM Hypervisor.
  9. To add the Virtual Function to the virtual machine, run the following command. This attaches the new device immediately and saves it for subsequent guest restarts.

    Info

    MyGuest indicates the virtual machine name.


    Code Block
    virsh attach-device [MyGuest] /tmp/new-interface.xml --config


    Info

    Using the --config option ensures the new device is available after future guest restarts.
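
    To confirm that the interface was recorded in the guest definition, you can inspect the guest XML. This is an optional check, where [MyGuest] is the virtual machine name as above:

    Code Block
    virsh dumpxml [MyGuest] | grep -A 4 "hostdev"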

...