

The SR-IOV drivers are implemented in the kernel. The core implementation is contained in the PCI subsystem, but there must also be driver support for both the Physical Function (PF) and Virtual Function (VF) devices. With an SR-IOV-capable device, you can allocate VFs from a PF. The VFs appear as PCI devices that are backed on the physical PCI device by resources such as queues and register sets.
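
As a quick sanity check, you can confirm that a device advertises the SR-IOV capability with lspci. This is an optional, illustrative command; the PCI address 01:00.0 is only an example and should be replaced with your NIC's address.

Code Block
# Show the device's PCI capabilities and keep only the SR-IOV section
lspci -s 01:00.0 -vvv | grep -A 5 "Single Root I/O Virtualization"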

 

Note

For optimal performance, refer to the Performance Tuning of VMs page before proceeding with the information in this section.

 

Prerequisites

  • Install Linux Vanilla flavor 7.2

  • Install an SR-IOV supported PCI card
  • Enable VT-d / IOMMU in BIOS
  • Configure the Linux Vanilla flavor kernel to support IOMMU.

    1. Log on to the Host IP as root user.
    2. Locate the GRUB configuration file, which is found in /etc/default/grub or /etc/sysconfig/grub. In this example, the file is /etc/default/grub.

      Code Block
      cat /etc/default/grub
      cat /etc/sysconfig/grub
    3. Modify the GRUB configuration file.

      Code Block
      vi /etc/default/grub
    4. Add intel_iommu=on at the end of the GRUB_CMDLINE_LINUX entry and save the file.

      Code Block
      GRUB_TIMEOUT=5
      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
      GRUB_DEFAULT=saved
      GRUB_DISABLE_SUBMENU=true
      GRUB_TERMINAL_OUTPUT="console"
      GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap  rhgb quiet intel_iommu=on"
      GRUB_DISABLE_RECOVERY="true"
    5. To generate an updated GRUB configuration file, execute the following command:

      Code Block
      grub2-mkconfig -o /boot/grub2/grub.cfg
      Code Block
      Generating grub configuration file ...
      Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
      Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
      Found linux image: /boot/vmlinuz-0-rescue-5766a48bb7bf4084832185345c6dcaa7
      Found initrd image: /boot/initramfs-0-rescue-5766a48bb7bf4084832185345c6dcaa7.img
    6. Reboot the system.
    7. To verify that intel_iommu=on has been added to the kernel command line, execute the following command:

      Code Block
      cat /proc/cmdline
      
      BOOT_IMAGE=/vmlinuz-3.10.0-514.el7.x86_64 
      root=UUID=9cfce69c-8519-4440-bc06-5a6e4f7f2248 ro crashkernel=auto rhgb 
      quiet LANG=en_US.UTF-8 intel_iommu=on
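
      Optionally, you can also confirm that the kernel initialized the IOMMU. The exact messages vary by kernel version and hardware, so treat this as an additional sanity check rather than a required step.

      Code Block
      dmesg | grep -i -e DMAR -e IOMMU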

...

  1. Log on to the Host IP as root user.

  2. To determine the maximum number of virtual functions a physical function can support, execute the following command (eth3 is the physical function's network interface in this example):

    Code Block
    cat /sys/class/net/eth3/device/sriov_totalvfs
  3. On the host machine (compute node), create the required number of virtual functions through the PCI sysfs interface.
    In this example, eight virtual functions are created; base the total number of virtual functions on your requirements. Execute the following command:

    Code Block
    echo "echo '8' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local 
  4. Reboot the system.

  5. To verify that the devices exist and to attach them to a VM:

    1. To list the newly added Virtual Functions attached to the network device, execute the following command:
      Example: Intel I350

      Code Block
      lspci | grep I350
      
      01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.4 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.5 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.6 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:00.7 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      01:10.0 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.1 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.2 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.3 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.4 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.5 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.6 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
      01:10.7 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
    2. To filter the Intel I350 devices from the list of available host devices, grep for the bus number 01 and execute the following command:

      Code Block
      virsh nodedev-list | grep 01
      pci_0000_01_00_0
      pci_0000_01_00_1
      pci_0000_01_00_2
      pci_0000_01_00_3
      pci_0000_01_00_4
      pci_0000_01_00_5
      pci_0000_01_00_6
      pci_0000_01_00_7
      pci_0000_01_10_0
      pci_0000_01_10_1
      pci_0000_01_10_2
      pci_0000_01_10_3
      pci_0000_01_10_4
      pci_0000_01_10_5
      pci_0000_01_10_6
      pci_0000_01_10_7
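
      Optionally, to cross-check the mapping between these node device names and the PCI addresses reported by lspci, list the Virtual Functions with their full, domain-qualified addresses; for example, 0000:01:10.0 corresponds to pci_0000_01_10_0.

      Code Block
      lspci -D | grep "Virtual Function"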
  6. To get device details:

    Note

    Follow steps 6 and 7 for the other packet interfaces as well.

     

    1. The device pci_0000_01_00_0 is one of the Physical Functions, and pci_0000_01_10_0 is the first corresponding Virtual Function for that Physical Function.

    2. To get detailed output for both devices, execute the following command:

       

      Code Block
      virsh nodedev-dumpxml pci_0000_01_10_0
      <device>
        <name>pci_0000_01_10_0</name>
        <path>/sys/devices/pci0000:00/0000:00:03.0/0000:01:10.0</path>
        <parent>pci_0000_00_03_0</parent>
        <driver>
          <name>igbvf</name>
        </driver>
        <capability type='pci'>
          <domain>0</domain>
          <bus>1</bus>
          <slot>16</slot>
          <function>0</function>
          <product id='0x1520'>I350 Ethernet Controller Virtual Function</product>
          <vendor id='0x8086'>Intel Corporation</vendor>
          <capability type='phys_function'>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
          </capability>
          <iommuGroup number='94'>
            <address domain='0x0000' bus='0x01' slot='0x10' function='0x2'/>
          </iommuGroup>
          <numa node='0'/>
          <pci-express>
            <link validity='cap' port='0' speed='5' width='4'/>
            <link validity='sta' width='0'/>
          </pci-express>
        </capability>
      </device>

       

      This example adds the Virtual Function pci_0000_01_10_0 to the virtual machine. Note that the bus, slot, and function parameters of the Virtual Function are required for adding the device. Copy these parameters into a temporary XML file, such as /tmp/new-interface.xml.

      Code Block
      <interface type='hostdev' managed='yes'>
       <source>
        <address type='pci' domain='0' bus='1' slot='16' function='0'/>
       </source>
      </interface>
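
      The address values above are the decimal bus, slot, and function taken from the nodedev-dumpxml output (slot 16 is 0x10). libvirt also accepts the hexadecimal notation, so the following equivalent definition should work as well; it is shown only as an alternative form.

      Code Block
      <interface type='hostdev' managed='yes'>
       <source>
        <address type='pci' domain='0x0000' bus='0x01' slot='0x10' function='0x0'/>
       </source>
      </interface>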
    3. Before adding the Virtual Functions to the virtual machine, detach them from the host machine.

      Code Block
      virsh nodedev-detach pci_0000_01_10_0
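
      If you later need to return a detached Virtual Function to the host (for example, to undo this configuration), virsh provides a corresponding reattach command:

      Code Block
      virsh nodedev-reattach pci_0000_01_10_0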
  7. Before you proceed with the next step, create a virtual machine. Refer to Creating a New SBC SWe Instance on KVM Hypervisor.
  8. To add the Virtual Function to the virtual machine, execute the following command. With the --config option, the device is added to the guest's persistent configuration and is available from the next guest start; add the --live option as well if the guest is running and you want the device attached immediately.

    Note

    MyGuest indicates the Virtual Machine name.

    Code Block
    virsh attach-device [MyGuest] /tmp/new-interface.xml --config
    Note

    Using the --config option ensures the new device is available after future guest restarts.
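
    To verify that the Virtual Function was added to the guest definition, you can dump the guest XML and look for the hostdev entry. The exact XML layout can differ between libvirt versions, so this is only a quick check.

    Code Block
    virsh dumpxml MyGuest | grep -A 8 "hostdev"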