Prerequisites

  • Ensure SR-IOV is enabled in the host's BIOS settings by logging in through the iLO console.
  • When using SR-IOV interfaces, do not add more than 64 VLANs, because the driver does not support more.
  • SR-IOV is a licensed feature on VMware. Procure the "VMware vSphere Enterprise Plus" license to enable SR-IOV support on ESXi.
  • Install two SR-IOV-capable 10 Gigabit PCI cards.
  • Provide a minimum of 10 GB RAM.
  • Create the VM with four interfaces:
    • One virtual interface for the Management Port
    • One virtual interface for the HA Port
    • Two virtual interfaces for the SR-IOV Ports


Note

Configuring the four ports with IP addresses in four different networks is recommended.

For example:

  • MGMT - Network 1
  • HA - Network 2
  • SR-IOV - Network 3
  • SR-IOV - Network 4

Configuring Virtual Machine Instances

Configuring SR-IOV

Perform the following steps:

  1. Log on to the VMware ESXi GUI as the root user.


    Note

    The figures in this procedure are examples of the user interface; your screen might not match them exactly.

  2. To check the status of the SR-IOV cards:

    1. Navigate to Host > Manage.

    2. Select the Hardware tab.
      Ensure that both SR-IOV cards are in the disabled state, that is, Passthrough is disabled.
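The passthrough state of each PCI function can also be checked over SSH by grepping esx.conf for the owner entries. This is a self-contained sketch: the sample lines are illustrative (they mirror the esx.conf contents shown later in this procedure), whereas on the host you would grep the real file.

```shell
# On the ESXi host you would run:
#   grep '/owner' /etc/vmware/esx.conf
# Sample of the matching lines (illustrative):
sample='/device/00000:011:00.1/owner = "vmkernel"
/device/00000:011:00.0/owner = "vmkernel"'

# Count the functions owned by the vmkernel (i.e., not in passthrough).
owned=$(printf '%s\n' "$sample" | grep -c '"vmkernel"')
echo "Functions owned by vmkernel: $owned"
```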


  3. If the SR-IOV cards are not disabled, disable them by performing the following:

    1. Enable SSH:

      1. Navigate to Host and select Actions.

      2. From the drop-down list, select Services.

      3. Click Enable Secure Shell (SSH) and Enable console shell.

    2. Check the names of the NIC cards.

      1. Log on to CLI as root.

      2. Enter the following command:

        Input
        lspci | grep X540
        


        Sample output
        0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]
        0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]
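The mapping from PCI ID to vmnic name in the output above can be extracted with a short awk one-liner. This is a self-contained sketch: the sample output is embedded so the snippet runs anywhere, whereas on the host you would pipe `lspci | grep X540` directly into awk.

```shell
# Sample lspci output for the two X540 ports (reproduced from above).
lspci_output='0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]
0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]'

# Print "PCI-ID vmnic" pairs: field 1 is the PCI ID, the last field is the
# bracketed vmkernel NIC name.
pairs=$(printf '%s\n' "$lspci_output" | awk '{gsub(/[\[\]]/, "", $NF); print $1, $NF}')
echo "$pairs"
```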
    3. Open the esx.conf file and find the PCI ID associated with the NIC card:

      1. Enter the following command:

        vi /etc/vmware/esx.conf

      2. In the file, change the owner of the PCI ID from passthrough to vmkernel, as shown in the sample output below.



        Sample output
        /system/uuid = "5a33649d-c9db-e792-c676-5cb9018acc24"
        /system/uservars/psa/defaultLunMasksInstalled = "1"  
        /system/uservars/corestorage/Filter/DefaultVMWRulesLoaded = "1"
        /system/uservars/corestorage/VAAI/DefaultVMWRulesLoaded = "1"  
        /system/uservars/host-acceptance-level = "partner"           
        /resourceGroups/version = "6.5.0"                 
        /adv/Misc/HostIPAddr = "10.54.12.81"
        /adv/Misc/DiskDumpSlotSize = "2560" 
        /adv/Misc/HostName = "hpg9-9"      
        /adv/Net/ManagementIface = "vmk0"
        /adv/Net/ManagementAddr = "10.54.12.81"
        /adv/UserMem/UserMemASRandomSeed = "1418738923"
        /adv/UserVars/HostClientCEIPOptIn = "1"        
        /device/00000:005:00.0/vmkname = "vmhba1"
        /device/00000:002:00.0/vmkname = "vmnic0"
        /device/00000:002:00.2/vmkname = "vmnic2"
        /device/00000:003:00.0/vmkname = "vmhba0"
        /device/00000:002:00.1/vmkname = "vmnic1"
        /device/00000:011:00.1/owner = "vmkernel"
        /device/00000:011:00.1/device = "1528"   
        /device/00000:011:00.1/vendor = "8086"
        /device/00000:011:00.1/vmkname = "vmnic9"
        /device/00000:004:00.2/vmkname = "vmnic6"
        /device/00000:004:00.1/vmkname = "vmnic5"
        /device/00000:002:00.3/vmkname = "vmnic3"
        /device/00000:004:00.0/vmkname = "vmnic4"
        /device/00000:005:00.1/vmkname = "vmhba2"
        /device/00000:004:00.3/vmkname = "vmnic7"
        /device/00000:011:00.0/vmkname = "vmnic8"
        /device/00000:011:00.0/vendor = "8086"   
        /device/00000:011:00.0/device = "1528"
        /device/00000:011:00.0/owner = "vmkernel"
    4. Save the file.

    5. Reboot the host.
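The owner change in the steps above can also be made non-interactively with sed instead of vi. This is a sketch: back up esx.conf before editing it, and note that `passthru` as the literal passthrough owner string is an assumption about the file's contents, so verify it against your own esx.conf first.

```shell
# Work on a temporary copy for illustration; on the host the target file
# is /etc/vmware/esx.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
/device/00000:011:00.1/owner = "passthru"
/device/00000:011:00.0/owner = "passthru"
EOF

# Switch both SR-IOV functions from passthrough back to the vmkernel.
sed -i 's/owner = "passthru"/owner = "vmkernel"/' "$conf"

result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```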

  4. In the VMware ESXi GUI, navigate to Host > Manage.

  5. Select the Hardware tab.

  6. From the PCI Devices, select the SR-IOV card.


  7. Click Configure SR-IOV.

    The window to configure the SR-IOV card is displayed.

  8. For the option Enabled, select Yes and set the number of virtual functions.


  9. Click Save.

  10. Configure the other SR-IOV card by repeating steps 4 to 9.

  11. Reboot the host by clicking Reboot host.

    A reboot is mandatory for the changes to take effect.



    The following warning message is displayed.


  12. Click Reboot.

    The VMware ESXi login window displays "The host is rebooting...".

  13. Once the virtual functions are created, the SR-IOV cards and the Passthrough status for the virtual functions display as "Active".

Creating a vSwitch

Creating a Management vSwitch

Perform the following steps.

Start

  1. Navigate to Networking. From the Port groups tab, click Add port group.
    The Add port group window is displayed with the following fields:

    Field            Example or Recommendation
    Name             VMNetwork
    VLAN ID          0
    Virtual switch   vSwitch0
    Security         Select Inherit from vSwitch
  2. Click Add.

Creating an HA vSwitch

Perform the following steps.

Start

  1. Navigate to Networking. From the Port groups tab, click Add port group.
    The Add port group window is displayed with the following fields:

    Field            Example or Recommendation
    Name             HANetwork
    VLAN ID          0
    Virtual switch   vSwitch1
    Security         Select Inherit from vSwitch
  2. Click Add.

Creating the Virtual Machine

Perform the following steps:

Start

  1. Navigate to Virtual Machines. Click Create / Register VM to create or register a virtual machine.
    The Select creation type option is displayed.


    Note

    The figures in this procedure are examples of the user interface; your screen might not match them exactly.


  2. Select the option Create a new virtual machine.

  3. Click Next.

    The Select a name and guest OS option is displayed with the following fields:

    Field              Example or Recommendation
    Name               Name of the virtual machine. For example, "VM".
    Compatibility      ESXi 6.5 virtual machine
    Guest OS family    Linux
    Guest OS version   Debian GNU/Linux 8 (64-bit)
  4. Click Next.

    The Select storage option is displayed.

  5. Select datastore1.

    Note

    Ensure that the datastore has at least 500 GB of space. This datastore is required to store all log-related data files.



  6. Click Next.
    The Customize settings option is displayed.

  7. Configure virtual hardware from Customize settings:

    1. Set the CPU

      When configuring virtual CPUs within the vSphere Web Client, you can configure:

      • The total number of vCPUs for the virtual machine
      • The total number of cores per socket

      The following table provides examples of socket determination based on the CPU and Cores per Socket within the vSphere Web Client:

      Total Number of      Cores per   Number of Sockets Determined
      Virtual CPUs (CPU)   Socket      by the vSphere Web Client
      4                    4           1
      4                    2           2
      4                    1           4
      8                    8           1
      8                    2           4
      8                    4           2
      8                    1           8
      Note

      A minimum of 4 vCPUs is required. Any number of vCPUs may be configured depending on the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance.


      Note

      Set the CPU reservation to equal the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM, divided by 2.

      CPU Reservation = (No. of vCPUs * CPU frequency)/2

      For example, a configuration of 32 CPUs with a processor of 2.30 GHz CPU frequency reserves "(32 * 2300)/2 = 36800 MHz".
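The sizing rules above reduce to simple arithmetic. The following shell sketch reproduces the socket calculation from the table and the reservation example:

```shell
# Sockets seen by the vSphere Web Client = vCPUs / cores per socket.
vcpus=8; cores_per_socket=2
sockets=$(( vcpus / cores_per_socket ))
echo "Sockets: $sockets"                       # matches the 8-vCPU, 2-core table row

# CPU Reservation = (No. of vCPUs * CPU frequency in MHz) / 2.
vcpus=32; freq_mhz=2300
reservation=$(( vcpus * freq_mhz / 2 ))
echo "Reservation: $reservation MHz"           # 36800 MHz, as in the example
```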

      The following table describes the CPU fields.

      Fields                              Example/Recommendation
      CPU                                 10
      Number of virtual sockets           1
      Number of cores per virtual socket  10
      CPU Reservation                     12925 MHz
      Limit                               Unlimited
      Shares                              Normal
      Hardware virtualization             None
      Performance counters                None
      Scheduling Affinity                 None
    2. Adjust the Memory using the following fields:

      Fields           Example/Recommendation
      Memory           20480 MB
                       Note: It is recommended to use more than 20 GB of memory.
      Reservation      Select the option "Reserve all guest memory (All locked)".
      Limit            Unlimited
      Shares           Normal
      Memory Hot Plug  None
    3. Set the Hard disk 1.

      The following table describes the Hard disk 1 fields.

      Fields               Example/Recommendation
      Hard disk 1          200 GB
      Maximum Size         1.43 TB
      Location             datastore1
      Disk Provisioning    Select Thick provisioned, lazily zeroed
      Shares               Normal
      Limit - IOPs         Unlimited
      Virtual Device Node  SCSI controller 0 and SCSI (0:0)
      Disk mode            Dependent
      Sharing              None
    4. Set the SCSI Controller

      The following table describes the SCSI Controller fields.

      Fields             Example/Recommendation
      SCSI Controller    Select "LSI Logic Parallel" from the drop-down list
      SCSI Bus Sharing   None
      SATA Controller 0  N/A
      USB controller 1   USB 2.0
    5. Set the Network Adapter 1
      Network Adapter 1 is used to provision MGMT ports.

      The following table describes the Network Adapter 1 fields.

      Fields             Example/Recommendation
      Network Adapter 1  Select the MGMT port from the drop-down list. For example, "VM Network".
      Status             Select the option "Connect at power on".
      Adapter Type       VMXNET3
      MAC Address        Automatic

      Once Network Adapter 1 is created for MGMT ports, create a new Network Adapter for HA ports.

    6. Select Add network adapter. The option to create a New Network Adapter for the HA port is displayed.
      The following table describes the New Network Adapter fields.

      Fields               Example/Recommendation
      New Network Adapter  Select the HA port from the drop-down list. For example, "HA Network".
      Status               Select the option "Connect at power on".
      Adapter Type         VMXNET3
      MAC Address          Automatic

      To attach packet ports on VMware ESXi 7.0 and later versions, skip steps 7.g to 7.j and instead follow the steps in Attach SR-IOV Interface in VMware ESXi 8.0 and Above.




    7. Click Add other device to continue the configuration (for PKT0 and for PKT1, if applicable).


    8. Select the option PCI device from the drop-down list.

      The New PCI device option is created.


    9. Repeat steps 7.g and 7.h to create more PCI devices for the PKT networks.

      Note

      Follow the same procedure for both a non-port redundancy and port redundancy scenario.

      • 2 PKT interfaces are supported for a non-port redundancy scenario
      • 4 PKT interfaces are supported for a port redundancy scenario

      Ensure packet ports are attached to the SBC VM in the correct sequential order. See Readjust SR-IOV Network Adapter Addition Sequence.

    10. Click Next. The Ready to complete option is displayed.


    11. PKT Port NUMA Affinity: Perform the following steps:
      1. Find the PKT port NUMA affinity by executing the following command on the ESXi host:

        vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
      2. Update VM NUMA affinity to be the same as NIC NUMA affinity:
        1. Navigate to Edit Settings > VM Options > Configuration Parameters > Add Parameters.


        2. Add the following parameters:

          numa.nodeAffinity = 0 or 1 (based on PKT port NIC affinity)
          numa.autosize.once = FALSE
          
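Extracting the node number from the vsish output in step 11.a can be scripted as below. This is a sketch: the sample output line is an assumed format, so adjust the pattern to what your host actually prints.

```shell
# On the ESXi host you would run (vmnic8 is an example PKT port name):
#   vsish -e get /net/pNics/vmnic8/properties | grep "NUMA"
# Assumed sample of the matching line:
vsish_output='Device NUMA Node:1'

# Pull out the node number to use as the numa.nodeAffinity value.
numa_node=$(printf '%s\n' "$vsish_output" | sed -n 's/.*NUMA Node:[[:space:]]*\([0-9]\).*/\1/p')
echo "numa.nodeAffinity = $numa_node"
```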



    12. Once the review is complete, click Finish. The virtual machine is created.

RSS Configuration for VMs using SR-IOV Interfaces

Receive side scaling (RSS) is a mechanism that enables spreading incoming network traffic across multiple CPUs, thus eliminating a potential processing bottleneck. For SR-IOV configurations, update the RSS configuration in the ESXi host as shown in the procedure below.

Start

  1. Enter the following command to unload the IXGBE driver:

    esxcfg-module -u ixgbe
  2. Enter the following command to verify the driver is unloaded:

    esxcfg-module -l | grep ixg 
  3. Enter one of the following commands to reload the driver with the required virtual function (VF) and RSS configurations:

    vmkload_mod ixgbe max_vfs=2,2 RSS=4,4

    ~ or ~

    esxcfg-module -s 'max_vfs=2,2 RSS=4,4' ixgbe
  4. Reboot the host for the RSS to take effect. This modified configuration is retained after the reboot.
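After the reboot, the persisted option string can be confirmed with `esxcfg-module -g ixgbe` (the -g flag prints a module's option string; the exact output format may vary by ESXi release). The sketch below parses an assumed sample of that output to pull out the RSS setting:

```shell
# On the ESXi host:
#   esxcfg-module -g ixgbe
# Assumed sample output (format may vary by ESXi release):
module_info="ixgbe enabled = 1 options = 'max_vfs=2,2 RSS=4,4'"

# Extract the RSS setting from the option string.
rss=$(printf '%s\n' "$module_info" | sed -n 's/.*RSS=\([0-9,]*\).*/\1/p')
echo "RSS: $rss"
```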

Installing the SBC Application on VMware

Once the VMware instance is created, install the SBC application using the steps below.

Start

  1. Select the Virtual Machine where you want to install the SBC application.
  2. Click Edit. The Edit Settings window is displayed.

  3. Select CD/DVD Drive 1.


    Note

    The figures in this procedure are examples of the user interface; your screen might not match them exactly.


  4. The Datastore browser window is displayed. Browse to the ISO image file.


  5. Click Select. The following window is displayed.


  6. Click Save.
  7. Click Power on to power on the VM.

  8. The SBC Installer window is displayed. Press Enter to boot.

  9. Once the installation completes, you are prompted to enter the login credentials.

  10. Log on to the CLI as linuxadmin. Provide the following:

    1. Primary Management IPv4 Address

    2. Primary Management Network Mask

    3. Primary Management Gateway IP Address

      After these are entered, you are prompted to use an IPv6 address.

    4. When prompted, enter n if you do not want to set an IPv6 address.