To install the SBC on a virtual machine (VM) with Direct I/O pass-through, you must first configure the pass-through I/O devices on an ESXi host, create a VM and allocate its resources (for example, CPU, memory, and NICs), and configure a datastore to contain the SBC operating system and application software.

VMDirectPath I/O is a VMware technology that can be used with I/O hardware to reduce the CPU impact of high-bandwidth workloads. It allows guest operating systems to access an I/O device directly, bypassing the virtualization layer and enhancing performance.
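If you prefer to survey the host's I/O hardware from the ESXi shell before enabling pass-through, the following standard command lists the PCI devices the VMkernel sees. This is an optional, illustrative check; the exact output format varies by ESXi version and hardware:

  # List every PCI device known to the VMkernel, including NIC details
  esxcli hardware pci list | less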


Configuring Pass-through I/O Devices on an ESXi Host

Note

You must follow the BIOS settings recommendations for the particular server. Refer to the BIOS Settings Recommendations section for guidance.


To configure the pass-through I/O devices on an ESXi host:

  1. Select a host from the inventory panel of the vSphere Client.

  2. On the Configuration tab, click Advanced Settings.
    The Passthrough Configuration page appears, listing all the available Passthrough devices.

    Note

    A green icon indicates that a device is enabled and active. An orange icon indicates that the state of the device has changed and the host must be rebooted before the device can be used.

  3. Click Edit.

  4. Select the devices to be used for Passthrough and click OK.

    Configuring Passthrough device


    Note

    The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.


  5. Reboot the Host.

    Rebooting the host
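    After the host reboots, you can optionally confirm from the ESXi shell that the intended devices are pass-through capable. This is an illustrative check; the field names in the output can differ between ESXi releases:

      # Filter for the pass-through capability lines (for example, 'Passthru Capable')
      esxcli hardware pci list | grep -i passthru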

Note

Pass-through devices are not detected in VMware ESXi build 1483097, and the VM does not power on.
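To check which build the host is running before you proceed, use the following standard command from the ESXi shell:

  # Print the ESXi product name, version, and build number
  esxcli system version get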

Note

Make sure all of your NICs are physically plugged in (link light on) before creating your VM. Otherwise, when you install the ISO, an incorrect logical-to-physical port mapping occurs and your SBC does not function properly.
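You can confirm link state from the ESXi shell before creating the VM; esxcli network nic list is a standard command and shows a Link Status column for every vmnic:

  # Every vmnic the SBC will use should report Link Status 'Up'
  esxcli network nic list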

Creating a Virtual Machine (VM)

Perform the following steps to create a new SBC VM.

  1. Log in as user root using the VMware vSphere Client.

    1. Enter the VMware host machine IP address.
    2. Enter your VMware vSphere administrator user name.
    3. Enter your VMware vSphere administrator password.

    VMware vSphere Client Login

    The vSphere Client main window appears.


    Note

    The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.



  2. In the VMware vSphere Client main window, click the Getting Started tab, and then click Create a new virtual machine.

    New Virtual Machine Main



  3. Select Custom in the Configuration window, and click Next. Provide a Name for your SBC; the name can be up to 80 characters. Click Next.

    Note

    Avoid special characters in the name. Refer to System Name and Hostname Naming Conventions for details.

    Custom Configuration



  4. From the Storage screen, select a datastore and click Next. Ensure the datastore has at least 100 GB of free space; this datastore is required to store all log-related data files. (A shell check follows the figure.)

    Datastore
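    To confirm free space from the ESXi shell, you can use the following standard command; the datastore names in the output depend on your environment:

      # List datastores with their total size and free space
      esxcli storage filesystem list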



  5. Select Virtual Machine Version: 11. Refer to VMware Hardware and Software Requirements for more information.

    Selecting Virtual Machine Version



  6. From the Guest Operating System screen, make the following selections, and then click Next:

    1. Select Linux as the Guest Operating System.

    2. Select Debian GNU/Linux 9 (64-bit) from the Version drop-down menu.

      Guest Operating System




  7. From the CPUs screen, make the following selections, and click Next. (In the following screenshot example, 4 cores are chosen.)

    1. Number of Virtual sockets: 1

    2. Number of cores per virtual socket: 4 or above (depending on whether all virtual NICs are used)

      Creating Virtual CPUs



    3. From the Memory screen, assign memory for the virtual machine, and then click Next. For fewer than 6000 calls, the minimum reservation must be 10 GB; anything more than 6000 calls requires at least 14 GB of vRAM. Refer to SBC SWe Performance Metrics for more information. (A host capacity check follows the figure.)

      Assigning Memory
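      Before reserving vCPUs and memory, you can confirm the host's physical capacity with the following standard esxcli commands:

        # Show physical CPU packages, cores, and threads on the host
        esxcli hardware cpu global get

        # Show total physical memory and the NUMA node count
        esxcli hardware memory get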



  8. Define the virtual machine network interface connections (NICs) using the following options from the drop-down menus, and then click Next to continue.

    Assigning NICs




    1. Select the first network adapter (NIC 1:) for the management interface. For example, the label can be MGMT, which is created using Configuring vNetwork Standard Switches (vSwitches).

    2. Select the second network adapter (NIC 2:) for the HA interface. For example, the label can be HA.

      Note

      These network adapters and labels are already created on the ESX host server. If you are installing for the first time on a new ESXi host server, these network adapters and corresponding labels (MGMT, HA) need to be created; for details, see Configuring vNetwork Standard Switches (vSwitches), and for a shell check of the port groups, see the example at the end of this step. The PKT0 and PKT1 interfaces are created using PCI devices; refer to steps 14 through 19 for guidance.

      Make sure that the network adapters are mapped exactly in the order shown below:

      • First network adapter (Network Adapter 1) for management interface
      • Second network adapter (Network Adapter 2) for high availability
    3. Select VMXNET 3 as the Adapter type for both the MGMT and HA interfaces.

    4. Ensure that all Connect at Power On check boxes are selected; these adapters must always be connected at power on.
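    To verify from the ESXi shell that the MGMT and HA port groups referenced above exist, you can list the standard vSwitch port groups. MGMT and HA are the example labels used in this procedure:

      # The MGMT and HA labels should appear in the Name column
      esxcli network vswitch standard portgroup list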

  9. From the SCSI Controller screen, select LSI Logic Parallel as the SCSI Controller option, then click Next to continue.

    Selecting SCSI Controller



  10. From the Select a Disk screen, select Create a new virtual disk option, then click Next to continue.

  11. From the Create a Disk screen, make the following selections:

    Selecting Disk Provisioning Option




    1. In the Capacity section, assign a minimum of 100 GB of disk space.


    2. In the Disk Provisioning section, choose only the Thick Provision Eager Zeroed option. The available options are (a command-line sketch follows these descriptions):

      1. Thick Provision Lazy Zeroed - Allocates the requested virtual hard disk during VM creation. This type of disk pre-allocates and dedicates a pre-defined amount of space for a virtual machine's disk operations, but it does not write zeroes to a virtual machine file system block until the first write within that region at run time.

      2. Thick Provision Eager Zeroed (recommended) - Pre-allocates and dedicates a user-defined amount of space for a VM's disk operations, writing zeroes across the entire disk at creation time.

      3. Thin Provision - Creates the virtual hard disk at run time (on write operations). This provides more optimal hard disk usage, but it has some performance impact until the disk grows to its maximum requested size.
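      For reference, the same provisioning formats can be produced on the ESXi command line with vmkfstools. This is an illustrative sketch only (the datastore path and file name are hypothetical); the wizard above remains the method used in this procedure:

        # Create a 100 GB eager-zeroed thick disk, the recommended format above
        vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/sbc/sbc-disk.vmdk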

    3. In the Location section, select Specify a datastore or datastore cluster and click Browse..., and then select the datastore or NFS share where the ISO image is available.

      Specifying Datastore



    4. Click Next to continue.

  12. From the Advanced Options screen, keep the default virtual device node SCSI (0:0) and click Next.

    Selecting Advance Option



  13. From the Summary screen, review your VM settings and select the Edit the virtual machine settings before completion option, and then click Continue. To go back and change the settings, click Back.

    Ready to Complete



  14. Click Add to continue with the configuration of PKT0 and PKT1.

    Adding the device



  15. Choose the type of device to add and click Next. For a Direct I/O interface, you must select PCI Device.

    Selecting PCI Device



  16. Select the physical PCI/PCIe device to connect to the packet interface PKT0 and click Next.

    Specifying the PCI/PCIe Device



  17. PKT Port NUMA Affinity: To find and modify the PKT port NUMA affinity, perform the following steps:
    1. Find the PKT port NUMA affinity by executing the following command on the ESXi host:

      vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
    2. Update the VM NUMA affinity to match the NIC NUMA affinity (a worked example follows the figures):
      1. Edit Settings -> VM options -> Configuration Parameters -> Add Parameters.

        Edit Configuration Settings


      2. Add the following parameters:

        numa.nodeAffinity = 0 or 1 (based on the PKT port NIC affinity)
        
        numa.autosize.once = FALSE


        Add Parameters
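        As a worked example, suppose PKT0 is attached to vmnic4 (a hypothetical name). The vsish query from step 17 might report NUMA node 1, in which case you would set numa.nodeAffinity to 1. The output shown is indicative only; the property name can vary by ESXi version:

          # Query the NUMA node the PKT NIC belongs to (vmnic4 is an example)
          vsish -e get /net/pNics/vmnic4/properties | grep -i numa
          # Indicative output:
          #   NUMA node: 1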


  18. Click Finish.

    Finalizing the Device Details



  19. Repeat steps 14 through 17 to add the physical device to connect to the PKT1 interface.

    Note
    Because this is Direct I/O, the same pass-through device must not be used by multiple VM instances. You must select a different PCI/PCIe device in step 16 for the PKT1 interface; see the example that follows.
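    To be sure you select two different physical devices, you can map vmnic names to PCI addresses from the ESXi shell and confirm that the addresses differ. vmkchdev is a standard ESXi utility; the output format is indicative:

      # Each line pairs a PCI address (bus:slot.function) with a vmnic name;
      # PKT0 and PKT1 must use different PCI addresses
      vmkchdev -l | grep vmnic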



  20. After adding the PCI devices for both PKT0 and PKT1 interfaces, click Finish to complete.

    Completing the Task




    Note

    All NIC ports on the same NIC card must be configured either as pass-through devices or for virtual networking (vSwitches). This is a VMware ESXi limitation.

  21. The virtual machine is created under the host IP address with the specified configuration. See the example screen below.

    Created VM







  22. After the VM is created, you must manually enable autostart and autostop of the virtual machine. Enabling autostart/autostop is useful in scenarios where power is lost and later regained: with this setting, VMs start automatically when the host machine is powered on. (A shell alternative is shown at the end of this procedure.)
    To autostart the VM, perform the following steps:
    1. Select the host IP address in the left pane and click the Configuration tab. The Virtual Machine Startup and Shutdown section is displayed.

      Virtual Machine Startup/Shutdown



    2. Click the Properties link displayed toward the top right of the window.

      1. In the System Settings pane, ensure that the Allow virtual machines to start and stop automatically with the system check box is selected. By default, it is selected.

        Enabling Auto Startup and Auto Shutdown of VM



      2. In the Startup Order pane, select the VM that you want to start automatically and click Move Up.
        The selected VM is displayed under Automatic Startup.

        Moving VM to AutoStartup



      3. Click OK. This completes the autostart and autostop settings for the virtual machine.
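    As an alternative to the vSphere Client, autostart can also be enabled from the ESXi shell with vim-cmd. This is a sketch, assuming the SBC VM's ID (as reported by getallvms) is 1 and the default delays are acceptable; adjust to your environment:

      # Find the numeric ID of the SBC VM
      vim-cmd vmsvc/getallvms

      # Enable the host-level autostart manager
      vim-cmd hostsvc/autostartmanager/enable_autostart true

      # Place the VM (ID 1 here) first in the automatic startup order
      vim-cmd hostsvc/autostartmanager/update_autostartentry 1 powerOn 120 1 guestShutdown 120 systemDefault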