In this section:

To install the SBC on a virtual machine (VM) with Direct I/O pass-through, you must first configure the pass-through I/O devices on an ESXi host, create a VM, allocate its resources (for example, CPU, memory, and NICs), and configure a datastore to contain the SBC operating system and application software.

VMDirectPath I/O is a VMware technology that reduces the CPU impact of high-bandwidth workloads by bypassing the hypervisor. It allows a guest operating system to access an I/O device directly, bypassing the virtualization layer and improving performance.

Configure Passthrough I/O Devices

You must follow the BIOS setting recommendations for the particular server. Refer to BIOS Settings Recommendations for more information.

Perform the following steps to configure the passthrough I/O devices.

  1. Log on to the VMware ESXi GUI as the root user.
    1. Enter your VMware administrator user name.
    2. Enter your VMware administrator password.


    Note

    The figures in this procedure are examples of the user interface; your screens might not match them exactly.


  2. Navigate to Manage>Hardware>PCI Devices and select the devices to use for passthrough. Click Toggle passthrough.


  3. Click Reboot host.

    Make sure that all NICs are physically plugged in (link light on) before creating your VM. Otherwise, when you perform the ISO installation, an incorrect logical-to-physical port mapping occurs and the SBC does not function properly.


    In VMware ESXi build 1483097, passthrough devices are not detected and the VM does not power on.
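Before you create the VM, you can confirm NIC link state from the ESXi shell (assuming SSH or the local console shell is enabled; these commands must run on the ESXi host itself):

```shell
# Show each physical NIC with its link status; every NIC intended
# for the SBC should report "Up" before the VM is created.
esxcli network nic list

# List PCI devices to identify the addresses of the NICs selected
# for passthrough.
esxcli hardware pci list
```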

Create a Virtual Machine (VM)

Perform the following steps to create a new SBC VM.

  1. Log in as the root user on the VMware ESXi GUI.
    1. Enter your VMware administrator user name.
    2. Enter your VMware administrator password.
     
  2. Click Virtual Machines. Click Create / Register VM to open the New virtual machine window. 


    Note

    The figures in this procedure are examples of the user interface; your screens might not match them exactly.



  3. Make sure that Create a new virtual machine is selected and click Next.


  4. Configure the VM name and guest OS.

    1. Provide a name for your VM. The name can be up to 80 characters.

      Avoid special characters in the name. Refer to System Name and Hostname Naming Conventions for details.

    2. Select ESXi 6.5 virtual machine as the Compatibility, Linux as the Guest OS family, and Debian GNU/Linux 9 (64-bit) as the Guest OS version.
    3. Click Next.

     
  5. Select a datastore and click Next. Make sure that the datastore has at least 100 GB of free space. This datastore stores all log-related data files.

  6. Under Virtual Hardware, expand the CPU drop-down menu. Customize the CPU settings as follows:
    1. Determine the total number of virtual CPUs (vCPUs) to allocate to the virtual machine, and select this number in the CPU field (the following screen capture uses four vCPUs).
    2. In the Cores per Socket field, enter the total number of cores to allocate to each virtual socket (the following screen capture uses four cores).

      The number of virtual sockets automatically updates.

    3. In the Reservation field, enter a reservation number.

      Set the CPU reservation so that it equals the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM, divided by 2.

      CPU Reservation = (No. of vCPUs * CPU frequency)/2

      For example, a configuration of 4 vCPUs with a 2.99 GHz (2992 MHz) processor reserves (4 * 2992)/2 = 5984 MHz.

    4. In the Limit field, select Unlimited for optimal performance.
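The reservation formula above can be sketched as a quick calculation (the function name is illustrative, not part of any VMware tooling):

```python
def cpu_reservation_mhz(num_vcpus: int, cpu_freq_mhz: int) -> int:
    """CPU reservation in MHz: (number of vCPUs * CPU frequency) / 2."""
    return (num_vcpus * cpu_freq_mhz) // 2

# Example from the text: 4 vCPUs on a 2.99 GHz (2992 MHz) processor.
print(cpu_reservation_mhz(4, 2992))  # 5984
```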

     
  7. Under Virtual Hardware, expand the Memory drop-down menu and assign memory to the virtual machine.

    Use 20 GB of RAM for large configurations and 17 GB of RAM for small configurations. Refer to SBC SWe Performance Metrics for more information on the memory required for different call capacities.

    For fewer than 6000 calls, the minimum reservation must be 10 GB. More than 6000 calls requires at least 14 GB of vRAM. Refer to SBC SWe Performance Metrics for more information.
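The sizing guidance above can be summarized in a small helper (a sketch only; the thresholds come from the text, and the function name is hypothetical):

```python
def min_vram_reservation_gb(concurrent_calls: int) -> int:
    """Minimum vRAM reservation in GB: 10 GB for fewer than
    6000 concurrent calls, 14 GB at or above that level."""
    return 10 if concurrent_calls < 6000 else 14

print(min_vram_reservation_gb(4000))  # 10
print(min_vram_reservation_gb(8000))  # 14
```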

  8. Under Virtual Hardware, expand the Hard disk 1 drop-down menu. Customize the Hard disk settings as follows:
    1. In the Hard disk 1 field, enter a minimum of 100 and select GB.
    2. In the Location field, click Browse... and select the datastore where the ISO file is available.
    3. In the Disk Provisioning field, select Thick provisioned, eagerly zeroed.

      The Virtual Device Node field should use the default values of SCSI controller 0 and SCSI (0:0).

  9. Under Virtual Hardware, select LSI Logic Parallel in the SCSI Controller 0 field.

  10. Under Virtual Hardware, click Add network adapter once so that there are two Network Adapter fields (one is the default). Define the network adapters as follows:
    1. Select the management interface for the first network adapter (VM Network in the following screen capture).
    2. Select the HA interface for the second network adapter (HA Network in the following screen capture).

      These network adapters and labels are already created on the ESXi host server. If you are installing on a new ESXi host server for the first time, you must create these network adapters and their corresponding labels (VM Network, HA Network). For details, refer to Creating Virtual Machine using vNetwork Standard Switch (vSwitch). The PKT0 and PKT1 interfaces are created using the PCI devices.

      Make sure that the Network Adapters are mapped in the following order:

      • First network adapter (Network Adapter 1) for the management interface
      • Second network adapter (Network Adapter 2) for the high availability (HA) interface
    3. Expand each Network Adapter drop-down menu and check the Connect at power on box beside each Status field.
    4. Select VMXNET 3 in each Adapter Type field.

      The VMXNET 3 virtual network adapter has no physical counterpart; it is optimized for performance in a virtual machine.

  11. Complete the following steps to continue the configuration on PKT0 and PKT1.
    1. Under Virtual Hardware, click Add other device and select PCI device. Repeat this step to add a second PCI device.
    2. Expand both New PCI device drop-down menus. In the New PCI device fields, select the PCI devices.

      Because this is Direct I/O, the same passthrough device must not be used by multiple VM instances. Select a different PCI/PCIe device for the PKT0 interface than for the PKT1 interface.

  12. Click Next. Review the VM settings that you configured and click Finish to create the VM.
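The wizard settings above correspond roughly to entries in the VM's .vmx configuration file. The following is a sketch only: the key names are standard .vmx options, but the exact entries the wizard writes vary by ESXi version, and the passthrough device details (such as the device IDs behind pciPassthru0 and pciPassthru1) are host-specific and omitted here.

```
guestOS = "debian9-64"
numvcpus = "4"
cpuid.coresPerSocket = "4"
sched.cpu.min = "5984"
memSize = "20480"
scsi0.virtualDev = "lsilogic"
ethernet0.virtualDev = "vmxnet3"
ethernet1.virtualDev = "vmxnet3"
pciPassthru0.present = "TRUE"
pciPassthru1.present = "TRUE"
```

Here numvcpus and cpuid.coresPerSocket reflect the four-vCPU example, sched.cpu.min is the CPU reservation in MHz, memSize is in MB (20 GB for a large configuration), and ethernet0/ethernet1 are the management and HA VMXNET 3 adapters.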

Enable/Disable Autostart of the VM

Autostart automatically restarts the VM when the host regains power after an outage. Perform the following steps to enable or disable autostart of the VM.

  1. Log in as the root user on the VMware ESXi GUI.
    1. Enter your VMware administrator user name.
    2. Enter your VMware administrator password.
  2. Click Virtual Machines, and click the VM that you want to configure.
  3. Click Actions and select Autostart>Enable to enable autostart.


    Note

    The figures in this procedure are examples of the user interface; your screens might not match them exactly.


  4. Click Actions and select Autostart>Configure. Adjust the settings in the Configure autostart window as desired. Click Save.

    To disable autostart, click Actions and select Autostart>Disable from the pop-up menu.
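If you prefer the ESXi shell over the GUI, the host's autostart manager can also be enabled with vim-cmd (run on the host itself; per-VM autostart entries are managed under the same hostsvc/autostartmanager namespace, whose argument order varies by release, so check it on your host):

```shell
# List registered VMs to find the SBC VM's ID.
vim-cmd vmsvc/getallvms

# Enable the host's autostart manager.
vim-cmd hostsvc/autostartmanager/enable_autostart true
```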