Ensure that SR-IOV is enabled in the BIOS settings of the host by logging in through the iLO console.
When using SR-IOV interfaces, do not add more than 64 VLANs, as the driver does not support more than 64.
SR-IOV is a licensed feature on VMware. Procure the "VMware vSphere Enterprise Plus" license to enable SR-IOV support on ESXi.
Install two SR-IOV-capable 10 Gigabit PCI cards.
One virtual interface for Management Port
One virtual interface for HA Port
Two virtual interfaces for SR-IOV Ports
It is recommended to configure all four ports with different IP addresses in four different networks.
For example:
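(The following addressing plan is purely illustrative; the subnets and addresses are placeholders, so substitute values from your own network design.)
Management Port: 10.10.1.10 (network 10.10.1.0/24)
HA Port: 10.10.2.10 (network 10.10.2.0/24)
PKT0 (SR-IOV): 10.10.3.10 (network 10.10.3.0/24)
PKT1 (SR-IOV): 10.10.4.10 (network 10.10.4.0/24)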
Perform the following steps:
Log on to VMware ESXi GUI as the root user.
The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.
To check the status of the SR-IOV cards:
Navigate to Host > Manage.
Select the Hardware tab.
Ensure that both SR-IOV cards are in the disabled state, or that Passthrough is in the disabled state.
If the SR-IOV cards are not disabled, disable them by performing the following:
Enable SSH. To enable SSH:
Navigate to Host and select the Actions tab.
From the drop-down list, select Services. Click Enable Secure Shell (SSH) and Enable console shell.
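Alternatively, if you already have console access (for example, through the iLO remote console mentioned above), the SSH service can also be enabled from the ESXi Shell. A minimal sketch using the standard vim-cmd service calls:
vim-cmd hostsvc/enable_ssh    # mark the SSH service as enabled
vim-cmd hostsvc/start_ssh     # start the SSH service immediately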
Check the names of the NIC cards. To check the names of the NIC cards:
Log on to CLI as root.
Execute the following command:
lspci | grep X540
The following is the sample display output:
0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]
0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]
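Optionally, you can cross-check that the names reported above correspond to physical NICs known to the VMkernel. The vmnic names below are taken from the sample output and may differ on your host:
esxcli network nic list | grep -E 'vmnic8|vmnic9'    # vmnic8/vmnic9 are placeholders from the sample output above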
Open the esx.conf file and find the PCI ID associated with the NIC card. Change the owner of the PCI ID from passthrough to vmkernel.
Execute the following command:
vi /etc/vmware/esx.conf
The following is the sample display output:
/system/uuid = "5a33649d-c9db-e792-c676-5cb9018acc24"
/system/uservars/psa/defaultLunMasksInstalled = "1"
/system/uservars/corestorage/Filter/DefaultVMWRulesLoaded = "1"
/system/uservars/corestorage/VAAI/DefaultVMWRulesLoaded = "1"
/system/uservars/host-acceptance-level = "partner"
/resourceGroups/version = "6.5.0"
/adv/Misc/HostIPAddr = "10.54.12.81"
/adv/Misc/DiskDumpSlotSize = "2560"
/adv/Misc/HostName = "hpg9-9"
/adv/Net/ManagementIface = "vmk0"
/adv/Net/ManagementAddr = "10.54.12.81"
/adv/UserMem/UserMemASRandomSeed = "1418738923"
/adv/UserVars/HostClientCEIPOptIn = "1"
/device/00000:005:00.0/vmkname = "vmhba1"
/device/00000:002:00.0/vmkname = "vmnic0"
/device/00000:002:00.2/vmkname = "vmnic2"
/device/00000:003:00.0/vmkname = "vmhba0"
/device/00000:002:00.1/vmkname = "vmnic1"
/device/00000:011:00.1/owner = "vmkernel"
/device/00000:011:00.1/device = "1528"
/device/00000:011:00.1/vendor = "8086"
/device/00000:011:00.1/vmkname = "vmnic9"
/device/00000:004:00.2/vmkname = "vmnic6"
/device/00000:004:00.1/vmkname = "vmnic5"
/device/00000:002:00.3/vmkname = "vmnic3"
/device/00000:004:00.0/vmkname = "vmnic4"
/device/00000:005:00.1/vmkname = "vmhba2"
/device/00000:004:00.3/vmkname = "vmnic7"
/device/00000:011:00.0/vmkname = "vmnic8"
/device/00000:011:00.0/vendor = "8086"
/device/00000:011:00.0/device = "1528"
/device/00000:011:00.0/owner = "vmkernel"
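For the SR-IOV NICs identified earlier, the edit amounts to changing the owner entry of each PCI ID. A sketch of the relevant line, using PCI ID 00000:011:00.0 from the sample output; the exact owner string used for passthrough devices (shown here as "passthru") may vary by ESXi release:
/device/00000:011:00.0/owner = "passthru"      # before: NIC claimed for passthrough (string is an assumption)
/device/00000:011:00.0/owner = "vmkernel"      # after: NIC returned to the VMkernel, as in the sample above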
Save the file.
Reboot the host.
In the VMware ESXi GUI, navigate to Host > Manage.
Select the Hardware tab.
From the PCI Devices, select the SR-IOV card.
Click Configure SR-IOV.
The window to configure the SR-IOV card is displayed.
For the option Enabled, select Yes and set the number of virtual functions.
Click Save.
Configure the other SR-IOV card by repeating steps 4 to 6.
Reboot the host by clicking Reboot host.
The following warning message is displayed.
Click Reboot.
The VMware ESXi login window is displayed with the message "The host is rebooting...".
Once the virtual functions are created, the SR-IOV cards and the Passthrough for the virtual functions display the status "Active".
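If you prefer to verify the virtual functions from the CLI as well, the standard esxcli SR-IOV commands can be used. A minimal sketch (vmnic8 is a placeholder name taken from the earlier sample output):
esxcli network sriovnic list                  # list SR-IOV capable NICs and their virtual function counts
esxcli network sriovnic vf list -n vmnic8     # list the virtual functions created on one NIC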
Perform the following steps:
Navigate to Networking. From the Port group tab, click Add port group.
The Add port group window is displayed.
The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.
Perform the following steps:
Navigate to Virtual Machines. Click Create / Register VM to create or register a virtual machine. The Select creation type option is displayed.
The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.
Select the option Create a new virtual machine.
Click Next.
The Select a name and guest OS option is displayed.
The following table describes the Select a name and guest OS fields.
Click Next.
The Select storage option is displayed.
Select datastore1.
Ensure that the datastore has a minimum of 500 GB of free space. This datastore is required to store all log-related data files.
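If you want to confirm the available capacity from the CLI instead of the GUI, the standard filesystem listing can be used; a minimal sketch (datastore1 matches the datastore selected above):
esxcli storage filesystem list | grep datastore1    # check the free space reported for the datastore (at least 500 GB is required)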
Click Next.
The Customize settings option is displayed.
Configure virtual hardware from Customize settings:
Setting CPU
When configuring virtual CPUs within the vSphere Web Client, you can configure the number of CPUs and the number of Cores per Socket.
The following table provides the examples of socket determination based on the CPU and Cores per Socket within the vSphere Web Client:
A minimum of 4 vCPUs is required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance.
Set the CPU reservation so that it equals the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM, divided by 2.
CPU Reservation = (No. of vCPUs * CPU frequency)/2
For example, a configuration of 32 CPUs with a processor of 2.30 GHz CPU frequency reserves "(32 * 2300)/2 = 36800 MHz".
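Similarly, under the same assumption of a 2.30 GHz processor, the minimum 4-vCPU configuration would reserve "(4 * 2300)/2 = 4600 MHz".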
The following table describes the CPU fields.
Setting Memory
The following table describes the Memory fields.
Setting Hard disk 1
The following table describes the Hard disk 1 fields.
Setting SCSI Controller
The following table describes the SCSI Controller fields.
Setting Network Adapter 1
The Network Adapter 1 is used for provisioning MGMT ports.
The following table describes the Network Adapter 1 fields.
Select Add network adapter. The option to create New Network Adapter for HA port is displayed.
The following table describes the New Network Adapter fields.
Click Add other device to continue the configuration on PKT0 and PKT1.
Select the option PCI device from the drop-down list.
The New PCI device option is created.
Repeat steps g and h to create one more PCI Device.
Click Next. The Ready to complete option is displayed.
Find the PKT port NUMA affinity by executing the following command on the ESXi host:
vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
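For example, if PKT0 maps to vmnic8 (a placeholder name taken from the earlier sample output), the command would be:
vsish -e get /net/pNics/vmnic8/properties | grep "NUMA"    # the reported NUMA node (0 or 1) is used in the next step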
Navigate to Edit Settings > VM options > Configuration Parameters > Add Parameters.
Add the following parameters:
numa.nodeAffinity = 0 or 1 (based on PKT port NIC affinity)
numa.autosize.once = FALSE
Receive side scaling (RSS) is a mechanism that enables spreading incoming network traffic across multiple CPUs, thus eliminating a potential processing bottleneck. For SR-IOV configurations, update the RSS configuration in the ESXi host as shown in the procedure below:
Execute the following command to unload the IXGBE driver:
esxcfg-module -u ixgbe
Execute the following command to verify the driver is unloaded:
esxcfg-module -l | grep ixg
Execute one of the following commands to reload the driver with the required virtual function (VF) and RSS configurations:
vmkload_mod ixgbe max_vfs=2,2 RSS=4,4
~ or ~
esxcfg-module -s 'max_vfs=2,2 RSS=4,4' ixgbe
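Of the two commands, the esxcfg-module -s form typically records the options so that they persist across reboots, whereas vmkload_mod applies them only to the driver loaded in the running system. To confirm the options configured for the module, the following check can be used (output format may vary by ESXi release):
esxcfg-module -g ixgbe    # display the options currently set for the ixgbe module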
Once the VMware instance is created, install the SBC application.
Perform the following steps:
Click Edit. The Edit Settings window is displayed.
Select CD/DVD Drive 1.
The figures shown in this procedure are intended as examples of the user interface and might not match the presented images exactly.
The Datastore browser window is displayed. Browse to and select the ISO image file.
Click Select. The following window is displayed.
Click Power on to power on the VM.
The SBC Installer window is displayed. Press Enter to boot.
Once the installation completes, you are prompted to enter the login credentials.
Log on to the CLI as linuxadmin. Provide the following IP addresses:
Primary Management IPv4 Address
Primary Management Network Mask
Primary Management Gateway IP Address
After the gateway IP address, you are prompted to configure an IPv6 address. Enter n if you do not want to set IPv6.