**Note:** Ensure that SR-IOV is enabled in the BIOS settings of the host by logging in through the iLO console.
**Note:** When using SR-IOV interfaces, do not add more than 64 VLANs; the driver does not support more than 64.
**Note:** SR-IOV is a licensed feature on VMware. Procure the "VMware vSphere Enterprise Plus" license to enable SR-IOV support on ESXi.
Install two SR-IOV-supported 10 Gigabit PCI cards. The deployment requires the following interfaces:

- One virtual interface for the Management port
- One virtual interface for the HA port
- Two virtual interfaces for the SR-IOV packet ports
**Note:** It is recommended to configure all four ports with different IP addresses in four different networks.
Perform the following steps:
Log on to the VMware ESXi GUI as the root user.
To check the status of the SR-IOV cards:
Navigate to Host > Manage.
Select the Hardware tab.
Ensure that both SR-IOV cards are in the disabled state, or that Passthrough is disabled.
If the SR-IOV cards are not disabled, disable them by performing the following:
Enable SSH:
Navigate to Host and select Actions.
From the drop-down list, select Services, then click Enable Secure Shell (SSH) and Enable console shell.
Check the names of the NIC cards:
Log on to the CLI as root.
Execute the following command:
```shell
lspci | grep X540
```
The following is a sample output:

```
0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]
0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]
```
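If later steps are scripted, the vmkname labels can be pulled out of this output. A minimal sketch, demonstrated on the sample line above (on the host, pipe the live `lspci | grep X540` output instead of using a sample string):

```shell
# Sample line from the lspci output shown above.
sample='0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]'
# Extract the vmkname label (e.g. vmnic8) from the bracketed suffix.
echo "$sample" | grep -o 'vmnic[0-9]*'
```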
Open the esx.conf file and find the PCI IDs associated with the NIC card.
Change the owner of each PCI ID from passthrough to vmkernel.
Execute the following command:
```shell
vi /etc/vmware/esx.conf
```
The following is a sample output:

```
/system/uuid = "5a33649d-c9db-e792-c676-5cb9018acc24"
/system/uservars/psa/defaultLunMasksInstalled = "1"
/system/uservars/corestorage/Filter/DefaultVMWRulesLoaded = "1"
/system/uservars/corestorage/VAAI/DefaultVMWRulesLoaded = "1"
/system/uservars/host-acceptance-level = "partner"
/resourceGroups/version = "6.5.0"
/adv/Misc/HostIPAddr = "10.54.12.81"
/adv/Misc/DiskDumpSlotSize = "2560"
/adv/Misc/HostName = "hpg9-9"
/adv/Net/ManagementIface = "vmk0"
/adv/Net/ManagementAddr = "10.54.12.81"
/adv/UserMem/UserMemASRandomSeed = "1418738923"
/adv/UserVars/HostClientCEIPOptIn = "1"
/device/00000:005:00.0/vmkname = "vmhba1"
/device/00000:002:00.0/vmkname = "vmnic0"
/device/00000:002:00.2/vmkname = "vmnic2"
/device/00000:003:00.0/vmkname = "vmhba0"
/device/00000:002:00.1/vmkname = "vmnic1"
/device/00000:011:00.1/owner = "vmkernel"
/device/00000:011:00.1/device = "1528"
/device/00000:011:00.1/vendor = "8086"
/device/00000:011:00.1/vmkname = "vmnic9"
/device/00000:004:00.2/vmkname = "vmnic6"
/device/00000:004:00.1/vmkname = "vmnic5"
/device/00000:002:00.3/vmkname = "vmnic3"
/device/00000:004:00.0/vmkname = "vmnic4"
/device/00000:005:00.1/vmkname = "vmhba2"
/device/00000:004:00.3/vmkname = "vmnic7"
/device/00000:011:00.0/vmkname = "vmnic8"
/device/00000:011:00.0/vendor = "8086"
/device/00000:011:00.0/device = "1528"
/device/00000:011:00.0/owner = "vmkernel"
```
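Instead of editing the owner entries by hand in vi, the value can be switched with a sed substitution. The sketch below demonstrates the substitution on a single sample line from the excerpt above; applying it in place to /etc/vmware/esx.conf is shown only in the comment. Note that an unrestricted pattern changes every passthrough owner entry, so restrict it to your card's PCI IDs if other devices must remain in passthrough:

```shell
# On the host, after backing up esx.conf, the same expression can be
# applied in place:
#   sed -i 's/owner = "passthrough"/owner = "vmkernel"/' /etc/vmware/esx.conf
# Demonstration on one sample line (hypothetical PCI ID from the excerpt):
line='/device/00000:011:00.0/owner = "passthrough"'
echo "$line" | sed 's/owner = "passthrough"/owner = "vmkernel"/'
```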
Save the file.
Reboot the host.
In the VMware ESXi GUI, navigate to Host > Manage.
Select the Hardware tab.
From the PCI Devices list, select the SR-IOV card.
Click Configure SR-IOV.
The window to configure the SR-IOV card is displayed.
For the option Enabled, select Yes and set the number of virtual functions.
Click Save.
Configure the other SR-IOV card by repeating steps 4 to 6.
Reboot the host by clicking Reboot host.
The following warning message is displayed.
Click Reboot.
The VMware ESXi login window is displayed with the message "The host is rebooting...".
Once the virtual functions are created, the SR-IOV cards and the Passthrough status for the virtual functions display as "Active".
Perform the following steps:
Navigate to Networking. From the Port groups tab, click Add port group.
The Add port group window is displayed.
Perform the following steps:
Navigate to Networking. From the Port groups tab, click Add port group.
The Add port group window is displayed.
Perform the following steps:
Navigate to Virtual Machines. Click Create / Register VM to create or register a virtual machine. The Select creation type option is displayed.
Select the option Create a new virtual machine.
Click Next.
The Select a name and guest OS option is displayed.
The following table describes the Select a name and guest OS fields.
Click Next.
The Select storage option is displayed.
Select datastore1.
**Note:** Ensure that the datastore has a minimum of 500 GB of space. This datastore is required to store all log-related data files.
Click Next.
The Customize settings option is displayed.
Configure virtual hardware from Customize settings:
Setting CPU
When configuring virtual CPUs within the vSphere Web Client, you can configure the number of vCPUs and the number of Cores per Socket.
The following table provides examples of socket determination based on the CPU and Cores per Socket values within the vSphere Web Client:
**Note:** A minimum of 4 vCPUs is required. Any number of vCPUs may be configured depending on the call capacity requirements, but the number should be even (4, 6, 8, and so on) to avoid impacting performance.
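The vCPU rule in the note above can be expressed as a quick validity check. A minimal sketch; the function name is illustrative and not part of any product tooling:

```python
def valid_vcpu_count(vcpus: int) -> bool:
    """Check the vCPU rule from the note: at least 4, and an even number."""
    return vcpus >= 4 and vcpus % 2 == 0

# 6 vCPUs satisfies the rule; 5 is odd, so it does not.
print(valid_vcpu_count(6), valid_vcpu_count(5))  # → True False
```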
**Note:** Set the CPU reservation so that it equals the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM, divided by 2: CPU Reservation = (No. of vCPUs * CPU frequency) / 2. For example, a configuration of 32 vCPUs with a 2.30 GHz processor reserves (32 * 2300)/2 = 36800 MHz.
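The reservation formula above can be checked with a few lines of arithmetic. A minimal sketch; the function name is illustrative:

```python
def cpu_reservation_mhz(vcpus: int, cpu_freq_mhz: int) -> int:
    """CPU reservation = (number of vCPUs * CPU frequency in MHz) / 2."""
    return (vcpus * cpu_freq_mhz) // 2

# Worked example from the note: 32 vCPUs at 2.30 GHz (2300 MHz).
print(cpu_reservation_mhz(32, 2300))  # → 36800
```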
The following table describes the CPU fields.
Setting Memory
The following table describes the Memory fields.
Setting Hard disk 1
The following table describes the Hard disk 1 fields.
Setting SCSI Controller
The following table describes the SCSI Controller fields.
Setting Network Adapter 1
Network Adapter 1 is used for provisioning the MGMT port.
The following table describes the Network Adapter 1 fields.
Once Network Adapter 1 is created for the MGMT port, create a new network adapter for the HA port.
Select Add network adapter. The option to create New Network Adapter for HA port is displayed.
The following table describes the New Network Adapter fields.
To attach packet ports on VMware ESXi 6.7 and later versions, skip steps 7.g to 7.j and follow the steps in the section Attach SR-IOV Interface in VMware ESXi 6.7 and Above.
Select the option PCI device from the drop-down list.
The New PCI device option is created.
Repeat steps g and h to create one more PCI Device.
Find the PKT port NUMA affinity by executing the following command on the ESXi host:
```shell
vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
```
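The grep prints the NIC's NUMA property line; only the node number is needed for the numa.nodeAffinity parameter. The exact label of that line varies by ESXi and driver version, so the sample below is hypothetical (on the host, pipe the live command output instead):

```shell
# Hypothetical sample of the NUMA line printed by the vsish command above.
sample='Device NUMA Node:1'
# Strip everything except the digits to isolate the node number.
echo "$sample" | tr -cd '0-9'
```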
Navigate to Edit Settings > VM Options > Configuration Parameters > Add Parameters.
Add the following parameters:
```
numa.nodeAffinity = 0 or 1 (based on PKT port NIC affinity)
numa.autosize.once = FALSE
```
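After saving, the added parameters are stored in the VM's .vmx configuration file as quoted key/value lines, for example (a sketch; the node value depends on your PKT port's NUMA affinity):

```
numa.nodeAffinity = "1"
numa.autosize.once = "FALSE"
```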
Receive side scaling (RSS) is a mechanism that enables spreading incoming network traffic across multiple CPUs, thus eliminating a potential processing bottleneck. For SR-IOV configurations, update the RSS configuration in the ESXi host as shown in the procedure below:
Execute the following command to unload the ixgbe driver:

```shell
esxcfg-module -u ixgbe
```
Execute the following command to verify that the driver is unloaded:

```shell
esxcfg-module -l | grep ixg
```
Execute one of the following commands to reload the driver with the required virtual function (VF) and RSS configurations:

```shell
vmkload_mod ixgbe max_vfs=2,2 RSS=4,4
```

or

```shell
esxcfg-module -s 'max_vfs=2,2 RSS=4,4' ixgbe
```
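To confirm the options took effect after the reload, the module's configured option string can be queried and checked. This assumes the get-options form `esxcfg-module -g ixgbe` on your ESXi version; the parsing below runs on a hypothetical sample of its output:

```shell
# On the host:
#   esxcfg-module -g ixgbe
# Hypothetical sample of the reported options line:
sample="ixgbe enabled = 1 options = 'max_vfs=2,2 RSS=4,4'"
# Count occurrences of the RSS setting; 1 means it is configured.
echo "$sample" | grep -c 'RSS=4,4'
```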
Once the VMware instance is created, install the SBC application.
Perform the following steps:
Click Edit. The Edit Settings window is displayed.
Select CD/DVD Drive 1.
The Datastore browser window is displayed. Browse to the ISO image file.
Click Select. The following window is displayed.
Click Power on to power on the VM.
The SBC Installer window is displayed. Press Enter to boot.
Once the installation completes, you are prompted to enter the login credentials.
Log on to the CLI as linuxadmin. Provide the following IP addresses:
Primary Management IPv4 Address
Primary Management Network Mask
Primary Management Gateway IP Address
You are then prompted to set an IPv6 address. Enter n if you do not want to configure IPv6.