In this section:
When using release 8.2.0R2, SR-IOV and VLANs are supported on systems with X540- and 82599-based cards.
Ensure that SR-IOV is enabled in the BIOS settings of the host by logging in through the iLO console.
SR-IOV is a licensed feature on VMware and the "VMware vSphere Enterprise Plus" license must be procured to enable SR-IOV support on ESXi.
Install two SR-IOV-capable 10 Gigabit PCI cards.
One virtual interface for Management Port
One virtual interface for HA Port
Two virtual interfaces for SR-IOV Ports
It is recommended to configure all four ports with different IP addresses in four different networks.
Perform the following steps:
Log on to VMware ESXi GUI as the root user.
VMware ESXi 6.5
The figures shown in this procedure are intended as examples of the user interface and might not match the presented image exactly.
To check the status of the SR-IOV cards:
Navigate to Host > Manage.
Select the tab Hardware.
Ensure that both SR-IOV cards are in the disabled state, that is, Passthrough is disabled for both.
SR-IOV Cards
If the SR-IOV cards are not disabled, disable them by performing the following steps:
Enable SSH. To enable SSH,
Navigate to Host and select the tab Actions.
From the drop-down list, select the option Services. Click Enable Secure Shell (SSH) and Enable console shell.
Enable SSH
Check the name of the NIC cards. To check the name of the NIC cards,
Log on to CLI as root.
Execute the following command:
lspci | grep X540
The following is the sample display output:
0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]
0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]
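If many NICs are present, the vmnic names can be pulled out of this output with a small filter. The following is a sketch run against the sample lines above; on the ESXi host you would pipe the output of `lspci | grep X540` directly instead of the `printf`:

```shell
# Extract the vmkernel NIC names (vmnicX) from lspci output lines.
# The two sample lines below are the ones shown above.
printf '%s\n' \
  '0000:0b:00.0 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic8]' \
  '0000:0b:00.1 Network controller: Intel(R) Ethernet Controller X540-AT2 [vmnic9]' |
sed -n 's/.*\[\(vmnic[0-9]*\)\].*/\1/p'
```

This prints one NIC name per line (here, vmnic8 and vmnic9), which you will need when identifying the PCI IDs in esx.conf.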
Open the esx.conf file and find the PCI ID associated with the NIC card. Change the owner of the PCI ID from passthrough to vmkernel.
Execute the following command:
vi /etc/vmware/esx.conf
The following is the sample display output:
/system/uuid = "5a33649d-c9db-e792-c676-5cb9018acc24"
/system/uservars/psa/defaultLunMasksInstalled = "1"
/system/uservars/corestorage/Filter/DefaultVMWRulesLoaded = "1"
/system/uservars/corestorage/VAAI/DefaultVMWRulesLoaded = "1"
/system/uservars/host-acceptance-level = "partner"
/resourceGroups/version = "6.5.0"
/adv/Misc/HostIPAddr = "10.54.12.81"
/adv/Misc/DiskDumpSlotSize = "2560"
/adv/Misc/HostName = "hpg9-9"
/adv/Net/ManagementIface = "vmk0"
/adv/Net/ManagementAddr = "10.54.12.81"
/adv/UserMem/UserMemASRandomSeed = "1418738923"
/adv/UserVars/HostClientCEIPOptIn = "1"
/device/00000:005:00.0/vmkname = "vmhba1"
/device/00000:002:00.0/vmkname = "vmnic0"
/device/00000:002:00.2/vmkname = "vmnic2"
/device/00000:003:00.0/vmkname = "vmhba0"
/device/00000:002:00.1/vmkname = "vmnic1"
/device/00000:011:00.1/owner = "vmkernel"
/device/00000:011:00.1/device = "1528"
/device/00000:011:00.1/vendor = "8086"
/device/00000:011:00.1/vmkname = "vmnic9"
/device/00000:004:00.2/vmkname = "vmnic6"
/device/00000:004:00.1/vmkname = "vmnic5"
/device/00000:002:00.3/vmkname = "vmnic3"
/device/00000:004:00.0/vmkname = "vmnic4"
/device/00000:005:00.1/vmkname = "vmhba2"
/device/00000:004:00.3/vmkname = "vmnic7"
/device/00000:011:00.0/vmkname = "vmnic8"
/device/00000:011:00.0/vendor = "8086"
/device/00000:011:00.0/device = "1528"
/device/00000:011:00.0/owner = "vmkernel"
Save the file.
Reboot the host.
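As an alternative to the manual vi edit above, the owner entries can be flipped non-interactively with sed. This is only a sketch: the device paths 00000:011:00.0/.1 are taken from the sample output above and must be replaced with the PCI IDs of your own cards. The pipeline below dry-runs the expression against sample lines; once you are satisfied, apply the same expression to /etc/vmware/esx.conf with `sed -i`.

```shell
# Dry-run: flip owner from "passthrough" to "vmkernel" for both X540
# functions. The device paths are illustrative; substitute your own PCI IDs.
# Once verified, apply for real with:
#   sed -i '<same expression>' /etc/vmware/esx.conf
printf '%s\n' \
  '/device/00000:011:00.0/owner = "passthrough"' \
  '/device/00000:011:00.1/owner = "passthrough"' |
sed 's#^\(/device/00000:011:00\.[01]/owner = \)"passthrough"#\1"vmkernel"#'
```

Dry-running first avoids corrupting esx.conf, which the host reads at boot.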
In the VMware ESXi GUI, navigate to Host > Manage.
Select the Hardware tab.
From the PCI Devices, select the SR-IOV card.
Selecting the SR-IOV Card
Click Configure SR-IOV.
The window to configure the SR-IOV card is displayed.
For the option Enabled, select Yes and set the number of virtual functions.
Configuring the SR-IOV Card
Click Save.
Configure the other SR-IOV card. Repeat the steps from 4 to 6.
Reboot the host by clicking Reboot host.
Reboot Host
The following warning message is displayed.
Warning Message
Click Reboot.
The VMware ESXi login window is displayed with the message "The host is rebooting...".
The VMware ESXi Login Window Displaying the Message
Once the virtual functions are created, the SR-IOV cards and the Passthrough status of the virtual functions display as "Active".
SR-IOV Cards and Virtual Function Status
Perform the following steps:
Navigate to Networking. From the Port groups tab, click Add port group.
The Add port group window is displayed.
Adding the MGMT Port Group
Add Port Group Fields
Field | Example or Recommendation |
---|---|
Name | VMNetwork |
VLAN ID | 0 |
Virtual switch | vSwitch0 |
Security | Select Inherit from vSwitch |
Perform the following steps:
Navigate to Networking. From the Port groups tab, click Add port group.
The Add port group window is displayed.
Adding the HA Port Group
Add Port Group Fields
Field | Example or Recommendation |
---|---|
Name | HANetwork |
VLAN ID | 0 |
Virtual switch | vSwitch1 |
Security | Select Inherit from vSwitch |
Perform the following steps:
Navigate to Virtual Machines. Click Create / Register VM to create or register a virtual machine. The Select creation type option is displayed.
Create or Register a Virtual Machine
Select the option Create a new virtual machine.
Click Next.
The Select a name and guest OS option is displayed.
Select a Name and Guest OS
The following table describes the Select a name and guest OS fields.
Select a Name and Guest OS Fields
Field | Example or Recommendation |
---|---|
Name | Name of the virtual machine. For example, VM . |
Compatibility | ESXi 6.5 virtual machine |
Guest OS family | Linux |
Guest OS version | Debian GNU/Linux 8 (64-bit) |
Click Next.
The Select storage option is displayed.
Select datastore1.
Ensure that the datastore has a minimum of 500 GB of free space. This datastore is required to store all log-related data files.
Select Storage
Click Next.
The Customize settings option is displayed.
Configure virtual hardware from Customize settings:
Setting CPU
When configuring virtual CPUs within the vSphere Web Client, you can configure the total number of virtual CPUs and the number of cores per socket; the client determines the resulting number of sockets.
The following table provides the examples of socket determination based on the CPU and Cores per Socket within the vSphere Web Client:
Number of Sockets Determined by the vSphere Web Client
Total Number of virtual CPUs (CPU) | Cores per Socket | Number of Sockets Determined by the vSphere Web Client |
---|---|---|
4 | 4 | 1 |
4 | 2 | 2 |
4 | 1 | 4 |
8 | 8 | 1 |
8 | 2 | 4 |
8 | 4 | 2 |
8 | 1 | 8 |
A minimum of 4 vCPUs is required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance.
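The socket count the vSphere Web Client derives is simply the total vCPU count divided by the cores per socket, as the table above illustrates. A quick arithmetic check:

```shell
# Number of sockets = total vCPUs / cores per socket
for spec in "4 4" "4 2" "4 1" "8 2"; do
  set -- $spec
  echo "$1 vCPUs / $2 cores per socket = $(( $1 / $2 )) socket(s)"
done
```

Each line of output matches a row of the table above (for example, 8 vCPUs with 2 cores per socket yields 4 sockets).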
Setting CPU
Set the CPU reservation so that it equals the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM, divided by 2.
CPU Reservation = (No. of vCPUs * CPU frequency)/2
For example, a configuration of 32 CPUs with a processor of 2.30 GHz CPU frequency reserves "(32 * 2300)/2 = 36800 MHz".
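The reservation formula can be verified with shell arithmetic, using the values from the example above:

```shell
# CPU Reservation (MHz) = (number of vCPUs * CPU frequency in MHz) / 2
vcpus=32
freq_mhz=2300    # 2.30 GHz expressed in MHz
echo "CPU Reservation: $(( vcpus * freq_mhz / 2 )) MHz"   # -> 36800 MHz
```

Substitute your own vCPU count and processor frequency to compute the reservation for your deployment.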
The following table describes the CPU fields.
Customize Settings-CPU
Fields | Example or Recommendation |
---|---|
CPU | 10 |
Number of Virtual sockets | 1 |
Number of cores per virtual socket | 10 |
CPU Reservation | 25850 MHz |
Limit | Unlimited |
Shares | Normal |
Hardware virtualization | None |
Performance counters | None |
Scheduling Affinity | None |
Setting Memory
Setting Memory
The following table describes the Memory fields.
Customize Settings-Memory
Fields | Examples or Recommendation |
---|---|
Memory | 20480 MB. Note: It is recommended to use more than 20 GB of memory. |
Reservation | Select the option Reserve all guest memory (All locked) |
Limit | Unlimited |
Shares | Normal |
Memory Hot Plug | None |
Setting Hard disk 1
Setting Hard disk 1
The following table describes the Hard disk 1 fields.
Customize Settings - Hard disk 1
Fields | Examples or Recommendation |
---|---|
Hard disk 1 | 200 GB |
Maximum Size | 1.43 TB |
Location | datastore1 |
Disk Provisioning | Select Thick provisioned, lazily zeroed |
Shares | Normal |
Limit-IOPs | Unlimited |
Virtual Device Node | SCSI controller 0 and SCSI (0:0) |
Disk mode | Dependent |
Sharing | None |
Setting SCSI Controller
Setting SCSI Controller
The following table describes the SCSI Controller fields.
Customize Settings - SCSI Controller
Fields | Examples or Recommendation |
---|---|
SCSI Controller | Select LSI Logic Parallel from the drop-down list |
SCSI Bus Sharing | None |
SATA Controller 0 | N/A |
USB controller 1 | USB 2.0 |
Setting Network Adapter 1
The Network Adapter 1 is used for provisioning MGMT ports.
Setting Network Adapter 1
The following table describes the Network Adapter 1 fields.
Customize Settings - Network Adapter 1
Fields | Examples or Recommendation |
---|---|
Network Adapter 1 | Select the MGMT ports from the drop-down list. For example, VM Network . |
Status | Select the option Connect at power on . |
Adapter Type | VMXNET3 |
MAC Address | Automatic |
Select Add network adapter. The option to create New Network Adapter for HA port is displayed.
Add Network Adapter
The following table describes the New Network Adapter fields.
Customize Settings - New Network Adapter
Fields | Examples or Recommendation |
---|---|
New Network Adapter | Select the HA port from the drop-down list. For example, HA Network . |
Status | Select the option Connect at power on . |
Adapter Type | VMXNET3 |
MAC Address | Automatic |
Click Add other device to continue the configuration on PKT0 and PKT1.
Add Other Device
Select the option PCI device from the drop-down list.
Select PCI Device
The New PCI device option is created.
Repeat steps g and h to create one more PCI Device.
New PCI device
Click Next. The Ready to complete option is displayed.
Ready to Complete
Find the PKT port NUMA affinity by executing the following command on the ESXi host:
vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
Navigate to Edit Settings > VM Options > Configuration Parameters > Add Parameters.
Edit Configuration Parameters
Add the following parameters:
numa.nodeAffinity = 0 or 1 (based on the PKT port NUMA affinity)
numa.autosize.once = FALSE
Add Parameters
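The same parameters can equivalently be placed directly in the VM's .vmx configuration file. The fragment below is a sketch that assumes the vsish command above reported NUMA node 0 for the PKT ports; use the node value actually returned on your host:

```
numa.nodeAffinity = "0"
numa.autosize.once = "FALSE"
```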
Receive side scaling (RSS) is a mechanism that enables spreading incoming network traffic across multiple CPUs, thus eliminating a potential processing bottleneck. For SR-IOV configurations, the RSS configuration must be updated in the ESXi host as follows:
Execute the following command to unload the IXGBE driver:
esxcfg-module -u ixgbe
Execute the following command to verify the driver is unloaded:
esxcfg-module -l | grep ixg
Execute one of the following commands to reload the driver with the required virtual function (VF) and RSS configurations:
vmkload_mod ixgbe max_vfs=2,2 RSS=4,4
or
esxcfg-module -s 'max_vfs=2,2 RSS=4,4' ixgbe
Once the VMware instance is created, you must install the SBC application.
Perform the following steps:
Click Edit. The Edit Settings window is displayed.
Select CD/DVD Drive 1.
Edit Settings
The Datastore browser window is displayed. Browse to the ISO image file.
Selecting the OS File
Click Select. The following window is displayed.
Saving the OS File
Click Power on to power on the VM.
Powering On the VM
The SBC Installer window is displayed. Press Enter to boot.
SBC Installer Window
Once the installation completes, you are prompted to enter the login credentials.
Login Information
Log on to the CLI as linuxadmin. Provide the following IP addresses:
Primary Management IPv4 Address
Primary Management Network Mask
Primary Management Gateway IP Address
You are then prompted to configure an IPv6 address. Enter n if you do not want to set IPv6 addresses.