To install SBC on a virtual machine (VM) with a PCI pass-through device, you must first create a VM and allocate its resources (for example, CPU, memory, and NICs), as well as configure a datastore to contain the SBC operating system and application software.
You must configure the host system with a PCI pass-through device before creating a new SBC SWe instance.
You must follow the BIOS settings recommendations for the particular server. Refer to the BIOS Settings Recommendations section for guidance.
To configure the host:
Navigate to the following directory path:
cd /etc/default/
Open the grub file in the vi editor. Search for GRUB_CMDLINE_LINUX or linuxefi/vmlinuz and append the following:
intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pci=realloc
For example:
GRUB_CMDLINE_LINUX="vconsole.keymap=us crashkernel=auto vconsole.font=latarcyrheb-sun16 rhgb quiet intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pci=realloc"
GRUB_DISABLE_RECOVERY="true"
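Before regenerating the GRUB configuration, you can optionally confirm that the parameters were appended (a quick check; the full line also contains your existing kernel arguments):
grep GRUB_CMDLINE_LINUX /etc/default/grub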
Enter the following command to update the grub.cfg file:
grub2-mkconfig --output=/boot/<DIR_PATH>/grub.cfg
Based on your BIOS settings, <DIR_PATH> can be either /efi/EFI/redhat/ or /grub2/.
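For example, using the two <DIR_PATH> values above, a host booted through UEFI typically uses:
grub2-mkconfig --output=/boot/efi/EFI/redhat/grub.cfg
and a host booted through legacy BIOS typically uses:
grub2-mkconfig --output=/boot/grub2/grub.cfg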
Enter the following command to reboot the host system.
reboot
Log onto the host system after reboot.
Enter the following command to verify the grub update:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=9ab5f45b-8b74-4620-ae67-4d48eea55273 ro vconsole.keymap=us crashkernel=auto vconsole.font=latarcyrheb-sun16 rhgb quiet intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pci=realloc
Enter the following command to enable VFIO.
modprobe vfio-pci
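As a quick check (not part of the documented procedure), you can verify that the module loaded by listing the loaded VFIO modules and confirming that vfio_pci appears in the output:
lsmod | grep vfio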
Enter the following command to list IOMMU groups with the interfaces.
find /sys/kernel/iommu_groups/ -type l
The command lists the IOMMU groups:
/sys/kernel/iommu_groups/15/devices/0000:0a:00.0
/sys/kernel/iommu_groups/15/devices/0000:0a:00.1
/sys/kernel/iommu_groups/16/devices/0000:00:1d.0
/sys/kernel/iommu_groups/17/devices/0000:00:1e.0
/sys/kernel/iommu_groups/18/devices/0000:00:1f.0
/sys/kernel/iommu_groups/18/devices/0000:00:1f.2
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/20/devices/0000:03:00.0
/sys/kernel/iommu_groups/20/devices/0000:03:00.1
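If you only need the IOMMU group of a single device, for example the Intel I350 port 0000:0a:00.0 shown above, you can read that device's iommu_group symlink instead (a convenience sketch, not part of the documented procedure):
readlink /sys/bus/pci/devices/0000:0a:00.0/iommu_group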
Enter the following command to list interfaces.
lspci | grep -i ether
The command lists the interfaces:
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.2 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.3 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0a:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
0a:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Only the Intel I350 Ethernet adapter is supported for configuration as a PCI pass-through device.
Enter the following command to determine the PCI ID of the interface:
lspci -n -s 0000:<interface_id>
For example:
lspci -n -s 0000:0a:00.0
The command lists the PCI ID:
0a:00.0 0200: 8086:1521 (rev 01)
Enter the following command to unbind the interface.
echo 0000:0a:00.0 >/sys/bus/pci/devices/0000\:0a\:00.0/driver/unbind
Enter the following command to add this new interface to the VFIO-PCI list, using the vendor and device IDs (8086 1521) reported in the previous step.
echo 8086 1521 > /sys/bus/pci/drivers/vfio-pci/new_id
Repeat steps 10 through 12 to add another interface to the VFIO-PCI list.
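To verify that an interface is now bound to the vfio-pci driver (an optional check), display its kernel driver information and confirm that vfio-pci is reported as the driver in use:
lspci -k -s 0a:00.0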
To install SBC on a virtual machine (VM), first create a VM and allocate resources (such as CPU, memory, and NICs), and configure a datastore that contains the SBC operating system and application software.
To create a new SBC SWe KVM instance:
The instance is created on Linux Vanilla flavor version 7.0, and the screens shown here may vary in later Linux Vanilla flavor versions.
Perform the following to export the display to your desktop if you are remotely accessing the KVM host system.
Log on to the KVM host system through an SSH or telnet client, such as PuTTY.
Enter the following command to export the display:
export DISPLAY=<system_IP>:0.0
where <system_IP> is the IP address of the system to which the GUI display is exported.
Enter the following command to launch the virtual machine manager (virt-manager) on your system.
virt-manager
Ensure Xserver is running on the host system so that the display can be shown on your desktop.
The Virtual Machine Manager window displays.
Click New to create a new VM.
The Create a new virtual machine window displays.
The figures shown in this procedure are examples of the user interface; your screens might not match the presented images exactly.
Enter the name for the virtual machine in the Name field.
Click Forward.
The locate your install media window displays.
In Step 2 of the Create a new virtual machine window:
Select Use ISO image and click Browse.
The Locate ISO media volume window displays.
Click Browse Local.
The Locate ISO media window displays.
Navigate to the folder containing the media file; select the ISO file and click Open.
The directory path of the selected ISO is displayed in Use ISO image field.
Click Forward.
The memory and CPU settings window displays.
Click Forward.
The storage details window displays.
Select the Enable storage for this virtual machine option.
Select the Create a disk image on the computer's hard drive option and enter 100 GB as the hard drive space.
Click Forward.
The ready to begin installation window displays.
In Step 5 of the Create a new virtual machine window:
Select the Customize configuration before install checkbox.
Click Finish.
The VM configuration screen displays.
The figures shown in this procedure are examples of the user interface; your screens might not match the presented images exactly.
In the left pane, click Processor. The corresponding processor options display.
Click Copy host CPU configuration.
The Model field displays the host CPU configuration running on the host system.
In the Threads field, enter 2 as the number of threads required for the instance.
Click Apply.
Refer to Table 5 in the For KVM Hypervisor section for the minimum configuration requirements.
Click Advanced options to expand the section and select IDE from the drop-down list as the Disk bus value.
Click Apply.
In the left pane, click NIC. The corresponding network interface options display.
Click the Source device drop-down list and select a device for the MGT interface.
Click Apply.
Click Add Hardware to add a NIC for the HA interface.
The Add New Virtual Hardware window displays.
Click the Host device drop-down list and select a device for the HA interface.
By default, the MAC address for the selected host device is displayed in the MAC address field. Do not uncheck the MAC address checkbox.
Click Finish.
The new NIC is listed in the left pane.
Select the new NIC.
Click the Source mode drop-down list and select Bridge.
Click Apply.
Click Finish.
The added PCI device for PKT0 is listed in the left pane.
Repeat steps 9 (d) [i] through 9 (d) [iv] to add another PCI device for PKT1.
Configure the VM for CPU pinning to optimize performance. Complete the following steps:
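If you prefer the command line, CPU pinning can also be applied with virsh once the VM has been defined. This is a minimal sketch only: the VM name sbc-swe and host cores 2 and 3 are placeholders, not values from this procedure; substitute the VM name and host cores for your deployment, and repeat the command for each vCPU.
virsh vcpupin sbc-swe 0 2 --config
virsh vcpupin sbc-swe 1 3 --config
To display the resulting pinning, run virsh vcpupin sbc-swe with no additional arguments.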
Return to the VM Configuration screen and select Boot Options in the left pane.
Update the Boot device order so that IDE CDROM1 is first, followed by VirtIO Disk 1.
Click Apply.
Click Begin Installation.
The ConnexIP Installer Boot Menu displays.
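For reference only, a VM comparable to the one defined through the virt-manager wizard above can also be sketched as a single virt-install command. This is a non-authoritative sketch: the VM name, memory and vCPU counts, ISO path, bridge names, and PCI addresses are placeholders; size memory and vCPUs per Table 5 and use the PCI addresses you configured for pass-through.
virt-install \
  --name sbc-swe \
  --memory 16384 \
  --vcpus 4 \
  --cpu host-model \
  --cdrom /var/lib/libvirt/images/connexip.iso \
  --disk size=100,bus=ide \
  --network bridge=br0 \
  --network bridge=br1 \
  --hostdev 0000:0a:00.0 \
  --hostdev 0000:0a:00.1 \
  --boot cdrom,hd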