This section describes the hardware and software requirements for installing and configuring the SBC SWe on VMware ESXi. The recommended hardware and software settings are intended to ensure optimum performance.
The following table lists the server hardware requirements:

Configuration | Requirement |
---|---|
Processor | Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading). Ribbon recommends using Westmere (or newer) processors for better SRTP performance; these processors include the AES-NI instruction set for performing cryptographic operations in hardware. The supported CPU Family number is 6 and the CPU Model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information. Note: ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality; plan the number of VMs hosted on a server accordingly. |
RAM | Minimum 24 GB |
Hard Disk | Minimum 500 GB |
Network Interface Cards (NICs) | 8 NICs (preferably with SR-IOV capability to support SWe optimizations). Make sure NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled; x710 NICs are not supported on Direct I/O or KVM. |
Ports | Number of ports allowed: |
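The Processor requirements above can be sanity-checked before installation. The following sketch is illustrative only (it is not part of the Ribbon documentation): it parses /proc/cpuinfo for the CPU family, model, core count, and the ht/aes flags, and assumes a Linux environment booted on the candidate server, since ESXi itself does not expose /proc/cpuinfo.

```python
# Illustrative sketch (not from the Ribbon documentation): check a candidate
# server's CPU against the Processor requirements in the table above.
# Run from a Linux environment on that hardware (e.g., a live Linux image).
import re

def parse_cpuinfo(path="/proc/cpuinfo"):
    with open(path) as f:
        text = f.read()
    # Each logical CPU repeats these fields; the first occurrence is enough.
    family = int(re.search(r"cpu family\s*:\s*(\d+)", text).group(1))
    model = int(re.search(r"^model\s*:\s*(\d+)", text, re.MULTILINE).group(1))
    flags = re.search(r"flags\s*:\s*(.*)", text).group(1).split()
    cores = int(re.search(r"cpu cores\s*:\s*(\d+)", text).group(1))
    return family, model, flags, cores

def check_requirements():
    family, model, flags, cores = parse_cpuinfo()
    print(f"CPU family {family}, model {model}, {cores} cores per socket")
    print("Family 6:              ", "OK" if family == 6 else "NOT MET")
    print("Model newer than 26:   ", "OK" if model > 26 else "NOT MET")
    print("At least 6 cores:      ", "OK" if cores >= 6 else "NOT MET")
    print("Hyper-threading (ht):  ", "OK" if "ht" in flags else "NOT MET")
    print("AES-NI (aes) for SRTP: ", "OK" if "aes" in flags else "NOT MET")

if __name__ == "__main__":
    check_requirements()
```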
Ribbon recommends the following BIOS settings for optimum performance:
For example, the BIOS settings shown below are recommended for HP DL380p Gen8 servers. For BIOS settings of other servers, refer to the respective vendor's website.
SBC SWe for VMware – Software Requirements
The following are the software requirements for the SBC SWe on VMware. Note: The VMware Enterprise Plus license is required for SR-IOV.
VMware ESXi Requirements
For more information, refer to Downloading the SBC SWe Software Package.
The following general recommendations apply to all platforms where SBC SWe is deployed:
The following are recommendations for VMware ESXi configurations:
Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB of RAM and up to 32 vCPUs.
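To confirm the virtual hardware version of an existing VM, the virtualHW.version key in the VM's .vmx file can be inspected. The sketch below is illustrative and not from the Ribbon documentation; the datastore path used in the example is hypothetical.

```python
# Illustrative sketch (not from the Ribbon documentation): read a VM's .vmx
# file and report its virtual hardware version. The path below is hypothetical.
import re

def virtual_hw_version(vmx_path):
    with open(vmx_path) as f:
        for line in f:
            match = re.match(r'\s*virtualHW\.version\s*=\s*"(\d+)"', line)
            if match:
                return int(match.group(1))
    return None

if __name__ == "__main__":
    version = virtual_hw_version("/vmfs/volumes/datastore1/sbc-swe/sbc-swe.vmx")
    if version is None:
        print("virtualHW.version not found")
    else:
        print(f"Virtual hardware version {version}:",
              "OK" if version >= 8 else "upgrade recommended (use version 8 or above)")
```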
Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page (see figure below) before installing the SBC SWe.
The following configurations are recommended to improve performance.
Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:
Limit instances to using cores from a single NUMA node by configuring the numa.nodeAffinity option according to the number of the NUMA node the VM is on. Access this option using the path: VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0.
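For reference, the two Advanced settings above can be reviewed as the configuration-parameter key/value pairs they correspond to. The sketch below is illustrative and not from the Ribbon documentation; the key names sched.cpu.latencySensitivity and numa.nodeAffinity are assumptions based on how recent vSphere releases store these options in the VM's .vmx file, so verify them for your ESXi version before applying.

```python
# Illustrative sketch (not from the Ribbon documentation): the Latency
# Sensitivity and NUMA node affinity settings expressed as vSphere
# "Configuration Parameters" key/value pairs. The key names are assumptions;
# confirm them against your ESXi release before applying.
ADVANCED_PARAMS = {
    "sched.cpu.latencySensitivity": "high",  # VM Latency Sensitivity = High
    "numa.nodeAffinity": "0",                # pin the VM to NUMA node 0 (use the node the VM is on)
}

def to_vmx_lines(params):
    """Render the parameters as .vmx-style lines for review before applying
    them via VM Options > Advanced > Configuration Parameters."""
    return [f'{key} = "{value}"' for key, value in params.items()]

if __name__ == "__main__":
    print("\n".join(to_vmx_lines(ADVANCED_PARAMS)))
```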
Additional VM configuration recommendations appear in the following table; an illustrative sizing check follows the table.
Settings | Recommended Configuration |
---|---|
vCPU | Minimum 4 vCPUs required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance. Keep the Resource Allocation setting marked as 'unlimited.' Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters. |
vRAM | Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for supported call capacities with different configuration limits. |
Virtual Hard Disk | Set the virtual hard disk size to 100 GB or more (based on requirements for retaining CDRs, logs, etc. for a given number of days). |
vNICs | Set the number of virtual NICs to 4 (1-MGMT, 1-HA, 1-PKT0, and 1-PKT1). |
vSwitch settings | |
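The table's vCPU, disk, and vNIC recommendations can be expressed as a simple pre-deployment check. The sketch below is illustrative only (not from the Ribbon documentation); the example profile values passed to it are hypothetical, and vRAM is not checked numerically because memory must be reserved per call capacity and configuration requirements.

```python
# Illustrative sketch (not from the Ribbon documentation): encode the VM
# recommendations above as a quick pre-deployment check.
def check_vm_profile(vcpus, disk_gb, vnics):
    """Check a planned SBC SWe VM profile against the table above.
    vRAM is not checked numerically: reserve memory based on call capacity
    and configuration requirements (see SBC SWe Performance Metrics)."""
    problems = []
    if vcpus < 4:
        problems.append("At least 4 vCPUs are required.")
    if vcpus % 2 != 0:
        problems.append("Use an even number of vCPUs (4, 6, 8, ...) to avoid impacting performance.")
    if disk_gb < 100:
        problems.append("Set the virtual hard disk size to 100 GB or more.")
    if vnics != 4:
        problems.append("Configure 4 vNICs (MGMT, HA, PKT0, PKT1).")
    return problems

if __name__ == "__main__":
    # Hypothetical planned profile for one SBC SWe VM.
    issues = check_vm_profile(vcpus=6, disk_gb=120, vnics=4)
    print("OK" if not issues else "\n".join(issues))
```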
Make sure the Processors, ESXi version and VM configuration (vCPUs, vRAM, Virtual Hard Disk, vNICs, and vSwitch Settings) are identical for an SBC SWe HA pair.
Make sure that the BIOS and ESXi settings and recommendations are not changed once they are applied on the server.
To configure VLANs on SR-IOV and PCI Passthrough Ethernet interfaces, disable Data Center Bridging (DCB) on the switch connected to the interfaces.