This section describes hardware and software requirements and recommendations. To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements.

Warning: The recommended hardware and software settings are intended to ensure optimum stability and performance. If the recommended settings are not used, the system may not behave as expected.
SBC SWe for VMware – Server Hardware Requirements
Warning: The software only runs on platforms using Intel processors. Platforms using AMD processors are not supported.
The following table lists the server hardware requirements.

Table: Server Hardware Requirements

| Configuration | Requirement |
|---|---|
| Processor | Intel Xeon processors (Nehalem micro-architecture or later) with 6 or more cores (processors should support hyper-threading). Info: Ribbon recommends using Westmere (or newer) processors for better SRTP performance; these processors have the AES-NI instruction set for performing cryptographic operations in hardware. Info: ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality. Plan the number of VMs that can be hosted on a server accordingly. |
| RAM | Minimum 24 GB |
| Hard Disk | Minimum 500 GB |
| Network Interface Cards (NICs) | Minimum 4 NICs if physical NIC redundancy is not required; otherwise, 8 NICs (preferably with SR-IOV capability to support SWe optimizations). Info: Make sure NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. The following NICs are supported for configuration as SR-IOV and DirectPath I/O pass-through devices: Intel I350, x540, x550, x710, and 82599 Ethernet adapters, Mellanox ConnectX-4, and Mellanox ConnectX-5. SR-IOV is supported only with 10 Gbps interfaces (x540/82599/x710). The VMware Enterprise Plus license is required for SR-IOV. Info: Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled; x710 NICs are not supported with DirectPath I/O or on KVM. |
| Ports | Number of ports allowed: 1 Management port, 1 HA port, 2 Media ports |
BIOS Setting Recommendations
Ribbon recommends the following BIOS settings for optimum performance:
Table: Recommended BIOS Settings for Optimum Performance

| BIOS Parameter | Recommended Setting | Details |
|---|---|---|
| Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization |
| Intel VT-d (Directed I/O) | Enabled | If available |
| Intel Hyper-Threading | Enabled | |
| Intel Turbo Boost | Enabled | |
| CPU power management | Maximum Performance | |
For example, the BIOS settings shown below are recommended for HP DL380p Gen8 servers. For BIOS settings of other servers, refer to the respective vendor's website.
Table: BIOS Setting Recommendations for HP DL380p Gen8 Server

| BIOS Parameter | Recommended Setting | Default Value |
|---|---|---|
| HP Power Profile | Maximum Performance | Balanced Power and Performance |
| Thermal Configuration | Maximum Cooling | Optimal Cooling |
| HW Prefetchers | Disabled | Enabled |
| Adjacent Sector Prefetcher | Disabled | Enabled |
| Processor Power and Utilization Monitoring | Disabled | Enabled |
| Memory Pre-Failure Notification | Disabled | Enabled |
| Memory Refresh Rate | 1x Refresh | 2x Refresh |
| Data Direct I/O | Enabled | Disabled |
| SR-IOV | Enabled | Disabled |
| Intel® VT-d | Enabled | Disabled |
SBC SWe for VMware – Software Requirements

Warning: Using older versions of ESXi can trigger a VM instance shutdown. To prevent this from occurring, you must upgrade VMware ESXi; refer to the End of General Support column on https://lifecycle.vmware.com/#/ for supported versions.
The following are the requirements for VMware ESXi environments.

Table: VMware ESXi Requirements

| Software | Version* | Tested and Qualified Version | For More Information |
|---|---|---|---|
| vSphere ESXi | 5.1.0 or above | VMware 6.0 tested with VM version 11; VMware 6.5 tested with VM version 13 | Customized ESXi images for various server platforms are available on VMware and hardware platform vendor sites. These ensure that all the required drivers for network and storage controllers are available to run the ESXi server. Most of the customized ESXi images come with customized management software to manage the server running the ESXi software. Customized ESXi images for HP ProLiant and IBM servers are available at: |
| vSphere Client | 5.1 or above | | |
| vCenter Server | | | |

Info: The VMware Enterprise Plus license is required for SR-IOV.
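As an illustration of how the installed ESXi version can be checked against the supported list, the following sketch uses the pyVmomi SDK to read the version reported by each host. pyVmomi, the vCenter hostname, and the credentials shown are assumptions for this example, not part of the SBC SWe requirements.

```python
# Minimal sketch (assumes the pyVmomi SDK and reachable vCenter/ESXi credentials).
# It only reports each host's ESXi version so it can be compared against the
# End of General Support information on https://lifecycle.vmware.com/#/.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_esxi_versions(vcenter, user, password):
    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            about = host.config.product  # vim.AboutInfo for the ESXi host
            print(f"{host.name}: {about.fullName} (version {about.version}, build {about.build})")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    # Placeholder values; replace with your vCenter or ESXi host details.
    report_esxi_versions("vcenter.example.com", "administrator@vsphere.local", "secret")
```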
Third-Party References:
Downloading the SBC SWe Software Package
For more information, refer to Downloading the SBC SWe Software Package.
The following general recommendations apply to all platforms where SBC SWe is deployed:
- The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
- For best performance, deploy only a single instance on a single NUMA. Performance degradation occurs if you host more than one instance on a NUMA or if a single instance spans multiple NUMAs.
- Make sure that the physical NICs associated with an instance are connected to the same NUMA/socket where the instance is hosted. In the case of a dual NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA and the associated NICs of each of the instances connected to their respective NUMAs.
- To optimize performance, configure memory equally on both NUMA nodes. For example, if a dual-NUMA server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node (see the sizing sketch after this list).
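The sizing sketch below is purely illustrative: it restates the even-vCPU, one-instance-per-NUMA, and balanced-memory rules above as a simple check. The function name and the example host values (a dual-NUMA host with 128 GiB of RAM) are assumptions, not additional requirements.

```python
# Illustrative check of the general recommendations above: even vCPU counts,
# at most one SBC SWe instance per NUMA node, and RAM configured equally on
# all NUMA nodes. Host figures below are example values only.

def check_layout(vcpus_per_vm, instances, numa_nodes, ram_per_node_gib):
    problems = []
    if vcpus_per_vm % 2:
        problems.append("vCPU count should be even (4, 6, 8, ...)")
    if instances > numa_nodes:
        problems.append("more than one instance per NUMA node degrades performance")
    if len(set(ram_per_node_gib)) != 1:
        problems.append("RAM should be configured equally on all NUMA nodes")
    return problems or ["layout matches the general recommendations"]

# Example: dual-NUMA host, 128 GiB split 64/64, one 8-vCPU instance per NUMA node.
for message in check_layout(vcpus_per_vm=8, instances=2, numa_nodes=2,
                            ram_per_node_gib=[64, 64]):
    print(message)
```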
General ESXi Recommendations
The following are recommendations for VMware ESXi configurations:
- Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) to run on the server platform, including the resources needed by ESXi itself.
- Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of that VM as well as other virtual machines sharing the same host.
- Under BIOS settings, disconnect or disable any physical hardware devices (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) that you will not be using to free up CPU resources.
- Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB RAM and up to 32 vCPUs (see the sketch below).
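Creating the VM at the desired hardware version through the vSphere client covers this recommendation; for completeness, the sketch below shows one way an existing VM's hardware version could be raised with pyVmomi. The VM name, the target version string, and the ServiceInstance are assumptions carried over from the earlier version-check sketch.

```python
# Sketch only: reports and, if needed, upgrades a VM's virtual hardware version.
# Assumes an authenticated pyVmomi ServiceInstance 'si' (see the earlier
# version-check sketch) and that the VM is powered off before upgrading.
from pyVmomi import vim

def upgrade_hw_version(si, vm_name, target="vmx-13"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    print("current hardware version:", vm.config.version)  # e.g. 'vmx-08'
    if vm.config.version != target:
        return vm.UpgradeVM_Task(version=target)  # requires the VM to be powered off
```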
ESXi Host Configuration Parameters

Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page before installing the SBC SWe. A hedged scripting example follows the table.
Table: ESXi Advanced Settings

| ESXi Parameter | ESXi 5.1 Recommended | ESXi 5.1 Default | ESXi 5.5 Recommended | ESXi 5.5 Default | ESXi 6.0 Recommended | ESXi 6.0 Default | ESXi 6.5 Recommended | ESXi 6.5 Default |
|---|---|---|---|---|---|---|---|---|
| Cpu.CoschedCrossCall | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Cpu.CreditAgePeriod | 500 | 3000 | 1000 | 1000 | 1000 | 1000 | 1000 | 3000 |
| DataMover.HardwareAcceleratedInit | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| DataMover.HardwareAcceleratedMove | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Disk.SchedNumReqOutstanding | 256 | 32 | n/a | n/a | n/a | n/a | n/a | n/a |
| Irq.BestVcpuRouting | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| Mem.BalancePeriod | 0 | 15 | 0 | 15 | n/a | n/a | n/a | n/a |
| Mem.SamplePeriod | 0 | 60 | 0 | 60 | n/a | n/a | n/a | n/a |
| Mem.ShareScanGHz | 0 | 4 | 0 | 4 | 0 | 4 | 0 | 4 |
| Mem.VMOverheadGrowthLimit | 0 | 4294967295 | 0 | 4294967295 | 0 | 4294967295 | 0 | 4294967295 |
| Misc.TimerMaxHardPeriod | 2000 | 100000 | 2000 | 100000 | 2000 | 100000 | 2000 | 500000 |
| Misc.TimerMinHardPeriod | 100 | 100 | n/a | n/a | n/a | n/a | n/a | n/a |
| Net.AllowPT | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 |
| Net.MaxNetifRxQueueLen | 500 | 100 | 500 | 100 | 500 | 100 | 500 | n/a |
| Net.MaxNetifTxQueueLen | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000 | 2000 |
| Net.NetTxCompletionWorldlet | 0 | 1 | 0 | 1 | n/a | n/a | n/a | n/a |
| Net.NetTxWordlet | 0 | 2 | 1 | 2 | 1 | 2 | 1 | n/a |
| Numa.LTermFairnessInterval | 0 | 5 | 0 | 5 | 0 | 5 | 0 | 5 |
| Numa.MonMigEnable | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Numa.PageMigEnable | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Numa.PreferHT | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| Numa.RebalancePeriod | 60000 | 2000 | 60000 | 2000 | 60000 | 2000 | 60000 | 2000 |
| Numa.SwapInterval | 1 | 3 | 1 | 3 | 1 | 3 | 1 | 3 |
| Numa.SwapLoadEnable | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
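These values can be set in the vSphere client, with esxcli, or through the API. The sketch below is one possible pyVmomi approach and covers only a few of the parameters; the keys and ESXi 6.5 values come from the table above, while the host object and everything else are assumptions for illustration.

```python
# Hedged sketch: applies a subset of the recommended ESXi advanced settings
# through the vSphere API. 'host' is assumed to be a vim.HostSystem obtained
# as in the earlier version-check sketch.
from pyVmomi import vim

RECOMMENDED = {  # ESXi 6.5 recommended values from the table above
    "Numa.PreferHT": 1,
    "Net.MaxNetifTxQueueLen": 1000,
    "Misc.TimerMaxHardPeriod": 2000,
}

def apply_advanced_settings(host):
    opt_mgr = host.configManager.advancedOption
    changes = []
    for key, value in RECOMMENDED.items():
        current = opt_mgr.QueryOptions(name=key)[0].value
        print(f"{key}: current={current}, recommended={value}")
        if current != value:
            # The Python value type must match the option's declared type
            # (the options above are integers).
            changes.append(vim.option.OptionValue(key=key, value=value))
    if changes:
        opt_mgr.UpdateOptions(changedValue=changes)
```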
VM Configuration Recommendations
The following configurations are recommended to improve performance.
Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:
Figure: VM Options Window (image removed)
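The same setting can be applied programmatically; the following hedged pyVmomi sketch reconfigures a VM's latency sensitivity to High. The 'vm' object is assumed to be obtained as in the earlier sketches, and note that High latency sensitivity generally also expects full CPU and memory reservations on the VM.

```python
# Sketch only: sets a VM's Latency Sensitivity to High, matching the
# VM Options > Advanced setting described above. 'vm' is assumed to be a
# vim.VirtualMachine object from pyVmomi.
from pyVmomi import vim

def set_latency_sensitivity_high(vm):
    spec = vim.vm.ConfigSpec()
    spec.latencySensitivity = vim.LatencySensitivity(level="high")
    return vm.ReconfigVM_Task(spec=spec)  # wait for the task before powering on the VM
```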
Limit instances to using cores from a single NUMA node by configuring the numa.nodeaffinity option according to the number of the NUMA node the VM is on. Access this option using the path: VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0.
Figure: Configuration Parameters Window (image removed)
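The same parameter can be added through the API; the sketch below is a hedged pyVmomi equivalent of the Edit Configuration step, pinning the VM to NUMA node 0. The 'vm' object and the node value are assumptions for illustration.

```python
# Sketch only: adds the numa.nodeaffinity configuration parameter so the VM
# uses cores from a single NUMA node (node 0 in this example).
from pyVmomi import vim

def set_numa_node_affinity(vm, node="0"):
    spec = vim.vm.ConfigSpec()
    spec.extraConfig = [vim.option.OptionValue(key="numa.nodeaffinity", value=node)]
    return vm.ReconfigVM_Task(spec=spec)
```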
Additional VM configuration recommendations appear in the following table.
Table: VM Configuration Recommendations

| Settings | Recommended Configuration |
|---|---|
| vCPU | Minimum 4 vCPUs required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance. Keep the Resource Allocation setting marked as 'unlimited'. Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters. |
| vRAM | Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for supported call capacities with different configuration limits. |
| Virtual Hard Disk | Set the virtual hard disk size to 100 GB or more (based on the number of days CDRs, logs, and so on are retained). Use Thick provisioning (eager zeroed). The hard disk size cannot be changed once the SBC SWe software is installed. |
| vNICs | Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1). Use only the VMXNET3 driver. Always use the automatic MAC address assignment option while creating vNICs. Associate each vNIC with a separate vSwitch. Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level. |
| vSwitch settings | Use four different vSwitches for the four vNICs on the VM; this ensures the various traffic types are physically separated on the SBC. Use four different virtual networking labels, each with different VLANs or subnets. Always run active and standby VMs on different physical servers. Disable VM logging. |
Info: Make sure the Processors, ESXi version, and VM configuration (vCPUs, vRAM, Virtual Hard Disk, vNICs, and vSwitch Settings) are identical for an SBC SWe HA pair.
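One way to sanity-check this is to compare the two VMs' configurations programmatically; the sketch below is an illustrative pyVmomi comparison, not a Ribbon tool, and the 'vm_a'/'vm_b' objects are assumed to be obtained as in the earlier sketches.

```python
# Illustrative check: verifies that both VMs of an SBC SWe HA pair report the
# same vCPU, vRAM, virtual disk, vNIC, and hardware-version configuration.
from pyVmomi import vim

def summarize(vm):
    cfg = vm.config
    nics = [d for d in cfg.hardware.device
            if isinstance(d, vim.vm.device.VirtualEthernetCard)]
    disks = sorted(d.capacityInKB for d in cfg.hardware.device
                   if isinstance(d, vim.vm.device.VirtualDisk))
    return (cfg.hardware.numCPU, cfg.hardware.memoryMB, disks, len(nics), cfg.version)

def ha_pair_identical(vm_a, vm_b):
    return summarize(vm_a) == summarize(vm_b)
```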
Info: Make sure that the BIOS and ESXi settings and recommendations are not changed once they are applied on the server.