This section describes the hardware and software requirements and recommendations. To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements.
Warning: The recommended hardware and software settings are intended to ensure optimum stability and performance. If the recommended settings are not used, the system may not behave as expected.
SBC SWe for VMware – Server Hardware Requirements
The following table lists the server hardware requirements:

Table: Server Hardware Requirements
| Configuration | Requirement |
|---|---|
| Processor | Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores or more; the processors should support hyper-threading. Note: Ribbon recommends Westmere (or newer) processors for better SRTP performance. These processors have the AES-NI instruction set for performing cryptographic operations in hardware. Note: ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality. Plan the number of VMs hosted on a server accordingly. |
| RAM | Minimum 24 GB |
| Hard Disk | Minimum 500 GB |
| Network Interface Cards (NICs) | Minimum 4 NICs if physical NIC redundancy is not required; otherwise, 8 NICs (preferably with SR-IOV capability). NICs with multi-queue support enhance network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. Note: The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuring as SR-IOV and DirectPath I/O pass-through devices. SR-IOV is supported only with 10 Gbps interfaces (x540/82599). The VMware Enterprise Plus license is required for SR-IOV. Note: Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled. x710 NICs are not supported on Direct I/O or KVM. |
| Ports | Number of ports allowed: 1 Management port, 1 HA port, 2 Media ports |

Warning: The software only runs on platforms using Intel processors. Platforms using AMD processors are not supported.
|
BIOS Setting Recommendations
Ribbon recommends the following BIOS settings for optimum performance:
Table: Recommended BIOS Settings for Optimum Performance
| BIOS Parameter | Recommended Setting | Details |
|---|---|---|
| Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization |
| Intel VT-d (Directed I/O) | Enabled | If available |
| Intel Hyper-Threading | Enabled | |
| CPU power management | Maximum Performance | |
For example, the BIOS settings shown below are recommended for HP DL380p Gen8 servers. For BIOS settings of other servers, refer to the respective vendor's website.
Table: BIOS Setting Recommendations for HP DL380p Gen8 Server
| BIOS Parameter | Recommended Setting | Default Value |
|---|---|---|
| HP Power Profile | Maximum Performance | Balanced Power and Performance |
| Thermal Configuration | Maximum Cooling | Optimal Cooling |
| HW Prefetchers | Disabled | Enabled |
| Adjacent Sector Prefetcher | Disabled | Enabled |
| Processor Power and Utilization Monitoring | Disabled | Enabled |
| Memory Pre-Failure Notification | Disabled | Enabled |
| Memory Refresh Rate | 1x Refresh | 2x Refresh |
| Data Direct I/O | Enabled | Disabled |
| SR-IOV | Enabled | Disabled |
| Intel® VT-d | Enabled | Disabled |
SBC SWe for VMware – Software Requirements

The following are the software requirements for VMware ESXi environments:

Table: VMware ESXi Requirements
| Software | Version | Tested and Qualified Version | For More Information |
|---|---|---|---|
| vSphere ESXi | 5.1 or above | VMware 6.0 tested with VM version 11; VMware 6.5 tested with VM version 13 | Customized ESXi images for various server platforms are available on VMware and hardware platform vendor sites. These ensure that all the required drivers for network and storage controllers are available to run the ESXi server. Most of the customized ESXi images come with customized management software to manage the server running the ESXi software. Customized ESXi images for HP ProLiant and IBM servers are available under Download images. |
| vSphere Client | 5.1 or above | | |
Warning: Virtual network interfaces will not come up using VMware ESXi 5.5 Update 3. For more information, refer to the 5.0.x Release Notes.
Note: The VMware Standard License is sufficient for the SBC SWe unless the PKT interfaces need to be configured in Direct I/O pass-through mode; in that case, the Enterprise Plus version of the VMware ESXi software is required.

Note: The VMware Enterprise Plus license is required for SR-IOV.
Third-Party References:
Downloading the SBC SWe Software Package
For more information, refer to Downloading the SBC SWe Software Package.

The following are recommended VMware ESXi and virtual machine (VM) configurations. These general recommendations apply to all platforms where the SBC SWe is deployed:
- The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
- For best performance, deploy only a single instance on a single NUMA. Performance degradation occurs if you host more than one instance on a NUMA or if a single instance spans multiple NUMAs.
- Make sure that the physical NICs associated with an instance are connected to the same NUMA/socket where the instance is hosted. In the case of a dual NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA and the associated NICs of each of the instances connected to their respective NUMAs.
- To optimize performance, configure memory equally on both NUMA nodes. For example, if a dual NUMA node server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node.
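The sizing rules above can be sketched as a quick sanity check. This is a hypothetical planning helper (not part of the SBC SWe or VMware tooling) that flags odd vCPU counts, instances that would span or outnumber NUMA nodes, and unequal per-node memory:

```python
# Hypothetical helper illustrating the deployment rules above: even vCPU
# counts, one instance per NUMA node, no instance spanning nodes, and RAM
# configured equally across nodes.

def validate_plan(instance_vcpus, numa_nodes, cores_per_node, ram_gib_per_node):
    """Return a list of rule violations for a planned deployment."""
    problems = []
    for vcpus in instance_vcpus:
        if vcpus % 2 != 0:
            problems.append(f"{vcpus} vCPUs is odd; use an even count (4, 6, 8, ...)")
        if vcpus > cores_per_node:
            problems.append(f"{vcpus} vCPUs would span NUMA nodes of {cores_per_node} cores")
    if len(instance_vcpus) > numa_nodes:
        problems.append("more instances than NUMA nodes; host at most one per node")
    if len(set(ram_gib_per_node)) > 1:
        problems.append("RAM is not configured equally across NUMA nodes")
    return problems

# Dual-NUMA host, 10 cores and 64 GiB per node, two 8-vCPU instances:
print(validate_plan([8, 8], numa_nodes=2, cores_per_node=10,
                    ram_gib_per_node=[64, 64]))  # []
```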
General ESXi Recommendations
The following recommendations apply to VMware ESXi configurations:
- Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) to run on the server platform, including the resources needed by ESXi itself.
- Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of that VM as well as other virtual machines sharing the same host.
- Under BIOS settings, disconnect or disable any physical hardware devices (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) that you will not be using, to free up interrupt and CPU resources.
- Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB of RAM and up to 32 vCPUs.
ESXi Host Configuration Parameters
Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page before installing the SBC SWe.

Table: ESXi Advanced Settings
| ESXi Parameter | ESXi 5.1 Recommended | ESXi 5.1 Default | ESXi 5.5 Recommended | ESXi 5.5 Default | ESXi 6.0 Recommended | ESXi 6.0 Default | ESXi 6.5 Recommended | ESXi 6.5 Default |
|---|---|---|---|---|---|---|---|---|
| Cpu.CoschedCrossCall | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Cpu.CreditAgePeriod | 500 | 3000 | 1000 | 1000 | 1000 | 1000 | 1000 | 3000 |
| DataMover.HardwareAcceleratedInit | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| DataMover.HardwareAcceleratedMove | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Disk.SchedNumReqOutstanding | 256 | 32 | | | | | | |
| Irq.BestVcpuRouting | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| Mem.BalancePeriod | 0 | 15 | 0 | 15 | | | | |
| Mem.ShareScanGHz | 0 | 4 | 0 | 4 | 0 | 4 | 0 | 4 |
| Mem.VMOverheadGrowthLimit | 0 | 4294967295 | 0 | 4294967295 | 0 | 4294967295 | 0 | 4294967295 |
| Misc.TimerMaxHardPeriod | 2000 | 100000 | 2000 | 100000 | 2000 | 100000 | 2000 | 500000 |
| Misc.TimerMinHardPeriod | 100 | 100 | | | | | | |
| Net.AllowPT | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 |
| Net.MaxNetifRxQueueLen | 500 | 100 | 500 | 100 | 500 | 100 | 500 | n/a |
| Net.MaxNetifTxQueueLen | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000 | 2000 |
| Net.NetTxCompletionWorldlet | 0 | 1 | 0 | 1 | | | | |
| Net.NetTxWordlet | 0 | 2 | 1 | 2 | 1 | 2 | 1 | n/a |
| Numa.LTermFairnessInterval | 0 | 5 | 0 | 5 | 0 | 5 | 0 | 5 |
| Numa.MonMigEnable | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Numa.PageMigEnable | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| Numa.PreferHT | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| Numa.RebalancePeriod | 60000 | 2000 | 60000 | 2000 | 60000 | 2000 | 60000 | 2000 |
| Numa.SwapInterval | 1 | 3 | 1 | 3 | 1 | 3 | 1 | 3 |
| Numa.SwapLoadEnable | 0 | 1 | 0 | 1 | 0 | 1 | | |
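One way to use the table above is to diff a host's current advanced settings against the recommended values. The sketch below assumes the ESXi 6.5 column (subset shown) and a current-settings dict already parsed, for example, from `esxcli system settings advanced list` output; the `audit` helper itself is hypothetical, not a VMware API:

```python
# Recommended ESXi 6.5 values from the table above (subset shown).
RECOMMENDED_65 = {
    "Cpu.CoschedCrossCall": 0,
    "Mem.ShareScanGHz": 0,
    "Net.MaxNetifTxQueueLen": 1000,
    "Numa.PreferHT": 1,
    "Numa.RebalancePeriod": 60000,
}

def audit(current):
    """Map each out-of-spec parameter to (current value, recommended value)."""
    return {name: (current.get(name), wanted)
            for name, wanted in RECOMMENDED_65.items()
            if current.get(name) != wanted}

host = {"Cpu.CoschedCrossCall": 1, "Mem.ShareScanGHz": 0,
        "Net.MaxNetifTxQueueLen": 1000, "Numa.PreferHT": 0,
        "Numa.RebalancePeriod": 60000}
print(audit(host))  # {'Cpu.CoschedCrossCall': (1, 0), 'Numa.PreferHT': (0, 1)}
```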
VM Configuration Recommendations
The following configurations are recommended to improve performance.
Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:
Figure: VM Options Window
Limit instances to using cores from a single NUMA node by configuring the numa.nodeaffinity option according to the number of the NUMA node the VM is on. Access this option using the path: VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0.

Figure: Configuration Parameters Window
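The same pinning can be expressed as a plain key/value configuration entry. A minimal sketch (the helper name is hypothetical; numa.nodeaffinity is the parameter described above, stored with the VM's other configuration parameters):

```python
# Hypothetical helper building the configuration parameter entry that
# Edit Configuration would store for NUMA node pinning.

def numa_affinity_entry(node):
    """Return the configuration parameter pinning a VM to one NUMA node."""
    if node < 0:
        raise ValueError("NUMA node numbers start at 0")
    return {"numa.nodeaffinity": str(node)}

print(numa_affinity_entry(0))  # {'numa.nodeaffinity': '0'}
```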
Additional VM configuration recommendations appear in the following table.
Table: VM Configuration Recommendations
| Settings | Recommended Configuration |
|---|---|
| vCPU | Minimum 4 vCPUs required. Where only four physical cores are available, configure the VM with only 3 vCPUs. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance. Keep the Resource Allocation setting marked as 'unlimited', with the CPU frequency set to the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM. Note: The 3 vCPU configuration is supported only from the SBC 4.2.4 release. Note: VMDirectPath mode is supported for PKT ports only with 4 or more vCPUs. Warning: In Direct I/O, if you want to be notified of link detection failure using LinkMonitor, use a server with more than 4 vCPUs. Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters. |
| vRAM | Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for supported call capacities with different configuration limits. |
| Virtual Hard Disk | Set the virtual hard disk size to 100 GB or more (based on requirements for retaining CDRs, logs, etc. for a number of days). Use thick provisioning (eager zeroed). The hard disk size cannot be changed once the SBC SWe software is installed. |
| vNICs | Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1). |
| vSwitch settings | Use four different vSwitches for the four vNICs on the VM; this ensures the various traffic is physically separated on the SBC. Note: The same physical NIC port cannot be associated with different vSwitches. Use four different virtual networking labels, each with different VLANs or subnets. Always run active and standby VMs on different physical servers. Disable VM logging. |
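The vCPU row above ties the CPU reservation to the physical clock: the physical processor speed multiplied by the number of assigned vCPUs. A quick sketch of that arithmetic (the helper and the clock-speed figure are illustrative only):

```python
def cpu_reservation_mhz(core_speed_mhz, vcpus):
    """CPU frequency reservation: physical core speed times assigned vCPUs."""
    if vcpus < 3:
        raise ValueError("at least 3 vCPUs are required (4 or more recommended)")
    return core_speed_mhz * vcpus

# A 2600 MHz core with the minimum recommended 4 vCPUs:
print(cpu_reservation_mhz(2600, 4))  # 10400
```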
Note: Make sure the processors, ESXi version, and VM configuration (vCPUs, vRAM, Virtual Hard Disk, vNICs, and vSwitch settings) are identical for an SBC SWe HA pair.

Note: Make sure the BIOS and ESXi settings and recommendations are not changed once they are applied on the server.
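The HA-pair identity rule in the note above can be checked mechanically. This sketch compares two hypothetical configuration dicts field by field; the field names are illustrative, not an SBC SWe or vSphere schema:

```python
# Hypothetical check that two VMs of an HA pair match on the settings
# required to be identical. Field names are illustrative.

HA_KEYS = ("processor_model", "esxi_version", "vcpus",
           "vram_gib", "disk_gib", "vnics", "vswitches")

def ha_pair_mismatches(active, standby):
    """List the HA-relevant fields on which the two VMs differ."""
    return [key for key in HA_KEYS if active.get(key) != standby.get(key)]

active = {"processor_model": "Xeon E5-2640", "esxi_version": "6.5",
          "vcpus": 4, "vram_gib": 24, "disk_gib": 100,
          "vnics": 4, "vswitches": 4}
standby = dict(active, vram_gib=16)
print(ha_pair_mismatches(active, standby))  # ['vram_gib']
```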
(Multiexcerpt include: _vlan_sriov_disable_dcb from page _VLAN_SRIOV_Disable_DCB)