
...

To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements:

 

Warning

The recommended hardware and software settings are intended to ensure optimum SBC SWe stability and performance. If the recommended settings are not used, the SBC SWe system may not behave as expected.

Server Hardware Requirements

The following table lists the server hardware requirements:

...

Configuration / Requirement

Processor

Intel Xeon processors (Nehalem micro-architecture or above)

Note

Sonus recommends using Westmere (or newer) processors for better SRTP performance. These processors have the AES-NI instruction set for performing cryptographic operations in hardware.

Note

The supported CPU Family Number is 6, and the CPU Model Number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information. A host verification sketch is provided after this table.

RAM

Minimum 24 GB

Hard Disk

Minimum 500 GB

Network Interface Cards (NICs)

Minimum 4 NICs, if physical NIC redundancy is not required.

Otherwise, 8 NICs (preferably with SR-IOV capability to support future SWe optimizations).

Note

Make sure the NIC has multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems (also covered in the verification sketch after this table).

Note

Only the Intel I350 Ethernet adapter is supported for configuration as a VMDirectPath I/O pass-through device.

Ports

Number of ports allowed:

  • 1 Management port
  • 1 HA port
  • 2 Media ports
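The CPU and NIC capabilities called out in the notes above can be checked before installation. The following is a minimal sketch, assuming a Linux shell on the target server (for example, a live image booted before the hypervisor is installed) and a NIC named eth0; the interface name is a placeholder and must be adjusted for your hardware.

Code Block

    # Sketch: verify CPU family/model, AES-NI support, and NIC multi-queue capability.
    # Assumes a Linux shell on the target host; "eth0" is a placeholder interface name.

    grep -m1 "model name" /proc/cpuinfo                 # Intel Xeon model string
    grep -m1 "cpu family" /proc/cpuinfo                 # expected: 6
    grep -m1 -E "^model[[:space:]]*:" /proc/cpuinfo     # expected: model number newer than 26
    grep -qw aes /proc/cpuinfo && echo "AES-NI supported" || echo "AES-NI not found"

    # Multi-queue support: a "Combined" (or RX/TX) channel count greater than 1
    # indicates the NIC can scale queues across CPUs.
    ethtool -l eth0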

...

As an example, the recommended BIOS settings for HP DL380p Gen8 servers are shown below. For the BIOS settings of other servers, refer to the respective vendor's website. A sketch for verifying the virtualization-related settings (SR-IOV and Intel VT-d) follows the table.

 

Table: BIOS Setting Recommendations for HP DL380p Gen8 Server

BIOS Parameter | Recommended Setting | Default Value
HP Power Profile | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Maximum Cooling | Optimal Cooling
HW Prefetchers | Disabled | Enabled
Adjacent Sector Prefetcher | Disabled | Enabled
Processor Power and Utilization Monitoring | Disabled | Enabled
Memory Pre-Failure Notification | Disabled | Enabled
Memory Refresh Rate | 1x Refresh | 2x Refresh
Data Direct I/O | Enabled | Disabled
SR-IOV | Enabled | Disabled
Intel® VT-d | Enabled | Disabled
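After applying the BIOS settings, you can confirm that Intel VT-d is active and that the NICs expose SR-IOV capability. This is a minimal sketch, assuming a Linux shell on the host with pciutils installed; it is not an exhaustive check of every setting in the table above.

Code Block

    # Sketch: confirm VT-d (IOMMU/DMAR) is enabled and that a NIC advertises SR-IOV.
    # Run as root from a Linux shell on the host.
    dmesg | grep -i -e dmar -e iommu        # DMAR/IOMMU messages indicate VT-d is active
    lspci -vvv | grep -i -A1 "sr-iov"       # lists SR-IOV capability on supporting devices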

...

Settings / Recommended Configuration

vCPU

A minimum of 4 vCPUs is required. In cases where only four physical cores are available, you must configure the VM with only 3 vCPUs. Any number of vCPUs may be configured depending upon the call capacity requirements.

  • Keep the Resource Allocation setting marked as 'unlimited', with the CPU frequency set to the physical processor CPU speed multiplied by the number of vCPUs assigned to the VM.

Note

The 3 vCPU configuration is supported only from the SBC 4.2.4 release onward.

Note

VMDirectPath mode is supported only for PKT ports when the VM has 4 or more vCPUs.

vRAM

Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on the VM capacity requirement.
For capacity and performance related information, see the VMware Hypervisor call capacity and configuration requirements. Refer to the benchmarking data for supported call capacities with different configuration limits.

Virtual Hard Disk

Set the virtual hard disk size to 100 GB or more (based on the requirements for retaining CDRs, logs, and so on, for the desired number of days).

  • Use Thick provisioning (eager zeroed).
  • The hard disk size cannot be changed once the SBC SWe software is installed.

vNICs

Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1).

  • Use only the VMXNET3 driver.
  • Always use the automatic MAC address assignment option while creating vNICs.
  • Associate each vNIC with a separate vSwitch.
  • Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level.

vSwitch settings

  • Use four different vSwitches, one for each vNIC on the SBC SWe VM. This ensures that the different traffic types are physically separated on the SBC. (A configuration sketch is provided after this table.)
  • Assign 1 physical NIC port (1 Gbps) to each vSwitch if physical NIC redundancy is not needed; otherwise, assign 2 physical NIC ports (in active-standby mode using the NIC teaming feature) to each vSwitch.

    Note

    The same physical NIC port cannot be associated with different vSwitches.

  • Use four different virtual networking labels, each with different VLANs or subnets.
  • Always run the active and standby SBC SWe VMs on different physical servers.
  • Disable VM logging.
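The vSwitch layout described above can be created from the ESXi command line as well as from the vSphere client. The following is a minimal sketch, assuming an ESXi host with uplinks vmnic0 through vmnic3; the vSwitch and port group names (vSwitch-MGMT, PG-MGMT, and so on) and the uplink assignments are illustrative placeholders, and the second uplink per vSwitch needed for NIC teaming is omitted.

Code Block

    # Sketch: one vSwitch and port group per SBC SWe vNIC (MGMT, HA, PKT0, PKT1).
    # vSwitch/port group names and vmnic numbers are illustrative placeholders.
    for ROLE in MGMT HA PKT0 PKT1; do
        esxcli network vswitch standard add --vswitch-name "vSwitch-${ROLE}"
        esxcli network vswitch standard portgroup add \
            --portgroup-name "PG-${ROLE}" --vswitch-name "vSwitch-${ROLE}"
    done

    # Attach one physical uplink per vSwitch (add a second uplink in active-standby
    # mode to each vSwitch if physical NIC redundancy is required).
    esxcli network vswitch standard uplink add --uplink-name vmnic0 --vswitch-name vSwitch-MGMT
    esxcli network vswitch standard uplink add --uplink-name vmnic1 --vswitch-name vSwitch-HA
    esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch-PKT0
    esxcli network vswitch standard uplink add --uplink-name vmnic3 --vswitch-name vSwitch-PKT1

Each vNIC on the SBC SWe VM is then attached to the corresponding port group. Inside the guest, the driver in use can be confirmed with ethtool -i on the interface (the expected driver is vmxnet3).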

...