
This section describes the SBC SWe hardware and software requirements and recommendations.

To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements.

Warning: The recommended hardware and software settings are intended to ensure optimum SBC SWe stability and performance. If the recommended settings are not used, the SBC SWe system may not behave as expected.


SBC SWe for VMware – Server Hardware Requirements


The following table lists the server hardware requirements:

Table: Server Hardware Requirements

 

Configuration | Requirement

Processor | Intel Xeon processors (Nehalem microarchitecture or later) with 6 or more cores (the processors should support hyper-threading).

Note: Ribbon recommends using Westmere (or newer) processors for better SRTP performance. These processors have the AES-NI instruction set for performing cryptographic operations in hardware.

Note: The supported CPU family number is 6, and the CPU model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information.

Note: ESXi 6.5 and later releases require approximately two physical cores to be set aside for hypervisor functionality. Plan the number of VMs hosted on a server accordingly.

RAM | Minimum 24 GB

Hard Disk | Minimum 500 GB

Network Interface Cards (NICs) | Minimum 4 NICs if physical NIC redundancy is not required; otherwise, 8 NICs (preferably with SR-IOV capability to support SWe optimizations).

Notes:
  • Make sure the NIC has multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems.
  • The Intel I350, x540, and 82599 Ethernet adapters are supported for configuration as SR-IOV and DirectPath I/O pass-through devices. SR-IOV is supported only with 10 Gbps interfaces (x540/82599).
  • The Enterprise Plus license is required for SR-IOV.
  • Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled. x710 NICs are not supported with Direct I/O or on KVM.

Ports | Number of ports allowed:

  • 1 management port
  • 1 HA port
  • 2 media ports
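The processor requirements above (CPU family 6, model newer than 26, AES-NI available for SRTP) can be sanity-checked on a Linux host by parsing /proc/cpuinfo. The following is an illustrative sketch, not a Ribbon-provided tool; the function name and sample data are hypothetical:

```python
import re

def check_cpu_eligibility(cpuinfo_text):
    """Check the processor requirements described above against the
    contents of /proc/cpuinfo: family 6, model newer than 26, and the
    AES-NI flag (present on Westmere and later) for hardware crypto."""
    family = int(re.search(r"cpu family\s*:\s*(\d+)", cpuinfo_text).group(1))
    model = int(re.search(r"^model\s*:\s*(\d+)", cpuinfo_text, re.M).group(1))
    flags = re.search(r"flags\s*:\s*(.*)", cpuinfo_text).group(1).split()
    return {
        "family_ok": family == 6,   # supported CPU family number is 6
        "model_ok": model > 26,     # model number must be newer than 26
        "aes_ni": "aes" in flags,   # AES-NI for SRTP performance
    }

# Example: a Westmere-class Xeon (family 6, model 44) with AES-NI
sample = """\
cpu family\t: 6
model\t\t: 44
model name\t: Intel(R) Xeon(R) CPU X5650
flags\t\t: fpu vme aes sse4_2 ht
"""
result = check_cpu_eligibility(sample)
```

On a real host, pass in `open("/proc/cpuinfo").read()` instead of the sample text.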

 

 

Warning: The SBC SWe software only runs on platforms using Intel processors. Platforms using AMD processors are not supported.

BIOS Setting Recommendations

Ribbon recommends the following BIOS settings for optimum performance:

Table: Recommended BIOS Settings for Optimum Performance

BIOS Parameter | Recommended Setting | Details
Intel VT-x (Virtualization Technology) | Enabled | For hardware virtualization
Intel VT-d (Directed I/O) | Enabled | If available
Intel Hyper-Threading | Enabled | –
Intel Turbo Boost | Enabled | –
CPU power management | Maximum Performance | –


For example, the BIOS settings for HP DL380p Gen8 servers are shown below. For the BIOS settings of other servers, refer to the respective vendor website.

Table: BIOS Setting Recommendations for HP DL380p Gen8 Server

BIOS Parameter | Recommended Setting | Default Value
HP Power Profile | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Maximum Cooling | Optimal Cooling
HW Prefetchers | Disabled | Enabled
Adjacent Sector Prefetcher | Disabled | Enabled
Processor Power and Utilization Monitoring | Disabled | Enabled
Memory Pre-Failure Notification | Disabled | Enabled
Memory Refresh Rate | 1x Refresh | 2x Refresh
Data Direct I/O | Enabled | Disabled
SR-IOV | Enabled | Disabled
Intel VT-d | Enabled | Disabled


SBC SWe for VMware – Software Requirements

The following are the VMware ESXi and SBC SWe software requirements:

VMware ESXi Requirements

Table: VMware ESXi Requirements

Software | Version | Tested and Qualified Version | For More Information
vSphere ESXi | 5.1 or above | VMware 6.0 tested with VM version 11; VMware 6.5 tested with VM version 13 | Customized ESXi images for various server platforms are available on the VMware and hardware platform vendor sites. The customized image ensures that all the required drivers for network and storage controllers are available to run the ESXi server, and most customized ESXi images come with customized management software to manage the server running the ESXi software. Customized ESXi images for HP ProLiant and IBM servers are available at:
vSphere Client | 5.1 or above | – | VMware Knowledge Base
vCenter Server | 5.1 or above | – | vCenter Server
Note: The VMware Standard license is sufficient for the SBC SWe unless the PKT interfaces for the SBC SWe need to be configured in Direct I/O pass-through mode; in that case, the Enterprise Plus version of the VMware ESXi software is required. The Enterprise Plus license is required for SR-IOV.

Third-Party References:

Downloading the SBC SWe Software Package

For more information, refer to Downloading the SBC SWe Software Package.


Recommendations for Optimum Performance

The following general recommendations apply to all platforms where SBC SWe is deployed:

  • The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
  • For best performance, deploy only a single instance on a single NUMA. Performance degradation occurs if you host more than one instance on a NUMA or if a single instance spans multiple NUMAs.
  • Make sure that the physical NICs associated with an instance are connected to the same NUMA/socket where the instance is hosted. In the case of a dual NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA and the associated NICs of each of the instances connected to their respective NUMAs.
  • To optimize performance, populate memory equally on both NUMA nodes. For example, if a dual-NUMA server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node.
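The sizing rules above can be sketched as a small planning helper. This is illustrative only; `numa_plan` is a hypothetical name, and the even-vCPU check follows the general recommendation above (the special 3-vCPU case is covered separately in the VM configuration table):

```python
def numa_plan(total_ram_gib, numa_nodes, vcpus_per_instance):
    """Apply the NUMA recommendations above: memory split equally across
    NUMA nodes, one SBC SWe instance per node, and an even vCPU count."""
    if vcpus_per_instance % 2 != 0:
        raise ValueError("vCPU count should be an even number (4, 6, 8, ...)")
    return {
        # e.g. 128 GiB total on a dual-NUMA server -> 64 GiB per node
        "ram_per_node_gib": total_ram_gib // numa_nodes,
        # a single instance per NUMA node, to avoid spanning or sharing
        "max_instances": numa_nodes,
    }
```

For example, `numa_plan(128, 2, 4)` reproduces the 64 GiB-per-node split described above.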

Following are the recommended VMware ESXi and SBC SWe virtual machine (VM) configurations.

General ESXi Recommendations

Following are recommended VMware ESXi configurations:

  • Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) to run on server platform, including resources needed by ESXi itself.
  • Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of that VM as well as other virtual machines sharing the same host.
  • Under BIOS settings, disconnect or disable any physical hardware devices (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) that you will not be using, to free up interrupt and CPU resources.
  • Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB of RAM and up to 32 vCPUs.

ESXi Host Configuration Parameters

Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page (see the figure below) before installing the SBC SWe.

Table: ESXi Advanced Settings

 

ESXi Parameter | ESXi 5.1 Recommended/Default | ESXi 5.5 Recommended/Default | ESXi 6.0 Recommended/Default | ESXi 6.5 Recommended/Default
Cpu.CoschedCrossCall | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1
Cpu.CreditAgePeriod | 500 / 3000 | 1000 / 1000 | 1000 / 1000 | 1000 / 3000
DataMover.HardwareAcceleratedInit | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1
DataMover.HardwareAcceleratedMove | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1
Disk.SchedNumReqOutstanding | 256 / 32 | n/a | n/a | n/a
Irq.BestVcpuRouting | 1 / 0 | 1 / 0 | 1 / 0 | 1 / 0
Mem.BalancePeriod | 0 / 15 | 0 / 15 | n/a | n/a
Mem.SamplePeriod | 0 / 60 | 0 / 60 | n/a | n/a
Mem.ShareScanGHz | 0 / 4 | 0 / 4 | 0 / 4 | 0 / 4
Mem.VMOverheadGrowthLimit | 0 / 4294967295 | 0 / 4294967295 | 0 / 4294967295 | 0 / 4294967295
Misc.TimerMaxHardPeriod | 2000 / 100000 | 2000 / 100000 | 2000 / 100000 | 2000 / 500000
Misc.TimerMinHardPeriod | 100 / 100 | n/a | n/a | n/a
Net.AllowPT | 1 / 0 | 1 / 0 | 1 / 0 | 1 / 1
Net.MaxNetifRxQueueLen | 500 / 100 | 500 / 100 | 500 / 100 | 500 / n/a
Net.MaxNetifTxQueueLen | 1000 / 500 | 1000 / 500 | 1000 / 500 | 1000 / 2000
Net.NetTxCompletionWorldlet | 0 / 1 | 0 / 1 | n/a | n/a
Net.NetTxWordlet | 0 / 2 | 1 / 2 | 1 / 2 | 1 / n/a
Numa.LTermFairnessInterval | 0 / 5 | 0 / 5 | 0 / 5 | 0 / 5
Numa.MonMigEnable | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1
Numa.PageMigEnable | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1
Numa.PreferHT | 1 / 0 | 1 / 0 | 1 / 0 | 1 / 0
Numa.RebalancePeriod | 60000 / 2000 | 60000 / 2000 | 60000 / 2000 | 60000 / 2000
Numa.SwapInterval | 1 / 3 | 1 / 3 | 1 / 3 | 1 / 3
Numa.SwapLoadEnable | 0 / 1 | 0 / 1 | 0 / 1 | 0 / 1

Figure: ESXi Parameters
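As an alternative to the vSphere client, advanced options can also be set from the ESXi shell with `esxcli system settings advanced set`. The sketch below renders a subset of the ESXi 6.5 recommended values from the table as esxcli command lines; it is illustrative only, and the option paths assume the usual `Section.Option` to `/Section/Option` mapping:

```python
# A subset of the ESXi 6.5 recommended values from the table above.
ESXI65_RECOMMENDED = {
    "/Cpu/CoschedCrossCall": 0,
    "/Numa/PreferHT": 1,
    "/Numa/RebalancePeriod": 60000,
    "/Net/MaxNetifTxQueueLen": 1000,
    "/Misc/TimerMaxHardPeriod": 2000,
}

def esxcli_commands(settings):
    """Render each advanced option as an `esxcli system settings
    advanced set` command, to be run in the ESXi shell."""
    return [f"esxcli system settings advanced set -o {path} -i {value}"
            for path, value in settings.items()]
```

Review each generated command against the table before applying it to a host.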

VM Configuration Recommendations

The following configurations are recommended to improve performance.

Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:

Figure: VM Options Window

 

Limit instances to using cores from a single NUMA node by configuring the numa.nodeaffinity option according to the number of the NUMA node the VM is on. Access this option using the path VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0.

Figure: Configuration Parameters Window
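The two settings above correspond to advanced configuration parameters that are stored as key/value pairs. The sketch below builds them in Python; the `sched.cpu.latencySensitivity` key is an assumption based on common VMware usage for the Latency Sensitivity drop-down, and the helper name is hypothetical:

```python
def vm_advanced_params(numa_node):
    """Build the advanced configuration parameters discussed above as
    key/value pairs: Latency Sensitivity set to High, and the VM pinned
    to a single NUMA node via numa.nodeaffinity."""
    return {
        # assumed key behind the Latency Sensitivity drop-down
        "sched.cpu.latencySensitivity": "high",
        # the NUMA node number the VM is hosted on (0 in the figure above)
        "numa.nodeaffinity": str(numa_node),
    }
```

For example, `vm_advanced_params(0)` produces the two entries shown in the Configuration Parameters window above.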

Additional VM configuration recommendations appear in the following table.

Table: VM Configuration Recommendations

Note: In Direct I/O mode, if you want to be notified of link detection failures using LinkMonitor, use a server with more than 4 vCPUs.

Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters.

Settings | Recommended Configuration

vCPU | Minimum 4 vCPUs are required. In cases where only four physical cores are available, configure the VM with only 3 vCPUs. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance. Keep the Resource Allocation setting marked as 'unlimited'.

Note: The 3 vCPUs configuration is supported only from the SBC 4.2.4 release.

Note: VMDirectPath mode is supported for PKT ports only when 4 or more vCPUs are configured.

vRAM | Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for the supported call capacities with different configuration limits.

Virtual Hard Disk | Set the virtual hard disk size to 100 GB or more (based on the requirements for retaining CDRs, logs, etc. for a number of days).

  • Use thick provisioning (eager zeroed).
  • The hard disk size cannot be changed once the SBC SWe software is installed.

vNICs | Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1).

  • Use only the VMXNET3 driver.
  • Always use the automatic MAC address assignment option when creating vNICs.
  • Associate each vNIC with a separate vSwitch.
  • Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level.

vSwitch settings |

  • Use four different vSwitches, one for each vNIC on the SBC SWe VM. This ensures the various traffic types are physically separated on the SBC.
  • Assign 1 physical NIC port (1 Gbps) to each vSwitch if physical NIC redundancy is not needed; otherwise, assign 2 physical NIC ports (in active-standby mode using the NIC teaming feature) to each vSwitch.

    Note: The same physical NIC port cannot be associated with different vSwitches.

  • Use four different virtual networking labels, each with a different VLAN or subnet.
  • Always run the active and standby SBC SWe VMs on different physical servers.
  • Disable VM logging.

Note: Make sure the processors, ESXi version, and VM configuration (vCPUs, vRAM, virtual hard disk, vNICs, and vSwitch settings) are identical for both instances of an SBC SWe HA pair.

Note: Do not change the BIOS and ESXi settings and recommendations once they are applied on the server.
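The vNIC and vSwitch rules above (one vSwitch per vNIC, and no physical NIC port shared between vSwitches) can be expressed as a small validation sketch; the function and the example names are hypothetical:

```python
def validate_vswitch_mapping(vnic_to_vswitch, vswitch_to_pnics):
    """Check the vSwitch recommendations above: each vNIC (MGMT, HA,
    PKT0, PKT1) uses its own vSwitch, and no physical NIC port is
    associated with more than one vSwitch."""
    if len(set(vnic_to_vswitch.values())) != len(vnic_to_vswitch):
        raise ValueError("each vNIC must be associated with its own vSwitch")
    seen = set()
    for pnics in vswitch_to_pnics.values():
        for pnic in pnics:
            if pnic in seen:
                raise ValueError(f"{pnic} is associated with more than one vSwitch")
            seen.add(pnic)
    return True

# Example: 4 vNICs on 4 vSwitches, 2 physical ports each (NIC teaming)
vnics = {"MGMT": "vSwitch0", "HA": "vSwitch1",
         "PKT0": "vSwitch2", "PKT1": "vSwitch3"}
pnics = {"vSwitch0": ["vmnic0", "vmnic4"], "vSwitch1": ["vmnic1", "vmnic5"],
         "vSwitch2": ["vmnic2", "vmnic6"], "vSwitch3": ["vmnic3", "vmnic7"]}
```

The example layout gives each traffic type its own vSwitch with an active-standby pair of physical ports, matching the redundancy case above.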


Note: On a server with a single 4-core processor, ESXi 6.5 and later releases are not supported, since there is not enough CPU available for launching a 3-vCPU SBC SWe VM after accounting for the overhead required by ESXi.
