
This section describes SBC SWe hardware and software requirements and recommendations.

To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements.

Warning: The recommended hardware and software settings are intended to ensure optimum SBC SWe stability and performance. If the recommended settings are not used, the SBC SWe system may not behave as expected.


SBC SWe for VMware – Server Hardware Requirements

The following table lists the server hardware requirements:

Table: Server Hardware Requirements



Processor
  Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading).
  Note: Ribbon recommends using Westmere (or newer) processors for better SRTP performance. These processors have the AES-NI instruction set for performing cryptographic operations in hardware.
  Note: The supported CPU Family number is 6 and the CPU Model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information.
  Note: ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality. The number of VMs which can be hosted on a server must be planned for accordingly.

RAM
  Minimum 24 GB

Hard Disk
  Minimum 500 GB

Network Interface Cards (NICs)
  Minimum 4 NICs, if physical NIC redundancy is not required. Otherwise, 8 NICs (preferably with SR-IOV capability to support SWe optimizations).
  Note: Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems.
  Note:
  • The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuring as SR-IOV and DirectPath I/O pass-through devices.
  • SR-IOV is supported only with 10 Gbps interfaces (x540/82599).
  • The VMware Enterprise Plus license is required for SR-IOV.
  Note: Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled. x710 NICs are not supported on Direct I/O or KVM.

Ports
  Number of ports allowed:
  • 1 Management port
  • 1 HA port
  • 2 Media ports

 

 



Warning: The SBC SWe software only runs on platforms using Intel processors. Platforms using AMD processors are not supported.
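The processor requirements above (CPU family 6, model newer than 26, AES-NI for SRTP) can be verified before deployment. The following is a minimal sketch that parses /proc/cpuinfo-style text; the function names are illustrative, not part of any SBC tooling.

```python
import re

def cpu_family_model_ok(cpuinfo_text):
    """True if the CPU reports family 6 and a model number newer than 26."""
    family = re.search(r"cpu family\s*:\s*(\d+)", cpuinfo_text)
    model = re.search(r"^model\s*:\s*(\d+)", cpuinfo_text, re.MULTILINE)
    if not (family and model):
        return False
    return int(family.group(1)) == 6 and int(model.group(1)) > 26

def has_aes_ni(cpuinfo_text):
    """True if the CPU advertises the AES-NI flag (hardware SRTP crypto)."""
    flags = re.search(r"^flags\s*:\s*(.+)$", cpuinfo_text, re.MULTILINE)
    return bool(flags) and "aes" in flags.group(1).split()
```

Run against the contents of /proc/cpuinfo on the candidate host; a Westmere-class Xeon (family 6, model 44) passes both checks.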



BIOS Setting Recommendations

Ribbon recommends the following BIOS settings for optimum performance:

Table: Recommended BIOS Settings for Optimum Performance

BIOS Parameter                          | Recommended Setting | Details
Intel VT-x (Virtualization Technology)  | Enabled             | For hardware virtualization
Intel VT-d (Directed I/O)               | Enabled             | If available
Intel Hyper-Threading                   | Enabled             |
Intel Turbo Boost                       | Enabled             |
CPU power management                    | Maximum Performance |
 



For example, the BIOS settings shown below are recommended for HP DL380p Gen8 servers. For BIOS settings of other servers, refer to the respective vendor's website.

Table: BIOS Setting Recommendations for HP DL380p Gen8 Server

BIOS Parameter                             | Recommended Setting  | Default Value
HP Power Profile                           | Maximum Performance  | Balanced Power and Performance
Thermal Configuration                      | Maximum Cooling      | Optimal Cooling
HW Prefetchers                             | Disabled             | Enabled
Adjacent Sector Prefetcher                 | Disabled             | Enabled
Processor Power and Utilization Monitoring | Disabled             | Enabled
Memory Pre-Failure Notification            | Disabled             | Enabled
Memory Refresh Rate                        | 1x Refresh           | 2x Refresh
Data Direct I/O                            | Enabled              | Disabled
SR-IOV                                     | Enabled              | Disabled
Intel® VT-d                                | Enabled              | Disabled



SBC SWe for VMware – Software Requirements

The following are the software requirements for VMware ESXi environments:

VMware ESXi Requirements

Table: VMware ESXi Requirements

vSphere ESXi
  Version: 6.0 or above
  Tested and qualified versions:
  • VMware 6.0 tested with VM version 11
  • VMware 6.5 tested with VM version 13
  For more information: Customized ESXi images for various server platforms are available on VMware and hardware platform vendor sites (under Download images). These ensure that all the required drivers for network and storage controllers are available to run the ESXi server.

vSphere Client
  Version: 5.1 or above
  For more information: VMware Knowledge Base

vCenter Server
  Version: 5.1 or above


Note: The VMware Enterprise Plus license is required for SR-IOV.

Third-Party References:


Downloading the SBC SWe Software Package

For more information, refer to Downloading the SBC SWe Software Package.

Recommendations for Optimum Performance

The following general recommendations apply to all platforms where SBC SWe is deployed:

• The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
• For best performance, deploy only a single instance on a single NUMA node. Performance degradation occurs if you host more than one instance on a NUMA node or if a single instance spans multiple NUMA nodes.
• Make sure that the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. In the case of a dual-NUMA host, ideally two instances should be hosted, with each instance on a separate NUMA node and the associated NICs of each instance connected to their respective NUMA nodes.
• To optimize performance, configure memory equally on both NUMA nodes. For example, if a dual-NUMA-node server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node.
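The sizing rules above can be expressed as a simple pre-deployment checklist. This is an illustrative sketch only; the function and parameter names are not SBC configuration keys.

```python
def check_instance_plan(vcpus, numa_nodes_used, ram_gib_per_node):
    """Return a list of violations of the platform recommendations.

    vcpus            -- number of vCPUs planned for the instance
    numa_nodes_used  -- how many NUMA nodes the instance spans
    ram_gib_per_node -- RAM (GiB) configured on each NUMA node of the host
    """
    problems = []
    if vcpus % 2 != 0:
        problems.append("vCPU count should be even (4, 6, 8, etc.)")
    if numa_nodes_used != 1:
        problems.append("an instance should not span multiple NUMA nodes")
    if len(set(ram_gib_per_node)) > 1:
        problems.append("RAM should be configured equally on each NUMA node")
    return problems
```

For example, a 6-vCPU instance confined to one NUMA node on a host with 64 GiB per node passes all three checks.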
     

The following are recommended VMware ESXi and SBC SWe virtual machine (VM) configurations.

General ESXi Recommendations

The following are recommendations for VMware ESXi configurations:

• Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) to run on the server platform, including resources needed by ESXi itself.
• Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of that VM as well as other virtual machines sharing the same host.
• Under BIOS settings, disconnect or disable any physical hardware devices (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) that you will not be using, to free up CPU resources.
• Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB RAM, up to 32 vCPUs, etc.
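The virtual hardware version of an existing VM can be confirmed from its VMX file. The sketch below reads the standard `virtualHW.version` VMX key; the parsing itself is a minimal illustration, not VMware tooling.

```python
def vmx_hardware_version(vmx_text):
    """Return the virtual hardware version from VMX-format text, or None."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "virtualHW.version":
            # VMX values are quoted strings, e.g. virtualHW.version = "11"
            return int(value.strip().strip('"'))
    return None
```

A VM created on ESXi 6.0 typically reports version 11; anything below 8 falls short of the recommendation above.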

ESXi Host Configuration Parameters

Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page (see figure below) before installing the SBC SWe.

Table: ESXi Advanced Settings


     


ESXi Parameter              | ESXi 5.1 Rec/Default | ESXi 5.5 Rec/Default | ESXi 6.0 Rec/Default | ESXi 6.5 Rec/Default
Cpu.CoschedCrossCall        | 0 / 1                | 0 / 1                | 0 / 1                | 0 / 1
Cpu.CreditAgePeriod         | 500 / 3000           | 1000 / 1000          | 1000 / 1000          | 1000 / 3000
DataMover.HardwareAcceleratedInit | 0 / 1          | 0 / 1                | 0 / 1                | 0 / 1
DataMover.HardwareAcceleratedMove | 0 / 1          | 0 / 1                | 0 / 1                | 0 / 1
Disk.SchedNumReqOutstanding | 256 / 32             | n/a / n/a            | n/a / n/a            | n/a / n/a
Irq.BestVcpuRouting         | 1 / 0                | 1 / 0                | 1 / 0                | 1 / 0
Mem.BalancePeriod           | 0 / 15               | 0 / 15               | n/a / n/a            | n/a / n/a
Mem.SamplePeriod            | 0 / 60               | 0 / 60               | n/a / n/a            | n/a / n/a
Mem.ShareScanGHz            | 0 / 4                | 0 / 4                | 0 / 4                | 0 / 4
Mem.VMOverheadGrowthLimit   | 0 / 4294967295       | 0 / 4294967295       | 0 / 4294967295       | 0 / 4294967295
Misc.TimerMaxHardPeriod     | 2000 / 100000        | 2000 / 100000        | 2000 / 100000        | 2000 / 500000
Misc.TimerMinHardPeriod     | 100 / 100            | n/a / n/a            | n/a / n/a            | n/a / n/a
Net.AllowPT                 | 1 / 0                | 1 / 0                | 1 / 0                | 1 / 1
Net.MaxNetifRxQueueLen      | 500 / 100            | 500 / 100            | 500 / 100            | 500 / n/a
Net.MaxNetifTxQueueLen      | 1000 / 500           | 1000 / 500           | 1000 / 500           | 1000 / 2000
Net.NetTxCompletionWorldlet | 0 / 1                | 0 / 1                | n/a / n/a            | n/a / n/a
Net.NetTxWordlet            | 0 / 2                | 1 / 2                | 1 / 2                | 1 / n/a
Numa.LTermFairnessInterval  | 0 / 5                | 0 / 5                | 0 / 5                | 0 / 5
Numa.MonMigEnable           | 0 / 1                | 0 / 1                | 0 / 1                | 0 / 1
Numa.PageMigEnable          | 0 / 1                | 0 / 1                | 0 / 1                | 0 / 1
Numa.PreferHT               | 1 / 0                | 1 / 0                | 1 / 0                | 1 / 0
Numa.RebalancePeriod        | 60000 / 2000         | 60000 / 2000         | 60000 / 2000         | 60000 / 2000
Numa.SwapInterval           | 1 / 3                | 1 / 3                | 1 / 3                | 1 / 3
Numa.SwapLoadEnable         | 0 / 1                | 0 / 1                | 0 / 1                | 0 / 1




Figure: ESXi Parameters
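Besides the vSphere client, advanced options can be set from the ESXi shell with `esxcli system settings advanced set`, where an option such as Numa.PreferHT maps to the option path /Numa/PreferHT. The sketch below generates the commands for a small, illustrative subset of the recommended values; it builds the command strings only and does not execute anything.

```python
# Subset of the recommended advanced settings (illustrative selection).
RECOMMENDED_6_5 = {
    "Cpu.CoschedCrossCall": 0,
    "Mem.ShareScanGHz": 0,
    "Net.MaxNetifTxQueueLen": 1000,
    "Numa.PreferHT": 1,
    "Numa.RebalancePeriod": 60000,
}

def esxcli_commands(settings):
    """Map dotted option names to esxcli option paths and emit set commands."""
    return [
        "esxcli system settings advanced set -o /{} -i {}".format(
            name.replace(".", "/"), value)
        for name, value in sorted(settings.items())
    ]
```

For example, the entry for Numa.PreferHT yields `esxcli system settings advanced set -o /Numa/PreferHT -i 1`.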

VM Configuration Recommendations

The following configurations are recommended to improve performance.

Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:

Figure: VM Options Window

     


Limit instances to using cores from a single NUMA node by configuring the numa.nodeaffinity option according to the number of the NUMA node the VM is on. Access this option using the path: VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0.

Figure: Configuration Parameters Window

Additional VM configuration recommendations appear in the following table.

Table: VM Configuration Recommendations



vCPU
  Minimum 4 vCPUs required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance.
  Keep the Resource Allocation setting marked as 'unlimited'. Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters.

vRAM
  Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for supported call capacities with different configuration limits.

Virtual Hard Disk
  Set the virtual hard disk size to 100 GB or more (based on requirements for retaining CDRs, logs, etc. for a number of days).
  • Use Thick provisioning (eager zero).
  • The hard disk size cannot be changed once the SBC SWe software is installed.

vNICs
  Set the number of virtual NICs to 4 (1-MGMT, 1-HA, 1-PKT0 and 1-PKT1).
  • Use only the VMXNET3 driver.
  • Always use the automatic MAC address assignment option while creating vNICs.
  • Associate each vNIC with separate vSwitches.
  • Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level.

vSwitch settings
  • Use four different vSwitches for the four vNICs on the SBC SWe VM. This ensures various traffic is physically separated on the SBC.
  • Assign 1 physical NIC port (1 Gbps) to each vSwitch if physical NIC redundancy is not needed; otherwise assign 2 physical NIC ports (in active-standby mode using the NIC teaming feature) to each vSwitch.
    Note: The same physical NIC port cannot be associated with different vSwitches.
  • Use four different virtual networking labels, each with different VLANs or subnets.
  • Always run active and standby SBC SWe VMs on different physical servers.
  • Disable VM logging.

Note: Make sure the processors, ESXi version, and VM configuration (vCPUs, vRAM, virtual hard disk, vNICs, and vSwitch settings) are identical for an SBC SWe HA pair.
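A quick way to audit an HA pair is to diff the settings that must match. The sketch below compares two configuration summaries; the field names are illustrative, not SBC configuration keys.

```python
# Settings that must be identical across an SBC SWe HA pair (illustrative names).
HA_CRITICAL_FIELDS = ("processor", "esxi_version", "vcpus", "vram_gib",
                      "disk_gb", "vnics", "vswitches")

def ha_pair_mismatches(active, standby):
    """Return the fields that differ between the active and standby VMs."""
    return [f for f in HA_CRITICAL_FIELDS if active.get(f) != standby.get(f)]
```

An empty result means the pair is consistent; any returned field name points at a setting to reconcile before deploying.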


Note: Make sure that the BIOS and ESXi settings and recommendations are not changed once they are applied on the server.


