
This section describes SBC SWe hardware and software requirements and recommendations.

To install and configure the SBC SWe, make sure the Virtual Machine (VM) host meets the following recommended hardware, server platform, and software requirements.

Warning

The recommended hardware and software settings are intended to ensure optimum SBC SWe stability and performance. If the recommended settings are not used, the SBC SWe system may not behave as expected.

SBC SWe for VMware – Server Hardware Requirements

The following table lists the server hardware requirements:

Server Hardware Requirements

 
Processor
  Intel Xeon processors (Nehalem micro-architecture or above) with 6 or more cores; the processors should support hyper-threading.
  Note: Ribbon recommends using Westmere (or newer) processors for better SRTP performance. These processors have the AES-NI instruction set for performing cryptographic operations in hardware.
  Note: The supported CPU Family number is 6 and the CPU Model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information. (A verification sketch follows this table.)
  Note: ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality. Plan the number of VMs hosted on a server accordingly.

RAM
  Minimum 24 GB

Hard Disk
  Minimum 500 GB

Network Interface Cards (NICs)
  Minimum 4 NICs if physical NIC redundancy is not required; otherwise, 8 NICs (preferably with SR-IOV capability to support SWe optimizations).
  Note: Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems.
  Note: The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuration as SR-IOV and DirectPath I/O pass-through devices. SR-IOV is supported only with 10 Gbps interfaces (x540/82599). The VMware Enterprise Plus license is required for SR-IOV.
  Note: Intel x710 NICs are also supported on VMware (ESXi versions 6.5 and above) with SR-IOV enabled. x710 NICs are not supported with DirectPath I/O or on KVM.

Ports
  Number of ports allowed:
  • 1 Management port
  • 1 HA port
  • 2 Media ports
 

 

Warning

The SBC SWe software runs only on platforms using Intel processors. Platforms using AMD processors are not supported.

BIOS Setting Recommendations

Ribbon recommends the following BIOS settings for optimum performance:

Recommended BIOS Settings for Optimum Performance

BIOS Parameter                          Recommended Setting    Details
Intel VT-x (Virtualization Technology)  Enabled                For hardware virtualization
Intel VT-d (Directed I/O)               Enabled                If available
Intel Hyper-Threading                   Enabled
Intel Turbo Boost                       Enabled
CPU power management                    Maximum Performance

For example, the BIOS settings shown below are recommended for HP DL380p Gen8 servers. For BIOS settings of other servers, refer to the respective vendor's website.

BIOS Setting Recommendations for HP DL380p Gen8 Server

BIOS Parameter                              Recommended Setting    Default Value
HP Power Profile                            Maximum Performance    Balanced Power and Performance
Thermal Configuration                       Maximum Cooling        Optimal Cooling
HW Prefetchers                              Disabled               Enabled
Adjacent Sector Prefetcher                  Disabled               Enabled
Processor Power and Utilization Monitoring  Disabled               Enabled
Memory Pre-Failure Notification             Disabled               Enabled
Memory Refresh Rate                         1x Refresh             2x Refresh
Data Direct I/O                             Enabled                Disabled
SR-IOV                                      Enabled                Disabled
Intel® VT-d                                 Enabled                Disabled

SBC SWe for VMware – Software Requirements

The following are the SBC SWe software requirements for VMware ESXi environments:

VMware ESXi Requirements

Software: vSphere ESXi
  Version: 5.1 or above
  Tested and Qualified Version:
  • VMware 6.0 tested with VM version 11
  • VMware 6.5 tested with VM version 13
  For More Information:
  • Customized ESXi images for various server platforms are available on the VMware and hardware platform vendor sites.
    • These images ensure that all the drivers required for the network and storage controllers are available to run the ESXi server.
    • Most of the customized ESXi images come with customized management software to manage the server running the ESXi software.
    • Customized ESXi images for HP ProLiant and IBM servers are available at:

Software: vSphere Client
  Version: 5.1 or above
  For More Information: VMware Knowledge Base

Software: vCenter Server
  Version: 5.1 or above
  For More Information: vCenter Server

Note

The VMware Enterprise Plus license is required for SR-IOV.
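
To confirm which ESXi release a host is running before qualifying it against the table above, the version information can be read from the ESXi shell. A minimal sketch, assuming the Python interpreter bundled with ESXi:

    # Print the ESXi product, version, and build for comparison against the
    # software requirements table. Assumes it runs in the ESXi shell.
    import subprocess

    print(subprocess.check_output(["esxcli", "system", "version", "get"]).decode())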

Third-Party References:

Downloading the SBC SWe Software Package

For more information, refer to Downloading the SBC SWe Software Package.

Recommendations for Optimum Performance

The following general recommendations apply to all platforms where SBC SWe is deployed:

  • The number of vCPUs deployed on a system should be an even number (4, 6, 8, etc.).
  • For best performance, deploy only a single instance on a single NUMA node. Performance degrades if you host more than one instance on a NUMA node or if a single instance spans multiple NUMA nodes.
  • Make sure that the physical NICs associated with an instance are connected to the same NUMA node/socket where the instance is hosted. On a dual-NUMA host, ideally host two instances, each on a separate NUMA node, with the NICs associated with each instance connected to its respective NUMA node.
  • To optimize performance, populate memory equally on both NUMA nodes. For example, if a dual-NUMA-node server has a total of 128 GiB of RAM, configure 64 GiB of RAM on each NUMA node. (A planning sketch follows this list.)
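
The following is a small, purely illustrative planning helper (the function name and inputs are hypothetical, not part of any Ribbon or VMware tooling) that applies these rules: even vCPU counts, one instance per NUMA node, and RAM split equally across the nodes.

    # Illustrative capacity-planning helper; the name and inputs are hypothetical.
    def plan_numa_layout(numa_nodes, cores_per_node, total_ram_gib, vcpus_per_instance):
        if vcpus_per_instance % 2 != 0:
            raise ValueError("vCPU count should be an even number (4, 6, 8, ...)")
        # With hyper-threading enabled, each physical core exposes 2 logical CPUs.
        if vcpus_per_instance > cores_per_node * 2:
            raise ValueError("an instance must fit within a single NUMA node")
        return {
            "instances": numa_nodes,                          # one instance per NUMA node
            "ram_per_node_gib": total_ram_gib // numa_nodes,  # populate memory equally
            "vcpus_per_instance": vcpus_per_instance,
        }

    # Example: dual-socket host, 10 cores per socket, 128 GiB RAM, 8 vCPUs per instance.
    print(plan_numa_layout(numa_nodes=2, cores_per_node=10,
                           total_ram_gib=128, vcpus_per_instance=8))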

General ESXi Recommendations

The following are recommendations for VMware ESXi configurations:

  • Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) to run on the server platform, including the resources needed by ESXi itself.
  • Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of that VM as well as of other VMs sharing the same host.
  • In the BIOS settings, disconnect or disable any physical hardware devices that you will not be using (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) to free up CPU resources.
  • Use virtual hardware version 8 or above when creating new VMs (available starting in ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB of RAM and up to 32 vCPUs.

ESXi Host Configuration Parameters

Use the VMware vSphere client to configure the following ESXi host configuration parameters on the Advanced Settings page (see the figure below) before installing the SBC SWe. An esxcli-based sketch for applying these settings follows the table.

ESXi Advanced Settings

 

ESXi Parameters (values are shown as Recommended / Default for each ESXi version; n/a indicates the parameter does not apply to that release)

ESXi Parameter                      ESXi 5.1            ESXi 5.5            ESXi 6.0            ESXi 6.5
Cpu.CoschedCrossCall                0 / 1               0 / 1               0 / 1               0 / 1
Cpu.CreditAgePeriod                 500 / 3000          1000 / 1000         1000 / 1000         1000 / 3000
DataMover.HardwareAcceleratedInit   0 / 1               0 / 1               0 / 1               0 / 1
DataMover.HardwareAcceleratedMove   0 / 1               0 / 1               0 / 1               0 / 1
Disk.SchedNumReqOutstanding         256 / 32            n/a                 n/a                 n/a
Irq.BestVcpuRouting                 1 / 0               1 / 0               1 / 0               1 / 0
Mem.BalancePeriod                   0 / 15              0 / 15              n/a                 n/a
Mem.SamplePeriod                    0 / 60              0 / 60              n/a                 n/a
Mem.ShareScanGHz                    0 / 4               0 / 4               0 / 4               0 / 4
Mem.VMOverheadGrowthLimit           0 / 4294967295      0 / 4294967295      0 / 4294967295      0 / 4294967295
Misc.TimerMaxHardPeriod             2000 / 100000       2000 / 100000       2000 / 100000       2000 / 500000
Misc.TimerMinHardPeriod             100 / 100           n/a                 n/a                 n/a
Net.AllowPT                         1 / 0               1 / 0               1 / 0               1 / 1
Net.MaxNetifRxQueueLen              500 / 100           500 / 100           500 / 100           500 / n/a
Net.MaxNetifTxQueueLen              1000 / 500          1000 / 500          1000 / 500          1000 / 2000
Net.NetTxCompletionWorldlet         0 / 1               0 / 1               n/a                 n/a
Net.NetTxWorldlet                   0 / 2               1 / 2               1 / 2               1 / n/a
Numa.LTermFairnessInterval          0 / 5               0 / 5               0 / 5               0 / 5
Numa.MonMigEnable                   0 / 1               0 / 1               0 / 1               0 / 1
Numa.PageMigEnable                  0 / 1               0 / 1               0 / 1               0 / 1
Numa.PreferHT                       1 / 0               1 / 0               1 / 0               1 / 0
Numa.RebalancePeriod                60000 / 2000        60000 / 2000        60000 / 2000        60000 / 2000
Numa.SwapInterval                   1 / 3               1 / 3               1 / 3               1 / 3
Numa.SwapLoadEnable                 0 / 1               0 / 1               0 / 1               0 / 1
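
As an alternative to the vSphere client, these advanced options can be applied from the ESXi shell. The following is a minimal sketch (assuming the Python interpreter bundled with ESXi) that sets a few of the ESXi 6.5 recommended values; the option paths are simply the parameter names above with "." replaced by "/". Extend the dictionary to cover the rest of the table and skip any option marked n/a for your release.

    # Minimal sketch: apply a subset of the recommended values (ESXi 6.5 column)
    # using esxcli. Assumes it runs in the ESXi shell; extend RECOMMENDED to match
    # the table and omit options that are n/a on your ESXi release.
    import subprocess

    RECOMMENDED = {
        "/Cpu/CoschedCrossCall": 0,
        "/Net/MaxNetifTxQueueLen": 1000,
        "/Numa/PreferHT": 1,
        "/Numa/RebalancePeriod": 60000,
    }

    for option, value in sorted(RECOMMENDED.items()):
        # Equivalent to: esxcli system settings advanced set -o <path> -i <value>
        subprocess.check_call(["esxcli", "system", "settings", "advanced",
                               "set", "-o", option, "-i", str(value)])
        print("set %s = %d" % (option, value))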

VM Configuration Recommendations

The following configurations are recommended to improve performance.

Configure the VM Latency Sensitivity setting to High in VM Options > Advanced configurations as shown below:

VM Options Window

 

Limit instances to using cores from a single NUMA node by configuring the numa.nodeaffinity option according to the number of the NUMA node the VM is on. Access this option using the path: VM Options > Advanced > Configuration Parameters > Edit Configuration. For example, in the following figure the NUMA node is 0. (A programmatic sketch follows the figure below.)

Configuration Parameters Window
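
Both of the settings above can also be applied programmatically. The following is a minimal sketch using the pyVmomi SDK; the host name, credentials, and VM name are placeholders, and it writes the two values as advanced configuration (extraConfig) keys, which correspond to the vSphere client options shown above. Depending on the vCenter version, latency sensitivity may instead be exposed as a first-class VM property; apply the change while the VM is powered off so it takes effect at the next power-on.

    # Minimal pyVmomi sketch; host name, credentials, and VM name are placeholders.
    # It sets the two advanced (.vmx) keys discussed above:
    #   sched.cpu.latencySensitivity = "high"
    #   numa.nodeAffinity            = "0"   (the NUMA node hosting the VM)
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
    si = SmartConnect(host="esxi-host.example.com", user="root",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "sbc-swe-vm")  # placeholder name
        spec = vim.vm.ConfigSpec(extraConfig=[
            vim.option.OptionValue(key="sched.cpu.latencySensitivity", value="high"),
            vim.option.OptionValue(key="numa.nodeAffinity", value="0"),
        ])
        task = vm.ReconfigVM_Task(spec=spec)
        print("Reconfigure task submitted:", task)
    finally:
        Disconnect(si)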

 

Additional VM configuration recommendations appear in the following table.

VM Configuration Recommendations

 
vCPU
  Minimum 4 vCPUs required. Any number of vCPUs may be configured depending upon the call capacity requirements, but the number should be even (4, 6, 8, etc.) to avoid impacting performance.
  Keep the Resource Allocation setting marked as 'unlimited'. Refer to Adjusting Resource Allocations for a VM for more information on setting CPU-related parameters.

vRAM
  Keep the Resource Allocation setting marked as 'unlimited'. Reserve the memory based on call capacity and configuration requirements. Refer to SBC SWe Performance Metrics for supported call capacities with different configuration limits.

Virtual Hard Disk
  Set the virtual hard disk size to 100 GB or more (based on the requirements for retaining CDRs, logs, etc. for a number of days).
  • Use thick provisioning (eager zeroed).
  • The hard disk size cannot be changed once the SBC SWe software is installed.

vNICs
  Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1).
  • Use only the VMXNET3 driver.
  • Always use the automatic MAC address assignment option while creating vNICs.
  • Associate each vNIC with a separate vSwitch.
  • Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level.

vSwitch settings
  • Use four different vSwitches for the four vNICs on the SBC SWe VM. This ensures that the different traffic types are physically separated on the SBC. (A vSwitch creation sketch follows this table.)
    • Assign 1 physical NIC port (1 Gbps) to each vSwitch if physical NIC redundancy is not needed; otherwise, assign 2 physical NIC ports (in active-standby mode using the NIC teaming feature) to each vSwitch.
      Note: The same physical NIC port cannot be associated with different vSwitches.
  • Use four different virtual networking labels, each with different VLANs or subnets.
  • Always run the active and standby SBC SWe VMs on different physical servers.
  • Disable VM logging.
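
As a sketch of the vSwitch layout described above, standard vSwitches, uplinks, and port groups can be created from the ESXi shell with esxcli. The vSwitch names, port-group names, and vmnic assignments below are placeholders; NIC teaming for redundant uplinks would still be configured separately.

    # Minimal sketch: create one standard vSwitch, uplink, and port group per
    # SBC SWe network (MGMT, HA, PKT0, PKT1). Assumes it runs in the ESXi shell;
    # the vSwitch/port-group names and vmnic assignments are placeholders.
    import subprocess

    def esxcli(*args):
        subprocess.check_call(["esxcli"] + list(args))

    NETWORKS = {            # port group -> (vSwitch name, physical uplink)
        "MGMT": ("vSwitch-MGMT", "vmnic0"),
        "HA":   ("vSwitch-HA",   "vmnic1"),
        "PKT0": ("vSwitch-PKT0", "vmnic2"),
        "PKT1": ("vSwitch-PKT1", "vmnic3"),
    }

    for portgroup, (vswitch, uplink) in NETWORKS.items():
        esxcli("network", "vswitch", "standard", "add",
               "--vswitch-name=" + vswitch)
        esxcli("network", "vswitch", "standard", "uplink", "add",
               "--uplink-name=" + uplink, "--vswitch-name=" + vswitch)
        esxcli("network", "vswitch", "standard", "portgroup", "add",
               "--portgroup-name=" + portgroup, "--vswitch-name=" + vswitch)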
Note

Make sure the Processors, ESXi version and VM configuration (vCPUs, vRAM, Virtual Hard Disk, vNICs, and vSwitch Settings) are identical for an SBC SWe HA pair.

Note

Make sure that the BIOS and ESXi settings and recommendations are not changed once they are applied on the server.

 
