This section describes the hardware and software requirements for SBC SWe on an OpenStack platform.

OpenStack Requirements

  • Train with RHOSP 16.1.1 and RHEL 8.2


The SBC SWe supports the following OpenStack environments:

  • Newton with RHOSP 10 and RHEL 7.4
  • Queens with RHOSP 13 and RHEL 7.5

The SBC SWe was tested on OpenStack Queens with RHOSP 13 and RHEL 7.5.

Server Hardware Requirements



The following adapters are supported for configuration as SR-IOV and DirectPath I/O pass-through devices:

  • Intel I350
  • Intel X540
  • Intel X550
  • Intel X710
  • Intel 82599 Ethernet adapters
  • Mellanox ConnectX-4
  • Mellanox ConnectX-5
  • QLogic 536LR-T

Processor: Intel Xeon processors (Nehalem micro-architecture or later) with 6 or more cores. The processors should support hyper-threading.

Note: Ribbon recommends Westmere (or newer) processors for better SRTP performance. These processors include the AES-NI instruction set for performing cryptographic operations in hardware.

RAM: Minimum 24 GiB

Hard Disk: Minimum 100 GB

Network Interface Cards (NICs): Minimum 4 NICs.

Note: Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems.

Note: The PKT ports must be 10 Gbps SR-IOV enabled ports.


Note: For packet port redundancy:

SR-IOV
  • A minimum of 2 NICs is required.

DirectPath I/O (DIO)
  • A minimum of 6 NICs is required to support PKT port redundancy.
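As a quick sanity check on a candidate compute host, the NIC capabilities listed above can be inspected with standard Linux tools. This is a minimal sketch; the interface name ens1f0 is a placeholder, so substitute the actual PKT interface on your host:

```shell
# Show RX/TX queue limits; a "Combined" maximum greater than 1
# indicates multi-queue support on the NIC.
ethtool -l ens1f0

# Show how many SR-IOV virtual functions the device supports.
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Show the link speed; PKT ports must report 10000 (10 Gbps).
cat /sys/class/net/ens1f0/speed
```

The exact values depend on the adapter model and driver, so run these checks on each port you intend to use for PKT traffic.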




S-SBC SWe Requirements


The system hosting the SBC SWe must meet the following requirements to achieve the performance targets listed: 

S-SBC SWe Requirements for 1000 CPS/120K Signaling Sessions:

32 vCPUs: Due to the workload characteristics, allocate 20 physical cores, with two hyper-threaded CPUs from each core, to the SBC.

128 GiB RAM: Must be Huge Page memory. The minimum huge page size is 2048 KiB (2 MiB); 1048576 KiB (1 GiB) pages are recommended.

100 GB Disk: None

4 vNICs/6 vNICs: Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV provider network.


Note: All NIC ports must come from NUMA node 0. The S-SBC SWe instance is hosted on a dual-socket physical server with 10 physical cores from each NUMA node.

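On OpenStack, the vCPU, huge-page, and NUMA constraints above are typically expressed as flavor extra specs. The following is a minimal sketch, assuming the OpenStack CLI and admin credentials; the flavor name s-sbc.swe is a placeholder, not a Ribbon-defined name:

```shell
# Sketch of an S-SBC-sized flavor: 32 vCPUs, 128 GiB RAM, 100 GB disk.
openstack flavor create s-sbc.swe --vcpus 32 --ram 131072 --disk 100

# Back guest RAM with 1 GiB huge pages, pin vCPUs to dedicated host
# CPUs (so hyper-thread siblings are allocated together), and spread
# the guest across two NUMA nodes to match the dual-socket layout.
openstack flavor set s-sbc.swe \
  --property hw:mem_page_size=1GB \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=require \
  --property hw:numa_nodes=2
```

Verify the host aggregates and compute nodes actually provide 1 GiB huge pages and pinned CPU sets before scheduling instances with this flavor.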


M-SBC SWe Requirements



M-SBC SWe Requirements for 40K Media Sessions:

16 vCPUs: Due to the workload characteristics, allocate 10 physical cores, with two hyper-threaded CPUs from each core and all from a single NUMA node, to the SBC.

32 GiB RAM: Must be Huge Page memory. The minimum huge page size is 2048 KiB (2 MiB); 1048576 KiB (1 GiB) pages are recommended.

100 GB Disk: None

4 vNICs/6 vNICs: Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network.


Note: All NIC ports must come from the same NUMA node on which the M-SBC SWe instance is hosted.
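To confirm which NUMA node each host NIC belongs to (so that all PKT ports land on the instance's node), the device's sysfs entry can be queried. This is a generic Linux check, not a Ribbon-specific tool:

```shell
# Print the NUMA node for every physical network interface on the host.
# A value of -1 means the platform does not report NUMA affinity.
for dev in /sys/class/net/*/device/numa_node; do
  printf '%s -> NUMA node %s\n' "$dev" "$(cat "$dev")"
done
```

Cross-check the reported node against the NUMA node chosen for the M-SBC instance's pinned CPUs.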


OAM Node Requirements



OAM Node Requirements (minimum):

2 vCPUs: None

16 GiB RAM: None

80 GB Disk: None

2 vNICs: None

  

I-SBC SWe Requirements



I-SBC SWe Requirements:

20 vCPUs: None

32 GiB RAM: Must be Huge Page memory. The minimum huge page size is 2048 KiB (2 MiB); 1048576 KiB (1 GiB) pages are recommended.

100 GB Disk: None

4 vNICs/6 vNICs: Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network.



Note: For deployments that require it, you can instantiate the SBC SWe in smaller-sized configurations that use limited memory and vCPU resources. However, the limited resources place some restrictions on capacity and capabilities. Refer to Small SBC SWe Deployment Characteristics and Performance Metrics for Small SBC SWe Configurations for more information.

Recommended Host Settings

To change the disk cache method from "none" to "writethrough", perform the following steps on the compute host:

1. Enter the following command to retrieve the current disk cache method in nova configuration on the compute host:

crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes

Output: Parameter not found: disk_cachemodes.

Note: The output differs if disk_cachemodes was configured previously (for example, "file=writethrough").

2. Change the policy from the default (an empty string) to "file=writethrough,network=writethrough":

crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes '"file=writethrough,network=writethrough"'

3. Restart all nova containers:

docker restart $(docker ps|grep nova|sed -r 's/^([^ ]+).*/\1/')

Output

353567574c0f
3fe492a36297
42f36e21555b
e727b8cf0191

4. Verify the changes in libvirtd:

virsh dumpxml instance-0000eb25 |grep "cache"

Output

<driver name='qemu' type='raw' cache='writethrough'/>

<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>

<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>

Note: To find the instance ID, execute "virsh list --all".

Example:

virsh list --all
Id Name State
----------------------------------------------------
1 instance-0000eb1c running
2 instance-0000eb25 running

5. Verify the changes in qemu:

Note: Before you verify the changes, ensure the instance is in the running state.


virsh qemu-monitor-command instance-0000eb25 --pretty '{"execute":"query-block"}'|egrep 'cache|device|filename|no-flush|direct|writeback'

Output:

"no-flush": false
"direct": false
"writeback": false

6. Check the disk cache in nova configuration on the compute host:

Code Block
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes

Output

"file=writethrough,network=writethrough"