This section describes the hardware and software requirements for SBC SWe on an OpenStack platform.
OpenStack Requirements
Multiexcerpt |
---|
MultiExcerptName | OpenStack Software |
---|
|
The SBC SWe supports the following OpenStack environments:
- Newton with RHOSP 10 and RHEL 7.4
- Queens with RHOSP 13 and RHEL 7.5
- Train with RHOSP 16.1.1 and RHEL 8.2
Info |
---|
| The SBC SWe was tested on OpenStack Queens with RHOSP 13 and RHEL 7.5. |
Server Hardware Requirements
Multiexcerpt |
---|
MultiExcerptName | OpenStack Hardware |
---|
|
Configuration | Requirement
---|---
Processor | Intel Xeon processors (Nehalem microarchitecture or later) with 6 or more cores; the processors must support hyper-threading. Ribbon recommends Westmere (or newer) processors for better SRTP performance, because they provide the AES-NI instruction set for performing cryptographic operations in hardware.
RAM | Minimum 24 GiB
Hard Disk | Minimum 100 GB
Network Interface Cards (NICs) | Minimum 4 NICs. Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems.
Info |
---|
| The following adapters are supported for configuration as SR-IOV and DirectPath I/O pass-through devices:
- Intel I350
- Intel x540
- Intel x550
- Intel x710
- Intel 82599 Ethernet adapters
- Mellanox ConnectX-4
- Mellanox ConnectX-5
- QLogic 536LR-T |
Info |
---|
| The PKT ports must be 10 Gbps SR-IOV enabled ports. |
Info |
---|
| For packet port redundancy: with SR-IOV, a minimum of 2 NICs is required; with DirectPath I/O, 6 NICs are required to support PKT port redundancy. |
|
Multiexcerpt include |
---|
MultiExcerptName | _vlan_sriov_disable_dcb |
---|
PageWithExcerpt | _VLAN_SRIOV_Disable_DCB |
---|
|
|
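The NIC capabilities called out above (SR-IOV support and multi-queue) can be checked from the host's sysfs tree. The following is a minimal sketch; the interface name `ens1f0` and the mock sysfs tree are illustrative, and on a real compute host you would run `SYSFS_ROOT=/sys check_nic <interface>` against the actual interfaces:

```shell
#!/bin/sh
# Sketch: report SR-IOV capability and multi-queue support for a NIC by
# reading sysfs. SYSFS_ROOT is overridable so the logic can be demonstrated
# without real hardware (real use: SYSFS_ROOT=/sys check_nic ens1f0).
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

check_nic() {
    nic="$1"
    vf_file="$SYSFS_ROOT/class/net/$nic/device/sriov_totalvfs"
    if [ -r "$vf_file" ]; then
        echo "$nic: SR-IOV capable, max VFs = $(cat "$vf_file")"
    else
        echo "$nic: no SR-IOV support detected"
    fi
    # Multi-queue support: count the TX queues the driver exposes.
    queues=$(ls -d "$SYSFS_ROOT/class/net/$nic/queues/tx-"* 2>/dev/null | wc -l)
    echo "$nic: $((queues)) TX queue(s)"
}

# Demo against a mock sysfs tree.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/class/net/ens1f0/device" \
         "$SYSFS_ROOT/class/net/ens1f0/queues/tx-0" \
         "$SYSFS_ROOT/class/net/ens1f0/queues/tx-1"
echo 63 > "$SYSFS_ROOT/class/net/ens1f0/device/sriov_totalvfs"
check_nic ens1f0
```

A NIC that reports zero `sriov_totalvfs` (or lacks the attribute) cannot serve the PKT ports in an SR-IOV configuration.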
S-SBC SWe Requirements
Multiexcerpt |
---|
MultiExcerptName | S-SBC SWe Requirements |
---|
|
The system hosting the SBC SWe must meet the following requirements to achieve the performance targets listed:

S-SBC SWe Requirements for 1000 CPS/120K Signaling Sessions | Notes
---|---
32 vCPUs | Due to the workload characteristics, allocate 20 physical cores, with two hyper-threaded CPUs from each core, to the SBC.
128 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended.
100 GB Disk | None
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV Provider network. You must have 6 vNICs to enable PKT port redundancy. For more information, refer to the SBC SWe Features Guide.
Info |
---|
| All NIC ports must come from the NUMA node 0. The S-SBC SWe instance is hosted on dual-socket physical server with 10 physical cores coming from each NUMA node. |
|
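The huge-page requirement above translates directly into kernel boot parameters on the compute host. A small sketch of the arithmetic, assuming the 128 GiB S-SBC figure from the table and the recommended 1 GiB page size (adjust RAM_GIB for other personalities):

```shell
#!/bin/sh
# Sketch: derive the number of 1-GiB huge pages needed for a guest RAM size.
# RAM_GIB follows the S-SBC table (128 GiB); adjust for M-SBC/I-SBC.
RAM_GIB=128
PAGE_SIZE_KIB=1048576                    # 1 GiB pages, as recommended
PAGES=$(( RAM_GIB * 1024 * 1024 / PAGE_SIZE_KIB ))
echo "huge pages needed: $PAGES"
# Matching kernel boot parameters for the compute host:
echo "default_hugepagesz=1G hugepagesz=1G hugepages=$PAGES"
```

Reserve extra pages beyond the guest's allocation if other huge-page consumers run on the same host.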
M-SBC SWe Requirements
Multiexcerpt |
---|
MultiExcerptName | M-SBC SWe Requirements |
---|
|
M-SBC SWe Requirements for 40K Media Sessions | Notes
---|---
16 vCPUs | Due to the workload characteristics, allocate 10 physical cores, with two hyper-threaded CPUs from each core and from a single NUMA node, to the SBC.
32 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended.
100 GB Disk | None
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV Provider network. You must have 6 vNICs to enable PKT port redundancy. For more information, refer to the SBC SWe Features Guide.
Info |
---|
| All NIC ports must come from the same NUMA node from which the M-SBC SWe instance is hosted. |
|
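The NUMA affinity requirement above can be verified by reading the `numa_node` attribute that each NIC's PCI device exposes in sysfs. A minimal sketch, demonstrated against a mock tree (the interface names `pkt0`/`pkt1` are illustrative; on a real host, run `same_numa <pkt0-if> <pkt1-if>` with `SYSFS_ROOT=/sys`):

```shell
#!/bin/sh
# Sketch: confirm that a set of NICs sit on the same NUMA node by reading
# the numa_node attribute of each underlying PCI device. SYSFS_ROOT is
# overridable so the check can be demonstrated without real hardware.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

same_numa() {
    first=""
    for nic in "$@"; do
        node=$(cat "$SYSFS_ROOT/class/net/$nic/device/numa_node")
        [ -z "$first" ] && first="$node"
        if [ "$node" != "$first" ]; then
            echo "mismatch: $nic is on node $node"
            return 1
        fi
    done
    echo "all NICs on NUMA node $first"
}

# Demo with a mock sysfs tree: two PKT NICs, both on node 0.
SYSFS_ROOT=$(mktemp -d)
for nic in pkt0 pkt1; do
    mkdir -p "$SYSFS_ROOT/class/net/$nic/device"
    echo 0 > "$SYSFS_ROOT/class/net/$nic/device/numa_node"
done
same_numa pkt0 pkt1
```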
OAM Node Requirements
Multiexcerpt |
---|
MultiExcerptName | OAM Node Requirements |
---|
|
OAM Node (minimum) | Notes
---|---
2 vCPUs | None
16 GiB RAM | None
80 GB Disk | None
2 vNICs | None
|
I-SBC SWe Requirements
Multiexcerpt |
---|
MultiExcerptName | I-SBC SWe Requirements |
---|
|
I-SBC SWe Requirements | Notes
---|---
20 vCPUs | None
32 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended.
100 GB Disk | None
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO tenant network. The HA port must be on an IPv4 VirtIO tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV Provider network. You must have 6 vNICs to enable PKT port redundancy. For more information, refer to the SBC SWe Features Guide.
|
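The requirement tables above map onto standard Nova flavor properties (`hw:cpu_policy` for dedicated CPU pinning, `hw:mem_page_size` for huge pages). The following sketch builds such a command for the I-SBC figures; the flavor name `isbc-swe` is hypothetical, and the command is printed rather than executed so it can be reviewed before use:

```shell
#!/bin/sh
# Sketch: a Nova flavor matching the I-SBC table above. The flavor name
# "isbc-swe" is hypothetical; vCPU/RAM/disk values come from the table
# (32 GiB RAM = 32768 MiB). Printed, not executed, for review.
CMD="openstack flavor create isbc-swe \
  --vcpus 20 --ram 32768 --disk 100 \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB"
echo "$CMD"
```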
Recommended Host Settings
To change the disk cache method from "none" to "writethrough", perform the following steps on the compute host:
1. Enter the following command to retrieve the current disk cache method in nova configuration on the compute host:
Code Block |
---|
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes |
Output: Parameter not found: disk_cachemodes.
Info |
---|
|
The output may differ if a disk cache mode was configured previously (for example, "file=writethrough"). |
2. Change the policy from the default (an empty string) to "file=writethrough,network=writethrough":
Code Block |
---|
crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes '"file=writethrough,network=writethrough"' |
3. Restart all nova containers:
Code Block |
---|
docker restart $(docker ps|grep nova|sed -r 's/^([^ ]+).*/\1/') |
Output:
353567574c0f
3fe492a36297
42f36e21555b
e727b8cf0191
4. Verify the changes in libvirtd:
Code Block |
---|
virsh dumpxml instance-0000eb25 |grep "cache" |
Output:
<driver name='qemu' type='raw' cache='writethrough'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
Note: To find the instance ID, execute "virsh list --all".
Example:
Code Block |
---|
virsh list --all
Id Name State
----------------------------------------------------
1 instance-0000eb1c running
2 instance-0000eb25 running |
5. Verify the changes in qemu:
Info |
---|
|
Before you verify the changes, ensure the instance is in the running state. |
Code Block |
---|
virsh qemu-monitor-command instance-0000eb25 --pretty '{"execute":"query-block"}'|egrep 'cache|device|filename|no-flush|direct|writeback' |
Output:
"no-flush": false
"direct": false
"writeback": false
6. Check the disk cache in nova configuration on the compute host:
Code Block |
---|
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes |
Output:
"file=writethrough,network=writethrough"