This section describes the hardware and software requirements for SBC SWe on an OpenStack platform.
The SBC SWe supports the following OpenStack environment: OpenStack Queens, tested with RHOSP 13 and RHEL 7.5.
Configuration | Requirement |
---|---|
Processor | Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading). Ribbon recommends Westmere (or newer) processors for better SRTP performance; these processors include the AES-NI instruction set for performing cryptographic operations in hardware. |
RAM | Minimum 24 GiB |
Hard Disk | Minimum 100 GB |
Network Interface Cards (NICs) | Minimum 4 NICs. Ensure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuration as SR-IOV and DirectPath I/O pass-through devices. The PKT ports must be 10 Gbps SR-IOV enabled ports. 6 NICs are required to support PKT port redundancy. To configure VLAN on SR-IOV and PCI pass-through Ethernet interfaces, disable Data Center Bridging (DCB) on the switch connected to the interfaces. |
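To quickly confirm that a candidate compute host meets the processor and NIC expectations above, you can check for the AES-NI CPU flag and NIC multi-queue support. This is a minimal sketch only; the interface name eno1 is an assumption, substitute your own NIC name:

grep -m1 -o aes /proc/cpuinfo    # non-empty output means AES-NI is available
ethtool -l eno1                  # multi-queue NICs report a "Combined" pre-set maximum greater than 1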
The system hosting the SBC SWe must meet the following requirements to achieve the performance targets listed.

S-SBC SWe Requirements for 1000 CPS/120K Signaling Sessions | Notes |
---|---|
32 vCPUs | Due to the workload characteristics, allocate 20 physical cores with two hyper-threaded CPUs from each core to the SBC. |
128 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended. |
100 GB Disk | None |
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO Tenant network. The HA port must be on an IPv4 VirtIO Tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV and Provider network. You must have 6 vNICs to enable PKT port redundancy; for more information, refer to the SBC SWe Features Guide. All NIC ports must come from NUMA node 0. The S-SBC SWe instance is hosted on a dual-socket physical server with 10 physical cores coming from each NUMA node. |
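These requirements are typically captured in a Nova flavor. The following is an illustrative sketch only; the flavor name sbc-s-swe is an assumption, and the exact extra specs for your release should be taken from the Ribbon deployment documentation:

openstack flavor create sbc-s-swe --vcpus 32 --ram 131072 --disk 100
openstack flavor set sbc-s-swe --property hw:mem_page_size=1GB --property hw:cpu_policy=dedicated --property hw:cpu_thread_policy=require

Here hw:mem_page_size requests 1 GiB huge pages, while hw:cpu_policy=dedicated with hw:cpu_thread_policy=require pins the vCPUs and places two hyper-threaded CPUs on each physical core, matching the guidance above.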
M-SBC SWe Requirements for 40K Media Sessions | Notes |
---|---|
16 vCPUs | Due to the workload characteristics, allocate 10 physical cores with two hyper-threaded CPUs from each core, from a single NUMA node, to the SBC. |
32 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended. |
100 GB Disk | None |
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO Tenant network. The HA port must be on an IPv4 VirtIO Tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV and Provider network. You must have 6 vNICs to enable PKT port redundancy; for more information, refer to the SBC SWe Features Guide. All NIC ports must come from the same NUMA node on which the M-SBC SWe instance is hosted. |
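Single-NUMA placement for the M-SBC can also be requested through the flavor. A sketch under the same assumptions as above (the flavor name sbc-m-swe is hypothetical):

openstack flavor create sbc-m-swe --vcpus 16 --ram 32768 --disk 100
openstack flavor set sbc-m-swe --property hw:numa_nodes=1 --property hw:mem_page_size=1GB --property hw:cpu_policy=dedicated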
OAM Node (minimum) | Notes |
---|---|
4 vCPUs | None |
16 GiB RAM | None |
80 GB Disk | None |
4 vNICs | None |
I-SBC SWe Requirements | Notes |
---|---|
20 vCPUs | |
32 GiB RAM | Must be Huge Page memory. The minimum page size is 2048 KiB, but 1048576 KiB (1 GiB) is recommended. |
100 GB Disk | None |
4 vNICs/6 vNICs | Attach the MGT0 port to the Management VirtIO Tenant network. The HA port must be on an IPv4 VirtIO Tenant network. Attach the PKT0 and PKT1 ports to the SR-IOV and Provider network. You must have 6 vNICs to enable PKT port redundancy; for more information, refer to the SBC SWe Features Guide. |
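When creating the vNICs, the PKT ports are attached as SR-IOV (direct) ports on the provider network, while MGT0 and HA use VirtIO tenant networks. The following is an illustrative sketch only; all network and port names are assumptions:

openstack port create --network mgmt-tenant-net mgt0-port
openstack port create --network ha-tenant-net ha-port
openstack port create --network pkt-provider-net --vnic-type direct pkt0-port
openstack port create --network pkt-provider-net --vnic-type direct pkt1-port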
For deployments that require it, you can instantiate the SBC SWe in smaller-sized configurations that use limited memory and vCPU resources. However, the limited resources place some restrictions on capacity and capabilities. Refer to Small SBC SWe Deployment Characteristics and Performance Metrics for Small SBC SWe Configurations for more information.
To change the disk cache method from "none" to "writethrough", perform the following steps on the compute host:
1. Enter the following command to retrieve the current disk cache method from the nova configuration on the compute host:
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes
Output:
Parameter not found: disk_cachemodes.
The output differs if file=writethrough was configured previously; in that case the command returns "file=writethrough".
2. Change the policy from the default (an empty string) to "file=writethrough,network=writethrough":
crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes '"file=writethrough,network=writethrough"'
3. Restart all nova containers:
docker restart $(docker ps|grep nova|sed -r 's/^([^ ]+).*/\1/')
Output:
353567574c0f
3fe492a36297
42f36e21555b
e727b8cf0191
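Optionally, confirm that the nova containers are running again before continuing. This check is an illustration and not part of the documented procedure:

docker ps --format '{{.Names}}: {{.Status}}' | grep nova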
4. Verify the changes in libvirtd:
virsh dumpxml instance-0000eb25 |grep "cache"
Output:
<driver name='qemu' type='raw' cache='writethrough'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
Note: To find the instance ID, execute "virsh list --all".
Example:
virsh list --all
 Id    Name                 State
----------------------------------------------------
 1     instance-0000eb1c    running
 2     instance-0000eb25    running
5. Verify the changes in qemu:
Before you verify the changes, ensure the instance is in the running state.
virsh qemu-monitor-command instance-0000eb25 --pretty '{"execute":"query-block"}'|egrep 'cache|device|filename|no-flush|direct|writeback'
Output:
"no-flush": false
"direct": false
"writeback": false
6. Re-check the disk cache mode in the nova configuration on the compute host:
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes
Output:
"file=writethrough,network=writethrough"