Supported Deployment Scenarios
GPU acceleration is supported on SBC SWe I-SBC and T-SBC instances on OpenStack (Newton and above). T-SBC is a component in a Distributed SBC architecture that provides transcoding service.
GPU devices are attached to SBC cloud instances through PCIe pass-through – a single GPU device can be used by only one instance at a time. The process of enabling PCIe pass-through in OpenStack is detailed later in this document. For performance considerations, NUMA locality of devices should be ensured.
NVIDIA GRID is not supported.
Supported GPU Devices
NVIDIA Tesla V100 (PCIe)
Supported Codecs
The following codecs are supported:
- Codecs supported on both GPU and CPU: AMR-NB, AMR-WB, EVRC, EVRCB, G729, G722, EVS, OPUS
- CPU-only codecs: G723, G726, G7221, ILBC, SILK_8, SILK_16, G7112G711, T38, G711
For more information on "GPU + CPU codecs", see the section Dynamic Codec Allocation.
You can provision CPU codecs in the codec profile and associate it with the GPU traffic profile; however, you must provision at least one GPU codec in the sweCodecMixProfile.
G.722 Silence Suppression is not supported with GPU transcoding.
The following procedures assume that the supported GPU devices have been properly installed on the server.
Dynamic Codec Allocation
The SBC dynamically manages GPU and CPU transcoding resources so that it can handle varying codec combinations without impacting service. For an incoming codec request, the SBC uses GPU resources for GPU-supported codecs when GPU resources are available; if GPU resources are exhausted, it uses CPU resources (if available) for those codecs. Codecs that are not GPU-supported are always handled by CPU resources (if available). For the lists of GPU- and CPU-supported codecs, refer to Supported Codecs.
- The percentage value for G7112G711 is used for estimating transcode and bandwidth cost.
- The percentage value for G711 is not used for estimating transcode cost, but is used for bandwidth calculation in PXPAD scenarios.
- The percentage value for G711 cannot be greater than the percentage value of non-G711 codecs.
- The sum of all codec percentages must equal 100.
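For illustration, a hypothetical allocation of G729 = 45.50, AMR = 38.75, AMRWB = 15, and T38 = 0.75, with all remaining codecs (including G711) at 0, sums to 100 and satisfies the G711 constraint.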
DSP-based tone detection is supported only on the GPU I-SBC profile.
With dynamic GPU resource management, the sweTrafficProfiles does not enforce any hard-coded per-codec limit. You must still configure the codec percentages in sweCodecMixProfile, since this configuration acts as an input for the functional partitioning of the instance's vCPUs.
This feature does not affect the codec channel capacities in a non-GPU accelerated SWe (CPU-based SWe).
This feature supports the standard LSWU upgrade operation of the supported SBC releases.
The SBC NRM congestion monitoring mechanism monitors all UXPAD processes for congestion.
Sub-optimal resource utilization can occur during transient transcoding call-mix scenarios.
Best Practice
Ribbon recommends monitoring the status of codec channel capacities for codecs provisioned on the GPU (on a per-GPU-device basis) as well as on the CPU.
GPU Transcode Status
Shows codec channel capacities on a per GPU device basis.
The gpuTranscodeStatus represents only available GPU resources. If the gpuTranscodeStatus displays higher numbers than the dspStatus, the system is CPU-limited.
Command Syntax
show table system gpuTranscodeStatus
Command Parameters
gpuTranscodeStatus Parameter Descriptions
Parameter | Description |
---|---|
<system name> | The SBC system name. |
amrNbTotal | Total AMR-NB resource capacity on this server. |
amrNbUtilization | Percentage utilization of AMR-NB resources on this server. |
amrWbTotal | Total AMR-WB resource capacity on this server. |
amrWbUtilization | Percentage utilization of AMR-WB resources on this server. |
evrc0Total | Total EVRC0 resource capacity on this server. |
evrc0Utilization | Percentage utilization of EVRC0 resources on this server. |
evrcb0Total | Total EVRCB0 resource capacity on this server. |
evrcb0Utilization | Percentage utilization of EVRCB0 resources on this server. |
gpuAllocation | Displays the overall GPU occupancy as a percentage. |
gpuNumber | Numeric identifier for the GPU device. |
g722Total | Total G.722 resource capacity on this server. |
g722Utilization | Percentage utilization of G.722 resources on this server. |
g729AbTotal | Total G729A+B resource capacity on this server. |
g729AbUtilization | Percentage utilization of G729A+B resources on this server. |
opusTotal | Total OPUS resource capacity on this server. The SBC does not support this parameter. |
opusUtilization | Percentage utilization of OPUS resources on this server. The SBC does not support this parameter. |
CPU Transcode Status
The cpuTranscodeStatus captures information on channels that spill over due to the exhaustion of GPU resources: when GPU resources are exhausted and cannot accommodate additional GPU-supported codec channels, those channels spill over onto the CPU.
Command Syntax
show table system cpuTranscodeStatus
Command Parameters
cpuTranscodeStatus Parameter Descriptions
Parameter | Description |
---|---|
<system name> | The SBC system name. |
amrNbUsed | Number of AMR channels spilled over onto the CPU. |
amrWbUsed | Number of AMR-WB channels spilled over onto the CPU. |
evrc0Used | Number of EVRC channels spilled over onto the CPU. |
evrcb0Used | Number of EVRCB channels spilled over onto the CPU. |
evsUsed | Number of EVS channels spilled over onto the CPU. |
opusUsed | Number of OPUS channels spilled over onto the CPU. |
g722Used | Number of G.722 channels spilled over onto the CPU. |
g729AbUsed | Number of G.729AB channels spilled over onto the CPU. |
Instantiating GPU T-SBC on OpenStack Cloud
The T-SBC is instantiated using a specific Heat template. The GPU T-SBC requires a special flavor with the appropriate directives to use the compute node's GPU devices that are available for PCIe pass-through.
T-SBC Heat Template
The T-SBC instance should be launched using the heatRgNoDhcp-TSBC-template.yaml template. This template shares all fields of an M-SBC template, and additionally has the following fields:
Field | Description | Example |
---|---|---|
gpu | Indicates whether to use GPU or CPU for transcoding. Set to true for GPU T-SBCs. Note: For GPU T-SBCs, additional provisioning of codec percentages is required at the time of instantiation. Transcode resources for codecs are reserved and fixed for the lifetime of the instance. | True |
G729 | Percentage of channels to be allocated for the G729 codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 45.50 |
G722 | Percentage of channels to be allocated for the G722 codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
EVRCB | Percentage of channels to be allocated for the EVRCB codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
EVRC | Percentage of channels to be allocated for the EVRC codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
AMRWB | Percentage of channels to be allocated for the AMR-WB codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
AMR | Percentage of channels to be allocated for the AMR codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 15 |
G723 | Percentage of channels to be allocated for the G723 codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
G726 | Percentage of channels to be allocated for the G726 codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
G7221 | Percentage of channels to be allocated for the G7221 codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
ILBC | Percentage of channels to be allocated for the ILBC codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
OPUS | Percentage of channels to be allocated for the OPUS codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
SILK_8 | Percentage of channels to be allocated for the SILK NB codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
SILK_16 | Percentage of channels to be allocated for the SILK WB codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
EVS | Percentage of channels to be allocated for the EVS codec (0-100). Applicable only when the gpu field is True. Provisioned on GPU and CPU for transcoding. | 0 |
G7112G711 | Percentage of channels to be allocated for the G7112G711 codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0 |
T38 | Percentage of channels to be allocated for the T38 codec (0-100). Applicable only when the gpu field is True. Provisioned for CPU transcoding. | 0.75 |
G711 | Percentage of channels to be allocated for the G711 codec (0-100). Applicable only when the gpu field is True. | 0 |
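The following sketch shows only the GPU-related portion of a hypothetical Heat environment file for a GPU T-SBC, using the same illustrative allocation shown under Dynamic Codec Allocation. The file name is a placeholder, and all parameters shared with the M-SBC template (images, networks, and so on) are omitted.

```
# gpu-tsbc-env.yaml (hypothetical excerpt) -- only the codec/GPU parameters
# described above are shown; parameters shared with the M-SBC template are omitted.
parameters:
  gpu: "True"       # use GPU for transcoding on this T-SBC
  G729: 45.50       # GPU + CPU codec
  AMR: 38.75        # GPU + CPU codec
  AMRWB: 15         # GPU + CPU codec
  T38: 0.75         # CPU-only codec
  G722: 0
  EVRCB: 0
  EVRC: 0
  EVS: 0
  OPUS: 0
  G723: 0
  G726: 0
  G7221: 0
  ILBC: 0
  SILK_8: 0
  SILK_16: 0
  G7112G711: 0
  G711: 0           # percentages sum to 100
```

With such an environment file the stack is launched in the usual way, for example: openstack stack create -t heatRgNoDhcp-TSBC-template.yaml -e gpu-tsbc-env.yaml <stack-name> (the environment file and stack names here are illustrative).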
Host Changes on OpenStack for Enabling GPU Devices
This section describes the changes needed on the Controller node and the Compute node hosting GPU cards in order to enable instances to use GPU devices. While this section focuses purely on the GPU aspect, Ribbon recommends that you refer to broader OpenStack performance tuning recommendations covered in the following links:
Table 1: OS Configuration for Compute Node with GPU Device
Step | Action | Notes |
---|---|---|
1 | Edit /etc/sysconfig/grub and ensure that the following parameters are populated (see the example following this table): intel_iommu=on iommu=pt rdblacklist=nouveau | |
2 | Update grub using the following command: grub2-mkconfig -o /etc/grub2.cfg | |
3 | Create the /etc/modprobe.d/nouveau.conf file with the following contents: blacklist nouveau | Prevents the open-source NVIDIA driver (nouveau) from loading on the host. |
4 | Reboot the compute node. | |
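As an illustration of steps 1 and 3 above (actual file contents vary by host, and the ellipsis stands for whatever kernel parameters are already present), the relevant lines typically look like the following; the verification commands at the end are optional checks rather than part of the documented procedure.

```
# /etc/sysconfig/grub -- append the pass-through parameters to the existing
# kernel command line (other parameters on this line are host-specific):
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt rdblacklist=nouveau"

# /etc/modprobe.d/nouveau.conf -- keep the open-source nouveau driver from loading:
blacklist nouveau

# Optional checks after the reboot: confirm the V100 is visible and that
# the nouveau module is no longer loaded.
lspci -nn | grep -i nvidia
lsmod | grep nouveau
```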
Table 2: Openstack Configuration for Compute Node with GPU Device
Step | Action |
---|---|
1 | Add a PCI alias for the GPU device in /etc/nova/nova.conf. For the V100: pci_alias={"vendor_id":"10de", "product_id":"1db4", "device_type":"type-PCI", "name":"v100gpu"} Note: The PCI alias will be referred to by flavors that make use of this PCI device. |
2 | Add the GPU device to the existing PCIe whitelist entries in /etc/nova/nova.conf (see the example following this table). Note: Whitelists the PCI device for use in OpenStack. |
3 | Restart the nova-compute service: systemctl restart openstack-nova-compute.service |
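The whitelist entry itself is not reproduced in the table above. The following sketch shows what the compute-node /etc/nova/nova.conf additions typically look like for the V100 (vendor ID 10de, product ID 1db4, matching the alias). The pci_passthrough_whitelist option name and syntax shown here are the standard Newton-era settings, but treat them as an assumption to verify against your OpenStack release.

```
# /etc/nova/nova.conf on the compute node (illustrative excerpt)

# PCI alias referenced by the GPU T-SBC flavor (step 1 above):
pci_alias={"vendor_id":"10de", "product_id":"1db4", "device_type":"type-PCI", "name":"v100gpu"}

# Whitelist the V100 for PCIe pass-through (assumed standard option syntax):
pci_passthrough_whitelist={"vendor_id":"10de", "product_id":"1db4"}
```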
Table 3: Openstack Configuration for Controller Node
Step | Procedure |
---|---|
1 | Ensure PciPassthroughFilter and NumaTopologyFilter are added to the scheduler_default_filters list in the /etc/nova/nova.conf file. Note: Enables nova to instantiate instances with CPU resources from the same NUMA node as the PCI devices to be used. |
2 | Add the PCI alias for the GPU device in the /etc/nova/nova.conf file. For the V100: pci_alias={"vendor_id":"10de", "product_id":"1db4", "device_type":"type-PCI", "name":"v100gpu"} Note: The PCI alias will be referred to by flavors that make use of this PCI device. |
3 | Restart the nova-api service: systemctl restart openstack-nova-api.service |
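As an illustration (the filters already configured in a deployment vary and are elided here), the controller-side changes typically look like the following:

```
# /etc/nova/nova.conf on the controller node (illustrative excerpt)

# Append the pass-through and NUMA filters to the filters already configured:
scheduler_default_filters = <existing filters>,PciPassthroughFilter,NumaTopologyFilter

# Same PCI alias as on the compute node; flavors refer to it by name ("v100gpu"):
pci_alias={"vendor_id":"10de", "product_id":"1db4", "device_type":"type-PCI", "name":"v100gpu"}

# Apply the change:
systemctl restart openstack-nova-api.service
```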
Guideline for Creating Flavors for GPU T-SBC Instances
Open the dashboard and create a flavor with the following properties. Check with your Ribbon account team to determine the appropriate instance size for your traffic needs.
This is a sample benchmark. Ribbon does not mandate the use of the processors shown here.
Table 4: Flavor Creation Guideline for GPU T-SBC Instances
Property | For V100 |
---|---|
VCPUs | 20 |
RAM | 25 GiB |
Root Disk (min) | 65 GiB |
After creating the flavor, update its metadata with the following key values.
Table 5: Metadata Key Values
Key | Value for V100 | Remark |
---|---|---|
hw:cpu_policy | dedicated | Ensures guest vCPUs are pinned to host CPUs for performance. |
hw:numa_nodes | 1 | Ensures host CPUs of a single NUMA node are used in the instance for performance. |
hw:cpu_thread_policy | prefer | This setting allocates each vCPU on thread siblings of physical CPUs. |
hw:cpu_max_sockets | 1 | This setting defines how KVM exposes the sockets and cores to the guest. |
pci_passthrough:alias | v100gpu:1 | Ensures NVIDIA PCIe devices are attached to the instance. |
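The same flavor can also be created from the OpenStack CLI instead of the dashboard. The following is an illustrative sketch using standard OpenStack client commands; the flavor name is hypothetical, and the 25 GiB of RAM is expressed in MiB.

```
# Create the flavor (20 vCPUs, 25 GiB RAM, 65 GiB root disk); the name is arbitrary.
openstack flavor create --vcpus 20 --ram 25600 --disk 65 gpu-tsbc-v100

# Apply the metadata keys from Table 5, including the PCI alias defined in nova.conf.
openstack flavor set gpu-tsbc-v100 \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1 \
  --property hw:cpu_thread_policy=prefer \
  --property hw:cpu_max_sockets=1 \
  --property "pci_passthrough:alias"="v100gpu:1"
```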
Configuring the SBC for Invoking T-SBC
Refer to the following pages for basic configuration steps for the S-SBC and M-SBC:
- S-SBC Cluster Configuration using SBC Configuration Manager
- M-SBC Cluster Configuration using SBC Configuration Manager
- Instantiating SBC SWe on OpenStack using Heat Templates
To enable the T-SBC, some additional configuration is required on the S-SBC and M-SBC, as described in the following sections.
Configuring and Activating T-SBC Cluster
The steps to configure a T-SBC are similar to those for an M-SBC, with the following exception: the IP interface group creation procedure should create private interface groups instead of public ones. There are no public interface groups for the T-SBC.
Additional Configuration for S-SBC
A DSP cluster must be configured in the S-SBC configuration to refer to the T-SBC cluster used for transcoding. The following steps describe how to create this cluster:
Table 6: Creating a DSP Cluster
Step | Action |
---|---|
1 | Refer to Modifying SBC Cluster Configuration for information on accessing the SBC Configuration Manager to modify the configuration of the S-SBC cluster. |
2 | Click All > System > DSBC > Cluster > Type and add the T-SBC node entry by selecting DSP: |
3 | Click in the FQDN field and then add the corresponding FQDN for the T-SBC created in the T-SBC configuration. |
4 | Click Save. |
5 | Refer to System Provisioning - Packet Service Profile for configuration changes that must be made on the S-SBC to enable transcoding. Note: In GPU T-SBCs, the required codecs and their percentages must be provisioned in the Heat template as described in the previous section. This provisioning is fixed for the lifetime of the application. All members of a single T-SBC cluster should follow the same codec provisioning values. |
Additional Configuration for M-SBC
Configure the private IP interface group for relaying media packets to T-SBC using steps from the "Configure Private LIF Groups in M-SBC" section of Invoking MRF as a Transcoder for D-SBC.
Licensing
All pre-existing licensing related to transcoding applies to GPU codecs as well. There is no separate license for GPU functionality.