GPU acceleration is supported on SBC SWe cloud-based T-SBC and I-SBC instances on OpenStack (Newton and above). T-SBC is a component in a Distributed SBC architecture that provides transcoding service.
GPU devices are attached to SBC cloud instances through PCIe pass-through; a single GPU device can be used by only one instance at a time. The process of enabling PCIe pass-through in OpenStack is detailed later in this document. For performance reasons, ensure NUMA locality between the GPU devices and the instance's vCPUs and memory.
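To verify which PCI address a GPU occupies and which NUMA node it is local to, the compute host's standard Linux tooling can be used. The commands below are a minimal sketch run on the compute node; the PCI address 0000:3b:00.0 is only a placeholder, so substitute the address reported by lspci on your host.
# List NVIDIA devices with their vendor/device IDs (NVIDIA's vendor ID is 10de)
lspci -nn | grep -i nvidia
# Show the NUMA node a given device is attached to (placeholder PCI address)
cat /sys/bus/pci/devices/0000:3b:00.0/numa_node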
NVIDIA GRID is not supported.
NVIDIA Tesla V100 (PCIe)
In addition, G.711 is supported for GPU instances, but only when G.711 is being transcoded to a non-G.711 codec. You cannot currently configure transcoding from G.711 to G.711 on GPU instances. The coding rates and packetization times for the supported codecs are shown in the tables on the Audio Codecs page.
G.722 Silence Suppression is not supported with GPU transcoding.
The following procedures assume that the supported GPU devices have been properly installed on the server.
The T-SBC is instantiated with the help of a specific heat template. The GPU T-SBC requires a special flavor that contains the appropriate directives to use the compute node's GPU devices available for PCIe pass-through.
The T-SBC instance should be launched using the heatRgNoDhcp-TSBC-template.yaml template. This template shares all fields of an M-SBC template and additionally includes the following fields:
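For reference, a stack is typically launched from this template with the OpenStack Orchestration (heat) client, as in the sketch below. The stack name gpu-tsbc and the environment file tsbc_env.yaml (which supplies the template's parameter values) are placeholders for illustration, not names defined by this document.
# Launch the GPU T-SBC stack from the template (stack and environment file names are placeholders)
openstack stack create -t heatRgNoDhcp-TSBC-template.yaml -e tsbc_env.yaml gpu-tsbc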
This section describes the changes needed on the Controller node and the Compute node hosting GPU cards in order to enable instances to use GPU devices. While this section focuses purely on the GPU aspect, Ribbon recommends that you refer to broader OpenStack performance tuning recommendations covered in the following links:
Open the dashboard and create a flavor with the following properties. Check with your Ribbon account team to determine the appropriate instance size for your traffic needs.
This is a sample benchmark. Ribbon does not mandate the use of the processors shown here.
After creating the flavor, update its metadata with the following key values.
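If you prefer the CLI to the dashboard, the flavor and its extra specs can also be applied with the OpenStack client, as sketched below. The flavor name gpu-tsbc-flavor, the sizing values, and the PCI alias gpu-v100 are placeholders only; set the exact properties and key values listed in this document, and size the flavor per your Ribbon account team's guidance.
# Create the flavor (sizing values are placeholders)
openstack flavor create --vcpus 16 --ram 65536 --disk 100 gpu-tsbc-flavor
# Request one pass-through GPU via a PCI alias and pin vCPUs within a single NUMA node
openstack flavor set gpu-tsbc-flavor \
  --property "pci_passthrough:alias"="gpu-v100:1" \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1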
Refer to the following pages for basic configuration steps for the S-SBC and M-SBC:
To enable the T-SBC, some additional configuration is required on the S-SBC and M-SBC, as described in the subsequent sections.
The steps to configure the T-SBC are similar to those for the M-SBC, with the following exception: the IP interface group creation procedure should create private interface groups instead of public ones. There are no public interface groups for the T-SBC.
A DSP cluster must be configured on the S-SBC to refer to the T-SBC cluster that is to be used for transcoding. The following steps describe how to create this cluster:
The GPU transcoding solution currently does not support more than one non-G711 transcodable codec per leg on a trunk group. Therefore, when configuring Packet Service Profiles, do not configure multiple non-G711 codecs on a single leg (the This Leg/Other Leg parameters) when specifying the Codecs Allowed For Transcoding within Packet To Packet Control. Refer to Packet Service Profile - CLI or Packet To Packet Control - Codecs Allowed For Transcoding (EMA).
Configure the private IP interface group for relaying media packets to T-SBC using steps from the "Configure Private LIF Groups in M-SBC" section of Invoke MRF as a Transcoder for D-SBC.
Create a GPU I-SBC instance using only the following Redundancy modes:
heatStandaloneTemplateNoDhcp.yaml
heatHA11templateNoDhcp.yaml
The N:1 redundancy model does not support GPU I-SBC.
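As with the T-SBC, a GPU I-SBC stack is typically launched from one of these templates with the heat client. The sketch below uses the standalone template; the stack name gpu-isbc and the environment file isbc_env.yaml are placeholders for illustration.
# Launch a standalone GPU I-SBC stack (stack and environment file names are placeholders)
openstack stack create -t heatStandaloneTemplateNoDhcp.yaml -e isbc_env.yaml gpu-isbc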
This section describes the changes needed on the Controller node and the Compute node hosting GPU cards in order to enable instances to use GPU devices. While this section focuses purely on the GPU aspect, Ribbon recommends that you refer to broader OpenStack performance tuning recommendations covered in the following links:
Open the dashboard and create a flavor with the following properties. Check with your Ribbon account team to determine the appropriate instance size for your traffic needs.
This is a sample benchmark. Ribbon does not mandate the use of the processors shown here.
After creating the flavor, update its metadata with the following key values.
After instantiation of the GPU I-SBC on OpenStack, the application's activated traffic profile is the default traffic profile. To apply the custom GPU traffic profile, log on as admin and execute the following steps:
Create a sweCodecMixProfile with the codecs configured for GPU transcoding.
Syntax:
% set system sweCodecMixProfile <profile name> <codec> <ptime value: 10ms, 20ms, 30ms, 40ms, 60ms> percentage <percentage_value>
Example:
% set system sweCodecMixProfile customCodecMix amrwb p20 percentage 50
% set system sweCodecMixProfile customCodecMix g711 p20 percentage 50
% commit
Create a traffic profile. Set the transcodingCodecProfile with the sweCodecMixProfile created in Step 1.
Syntax:
% set system sweTrafficProfiles <profile name> isAccess <value> callHoldTime <duration> passthroughCodecProfile <profile name> transcodePercent <percentage> transcodingCodecProfile <profile name> useGPUForTranscoding <false | true>
Example: To create a traffic profile named custom_gpu_profile using the sweCodecMixProfile created in Step 1 as the transcoding codec profile:
% set system sweTrafficProfiles custom_gpu_profile callHoldTime 100 transcodePercent 50 passthroughCodecProfile G711_20ms transcodingCodecProfile customCodecMix useGPUForTranscoding true
% commit
Activate the traffic profile created in Step 2.
Syntax:
% set system sweActiveProfile name <profile name>
Use the same profile name as used in Step 2.
Example: To activate the custom GPU traffic profile created in Step 2, execute the following command:
% set system sweActiveProfile name custom_gpu_profile
% commit
The instance reboots and comes up with the activated traffic profile.
Check the activated profile.
Syntax:
> show table system sweActiveProfile
Example:
admin@vsbc1> show table system sweActiveProfile
name               custom_gpu_profile;
stateChangeTime    2018-12-09T13:25:43-00:00;
[ok][2018-12-11 01:29:03]
All pre-existing licensing related to transcoding also applies to GPU codecs. There is no separate license for GPU functionality.