Supported Deployment Scenarios

GPU acceleration is supported on SBC SWe cloud-based T-SBC and I-SBC instances on OpenStack (Newton and above). T-SBC is a component in a Distributed SBC architecture that provides transcoding service.

GPU devices are attached to SBC cloud instances through PCIe pass-through – a single GPU device can be used by only one instance at a time. The process of enabling PCIe pass-through in OpenStack is detailed later in this document. For performance considerations, NUMA locality of devices should be ensured.
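To verify NUMA locality before building flavors, the GPU's PCI address and NUMA node can be inspected on the host. This is a diagnostic sketch only; the PCI address shown is hypothetical and will differ per server:

```shell
# List NVIDIA Tesla V100 PCIe devices (10de:1db4 is the V100 PCIe vendor:device ID)
lspci -nn -d 10de:1db4

# Check which NUMA node a given device belongs to
# (replace 0000:3b:00.0 with the address reported by lspci above)
cat /sys/bus/pci/devices/0000:3b:00.0/numa_node
```

A result of 0 or 1 identifies the NUMA node; -1 means the platform does not report locality for that device.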

NVIDIA GRID is not supported.

Supported GPU Devices

NVIDIA Tesla V100 (PCIe)

Supported Codecs

  • AMR-NB
  • AMR-WB
  • G.729
  • G.722

In addition, G.711 is supported for GPU instances, but only when G.711 is being transcoded to a non-G.711 codec. You cannot currently configure transcoding from G.711 to G.711 on GPU instances. The coding rates and packetization times for the supported codecs are shown in the tables on the Audio Codecs page.

Note

G.722 Silence Suppression is not supported with GPU transcoding.

Prerequisite Note

 The following procedures assume that the supported GPU devices have been properly installed on the server.

Instantiating GPU T-SBC on OpenStack

The T-SBC is instantiated with the help of a specific Heat template. The GPU T-SBC requires a special flavor containing the appropriate directives to use the GPU devices of the compute node that are available for PCIe pass-through.

T-SBC Heat Template

The T-SBC instance should be launched using the heatRgNoDhcp-TSBC-template.yaml template. This template shares all fields of an M-SBC template, and additionally has the following fields:

T-SBC Heat Template Fields

gpu
    Description: Indicates whether to use GPU or CPU for transcoding. Set to True for GPU T-SBCs.
    Note: For GPU T-SBCs, additional provisioning of codec percentages is required at the time of instantiation. Transcode resources for codecs are reserved and fixed for the lifetime of the instance.
    Example: True

AMR
    Description: Percentage of channels to allocate for the AMR codec (0-100). Applicable only when the gpu field is True.
    Example: 100

AMRWB
    Description: Percentage of channels to allocate for the AMRWB codec (0-100). Applicable only when the gpu field is True.
    Example: 0

EVRC (not applicable in this release)
    Description: Percentage of channels to allocate for the EVRC codec (0-100). Applicable only when the gpu field is True.
    Example: 0

EVRCB (not applicable in this release)
    Description: Percentage of channels to allocate for the EVRCB codec (0-100). Applicable only when the gpu field is True.
    Example: 0

G729
    Description: Percentage of channels to allocate for the G729 codec (0-100). Applicable only when the gpu field is True.
    Example: 0

G722
    Description: Percentage of channels to allocate for the G722 codec (0-100). Applicable only when the gpu field is True.
    Example: 0
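As an illustration, a GPU T-SBC stack might be launched with these fields supplied as parameters. This is a hedged sketch: the stack name is hypothetical, the remaining (M-SBC-shared) template parameters are elided, and the exact parameter-passing syntax depends on your Heat template and environment files:

```shell
# Hypothetical example: launch a GPU T-SBC reserving all channels for AMR.
# Parameter names (gpu, AMR, AMRWB, G729, G722) are the Heat fields listed above.
openstack stack create -t heatRgNoDhcp-TSBC-template.yaml \
    --parameter gpu=True \
    --parameter AMR=100 --parameter AMRWB=0 \
    --parameter G729=0  --parameter G722=0 \
    tsbc-gpu-stack
```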

Host Changes on OpenStack for Enabling GPU Devices

This section describes the changes needed on the Controller node and the Compute node hosting GPU cards in order to enable instances to use GPU devices. While this section focuses purely on the GPU aspect, Ribbon recommends that you refer to broader OpenStack performance tuning recommendations covered in the following links:

 

OS Configuration for Compute Node with GPU Device

1. Edit /etc/sysconfig/grub and ensure that the following parameters are present:

   intel_iommu=on iommu=pt rdblacklist=nouveau

   Notes:
   • Enables kernel support for PCIe pass-through.
   • Blacklists the open-source NVIDIA driver (nouveau) from loading on the host.

2. Update grub using the following command:

   grub2-mkconfig -o /etc/grub2.cfg

3. Create the /etc/modprobe.d/nouveau.conf file with the following contents:

   blacklist nouveau

   Note: Blacklists the open-source NVIDIA driver (nouveau) from loading on the host.

4. Reboot the compute node.

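The four steps above can be sketched as a shell session. This assumes a RHEL-style host where /etc/sysconfig/grub defines GRUB_CMDLINE_LINUX; adapt the paths to your distribution:

```shell
# 1. Append the required kernel parameters to the GRUB command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 intel_iommu=on iommu=pt rdblacklist=nouveau"/' \
    /etc/sysconfig/grub

# 2. Regenerate the grub configuration
sudo grub2-mkconfig -o /etc/grub2.cfg

# 3. Prevent the open-source nouveau driver from loading
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/nouveau.conf

# 4. Reboot the compute node
sudo reboot
```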

OpenStack Configuration for Compute Node with GPU Device

1. Add a PCI alias for the GPU device in /etc/nova/nova.conf.

   For V100:

   pci_alias={"vendor_id":"10de", "product_id":"1db4","device_type":"type-PCI","name":"v100gpu"}

   Note: The PCI alias is referred to by flavors that make use of this PCI device.

2. Add the GPU device to the existing PCIe whitelist entries in /etc/nova/nova.conf.

   For V100:

   pci_passthrough_whitelist=[{"devname": "p5p1", "physical_network": "sriov_1"}, {"devname": "p5p2", "physical_network": "sriov_2"},{"devname": "p6p1", "physical_network": "sriov_3"},{"devname": "p6p2", "physical_network": "sriov_4"}, {"vendor_id":"10de","product_id":"1db4"}]

   Note: Whitelists the PCI device for use in OpenStack.

3. Restart the nova-compute service:

   systemctl restart openstack-nova-compute.service

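Before restarting nova-compute, it is worth confirming that the IDs used in the alias and whitelist match what the host actually reports, and that the earlier OS changes took effect. These are diagnostic commands whose output varies per host:

```shell
# Confirm the host sees the V100 with the vendor/product IDs used above
lspci -nn -d 10de:1db4

# Confirm the nouveau driver is not loaded (no output expected)
lsmod | grep nouveau

# Confirm IOMMU support was enabled at boot
grep -o 'intel_iommu=on' /proc/cmdline
```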

OpenStack Configuration for Controller Node

1. Ensure that PciPassthroughFilter and NumaTopologyFilter are added to the scheduler_default_filters list in the /etc/nova/nova.conf file.

   Note: Enables nova to instantiate instances with CPU resources from the same NUMA node as the PCI devices to be used.

2. Add a PCI alias for the GPU device in the /etc/nova/nova.conf file.

   For V100:

   pci_alias={"vendor_id":"10de", "product_id":"1db4","device_type":"type-PCI", "name":"v100gpu"}

   Note: The PCI alias is referred to by flavors that make use of this PCI device.

3. Restart the nova-api service:

   systemctl restart openstack-nova-api.service


Guideline for Creating Flavors for GPU T-SBC Instances

Open the dashboard and create a flavor with the following properties. Check with your Ribbon account team to determine the appropriate instance size for your traffic needs.

Note

This is a sample benchmark. Ribbon does not mandate the use of the processors shown here.

Flavor Creation Guideline for GPU T-SBC Instances

Property           For V100
VCPUs              20
RAM                25 GiB
Root Disk (min)    65 GiB

After creating the flavor, update its metadata with the following key values.

Metadata Key Values

Key                      Value for V100    Remark
hw:cpu_policy            dedicated         Ensures guest vCPUs are pinned to host CPUs for performance.
hw:numa_nodes            1                 Ensures host CPUs of a single NUMA node are used in the instance for performance.
hw:cpu_thread_policy     prefer            Allocates each vCPU on thread siblings of physical CPUs.
hw:cpu_max_sockets       1                 Defines how KVM exposes the sockets and cores to the guest.
pci_passthrough:alias    v100gpu:1         Ensures the NVIDIA PCIe device is attached to the instance.

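Equivalently, the flavor and its metadata can be created from the OpenStack CLI. A sketch under stated assumptions: the flavor name gpu-tsbc-v100 is hypothetical, and the --ram value is given in MiB (25 GiB = 25600 MiB):

```shell
# Create the flavor with the properties from the table above
openstack flavor create --vcpus 20 --ram 25600 --disk 65 gpu-tsbc-v100

# Apply the metadata key values
openstack flavor set gpu-tsbc-v100 \
    --property hw:cpu_policy=dedicated \
    --property hw:numa_nodes=1 \
    --property hw:cpu_thread_policy=prefer \
    --property hw:cpu_max_sockets=1 \
    --property "pci_passthrough:alias"=v100gpu:1
```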

Configuring the SBC for Invoking T-SBC

Refer to the following pages for basic configuration steps for the S-SBC and M-SBC:

To enable the T-SBC, some additional configuration is required in the S-SBC and M-SBC, as described in the subsequent sections.

Configuring and Activating T-SBC Cluster

The steps for configuring the T-SBC are similar to those for the M-SBC, with the following exception: the IP interface group creation procedure should create private interface groups instead of public ones. There are no public interface groups for the T-SBC.

Additional Configuration for S-SBC

A DSP cluster must be configured in the S-SBC configuration to refer to the T-SBC cluster to be used for transcoding. The following steps describe the procedure for creating this cluster:

Creating a DSP Cluster

1. Log on to the EMA and then click the All tab.
2. In the navigation pane, click System > DSBC > Cluster > Type and add the T-SBC node entry by selecting DSP.
3. Click in the FQDN field and then add the corresponding FQDN for the T-SBC created in the T-SBC configuration.
4. Click Save.
5. Refer to System Provisioning - Packet Service Profile for configuration changes that must be made on the S-SBC to enable transcoding.

Note: In GPU T-SBCs, the required codecs and their percentages must be provisioned in the Heat template as described in the previous section. This provisioning is fixed for the lifetime of the application. All members of a single T-SBC cluster should follow the same codec provisioning values.

Note:

The GPU transcoding solution currently does not support more than one non-G711 transcodable codec per leg on a trunk group. Therefore, when configuring Packet Service Profiles, do not configure multiple non-G711 codecs on a single leg (This Leg/Other Leg parameters) when specifying the Codecs Allowed For Transcoding within Packet To Packet Control. Refer to Packet Service Profile - CLI or Packet To Packet Control - Codecs Allowed For Transcoding (EMA).

Additional Configuration for M-SBC

Configure the private IP interface group for relaying media packets to T-SBC using steps from the "Configure Private LIF Groups in M-SBC" section of Invoke MRF as a Transcoder for D-SBC.

 

 

Instantiate GPU I-SBC on OpenStack

Create a GPU I-SBC instance using only the following redundancy modes:

  1. Standalone mode, using the template: heatStandaloneTemplateNoDhcp.yaml
  2. 1:1 HA mode (centralized mode), using the template: heatHA11templateNoDhcp.yaml

Note

The N:1 redundancy model does not support GPU I-SBC.

The host changes required on OpenStack to enable GPU devices, and the flavor creation guidelines, are the same for the GPU I-SBC as for the GPU T-SBC. Follow the procedures in the sections Host Changes on OpenStack for Enabling GPU Devices and Guideline for Creating Flavors for GPU T-SBC Instances, above.

Configure GPU I-SBC on OpenStack

After the GPU I-SBC is instantiated on OpenStack, the default traffic profile is the activated traffic profile.

To apply the custom GPU traffic profile, log on as admin and execute the following steps:

  1. Create a sweCodecMixProfile with the codecs configured for GPU transcoding.
    Syntax:

    % set system sweCodecMixProfile <profile name> <codec> <ptime value: 10ms, 20ms, 30ms, 40ms, 60ms> percentage <percentage_value>

    Parameter Description

    sweCodecMixProfile (1-40 characters)
        <profile name> – Enter a unique SWe Codec Mix Profile name or one of the default profiles. The SWe Codec Mix Profile is attachable to transcodingCodecProfile objects of the SWe Traffic Profile.

    <codec> (N/A)
        Enter the codec used by this Codec Mix Profile. Select one of the codecs currently supported on GPU: amr, amrwb, g729, g722, g711.

    <ptime value> (N/A)
        Select a packetization time value representing 10 ms, 20 ms, 30 ms, 40 ms, or 60 ms:
        • p10
        • p20
        • p30
        • p40
        • p60

    percentage (1-100)
        <% value> – The percentage distribution allocated for the codec mix. The sum of all percentage entries of all columns in any row of the transcoding profile table must equal 100.


    Example: To configure a codec profile for AMRWB-G711u transcoding, execute the following commands:

    % set system sweCodecMixProfile customCodecMix amrwb p20 percentage 50
    % set system sweCodecMixProfile customCodecMix g711  p20 percentage 50
    % commit
  2. Create a traffic profile. Set the transcodingCodecProfile with the sweCodecMixProfile created in Step 1.
    Syntax:

    % set system sweTrafficProfiles <profile name> 
    	isAccess <value>
    	callHoldTime <duration>
    	passthroughCodecProfile <profile name>
    	transcodePercent <percentage>
    	transcodingCodecProfile <profile name>
    	useGPUForTranscoding <false | true>


    Parameter Description

    sweTrafficProfiles (1-40 characters)
        <profile name> – Enter a unique SWe Traffic Profile name.
        Note: To create additional profiles, delete any inactive custom profiles.

    isAccess (N/A)
        Set this flag to true if the deployment uses an access scenario.
        • false (default)
        • true
        Note: When set to "true", the parameters internalRefreshTimer, registrationRefreshInterval and bhcaPerSubscriber are available for configuration.

    bhcaPerSubscriber (0-5)
        <# attempts> (default = 1) – Indicates busy hour call attempts (BHCA) per subscriber.
        Note: This parameter is available when isAccess is set to "true".

    callHoldTime (10-10800)
        <# seconds> (default = 90) – Enter the average call hold time, in seconds, of the call load for this profile.

    cryptoPercent (0-100)
        <% value> (default = 0) – The percentage of media sessions (including both transcoding and passthrough) requiring cryptographic treatment. The value is 50 when there is SRTP<->RTP interworking on all calls.

    directMediaPercent (0-100)
        <% value> (default = 0) – The call load percentage for direct media.
        Note: Ensure that the combined total percentage of directMediaPercent and transcodePercent is not greater than 100%.

    passthroughCodecProfile (N/A)
        The name of the codec mix to associate with the Passthrough Codec Profile.
        • G711_20ms

    processorCapabilityIndexOverride (N/A)
        Use this flag to enable/disable overriding of the default CPU performance computation by the SBC SWe. When set to "true", the computed indices (which are calculated during system boot-up) are ignored, and the value provided in the processorCapabilityIndexOverrideValue attribute is used for all estimations.
        • false (default)
        • true
        Note: Since use cases for overriding the default computed indices are rare, Ribbon recommends not setting processorCapabilityIndexOverride to "true", to avoid inaccurate session numbers and vCPU computations.

    processorCapabilityIndexOverrideValue (0.2-10)
        Use this parameter to specify the computational value used to override the default computed indices. (default = 1)
        Note: This parameter is available when processorCapabilityIndexOverride is set to "true".

    internalRefreshTimer (15-86400)
        <# seconds> (default = 1800) – Use this parameter to specify the internal registration timer, in seconds.
        Note: The parameter is available only when isAccess is set to "true".

    externalRefreshTimer (15-86400)
        <# seconds> (default = 1800) – Use this parameter to specify the external registration timer, in seconds.
        Note: The parameter is available only when isAccess is set to "true".

    transcodePercent (0-100)
        <% value> (default = 0) – Use this parameter to specify the percentage of call load to use for transcoded calls.
        Note: Ensure that the combined total percentage of directMediaPercent and transcodePercent is not greater than 100%.

    transcodingCodecProfile (N/A)
        The name of the codec mix to associate with the Transcoding Codec Profile.

    tonesPercent (0-100)
        <% value> (default = 0) – Use this parameter to specify the percentage of legs to use for tones treatment.

    useGPUForTranscoding (N/A)
        Set this flag to "true" if the deployment uses GPU for transcoding.
        • false (default)
        • true

    Example: To create a custom GPU profile named custom_gpu_profile using the sweCodecMixProfile created in Step 1 as the transcoding profile:

    % set system sweTrafficProfiles custom_gpu_profile callHoldTime 100 transcodePercent 50 passthroughCodecProfile G711_20ms transcodingCodecProfile customCodecMix useGPUForTranscoding true
    % commit


  3. Activate the traffic profile created in Step 2.
    Syntax:

    % set system sweActiveProfile name <profile name>

    Use the same profile name as used in Step 2.
    Example: To activate the custom GPU traffic profile created in Step 2, execute the following command:

    % set system sweActiveProfile name custom_gpu_profile
    % commit


    Note

    The instance reboots and comes up with the activated traffic profile.

  4. Check the activated profile.
    Syntax:

    > show table system sweActiveProfile


    Example:

    admin@vsbc1> show table system sweActiveProfile
    name custom_gpu_profile;
    stateChangeTime 2018-12-09T13:25:43-00:00;
    [ok][2018-12-11 01:29:03]




Licensing

All pre-existing licensing related to transcoding also applies to GPU codecs. There is no separate license for GPU functionality.