

Supported Deployment Scenarios

GPU acceleration is supported on SBC SWe cloud-based I-SBC and T-SBC instances on OpenStack (Newton and later). The T-SBC is the component in a Distributed SBC architecture that provides transcoding service.

GPU devices are attached to SBC cloud instances through PCIe pass-through; a single GPU device can be used by only one instance at a time. The process of enabling PCIe pass-through in OpenStack is detailed later in this document. For performance, ensure NUMA locality of the GPU devices.
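Because a GPU is dedicated to a single instance and performance depends on NUMA locality, it is worth confirming which NUMA node a GPU occupies before defining flavors. A minimal sketch for a Linux compute node; the PCI address below is a placeholder (find the real one with lspci -D -d 10de:):

```shell
# Placeholder PCI address of the GPU; substitute the value reported by
# `lspci -D -d 10de:` on your compute node.
bdf="0000:3b:00.0"
numa_file="/sys/bus/pci/devices/${bdf}/numa_node"
if [ -r "$numa_file" ]; then
  # A value of -1 means the platform did not report a NUMA node for the device.
  echo "GPU $bdf reports NUMA node $(cat "$numa_file")"
else
  echo "device $bdf not present on this host"
fi
```

Flavors for instances using this GPU should then pin vCPUs to the same NUMA node, as covered in the flavor guideline section below.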

Info

NVIDIA GRID is not supported.

Supported GPU Devices

NVIDIA Tesla V100(PCIe)

Supported Codecs

The following codecs are supported:

  • GPU + CPU codecs: AMR-NB, AMR-WB, EVRC, EVRCB, G729, G722, G711
  • CPU-only codecs: G723, G726, G7221, ILBC, OPUS, SILK_8, SILK_16, EVS, G7112G711, T38

For more information on "GPU + CPU codecs", see the section Support for CPU+GPU Hybrid Transcoding Instances.

Info
Note

You can provision CPU codecs in the codec profile and associate it with the GPU traffic profile; however, you must provision at least one GPU codec in the sweCodecMixProfile.

Include Page
G.722_Silence_Suppression_Note
 

Info
Prerequisite Note

The following procedures assume that the supported GPU devices have been properly installed on the server.

Anchor
HybridTranscoding
Support for CPU+GPU Hybrid Transcoding Instances

The term "Hybrid Transcoding" refers to leveraging both CPU and GPU resources efficiently to accommodate a given codec combination for transcoding. With Hybrid Transcoding, a suitable VM instance (SBC SWe on KVM/OpenStack) utilizes all of the CPU and GPU resources allocated to it when provisioning a given transcode call mix scenario.

Prior to Hybrid Transcoding, the SBC supported either a pure CPU transcoding solution or a pure GPU transcoding solution. In the pure GPU solution, however, many vCPUs are left unused. For example, when a 32-vCPU GPU I-SBC instance is provisioned for only AMRWB-G711u calls, just 13 vCPUs are used, although such an instance can handle 7680 AMRWB-G711u transcoding sessions. With Hybrid Transcoding, the remaining 19 vCPUs (32 - 13) are used to provision additional AMRWB-G711u sessions.

Hybrid Transcoding enables a GPU-SBC to support GPU codecs as well as non-GPU codecs in the same instance. For example, AMRWB-G711 and G726-G711 transcoding are supported in the same instance.

Hybrid Transcoding is supported in Custom GPU and Standard GPU traffic profiles.

For Hybrid Transcoding, the percentage value of the codec G7112G711 in sweCodecMixProfile indicates the proportion of the total number of sessions designated for pure G711 transcoding (G711-G711). This applies to Hybrid Transcoding as well as to pure CPU transcoding solutions.
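As a worked illustration of that proportion (the session count and percentage below are hypothetical):

```shell
# If an instance supports 1000 transcoding sessions in total and the
# G7112G711 percentage is 10, then 100 sessions are designated for pure
# G711-G711 transcoding. Values are illustrative only.
total_sessions=1000
g7112g711_pct=10
pure_g711_sessions=$(( total_sessions * g7112g711_pct / 100 ))
echo "sessions designated for G711-G711: $pure_g711_sessions"
```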

Info
Note

The percentage value for G7112G711 is used for estimating transcode and bandwidth cost.

The percentage value for G711 is not used for estimating transcode cost, but is used for bandwidth calculation of PXPAD scenarios.

The percentage value for G711 cannot be greater than the percentage value of non-G711 codecs.

The sum of all codec percentages must equal 100.

DSP-based Tone detection is supported only on GPU-ISBC profile.
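Since the codec percentages must sum to exactly 100, a quick pre-check of the values you plan to provision can catch mistakes before instantiation. A minimal sketch with hypothetical values (awk is used because shell arithmetic is integer-only):

```shell
# Hypothetical codec percentages (e.g. for G729, AMR, T38, G711).
percs="45.50 38.00 0.75 15.75"
# Sum the values; shell $(( )) cannot handle decimals, so use awk.
total=$(printf '%s\n' $percs | awk '{ s += $1 } END { printf "%.2f", s }')
echo "total=$total"
if [ "$total" = "100.00" ]; then
  echo "codec percentages sum to 100"
else
  echo "codec percentages must sum to 100 (got $total)"
fi
```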

Best Practice

Ribbon recommends monitoring the status of codec channel capacities for codecs provisioned on GPU (on a per-GPU-device basis) as well as on CPU.

  • To display the status of codec channel capacities for codecs provisioned on GPU, on a per-GPU-device basis, execute the following command:

    Code Block
    > show table system gpuTranscodeStatus
  • Similarly, to display the status of codec channel capacities for codecs provisioned on CPU, execute the following command:

    Code Block
    > show table system cpuTranscodeStatus

To display the status using the EMA, refer to:



Instantiating GPU T-SBC on OpenStack Cloud

The T-SBC is instantiated using a specific Heat template. The GPU T-SBC requires a special flavor that has appropriate directives to utilize the GPU devices of the compute node available for PCIe pass-through.

T-SBC Heat Template

The T-SBC instance should be launched using the heatRgNoDhcp-TSBC-template.yaml template. This template shares all fields of an M-SBC template, and additionally has the following fields:

Table: T-SBC Heat Template

Field | Description | Example or Recommendation

gpu | Indicates whether to use GPU or CPU for transcoding. Must be set to true for GPU T-SBCs. Note: for GPU T-SBCs, additional provisioning of codec percentages is required at the time of instantiation. Transcode resources for codecs are reserved and fixed for the lifetime of the instance. | True

Each codec field below holds the percentage of channels to allocate for that codec (0-100) and is applicable only when the gpu field is True.

G729 | Provisioned on GPU and CPU for transcoding. | 45.50
G722 | Provisioned on GPU and CPU for transcoding. | 0
EVRCB | Provisioned on GPU and CPU for transcoding. | 0
EVRC | Provisioned on GPU and CPU for transcoding. | 0
AMRWB | Provisioned on GPU and CPU for transcoding. | 0
AMR | Provisioned on GPU and CPU for transcoding. | 15
G723 | Provisioned for CPU transcoding. | 0
G726 | Provisioned for CPU transcoding. | 0
G7221 | Provisioned for CPU transcoding. | 0
ILBC | Provisioned for CPU transcoding. | 0
OPUS | Provisioned for CPU transcoding. | 0
SILK_8 | SILK NB; provisioned for CPU transcoding. | 0
SILK_16 | SILK WB; provisioned for CPU transcoding. | 0
EVS | Provisioned for CPU transcoding. | 0
G7112G711 | Provisioned for CPU transcoding. | 0
T38 | Provisioned for CPU transcoding. | 0.75
G711 | See the notes on the G711 percentage under Support for CPU+GPU Hybrid Transcoding Instances. | 0
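To make the provisioning concrete, the codec fields above might be supplied to the stack as parameters like the following. This is a hypothetical excerpt; the authoritative parameter names, types, and defaults are those defined in heatRgNoDhcp-TSBC-template.yaml, and all percentages must sum to 100.

```yaml
# Hypothetical Heat environment excerpt for a GPU T-SBC stack.
# Verify parameter names against heatRgNoDhcp-TSBC-template.yaml.
parameters:
  gpu: "True"
  G729: 45.50    # GPU + CPU codec
  AMR: 38.00     # GPU + CPU codec
  T38: 0.75      # CPU-only codec
  G711: 15.75    # see the G711 notes in the Hybrid Transcoding section
  # All remaining codec percentages are left at 0; the sum must be 100.
```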

 

 

 


Host Changes on OpenStack for Enabling GPU Devices

This section describes the changes needed on the Controller node and the Compute node hosting GPU cards in order to enable instances to use GPU devices. While this section focuses purely on the GPU aspect, Ribbon recommends that you refer to broader OpenStack performance tuning recommendations covered in the following links:

 

Table: OS Configuration for Compute Node with GPU Device
Step
Action
Notes
1

Edit /etc/sysconfig/grub and ensure that the following parameters are populated:

intel_iommu=on iommu=pt rdblacklist=nouveau

 

  • Enables kernel support for PCIe pass-through.
  • Blacklists the open-source NVIDIA driver (nouveau) from loading on the host.
2

Update grub using the following command:

grub2-mkconfig -o /etc/grub2.cfg
3

Create /etc/modprobe.d/nouveau.conf file with the following contents:

blacklist nouveau
Blacklists the open-source NVIDIA driver (nouveau) from loading on the host.
4 Reboot the compute node.
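After the reboot, the kernel command line should carry the three flags from step 1. The check below is a sketch that validates a sample command-line string; on a real compute node, substitute the contents of /proc/cmdline:

```shell
# Sample kernel command line; on the host, use: cmdline=$(cat /proc/cmdline)
cmdline="BOOT_IMAGE=/vmlinuz-3.10.0 root=/dev/mapper/rhel-root ro intel_iommu=on iommu=pt rdblacklist=nouveau"
missing=0
for flag in intel_iommu=on iommu=pt rdblacklist=nouveau; do
  case " $cmdline " in
    *" $flag "*) echo "$flag: present" ;;
    *) echo "$flag: MISSING"; missing=1 ;;
  esac
done
```

Checking `lsmod | grep nouveau` after reboot similarly confirms the nouveau driver did not load.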
 

 

Table: OpenStack Configuration for Compute Node with GPU Device
Step
Action
Notes
1

Add a PCI alias for the GPU device in /etc/nova/nova.conf.

For V100:

pci_alias={"vendor_id":"10de", "product_id":"1db4","device_type":"type-PCI","name":"v100gpu"}

Note: The PCI alias is referred to by flavors that use this PCI device.
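The vendor_id/product_id pair in the alias comes from the PCI IDs that lspci -nn reports for the card. A sketch of extracting them from a sample output line (the line shown is illustrative; on the host, run lspci -nn -d 10de:):

```shell
# Sample `lspci -nn` output line for a Tesla V100; replace with real output.
line='3b:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4]'
# The trailing [vvvv:pppp] bracket holds the vendor and product IDs.
ids=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\]$/\1/p')
echo "vendor_id=${ids%%:*} product_id=${ids##*:}"
```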

2

Add the GPU device to the existing PCIe whitelist entries in /etc/nova/nova.conf.

For V100:

passthrough_whitelist=[{"devname": "p5p1", "physical_network": "sriov_1"}, {"devname": "p5p2", "physical_network": "sriov_2"},{"devname": "p6p1", "physical_network": "sriov_3"},{"devname": "p6p2", "physical_network": "sriov_4"}, {"vendor_id":"10de","product_id":"1db4"}]

Note: Whitelists the PCI device for use in OpenStack.

3

Restart the nova-compute service:

systemctl restart openstack-nova-compute.service

 

 

 
Table: OpenStack Configuration for Controller Node
Step
Procedure
Remark
1

Ensure that PciPassthroughFilter and NumaTopologyFilter are added to the scheduler_default_filters list in the /etc/nova/nova.conf file.

Note: Enables nova to instantiate instances with CPU resources from the same NUMA node as the PCI devices to be used.

2

Add a PCI alias for the GPU device in the /etc/nova/nova.conf file.

For V100:

pci_alias={"vendor_id":"10de", "product_id":"1db4","device_type":"type-PCI", "name":"v100gpu"}
 

Note: The PCI alias is referred to by flavors that use this PCI device.

3

Restart the nova-api service:

systemctl restart openstack-nova-api.service
  

Guideline for Creating Flavors for GPU T-SBC Instances

Open the dashboard and create a flavor with the following properties. Check with your Ribbon account team to determine the appropriate instance size for your traffic needs.

Info
Note

This is a sample benchmark. Ribbon does not mandate the use of the processors shown here.

Table: Flavor Creation Guideline for GPU T-SBC Instances
Property | For V100
VCPUs | 20
RAM | 25 GiB
Root Disk (min) | 65 GiB
 

After creating the flavor, update its metadata with the following key values.

Table: Metadata Key Values
Key | Value for V100 | Remark
hw:cpu_policy | dedicated | Ensures guest vCPUs are pinned to host CPUs for performance.
hw:numa_nodes | 1 | Ensures host CPUs of a single NUMA node are used in the instance for performance.
hw:cpu_thread_policy | prefer | Allocates each vCPU on thread siblings of physical CPUs.
hw:cpu_max_sockets | 1 | Defines how KVM exposes the sockets and cores to the guest.
pci_passthrough:alias | v100gpu:1 | Ensures NVIDIA PCIe devices are attached to the instance.
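The flavor and its metadata can also be applied from the OpenStack CLI. The sketch below only assembles and prints the flavor set command from the table's key/value pairs (the flavor name gpu-tsbc-v100 is hypothetical); review the printed command and run it against your cloud after creating the flavor:

```shell
# Hypothetical flavor name; create the flavor first, for example:
#   openstack flavor create --vcpus 20 --ram 25600 --disk 65 gpu-tsbc-v100
flavor="gpu-tsbc-v100"
props="hw:cpu_policy=dedicated hw:numa_nodes=1 hw:cpu_thread_policy=prefer hw:cpu_max_sockets=1 pci_passthrough:alias=v100gpu:1"
args=""
for p in $props; do
  args="$args --property $p"
done
# Print (not execute) the resulting command for review.
echo "openstack flavor set$args $flavor"
```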
 

Configuring the SBC for Invoking T-SBC

Refer to the following pages for basic configuration steps for the S-SBC and M-SBC:

To enable the T-SBC, some additional configuration is required in the S-SBC and M-SBC, as described in the following sections.

Configuring and Activating T-SBC Cluster

The configuration of a T-SBC is similar to that of an M-SBC, with the following exception: the IP interface group creation procedure must create private interface groups instead of public ones. There are no public interface groups for a T-SBC.

Additional Configuration for S-SBC

A DSP cluster must be configured in the S-SBC configuration to refer to the T-SBC cluster intended for transcoding. The following steps describe how to create this cluster:

Table: Creating a DSP Cluster
Step
Action
Notes
1Log on to the EMA and then click the All tab. Refer to Modifying SBC Cluster Configuration for information on accessing the SBC Configuration Manager to modify the configuration of the S-SBC cluster.
2

In the navigation pane, click All > System > DSBC > Cluster > Type and add the T-SBC node entry by selecting DSP.

 
3 Click in the FQDN field and then add the corresponding FQDN for the T-SBC created in the T-SBC configuration.
4Click Save. 
5

Refer to System Provisioning - Packet Service Profile for configuration changes that must be made on the S-SBC to enable transcoding.

 

Note: In GPU T-SBCs, the required codecs and their percentages must be provisioned in the Heat template as described in the previous section. This provisioning is fixed for the lifetime of the application. All members of a single T-SBC cluster should use the same codec provisioning values.

 

Include Page
GPU_T-SBC_Codec_Restriction

Additional Configuration for M-SBC

Configure the private IP interface group for relaying media packets to T-SBC using steps from the "Configure Private LIF Groups in M-SBC" section of Invoke MRF as a Transcoder for D-SBC.

Licensing

All pre-existing licensing related to transcoding applies to GPU codecs as well. There is no separate license for GPU functionality.