
Overview

Audio Transcoding and Video Relay


SBC SWe systems in an OpenStack cloud environment inter-operate with a third-party transcoding platform called the Media Resource Function (MRF) to transcode audio and relay video/T.140.

Note: Only distributed SBC (D-SBC) SWe systems on OpenStack support this feature.


The SBC supports the following functionality:

  • Relaying both audio and video streams
  • Relaying audio, video, and T.140 streams
  • Audio transcode through MRF and video relay
  • Audio transcode through MRF and video/T.140 relay
  • Audio transcode through MRF and T.140 relay
  • Audio and T.140 transcode through MRF (see T.140 and TTY Interworking Support below)

Note: The SBC supports this functionality only for MRF-transcoded calls on D-SBC platforms.

T.140 and TTY Interworking Support


Prior to release 7.1.0, the SBC invoked MRF only for audio streams to achieve transcoding. Non-audio streams were relayed end-to-end even when the audio was sent to the MRF. Teletype (TTY) is a legacy service that encodes text characters as tones embedded in a carrier (PCMU, PCMA, or EVRC) media stream. T.140 streams carry text as a separate payload.

Beginning with release 7.1.0, the SBC invokes MRF for T.140 and TTY interworking to achieve transcoding. When T.140 and TTY interwork, text characters are exchanged between the T.140 stream and the tones carried in-band with the audio. T.140 pass-through scenarios are supported without any MRF interaction.

Note: This feature does not support sessions that have only a T.140 stream.

The SBC does not invoke T.140 and TTY interworking when T.140 is present on both legs with different transmission rates or a difference in redundancy packet support.

Note: Only the SBC on OpenStack (D-SBC) supports this feature.

Figure: Audio and T.140 Transcoded

 

For T.140 and TTY interworking to succeed:

  • the offer received by the SBC must have a text stream with a valid IP and port, and the answer received by the SBC must have a text stream with port=0,
  • the audio must be transcoded,
  • the audio codec on the TTY leg must be Baudot capable (G711U, G711A, or EVRC).

Note:
  • If the transcode-only flag is enabled, then, like other packet-to-packet control configurations, it applies only to the audio stream.
  • The T.140 stream can be passed through or transcoded based on the above conditions.

The existing t140Call flag in the PSP achieves T.140 and TTY interworking for MRF transcoding. The following table outlines when T.140 and TTY can and cannot interwork (see the configuration example after the table).

Table: PSP Configuration

Offer Leg Route PSP (T.140 Call) | Answer Leg Route PSP (T.140 Call) | Result
Disabled | Disabled | T.140 disabled on both legs
Disabled | Enabled  | T.140 disabled on both legs
Enabled  | Disabled | T.140-TTY can interwork
Enabled  | Enabled  | T.140-TTY can interwork
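
The following is an illustrative sketch of enabling the t140Call flag in a Packet Service Profile from the CLI. The hierarchy shown (profiles media packetServiceProfile) and the value syntax (enable) are assumptions; verify them against the Packet Service Profile - CLI reference for your release.

% set profiles media packetServiceProfile <PSP name> t140Call enable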

Note: A need for interworking is evident when the m=text line for the leg that sends T.140 is below the m=audio line for that leg, and when there is no m=text line for the other leg in the offer sent toward MRF.

If T.140 and TTY interworking is not required but audio transcoding is required, the audio streams go through MRF and the T.140 streams do not go through MRF.

If the audio is pass-through and T.140 requires transcoding, the SBC does not invoke MRF and instead rejects the text stream on the offer leg (see the following call flow).

Figure: Audio pass-through

 

If T.140 and TTY interworking is required but MRF does not support interworking, the SBC rejects the T.140 stream on the leg that offers T.140.

SBC SWe Cloud

Limitations

  • Audio-less calls are not supported.
  • Among non-audio streams, only video and T.140 streams are supported.

For configuration details, refer to the configuration sections below.

To view media stream statistics, refer to Show Status Address Context - Call Status Details.

Prerequisites to Invoking MRF

Note: The first three activities below cause the D-SBC to invoke MRF.

 

  1. Configure an MRF profile in the S-SBC.
  2. Configure private LIF groups in the M-SBC.
  3. Enable transcoding in the Packet Service Profile (refer to Packet Service Profile - CLI); a sketch is shown after this list.
  4. Create a Path Check Profile, ARS profile, and CAC profile during the initial configuration.
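
The following is an illustrative sketch of enabling transcoding in a Packet Service Profile from the CLI. The packetToPacketControl transcode hierarchy and the value shown (conditional) are assumptions; refer to Packet Service Profile - CLI for the authoritative syntax and supported values.

% set profiles media packetServiceProfile <PSP name> packetToPacketControl transcode conditional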


Configure an MRF Profile on the S-SBC

This configuration example explains how to configure the MRF cluster profile on the S-SBC.

 


Whether the MRF servers are configured with an FQDN or with IP addresses is decided by the Routing Type configured in the MRF profile.

 


To configure the domain name of the MRF server, select FQDN:

Note: When FQDN routing is enabled, configure a DnsGroup on the zone in which the mrfTgName is present.

To configure an IP address for the MRF server, select IpAddress.

Note: When the Routing Type is set to IpAddress, configure a minimum of one IP address. When configuring multiple IP addresses, separate each IP address with a comma (,).

Ribbon supports a maximum of four IP address configurations.

To configure a dedicated trunk group for the MRF servers:

To configure the transport type for the MRF server:

Note: The default value is UDP.

To configure the Request URI sent in the INVITE message toward the MRF server:

To configure the port of the MRF server in the MRF profile:

Note: When the mrfRoutingType is set to IpAddress, the mrfPort default value is 5060.

When the mrfRoutingType is set to fqdn, the mrfPort default value is 0. When the port value is 0, you must configure the desired port in the DNS server SRV record.

To configure the state of the MRF server:
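
The following consolidated sketch illustrates the parameters described above. The command hierarchy (global servers mrfProfile) and all parameter names other than mrfRoutingType, mrfTgName, and mrfPort are assumptions for illustration only; refer to the MRF profile CLI reference for the authoritative syntax.

FQDN routing:
% set global servers mrfProfile <MRF profile name> mrfRoutingType fqdn mrfFqdn <MRF server FQDN> mrfTgName <MRF trunk group> mrfPort 0 state enabled

IP address routing (up to four comma-separated addresses):
% set global servers mrfProfile <MRF profile name> mrfRoutingType ipAddress mrfIpAddress <IP1>,<IP2> mrfTgName <MRF trunk group> mrfPort 5060 state enabled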

Configure Private LIF Groups on the M-SBC

This configuration example explains the CLI commands required to configure the private LIF group used to reach the MRF on the M-SBC.


To configure a private IP interface group that communicates with the MRF, execute the loadBalancingService set command:
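
A minimal sketch is shown below; the leaf name privateIpInterfaceGroupName is an assumption based on the description above, so verify it against the CLI reference for your release.

% set system loadBalancingService privateIpInterfaceGroupName <private IP interface group name>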

 

To view the configured private IP interface group name, execute the loadBalancingService show command:
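
For example (the exact show syntax is an assumption to verify on your release):

> show table system loadBalancingService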


Debug Statistics Commands

Use the following command to get the media statistics corresponding to the private NIF resources for an MRF call.

 

Use the following CLI 'show' command to view the call statistics for an MRF call.

> show status global callDetailStatus

The callDetailStatus command contains the following fields (with example output):
 

 

Use the following CLI 'show' command to view the call resource statistics for an MRF call.

> show status global callResourceDetailStatus

Note: The value dresMrf for the resType parameter indicates that MRF is used for transcoding the call.

Parameter: resType
Value: dresMrf

     

 

Use the following CLI 'show' command to view the call media leg information for an MRF call.
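
The specific command name below is an assumption to verify on your release; a likely candidate is:

> show status global callMediaStatus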

Create a Path Check Profile, ARS profile, and CAC Profile During Initial Configuration


 

Path Check Profile

The Path Check Profile specifies the conditions that constitute a connectivity failure, and in the event of such a failure, the conditions that constitute a connectivity recovery.

  • For more information on path checks, refer to Service Profiles - Path Check Profile.
  • For more information on creating an IP peer, refer to System Provisioning - IP Peer for the GUI or Zone - IP Peer - CLI.

    Note: If using IP addresses, create a different IP peer for each IP address configured as an MRF IP address in the MRF cluster profile and attach the Path Check profile to each peer. If using an FQDN, create the IP peer with the FQDN and attach the Path Check profile. (A CLI sketch follows this note.)
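
The following is an illustrative sketch, assuming a profile named MRF_PATHCHK and a peer named MRF_PEER; the exact leaf names under pathCheckProfile and ipPeer pathCheck are assumptions to verify against the references above.

% set profiles services pathCheckProfile MRF_PATHCHK protocol sipOptions
% set addressContext <address context> zone <zone name> ipPeer MRF_PEER pathCheck profile MRF_PATHCHK state enabled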

ARS Profile

The Address Reachability Service (ARS) determines whether a server is reachable, blacklists a server IP address when it is unreachable, and removes the server from the blacklist state when it recovers. ARS profiles can be created to configure blacklisting and recovery algorithm variants. For more information, refer to Service Profiles - SIP ARS Profile (EMA) or SIP ARS Profile - CLI.

Create an ARS profile and attach it to the MRF trunk group configured in the cluster profile. The ARS feature controls congestion handling for 503 responses.
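
A minimal sketch, assuming an ARS profile named MRF_ARS; the attachment path under sipTrunkGroup services is an assumption to verify against the SIP ARS Profile references above.

% set profiles services sipArsProfile MRF_ARS
% set addressContext <address context> zone <zone name> sipTrunkGroup <MRF TG name> services sipArsProfile MRF_ARS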

CAC Profile

 

Invoking the MRF Server

In a cluster profile, you can configure the routing type for an FQDN or a list of IP addresses.

MRF Server Configured as FQDN

When the FQDN is chosen, the FQDN resolves into a list of IP addresses.

If the MRF profile is configured with an FQDN, a call is routed to the MRF server(s) as follows:

  • If mrfPort is configured as '0', the SBC performs an SRV query to fetch the port number based on priority and weight. After the SRV query, it performs an A or AAAA query to fetch the corresponding IP addresses.
  • If mrfPort is configured with a valid port number, the SBC performs only an A or AAAA query.
  • If no response is received from the MRF server, the SBC retransmits the INVITE up to six times over a period of 32 seconds. The number of retransmissions is configurable under the trunk group as follows. After the configured number of retransmissions, the SBC by default retransmits to the alternative MRF server IP address available in the list.

% set addressContext <address_context> zone <zone name> sipTrunkGroup <TG Name> signaling retryCounters invite <0-6>

The DNS crankback profile is configured so that the SBC retries the other records for error responses received from the MRF server. If the error code matches an entry in the DNS crankback profile, the SBC retries an alternative MRF server; otherwise, the call is rejected.

Selecting an SRV RR based on priority and weight

The SRV record look-up response is as follows: 


# _service._proto.name.   TTL    class  SRV  priority  weight  port  target host
_sip._tcp.ribbon.com.     86400  IN     SRV  10        60      5060  bigserver.ribbon.com.
_sip._tcp.ribbon.com.     86400  IN     SRV  10        20      5060  mediumserver.ribbon.com.
_sip._tcp.ribbon.com.     86400  IN     SRV  10        10      5060  smallserver.ribbon.com.
_sip._tcp.ribbon.com.     86400  IN     SRV  10        10      5070  smallserver.ribbon.com.
_sip._tcp.ribbon.com.     86400  IN     SRV  20        0       5060  backupserver.ribbon.com.

 

Priority: Determines the precedence of use of the record's data. The SRV record with the lowest-numbered priority value is used first. If the connection to the host fails, the client falls back to other records of equal or higher priority value.

Weight: If a service has multiple SRV records with the same priority value, clients use the weight to determine the host to be used. The weight value is relevant only in relation to other weight values for the service, and only among records with the same priority value.

In the table above, the first four records share a priority of 10, so the weight field is used to determine which server (host and port combination) to contact. The sum of the four weights is 100, so bigserver.ribbon.com is used 60% of the time. The hosts mediumserver and smallserver are used for 20% of requests each, with half of the requests sent to smallserver (that is, 10% of the total requests) going to port 5060 and the remaining half to port 5070. If bigserver is unavailable, the two remaining machines share the load equally, because each is selected 50% of the time. If all four servers with priority 10 are unavailable, the record with the next highest priority value is chosen, which is backupserver.ribbon.com.

  • If the target host is “.”, the SBC discards the record.
  • The SBC uses the priority and weight fields in SRV record selection.
    • The SBC attempts to contact the target host with the lowest-numbered priority it can reach.
    • If multiple records have the same priority, the SBC uses the weight field to select the target host, with larger-weight records given a proportionately higher probability of being selected.
      • The SBC uses the “running sum” mechanism described in RFC 2782 for load-balancing across SRV RRs of the same priority.

Once a given SRV record is selected, the SBC performs an A record lookup. If the A record lookup fails, an AAAA record lookup is performed.

Selecting an A/AAAA RR based on configuration

The SBC performs the A record lookup and, if that fails, performs an AAAA record lookup. The SBC handles the IPv4 and/or IPv6 addresses returned in the A/AAAA record lookup responses.

An example of A record look-up response is as follows:

bigserver.ribbon.com 86400 IN A 192.168.1.10

bigserver.ribbon.com 86400 IN A 192.168.1.11

An example of AAAA record look-up response is as follows:

bigserver.ribbon.com 86400 IN AAAA fe80:0:0:0:214:4fff:fe56:848d

bigserver.ribbon.com 86400 IN AAAA fd00:10:6b50:110::28

The SBC distributes the A/AAAA records based on the configuration of recordOrder in the DNS group.

% set addressContext <addressContext name> dnsGroup <dnsGroup name> server <DNS server name> recordOrder <centralized-roundrobin | priority | roundrobin>

Where:

  • recordOrder – Indicates the lookup order of local name service records associated with the specified DNS server.
  • centralized-roundrobin – (recommended) This option uses the round-robin technique with respect to the whole system.
  • priority (default) – Indicates the lookup order is based on the order of entries returned in the DNS response.
  • roundrobin – This option shares and distributes local records among internal SBC processes in a round-robin fashion. Over a large number of calls, a fair distribution occurs across all DNS records.

In the case of multiple SRV RRs and multiple A/AAAA RRs, if all the IP addresses for a given A/AAAA record are tried and are not reachable, the SBC selects the next available SRV record and retries to reach the MRF server, using the procedures specified above for selecting an SRV record based on priority and weight.

MRF Server Configured as IP Address

In this profile:

  • A maximum of four IPv4/IPv6 addresses can be configured.
  • If no response or a 504 response is received from the MRF, the SBC does not try an alternative MRF server and the call is rejected.
  • If a 488, 500, or 503 response is received from any MRF server, the SBC tries an alternative MRF server before rejecting the call.

If the MRF profile is configured with a list of MRF server IP addresses, a call is routed to the MRF server(s) as follows:

  • The S-SBC tries to connect to the configured MRF server IP addresses in a round-robin fashion.
  • If any failure or no response is received from an MRF server for a specific IP address, that IP address is blacklisted. While an address is blacklisted, the S-SBC continuously sends OPTIONS messages to the MRF server to check whether the IP is active or inactive. Once the IP is active, the S-SBC removes the IP address from the blacklist state and tries to connect to the same IP when the next call is routed to the MRF server.
  • The S-SBC then tries the next available MRF server IP address configured in the list.
  • This process is repeated until the S-SBC either receives a success response from one of the MRF servers or all the MRF server IP addresses in the list are exhausted.

Example: The MRF profile is configured with a list of MRF server IP addresses such as IP1, IP2, IP3, and IP4. For the 1st call, the S-SBC tries to connect to the MRF server IP1. Meanwhile, the S-SBC receives the 2nd, 3rd, and 4th calls and connects them to the MRF servers IP2, IP3, and IP4 respectively. If, for the 1st call, the S-SBC receives a failure or no response from the MRF server IP1, the S-SBC tries IP2 and connects successfully.

Signaling and Media Flow

Signaling and media flow for a transcoded call using an S-SBC, M-SBC, and MRF:

  • S-SBC: Provides signaling services and is responsible for allocating, activating, and managing various resources (including MRF). It configures the media flow through the M-SBC and MRF.
  • M-SBC: Provides media services. The public interface is used to communicate with peers and the private interface is used to communicate with the MRF.
  • MRF: Provides transcoding services. It is configured in the private network of the SBC and uses the RFC 4117 interface to communicate with the S-SBC.

 

Figure: Signaling and Media Flow

 
