In this section:
Default values are enclosed in square brackets [ ].
% set profiles media packetServiceProfile <DEFAULT> applicationStream maxNonRtpBandwidth <0...50000>
% set profiles media packetServiceProfile <DEFAULT> qosValues applicationDscp <0...63>
show profiles media codecEntry EVS
codec evs;
packetSize <20 | 40 | 60 | 80 | 100>;
preferredRtpPayloadType <0 - 127>;
dtmf {
    relay <relay_type>;
    removeDigits <disable | enable>;
}
useCompactHeader <0 | 1>;
partialRedundancy <-1 | 0 | 2 | 3 | 5 | 7>;
EVSAMRWBIOModeSupport <0 | 1>;
supportAsymmetricBitRate <0 | 1>;
maxChannels <1 - 6>;
minBitRate <5.9 | 7.2 | 8 | 9.6 | 13.2 | 16.4 | 24.4 | 32 | 48 | 64 | 96 | 128>;
maxBitRate <5.9 | 7.2 | 8 | 9.6 | 13.2 | 16.4 | 24.4 | 32 | 48 | 64 | 96 | 128>;
% set profiles media packetServiceProfile <unique_profile_name> packetToPacketControl codecsAllowedForTranscoding otherLeg <amr | efr | evrc | evs | g711a | g711u | g722 | g726 | g729 | g7221 | g7222 | g7231 | ilbc | opus | silk | t38> thisLeg <amr | efr | evrc | evs | g711a | g711u | g722 | g726 | g729 | g7221 | g7222 | g7231 | ilbc | opus | silk | t38>
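A hypothetical example following the syntax above; the profile name and codec choices are illustrative only:
% set profiles media packetServiceProfile PSP_TRANSCODE packetToPacketControl codecsAllowedForTranscoding otherLeg evs thisLeg amr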
Use the following command to set and configure the bfd parameter in an ipInterface. You can only configure the ceName when you initially create the BFD session.
set addressContext <addressContext_name> ipInterfaceGroup <lif_group_name> ipInterface <IP_interface_name> bfd <bfd_session_name> remoteIp <remote_IP_address> remotePort <remote_port_number> requiredMinRxInterval <1-50> desiredMinTxInterval <1-50> ceName <ceName> state <disabled | enabled>
You can only modify the bfd parameters when the bfd state is disabled.
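The following example is illustrative only; the interface group, interface, session, and CE names and the remote address/port are hypothetical values applied to the syntax above:
% set addressContext default ipInterfaceGroup LIG1 ipInterface LIF1 bfd bfdSession1 remoteIp 10.10.10.2 remotePort 3784 requiredMinRxInterval 10 desiredMinTxInterval 10 ceName CE1 state enabled
% commit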
Use the following command to configure the ocspStapling flag in the ocspProfile.
set profiles security ocspProfile <profile name> ocspStapling <disabled | enabled>
Use the following command to configure the ocspResponseCachingTimer parameter in the ocspProfile.
set profiles security ocspProfile <profile name> ocspResponseCachingTimer <1-30>
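For illustration only (the profile name is hypothetical), the following commands enable OCSP stapling and set the response caching timer using the syntax above:
% set profiles security ocspProfile OCSP_PROFILE_1 ocspStapling enabled
% set profiles security ocspProfile OCSP_PROFILE_1 ocspResponseCachingTimer 10
% commit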
The SBC provisions the internalSipCauseMapProfile to attach at the trunk group level. In addition, the SBC can attach the internalSipCauseMapProfile at the signaling zone level. If the trunk group is in a disabled or out-of-service state, the SBC does not use this profile.
No new parameters are added. Instead, new values, listed in the table, are added.
Create the internalSipCauseMapProfile and map causeMap to the sipCause by executing the command:
set profiles signaling sipCauseCodeMapping internalSipCauseMapProfile <internalSipCauseMapProfile> causeMap <causemap> sipCause
Possible completions:
  <SIP Cause value for a given Internal cause>  Enter value in range of 300-606
Attach the internalSipCauseMapProfile profile to the trunk group by executing the command:
% set addressContext default zone <ZONE_INGRESS> sipTrunkGroup <SipTRunkGroup1> signaling causeCodeMapping
Possible completions:
  cpcSipCauseMappingProfile - The name of the CPC to SIP mapping profile.
  sipCpcCauseMappingProfile - The name of the SIP to CPC cause mapping profile.
  sipInternalCauseMappingProfile - The name of internal cause to SIP mapping profile.
  useNonDefaultCauseCodeforARSBlackList - When enabled uses cause code 168 for mapping profile.
% set addressContext default zone <ZONE_INGRESS> sipTrunkGroup <SipTRunkGroup> signaling causeCodeMapping sipInternalCauseMappingProfile <sipInternalCauseMappingProfile>
Attach the sipInternalCauseMappingProfile profile to the zone by executing the command:
% set addressContext default zone <ZONE_INGRESS> causeCodeMapping
Possible completions:
  sipInternalCauseMappingProfile - The name of internal cause to SIP mapping profile.
% set addressContext default zone <ZONE_INGRESS> causeCodeMapping sipInternalCauseMappingProfile <sipInternalCauseMappingProfile>
The SBC uses the existing profile internalSipCauseMapProfile to define the new mapping entries of internal errors/failures to the SIP response code.
View the mapping entries by executing the command:
show profiles signaling sipCauseCodeMapping internalSipCauseMapProfile <internalSipCauseMapProfile>
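As an illustration only, the following sequence (the profile, zone, and trunk group names are hypothetical, and <causemap> stands for a valid internal cause value) creates a mapping entry, attaches the profile to a trunk group, and verifies the result:
% set profiles signaling sipCauseCodeMapping internalSipCauseMapProfile INT_MAP_1 causeMap <causemap> sipCause 480
% set addressContext default zone ZONE_INGRESS sipTrunkGroup TG_INGRESS signaling causeCodeMapping sipInternalCauseMappingProfile INT_MAP_1
% commit
show profiles signaling sipCauseCodeMapping internalSipCauseMapProfile INT_MAP_1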
To avoid adding one rule at a time for a new group, use the CLI command "aaarule-display-generatecli" to display the applicable rules for an existing group and produce an equivalent output file containing CLI commands. The user then edits this file to define the new set of rules and sources the updated file in the CLI to assign the rules to the new custom group.
aaarule-display-generatecli
To create new rules, refer to Local Authentication - CLI.
Create the new rules for the custom group by executing the command:
aaarule-display-generatecli -h
usage: [--help|-h] [--administrator|-a] [--operator|-o] [--fieldService|-f] [--guest|-g] [--calea|-c] [--securityAuditor|-s] [--group <new group name>] --display|-cli
--help|-h: Help for usage
--administrator|-a: Prints Administrator rules
--operator|-o: Prints Operator rules
--fieldService|-f: Prints Field Service rules
--guest|-g: Prints Guest rules
--calea|-c: Prints Calea rules
--securityAuditor|-s: Prints Security Auditor rules
--cli: CLI output for any of the specified groups. At least one group must be given in argument.
--display: Display rules for any of the specified groups. At least one group must be given in argument.
--group: New group name. The rules will be applied to this group. Else the name will be derived from default group
The options allow you to display rules and/or create CLI output files for one or more groups at a time. At least one user group name is required, along with the display (--display) or CLI (--cli) option.
If the --cli option is given, the SBC stores the CLI output file in the user's home directory, where you can modify it.
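For example (the group name is hypothetical), the following invocation writes the Operator rules to a CLI output file that can then be edited and sourced for a new custom group:
aaarule-display-generatecli --operator --group customOperators --cli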
The SBC Core provides new global configurations to enable generating CDRs in Q-SBC format, to enable checksum validation of the CDR files, and to specify call duration rounding policy. A user must have admin privileges to configure these options.
The configuration options added to support generating CDRs in Q-SBC format have the following syntax.
% set oam accounting qSbcCdr admin addChecksum < disabled | enabled > callDurationRoundUp <enabled | disabled> checksumKey <key> state <disabled | enabled>
Parameter | Length/range | Description |
---|---|---|
addChecksum | n/a | Enable this flag to add checksum validation to the Q-SBC format CDR file. When enabled, the SBC inserts a file header into each CDR log file and then executes the HMAC-MD5 hashing algorithm to generate a checksum for the file, using an operator-configured, private shared key. The SBC converts the resulting binary output from the algorithm to a text format that is consistent with the rest of the CDR file and appends it as the last line in the CDR log file. The options are disabled and enabled. Note: To enable this option, you must also configure a checksumKey. |
callDurationRoundUp | n/a | Enable this flag to have the SBC round up to the next second in Q-SBC CDR call duration fields 3 and 6 if the call duration includes any part of a second. When disabled, the SBC rounds down if the partial second duration is less than 500 milliseconds. The options are disabled and enabled. |
checksumKey | 16 to 64 characters | Specifies the checksum key to use when generating the CDR file checksum. The key value can contain upper/lower case characters and digits only. |
state | n/a | When enabled, the SBC generates CDR files in Q-SBC format. When disabled, the CDR file format is the standard SBC Core (former Sonus) CDR format. The options are disabled and enabled. If the SBC is configured to generate intermediate CDRs, a switch of CDR formats to either format type will generate an intermediate CDR for each active call. Ribbon recommends that you change the state value prior to deployment or in a maintenance window. |
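As an illustration only (the key value is a hypothetical example; choose your own 16- to 64-character key of letters and digits), the following commands enable Q-SBC format CDRs with checksum validation and call duration round-up:
% set oam accounting qSbcCdr admin checksumKey ExampleQsbcKey2019abc
% set oam accounting qSbcCdr admin addChecksum enabled callDurationRoundUp enabled state enabled
% commit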
SWe traffic profiles support four parameters when creating custom SWe traffic profiles. These parameters provide additional options to characterize the anticipated call mix for a SWe system.
% set system sweTrafficProfiles <profile name> mediaCostFactor <media factor> rxPPSFactor <Rx PPS factor> sigCostFactor <signaling factor> txPPSFactor <Tx PPS factor>
The following table describes the new parameters added to SWe traffic profiles.
Parameter | Length/Range | Default | Description |
---|---|---|---|
mediaCostFactor | 0.0001 to 100 | 1.0 | Use this parameter to specify a media cost factor to use during capacity estimation. This factor affects the media plane estimation, such as crypto session and pass-through session estimation. |
sigCostFactor | 0.0001 to 100 | 1.0 | Use this parameter to specify a signaling cost factor to use during capacity estimation. This factor affects the signaling plane estimation, such as CPS estimation. |
rxPPSFactor | 1.0 to 100 | 1.0 | Use this parameter to specify a received (rx) PPS factor to use during capacity estimation. |
txPPSFactor | 1.0 to 100 | 1.0 | Use this parameter to specify a transmitted (tx) PPS factor to use during capacity estimation. Use the Rx/Tx parameters for scenarios such as SIPREC where the received/transmitted PPS may not be the same (asymmetric). |
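A minimal illustrative configuration (the profile name and factor values are hypothetical) using the syntax above:
% set system sweTrafficProfiles customTraffic1 mediaCostFactor 1.5 sigCostFactor 1.2 rxPPSFactor 2.0 txPPSFactor 1.0
% commit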
For SBC SWe cluster deployments operating in OAM configuration mode, new command parameters provide additional options for managing configuration changes when using the CLI.
The request system admin
command supports three new parameters to manage configuration changes on the OAM node and a new show
utility that outputs configuration change information in the form of transaction logs.
The following statements show the syntax for the new request
command options for managing configuration on the OAM node.
> request system admin <SYSTEM NAME> discardCandidateConfiguration
> request system admin <SYSTEM NAME> restoreRevision revision <revision number>
> request system admin <SYSTEM NAME> viewConfigurationChanges revision <revision number>
The following statements show the syntax for the new show utils
command options for listing the candidate configuration or changes for a specific revision.
> show utils transactionLog revision <revision number>
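For example (the system name and revision number are illustrative only), to review a revision's transaction log and configuration changes and then discard the pending candidate configuration on the OAM node:
> show utils transactionLog revision 5
> request system admin vsbcSystem viewConfigurationChanges revision 5
> request system admin vsbcSystem discardCandidateConfiguration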
The parameter callQueuing is added to global. For detailed information, refer to Call Queuing - Global - CLI.
set global callQueuing queueLength <1-4096>
The parameter hpcCallLimits is added to sipCacProfile. For detailed information, refer to SIP CAC Profile - CLI.
set profiles sipCacProfile <profile_name> hpcCallLimits maxCalls <unlimited | blockAll | 1-1000> maxIngressRate <unlimited | 1-2000> ingressRateInterval <1-30> bucketSize <1-4000>
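An illustrative example (the profile name and limit values are hypothetical) using the syntax above:
% set profiles sipCacProfile CAC_PROFILE_1 hpcCallLimits maxCalls 200 maxIngressRate 100 ingressRateInterval 10 bucketSize 400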
A new profile, dscpProfile, is introduced. For detailed information, refer to DSCP Profile - CLI.
set profiles services dscpProfile <dscp_profile_name> hpcDscpValue <0-63> dscpValue <0-63> state <disabled | enabled>
The profile dscpProfile is also added under policyServer globalConfig. For detailed information, refer to Policy Server - CLI.
set system policyServer globalConfig dscpProfile <profile_name>
Many new parameters are added under hpcCallProfile, as described below. For detailed information, refer to HPC Call Profile - CLI.
set profiles services hpcCallProfile <hpcCallProfile name> dscp <egress | ingress> useRecvdValue <disabled | enabled> getsStrings accessNumber an <3-10 digits> featureCode fc <3-10 characters> numberTranslation nt <3-10 digits> queue length <1-256> state <disabled | enabled> timeout <1-90> rph egress nonEtsWps <dontInclude | include> validEtsWps <dontInclude | include> ingress invalidEtsWps <ignore | reject> nonEtsWps <accept | ignore> validEtsWps <accept | ignore | reject>
The following new statistics are added under show <table | status> global <callCountCurrentStatistics | callCountIntervalStatistics>:
The following new statistics are added under show status addressContext <acName> zone <zoneName> sipPeerCacStatus <ipPeerName> <ipAddress> <port>:
The following statistics are added under show status addressContext <acName> zone <zoneName> <sipIntervalStatistics | sipCurrentStatistics>:
The following changes are applicable for SBC SWe on Openstack and KVM, particularly GPU-ISBC and GPU-TSBC:
A new codec G7112G711 is introduced in the list of supported codecs in sweCodecMixProfile. The percentage value of this parameter indicates the proportion of the total number of sessions designated for pure G711 transcoding (G711-G711). This is applicable for Hybrid Transcoding as well as pure CPU transcoding solutions.
The percentage value for G7112G711 is used for estimating transcode and bandwidth cost.
The percentage value for G711 is not used for estimating transcode cost, but is used for bandwidth calculation of PXPAD scenarios.
The percentage value for G711 cannot be greater than the percentage value of non-G711 codecs.
The sum of all codec percentages is 100.
DSP-based Tone detection is supported only on GPU-ISBC profile.
The existing codec T38 is added as a supported codec in sweCodecMixProfile.
With the enhancements mentioned above, the following CPU and GPU codecs are supported for SBC SWe on Openstack and KVM:
GPU + CPU codecs: AMR-NB, AMR-WB, EVRC, EVRCB, G729, G722, G711
CPU only codecs: G723, G726, G7221, ILBC, OPUS, SILK_8, SILK_16, EVS, G7112G711, T38
Note that you can provision CPU codecs in the codec profile and associate it with the GPU traffic profile; however, you must provision at least one GPU codec in the sweCodecMixProfile.
sweCodecMixProfile: p80 and p100 are added to the list of supported packetization time values (ptime values), indicating 80ms and 100ms of packetization time respectively.
For SBC SWe Cloud 8.1, ptime values p80 and p100 are not tested.
The percentage parameter now supports decimal values in the range 0.00-100.00. Prior to the enhancement, only integers were supported.
In sweTrafficProfiles, a new parameter useGPUForTranscoding is added.
For the enhancements mentioned above, the following syntax shows a valid configuration:
sweCodecMixProfile
% set system sweCodecMixProfile <profile_name> < codec: all_currently_applicable_codecs | G7112G711 | T38 > < ptime value: p10 | p20 | p30 | p40 | p60 | p80 | p100 > percentage < percentage value range: 0.00 - 100.00 >
For SBC SWe Cloud 8.1, ptime values p80 and p100 are not tested.
sweTrafficProfiles
% set system sweTrafficProfiles <profile_name> transcodePercent <percent> transcodingCodecProfile <profile_name> useGPUForTranscoding <false | true>
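The following is an illustrative sketch only; the profile names, codec/ptime selections, and percentages are hypothetical, and the exact codec tokens accepted by your build may differ:
% set system sweCodecMixProfile codecMix1 G7112G711 p20 percentage 60.00
% set system sweCodecMixProfile codecMix1 T38 p20 percentage 40.00
% set system sweTrafficProfiles gpuTraffic1 transcodePercent 50 transcodingCodecProfile codecMix1 useGPUForTranscoding true
% commit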
The CLI changes are as follows:
dsrProtocolVersion is added to callDataChannel.
groupTwoThousand is added as a vendorId.
% set addressContext <address_context> intercept callDataChannel <CDC_name> mediationServer <server_1> ... <server_16> dsrProtocolVersion <protocol_version> vendorId <existing vendor ids | groupTwoThousand> mediaIpInterfaceGroupName <media_Ip_Interface_Group_Name> ipInterfaceGroupName <ip_Interface_Group_Name>
Parameter | Length/Range | Default | Description | M/O |
---|---|---|---|---|
dsrProtocolVersion | N/A | 0 | Signifies the intercepted X2 signaling protocol version towards the mediation servers. The default value 0 maintains backward compatibility with SBC Core 8.0 or earlier. | O |
The flag sipRecLegsCapture is introduced under the callFilter parameter of the global callTrace object.
% set global callTrace callFilter <callFilterName> sipRecLegsCapture <disable | enable>
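For example (the call filter name is hypothetical):
% set global callTrace callFilter TRACE_FILTER_1 sipRecLegsCapture enable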
The CLI changes are as follows:
The show command described below displays signaling and media statistics information for a maximum of 16 mediation servers, as configured by an authorized CALEA user. Note that no new statistics are added; the existing statistics display for up to 16 configured mediation servers.
dsrProtocolVersion is added to callDataChannel.
groupTwoThousand is added as a vendorId.
For signaling status:
> show status addressContext <address_context> intercept callDataChannel <CDC_name> mediationServerSignalingStatus <mediation_server_1> ... <mediation_server_16>
For media status:
> show status addressContext <address_context> intercept callDataChannel <CDC_name> mediationServerMediaStatus <mediation_server_1> ... <mediation_server_16>
For Call Data Channel, the show command displays the parameter dsrProtocolVersion:
> show addressContext <address_context> intercept callDataChannel <CDC_name>
interceptStandard threeGpp;
vendorId groupTwoThousand;
ipInterfaceGroupName LIF2;
liPolDipForRegdOodMsg enabled;
rtcpInterception enabled;
mediaIpInterfaceGroupName LIF2;
dsrProtocolVersion 1;
Parameter | Length/Range | Default | Description | M/O |
---|---|---|---|---|
dsrProtocolVersion | N/A | 0 | Signifies the intercepted X2 signaling protocol version towards the mediation servers. The default value 0 maintains backward compatibility with SBC Core 8.0 or earlier. | O |
The CLI of D-SBC (S-SBC and M-SBC) is enhanced with commands to configure Default LI and IMS LI. This enhancement is in addition to the existing support for PSCI LI.
The commands and the parameters are not new; they are already available on hardware platforms and/or on I-SBC for virtual platforms. For more information, refer to Intercept - CLI.
As a prerequisite to configure LI (any supported variety) on D-SBC, ensure that the Call Data Channel (CDC) and the Mediation Server are enabled and running.
To provision IMS LI on S-SBC, configure the Signaling (X2) interface using the two commands shown below:
Perform a commit after each step.
% set addressContext <address context name> intercept callDataChannel <CDC Name> interceptStandard <IMS LI supported Value> vendorId <IMS LI Supported Values> ipInterfaceGroupName <LIF Group information>
% set addressContext <address context name> intercept callDataChannel <CDC Name> mediationServer <mediation server name> signaling ipAddress <ip address of mediation server> portNumber <port number of mediation server> protocolType <tcp | udp>
To configure IMS LI on M-SBC, configure the Media (X3) interface using the two commands shown below:
Perform a commit after each step.
% set addressContext <address context name> intercept callDataChannel <CDC Name> interceptStandard <IMS LI flavor> vendorId <IMS LI flavor> mediaIpInterfaceGroupName <LIG1 from where data is sent>
% set addressContext <address context name> intercept callDataChannel <CDC Name> mediationServer <mediation server name> media <transport type> ipAddress <ip address of mediation server> portNumber <port number of mediation server>
To configure Default LI on S-SBC:
% set addressContext <address context name> intercept callDataChannel <CDC Name> interceptStandard <Default LI supported Value> vendorId <Default LI Supported Values> ipInterfaceGroupName <LIG1 from where data is sent> priIpAddress <IPv4 address> priPort <port> priState <disabled | enabled> priMode <active | outofservice | standby>
In the above example, you can also configure secondary IP address, by including the following parameters:
secIpAddress <IP_Address>
secMode <active | outofservice | standby>
secState <disabled | enabled>
To configure Default LI on M-SBC:
% set addressContext <address context name> intercept callDataChannel <CDC Name> interceptStandard <Default LI supported Value> vendorId <Default LI Supported Values>
A new profile, dscpProfile, is added to services. The parameters associated with dscpProfile are as follows:
dscpValue
hpcDscpValue
state
For DSCP marking of DIAMETER+ packets, associate the DSCP profile with the policy server (policyServer globalConfig).
To configure DSCP values for HPC and non-HPC calls, use the following syntax:
% set profiles services dscpProfile <dscp_profile_name> dscpValue <dscp_value> hpcDscpValue <hpc_dscp_value> state <disable | enable>
To associate the DSCP profile with the policy server for DSCP marking of DIAMETER+ packets, use the following syntax:
% set system policyServer globalConfig dscpProfile <dscp_profile_name>
Configuration Examples
To configure DSCP values for HPC and non-HPC calls, refer to the following example:
% set profiles services dscpProfile test_dscp_profile dscpValue 36 hpcDscpValue 18 state enabled
To associate the DSCP profile with the policy server for DSCP marking of DIAMETER+ packets, refer to the following example:
% set system policyServer globalConfig dscpProfile test_dscp_profile
To reserve a part of the Media Port Range (MPR) and LIF bandwidth for GETS/HPC calls on media-bearing SBC platforms, use the following command syntax:
% set system media mediaPortRange highPriorityPortRangeLocation <bottom | top> highPriorityPortRangeSize <0-25>
The percentage specified as highPriorityPortRangeSize is used as the High Priority reserve for both MPR and LIF bandwidth.
For more information on Media Port Range, refer to Media System - CLI.
Starting with SBC Core 8.1, the SBC supports configuring the existing mediaPortRange parameters highPriorityPortRangeLocation and highPriorityPortRangeSize across all supported hardware, software, and cloud platforms. Prior to this release, the parameters were restricted to hardware platforms.
Best Practice
The configuration for the parameters highPriorityPortRangeLocation and highPriorityPortRangeSize defines the High Priority Media Port Range (HPMPR) as a subset of the overall MPR. The SBC uses the configuration to quickly identify UDP packets (both media and non-media) arriving within the overall MPR, and prioritizes them while processing ingress UDP packets. For GETS/HPC applications, Ribbon recommends reserving 10% of the MPR as HPMPR. If additional SIP Signaling Ports (besides the default port 5060), and/or other Control UDP ports are within the overall MPR, Ribbon recommends configuring them within the HPMPR. Such configuration ensures that during congestion, they are prioritized while processing ingress packets.
For example, if the overall MPR is defined as 1024-65535, and the High Priority Port Range is 10% (starting with the lower limit of MPR), then the HPMPR is 1024-7475. Ribbon recommends configuring additional SIP Signaling and other Control UDP ports within the range 1024-7475. In case of congestion, such configuration ensures that while processing ingress packets, packets received at ports within the range configured for HPMPR are prioritized. Normal calls are allocated local UDP ports ranging from 7476-65535, and GETS/HPC calls are allocated local UDP ports ranging from 1024-7475.
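A configuration matching this example, reserving 10% of the MPR at the bottom of the range, could look like the following (the values are illustrative):
% set system media mediaPortRange highPriorityPortRangeLocation bottom highPriorityPortRangeSize 10
% commit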
The SBC CLI is enhanced with the following:
The queue object, with its parameters and flags, is added to the profile services hpcCallProfile:
queue
length
timeout
state
The callQueuing object, with its parameter queueLength, is added to global:
callQueuing
queueLength
The commands show status global callCountCurrentStatistics and show status global callCountIntervalStatistics display the following new statistics related to this feature:
hpcQueueAttempts
hpcQueueOverflows
hpcQueueAbandons
hpcQueueTimeouts
hpcCallProfile queue
% set profiles services hpcCallProfile <hpc_call_profile> queue length <queue_length_value> timeout <queue_timeout_value> state <disable | enable>
global callQueuing
% set global callQueuing queueLength <queue_length_value>
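An illustrative configuration (the profile name and values are hypothetical) using the syntax above:
% set profiles services hpcCallProfile HPC_CALL_PROFILE_1 queue length 64 timeout 30 state enable
% set global callQueuing queueLength 256
% commit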
With the CLI enhancements, the command show table profiles services hpcCallProfile <hpc_call_profile> displays the objects dscp and queue. For more information, refer to Show Table Profiles - Services - HPC Call Profile.
The parameters and flags for the new object queue under hpcCallProfile are as follows:
The parameters for the new object callQueuing under global are as follows:
Enables, disables, and configures the SBC for use with the SIP Aware Front End Load Balancer (SLB). See SIP-Aware Front-End Load Balancer for a feature overview.
% set system slb usage <disabled | enabled> commInterface <> slbIpAddress <>
% set system addressContext <zone> <slbZoneName
Parameter | Length/Range | Default | Description | M/O |
---|---|---|---|---|
slb usage | n/a | disabled | Enable this flag to allow the SBC to use SLB. | O |
commInterface | | | | |
slbIpAddress | | | | |
The new CLI lets you configure the spikeAction and bwOverloadAlarmTimer parameters under system media policing:
% set system media policing spikeAction <alarm | alarmAndDiscard | none> bwOverloadAlarmTimer <30...3600 seconds>
Parameter | Length/Range | Default | Description |
---|---|---|---|
spikeAction | n/a | none | Determines the action the SBC takes upon finding non-conforming packets. |
bwOverloadAlarmTimer | 30 - 3600 | 300 | Determines the duration after which the SBC checks for non-conforming packets. (The SBC raises an alarm if it finds non-conforming packets.) |
% set system media policing spikeAction none bwOverloadAlarmTimer 600
To disable alarm/trap from being generated in "spikeAction = alarm" or "alarmAndDiscard" mode, use the following CLI commands:
% set oam traps admin sonusSbxMediaOverFlowPacketsNotification state disable
% set oam traps admin sonusSbxMediaOverFlowPacketsCleared state disable
Only the clusterComm parameters under system clusterAdmin remain user-configurable, and these options apply only to M-SBC instances within a distributed SBC deployment. The clusterAdmin options listed below are no longer visible in the CLI.
% set system clusterAdmin
  dataComm
  discoveryComm
  seedFqdn
  seedIpAddress
  state