This release note (PN: 550-08477) describes new features, the latest hardware and software requirements, known limitations, and other pertinent release information for the latest release of SBC Core.
Please note that all Ribbon bugs reported by customers against a given software release are fixed in the latest release on that software release branch.
To view and download the latest End of Product Sale (EoPS) and other End Of Life (EOL) notices, navigate to the Resource Library on the corporate website (https://ribboncommunications.com/company/get-help/resource-library).
The SBC Core 08.01.xx documentation is located at the following Wiki space: SBC Core Documentation.
Ribbon Release Notes are protected under the copyright laws of the United States of America. This work contains proprietary information of Ribbon Communications, Westford, MA-01886, USA. Use, disclosure, or reproduction in any form is strictly prohibited without prior authorization from Ribbon Communications.
The following Ribbon Bulletins are referenced in this release note:
Bulletin-18-00028529: The System Security Intrusion Detection AIDE Reports False Positive Alarms
To view/download Ribbon bulletins, do the following:
For problems or questions, contact Ribbon Support through telephone or fax:
Worldwide Voice: 1 (978) 614-8589
USA Toll-free: 1 (888) 391-3434
Worldwide Fax: 1 (978) 614-8609
The SBC Core platforms address the next-generation needs of SIP communications by delivering media transcoding, robust security, and advanced call routing in high-performance 2RU and 5RU form-factor devices, enabling service providers and enterprises to quickly and securely enhance their networks by implementing services such as SIP trunking, secure Unified Communications, and Voice over IP (VoIP).
For more product information, refer to the section About SBC Core in the main documentation space.
The SBC Core software interoperates with the following:
NetScore maintains a list of remote host keys for all nodes from which it collects data. If NetScore is deployed in your network, connectivity to the SBC will be lost any time the SBC software is reinstalled because the SBC’s host key is updated during the install. Refer to NetScore Release Notes for steps needed to reconnect to the SBC.
When using H.323-SIP and SIP-H.323 call flows, an additional Re-INVITE/UPDATE may be generated towards the SIP side. To suppress this, enable the IP Signaling Profile (IPSP) flag Minimize Relaying Of Media Changes From Other Call Leg on the SIP side.
H.323 is not supported on SBC SWe cloud deployments.
When upgrading your network, ensure you upgrade each product to the most current release to take advantage of the latest features, enhancements, and fixes.
For complete interoperability details between various Ribbon products, including backwards compatibility, refer to Ribbon Product Compatibilities.
Refer to SBC 5000-7000-SWe Interoperability Matrices for the latest and minimum compatible product versions supporting the 08.01.00R000 release.
To instantiate the SBC instances, the following templates can be used:
Example template files are packaged together in .tar.gz and .md5 files separate from the SBC Core application installation and upgrade files:
The system hosting the SBC SWe Cloud must meet the below requirements for OpenStack:
Configuration requirements:
- Processor: Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading). Ribbon recommends Westmere (or newer) processors for better SRTP performance; these processors have the AES-NI instruction set for performing cryptographic operations in hardware.
- RAM: Minimum 24 GiB
- Hard Disk: Minimum 100 GB
- Network Interface Cards (NICs): Minimum 4 NICs. Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuring as SR-IOV and DirectPath I/O pass-through devices. The PKT ports must be 10 Gbps SR-IOV enabled ports. 6 NICs are required to support PKT port redundancy.
The system hosting the SBC SWe must meet the following requirements to achieve the performance targets listed:
All NIC ports must come from the same NUMA node from which the M-SBC SWe instance is hosted.
The SBC SWe supports the following OpenStack environments. The SBC SWe was tested on OpenStack Queens with RHOSP 13 and RHEL 7.5.
OpenStack Requirements
The following table lists the server hardware requirements.
- Processor: Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading). Ribbon recommends using Westmere (or newer) processors for better SRTP performance; these processors have the AES-NI instruction set for performing cryptographic operations in hardware. The supported CPU Family number is 6, and the CPU Model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information.
- RAM: Minimum 24 GB
- Hard Disk: Minimum 500 GB
- Network Interface Cards (NICs): Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. The Intel I350, x540, x550, and 82599 Ethernet adapters are supported for configuring as SR-IOV and DirectPath I/O pass-through devices.
- Ports: Number of ports allowed:
The SBC SWe software only runs on platforms using Intel processors. Platforms using AMD processors are not supported. The following table lists the server hardware requirements:
- Processor: Intel Xeon processors (Nehalem micro-architecture or above) with 6 cores and above (processors should support hyper-threading). Ribbon recommends using Westmere (or newer) processors for better SRTP performance; these processors have the AES-NI instruction set for performing cryptographic operations in hardware. The supported CPU Family number is 6, and the CPU Model number must be newer than 26. Refer to the Intel Architecture and Processor Identification document for more information. ESXi 6.5 and later releases require approximately 2 physical cores to be set aside for hypervisor functionality; plan the number of VMs hosted on a server accordingly.
- RAM: Minimum 24 GB
- Hard Disk: Minimum 500 GB
- Network Interface Cards (NICs): Minimum 4 NICs, if physical NIC redundancy is not required. Make sure the NICs have multi-queue support, which enhances network performance by allowing RX and TX queues to scale with the number of CPUs on multi-processor systems. Supported adapters: Intel I350, x540, x550, x710, and 82599; Mellanox ConnectX-4; and Mellanox ConnectX-5.
- Ports: Number of ports allowed:
The following tarball file is required to use the IaC environment to deploy SWe N:1 deployments on VMware:
The environment in which you place and expand the IaC tarball must include:
For more information on IaC, refer to Using the Ribbon IaC Environment to Deploy SBC SWe on VMware.
The following SBC 5000 series (51x0/52x0), SBC 5400, and SBC 7000 software and firmware versions are required for this release. For the 5xx0, the BIOS can be installed during application installation, whereas for the 5400 and 7000 the BIOS is included in the firmware package and is installed during the firmware upgrade.
The firmware package of SBC 5400 and 7000 series includes BMC, BIOS, and other binaries. The firmware is upgraded from the BMC.
Use the EMA to verify the currently installed software and firmware versions.
Log on to the EMA, and from the main screen navigate to Monitoring > Dashboard > System and Software Info.
The following software release bundle is available for download from the Customer Portal:
Download the appropriate software packages for your desired configuration from the Customer Portal (https://ribboncommunications.com/services/ribbon-support-portal-login) to your PC:
firmware-5XX0-V03.20.00-R000.img
firmware-5XX0-V03.20.00-R000.img.md5
bmc5X00_v3.20.0-R0.rom.md5sum
bmc5X00_v3.20.0-R0.rom
Execute the Method Of Procedure (MOP) for upgrading the FPGA image of an SBC 7000 DSP-LC card only when the SBC 7000 DSP-LC FPGA version is 0x14. The MOP can be applied at any application version, with the only restriction that the BMC firmware version is at least 1.25.0. However, if the SBC application is running version V05.01.00R000 or higher while the SBC 7000 DSP-LC FPGA version is 0x14, the DSPs are disabled and transcoding and transrating calls fail. Therefore, if the FPGA version is 0x14, upgrade the SBC 7000 DSP-LC FPGA before upgrading the SBC to 5.1.0. The MOP can also be applied if the application version is higher than 5.1.0. Click Here to view the 550-06210_DSP-LC_FPGA_Upgrade_MOP.
The ConnexIP Operating System installation package for SBC Core:
Once the ConnexIP ISO procedure is completed, the SBC application package is automatically uploaded to SBC platforms.
The SBC Application installation and upgrade package for SBC Core:
For detailed information on installation and upgrade procedures, refer to SBC Core Software Installation and Upgrade Guide.
These files are for SBC SWe deployments in the OpenStack cloud using VNFM.
For VNFM deployment, the VNF Descriptor (VNFD) file is provided in a Cloud Service Archive (CSAR) package for the type of SBC cluster being deployed. VNFs are independent, and CSAR definitions are imported into the VNFM through an onboarding mechanism. A procedure is provided for producing the required CSAR variant for different personalities (S-SBC, M-SBC) and different interface types (virtio, SR-IOV).
Files required for CSAR creation:
For detailed information on installation and upgrade procedures, refer to SBC Core Software Installation and Upgrade Guide.
For details on CSAR creation, refer to Creating a CSAR Package File.
An LSWU on an SBC 7000 should only be performed when the total number of active calls on the system is below 18,000. If this criterion is not met, a double failure may occur during the upgrade, losing all active calls. If such a failure occurs, both the active and standby SBC services go down. Contact Ribbon Support immediately.
Release 8.0 requires additional user account security practices for SBC SWe deployments in Openstack cloud environments. During upgrade of SBC SWe cloud instances deployed using Heat templates, you must use a template that includes SSH keys or passwords for the admin and linuxadmin accounts. The example Heat templates have been updated to include information on how to specify this type of data in the userdata section of a template.
Once the installation or upgrade completes on the SBC 51x0 and SBC SWe platforms, the copy of the installation package is automatically removed from the system.
As an SBC Core password security enhancement, user passwords automatically expire after upgrading to 8.0.x. As a result, users are required to change their passwords upon initial login immediately following the upgrade.
Customers using network licensing mode are converted to node-locked mode (formerly legacy mode) after upgrading to the SBC 8.0.0 release.
The SBC 8.0 5xx0 and 7000 platforms may exhibit a 7% degradation of CPU performance relative to earlier releases. This is attributable to the Spectre/Meltdown security patches.
For the procedure specific to SBC SWe upgrades on KVM Hypervisor or VMware to take advantage of performance improvements due to hyper-threading, refer to MOP to increase vCPUs Prior to Upgrading SBC SWe on VMware or KVM Hypervisor.
In the case of a Live Software Upgrade (LSWU) from 6.0.0R000/6.0.0R001/6.0.0F001/6.0.0F002 to 8.0, the "Perform Pre-Upgrade Checks" action from the PM is not supported. Contact Ribbon Support.
The number of rules across SMM profiles in a system is limited to 2000, and the number of actions across profiles in a system is limited to 10000.
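The aggregate limits above can be sketched as a simple pre-upgrade check. This is an illustrative calculation only, not an SBC API; the per-profile counts here are hypothetical.

```python
# Hypothetical pre-LSWU sanity check: the SBC limits SMM rules to 2000 and
# SMM actions to 10000 across all profiles in a system.
MAX_RULES = 2000
MAX_ACTIONS = 10000

def smm_within_limits(profiles):
    """profiles: iterable of (rule_count, action_count) tuples, one per SMM profile."""
    total_rules = sum(r for r, _ in profiles)
    total_actions = sum(a for _, a in profiles)
    return total_rules <= MAX_RULES and total_actions <= MAX_ACTIONS

print(smm_within_limits([(500, 2000), (1500, 8000)]))   # True: exactly at the limits
print(smm_within_limits([(1200, 1000), (900, 1000)]))   # False: 2100 rules exceed 2000
```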
Ensure the above conditions are met before LSWU.
In NFV environments, the method used for upgrades involves rebuilding the instance, which requires additional disk space on the host. The minimum disk space needed for this operation is listed in the table below.
The SBC 51xx and 52xx systems require 24GB of RAM to run 6.x code or higher.
The SBC SWe software enforces that I-SBC instances run with only a single vNUMA node in order to achieve deterministic performance. For an SBC SWe VM with more than 8 vCPUs hosted on a dual-socket physical server running VMware ESXi, follow the steps below to correct the vNUMA topology before upgrading to the latest SBC SWe software:
vsish -e get /net/pNics/<PKT port name - vmnicX>/properties | grep "NUMA"
If any of the above settings requires modification, follow the steps below on SWe SBC HA system:
numa.autosize.once = FALSE
numa.nodeAffinity = 0 or 1 (based on PKT port NIC affinity)
On ESXi 6.5 and later releases, use the vSphere web client to add the above parameters under Edit Settings > VM Options > Configuration Parameters > Add Parameters.
On ESXi 6.0 and earlier releases, use the vSphere client to add them under Edit > Advanced > General > Configuration Parameters > Add Rows.
For more information, refer to:
Prior to performing an upgrade to release 08.00.0R000, usernames that do not conform to the new SBC user-naming rules must be removed to prevent upgrade failure. Upgrade can proceed successfully after removing all invalid usernames. The following user-naming rules apply:
Usernames can contain a maximum of 23 characters.
The following names are not allowed:
tty disk kmem dialout fax voice cdrom floppy tape sudo audio dip src utmp video sasl plugdev staff users nogroup i2c dba operator
Note: Any CLI usernames consisting of digits only or not conforming to new user naming rules will be removed after performing a restore config in release 8.0.0R000.
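The naming checks above can be sketched as follows. This is an assumed subset of the rules listed in this section (maximum length, reserved names, digits-only names); the full SBC rule set may include additional constraints not shown here.

```python
# Sketch of the SBC user-naming checks described above (assumed subset).
RESERVED = {
    "tty", "disk", "kmem", "dialout", "fax", "voice", "cdrom", "floppy",
    "tape", "sudo", "audio", "dip", "src", "utmp", "video", "sasl",
    "plugdev", "staff", "users", "nogroup", "i2c", "dba", "operator",
}

def username_ok(name):
    if len(name) > 23:       # usernames can contain a maximum of 23 characters
        return False
    if name in RESERVED:     # disallowed names
        return False
    if name.isdigit():       # digits-only usernames are removed on restore
        return False
    return True

print(username_ok("admin2"))     # True
print(username_ok("operator"))   # False (reserved name)
print(username_ok("12345"))      # False (digits only)
```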
Prior to performing an upgrade to the 8.0 release, the dnsGroups with type mgmt must be specified/updated with the "interface" field. The steps are included in WBA "W-17-00022847". To view the WBA, log on to the Support Portal and click the Bulletins link from the menu bar. Enter the bulletin number (last eight numbers) in the search field and press Return.
Prior to performing an upgrade to the 8.0 release, remove any duplicate trunk groups or zones. The steps are included in WBA "W-17-00022689". To view the WBA, log on to the Support Portal and click the Bulletins link from the menu bar. Enter the bulletin number (last eight numbers) in the search field and press Return.
If this MOP is not run, the LSWU process may fail because of duplicate trunk group or zone names.
If you are upgrading from any SBC version with ePSX configuration to the 08.01.00R000 release, execute the Method of Procedure, MOP to Re-configure SBC (with ePSX) to External PSX Prior to an Upgrade to 06.00.00R000 Release prior to performing an upgrade. For a list of supported LSWU paths, refer to SBC Core 08.01.00R000 Release Notes.
When upgrading SBC Core to release 5.0.0 (and above) from a pre-4.2.4 release, complete the "Action to take" immediately after the upgrade if either condition that follows is applicable:
Action to take: On the SIP trunk group facing Broadsoft (or other feature server), set the SIP trunk group signaling flag honorMaddrParam to enabled on the trunk group(s) requiring maddr handling. The default is disabled.
set addressContext <addressContext name> zone <zone name> sipTrunkGroup <TG name> signaling honorMaddrParam <disabled | enabled>
See the following pages for configuration details:
Starting with the 4.2.4R0 release, CPU resource allocation requirements for SBC SWe VMs are strictly enforced, contrary to previous releases. Before upgrading, review and verify these VM settings (including co-hosted VMs) against the documented "VM Configuration Recommendations" on the For VMware page in the Hardware and Software Requirements section. If you encounter a problem, correct the CPU reservation settings as specified in step 6 of the "Adjust Resource Allocations" procedure on Creating a New SBC SWe VM Instance with VMXNET3. Set the CPU reservation to "number of vCPUs assigned to the VM * physical processor CPU frequency". If the VM uses the same number of vCPUs as the number of physical processors on the server, this reservation may not be possible; in this case, reduce the number of vCPUs assigned to the VM by one and set the CPU reservation to the appropriate value.
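The reservation rule above is simple arithmetic; a minimal sketch, with illustrative core counts and clock frequency:

```python
# CPU reservation (MHz) = number of vCPUs assigned to the VM * physical
# processor core frequency (MHz). If the VM uses as many vCPUs as the host
# has physical cores, drop one vCPU as the text advises, then reserve.
def cpu_reservation_mhz(vcpus, physical_cores, core_mhz):
    if vcpus >= physical_cores:
        vcpus = physical_cores - 1   # full reservation not possible; reduce by one
    return vcpus, vcpus * core_mhz

print(cpu_reservation_mhz(10, 12, 2600))  # (10, 26000)
print(cpu_reservation_mhz(12, 12, 2600))  # (11, 28600)
```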
When using the show table system serverSoftwareUpgradeStatus command during the upgrade, the Standby server's LSWU status always displays "Upgrading" even though the upgrade may have failed due to host checker validation. To check whether host validation failed for the Standby, look for the "HostCheck Validation Failed" message in the upgrade.out log.
As a prerequisite for SWe LSWU/upgrade, disable the Call Trace feature prior to performing the LSWU/upgrade and re-enable it once the LSWU/upgrade is completed.
Perform the following procedure on the Standby to check for the "Hostcheck Validation Failed" message in the upgrade.out log:
1. Check /opt/sonus/staging/upgrade.out (this log shows the Hostcheck Validation Failed error).
2. Run show table system serverSoftwareUpgradeStatus to confirm the successful upgrade.

Prior to performing a Live Software Upgrade (LSWU), verify that the system and the databases are in sync. The steps are included in WBA "Warning-14-00020748". To view the WBA, log on to the Support Portal and click the Bulletins link from the menu bar. Enter the bulletin number (last eight numbers) in the search field and press Return.
The SBC 8.0 release skips the SRV query if the flag in a DNS NAPTR response from the DNS server indicates to proceed with "A" record query as per RFC 2915/3403. This is a change in behavior from previous releases, where the SBC performed SRV queries irrespective of the "flag" setting returned by DNS Server. If you use DNS NAPTR/SRV/A record query from SBC to determine peer transport address, ensure the DNS Server is configured to return ‘S’ flag to invoke an SRV query.
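The flag-driven behavior described above can be sketched as a small decision function (illustrative names; per RFC 2915/3403, an 'S' flag selects an SRV query and an 'A' flag selects an A-record query):

```python
# Release 8.0 NAPTR flag handling described above: the flag in the NAPTR
# response decides the next query; earlier releases always performed SRV.
def next_query_from_naptr(flag):
    flag = flag.upper()
    if flag == "S":
        return "SRV"   # proceed with an SRV query
    if flag == "A":
        return "A"     # skip SRV, proceed directly to an A-record query
    return "unsupported"  # other flags (e.g. 'U') are outside this sketch

print(next_query_from_naptr("s"))  # SRV
print(next_query_from_naptr("A"))  # A
```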
In this release, LSWU infrastructure is added to the Platform Manager (PM), providing the ability to perform LSWU upgrades to later releases using the PM. However, this feature is not currently supported in 4.2.x releases and should not be used at this time.
Please read the following information and take necessary actions before starting your upgrade to this release.
Since release 4.1.4, the cryptographic key pair used to sign and verify the package has been changed to enhance security. When installing or upgrading from any 4.0.x release, any pre-4.1.4 release (4.1.3 and earlier), or any pre-4.2.3 release (4.2.2R00x and earlier), do one of the following, depending upon your upgrade method:
LSWU through CLI: Skip the integrity check during LSWU by using the CLI command below.
During LSWU, use the integrityCheck skip option when upgrading from the CLI:
> request system serverAdmin <server> startSoftwareUpgrade integrityCheck skip package <package>
Integrity check works as expected only when upgrades are started from 4.1.x releases (4.1.4R000 or later) or from 4.2.3R000 or later releases.
Downgrading to any pre-5.0 release from this release requires a ConnexIP re-ISO installation. For more information, refer to:
The SBC Core supports Live Software Upgrade from releases listed in the table below:
EVS Transcoding is an enhancement to the FR, HR, EFR, AMR, and AMR-WB codecs. The SBC EVS Transcoding supports all bit-rates between 5.9 and 24.4 kbps. Source-controlled variable bit-rate operation gives improved capacity. EVS Transcoding is backward interoperable with AMR-WB.
This feature incorporates the following functionalities in the SBC:
For more information, refer to:
This feature updates the SBC to support SIP Cause Code Mapping for CPC to SIP for trunk groups regardless of the signaling zone.
The PSX sends the SIP_USED_IN_CORE flag with the policy response to the egress SBC. If the egress SBC receives the SIP_USED_IN_CORE flag and its ingress zone is the default signaling zone, the egress SBC does not allow SIP Cause Code Mapping for CPC to SIP, because the scenario is SIP in core. If the PSX either is not present or does not set the SIP_USED_IN_CORE flag, then SIP Cause Code Mapping for CPC to SIP functions as if the SBC uses a non-default signaling zone.
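The decision implied by the paragraph above can be sketched as follows (illustrative function and parameter names, not an SBC API):

```python
# CPC-to-SIP cause code mapping is suppressed only when the egress SBC
# receives SIP_USED_IN_CORE *and* its ingress zone is the default zone.
def allow_cpc_to_sip_mapping(sip_used_in_core, ingress_zone_is_default):
    if sip_used_in_core and ingress_zone_is_default:
        return False   # SIP-in-core scenario: mapping not allowed
    return True        # behaves as a non-default signaling zone

print(allow_cpc_to_sip_mapping(True, True))    # False
print(allow_cpc_to_sip_mapping(False, True))   # True
```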
For more information, refer to:
The Backup/Restore EMA screens are updated for this release.
For more information, refer to the following section:
Note: Default LI on the D-SBC supports interception of audio streams only; lawful intercept of other media streams is supported by IMS LI and PSCI LI.
For more information, refer to the following pages:
For deployments that require it, users with admin privileges can configure the SBC Core to generate CDRs in the format of the (former GENBAND) Q-SBC. Similar to SBC Core CDRs, the Q-SBC format is an ASCII-format text file with multiple records per file. Ribbon CDR fields and other data are mapped to populate Q-SBC CDRs.
To verify CDR data integrity and authenticity when transmitting Q-SBC format CDR files to another system, you can also configure the SBC Core to generate an HMAC-MD5 (Hash-Based Message Authentication Code - Message Digest algorithm 5) checksum and include it in each Q-SBC CDR log file. A receiving system uses the checksum to verify the file transmitted correctly and that there was no data tampering during transmission.
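A receiving system's verification step can be sketched with the Python standard library. The shared key and CDR record content here are illustrative; actual key provisioning is configured on the SBC.

```python
# Verifying an HMAC-MD5 checksum of a Q-SBC format CDR file, as described
# above. hmac + hashlib.md5 implement HMAC-MD5; compare_digest avoids
# timing side channels when checking the received checksum.
import hmac
import hashlib

def cdr_file_hmac_md5(data: bytes, key: bytes) -> str:
    return hmac.new(key, data, hashlib.md5).hexdigest()

record = b"1,STOP,2023-01-01T00:00:00Z,callid-0001\n"   # illustrative CDR line
digest = cdr_file_hmac_md5(record, b"shared-secret")

# A receiver recomputes the checksum and compares in constant time:
print(hmac.compare_digest(digest, cdr_file_hmac_md5(record, b"shared-secret")))  # True
```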
For more information, refer to:
The SBC includes several default groups (such as Administrator, Operator, Field Service, Guest, Calea, and Security Auditor) that have various levels of access to CDB data and to CLI and NETCONF commands. Group permissions are based on defined AAA rules, which are not user-editable. As a result, there is little flexibility to create user-defined groups with permissions comparable to the pre-defined groups. Additionally, users belonging to a custom group can only log in to the CLI, not the EMA.
The SBC is enhanced to create user-defined groups with flexibility to modify the AAA rules. New functionality includes:
For more information, refer to:
Unlike some systems, which include the CDR Established Time field in the CDR, the SBC calculates the CDR Established Time value using other CDR fields.
The SBC calculates the Call Established Time using the following fields stored in the CDR records:
For more information, refer to:
This feature updates the EMA Routing screen. The Routing screen replaces the Toolkit workspace panel with buttons, and other selection options, in the Routing Labels + Routing Label Routes and Routes tabs. This feature also updates the configuration panels in these tabs to improve usability.
For more information, refer to:
This feature updates the SBC to handle a 3xx response to the non-INVITE OOD SIP messages with a Contact header that has the tgrp and trunk-context parameters in the embedded Route header.
For more information, refer to:
The SBC allows configuration of a maximum of 16 mediation servers for IMS LI in the Call Data Channel (CDC). When a call is tapped, the SBC selects among the Delivery Function 2 (DF2) servers in a round-robin manner, and establishes persistent TCP connections with all configured mediation servers. Prior to the enhancement, only one mediation server was supported.
The SBC uses Bidirectional Forwarding Detection (BFD) with remote endpoints and routers to continuously monitor the link availability of the SBC. If the BFD session is down, the router declares the link as down, and the upper-layer application protocol performs the appropriate actions (such as not sending control packets).
Note: The SBC supports this feature only on User Datagram Protocol (UDP) and IPv4. This feature does not support any authentication mechanism.
The SBC supports the BFD asynchronous mode. In the asynchronous mode, the detection time decides the failure of the BFD session.
The ipInterface configuration adds the bfd parameter and its associated parameters.
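The asynchronous-mode detection time mentioned above follows the standard BFD calculation (RFC 5880): the session is declared down after the detect multiplier's worth of packets go missing at the negotiated interval. Interval values below are illustrative, not SBC defaults.

```python
# BFD asynchronous-mode detection time (RFC 5880). The negotiated receive
# interval is the slower of the local Required Min RX and the remote
# Desired Min TX; the session fails after detect_mult missed intervals.
def bfd_detection_time_ms(detect_mult, required_min_rx_ms, remote_desired_tx_ms):
    return detect_mult * max(required_min_rx_ms, remote_desired_tx_ms)

print(bfd_detection_time_ms(3, 100, 300))  # 900 ms
```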
For more information, refer to:
The SBC supports Online Certificate Status Protocol (OCSP) stapling (refer to RFC 6961). OCSP stapling allows you to provide the validity information of your security certificate. With OCSP stapling, the client (the ingress peer that initiates the call to the SBC) does not need to query the OCSP responder to retrieve the certificate status.
Note: OCSP stapling supports the following interworking scenarios:
The security configuration adds the ocspStapling flag and the ocspResponseCachingTimer parameter in the ocspProfile.
For more information, refer to:
The D-SBC is enhanced to support interception of all supported media streams, such as:
Prior to the enhancement, D-SBC supported interception of Audio calls only.
For more information, refer to the following pages:
In earlier releases, the message body is not sent transparently to the other gateway when the following conditions applied:
The SBC is enhanced so that:
The SBC Core is enhanced with the capability to trace SIPREC legs to assist in debugging various issues associated with SIPREC legs, such as:
(.DBG) logs

As part of the enhancement, the flag sipRecLegsCapture is introduced under the callFilter parameter of the global callTrace object.

When the main call is traced and SIPREC is invoked based on matching criteria, if the flag sipRecLegsCapture is set to enable:
- Media packets of the SIPREC legs are logged in the .PKT files; the .PKT files do not contain the media of the main call.
- The .TRC file contains SIP Protocol Data Units (PDUs) for SIPREC calls, as well as tracing data of the main call (depending on the level of the trace).

For more information, refer to the following pages:
The SBC supports relay of "Application TCP/UDP" streams established using a SIP session.
A new application bandwidth allocation is introduced to allocate bandwidth from the media bandwidth pool. Before this feature, the application TCP/BFCP stream was supported only with a video stream; this feature supports the application TCP/BFCP stream independent of the video stream.
This feature provides the D-SBC with support to relay application streams established using a SIP session with the TCP, TCP/BFCP, TCP/* and UDP protocols.
For more information, refer to:
The EMA Live Monitor capability now supports reporting on EVS transcoding. In the License Service Usage graph, the EMA can display the number of licensed EVS transcoding sessions and the usage level for a customer-configurable duration.
For more information, refer to:
The SBC allows configuration of a maximum of 16 mediation servers for IMS LI in the Call Data Channel (CDC). When a call is tapped, the SBC selects among the Delivery Function 2 (DF2) servers in a round-robin manner, and establishes persistent TCP connections with all configured mediation servers. Prior to the enhancement, only one mediation server was supported.
Each mediation server object contains the Signaling(X2) and Media (X3) IP addresses. The SBC allows configuration of multiple mediation servers with the same X2 IP address but a different X3 IP address.
For IMS LI, the SBC does not support any Active-Standby configuration for the X2 servers. It assumes that the DF2 servers are running in Active-Active mode, and in case of a failure, moves the IP address of the active DF2 server to the standby DF2 server.
The X2 and X3 servers operate independently. Even if the X2 servers are not reachable, the SBC sends X3 media if DF3 servers are available, and vice versa.
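The round-robin selection among configured DF2 mediation servers described above can be sketched as follows. The server entries are illustrative; the SBC's actual selection state is internal.

```python
# Round-robin selection across up to 16 configured DF2 mediation servers,
# as described above. Note that two servers may share the same X2 address
# while using different X3 addresses.
from itertools import cycle

df2_servers = [
    {"x2": "10.0.0.1", "x3": "10.1.0.1"},
    {"x2": "10.0.0.1", "x3": "10.1.0.2"},   # same X2, different X3
    {"x2": "10.0.0.2", "x3": "10.1.0.3"},
]

picker = cycle(df2_servers)                  # wraps around after the last server
taps = [next(picker)["x3"] for _ in range(4)]
print(taps)  # ['10.1.0.1', '10.1.0.2', '10.1.0.3', '10.1.0.1']
```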
The parameter dsrProtocolVersion is added to callDataChannel, and the groupTwoThousand is added as a vendorId.
For more information, refer to the following pages:
SBC SWe traffic profiles are used to characterize the call mix that operators expect to occur on their SBC SWe systems. SBC SWe systems can then enhance their VM performance by allocating CPU cores in a manner that maximizes capacity for the call mix specified in the active traffic profile. SBC SWe traffic profiles now include four additional parameters to better characterize the call mix of an SBC SWe system:
These profile parameters enable two additional attributes to appear in SWe capacity estimation output:
For more information, refer to:
The SBC SWe is qualified on the Dell EMC PowerEdge R740 in addition to the HP ProLiant.
The SBC is enhanced with the latest Debian Stretch kernel, so that the latest fixes and features are received from the Debian community.
The SBC is enhanced in this release with an updated Data Plane Development Kit (DPDK), moving to the latest LTS release of this third-party open-source software to allow access to the latest fixes and features from the open-source community.
The term "Hybrid Transcoding" refers to leveraging both CPU and GPU resources efficiently to accommodate a given codec combination for transcoding. With Hybrid Transcoding, a suitable VM instance (SBC SWe on KVM/OpenStack) utilizes all the CPU and GPU resources allocated to it when provisioning a given transcode call mix.
Prior to Hybrid Transcoding, the SBC supported either a pure CPU transcoding solution or a pure GPU transcoding solution. However, in the pure GPU solution, many CPUs are left unused. For example, when a 32-vCPU GPU I-SBC instance is used to provision just AMRWB-G711u calls, only 13 vCPUs are used, although such an instance can handle 7680 sessions of AMRWB-G711u transcoding. With Hybrid Transcoding, the remaining 19 vCPUs (32 - 13) are used for provisioning additional AMRWB-G711 sessions.
Hybrid Transcoding enables a GPU-SBC to support GPU codecs as well as non-GPU-supported codec in the same instance. For example, AMRWB-G711 and G726-G711 codecs are supported in the same instance.
Hybrid Transcoding is supported in Custom GPU and Standard GPU traffic profiles.
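The vCPU arithmetic from the 32-vCPU example in this section can be stated directly (figures taken from the text; the session capacity of the leftover vCPUs depends on the codec mix and is not computed here):

```python
# Hybrid Transcoding headroom: a 32-vCPU GPU I-SBC provisioning only
# AMRWB-G711u on the GPU consumes 13 vCPUs; the remainder becomes
# available for additional CPU-side transcoding sessions.
total_vcpus = 32
gpu_path_vcpus = 13                    # vCPUs used by the pure-GPU solution
leftover = total_vcpus - gpu_path_vcpus
print(leftover)  # 19 vCPUs available for additional sessions
```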
For more information, refer to the following pages:
The SBC is enhanced to send a configurable response code for internal errors or failures, by defining a new error string in an existing profile that maps error strings to SIP response codes. The SBC defines a unique error string for each location in the code from which an error can originate, and then maps the error string to the configurable SIP response code using the existing profile.
Prior to SBC Core version 8.1, if a trunk group was in the Out-of-Service state and the SBC received an INVITE message on that trunk group, it rejected the INVITE message with a fixed error code.
For more information, refer to:
To bring consistency and simplicity to deployment models, the SBC now supports only two basic high-availability (HA) models. The supported models are:
N:1 HA deployments are now supported for SBC SWe deployments on VMware. Ribbon provides an infrastructure as code (IaC) environment to deploy SBC SWe N:1 HA instances in a VMware environment. The IaC environment provides the code, templates, sample files, and instructions needed for launching and managing virtualized SBC deployments. The instructions for working within the IaC environment are embedded as inline comments within the sample and template files and as separate readme files.
For more information, refer to:
Some networks are divided into geographical superzones. In such cases, when video traffic travels between the superzones, the SBC applies the following policing for video traffic (the values shown here are hypothetical):
The maximum token bucket size of GilCo's NGN is 100,000. Thus, when the B-line exceeds 10,000, the SBC may apply excessive policing and cause video deterioration. Therefore, the video token bucket size of 80,000 must either be increased to more than 100,000, or policing must be disabled. To address this need, you can now disable the SBC's media policing function for audio, video, and application streams.
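A minimal token-bucket sketch illustrates why an undersized bucket over-polices bursty video, which motivates the option above. The class, sizes, and fill rate are illustrative and unrelated to the SBC's internal implementation.

```python
# Token-bucket policer sketch: a packet conforms only if enough tokens are
# available; a burst larger than the bucket is policed (dropped) no matter
# how fast tokens refill. Sizes are illustrative, in bytes.
class TokenBucket:
    def __init__(self, bucket_size, fill_rate):
        self.capacity = bucket_size
        self.tokens = bucket_size       # start full
        self.fill_rate = fill_rate      # tokens added per tick

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.fill_rate)

    def conforms(self, packet_size):
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True                  # packet passes
        return False                     # packet policed

small = TokenBucket(bucket_size=80_000, fill_rate=10_000)
print(small.conforms(100_000))   # False: burst exceeds the 80,000 bucket
large = TokenBucket(bucket_size=120_000, fill_rate=10_000)
print(large.conforms(100_000))   # True: larger bucket absorbs the burst
```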
For more information, refer to the following pages:
For SBC SWe cluster deployments operating in OAM configuration mode, new command parameters provide additional options for managing configuration changes when using the CLI.
For VNFM users, new options provided for the "createVnfmCsar.py" script that generates CSAR packages allow you to define VNFD files that create multiple redundancy groups in the same VNF and that enable customizing VM sizing of either the SBC or OAM nodes.
In addition to N:1 deployments in an OpenStack cloud environment, SBC SWe N:1 HA deployments on VMware also require use of OAM node configuration mode.
For more information, refer to:
The altIpVar configuration is the cloud equivalent of the altMediaIp configuration. Initially, when support for alternate media IP addresses was added, up to 14 addresses were supported; this is now extended to 254 addresses.
For more information, refer to the following page:
The following issues are resolved in this release:
The following issues exist in this release:
The following limitations exist in this release:
Due to a known EMA GUI issue, it can take up to 10 minutes to process each SMM rule when provisioning SMM on the SBC using the EMA. This will be fixed in a future release.
The HA interface must not be configured with a link-local address or subnet. For example, do not configure it with the 169.254.0.0/16 subnet.
The VLAN-tagged SR-IOV packet interfaces are unable to ping endpoint gateway IPs on the VMware platform because of an issue with VMware.
When upgrading SBC SWe cloud instances to release 8.0, you must update your Heat template userdata section to include mandatory SSH key information. An issue in OpenStack requires that you use the stack-update process rather than re-launch after updating the template, which leads to a new UUID for the instance. As a result, you must regenerate and apply new license bundles to the upgraded instances during the upgrade.
Refer to Upgrading SBC SWe Cloud in an N:1 HA Model for the relevant procedure.
The following functionalities are not supported with SBC Microservices:
Two-stage calls