Feature Overview

With the wide adoption of cloud and virtualization technologies such as Network Functions Virtualization (NFV), physical network functions (PNFs) are becoming virtual network functions (VNFs) running on a virtualized infrastructure manager (VIM) such as OpenStack; Ribbon has virtualized its SBC, PSX, and EMS. While NFV adoption ultimately reduces Capex/Opex for operators, it also presents some challenges:

  • Seamless Capacity Expansion with no impact on external network topology (i.e. need to reduce the impact of new VNFC creation on the peer(s))
  • Need for IP Address Consolidation
  • Challenges due to Cloud-native architecture
  • Need for Single SIP IP Address Appearance

To address these concerns, a SIP-aware front-end load balancer (SLB) is available for the SBC and PSX. With the SLB as a front end, a single IP address is exposed towards peer operators; the SLB in turn distributes the traffic among back-end call-processing VMs. The SLB has its own capacity limits, so deployments must use DNS if traffic requirements demand more than one SLB instance (the SLB runs in HA mode, so an "instance" is actually an HA pair). However, as long as a single SLB HA pair can handle the traffic of a single site (and hence the corresponding number of back-end call-processing VMs), there is no need to fall back to DNS for the SLB. If traffic grows beyond what one SLB can handle, the solution is to create a new SLB with a new IP address and expose that to the peer.
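The distribution idea above can be sketched in a few lines. This is a minimal illustration of how a front end can expose one IP while spreading dialogs across back-end call-processing VMs by hashing a dialog-identifying key (e.g. the Call-ID) onto the current back-end list; it is not Ribbon's actual SLB algorithm, and the addresses are invented for the example.

```python
import hashlib

def pick_backend(call_id: str, backends: list) -> str:
    """Map a dialog key deterministically onto one of the back-end VMs.

    Illustrative only: the real SLB uses product-specific
    overload-detection and load-balancing mechanisms.
    """
    digest = hashlib.sha256(call_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

# Hypothetical back-end call-processing VM addresses.
backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
print(pick_backend("a84b4c76e66710@client.example.com", backends))
```

Because the mapping is deterministic, all messages carrying the same Call-ID land on the same back-end VM; adding a VM only changes where new dialogs are placed.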

Key Features of This Architecture 

The following are the advantages of this architecture:

  • The peer operator need not update ACLs when a new VM is spawned.
  • The SLB can have network-specific overload-detection and load-balancing mechanism(s) towards/between back-end call-processing VMs.
  • The new VM instance can start traffic immediately.
  • For access deployments, the SLB can terminate IPSec/TLS/TCP connections towards the peers and scale back-end VMs in or out. If a back-end VM’s IP address were exposed directly, the UE would not re-anchor its registration state to a different VM when that VM is scaled down.

The SBC must be configured to indicate that it is "behind" the SLB.


Note
  • If the SBC is behind the SLB, you must enable SLB usage before making any configuration changes.
  • If you are moving the SBC SWe from an SLB deployment to a non-SLB deployment (or vice versa), you must clear the configuration using the clearDBs.sh script, and then reconfigure the SBC SWe instance.

SLB/SBC Discovery Process

  • The SBC initiates the handshake/discovery process by sending “SlbRegisterReq” to the pre-configured SLB IP Address.
  • The SLB assigns a unique instance ID to each SBC; the SBC then embeds this ID in the branch and tag parameters, as follows:
      • Via: SIP/2.0/UDP 10.3.0.175:5060;branch=z9hG4bK38B0000c956f5f96dae_cK0003
      • To: 3334445566 <sip:3334445566@10.2.0.182:5060>;tag=gK38800063_cK0003
  • The SBC registers each zone with the SLB and downloads the SSP data associated with each zone.
  • The zone name on SBC should ideally match that on the SLB.
  • The SBC uses “SlbServiceStatusMsg” to update its status (In-Service/OOS) to the SLB.
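The instance-ID embedding shown in the sample headers can be illustrated with a short parsing sketch. The `_cK<id>` suffix convention and the helper below are inferred from those two example values only; they are not a documented API of the product.

```python
import re
from typing import Optional

# The sample Via branch and To tag both end in "_cK0003", suggesting the
# SLB-assigned instance ID is appended as a "_cK<hex>" suffix. This
# pattern is an assumption based solely on the examples above.
INSTANCE_ID_RE = re.compile(r"_cK([0-9A-Fa-f]+)$")

def extract_instance_id(param_value: str) -> Optional[str]:
    """Return the embedded instance ID from a branch/tag value, or None."""
    match = INSTANCE_ID_RE.search(param_value)
    return match.group(1) if match else None

print(extract_instance_id("z9hG4bK38B0000c956f5f96dae_cK0003"))  # -> 0003
print(extract_instance_id("gK38800063_cK0003"))                  # -> 0003
```

A suffix like this lets the SLB route in-dialog requests and responses back to the same SBC instance without keeping per-dialog state of its own.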

Use-Case Diagrams


SLB as a Front-end for N:1 HA S-SBC

The SLB is deployed as a front end for the Ribbon S-SBC, which can be deployed as an Access SBC and/or a Peering SBC.

Typical S-SBC VNF Front-ended by an SLB VNF

SLB Front-ending Multiple S-SBC VNFs

 
