SBC CNe Architecture Overview

SBC CNe is the Cloud Native decomposition of core SBC functionality into various microservices which interact with each other to provide SBC functionality. As part of the CNe solution, the SBC Manager GUI is launched from RAMP.

The microservices which make up SBC CNe are described below.


Ribbon CNe Architecture Overview




Ribbon CNe Solution Overview



Ribbon recommends the following when the Ribbon CNe solution is deployed in the operator's network:

    • RAMP CNFs - There are always two RAMP CNFs running in GR mode: one RAMP CNF is deployed in a cluster in geographic region/site-1, while the other is deployed in a cluster in another geographic region/site-2. The state of the RAMP CNFs (i.e., ACTIVE and STANDBY) is dynamically controlled using etcd CNFs. Three etcd instances are needed, and they should run in different clusters; Ribbon's recommendation is to run the etcd CNF instances in 3 different geographic regions/sites.
    • PSX-Primary CNFs - The PSX-Primary CNFs shall be deployed in odd numbers, primarily to avoid split-brain/loss-of-quorum conditions during network isolation events. A minimum of 3 PSX-Primary CNFs (spread across multiple clusters and geographic regions) shall be deployed.
    • PSX-Replica CNFs - The number of PSX-Replica CNFs to deploy is decided primarily by the required query capacity (for PSX-Replica, Diameter+ queries, ENUM dips, etc.). Each PSX-Replica CNF shall have multiple PSX-Replica pods.
    • SBC CNFs - The number of SBC CNFs (to achieve cluster-level and geographic region/site-level redundancy) is decided based on the call pattern (i.e., call/session capacity, call type [passthrough, transcode, direct media], SIPREC, Lawful Intercept, etc.). The SC pods in the SBC CNF can dynamically scale up/down based on the call traffic.


Ribbon CNe Terminology Definitions


Term

Ribbon Definitions and Usage

Microservice

Microservices – also known as the microservice architecture – is an architectural style that structures an application as a collection of services that are:

  • Highly maintainable and testable
  • Loosely coupled
  • Independently deployable
  • Organized around business capabilities
  • Owned by a small team
Container

A Container image is a ready-to-run software package, containing everything needed to run a function: the code and any runtime it requires, application and system libraries, and default values for any essential settings.

By design, a container is immutable: you cannot change the code of a container that is already running. If you have a containerized function and want to make changes, you need to build a new image that includes the change, then recreate the container to start from the updated image.

K8S Pod

A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers with shared storage and network resources, and a specification for how to run the containers.

For details, see https://kubernetes.io/docs/concepts/workloads/pods/
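
For illustration only, a minimal Pod manifest looks like the following (all names and images are placeholders, not Ribbon artifacts):

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-pod               # hypothetical name
  spec:
    containers:
      - name: app
        image: registry.example.com/app:1.0   # placeholder image
        ports:
          - containerPort: 8080     # port the container listens on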

K8S Deployment

A Deployment is a resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application's life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.

For details, see https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
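
A minimal illustrative Deployment that keeps three replicas of a Pod template running (placeholder names):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: example-deploy            # hypothetical name
  spec:
    replicas: 3                     # desired number of pods
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
          - name: app
            image: registry.example.com/app:1.0   # placeholder image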

K8S Replicaset

A ReplicaSet ensures that a specified number of identical Pods is running at any given time. Its purpose is to maintain a stable set of replica Pods in the cluster, so that users do not lose access to the application when a Pod fails or becomes inaccessible.

For details, see https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
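
In practice, Deployments create and manage ReplicaSets automatically; a standalone ReplicaSet manifest is nonetheless a useful sketch (placeholder names):

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: example-rs                # hypothetical name
  spec:
    replicas: 3                     # number of pods to keep running
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
          - name: app
            image: registry.example.com/app:1.0   # placeholder image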

K8S Statefulset

A StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.

For details, see https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
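
A minimal illustrative StatefulSet; the Service named in serviceName gives each Pod a stable, ordered network identity such as example-0, example-1 (placeholder names):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: example                   # hypothetical name
  spec:
    serviceName: example-headless   # headless Service providing stable pod DNS
    replicas: 2
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
          - name: app
            image: registry.example.com/app:1.0   # placeholder image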

K8S Daemonset

A DaemonSet ensures that some or all of the available nodes in the K8S cluster run a copy of the desired pod. When a new node is added to the Kubernetes cluster, a pod is automatically scheduled on the newly attached node.

For details, see https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
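
An illustrative DaemonSet for a hypothetical per-node agent (placeholder names and image):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: example-agent             # hypothetical, e.g. a per-node log agent
  spec:
    selector:
      matchLabels:
        app: node-agent
    template:
      metadata:
        labels:
          app: node-agent
      spec:
        containers:
          - name: agent
            image: registry.example.com/agent:1.0   # placeholder image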

K8S Job

A Job is a supervisor for pods carrying out batch processes, that is, processes that run for a certain time until completion.

For details, see https://kubernetes.io/docs/concepts/workloads/controllers/job/
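
An illustrative Job that runs a batch pod to completion with a bounded number of retries (placeholder names):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: example-job               # hypothetical name
  spec:
    backoffLimit: 3                 # retries before the Job is marked failed
    template:
      spec:
        restartPolicy: Never        # batch pods run to completion, not restart
        containers:
          - name: task
            image: registry.example.com/task:1.0   # placeholder image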

K8S Rollout

A rollout is a rolling update of an application: the application is updated gradually and gracefully, with no downtime.

For details, see https://argoproj.github.io/argo-rollouts/
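
The link above refers to Argo Rollouts, which replaces a Deployment with a Rollout resource offering richer update strategies. A sketch of a canary-style Rollout (placeholder names, not a Ribbon manifest):

  apiVersion: argoproj.io/v1alpha1
  kind: Rollout
  metadata:
    name: example-rollout           # hypothetical name
  spec:
    replicas: 4
    strategy:
      canary:
        steps:
          - setWeight: 25           # shift 25% of traffic to the new version
          - pause: {}               # wait for manual promotion
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
          - name: app
            image: registry.example.com/app:1.0   # placeholder image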

K8S Service

A Service is a logical abstraction for a deployed group of pods in a cluster (which all perform the same function). Since pods are ephemeral, a service enables a group of pods, which provide specific functions (web services, image processing, etc.) to be assigned a name and unique IP address (clusterIP).

For details, see https://kubernetes.io/docs/concepts/services-networking/service/
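
An illustrative ClusterIP Service that fronts a set of pods selected by label (placeholder names and ports):

  apiVersion: v1
  kind: Service
  metadata:
    name: example-svc               # pods reach this group by name/clusterIP
  spec:
    selector:
      app: example                  # matches pods carrying this label
    ports:
      - port: 80                    # port exposed on the clusterIP
        targetPort: 8080            # port the backing containers listen on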

K8S Storage Class (SC)

A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. 

For details, see https://kubernetes.io/docs/concepts/storage/storage-classes/
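
An illustrative StorageClass; the provisioner value is a placeholder and depends on the storage driver actually deployed in the cluster:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: fast                      # hypothetical class name
  provisioner: example.com/block    # placeholder; set to the real CSI driver
  reclaimPolicy: Delete             # what happens to volumes when released
  parameters:
    type: ssd                       # driver-specific, illustrative only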

K8S Persistent Volume (PV)

A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. PVs are volume plugins like Volumes but have a lifecycle independent of any individual Pod that uses the PV. GlusterFS is used in the Ribbon VNF to share the data across managed nodes.

Some use cases for PV include:

  • Configurations are shared from OAM to other pods via PV
  • Performance statistics are shared from various pods to OAM via PV
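
As an illustration of a statically provisioned PV (the NFS server and path are placeholders; the actual backing storage depends on the deployment):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-pv                # hypothetical name
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteMany               # multiple pods (e.g., OAM and others) can mount it
    nfs:
      server: nfs.example.com       # placeholder backing store
      path: /exports/shared
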
K8S Persistent Volume Claim (PVC)

A PersistentVolumeClaim is a request for storage by a user. It is similar to a Pod: Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany).

For details, see https://kubernetes.io/docs/concepts/storage/persistent-volumes/
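
An illustrative PVC requesting storage that Kubernetes binds to a matching PV (placeholder names):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: example-pvc               # hypothetical name
  spec:
    accessModes:
      - ReadWriteMany               # must be satisfiable by the bound PV
    resources:
      requests:
        storage: 10Gi               # requested size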

K8S ConfigMap

A ConfigMap is an API object that allows you to store data as key-value pairs. Kubernetes pods can use ConfigMaps as configuration files, environment variables or command-line arguments. ConfigMaps allow you to decouple environment-specific configurations from containers to make applications portable.

For details, see https://kubernetes.io/docs/concepts/configuration/configmap/
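
An illustrative ConfigMap holding both a simple key-value setting and a file-like entry (placeholder content):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-config            # hypothetical name
  data:
    LOG_LEVEL: "info"               # consumed as an env var or mounted file
    app.properties: |               # file-like entry, mountable as a volume
      retries=3
      timeout=30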

K8S Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

For details, see https://kubernetes.io/docs/concepts/configuration/secret/
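
An illustrative Secret (the value is a placeholder; stringData accepts plain text, which Kubernetes stores base64-encoded):

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-secret            # hypothetical name
  type: Opaque
  stringData:
    db-password: changeme           # placeholder value, never commit real secrets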

RBBN CNF

A Cloud-Native Network Function is a software implementation of a Network Function, implemented in container(s), orchestrated by Kubernetes, and built and deployed in a cloud-native way. A CNF is a single deployable, upgradable and manageable entity.

A RBBN CNF consists of 1 or more Pods, packaged via a Helm Chart.

RBBN Application

An Application (or Application Service for aaS offers) is composed of one or more CNFs. In the near term, an Application will likely map 1:1 with a CNF (e.g., SBC CNF).

RBBN Solution

A RBBN Solution consists of a collection of Applications or CNFs which together form an overall solution. 

For example, a solution may contain RAMP, PSX, AS and SBC Applications. The scope of the application teams is to define LCM operations, and the Solutions/Services teams will define the solution, with workflows integrating application-level LCM operations.

Ribbon Call Trust and Ribbon Voice Sync are examples of Solutions, composed of multiple Applications/CNFs.

RBBN CNF Solution Hierarchy
  • Solution → Collection of Helm Charts
    • Application (1-n...) → currently 1:1, Application to CNF
      • Pod (1-n...)
        • Container (1-n...)
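
To illustrate this hierarchy, a solution-level Helm chart can aggregate application charts as dependencies in its Chart.yaml (chart names and repository are hypothetical, not Ribbon's actual charts):

  # Chart.yaml of a hypothetical solution-level chart
  apiVersion: v2
  name: example-solution
  version: 1.0.0
  dependencies:                     # one sub-chart per Application/CNF
    - name: sbc
      version: 1.0.0
      repository: https://charts.example.com   # placeholder repository
    - name: psx
      version: 1.0.0
      repository: https://charts.example.com
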
Observability

Observability – the ability to deduce a complex system's internal state from its external outputs. The goal of Ribbon observability is to facilitate the exposure of metrics and logs.

Ribbon uses OpenTelemetry to implement the Ribbon Observability Stack and supports selected components from the OpenTelemetry community. Ribbon's aim is to support streaming of observability data from all CNF-based Ribbon products. With the introduction of this new stack, all logs and metrics (and, in the future, traces) are normalized and pre-processed, which helps correlate the data points captured from each microservice and deliver them to configured backends.
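
For orientation, the generic shape of an OpenTelemetry Collector configuration (not Ribbon's actual stack configuration) is a set of pipelines that receive, process and export telemetry:

  receivers:
    otlp:                           # receive logs/metrics over OTLP
      protocols:
        grpc: {}
  processors:
    batch: {}                       # normalize/pre-process in batches
  exporters:
    debug: {}                       # stand-in for a configured backend
  service:
    pipelines:
      logs:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug]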

Abbreviations List:

Abbreviation – Definition

AA – Authentication and Authorization
CAC – Call Admission Control
CI/CD – Continuous Integration/Continuous Deployment
CM – Configuration Management
CNCF – Cloud Native Computing Foundation
CNe – Cloud-Native edition
CNF – Cloud Native Network Function
CS – Common Services
DevOps – Development and Operations
DM – Device Management
EFK – Elasticsearch, Fluentd, and Kibana stack
EPU – Endpoint Updater
FCAPS – Faults, Configuration, Accounting, Performance, and Security
FM – Fault Management
IDM – Identity Management
K8S – Kubernetes
LM – License Management
NS – Network Services
OAM – Operations, Administration and Maintenance
OCP – OpenShift Container Platform
PFE – Packet Front End
PM – Performance Metrics
PSX-P – PSX Primary
PSX-R – PSX Replica
RAC – Role Assignment Controller
RBBN – Ribbon Communications
RHPA – Ribbon Horizontal Pod Autoscaler
SC – Session Control
SIP-T – SIP-Telephony
TOSCA – Topology and Orchestration Specification for Cloud Applications



Microservice/POD Details

As described in the architecture overview, SBC CNe decomposes core SBC functionality into microservices. The components which make up SBC CNe are described below.


Session Handling Related Services

1.1.1.2.1          SIP Load Balancer (SLB)

SLB acts as the single entry/exit point for all SIP Signaling to the SBC CNF. It allows peers and endpoints to communicate with the SBC CNF without knowing the details of the internal components which may dynamically change as part of auto-scaling or failure recovery procedures. SLB enables load balancing of sessions to relevant back-end components based on the metrics reported by them. Another critical functionality provided by SLB is the seamless support for complex SIP signaling flows, e.g., INVITE with Replaces. 

1.1.1.2.2          Session Control (SC)

The SC service is the processing engine for SIP sessions. It also provides media and transcoding functionality for the SIP sessions. The number of SC service PODs auto-scales based on the current load of existing instances, with new instances created or terminated as needed.

Internal Supporting Services

1.1.1.2.3          Network Services (NS)

NS manages the public IP address pool. The SBC CNF is based on a cloud native architecture, so the SC and SLB instances are not associated with public IP addresses in a hard-coded way; instead, they are assigned a public IP address from the pool managed by NS.

1.1.1.2.4          Role Assignment Controller (RAC)

RAC is the central authority managing the active and standby status of other service instances.

1.1.1.2.5          Ribbon Horizontal Pod Autoscaler (RHPA)

RHPA scales the number of SC service instances based on the metrics they report. It aggregates all received metrics and instantiates/terminates SC service instances based on configured thresholds.
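RHPA is Ribbon's own autoscaler and its configuration is internal to the SBC CNF. For comparison only, Kubernetes' native HorizontalPodAutoscaler expresses the same threshold-driven scaling idea (placeholder names, not Ribbon's actual configuration):

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: example-hpa               # hypothetical name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: example                 # workload to scale
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70  # scale out above 70% average CPU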

1.1.1.2.6          Common Services (CS)

Certain functionalities require a cluster-wide view, e.g., peer overload and reachability tracking. CS is the centralized entity which performs this task and shares the aggregate view with all other relevant Pods in the cluster.

1.1.1.2.7          Call Admission Control (CAC)

CAC is applied on a per-SBC-CNF basis, as this avoids per-SC-instance call quota compartmentalization and is a prerequisite for a Cloud Native architecture. The CAC POD is the entity that maintains the aggregates at various levels, e.g., Zone, IPTG.

1.1.1.2.8          EndPoint Updater (EPU)

The EPU service facilitates discovery of the (non-eth0) eth1 IP addresses of service instances and publishes the IP addresses associated with all service instances.

OAM related Services (Closely maps to OAM functionality from existing VNF OAM)

1.1.1.2.9          Operations and Management (OAM)

OAM is the single point of interaction for all SBC CNF configuration. OAM exposes a REST API and CLI through which configuration can be provided and queried; it can also be accessed via the SBC Manager in RAMP. OAM distributes configuration information to relevant Pod instances and interfaces with RAMP (EMS) for statistics, alarms, traps, licensing and CDRs.

1.1.1.2.10         DB Cache (DB)

Redis is used to store HA-related data such as session data and state. The SC instance taking over the responsibility of a failed instance retrieves the relevant state from the Redis in-memory cache.