SBC CNe Architecture Overview
SBC CNe is the Cloud Native decomposition of core SBC functionality into a set of microservices that interact with each other to deliver that functionality. As part of the CNe solution, the SBC Manager GUI is launched from RAMP.
The microservices that make up SBC CNe are described below.
Ribbon CNe Architecture Overview
Ribbon CNe Solution Overview
Ribbon recommends the following when the Ribbon CNe solution is deployed in the operator's network:
- RAMP CNFs - There are always two RAMP CNFs running in GR mode: one RAMP CNF is deployed in a cluster in geographic region/site 1, while the other is deployed in a cluster in geographic region/site 2. The ACTIVE/STANDBY state of the RAMP CNFs is dynamically controlled using etcd CNFs; three etcd instances are required, and they should run in different clusters. Ribbon's recommendation is to run the etcd CNF instances in three different geographic regions/sites.
- PSX-Primary CNFs - PSX-Primary CNFs shall be deployed in odd numbers, primarily to avoid split-brain/loss-of-quorum conditions during network isolation events (see the quorum sketch after this list). A minimum of three PSX-Primary CNFs, spread across multiple clusters and geographic regions, shall be deployed.
- PSX-Replica CNFs - The number of PSX-Replica CNFs to deploy is determined primarily by capacity requirements: call capacity (for PSX-Replica, the Diameter+ query rate), ENUM dips, and similar load. Each PSX-Replica CNF shall have multiple PSX-Replica pods.
- SBC CNFs - The number of SBC CNFs (to achieve cluster-level and geographic region/site-level redundancy) is determined by the call pattern: call/session capacity, call type (passthrough, transcode, direct media), SIPREC, Lawful Intercept, and so on. The SC pods in the SBC CNF dynamically scale up/down based on call traffic.
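The odd-number recommendation for PSX-Primary CNFs (and for the three etcd instances) follows from majority-quorum arithmetic. The short Python sketch below illustrates the reasoning; it is illustrative only and not part of any Ribbon component:

```python
# Minimal sketch of majority-quorum arithmetic, illustrating why the
# PSX-Primary (and etcd) instance counts above are odd numbers.
def quorum(members: int) -> int:
    """Votes needed for a majority among `members` instances."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """Instances that can be lost while a majority still exists."""
    return members - quorum(members)

for n in (2, 3, 4, 5):
    print(f"{n} instances -> quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")

# The output shows that 4 instances tolerate no more failures than 3,
# so even counts add cost without adding resilience -- hence the
# recommendation of three (or another odd number of) PSX-Primary CNFs.
```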
Ribbon CNe Terminology Definitions:

- Microservice - Also known as the microservice architecture, an architectural style that structures an application as a collection of loosely coupled, independently deployable services.
- Container - A container image is a ready-to-run software package containing everything needed to run a function: the code and any runtime it requires, application and system libraries, and default values for any essential settings. By design, a container is immutable: you cannot change the code of a container that is already running. If you have a containerized function and want to make changes, you need to build a new image that includes the change, then recreate the container to start from the updated image.
- K8S Pod - A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers with shared storage and network resources, and a specification for how to run the containers. For details, see https://kubernetes.io/docs/concepts/workloads/pods/
- K8S Deployment - A Deployment is a resource object in Kubernetes that provides declarative updates to applications. A Deployment allows you to describe an application's life cycle, such as which images to use for the app, the number of Pods there should be, and the way in which they should be updated. For details, see https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- K8S ReplicaSet - A ReplicaSet is a process that runs multiple instances of a Pod and keeps the specified number of Pods constant. Its purpose is to maintain the specified number of Pod instances running in a cluster at any given time, to prevent users from losing access to their application when a Pod fails or is inaccessible. For details, see https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
- K8S StatefulSet - A StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. For details, see https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
- K8S DaemonSet - The DaemonSet feature is used to ensure that some or all of your Pods are scheduled and running on every available node in the K8S cluster, essentially running a copy of the desired Pod across all nodes. When a new node is added to a Kubernetes cluster, a new Pod is added to that newly attached node. For details, see https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
- K8S Job - A Job is a supervisor for Pods carrying out batch processes, that is, a process that runs for a certain time to completion. For details, see https://kubernetes.io/docs/concepts/workloads/controllers/job/
- K8S Rollout - A rollout is a rolling update of an application: the application is updated gradually, gracefully, and with no downtime. For details, see https://argoproj.github.io/argo-rollouts/
- K8S Service - A Service is a logical abstraction for a deployed group of Pods in a cluster (which all perform the same function). Since Pods are ephemeral, a Service enables a group of Pods that provide specific functions (web services, image processing, etc.) to be assigned a name and a unique IP address (clusterIP). For details, see https://kubernetes.io/docs/concepts/services-networking/service/
- K8S Storage Class (SC) - A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, to backup policies, or to arbitrary policies determined by the cluster administrators. For details, see https://kubernetes.io/docs/concepts/storage/storage-classes/
- K8S Persistent Volume (PV) - A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. GlusterFS is used in the Ribbon VNF to share data across managed nodes.
- K8S Persistent Volume Claim (PVC) - A PersistentVolumeClaim is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany). For details, see https://kubernetes.io/docs/concepts/storage/persistent-volumes/
- K8S ConfigMap - A ConfigMap is an API object that allows you to store data as key-value pairs. Kubernetes Pods can use ConfigMaps as configuration files, environment variables, or command-line arguments. ConfigMaps allow you to decouple environment-specific configuration from containers to make applications portable. For details, see https://kubernetes.io/docs/concepts/configuration/configmap/
- K8S Secrets - A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code. For details, see https://kubernetes.io/docs/concepts/configuration/secret/
- RBBN CNF - A Cloud-Native Network Function is a software implementation of a Network Function, implemented in container(s), orchestrated by Kubernetes, and built and deployed in a cloud-native way. A CNF is a single deployable, upgradable, and manageable entity. A RBBN CNF consists of one or more Pods, packaged via a Helm chart.
- RBBN Application - An Application (or Application Service for aaS offers) is composed of one or more CNFs. In the near term, an Application will likely map 1:1 with a CNF (e.g., the SBC CNF).
- RBBN Solution - A RBBN Solution consists of a collection of Applications or CNFs which together form an overall solution; e.g., a solution may contain RAMP, PSX, AS, and SBC Applications.
- RBBN CNF Solution Hierarchy - The application teams define LCM operations, and the Solutions/Services teams define the solution, with workflows integrating application-level LCM operations. Ribbon Call Trust and Ribbon Voice Sync are examples of Solutions, composed of multiple Applications/CNFs.
- Observability - A complex system's internal state can be deduced from its external outputs. The goal of Ribbon observability is to facilitate the exposure of metrics and logs.

Abbreviations List:

- AA - Authentication and Authorization
- CAC - Call Admission Control
- CI/CD - Continuous Integration/Continuous Deployment
- CM - Configuration Management
- CNCF - Cloud Native Computing Foundation
- CNe - Cloud-Native edition
- CNF - Cloud Native Network Function
- CS - Common Services
- DevOps - Development and Operations
- DM - Device Management
- EFK - Elasticsearch, Fluentd, and Kibana stack
- EPU - Endpoint Updater
- FCAPS - Faults, Configuration, Accounting, Performance, and Security
- FM - Fault Management
- IDM - Identity Management
- K8S - Kubernetes
- LM - License Management
- NS - Network Services
- OAM - Operations, Administration and Maintenance
- OCP - OpenShift Container Platform
- PFE - Packet Front End
- PM - Performance Metrics
- PSX-P - PSX Primary
- PSX-R - PSX Replica
- RAC - Role Assignment Controller
- RBBN - Ribbon Communications
- RHPA - Ribbon Horizontal Pod Autoscaler
- SC - Session Control
- SIP-T - SIP-Telephony
- TOSCA - Topology and Orchestration Specification for Cloud Applications
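As an illustration of how several of the Kubernetes objects defined above can be inspected programmatically, the following minimal sketch uses the official kubernetes Python client. It assumes a reachable cluster, a local kubeconfig, and the default namespace (all assumptions, not part of the SBC CNe deliverables):

```python
# Minimal sketch, assuming a reachable cluster and a local kubeconfig:
# lists a few of the Kubernetes object kinds defined in the table above.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod

core = client.CoreV1Api()
apps = client.AppsV1Api()

ns = "default"                     # namespace is illustrative
for pod in core.list_namespaced_pod(ns).items:
    print("Pod:", pod.metadata.name, pod.status.phase)
for dep in apps.list_namespaced_deployment(ns).items:
    print("Deployment:", dep.metadata.name, dep.spec.replicas, "replicas")
for sts in apps.list_namespaced_stateful_set(ns).items:
    print("StatefulSet:", sts.metadata.name)
for cm in core.list_namespaced_config_map(ns).items:
    print("ConfigMap:", cm.metadata.name)
```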
Microservice/POD Details
The microservices and pods that make up SBC CNe are described below.
Session Handling Related Services
1.1.1.2.1 SIP Load Balancer (SLB)
The SLB acts as the single entry/exit point for all SIP signaling to the SBC CNF. It allows peers and endpoints to communicate with the SBC CNF without knowing the details of the internal components, which may change dynamically as part of auto-scaling or failure-recovery procedures. The SLB load-balances sessions across the relevant back-end components based on the metrics they report. The SLB also provides seamless support for complex SIP signaling flows, e.g., INVITE with Replaces.
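The sketch below illustrates the general idea of metrics-based back-end selection. It is a simplified illustration only; the metric names and selection policy are assumptions, not Ribbon's actual SLB algorithm:

```python
# Illustrative sketch (not Ribbon's actual algorithm): picking the
# least-loaded back-end SC instance from self-reported load metrics.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    sessions: int      # current active sessions reported by the instance
    capacity: int      # configured session capacity

def pick_backend(backends: list[Backend]) -> Backend:
    """Choose the instance with the lowest utilisation ratio."""
    available = [b for b in backends if b.sessions < b.capacity]
    if not available:
        raise RuntimeError("all back-ends at capacity")
    return min(available, key=lambda b: b.sessions / b.capacity)

backends = [Backend("sc-0", 80, 100), Backend("sc-1", 30, 100), Backend("sc-2", 95, 100)]
print(pick_backend(backends).name)   # -> sc-1
```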
1.1.1.2.2 Session Control (SC)
The SC service is the processing engine for SIP sessions. It also provides media and transcoding functionality for SIP sessions. The number of SC service pods auto-scales based on the current load of the existing instances, with new instances created or terminated as needed.
Internal Supporting Services
1.1.1.2.3 Network Services (NS)
NS manages the public IP address pool. The SBC CNF is based on a cloud-native architecture, so SC and SLB instances are not associated with public IP addresses in a hard-coded way; they are assigned a public IP address from the pool managed by NS.
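The sketch below illustrates the concept of an IP address pool from which instances are assigned addresses. The pool range, class shape, and instance names are assumptions for illustration, not the actual NS implementation:

```python
# Illustrative sketch of a public IP address pool, as NS manages for
# SLB/SC instances (pool range and behaviour here are assumptions).
import ipaddress

class IpPool:
    def __init__(self, cidr: str):
        net = ipaddress.ip_network(cidr)
        self._free = list(net.hosts())
        self._assigned: dict[str, ipaddress.IPv4Address] = {}

    def allocate(self, instance: str) -> ipaddress.IPv4Address:
        ip = self._free.pop(0)
        self._assigned[instance] = ip
        return ip

    def release(self, instance: str) -> None:
        self._free.append(self._assigned.pop(instance))

pool = IpPool("198.51.100.0/29")          # documentation range, illustrative
print(pool.allocate("slb-0"))             # 198.51.100.1
print(pool.allocate("sc-0"))              # 198.51.100.2
pool.release("slb-0")                     # IP returns to the pool for reuse
```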
1.1.1.2.4 Role Assignment Controller (RAC)
RAC is the central authority managing the active and standby status of other service instances.
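The following sketch illustrates the active/standby bookkeeping idea. It is a conceptual illustration only; the real RAC logic, its interfaces, and its failure detection are internal to the SBC CNF:

```python
# Illustrative sketch of active/standby role assignment (the real RAC
# logic is internal to the SBC CNF; this only shows the idea).
class RoleAssignmentController:
    def __init__(self):
        self.active: str | None = None
        self.standbys: list[str] = []

    def register(self, instance: str) -> str:
        """Assign a role to a newly registered service instance."""
        if self.active is None:
            self.active = instance
            return "active"
        self.standbys.append(instance)
        return "standby"

    def report_failure(self, instance: str) -> None:
        """Promote the oldest standby when the active instance fails."""
        if instance == self.active:
            self.active = self.standbys.pop(0) if self.standbys else None
        elif instance in self.standbys:
            self.standbys.remove(instance)

rac = RoleAssignmentController()
print(rac.register("sc-0"))   # active
print(rac.register("sc-1"))   # standby
rac.report_failure("sc-0")
print(rac.active)             # sc-1
```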
1.1.1.2.5 Ribbon Horizontal Pod Autoscaler (RHPA)
RHPA scales the number of SC service instances based on the metrics they report. It aggregates all received metrics and instantiates/terminates SC service instances based on configured thresholds.
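A minimal sketch of threshold-based horizontal scaling in the spirit of RHPA appears below; the thresholds, replica bounds, and load representation are assumptions for illustration:

```python
# Illustrative sketch of threshold-based horizontal scaling, in the
# spirit of RHPA (thresholds and metric representation are assumptions).
def desired_replicas(current: int, loads: list[float],
                     scale_out_at: float = 0.8, scale_in_at: float = 0.3,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    """Return the target SC replica count from per-instance load (0.0-1.0)."""
    avg = sum(loads) / len(loads)
    if avg > scale_out_at:
        return min(current + 1, max_replicas)
    if avg < scale_in_at:
        return max(current - 1, min_replicas)
    return current

print(desired_replicas(3, [0.9, 0.85, 0.95]))  # -> 4, scale out
print(desired_replicas(3, [0.1, 0.2, 0.15]))   # -> 2, scale in
print(desired_replicas(3, [0.5, 0.6, 0.4]))    # -> 3, hold
```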
1.1.1.2.6 Common Services (CS)
Certain functionalities require a cluster-wide view, e.g., peer overload and reachability tracking. CS is the centralized entity that performs this task and shares the aggregate view with all other relevant pods in the cluster.
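The sketch below illustrates how per-pod reports can be merged into a cluster-wide aggregate view of peers. Field names and the aggregation rules are assumptions, not the actual CS implementation:

```python
# Illustrative sketch of building a cluster-wide peer view from
# per-pod reports, as CS does (field names are assumptions).
from collections import defaultdict

reports = [
    {"pod": "sc-0", "peer": "203.0.113.7", "reachable": True,  "load": 0.7},
    {"pod": "sc-1", "peer": "203.0.113.7", "reachable": True,  "load": 0.9},
    {"pod": "sc-2", "peer": "203.0.113.9", "reachable": False, "load": 0.0},
]

aggregate: dict[str, dict] = defaultdict(lambda: {"reachable": False, "max_load": 0.0})
for rep in reports:
    view = aggregate[rep["peer"]]
    view["reachable"] = view["reachable"] or rep["reachable"]
    view["max_load"] = max(view["max_load"], rep["load"])

# The aggregate view is then shared back with every relevant pod:
for peer, view in aggregate.items():
    print(peer, view)
```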
1.1.1.2.7 Call Admission Control (CAC)
CAC is applied on a per-SBC-CNF basis, as this avoids per-SC-instance call quota compartmentalization and is a prerequisite for a cloud-native architecture. The CAC pod is the entity that maintains the aggregate counts at various levels, e.g., Zone and IPTG.
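The following sketch illustrates CNF-wide admission with limits enforced at several levels at once. The level names and limit values are assumptions for illustration:

```python
# Illustrative sketch of CNF-wide call admission with per-level limits
# (the Zone/IPTG limits shown are assumptions, not defaults).
class CallAdmissionControl:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits                       # e.g. {"cnf": 1000, "zone:east": 400}
        self.counts = {k: 0 for k in limits}

    def admit(self, *levels: str) -> bool:
        """Admit a call only if every level it maps to has headroom."""
        if any(self.counts[l] >= self.limits[l] for l in levels):
            return False
        for l in levels:
            self.counts[l] += 1
        return True

    def release(self, *levels: str) -> None:
        for l in levels:
            self.counts[l] -= 1

cac = CallAdmissionControl({"cnf": 1000, "zone:east": 400, "iptg:peerA": 100})
print(cac.admit("cnf", "zone:east", "iptg:peerA"))   # True while under all limits
```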
1.1.1.2.8 EndPoint Updater (EPU)
The EPU service facilitates discovering the non-eth0 (e.g., eth1) IP addresses of service instances, and publishes the IP addresses associated with all service instances.
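The sketch below illustrates discovering non-eth0 interface addresses on a host. It uses the third-party psutil package and is illustrative only; it is not the EPU implementation:

```python
# Illustrative sketch of discovering non-eth0 interface addresses, in
# the spirit of EPU (uses the third-party psutil package).
import socket
import psutil

def non_eth0_addresses() -> dict[str, list[str]]:
    """Map each interface except eth0/loopback to its IPv4 addresses."""
    out: dict[str, list[str]] = {}
    for ifname, addrs in psutil.net_if_addrs().items():
        if ifname in ("eth0", "lo"):
            continue
        v4 = [a.address for a in addrs if a.family == socket.AF_INET]
        if v4:
            out[ifname] = v4
    return out

print(non_eth0_addresses())   # e.g. {"eth1": ["192.0.2.11"]}
```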
OAM-related Services (closely maps to OAM functionality from the existing VNF OAM)
1.1.1.2.9 Operations and Management (OAM)
OAM is the single point of interaction for all SBC CNF configuration. OAM exposes a REST API and a CLI through which configuration can be provided and queried; it can also be accessed via the SBC Manager in RAMP. OAM distributes configuration information to the relevant pod instances and interfaces with RAMP (EMS) for statistics, alarms, traps, licensing, and CDRs.
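A hypothetical example of driving such a REST API from Python follows. The host name, endpoint paths, and payload shape are invented for illustration; consult the SBC CNF API reference for the actual OAM REST resources:

```python
# Hypothetical sketch only: the endpoint paths and payload shape below
# are invented for illustration -- see the SBC CNF API reference for
# the real OAM REST resources.
import requests

OAM = "https://oam.sbc-cnf.example.com"            # hypothetical address

# Query configuration (illustrative resource name):
resp = requests.get(f"{OAM}/api/config/zones", timeout=5)
resp.raise_for_status()
print(resp.json())

# Push a configuration change (illustrative resource and body):
resp = requests.post(f"{OAM}/api/config/zones",
                     json={"name": "ZONE_AS", "id": 10},
                     timeout=5)
resp.raise_for_status()
```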
1.1.1.2.10 DB Cache (DB)
Redis is used to store HA-related data such as session data and state. The SC instance taking over responsibility for a failed one retrieves the relevant state from the Redis in-memory cache.
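A minimal sketch of this checkpoint/recover pattern with the redis-py client appears below; the service host name, key layout, and field names are assumptions:

```python
# Minimal sketch, assuming a reachable Redis service named "db-cache"
# (service name, key layout, and fields are assumptions): storing and
# recovering session state the way a replacement SC instance would.
import redis

r = redis.Redis(host="db-cache", port=6379, decode_responses=True)

# The active SC instance checkpoints session state as a hash per call:
r.hset("session:call-1234", mapping={
    "state": "established",
    "peer": "203.0.113.7",
    "owner": "sc-2",
})

# After sc-2 fails, the SC instance taking over reads the state back:
session = r.hgetall("session:call-1234")
print(session["state"], "previously owned by", session["owner"])
```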