Modified: for 12.1.1


Overview

Kubernetes Container Platform

You can deploy the Ribbon CNF solution on any Kubernetes container platform. The Red Hat OpenShift Container Platform (OCP) option is provided here as a reference.

Two methods are available to install/deploy the SBC CNe on Red Hat OCP:

  1. Using Helm
    1. Helm is the package manager for Kubernetes. It helps you define, install, and upgrade even the most complex Kubernetes applications, such as the SBC CNe. Helm uses a packaging format called charts. Simply put, a Helm chart is a combination of multiple Kubernetes resource manifests bundled into a single package that can be deployed in the Kubernetes cluster in one step. Refer to https://helm.sh/ for more details about Helm.
  2. Using GitOps
    1. GitOps is a set of best practices and a methodology for managing infrastructure and applications declaratively, using Git as the single source of truth. The core idea of GitOps is to store the desired state of your infrastructure and applications in a central Git repository and to use automation (FluxCD) to keep the actual state of your system in sync with that desired state.
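As an illustration of the GitOps model, the desired SBC CNe release can be declared in Git as a Flux `HelmRelease` object that FluxCD continuously reconciles against the cluster. The names, namespace, interval, and chart source below are illustrative assumptions, not values shipped with the product:

```yaml
# Hypothetical Flux HelmRelease: FluxCD reconciles the cluster against this
# declared state stored in Git. Names and the chart source are examples only.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sbc-cne
  namespace: sbx-dev
spec:
  interval: 5m              # how often Flux checks for drift
  chart:
    spec:
      chart: rbbn-core-cnf  # chart name, as in Chart.yaml
      sourceRef:
        kind: HelmRepository
        name: rbbn-charts   # hypothetical chart repository object
  values:
    namespace: sbx-dev
```

With this object committed to Git, a change to the chart version or values is rolled out by FluxCD automatically; no manual helm install/upgrade is run against the cluster.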

Onboarding Prerequisites

Installing the SBC CNe in a Red Hat OCP Cluster Using Helm

Kubernetes Manifest

In the cloud VNF (Virtual Network Function, the term used for the VM-based solution) environment, Heat templates (in OpenStack) or CFN templates (in AWS) are used to define and deploy the resources/components (such as VMs, volumes, and network interfaces) with the specific attributes and configurations that meet the requirements of a given solution.

Similarly, in a CNF (Cloud-Native Network Function, the term used for the containerized/microservices solution) environment, Kubernetes (K8s) resource manifests define the attributes with which the resources/objects (Pod, Service, Job, Secret, PVC, and so on) are deployed.

A Kubernetes manifest is a YAML file that describes each component or resource of your deployment and the desired state for your Kubernetes cluster once applied.

Sample Kubernetes Manifest - K8S Service object

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

The manifest above gives the following information about the service to be deployed:

  1. The name of the service - my-service
  2. The label of the object (pod/deployment/statefulset) that the service has to front-end - app: MyApp
  3. The transport layer protocol to be used - TCP
  4. The port on which the service is exposed - 80
  5. The pod's target port to which the service forwards the packets - 9376

The manifest above is a single resource definition. A complete solution consists of multiple resources deployed to work in unison. Deploying them individually from separate manifest files would be tedious; Helm simplifies the procedure.

SBC CNe Helm Charts

The SBC CNe Helm chart has the following structure:

├── rbbn-core-cnf
│   ├── charts
│   │   ├── common-modules
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── cnf-log-files-pvc.yaml
│   │   │   │   ├── configmap-ns.yaml
│   │   │   │   ├── config-pvc.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── image-secret.yaml
│   │   │   │   ├── ns-pvc.yaml
│   │   │   │   ├── pmstats-pvc.yaml
│   │   │   │   ├── role.yaml
│   │   │   │   ├── sbx-debug-log-pvc.yaml
│   │   │   │   ├── tshark-log-pvc.yaml
│   │   │   │   └── upgrade-pvc.yaml
│   │   │   └── values.yaml
│   │   ├── cs
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── epu
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── role.yaml
│   │   │   │   └── rollout.yaml
│   │   │   └── values.yaml
│   │   ├── hpa
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── role.yaml
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── network-service
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── role.yaml
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── oam
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── role.yaml
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── pfe
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── rac
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── rbbn-cache
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── job.yaml
│   │   │   │   ├── progressivestatefulsetmanager.yaml
│   │   │   │   ├── prsmanalysistemplate.yaml
│   │   │   │   ├── redis-pvc-delete.yaml
│   │   │   │   ├── secret.yaml
│   │   │   │   ├── service.yaml
│   │   │   │   └── statefulset.yaml
│   │   │   └── values.yaml
│   │   ├── rbbn-observe
│   │   │   ├── Chart.yaml
│   │   │   ├── pm-stats.yaml
│   │   │   ├── templates
│   │   │   │   ├── agentconfigmap.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── obs-backend-secret.yaml
│   │   │   │   ├── obs-pvc.yaml
│   │   │   │   ├── pod-monitor.yaml
│   │   │   │   ├── role.yaml
│   │   │   │   └── serviceaccount.yaml
│   │   │   └── values.yaml
│   │   ├── rs
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── sc
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   ├── sc-sizing-config.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   ├── sg
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── analysis-template.yaml
│   │   │   │   ├── configmap.yaml
│   │   │   │   ├── deployment.yaml
│   │   │   │   ├── _helpers.tpl
│   │   │   │   ├── rollout.yaml
│   │   │   │   └── svc-ep.yaml
│   │   │   └── values.yaml
│   │   └── slb
│   │       ├── Chart.yaml
│   │       ├── templates
│   │       │   ├── analysis-template.yaml
│   │       │   ├── configmap.yaml
│   │       │   ├── deployment.yaml
│   │       │   ├── _helpers.tpl
│   │       │   ├── rollout.yaml
│   │       │   └── svc-ep.yaml
│   │       └── values.yaml
│   ├── Chart.yaml
│   ├── README.md
│   ├── templates
│   │   ├── _helpers.tpl
│   │   └── NOTES.txt
│   ├── values-pfe.yaml
│   ├── values.schema.json
│   ├── values.yaml
│   ├── values.yaml-advanced-user


As part of the Helm chart, the Chart.yaml file provides the following details:

Element       Description
------------  -----------------------------------------------------------------
appVersion    The version of the app that the chart contains (optional). It need not be SemVer (for example, 12.1.1-R000).
description   A single-sentence description of this project (for example, "A Helm chart for CORE SBC CNF solution").
name          The name of the chart (for example, rbbn-core-cnf).
type          The type of the chart (for example, application).
version       A SemVer 2 version (for example, 1.0.0).
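Putting those elements together, a Chart.yaml for the parent chart could look like the following sketch. The field values are the examples from the table above, not the contents of an actual release:

```yaml
# Illustrative Chart.yaml for the parent chart; values are the table's
# examples, not taken from a shipped build.
apiVersion: v2                # Helm 3 chart API version
name: rbbn-core-cnf
description: A Helm chart for CORE SBC CNF solution
type: application
version: 1.0.0                # chart version (SemVer 2)
appVersion: 12.1.1-R000       # version of the packaged application
```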



values.yaml

Before deploying the SBC CNe:

  • You must update the values.yaml file present under the rbbn-core-cnf folder in accordance with the production deployment.
  • The values.yaml-advanced-user file contains some additional parameters compared to the values.yaml file.
  • The values.yaml-advanced-user file can be used in a lab environment.
  • The values.yaml files present under the sub-folders must not be edited.
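As a sketch of the kind of edits involved, a fragment of the parent values.yaml might look like the following. The key names here are hypothetical, used only to illustrate typical deployment-specific parameters; the actual schema is defined by the chart's values.schema.json:

```yaml
# Hypothetical excerpt of the parent values.yaml. Key names are
# illustrative; consult values.schema.json for the real schema.
global:
  namespace: sbx-dev                  # namespace the CNF is deployed into
  storageClass: managed-nfs-storage   # StorageClass backing the PVCs
  image:
    repository: <artifactory-path>    # container image registry path
    tag: 12.1.1-135                   # image tag for this release
```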

Before deploying the SBC CNe application in the Kubernetes cluster, the following Kubernetes objects must be created and available in the Cluster:

  • Namespace
    • In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Refer to https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more details about namespaces.
    • The SBC CNF must be deployed under a single namespace.
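A namespace can be created declaratively with a minimal manifest; the name below matches the sbx-dev example used in this section:

```yaml
# Minimal Namespace manifest; the name is an example.
apiVersion: v1
kind: Namespace
metadata:
  name: sbx-dev
```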

Namespace
[user@cli-server ~]$ oc describe namespace sbx-dev
Name:         sbx-dev
Labels:       kubernetes.io/metadata.name=sbx-dev
              openshift-pipelines.tekton.dev/namespace-reconcile-version=v1.6.4
              prometheus=dev-ops
Annotations:  openshift.io/description: SBX Development. Owner: svenk.rbbn.com
              openshift.io/display-name: SBX Development. Owner: svenk.rbbn.com
              openshift.io/node-selector: type=general
              openshift.io/sa.scc.mcs: s0:c27,c14
              openshift.io/sa.scc.supplemental-groups: 1000730000/10000
              openshift.io/sa.scc.uid-range: 1000730000/10000
Status:       Active

Resource Quotas
 Name:            sbx-dev-resources
 Resource         Used       Hard
 --------         ---        ---
 limits.cpu       292516m    500
 limits.memory    1036230Mi  1600Gi
 requests.cpu     227998m    265
 requests.memory  1036230Mi  1600Gi

No LimitRange resource.

The SBC CNF must be deployed in a single namespace. For example, sbx-dev is a namespace in which the SBC CNF can be deployed. 

  • Role and Role Binding
    • Role:  An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules). A Role always sets permissions within a particular namespace; when you create a Role, you must specify the namespace it belongs in.
    • Role Binding: A role binding grants the permissions defined in a role to a user or set of users. It contains a list of subjects (users, groups, or service accounts) and a reference to the role granted. A RoleBinding grants permissions within a specific namespace.
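A minimal Role and RoleBinding pair can be sketched as follows. The names mirror the example output in this section, while the rules shown are illustrative, not the full permission set required by the solution:

```yaml
# Minimal Role/RoleBinding sketch; the pods rule is illustrative only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: an-isbc-hpa-role
  namespace: sbx-dev          # a Role always belongs to a namespace
rules:
  - apiGroups: [""]           # "" = core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: an-isbc-hpa-role-binding
  namespace: sbx-dev
roleRef:                      # the Role whose permissions are granted
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: an-isbc-hpa-role
subjects:                     # who receives the permissions
  - kind: ServiceAccount
    name: default
    namespace: sbx-dev
```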

Role and Role Binding
[shanmagesh@cli-server ~]$ oc describe role an-isbc-hpa-role
Name:         an-isbc-hpa-role
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: an-isbc
              meta.helm.sh/release-namespace: sbx-dev
PolicyRule:
  Resources               Non-Resource URLs  Resource Names  Verbs
  ---------               -----------------  --------------  -----
  pods                    []                 []              [get watch list patch delete]
  deployments.apps/scale  []                 []              [get watch list patch]
  deployments.apps        []                 []              [get watch list patch]
  services                []                 []              [get watch list]


[shanmagesh@cli-server ~]$ oc describe rolebinding an-isbc-hpa-role-binding
Name:         an-isbc-hpa-role-binding
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: an-isbc
              meta.helm.sh/release-namespace: sbx-dev
Role:
  Kind:  Role
  Name:  an-isbc-hpa-role
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  sbx-dev

  • Storage Class
    • A StorageClass allows administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators.
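A minimal StorageClass manifest, modeled on the managed-nfs-storage example in this section, can be sketched as follows; the provisioner and parameters are illustrative of an NFS-backed class:

```yaml
# Minimal StorageClass sketch modeled on the NFS example below;
# provisioner and parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # cluster default
provisioner: storage.io/nfs
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete          # delete backing volume with the PV
volumeBindingMode: Immediate   # bind as soon as the PVC is created
```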

StorageClass
[shanmagesh@cli-server ~]$ oc get storageclass
NAME                            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   
csi-rbd-sc                      rbd.csi.ceph.com   Delete          Immediate           true                   
csi-rbd-sc-rco                  rbd.csi.ceph.com   Delete          Immediate           true                   
managed-nfs-storage (default)   storage.io/nfs     Delete          Immediate           false                  
netapp-nfs-san                  storage.io/nfs     Delete          Immediate           false                  


[shanmagesh@cli-server ~]$ oc describe storageclass managed-nfs-storage
Name:                  managed-nfs-storage
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           storage.io/nfs
Parameters:            archiveOnDelete=false
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

[shanmagesh@cli-server ~]$ oc describe storageclass csi-rbd-sc
Name:                  csi-rbd-sc
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           rbd.csi.ceph.com
Parameters:            clusterID=b6f53153-5464-4d27-a34e-6060d464ab33,csi.storage.k8s.io/controller-expand-secret-name=csi-rbd-secret-rco,csi.storage.k8s.io/controller-expand-secret-namespace=default,csi.storage.k8s.io/fstype=ext4,csi.storage.k8s.io/node-stage-secret-name=csi-rbd-secret-rco,csi.storage.k8s.io/node-stage-secret-namespace=default,csi.storage.k8s.io/provisioner-secret-name=csi-rbd-secret-rco,csi.storage.k8s.io/provisioner-secret-namespace=default,imageFeatures=layering,pool=rbd_volume_rco_ocp1
AllowVolumeExpansion:  True
MountOptions:
  discard
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>

  • Network Attachment Definition
    • The 'NetworkAttachmentDefinition' is used to set up the network attachment, that is, the secondary interface for the pod. Multus CNI (Container Network Interface) is used as the CNI for the RedHat OCP. Multus CNI is a container network interface plugin for Kubernetes that enables attaching multiple network interfaces to pods. In Kubernetes, each pod has only one network interface by default, other than local loopback. With Multus, you can create multi-homed pods that have multiple interfaces. Multus acts as a ‘meta’ plugin that can call other CNI plugins to configure additional interfaces.
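A minimal macvlan NetworkAttachmentDefinition can be sketched as follows; the interface name and address range mirror the describe output in this section, and the IPAM settings are simplified (the etcd datastore parameters are omitted):

```yaml
# Minimal macvlan NetworkAttachmentDefinition sketch; IPAM settings
# are simplified relative to the full example shown below.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ha-net-1-ipv6
  namespace: sbx-dev
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3f0.501",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "2001:db8::/64"
      }
    }
```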
[shanmagesh@cli-server ~]$ oc describe net-attach-def ha-net-1-ipv6
Name:         ha-net-1-ipv6
Namespace:    sbx-dev
Labels:       <none>
Annotations:  <none>
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2021-10-04T07:55:11Z
  Generation:          2
  Managed Fields:
    API Version:  k8s.cni.cncf.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .:
          k:{"uid":"b259a8e2-c679-44ea-b3d3-f9c7968333f8"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:config:
    Manager:    cluster-network-operator
    Operation:  Update
    Time:       2021-10-04T07:55:11Z
  Owner References:
    API Version:           operator.openshift.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Network
    Name:                  cluster
    UID:                   b259a8e2-c679-44ea-b3d3-f9c7968333f8
  Resource Version:        849362762
  UID:                     c052a745-b2d3-4767-a3b9-750c50d7ecb0
Spec:
  Config:  { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens3f0.501", "mode": "bridge", "ipam": { "type": "whereabouts", "datastore": "etcd", "etcd_host": "10.232.178.217:2379",  "range": "2001:db8::/64" } }
Events:    <none>


[shanmagesh@cli-server ~]$ oc describe net-attach-def sbx-dev-sriov-net-1-ipv6
Name:         sbx-dev-sriov-net-1-ipv6
Namespace:    sbx-dev
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov1
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2021-06-18T22:26:03Z
  Generation:          2
  Managed Fields:
    API Version:  k8s.cni.cncf.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:k8s.v1.cni.cncf.io/resourceName:
      f:spec:
        .:
        f:config:
    Manager:         sriov-network-operator
    Operation:       Update
    Time:            2021-06-18T22:26:03Z
  Resource Version:  464309604
  UID:               43ead257-1448-43d8-b503-2d87a0968d97
Spec:
  Config:  { "cniVersion":"0.3.1", "name":"sbx-dev-sriov-net-1-ipv6","type":"sriov","vlan":491,"spoofchk":"on","trust":"on","vlanQoS":0,"capabilities":{"mac": true, "ip": true},"link_state":"auto","ipam":{} }
Events:    <none>


Once the above Kubernetes objects are created, perform the following steps: 


  • Update the parent values.yaml file (the values.yaml present under the rbbn-core-cnf directory) of the SBC CNe Helm chart with the required details, a few of which are listed below:
    1. Namespace
    2. Storage Class
    3. Artifactory Access details (Login/Password)
    4. Container Image Artifactory path and tag
    5. Associating the correct network attachment definition
    6. ... and so on

  • Once the values.yaml file is updated, navigate to the rbbn-core-cnf directory and run helm install <helm deployment name> . --values values.yaml. (In the example below, the <helm deployment name> is vgsbc.)
  • Run the "helm list" command and verify the following to confirm that the SBC CNe Helm chart deployed successfully:
    • The chart deployment is listed with the status deployed.
    • The chart name matches the name in Chart.yaml.

helm list command
[prrao@cli-blr-1 12.1.1-142_build_SBC]$ helm list
NAME    NAMESPACE  REVISION   UPDATED                                  STATUS          CHART                           APP VERSION
vgsbc   sbc-svt    3          2024-03-05 20:35:40.949699595 +0530 IST  deployed        rbbn-core-cnf-12.1.1-135        12.1.1-135
  • Check if all the pods in the SBC CNe are deployed successfully with all the containers in the "Running" state. This completes the deployment of the SBC CNe solution using helm.

SBC CNe Deployment
[prrao@cli-blr-1 12.1.1-142_build_SBC]$ oc get pod -o wide
NAME                                 READY   STATUS    RESTARTS  AGE     IP              NODE                              NOMINATED NODE   READINESS GATES
vgsbc-cache-0                        2/2     Running   0         22h     10.231.9.230    worker-16.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-1                        2/2     Running   0         22h     10.231.37.11    worker-18.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-2                        2/2     Running   0         22h     10.231.6.130    worker-17.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-3                        2/2     Running   0         22h     10.231.55.4     worker-13.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-4                        2/2     Running   0         22h     10.231.32.133   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-5                        2/2     Running   0         22h     10.231.52.97    worker-11.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-proxy-6985b46dbc-ch9lq   2/2     Running   0         13h     10.231.52.158   worker-11.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-proxy-6985b46dbc-fxn79   2/2     Running   0         13h     10.231.32.164   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cache-proxy-6985b46dbc-wgv57   2/2     Running   0         13h     10.231.37.55    worker-18.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cs-67c84d95b6-kkm2d            4/4     Running   0         13h     10.231.12.167   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-cs-67c84d95b6-zfp22            4/4     Running   0         13h     10.231.34.164   worker-15.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-hpa-5f9b57f659-sbbqf           3/3     Running   0         13h     10.231.12.166   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-hpa-5f9b57f659-vpj27           3/3     Running   0         13h     10.231.52.159   worker-11.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-ns-f7fc586b7-ds8h5             3/3     Running   0         13h     10.231.32.162   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-ns-f7fc586b7-vp6jt             3/3     Running   0         13h     10.231.6.191    worker-17.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-oam-6c6f97c7c6-55nv6           2/2     Running   0         13h     10.231.12.168   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-oam-6c6f97c7c6-cfj75           2/2     Running   0         13h     10.231.55.72    worker-13.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-rac-86f67895dd-b59x5           3/3     Running   0         13h     10.231.32.163   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-rac-86f67895dd-mzk5d           3/3     Running   0         13h     10.231.6.192    worker-17.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-rs-6b5d4b8d9-7plxw             4/4     Running   0         13h     10.231.32.165   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-rs-6b5d4b8d9-q4tq2             4/4     Running   0         13h     10.231.12.170   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-sc-56b956dfc6-68lvv            4/4     Running   0         13h     10.231.34.163   worker-15.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-sc-56b956dfc6-n448m            4/4     Running   0         13h     10.231.12.171   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-sc-56b956dfc6-q9zjr            4/4     Running   0         13h     10.231.32.167   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-sg-79f84c57b6-82bgk            4/4     Running   0         13h     10.231.32.166   worker-14.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-sg-79f84c57b6-swrw9            4/4     Running   0         22h     10.231.12.65    worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-slb-55dcbfc849-cktc8           4/4     Running   0         15h     10.231.37.45    worker-18.blr-ocp3.lab.rbbn.com   <none>           <none>
vgsbc-slb-76b74f84d5-lxdcx           4/4     Running   0         13h     10.231.12.169   worker-12.blr-ocp3.lab.rbbn.com   <none>           <none>
  • To verify that the individual containers in each pod instantiated successfully, you can enter a container using oc exec -it <pod_name> -c <container_name> -- /bin/bash (for example, oc exec -it sbccne-sc-668f5bdf-mh8fz -c sc-container -- /bin/bash). Once inside the container, you can check its health. In production, where there is no access to individual pods/containers, check the health of the overall SBC CNe deployment by running the following CLI commands (after connecting to the CLI using "ssh -p 2024 linuxadmin@<OAM Mgmt IP Address>"):
    1. show table cnfGlobal cnfHealth - Displays the overall health of the deployed SBC CNe.
    2. show table service ALL podName ALL cnfStatus - Displays the pod-level and container-level details of the deployed SBC CNe.

SBC CNe Health
If all the pods and containers are up and running, the cnfHealth CLI command will display the overall CNF health status as "Healthy".
admin@vsbc1> show table cnfGlobal cnfHealth
POD   CONTAINER  CONTAINER
NAME  NAME       STATUS
----------------------------
ALL   ALL        Healthy
[ok][2023-06-05 02:39:18]

If any container(s) running in the pod(s) is not healthy, the cnfHealth CLI command will display the status of the individual containers that are not healthy as "Unhealthy".
admin@vsbc1> show table cnfGlobal cnfHealth
                               CONTAINER     CONTAINER
POD NAME                       NAME          STATUS
--------------------------------------------------------
sksbx-v11-sc-7c94c858f9-5j6wx  sc-container  Unhealthy
sksbx-v11-sc-7c94c858f9-7s2p8  sc-container  Unhealthy
sksbx-v11-sc-7c94c858f9-95lpd  sc-container  Unhealthy
sksbx-v11-sc-7c94c858f9-cc9pk  sc-container  Unhealthy
[ok][2023-06-05 08:10:47]
SBC CNe Status - Role of Pods and Individual Container Status
admin@vsbc1> show table service ALL podName ALL cnfStatus
cnfStatus {
    podRole vgsbc-ns-f7fc586b7-ds8h5 {
        PodRole inactive;
    }
    podRole vgsbc-ns-f7fc586b7-vp6jt {
        PodRole active;
    }
    podRole vgsbc-rs-6b5d4b8d9-7plxw {
        PodRole InActive;
    }
    podRole vgsbc-rs-6b5d4b8d9-q4tq2 {
        PodRole Active;
    }
    podRole vgsbc-cs-67c84d95b6-kkm2d {
        PodRole Active;
    }
    podRole vgsbc-cs-67c84d95b6-zfp22 {
        PodRole InActive;
    }
    podRole vgsbc-sc-56b956dfc6-68lvv {
        PodRole Active;
    }
    podRole vgsbc-sc-56b956dfc6-n448m {
        PodRole InActive;
    }
    podRole vgsbc-sc-56b956dfc6-q9zjr {
        PodRole Active;
    }
    podRole vgsbc-sg-79f84c57b6-82bgk {
        PodRole InActive;
    }
    podRole vgsbc-sg-79f84c57b6-swrw9 {
        PodRole Active;
    }
    podRole vgsbc-hpa-5f9b57f659-sbbqf {
        PodRole active;
    }
    podRole vgsbc-hpa-5f9b57f659-vpj27 {
        PodRole inactive;
    }
    podRole vgsbc-rac-86f67895dd-b59x5 {
        PodRole active;
    }
    podRole vgsbc-rac-86f67895dd-mzk5d {
        PodRole standby;
    }
    podRole vgsbc-slb-55dcbfc849-cktc8 {
        PodRole Active;
    }
    podRole vgsbc-slb-76b74f84d5-lxdcx {
        PodRole InActive;
    }
    containerStatus vgsbc-cache-0 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m55s;
        ContainerCpuUsage    53m;
        ContainerMemoryUsage 34904Ki;
    }
    containerStatus vgsbc-cache-0 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m55s;
        ContainerCpuUsage    38m;
        ContainerMemoryUsage 176592Ki;
    }
    containerStatus vgsbc-cache-1 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    66m;
        ContainerMemoryUsage 33304Ki;
    }
    containerStatus vgsbc-cache-1 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    22m;
        ContainerMemoryUsage 182508Ki;
    }
    containerStatus vgsbc-cache-2 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m54s;
        ContainerCpuUsage    67m;
        ContainerMemoryUsage 36500Ki;
    }
    containerStatus vgsbc-cache-2 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m54s;
        ContainerCpuUsage    45m;
        ContainerMemoryUsage 179872Ki;
    }
    containerStatus vgsbc-cache-3 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    65m;
        ContainerMemoryUsage 34588Ki;
    }
    containerStatus vgsbc-cache-3 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    20m;
        ContainerMemoryUsage 170988Ki;
    }
    containerStatus vgsbc-cache-4 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m50s;
        ContainerCpuUsage    46m;
        ContainerMemoryUsage 41328Ki;
    }
    containerStatus vgsbc-cache-4 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m50s;
        ContainerCpuUsage    24m;
        ContainerMemoryUsage 173564Ki;
    }
    containerStatus vgsbc-cache-5 rbbn-cache {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    61m;
        ContainerMemoryUsage 34016Ki;
    }
    containerStatus vgsbc-cache-5 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m49s;
        ContainerCpuUsage    27m;
        ContainerMemoryUsage 169540Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-ds8h5 network-service {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    25m;
        ContainerMemoryUsage 127284Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-ds8h5 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m28s;
        ContainerCpuUsage    12m;
        ContainerMemoryUsage 36676Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-ds8h5 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    36m;
        ContainerMemoryUsage 195008Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-vp6jt network-service {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m16s;
        ContainerCpuUsage    27m;
        ContainerMemoryUsage 137352Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-vp6jt oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m15s;
        ContainerCpuUsage    9m;
        ContainerMemoryUsage 43712Ki;
    }
    containerStatus vgsbc-ns-f7fc586b7-vp6jt rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m16s;
        ContainerCpuUsage    12m;
        ContainerMemoryUsage 225104Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-7plxw rs-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    330m;
        ContainerMemoryUsage 3533748Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-7plxw oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    4m;
        ContainerMemoryUsage 20080Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-7plxw pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    9m;
        ContainerMemoryUsage 39064Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-7plxw rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m30s;
        ContainerCpuUsage    47m;
        ContainerMemoryUsage 184576Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-q4tq2 rs-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m40s;
        ContainerCpuUsage    244m;
        ContainerMemoryUsage 3597564Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-q4tq2 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m40s;
        ContainerCpuUsage    4m;
        ContainerMemoryUsage 19064Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-q4tq2 pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m40s;
        ContainerCpuUsage    15m;
        ContainerMemoryUsage 39044Ki;
    }
    containerStatus vgsbc-rs-6b5d4b8d9-q4tq2 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m40s;
        ContainerCpuUsage    30m;
        ContainerMemoryUsage 206140Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-kkm2d cs-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    244m;
        ContainerMemoryUsage 2844272Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-kkm2d oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    4m;
        ContainerMemoryUsage 20780Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-kkm2d pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    15m;
        ContainerMemoryUsage 38488Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-kkm2d rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    29m;
        ContainerMemoryUsage 203212Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-zfp22 cs-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m51s;
        ContainerCpuUsage    198m;
        ContainerMemoryUsage 2732148Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-zfp22 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m51s;
        ContainerCpuUsage    11m;
        ContainerMemoryUsage 27Mi;
    }
    containerStatus vgsbc-cs-67c84d95b6-zfp22 pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m50s;
        ContainerCpuUsage    14m;
        ContainerMemoryUsage 42712Ki;
    }
    containerStatus vgsbc-cs-67c84d95b6-zfp22 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m50s;
        ContainerCpuUsage    46m;
        ContainerMemoryUsage 189812Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-68lvv isbc-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    339m;
        ContainerMemoryUsage 5167880Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-68lvv oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    8m;
        ContainerMemoryUsage 26168Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-68lvv pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    16m;
        ContainerMemoryUsage 41448Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-68lvv rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    63m;
        ContainerMemoryUsage 215632Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-n448m isbc-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m46s;
        ContainerCpuUsage    379m;
        ContainerMemoryUsage 5099064Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-n448m oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m46s;
        ContainerCpuUsage    7m;
        ContainerMemoryUsage 20208Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-n448m pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m46s;
        ContainerCpuUsage    12m;
        ContainerMemoryUsage 40824Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-n448m rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m45s;
        ContainerCpuUsage    26m;
        ContainerMemoryUsage 195184Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-q9zjr isbc-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m31s;
        ContainerCpuUsage    264m;
        ContainerMemoryUsage 5100016Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-q9zjr oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m31s;
        ContainerCpuUsage    10m;
        ContainerMemoryUsage 20616Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-q9zjr pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m31s;
        ContainerCpuUsage    15m;
        ContainerMemoryUsage 42088Ki;
    }
    containerStatus vgsbc-sc-56b956dfc6-q9zjr rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h51m30s;
        ContainerCpuUsage    55m;
        ContainerMemoryUsage 208Mi;
    }
    containerStatus vgsbc-sg-79f84c57b6-82bgk sg-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    309m;
        ContainerMemoryUsage 3172640Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-82bgk oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    10m;
        ContainerMemoryUsage 21308Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-82bgk pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    17m;
        ContainerMemoryUsage 40492Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-82bgk rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    46m;
        ContainerMemoryUsage 190436Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-swrw9 sg-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m54s;
        ContainerCpuUsage    175m;
        ContainerMemoryUsage 3138740Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-swrw9 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m54s;
        ContainerCpuUsage    9m;
        ContainerMemoryUsage 24452Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-swrw9 pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m53s;
        ContainerCpuUsage    19m;
        ContainerMemoryUsage 40056Ki;
    }
    containerStatus vgsbc-sg-79f84c57b6-swrw9 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d22h27m53s;
        ContainerCpuUsage    57m;
        ContainerMemoryUsage 218568Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-sbbqf hpa {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m18s;
        ContainerCpuUsage    59m;
        ContainerMemoryUsage 64968Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-sbbqf oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m17s;
        ContainerCpuUsage    16m;
        ContainerMemoryUsage 42548Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-sbbqf rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m18s;
        ContainerCpuUsage    49m;
        ContainerMemoryUsage 190112Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-vpj27 hpa {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h54m51s;
        ContainerCpuUsage    32m;
        ContainerMemoryUsage 66748Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-vpj27 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h54m40s;
        ContainerCpuUsage    7m;
        ContainerMemoryUsage 51028Ki;
    }
    containerStatus vgsbc-hpa-5f9b57f659-vpj27 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h54m51s;
        ContainerCpuUsage    46m;
        ContainerMemoryUsage 186828Ki;
    }
    containerStatus vgsbc-oam-6c6f97c7c6-55nv6 oam-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m30s;
        ContainerCpuUsage    153m;
        ContainerMemoryUsage 3311892Ki;
    }
    containerStatus vgsbc-oam-6c6f97c7c6-55nv6 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m29s;
        ContainerCpuUsage    17m;
        ContainerMemoryUsage 176904Ki;
    }
    containerStatus vgsbc-oam-6c6f97c7c6-cfj75 oam-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m24s;
        ContainerCpuUsage    468m;
        ContainerMemoryUsage 3213264Ki;
    }
    containerStatus vgsbc-oam-6c6f97c7c6-cfj75 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h48m24s;
        ContainerCpuUsage    44m;
        ContainerMemoryUsage 178932Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-b59x5 rac-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    22m;
        ContainerMemoryUsage 63108Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-b59x5 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m28s;
        ContainerCpuUsage    11m;
        ContainerMemoryUsage 41312Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-b59x5 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    30m;
        ContainerMemoryUsage 173364Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-mzk5d rac-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m11s;
        ContainerCpuUsage    25m;
        ContainerMemoryUsage 66720Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-mzk5d oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m11s;
        ContainerCpuUsage    11m;
        ContainerMemoryUsage 45264Ki;
    }
    containerStatus vgsbc-rac-86f67895dd-mzk5d rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m11s;
        ContainerCpuUsage    31m;
        ContainerMemoryUsage 180444Ki;
    }
    containerStatus vgsbc-slb-55dcbfc849-cktc8 slb-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d16h6m56s;
        ContainerCpuUsage    390m;
        ContainerMemoryUsage 4295536Ki;
    }
    containerStatus vgsbc-slb-55dcbfc849-cktc8 oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d16h6m56s;
        ContainerCpuUsage    6m;
        ContainerMemoryUsage 33132Ki;
    }
    containerStatus vgsbc-slb-55dcbfc849-cktc8 pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d16h6m55s;
        ContainerCpuUsage    24m;
        ContainerMemoryUsage 38152Ki;
    }
    containerStatus vgsbc-slb-55dcbfc849-cktc8 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d16h6m55s;
        ContainerCpuUsage    36m;
        ContainerMemoryUsage 233556Ki;
    }
    containerStatus vgsbc-slb-76b74f84d5-lxdcx slb-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    445m;
        ContainerMemoryUsage 4255692Ki;
    }
    containerStatus vgsbc-slb-76b74f84d5-lxdcx oamproxy-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    6m;
        ContainerMemoryUsage 27356Ki;
    }
    containerStatus vgsbc-slb-76b74f84d5-lxdcx pvclogger-container {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    14m;
        ContainerMemoryUsage 37048Ki;
    }
    containerStatus vgsbc-slb-76b74f84d5-lxdcx rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    24m;
        ContainerMemoryUsage 217196Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-ch9lq rbbn-cache-proxy {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m1s;
        ContainerCpuUsage    91m;
        ContainerMemoryUsage 105808Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-ch9lq rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m1s;
        ContainerCpuUsage    37m;
        ContainerMemoryUsage 181472Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-fxn79 rbbn-cache-proxy {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m32s;
        ContainerCpuUsage    86m;
        ContainerMemoryUsage 154872Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-fxn79 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h55m31s;
        ContainerCpuUsage    23m;
        ContainerMemoryUsage 168940Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-wgv57 rbbn-cache-proxy {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h54m31s;
        ContainerCpuUsage    41m;
        ContainerMemoryUsage 109992Ki;
    }
    containerStatus vgsbc-cache-proxy-6985b46dbc-wgv57 rbbn-telemetry-agent {
        ContainerState       Running;
        ContainerRestarts    0;
        ContainerAge         0d13h54m31s;
        ContainerCpuUsage    22m;
        ContainerMemoryUsage 184500Ki;
    }
    podStatus vgsbc-cache-0 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m12s;
        PodContainerState 2/2;
        PodNode           worker-16.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.9.230;
        PodIPv6           2001:db8:0:204::59e;
        PodCpuUsage       94m;
        PodMemoryUsage    211096Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.9.230;
    }
    podStatus vgsbc-cache-1 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m6s;
        PodContainerState 2/2;
        PodNode           worker-18.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.37.11;
        PodIPv6           2001:db8:0:212::4c5;
        PodCpuUsage       59m;
        PodMemoryUsage    216412Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.37.11;
    }
    podStatus vgsbc-cache-2 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m11s;
        PodContainerState 2/2;
        PodNode           worker-17.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.6.130;
        PodIPv6           2001:db8:0:203::63f;
        PodCpuUsage       66m;
        PodMemoryUsage    215860Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.6.130;
    }
    podStatus vgsbc-cache-3 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m6s;
        PodContainerState 2/2;
        PodNode           worker-13.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.55.4;
        PodIPv6           2001:db8:0:21b::4e6;
        PodCpuUsage       85m;
        PodMemoryUsage    205576Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.55.4;
    }
    podStatus vgsbc-cache-4 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m6s;
        PodContainerState 2/2;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.133;
        PodIPv6           2001:db8:0:210::45d;
        PodCpuUsage       99m;
        PodMemoryUsage    215604Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.133;
    }
    podStatus vgsbc-cache-5 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m6s;
        PodContainerState 2/2;
        PodNode           worker-11.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.52.97;
        PodIPv6           2001:db8:0:21a::434;
        PodCpuUsage       89m;
        PodMemoryUsage    202768Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.52.97;
    }
    podStatus vgsbc-ns-f7fc586b7-ds8h5 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 3/3;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.162;
        PodIPv6           2001:db8:0:210::47a;
        PodCpuUsage       77m;
        PodMemoryUsage    363496Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.162;
    }
    podStatus vgsbc-ns-f7fc586b7-vp6jt {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m33s;
        PodContainerState 3/3;
        PodNode           worker-17.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.6.191;
        PodIPv6           2001:db8:0:203::67c;
        PodCpuUsage       79m;
        PodMemoryUsage    407392Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.6.191;
    }
    podStatus vgsbc-rs-6b5d4b8d9-7plxw {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 4/4;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.165;
        PodIPv6           2001:db8:0:210::47d;
        PodCpuUsage       424m;
        PodMemoryUsage    3770204Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.165;
    }
    podStatus vgsbc-rs-6b5d4b8d9-q4tq2 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h51m57s;
        PodContainerState 4/4;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.170;
        PodIPv6           2001:db8:0:206::465;
        PodCpuUsage       293m;
        PodMemoryUsage    3861812Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.170;
    }
    podStatus vgsbc-cs-67c84d95b6-kkm2d {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 4/4;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.167;
        PodIPv6           2001:db8:0:206::462;
        PodCpuUsage       292m;
        PodMemoryUsage    3106752Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.167;
    }
    podStatus vgsbc-cs-67c84d95b6-zfp22 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h52m8s;
        PodContainerState 4/4;
        PodNode           worker-15.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.34.164;
        PodIPv6           2001:db8:0:211::487;
        PodCpuUsage       272m;
        PodMemoryUsage    2997416Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.34.164;
    }
    podStatus vgsbc-sc-56b956dfc6-68lvv {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 4/4;
        PodNode           worker-15.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.34.163;
        PodIPv6           2001:db8:0:211::486;
        PodCpuUsage       422m;
        PodMemoryUsage    5453040Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.34.163;
    }
    podStatus vgsbc-sc-56b956dfc6-n448m {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h49m1s;
        PodContainerState 4/4;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.171;
        PodIPv6           2001:db8:0:206::466;
        PodCpuUsage       424m;
        PodMemoryUsage    5355280Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.171;
    }
    podStatus vgsbc-sc-56b956dfc6-q9zjr {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h51m47s;
        PodContainerState 4/4;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.167;
        PodIPv6           2001:db8:0:210::47f;
        PodCpuUsage       405m;
        PodMemoryUsage    5377244Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.167;
    }
    podStatus vgsbc-sg-79f84c57b6-82bgk {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 4/4;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.166;
        PodIPv6           2001:db8:0:210::47e;
        PodCpuUsage       420m;
        PodMemoryUsage    3427048Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.166;
    }
    podStatus vgsbc-sg-79f84c57b6-swrw9 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d22h28m10s;
        PodContainerState 4/4;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.65;
        PodIPv6           2001:db8:0:206::40f;
        PodCpuUsage       260m;
        PodMemoryUsage    3421816Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.65;
    }
    podStatus vgsbc-hpa-5f9b57f659-sbbqf {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 3/3;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.166;
        PodIPv6           2001:db8:0:206::461;
        PodCpuUsage       124m;
        PodMemoryUsage    297628Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.166;
    }
    podStatus vgsbc-hpa-5f9b57f659-vpj27 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m18s;
        PodContainerState 3/3;
        PodNode           worker-11.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.52.159;
        PodIPv6           2001:db8:0:21a::46d;
        PodCpuUsage       78m;
        PodMemoryUsage    305380Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.52.159;
    }
    podStatus vgsbc-oam-6c6f97c7c6-55nv6 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 2/2;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.168;
        PodIPv6           2001:db8:0:206::463;
        PodCpuUsage       170m;
        PodMemoryUsage    3488796Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.168;
    }
    podStatus vgsbc-oam-6c6f97c7c6-cfj75 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h50m37s;
        PodContainerState 2/2;
        PodNode           worker-13.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.55.72;
        PodIPv6           2001:db8:0:21b::52a;
        PodCpuUsage       512m;
        PodMemoryUsage    3392196Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.55.72;
    }
    podStatus vgsbc-rac-86f67895dd-b59x5 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 3/3;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.163;
        PodIPv6           2001:db8:0:210::47b;
        PodCpuUsage       66m;
        PodMemoryUsage    279784Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.163;
    }
    podStatus vgsbc-rac-86f67895dd-mzk5d {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m28s;
        PodContainerState 3/3;
        PodNode           worker-17.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.6.192;
        PodIPv6           2001:db8:0:203::67d;
        PodCpuUsage       88m;
        PodMemoryUsage    290256Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.6.192;
    }
    podStatus vgsbc-slb-55dcbfc849-cktc8 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d16h7m12s;
        PodContainerState 4/4;
        PodNode           worker-18.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.37.45;
        PodIPv6           2001:db8:0:212::4e7;
        PodCpuUsage       359m;
        PodMemoryUsage    4599812Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.37.45;
    }
    podStatus vgsbc-slb-76b74f84d5-lxdcx {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 3/4;
        PodNode           worker-12.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.12.169;
        PodIPv6           2001:db8:0:206::464;
        PodCpuUsage       489m;
        PodMemoryUsage    4537292Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.12.169;
    }
    podStatus vgsbc-cache-proxy-6985b46dbc-ch9lq {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m18s;
        PodContainerState 2/2;
        PodNode           worker-11.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.52.158;
        PodIPv6           2001:db8:0:21a::46c;
        PodCpuUsage       137m;
        PodMemoryUsage    290700Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.52.158;
    }
    podStatus vgsbc-cache-proxy-6985b46dbc-fxn79 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h55m48s;
        PodContainerState 2/2;
        PodNode           worker-14.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.32.164;
        PodIPv6           2001:db8:0:210::47c;
        PodCpuUsage       84m;
        PodMemoryUsage    325240Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.32.164;
    }
    podStatus vgsbc-cache-proxy-6985b46dbc-wgv57 {
        PodState          Running;
        PodRestarts       "";
        PodAge            0d13h54m47s;
        PodContainerState 2/2;
        PodNode           worker-18.blr-ocp3.lab.rbbn.com;
        PodIPv4           10.231.37.55;
        PodIPv6           2001:db8:0:212::4f1;
        PodCpuUsage       152m;
        PodMemoryUsage    293444Ki;
        PodDiskIoUsage    "";
        PodNetworkIoUsage "";
        PodSriovIntfUsage "";
        InterPodIP        10.231.37.55;
    }
}
[ok][2024-03-06 05:01:31]

Installing the SBC CNe in a RedHat OCP Cluster Using GitOps

Prerequisites

  1. The SBC CNe Helm chart must be available in the Helm artifactory or a Git repository.
  2. An application deployment tool (a CD tool such as FluxCD) must be installed on the Kubernetes (K8s) cluster.
  3. A Git repository must exist for the Helm manifests.
  4. Users must have Git permissions to commit SBC CNe-related changes to the Helm manifests.
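As a quick check of prerequisite 2, the FluxCD CLI can confirm that the cluster meets Flux requirements and that the Flux controllers are healthy. This is a sketch, assuming the `flux` CLI is installed locally and your kubeconfig points at the target cluster:

```shell
# Verify cluster prerequisites before installing the Flux controllers
flux check --pre

# After Flux is installed, confirm the controllers (including the
# helm-controller that reconciles HelmRelease objects) are healthy
flux check
```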

Creation and Access to Git Repository & Helm Artifactory

Operators create the Git repository, Helm artifactory, and container image artifactory in their lab/production environment, and provide access to each of them.


Install

  1. Create the SBC CNe deployment YAML file based on your requirements.
Git - SBC CNe Helm Release file
# Values for RBBN-CORE-CNF chart
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: lpl-cnf-1
  namespace: cnf-demolab
spec:
  releaseName: lpl-cnf-1
  targetNamespace: cnf-demolab
  chart:
    spec:
      chart: rbbn-core-cnf
      version: 12.0.0-14505
      sourceRef:
        kind: HelmRepository
        name: sbx-helm-prod-plano
        namespace: cnf-demolab
  install:
    disableWait: true
  upgrade:
    disableWait: true
  interval: 2m
  values:
    # Global chart values
    global:
      serviceAccount:
        name: default

      # namespace where the core-cnf solution has to be deployed.
      namespace: cnf-demolab

      # Platform on which the cluster is deployed.
      kubernetesPlatform: ocp

      # Storage Class for the PVC creation.
      # -----------------------------------
      # Available options - netapp-nfs-san(default), managed-nfs-storage
      storageClass: managed-nfs-storage
:
Under the values section, provide all Helm chart parameters required to orchestrate the SBC CNF.
:

2. Create the kustomization.yaml file and include the SBC CNe deployment file name in it.

Git - SBC CNe Kustomization file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
nameSuffix: -llokesh
resources:
   - values_sbc.yaml   # This is the name of the HelmRelease file.
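The `oc get kustomization` output in the monitoring step implies a Flux Kustomization object that reconciles this directory from the Git source. A minimal sketch is shown below; the path, names, and interval are illustrative assumptions:

```yaml
# Sketch only: the path, names, and interval are illustrative.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: rbbn-gitops-cnf-demolab
  namespace: cnf-demolab
spec:
  interval: 2m
  sourceRef:
    kind: GitRepository
    name: rbbn-sbccore-cnf-manifests
  path: ./cnf-demolab        # directory containing kustomization.yaml
  prune: true
```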

3. Commit and push the changes using Git commands (refer to the figure regarding the Git repository flow).

Git-related commands:

git clone
git add
git commit
git push
git fetch
git diff
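The commit/push flow can be rehearsed end-to-end without network access. The sketch below uses a throwaway local bare repository as a stand-in for the Bitbucket remote; the repository paths, committer identity, and file contents are placeholders:

```shell
set -e
tmp=$(mktemp -d)

# Local bare repository standing in for the Bitbucket remote.
git init --bare "$tmp/remote.git" >/dev/null

# Clone, add the HelmRelease values file, commit, and push.
git clone "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email "ops@example.com"   # placeholder identity
git config user.name "ops"
echo "# HelmRelease values go here" > values_sbc.yaml
git add -A
git commit -m "installing sbc" >/dev/null
git push origin HEAD >/dev/null 2>&1

git log --oneline    # shows the "installing sbc" commit
```

A real session against the operator's Git server follows the same sequence, as the capture below shows.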
[lokesh@cli-blr-2 jenkinsbuild-dev]$ git add -A; git commit -a -m "installing sbc"; git push origin master
[master ae8c499] installing sbc
 2 files changed, 100 insertions(+)
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 16 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 654 bytes | 654.00 KiB/s, done.
Total 6 (delta 3), reused 0 (delta 0), pack-reused 0
To https://bitbucket.rbbn.com/scm/sbc/rbbn-gitops-cnf-demolab.git
   dbfb561..ae8c499  master -> master


(Figure: Git repository steps and commands)


4. Monitor the deployment in the K8S cluster by running the following commands:

    oc get kustomization
    oc get helmchart / oc describe helmchart <chart_name>
    oc get helmrelease / oc describe helmrelease <release_name>

     

Git - SBC CNe Monitoring Deployment
[llokesh@cli-server llokesh]$ oc get gitrepository
NAME                         URL                                                                 AGE    READY   STATUS
rbbn-sbccore-cnf-manifests   https://bitbucket.rbbn.com/scm/sbc/rbbn-sbccore-cnf-manifests.git   314d   True    stored artifact for revision 'master/7b895d31bea49a9de707af8ad709c43e8ec17fc8'
 
 
[llokesh@cli-blr-2 ~]]$ oc get kustomization
NAME                                       AGE    READY   STATUS
rbbn-gitops-cnf-demolab-llokesh            314d   True    Applied revision: master/de4a669ab80ff93bc8fe02114607fb1e33f99b04
 
 
[llokesh@cli-blr-2 ~]$ oc get helmchart
NAME                                        CHART               VERSION   SOURCE KIND     SOURCE NAME                  AGE     READY   STATUS
cnf-demolab-lpl-blr1-sbc-llokesh            v12.1.1-148/      *         GitRepository   rbbn-sbccore-cnf-manifests   6m19s   True    packaged 'rbbn-core-cnf' chart with version '12.1.1-148'
 
 
[llokesh@cli-blr-2 ~]$ oc get helmrelease
NAME                            AGE     READY   STATUS
lpl-blr1-sbc-llokesh            6m37s   True    Release reconciliation succeeded
 
 
[llokesh@cli-blr-2 ~]$ helm ls
NAME                    NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                       APP VERSION
lpl-blr1-sbc            cnf-demolab 1           2023-05-30 10:16:20.895043071 +0000 UTC deployed    rbbn-core-cnf-12.1.1-148    12.1.1-148
[llokesh@cli-blr-2 ~]$
 
 
[llokesh@cli-blr-2 ~]$ helm history lpl-blr1-sbc
REVISION    UPDATED                     STATUS      CHART                       APP VERSION     DESCRIPTION    
1           Tue May 30 10:16:20 2023    deployed    rbbn-core-cnf-12.0.0-14505  12.0.0-14505    Install complete

5. Check if all the pods of the SBC CNe are deployed successfully, with all containers in the "Running" state. This completes the deployment of the SBC CNe solution using GitOps.

Git - SBC CNe Deployment Status
[llokesh@cli-blr-2 ~]$ oc get pods 
NAME                                                  READY   STATUS    RESTARTS   AGE
lpl-blr1-sbc-cache-0                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-1                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-2                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-3                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-4                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-5                                  2/2     Running   0          19h
lpl-blr1-sbc-cache-proxy-5574d775b8-2xk7n             2/2     Running   0          19h
lpl-blr1-sbc-cache-proxy-5574d775b8-r54m4             2/2     Running   0          19h
lpl-blr1-sbc-cache-proxy-5574d775b8-rgjp4             2/2     Running   0          19h
lpl-blr1-sbc-cs-7879fbf4c6-4czvl                      4/4     Running   0          19h
lpl-blr1-sbc-cs-7879fbf4c6-h4mq6                      4/4     Running   0          19h
lpl-blr1-sbc-hpa-7ddbbb8456-9nbrk                     3/3     Running   0          19h
lpl-blr1-sbc-hpa-7ddbbb8456-v26lj                     3/3     Running   0          19h
lpl-blr1-sbc-ns-7b4556699f-4cpsh                      3/3     Running   0          19h
lpl-blr1-sbc-ns-7b4556699f-mrfjw                      3/3     Running   0          19h
lpl-blr1-sbc-oam-579ff46db7-8q5hn                     2/2     Running   0          19h
lpl-blr1-sbc-oam-579ff46db7-qf64w                     2/2     Running   0          19h
lpl-blr1-sbc-rac-64b5b94c6d-2wjq9                     3/3     Running   0          19h
lpl-blr1-sbc-rac-64b5b94c6d-njcg7                     3/3     Running   0          19h
lpl-blr1-sbc-rs-fc8f78775-nn77b                       4/4     Running   0          19h
lpl-blr1-sbc-rs-fc8f78775-t5v7q                       4/4     Running   0          19h
lpl-blr1-sbc-sc-78dc86c789-7vwf9                      4/4     Running   0          19h
lpl-blr1-sbc-sc-78dc86c789-tz8gh                      4/4     Running   0          19h
lpl-blr1-sbc-sg-df745d99b-f76ml                       4/4     Running   0          19h
lpl-blr1-sbc-sg-df745d99b-fp5cp                       4/4     Running   0          19h
lpl-blr1-sbc-slb-5b986cb46c-8qkxt                     4/4     Running   0          19h
lpl-blr1-sbc-slb-5b986cb46c-kmcqv                     4/4     Running   0          19h
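Rather than eyeballing the listing, the READY and STATUS columns can be checked mechanically. The helper below is an illustrative sketch: it reads `oc get pods`-style output on stdin and prints any pod whose container count is incomplete or whose status is not Running. The sample input and pod names are fabricated for the demonstration:

```shell
# Print pods that are not fully ready, given `oc get pods` output on stdin.
check_ready() {
  awk 'NR > 1 {
    split($2, r, "/")                       # READY column, e.g. "3/4"
    if (r[1] != r[2] || $3 != "Running") print $1
  }'
}

# Fabricated sample: one pod has a container that is not ready.
check_ready <<'EOF'
NAME                       READY   STATUS    RESTARTS   AGE
lpl-blr1-sbc-cache-0       2/2     Running   0          19h
lpl-blr1-sbc-oam-x1        2/2     Running   0          19h
lpl-blr1-sbc-sc-y2         3/4     Running   0          19h
EOF
# prints: lpl-blr1-sbc-sc-y2
```

In a live cluster this would be fed from `oc get pods -n <namespace> | check_ready`; an empty result means every pod is fully up.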

Post-Installation

To check that the individual containers running in each pod are instantiated successfully, you can enter an individual container using "oc exec -it <pod_name> -c <container_name> /bin/bash" (for example, oc exec -it sbccne-sc-668f5bdf-mh8fz -c isbc-container /bin/bash). Once inside the container, you can check the container health.

  1. Connect to the CLI using "ssh -p 2024 linuxadmin@<OAM Mgmt IP Address>".
  2. Check the health of the overall SBC CNe deployment. 

    > show status cnfGlobal cnfHealth
    SBC CNe Health Examples:
    If all the pods and containers are up and running, the cnfHealth CLI command will display the overall CNF health status as "Healthy":
    
    admin@vsbc1> show table cnfGlobal cnfHealth
    POD   CONTAINER  CONTAINER
    NAME  NAME       STATUS
    ----------------------------
    ALL   ALL        Healthy
    [ok][2023-06-05 02:39:18]
    
    If any container running in a pod is not healthy, the cnfHealth CLI command displays the individual unhealthy containers with the status "Unhealthy":
    
    admin@vsbc1> show table cnfGlobal cnfHealth
                                   CONTAINER     CONTAINER
    POD NAME                       NAME          STATUS
    --------------------------------------------------------
    sksbx-v11-sc-7c94c858f9-5j6wx  sc-container  Unhealthy
    sksbx-v11-sc-7c94c858f9-7s2p8  sc-container  Unhealthy
    sksbx-v11-sc-7c94c858f9-95lpd  sc-container  Unhealthy
    sksbx-v11-sc-7c94c858f9-cc9pk  sc-container  Unhealthy
    [ok][2023-06-05 08:10:47]
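This check can also be scripted: any non-zero count of "Unhealthy" rows means at least one container needs attention. The snippet below counts them from a captured sample of the table; how the table is captured (for example, over the same SSH CLI connection) is left as an assumption here:

```shell
# Count "Unhealthy" rows in a captured `show table cnfGlobal cnfHealth` table.
count=$(grep -c 'Unhealthy' <<'EOF'
POD NAME                       NAME          STATUS
sksbx-v11-sc-7c94c858f9-5j6wx  sc-container  Unhealthy
sksbx-v11-sc-7c94c858f9-7s2p8  sc-container  Unhealthy
EOF
)
echo "unhealthy containers: $count"    # prints: unhealthy containers: 2
```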