Note

The 12.x Insight EMS supports two configuration models for SBC clusters, beginning with SBC release 8.0. SBC N:1 clusters comprising multiple active SBC instances must be defined with a configuration type of "OAM." For these clusters, a 1:1 OAM node pair provides the northbound Operations, Administration, and Maintenance (OA&M) functions for the cluster. SBC clusters containing a single SBC or a 1:1 SBC HA pair can be defined with a configuration type of "Direct Single." For these clusters, the active SBC is used for OA&M operations.

The 12.x Insight EMS continues to support the existing HeadEnd model for SBC 7.2 clusters. However, SBC cluster deployments must migrate to one of the new configuration models when upgrading to SBC 8.0 or later. Refer to Migrating an Existing SBC Cluster to OAM Configuration Mode or to Migrating an Existing SBC Cluster to Direct Single Configuration Mode before upgrading an SBC cluster.

Warning

In a distributed SBC deployment, upgrade the clusters in the following order: OAM nodes, then T-SBC clusters, then M-SBC clusters, then S-SBC clusters.


This section describes the upgrade process for SBC instances in an N:1 redundancy group orchestrated in an OpenStack environment using Heat templates. N represents the number of active instances and can be up to 4, so a full 4:1 deployment has 5 total SBC instances. If the deployment you are upgrading includes OAM nodes, upgrade the OAM nodes before the SBC nodes.

The following procedure describes the upgrade of a full 4:1 SBC deployment that includes OAM nodes. Your implementation could include fewer instances and therefore require fewer steps.

Prerequisites

Perform the following activities prior to upgrading the SBC instances within the deployment.

Upgrade the Insight EMS

Upgrade the EMS in your deployment before upgrading the OAM nodes or the SBC nodes. Refer to EMS SWe on OpenStack.

Download the Software Image

Download the required .QCOW2 and .sha256 files from the Customer Portal.
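
Before uploading, you can verify the integrity of the downloaded image against the .sha256 checksum file. A minimal sketch, assuming both files are in the current directory, that the file names shown are placeholders, and that the .sha256 file uses the standard "<hash>  <filename>" format:

    # Verify the downloaded image against its published checksum
    sha256sum -c sbc-V08.01.00.qcow2.sha256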

Upload the Software Image to OpenStack

To upload the .QCOW2 file to OpenStack, navigate to Project > Compute > Images. For more information, refer to Creating a Glance Image.
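
Alternatively, the image can be uploaded from the OpenStack CLI. A minimal sketch using the standard OpenStack client; the file name and image name are placeholders:

    # Create a Glance image from the downloaded QCOW2 file
    openstack image create \
        --disk-format qcow2 \
        --container-format bare \
        --file sbc-V08.01.00.qcow2 \
        sbc-V08.01.00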

Update the Heat Templates with Mandatory Login Information

Beginning with release 7.1, you must include SSH keys or passwords for the admin and linuxadmin accounts in the userdata you provide during orchestration of an SBC instance. Therefore during upgrade from a pre-7.1 release, an updated Heat template that contains the mandatory keys or passwords must be applied.

Prior to upgrade, you must update the template used to deploy the instance to include the mandatory SSH key or password userdata. The example templates Ribbon provides include information on how to include this data in a template. Because they are more secure, SSH key fields are mandatory in the example Heat templates; passwords are optional fields. The password input is not plain text; it is a hash of the password. Refer to Metadata and Userdata Format on OpenStack for more information on generating and including the required login userdata.
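
As an illustration of generating a password hash, most Linux systems can produce a SHA-512 crypt hash, a common format for this kind of userdata; confirm the expected format in the reference above. The commands below are general Linux tools, not taken from the SBC documentation:

    # Generate a SHA-512 password hash (prompts for the password)
    openssl passwd -6
    # Alternative, from the whois package:
    mkpasswd -m sha-512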

Check the Status of the Instances

Ensure all the SBC instances (five, in a 4:1 redundancy group), the OAM nodes, and the EMS are up and running.

To check the instance status in the Instances window, navigate to Project > Compute > Instances in the Horizon GUI.
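
The same check can be performed from the OpenStack CLI; a minimal sketch:

    # List the instances and confirm each one shows Status ACTIVE
    openstack server list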

Upgrade the OAM and SBC Nodes

  1. For deployments that include existing OAM nodes, upgrade the OAM nodes before upgrading the SBC nodes in the deployment. Upgrade the nodes in the order given in these procedures (the EMS, followed by the OAM nodes, followed by the SBC nodes) to ensure that the existing SBC configuration data is preserved and upgraded to the current format.

    Note

The OAM deployment upgrade procedure impacts service because it involves stack deletion and recreation. Therefore, before starting the upgrade, divert traffic away from the OAM cluster you are upgrading.

  2. In OpenStack, use either the Horizon dashboard or the CLI to shut down the instances (Shut off Instance option in Horizon). Then use either the Horizon dashboard or the CLI (stack delete command) to remove the OAM and SBC stacks (see the CLI sketch following this procedure).
  3. Using the templates that initially spawned the instances in the base build, bring up the OAM first (verify that the OAM has its configuration present; it downloads the configuration from the existing cluster on the EMS), and then bring up the SBCs in the same order. Ensure that the image is the build to which you are upgrading the deployment.
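
For reference, a minimal CLI sketch of the delete-and-recreate sequence in steps 2 and 3. The stack names (oam-stack, sbc-stack) and the template and environment file names are placeholders; use the stacks and templates from your original deployment, with the image parameter pointing at the upgraded build:

    # Delete the existing stacks (service-impacting)
    openstack stack delete --yes --wait oam-stack
    openstack stack delete --yes --wait sbc-stack

    # Recreate the OAM stack first, then the SBC stack, in the original order
    openstack stack create --template oam.yaml --environment oam-env.yaml --wait oam-stack
    openstack stack create --template sbc.yaml --environment sbc-env.yaml --wait sbc-stack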

Identify the Standby OAM Node

To identify the standby OAM node:

  1. Log onto the CLI of one of the OAM nodes.
  2. Execute the following command: show status system serverStatus
  3. Check the value of the parameter mgmtRedundancyRole. The output identifies the node as either active or standby.
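
For illustration, a trimmed example of what the relevant portion of the output could look like (the node name and surrounding fields are placeholders; only the mgmtRedundancyRole value matters here):

    > show status system serverStatus
    serverStatus oam-node-2 {
        ...
        mgmtRedundancyRole standby;
        ...
    }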

Upgrade each OAM Node

Perform the following steps to upgrade the OAM nodes, beginning with the instance you identified as the standby node.

  1. In OpenStack, use either the Horizon dashboard or the CLI to shut down the instance (Shut off Instance option in Horizon).

  2. Ensure you have updated any metadata or other parameters in Heat templates as required for the target release. Refer to Metadata and Userdata Format on OpenStack and Developing a Heat Template.

  3. In OpenStack, use either the Horizon dashboard or the CLI (heat stack-update command) to replace the instance (a CLI sketch of this step follows the procedure).

  4. After the upgrade of the standby instance completes, log onto the active node and perform a switchover: 
    request system admin <SYSTEM NAME> switchover
    The standby instance that you just upgraded becomes the active node.
  5. On the newly active (upgraded) OAM, check that the configuration revision created as part of the upgrade process was saved successfully using the following command: 
    show table system admin <SYSTEM NAME> savedConfigurations

    Verify that the output includes a revision whose software version matches the target version to which you are upgrading.
  6. Repeat steps 1 through 3 to upgrade the current standby (original active) OAM node instance. Once the upgrade of the second OAM node completes, continue with upgrading the SBC nodes.
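
For reference, a minimal sketch of the replacement step (step 3) from the CLI. The stack name and the template and environment file names are placeholders, with the image parameter in the environment file pointing at the upgraded build:

    # Legacy heat client, as referenced in step 3
    heat stack-update -f oam.yaml -e oam-env.yaml oam-stack

    # Equivalent with the current openstack client
    openstack stack update --template oam.yaml --environment oam-env.yaml --wait oam-stack
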
Note

For upgrades from release 7.2, if the OAM nodes go to an unregistered state on the EMS while coming up, reboot the OAM nodes to return them to a registered online state. This reboot, if needed, is not service-impacting because the OAM nodes do not process calls.

Upgrade the SBC Nodes

Upgrade the nodes in an SBC redundancy group beginning with the standby node.

Identify the Standby Instance in the SBC Redundancy Group

To identify the assigned standby instance:

  1. Log onto the CLI of any of the SBC instances as the admin user.
  2. Execute the following command:  
    show status system rgStatus
  3. Check the value of the parameter assignedRole. In the following example output, the second node listed is the standby node.
> show status system rgStatus
rgStatus vsbc1-192.168.2.3 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          1;
    serviceId       0;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.3;
    appVersion      V07.02.00;
}
rgStatus vsbc1-192.168.2.4 {
    actualCeName    vsbc1;
    assignedRole    standby;
    currentRole     standby;
    nodeId          2;
    serviceId       1;
    syncStatus      unprotectedRunningStandby;
    usingMetavarsOf vsbc1-192.168.2.4;
    appVersion      V07.02.00;
}
rgStatus vsbc1-192.168.2.5 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          3;
    serviceId       2;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.5;
    appVersion      V07.02.00;
}
rgStatus vsbc1-192.168.2.6 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          4;
    serviceId       3;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.6;
    appVersion      V07.02.00;
}
rgStatus vsbc1-192.168.2.7 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          5;
    serviceId       4;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.7;
    appVersion      V07.02.00;
}

Upgrade each SBC Node

Note
  • The assignedRole standby instance must be upgraded first, followed by the assignedRole active instances.
  • Upgrade only one instance at a time.

Perform the following steps to upgrade, beginning with the instance you identified as the assigned standby instance.

  1. Using EMA or the SBC CLI, determine the current role of the SBC instance to be upgraded. If the current role is active:

    1. Check the syncStatus field in the output of the rgStatus command to ensure that the sync status of the instance is syncCompleted. It may take up to 10 minutes after the previous SBC was upgraded before sync is achieved. 

    2. Use EMA or the SBC CLI to force a switchover of the SBC instance. 

  2. In OpenStack, use either the Horizon dashboard or the CLI to shut down the instance (Shut off Instance option in Horizon).

  3. Ensure you have updated any metadata or other parameters in Heat templates as required for the target release. Refer to Metadata and Userdata Format on OpenStack and Developing a Heat Template.

  4. In OpenStack, use either the Horizon dashboard or the CLI (heat stack-update command) to replace the instance (a condensed CLI sketch of steps 1 through 4 follows this list).

  5. Select the next SBC instance to be upgraded and repeat steps 1 through 4 on that instance.

  6. Repeat step 5 for each subsequent instance until all instances are upgraded.
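
For reference, a condensed sketch of one iteration of steps 1 through 4 from the CLI, using the commands already shown in this section. The system name, instance name, stack name, and file names are placeholders:

    # On the SBC: confirm syncStatus is syncCompleted, then force a
    # switchover if the instance's current role is active
    show status system rgStatus
    request system admin vsbc1 switchover

    # In OpenStack: shut down the instance, then replace it via stack update
    openstack server stop vsbc1-192.168.2.3
    heat stack-update -f sbc.yaml -e sbc-env.yaml sbc-stack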

Post-Upgrade Monitoring

To verify that the instances are up and running with the upgraded software image:

  1. Log on to the CLI of any of the SBC instances as the admin user.
  2. Execute the following command and check the appVersion field for each of the instances. Check the syncStatus field in the rgStatus output to ensure that all the currently active instances are in the syncCompleted state. The standby instance displays syncStatus as unprotectedRunningStandby.

    > show status system rgStatus
    rgStatus vsbc1-192.168.2.3 {
        actualCeName    vsbc1;
        assignedRole    active;
        currentRole     active;
        nodeId          1;
        serviceId       0;
        syncStatus      syncCompleted;
        usingMetavarsOf vsbc1-192.168.2.3;
        appVersion      V08.01.00;
    }
    rgStatus vsbc1-192.168.2.4 {
        actualCeName    vsbc1;
        assignedRole    standby;
        currentRole     standby;
        nodeId          2;
        serviceId       1;
        syncStatus      unprotectedRunningStandby;
        usingMetavarsOf vsbc1-192.168.2.4;
        appVersion      V08.01.00;
    }
    rgStatus vsbc1-192.168.2.5 {
        actualCeName    vsbc1;
        assignedRole    active;
        currentRole     active;
        nodeId          3;
        serviceId       2;
        syncStatus      syncCompleted;
        usingMetavarsOf vsbc1-192.168.2.5;
        appVersion      V08.01.00;
    }
    rgStatus vsbc1-192.168.2.6 {
        actualCeName    vsbc1;
        assignedRole    active;
        currentRole     active;
        nodeId          4;
        serviceId       3;
        syncStatus      syncCompleted;
        usingMetavarsOf vsbc1-192.168.2.6;
        appVersion      V08.01.00;
    }
    rgStatus vsbc1-192.168.2.7 {
        actualCeName    vsbc1;
        assignedRole    active;
        currentRole     active;
        nodeId          5;
        serviceId       4;
        syncStatus      syncCompleted;
        usingMetavarsOf vsbc1-192.168.2.7;
        appVersion      V08.01.00;
    }
Note
  • Upgrading a redundancy group may impact transient calls. Stable calls are not affected.
  • The upgrade process of a redundancy group is completed only after all the instances of the group are upgraded to the same build.
  • If the upgrade fails for any of the instances, you must revert all the instances of the group to the previous build. Reverting instances in a cloud environment is service-impacting.
Note

In earlier releases, the processor indices (which are used for call capacity estimates) on N:1 SWe and Cloud SBC instances were hard-coded to the following values:

  • Transcode Index: 1.2
  • Crypto Index: 1.0
  • Signaling Index: 1.0
  • Passthrough Index: 1.0

This was done so that all the instances constituting a redundancy group use the same indices, resulting in the same call capacity estimate.

In the current release, these processor indices are calculated and exchanged across the instances in a redundancy group, so that all the instances in the group lock themselves to the same set of indices. This also produces a more realistic call capacity estimate for the instances in the N:1 SWe and Cloud SBC scenario.

If you wish to continue using the hard-coded processor indices after upgrading to the current release, you must provide an additional metadata parameter to the standby instance prior to the upgrade. The key-value pair to add to the metadata is:

  • Key: useHardcodedIndices
  • Value: true

(No changes are required in the metadata/userdata during the upgrade if you wish to use the calculated processor indices.)
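
As an illustration, in a Heat template this key could be supplied through the server resource's metadata property. A minimal sketch; the resource name is a placeholder, and your actual template may pass metadata differently:

    standby_sbc:
      type: OS::Nova::Server
      properties:
        metadata:
          useHardcodedIndices: "true"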