RAMP supports two configuration models for SBC clusters. Beginning with SBC 8.0, you must define N:1 clusters comprising multiple active SBC instances with a configuration type of "OAM." For these clusters, a 1:1 OAM node pair provides the northbound Operations, Administration, and Maintenance (OA&M) functions for the cluster.
You can define SBC clusters containing a single SBC or a 1:1 SBC HA pair with a configuration type of "Direct Single." The active SBC for these clusters is used for OA&M operations.
In a distributed SBC deployment, upgrade the clusters in the following order: OAM nodes, then T-SBC clusters, then M-SBC clusters, then S-SBC clusters.
This section describes the upgrade process for SBC instances in an N:1 redundancy group orchestrated in an OpenStack environment using Heat templates. N represents the number of active instances and can be up to 4, so a full 4:1 deployment has 5 total SBC instances. If the deployment you are upgrading includes OAM nodes, the OAM nodes should be upgraded before you upgrade the SBC nodes.
The following procedure describes how to upgrade a full 4:1 SBC deployment that includes OAM nodes. Your implementation could include fewer instances and therefore require fewer steps.
Prerequisites
Perform the following activities prior to upgrading the SBC instances within the deployment.
Upgrade RAMP
Upgrade RAMP in your deployment before upgrading the OAM nodes or the SBC nodes. Refer to Installing RAMP on OpenStack.
Download the Software Image
Download the required .QCOW2 and .sha256 files from the Customer Portal.
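Before uploading, you can verify the integrity of the downloaded image against the .sha256 file. The following is a minimal sketch; the file names are illustrative and assume the .sha256 file uses the standard "<hash>  <filename>" checksum format:

# Verify the downloaded image against its checksum file
# (file names are illustrative)
sha256sum -c sbc-V08.01.00.qcow2.sha256
# Expected output: sbc-V08.01.00.qcow2: OK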
Upload the Software Image to OpenStack
To upload the .QCOW2 file to OpenStack, navigate to Project > Compute > Images. For more information, refer to Creating a Glance Image.
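If you prefer the command line to Horizon, an equivalent upload can be performed with the OpenStack client; the image and file names here are illustrative:

# Upload the QCOW2 image to Glance (names are illustrative)
openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --file sbc-V08.01.00.qcow2 \
    sbc-V08.01.00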
Update the Heat Templates with Mandatory Login Information
Beginning with release 7.1, you must include SSH keys or passwords for the admin and linuxadmin accounts in the userdata you provide during orchestration of an SBC instance. Therefore, during an upgrade from a pre-7.1 release, you must apply an updated Heat template that contains the mandatory keys or passwords.
Prior to upgrade, update the template used to deploy the instance to include the mandatory SSH key or password userdata. The example templates Ribbon provides include information on how to include this data in a template. Because they are more secure, SSH key fields are mandatory in the example Heat templates. Passwords are optional fields. The password input is not plain text; it is a hash of the password. Refer to Metadata and Userdata Format on OpenStack for more information on generating and including the login userdata.
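As an illustration of generating the password hash, assuming SHA-512 crypt is an accepted hash format (confirm the required format in Metadata and Userdata Format on OpenStack), you can produce one on most Linux hosts:

# Generate a SHA-512 crypt hash of a password for the template userdata.
# Assumes this hash format is accepted; prompts for the password interactively.
openssl passwd -6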
Check the Status of the Instances
Ensure all SBC instances (five, in a 4:1 redundancy group), the OAM nodes, and RAMP are up and running.
To check the instance status in the Instances window, navigate to Project > Compute > Instances in the Horizon GUI.
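Alternatively, you can confirm the same from the OpenStack CLI:

# List instances and confirm each shows Status ACTIVE
openstack server list
# All SBC instances, OAM nodes, and RAMP should show Status ACTIVE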
Upgrade the OAM and SBC Nodes
For deployments that include existing OAM nodes, upgrade the OAM nodes before upgrading the SBC nodes in the deployment. Upgrade the components in the order given in these procedures (RAMP, then the OAM nodes, then the SBC nodes) to ensure that the existing SBC configuration data is preserved and upgraded to the current format.
Note: The upgrade procedure for an OAM deployment is service-impacting because it involves stack deletion and re-creation. Therefore, before starting the upgrade, divert traffic away from the OAM cluster you are upgrading.
- In OpenStack, use either the Horizon dashboard (Shut Off Instance option) or the CLI to shut down the instances.
- In OpenStack, use either the Horizon dashboard or the CLI (stack delete command) to remove the OAM and SBC stacks (see the command sketch after this list).
- Using the templates originally used to spawn the instances in the base build, bring up the OAM first (ensure the OAM has configuration present, because it downloads the configuration from the existing cluster on RAMP), and then bring up the SBCs in the same order. Ensure that the image is the build to which you are upgrading the deployment.
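The following is a sketch of the equivalent CLI commands for these steps; the stack, instance, template, and environment file names are illustrative placeholders:

# Shut down an instance (repeat per instance)
openstack server stop vsbc1-instance1

# Remove the SBC and OAM stacks (service impacting)
openstack stack delete sbc-stack
openstack stack delete oam-stack

# Re-create the OAM stack first, then the SBC stack, using the
# original templates updated to reference the upgraded image
openstack stack create -t oam_template.yaml -e oam_env.yaml oam-stack
openstack stack create -t sbc_template.yaml -e sbc_env.yaml sbc-stack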
Post-Upgrade Monitoring
To verify that the instances are up and running with the upgraded software image:

- Log on to the CLI of any of the SBC instances as an admin user. Execute the following command and check the appVersion field for each of the instances. Check the syncStatus field in the rgStatus output to ensure that all of the currently active instances are in the syncCompleted state. The standby instance displays syncStatus as unprotectedRunningStandby.

> show status system rgStatus
rgStatus vsbc1-192.168.2.3 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          1;
    serviceId       0;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.3;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.4 {
    actualCeName    vsbc1;
    assignedRole    standby;
    currentRole     standby;
    nodeId          2;
    serviceId       1;
    syncStatus      unprotectedRunningStandby;
    usingMetavarsOf vsbc1-192.168.2.4;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.5 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          3;
    serviceId       2;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.5;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.6 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          4;
    serviceId       3;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.6;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.7 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          5;
    serviceId       4;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.7;
    appVersion      V08.01.00;
}
- Upgrading a redundancy group may impact transient calls. Stable calls are not affected.
- The upgrade process of a redundancy group is completed only after all the instances of the group are upgraded to the same build.
- If the upgrade fails for any of the instances, you must revert all the instances of the group to the previous build. Reverting instances in a cloud environment is service-impacting.
In earlier releases, on N:1 SWe and Cloud SBC instances, the processor indices (which are used for call capacity estimates) were hard-coded to the following values:
- Transcode Index: 1.2
- Crypto Index: 1.0
- Signaling Index: 1.0
- Passthrough Index: 1.0
This was done so that all the instances constituting a redundancy group use the same indices, resulting in the same call capacity estimate.
In the current release, these processor indices are calculated and exchanged across the instances in a redundancy group, so that all the instances in the group lock themselves to the same set of indices. This also results in a more realistic call capacity estimate by the instances in the N:1 SWe and Cloud SBC scenario.
If you wish to continue using the hard-coded processor indices after upgrading to the current release, you must provide an additional metadata parameter to the standby instance prior to the upgrade. The following key-value pair must be added to the metadata:
- Key: useHardcodedIndices
- Value: true
(No changes to the metadata/userdata are required during the upgrade if you wish to use the calculated processor indices.)
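For example, in a Heat template the pair could be added to the metadata property of the standby instance's server resource before the upgrade. This is a minimal sketch; everything other than the metadata key and value is an illustrative placeholder:

  # Standby SBC resource (resource and property names are placeholders)
  standby_sbc:
    type: OS::Nova::Server
    properties:
      name: vsbc1-standby
      metadata:
        useHardcodedIndices: "true"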