RAMP supports two configuration models for SBC clusters. Beginning with SBC release 8.0, you can define N:1 clusters comprising multiple active SBC instances with a configuration type of "OAM." For these clusters, a 1:1 OAM node pair provides the northbound Operations, Administration, and Maintenance (OA&M) functions for the cluster.
You can define SBC clusters containing a single SBC or a 1:1 SBC HA pair with a configuration type of "Direct Single." The active SBC for these clusters is used for OA&M operations.
In a distributed SBC deployment, upgrade the clusters in the following order: OAM nodes, then T-SBC clusters, then M-SBC clusters, then S-SBC clusters.
This section describes the upgrade process for SBC instances in an N:1 redundancy group orchestrated in an OpenStack environment using Heat templates. N represents the number of active instances and can be up to 4, so a full 4:1 deployment contains 5 SBC instances in total. If the deployment you are upgrading includes OAM nodes, upgrade the OAM nodes before the SBC nodes.
The following procedure describes the upgrade of a full 4:1 SBC deployment that includes OAM nodes. Your implementation could include fewer instances and therefore require fewer steps.
Perform the following activities prior to upgrading the SBC instances within the deployment.
Upgrade RAMP in your deployment before upgrading the OAM nodes or the SBC nodes. Refer to Installing RAMP on OpenStack.
Download the required .QCOW2 and .sha256 files from the Customer Portal.
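After downloading, verify that the image is intact before uploading it. The sketch below uses placeholder files (the echo and first sha256sum lines only stand in for the files you actually download from the Customer Portal); in practice you run only the final sha256sum -c command against the downloaded .sha256 file:

```shell
# Stand-ins for the downloaded files; substitute the real .QCOW2 and .sha256
# filenames from the Customer Portal.
echo "image-bytes" > sbc.qcow2
sha256sum sbc.qcow2 > sbc.qcow2.sha256

# Verify the image against the checksum file. Prints "sbc.qcow2: OK" on
# success and exits non-zero if the file is corrupt or incomplete.
sha256sum -c sbc.qcow2.sha256
```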
To upload the .QCOW2 file to OpenStack, navigate to Project > Compute > Images. For more information, refer to Creating a Glance Image.
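If you prefer the OpenStack CLI to the Horizon GUI, the same upload can be done with the openstack client. A sketch, assuming an authenticated OpenStack environment; the image name and filename are placeholders, so these commands are not runnable outside your cloud:

```shell
# Upload the SBC image to Glance. The file and image name are placeholders;
# use the .QCOW2 file downloaded from the Customer Portal.
openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --file sbc-V08.01.00.qcow2 \
    sbc-V08.01.00

# Confirm the image is registered and active before deploying from it.
openstack image show sbc-V08.01.00 -c status
```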
Beginning with release 7.1, you must include SSH keys or passwords for the admin and linuxadmin accounts in the userdata you provide during orchestration of an SBC instance. Therefore, when upgrading from a pre-7.1 release, you must apply an updated Heat template that contains the mandatory keys or passwords.
Prior to upgrade, you must update the template used to deploy the instance to include the mandatory SSH key or password userdata. The example templates Ribbon provides include information on how to include this data in a template. Because they are more secure, SSH key fields are mandatory in the example Heat templates; password fields are optional. The password input is not plain text; it is a hash of the password. Refer to Metadata and Userdata Format on OpenStack for more information on generating and including the required login userdata.
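Because the password userdata is a hash rather than plain text, you need a way to generate one. A minimal sketch, assuming a crypt-style SHA-512 hash is acceptable for your template (confirm the exact required format in Metadata and Userdata Format on OpenStack; the password shown is a placeholder):

```shell
# Produce a salted SHA-512 crypt hash of a placeholder password. The output
# (a string beginning with $6$) is what goes into the password userdata
# field, not the plain-text password itself.
openssl passwd -6 'Example-Passw0rd'
```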
Ensure all SBC instances (five, in a 4:1 redundancy group), the OAM nodes, and RAMP are up and running.
To check the instance status in the Instances window, navigate to Project > Compute > Instances in the Horizon GUI.
For deployments that include existing OAM nodes, upgrade the OAM nodes before upgrading the SBC nodes in the deployment. Upgrade the nodes in the order given in these procedures (RAMP first, then the OAM nodes, then the SBC nodes) to ensure that the existing SBC configuration data is preserved and upgraded to the current format.
Upgrading an OAM deployment is service impacting because it involves stack deletion and recreation. Therefore, before starting the upgrade, divert traffic away from the OAM cluster you are upgrading.
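The stack removal step can be performed with the OpenStack CLI. A sketch, assuming an authenticated OpenStack environment with the Heat service; the stack name is a placeholder, so this command is not runnable outside your cloud:

```shell
# Remove the stack for the cluster being upgraded. "sbc-oam-stack" is a
# placeholder for your actual stack name; --wait blocks until deletion
# completes, and --yes skips the interactive confirmation prompt.
openstack stack delete --yes --wait sbc-oam-stack
```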
Delete the existing stacks (using the stack delete command) to remove the OAM and SBC instances. To verify that the instances are up and running with the upgraded software image:
Log on to each instance as the admin user. Execute the following command and check the appVersion field for each instance. Check the syncStatus field in the rgStatus output to ensure that all currently active instances are in the syncCompleted state. The standby instance displays a syncStatus of unprotectedRunningStandby.
> show status system rgStatus
rgStatus vsbc1-192.168.2.3 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          1;
    serviceId       0;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.3;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.4 {
    actualCeName    vsbc1;
    assignedRole    standby;
    currentRole     standby;
    nodeId          2;
    serviceId       1;
    syncStatus      unprotectedRunningStandby;
    usingMetavarsOf vsbc1-192.168.2.4;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.5 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          3;
    serviceId       2;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.5;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.6 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          4;
    serviceId       3;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.6;
    appVersion      V08.01.00;
}
rgStatus vsbc1-192.168.2.7 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          5;
    serviceId       4;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.7;
    appVersion      V08.01.00;
}
In earlier releases of the N:1 SWe and Cloud SBC instances, the processor indices (which are used for call capacity estimates) were hard-coded to the following values:
The indices were hard-coded so that all the instances constituting a redundancy group use the same indices, resulting in the same call capacity estimate.
In the current release, the processor indices are calculated and exchanged across the instances in a redundancy group, so all the instances in the group lock themselves to the same set of indices. This also produces a more realistic call capacity estimate for the instances in N:1 SWe and Cloud SBC scenarios.
If you wish to continue using the hard-coded processor indices after upgrading to the current release, you must provide an additional metadata parameter to the standby instance prior to the upgrade. The following is the key-value pair to update in the metadata:
No changes to the metadata or userdata are required during the upgrade if you wish to use the calculated processor indices.