You can upgrade the VM nodes in an SBC SWe VNF to a new version of software using VNFM if the nodes were instantiated using VNFM. Within VNFM, you specify the software version to which you want to upgrade, select an upgrade method, and then upgrade the VM nodes, beginning with the node that currently has the standby role. To avoid service disruption, upgrade VM nodes only when they are in standby mode.

The following procedure supports upgrading SBC N:1 HA deployments, where a maximum 4:1 HA deployment includes five VM instances (four active, one standby). However, the process is similar regardless of the number of nodes in the cluster.

Info
titleNOTICE: Upgrading Deployments with OAM VMs to Release 9.0

Due to release 9.0 changes made in the OAM function to reduce the number of interfaces required on an OAM VM, Ribbon recommends performing a new installation of any OAM VNFs using VNFM, rather than an upgrade. If the VNF is not newly installed, the prior interfaces remain, and you must perform additional procedures to support an upgrade. If you cannot perform a fresh installation of the OAM VNF, contact Ribbon Global Product Support for the additional procedures to perform as part of the upgrade.


Info
titleNote

A compute host hardware change is not supported during an upgrade; changing the compute host requires instantiating a new VNFC.


Prerequisites

Before initiating the upgrade, ensure the following:

  • If your deployment includes a Ribbon Application Management Platform system, upgrade it before upgrading the OAM nodes (if present) and the SBC nodes. Refer to the VNFM page Upgrading a VNF and the Ribbon Application Management Platform documentation for additional details.

  • Download the required files for the new version of SBC software. Refer to the release notes for the new SBC software release. The required files include:
    • .qcow2 file that contains the SBC software
    • Script to generate a customized Cloud Service Archive (CSAR) package file
    • Virtual Network Function Descriptor (VNFD) template file to use with the script
  • Run the script to generate a custom CSAR package file for your deployment. Refer to Creating a CSAR Package File.
  • Create an OpenStack Glance image using the SBC application software .qcow2 file. Refer to Creating a Glance Image.
  • Onboard the CSAR package file you created for your deployment. Refer to Onboarding the SBC CSAR package file.
  • Check the VNF Lifecycle window. The VNF you want to upgrade must be in Ready status, meaning that the status of each VM within it is Available.
  • Ensure the pre-upgrade cluster configuration is saved to the Ribbon Application Management Platform. Note that for N:1 HA M-SBC upgrades, the Load Balancing Service (LBS) must be configured on the nodes prior to upgrading.
  • Use the Ribbon Application Management Platform to determine which node within the VNF currently has the standby role, or use the following CLI procedure (a condensed example of the output appears after this list):
    1. Log into the CLI of one of the nodes.
    2. Issue the command: show status system rgStatus
    3. Check the output to determine which node shows its currentRole as standby.
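
For reference, the following is a condensed, illustrative excerpt of the rgStatus output identifying the standby node; the node name and IP address are placeholders, other fields are omitted for brevity, and the full output format is shown under Monitoring Node Status During Upgrade.

Code Block
> show status system rgStatus
rgStatus vsbc1-192.168.2.4 {
    assignedRole    standby;
    currentRole     standby;
    syncStatus      unprotectedRunningStandby;
}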

Upgrading Deployments that Include OAM Nodes

If the deployment you are upgrading includes OAM nodes, VNFM instantiates the OAM nodes in their own cluster, separate from the redundancy groups containing the SBC nodes that the OAM nodes manage. When upgrading the deployment, upgrade the OAM nodes in the VNF first, and then upgrade the SBC nodes. The OAM nodes are typically deployed in a 1:1 HA configuration that includes an active and a standby OAM node. Upgrade the standby OAM node first. After the upgrade of the standby OAM node completes, manually switch over the active OAM node (request system admin <system name> switchover) to swap the roles. Then upgrade the new standby (former active) OAM node. Once you complete the upgrade of the OAM nodes, you can continue with upgrading the SBC nodes.
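
For example, assuming the system name is vsbcSystem (a placeholder for your deployment's actual system name), the switchover command issued from the active OAM node's CLI resembles the following sketch. The CLI may prompt you to confirm the action.

Code Block
> request system admin vsbcSystem switchover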

Info
titleNote

In the N:1 model, the VNF has a pair of OAM nodes and one or more redundancy groups. You can upgrade all of these in a rolling, automated fashion with no service impact.

Upgrading for the Bullseye SBC

The VNFM is primarily tested with automated upgrades. With a manual upgrade, the user has control and can choose which nodes to upgrade. You can select both OAM nodes to upgrade together to avoid gluster issues. The VNFM upgrades both nodes, and there is a service impact for the OAM nodes. After a successful OAM node upgrade, you can select the managed nodes, standby first and then active. There is no service impact for the managed nodes. A user must manually log into each node and check its status. Once the nodes are up and running, they must proceed with upgrading the other nodes.

When the automated upgrade option is chosen, the VNFM prioritizes each node and upgrades the nodes sequentially. This upgrade is done similarly to a hardware-based approach, starting with the standby OAM and then the active OAM. Once the OAM upgrade is successful, it continues to the managed nodes. There is no service impact on the nodes. The OpenStack approach differs: the stack is deleted first, and then the new SBC version is installed, so there is a service impact during this upgrade.

On a VNFM automated upgrade, there is an issue when upgrading from a buster-based to a bullseye-based SBC version. Buster uses gluster filesystem version 5.5, while bullseye uses gluster version 9.2. Many changes went into gluster 9.2, and gluster 9.2 is not compatible with gluster 5.5. When the standby OAM is upgraded to gluster 9.2 while the active OAM still runs gluster 5.5, they do not communicate correctly, yet the nodes must exchange messages to proceed with the upgrade.

Due to these limitations, an automated VNFM upgrade is not supported for the following scenarios:

  • Upgrading from buster or a lower version to bullseye.
  • Upgrading from 11.1 to any higher version.

Upgrading Manually Using the VNFM

  1. Choose the manual upgrade option.
  2. Select both OAM nodes and perform an upgrade.
  3. Both OAM nodes are deleted and re-created.
  4. Log into each node and, once both nodes are up and running, proceed to the non-OAM nodes.
  5. For the non-OAM (managed) nodes, select the standby node and perform an upgrade.
  6. Once the upgrade is complete and the service is up, select another node.

Monitoring Node Status During Upgrade

Before and during the upgrade procedure, use the SBC CLI to check the status of the nodes in the redundancy group (RG) you are upgrading:

  1. Log into the CLI of one of the nodes. 
  2. Issue the command: show status system rgStatus and check the output to:
    • Determine which node shows its assignedRole as standby. This is the node to upgrade first.
    • Verify that the syncStatus of all the currently active SBC VM(s) is syncCompleted.
    • Verify that the syncStatus of the current standby SBC VM is unprotectedRunningStandby.

The following is an example of the type of output that would appear in a full 4:1 HA deployment. Smaller deployments would include fewer node entries. In a 1:1 HA configuration, the output would include entries for one active and one standby node, similar to the first two entries in the example.

Code Block
> show status system rgStatus
rgStatus vsbc1-192.168.2.3 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          1;
    serviceId       0;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.3;
    appVersion      V08.00.00;
}
rgStatus vsbc1-192.168.2.4 {
    actualCeName    vsbc1;
    assignedRole    standby;
    currentRole     standby;
    nodeId          2;
    serviceId       1;
    syncStatus      unprotectedRunningStandby;
    usingMetavarsOf vsbc1-192.168.2.4;
    appVersion      V08.00.00;
}
rgStatus vsbc1-192.168.2.5 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          3;
    serviceId       2;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.5;
    appVersion      V08.00.00;
}
rgStatus vsbc1-192.168.2.6 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          4;
    serviceId       3;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.6;
    appVersion      V08.00.00;
}
rgStatus vsbc1-192.168.2.7 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          5;
    serviceId       4;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.7;
    appVersion      V08.00.00;
}

Throughout the upgrade process, maintain a CLI session in one of the VMs to monitor rgStatus. When it is time to upgrade the VM where you are monitoring rgStatus, start a CLI session in an upgraded VM to continue monitoring rgStatus. While upgrading M-SBC deployments it is also helpful to keep track of which node is the leader node for the cluster by issuing the command: 
show table system loadBalancingService leaderStatus

The command output lists all of the active nodes in the cluster; the leader node is the first node listed. For example:

Code Block
> show table system loadBalancingService leaderStatus
        LEADER         LEADER
ID      IPADDRESS      STATE
------------------------------
0       192.168.10.6   completed
1       192.168.10.10  completed
[ok][2018-11-15 21:52:51]

Upgrade Procedure

The upgrade procedure on VNFM occurs in two phases. First you upgrade the VNF, during which you specify the software version to which you want to upgrade. Then you upgrade the individual VM nodes (VNFCs) to the specified software version.

Upgrading a VNF

Use the following procedure to upgrade a VNF. 

  1. Perform the procedure Performing Onboarding Through the UI to add the new version of the VNF to the VNF catalog.

  2. Click VNF Lifecycle to display all VNFs and VNFCs.
  3. Select the VNF you want to upgrade from the VNF Lifecycle panel.

  4. Select Upgrade from the Select Action drop-down menu for the selected VNF.

    Info
    titleNote

    In the VNF Lifecycle panel, the status of the VNF you want to upgrade must be Ready. The Upgrade action is only available if the VNF is in the Ready state.

    Figure: VNF Ready to Upgrade


    The Upgrade VNF window appears.

    Figure: Upgrade VNF


  5. Provide a reason for the upgrade (optional). This information is included in logs and is passed to the VNFs during the upgrade.

  6. Select the upgrade version from the Upgrade Version drop-down menu.

  7. Click Save.
    The VNF status in the VNF Lifecycle panel updates to Upgrading.
  8. Continue with the Upgrading a VNFC procedure to upgrade each VM node, one by one.
    After you upgrade each VNFC, the VNF status is Upgraded.

Upgrading a VNFC

Before you can upgrade a VNFC, you must complete the "Upgrading a VNF" procedure, which specifies the software version. After completing that procedure, the status of the individual nodes (VNFCs) within the VNF appears as Upgrade Required. The Upgrade action only displays if the VNF status is Upgrading and the VNFC status is Upgrade Required.

Use the following procedure to upgrade a VNFC.

  1. Log into the VNFM UI. Refer to Logging into the VNFM UI.
  2. Click VNF Lifecycle to display all VNFs and all VNFCs.
  3. Select the VNF you are upgrading in the VNF Lifecycle panel. The nodes within the selected VNF are listed below in the VNFC Lifecycle panel.
  4. In the VNFC Lifecycle panel, select the node you previously determined to be the standby node. The standby node must be upgraded first.
  5. Log into the CLI of the node you are upgrading and issue the command: sbxstop 
    Note: Make sure the process to stop the SBC has completed before continuing.
  6. Select Upgrade from the Select Action drop-down menu in the VNFC Lifecycle panel for the node you are upgrading.

    Figure: Upgrade VNFC

    The Upgrade VNFC window appears.

  7. Select the upgrade option. Note that if the VM has an attached Cinder boot volume, the Rebuild option is not offered; you must use Recreate (reuse).

    • Rebuild - upgrades the VM using the upgrade image. The VM UUID will not change. Not available if the VM has an attached Cinder boot volume.
    • Recreate (reuse) - re-creates the VM instance using the upgrade image. The VM UUID will change.
  8. Click Upgrade. VNFM initiates the upgrade.  

  9. After the VM status changes to "Upgraded" and the node is back in service, determine the next node to upgrade. In a 1:1 HA deployment, this is the currently active node. In an N:1 HA deployment, select one of the active nodes. Before upgrading an instance, check the syncStatus field in the output of the rgStatus command to ensure that all the currently active instances are in the syncCompleted state. The standby instance displays syncStatus as unprotectedRunningStandby.

  10. Repeat steps 5 through 8 for the active node in a 1:1 HA deployment. In an N:1 HA deployment, repeat steps 5 through 8 for each active node, one by one, until all VMs are upgraded. 
    The VNF status in the VNF Lifecycle panel should be Upgrade in Progress until the VNFC upgrades are complete.  After each node is upgraded, its status changes to Upgraded. After the last node in the VNF is upgraded, the status of the active node(s) changes to Available. The status of the standby node changes to Maintenance for a short time before it comes into service and then it also becomes Available.
    The VNF status in the VNF Lifecycle panel then changes to Ready.

Post-Upgrade Verification

To verify that the node instances are up and running with the upgraded software version:

  1. Log into the CLI of one of the nodes.
  2. Issue the command: show status system rgStatus 
  3. In the output, check that the value of syncStatus for each currently active instance is syncCompleted and that the standby instance shows unprotectedRunningStandby. The appVersion field for each node should show the upgraded software version.
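
The following is an illustrative sketch of the verification output for a 1:1 HA deployment; the node names, IP addresses, and the V09.00.00 version string are placeholders for the values in your deployment.

Code Block
> show status system rgStatus
rgStatus vsbc1-192.168.2.3 {
    actualCeName    vsbc1;
    assignedRole    active;
    currentRole     active;
    nodeId          1;
    serviceId       0;
    syncStatus      syncCompleted;
    usingMetavarsOf vsbc1-192.168.2.3;
    appVersion      V09.00.00;
}
rgStatus vsbc1-192.168.2.4 {
    actualCeName    vsbc1;
    assignedRole    standby;
    currentRole     standby;
    nodeId          2;
    serviceId       1;
    syncStatus      unprotectedRunningStandby;
    usingMetavarsOf vsbc1-192.168.2.4;
    appVersion      V09.00.00;
}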
