Perform the following steps to upgrade the SBC SWe on Azure using IaC.
Create Network Security Group rules to allow the IaC instance SSH access to the SBC MGT IP on ports 22, 2024, 444, and 443. Refer to Instantiate Standalone SBC on Azure > Create Network Security Group.
If the NIC for the IaC instance was created in the same subnet as the SBC MGT, this step is not needed.
Run az login and sign in as a user with the role 'owner' for the subscription.
If not yet created, create a Service Principal with the 'owner' role for the subscription. For example:
az ad sp create-for-rbac -n rbbn-iac --role="owner" --scopes="/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXa511"
Export the following values from the Service Principal:
export ARM_SUBSCRIPTION_ID="<subscription_id>"
export ARM_TENANT_ID="<tenant_id>"
export ARM_CLIENT_ID="<client_id>"
export ARM_CLIENT_SECRET="<client_secret>"
You can add these values to a file and source it before running the upgrade command, as shown in the sketch below.
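A minimal sketch of that approach (the file name azure_env.sh is illustrative, not part of the IaC package; substitute your own values):
cat > /home/ubuntu-user/azure_env.sh <<'EOF'
export ARM_SUBSCRIPTION_ID="<subscription_id>"
export ARM_TENANT_ID="<tenant_id>"
export ARM_CLIENT_ID="<client_id>"
export ARM_CLIENT_SECRET="<client_secret>"
EOF
source /home/ubuntu-user/azure_env.sh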
The azure_access.yml and upgrade-revert.yml templates (needed for upgrade) are located in the user-specified path. Example: /home/ubuntu-user/10.0/iac-270/management/SBC/upgrade.
Steps
Instantiate a Standard F4s instance with the LCM image. Once successfully instantiated, log into the instance as ubuntu-user and switch to root:
ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
sudo su -
Create a directory within /home/ubuntu-user/ using the command:
mkdir /home/ubuntu-user/iac-270
Copy the IaC tarball to /home/ubuntu-user/iac-270.
Untar the tarball inside /home/ubuntu-user/iac-270 using the command:
tar -xzvf iac_sustaining_22.05_b296.tar.gz
Change to the /home/ubuntu-user/iac-270 directory and view the README file for any updated instructions.
Install the python-pip package using the method appropriate for the OS - see examples:
Amazon Linux 2 Image (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
CentOS 7: yum install -y epel-release; yum install -y python2-pip
RHEL 7: yum install -y python2-pip
Debian 9/Ubuntu 18: apt-get install python-pip
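As a quick sanity check after installation (the pip binary may be named pip or pip2 depending on the OS):
pip --version || pip2 --version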
Run the RAF activate script using the command:
source /home/ubuntu-user/iac-270/rafenv/bin/activate
Run the setup script using the command:
/home/ubuntu-user/iac-270/setup.py
The upgrade workflow files upgrade.py, azure_access.yml, and upgrade-revert.yml are located in the /home/ubuntu-user/iac-270/management/SBC/upgrade directory.
It is safe to shut down the LCM instance after the process finishes. If you decide to terminate/remove the LCM instance, you must make a backup of /var/log/ribbon
which is required for future reversions and debugging.
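One possible way to take that backup before terminating the instance (an illustrative sketch; the archive name, destination host, and path are placeholders, not part of the IaC procedure):
tar -czvf /tmp/ribbon-logs-backup.tar.gz /var/log/ribbon
scp /tmp/ribbon-logs-backup.tar.gz <user>@<backup-host>:<backup-path>/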
(The azure_access.yml and upgrade-revert.yml templates will be present in the /home/ubuntu-user/iac-270/management/SBC/upgrade/ directory of the LCM instance.)
Log into the LCM instance, switch to root, activate the RAF environment using the command "source rafenv/bin/activate", and change to the /home/ubuntu-user/iac-270/management/SBC/upgrade workflow directory:
ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
sudo su -
source rafenv/bin/activate
cd /home/ubuntu-user/iac-270/management/SBC/upgrade
Edit the file /home/ubuntu-user/iac-270/management/SBC/upgrade/azure_access.yml to provide Azure access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed azure_access.yml file is shown below.
Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the LCM server to the SBC SWe.
########################################################################
# This file has 2 blocks of information:
# 1) Azure general resources details
# 2) SBC Instance/group details
# Update this file as per the directions provided at respective fields
########################################################################
#############################################################################
# Azure authentication details should be supplied via azure CLI login:
# - For service principal: az login --service-principal -u '<clientId>' -p '<clientSecret>' --tenant '<tenantId>'
# - For Azure Active Directory: az login
##############################################################################
provider: "azure"
#
# Update subscription, resource group, location
#
subscription_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX8b96"
resource_group: "SBC-Core-RG"
location: "East US"
availability_zone: 0
#
# Update redundancy group details, in case of active/standby (1:1) configuration,
# provide details of both the instances. Order doesn't matter.
#
# Note: The script is limited to support just 1 redundancy group. Please dont add other redundancy group to this file else it will fail.
redundancy_group1:
  instance1:
    instance_id: "ap-sbc02-active"
    instance_ip: "20.75.155.181"
    login_details:
      username: "admin"
      password: "Sonus@123"
    linuxadmin_key_file: "/home/rbbntest/Mahesh-Key.pem"
  instance2:
    instance_id: "ap-sbc02-standby"
    instance_ip: "20.75.155.206"
    login_details:
      username: "admin"
      password: "Sonus@123"
    linuxadmin_key_file: "/home/rbbntest/Mahesh-Key.pem"
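Before running the upgrade, it can help to confirm that each instance_ip in azure_access.yml is reachable from the LCM server. A minimal sketch using the values from the example above (the linuxadmin login is an assumption based on the linuxadmin_key_file field; adjust the key file, user, and IPs to your deployment):
ssh -i /home/rbbntest/Mahesh-Key.pem linuxadmin@20.75.155.181 exit
ssh -i /home/rbbntest/Mahesh-Key.pem linuxadmin@20.75.155.206 exit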
Edit the file /home/ubuntu-user/iac-270/management/SBC/upgrade/upgrade-revert.yml to provide the Image ID, Upgrade Tag, and order in which to upgrade the instances to the new SBC version. The following example provides the Image version to use for the upgrade and specifies to upgrade instance ap-sbc02-standby first and then ap-sbc02-active.
#######################################################################
# This file defines which instances to upgrade and in which order
# Update this file as per the directions provided at respective fields
########################################################################
#
# image_id - new image to use for upgrade. Use it as follows for different providers:
# aws - AMI id with new SBC image
# gcp - Name of the new SBC image
# openstack - image id with new SBC image
# vmware - Absolute path to vmdk with new SBC image
# rhv/kvm - Absolute path to qcow2 image
# azure - Snapshot name
#
image_id: "release-sbc-v10-00-00r000-05-19-21-22-06.snap"
#
# A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
# with these so that future reference to these will be easier with appropriate tag.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
#
# If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
# the corresponding upgrade_tag while doing the revert.
upgrade_tag: "IAC_TEST"
# Order of upgrade:
# All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
#
# On successful upgrade of instances in one group, instances in next group will be picked.
# Example Usecases:
# 1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
# 2) If want to upgrade just standby, list that in first group and remove group 2
# 3) While upgrading a standalone, list instance in first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instance that will be upgraded in parallel
  upgradeGroup1:
    tag: "standby"
    instances:
      - "ap-sbc02-standby"
  upgradeGroup2:
    tag: "active"
    instances:
      - "ap-sbc02-active"
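If you are unsure which snapshot name to supply as image_id, the Azure CLI can list the snapshots in the resource group (an illustrative query; substitute your own resource group):
az snapshot list --resource-group SBC-Core-RG --query "[].name" --output table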
From the LCM session, in the /home/ubuntu-user/iac-270/management/SBC/upgrade directory, run the upgrade command with azure_access.yml and upgrade-revert.yml as inputs:
./upgrade.py -a azure_access.yml -u upgrade-revert.yml
For an offline upgrade, use the command: ./upgrade.py -a azure_access.yml -u upgrade-revert.yml -o
Upgrade progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac-270/management/log/SBC/upgrade/azure/latest.
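To follow the progress from a second LCM session, you can tail the latest log directory (an illustrative command; the exact log file names vary per run):
tail -f /home/ubuntu-user/9.2/iac-270/management/log/SBC/upgrade/azure/latest/*.log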
After successfully upgrading all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ubuntu-user/9.2/iac-270/management/log/SBC/upgrade/azure/history.
Volumes with older software versions are left intact on Azure in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.
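To review the disks retained in the resource group, for example before planning a reversion, the Azure CLI can list them (illustrative; substitute your own resource group):
az disk list --resource-group SBC-Core-RG --output table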
Edit the file /home/ubuntu-user/iac-270/management/SBC/upgrade/upgrade-revert.yml by designating the instances to revert. The following example provides a list of instances.
#######################################################################
# This file defines which instances to upgrade and in which order
# Update this file as per the directions provided at respective fields
########################################################################
#
# image_id - new image to use for upgrade. Use it as follows for different providers:
# aws - AMI id with new SBC image
# gcp - Name of the new SBC image
# openstack - image id with new SBC image
# vmware - Absolute path to vmdk with new SBC image
# rhv/kvm - Absolute path to qcow2 image
# azure - Snapshot name
#
image_id: "release-sbc-v10-00-00r000-05-19-21-22-06.snap"
#
# A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
# with these so that future reference to these will be easier with appropriate tag.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
#
# If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
# the corresponding upgrade_tag while doing the revert.
upgrade_tag: "IAC_TEST"
# Order of upgrade:
# All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
#
# On successful upgrade of instances in one group, instances in next group will be picked.
# Example Usecases:
# 1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
# 2) If want to upgrade just standby, list that in first group and remove group 2
# 3) While upgrading a standalone, list instance in first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instance that will be upgraded in parallel
  upgradeGroup1:
    tag: "standby"
    instances:
      - "ap-sbc02-standby"
  upgradeGroup2:
    tag: "active"
    instances:
      - "ap-sbc02-active"
From the LCM session, in the /home/ubuntu-user/iac-270/management/SBC/upgrade/ directory, run the revert command with azure_access.yml and upgrade-revert.yml as inputs:
./revert.py -a azure_access.yml -u upgrade-revert.yml
Reversion progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac-270/management/log/SBC/revert/azure/latest.
After successfully reverting all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ubuntu-user/9.2/iac-270/management/log/SBC/revert/azure/history.
Volumes with older software versions are left intact on Azure in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.
Use the following procedure to recover the SBC SWe HA on public cloud after reverting to a pre-11.0 release, if the SBC application does not come up on both the active and standby nodes. This procedure is only applicable if the system was upgraded from a pre-11.0 release to this release using the method described in this section. Once the revert completes, check whether the application comes up on both the active and standby nodes; if it does not, perform the steps below.
If you are using the RAF for upgrade/revert, perform the steps 1.3 to 3.1 within 30 minutes after running the revert operation. This ensures that the RAF can detect the correct state of the VNF and mark the operation as complete.
Step 1: Stop the application on both the active and standby nodes using the command sbxstop.
Step 2: If present, remove the following file/directories from both the active and standby SBC:
rm -rf /home/cnxipmadmin/peerDynamicHANewComps
rm -rf /opt/sonus/sbx/openclovis/var
Step 3: Start the application using the 'sbxstart' command. Check to make sure that the application starts up on both nodes.
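For reference, the recovery steps above amount to the following sequence, run as root on both the active and standby nodes (this only restates the steps in the procedure, in order):
sbxstop
rm -rf /home/cnxipmadmin/peerDynamicHANewComps
rm -rf /opt/sonus/sbx/openclovis/var
sbxstart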