Customer Method to Install Life Cycle Manager
The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in AWS. Once the LCM AMI is launched, the aws_access.yml and upgrade-revert.yml template files (needed for the upgrade) are located in the user-specified path.
Example: /home/ec2-user/9.2/iac/management/SBC/upgrade
Prerequisites
- Determine a Linux AMI to use for the LCM using the procedure in Locate Linux AMI ID for use in LCM deployments.
- Retrieve the RBBN LCM scripts from the Ribbon Support Portal. The minimum supported version is iac_sustaining_22.05_b296.tar.gz.
Steps
- Launch a t2.micro instance with the LCM AMI. Once it is successfully instantiated, log into the instance as ec2-user and switch to root:
Code Block
ssh -i lcm.pem ec2-user@<LCM instance IP address>
sudo su -
- Create a directory within /home/ec2-user/ using the command:
Code Block
mkdir /home/ec2-user/iac
- Copy the LCM IAC tarball package to /home/ec2-user/iac and untar it inside that directory using the command:
Code Block
tar -xzvf iac_sustaining_*.tar.gz
- Change to the /home/ec2-user/iac directory and review the README file for any updated instructions.
- Install the python-pip package using the method appropriate for your OS, for example:
  - Amazon Linux 2 AMI (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
  - CentOS 7: yum install -y epel-release; yum install -y python2-pip
  - RHEL 7: yum install -y python2-pip
  - Debian 9/Ubuntu 18: apt-get install python-pip
- Install the Python virtual environment package:
Code Block
pip2 install virtualenv
- Create a virtual environment and give it a name, e.g. 'iacenv':
Code Block
virtualenv -p /usr/bin/python2 iacenv
- Activate the virtual environment:
Code Block
source iacenv/bin/activate
- Complete the LCM setup by installing the required packages and setting up the environment:
Code Block
./setup.py
- Change directory to /home/ec2-user/iac/management/SBC/upgrade.
- List the contents of the directory and ensure the following files are present:
  - upgrade.py
  - aws_access.yml
  - upgrade-revert.yml
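To confirm the directory is ready, a quick shell check can verify all three files at once. This is a minimal sketch, assuming the default path used above:
Code Block
cd /home/ec2-user/iac/management/SBC/upgrade
# Report each required file as present or missing
for f in upgrade.py aws_access.yml upgrade-revert.yml; do
    [ -f "$f" ] && echo "$f: present" || echo "$f: MISSING"
done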
Info
It is safe to shut down the LCM instance after the process finishes. If you decide to terminate/remove the LCM instance, you must make a backup of
Customer Methods for Upgrading and Reverting
Prerequisites
- The new SBC and LCM AMIs must be uploaded and available in AWS.
- You must have an AWS instance (t2.micro) with the Life Cycle Manager (LCM) AMI installed. (The default aws_access.yml and upgrade-revert.yml files will be present in the /home/ec2-user/iac/management/SBC/upgrade/ directory of the LCM instance.)
Steps
- Open a session to the Life Cycle Manager (LCM) instance and switch to root:
Code Block
ssh -i lcm.pem ec2-user@<LCM instance IP address>
sudo su -
- Activate the virtual environment:
Code Block
source iacenv/bin/activate
- Change to the workflow directory:
Code Block
cd /home/ec2-user/iac/management/SBC/upgrade
- Edit the file /home/ec2-user/iac/management/SBC/upgrade/aws_access.yml to provide AWS access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed aws_access.yml file is shown below.
Info title Note
Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the LCM server to the SBC SWe.
Code Block title aws_access.yml
############################################################################
# This file has 2 blocks of information:
# 1) AWS access details
# 2) SBC Instance/group details
# Update this file as per the directions provided at respective fields
#############################################################################
# AWS access details should be sourced as environment variables as follows:
# export AWS_ACCESS_KEY_ID=my-aws-access-key
# export AWS_SECRET_ACCESS_KEY=my-aws-secret-key
#############################################################################
#
# Update AWS region and zone
#
provider: "aws"
region: "ap-southeast-1"
zone: "ap-southeast-1c"
#
# Update the SBC instance's CLI login details; the user must be in the Administrator group, e.g. the default user 'admin'
#
login_details:
  username: "admin"
  password: "myAdminPassword"
#
# Update redundancy group details
# 1) In case of an active/standby (1:1) configuration, provide details of both instances in a redundancy group. Order doesn't matter.
# 2) In case of a standalone (single node) configuration, a redundancy group will have info of the single instance only.
# If the username and password are the same for all the instances and the same as in "login_details" above,
# those lines can be removed, e.g. a simpler version looks like this:
# instance1:
#   instance_id: "i-my-instance-id-1"
#   instance_ip: "1.2.3.4"
#
# Note: The script is limited to support just 1 redundancy group. Don't add another redundancy group to this file or it will fail.
redundancy_group:
  instance1:
    instance_id: "i-my-instance-id-1"
    instance_ip: "1.2.3.4"
    login_details:
      username: "admin"
      password: "myAdminPassword"
  instance2:
    instance_id: "i-my-instance-id-2"
    instance_ip: "1.2.3.5"
    login_details:
      username: "admin"
      password: "myAdminPassword"
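As the file header above notes, the AWS credentials are exported as environment variables rather than stored in aws_access.yml. The following is a minimal pre-flight sketch from the LCM shell; the key values and the 1.2.3.4 address are the placeholders from the example above, not real values:
Code Block
# Source AWS credentials as environment variables (placeholder values)
export AWS_ACCESS_KEY_ID=my-aws-access-key
export AWS_SECRET_ACCESS_KEY=my-aws-secret-key
# Confirm the instance IP chosen in aws_access.yml is reachable from the LCM
# (if ICMP is blocked by security groups, test reachability another way)
ping -c 3 1.2.3.4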
Overview
The SBC on AWS supports upgrades and reverts from the following releases to 12.00.xx.
Warning
Offline upgrades are service-impacting.
Due to enforced security features (the admin ssh key requirement) and cgroup support, Ribbon requires that you update the SBC instance user-data in prior releases before attempting an upgrade.
Info
This upgrade procedure supports an SBC HA or SBC HA with HFE upgrade. Currently, upgrades are not supported in GCP.

You must accomplish the following:
- Refer to Linuxadmin sudo Permissions for SBC SWe in AWS for more information about ssh key requirements.
- Refer to Implement C-group Support for Third-Party Software Installations for more details about Linux cgroup support. If you wish to use this feature after the upgrade, the user-data parameters ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be set to a value other than zero (0) prior to the upgrade; a hedged example follows this list.
- After the user-data updates are complete, perform the normal Replacement Upgrade/Revert of the SBC instance in AWS in accordance with Replacement Upgrade for AWS.
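For illustration only, a hedged fragment of the relevant user-data entries with nonzero allocations; the values shown are placeholders, so consult the cgroup support reference above for values appropriate to your deployment:
Code Block
"ThirdPartyCpuAlloc": "2",
"ThirdPartyMemAlloc": "1024"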
HA SBC Upgrade
HA SBC Pre-Upgrade Steps
To update the user-data to include admin ssh keys and 3rd Party cgroup information for an HA SBC pair, complete the following procedure first on the standby SBC, and then on the formerly active SBC.
Update Standby SBC user-data
- Log onto the standby SBC management IP as user linuxadmin.
- Exit the standby unit.
- Log into the active unit as admin.
- Ensure that the current-role active SBC instance sync status shows syncCompleted, and that the app IP and versions are correct:
Code Block title Verify sync status
admin@vsbc1> show table system rgStatus
                      ACTUAL CE   ASSIGNED   CURRENT   NODE   SERVICE
INSTANCE RG NAME      NAME        ROLE       ROLE      ID     ID        SYNC STATUS                 USING METAVARS OF     APP VERSION
------------------------------------------------------------------------------------------------------------------------------
vsbc1-172.31.11.73    vsbc1       active     standby   1      1         unprotectedRunningStandby   vsbc1-172.31.11.73    V07.00.00S406
vsbc2-172.31.11.101   vsbc2       standby    active    2      0         syncCompleted               vsbc2-172.31.11.101   V07.00.00S406
- Log onto AWS.
- Click the Services drop-down list. The Services list is displayed.
- Click EC2 in the Management Tools section.
To stop the unit:
- Log in to the standby unit as linuxadmin and sudo to root (sudo su -). Verify that this unit has a "Current host role" of standby using the swinfo command:
Code Block
linuxadmin@vsbc2:~$ sudo su
[root@vsbcSystem-vsbc2 linuxadmin]# swinfo
===================================================
SERVER: vsbc2
OS: V06.00.00-S406
SonusDB: V07.00.00-S406
EMA: V07.00.00-S406
SBC: V07.00.00-S406
SBC Type: isbc
Management mode: xxx
Build Number: xxx
===================================================
Installed host role: standby
Current host role: standby
===================================================
===================================================
SERVER: vsbc1
OS: V06.00.00-S406
SonusDB: V07.00.00-S406
EMA: V07.00.00-S406
SBC: V07.00.00-S406
SBC Type: isbc
Management mode: xxx
Build Number: xxx
===================================================
Installed host role: active
Current host role: active
===================================================
- Stop the instance using the command "shutdown -h now".
- Using the left navigation panel of the AWS EC2 dashboard, navigate to INSTANCES > Instances.
- Locate the standby instance in the list. (For example, if using an EIP for the management IP, type "Public IP : <standby management ip>" into the instances search bar and press Enter to quickly find the instance; an AWS CLI alternative is sketched below.)
- Once the Instance State shows stopped, proceed to the next step.
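If console search is inconvenient, the instance ID can also be looked up from any shell with the AWS CLI configured. This is a hedged sketch using the standard public-IP filter; substitute the real management IP:
Code Block
# Find the instance ID that owns the standby management EIP
aws ec2 describe-instances \
    --filters "Name=ip-address,Values=<standby management ip>" \
    --query 'Reservations[].Instances[].InstanceId' --output text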
To update the user data:
- Right-click the instance on the AWS Dashboard and choose Instance Settings > View/Change Userdata.
- Update the user data to include entries for "AdminSshKey", "ThirdPartyCpuAlloc", and "ThirdPartyMemAlloc" in accordance with the HA user-data example shown after the verification steps below.
- Click Save to save the user data.
To restart the unit:
- Select the desired instance again.
- Right-click or use the Actions pull-down to choose Instance State > Start.
- Click Yes, Start when prompted for confirmation.
- Wait for the instance to run with the 2/2 Status Check showing "success".
- Log into this "current host role" standby instance as linuxadmin and use the swinfo tool to wait for the system to recover as standby. This takes approximately 10 minutes.
Code Block title swinfo to check for standby system recovery
linuxadmin@vsbc2:~$ sudo su
[root@vsbcSystem-vsbc2 linuxadmin]# swinfo
===================================================
SERVER: vsbc2
OS: V06.00.00-S406
SonusDB: V07.00.00-S406
EMA: V07.00.00-S406
SBC: V07.00.00-S406
SBC Type: isbc
Management mode: xxx
Build Number: xxx
===================================================
Installed host role: standby
Current host role: standby
===================================================
===================================================
SERVER: vsbc1
OS: V06.00.00-S406
SonusDB: V07.00.00-S406
EMA: V07.00.00-S406
SBC: V07.00.00-S406
SBC Type: isbc
Management mode: xxx
Build Number: xxx
===================================================
Installed host role: active
Current host role: active
===================================================
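Rather than re-running swinfo by hand, a small polling loop can wait for the standby role to return. This is a sketch only, assuming swinfo is on root's PATH as in the session above and that the peer currently holds the active role:
Code Block
# Poll every 30 seconds until the local unit reports the standby role
until swinfo | grep -q "Current host role: standby"; do
    sleep 30
done
echo "Standby role restored"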
- Log into the active SBC as admin and wait for the current-role active instance syncStatus to show syncCompleted:
Code Block title Verify sync status
admin@vsbc1> show table system rgStatus
                      ACTUAL CE   ASSIGNED   CURRENT   NODE   SERVICE
INSTANCE RG NAME      NAME        ROLE       ROLE      ID     ID        SYNC STATUS                 USING METAVARS OF     APP VERSION
------------------------------------------------------------------------------------------------------------------------------
vsbc1-172.31.11.73    vsbc1       active     standby   1      1         unprotectedRunningStandby   vsbc1-172.31.11.73    V07.00.00S406
vsbc2-172.31.11.101   vsbc2       standby    active    2      0         syncCompleted               vsbc2-172.31.11.101   V07.00.00S406
The following HA user-data example shows the required entries:
Info
Refer to Linuxadmin sudo Permissions for SBC SWe in AWS for details on generating an ssh key for use with "AdminSshKey".
Code Block title HA user-data
{
"ALT_Mgt0_00": "LOGICAL_MGMT_IP",
"ALT_Pkt0_00": "VIP1",
"ALT_Pkt1_00": "VIP2",
"AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCJnrFMr/RXJD3rVLMLdkJBYau+lWQ+F55Xj+KjunVBtw/zXURV38QIQ1zCw/GDO2CZTSyehUeiV0pi2moUs0ZiK6/TdWTzcOP3RCUhNI26sBFv/Tk5MdaojSqUc2NMpS/c1ESCmaUMBv4F7PfeHt0f3PqpUsxvKeNQQuEZyXjFEwAUdbkCMEptgaroYwuEz4SpFCfNBh0obUSoX5FNiNO/OyXcR8poVH0UhFim0Rdneo7VEH5FeqdkdGyZcTFs7A7aWpBRY3N8KUwklmNSWdDZ9//epEwgaF3m5U7XMd4M9zHURF1uQ/Nc+aiyVId9Mje2EU+nh6npaw/tEOPUiC1v",
"CEName": "ISBC90R0SBC01",
"CERole": "ACTIVE",
"ClusterIp": "172.31.11.32",
"HFE": "172.31.10.161",
"IAM_ROLE": "SWe",
"NodeName": "MA-90R0-91-Upgrade-HFEHA",
"PeerCEHa0IPv4Address": "172.31.11.32",
"PeerCEName": "ISBC90R0SBC02",
"ReverseNatPkt0": "True",
"ReverseNatPkt1": "False",
"SbcHaMode": "1to1",
"SbcPersonalityType": "isbc",
"SortHfeEip": "True",
"SystemName": "ISBC90R0SBC",
"TemplateName": "AWS_HA_template.json",
"TemplateVersion": "V09.00.00R000",
"ThirdPartyCpuAlloc": "0",
"ThirdPartyMemAlloc": "0"
}
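The "AdminSshKey" value above is the public half of an RSA key pair. The Linuxadmin sudo Permissions reference is authoritative; as a minimal sketch, a suitable key pair can be generated locally (the file name admin_key is arbitrary):
Code Block
# Generate a 2048-bit RSA key pair with no passphrase
ssh-keygen -t rsa -b 2048 -f admin_key -N ""
# The contents of admin_key.pub supply the "AdminSshKey" value
cat admin_key.pub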
Repeat the previous procedure on the formerly active SBC.
HA SBC Upgrade Steps
Once the pre-upgrade steps are completed successfully, perform the upgrade in accordance with Replacement Upgrade for AWS.
- "i-instance-id-4" upgradeGroup2: tag: "test2" instances: - "i-instance-id-1" - "i-instance-id-3"
- From the LCM session, in /home/ec2-user/iac/management/SBC/upgrade, run the upgrade command with aws_access.yml and upgrade-revert.yml as inputs:
Code Block
./upgrade.py -a aws_access.yml -u upgrade-revert.yml
Info
For an offline upgrade, use the command: ./upgrade.py -a aws_access.yml -u upgrade-revert.yml -o
Info
Upgrade progress and logs are shown on-screen and also logged in /home/ec2-user/9.2/iac/management/log/SBC/upgrade/aws/latest. After all nodes listed in upgrade-revert.yml are successfully upgraded, timestamped logs are moved to /home/ec2-user/9.2/iac/management/log/SBC/upgrade/aws/history.
Info
Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes – you must have them in order to perform a reversion.
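Before attempting a reversion, it can be reassuring to confirm that the pre-upgrade volumes still exist. A hedged AWS CLI sketch is shown below; the status filter simply lists detached volumes in the region, so match the results against the instance-ID file mentioned above:
Code Block
# List detached (available) volumes with their creation times and sizes
aws ec2 describe-volumes --filters "Name=status,Values=available" \
    --query 'Volumes[].{ID:VolumeId,Created:CreateTime,Size:Size}' --output table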
Reverting
- Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade-revert.yml to designate the instances to revert. The following example provides a list of instances.
Info
- The reversion process runs in parallel on all the instances and could impact service.
- Ensure that all the instances of a redundancy group are reverted to the same SBC version; failure to maintain the same version within a group causes unknown behavior and could cause a service outage.
Code Block title upgrade-revert.yml
########################################################################
# This file defines which instances to upgrade and in which order
# Update this file as per the directions provided at respective fields
########################################################################
#
# image_id - new image to use for upgrade. Use it as follows for different providers:
#   aws       - AMI id with new SBC image
#   gcp       - Name of the new SBC image
#   openstack - image id with new SBC image
#   vmware    - Absolute path to vmdk with new SBC image
#   rhv/kvm   - Absolute path to qcow2 image
#   azure     - Snapshot name
#
image_id: "image_or_ami_use-for-upgrade"
#
# A tag to uniquely identify this upgrade. Logs and the directory structure of logs are tagged
# with these so that future reference to them is easier with an appropriate tag.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to the right set of logs.
#
# If multiple upgrades are done using the same upgrade-revert.yml file, use a different upgrade_tag each time. Refer to
# the corresponding upgrade_tag while doing the revert.
upgrade_tag: "IAC_TEST"
#
# Order of upgrade:
# All the instances listed under one group get upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of an HA pair are listed in the same upgrade group, that will impact service.
#
# On successful upgrade of instances in one group, instances in the next group will be picked.
# Example use cases:
# 1) While upgrading a 1:1 (active/standby), list the standby in the first group and the active in the second group
# 2) To upgrade just the standby, list it in the first group and remove group 2
# 3) While upgrading a standalone, list the instance in the first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instances that will be upgraded in parallel
  upgradeGroup1:
    tag: "test1"
    instances:
      - "i-instance-id-2"
      - "i-instance-id-4"
  upgradeGroup2:
    tag: "test2"
    instances:
      - "i-instance-id-1"
      - "i-instance-id-3"
- From the LCM session, in /home/ec2-user/iac/management/SBC/upgrade/, run the revert command with aws_access.yml and upgrade-revert.yml as inputs:
Code Block
./revert.py -a aws_access.yml -u upgrade-revert.yml
Info
Reversion progress and logs are shown on-screen and also logged in /home/ec2-user/9.2/iac/management/log/SBC/revert/aws/latest. After all nodes listed in upgrade-revert.yml are successfully reverted, timestamped logs are moved to /home/ec2-user/9.2/iac/management/log/SBC/revert/aws/history.
Info
Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes – you must have them in order to perform a reversion.