The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in AWS. Once the LCM AMI is launched, the aws_access.yml and upgrade-revert.yml templates (needed for the upgrade) are located in the /home/ec2-user/raf-240/management/SBC/upgrade directory.
Steps
Once the LCM instance is successfully instantiated, log into it as ec2-user and switch to root:
ssh -i lcm.pem ec2-user@<LCM instance IP address>
sudo su -
Create a directory within /home/ec2-user/ using the command:
mkdir /home/ec2-user/raf-240
Copy the tarball "iac_sustaining_21.10_b286.tar.gz" to /home/ec2-user/raf-240 (one possible copy command is sketched below).
Untar the tarball inside /home/ec2-user/raf-240 using the command:
tar -xzvf iac_sustaining_21.10_b286.tar.gz
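Optionally confirm the extraction before continuing; you should see the RAF files, including setup.py and the README referenced in the next step:
ls /home/ec2-user/raf-240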
Change to the /home/ec2-user/raf-240 directory and view the README file for any updated instructions.
Install the python-pip package using the method appropriate for the OS - see the following examples (an optional verification check follows them):
Amazon Linux 2 AMI (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
CentOS7 : yum install -y epel-release; yum install -y python2-pip
RHEL7: yum install -y python2-pip
Debian9/Ubuntu18: apt-get install python-pip
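Optionally verify that pip is available before continuing; depending on the distribution the binary may be named pip or pip2:
pip --version || pip2 --version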
Activate the RAF virtual environment using the command:
source /home/ec2-user/raf-240/rafenv/bin/activate
Run the RAF setup script using the command:
./setup.py
The upgrade.py script and the aws_access.yml and upgrade-revert.yml templates are located in the /home/ec2-user/raf-240/management/SBC/upgrade directory.
It is safe to shut down the LCM instance after the process finishes. If you decide to terminate or remove the LCM instance, ensure you take a backup of the entire /home/ec2-user/raf-240 directory that you can use, if needed, for a future rollback (a sketch of one way to do this follows).
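A minimal sketch of one way to take that backup before terminating the instance; the archive name is arbitrary, and the archive should be copied off the instance (for example, with scp) before termination:
tar -czvf /home/ec2-user/raf-240-backup.tar.gz /home/ec2-user/raf-240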
(The files aws_access.yml and upgrade-revert.yml will be present in the /home/ec2-user/raf-240/management/SBC/upgrade/ directory of the LCM instance.)
Log into the LCM instance, switch to root, activate the RAF virtual environment, and change to the /home/ec2-user/raf-240/management/SBC/upgrade workflow directory:
ssh -i lcm.pem ec2-user@<LCM instance IP address>
sudo su -
source /home/ec2-user/raf-240/rafenv/bin/activate
cd /home/ec2-user/raf-240/management/SBC/upgrade
Edit the file /home/ec2-user/raf-240/management/SBC/upgrade/aws_access.yml to provide AWS access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed aws_access.yml file is shown below.
Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the RAF/LCM server to the SBC SWe (a quick reachability check is sketched after the example).
############################################################################
# This file has 2 blocks of information:
# 1) AWS access details
# 2) SBC Instance/group details
# Update this file as per the directions provided at respective fields
#############################################################################
#AWS access details should be sourced as environment variables as follows:
#export AWS_ACCESS_KEY_ID=my-aws-access-key
#export AWS_SECRET_ACCESS_KEY=my-aws-secret-key
#############################################################################
#
# Update AWS region and zone
#
provider: "aws"
region: "ap-southeast-1"
zone: "ap-southeast-1c"
#
# Update SBC instance's CLI login details, user must be Administrator group, e.g. default user 'admin'
#
login_details:
  username: "admin"
  password: "myAdminPassword"
#
# Update redundancy group details
# 1) In case of active/standby (1:1) configuration, provide details of both the instances in a redundancy group. Order doesn't matter.
# 2) In case of standalone (single node) configuration, a redundancy group will have info of the single instance only.
# If username and password are same for all the instances and same as in "login_details" above,
# can remove those lines, e.g. a simpler version looks like this:
# instance1:
#   instance_id: "i-my-instance-id-1"
#   instance_ip: "1.2.3.4"
#
# Note: The script is limited to support just 1 redundancy group. Please dont add other redundancy group to this file else it will fail.
redundancy_group:
  instance1:
    instance_id: "i-my-instance-id-1"
    instance_ip: "1.2.3.4"
    login_details:
      username: "admin"
      password: "myAdminPassword"
  instance2:
    instance_id: "i-my-instance-id-2"
    instance_ip: "1.2.3.5"
    login_details:
      username: "admin"
      password: "myAdminPassword"
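As a quick, optional reachability check from the RAF/LCM server, the sketch below pings the sample instance_ip from the file above; substitute your own address. ICMP may be blocked by security groups, so a failed ping is not conclusive on its own:
# Replace 1.2.3.4 with the instance_ip you entered in aws_access.yml
ping -c 3 1.2.3.4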
Edit the file /home/ec2-user/raf-240/management/SBC/upgrade/upgrade-revert.yml to provide the AMI ID, the upgrade tag, and the order in which instances are upgraded to the new SBC version. The following example provides the AMI to use for the upgrade and specifies upgrading instance i-0987654321dcba and then instance i-0123456789abcd (an optional cross-check is sketched after the example).
#
# Image/AMI id to use for upgrade
#
image_id: ami-001122334455abcd
#
# A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
# with these so that future reference to these will be easier with appropriate tag.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
#
upgrade_tag: RAF_UPGRADE_70S406_72S400_344
#
# Order of upgrade:
# All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same group, that will impact service.
#
# On successful upgrade of instances in one group, instances in next group will be picked.
# Example Usecases:
# 1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
# 2) If want to upgrade just standby, list that in first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instance that will be upgraded in parallel
  upgradeGroup1:
    tag: RAF_UPGRADE_ACTIVENEW-1
    instances:
      - i-0987654321dcba
  upgradeGroup2:
    tag: RAF_UPGRADE_STDBYNEW-1
    instances:
      - i-0123456789abcd
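As an optional sanity check before running the upgrade, the command below simply lists every line containing an instance ID in both files so you can visually confirm that the IDs in upgrade-revert.yml match the redundancy group in aws_access.yml; this is only a convenience, not part of the RAF tooling:
# Run from /home/ec2-user/raf-240/management/SBC/upgrade
grep -n 'i-' aws_access.yml upgrade-revert.yml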
From the LCM session, in the /home/ec2-user/raf-240/management/SBC/upgrade directory, run the upgrade command with aws_access.yml and upgrade-revert.yml as inputs:
./upgrade.py -a aws_access.yml -u upgrade-revert.yml
For an offline upgrade, use the command:
./upgrade.py -a aws_access.yml -u upgrade-revert.yml -o
Upgrade progress and logs are shown on-screen and also logged in /home/ec2-user/raf-240/management/log/SBC/upgrade/aws/latest.
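To follow progress from a second session, you can tail the files in the latest log directory; the exact log file names are not specified here, so the wildcard is an assumption:
tail -f /home/ec2-user/raf-240/management/log/SBC/upgrade/aws/latest/*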
After successfully upgrading all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ec2-user/raf-240/management/log/SBC/upgrade/aws/history.
Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; they are required in order to perform a reversion.
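If you want to review the retained volumes, one generic way is to list volumes in the region that are in the available (detached) state with the AWS CLI; this assumes the CLI is configured with the same credentials and region as in aws_access.yml, and the filter is generic rather than specific to how RAF records these volumes:
aws ec2 describe-volumes --region ap-southeast-1 --filters Name=status,Values=available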
Edit the file /home/ec2-user/raf-240/management/SBC/upgrade/upgrade-revert.yml by designating the instances to revert. The following example provides a list of instances.
#
# Image/AMI id to use for upgrade
#
image_id: ami-001122334455abcd
#
# A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
# with these so that future reference to these will be easier with appropriate tag.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
#
upgrade_tag: RAF_UPGRADE_70S406_72S400_344
#
# Order of upgrade:
# All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same group, that will impact service.
#
# On successful upgrade of instances in one group, instances in next group will be picked.
# Example Usecases:
# 1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
# 2) If want to upgrade just standby, list that in first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instance that will be upgraded in parallel
  upgradeGroup1:
    tag: RAF_UPGRADE_ACTIVENEW-1
    instances:
      - i-0987654321dcba
  upgradeGroup2:
    tag: RAF_UPGRADE_STDBYNEW-1
    instances:
      - i-0123456789abcd
From the LCM session, in the /home/ec2-user/raf-240/management/SBC/upgrade/ directory, run the revert command with aws_access.yml and upgrade-revert.yml as inputs:
./revert.py -a aws_access.yml -u upgrade-revert.yml
Reversion progress and logs are shown on-screen and also logged in /home/ec2-user/raf-240/management/log/SBC/revert/aws/latest.
After successfully reverting all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ec2-user/raf-240/management/log/SBC/revert/aws/history.
Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; they are required in order to perform a reversion.