Customer Method to Install Life Cycle Manager 

The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in AWS. Once the LCM AMI is launched, the access.yml and upgrade.yml template files (needed for the upgrade) are located in the /home/ec2-user/iac/management/SBC/upgrade/ directory.

Prerequisite

Ensure LCM AMI is available/shared with the account.

Steps

  1. Launch a t2.micro instance with the LCM AMI.
  2. Once the instance is successfully instantiated, log in as ec2-user and switch to root:

    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -
  3. Change directory to /home/ec2-user/iac/management/SBC/upgrade.
  4. List the contents of the directory.
  5. Ensure that files upgrade.py, aws_access.yml and upgrade.yml are present.
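Steps 3 through 5 can be scripted as a quick check. The following is a minimal sketch, assuming a bash shell on the LCM instance; the helper name check_upgrade_files is illustrative, not part of the product:

```shell
#!/usr/bin/env bash
# Verify that the upgrade templates shipped with the LCM AMI are present.
# File names are taken from this MOP; the function name is ours.
check_upgrade_files() {
    local dir="$1" missing=0 f
    for f in upgrade.py aws_access.yml upgrade.yml; do
        if [ -f "$dir/$f" ]; then
            echo "OK: $f"
        else
            echo "MISSING: $f"
            missing=1
        fi
    done
    return "$missing"
}
```

Run it as check_upgrade_files /home/ec2-user/iac/management/SBC/upgrade; a nonzero exit status means at least one template is missing.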


It is safe to shut down the LCM instance after the process finishes. If you decide to terminate or remove the LCM instance, first back up /var/log/ribbon; this backup is required for future reversions and debugging.
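The log backup can be done with a single tar command before terminating the instance. A sketch, assuming tar is available on the LCM instance; the function and archive names are examples, not part of the product:

```shell
#!/usr/bin/env bash
# Archive /var/log/ribbon (or another directory passed as $1) into a
# timestamped tarball in the current directory, then print the archive name.
backup_ribbon_logs() {
    local src="${1:-/var/log/ribbon}"
    local archive="ribbon-logs-$(date +%Y%m%d-%H%M%S).tar.gz"
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" \
        && echo "$archive"
}
```

Copy the resulting archive off the instance (for example with scp) before terminating it.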

Customer Methods for Upgrading and Reverting

Prerequisites

  • The new SBC AMI and the LCM AMI must be uploaded and available in AWS.
  • You must have an AWS instance (t2.micro) with the Life Cycle Manager (LCM) AMI installed. (The default access.yml, upgrade.yml, and revert.yml files are present in the /opt/ribbon/lcm/wflow directory of the LCM instance.)
  • Make sure that the SBC instances' security group settings allow the LCM instance to reach the SBC instances' management IP on ports 22, 443, and 444 for the SSH, SFTP, and HTTP services.
  • The LCM instance must have sufficient AWS privileges to access AMIs; create, attach, and detach volumes; and show, start, stop, reboot, and update instances.
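The security group requirement can be verified from the LCM instance with a simple reachability probe. This sketch uses bash's /dev/tcp redirection; the function name is ours, and a filtered port may take the full timeout to report:

```shell
#!/usr/bin/env bash
# Probe the ports this MOP requires (22, 443, 444) on an SBC management IP.
# Prints one line per port; returns nonzero if any port is unreachable.
check_sbc_ports() {
    local host="$1" port rc=0
    for port in 22 443 444; do
        if timeout 3 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
            echo "reachable: $host:$port"
        else
            echo "blocked:   $host:$port"
            rc=1
        fi
    done
    return "$rc"
}
```

For example, check_sbc_ports 172.31.50.76 should report all three ports as reachable before you proceed.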


Steps

1. Open a session to the Life Cycle Manager (LCM) instance, switch to root, and change to the workflow directory:


ssh -i lcm.pem ec2-user@<LCM instance IP address>

sudo su -

cd /opt/ribbon/lcm/wflow



2. Edit the file /opt/ribbon/lcm/wflow/access.yml to provide AWS access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed access.yml file is shown here:


access.yml
# Yaml file that will contain access details for each of the instances

region: us-east-1

zone: us-east-1c

access_keys:
      aws_access_key: MYAWSACCESSKEY
      aws_secret_key: MYAWSSECRETKEY

auth_details:
      username: admin
      password: myAdminPasswd

haGroup1:
      instance1:
            instance_id: i-0f4131dadc03d23c8
            instance_ip: 172.31.50.76
            auth_details:
                  username: admin
                  password: myAdminPassword
 
      instance2:
            instance_id: i-0d4958cf4727814f0
            instance_ip: 172.31.50.251
            auth_details:
                  username: admin
                  password: myAdminPassword
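Before running the workflow, a coarse sanity check that the required top-level keys from this example are present can catch typos early. A grep-based sketch, not a full YAML parse; the function name is ours:

```shell
#!/usr/bin/env bash
# Coarse check for the top-level keys shown in the access.yml example.
check_access_yml() {
    local file="$1" key rc=0
    for key in region zone access_keys auth_details; do
        if grep -q "^${key}:" "$file"; then
            echo "found: $key"
        else
            echo "missing: $key"
            rc=1
        fi
    done
    return "$rc"
}
```

Run it as check_access_yml access.yml from the workflow directory; note it does not validate the per-instance haGroup entries.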


Upgrading


1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade.yml to provide the AMI ID and the upgrade sequence to the new SBC version. The following example upgrades instance i-0987654321dcba first, and then i-0123456789abcd.


upgrade.yml
amiid: ami-001122xxxx55abcd

tasks:
      upgradeGroup1:
            - i-0987654321dcba

      upgradeGroup2:
            - i-0123456789abcd
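The same kind of coarse sanity check applies to upgrade.yml. This sketch verifies that an amiid line and at least one instance ID entry are present; the function name is ours and the patterns match the example layout above, not a full YAML parse:

```shell
#!/usr/bin/env bash
# Coarse check for the upgrade.yml example above: an amiid line plus at
# least one "- i-..." instance entry.
check_upgrade_yml() {
    local file="$1" rc=0
    grep -q '^amiid: ami-' "$file" || { echo "missing: amiid"; rc=1; }
    grep -Eq '^[[:space:]]*- i-[0-9a-fx]+' "$file" \
        || { echo "missing: instance ids"; rc=1; }
    [ "$rc" -eq 0 ] && echo "upgrade.yml looks sane"
    return "$rc"
}
```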


2. From the LCM session in /home/ec2-user/iac/management/SBC/upgrade, run the upgrade command with access.yml and upgrade.yml as inputs:


./upgradeSbc.py -a access.yml -u upgrade.yml

Upgrade progress and logs are shown on-screen and also logged in /var/log/ribbon/upgrade/latest.

  

3. After successfully upgrading all nodes listed in upgrade.yml, timestamped logs are moved to /var/log/ribbon/upgrade/history.


Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file whose name includes the instance ID.


Reverting

1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/revert.yml, designating the instances to revert. The following example provides a list of instances.


  • The reversion process runs in parallel on all the instances and can impact service.
  • Make sure that all instances of a redundancy group are reverted to the same SBC version; failing to maintain the same version within a group causes unpredictable behavior and could result in a service outage.
revert.yml
tasks:
      revertGroup1:
            - i-0987654321dcba
            - i-0123456789abcd



2. From the LCM session in /home/ec2-user/iac/management/SBC/upgrade/, run the revert command with access.yml and revert.yml as inputs:


./revertSbc.py -a access.yml -u revert.yml

Reversion progress and logs are shown on-screen and also logged in /var/log/ribbon/revert/latest.


3. After successfully reverting all nodes listed in revert.yml, timestamped logs are moved to /var/log/ribbon/revert/history.

 

Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file whose name includes the instance ID. Do not delete these older volumes; they are required to perform a reversion.