Customer Method to Install Life Cycle Manager 

The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in AWS. Once the LCM AMI is launched, the aws_access.yml and upgrade.yml template files (needed for the upgrade) are located in the /home/ec2-user/iac/management/SBC/upgrade/ directory.

Prerequisite

  • The LCM AMI must be available in AWS, and the LCM tarball must be available to copy to the instance.

Steps

  1. Launch a t2.micro instance with the LCM AMI.
  2. Once successfully instantiated, log into the instance as ec2-user and switch to root:

    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -
  3. Copy the LCM tarball (iac-1.2-20200530-082800.tar.gz) to this instance.
  4. Untar the tarball using the command "tar -xvf iac-1.2-20200530-082800.tar.gz".
  5. Change to the /home/ec2-user/iac directory and view the README file for any updated instructions.
  6. Install the python-pip package using the method appropriate for your OS; see the following examples:

    1. Amazon Linux 2 AMI (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
    2. CentOS 7: yum install -y epel-release; yum install -y python2-pip
    3. RHEL 7: yum install -y python2-pip
    4. Debian 9/Ubuntu 18: apt-get install python-pip
  7. Install virtualenv using the command "pip install virtualenv". Refer to https://packaging.python.org/guides/installing-using-pip-and-virtualenv/ for additional information on pip and virtualenv.
  8. Create the Python virtual environment with the command "python -m virtualenv iacenv".
  9. Activate the virtual environment using the command "source iacenv/bin/activate".
  10. Complete the LCM instance setup by running setup.py, e.g. "./setup.py".
  11. Change directory to /home/ec2-user/iac/management/SBC/upgrade.
  12. List the contents of the directory.
  13. Ensure that the files upgrade.py, aws_access.yml, and upgrade.yml are present. (A consolidated command sequence for these installation steps is shown below.)
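For reference, the steps above collapse into a shell session like the following. This is a sketch, not part of the product procedure: it assumes the tarball from step 3 was copied to /home/ec2-user and uses the Amazon Linux 2 variant of step 6; adjust the package commands and tarball name for your OS and release.

    # Log in and become root (steps 1-2)
    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -

    # Unpack the tarball and review the README (steps 3-5); the copy location is an assumption
    cd /home/ec2-user
    tar -xvf iac-1.2-20200530-082800.tar.gz
    cd iac
    cat README

    # Install pip and virtualenv, then create and activate the environment (steps 6-9, Amazon Linux 2)
    amazon-linux-extras install epel; yum install -y python-pip
    pip install virtualenv
    python -m virtualenv iacenv
    source iacenv/bin/activate

    # Complete the setup and confirm the upgrade files are present (steps 10-13)
    ./setup.py
    cd /home/ec2-user/iac/management/SBC/upgrade
    ls -l upgrade.py aws_access.yml upgrade.yml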


It is safe to shut down the LCM instance after the process finishes. If you decide to terminate or remove the LCM instance, you must first back up /var/log/ribbon, which is required for future reversions and debugging.
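One way to take that backup before terminating the instance is to archive the directory and copy it off the LCM instance. The destination host and path below are placeholders, not values from this procedure:

    # Archive the LCM logs needed for future reversions and debugging
    tar -czf /tmp/ribbon-logs-$(date +%Y%m%d).tar.gz /var/log/ribbon

    # Copy the archive to a backup location of your choice (placeholder destination)
    scp /tmp/ribbon-logs-*.tar.gz user@backup-host:/path/to/backups/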

Customer Methods for Upgrading and Reverting

Prerequisites

  • The new SBC AMI and the LCM AMI must be uploaded and available in AWS.
  • You must have an AWS instance (t2.micro) launched from the Life Cycle Manager (LCM) AMI. (The default aws_access.yml and upgrade.yml files are present in the /home/ec2-user/iac/management/SBC/upgrade/ directory of the LCM instance.)


  • Make sure that the SBC instances' security group settings allow the LCM instance to reach the SBC instances' management IP on ports 22, 443, and 444 for SSH, SFTP, and HTTP services (an example rule is shown after this list).
  • The LCM instance must have sufficient AWS privileges to access AMIs; create, attach, and detach volumes; and show, start, stop, reboot, and update instances.
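As an illustration of the security group requirement above, rules like the following would allow the LCM instance to reach an SBC management interface on the required ports. The security group ID, LCM source address, and region are placeholders; in practice you may manage these rules through the AWS console or your own tooling.

    # Allow TCP 22, 443, and 444 from the LCM instance to the SBC's security group
    for port in 22 443 444; do
        aws ec2 authorize-security-group-ingress \
            --region ap-southeast-1 \
            --group-id sg-0123456789abcdef0 \
            --protocol tcp --port "$port" \
            --cidr 10.0.0.10/32
    done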


Steps

  1. Open a session to the Life Cycle Manager instance (LCM) and switch to root.
  2. Enter the LCM virtual environment using command "source iacenv/bin/activate".
  3. Change to the workflow directory /home/ec2-user/iac/management/SBC/upgrade:

    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -
    source iacenv/bin/activate
    cd /home/ec2-user/iac/management/SBC/upgrade
  4. Edit the file /home/ec2-user/iac/management/SBC/upgrade/aws_access.yml to provide AWS access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed aws_access.yml file is shown below.

    Note

    Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the IAC/LCM server to the SBC SWe. (A reachability check sketch follows the example file.)

    aws_access.yml
    ########################################################################
    # This file has 2 blocks of information:
    #   1) AWS access details
    #   2) SBC Instance/group details
    # Update this file as per the directions provided at respective fields
    ########################################################################
    #
    # Update AWS region and zone
    #
    provider: aws
    region: ap-southeast-1
    zone: ap-southeast-1c
     
    #
    # Update AWS access and security keys
    #
    access_keys:
          aws_access_key: my_access_key
          aws_secret_key: my_secret_key
     
    #
    # Update SBC instance's CLI login details, user must be Administrator group, e.g. default user 'admin'
    #
    login_details:
          username: admin
          password: myAdminPassword
    #
    # Update redundancy group details, in case of active/standby (1:1) configuration,
    # provide details of both the instances. Order doesn't matter.
    # If username and password are same for all the instances and same as in "login_details" above,
    # can remove those lines, e.g. a simpler version looks like this:
    #      instance1:
    #            instance_id: i-my-instance-id-1
    #            instance_ip: 1.2.3.4
    #
    redundancy_group1:
          instance1:
                instance_id: i-my-instance-id-1
                instance_ip: 1.2.3.4
                login_details:
                      username: admin
                      password: myAdminPassword
     
          instance2:
                instance_id: i-my-instance-id-2
                instance_ip: 1.2.3.5
                login_details:
                      username: admin
                      password: myAdminPassword
    #
    # Repeat additional redundancy groups similar to "redundancy_group1" if the operations(e.g. upgrades) are performed on multiple groups simultaneously.
    # If only one redundancy group is updated at a time, remove below block.
    #
    redundancy_group2:
          instance1:
                instance_id: i-my-instanceid-3
                instance_ip: 1.2.3.6
                login_details:
                      username: admin
                      password: myAdminPassword
     
          instance2:
                instance_id: i-my-instanceid-4
                instance_ip: 1.2.3.7
                login_details:
                      username: admin
                      password: myAdminPassword
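As a quick sanity check before upgrading, you can verify from the LCM instance that each SBC management IP listed in aws_access.yml is reachable on the required ports. This sketch uses the example address 1.2.3.4 from the file above and assumes nc (netcat) is installed on the LCM instance:

    # Check TCP reachability from the LCM instance to an SBC management IP
    for port in 22 443 444; do
        nc -z -w 5 1.2.3.4 "$port" && echo "port $port reachable" || echo "port $port NOT reachable"
    done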

Upgrading


  1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade.yml to provide the AMI ID, the upgrade tag, and the order in which instances are upgraded to the new SBC version. The following example specifies the AMI to use for the upgrade and upgrades the instances in upgradeGroup1 first, followed by the instances in upgradeGroup2.

    upgrade.yml
    ########################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #
    # Image/AMI id to use for upgrade
    #
    image_id: ami-id-to-use-for-upgrade
     
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    upgrade_tag: IAC_TEST
     
    #
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: test1
                instances:
                      - i-instance-id-1
                      - i-instance-id-2
          upgradeGroup2:
                tag: test2
                instances:
                      - i-instance-id-3
                      - i-instance-id-4
  2. From the LCM session, in /home/ec2-user/iac/management/SBC/upgrade, run the upgrade command with aws_access.yml and upgrade.yml as inputs:

    ./upgrade.py -a aws_access.yml -u upgrade.yml

    Upgrade progress and logs are shown on-screen and also logged in /home/ec2-user/iac/management/log/SBC/upgrade/aws/latest. (A log-monitoring sketch follows these steps.)

  3. After successfully upgrading all nodes listed in upgrade.yml, timestamped logs are moved to /home/ec2-user/iac/management/log/SBC/upgrade/aws/history.

    Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name.
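To watch an upgrade from a second terminal on the LCM instance, or to find the archived logs for a past run, something like the following can be used. It assumes the files under the latest directory are plain text and that the upgrade_tag value from upgrade.yml (IAC_TEST in the example above) appears in the archived log names, as the file comments indicate:

    # Follow the live upgrade logs in a second LCM session
    tail -F /home/ec2-user/iac/management/log/SBC/upgrade/aws/latest/*

    # After completion, locate the archived logs for a given upgrade tag
    ls -lt /home/ec2-user/iac/management/log/SBC/upgrade/aws/history | grep IAC_TEST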

Reverting

  1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade.yml by designating the instances to revert. The following example provides a list of instances. 

    • The reversion process runs in parallel on all the listed instances and can impact service.
    • Make sure that all the instances of a redundancy group are reverted to the same SBC version; failure to maintain the same version within a group causes unknown behavior and could result in a service outage.
    upgrade.yml
    #
    # Image/AMI id to use for upgrade
    #
    image_id: ami-001122334455abcd
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    upgrade_tag: IAC_UPGRADE_70S406_72S400_344
    #
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: IAC_UPGRADE_ACTIVENEW-1
                instances:
                      - i-0987654321dcba
          upgradeGroup2:
                tag: IAC_UPGRADE_STDBYNEW-1
                instances:
                      - i-0123456789abcd
  2. From the LCM session, in /home/ec2-user/iac/management/SBC/upgrade/, run the revert command with aws_access.yml and upgrade.yml as inputs:

     ./revert.py -a aws_access.yml -u upgrade.yml

    Reversion progress and logs are shown on-screen and also logged in /home/ec2-user/iac/management/log/SBC/revert/aws/latest.

  3. After successfully reverting all nodes listed in upgrade.yml, timestamped logs are moved to /home/ec2-user/iac/management/log/SBC/revert/aws/history.

    Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; you must have them in order to perform a reversion. (A volume check sketch follows these steps.)
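Before attempting a reversion, you can confirm that the pre-upgrade volumes still exist by listing volumes in the region with the AWS CLI and cross-checking them against the instance-ID-named information files written during the upgrade. The region and the status filter below are only an illustration, since the exact tagging of these volumes is not documented here:

    # List detached volumes in the region and cross-check against the volume
    # information files recorded during the upgrade
    aws ec2 describe-volumes \
        --region ap-southeast-1 \
        --filters Name=status,Values=available \
        --query 'Volumes[].{ID:VolumeId,Created:CreateTime,Size:Size}' \
        --output table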