Customer Method to Install Life Cycle Manager 

The Ribbon Life Cycle Manager (LCM) provides standalone SBC upgrades in GCP. Once the LCM image is launched, the gcp_access.yml and upgrade-revert.yml template files (needed for the upgrade) are available in the user-specified path.

Example: the /home/ubuntu-user/10.0/iac/management/SBC/upgrade directory.

Prerequisites

  • Determine the Linux Image to use for the LCM.
  • Retrieve the RBBN LCM scripts from the Ribbon Support Portal.
    • The minimum supported version is iac_sustaining_22.05_b296.tar.gz.

Steps

  1. Launch an n1-standard-1 instance with the LCM image.
  2. After it is successfully instantiated, log in to the instance as ubuntu-user and switch to root:

    ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
    sudo su -
  3. Create a directory within /home/ubuntu-user/ using the command:

    mkdir /home/ubuntu-user/iac
  4. Copy the latest LCM tarball IAC package from the Ribbon Support Portal to /home/ubuntu-user/iac.
  5. Untar the tarball inside /home/ubuntu-user/iac using the command: 

    tar -xzvf iac_sustaining_*.tar.gz
  6. Change to the /home/ubuntu-user/iac directory and read the README file for any updated instructions.
  7. Install the python-pip package using a method appropriate for the OS. See the following examples for reference:

    1. Amazon Linux 2 Image (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
    2. CentOS7 : yum install -y epel-release; yum install -y python2-pip
    3. RHEL7: yum install -y python2-pip
    4. Debian9/Ubuntu18: apt-get install python-pip

      The Ribbon LCM package requires python2 and python2-pip to run. The examples above show Linux distribution versions that support python2. In newer versions, the python2 package is not available from the standard package repositories, so you must install it from a different source, as in the sketch below.
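
      For example, on Ubuntu 20.04 (an assumption; adjust for your distribution), python2 is still available from the universe repository and pip can be bootstrapped with the official PyPA script:

        apt-get update
        apt-get install -y python2
        curl -sSL https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
        python2 get-pip.py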

  8. Install the Python virtual environment.

    pip2 install virtualenv
  9. Create the virtual environment and provide a name, for example, 'iacenv'.

    virtualenv -p /usr/bin/python2 iacenv
  10. Activate the virtual environment.

    source iacenv/bin/activate
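
    As a quick sanity check (not part of the packaged procedure), confirm that the activated environment points at Python 2:

      python --version    # expect Python 2.7.x
      pip --version       # pip should resolve inside the iacenv directory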


  11. Complete the LCM instance setup by running the following command: 

    ./setup.py
  12. Change the directory to /home/ubuntu-user/iac/management/SBC/upgrade.
  13. List the contents of the directory.
  14. Ensure the following files are present:
    1. upgrade.py
    2. gcp_access.yml
    3. upgrade-revert.yml
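
    For reference, steps 12 through 14 can be combined as follows; the expected file names are taken from the list above:

      cd /home/ubuntu-user/iac/management/SBC/upgrade
      ls
      # Expect at least: gcp_access.yml  upgrade-revert.yml  upgrade.py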
Note

It is safe to shut down the LCM instance after the process finishes. If you decide to terminate or remove the LCM instance, you must first make a backup of /var/log/ribbon, which is required for future reversions and debugging; a minimal sketch follows.
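
A minimal backup sketch, assuming the gsutil CLI is available on the LCM instance and that gs://my-lcm-backups is a hypothetical bucket you can write to:

    tar -czvf ribbon-logs-$(date +%Y%m%d).tar.gz /var/log/ribbon
    gsutil cp ribbon-logs-*.tar.gz gs://my-lcm-backups/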

Customer Methods for Upgrading and Reverting

Prerequisites

  • Upload a copy of the new SBC image and ensure it is available in GCP.
  • Copy the latest LCM tarball IAC package from the Ribbon Support Portal, upload it to the LCM instance, and extract it.

  • Ensure that a GCP instance (n1-standard-1) with the Life Cycle Manager (LCM) image installed is available. The default gcp_access.yml and upgrade-revert.yml files are present in the /home/ubuntu-user/iac/management/SBC/upgrade/ directory of the LCM instance.

Note
  • Ensure that the security group settings of the SBC instances allow the LCM instance to reach the management IP of the SBC instances on ports 22, 443, and 444 for the SSH, SFTP, and HTTP services.
  • Ensure that the LCM instance has sufficient privileges to access images; create, attach, and detach volumes; and show, start, stop, reboot, and update instances. A hedged firewall-rule sketch follows this list.
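
As a sketch only, a GCP firewall rule allowing this reachability could look like the following; the rule name, source range, and target tag are assumptions to adapt to your deployment:

    gcloud compute firewall-rules create allow-lcm-to-sbc-mgmt \
        --direction=INGRESS --action=ALLOW \
        --rules=tcp:22,tcp:443,tcp:444 \
        --source-ranges=<LCM instance IP>/32 \
        --target-tags=sbc-mgmt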

Steps

  1. Open a session to the Life Cycle Manager instance (LCM) and switch to root.

    ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
    sudo su
  2. Activate the virtual environment.

    source iacenv/bin/activate
  3. Change to the workflow directory /home/ubuntu-user/iac/management/SBC/upgrade:

    cd /home/ubuntu-user/iac/management/SBC/upgrade
  4. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/gcp_access.yml to provide GCP access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed gcp_access.yml file is shown below.

    Note

    Use either the private management IP address or the external IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the RAF/LCM server to the SBC SWe; a quick check is sketched below.
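
    As a quick reachability check from the LCM instance (assuming the nc utility is installed; <SBC mgmt IP> is a placeholder), probe the required ports before editing the file:

      nc -vz <SBC mgmt IP> 22
      nc -vz <SBC mgmt IP> 443
      nc -vz <SBC mgmt IP> 444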

    gcp_access.yml
    ########################################################################
    # This file has 2 blocks of information:
    #   1) GCP access details
    #   2) SBC Instance/group details
    # Update this file as per the directions provided at respective fields
    ########################################################################
    #
    # Update GCP region and zone
    #
    provider: "gcp"
    region: "us-central1"
    zone: "us-central1-a"
     
    #
    # Update GCP access and security keys
    #
    access_data:
          gcp_auth_kind: "serviceaccount"
          gcp_service_account_file: "/home/ubuntu-user/account.json"
          gcp_project: "sonus-svt-gc-poc"
    # Update the SBC instance's CLI login details; the user must be in the Administrator group, e.g., the default user 'admin'
    #
    login_details:
          username: "admin"
          password: "Sonus@123"
     
    #
    # Update redundancy group details, in case of active/standby (1:1) configuration,
    # provide details of both the instances. Order doesn't matter.
    # If the username and password are the same for all the instances and the same as in "login_details" above,
    # you can remove those lines, e.g. a simpler version looks like this:
    #      instance1:
    #            instance_id: i-my-instance-id-1
    #            instance_ip: 1.2.3.4
    #
    # Note: The script supports only one redundancy group. Do not add another redundancy group to this file; otherwise, the script will fail.
     
    redundancy_group1:
          instance1:
                instance_id: "1496492748014486928"
                instance_ip: "35.202.116.196"
                login_details:
                      username: "admin"
                      password: "myAdminPassword"
          instance2:
                instance_id: "i-my-instance-id-2"
                instance_ip: "1.2.3.5"
                login_details:
                      username: "admin"
                      password: "myAdminPassword"
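
    The gcp_service_account_file referenced above must already exist on the LCM instance. As a hedged sketch, a key file for a service account (the account name 'lcm-sa' here is hypothetical) can be generated with:

      gcloud iam service-accounts keys create /home/ubuntu-user/account.json \
          --iam-account=lcm-sa@sonus-svt-gc-poc.iam.gserviceaccount.com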

Upgrade

  1. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml to provide the image ID, the upgrade tag, and the order in which to upgrade the instances to the new SBC version. The following example provides the image to use for the upgrade and specifies that the instances in upgradeGroup1 are upgraded first, followed by the instances in upgradeGroup2.

    upgrade-revert.yml
    ########################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #
    # image_id - new image to use for upgrade. Use it as follows for different providers:
    # AWS       - AMI id with new SBC image
    # gcp       - Name of the new SBC  image
    # openstack - image id with new SBC image
    # vmware    - Absolute path to vmdk with new SBC image
    # rhv/kvm   - Absolute path to qcow2 image
    # azure     - Snapshot name
    #
    image_id: "image_or_ami_use-for-upgrade"
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    # If multiple upgrades are done using the same upgrade-revert.yml file, use a different upgrade_tag each time. Refer
    # to the corresponding upgrade_tag when doing the revert.
    upgrade_tag: "IAC_TEST"
    #
    # Order of upgrade:
    # All the instances listed under one group get upgraded in parallel, so they are stopped, rebuilt, and started in parallel.
    #
    # WARNING: If both the active and standby of an HA pair are listed in the same upgrade group, service will be impacted.
    #
    # On successful upgrade of the instances in one group, the instances in the next group are picked.
    # Example use cases:
    #   1) When upgrading a 1:1 (active/standby) pair, list the standby in the first group and the active in the second group
    #   2) To upgrade just the standby, list it in the first group and remove group 2
    #   3) When upgrading a standalone, list the instance in the first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instances that will be upgraded in parallel
          upgradeGroup1:
                tag: "test1"
                instances:
                      - "i-instance-id-2"
                      - "i-instance-id-4"
          upgradeGroup2:
                tag: "test2"
                instances:
                      - "i-instance-id-1"
                      - "i-instance-id-3"
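
    For use case 3 above (a standalone SBC), a minimal upgrade-revert.yml sketch could look like the following; the image name, tag, and instance ID are placeholders:

      image_id: "new-sbc-image-name"
      upgrade_tag: "STANDALONE_UPGRADE_1"
      tasks:
            upgradeGroup1:
                  tag: "standalone"
                  instances:
                        - "i-standalone-instance-id"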
  2. From the LCM session, in /home/ubuntu-user/iac/management/SBC/upgrade, run the upgrade command with gcp_access.yml and upgrade-revert.yml as inputs:

    ./upgrade.py -a gcp_access.yml -u upgrade-revert.yml
    Note

    For an offline upgrade, use the command:  ./upgrade.py -a gcp_access.yml -u upgrade-revert.yml -o

    Note

    Upgrade progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/gcp/latest.

  3. After all nodes listed in upgrade-revert.yml are successfully upgraded, the timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/gcp/history.

    Note

    Volumes with older software versions are left intact on GCP in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; they are required to perform a reversion. A sketch for listing them follows.
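
    To review these leftover volumes, you can list the disks in the zone (a hedged sketch; the zone is taken from the gcp_access.yml example above):

      gcloud compute disks list --zones us-central1-a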

Revert

  1. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml by designating the instances to revert. The following example provides a list of instances. 

    Note
    • The reversion process runs in parallel on all the instances and could impact service.
    • Make sure that all the instances of a redundancy group revert to the same SBC version; failure to maintain the same version within a group causes unpredictable behavior and could cause a service outage.
    upgrade-revert.yml
    #
    # Image/AMI id to use for upgrade
    #
    image_id: image_or_ami_use-for-upgrade
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    upgrade_tag: RAF_UPGRADE_70S406_72S400_344
    #
    # Order of upgrade:
    # All the instances listed under one group get upgraded in parallel, so they are stopped, rebuilt, and started in parallel.
    #
    # WARNING: If both the active and standby of an HA pair are listed in the same group, service will be impacted.
    #
    # On successful upgrade of the instances in one group, the instances in the next group are picked.
    # Example use cases:
    #   1) When upgrading a 1:1 (active/standby) pair, list the standby in the first group and the active in the second group
    #   2) To upgrade just the standby, list it in the first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instances that will be upgraded in parallel
          upgradeGroup1:
                tag: RAF_UPGRADE_ACTIVENEW-1
                instances:
                      - i-0987654321dcba
          upgradeGroup2:
                tag: RAF_UPGRADE_STDBYNEW-1
                instances:
                      - i-0123456789abcd
  2. From the LCM session, in /home/ubuntu-user/iac/management/SBC/upgrade/, run the revert command with gcp_access.yml and upgrade-revert.yml as inputs:

     ./revert.py -a gcp_access.yml -u upgrade-revert.yml
    Note

    Reversion progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/revert/gcp/latest.

  3. After all nodes listed in upgrade-revert.yml are successfully reverted, the timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/revert/gcp/history.

    Note

    Volumes with older software versions are left intact on GCP in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; they are required to perform a reversion.