Customer Method to Install Life Cycle Manager
The Ribbon Life Cycle Manager (LCM) provides standalone SBC upgrades in GCP. Once the LCM image is launched, the gcp_access.yml and upgrade-revert.yml template files (needed for the upgrade) are available in the user-specified path, for example the /home/ubuntu-user/10.0/iac/management/SBC/upgrade directory.
Prerequisites
- Determine the Linux image to use for the LCM.
- Retrieve the RBBN LCM scripts from the Ribbon Support Portal. The minimum supported version is iac_sustaining_22.05_b296.tar.gz.
Steps
- Launch an n1-standard-1 instance with the LCM image.
- After it is successfully instantiated, log in to the instance as ubuntu-user and switch to root:

```shell
ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
sudo su -
```

- Create a directory within /home/ubuntu-user/ using the command:

```shell
mkdir /home/ubuntu-user/iac
```
- Copy the latest LCM IAC tarball package from the Ribbon Support Portal to /home/ubuntu-user/iac, then untar it inside /home/ubuntu-user/iac using the command:

```shell
tar -xzvf iac_sustaining_*.tar.gz
```
- Change to the /home/ubuntu-user/iac directory and read the README file for any updated instructions. Install the python-pip package using a method appropriate for the OS. See the following examples for reference:
  - Amazon Linux 2 Image (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
  - CentOS7: yum install -y epel-release; yum install -y python2-pip
  - RHEL7: yum install -y python2-pip
  - Debian9/Ubuntu18: apt-get install python-pip

Warning: The Ribbon LCM package requires python2 and python2-pip to run. The examples above show the Linux distribution versions that support python2. In newer versions, the python2 package is not available from the standard package repositories, so you must install it from a different source.
- Install the Python virtualenv package:

```shell
pip2 install virtualenv
```

- Create the virtual environment and provide a name, for example 'iacenv':

```shell
virtualenv -p /usr/bin/python2 iacenv
```

- Activate the virtual environment:

```shell
source iacenv/bin/activate
```

- Complete the LCM instance setup by running the following command:

```shell
./setup.py
```
- Change the directory to /home/ubuntu-user/iac/management/SBC/upgrade.
- List the contents of the directory and ensure the following files are present:
  - upgrade.py
  - gcp_access.yml
  - upgrade-revert.yml
Note: It is safe to shut down the LCM instance after the process finishes. If you decide to terminate/remove the LCM instance, you must make a backup of …
Customer Methods for Upgrading and Reverting
Prerequisites
- Upload a copy of the new SBC image and ensure it is available in GCP.
- Copy the latest LCM IAC tarball package from the Ribbon Support Portal, upload it to the LCM instance, and extract it.
- Ensure that a GCP instance (n1-standard-1) with the installed Life Cycle Manager (LCM) image is available. The default gcp_access.yml and upgrade-revert.yml files are present in the /home/ubuntu-user/iac/management/SBC/upgrade/ directory of the LCM instance.
Steps
Open a session to the Life Cycle Manager (LCM) instance and switch to root:

```shell
ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
sudo su
```

Activate the virtual environment:

```shell
source iacenv/bin/activate
```

Change to the workflow directory:

```shell
cd /home/ubuntu-user/iac/management/SBC/upgrade
```
Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/gcp_access.yml to provide GCP access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed gcp_access.yml file is shown below.

Note: When performing an upgrade, first try using the instance ID; if errors are encountered, try using the name instead. The name and instance ID values are available from the Basic Information screen for the VM in GCE.

Note: Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the RAF/LCM server to the SBC SWe.
gcp_access.yml:

```yaml
########################################################################
# This file has 2 blocks of information:
# 1) GCP access details
# 2) SBC Instance/group details
# Update this file as per the directions provided at respective fields
########################################################################
#
# Update GCP region and zone
#
provider: "gcp"
region: "us-central1"
zone: "us-central1-a"
#
# Update GCP access and security keys
#
access_data:
  gcp_auth_kind: "serviceaccount"
  gcp_service_account_file: "/home/ubuntu-user/account.json"
  gcp_project: "sonus-svt-gc-poc"
#
# Update SBC instance's CLI login details; user must be in the Administrator group, e.g. default user 'admin'
#
login_details:
  username: "admin"
  password: "Sonus@123"
#
# Update redundancy group details; in case of active/standby (1:1) configuration,
# provide details of both the instances. Order doesn't matter.
# If username and password are the same for all the instances and the same as in "login_details" above,
# you can remove those lines, e.g. a simpler version looks like this:
# instance1:
#   instance_id: i-my-instance-id-1
#   instance_ip: 1.2.3.4
#
# Note: The script is limited to support just 1 redundancy group. Do not add another redundancy group to this file, else it will fail.
redundancy_group1:
  instance1:
    instance_id: "1496492748014486928"
    instance_ip: "35.202.116.196"
    login_details:
      username: "admin"
      password: "myAdminPassword"
  instance2:
    instance_id: "i-my-instance-id-2"
    instance_ip: "1.2.3.5"
    login_details:
      username: "admin"
      password: "myAdminPassword"
```
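The fields the workflow relies on can be sanity-checked before launching an upgrade. The following is a minimal illustrative sketch, not part of the Ribbon tooling: it assumes gcp_access.yml has already been parsed into a Python dict (for example with PyYAML), and the function name is hypothetical.

```python
def validate_gcp_access(cfg):
    """Check that a parsed gcp_access.yml dict has the fields the
    upgrade workflow relies on. Raises ValueError on the first problem."""
    for key in ("provider", "region", "zone", "access_data", "redundancy_group1"):
        if key not in cfg:
            raise ValueError("missing top-level key: %s" % key)
    group = cfg["redundancy_group1"]
    # For a 1:1 (active/standby) configuration, both instances must be listed.
    for name in ("instance1", "instance2"):
        inst = group.get(name)
        if not inst or "instance_id" not in inst or "instance_ip" not in inst:
            raise ValueError("%s needs instance_id and instance_ip" % name)
    return True

# Example with the values from the file above
cfg = {
    "provider": "gcp",
    "region": "us-central1",
    "zone": "us-central1-a",
    "access_data": {"gcp_project": "sonus-svt-gc-poc"},
    "redundancy_group1": {
        "instance1": {"instance_id": "1496492748014486928",
                      "instance_ip": "35.202.116.196"},
        "instance2": {"instance_id": "i-my-instance-id-2",
                      "instance_ip": "1.2.3.5"},
    },
}
print(validate_gcp_access(cfg))  # True
```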
Upgrade
Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml to provide the image ID, the upgrade tag, and the order in which to upgrade instances to the new SBC version. The following example provides the image to use for the upgrade and specifies upgrading instances i-instance-id-2 and i-instance-id-4 in parallel, followed by i-instance-id-1 and i-instance-id-3.

upgrade-revert.yml:

```yaml
########################################################################
# This file defines which instances to upgrade and in which order
# Update this file as per the directions provided at respective fields
########################################################################
#
# image_id - new image to use for upgrade. Use it as follows for different providers:
#   AWS       - AMI id with new SBC image
#   gcp       - Name of the new SBC image
#   openstack - image id with new SBC image
#   vmware    - Absolute path to vmdk with new SBC image
#   rhv/kvm   - Absolute path to qcow2 image
#   azure     - Snapshot name
#
image_id: "image_or_ami_use-for-upgrade"
#
# A tag to uniquely identify this upgrade. Logs and the directory structure of logs are tagged
# with this so that future reference to them is easier.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to the right set of logs.
#
# If multiple upgrades are done using the same upgrade-revert.yml file, use a different upgrade_tag each time. Refer to
# the corresponding upgrade_tag while doing the revert.
upgrade_tag: "IAC_TEST"
#
# Order of upgrade:
# All the instances listed under one group get upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
#
# On successful upgrade of instances in one group, instances in the next group will be picked.
# Example use cases:
# 1) While upgrading a 1:1 (active/standby), list the standby in the first group and the active in the second group
# 2) To upgrade just the standby, list it in the first group and remove group 2
# 3) While upgrading a standalone, list the instance in the first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instances that will be upgraded in parallel
  upgradeGroup1:
    tag: "test1"
    instances:
      - "i-instance-id-2"
      - "i-instance-id-4"
  upgradeGroup2:
    tag: "test2"
    instances:
      - "i-instance-id-1"
      - "i-instance-id-3"
```
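The ordering rules described in the comments (groups run one after another; instances inside a group run in parallel) can be sketched as follows. This is an illustration only, not the actual upgrade.py logic; it assumes the tasks: section has already been parsed into a dict.

```python
def upgrade_plan(tasks):
    """Return the upgrade order implied by a parsed tasks: section.
    Groups are processed sequentially (in name order); the instances
    within a group are upgraded in parallel. A group is only started
    after the previous group succeeds."""
    plan = []
    for group_name in sorted(tasks):
        group = tasks[group_name]
        plan.append({
            "group": group_name,
            "tag": group.get("tag", ""),
            "parallel_instances": list(group["instances"]),
        })
    return plan

# The tasks: section from the example above, as a dict
tasks = {
    "upgradeGroup1": {"tag": "test1",
                      "instances": ["i-instance-id-2", "i-instance-id-4"]},
    "upgradeGroup2": {"tag": "test2",
                      "instances": ["i-instance-id-1", "i-instance-id-3"]},
}
for step in upgrade_plan(tasks):
    print(step["group"], step["parallel_instances"])
```

Note that listing both members of an HA pair in the same group would put them in the same parallel batch, which is exactly the service-impacting case the WARNING above describes.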
From the LCM session, in /home/ubuntu-user/iac/management/SBC/upgrade, run the upgrade command with gcp_access.yml and upgrade-revert.yml as inputs:

```shell
./upgrade.py -a gcp_access.yml -u upgrade-revert.yml
```

Note: For an offline upgrade, use the command:

```shell
./upgrade.py -a gcp_access.yml -u upgrade-revert.yml -o
```

Note: Upgrade progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/gcp/latest.
After successfully upgrading all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/gcp/history.

Note: Volumes with older software versions are left intact on GCP in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; you must have them in order to perform a reversion.
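Because each run is tagged with upgrade_tag, locating the logs of a past run in the history directory reduces to a name match. A hypothetical sketch follows; the exact naming of the timestamped subdirectories is an assumption, and only the parent paths above come from the tooling.

```python
from pathlib import Path

def history_runs(history_root, upgrade_tag):
    """List log directories under the history folder whose names contain
    the given upgrade_tag, sorted by name. Returns an empty list if the
    history folder does not exist."""
    root = Path(history_root)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if upgrade_tag in p.name)
```

For example, history_runs("/home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/gcp/history", "IAC_TEST") would list the runs tagged IAC_TEST, if the directories are named after the tag.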
Revert
Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml to designate the instances to revert. The following example provides a list of instances.

Note:
- The reversion process runs in parallel on all the instances and could impact service.
- Make sure that all the instances of a redundancy group revert to the same SBC version. Failure to maintain the same version within a group will cause unpredictable behavior and could cause a service outage.

upgrade-revert.yml:

```yaml
#
# Image/AMI id to use for upgrade
#
image_id: image_or_ami_use-for-upgrade
#
# A tag to uniquely identify this upgrade. Logs and the directory structure of logs are tagged
# with this so that future reference to them is easier.
# When the same system goes through multiple upgrades, this becomes a very handy way to jump to the right set of logs.
#
upgrade_tag: RAF_UPGRADE_70S406_72S400_344
#
# Order of upgrade:
# All the instances listed under one group get upgraded in parallel, so they get stopped, rebuilt and started in parallel.
#
# WARNING: If both active and standby of a HA pair are listed in the same group, that will impact service.
#
# On successful upgrade of instances in one group, instances in the next group will be picked.
# Example use cases:
# 1) While upgrading a 1:1 (active/standby), list the standby in the first group and the active in the second group
# 2) To upgrade just the standby, list it in the first group and remove group 2
#
tasks:
  # Each upgrade group should have a list of instances that will be upgraded in parallel
  upgradeGroup1:
    tag: RAF_UPGRADE_ACTIVENEW-1
    instances:
      - i-0987654321dcba
  upgradeGroup2:
    tag: RAF_UPGRADE_STDBYNEW-1
    instances:
      - i-0123456789abcd
```
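The rule above — that a redundancy group must revert as a whole — can be checked mechanically: every instance ID in the group should appear somewhere in the revert tasks. A minimal sketch, assuming both files have already been parsed into dicts; the function name and structure are illustrative, not part of the Ribbon scripts.

```python
def revert_covers_group(redundancy_group, tasks):
    """Return True if every instance of a redundancy group (as defined in
    gcp_access.yml) is listed in the revert tasks, so the whole HA pair
    ends up on the same SBC version."""
    listed = {i for group in tasks.values() for i in group["instances"]}
    group_ids = {inst["instance_id"] for inst in redundancy_group.values()}
    return group_ids <= listed

# The redundancy group and tasks from the examples above, as dicts
redundancy_group = {
    "instance1": {"instance_id": "i-0987654321dcba"},
    "instance2": {"instance_id": "i-0123456789abcd"},
}
tasks = {
    "upgradeGroup1": {"instances": ["i-0987654321dcba"]},
    "upgradeGroup2": {"instances": ["i-0123456789abcd"]},
}
print(revert_covers_group(redundancy_group, tasks))  # True
```

If the check returns False, part of the HA pair would be left on a different version, which is the unpredictable-behavior case described in the note above.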
From the LCM session, in /home/ubuntu-user/iac/management/SBC/upgrade/, run the revert command with gcp_access.yml and upgrade-revert.yml as inputs:

```shell
./revert.py -a gcp_access.yml -u upgrade-revert.yml
```

Note: Reversion progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/revert/gcp/latest.

After successfully reverting all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/revert/gcp/history.
Note: Volumes with older software versions are left intact on GCP in case they are needed for future reversions. Information about these volumes is stored in a file with the instance ID as part of its name. Do not delete these older volumes; you must have them to perform a reversion.