Panel

In this section:

Table of Contents
maxLevel3


Customer Method to Install Life Cycle Manager 

The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in AWS. Once the LCM AMI is launched, the aws_access.yml and upgrade-revert.yml template files (needed for the upgrade) are located in the user-specified path.

Example: /home/ec2-user/9.2/iac/management/SBC/upgrade

Prerequisite

Steps

  1. Launch a t2.micro instance with LCM AMI.
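    If you prefer the AWS CLI over the console, a launch sketch equivalent to this step looks like the following; the AMI ID, key pair, subnet, and security group are placeholder values to replace with your own.

    Code Block
    # Placeholder IDs shown; substitute your LCM AMI, key pair, subnet, and security group.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --key-name lcm \
        --subnet-id subnet-0123456789abcdef0 \
        --security-group-ids sg-0123456789abcdef0 \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=LCM}]'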
  2. Once successfully instantiated, log into the instance as ec2-user and switch to root:

    Code Block
    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -


  3. Create a directory within /home/ec2-user/ using the command:

    Code Block
    mkdir /home/ec2-user/iac


  4. Copy the LCM tarball IAC package to /home/ec2-user/iac.
  5. Untar the tarball inside /home/ec2-user/iac using the command:  

    Code Block
    tar -xzvf iac_sustaining_*.tar.gz


  6. Change to the /home/ec2-user/iac directory and view the README file for any updated instructions.

  7. Install the python-pip package using the method appropriate for your OS. Examples for common distributions are listed below, with a sample code block after the list:

    1. Amazon Linux 2 AMI (HVM), SSD Volume Type (x86_64): amazon-linux-extras install epel; yum install -y python-pip
    2. CentOS7 : yum install -y epel-release; yum install -y python2-pip
    3. RHEL7: yum install -y python2-pip
    4. Debian9/Ubuntu18: apt-get install python-pip
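    For example, on the Amazon Linux 2 AMI the commands from the list above are:

    Code Block
    amazon-linux-extras install epel
    yum install -y python-pip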
  8. Install Python virtual environment.

    Code Block
    pip2 install virtualenv


  9. Create a virtual environment and provide a name, e.g. 'iacenv'.

    Code Block
    virtualenv -p /usr/bin/python2 iacenv


  10. Activate virtual environment.

    Code Block
    source iacenv/bin/activate


  11. Complete the LCM setup by installing the required packages and setting up the environment.

    Code Block
    ./setup.py


  12. Change directory to /home/ec2-user/iac/management/SBC/upgrade.
  13. List the contents of the directory.
  14. Ensure the following files are present:
    1. upgrade.py
    2. aws_access.yml
    3. upgrade-revert.yml
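    A quick way to combine steps 12-14 into one check (a sketch, assuming the install path used above):

    Code Block
    cd /home/ec2-user/iac/management/SBC/upgrade
    ls -l upgrade.py aws_access.yml upgrade-revert.yml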
Info

It is safe to shut down the LCM instance after the process finishes. If you decide to terminate or remove the LCM instance, you must first back up /var/log/ribbon, which is required for future reversions and debugging.
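A minimal backup sketch before terminating the LCM instance; the archive name and destination host are placeholders:

Code Block
tar -czvf ribbon-lcm-logs-$(date +%Y%m%d).tar.gz /var/log/ribbon
scp ribbon-lcm-logs-*.tar.gz user@backup-host:/path/to/backups/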


Customer Methods for Upgrading and Reverting

Prerequisites

  • The new SBC AMI and the LCM AMI must be uploaded and available in AWS.
  • You must have an AWS instance (t2.micro) launched with the Life Cycle Manager (LCM) AMI. (The default aws_access.yml and upgrade-revert.yml files will be present in the /home/ec2-user/iac/management/SBC/upgrade/ directory of the LCM instance.)


Info
  • Make sure that the SBC instances' security group settings allow the LCM instance to reach the SBC instances' management IP on ports 22, 443, and 444 for ssh, sftp, and http services.
  • The LCM instance must have sufficient privileges to access AMIs; create, attach, and detach volumes; and show, start, stop, reboot, and update instances.
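  If the required ports are not already open, a hedged AWS CLI sketch for allowing the LCM instance to reach the SBC management ports is shown below; the security group ID and LCM address are placeholders to adjust for your environment.

  Code Block
  # Allow ssh/sftp (22) and the http(s) management ports (443, 444) from the LCM instance's address.
  for PORT in 22 443 444; do
      aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --protocol tcp \
          --port $PORT \
          --cidr 10.0.0.10/32
  done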


Steps

  1. Open a session to the Life Cycle Manager instance (LCM) and switch to root.

    Code Block
    ssh -i lcm.pem ec2-user@<LCM instance IP address>
    sudo su -


  2. Activate virtual environment.

    Code Block
    source iacenv/bin/activate


  3. Change to the workflow directory /home/ec2-user/iac/management/SBC/upgrade:

    Code Block
    cd /home/ec2-user/iac/management/SBC/upgrade


  4. Edit the file /home/ec2-user/iac/management/SBC/upgrade/aws_access.yml to provide AWS access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed aws_access.yml file is shown below.

    Info
    titleNote

    Use either the private management IP address or the elastic IP address of instance1/instance2. Whichever you choose, ensure the IP address of the SBC SWe is reachable from the LCM server.


    Code Block
    titleaws_access.yml
    ############################################################################
    # This file has 2 blocks of information:
    #   1) AWS access details
    #   2) SBC Instance/group details
    # Update this file as per the directions provided at respective fields
    #############################################################################
    #AWS access details should be sourced as environment variables as follows:
    #export AWS_ACCESS_KEY_ID=my-aws-access-key
    #export AWS_SECRET_ACCESS_KEY=my-aws-secret-key
    #############################################################################
    #
    # Update AWS region and zone
    #
    provider: "aws"
    region: "ap-southeast-1"
    zone: "ap-southeast-1c"
     
    #
    # Update SBC instance's CLI login details, user must be Administrator group, e.g. default user 'admin'
    #
    login_details:
          username: "admin"
          password: "myAdminPassword"
    #
    # Update redundancy group details
    #    1) In case of active/standby (1:1) configuration, provide details of both the instances in a redundancy group. Order doesn't matter.
    #    2) In case of standalone (single node) configuration, a redundancy group will have info of the single instance only.
    # If username and password are same for all the instances and same as in "login_details" above,
    # can remove those lines, e.g. a simpler version looks like this:
    #      instance1:
    #            instance_id: "i-my-instance-id-1"
    #            instance_ip: "1.2.3.4"
    #
    # Note: The script is limited to support just 1 redundancy group. Please dont add other redundancy group to this file else it will fail.
    redundancy_group:
          instance1:
                instance_id: "i-my-instance-id-1"
                instance_ip: "1.2.3.4"
                login_details:
                      username: "admin"
                      password: "myAdminPassword"
     
          instance2:
                instance_id: "i-my-instance-id-2"
                instance_ip: "1.2.3.5"
                login_details:
                      username: "admin"
                      password: "myAdminPassword"
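    Before running an upgrade or revert, export your AWS credentials as environment variables (as noted in the header comments of aws_access.yml) and optionally confirm that the SBC management IP is reachable from the LCM host. A sketch with placeholder values; the nc check assumes nc/ncat is installed on the LCM instance:

    Code Block
    # Credentials are read from the environment, per the aws_access.yml header comments.
    export AWS_ACCESS_KEY_ID=my-aws-access-key
    export AWS_SECRET_ACCESS_KEY=my-aws-secret-key
    # Optional reachability check toward the SBC management IP used in aws_access.yml.
    nc -zv 1.2.3.4 22
    nc -zv 1.2.3.4 443
    nc -zv 1.2.3.4 444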


Upgrading


  1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade-revert.yml to provide the AMI ID, an upgrade tag, and the order in which instances are upgraded to the new SBC version. The following example provides the AMI to use for the upgrade and specifies that the instances in upgradeGroup1 are upgraded before the instances in upgradeGroup2.

    Code Block
    titleupgrade-revert.yml
    ########################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #
    # image_id - new image to use for upgrade. Use it as follows for different providers:
    # aws       - AMI id with new SBC image
    # gcp       - Name of the new SBC  image
    # openstack - image id with new SBC image
    # vmware    - Absolute path to vmdk with new SBC image
    # rhv/kvm   - Absolute path to qcow2 image
    # azure     - Snapshot name
    #
    image_id: "image_or_ami_use-for-upgrade"
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    # If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
    # the corresponding upgrade_tag while doing the revert.
    upgrade_tag: "IAC_TEST"
    #
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #   3) While upgrading a standalone, list instance in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: "test1"
                instances:
                      - "i-instance-id-2"
                      - "i-instance-id-4"
          upgradeGroup2:
                tag: "test2"
                instances:
                      - "i-instance-id-1"
                      - "i-instance-id-3"

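    Before invoking the upgrade, it can help to confirm the edited file is still valid YAML. A minimal sketch, assuming PyYAML is available in the 'iacenv' virtual environment:

    Code Block
    # Prints nothing on valid YAML; prints a parse error otherwise.
    python -c "import yaml; yaml.safe_load(open('upgrade-revert.yml'))"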
Overview

This feature supports upgrading from SBC 07.00.00S406, 07.00.00S407, 07.02.00S400, 07.02.00S401, and 07.02.03S40X to SBC 09.00.00R000 on AWS, as well as reverting those upgrades.

Info
titleNote

You must accomplish the following:

  • Update the SBC instance user-data on the prior release before attempting upgrades.
  • Update the HFE.sh script manually during this upgrade.

This upgrade procedure supports an SBC HA or SBC HA with HFE upgrade.

Currently, upgrades are not supported in GCP.

Due to enforced security features (the admin ssh key requirement) and cgroup support, upgrading from the prior release is impacted. Ribbon requires updating the SBC instance user-data on prior releases before attempting an upgrade.

Refer to Linuxadmin sudo Permissions for SBC SWe in AWS for more information about ssh key requirements.

Refer to Implement C-group Support for Third-Party Software Installations for more details about Linux cgroup support. If you wish to use this feature after the upgrade, the user-data parameters ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be set to a value other than zero (0) prior to the upgrade.

After user-data updates are complete, perform the normal Replacement Upgrade/Revert of the SBC instance in AWS in accordance with Replacement Upgrade for AWS.

HA SBC Upgrade

HA SBC Pre-Upgrade Steps

To update the user-data to include admin ssh keys and third-party cgroup information for an HA SBC pair, complete the following procedure.

Update Standby SBC user-data

  1. Log into the standby SBC management IP as user linuxadmin.

  2. sudo to root and verify that this unit has a "current host role" indicating standby using the swinfo command.

    Code Block
    linuxadmin@vsbc2:~$ sudo su
    [root@vsbcSystem-vsbc2 linuxadmin]# swinfo
    ===================================================
    SERVER:          vsbc2
    OS:              V06.00.00-S406
    SonusDB:         V07.00.00-S406
    EMA:             V07.00.00-S406
    SBC:             V07.00.00-S406
    SBC Type:        isbc
    Management mode: xxx
    Build Number:    xxx
    ===================================================
    Installed host role:   standby
    Current   host role:   standby
    ===================================================
    
    ===================================================
    SERVER:          vsbc1
    OS:              V06.00.00-S406
    SonusDB:         V07.00.00-S406
    EMA:             V07.00.00-S406
    SBC:             V07.00.00-S406
    SBC Type:        isbc
    Management mode: xxx
    Build Number:    xxx
    ===================================================
    Installed host role:   active
    Current   host role:   active
    ===================================================
  3. Exit standby unit.

  4. Log into the active unit as admin.

  5. Ensure that the sync status for the SBC instance whose current role is active shows syncCompleted, and that the IP addresses and application versions are correct.

    Code Block
    titleVerify sync status
    admin@vsbc1> show table system rgStatus
                         ACTUAL
                         CE      ASSIGNED  CURRENT  NODE  SERVICE
    INSTANCE RG NAME     NAME    ROLE      ROLE     ID    ID       SYNC STATUS                USING METAVARS OF    APP VERSION
    ------------------------------------------------------------------------------------------------------------------------------
    vsbc1-172.31.11.73   vsbc1   active    standby  1     1        unprotectedRunningStandby  vsbc1-172.31.11.73   V07.00.00S406
    vsbc2-172.31.11.101  vsbc2   standby   active   2     0        syncCompleted              vsbc2-172.31.11.101  V07.00.00S406
  6. Log onto AWS.
  7. Click the Services drop-down list.
    The Services list is displayed.

  8. Click EC2 from the Management Tools section.

    To stop the unit

  9. Log in to the standby unit as linuxadmin and switch to root using sudo su -.
  10. Stop the instance using the command "shutdown -h now".
  11. Using the left navigation panel of the AWS EC2 dashboard, navigate to INSTANCES > Instances.

  12. Locate the standby instance in the list. (For example, if using an EIP for the management IP, type "Public IP : <standby management ip>" into the instances search bar and press Enter to quickly find the instance.)

  13. Once the Instance State shows stopped, proceed to the next step.
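    You can also confirm the state from the AWS CLI (a sketch; the instance ID is a placeholder):

    Code Block
    aws ec2 describe-instances \
        --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[].Instances[].State.Name' \
        --output text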

    To update user data
  14. Right-click the instance on the AWS Dashboard and choose Instance Settings > View/Change Userdata.
  15. Update the user data to include entries for "AdminSshKey", "ThirdPartyCpuAlloc",  and "ThirdPartyMemAlloc" in accordance with the example below:

    Info
    titleNote

     Refer to Linuxadmin sudo Permissions for SBC SWe in AWS for details on generating an ssh key for use with "AdminSshKey".

    Multiexcerpt
    MultiExcerptNameHA user-data
    Code Block
    titleHA user-data
    {
        "ALT_Mgt0_00": "LOGICAL_MGMT_IP",
        "ALT_Pkt0_00": "VIP1",
        "ALT_Pkt1_00": "VIP2",
        "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCJnrFMr/RXJD3rVLMLdkJBYau+lWQ+F55Xj+KjunVBtw/zXURV38QIQ1zCw/GDO2CZTSyehUeiV0pi2moUs0ZiK6/TdWTzcOP3RCUhNI26sBFv/Tk5MdaojSqUc2NMpS/c1ESCmaUMBv4F7PfeHt0f3PqpUsxvKeNQQuEZyXjFEwAUdbkCMEptgaroYwuEz4SpFCfNBh0obUSoX5FNiNO/OyXcR8poVH0UhFim0Rdneo7VEH5FeqdkdGyZcTFs7A7aWpBRY3N8KUwklmNSWdDZ9//epEwgaF3m5U7XMd4M9zHURF1uQ/Nc+aiyVId9Mje2EU+nh6npaw/tEOPUiC1v",
        "CEName": "ISBC90R0SBC01",
        "CERole": "ACTIVE",
        "ClusterIp": "172.31.11.32",
        "HFE": "172.31.10.161",
        "IAM_ROLE": "SWe",
        "NodeName": "MA-90R0-91-Upgrade-HFEHA",
        "PeerCEHa0IPv4Address": "172.31.11.32",
        "PeerCEName": "ISBC90R0SBC02",
        "ReverseNatPkt0": "True",
        "ReverseNatPkt1": "False",
        "SbcHaMode": "1to1",
        "SbcPersonalityType": "isbc",
        "SortHfeEip": "True",
        "SystemName": "ISBC90R0SBC",
        "TemplateName": "AWS_HA_template.json",
        "TemplateVersion": "V09.00.00R000",
        "ThirdPartyCpuAlloc": "0",
        "ThirdPartyMemAlloc": "0"
    }
  16. Click Save to save the user data.

    To restart the unit

  17. Again, select the desired instance.

  18. Right-click or use the Actions pull-down to choose Instance State > Start.

  19. Click Yes, Start when prompted for confirmation. 
  20. Wait for the instance to run with 2/2 Status Check showing "success". 
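    Optionally, you can watch the status checks from the AWS CLI as well (a sketch; the instance ID is a placeholder):

    Code Block
    aws ec2 describe-instance-status \
        --instance-ids i-0123456789abcdef0 \
        --query 'InstanceStatuses[].[InstanceStatus.Status,SystemStatus.Status]' \
        --output text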

  21. Log into this "current host role" standby instance as linuxadmin and use the swinfo tool to wait for the system to recover as standby. This will take approximately 10 minutes.

    Code Block
    titleswinfo to check for standby system recovery
    linuxadmin@vsbc2:~$ sudo su
    [root@vsbcSystem-vsbc2 linuxadmin]# swinfo
    ===================================================
    SERVER:          vsbc2
    OS:              V06.00.00-S406
    SonusDB:         V07.00.00-S406
    EMA:             V07.00.00-S406
    SBC:             V07.00.00-S406
    SBC Type:        isbc
    Management mode: xxx
    Build Number:    xxx
    ===================================================
    Installed host role:   standby
    Current   host role:   standby
    ===================================================
    
    ===================================================
    SERVER:          vsbc1
    OS:              V06.00.00-S406
    SonusDB:         V07.00.00-S406
    EMA:             V07.00.00-S406
    SBC:             V07.00.00-S406
    SBC Type:        isbc
    Management mode: xxx
    Build Number:    xxx
    ===================================================
    Installed host role:   active
    Current   host role:   active
    ===================================================

  22. Log into the active SBC as admin and wait for the syncStatus of the instance whose current role is active to show syncCompleted.

    Code Block
    titleVerify sync status
    admin@vsbc1> show table system rgStatus
                         ACTUAL
                         CE      ASSIGNED  CURRENT  NODE  SERVICE
    INSTANCE RG NAME     NAME    ROLE      ROLE     ID    ID       SYNC STATUS                USING METAVARS OF    APP VERSION
    ------------------------------------------------------------------------------------------------------------------------------
    vsbc1-172.31.11.73   vsbc1   active    standby  1     1        unprotectedRunningStandby  vsbc1-172.31.11.73   V07.00.00S406
    vsbc2-172.31.11.101  vsbc2   standby   active   2     0        syncCompleted              vsbc2-172.31.11.101  V07.00.00S406
    
    To update formerly active user-data
  23. Repeat steps 9-22 for the active unit.

HA SBC Upgrade Steps

Once the SBC 07.02.00S401 pre-upgrade steps are completed successfully, perform the upgrade at Replacement Upgrade for AWS.

If you deployed with HFE, proceed to Upgrade HFE Node for AWS.

  1. From the LCM session, in the /home/ec2-user/iac/management/SBC/upgrade directory, run the upgrade command with aws_access.yml and upgrade-revert.yml as inputs:

    Code Block
    ./upgrade.py -a aws_access.yml -u upgrade-revert.yml


    Info

    For an offline upgrade, use the command:  ./upgrade.py -a aws_access.yml -u upgrade-revert.yml -o


    Info

    Upgrade progress and logs are shown on-screen and also logged in /home/ec2-user/9.2/iac/management/log/SBC/upgrade/aws/latest.


  2. After successfully upgrading all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ec2-user/9.2/iac/management/log/SBC/upgrade/aws/history.

    Info

    Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.
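    If you want to confirm the older volumes are still present, a hedged AWS CLI sketch for listing volumes in the region is shown below; the region is a placeholder, and the exact tags on these volumes depend on the upgrade tooling:

    Code Block
    aws ec2 describe-volumes \
        --region ap-southeast-1 \
        --query 'Volumes[].[VolumeId,State,CreateTime]' \
        --output table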


Reverting

  1. Edit the file /home/ec2-user/iac/management/SBC/upgrade/upgrade-revert.yml by designating the instances to revert. The following example provides a list of instances. 

    Info
    • The reversion process runs in parallel on all the instances and could impact service.
    • Make sure that all the instances of a redundancy group are reverted to the same SBC version; failure to maintain the same version within a group causes unknown behavior and could cause a service outage.


    Code Block
    titleupgrade-revert.yml
    ########################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #
    # image_id - new image to use for upgrade. Use it as follows for different providers:
    # aws       - AMI id with new SBC image
    # gcp       - Name of the new SBC  image
    # openstack - image id with new SBC image
    # vmware    - Absolute path to vmdk with new SBC image
    # rhv/kvm   - Absolute path to qcow2 image
    # azure     - Snapshot name
    #
    image_id: "image_or_ami_use-for-upgrade"
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    # If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
    # the corresponding upgrade_tag while doing the revert.
    upgrade_tag: "IAC_TEST"
    #
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #   3) While upgrading a standalone, list instance in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: "test1"
                instances:
                      - "i-instance-id-2"
                      - "i-instance-id-4"
          upgradeGroup2:
                tag: "test2"
                instances:
                      - "i-instance-id-1"
                      - "i-instance-id-3"


  2. From the LCM session, in the /home/ec2-user/iac/management/SBC/upgrade/ directory, run the revert command with aws_access.yml and upgrade-revert.yml as inputs:

    Code Block
     ./revert.py -a aws_access.yml -u upgrade-revert.yml


    Info

    Reversion progress and logs are shown on-screen and also logged in /home/ec2-user/9.2/iac/management/log/SBC/revert/aws/latest.



  3. After successfully reverting all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ec2-user/9.2/iac/management/log/SBC/revert/aws/history.

    Info

    Volumes with older software versions are left intact on AWS in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.
