Initial Setup

Perform the following steps to upgrade the SBC SWe on Azure using IaC. Azure CLI version 2.24 was used for this documentation.

  1. Create an Ubuntu 18.04 LTS instance in Azure.
  2. Create Network Security Group rules that allow the IaC instance to reach the SBC MGT IP on ports 22, 2024, 444, and 443. Refer to Create Network Security Group.

    Note

    If the NIC for the IaC instance was created in the same subnet as the SBC MGT, this step is not needed.

  3. Run az login and sign in as a user with the role 'owner' for the subscription.

  4. If one does not already exist, create a Service Principal with 'owner' permissions for the subscription. For example:

    az ad sp create-for-rbac -n rbbn-iac --role="owner" --scopes="/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXa511"
  5. Export the following values from the Service Principal:

    export ARM_SUBSCRIPTION_ID="<subscription_id>"
    export ARM_TENANT_ID="<tenant_id>"
    export ARM_CLIENT_ID="<client_id>"
    export ARM_CLIENT_SECRET="<client_secret>"
    
    
    Note

    You can add these values to a file and source it before running the upgrade command.
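The note above can be put into practice with a short snippet. The file name azure-creds.env and all values below are illustrative placeholders, not product defaults:

```shell
# Create a credentials file (illustrative name; substitute your real values).
cat > azure-creds.env <<'EOF'
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_ID="22222222-2222-2222-2222-222222222222"
export ARM_CLIENT_SECRET="example-secret"
EOF

# Restrict permissions: the file contains a secret.
chmod 600 azure-creds.env

# Source it before running the upgrade so the ARM_* variables are set.
. ./azure-creds.env
```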

Customer Method to Install Life Cycle Manager 

The Ribbon Life Cycle Manager (LCM) provides SBC upgrades in Azure. Once the LCM VM is launched, the azure_access.yml and upgrade-revert.yml template files (needed for the upgrade) are located in the user-specified path.

Example: /home/ubuntu-user/xx/iac/management/SBC/upgrade.

Here 'xx' refers to the folder name.

Prerequisite

Steps

  1. Launch a Standard F4s instance using the LCM VM image.
  2. Once successfully instantiated, log into the instance as ubuntu-user and switch to root:

    ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
    sudo su -
  3. Create a directory within /home/ubuntu-user/ using the command:

    mkdir /home/ubuntu-user/iac
  4. Copy the latest LCM tarball IAC package from Salesforce to /home/ubuntu-user/iac.
  5. Untar the tarball inside /home/ubuntu-user/iac using the command: 

    tar -xzvf iac_sustaining_*.tar.gz
  6. Change to the /home/ubuntu-user/iac directory and view the README file for any updated instructions.
  7. Install the python2-pip package using the method appropriate for your OS - see examples:

    1. Redhat: yum install -y python2-pip
    2. Amzn2 AMI: amazon-linux-extras install epel; yum install -y python2-pip
    3. Centos: curl 'https://bootstrap.pypa.io/pip/2.7/get-pip.py' -o 'get-pip.py'; python get-pip.py
    4. Debian/Ubuntu: apt-get update; apt-get install python-pip

      The Ribbon LCM package requires python2 and python2-pip to run. The examples above show the Linux distribution versions that support python2. In newer versions, the python2 package is not available from the standard package repositories and must instead be installed from a different source.

  8. Install the Python virtual environment package.

    pip2 install virtualenv
  9. Create a virtual environment and provide a name, e.g. 'iacenv'.

    virtualenv -p /usr/bin/python2 iacenv
  10. Activate the virtual environment.

    source iacenv/bin/activate


  11. Complete the LCM setup by installing the required packages and setting up the environment:

    ./setup.py --azure
  12. Change directory to /home/ubuntu-user/iac/management/SBC/upgrade.
  13. List the contents of the directory.
  14. Ensure the following files are present:
    1. upgrade.py
    2. azure_access.yml
    3. upgrade-revert.yml
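The file check in the last step can be scripted. The check_upgrade_files function below is an illustrative helper (not part of the LCM package), and the default path comes from the steps above:

```shell
# Verify the upgrade tooling files are present in the workflow directory.
UPGRADE_DIR="${UPGRADE_DIR:-/home/ubuntu-user/iac/management/SBC/upgrade}"

check_upgrade_files() {
    dir="$1"
    missing=0
    for f in upgrade.py azure_access.yml upgrade-revert.yml; do
        if [ -e "$dir/$f" ]; then
            echo "OK      $f"
        else
            echo "MISSING $f"
            missing=1
        fi
    done
    return "$missing"
}

check_upgrade_files "$UPGRADE_DIR" || echo "one or more files are missing"
```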


It is safe to shut down the LCM instance after the process finishes. If you decide to terminate/remove the LCM instance, first create a snapshot of the VM; refer to the following wiki:
Procedure to create snapshot of the VM & redeploy VM using created snapshot in Azure

Customer Methods for Upgrading and Reverting

Prerequisites

  • A snapshot of the target SBC image in the same Resource Group as the SBC setup being upgraded
  • Copy the latest LCM tarball IAC package from Salesforce, upload it to the LCM instance, and extract it
  • You must have an Azure instance (Standard F4s) with the Life Cycle Manager (LCM) image installed. (The default azure_access.yml and upgrade-revert.yml will be present in the /home/ubuntu-user/iac/management/SBC/upgrade/ directory of the LCM instance.)


  • Make sure that the SBC instances' security group settings allow the LCM instance to reach the SBC instances' management IP on ports 22, 443, and 444 for the ssh, sftp, and http services.
  • The LCM instance must have sufficient privileges to access images; create, attach, and detach volumes; and show, start, stop, reboot, and update instances.
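The port prerequisite above can be spot-checked from the LCM instance before starting. The sketch below uses bash's /dev/tcp pseudo-device and the coreutils timeout command; the management IP in the commented example is hypothetical:

```shell
# Check TCP reachability of the SBC management ports from the LCM instance.
# Returns non-zero if any port is unreachable.
check_mgmt_ports() {
    host="$1"; shift
    rc=0
    for port in "$@"; do
        if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "open    $host:$port"
        else
            echo "closed  $host:$port"
            rc=1
        fi
    done
    return "$rc"
}

# Example (hypothetical SBC management IP):
# check_mgmt_ports 20.75.155.181 22 443 444
```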


Steps

  1. Open a session to the Life Cycle Manager instance (LCM) and switch to root.

    ssh -i lcm.pem ubuntu-user@<LCM instance IP address>
    sudo su
  2. Activate the virtual environment.

    source iacenv/bin/activate
  3. Change to the workflow directory /home/ubuntu-user/iac/management/SBC/upgrade:

    cd /home/ubuntu-user/iac/management/SBC/upgrade
  4. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/azure_access.yml to provide Azure access details and HA pairing information. This information is used for mapping HA pairs. An example of a completed azure_access.yml file is shown below.

    Note

    Use either the private management IP address or the public IP address of instance1/instance2. Whichever you choose, ensure the IP address is reachable from the LCM server to the SBC SWe.

    azure_access.yml
    ########################################################################
    # This file has 2 blocks of information:
    #   1) Azure general resources details
    #   2) SBC Instance/group details
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #############################################################################
    # Azure authentication details should be supplied via azure CLI login:
    #  - For service principal: az login --service-principal -u '<clientId>' -p '<clientSecret>' --tenant '<tenantId>'
    #  - For Azure Active Directory: az login
    ##############################################################################
    provider: "azure"
     
    #
    # Update subscription, resource group, location
    #
    subscription_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX8b96"
    resource_group: "SBC-Core-RG"
    location: "East US"
    availability_zone: 0
     
    #
    # Update redundancy group details, in case of active/standby (1:1) configuration,
    # provide details of both the instances. Order doesn't matter.
    #
    # Note: The script is limited to support just 1 redundancy group. Please dont add other redundancy group to this file else it will fail.
     
    redundancy_group1:
          instance1:
                instance_id: "ap-sbc02-active"
                instance_ip: "20.75.155.181"
                login_details:
                      username: "admin"
                      password: "Sonus@123"
                      linuxadmin_key_file: "/home/rbbntest/Mahesh-Key.pem"
          instance2:
                instance_id: "ap-sbc02-standby"
                instance_ip: "20.75.155.206"
                login_details:
                      username: "admin"
                      password: "Sonus@123"
                      linuxadmin_key_file: "/home/rbbntest/Mahesh-Key.pem"
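As a rough sanity check after editing azure_access.yml, the presence of the expected keys can be verified with grep. The key list below is taken from the example above, and check_access_yml is only an illustrative sketch, not a YAML parser:

```shell
# Rough check that azure_access.yml contains the keys shown in the example.
check_access_yml() {
    file="$1"
    rc=0
    for key in provider subscription_id resource_group location \
               instance_id instance_ip username; do
        if grep -q "^[[:space:]]*$key:" "$file"; then
            echo "found   $key"
        else
            echo "absent  $key"
            rc=1
        fi
    done
    return "$rc"
}

# Example:
# check_access_yml azure_access.yml
```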

Upgrading


  1. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml to provide Image ID, Upgrade Tag and order of instance upgrade to the new SBC version. The following example provides the Image version to use for the upgrade and specifies to upgrade instance ap-sbc02-active and then ap-sbc02-standby.

    upgrade-revert.yml
    #######################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
     
    #
    # image_id - new image to use for upgrade. Use it as follows for different providers:
    # aws       - AMI id with new SBC image
    # gcp       - Name of the new SBC  image
    # openstack - image id with new SBC image
    # vmware    - Absolute path to vmdk with new SBC image
    # rhv/kvm   - Absolute path to qcow2 image
    # azure     - Snapshot name
    #
    image_id: "release-sbc-v10-00-00r000-05-19-21-22-06.snap"
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    # If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
    # the corresponding upgrade_tag while doing the revert.
    upgrade_tag: "IAC_TEST"
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #   3) While upgrading a standalone, list instance in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: "standby"
                instances:
                      - "ap-sbc02-standby"
          upgradeGroup2:
                tag: "active"
                instances:
                      - "ap-sbc02-active"
  2. From the LCM session /home/ubuntu-user/iac/management/SBC/upgrade, run the upgrade command with azure_access.yml and upgrade-revert.yml as inputs:

    ./upgrade.py -a azure_access.yml -u upgrade-revert.yml
    Note

    For an offline upgrade, use the command:  ./upgrade.py -a azure_access.yml -u upgrade-revert.yml -o

    • The offline upgrade takes all available SBCs down and performs the software upgrade.
    • The SBC configuration is lost with offline upgrade.
    • Ribbon recommends that you back up the SBC configuration before performing an offline upgrade.
    • The SBC configuration needs to be reloaded onto the SBC after the upgrade is complete.


    For an online upgrade, use the command:  ./upgrade.py -a azure_access.yml -u upgrade-revert.yml

    • The online upgrade performs like a Live Software Upgrade.
    • The SBC configuration is not lost with an online upgrade.
    • Active calls should not be affected with an online upgrade.
    • First the standby SBC is upgraded; then a switchover occurs to the new active SBC running the latest image, retaining all SBC configuration and active calls. The new standby is then upgraded after the switchover completes.
    Note

    Upgrade progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/azure/latest.

  3. After successfully upgrading all nodes listed in upgrade-revert.yml, timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/azure/history.

    Note

    Volumes with older software versions are left intact on Azure in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.
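When a system has been through several upgrades, one quick way to find the most recent run is to sort the history directory by modification time. A minimal sketch (the log path is the one given above; latest_history_dir is an illustrative helper, and the path may differ per release):

```shell
# Show the most recently written entry in the upgrade history log directory.
LOG_HISTORY="${LOG_HISTORY:-/home/ubuntu-user/9.2/iac/management/log/SBC/upgrade/azure/history}"

latest_history_dir() {
    # ls -t sorts by modification time, newest first; head -1 picks the latest.
    ls -1t "$1" 2>/dev/null | head -n 1
}

latest_history_dir "$LOG_HISTORY"
```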

Reverting

  1. Edit the file /home/ubuntu-user/iac/management/SBC/upgrade/upgrade-revert.yml by designating the instances to revert. The following example provides a list of instances.

    • The reversion process runs in parallel on all the instances, and could impact service.
    • Make sure that all the instances of a redundancy group are reverted to the same SBC version. Failure to maintain the same version within a group will cause unknown behavior and could cause a service outage.
    upgrade-revert.yml
    #######################################################################
    # This file defines which instances to upgrade and in which order
    # Update this file as per the directions provided at respective fields
    ########################################################################
      
    #
    # image_id - new image to use for upgrade. Use it as follows for different providers:
    # aws       - AMI id with new SBC image
    # gcp       - Name of the new SBC  image
    # openstack - image id with new SBC image
    # vmware    - Absolute path to vmdk with new SBC image
    # rhv/kvm   - Absolute path to qcow2 image
    # azure     - Snapshot name
    #
    image_id: "release-sbc-v10-00-00r000-05-19-21-22-06.snap"
    #
    # A tag to uniquely identify this upgrade. Logs and directory structure of logs are tagged
    # with these so that future reference to these will be easier with appropriate tag.
    # When the same system goes through multiple upgrades, this becomes a very handy way to jump to right set of logs.
    #
    # If multiple upgrades are done by using same upgrade-revert.yml file, use different upgrade_tag each time. Refer
    # the corresponding upgrade_tag while doing the revert.
    upgrade_tag: "IAC_TEST"
    # Order of upgrade:
    # All the instances listed under one group gets upgraded in parallel, so they get stopped, rebuilt and started in parallel.
    #
    # WARNING: If both active and standby of a HA pair are listed in the same upgrade group, that will impact service.
    #
    # On successful upgrade of instances in one group, instances in next group will be picked.
    # Example Usecases:
    #   1) While upgrading a 1:1(active/standby), list standby in first group and active in second group
    #   2) If want to upgrade just standby, list that in first group and remove group 2
    #   3) While upgrading a standalone, list instance in first group and remove group 2
    #
    tasks:
          # Each upgrade group should have a list of instance that will be upgraded in parallel
          upgradeGroup1:
                tag: "standby"
                instances:
                      - "ap-sbc02-standby"
          upgradeGroup2:
                tag: "active"
                instances:
                      - "ap-sbc02-active"
  2. From the LCM session /home/ubuntu-user/iac/management/SBC/upgrade/, run the revert command with azure_access.yml and upgrade-revert.yml as inputs:

     ./revert.py -a azure_access.yml -u upgrade-revert.yml
    Note

    Reversion progress and logs are shown on-screen and also logged in /home/ubuntu-user/9.2/iac/management/log/SBC/revert/azure/latest.

  3. After successfully reverting all nodes listed in upgrade-revert.yml, the timestamped logs are moved to /home/ubuntu-user/9.2/iac/management/log/SBC/revert/azure/history.

    Note

    Volumes with older software versions are left intact on Azure in case they are needed for future reversions. Information about these volumes is stored in the file with instance-id as part of its name. Do not delete these older volumes – you must have these volumes in order to perform a reversion.