This section describes a procedure for installation or upgrade of an SBC SWe 1:1 HA pair on KVM using a QCOW2 image file and an ISO config drive you create using the CLI tool createConfigDrive.py. The tool is contained within the tarball createConfigDrive-1.0.0.tar.gz (available as part of the release software bundle). It significantly improves the workflow for installing and upgrading SBC SWe on KVM compared to the traditional procedure involving an ISO file, especially when an LSWU is performed.


Note

The tool and the associated procedure are applicable only when using a QCOW2 file to deploy SBC SWe 8.1 or higher on KVM, launched in N:1 HA mode.

  • For MRFP (N:1 and 1:1 HA modes), you may use it to upgrade from SBC 7.2.x to SBC 8.0 or higher.
  • Starting with SBC 8.1, this is the recommended procedure for installing and upgrading SBC SWe on KVM launched in N:1 HA mode.
  • Starting with the SBC 10.1.3 release, upgrade support is available for SBC SWe launched in 1:1 HA mode using a QCOW2 image. The installation and upgrade procedures remain the same.
Note

The procedure described in this section assumes and recommends that the active and standby instances are guests in two different KVM hosts.

Tip

For installation or upgrade of standalone instances, complete the steps for a single KVM host, rather than two hosts, and for only one instance, rather than both active and standby instances.

Prerequisites

Before performing an installation followed by an upgrade, perform the following steps:

  1. Back up the system configuration and the data.
  2. Ensure that you have root level access to the KVM hosts. For an HA pair, log on to both KVM hosts as root (or equivalent, with full administrative privileges).
  3. On each KVM host, perform the following:
    1. Download the release software bundle and untar it. The unpacked bundle contains the tarball createConfigDrive-1.0.0.tar.gz. Copy it to your home directory.

      # cp createConfigDrive-1.0.0.tar.gz ~/
    2. Unpack the tarball createConfigDrive-1.0.0.tar.gz.

      # tar -xvzf createConfigDrive-1.0.0.tar.gz
    3. Navigate to ~/createConfigDrive-1.0.0 and ensure both createConfigDrive.py and README.md are present. Open README.md in an editor such as vi to find more information on the createConfigDrive.py tool.

      # cd ~/createConfigDrive-1.0.0
      # vi README.md
      Note

      Ensure that the following packages, which createConfigDrive.py requires, are installed on the KVM hosts:

      • The Python package netaddr. If you have the Python package manager "pip" installed, use the command "pip install netaddr" to install the package.
      • The Linux package genisoimage. For Debian-based Linux distributions, use the command "sudo apt install <dependency>" to install the Linux packages.
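
      For example, on a Debian-based KVM host with pip available, the following commands install both dependencies (adjust the package-manager commands for your distribution):

      # pip install netaddr
      # sudo apt install genisoimage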
    4. Copy the QCOW2 images of the base build (for initial installation) and of the build to which you want to upgrade into the /var/lib/libvirt/images directory.

      # cp <path_to_qcow2_image_file> /var/lib/libvirt/images
    5. Create copies of the base and final builds in the /var/lib/libvirt/images directory. When copying, append the suffix "_cp1" to the base build file name and "_cp2" to the final build file name (before the .qcow2 extension). For example, the names of the copied files should be similar to "sbc-V08.01.00R000-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2" and "sbc-V08.01.00R000-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2".

      # cd /var/lib/libvirt/images
      # cp <base_image_file_name>.qcow2 <base_image_file_name>_cp1.qcow2
      # cp <final_image_file_name>.qcow2 <final_image_file_name>_cp2.qcow2
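
      For example, using the build file names shown above (illustrative names; substitute your actual image file names):

      # cp sbc-V08.01.00R000-connexip-os_07.00.00-A019_4684_amd64.qcow2 sbc-V08.01.00R000-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2
      # cp sbc-V08.01.00R000-connexip-os_07.00.00-A020_146_amd64.qcow2 sbc-V08.01.00R000-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2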

Installation

Tip

To perform only an upgrade, skip the procedure for installation.

Active Instance

Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and perform the following steps:

  1. Navigate to the ~/createConfigDrive-1.0.0 directory.

    # cd ~/createConfigDrive-1.0.0 
  2. Run the script createConfigDrive.py. For installation, you can run the script in either of two modes. The --cli option presents screen prompts at which you enter configuration data for the deployment. The --file option requires you to pass the script a file, sbx.json, which must contain all the required configuration data in JSON format. Both options are shown below.

    1. The --cli option:

      1. Enter the following command to provide configuration data in response to prompts.

        # python createConfigDrive.py --cli
      2.  Your responses determine the nature of the instance. For example, to create the active instance of an HA pair, enter values similar to the example below:

        > python createConfigDrive.py --cli
        WARNING:__main__:Another instance of CLOUD_CONFIG Install is running...
        
        INFO:__main__:##################################################################
        INFO:__main__: Executing Command : createConfigDrive.py --cli
        INFO:__main__:##################################################################
        Enter the Product Type [1 - SBC , 2 - OAM]: 1
        Enter the SBC HA Mode Type [1 - 1:1 , 2 - N:1]: 1
        Enter the SBC Install Type [1- Standalone, 2- HA]: 2
        Enter the System Name: KVMHA
        Enter the Ce Name: KVM1
        Enter the Type of mgt IP[1- V4, 2- V6, 3- V4+v6]: 1
        Enter the V4 mgt IP of the Active server: 10.6.7.126
        Enter the V4 mgt Prefix for Active server: 24
        Enter the V4 mgt GW for the Active server: 10.6.7.1
        Do you want to configure second management port (mgt1) [1- Yes, 2-No ]: 2
        Enter the Peer Ce Name: KVM2
        Enter the Type of Peer mgt IP[1-V4, 2-V6, 3-V4+V6]: 1
        Enter the V4 mgt IP of the Standby server: 10.6.7.127
        Enter the V4 mgt Prefix for Standby server: 24
        Enter the V4 mgt GW for the Standby server: 10.6.7.1
        Enter the ha IP for Active server[V4]: 169.254.99.13
        Enter the ha Prefix for Active server: 16
        Enter the ha IP for Standby server[V4]: 169.254.88.13
        Enter the ntp IP for the server: 10.1.1.2
        Enter the Timezone ID [Check README for the ID(1-595)]: 12
        Enter the SBC Type[isbc,ssbc,msbc,mrfp]: isbc
        Enter the TIPC id for the system: 1569
        Do you want SSH Public Key based authentication for 'admin' and 'linuxadmin' user [1- Yes, 2-No ]: 1
        Enter SSH Public Key for 'admin' user: ssh-rsa
        AAAAB3NzaC1yc2EAAAADAQABAAABAQCnJ4s1adPtqTKb+8mtmVs4ZkIgP5a1CmCucJp0XtHrqFU454XO5APmlPknBI2qhb2W71HWl
        wPI88XqLM+KnpkoVC44UpZeCUoTuQfnuCt5XfXZnlbbJVV71N+fbmGDvCgTUqG5/cfVIrhqZ4OU8rpiznH3Vuofn/kLN/
        EBGnvUjODUe+EbmuAeD1CpWxm47TrbfZESMOXHfNHDNKZwmHwnX8qWOpltgYpK1Kx7m82Bb5H4qpREAjtJbkOH1Qm+Mm7f
        8ysMqgRrrpcEZgPWUSDQjsLofDlwERO++npJpzm6QiqRygKlpf9IRUWU3ckKpVwDfQhol3tbd3Pagav3cKTf
        Enter SSH Public Key for 'linuxadmin' user: ssh-rsa
        AAAAB3NzaC1yc2EAAAADAQABAAABAQCnJ4s1adPtqTKb+8mtmVs4ZkIgP5a1CmCucJp0XtHrqFU454XO5APmlPknBI2qhb2W71HWlwPI88
        XqLM+KnpkoVC44UpZeCUoTuQfnuCt5XfXZnlbbJVV71N+fbmGDvCgTUqG5/cfVIrhqZ4OU8rpiznH3Vuofn/kLN/EBGnvUjODUe+
        EbmuAeD1CpWxm47TrbfZESMOXHfNHDNKZwmHwnX8qWOpltgYpK1Kx7m82Bb5H4qpREAjtJbkOH1Qm+Mm7f8ysMqgRrrpcEZgPWU
        SDQjsLofDlwERO++npJpzm6QiqRygKlpf9IRUWU3ckKpVwDfQhol3tbd3Pagav3cKTf
        Do you want to enter EMS settings [1- Yes, 2-No ]: 2
        Enable or Disable EMA REST module? [1- enable, 2- disable]: 1
        Enable or Disable EMA Core module? [1- enable, 2- disable]: 1
        Enable or Disable EMA Troubleshooting module? [1- enable, 2- disable]: 1
        Do you want to enter OAM settings [1- Yes, 2-No ]: 2
        Do you want to enter TSBC settings [1- Yes, 2-No ]: 2
        INFO:__main__:Cleared old JSON files : Done
        I: -input-charset not specified, using utf-8 (detected in locale settings)
        Total translation table size: 0
        Total rockridge attributes bytes: 411
        Total directory bytes: 0
        Path table size(bytes): 10
        Max brk space used 23000
        177 extents written (0 MB)
        INFO:__main__:Created the active server config drive name : config_drive_kvm1.iso under output dir
        I: -input-charset not specified, using utf-8 (detected in locale settings)
        Total translation table size: 0
        Total rockridge attributes bytes: 411
        Total directory bytes: 0
        Path table size(bytes): 10
        Max brk space used 23000
        177 extents written (0 MB)
        INFO:__main__:Created the standby server config drive name : config_drive_kvm2.iso under output dir
    2. The --file option:

      1. Copy the template sbx.json file located in the ~/createConfigDrive-1.0.0/input directory and modify it as appropriate for the deployment. The contents of the sbx.json file are similar to the values you provide with the --cli option. An example of a file edited for an HA pair is shown below:

        {
            "sbxConf": {
                "installType": "1",
                "role": "1",
                "systemName": "vsbcSystem",
                "ceName": "vsbc1",
                "peerCeName": "vsbc2",
                "mgtIpType": "1",
                "mgtIp": "1.1.10.10",
                "mgtPrefix": "24",
                "mgtGw": "1.1.10.1",
                "mgtIpV6": "",
                "mgtPrefixV6": "",
                "mgtGwV6": "",
                "haIp": "2.2.2.2",
                "haPrefix": "24",
                "peerMgtIpType": "1",
                "peerMgtIp": "1.1.10.15",
                "peerMgtPrefix": "24",
                "peerMgtGw": "1.1.10.1",
                "peerMgtIpV6": "",
                "peerMgtPrefixV6": "",
                "peerMgtGwV6": "",
                "peerHaIp": "2.2.2.5",
                "ntpIp": "1.1.10.1",
                "timezone": "12",
                "sbctype": "isbc",
                "tipc": "1500",
                "rgIp": "2.2.2.2",
                "enableREST": "enabled",
                "enableCoreEMA": "enabled",
                "enableTS": "enabled",
                "haMode": "1to1"
            },
            "emsConf": {
                "emsName": "",
                "emsPass": "",
                "downloadEmsConfig": "True",
                "emsIp1": "",
                "emsIp2": "",
                "EmsPrivateNodeParameters": { "cluster_id": "" }
            },
            "oamConf": { "oamIp1": "", "oamIp2": "" }
        }
        Tip

        If you do not need an object for configuration, pass an empty string ("") as its value.

        For reference, the possible values of some objects are as follows:

            - installType: 1 - Standalone | 2 - HA

            - mgtIpType: 1 - IPv4 | 2 - IPv6 | 3 - IPv4 + IPv6

            - timezone: Refer to ~/createConfigDrive-1.0.0/README.md.

            - sbctype: isbc | ssbc | msbc | tsbc | mrfp

      2. If you want to use SSH Public Key authentication for the 'admin' and 'linuxadmin' users, edit the ~/createConfigDrive-1.0.0/conf/user-data file to replace _ADMIN_SSH_KEY_ and _LINUXADMIN_SSH_KEY_ with the respective SSH public keys.
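
        For example, a minimal sketch of one way to make these substitutions with sed, assuming each placeholder is replaced with the complete public key line and that the keys are stored at the hypothetical paths shown:

        # sed -i "s|_ADMIN_SSH_KEY_|$(cat ~/.ssh/admin_id_rsa.pub)|" ~/createConfigDrive-1.0.0/conf/user-data
        # sed -i "s|_LINUXADMIN_SSH_KEY_|$(cat ~/.ssh/linuxadmin_id_rsa.pub)|" ~/createConfigDrive-1.0.0/conf/user-data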

      3. Enter the following command:

        # python createConfigDrive.py --file ./input/sbx.json
  3. The script creates configuration drive image files for both the active and standby instances. The file config_drive_kvm1.iso is for the active instance, and the file config_drive_kvm2.iso is for the standby instance. By default, the generated files are placed in the ~/createConfigDrive-1.0.0/output directory. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso to the directory /var/lib/libvirt/images.

    # cp ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso /var/lib/libvirt/images
  4. The recommended deployment creates the standby instance on a different KVM host. Use the scp tool to securely copy the file to the remote location. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso to the /var/lib/libvirt/images directory of the standby instance's host. 

    # scp -P 22 ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso root@<Standby_Host_IP_Address>:/var/lib/libvirt/images
    
    root@<Standby_Host_IP_Address>'s password:
    config_drive_kvm2.iso
  5. Run the following commands to change to the images directory and create the mirror disk file using the truncate command:

    # cd /var/lib/libvirt/images
    # truncate --size=35G active_evlog.qcow2
    Note

    Ensure the minimum disk size of both the root disk and mirror disk is 35 GB.

    In the preceding command, "35G" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system, and that does not exceed the available space on the partition.

    Because the size of the disk impacts sync time, Ribbon recommends a value of 100 GB or less.
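
    To confirm the available space on the partition before creating the mirror disk, you can, for example, run:

    # df -h /var/lib/libvirt/images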

  6. Create a soft (symbolic) link to the base build that you want to install.

    # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM1.qcow2
  7. Enter the following command to instantiate the active instance:

    # virt-install --name=KVM1 \
    --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
    --disk path=/var/lib/libvirt/images/KVM1.qcow2 \
    --disk path=/var/lib/libvirt/images/config_drive_kvm1.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
    --disk path=/var/lib/libvirt/images/active_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
    --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
    --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
    --network=bridge:PKT0bridgeP7P1,model=virtio --network=bridge:PKT1bridgeP7P2,model=virtio \
    --autostart --prompt --noautoconsole
    WARNING  --prompt mode is no longer supported.
    
    Starting install...
    Domain creation completed.
    Note

    For more information on creating network bridges, refer to Creating and Linking Network Bridge.

    1. If you wish to install the SBC with an SR-IOV interface for packet ports, use the following command:

      # virt-install --name=SBCSWE1A --ram=20480 --memorybacking hugepages=yes \
      --vcpus sockets=1,cores=2,threads=2 --cpu host-passthrough \
      --disk path=/var/lib/libvirt/images/SBCSWE1A.qcow2,device=disk,bus=virtio,format=qcow2,size=70 \
      --disk path=/var/lib/libvirt/images/config_drive_SBCSWE1A.img,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
      --disk path=<path_to_mirror_disk_file>,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE,size=70 \
      --network=MGT0,model=virtio --network=bridge:HA0,model=virtio \
      --hostdev pci_0000_03_0a_0 --hostdev pci_0000_03_0a_1 \
      --hostdev pci_0000_03_0e_0 --hostdev pci_0000_03_0e_1 \
      --hostdev pci_0000_3b_00_0 --hostdev pci_0000_5e_00_0 \
      --cputune=vcpupin0.vcpu=0,vcpupin0.cpuset=2,vcpupin1.vcpu=1,vcpupin1.cpuset=10,vcpupin2.vcpu=2,vcpupin2.cpuset=3,vcpupin3.vcpu=3,vcpupin3.cpuset=11 \
      --numatune=0,mode=strict --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
      --autostart --noautoconsole --vnc --virt-type kvm
Note

Once the virt-install command completes, the instance reboots twice and finally comes up in active mode. Continue with the instantiation step for the standby instance.
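
To monitor the instance while it initializes, you can check its state or attach to its console from the host, for example (console access assumes the guest exposes a serial console):

# virsh list --all
# virsh console KVM1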

Standby Instance

Log on to the KVM host reserved for the standby instance as root (or equivalent, with full administrative privileges), and perform the following steps:

  1. Navigate to the /var/lib/libvirt/images/ directory.

    # cd /var/lib/libvirt/images/
  2. Enter the following command to create the mirror disk file using the truncate command:

    # truncate --size=35G standby_evlog.qcow2
    
    Note

    Ensure the minimum disk size of both the root disk and mirror disk is 35 GB.

    In the preceding command, "35G" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system, and that does not exceed the available space on the partition.

    Because the size of the disk impacts sync time, Ribbon recommends a value of 100 GB or less.

  3. Create a soft (symbolic) link to the base build that you want to install.

    # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM2.qcow2
  4. Enter the following command to instantiate the standby instance. Note that the config drive file (config_drive_kvm2.iso) required to instantiate the standby was also created during the "Active Instance" procedure. 

    # virt-install --name=KVM2 \
    --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
    --disk path=/var/lib/libvirt/images/KVM2.qcow2 \
    --disk path=/var/lib/libvirt/images/config_drive_kvm2.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
    --disk path=/var/lib/libvirt/images/standby_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
    --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
    --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
    --network=bridge:PKT0bridgeP7P1,model=virtio --network=bridge:PKT1bridgeP7P2,model=virtio \
    --autostart --prompt --noautoconsole
    WARNING  --prompt mode is no longer supported.
    
    Starting install...
    Domain creation completed.

Once the virt-install command completes, the instance reboots twice and comes up in standby mode.

Upgrade

This section describes the procedure for offline upgrade of an HA pair of SBC SWe instances on KVM. The upgrade starts with the standby instance, followed by the active instance.

Note

Starting with SBC 8.1, this is the recommended procedure for upgrading an SBC SWe instance on KVM.

Caution

For LSWU, ensure that the base version and the version to which you are upgrading are LSWU-compatible.

Standby Instance

Log on to the KVM host for the current standby instance as root (or equivalent, with full administrative privileges), and perform the following steps:

  1. Log on to the standby instance as root and stop the SBC application.

    # sbxstop
  2. Switch back to the host and shut down the standby instance running as a KVM guest.

    # virsh shutdown <CE_name_of_Standby_instance>
  3. Navigate to the /var/lib/libvirt/images directory of the host.

    # cd /var/lib/libvirt/images
  4. Remove the soft (symbolic) link pointing to the base build that you used for installation.

    # rm -rf KVM2.qcow2
  5. Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
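
    For example, if you staged the upgrade image with the "_cp2" suffix during the prerequisite steps, you can confirm that it is present with a listing similar to the following (substitute your actual file name):

    # ls -l /var/lib/libvirt/images/<final_image_file_name>_cp2.qcow2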

  6. Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.

    #### Check the virtual size of the disk. By default, the disk size after the upgrade will be the same.
    # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
    
    #### Resize disk size for upgraded version
    # qemu-img resize <qcow2_image_file_for_upgrade> +100G
    Tip

    In the preceding code snippet, "+100G" means the new disk is 100 GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.

  7. Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.

    # ln -s <qcow2_image_file_for_upgrade> KVM2.qcow2
  8. Start the standby instance as a KVM guest.

    # virsh start <CE_name_of_Standby_instance>
  9. The standby instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by performing the following command:

    # swinfo -v
  10. After booting up, the standby instance attempts to synchronize with the active instance. To check the sync status, run the following command from the SBC CLI:

    % show table system syncStatus

Active Instance

Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and perform the following steps:

  1. Log on to the active instance as root and stop the SBC application.

    # sbxstop
  2. Switch back to the host and shut down the active instance running as a KVM guest.

    # virsh shutdown <CE_name_of_Active_instance>
  3. Navigate to the /var/lib/libvirt/images directory of the host.

    # cd /var/lib/libvirt/images
  4. Remove the soft (symbolic) link pointing to the base build that you used for installation.

    # rm -rf KVM1.qcow2
  5. Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
  6. Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.

    #### Check the virtual size of the disk. By default, the disk size after the upgrade will be the same.
    # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
    
    #### Resize disk size for upgraded version
    # qemu-img resize <qcow2_image_file_for_upgrade> +100G
    Tip

    In the preceding code snippet, "+100G" means the new disk is 100 GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.

  7. Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.

    # ln -s <qcow2_image_file_for_upgrade> KVM1.qcow2
  8. Start the active instance as a KVM guest.

    # virsh start <CE_name_of_Active_instance>
  9. The active instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by performing the following command:

    # swinfo -v
  10. Once the upgrade procedure for both standby and active instances is complete, the instances will automatically synchronize as an HA pair. To check the sync status, log on to the instance and perform the following command:

    % show table system syncStatus