Panel

In this Section:

Table of Contents
maxLevel4

Installation and Image Replacement Upgrade Procedure for QCOW2 Based Deployments

This section describes a procedure for installation or upgrade of an SBC SWe 1:1 HA pair on KVM, using a QCOW2 image file and an ISO config drive you create using the CLI tool createConfigDrive.py. The tool is contained within the tarball createConfigDrive-1.0.0.tar.gz (available as part of the release software bundle). It significantly improves the workflow for the installation and upgrade of SBC SWe on KVM as compared to the traditional procedure involving an ISO file, especially when an LSWU is performed.

Info
titleNote
  • The tool and the associated procedure are applicable only when using a QCOW2 file to deploy SBC SWe 8.1 or higher versions on KVM. For MRFP, you may use it to upgrade from SBC 7.2.x to SBC 8.0 or higher versions.
  • Starting with SBC 8.1, this is the recommended procedure for installation and upgrade of SBC SWe on KVM.


Info
titleNote

The procedure described in this section assumes and recommends that the active and standby instances are guests in two different KVM hosts.


Tip
titleTip

For installation or upgrade of standalone instances, complete the steps for a single KVM host, rather than two hosts, and for only one instance, rather than both active and standby instances.

Prerequisites

Before performing an installation followed by an upgrade, execute the following steps:

  1. Back up the system configuration and the data before starting the procedure.
  2. Ensure that you have root level access to the KVM hosts. For an HA pair, log on to both the KVM hosts as root (or equivalent, with full administrative privileges).
  3. On each KVM host, perform the following:
    1. Download the release software bundle and untar it. The unpacked bundle contains the tarball createConfigDrive-1.0.0.tar.gz. Copy it to your home directory.

      Code Block
      # cp createConfigDrive-1.0.0.tar.gz ~/


    2. Unpack the tarball createConfigDrive-1.0.0.tar.gz.

      Code Block
      # tar -xvzf createConfigDrive-1.0.0.tar.gz


    3. Navigate to ~/createConfigDrive-1.0.0 and ensure that both createConfigDrive.py and README.md are present. Open README.md in an editor such as vi to find more information on the createConfigDrive.py tool.

      Code Block
      # cd ~/createConfigDrive-1.0.0
      # vi README.md


      Info
      titleNote

      Ensure that the following packages that createConfigDrive.py requires are installed on the instances:

      • The Python package netaddr. If you have the Python package manager "pip" installed, you may use the command "pip install netaddr" to install the package.
      • Linux packages:
        • libguestfs-tools
        • e2fsprogs
        • coreutils
        • genisoimage
        For Debian-based Linux distributions, you may use the command "sudo apt install <dependency>" to install the Linux packages.

      Additionally, the libvirtd service must be up and running. To check the status of the service, or start/stop it, execute the command "sudo systemctl < status | start | stop > libvirtd".
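
      For example, on a Debian-based KVM host you could install the listed packages and confirm that libvirtd is running with commands similar to the following (a sketch only; adjust the package-manager commands to your distribution):

      Code Block
      # pip install netaddr
      # sudo apt install libguestfs-tools e2fsprogs coreutils genisoimage
      # sudo systemctl status libvirtd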


    4. Copy the QCOW2 images of the base build (for initial installation) and the build to which you want to upgrade to the /var/lib/libvirt/images directory.

      Code Block
      # cp <path_to_qcow2_image_file> /var/lib/libvirt/images


    5. Create copies of the base and final builds in the /var/lib/libvirt/images directory. While executing the cp command, add the suffix "_cp1" and "_cp2" to the base and final build file names respectively. For example, the names of the copied files should be similar to "sbc-V08.0001.00A01900R000-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2" and "sbc-V08.0001.00A02000R000-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2".

      Code Block
      # cd /var/lib/libvirt/images
      # cp <base_image_file_name>.qcow2 <base_image_file_name>_cp1.qcow2
      # cp <final_image_file_name>.qcow2 <final_image_file_name>_cp2.qcow2


Installation

Tip
titleTip

To perform only an upgrade, skip the procedure for installation.

Active Instance

Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:

  1. Navigate to the ~/createConfigDrive-1.0.0 directory.

    Code Block
    # cd ~/createConfigDrive-1.0.0


  2. Run the script createConfigDrive.py. For installation, you can run the script in either of two modes, depending on the command-line option. The --cli option provides screen prompts after which you can enter configuration data for the deployment. The --file option requires you to pass the script a file, sbx.json, which must contain all the required configuration data in JSON format. Both options are shown below.

    1. The --cli option:

      1. Execute the following command to provide configuration data in response to prompts.

        Code Block
        # python createConfigDrive.py --cli


      2. Anchor
        inputs necessary for the --cli option
        inputs necessary for the --cli option
        Provide inputs as prompted. Your responses determine the nature of the instance. For example, to create the active instance of an HA pair, enter values similar to the example below:

        Code Block
        INFO:__main__:##################################################################
        INFO:__main__:  Executing Command : createConfigDrive.py --cli
        INFO:__main__:##################################################################
        Enter the SBC HA Mode Type [1 - 1:1 , 2 - N:1]: 1
        Enter the SBC Name: KVMHA
        Enter the Install Type [1- Standalone, 2- HA]: 2
        Enter the System Name: KVMHA
        Enter the Ce Name: KVM1
        Enter the Type of mgt IP [1 - V4, 2 - V6, 3 - V4+V6]: 1
        Enter the V4 mgt IP of the Active server: 10.6.17.126
        Enter the V4 mgt Prefix for Active server: 24
        Enter the V4 mgt GW for the Active server: 10.6.17.1
        Enter the Peer Ce Name: KVM2
        Enter the Type of Peer mgt IP [1 - V4, 2 - V6, 3 - V4+V6]: 1
        Enter the V4 mgt IP of the Standby server: 10.6.17.127
        Enter the V4 mgt Prefix for Standby server: 24
        Enter the V4 mgt GW for the Standby server: 10.6.17.1
        Enter the ha IP for Active server[V4]: 169.254.99.13
        Enter the ha Prefix for Active server: 16
        Enter the ha IP for Standby server[V4]: 169.254.88.13
        Enter the ntp IP for the server: 10.1.1.2
        Enter the Timezone ID [Check README for the ID(1-595)]: 12
        Enter the SBC Type[isbc,ssbc,msbc,tsbc,mrfp]: isbc
        Enter the TIPC id for the system: 1509
        Do you want to enter EMS settings [1- Yes, 2-No ]: 2
        Do you want to enter OAM settings [1- Yes, 2-No ]: 2
        Do you want to enter TSBC settings [1- Yes, 2-No ]: 2
        Cleared old JSON files : Done
        INFO:__main__:Created the active server config drive name : config_drive_kvm1.iso under output dir
        INFO:__main__:Created the standby server config drive name : config_drive_kvm2.iso under output dir
        [root@Host-0 createConfigDrive-1.0.0]# ls -l output/config_drive_kvm*
        -rw-r--r--. 1 root root 360448 Aug 13 06:32 output/config_drive_kvm1.iso
        -rw-r--r--. 1 root root 360448 Aug 13 06:32 output/config_drive_kvm2.iso


    2. The --file option:

      1. Copy the template sbx.json file in the ~/createConfigDrive-1.0.0/input directory and modify it as appropriate for the deployment. The contents of the sbx.json file are similar to the values you provide with the --cli option. An example of a file edited for an HA pair is shown below:

        Code Block
        {
         "sbxConf": {
         "installType": "1",
         "role": "1",
         "systemName": "vsbcSystem",
         "ceName": "vsbc1",
         "peerCeName": "vbsc2",
         "mgtIpType": "1",
         "mgtIp": "1.1.10.10",
         "mgtPrefix
        directory, and modify the file as is appropriate for the mode of deployment. For an HA pair, the modifications necessary in the boilerplate sbx.json file is similar to the inputs necessary for the --cli option. As a reference, you may use the configuration given below:
        Code Block
        {
          "installType": "2", 
          "systemName": "KVMHA",
          "ceName": "KVM1",
          "peerCeName": "KVM2",
          "mgtIpType": "3",
          "mgtIp": "10.54.221.252",
          "mgtPrefix": "24",
          "mgtGw": "10.54.221.1",
          "mgtIpV6":"fd00:10:6b50:4d20::77",
          "mgtPrefixV6":"60",
          "mgtGwV6":"fd00:10:6b50:4d20::1",
          "haIp": "169.254.88.18",
          "haPrefix": "24",
          "peerMgtIpType": "3",
          "peerMgtIp": "10.54.221.253",
          "peerMgtPrefix": "24",
          "peerMgtGwmgtGw": "101.541.22110.1",
          "peerMgtIpV6mgtIpV6":"",
         "mgtPrefixV6"fd00:10:6b50:4d20::78:"",
          "peerMgtPrefixV6mgtGwV6": "60",
          "peerMgtGwV6haIp": "fd00:10:6b50:4d20::2.2.2.2",
         "haPrefix": "24",
         "peerMgtIpType": "1",
          "peerHaIppeerMgtIp": "1691.2541.8810.2015",
         "peerMgtPrefix": "ntpIp24",
         "peerMgtGw": "101.1281.25410.671",
          "timezonepeerMgtIpV6": "27",
          "sbctypepeerMgtPrefixV6": "isbc",
          "tipcpeerMgtGwV6": "1509",
        } 
      1.  "peerHaIp": "2.2.2.5",
         "ntpIp": "1.1.10.1",
         "timezone": "12",
         "sbctype": "isbc",
         "tipc": "1500",
         "rgIp":"2.2.2.2",
         "haMode":"1to1"
         },
         "emsConf":
         {
         "emsName": "",
         "emsPass": "",
         "downloadEmsConfig": "True",
         "emsIp1": "",
         "emsIp2": ""
         },
         "oamConf":
         {
         "oamIp1": "",
         "oamIp2": ""
         }
        }


        Tip
        titleTip

        If you do not need an object for configuration, pass an empty string ("") as its value.

        For reference, the possible values of some objects are as follows:

            - installType: 1 - Standalone | 2 - HA

            - mgtIpType: 1 - IPv4 | 2 - IPv6 | 3 - IPv4 + IPv6

            - timezone: Refer to ~/createConfigDrive-1.0.0/README.md.

            - sbctype: isbc | ssbc | msbc | tsbc | mrfp
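
        Optionally, before running the script, you can verify that the edited sbx.json is still syntactically valid JSON. A minimal check, assuming Python is available on the host (Python is already required to run createConfigDrive.py):

        Code Block
        # python -m json.tool ./input/sbx.json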


      2. Execute the following command:

        Code Block
        # python createConfigDrive.py --file ./input/sbx.json


  3. The script executes and creates configuration drive images for both the active and standby instances. The file config_drive_kvm1.iso is for the active instance, and the file config_drive_kvm2.iso is for the standby instance. By default, the generated files are placed in the ~/createConfigDrive-1.0.0/output directory. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso to the /var/lib/libvirt/images directory.

    Code Block
    # cp ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso /var/lib/libvirt/images


  4. The recommended deployment creates the standby instance on a different KVM host. Use the scp tool to securely copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso to the /var/lib/libvirt/images directory of the standby instance's host.

    Code Block
    # scp -P 22 ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso root@<Standby_Host_IP_Address>:/var/lib/libvirt/images/
    
    root@<Standby_Host_IP_Address>'s password:
    config_drive_kvm2.iso


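    Optionally, confirm that the file copied over intact by comparing checksums on the two hosts. This is a generic integrity check, not a required part of the procedure:

    Code Block
    # md5sum ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso
    # ssh root@<Standby_Host_IP_Address> md5sum /var/lib/libvirt/images/config_drive_kvm2.iso
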
  5. Execute the following commands to change directories and then create a mirror disk using truncate:

    Code Block
    # cd /var/lib/libvirt/images/
    # truncate --size=30G active_evlog.qcow2

    Tip
    titleTip

    In the preceding command, "30" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system. Ensure that the size you specify does not exceed the available space on the partition. Because the size of the disc impacts sync time, Ribbon recommends a value of 100GB or less.


  6. Check the QCOW2 disk size and modify it if required.

    Spacevars
    0company
     generally recommends a disk size of 100GB or more for installations.

    Code Block
    #### Check for the virtual size of the disk. By default, the size of the new disk after the installation will be the same.
    # qemu-img info <qcow2_image_file_for_install> | grep "virtual size"
    
    #### Resize disk size for the version to install
    # qemu-img resize <qcow2_image_file_for_install> +100G


    Tip
    titleTip

    In the preceding code snippet, "+100G" implies the new disk is 100GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.


  7. Create soft (symbolic) links for the base build that you want to install.

    Code Block
    # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM1.qcow2


  8. Execute the following command to instantiate the active instance:

    Code Block
    # virt-install --name=KVM1 \
    --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
    --disk path=/var/lib/libvirt/images/KVM1.qcow2 \
    --disk path=/var/lib/libvirt/images/config_drive_kvm1.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
    --disk path=/var/lib/libvirt/images/active_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
    --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
    --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
    --network=bridge:P7P1,model=virtio --network=bridge:P7P2,model=virtio \
    --autostart --prompt --noautoconsole
    WARNING  --prompt mode is no longer supported.
    
    Starting install...
    Domain creation completed.


    Info
    titleNote

    For more information on creating network bridges, refer to Creating and Linking Network Bridge.
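
    As a minimal illustration only (the page referenced above remains the authoritative procedure), a Linux bridge such as MGT0 can be created manually with the iproute2 tools and attached to a host NIC. The NIC name eno1 below is a placeholder for your environment, and bridges created this way do not persist across reboots:

    Code Block
    #### Create a bridge named MGT0, bring it up, and attach a host NIC (eno1 is a placeholder)
    # ip link add name MGT0 type bridge
    # ip link set MGT0 up
    # ip link set eno1 master MGT0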


    1. If you wish to install the SBC with an SR-IOV interface for packet ports, use the following command:

      Code Block
      virt-install --name=SBCSWE1A --ram=20480 --memorybacking hugepages=yes --vcpus sockets=1,cores=2,threads=2 --cpu host-passthrough --disk path=/var/lib/libvirt/images/SBCSWE1A.qcow2,device=disk,bus=virtio,format=qcow2,size=70 --disk path=/var/lib/libvirt/images/config_drive_SBCSWE1A.img,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE, device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE,size=70 --network=MGT0,model=virtio, --network=bridge:HA0,model=virtio --hostdev pci_0000_03_0a_0 --hostdev pci_0000_03_0a_1 --hostdev pci_0000_03_0e_0 --hostdev pci_0000_03_0e_1 --hostdev pci_0000_3b_00_0 --hostdev pci_0000_5e_00_0 --cputune=vcpupin0.vcpu=0,vcpupin0.cpuset=2,vcpupin1.vcpu=1,vcpupin1.cpuset=10,vcpupin2.vcpu=2,vcpupin2.cpuset=3,vcpupin3.vcpu=3,vcpupin3.cpuset=11 --numatune=0,mode=strict  --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 --autostart --noautoconsole --vnc --virt-type kvm


      Info
      titleNote

      For additional details, refer to Creating a New SBC SWe Instance on KVM Hypervisor (SRIOV).


Once the virt-install command completes, the instance reboots twice and finally comes up in active mode. Continue with the instantiation step for the standby instance.
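
You can optionally monitor the new guest from the host while it boots; for example, using the libvirt tools that are already present on the host:

Code Block
#### List defined guests and their states; press Ctrl+] to leave the console once attached
# virsh list --all
# virsh console KVM1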

Anchor
Standby
Standby
Standby Instance

Log on to the KVM host reserved for the standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:

  1. Navigate to the /var/lib/libvirt/images/ directory.

    Code Block
    # cd /var/lib/libvirt/images/


  2. Execute the following command to truncate the disk and create a mirror disk:

    Code Block
    # truncate --size=30G standby_evlog.qcow2
    


    Tip
    titleTip

    In the preceding command, "30" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system. Ensure that the size you specify does not exceed the available space on the partition. Because the size of the disc impacts sync time, Ribbon recommends a value of 100GB or less.


  3. Check the QCOW2 disk size and modify it if required.

    Spacevars
    0company
     generally recommends a disk size of 100GB or more for installations.

    Code Block
    #### Check for the virtual size of the disk. By default, the size of the new disk after the installation will be the same.
    # qemu-img info <qcow2_image_file_for_install> | grep "virtual size"
    
    #### Resize disk size for upgraded version
    # qemu-img resize <qcow2_image_file_for_install> +100G


    Tip
    titleTip

    In the preceding code snippet, "+100G" implies the new disk is 100GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.

  4. Create soft (symbolic) links for the base build that you want to install.

    Code Block
    # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM2.qcow2


  5. Anchor
    Step 5 of Standby
    Step 5 of Standby
    Execute the following command to instantiate the standby instance. Note that the config drive file (config_drive_kvm2.iso) required to instantiate the standby was also created during the "Active Instance" procedure.

    Code Block
    # virt-install --name=KVM2 \
    --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
    --disk path=/var/lib/libvirt/images/KVM2.qcow2 \
    --disk path=/var/lib/libvirt/images/config_drive_kvm2.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
    --disk path=/var/lib/libvirt/images/standby_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
    --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
    --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
    --network=bridge:P7P1,model=virtio --network=bridge:P7P2,model=virtio \
    --autostart --prompt --noautoconsole
    WARNING  --prompt mode is no longer supported.
    
    Starting install...
    Domain creation completed.


On completion of the virt-install command, the instance reboots twice and comes up in standby mode.
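
At this point, you can optionally verify that the two instances have formed an HA pair by logging on to either instance and running the same CLI check that is used later in the Upgrade section:

Code Block
% show table system syncStatus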

Upgrade

This section describes the procedure for offline upgrade of an HA pair of SBC SWe instances on KVM. The upgrade starts with the standby instance, followed by the active instance.

Info
titleNote

Starting with SBC 8.1, this is the recommended procedure for upgrading an SBC SWe instance on KVM.


Note
titleCaution

For LSWU, ensure that the base version and the version to which you are upgrading are LSWU-compatible.

Standby Instance

Log on to the KVM host for the current standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:

  1. Log on to the standby instance as root and stop the SBC application.

    Code Block
    # sbxstop


  2. Switch back to the host, shut down the standby instance running as a KVM guest.

    Code Block
    # virsh shutdown <CE_name_of_Standby_instance>
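
    Before removing any files in the next steps, you can optionally confirm from the host that the guest is fully shut off; for example:

    Code Block
    # virsh domstate <CE_name_of_Standby_instance>
    # virsh list --all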


  3. Navigate to the /var/lib/libvirt/images directory of the host.

    Code Block
    # cd /var/lib/libvirt/images


  4. Remove the soft (symbolic) link pointing to the base build that you used for installation.

    Code Block
    # rm -rf KVM2.qcow2


  5. Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.

  6. Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.

    Code Block
    #### Check for the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same.
    # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
    
    #### Resize disk size for upgraded version
    # qemu-img resize <qcow2_image_file_for_upgrade> +100G


    Tip
    titleTip

    In the preceding code snippet, "+100G" implies the new disk is 100GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.


  7. Create a new soft (symbolic) link pointing to the final build (file to which you want to upgrade).

    Code Block
    # ln -s <qcow2_image_file_for_upgrade> KVM2.qcow2


  8. Start the standby instance as a KVM guest.

    Code Block
    # virsh start <CE_name_of_Standby_instance>


  9. The standby instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:

    Code Block
    # swinfo -v


  10. After booting up, the standby instance attempts to synchronize with the active. To check the sync status, you may execute the following command:

    Code Block
    % show table system syncStatus


Active Instance

Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:

  1. Log on to the active instance as root and stop the SBC application.

    Code Block
    # sbxstop


  2. Switch back to the host, shut down the active instance running as a KVM guest.

    Code Block
    # virsh shutdown <CE_name_of_Active_instance>


  3. Navigate to the /var/lib/libvirt/images directory of the host.

    Code Block
    # cd /var/lib/libvirt/images


  4. Remove the soft (symbolic) link pointing to the base build that you used for installation.

    Code Block
    # rm -rf KVM1.qcow2


  5. Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
  6. Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.

    Code Block
    #### Check for the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same.
    # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
    
    #### Resize disk size for upgraded version
    # qemu-img resize <qcow2_image_file_for_upgrade> +100G


    Tip
    titleTip

    In the preceding code snippet, "+100G" implies the new disk is 100GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.


  7. Create a new soft (symbolic) link pointing to the final build (file to which you want to upgrade).

    Code Block
    # ln -s <qcow2_image_file_for_upgrade> KVM1.qcow2


  8. Start the active instance as a KVM guest.

    Code Block
    # virsh start <CE_name_of_Active_instance>


  9. The active instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:

    Code Block
    # swinfo -v


  10. Once the upgrade procedure for both standby and active instances is complete, the instances automatically synchronize as an HA pair. To check the sync status, log on to the instance and execute the following command:

    Code Block
    % show table system syncStatus
 

Pagebreak