This section describes the installation and Image Replacement Upgrade procedure for QCOW2-based deployments of an SBC SWe HA pair on KVM, using the CLI tool kvmInstall.py. The tool is contained in the tarball kvmInstall-1.0.0.tar.gz (available as part of the released software bundle). It significantly improves the workflow for installing and upgrading SBC SWe on KVM for QCOW2-based deployments, especially when an LSWU is performed.
The tool supports qcow2-based deployments of SBC SWe 8.0 or higher versions on KVM. For MRFP, you may use it to upgrade from SBC 7.2.x to SBC 8.0 or higher versions. The procedure described in this section assumes and recommends that the active and standby instances are guests on two different KVM hosts.
For installation or upgrade of standalone instances:
Before performing an installation followed by an upgrade, execute the following steps:
Ensure that you have root-level access to the KVM hosts. For an HA pair, log on to both KVM hosts as root (or equivalent, with full administrative privileges). Download the released software bundle and untar it. The unpacked bundle contains the tarball kvmInstall-1.0.0.tar.gz; copy it to your home directory.
# cp kvmInstall-1.0.0.tar.gz ~/
Unpack the tarball kvmInstall-1.0.0.tar.gz.
# tar -xvzf kvmInstall-1.0.0.tar.gz
Navigate to ~/kvmInstall-1.0.0 and check its contents. For this procedure, the files of interest are kvmInstall.py and README.md. To familiarize yourself with the kvmInstall.py tool, read README.md.
# cd ~/kvmInstall-1.0.0
# vi README.md
Ensure that the following dependencies of kvmInstall.py are installed on the KVM hosts:
- netaddr (Python package). If you have the Python package manager "pip" installed, you may use the command "pip install netaddr" to install the package.
- libguestfs-tools
- e2fsprogs
- coreutils
Use "sudo apt install <dependency>" to install the Linux packages. Additionally, the libvirtd service must be up and running. To check the status of the service, or to start or stop it, execute the command "sudo systemctl <status | start | stop> libvirtd".
Copy the qcow2 images of the base build (for initial installation) and the final build (to which you want to upgrade) to the /var/lib/libvirt/images directory.
# cp <path_to_qcow2_image_file> /var/lib/libvirt/images
Create copies of the base and final builds in the /var/lib/libvirt/images directory. While executing the cp command, add the suffixes "_cp1" and "_cp2" to the base and final build file names, respectively. For example, the names of the copied files should be similar to "sbc-V08.00.00A019-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2" and "sbc-V08.00.00A020-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2".
# cd /var/lib/libvirt/images
# cp <base_image_file_name>.qcow2 <base_image_file_name>_cp1.qcow2
# cp <final_image_file_name>.qcow2 <final_image_file_name>_cp2.qcow2
If you want to perform only an upgrade, skip the procedure for installation.
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Navigate to the ~/kvmInstall-1.0.0 directory.
# cd ~/kvmInstall-1.0.0
Run the script kvmInstall.py. For installation, you may run the script in two modes, depending on the command-line option:
The --cli option:
Execute the following command:
# python kvmInstall.py --cli
Provide inputs as prompted. The choice of inputs determines the nature of the instance. For example, to create an active instance of an HA pair, the options you input should be similar to the snippet below:
INFO:__main__:##################################################################
INFO:__main__: Executing Command : /home/test/kvmInstall-1.0.0/kvmInstall.py --cli
INFO:__main__:##################################################################
Install Type [1 - Standalone, 2 - HA]: 2
Enter the System Name: KVMHA
Enter the Ce Name: KVM1
Enter the Type of mgt IP[1 - V4/ 2 - V6/ 3 - V4+v6]: 1
Enter the mgt IP of the Active server: 10.54.221.252
Enter the mgt Prefix for Active server: 24
Enter the mgt GW for the Active server: 10.54.221.1
Enter the Peer Ce Name: KVM2
Enter the Type of Peer mgt IP[1 - V4/ 2 - V6/ 3-3 V4+V6]: 1
Enter the mgt IP of the Standby server: 10.54.221.253
Enter the mgt Prefix for Standby server: 24
Enter the mgt GW for the Standby server: 10.54.221.1
Enter the ha IP for Active server[V4]: 169.254.88.18
Enter the ha Prefix for Active server: 24
Enter the ha IP for Standby server[V4]: 169.254.88.20
Enter the ntp IP for the server: 10.128.254.67
Enter the Timezone ID [Check README for the ID(1-595)]: 27
Enter the SBC Type[isbc,ssbc,msbc,tsbc,mrfp]: isbc
Enter the TIPC id for the system: 1509
INFO:__main__:Created the active server config drive name : config_drive_KVM1.img under output dir
INFO:__main__:Created the standby server config drive name : config_drive_KVM2.img under output dir
INFO:__main__:Add this line to the virt-install for config_drive:
INFO:__main__:--disk path=<path of the config_drive.qcow2>,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE
The --file option:
Check for the boilerplate sbx.json file in the ~/kvmInstall-1.0.0/input directory, and modify the file as appropriate for the mode of deployment. For an HA pair, the modifications necessary in the boilerplate sbx.json file are similar to the inputs required for the --cli option. As a reference, you may use the configuration given below:
{ "installType": "2", "systemName": "KVMHA", "ceName": "KVM1", "peerCeName": "KVM2", "mgtIpType": "3", "mgtIp": "10.54.221.252", "mgtPrefix": "24", "mgtGw": "10.54.221.1", "mgtIpV6":"fd00:10:6b50:4d20::77", "mgtPrefixV6":"60", "mgtGwV6":"fd00:10:6b50:4d20::1", "haIp": "169.254.88.18", "haPrefix": "24", "peerMgtIpType": "3", "peerMgtIp": "10.54.221.253", "peerMgtPrefix": "24", "peerMgtGw": "10.54.221.1", "peerMgtIpV6": "fd00:10:6b50:4d20::78", "peerMgtPrefixV6": "60", "peerMgtGwV6": "fd00:10:6b50:4d20::1", "peerHaIp": "169.254.88.20", "ntpIp": "10.128.254.67", "timezone": "27", "sbctype": "isbc", "tipc": "1509" }
If you do not want to use an object for configuration, pass an empty string ("") as its value.
For reference, the possible values of some objects are as follows:
- installType: 1 - Standalone | 2 - HA
- mgtIpType: 1 - IPv4 | 2 - IPv6 | 3 - IPv4 + IPv6
- timezone: Refer to ~/kvmInstall-1.0.0/README.md.
- sbctype: isbc | ssbc | msbc | tsbc | mrfp
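If you edit sbx.json by hand, it can help to confirm that the file is still well-formed JSON before running the script. A minimal check, assuming the python interpreter used elsewhere in this procedure is available:
#### Validate the JSON syntax of the edited input file (prints the parsed content on success)
# python -m json.tool ./input/sbx.json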
Execute the following command:
# python kvmInstall.py --file ./input/sbx.json
The script executes and creates configuration drive images for both the active and standby instances: "config_drive_KVM1.img" for the active instance, and "config_drive_KVM2.img" for the standby instance. By default, the generated files are placed in the ~/kvmInstall-1.0.0/output directory. Copy the file ~/kvmInstall-1.0.0/output/config_drive_KVM1.img to the /var/lib/libvirt/images directory.
# cp ~/kvmInstall-1.0.0/output/config_drive_KVM1.img /var/lib/libvirt/images
As the standby instance is created on a different KVM host, copy the file ~/kvmInstall-1.0.0/output/config_drive_KVM2.img to the /var/lib/libvirt/images directory of the standby instance's host. Use the scp tool to copy the file remotely and securely.
# scp -P 22 ~/kvmInstall-1.0.0/output/config_drive_KVM2.img root@<Standby_Host_IP_Address>:/var/lib/libvirt/images
root@<Standby_Host_IP_Address>'s password:
config_drive_KVM2.img
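Optionally, verify that the configuration drive image was transferred intact by comparing checksums on the two hosts. This is a quick sanity check, assuming sha256sum (part of coreutils) is available on both hosts:
#### On the active instance's host
# sha256sum ~/kvmInstall-1.0.0/output/config_drive_KVM2.img
#### On the standby instance's host - the two checksums must match
# sha256sum /var/lib/libvirt/images/config_drive_KVM2.img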
Execute the following command to run kvmInstall.py with the --mirror_disk option:
# python kvmInstall.py --mirror_disk 50
INFO:__main__:##################################################################
INFO:__main__: Executing Command : kvmInstall.py --mirror_disk 50
INFO:__main__:##################################################################
INFO:__main__:Created Mirror Disk of size 50 GB name : active_Mirror.qcow2 for active server present under output
INFO:__main__:Add this line to the virt-install for mirror_drive:
INFO:__main__:--disk path=<path of the mirror_drive.qcow2>,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE,size=50
In the above snippet, "50" is the size of the mirror disk in GB. Provide the size of the mirror disk as appropriate. Ensure that the size of the mirror disk does not exceed the maximum allowed size.
The script executes and creates the mirror disk image file active_Mirror.qcow2 for the active instance in the ~/kvmInstall-1.0.0/output directory. Copy the mirror disk image file to the /var/lib/libvirt/images directory.
# cp ~/kvmInstall-1.0.0/output/active_Mirror.qcow2 /var/lib/libvirt/images
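To confirm that the mirror disk was created with the intended size before using it, you can inspect the copied image. This is a quick check, assuming qemu-img is available on the host (it is installed with the standard QEMU/KVM packages):
#### Display the virtual size of the copied mirror disk - it should report the size passed to --mirror_disk
# qemu-img info /var/lib/libvirt/images/active_Mirror.qcow2 | grep "virtual size"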
Create a soft (symbolic) link to the base build (which you want to install).
# ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM1.qcow2
Execute the following command for instantiating the active instance:
# virt-install --name=KVM1 \
  --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
  --disk path=/var/lib/libvirt/images/KVM1.qcow2 \
  --disk path=/var/lib/libvirt/images/config_drive_KVM1.img,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
  --disk path=/var/lib/libvirt/images/active_Mirror.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
  --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
  --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
  --network=bridge:PKT0bridgeP7P1,model=virtio --network=bridge:PKT1bridgeP7P2,model=virtio \
  --autostart --prompt --noautoconsole

WARNING  --prompt mode is no longer supported.

Starting install...
Domain creation completed.
When the virt-install command completes, the instance reboots twice and finally comes up in active mode.
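To follow the boot progress and confirm that the guest is up, you can use standard virsh commands. A brief sketch, using the domain name KVM1 from the virt-install command above:
#### List the guests and confirm that KVM1 is in the "running" state
# virsh list --all
#### Optionally attach to the guest console to watch the reboots (press Ctrl+] to detach)
# virsh console KVM1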
Log on to the KVM host reserved for the standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Navigate to the ~/kvmInstall-1.0.0 directory.
# cd ~/kvmInstall-1.0.0
Execute the following command to run kvmInstall.py with the --mirror_disk option:
# python kvmInstall.py --mirror_disk 50
INFO:__main__:##################################################################
INFO:__main__: Executing Command : kvmInstall.py --mirror_disk 50
INFO:__main__:##################################################################
INFO:__main__:Created Mirror Disk of size 50 GB name : active_Mirror.qcow2 for active server present under output
INFO:__main__:Add this line to the virt-install for mirror_drive:
INFO:__main__:--disk path=<path of the mirror_drive.qcow2>,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE,size=50
In the above snippet, "50" is the size of the mirror disk in GB. Provide the size of the mirror disk as appropriate. Ensure that the size of the mirror disk does not exceed the maximum allowed size, and that it is the same as that of the active instance.
The script executes and creates the mirror disk image file active_Mirror.qcow2 for the standby instance in the ~/kvmInstall-1.0.0/output directory. Copy the mirror disk image file to the /var/lib/libvirt/images directory.
# cp ~/kvmInstall-1.0.0/output/active_Mirror.qcow2 /var/lib/libvirt/images
Create a soft (symbolic) link to the base build (which you want to install).
# ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM2.qcow2
Execute the following command for instantiating the standby instance:
# virt-install --name=KVM2 \
  --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
  --disk path=/var/lib/libvirt/images/KVM2.qcow2 \
  --disk path=/var/lib/libvirt/images/config_drive_KVM2.img,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
  --disk path=/var/lib/libvirt/images/active_Mirror.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
  --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
  --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
  --network=bridge:PKT0bridgeP7P1,model=virtio --network=bridge:PKT1bridgeP7P2,model=virtio \
  --autostart --prompt --noautoconsole

WARNING  --prompt mode is no longer supported.

Starting install...
Domain creation completed.
When the virt-install command completes, the instance reboots twice and comes up in standby mode.
This section describes the procedure for an offline upgrade of an HA pair of SBC SWe instances on KVM. The upgrade starts with the standby instance, followed by the active instance.
Starting with SBC 8.0, this is the recommended procedure for upgrading an SBC SWe instance on KVM.
For LSWU, ensure that the base version and the upgraded version are LSWU-compatible.
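A practical preparation step is to record the currently installed version on both instances before you begin, so that you can confirm LSWU compatibility against the target version. The same swinfo command is used later in this procedure to verify the upgraded version:
#### On each instance, note the currently installed version before starting the upgrade
# swinfo -v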
Log on to the KVM host for the current standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Log on to the standby instance as root and stop the SBC application.
# sbxstop
Switch back to the host and shut down the standby instance running as a KVM guest.
# virsh shutdown <CE_name_of_Standby_instance>
Navigate to the /var/lib/libvirt/images directory of the host.
# cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
# rm -rf KVM2.qcow2
Ensure that you have the qcow2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
Optionally, check the qcow2 disk size, and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
#### Check for the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same.
# qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
#### Resize disk size for upgraded version
# qemu-img resize <qcow2_image_file_for_upgrade> +30G
In the above snippet, "+30G" means that the size of the new disk is increased by 30 GB with respect to the original disk size. For example, if the size of the original disk is 50 GB, then the size of the new disk is 80 GB. Ensure that the new disk size does not exceed the maximum allowed disk size.
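After resizing, you can re-run the size check to confirm the new virtual size before creating the symbolic link. This reuses the qemu-img command from the step above:
#### Confirm the new virtual size of the resized image (for example, about 80G if a 50G disk was grown by +30G)
# qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"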
Create a new soft (symbolic) link pointing to the final build (to which you want to upgrade).
# ln -s <qcow2_image_file_for_upgrade> KVM2.qcow2
Start the standby instance as a KVM guest.
# virsh start <CE_name_of_Standby_instance>
The standby instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:
# swinfo -v
After booting up, the standby instance attempts to synchronize with the active instance. To check the sync status, you may execute the following command:
% show table system syncStatus
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Log on to the active instance as root and stop the SBC application.
# sbxstop
Switch back to the host and shut down the active instance running as a KVM guest.
# virsh shutdown <CE_name_of_Active_instance>
Navigate to the /var/lib/libvirt/images directory of the host.
# cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
# rm -rf KVM1.qcow2
Ensure that you have the qcow2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory. Optionally, check the qcow2 disk size, and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
#### Check for the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same.
# qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
#### Resize disk size for upgraded version
# qemu-img resize <qcow2_image_file_for_upgrade> +30G
In the above snippet, "+30G" means that the size of the new disk is increased by 30 GB with respect to the original disk size. For example, if the size of the original disk is 50 GB, then the size of the new disk is 80 GB. Ensure that the new disk size does not exceed the maximum allowed disk size.
Create a new soft (symbolic) link pointing to the final build (to which you want to upgrade).
# ln -s <qcow2_image_file_for_upgrade> KVM1.qcow2
Start the active instance as a KVM guest.
# virsh start <CE_name_of_Active_instance>
The active instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:
# swinfo -v
Once the upgrade procedure for both the standby and active instances is complete, the instances automatically synchronize as an HA pair. To check the sync status, log on to either instance and execute the following command:
% show table system syncStatus