This section describes a procedure for installing or upgrading an SBC SWe 1:1 HA pair on KVM using a QCOW2 image file and an ISO config drive that you create with the CLI tool createConfigDrive.py. The tool is contained in the tarball createConfigDrive-1.0.0.tar.gz (available as part of the release software bundle). Compared to the traditional procedure involving an ISO file, this workflow significantly streamlines the installation and upgrade of SBC SWe on KVM, especially when an LSWU is performed.
Info: The procedure described in this section assumes and recommends that the active and standby instances are guests on two different KVM hosts.
Tip: For installation or upgrade of a standalone instance, complete the steps for a single KVM host, rather than two hosts, and for only one instance, rather than both active and standby instances.
Before performing an installation followed by an upgrade, execute the following steps:
Ensure that you have root-level access to the KVM hosts. For an HA pair, log on to both KVM hosts as root (or equivalent, with full administrative privileges).
Download the release software bundle and untar it. The unpacked bundle contains the tarball createConfigDrive-1.0.0.tar.gz. Copy it to your home directory.
Code Block
# cp createConfigDrive-1.0.0.tar.gz ~/
Unpack the tarball createConfigDrive-1.0.0.tar.gz.
Code Block
# tar -xvzf createConfigDrive-1.0.0.tar.gz
Navigate to ~/createConfigDrive-1.0.0 and ensure that both createConfigDrive.py and README.md are present. Open README.md in an editor such as vi for more information on the createConfigDrive.py tool.
Code Block
# cd ~/createConfigDrive-1.0.0
# vi README.md
Info: Ensure that the packages required by the createConfigDrive.py tool are installed on the KVM host (refer to README.md for the list).
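As an illustrative check only (the package names below are assumptions for a typical host, not a list confirmed by this guide), you can query the host's package manager before proceeding:
Code Block
#### RHEL/CentOS hosts: query assumed prerequisites (confirm the actual list in README.md)
# rpm -q genisoimage python
#### Debian/Ubuntu hosts: the equivalent query
# dpkg -l genisoimage python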
Copy the QCOW2 images of the base build (for initial installation) and of the build to which you want to upgrade to the /var/lib/libvirt/images directory.
Code Block
# cp <path_to_qcow2_image_file> /var/lib/libvirt/images
Create copies of the base and final builds in the /var/lib/libvirt/images directory. While executing the cp command, add the suffixes "_cp1" and "_cp2" to the base and final build file names, respectively. For example, the names of the copied files should be similar to "sbc-V08.01.00R000-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2" and "sbc-V08.01.00R000-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2".
Code Block
# cd /var/lib/libvirt/images
# cp <base_image_file_name>.qcow2 <base_image_file_name>_cp1.qcow2
# cp <final_image_file_name>.qcow2 <final_image_file_name>_cp2.qcow2
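Optionally, verify that both copies were created; a quick listing such as the following (not part of the official procedure) should show the _cp1 and _cp2 files:
Code Block
#### List the copied base and final build images
# ls -l /var/lib/libvirt/images/*_cp1.qcow2 /var/lib/libvirt/images/*_cp2.qcow2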
Tip: To perform only an upgrade, skip the procedure for installation.
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Navigate to the ~/createConfigDrive-1.0.0 directory.
Code Block
# cd ~/createConfigDrive-1.0.0
Run the script createConfigDrive.py. For installation, you can run the script in either of two modes. The --cli option provides screen prompts in response to which you enter configuration data for the deployment. The --file option requires you to pass the script a file, sbx.json, which must contain all the required configuration data in JSON format. Both options are shown below.
The --cli option:
Execute the following command to provide configuration data in response to prompts.
Code Block
# python createConfigDrive.py --cli
Code Block
INFO:__main__:##################################################################
INFO:__main__: Executing Command : /home/test/createConfigDrive-1.0.0/createConfigDrive.py --cli
INFO:__main__:##################################################################
Enter the SBC HA Mode [1 - 1:1, 2 - N:1]: 1
Enter the Install Type [1 - Standalone, 2 - HA]: 2
Enter the System Name: KVMHA
Enter the Ce Name: KVM1
Enter the Type of mgt IP [1 - V4, 2 - V6, 3 - V4+V6]: 1
Enter the V4 mgt IP of the Active server: 10.54.221.252
Enter the V4 mgt Prefix for Active server: 24
Enter the V4 mgt GW for the Active server: 10.54.221.1
Enter the Peer Ce Name: KVM2
Enter the Type of Peer mgt IP [1 - V4, 2 - V6, 3 - V4+V6]: 1
Enter the V4 mgt IP of the Standby server: 10.54.221.253
Enter the V4 mgt Prefix for Standby server: 24
Enter the V4 mgt GW for the Standby server: 10.54.221.1
Enter the ha IP for Active server[V4]: 169.254.88.18
Enter the ha Prefix for Active server: 24
Enter the ha IP for Standby server[V4]: 169.254.88.20
Enter the ntp IP for the server: 10.128.254.67
Enter the Timezone ID [Check README for the ID(1-595)]: 27
Enter the SBC Type[isbc,ssbc,msbc,tsbc,mrfp]: isbc
Enter the TIPC id for the system: 1509
Do you want to enter EMS settings [1- Yes, 2- No]: 2
Do you want to enter OAM settings [1- Yes, 2- No]: 2
Do you want to enter TSBC settings [1- Yes, 2- No]: 2
INFO:__main__:Cleared old JSON files : Done
INFO:__main__:Created the active server config drive name : config_drive_KVM1.iso under output dir
INFO:__main__:Created the standby server config drive name : config_drive_KVM2.iso under output dir

# ls -l output/config_drive_KVM*
-rw-r--r--. 1 root root 360448 Aug 13 06:32 output/config_drive_KVM1.iso
-rw-r--r--. 1 root root 360448 Aug 13 06:32 output/config_drive_KVM2.iso
The --file option:
Copy the template sbx.json file in the ~/createConfigDrive-1.0.0/input directory and modify it as appropriate for the deployment. The contents of the sbx.json file are similar to the values you provide with the --cli option. An example of a file edited for an HA pair is shown below:
Code Block
{
"sbxConf": {
"installType": "1",
"role": "1",
"systemName": "vsbcSystem",
"ceName": "vsbc1",
"peerCeName": "vbsc2",
"mgtIpType": "1",
"mgtIp": "1.1.10.10",
"mgtPrefix": "24",
"mgtGw": "1.1.10.1",
"mgtIpV6":"",
"mgtPrefixV6":"",
"mgtGwV6":"",
"haIp": "2.2.2.2",
"haPrefix": "24",
"peerMgtIpType": "1",
"peerMgtIp": "1.1.10.15",
"peerMgtPrefix": "24",
"peerMgtGw": "1.1.10.1",
"peerMgtIpV6": "",
"peerMgtPrefixV6": "",
"peerMgtGwV6": "",
"peerHaIp": "2.2.2.5",
"ntpIp": "1.1.10.1",
"timezone": "12",
"sbctype": "isbc",
"tipc": "1500",
"rgIp":"2.2.2.2",
"haMode":"1to1"
},
"emsConf":
{
"emsName": "",
"emsPass": "",
"downloadEmsConfig": "True",
"emsIp1": "",
"emsIp2": ""
},
"oamConf":
{
"oamIp1": "",
"oamIp2": ""
}
}
Tip: If you do not need an object for configuration, pass an empty string (""). For reference, the possible values of some objects are as follows:
- mgtIpType and peerMgtIpType: "1" (V4), "2" (V6), or "3" (V4+V6)
- sbctype: "isbc", "ssbc", "msbc", "tsbc", or "mrfp"
- timezone: an ID from 1 to 595 (refer to README.md for the list)
- haMode: "1to1" for a 1:1 HA pair
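Before running the script in --file mode, you can optionally validate the JSON syntax of the edited file with Python's built-in json.tool module (an extra sanity check, not part of the official procedure):
Code Block
#### Prints the parsed JSON on success; reports the offending line on a syntax error
# python -m json.tool ./input/sbx.json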
Execute the following command:
Code Block
# python createConfigDrive.py --file ./input/sbx.json
The script executes and creates configuration drive images for both the active and standby instances. The file config_drive_KVM1.iso is for the active instance, and the file config_drive_KVM2.iso is for the standby instance. By default, the generated files are placed in the ~/createConfigDrive-1.0.0/output directory. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_KVM1.iso to the /var/lib/libvirt/images directory.
Code Block
# cp ~/createConfigDrive-1.0.0/output/config_drive_KVM1.iso /var/lib/libvirt/images
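If you want to inspect a generated config drive before instantiation, you can optionally mount the ISO read-only and list its contents (this assumes an unused mount point such as /mnt):
Code Block
#### Mount the config drive read-only, list its contents, then unmount it
# mount -o loop,ro /var/lib/libvirt/images/config_drive_KVM1.iso /mnt
# ls -lR /mnt
# umount /mnt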
The recommended deployment creates the standby instance on a different KVM host. Use the scp tool to securely copy the file to the remote location. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_KVM2.iso to the /var/lib/libvirt/images directory of the standby instance's host.
Code Block
# scp -P 22 ~/createConfigDrive-1.0.0/output/config_drive_KVM2.iso root@<Standby_Host_IP_Address>:/var/lib/libvirt/images
root@<Standby_Host_IP_Address>'s password:
config_drive_KVM2.iso
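To confirm that the file was copied intact, you can optionally compare checksums on the two hosts (assuming sha256sum is available on both); the two digests must match:
Code Block
#### Checksum on the source (active instance's) host
# sha256sum ~/createConfigDrive-1.0.0/output/config_drive_KVM2.iso
#### Checksum on the destination (standby instance's) host
# ssh root@<Standby_Host_IP_Address> sha256sum /var/lib/libvirt/images/config_drive_KVM2.iso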
Execute the following command to create the mirror disk as a sparse file:
Code Block
# truncate --size=30G active_evlog.qcow2
Tip: In the preceding command, "30G" specifies the size of the mirror disk. Adjust the value if your deployment requires a different size.
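Because truncate creates a sparse file, the apparent size and the actual disk allocation differ. You can optionally confirm the result from the directory where you ran the command (an illustrative check only):
Code Block
#### Apparent (virtual) size -- should report 30G
# ls -lh active_evlog.qcow2
#### Actual allocation -- near zero for a freshly created sparse file
# du -h active_evlog.qcow2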
The command creates the mirror disk image file active_evlog.qcow2 for the active instance in the ~/createConfigDrive-1.0.0/output directory. Copy the mirror disk image file to the /var/lib/libvirt/images directory.
Code Block
# cp ~/createConfigDrive-1.0.0/output/active_evlog.qcow2 /var/lib/libvirt/images
Create a soft (symbolic) link for the base build that you want to install.
Code Block
# ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM1.qcow2
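Optionally, confirm that the link resolves to the intended copy of the base image (a quick check, not part of the official procedure):
Code Block
#### Show the symbolic link and the file it points to
# ls -l /var/lib/libvirt/images/KVM1.qcow2
# readlink /var/lib/libvirt/images/KVM1.qcow2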
Execute the following command to instantiate the active instance:
Code Block
# virt-install --name=KVM1 \
  --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
  --disk path=/var/lib/libvirt/images/KVM1.qcow2 \
  --disk path=/var/lib/libvirt/images/config_drive_KVM1.iso,device=cdrom,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
  --disk path=/var/lib/libvirt/images/active_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
  --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
  --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
  --network=bridge:PKT0,model=virtio --network=bridge:PKT1,model=virtio \
  --autostart --noautoconsole
Starting install...
Domain creation completed.
When the virt-install command completes, the instance reboots twice and finally comes up in active mode. Continue with the instantiation step for the standby instance.
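To monitor the instance while it boots, you can optionally use standard virsh commands, sketched below; the same commands apply to the standby instance (KVM2) later:
Code Block
#### Confirm that the guest is defined and running
# virsh list --all
#### Attach to the serial console to watch the boot; press Ctrl+] to detach
# virsh console KVM1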
Log on to the KVM host reserved for the standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Navigate to the ~/createConfigDrive-1.0.0 directory.
Code Block
# cd ~/createConfigDrive-1.0.0
Execute the following command to create the mirror disk as a sparse file:
Code Block
# truncate --size=30G active_evlog.qcow2
Tip: In the preceding command, "30G" specifies the size of the mirror disk. Adjust the value if your deployment requires a different size.
The command creates the mirror disk image file active_evlog.qcow2 for the standby instance in the ~/createConfigDrive-1.0.0/output directory. Copy the mirror disk image file to the /var/lib/libvirt/images directory.
Code Block
# cp ~/createConfigDrive-1.0.0/output/active_evlog.qcow2 /var/lib/libvirt/images
Create a soft (symbolic) link for the base build that you want to install.
Code Block
# ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM2.qcow2
Execute the following command to instantiate the standby instance:
Code Block
# virt-install --name=KVM2 \
  --ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
  --disk path=/var/lib/libvirt/images/KVM2.qcow2 \
  --disk path=/var/lib/libvirt/images/config_drive_KVM2.iso,device=cdrom,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
  --disk path=/var/lib/libvirt/images/active_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
  --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
  --network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
  --network=bridge:PKT0,model=virtio --network=bridge:PKT1,model=virtio \
  --autostart --noautoconsole
Starting install...
Domain creation completed.
When the virt-install command completes, the instance reboots twice and comes up in standby mode.
This section describes the procedure for offline upgrade of an HA pair of SBC SWe instances on KVM. The upgrade starts with the standby instance, followed by the active instance.
Info: Starting with SBC 8.1, this is the recommended procedure for upgrading an SBC SWe instance on KVM.
Note: For LSWU, ensure that the base version and the version to which you are upgrading are LSWU-compatible.
Log on to the KVM host for the current standby instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Log on to the standby instance as root and stop the SBC application.
Code Block
# sbxstop
Switch back to the host and shut down the standby instance running as a KVM guest.
Code Block
# virsh shutdown <CE_name_of_Standby_instance>
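Shutdown is asynchronous, so before modifying any disk files you can optionally confirm that the guest has reached the shut off state:
Code Block
#### Repeat until the reported state is "shut off"
# virsh domstate <CE_name_of_Standby_instance>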
Navigate to the /var/lib/libvirt/images directory of the host.
Code Block
# cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
Code Block
# rm -rf KVM2.qcow2
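Before executing the preceding rm command, you can optionally confirm that KVM2.qcow2 is only a symbolic link (and see its current target), so that you do not remove an actual image file:
Code Block
#### Identify the file type and show the link target (run before removing the link)
# file KVM2.qcow2
# readlink KVM2.qcow2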
Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
Code Block
#### Check the virtual size of the disk. By default, the disk size after the upgrade is the same.
# qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
#### Resize the disk for the upgraded version, if required.
# qemu-img resize <qcow2_image_file_for_upgrade> +30G
Tip: In the preceding code snippet, "+30G" increases the virtual disk size by 30 GB. Adjust the increment as required for your deployment.
Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.
Code Block
# ln -s <qcow2_image_file_for_upgrade> KVM2.qcow2
Start the standby instance as a KVM guest.
Code Block
# virsh start <CE_name_of_Standby_instance>
The standby instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:
Code Block
# swinfo -v
After booting up, the standby instance attempts to synchronize with the active instance. To check the sync status, execute the following command:
Code Block
% show table system syncStatus
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and execute the following steps:
Log on to the active instance as root and stop the SBC application.
Code Block
# sbxstop
Switch back to the host and shut down the active instance running as a KVM guest.
Code Block
# virsh shutdown <CE_name_of_Active_instance>
Navigate to the /var/lib/libvirt/images directory of the host.
Code Block
# cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
Code Block
# rm -rf KVM1.qcow2
Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
Code Block
#### Check the virtual size of the disk. By default, the disk size after the upgrade is the same.
# qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size"
#### Resize the disk for the upgraded version, if required.
# qemu-img resize <qcow2_image_file_for_upgrade> +30G
Tip: In the preceding code snippet, "+30G" increases the virtual disk size by 30 GB. Adjust the increment as required for your deployment.
Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.
Code Block
# ln -s <qcow2_image_file_for_upgrade> KVM1.qcow2
Start the active instance as a KVM guest.
Code Block
# virsh start <CE_name_of_Active_instance>
The active instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by executing the following command:
Code Block
# swinfo -v
Once the upgrade procedure for both the standby and active instances is complete, the instances automatically synchronize as an HA pair. To check the sync status, log on to either instance and execute the following command:
Code Block
% show table system syncStatus