This section describes a procedure for installing or upgrading an SBC SWe 1:1 HA pair on KVM using a QCOW2 image file and an ISO config drive that you create with the CLI tool createConfigDrive.py. The tool is contained within the tarball createConfigDrive-1.0.0.tar.gz (available as part of the release software bundle). Compared to the traditional procedure involving an ISO file, it significantly improves the workflow for installing and upgrading SBC SWe on KVM, especially when an LSWU is performed.
Note: The tool and the associated procedure are applicable only when using a QCOW2 file to deploy SBC SWe 8.1 or higher on KVM, launched in N:1 HA mode.
Note: The procedure described in this section assumes and recommends that the active and standby instances are guests on two different KVM hosts.
Tip: For installation or upgrade of standalone instances, complete the steps on a single KVM host rather than two, and for only one instance rather than both active and standby instances.
Prerequisites
Before performing an installation followed by an upgrade, perform the following steps:
- Back up the system configuration and the data.
- Ensure that you have root-level access to the KVM hosts. For an HA pair, log on to both KVM hosts as root (or equivalent, with full administrative privileges).
- On each KVM host, perform the following:
Download the release software bundle and untar it. The unpacked bundle contains the tarball createConfigDrive-1.0.0.tar.gz. Copy it to your home directory.
Code Block # cp createConfigDrive-1.0.0.tar.gz ~/
Unpack the tarball
createConfigDrive-1.0.0.tar.gz
.Code Block # tar -xvzf createConfigDrive-1.0.0.tar.gz
Navigate to ~/createConfigDrive-1.0.0 and ensure both createConfigDrive.py and README.md are present. Open README.md in an editor such as vi to find more information on the createConfigDrive.py tool.
Code Block # cd ~/createConfigDrive-1.0.0 # vi README.md
Info title Note Ensure that the following packages that createConfigDrive.py requires are installed on the KVM hosts:
- The Python package netaddr. If you have the Python package manager pip installed, use the command "pip install netaddr" to install the package.
- The Linux package genisoimage. Use the command "sudo apt install <dependency>" to install the Linux packages.
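As a quick sanity check before running the tool, a sketch like the following (an assumption on my part, not part of the official procedure; it presumes an apt-based host) verifies that both dependencies are present:

```shell
# Hypothetical pre-check for the createConfigDrive.py dependencies.
# Adjust the install hints for your distribution if it is not apt-based.
if python -c "import netaddr" >/dev/null 2>&1; then
    echo "netaddr: OK"
else
    echo "netaddr: missing (install with: pip install netaddr)"
fi
if command -v genisoimage >/dev/null 2>&1; then
    echo "genisoimage: OK"
else
    echo "genisoimage: missing (install with: sudo apt install genisoimage)"
fi
```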
Copy the QCOW2 images of the base build (for initial installation) and of the build to which you want to upgrade to the /var/lib/libvirt/images directory.
Code Block # cp <path_to_qcow2_image_file> /var/lib/libvirt/images
Create copies of the base and final builds in the /var/lib/libvirt/images directory. While performing the cp command, add the suffixes "_cp1" and "_cp2" to the base and final build file names, respectively. For example, the names of the copied files should be similar to "sbc-V08.01.00R000-connexip-os_07.00.00-A019_4684_amd64_cp1.qcow2" and "sbc-V08.01.00R000-connexip-os_07.00.00-A020_146_amd64_cp2.qcow2".
Code Block # cd /var/lib/libvirt/images # cp <base_image_file_name>.qcow2 <base_image_file_name>_cp1.qcow2 # cp <final_image_file_name>.qcow2 <final_image_file_name>_cp2.qcow2
Installation
Tip: To perform only an upgrade, skip the installation procedure.
Active Instance
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and perform the following steps:
Navigate to the
~/createConfigDrive-1.0.0
directory.Code Block # cd ~/createConfigDrive-1.0.0
Run the script createConfigDrive.py. For installation, you can run the script in either of two modes. The --cli option presents screen prompts at which you enter configuration data for the deployment. The --file option requires you to pass the script a file, sbx.json, which must contain all the required configuration data in JSON format. Both options are shown below.
The --cli option:
Enter the following command to provide configuration data in response to prompts.
Code Block # python createConfigDrive.py --cli
Your responses determine the nature of the instance. For example, to create the active instance of an HA pair, enter values similar to the example below:
Code Block
> python createConfigDrive.py --cli
WARNING:__main__:Another instance of CLOUD_CONFIG Install is running...
INFO:__main__:##################################################################
INFO:__main__: Executing Command : createConfigDrive.py --cli
INFO:__main__:##################################################################
Enter the Product Type [1 - SBC , 2 - OAM]: 1
Enter the SBC HA Mode Type [1 - 1:1 , 2 - N:1]: 1
Enter the SBC Install Type [1- Standalone, 2- HA]: 2
Enter the System Name: KVMHA
Enter the Ce Name: KVM1
Enter the Type of mgt IP[1- V4, 2- V6, 3- V4+v6]: 1
Enter the V4 mgt IP of the Active server: 10.6.7.126
Enter the V4 mgt Prefix for Active server: 24
Enter the V4 mgt GW for the Active server: 10.6.7.1
Do you want to configure second management port (mgt1) [1- Yes, 2-No ]: 2
Enter the Peer Ce Name: KVM2
Enter the Type of Peer mgt IP[1-V4, 2-V6, 3-V4+V6]: 1
Enter the V4 mgt IP of the Standby server: 10.6.7.127
Enter the V4 mgt Prefix for Standby server: 24
Enter the V4 mgt GW for the Standby server: 10.6.7.1
Enter the ha IP for Active server[V4]: 169.254.99.13
Enter the ha Prefix for Active server: 16
Enter the ha IP for Standby server[V4]: 169.254.88.13
Enter the ntp IP for the server: 10.1.1.2
Enter the Timezone ID [Check README for the ID(1-595)]: 12
Enter the SBC Type[isbc,ssbc,msbc,mrfp]: isbc
Enter the TIPC id for the system: 1569
Do you want SSH Public Key based authentication for 'admin' and 'linuxadmin' user [1- Yes, 2-No ]: 1
Enter SSH Public Key for 'admin' user: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnJ4s1adPtqTKb+8mtmVs4ZkIgP5a1CmCucJp0XtHrqFU454XO5APmlPknBI2qhb2W71HWlwPI88XqLM+KnpkoVC44UpZeCUoTuQfnuCt5XfXZnlbbJVV71N+fbmGDvCgTUqG5/cfVIrhqZ4OU8rpiznH3Vuofn/kLN/EBGnvUjODUe+EbmuAeD1CpWxm47TrbfZESMOXHfNHDNKZwmHwnX8qWOpltgYpK1Kx7m82Bb5H4qpREAjtJbkOH1Qm+Mm7f8ysMqgRrrpcEZgPWUSDQjsLofDlwERO++npJpzm6QiqRygKlpf9IRUWU3ckKpVwDfQhol3tbd3Pagav3cKTf
Enter SSH Public Key for 'linuxadmin' user: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnJ4s1adPtqTKb+8mtmVs4ZkIgP5a1CmCucJp0XtHrqFU454XO5APmlPknBI2qhb2W71HWlwPI88XqLM+KnpkoVC44UpZeCUoTuQfnuCt5XfXZnlbbJVV71N+fbmGDvCgTUqG5/cfVIrhqZ4OU8rpiznH3Vuofn/kLN/EBGnvUjODUe+EbmuAeD1CpWxm47TrbfZESMOXHfNHDNKZwmHwnX8qWOpltgYpK1Kx7m82Bb5H4qpREAjtJbkOH1Qm+Mm7f8ysMqgRrrpcEZgPWUSDQjsLofDlwERO++npJpzm6QiqRygKlpf9IRUWU3ckKpVwDfQhol3tbd3Pagav3cKTf
Do you want to enter EMS settings [1- Yes, 2-No ]: 2
Enable or Disable EMA REST module? [1- enable, 2- disable]: 1
Enable or Disable EMA Core module? [1- enable, 2- disable]: 1
Enable or Disable EMA Troubleshooting module? [1- enable, 2- disable]: 1
Do you want to enter OAM settings [1- Yes, 2-No ]: 2
Do you want to enter TSBC settings [1- Yes, 2-No ]: 2
INFO:__main__:Cleared old JSON files : Done
I: -input-charset not specified, using utf-8 (detected in locale settings)
Total translation table size: 0
Total rockridge attributes bytes: 411
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 23000
177 extents written (0 MB)
INFO:__main__:Created the active server config drive name : config_drive_kvm1.iso under output dir
I: -input-charset not specified, using utf-8 (detected in locale settings)
Total translation table size: 0
Total rockridge attributes bytes: 411
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 23000
177 extents written (0 MB)
INFO:__main__:Created the standby server config drive name : config_drive_kvm2.iso under output dir
The --file option:
Copy the template sbx.json file in the ~/createConfigDrive-1.0.0/input directory and modify it as appropriate for the deployment. The contents of the sbx.json file are similar to the values you provide with the --cli option. An example of a file edited for an HA pair is shown below:
Code Block
{
    "sbxConf": {
        "productType": "SBC",
        "haMode": "1:1",
        "personality": "1",
        "installType": "1",
        "role": "1",
        "systemName": "vsbcSystem",
        "ceName": "vsbc1",
        "peerCeName": "vbsc2",
        "mgtIpType": "1",
        "mgtIp": "1.1.10.10",
        "mgtPrefix": "24",
        "mgtGw": "1.1.10.1",
        "mgtIpV6": "",
        "mgtPrefixV6": "",
        "mgtGwV6": "",
        "isMgt1Conf": "1",
        "mgt1IpType": "1",
        "mgt1Ip": "10.2.3.41",
        "mgt1Prefix": "24",
        "mgt1Gw": "10.2.3.1",
        "mgt1IpV6": "",
        "mgt1PrefixV6": "",
        "mgt1GwV6": "",
        "haIp": "2.2.2.2",
        "haPrefix": "24",
        "peerMgtIpType": "1",
        "peerMgtIp": "1.1.10.15",
        "peerMgtPrefix": "24",
        "peerMgtGw": "1.1.10.1",
        "peerMgtIpV6": "",
        "peerMgtPrefixV6": "",
        "peerMgtGwV6": "",
        "peerMgt1IpType": "",
        "peerMgt1Ip": "",
        "peerMgt1Prefix": "",
        "peerMgt1Gw": "",
        "peerMgt1IpV6": "",
        "peerMgt1PrefixV6": "",
        "peerMgt1GwV6": "",
        "peerHaIp": "2.2.2.5",
        "ntpIp": "1.1.10.1",
        "timezone": "12",
        "sbctype": "isbc",
        "tipc": "1500",
        "rgIp": "2.2.2.2",
        "enableREST": "enabled",
        "enableCoreEMA": "enabled",
        "enableTS": "enabled"
    },
    "emsConf": {
        "emsName": "",
        "emsPass": "",
        "downloadEmsConfig": "True",
        "emsIp1": "",
        "emsIp2": "",
        "EmsPrivateNodeParameters": {
            "cluster_id": ""
        }
    },
    "oamConf": {
        "oamIp1": "",
        "oamIp2": ""
    }
}
Tip title Tip If you do not need an object for configuration, pass an empty string ("") as its value.
For reference, the possible values of some objects are as follows:
- installType: 1 - Standalone | 2 - HA
- mgtIpType: 1 - IPv4 | 2 - IPv6 | 3 - IPv4 + IPv6
- timezone: Refer to ~/createConfigDrive-1.0.0/README.md.
- sbctype: isbc | ssbc | msbc | tsbc | mrfp
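A stray quote or comma left over from editing sbx.json is an easy mistake to make, so it can help to confirm the file parses as valid JSON before handing it to the script. A minimal sketch (the path is the assumed template location from the steps above):

```shell
# Hypothetical validation step: parse the edited file with Python's
# built-in json.tool module before running createConfigDrive.py.
SBX_JSON=./input/sbx.json
if python -m json.tool "$SBX_JSON" >/dev/null 2>&1; then
    echo "$SBX_JSON: valid JSON"
else
    echo "$SBX_JSON: not valid JSON -- fix the syntax before running the script"
fi
```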
If you want to use SSH public key authentication for the 'admin' and 'linuxadmin' users, edit the ~/createConfigDrive-1.0.0/conf/user-data file to replace _ADMIN_SSH_KEY_ and _LINUXADMIN_SSH_KEY_ with the respective SSH public keys.
Enter the following command:
Code Block # python createConfigDrive.py --file ./input/sbx.json
The script runs and creates configuration drive images for both the active and standby instances. The file config_drive_kvm1.iso is for the active instance, and the file config_drive_kvm2.iso is for the standby instance. By default, the generated files are placed in the ~/createConfigDrive-1.0.0/output directory. Copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso to /var/lib/libvirt/images.
Code Block # cp ~/createConfigDrive-1.0.0/output/config_drive_kvm1.iso /var/lib/libvirt/images
The recommended deployment creates the standby instance on a different KVM host. Use the scp tool to securely copy the file ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso to the /var/lib/libvirt/images directory on the standby instance's host.
Code Block # scp -P 22 ~/createConfigDrive-1.0.0/output/config_drive_kvm2.iso root@<Standby_Host_IP_Address>:/var/lib/libvirt/images root@<Standby_Host_IP_Address>'s password: config_drive_kvm2.iso
Perform the following commands to change directories and then truncate the disk to create a mirror disk:
Code Block # cd /var/lib/libvirt/images # truncate --size=35G active_evlog.qcow2
Info title Note Ensure the minimum disk size of both the root disk and mirror disk is 35 GB.
In the preceding command, "35G" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system, and that does not exceed the available space on the partition.
Because the size of the disk impacts sync time, Ribbon recommends a value of 100 GB or less.
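Although truncate creates a sparse file that consumes little space up front, the mirror disk fills as events are logged, so it is worth confirming the partition can eventually hold the chosen size. A rough pre-check sketch (SIZE_GB and the directory are assumptions matching the example above):

```shell
# Hypothetical free-space check before creating the mirror disk.
# truncate makes a sparse file, so this checks eventual capacity,
# not immediate usage.
SIZE_GB=35
IMG_DIR=/var/lib/libvirt/images
avail_kb=$(df -Pk "$IMG_DIR" | awk 'NR==2 {print $4}')
need_kb=$((SIZE_GB * 1024 * 1024))
if [ "$avail_kb" -gt "$need_kb" ]; then
    echo "OK: ${IMG_DIR} has room for a ${SIZE_GB}G mirror disk"
else
    echo "WARNING: less than ${SIZE_GB}G free on ${IMG_DIR}"
fi
```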
Create soft (symbolic) links for the base build that you want to install.
Code Block # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM1.qcow2
Enter the following command to instantiate the active instance:
Code Block # virt-install --name=KVM1 \
--ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
--disk path=/var/lib/libvirt/images/KVM1.qcow2 \
--disk path=/var/lib/libvirt/images/config_drive_kvm1.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
--disk path=/var/lib/libvirt/images/active_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
--import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
--network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
--network=bridge:P7P1,model=virtio --network=bridge:P7P2,model=virtio \
--autostart --prompt --noautoconsole

WARNING --prompt mode is no longer supported.
Starting install...
Domain creation completed.
Info title Note For more information on creating network bridges, refer to Creating and Linking Network Bridge.
If you wish to install the SBC with an SR-IOV interface for packet ports, use a command similar to the following (the mirror disk path is shown as a placeholder):
Code Block # virt-install --name=SBCSWE1A --ram=20480 --memorybacking hugepages=yes \
--vcpus sockets=1,cores=2,threads=2 --cpu host-passthrough \
--disk path=/var/lib/libvirt/images/SBCSWE1A.qcow2,device=disk,bus=virtio,format=qcow2,size=70 \
--disk path=/var/lib/libvirt/images/config_drive_SBCSWE1A.img,device=disk,bus=virtio,serial=RIBBON_CONFIG_DRIVE \
--disk path=/var/lib/libvirt/images/<mirror_disk_file>.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE,size=70 \
--network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
--hostdev pci_0000_03_0a_0 --hostdev pci_0000_03_0a_1 --hostdev pci_0000_03_0e_0 --hostdev pci_0000_03_0e_1 \
--hostdev pci_0000_3b_00_0 --hostdev pci_0000_5e_00_0 \
--cputune=vcpupin0.vcpu=0,vcpupin0.cpuset=2,vcpupin1.vcpu=1,vcpupin1.cpuset=10,vcpupin2.vcpu=2,vcpupin2.cpuset=3,vcpupin3.vcpu=3,vcpupin3.cpuset=11 \
--numatune=0,mode=strict --import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
--autostart --noautoconsole --vnc --virt-type kvm
Info title Note For additional details, refer to Creating a New SBC SWe Instance on KVM Hypervisor (SRIOV).
Once the virt-install
command completes, the instance reboots twice and finally comes up in active mode. Continue with the instantiation step for the standby instance.
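Before moving on, a quick state query with virsh confirms the guest was created and is running; this is an optional check, not part of the official procedure, and the domain name KVM1 comes from the example above:

```shell
# Hypothetical post-install check: verify the active instance's domain
# state on the host before proceeding to the standby instance.
DOMAIN=KVM1
state=$(virsh domstate "$DOMAIN" 2>/dev/null)
if [ "$state" = "running" ]; then
    echo "$DOMAIN is running"
else
    echo "$DOMAIN state: ${state:-not found} -- check 'virsh list --all'"
fi
```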
Standby Instance
Log on to the KVM host reserved for the standby instance as root (or equivalent, with full administrative privileges), and perform the following steps:
Navigate to the
/var/lib/libvirt/images/
directory.Code Block # cd /var/lib/libvirt/images/
Enter the following command to truncate the disk and create a mirror disk:
Code Block # truncate --size=35G standby_evlog.qcow2
Info title Note Ensure the minimum disk size of both the root disk and mirror disk is 35 GB.
In the preceding command, "35G" is the size of the mirror disk in GB. Specify a size for the mirror disk that is appropriate for your system, and that does not exceed the available space on the partition.
Because the size of the disk impacts sync time, Ribbon recommends a value of 100 GB or less.
Create soft (symbolic) links for the base build that you want to install.
Code Block # ln -s /var/lib/libvirt/images/<base_image_file_name>_cp1.qcow2 /var/lib/libvirt/images/KVM2.qcow2
Enter the following command to instantiate the standby instance. Note that the config drive file (config_drive_kvm2.iso) required to instantiate the standby instance was also created during the "Active Instance" procedure.
Code Block # virt-install --name=KVM2 \
--ram=18432 --vcpus sockets=1,cores=10,threads=1 --cpu host \
--disk path=/var/lib/libvirt/images/KVM2.qcow2 \
--disk path=/var/lib/libvirt/images/config_drive_kvm2.iso,device=cdrom,serial=RIBBON_CONFIG_DRIVE \
--disk path=/var/lib/libvirt/images/standby_evlog.qcow2,device=disk,bus=virtio,serial=RIBBON_MIRROR_DRIVE \
--import --os-type=linux --os-variant=debianwheezy --arch=x86_64 \
--network=bridge:MGT0,model=virtio --network=bridge:HA0,model=virtio \
--network=bridge:P7P1,model=virtio --network=bridge:P7P2,model=virtio \
--autostart --prompt --noautoconsole

WARNING --prompt mode is no longer supported.
Starting install...
Domain creation completed.
Once the virt-install command completes, the instance reboots twice and comes up in standby mode.
Upgrade
This section describes the procedure for offline upgrade of an HA pair of SBC SWe instances on KVM. The upgrade starts with the standby instance, followed by the active instance.
Note: Starting with SBC 8.1, this is the recommended procedure for upgrading an SBC SWe instance on KVM.
Note: For LSWU, ensure that the base version and the version to which you are upgrading are LSWU-compatible.
Standby Instance
Log on to the KVM host for the current standby instance as root (or equivalent, with full administrative privileges), and perform the following steps:
Log on to the standby instance as
root
and stop the SBC application.Code Block # sbxstop
Switch back to the host and shut down the standby instance running as a KVM guest.
Code Block # virsh shutdown <CE_name_of_Standby_instance>
Navigate to the
/var/lib/libvirt/images
directory of the host.Code Block # cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
Code Block # rm -rf KVM2.qcow2
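Because rm -rf offers no safety net, a slightly more defensive variant (a sketch, not part of the official procedure) first verifies that KVM2.qcow2 is indeed the symbolic link created during installation, so a real image file is never deleted by mistake:

```shell
# Hypothetical guard: only remove KVM2.qcow2 if it is a symlink,
# leaving the underlying image files untouched.
cd /var/lib/libvirt/images
if [ -L KVM2.qcow2 ]; then
    rm KVM2.qcow2
    echo "removed symlink KVM2.qcow2"
else
    echo "KVM2.qcow2 is not a symlink -- inspect it before removing"
fi
```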
Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the
/var/lib/libvirt/images
directory.Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
Code Block #### Check the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same. # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size" #### Resize the disk for the upgraded version # qemu-img resize <qcow2_image_file_for_upgrade> +100G
Tip title Tip In the preceding code snippet, "+100G" implies the new disk is 100 GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.
Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.
Code Block # ln -s <qcow2_image_file_for_upgrade> KVM2.qcow2
Start the standby instance as a KVM guest.
Code Block # virsh start <CE_name_of_Standby_instance>
The standby instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by running the following command:
Code Block # swinfo -v
After booting up, the standby instance attempts to synchronize with the active instance. To check the sync status, run the following command:
Code Block % show table system syncStatus
Active Instance
Log on to the KVM host reserved for the active instance as root (or equivalent, with full administrative privileges), and perform the following steps:
Log on to the active instance as
root
and stop the SBC application.Code Block # sbxstop
Switch back to the host and shut down the active instance running as a KVM guest.
Code Block # virsh shutdown <CE_name_of_Active_instance>
Navigate to the
/var/lib/libvirt/images
directory of the host.Code Block # cd /var/lib/libvirt/images
Remove the soft (symbolic) link pointing to the base build that you used for installation.
Code Block # rm -rf KVM1.qcow2
Ensure that you have the QCOW2 image file for the SBC version to which you want to upgrade in the /var/lib/libvirt/images directory.
Optionally, check the QCOW2 disk size and modify it if required. Usually, the disk size for the upgraded build is the same as the original disk size.
Code Block #### Check the virtual size of the disk. By default, the size of the new disk after the upgrade will be the same. # qemu-img info <qcow2_image_file_for_upgrade> | grep "virtual size" #### Resize the disk for the upgraded version # qemu-img resize <qcow2_image_file_for_upgrade> +100G
Tip title Tip In the preceding code snippet, "+100G" implies the new disk is 100 GB larger than the original disk size. Ensure that the new disk size does not exceed the maximum allowed disk size.
Create a new soft (symbolic) link pointing to the build file to which you want to upgrade.
Code Block # ln -s <qcow2_image_file_for_upgrade> KVM1.qcow2
Start the active instance as a KVM guest.
Code Block # virsh start <CE_name_of_Active_instance>
The active instance comes up with the upgraded version of the SBC SWe. Log on to the instance as root, and check the details of the upgraded version by running the following command:
Code Block # swinfo -v
Once the upgrade procedure for both standby and active instances is complete, the instances automatically synchronize as an HA pair. To check the sync status, log on to the instance and run the following command:
Code Block % show table system syncStatus