Modified for 16.0.2

Overview

Deploying the EdgeView in a clustered configuration provides the following functionalities in addition to all the features provided by a traditional single-node deployment:

  • Automatic system recovery from failures or planned infrastructure maintenance with minimal application disruption.
  • Support for large-scale deployments in excess of 100,000 connected EdgeMarc devices, with horizontal and vertical scaling.

The following technologies are used to provide these capabilities:

  • Container orchestration to schedule and scale our microservices across a cluster of hosts.
  • Storage abstraction layer for stateful applications, to store, replicate and provide failover capabilities for persistent data.

Deployment Model - Storidge

Deployment Model - NFS

Requirements

Minimum Requirements

  • Five CentOS 7, RHEL 8, or Ubuntu 18 virtual machines. All virtual machines within the cluster must run the same OS.
    • Ensure that these dedicated virtual machines have unique hostnames and that no other production applications run on them. Install and update base OS packages to the latest versions.
    • Each virtual machine requires an active internet connection to automatically install and configure software dependencies.
    • Each virtual machine must not have any pre-existing OS-level firewall rules (iptables).
  • vCPU/Core: 8
    • Example: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50 GHz
  • RAM: 8 GB
  • Load balancer or network configuration (DNS) to provide a single network endpoint for the cluster.
  • NIC Interface: 1
  • Disk configuration options (choose one):
    • To use NFS, configure and operate an NFS server that is separate from your Docker Swarm cluster.
    • To use Storidge, provide the following disk configuration on each VM:

      • Disks mounted to each Virtual Machine: 4
      • Host OS: 100 GB minimum
      • 3x disks for Docker storage: 100 GB minimum each per VM (this will grow as you add data to the system over time)
      • Disk space monitoring and alerting configured at safe threshold (for example, 80% used capacity)
      • Ability to expand disk space when it reaches the threshold
  • Docker version 19.03.8
  • Storidge version V2.0.0-3411
Note

This is the minimum recommendation for initial installation. Monitor your network environment and add additional CPU, RAM, and Disk space, as required.
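
A minimal way to spot-check these minimums on each VM, assuming standard Linux utilities (adjust the commands for your distribution):

    $ nproc                   # expect at least 8 vCPUs/cores
    $ free -g                 # expect at least 8 GB of RAM
    $ hostnamectl --static    # the hostname must be unique within the cluster
    $ cat /etc/os-release     # every VM in the cluster must run the same OS
    $ sudo iptables -S        # should report no pre-existing firewall rules beyond the defaults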

Network Port Requirements

Open the following ports for a successful HA EdgeView installation:

  • 80, 443 for HTTP/HTTPS
    Inbound (EdgeView GUI access and initial ZTP connection from the EdgeMarc)
  • 5671 for EMPath
    Inbound (RabbitMQ communication between the EdgeMarc and the EdgeView)
  • 8022, 8443 for EM GUI access
    Inbound (Communication between the EdgeView and the EdgeMarc)
  • 9142 for legacy support data syncing
    Inbound (Cassandra connection for legacy support)
  • 22 for SSH
    Inbound (Connection of the EdgeMarc devices in environments that have both the EdgeView 14 and the EdgeView 16 deployed)
  • 9000 for the Portainer Management GUI
  • 5000 for the Docker Private Registry Service
  • 3260 for iSCSI communication between Storidge hosts
  • 8282 for the Storidge REST API
  • 8383 for Storidge secure cluster configuration
  • 16990 for the Storidge Metrics Exporter
  • 16996 for Storidge DFS internode communication
  • 16997 for the Storidge SDS CLI server
  • 16998 for Storidge controller node heartbeat
  • 16999 for DFS-CIO internode communication
  • 25, 587 for SMTP
    Outbound (For the EdgeView to send email notifications to users)
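
Because the cluster VMs must start with no OS-level firewall rules, these ports are typically opened on the surrounding network (perimeter firewall, security groups, or the load balancer). As a rough check, the sketch below tests reachability of the inbound application ports from a machine outside the cluster, assuming the example primary node address 10.10.10.1 used later in this document:

    $ for port in 80 443 5671 8022 8443 9142 22; do
        nc -zv -w 3 10.10.10.1 "$port"
      done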

Deployment

The main difference between deploying the EdgeView in a multi-node configuration and a single-node configuration is that, for multi-node, you must first install and configure a storage backend (a Storidge cluster or an NFS server) before proceeding with the EdgeView installation. At a high level, these are the required steps:

  1. Install Storidge software on each node you plan on including in the cluster.
  2. Initialize the Storidge Cluster.
  3. Install the EdgeView in HA mode.

To provide a concrete example, the tables below document an example set of servers. It is recommended that you create a similar document for your environment so that you can reference it later. The installation instructions that follow refer to these tables.

Example set of Servers for Storidge

Hostname             IP Address    Swarm Role    Storidge Role    Notes
prod-ev-cluster-1    10.10.10.1    Manager       SDS              Primary and install files located here
prod-ev-cluster-2    10.10.10.2    Manager       Backup 1
prod-ev-cluster-3    10.10.10.3    Manager       Backup 2
prod-ev-cluster-4    10.10.10.4    Worker        Storage
prod-ev-cluster-5    10.10.10.5    Worker        Storage

Example set of Servers for NFS

Hostname             IP Address    Swarm Role    Notes
prod-ev-cluster-1    10.10.10.1    Manager       Primary and install files located here
prod-ev-cluster-2    10.10.10.2    Manager
prod-ev-cluster-3    10.10.10.3    Manager
prod-ev-cluster-4    10.10.10.4    Worker
prod-ev-cluster-5    10.10.10.5    Worker
prod-nfs-server-1    10.10.10.6    N/A           NFS server

Install or Configure Storage Backend

To successfully deploy an EdgeView system in HA mode, you must either install and configure Storidge or use an NFS server to provide persistent storage to the cluster.

Install Storidge

To install Storidge, perform the following steps using an SSH client such as PuTTY:

  1. SSH into each of the five servers and run the following command to install the Storidge software:

    $ curl -fsSL ftp://104.131.153.182/pub/ce/cio-ce | sudo bash -s -- -f -r 3411

    Wait for the above command to finish its execution successfully. It is expected to take several minutes.

  2. On the first node (primary node) in the cluster, run the following command:

    Note

    If you have more than one network interface, add the --ip option and provide the IP address of the primary node (see the example after this procedure).

    $ cioctl create --all-managers --noportainer 
  3. Copy the join command and run that on the rest of the nodes in the cluster (nodes 2 to 5).

    If you have more than one network interface, add the --ip option and provide the IP address of the node.

    For example:

    $ cioctl join 10.10.10.1 3b6ca81e9370a228c7fc8 edbfa3c6361-3aff2bb8
  4. Once you have added all your nodes to the cluster, return to the primary node and initialize the cluster with the ‘cioctl init’ command that was displayed after running the ‘cioctl create’ command.
    For example: 

    $ cioctl init e2e14943

    You can now see the cluster form. This may take several minutes.

  5. To verify that all five nodes are present and that their status is ‘normal’, run the following command:

    $ cio node ls 
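
For nodes with more than one network interface (see the notes in steps 2 and 3), the same commands with the --ip option would look like the following sketch, which assumes the option is appended to the commands shown above and uses the example addresses and join tokens from this section:

    $ cioctl create --all-managers --noportainer --ip 10.10.10.1
    $ cioctl join 10.10.10.1 3b6ca81e9370a228c7fc8 edbfa3c6361-3aff2bb8 --ip 10.10.10.2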

Install or Configure an NFS Server

To install or configure an NFS server, use an SSH client such as PuTTY. This section provides example commands for creating an NFS server if one is not already available in the environment (tested with RHEL 8; adjust the commands for your environment as necessary).

Example NFS server setup with an RHEL 8 NFSv4 server:

  1. To install the packages, run the following command: 

    $ yum -y install nfs-utils
  2. Once the installation is complete, run the following command to start and enable the service: 

    $ systemctl enable --now nfs-server 
  3. To create the directory where the data will be written, run the following command:

    $ mkdir /dockervolumes
  4. Create the /etc/exports configuration:

    $ cat > /etc/exports << EOF
    /dockervolumes *(rw,no_root_squash,no_subtree_check)
    EOF
  5. To export the directory for use, run the following command:

    $ exportfs -ar
  6. To validate, run the following command:

     $ showmount -e  

    If NFS is set up correctly, the output lists the /dockervolumes export.
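
For reference, a swarm node can consume this export through Docker's built-in local volume driver with NFS options, as sketched below. This is only an illustration of the mechanism; the EdgeView cluster installer is expected to create the volumes it needs, the address 10.10.10.6 is the example NFS server from the table above, and the volume name is hypothetical:

    $ docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=10.10.10.6,rw,nfsvers=4 \
        --opt device=:/dockervolumes \
        example_nfs_volume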

Install the EdgeView

To install and configure the EdgeView, perform the following steps:

  1. Download the EdgeView artifacts ‘cluster-install-ev.sh’ and ‘ev-fullpkg.tar.xz’ to the /opt directory of the primary node (note that you will need an HA license to utilize this functionality).

    Note

    To obtain these artifacts, log in to Salesforce and open the Ribbon Global Software Center (GSC) in an additional browser tab. In the Downloads section, search for the name and version of the required software.
    Contact Ribbon Support for more information.

  2. Make the installation script executable using the command given below:

    chmod +x cluster-install-ev.sh
  3. To execute the installation process, use the following command:

    ./cluster-install-ev.sh

    The installation procedure will begin. Please provide input as prompted by the console output during the installation process.

    The installation creates a log file of all the performed actions for historical review/troubleshooting. The name of this file is $MM-DD-YYYY-ev-install.log.

  4. You are prompted to enter information related to your networking configuration. Please read this section carefully and provide appropriate values for your network environment.

    Once the installation is complete, the console screen displays the following message:

    “EdgeView Installation/Upgrade Completed”

    A directory named scc-build is created.

  5. All services start and all replica counts reach their expected targets (for example, 1/1 or 5/5) over the next several minutes. To validate, run the following command:

    docker service ls

    Tip

    Make sure that the replicas all meet their targets, for example, 1/1 or 5/5. If they do not, check the task state and any errors for the affected service using the following command (a scripted check is also sketched after this procedure):

    docker service ps $service_name --no-trunc
  6. For the initial SCC configuration, register your EdgeView server by accessing it through the web interface at its IP address and entering the registration information that you received in your email. If you didn’t receive the registration information, reach out to Ribbon Support.
  7. Enter the EdgeView admin registration details and click NEXT.
  8. Enter the Tenant admin registration details and click NEXT.
  9. Click APPLY to complete the registration.
  10. Click LOGIN to navigate to the login screen.
  11. Enter the credentials and click SIGN IN to log in to the EdgeView.
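
As a convenience, the scripted check referenced in the Tip above lists any service whose replica count has not yet reached its target; it assumes only the standard docker CLI:

    docker service ls --format '{{.Name}} {{.Replicas}}' \
      | awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 " is at " $2 }'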

Upgrades

Upgrade Storidge Cluster

Storidge upgrades are managed with the cioctl node update command:

cioctl node update <NODENAME | NODEID> [options]

This command updates Storidge software on the node to the latest version.

Storidge supports cluster-aware updates so that users can easily upgrade to the latest capabilities. Cluster-aware updating upgrades nodes to the latest software release while the cluster remains online and services continue to run.

The cioctl node update command updates the Storidge software components and dependencies on a node. When run, it checks for available software updates. If an update is available, it performs the following sequence:

  1. Drain the node, so services are moved to operating nodes.
  2. Cordon the node and set it into maintenance mode.
  3. Download the latest software release to /var/lib/Storidge.
  4. Install the latest software update and any dependencies.
  5. Reboot the node.
  6. Uncordon node to exit maintenance mode, and rejoin the cluster.

The cioctl node update command prescribes an update sequence so that worker nodes are updated first and the SDS (primary) node is updated last. For further documentation, refer to the Storidge Documentation.
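
As an illustration, one update pass over the example cluster from the Storidge server table might look like the sketch below, following the workers-first, SDS-last sequence and verifying node health between updates:

    cioctl node update prod-ev-cluster-4    # worker
    cio node ls                             # wait for status to return to normal
    cioctl node update prod-ev-cluster-5    # worker
    cioctl node update prod-ev-cluster-2    # backup manager
    cioctl node update prod-ev-cluster-3    # backup manager
    cioctl node update prod-ev-cluster-1    # SDS (primary) node, updated last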

Upgrade NFS Swarm Nodes

For systems that are utilizing NFS as the storage backend, Docker packages should be regularly updated using the OS package manager (yum or apt).

Example:

$ yum update docker-ce
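
To keep services available while patching, update one node at a time: drain it, update Docker, and return it to service. The sketch below uses standard Docker Swarm availability commands with a hostname from the example table; the package command is a placeholder for your package manager:

    $ docker node update --availability drain prod-ev-cluster-4    # on a manager: move services off the node
    $ yum update docker-ce                                         # on prod-ev-cluster-4 (or the apt equivalent on Ubuntu)
    $ systemctl restart docker                                     # on prod-ev-cluster-4
    $ docker node update --availability active prod-ev-cluster-4   # on a manager: return the node to service
    $ docker service ls                                            # confirm replicas return to their targets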

Upgrade EdgeView

Perform the following steps to upgrade EdgeView:

  1. Download the EdgeView artifacts cluster-install-ev.sh and ev-fullpkg.tar.xz to the /opt directory of your system.

    Note

    To obtain these artifacts, log in to Salesforce and open the Ribbon Global Software Center (GSC) in an additional browser tab. In the Downloads section, search for the name and version of the required software.
    Contact Ribbon Support for more information.

  2. Make the installation script executable using the command given below:

      chmod +x cluster-install-ev.sh
  3. Execute the following command to start the installation process:

    ./cluster-install-ev.sh
  4. The installation creates a log file of all actions for historical review/troubleshooting, using the file name format $MM-DD-YYYY-ev-install.log. When prompted, validate or update the information related to your networking configuration. Once the installation completes, the console screen displays the message “EdgeView Installation/Upgrade Completed”.

  5. To check whether the containers started successfully, use the following command:

    docker service ls

    All services start and all replica counts reach their expected targets (for example, 1/1 or 5/5) over the next several minutes.

OS/Security Patching and Node Maintenance 

Linux systems require security updates and patches on a regular basis. When EdgeView runs in HA mode, apply patches as required in your environment. Update only one member of the cluster at a time to ensure the availability of the application. After each node update, verify that all services return to a healthy state before moving to the next node.

Note

EdgeView does not support the use of VMware snapshots for testing or reverting changes; doing so results in errors.

For more information, refer to Storidge Node Maintenance.

Cordon Node

To safely evict all services before performing maintenance on a node, use the cioctl node cordon command. This command puts the node in drain state so services are rescheduled, and the node is then isolated from the other nodes in the cluster.

First, identify the name of the node you need to cordon by listing the nodes in the cluster:

cio node ls

Then cordon the node:

cioctl node cordon NODENAME

Perform Maintenance

In the cordoned state, a node is temporarily isolated from the rest of the cluster. Change block tracking is engaged to track updates that are destined for the cordoned node. This enables fast rebuilds when the node is rejoined to the cluster. When the cioctl node cordon command returns, proceed with maintenance. Perform desired CIO software upgrades, driver updates, hardware replacements, and package updates.

Reboot Node

After node maintenance is completed, you can optionally run reboot to return the node to a clean state.

Uncordon Node

Restore the node to full operation after maintenance is complete, using the following command:

cioctl node uncordon NODENAME

The uncordoned node rejoins the cluster and exits drain state.

Repeat these steps for each node in the cluster until all member nodes are upgraded.
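
Putting these steps together, a single maintenance pass on one node of the example cluster might look like the sketch below; the package update command is only a placeholder for whatever patching your environment requires:

    cio node ls                                # identify the node and confirm cluster health
    cioctl node cordon prod-ev-cluster-4       # drain and isolate the node
    yum -y update                              # apply OS/security patches (placeholder)
    reboot                                     # optional: return the node to a clean state
    cioctl node uncordon prod-ev-cluster-4     # rejoin the cluster
    cio node ls                                # verify the node status returns to normal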

Migrate from single node to Storidge Cluster (HA)

Perform the following steps to migrate data from an existing EdgeView system (EdgeView version 15.1.0 and higher) to a Storidge cluster:

  1. To build a Storidge cluster, follow the guidelines in the Install Storidge section.
  2. To install Storidge in single-node configuration on the source system, run the following commands:

    curl -fsSL http://download.storidge.com/pub/ce/cio-ce | sudo bash
    cioctl create --single-node
  3. Shut down EdgeView on the source system with docker-compose down.
  4. Create a Storidge profile on the single-node system:

    cd scc-build/; cio profile create storidge-profile
  5. Copy the root SSH public key from the single-node system to the root authorized keys on the SDS node:

    cat /root/.ssh/id_rsa.pub ; vim /root/.ssh/authorized_keys

    Display the key on the single-node system with the first command, then paste it into /root/.ssh/authorized_keys on the SDS node.

  6. Convert each local Docker volume to a remote cio volume with a single command (see the example after this procedure):

    cioctl migrate docker $LOCAL_DOCKER_VOLUME $REMOTE_CIO_VOLUME -p storidge-profile -v --ip $INTERNAL_IP_SDS_NODE_REMOTE

    The remote volume name must match the naming convention used in the cluster (for example, ev_mysql_data) for things to work seamlessly.

  7. Run the normal installation process on the cluster and verify that all expected data is present.
  8. Update the EdgeMarc devices or DNS entries to point to the new cluster address.
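
For illustration only, assuming the source system has a local volume named ev_mysql_data and the SDS node's internal IP is 10.10.10.1 (the example primary node), one migration invocation might look like the following; repeat for each EdgeView volume reported by docker volume ls:

    docker volume ls --format '{{.Name}}'    # list the local volumes to migrate
    cioctl migrate docker ev_mysql_data ev_mysql_data -p storidge-profile -v --ip 10.10.10.1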
