Modified: for 16.0.2
In this section:
Deploying the
The following technologies are used to provide these functionalities:
Minimum Requirements
This is the minimum recommendation for the initial installation. Monitor your environment and add CPU, RAM, and disk space as required.
Network Port Requirements
Open the following ports for a successful HA
9142 for legacy support data syncing
Inbound (Cassandra connection for legacy support)
22 for SSH
Inbound (Connection of the
9000 Portainer Management GUI
5000 Docker Private Registry Service
3260 iSCSI communication between Storidge Hosts
8282 Storidge REST API
8383 Storidge Secure cluster configuration
16990 Storidge Metrics Exporter
16996 Storidge DFS internode communication
16997 Storidge SDS CLI server
16998 Storidge Controller nodes heartbeat
16999 DFS-CIO internode communication
Outbound (For
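If firewalld is in use on the hosts, a port can be opened with commands such as the following. This is only an illustration using one of the ports listed above (8282/tcp); adjust the ports, protocols, and zones for your environment and firewall tooling.
$ firewall-cmd --permanent --add-port=8282/tcp
$ firewall-cmd --reload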
The main difference between deploying the
To provide a concrete example of how to proceed, an example set of servers is documented below. It is recommended that you create a similar document for your environment so that you can reference it later. The following installation instructions use references from this table.
In order to successfully deploy an
To install Storidge, perform the following steps using an SSH client such as PuTTY:
SSH into each of the five servers and run the following command to install the Storidge software:
$ curl -fsSL ftp://104.131.153.182/pub/ce/cio-ce | sudo bash -s -- -f -r 3411
Wait for the command to complete successfully; this is expected to take several minutes.
On the first node (primary node) in the cluster, run the following command:
If you have more than one network interface, add the --ip option and provide the IP address of the primary node.
$ cioctl create --all-managers --noportainer
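For example, with the --ip option described above and a hypothetical primary-node address of 10.10.10.1, the command would be:
$ cioctl create --all-managers --noportainer --ip 10.10.10.1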
Copy the join command and run that on the rest of the nodes in the cluster (nodes 2 to 5).
If you have more than one network interface, add the --ip option and provide the IP address of the node.
For example:
$ cioctl join 10.10.10.1 3b6ca81e9370a228c7fc8 edbfa3c6361-3aff2bb8
Once you have added all your nodes to the cluster, return to the primary node and initialize the cluster with the ‘cioctl init’ command that was displayed after running the ‘cioctl create’ command.
For example:
$ cioctl init e2e14943
You can now watch the cluster form. This may take several minutes.
To verify that all five nodes are present and that their status is ‘normal’, run the following command:
$ cio node ls
To install and/or configure an NFS server, use an SSH client such as PuTTY. This section provides example commands to create an NFS server if one is not already available in the environment (tested with RHEL 8; adjust commands for your environment as necessary).
Example NFS server setup on RHEL 8 (NFSv4):
To install the packages, run the following command:
$ yum -y install nfs-utils
Once the installation is complete, run the following command to start and enable the service:
$ systemctl enable --now nfs-server
To create the directory where the data will be written, run the following command:
$ mkdir /dockervolumes
Create the /etc/exports config information.
$ cat > /etc/exports << EOF
/dockervolumes *(rw,no_root_squash,no_subtree_check)
EOF
To export the directory for use, run the following command:
$ exportfs -ar
To validate, run the following command:
$ showmount -e
If NFS is set up correctly, the output lists the exported directory.
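For example, with the export created above, the output would look similar to the following (nfs-server.example.com is a placeholder for your NFS server's hostname):
Export list for nfs-server.example.com:
/dockervolumes *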
To install and configure the
Download the
For these artifacts, log in to Salesforce and open an additional browser for the Ribbon Global Software Center (GSC). In the Downloads section, search for the name and version of the required software.
Reach out to Ribbon Support for more information.
Enable execute permissions for the copied installation script using the command given below:
chmod +x cluster-install-ev.sh
To execute the installation process, use the following command:
./cluster-install-ev.sh
The installation procedure will begin. Please provide input as prompted by the console output during the installation process.
The installation creates a log file of all the performed actions for historical review/troubleshooting. The name of this file is $MM-DD-YYYY-ev-install.log.
You are prompted to enter information related to your networking configuration. Please read this section carefully and provide appropriate values for your network environment.
Once the installation is complete, the console screen displays the following message:
“
A directory named scc-build is created.
All the services start and all replica counts reach their expected values (for example, 1/1 or 5/5) over the next several minutes. To validate, run the following command:
docker service ls
Make sure that the replicas all meet their targets, for example 1/1 or 5/5. If they do not, run the following command and check the health information in its output:
docker service ps $service_name --no-trunc
Storidge upgrades are managed with the cioctl node update command:
cioctl node update <NODENAME | NODEID> [options]
This command updates Storidge software on the node to the latest version.
Storidge supports cluster aware updates so that users can easily upgrade to the latest capabilities. Cluster aware updating upgrades nodes to the latest software releases while the cluster is online and services continue to run.
The cioctl node update command updates the Storidge software components and dependencies on a node. When the command is run, it checks for available software updates and, if an update is available, performs the update.
The command prescribes an update sequence in which worker nodes are updated first and the SDS (primary) node is updated last. For further documentation, refer to the Storidge Documentation.
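For example, to update a worker node named node2 (a hypothetical node name taken from the cio node ls output):
cioctl node update node2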
For systems that are utilizing NFS as the storage backend, Docker packages should be regularly updated using the OS package manager (yum or apt).
Example:
$ yum update docker-ce
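On apt-based systems, an equivalent update would be, for example:
$ apt-get install --only-upgrade docker-ce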
Perform the following installation steps to upgrade
Download the
For these artifacts, log in to Salesforce and open an additional browser for the Ribbon Global Software Center (GSC). In the Downloads section, search for the name and version of the required software.
Reach out to Ribbon Support for more information.
Enable execute permissions for the copied installation script using the command given below:
chmod +x cluster-install-ev.sh
Execute the following command to start the installation process:
./cluster-install-ev.sh
The installation creates a log file of all actions for historical review/troubleshooting, using the file name format $MM-DD-YYYY-ev-install.log. When prompted, enter or validate information related to your networking configuration. Once the installation completes, the console screen displays the message “
To check whether the containers started successfully, use the following command:
docker service ls
All the services start and all replica counts reach their expected values (for example, 1/1 or 5/5) over the next several minutes.
Example:
Linux systems require security updates and patches on a regular basis. When
For more information, refer to Storidge Node Maintenance.
To safely evict all services before performing maintenance on a node, execute the following command:
cioctl node cordon
This command puts the node in drain state so services are rescheduled. The node is then isolated from other nodes in the cluster.
Identify the name of the node you need to cordon. List the nodes in the cluster using the following command:
cio node ls
To have CIO cordon the node, execute the following command:
cioctl node cordon NODENAME
In the cordoned state, a node is temporarily isolated from the rest of the cluster. Change block tracking is engaged to track updates that are destined for the cordoned node. This enables fast rebuilds when the node is rejoined to the cluster. When the cioctl node cordon command returns, proceed with maintenance. Perform desired CIO software upgrades, driver updates, hardware replacements, and package updates.
After node maintenance is completed, you can optionally reboot the node to return it to a clean state.
Restore the node to full operation after maintenance is complete, using the following command:
cioctl node uncordon NODENAME
The uncordoned node rejoins the cluster and exits drain state.
Repeat these steps for each node in the cluster until all member nodes are upgraded.
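As an illustration, for a hypothetical node named node2, one maintenance cycle would look like this:
cioctl node cordon node2
# perform maintenance, for example OS package or driver updates
reboot    # optional
cioctl node uncordon node2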
Steps to migrate data from existing
To install Storidge in single-node configuration on the source system, run the following commands:
curl -fsSL http://download.storidge.com/pub/ce/cio-ce | sudo bash
cioctl create --single-node
Create a Storidge profile on the single-node system:
cd scc-build/; cio profile create storidge-profile
Copy the root SSH public key from the single node into the root authorized_keys file on the SDS node. On the single node, display the key:
cat /root/.ssh/id_rsa.pub
On the SDS node, open the authorized_keys file and paste the key from the other server:
vim /root/.ssh/authorized_keys
Convert the local Docker volumes to remote CIO volumes with a single command:
cioctl migrate docker $LOCAL_DOCKER_VOLUME $REMOTE_CIO_VOLUME -p storidge-profile -v --ip $INTERNAL_IP_SDS_NODE_REMOTE
The remote volume name should match the naming convention used in the cluster (for example, ev_mysql_data) so that services work seamlessly.
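As an illustration, assuming a local Docker volume named ev_mysql_data and a hypothetical SDS-node internal IP of 10.10.10.1, the command would be:
cioctl migrate docker ev_mysql_data ev_mysql_data -p storidge-profile -v --ip 10.10.10.1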