Modified for: Release 16.0.2
Overview
Deploying EdgeView in a clustered configuration provides the following capabilities in addition to all the features of a traditional single-node deployment:
- Automatic system recovery from failures or planned infrastructure maintenance, with minimal application disruption.
- Support for large-scale deployments in excess of 100,000 connected EdgeMarc devices through horizontal and vertical scaling.
The following technologies are used to provide these capabilities:
- Container orchestration, to schedule and scale the microservices across a cluster of hosts.
- A storage abstraction layer for stateful applications, to store, replicate, and provide failover capabilities for persistent data.
Figure: Deployment Model - Storidge
Figure: Deployment Model - NFS
Requirements
Minimum Requirements
- Five CentOS 7, RHEL 8, or Ubuntu 18 virtual machines. All virtual machines within the cluster must run the same OS.
- Dedicated virtual machines with unique hostnames and no other production applications running on them. Install and update the base OS packages to the latest versions.
- An active internet connection, required to automatically install and configure software dependencies.
- No pre-existing OS-level firewall rules (iptables).
- vCPU/Core: 8 (for example, Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50 GHz)
- RAM: 8 GB
- A load balancer or network configuration (DNS) that provides a single network endpoint for the cluster.
- NIC interface: 1
- Disk configuration (choose one of the following):
  - To use NFS, configure and operate an NFS server separate from your Docker swarm cluster.
  - To use Storidge, provide the following disk configuration on each VM:
    - Disks mounted to each virtual machine: 4
    - Host OS disk: 100 GB minimum
    - 3x disks for Docker storage: 100 GB minimum each per VM (usage grows as you add data to the system over time)
- Disk space monitoring and alerting configured at a safe threshold (for example, 80% used capacity), with the ability to expand disk space when the threshold is reached.
- Docker version 19.03.8
- Storidge version V2.0.0-3411
Info: This is the minimum recommendation for the initial installation. Monitor your network environment and add CPU, RAM, and disk space as required.
Network Port Requirements
Open the following ports for a successful HA installation:
- 80, 443 (HTTP/HTTPS): Inbound. GUI access and the initial ZTP connection from the EdgeMarc devices.
- 5671 (EMPath): Inbound. RabbitMQ communication between EdgeView and the EdgeMarc devices.
- 8022, 8443 (EM GUI access): Inbound. Communication between EdgeView and the EdgeMarc devices.
- 9142 (legacy support data syncing): Inbound. Cassandra connection for legacy support.
- 22 (SSH): Inbound. Connection of the EdgeMarc devices in environments that have both EdgeView 14 and EdgeView 16 deployed.
- 9000: Portainer Management GUI.
- 5000: Docker Private Registry Service.
- 3260: iSCSI communication between Storidge hosts.
- 8282: Storidge REST API.
- 8383: Storidge secure cluster configuration.
- 16990: Storidge Metrics Exporter.
- 16996: Storidge DFS internode communication.
- 16997: Storidge SDS CLI server.
- 16998: Storidge controller node heartbeat.
- 16999: DFS-CIO internode communication.
- Outbound: the port used by EdgeView to send email notifications to users.
Deployment
The main difference between deploying EdgeView in a multi-node configuration and a single-node configuration is that, for multi-node, you must first install and configure a Storidge cluster before proceeding with the EdgeView installation. At a high level, the required steps are:
- Install the Storidge software on each node you plan to include in the cluster.
- Initialize the Storidge cluster.
- Install EdgeView in HA mode.
To provide a concrete example of how to proceed, an example set of servers is documented below. It is recommended that you create a similar document for your environment so that you can reference it later. The following installation instructions use references from this table.
Table: Example set of Servers for Storidge

| Hostname | IP Address | Swarm Role | Storidge Role | Notes |
|---|---|---|---|---|
| prod-ev-cluster-1 | 10.10.10.1 | Manager | SDS | Primary; install files located here |
| prod-ev-cluster-2 | 10.10.10.2 | Manager | Backup 1 | — |
| prod-ev-cluster-3 | 10.10.10.3 | Manager | Backup 2 | — |
| prod-ev-cluster-4 | 10.10.10.4 | Worker | Storage | — |
| prod-ev-cluster-5 | 10.10.10.5 | Worker | Storage | — |
Table: Example set of Servers for NFS

| Hostname | IP Address | Swarm Role | Notes |
|---|---|---|---|
| prod-ev-cluster-1 | 10.10.10.1 | Manager | Primary; install files located here |
| prod-ev-cluster-2 | 10.10.10.2 | Manager | — |
| prod-ev-cluster-3 | 10.10.10.3 | Manager | — |
| prod-ev-cluster-4 | 10.10.10.4 | Worker | — |
| prod-ev-cluster-5 | 10.10.10.5 | Worker | — |
| prod-nfs-server-1 | 10.10.10.6 | N/A | NFS server |
Install or Configure Storage Backend
To deploy an EdgeView system in HA mode, you must either install and configure Storidge, or use an NFS server, to provide persistent storage to the cluster.
Install Storidge
To install Storidge, perform the following steps using an SSH client such as PuTTY:
SSH into each of the five servers and run the following command to install the Storidge software:
```
$ curl -fsSL ftp://104.131.153.182/pub/ce/cio-ce | sudo bash -s -- -f -r 3411
```
Wait for the command to finish successfully; this is expected to take several minutes.
On the first node (primary node) in the cluster, run the following command:
Info: If you have more than one network interface, add the --ip option and provide the IP address of the primary node.
```
$ cioctl create --all-managers --noportainer
```
Copy the join command and run it on each of the remaining nodes in the cluster (nodes 2 to 5).
Info: If you have more than one network interface, add the --ip option and provide the IP address of the node.
For example:
```
$ cioctl join 10.10.10.1 3b6ca81e9370a228c7fc8edbfa3c6361-3aff2bb8
```
Once you have added all your nodes to the cluster, return to the primary node and initialize the cluster with the ‘cioctl init’ command that was displayed after running the ‘cioctl create’ command.
For example:
```
$ cioctl init e2e14943
```
The cluster now forms; this may take several minutes.
To verify that all five nodes are present and that their status is ‘normal’, run the following command:
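A minimal sketch, assuming the Storidge CLI's node listing subcommand cioctl node ls (verify against cioctl help on your release):

```
$ cioctl node ls
```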
Install and/or Configure an NFS Server
To install and/or configure an NFS server, use an SSH client such as PuTTY. This section gives example commands to create an NFS server if one is not already available in the environment (tested with RHEL 8; adjust the commands for your environment as necessary).
Example NFS server setup on RHEL 8 (NFSv4):
To install the packages, run the following command:
```
$ yum -y install nfs-utils
```
Once the installation is complete, run the following command to start and enable the service:
```
$ systemctl enable --now nfs-server
```
To create the directory where the data will be written, run the following command:
```
$ mkdir /dockervolumes
```
Create the /etc/exports configuration:
```
$ cat > /etc/exports << EOF
/dockervolumes *(rw,no_root_squash,no_subtree_check)
EOF
```
To export the directory for use, run the following command:
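On RHEL 8, the standard way to publish the entries in /etc/exports is exportfs; a minimal sketch for the layout above (confirm against your environment):

```
$ exportfs -a
```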
To validate, run the following command:
```
$ showmount -e
```
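If NFS is set up correctly, the output lists the export. With the example above it looks approximately like the following (the hostname will differ in your environment):

```
Export list for prod-nfs-server-1:
/dockervolumes *
```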
Install EdgeView
To install and configure EdgeView, perform the following steps:
Download the EdgeView artifacts cluster-install-ev.sh and ev-fullpkg.tar.xz to the /opt directory of the primary node (reach out to Ribbon Support for the URLs of these artifacts; note that you need an HA license to utilize this functionality):
```
cd /opt/
wget ftp://$url_from_support
wget ftp://$url_from_support
```
Enable read-write-execute permissions for the copied files using the command given below:
```
chmod +x cluster-install-ev.sh
```
To execute the installation process, use the following command:
```
./cluster-install-ev.sh
```
The installation procedure begins. Provide input as prompted by the console output during the installation process.
The installation creates a log file of all performed actions for historical review and troubleshooting. The name of this file is $MM-DD-YYYY-ev-install.log.
You are prompted to enter information related to your networking configuration. Read each prompt carefully and provide values appropriate for your network environment.
Once the installation is complete, the console displays the message "Installation/Upgrade Completed", and a directory named scc-build is created.
All the services start, and all replica counts reach their expected targets (for example, 1/1 or 5/5) over the next several minutes. To validate, run the following command:
```
docker service ls
```
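Healthy output shows every service with its REPLICAS column at target. The following is an illustrative sketch only; the service names are placeholders, not the actual EdgeView service list:

```
ID             NAME            MODE         REPLICAS   IMAGE
x1y2z3a4b5c6   ev_example_db   replicated   1/1        example/db:16.0.2
d7e8f9g0h1i2   ev_example_ui   replicated   5/5        example/ui:16.0.2
```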
Tip: Make sure that the replicas are all meeting their targets (for example, 1/1 or 5/5). If they are not, check the Health section of the output of the following command:
```
docker service ps $service_name --no-trunc
```
- For the initial SCC configuration, register your server by accessing it through the web interface at its IP address and entering the registration information that you received by email. If you did not receive the registration information, reach out to Ribbon Support.
- Enter the admin registration details and click NEXT.
- Enter the Tenant admin registration details and click NEXT.
- Click APPLY to complete the registration.
- Click LOGIN to navigate to the login screen.
- Enter the credentials and click SIGN IN to log in to EdgeView.
Upgrades
Upgrade Storidge Cluster
Storidge upgrades are managed with the cioctl node update command:
```
cioctl node update <NODENAME | NODEID> [options]
```
This command updates Storidge software on the node to the latest version.
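For example, to update one of the worker nodes from the Storidge table above (the hostname is from this document's example environment; adjust to your own):

```
cioctl node update prod-ev-cluster-4
```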
Storidge supports cluster-aware updates so that users can easily upgrade to the latest capabilities. Cluster-aware updating upgrades nodes to the latest software releases while the cluster remains online and services continue to run.
The cioctl node update command updates the Storidge software components and dependencies on a node. When run, it checks for available software updates. If an update is available, it performs the following sequence:
- Drain the node, so services are moved to operating nodes.
- Cordon the node and set it into maintenance mode.
- Download the latest software release to /var/lib/Storidge.
- Install the latest software update and any dependencies.
- Reboot the node.
- Uncordon the node to exit maintenance mode and rejoin the cluster.
The cioctl node update command prescribes an update sequence so that worker nodes are updated first and the sds (primary) node is updated last. For further documentation, refer to the Storidge documentation.
Upgrade NFS Swarm Nodes
For systems that are utilizing NFS as the storage backend, Docker packages should be regularly updated using the OS package manager (yum or apt).
Example:
```
$ yum update docker-ce
```
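On Ubuntu hosts the equivalent update uses apt; a sketch, assuming Docker was installed from the docker-ce packages:

```
$ apt-get update
$ apt-get install --only-upgrade docker-ce docker-ce-cli
```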
Upgrade EdgeView
Perform the following installation steps to upgrade EdgeView:
Download the EdgeView artifacts cluster-install-ev.sh and ev-fullpkg.tar.xz to the /opt directory of your system.
Info: To obtain these artifacts, log in to Salesforce and open an additional browser tab for the Ribbon Global Software Center (GSC). In the Downloads section, search for the name and version of the required software. Reach out to Ribbon Support (support@rbbn.com) for more information.
```
cd /opt/
wget ftp://$url_from_support
wget ftp://$url_from_support
```
Enable read-write-execute permissions for the copied files using the commands given below:
```
chmod +x cluster-install-ev.sh
```
Execute the following command to start the installation process:
```
./cluster-install-ev.sh
```
The installation creates a log file of all actions for historical review/troubleshooting, using the file name format $MM-DD-YYYY-ev-install.log. When prompted, validate or update the information related to your networking configuration. Once the installation completes, the console displays the message "Installation/Upgrade Completed".
To check whether the containers started successfully, use the following command:
```
docker service ls
```
All the services start, and all replica counts reach their expected targets (for example, 1/1 or 5/5) over the next several minutes.
OS/Security Patching and Node Maintenance
Linux systems require security updates and patches on a regular basis. When EdgeView runs in HA mode, apply patches as required in the environment. Update only one cluster member at a time to ensure the availability of the application. After each node update, verify that all services return to a healthy state before moving to the next node.
Info: EdgeView does not support the use of VMware snapshots for testing and/or reverting changes; using them results in errors.
For more information, refer to Storidge Node Maintenance.
Cordon Node
To safely evict all services before performing maintenance on a node, use the cioctl node cordon command. This command puts the node into a drain state so that services are rescheduled, and the node is then isolated from the other nodes in the cluster.
First, identify the name of the node you need to cordon by listing the nodes in the cluster.
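A minimal sketch, again assuming the cioctl node ls subcommand (verify against your cioctl release):

```
cioctl node ls
```

Then cordon the node by name: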
```
cioctl node cordon NODENAME
```
Perform Maintenance
In the cordoned state, a node is temporarily isolated from the rest of the cluster. Change block tracking is engaged to track updates that are destined for the cordoned node. This enables fast rebuilds when the node is rejoined to the cluster. When the cioctl node cordon command returns, proceed with maintenance. Perform desired CIO software upgrades, driver updates, hardware replacements, and package updates.
Reboot Node
After the node maintenance is completed, run reboot to bring the node back up in a clean state. This step is optional.
Uncordon Node
Restore the node to full operation after maintenance is complete, using the following command:
```
cioctl node uncordon NODENAME
```
The uncordoned node rejoins the cluster and exits the drain state.
Repeat these steps for each node in the cluster until all member nodes are upgraded.
Migrate from single node to Storidge Cluster (HA)
Steps to migrate data from an existing system (for versions 15.1.0 and higher) to a Storidge cluster:
- To build a Storidge cluster, follow the guidelines in the Install Storidge section.
- To install Storidge in a single-node configuration on the source system, run the following commands:
```
curl -fsSL http://download.storidge.com/pub/ce/cio-ce | sudo bash
cioctl create --single-node
```
- Shut down EdgeView on the source system with docker-compose down.
- Create a Storidge profile on the single-node system:
```
cd scc-build/; cio profile create storidge-profile
```
- Copy the root SSH key from the single node into the root authorized keys on the SDS node, then paste the key from the other server:
```
cat /root/.ssh/id_rsa.pub ; vim /root/.ssh/authorized_keys
```
- Convert the local Docker volumes to remote CIO volumes in a single command:
```
cioctl migrate docker $LOCAL_DOCKER_VOLUME $REMOTE_CIO_VOLUME -p storidge-profile -v --ip $INTERNAL_IP_SDS_NODE_REMOTE
```
The remote volume name should match the naming convention in the cluster (for example, ev_mysql_data) for things to work seamlessly.
- Run the normal install process on the cluster and check whether all expected data is present.
- Update the EdgeMarc devices or DNS entries to point to the new cluster address.
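As a concrete sketch of the volume conversion, using the example volume name from above and the SDS node from the Storidge table (both illustrative values; substitute your own volume names and internal IP):

```
# Migrate the local Docker volume ev_mysql_data to a CIO volume of the
# same name on the remote SDS node (10.10.10.1 in the example table)
cioctl migrate docker ev_mysql_data ev_mysql_data -p storidge-profile -v --ip 10.10.10.1
```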