This section describes the extra steps (to perform in addition to the steps described in Instantiate Standalone SBC on Azure) necessary for creating an HFE/SBC on Azure. All commands used in this section are part of the Azure CLI.
HFE Node Network Setup
HFE nodes allow sub-second switchover between the SBCs of an HA pair in the Microsoft Azure environment, as they negate the need for any IP reassignment.
Info: For each SBC HA pair, use a unique subnet for pkt0 and pkt1.
Info: The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. However, this is not an issue because the HFE script ensures the entire configuration is set up correctly based on the Azure NICs, not the local interface names.
Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all management interfaces are from the same subnet and the HFE interfaces to the associated SBC PKT ports share a subnet, a minimum of six subnets is necessary for a full HFE setup.
Configure the HFE nodes in one of two ways:
- Use custom data on cloud-init enabled distributions.
- Use the HFE_AZ_manual_setup.sh script. For more information, see HFE Node Initial Configuration.
HFE 2.1
In HFE 2.1, there are two HFE nodes - one to handle untrusted public traffic to the SBC (for PKT0), and the other to handle trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".
Both HFE nodes require three interfaces, as described below:
HFE 2.1 - Interface Requirement
Standard/Ubuntu Interface Name | NIC | PKT0 HFE Node Function | PKT1 HFE Node Function | Requires External IP? |
---|---|---|---|---|
eth0 | nic0 | Public interface for SBC PKT0 | Private interface for SBC PKT1 (can only be connected to/from instances in the same subnet) | Yes (only on the PKT0 HFE node) |
eth1 | nic1 | Management interface to HFE | Management interface to HFE | Optional |
eth2 | nic2 | Interface to SBC PKT0 | Interface to SBC PKT1 | No |
Note: In an HFE 2.1 environment, the startup script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, see the table in SBC Userdata.
Steps to Create SBC HA with HFE Setup
To create the SBC HA with HFE, perform the following steps:
- Install and log in to the Azure CLI.
- Create the Resource Group, Virtual Network, Security Groups, and Subnets for the SBC.
- Create the HFE Subnets.
- Configure the Storage Account for HFE.
- Create the User Assigned Managed Identity.
- Configure and create the HFE node(s). If using a non cloud-init enabled image, run the manual setup script; see HFE Node Initial Configuration.
- Perform the additional steps for SBC setup for HFE 2.1, then create two SBCs following the instructions in Create SBC and SBC Userdata.
Resources for HFE Setup
Configure HFE Nodes
To create the HFE setup, use the HFE Azure Shell Script and the HFE Azure Manual Setup Shell Script, included in cloudTemplates.tar.gz and named HFE_AZ.sh and HFE_AZ_manual_setup.sh. You can retrieve the files from the Ribbon Support portal. Upload HFE_AZ.sh to a storage account so that the HFE nodes can download it; see Configure the Storage Account for HFE.
Create HFE Subnets
Two further subnets must be created for the HFE. These subnets are used for eth0 of the PKT0 HFE node and the PKT1 HFE node. To create the subnets, use the following command.
Info: --service-endpoints is required to allow the HFE to download the HFE script from storage.
Syntax
Code Block
---|
az network vnet subnet create --name <NAME> --address-prefixes <CIDR> --resource-group <RESOURCE-GROUP-NAME> --vnet-name <VNET_NAME> --network-security-group <SECURITY GROUP NAME> --service-endpoints Microsoft.Storage |
Examples
Code Block
---|
az network vnet subnet create --name pkt0-hfe --address-prefixes 10.2.4.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group pkt0RbbnSbcSG --service-endpoints Microsoft.Storage
az network vnet subnet create --name pkt1-hfe --address-prefixes 10.2.5.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group Pkt1RbbnSbcSG --service-endpoints Microsoft.Storage |
HFE Node Initial Configuration
You can perform the initial configuration of the HFE node(s) in two ways:
- Using custom-data and cloud-init.
- Using the script HFE_AZ_manual_setup.sh.
The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.
HFE Variables
The HFE has variables that must be updated. When using cloud-init, update the HFE variables in the custom data.
For manual setup, update the script HFE_AZ_manual_setup.sh
(the portion of the script below the comment: UPDATE VARIABLES IN THIS SECTION
).
The following table contains the values that you must update:
Value to be updated | Description |
---|---|
<HFE_SCRIPT_LOCATION> | The URL for HFE_AZ.sh stored in a container within a storage account. You can retrieve the URL by executing: az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME> |
<ACTIVE_SBC_NAME> | The instance name for the Active SBC. |
<STANDBY_SBC_NAME> | The instance name for the Standby SBC. |
<REMOTE_SSH_MACHINE_IP> | The SSH IP(s) to allow access through the mgmt port. Note: For multiple IPs, use a comma-separated list. |
<SBC_PKT_PORT_NAME> | Tells the HFE which PKT port it is communicating with. Can only be set to PKT0 or PKT1. Note: This applies only to HFE 2.1. |
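For example, using the storage account and container names from the examples in Configure the Storage Account for HFE (assumed names; substitute your own), the blob URL can be retrieved as follows:
Code Block
---|
az storage blob url --account-name rbbnhfestorage --container-name hfescripts --name HFE_AZ.sh --output tsv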
Updating HFE Variables
Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:
Enter the updated variable to /opt/HFE/natVars.user. For example:
Code Block |
---|
echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user |
Reboot the HFE:
Code Block |
---|
sudo reboot |
Info: Any variable added to /opt/HFE/natVars.user overwrites the value set as the variable in the custom data.
Supported Images
The following images are generally supported for use as the HFE:
Cloud-init configuration
- Ubuntu 18.04
Manual configuration
- CentOS 7
- CentOS 8
- RHEL 7
- RHEL 8
- Debian 10
Custom Data Example
An example of the custom data for an HFE node is given in the HFE Variables section later on this page.
Manual Configuration
The script HFE_AZ_manual_setup.sh
has two functions:
- It creates the systemd service "ribbon-hfe" and enables the service.
- systemd runs it to download the HFE script and write the variables out to /opt/HFE/natVars.input, similar to the role custom-data plays with cloud-init. Because the script runs as a systemd service, it automatically runs again if the instance reboots.
The steps required to initially configure the HFE node using the script HFE_AZ_manual_setup.sh
are as follows:
- Copy HFE_AZ_manual_setup.sh onto the instance, in a file path that has executable permissions for root.
- Run the script with elevated permissions and the '-s' flag. For example:
Code Block |
---|
sudo /usr/sbin/HFE_AZ_manual_setup.sh -s |
Tip | ||
---|---|---|
| ||
When you use the ' |
Start the service by executing the following command:
Code Block |
---|
sudo systemctl start ribbon-hfe |
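Optionally, verify that the ribbon-hfe service is active after starting it; a quick check using standard systemd tooling (log details are described in HFE Node Logging):
Code Block
---|
sudo systemctl status ribbon-hfe
sudo journalctl -u ribbon-hfe --no-pager | tail -n 20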
Create HFE Nodes
To create HFE node(s), perform the steps described below.
Create Public IPs
Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. For more information, refer to Create Public IPs (Standalone).
Create NICs
To create NICs, use the following command syntax:
Code Block |
---|
az network nic create --name <NIC NAME>
--resource-group <RESOURCE-GROUP-NAME>
--vnet-name <VIRTUAL NETWORK NAME>
--subnet <SUBNET NAME>
--network-security-group <SECURITY GROUP NAME> |
HFE 2.1
For HFE 2.1, create a total of six NICs (three for each interface).
The following table contains the extra flags necessary for each interface:
HFE 2.1 - Extra flags for each interface
HFE | Interface | Flags |
---|---|---|
PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true |
PKT0 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true |
PKT0 HFE | eth2 | --ip-forwarding --accelerated-networking true |
PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true |
PKT1 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true |
PKT1 HFE | eth2 | --ip-forwarding --accelerated-networking true |
Create the VM
To create the VM(s), use the following command syntax:
Code Block |
---|
az vm create --name <INSTANCE NAME>
--resource-group <RESOURCE_GROUP_NAME>
--admin-username <UserName>
--custom-data <USERDATA FILE>
--image <IMAGE NAME>
--location <LOCATION>
--size <INSTANCE SIZE>
--ssh-key-values <PUBLIC SSH KEY FILENAME>
--nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC>
--boot-diagnostics-storage <STORAGE ACCOUNT NAME>
--assign-identity <USER ASSIGNED MANAGED IDENTITY ID> |
The following table describes each flag:
VM Creation - Flag Description
Flag | Example | Description |
---|---|---|
name | rbbnSbc | Name of the instance; must be unique in the resource group. |
resource-group | RBBN-SBC-RG | Name of the Resource Group. |
admin-username | rbbn | The default user to log on. |
custom-data | hfeUserData.txt | A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example. |
image | Canonical:UbuntuServer:18.04-LTS:latest | The name of an image. For more information, see Supported Images. |
location | East US | The location to host the VM in. For more information, refer to Microsoft Azure Documentation. |
size | Standard_DS3_v2 | Indicates instance size. In AWS this is known as 'Instance Type', and OpenStack calls this 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation. Note: Maintain the same instance type for the HFE and the SBC. For HFE 2.1, each HFE node requires a minimum of three NICs. |
ssh-key-values | azureSshKey.pub | A file that contains the public SSH key for accessing the linuxadmin user. You can retrieve the file by executing: ssh-keygen -y -f azureSshKey.pem > azureSshKey.pub. Note: The Public Key must be in openSSH form: ssh-rsa XXX |
nics | hfe-pub hfe-mgmt-pkt0 hfe-pkt0 | The names of the NICs created in previous steps. |
boot-diagnostics-storage | sbcdiagstore | The storage account created in previous steps. This allows the use of the serial console. |
assign-identity | /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami | The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing: az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME> |
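For illustration, a filled-in command assembled from the example values in the table above; the instance name rbbnHfe1 and the location value eastus are placeholders, so adjust all names to your deployment:
Code Block
---|
az vm create --name rbbnHfe1 \
  --resource-group RBBN-SBC-RG \
  --admin-username rbbn \
  --custom-data hfeUserData.txt \
  --image Canonical:UbuntuServer:18.04-LTS:latest \
  --location eastus \
  --size Standard_DS3_v2 \
  --ssh-key-values azureSshKey.pub \
  --nics hfe-pub hfe-mgmt-pkt0 hfe-pkt0 \
  --boot-diagnostics-storage sbcdiagstore \
  --assign-identity /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami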
HFE Routing
The HFE setup requires routes in Azure to force all the traffic leaving PKT0 and PKT1 to route back through the HFE.
Note: Consider the following when creating routes in Azure:
- Custom routes are not given complete priority over the standard Azure routing. If there is a more specific Azure route, Azure directs traffic based on that default rule. (Refer to https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview for information on default rules.)
- If multiple SBC setups use the same endpoint (for example, in an SLB/SBC setup), separate the SBCs into separate subnets and route tables to ensure they route to the correct HFE. (Refer to Configure SBC SWe on Azure for SLB for more information.)
To create the routes, perform the following steps:
Create the route-table:
Code Block | ||
---|---|---|
| ||
az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> |
Code Block | ||
---|---|---|
| ||
az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG |
Create two rules for PKT0 and PKT1:
Code Block | ||
---|---|---|
| ||
az network route-table route create --name <NAME>
--resource-group <RESOURCE_GROUP_NAME>
--address-prefix <CIDR OF ENDPOINT>
--next-hop-type VirtualAppliance
--route-table-name <ROUTE TABLE NAME>
--next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE> |
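For example, matching the route example given later in HFE Routing (endpoint 77.77.173.255/32 sent back through the HFE interface IP 10.2.6.5; both values are illustrative):
Code Block
---|
az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5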
Configure the Storage Account for HFE
The script HFE_AZ.sh
is stored in a container within a storage account. This allows the HFE nodes to download and run the script during the VM startup. It is recommended to use "storageV2" as the type for the storage account.
To configure the storage account, perform the following steps:
Create a storage account by executing the following command:
Syntax
az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2
Example
az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2
Create a container by executing the following command:
Syntax
az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key
Example
az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key
Upload the script HFE_AZ.sh to the container by executing the following command:
Syntax
az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>
Example
az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage
Make the storage account accessible to the instances by allowing access from the subnets used for ETH0 and ETH1 (to cover the case where the management interface is used) on the HFE node (ensure that the subnets exist):
Syntax
az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --resource-group <RESOURCE_GROUP_NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>
Example
az storage account network-rule add --account-name rbbnhfestorage --resource-group RBBN-SBC-RG --subnet hfepublic --vnet-name RibbonNet
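Optionally, confirm that the network rules were applied to the storage account; a quick check using the example account above:
Code Block
---|
az storage account network-rule list --account-name rbbnhfestorage --resource-group RBBN-SBC-RG --query virtualNetworkRules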
HFE Node Initial Configuration
Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Mgmt interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.
You can perform the initial configuration of the HFE nodes using custom-data and cloud-init.
The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.
HFE Variables
To create the custom data for the HFE node, update the following example script using the table below. Save this to a file to use during the HFE VM creation.
Code Block | ||||
---|---|---|---|---|
| ||||
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
HFE_DIR="/opt/HFE"
HFE_LOG_DIR="$HFE_DIR/log"
HFE_FILE="$HFE_DIR/HFE_AZ.sh"
LOG_FILE="$HFE_LOG_DIR/cloud-init-nat.log"
NAT_VAR="$HFE_DIR/natVars.input"
TEMP_MGMT_ROUTE="$HFE_DIR/.tempRoute"
AZ_BLOB_URL="<HFE_SCRIPT_LOCATION>" # URL of uploaded HFE script
timestamp()
{
date +"%Y-%m-%d %T"
}
if [ ! -d $HFE_LOG_DIR ]; then
mkdir -p $HFE_LOG_DIR;
fi;
/bin/echo $(timestamp) " ========================= cloud-init configuration for HFE ==========================================" >> $LOG_FILE
#Fix any interfaces
defaultRoute=$(ip route | grep default) # There will only be 1 default route with a metric 100
ip a | grep -E eth.: | grep DOWN | awk -F' ' '{print $2}' | sed 's/://' | while read intf; do
/bin/echo $(timestamp) "Bringing up $intf" >> $LOG_FILE
dhclient $intf
ip route | grep -E "default.*$intf" | while read r; do
if [[ "$r" != "$defaultRoute" ]];then
if [ $(echo $r | grep -c metric) -eq 0 ]; then
/bin/echo $(timestamp) "Deleting new route $r" >> $LOG_FILE
ip route delete $r
fi
fi
done
done
#Test for internet access
curl --connect-timeout 10 https://management.azure.com
if [ $? -ne 0 ]; then
for i in $(seq 1 $(ip a | grep -E eth.: | grep -c -v eth0)); do
MGT_INTF_NAME=eth$i
cidrIp=$(ip route | grep "$MGT_INTF_NAME proto kernel scope link" | awk -F " " '{print $1}' | awk -F "/" '{print $1}')
finalOct=$(echo $cidrIp | awk -F "." '{print $4}')
gwOct=$(( finalOct + 1 ))
mgtGwIp=$(echo $cidrIp | awk -v var="$gwOct" -F. '{$NF=var}1' OFS=.)
echo -e "tempMgtGw=$mgtGwIp\ntempMgtIntf=$MGT_INTF_NAME" > $TEMP_MGMT_ROUTE
/bin/echo $(timestamp) "Adding temporary default route for $MGT_INTF_NAME" >> $LOG_FILE
ip route add 0.0.0.0/0 via $mgtGwIp dev $MGT_INTF_NAME metric 10
curl --connect-timeout 10 https://management.azure.com
if [ $? -eq 0 ]; then
break
else
rm $TEMP_MGMT_ROUTE
/bin/echo $(timestamp) "Removing temporary default route for $MGT_INTF_NAME" >> $LOG_FILE
ip route delete 0.0.0.0/0 via $mgtGwIp dev $MGT_INTF_NAME metric 10
fi
done < <(ip a | grep -E eth.:)
fi
curl --connect-timeout 10 "$AZ_BLOB_URL" -H 'x-ms-version : 2019-02-02' -o $HFE_FILE
if [ $? -ne 0 ]; then
/bin/echo $(timestamp) "Error:Could not copy HFE script from Azure Blob Container." >> $LOG_FILE
else
/bin/echo $(timestamp) "Copied HFE script from Azure Blob Container." >> $LOG_FILE
fi;
/bin/echo > $NAT_VAR
/bin/echo "ACTIVE_SBC_VM_NAME=\"<ACTIVE_SBC_NAME>\"" >> $NAT_VAR
/bin/echo "STANDBY_SBC_VM_NAME=\"<STANDBY_SBC_NAME>\"" >> $NAT_VAR
/bin/echo "REMOTE_SSH_MACHINE_IP=\"<REMOTE_SSH_MACHINE_IP>\"" >> $NAT_VAR
/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "CUSTOM_ROUTES=\"<CUSTOM_STATIC_ROUTES_CONFIG>\"" >> $NAT_VAR
/bin/echo "ENABLE_PKT_DNS_QUERY=<0/1>" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE
sudo chmod 744 $HFE_FILE
/bin/echo $(timestamp) "Configured using HFE script - $HFE_FILE" >> $LOG_FILE
/bin/echo $(timestamp) " ========================= Done ==========================================" >> $LOG_FILE
nohup $HFE_FILE setup > /dev/null 2>&1 & |
The following table contains the values that you must update:
Value to be updated | Description | Example |
---|---|---|
<HFE_SCRIPT_LOCATION> | The URL for HFE_AZ.sh stored in a container within a storage account. You can retrieve the URL by executing the following command: az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME> | https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh |
<ACTIVE_SBC_NAME> | The instance name for the Active SBC | rbbnSbc-1 |
<STANDBY_SBC_NAME> | The instance name for the Standby SBC | rbbnSbc-2 |
<REMOTE_SSH_MACHINE_IP> | The SSH IP(s) to allow access through the mgmt port. Note: For multiple IPs, use a comma-separated list. | 43.26.27.29,35.13.71.112 |
<SBC_PKT_PORT_NAME> | Tells the HFE which PKT port it is communicating with. Can only be set to PKT0 or PKT1. Note: This applies only to HFE 2.1. | PKT0 |
<CUSTOM_ROUTES> | A comma-separated list of custom static routes in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. The HFE script adds these routes as part of its start-up process and verifies that they remain on the HFE throughout its uptime. | 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3 |
<ENABLE_PKT_DNS_QUERY> | Enables/disables support for the HFE to forward DNS queries on the SBC PKT port correctly. | 0 |
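For reference, with the example values from the table above, the variables that the custom data script writes to /opt/HFE/natVars.input would look similar to the following sketch (values are illustrative):
Code Block
---|
ACTIVE_SBC_VM_NAME="rbbnSbc-1"
STANDBY_SBC_VM_NAME="rbbnSbc-2"
REMOTE_SSH_MACHINE_IP="43.26.27.29,35.13.71.112"
SBC_PKT_PORT_NAME="PKT0"
CUSTOM_ROUTES="1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3"
ENABLE_PKT_DNS_QUERY=0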
Supported Images
Ubuntu LTS images are the supported images for use with HFE setups.
Create HFE Nodes
To create HFE nodes, perform the steps described below.
Create Public IPs
Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes.
Create the Public IPs by running the following command.
Syntax
Code Block |
---|
az network public-ip create --name <PUBLIC IP NAME> --resource-group <RESOURCE-GROUP-NAME> --allocation-method Static |
Examples
Code Block |
---|
az network public-ip create --name pkt0-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static
az network public-ip create --name hfe-pkt0-ip --resource-group RBBN-SBC-RG --allocation-method Static
az network public-ip create --name pkt1-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static |
Create NICs
To create NICs, use the following command.
Syntax
Code Block |
---|
az network nic create --name <NIC NAME>
--resource-group <RESOURCE-GROUP-NAME>
--vnet-name <VIRTUAL NETWORK NAME>
--subnet <SUBNET NAME>
--network-security-group <SECURITY GROUP NAME> |
Example
Repeat the following command for each NIC.
Code Block
---|
az network nic create --name hfe-pkt0-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt0RbbnSbcSG --public-ip-address hfe-pkt0-ip
az network nic create --name hfe-pkt0-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt0-mgmt-ip
az network nic create --name hfe-pkt0-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt0RbbnSbcSG
az network nic create --name hfe-pkt1-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt1RbbnSbcSG
az network nic create --name hfe-pkt1-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt1-mgmt-ip
az network nic create --name hfe-pkt1-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt1RbbnSbcSG |
HFE 2.1
For HFE 2.1, create a total of six NICs (three for each interface).
The following table contains the extra flags necessary for each interface:
HFE 2.1 - Extra flags for each interface
HFE | Interface | Flags |
---|---|---|
PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true |
eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true | |
eth2 | --ip-forwarding --accelerated-networking true | |
PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true |
eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true | |
eth2 | --ip-forwarding --accelerated-networking true |
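For example, combining the base command from Create NICs with the flags above, eth0 of the PKT0 HFE node could be created as follows (the subnet name hfepublic reuses the example from Configure the Storage Account for HFE; all names are illustrative):
Code Block
---|
az network nic create --name hfe-pkt0-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet hfepublic --network-security-group pkt0RbbnSbcSG --public-ip-address hfe-pkt0-ip --ip-forwarding --accelerated-networking true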
Create the VMs for HFE Instances
Create a VM for each HFE instance. Use the following command syntax:
Code Block
---|
az vm create --name <INSTANCE NAME>
--resource-group <RESOURCE_GROUP_NAME>
--admin-username <UserName>
--custom-data <USERDATA FILE>
--image <IMAGE NAME>
--location <LOCATION>
--size <INSTANCE SIZE>
--ssh-key-values <PUBLIC SSH KEY FILENAME>
--nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC>
--boot-diagnostics-storage <STORAGE ACCOUNT NAME>
--assign-identity <USER ASSIGNED MANAGED IDENTITY ID> |
Additional Steps for SBC HFE Setup
To create the SBC HA with HFE setup, first perform all of the steps described in Create SBC (Standalone).
In addition to those steps, perform the steps described below.
Configure NICs
The SBC requires four NICs, each one attached to an individual subnet for MGMT, HA, PKT0, and PKT1.
To create a standard NIC, use the following syntax:
Code Block |
---|
az network nic create --name <NIC NAME> --resource-group <RESOURCE GROUP NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME> --accelerated-networking <true/false> |
See below for the additional steps required when the SBCs are in an HFE setup.
Secondary IPs
The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.
Create and attach Secondary IPs to a network interface by executing the following command:
Code Block | ||
---|---|---|
| ||
az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME> |
Code Block | ||
---|---|---|
| ||
az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG |
Create NIC for PKT0 and PKT1
When creating the NICs for both SBC's PKT0 and PKT1 ports, include the flag --ip-forwarding
for receiving the traffic sent to the HFE node. For example:
Code Block |
---|
az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group RbbnSbcSG --ip-forwarding |
Info | ||
---|---|---|
| ||
Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs. |
The following table describes each flag:
VM Creation - Flag Description
Flag | Accepted Values | Example | Description |
---|---|---|---|
name | rbbnSbc | Name of the instance; must be unique in the resource group. | |
resource-group | RBBN-SBC-RG | Name of the Resource Group. | |
admin-user-name | rbbn | The default user to log on. | |
custom-data | File name | hfeUserData.txt | A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example. |
image | Canonical:UbuntuServer:18.04-LTS:latest | The name of an image. For more information, see Supported Images. | |
location | East US | The location to host the VM in. For more information, refer to Microsoft Azure Documentation. | |
size | | Standard_D8s_v3 | Indicates instance size. In AWS this is known as 'Instance Type', and OpenStack calls this 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation. Note: Maintain the same instance type for the HFE and the SBC. For HFE 2.1, each HFE node requires a minimum of three NICs. |
ssh-key-values | File Name | azureSshKey.pub | A file that contains the public SSH key for accessing the linuxadmin user. You can retrieve the file by executing the following command: ssh-keygen -y -f azureSshKey.pem > azureSshKey.pub. Note: The Public Key must be in openSSH form: ssh-rsa XXX |
nics | Space-separated list | hfe-pub hfe-mgmt-pkt0 hfe-pkt0 | The names of the NICs created in previous steps. |
boot-diagnostics-storage | Storage Account Name. | sbcdiagstore | The storage account created in previous steps. This allows the use of the serial console. |
assign-identity | User Assigned Managed Identity ID | /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami | The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing the following command: az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME> |
HFE Routing
The HFE setup requires routes in Azure to force all the traffic leaving PKT0 and PKT1 to route back through the HFE.
Info: Consider the following when creating routes in Azure:
- Custom routes are not given complete priority over the standard Azure routing. If there is a more specific Azure route, Azure directs traffic based on that default rule. (Refer to https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview for information on default rules.)
- If multiple SBC setups use the same endpoint (for example, in an SLB/SBC setup), separate the SBCs into separate subnets and route tables to ensure they route to the correct HFE. (Refer to Configure SBC SWe on Azure for SLB for more information.)
To create the routes, perform the following steps:
Create the route-table:
Syntax
az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG
Create two rules for PKT0 and PKT1:
Syntax
az network route-table route create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --address-prefix <CIDR OF ENDPOINT> --next-hop-type VirtualAppliance --route-table-name <ROUTE TABLE NAME> --next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE>
Example
az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5
Attach the route table to the PKT0/PKT1 subnets:
Syntax
az network vnet subnet update --name <SUBNET NAME> --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VIRTUAL NETWORK NAME> --route-table <ROUTE TABLE NAME>
Example
az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table pkt0-route
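Optionally, confirm the routes with the following command (using the example route table name above):
Code Block
---|
az network route-table route list --route-table-name hfe-route-table --resource-group RBBN-SBC-RG --output table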
Additional Steps for SBC HFE Setup for HFE 2.1
To create the SBCs for HA with HFE setup, follow the instructions as described in Instantiate Standalone SBC on Azure, with the addition of the steps below.
Configure NICs
The SBC requires four NICs, each one attached to an individual subnet for MGMT, HA, PKT0, and PKT1.
To create a standard NIC, use the following syntax:
Code Block |
---|
az network nic create --name <NIC NAME>
--resource-group <RESOURCE GROUP NAME>
--vnet-name <VIRTUAL NETWORK NAME>
--subnet <SUBNET NAME>
--network-security-group <SECURITY GROUP NAME>
--accelerated-networking true |
Create NIC for PKT0 and PKT1
When creating the NICs for both SBC's PKT0 and PKT1 ports, include the flag --ip-forwarding
for receiving the traffic sent to the HFE node.
Example
Code Block |
---|
az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
az network nic create --name sbc1-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding
az network nic create --name sbc2-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
az network nic create --name sbc2-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding |
Info | ||
---|---|---|
| ||
Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs. |
Secondary IPs
The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.
Create and attach Secondary IPs to a network interface by executing the following command:
Syntax
Code Block |
---|
az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME> |
Example
Code Block |
---|
az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc1-pkt1-secIp --nic-name sbc1-pkt1 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc2-pkt0-secIp --nic-name sbc2-pkt0 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc2-pkt1-secIp --nic-name sbc2-pkt1 --resource-group RBBN-SBC-RG |
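Optionally, confirm that the secondary IP configurations were added to each NIC; for example, for the sbc1-pkt0 NIC created above:
Code Block
---|
az network nic ip-config list --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG --output table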
SBC Userdata
The SBCs in the HFE environment require the following user data:
SBC HFE - User Data
Key | Allowed Values | Description |
---|---|---|
CEName | N/A | Specifies the actual CE name of the SBC instance. CEName requirements: must start with an alphabetic character; contain only alphabetic characters and/or numbers (no special characters); cannot exceed 64 characters in length. |
ReverseNatPkt0 | True/False | Required to be True for SBC HA setup |
ReverseNatPkt1 | True/False | Required to be True for SBC HA setup |
SystemName | N/A | Specifies the System Name of the SBC instances. SystemName requirements: must start with an alphabetic character; contain only alphabetic characters and/or numbers (no special characters); cannot exceed 26 characters in length; must be the same on both peer CEs. |
SbcPersonalityType | isbc | The name of the SBC personality type for this instance. Currently, Ribbon supports only Integrated SBC (I-SBC). |
AdminSshKey | ssh-rsa ... | Public SSH Key to access the admin user; must be in the form ssh-rsa ... |
ThirdPartyCpuAlloc | 0-4 | (Optional) Number of CPUs segregated for use with non-Ribbon applications. Restrictions: 0-4 CPUs; both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured; the configuration must match between peer instances. |
ThirdPartyMemAlloc | 0-4096 | (Optional) Amount of memory (in MB) segregated for use with non-Ribbon applications. Restrictions: 0-4096 MB; both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured; the configuration must match between peer instances. |
CERole | ACTIVE/STANDBY | Specifies the CE's role within the HA setup. |
PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | This value must be the Private IP Address of the Peer SBC's HA interface. |
ClusterIp | xxx.xxx.xxx.xxx | This value must also be the Private IP Address of the Peer SBC's HA interface. |
PeerCEName | N/A | Specifies the actual CE name of the Peer SBC instance in the HA setup. |
SbcHaMode | 1to1 | Specifies the Mode of the HA configuration. Currently, Azure supports only 1:1 HA. |
PeerInstanceName | N/A | Specifies the name of the Peer Instance in the HA setup. Note: This is not the CEName or the SystemName. |
Pkt0HfeInstanceName | N/A | Specifies the instance name of the PKT0 HFE Node. Note: Applicable only for HFE 2.1. |
Pkt1HfeInstanceName | N/A | Specifies the instance name of the PKT1 HFE Node. Note: Applicable only for HFE 2.1. |
Create a JSON file using the following structure:
Code Block |
---|
{
"CEName" : "<SBC CE NAME>",
"ReverseNatPkt0" : "True",
"ReverseNatPkt1" : "True",
"SystemName" : "<SYSTEM NAME>",
"SbcPersonalityType": "isbc",
"AdminSshKey" : "<ssh-rsa ...>",
"ThirdPartyCpuAlloc" : "<0-4>",
"ThirdPartyMemAlloc" : "<0-4096>",
"CERole" : "<ACTIVE/STANDBY>",
"PeerCEHa0IPv4Address" : "<PEER HA IP ADDRESS>",
"ClusterIp" : "<PEER HA IP ADDRESS>",
"PeerCEName" : "<PEER SBC CE NAME>",
"SbcHaMode" : "1to1",
"PeerInstanceName" : "<PEER INSTANCE NAME>",
"Pkt0HfeInstanceName" : "<PKT0 HFE NODE INSTANCE NAME>",
"Pkt1HfeInstanceName" : "<PKT1 HFE NODE INSTANCE NAME>"
} |
Note: The SBC requires user data in a valid JSON format. If the user data is not valid JSON, the instance shuts down immediately. You cannot update user data on VMs in the Azure framework.
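Because invalid JSON shuts the instance down, it is worth validating the file before passing it to the VM; a minimal check, assuming the user data is saved as sbcUserData.json:
Code Block
---|
python3 -m json.tool sbcUserData.json > /dev/null && echo "Valid JSON"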
Configure PKT Ports
Configure the PKT ports using the SBC CLI.
Note: This configuration must be added after the instance has been created.
Example
Code Block | ||
---|---|---|
| ||
admin@sbc-10.2.2.12> conf
Entering configuration mode private
[ok][2019-10-04 09:04:15]
[edit]
admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG1 ipInterface LIF1 portName pkt0 ipPublicVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 mode inService state enabled
[ok][2019-10-04 09:04:46]
[edit]
admin@sbc-10.2.2.12% commit
Commit complete.
[ok][2019-10-04 09:04:50]
[edit]
admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG2 ipInterface LIF2 portName pkt1 ipPublicVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 mode inService state enabled
[ok][2019-10-04 09:04:58]
[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:00]
[edit]
admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT0 SUBNET GATEWAY> LIG1 LIF1 preference 100
[ok][2019-10-04 09:05:11]
[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:15]
[edit]
admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT1 SUBNET GATEWAY> LIG2 LIF2 preference 100
[ok][2019-10-04 09:05:22]
[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:24]
[edit]
admin@sbc-10.2.2.12% |
Info |
---|
The gateway IP address for the subnet is X.X.X.1 |
The correct SBC CLI configuration will look similar to the following:
Code Block |
---|
admin@sbc-10.2.2.12> show table addressContext default staticRoute
IP
INTERFACE IP
DESTINATION GROUP INTERFACE CE
IP ADDRESS PREFIX NEXT HOP NAME NAME PREFERENCE NAME
-----------------------------------------------------------------------
0.0.0.0 0 10.2.3.1 LIG1 LIF1 100 -
0.0.0.0 0 10.2.4.1 LIG2 LIF2 100 -
[ok][2019-10-04 09:16:47]
admin@sbc-10.2.2.12>
admin@sbc-10.2.2.12> show table addressContext default ipInterfaceGroup
IP IP IP
CE PORT IP ALT IP ALT DRYUP BW VLAN IP VAR PREFIX VAR PUBLIC VAR PREFIX PUBLIC
NAME IPSEC NAME NAME NAME ADDRESS PREFIX ADDRESS PREFIX MODE ACTION TIMEOUT STATE CONTINGENCY TAG BANDWIDTH V4 V4 VAR V4 V6 VAR V6 VAR V6
------------------------------------------------------------------------------------------------------------
LIG1 disabled LIF1 - pkt0 - - - - inService dryUp 60 enabled 0 - 0 IF2.IPV4 IF2.PrefixV4 - - - -
LIG2 disabled LIF2 - pkt1 - - - - inService dryUp 60 enabled 0 - 0 IF3.IPV4 IF3.PrefixV4 - - - -
[ok][2019-10-04 09:18:35] |
Sample Meta Variable Table
Example Meta Variable table for a SBC HA is provided below:
Code Block | ||||
---|---|---|---|---|
| ||||
admin@act-10.2.2.127> show table system metaVariable
CE NAME          NAME                  VALUE
-----------------------------------------------------
act-10.2.2.127   IF0.GWV4              10.2.0.1
act-10.2.2.127   IF0.IPV4              10.2.0.9
act-10.2.2.127   IF0.Port              Mgt0
act-10.2.2.127   IF0.RNat              True
act-10.2.2.127   IF1.GWV4              10.2.2.1
act-10.2.2.127   IF1.IPV4              10.2.2.127
act-10.2.2.127   IF1.Port              Ha0
act-10.2.2.127   IF1.RNat              True
act-10.2.2.127   IF2.GWV4              10.2.3.1
act-10.2.2.127   IF2.IPV4              10.2.3.10
act-10.2.2.127   IF2.Port              Pkt0
act-10.2.2.127   IF2.RNat              True
act-10.2.2.127   IF3.GWV4              10.2.4.1
act-10.2.2.127   IF3.IPV4              10.2.4.10
act-10.2.2.127   IF3.Port              Pkt1
act-10.2.2.127   IF3.RNat              True
act-10.2.2.127   IF0.FIPV4             137.117.73.22
act-10.2.2.127   IF0.PrefixV4          24
act-10.2.2.127   IF1.PrefixV4          24
act-10.2.2.127   IF2.PrefixV4          24
act-10.2.2.127   IF3.PrefixV4          24
act-10.2.2.127   HFE_IF2.FIPV4         52.168.34.216
act-10.2.2.127   HFE_IF3.FIPV4         10.2.2.7
act-10.2.2.127   HFE_IF2.IFName        IF_HFE_PKT0
act-10.2.2.127   HFE_IF3.IFName        IF_HFE_PKT1
act-10.2.2.127   secondaryIPList.Pkt0  ['10.2.3.10']
act-10.2.2.127   secondaryIPList.Pkt1  ['10.2.4.10']
sby-10.2.2.227   IF0.GWV4              10.2.0.1
sby-10.2.2.227   IF0.IPV4              10.2.0.14
sby-10.2.2.227   IF0.Port              Mgt0
sby-10.2.2.227   IF0.RNat              True
sby-10.2.2.227   IF1.GWV4              10.2.2.1
sby-10.2.2.227   IF1.IPV4              10.2.2.227
sby-10.2.2.227   IF1.Port              Ha0
sby-10.2.2.227   IF1.RNat              True
sby-10.2.2.227   IF2.GWV4              10.2.3.1
sby-10.2.2.227   IF2.IPV4              10.2.3.10
sby-10.2.2.227   IF2.Port              Pkt0
sby-10.2.2.227   IF2.RNat              True
sby-10.2.2.227   IF3.GWV4              10.2.4.1
sby-10.2.2.227   IF3.IPV4              10.2.4.10
sby-10.2.2.227   IF3.Port              Pkt1
sby-10.2.2.227   IF3.RNat              True
sby-10.2.2.227   IF0.FIPV4             40.76.8.39
sby-10.2.2.227   IF0.PrefixV4          24
sby-10.2.2.227   IF1.PrefixV4          24
sby-10.2.2.227   IF2.PrefixV4          24
sby-10.2.2.227   IF3.PrefixV4          24
sby-10.2.2.227   HFE_IF2.FIPV4         52.168.34.216
sby-10.2.2.227   HFE_IF3.FIPV4         10.2.2.7
sby-10.2.2.227   HFE_IF2.IFName        IF_HFE_PKT0
sby-10.2.2.227   HFE_IF3.IFName        IF_HFE_PKT1
sby-10.2.2.227   secondaryIPList.Pkt0  ['10.2.3.11']
sby-10.2.2.227   secondaryIPList.Pkt1  ['10.2.4.11']
[ok][2019-10-07 11:48:16]
admin@act-10.2.2.127> |
Add New Endpoints to UAC
To add a new endpoint to the Public Endpoint side with HFE1 (for example, 52.52.52.52 is the new endpoint IP):
- Add the endpoint IP to the outbound security group.
- Add the endpoint IP to the PKT0 subnet custom route table. The name of the route table is $instanceBaseName. Select the Next hop type of Virtual Appliance, and the Next hop address as $hfePkt0OutIp (the HFE eth2 IP).
- Add the endpoint IP to the Inbound Security Rule of the security group of nic1 of HFE1, and PKT0 of the SBC.
Add New Endpoints to UAS
Add the IP (for example, 10.2.3.9) to the PKT1 subnet custom route table. The name of the route table is $instanceBaseName.
Select the Next hop type of Virtual Appliance, and the Next hop address as $hfePkt1OutIp (the HFE eth2 IP).
HFE Node Logging
The HFE generates the following logs under /opt/HFE/log/
:
- cloud-init-nat.log: Logs generated from the initial configuration.
- HFE_conf.log: Logs generated from the setup of the HFE node. They contain information about:
- SBC instance names
- The IPs for allowing SSH into the HFE node
- The configured zone
- The SBC IPs being used to forward traffic to
- Iptables rules
- Routing rules
- HFE_conf.log.prev: A copy of the previous HFE_conf.log.
- HFE.log:
- Logs which contain messages about any switchover action, as well as connection errors. The logs generated are as follows:
- Connection error detected to Active SBC: <<IP>>. Attempting switchover.
- We have lost connection to the SBC. HFE node now performing switchover action
- Connection error ongoing - No connection to SBC PKT ports from HFE
- This error means that a switchover has been attempted, but no connection could be established to the new SBC.
- The HFE node then continually switches between the SBCs until a connection is established
- This usually means there is a network issue or a configuration issue on the SBCs.
- Switchover from old Active <<Old Active SBC IP>> to new Active <<New Active SBC IP>> complete. Connection established.
- The switchover action is complete and connection has been established to the 'Active' SBC
- Initial HFE startup configuration complete. Successfully connected to <<SBC Instance Name>>
- The HFE node has successfully connected to the active SBC following a boot.
- Connection error detected to Active SBC: <<IP>>. Attempting switchover.
- This log is rotated when it reaches 250MB:
- Up to four previous logs are saved.
- The previous logs are compressed to save disk space.
- Logs which contain messages about any switchover action, as well as connection errors. The logs generated are as follows:
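For example, to follow switchover and connection messages on a running HFE node (log path as listed above):
Code Block
---|
sudo tail -f /opt/HFE/log/HFE.log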
Optional HFE Configuration
Adding Custom Static Routes to HFE
For specialized deployments, users may need to add specific custom static routes to the HFE at the OS level. The HFE script supports this by using the HFE variable CUSTOM_ROUTES
. It enables the HFE script to add these routes as part of its start-up process and verify these routes continue to be on the HFE throughout the uptime.
CUSTOM_ROUTES is a comma separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.
To add the CUSTOM_ROUTES to the HFE customData, add the following line below /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
. For example:
Code Block |
---|
/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE |
If the HFE is already deployed, the variable is added to /opt/HFE/natVars.user. For example:
Code Block |
---|
echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" | sudo tee -a /opt/HFE/natVars.user |
Info |
---|
For |
Creating an HFE Sysdump
The HFE_AZ.sh script can create an archive of useful logs to help with debugging (similar to the SBC sysdump). Run the following command to collect the logs:
Code Block |
---|
sudo /opt/HFE/HFE_AZ.sh sysdump |
The following details are collected:
- Output of:
- Interfaces
- Routes
- IPtables
- dmesg
- conntrack count
- conntrack extended list
- The VM Azure metadata
- journalctl errors
- dhclient logs
- systemd-networkd logs
- The logs:
- syslog
- waagent logs
- cloud-init logs
/opt/HFE/*
(without previous sysdumps)- All user bash history
The sysdumps archives are stored in the .tar.gz
format under /opt/HFE/sysdump/
.
Handling Multiple Remote SSH IPs to Connect to HFE Node
The following section contains the instructions to set multiple SSH IPs to access the HFE node as well as to update the instances to add in more SSH IPs.
Info | ||
---|---|---|
| ||
Ensure the REMOTE_SSH_MACHINE_IP is not set to an IP from which call traffic originates, as this can break the HFE logic and cause traffic to fail to reach the SBC. |
Initial Orchestration
During orchestration, you can supply multiple IP addresses to the appropriate variable (REMOTE_SSH_MACHINE_IP in the HFE custom data) as a comma-separated list, for example: 10.0.0.1,10.0.0.2,10.0.0.3.
Updating Remote SSH IPs
The following steps describe the procedure to update the Remote SSH IPs on the Azure.
Info | ||
---|---|---|
| ||
To add in a new Remote SSH Machine IP, you need to supply the full list of IPs for which the routes need to be created. |
Info | ||
---|---|---|
| ||
The following procedure results in network outages as the HFE requires a reboot to select the latest list. |
Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:
- Log on to the HFE node as the rbbn user (the user specified during instance creation).
- Enter the updated variable to /opt/HFE/natVars.user. For example:
Code Block
echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
- Reboot the HFE:
Code Block
sudo reboot
Info: Any variable added to /opt/HFE/natVars.user will overwrite the values set as the variables in custom data. To add a new Remote SSH Machine IP, ensure you supply the complete list of IPs for which the routes need to be created.
Enabling PKT DNS Support on HFE
The DNS queries on the SBC PKT port are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY is used to enable the support for the HFE to forward these requests correctly.
To enable the PKT DNS Support on a new HFE setup, add "ENABLE_PKT_DNS_QUERY=1" to the customData, below SBC_PKT_PORT_NAME.
Example:
Code Block
---|
/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "ENABLE_PKT_DNS_QUERY=1" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE
To enable the PKT DNS Support option on an already configured HFE setup:
- Log on to the HFE node as a RBBN user.
Add the natVar ENABLE_PKT_DNS_QUERY to /opt/HFE/natVars.user with the value 1:
Code Block
echo "ENABLE_PKT_DNS_QUERY=1" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE.
Code Block sudo reboot