This section describes the extra steps (in addition to those for the Standalone SBC) necessary for creating an HFE/SBC setup on Azure. All commands used in this section are part of the Azure CLI.
HFE Node Network Setup
HFE nodes allow sub-second switchover between SBCs of an HA pair, as they negate the need for any IP reassignment. In the Microsoft Azure environment, there are two modes available for the HFE environment - HFE 2.0 and HFE 2.1.
Note
For each SBC HA pair, use a unique subnet for PKT0 and PKT1.
Note
The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. This is not an issue, because the HFE script sets up the entire configuration based on the Azure NICs, not the local interface names.
Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Mgmt interfaces share one subnet and each HFE interface to the associated SBC PKT port shares that port's subnet, a minimum of six subnets is necessary for a full HFE setup.
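As an illustration, the six subnets could be created as follows. The resource group, VNet name, subnet names, and CIDRs are placeholders, not values mandated by the HFE setup, and the address-prefix flag spelling varies across Azure CLI versions (--address-prefix vs. --address-prefixes):
az network vnet create --name RibbonNet --resource-group RBBN-SBC-RG --address-prefix 10.0.0.0/16
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name mgmt --address-prefix 10.0.1.0/24
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name ha --address-prefix 10.0.2.0/24
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name pkt0 --address-prefix 10.0.3.0/24
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name pkt1 --address-prefix 10.0.4.0/24
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name hfe-eth0 --address-prefix 10.0.5.0/24
az network vnet subnet create --resource-group RBBN-SBC-RG --vnet-name RibbonNet --name hfe-eth1 --address-prefix 10.0.6.0/24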
Configure the HFE nodes in one of two ways:
Use custom data on cloud-init enabled distributions.
Use the HFE_AZ_manual_setup.sh script, run as a systemd service, on distributions without cloud-init support (see HFE Azure Manual Setup Shell Script below).
HFE 2.0
An HFE 2.0 environment uses a single HFE node with five interfaces; all trusted and untrusted traffic uses the same node. Each interface's function is described in the following table:
HFE 2.0 - Interface Description

| Standard / Ubuntu Interface Name | NIC | HFE Node Function | Requires External IP? |
| --- | --- | --- | --- |
| eth0 / ens4 | nic0 | Public interface for SBC PKT0 | Yes |
| eth1 / ens5 | nic1 | Private interface for SBC PKT1 (can only be connected to/from instances in the same subnet) | No |
| eth2 / ens6 | nic2 | Management interface to HFE | Optional |
| eth3 / ens7 | nic3 | Interface to SBC PKT0 | No |
| eth4 / ens8 | nic4 | Interface to SBC PKT1 | No |
Note
To use an HFE 2.0 environment, the startup script for the SBCs requires the field HfeInstanceName. For more information, see the table in SBC Userdata.
HFE 2.1
In HFE 2.1, there are two HFE nodes: one handles untrusted public traffic to the SBC (for PKT0), and the other handles trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".
Both HFE nodes require three interfaces, as described in the following table:
HFE 2.1 - Interface Requirements

| Standard / Ubuntu Interface Name | NIC | PKT0 HFE Node Function | PKT1 HFE Node Function | Requires External IP? |
| --- | --- | --- | --- | --- |
| eth0 / ens4 | nic0 | Public interface for SBC PKT0 | Private interface for SBC PKT1 (can only be connected to/from instances in the same subnet) | Yes (only on PKT0 HFE node) |
| eth1 / ens5 | nic1 | Management interface to HFE | Management interface to HFE | Optional |
| eth2 / ens6 | nic2 | Interface to SBC PKT0 | Interface to SBC PKT1 | No |
Note
To use an HFE 2.1 environment, the startup script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, see the table in SBC Userdata.
Steps to Create SBC HA with HFE Setup
To create the SBC HA with HFE, perform the following steps:
The snippets given below are samples; Ribbon provides the latest scripts in the cloudTemplates package bundle.
HFE Azure Shell Script
#!/bin/bash
#############################################################
#
# Copyright (c) 2019 Ribbon Communications Inc.
#
# All Rights Reserved.
# Confidential and Proprietary.
#
# HFE_AZ.sh
#
# Module Description:
# Script to enable HFE(High-Availability Front End) instance as frontend for
# PKT ports of SBC.
# Call "HFE_AZ.sh setup" - to configure HFE with Secondary or Primary IPs
# The script will perform the following steps when called from cloud-init (setup function):
# 1) Save and clear old iptables rules : preConfigure
# 2) Enable IP forwarding: prepareHFEInstance
# 3) Read variables from natVars.input configured from userdata: readConfig
# 4) Determine interface names and GW IPs on HFE: setInterfaceMap
# 5) Extract the interface IPs required to configure the HFE for pkt0/pkt1
# ports: extractHfeConfigIps
# 6) Setup DNATs, SNATs and routes to route traffic through HFE to/from SBC PKT ports: configureNATRules
# 7) Configure Management interface to HFE on ens6: configureMgmtNAT
# 8) Log applied iptables configuration and routes: showCurrentConfig
# 9) Check connectivity to currently Active SBC using ping and switchover to Standby SBC
# if connection is lost to current Active: checkSbcConnectivity
#
#
# Call "HFE_AZ.sh cleanup" to clear all IP tables and Routes on the HFE.
# 1) Save and clear old iptables rules : preConfigure
# 2) Read variables from natVars.input configured from userdata: readConfig
# 3) Determine interface names and GW IPs on HFE: setInterfaceMap
# 4) Extract the interface IPs required to configure the HFE for pkt0/pkt1
# ports: extractHfeConfigIps
# 5) Route clean up for Pkt0/Pkt1 subnet and remote machine's public IP: routeCleanUp
#
# This option is useful to debug connectivity of an end-point with the HFE.
# After calling this, no packets are forwarded to the SBC; the user can ping
# the IP on ens4 to make sure connectivity between the end-point and the HFE
# is working fine. Once this is done, the user MUST reboot the HFE node to
# restore all forwarding rules and routes.
#
#
# NOTE: This script is run by cloud-init or systemd in HFE instance.
#
# This script should be uploaded to a container in an Azure Storage Account.
# The HFE node user data then downloads and runs it with cloud-init.
# See Ribbon Documentation for more information.
#
#
#############################################################
declare -A interfaceMap
declare -A commandMap
HFERoot="/opt/HFE"
varFile="$HFERoot/natVars.input"
confLogFile="$HFERoot/log/HFE_conf.log"
statusLogFile="$HFERoot/log/HFE.log"
oldRules="$HFERoot/iptables.rules.prev"
hfeLogRotateConf="/etc/logrotate.d/hfe"
APP_INSTALL_MARKER="$HFERoot/app_install_marker"
IPTABLES="$(command -v iptables)"
CURL="$(command -v curl)"
ACTIVE_SBC_IP_ARR=""
STANDBY_SBC_IP_ARR=""
HFE_ETH0_IP_ARR=""
ACTIVE_SBC_IP_PKT1_ARR=""
STANDBY_SBC_IP_PKT1_ARR=""
HFE_ETH1_IP_ARR=""
MODE=""
gRetVal=""
readonly INTF_TYPE_MASTER=2
readonly INTF_TYPE_SLAVE=1
readonly INTF_TYPE_NORMAL=0
readonly MAX_QUEUE_LENGTH=8192
PRIMARY_IP=0
RG_NAME="" # Resource Group Name
SUB_ID="" # Subscription ID
# REST VARS
apiversion="api-version=2018-10-01"
resource="resource=https://management.azure.com/"
DEBUG_MODE="0"
INIT_LOG=1
PROG=${0##*/}
usage()
{
echo $1
echo "usage: $PROG <setup | cleanup>"
echo "Example:"
echo "$PROG setup - Setup HFE with Secondary IPs"
echo "$PROG cleanup - Cleanup IP tables and route on HFE"
exit
}
timestamp()
{
date +"%Y-%m-%d %T"
}
timestampDebug()
{
date +"%F %T,%3N"
}
loggerHfe()
{
echo $(timestamp) "$1" >> $confLogFile 2>&1
}
statusLoggerHfeDebug()
{
if [ "$DEBUG_MODE" -eq "1" ]; then
echo $(timestamp) "$1" >> $statusLogFile 2>&1
fi
}
statusLoggerHfe()
{
echo $(timestamp) "$1" >> $statusLogFile 2>&1
}
doneMessage()
{
loggerHfe " ========================= DONE HFE_AZ.sh =========================================="
exit
}
errorAndExit()
{
loggerHfe " Error: $1"
doneMessage
}
clearOldIptableRules()
{
# reset the default policies in the filter table.
$IPTABLES -P INPUT ACCEPT
$IPTABLES -P FORWARD ACCEPT
$IPTABLES -P OUTPUT ACCEPT
# reset the default policies in the nat table.
$IPTABLES -t nat -P PREROUTING ACCEPT
$IPTABLES -t nat -P POSTROUTING ACCEPT
$IPTABLES -t nat -P OUTPUT ACCEPT
# reset the default policies in the mangle table.
$IPTABLES -t mangle -P PREROUTING ACCEPT
$IPTABLES -t mangle -P POSTROUTING ACCEPT
$IPTABLES -t mangle -P INPUT ACCEPT
$IPTABLES -t mangle -P OUTPUT ACCEPT
$IPTABLES -t mangle -P FORWARD ACCEPT
# flush all the rules in the filter and nat tables.
$IPTABLES -F
$IPTABLES -t nat -F
$IPTABLES -t mangle -F
# erase all non-default chains in the filter, nat and mangle tables.
$IPTABLES -X
$IPTABLES -t nat -X
$IPTABLES -t mangle -X
}
getSubnetPrefix()
{
local SBC_NAME=$1
if [[ $2 == "PKT0" ]]; then
IF="2"
else
IF="3"
fi
vmInstance=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$SBC_NAME?$apiversion" -H "$authhead")
networkInterfaceId=$(echo $vmInstance | $JQ .properties.networkProfile.networkInterfaces[$IF].id | sed 's/"//g')
subnetId=$($CURL "https://management.azure.com$networkInterfaceId?$apiversion" -H "$authhead" | $JQ .properties.ipConfigurations[0].properties.subnet.id | sed 's/"//g')
subnetPrefix=$($CURL "https://management.azure.com$subnetId?$apiversion" -H "$authhead" | $JQ .properties.addressPrefix | sed 's/"//g')
loggerHfe " Got Subnet Prefix for $2 - $subnetPrefix"
echo $subnetPrefix
}
#$1 = Active SBC NAME
#$2 = PKT port
#$3 = Gateway
#$4 = Interface
configureSubnetRoute()
{
subnetPrefix=$(getSubnetPrefix $1 $2)
declare -g subnetPrefix_$2="$subnetPrefix"
statusLoggerHfeDebug "set subnetPrefix_$2 as $subnetPrefix"
$IP route | grep -E "$subnetPrefix.*$4"
if [ $? -eq 0 ]; then
loggerHfe " Route already availabe to $2 subnet"
else
$IP route add $subnetPrefix via $3 dev $4
if [ $? -eq 0 ]; then
loggerHfe " Set route to reach $2 subnet"
else
errorAndExit "Failed to set route to $subnetPrefix"
fi
fi
}
# $1 - IP / Subnet Prefix
# $2 - Gateway
# $3 - Interface
removeRoute()
{
$IP route | grep -E "$1.*$2.*$3"
if [ $? -eq 0 ]; then
$IP route del $1 via $2 dev $3
if [ $? -eq 0 ]; then
loggerHfe " Route deleted to for $1 "
else
errorAndExit " Failed to delete route for $1 "
fi
else
loggerHfe " Route not available for $1 "
fi
}
# $1 - IP / Subnet Prefix
# $2 - Gateway
# $3 - Interface
verifyRoute()
{
statusLoggerHfeDebug "verifyRoute $1 $2 $3"
$IP route | grep -E "$1.*$3"
if [ $? -ne 0 ]; then
statusLoggerHfeDebug "Adding missing route for $1."
$IP route add $1 via $2 dev $3
fi
}
verifyRoutes()
{
if [[ $MODE == "single" ]]; then
verifyRoute $subnetPrefix_PKT0 $ETH3_GW $ETH3
verifyRoute $subnetPrefix_PKT1 $ETH4_GW $ETH4
else
var="subnetPrefix_$SBC_PKT_PORT_NAME"
verifyRoute "${!var}" $ETH2_GW $ETH2
fi
IFS=',' read -ra SSH_IP_LIST <<< "$REMOTE_SSH_MACHINE_IP"
for REMOTE_MACHINE_IP in "${SSH_IP_LIST[@]}"; do
verifyRoute $REMOTE_MACHINE_IP $MGMT_GW $MGMT
done
}
routeCleanUp()
{
loggerHfe " Route clean up for Pkt0 and Pkt1 IP and remote machine's public IP."
if [[ $MODE == 'single' ]]; then
#PKT0
pkt0SubnetPrefix=$(getSubnetPrefix $ACTIVE_SBC_VM_NAME "PKT0")
removeRoute $pkt0SubnetPrefix $ETH3_GW $ETH3
#PKT1
pkt1SubnetPrefix=$(getSubnetPrefix $ACTIVE_SBC_VM_NAME "PKT1")
removeRoute $pkt1SubnetPrefix $ETH4_GW $ETH4
else
subnetPrefix=$(getSubnetPrefix $ACTIVE_SBC_VM_NAME $SBC_PKT_PORT_NAME)
removeRoute $subnetPrefix $ETH2_GW $ETH2
fi
## Cleanup Ip route from Remote Machine to HFE
IFS=',' read -ra SSH_IP_LIST <<< "$REMOTE_SSH_MACHINE_IP"
for REMOTE_MACHINE_IP in "${SSH_IP_LIST[@]}"; do
removeRoute $REMOTE_MACHINE_IP $MGMT_GW $MGMT
done
}
verifyPackages()
{
local missingCmd=0
commandMap["iptables"]=$IPTABLES
commandMap["curl"]=$CURL
commandMap["jq"]=$JQ
commandMap["conntrack"]=$CONNTRACK
commandMap["fping"]=$FPING
commandMap["ip"]=$IP
for i in "${!commandMap[@]}"
do
if [[ ${commandMap[$i]} == "" ]]; then
loggerHfe "Required packages $i is missing."
missingCmd=1
fi
done
if [ $missingCmd -eq 1 ]; then
errorAndExit "Missing required packages. Exiting!!!"
fi
}
prepareHFEInstance()
{
### Configure ip_forward - Enable ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
if [ ! -f $APP_INSTALL_MARKER ]; then
### Install required packages
if [[ $(command -v yum) == "" ]]; then
sudo apt-get update
sudo apt-get -y install jq
sudo apt-get -y install conntrack
sudo apt-get -y install fping
else
sudo yum -y update
sudo yum -y install epel-release
if [ $? -ne 0 ]; then
majorOsVer=$(cat /etc/os-release | grep VERSION_ID | awk -F= '{print $2}' | sed 's/"//g' | sed 's/\..*//')
sudo yum -y install wget
cd /tmp/
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-$majorOsVer.noarch.rpm
epelRpm=$(ls epel*.rpm)
sudo yum -y install $epelRpm
fi
sudo yum -y install jq
sudo yum -y install conntrack
sudo yum -y install fping
fi
setLogRotate
fi
JQ="$(command -v jq)"
CONNTRACK="$(command -v conntrack)"
FPING="$(command -v fping)"
IP="$(command -v ip)"
verifyPackages
# If we get this far, packages are good
echo 1 > $APP_INSTALL_MARKER
}
preConfigure()
{
if [ -f $confLogFile ]; then
mv -f $confLogFile $confLogFile.prev
fi
### Redirect all echo $(timestamp) to file after writing ip_forward
loggerHfe " ========================== Starting HFE_AZ.sh ============================"
loggerHfe " Enabled IP forwarding"
loggerHfe " This script will setup DNAT, SNAT and IP forwarding."
loggerHfe " Save old rules in $HFERoot/firewall.rules"
### Save old iptable rules
sudo iptables-save > $oldRules
if [ "$?" = "0" ]; then
### Clear the iptables
clearOldIptableRules
else
errorAndExit "Cound not save old iptables rules. Exiting"
fi
}
getGatewayIp()
{
local interface=$1
cidrIp=$($IP route | grep "$interface proto kernel scope link" | awk -F " " '{print $1}' | awk -F "/" '{print $1}')
finalOct=$(echo $cidrIp | awk -F "." '{print $4}')
gwOctet=$(( finalOct + 1 ))
gwIp=$(echo $cidrIp | awk -v var="$gwOctet" -F. '{$NF=var}1' OFS=.)
echo $gwIp
}
setInterfaceMap()
{
interfaceIds=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$HFE_NAME?$apiversion" -H "$authhead" | $JQ .properties.networkProfile.networkInterfaces)
interfaceNum=$(echo $interfaceIds | $JQ length)
for i in $(seq 0 $((interfaceNum-1))); do
interfaceId=$(echo $interfaceIds | $JQ .[$i].id | sed 's/"//g')
privIp=$($CURL "https://management.azure.com$interfaceId?$apiversion" -H "$authhead" | $JQ .properties.ipConfigurations[0].properties.privateIPAddress | sed 's/"//g')
while read IF; do
ifIp=$(ip address show $IF | grep -w inet | awk -F ' ' '{print $2}' | awk -F '/' '{print $1}')
# handle multiple IPs
if [[ $(echo $ifIp | grep -c $privIp) -ne 0 ]]; then
interfaceMap["eth$i"]=$IF
break
fi
done < <($IP address show | egrep ^[0-9]: | awk -F ' ' '{print $2}' | sed 's/://' | grep -v lo)
done
ETH0=${interfaceMap['eth0']}
ETH1=${interfaceMap['eth1']}
ETH2=${interfaceMap['eth2']}
if [[ $MODE == "single" ]]; then
ETH3=${interfaceMap['eth3']}
ETH4=${interfaceMap['eth4']}
fi
if [[ $MODE == "single" ]]; then
# Gateway is second IP in CIDR
ETH2_GW=$(getGatewayIp $ETH2)
ETH3_GW=$(getGatewayIp $ETH3)
ETH4_GW=$(getGatewayIp $ETH4)
loggerHfe ""
loggerHfe " Default GWs:"
loggerHfe " ETH2_GW $ETH2_GW"
loggerHfe " ETH3_GW $ETH3_GW"
loggerHfe " ETH4_GW $ETH4_GW"
else
# Gateway is second IP in CIDR
ETH1_GW=$(getGatewayIp $ETH1)
ETH2_GW=$(getGatewayIp $ETH2)
loggerHfe ""
loggerHfe " Default GWs:"
loggerHfe " ETH1_GW $ETH1_GW"
loggerHfe " ETH2_GW $ETH2_GW"
fi
#Set the MGMT interface information
if [[ $MODE == "single" ]]; then
MGMT=$ETH2
MGMT_GW=$ETH2_GW
else
MGMT=$ETH1
MGMT_GW=$ETH1_GW
fi
}
readConfig()
{
loggerHfe " Read variables from file $varFile"
source $varFile
SBC_PKT_PORT_NAME=$(echo $SBC_PKT_PORT_NAME | awk '{print toupper($0)}') #Force UpperCase
loggerHfe " Data from $varFile:"
if [[ $SBC_PKT_PORT_NAME == "" ]]; then
MODE="single"
else
MODE="split"
loggerHfe " SBC_PKT_PORT_NAME $SBC_PKT_PORT_NAME"
fi
loggerHfe " ACTIVE_SBC_VM_NAME $ACTIVE_SBC_VM_NAME"
loggerHfe " STANDBY_SBC_VM_NAME $STANDBY_SBC_VM_NAME"
loggerHfe " REMOTE_SSH_MACHINE_IP $REMOTE_SSH_MACHINE_IP"
loggerHfe ""
if [[ $MODE == "single" ]]; then
loggerHfe " SBC_PKT_PORT_NAME is not configured. Assuming single HFE instance with 5 interfaces"
loggerHfe ""
fi
}
## Parameters passed in to configureNATRules
## $1 - Interface for Public/Private connection to HFE
## $2 - Interface to Pkt0/Pkt1 to Active SBC
## $3 - Gateway for Pkt0/Pkt1 to Active SBC
## $4 - Print configuration messages to HFE log
configureNATRules()
{
## Set forward ACCEPT rule for packets coming into HFE
$IPTABLES -I FORWARD -i $1 -o $2 -j ACCEPT
if [ $? -eq 0 ]; then
[[ ! -z $4 ]] && loggerHfe " Set Forward ACCEPT rule all packets coming from outside $1 to $2 towards SBC"
else
errorAndExit "Failed to set forward ACCEPT rule for all packets coming on IP($1)"
fi
## Set forward ACCEPT rule for packets coming from SBC
$IPTABLES -I FORWARD -i $2 -o $1 -j ACCEPT
if [ $? -eq 0 ]; then
[[ ! -z $4 ]] && loggerHfe " Set Forward ACCEPT rule all packets coming from SBC ($2) to $1"
else
errorAndExit "Failed to set ACCEPT rule all packets coming from SBC ($2) to $1"
fi
if [[ $1 == $ETH0 ]]; then
sbcIpCount="${#ACTIVE_SBC_IP_ARR[@]}"
hfeIpCount="${#HFE_ETH0_IP_ARR[@]}"
addNumRoute=$(( hfeIpCount < sbcIpCount ? hfeIpCount : sbcIpCount ))
else
sbcIpCount="${#ACTIVE_SBC_IP_PKT1_ARR[@]}"
hfeIpCount="${#HFE_ETH1_IP_ARR[@]}"
addNumRoute=$(( hfeIpCount < sbcIpCount ? hfeIpCount : sbcIpCount ))
fi
for (( idx=0; idx<$addNumRoute; idx++ ))
do
if [[ $1 == $ETH0 ]]; then
hfeIp=${HFE_ETH0_IP_ARR[$idx]}
sbcIp=${ACTIVE_SBC_IP_ARR[$idx]}
else
hfeIp=${HFE_ETH1_IP_ARR[$idx]}
sbcIp=${ACTIVE_SBC_IP_PKT1_ARR[$idx]}
fi
## Set DNAT for destination IP
$IPTABLES -t nat -I PREROUTING -d $hfeIp -j DNAT --to $sbcIp
if [ $? -eq 0 ]; then
[[ ! -z $4 ]] && loggerHfe " Set up proper DNAT for destination IP $hfeIp to offset $sbcIp."
else
errorAndExit "Failed to set DNAT rule for destination IP $hfeIp to offset $sbcIp."
fi
## Set SNAT for external interface to HFE
$IPTABLES -t nat -I POSTROUTING -o $1 -s $sbcIp -j SNAT --to $hfeIp
if [ $? -eq 0 ]; then
[[ ! -z $4 ]] && loggerHfe " Set up POSTROUTING rule (source IP $sbcIp, to offset $hfeIp) for packet sent on $1"
else
errorAndExit "Failed to set POSTROUTING rule (source IP $sbcIp, to offset $hfeIp) for packet sent on $1"
fi
done
}
###################################################################################################
#### WARNING
####
#### Each call of this function will result in pkt drop.
####
#### Each conntrack flush operation has an associated time penalty (pkt drop), call this function
#### only when you are all set with setting up new iptables rules (and old rules are cleaned up).
####
####
####
###################################################################################################
clearOldRules()
{
## Reset connection tracking
## Any packet received on input interfaces before NAT rules are set are not forwarded
## to sbc, connection tracking will not forward those packets even
## if NAT rules are set after receiving first packet from that source
## IP/Port as it has cache entry for source IP and Port.
## Reset connection tracking will treat them as new stream and those packets
## will be forwarded to SBC once SNAT and DNAT rules are setup
## properly.
$CONNTRACK -F conntrack
if [ $? -eq 0 ]; then
[[ ! -z $6 ]] && loggerHfe " Flushing connection tracking rules."
else
[[ ! -z $6 ]] && loggerHfe " (WARNING):Flushing connection tracking rules failed."
fi
}
configureMgmtNAT()
{
loggerHfe " Optional configuration to reach $MGMT using Remote IP. "
if [ -z "${REMOTE_SSH_MACHINE_IP// }" ]; then
loggerHfe " No IP is given for REMOTE_SSH_MACHINE_IP field, no route is set for managing this instance over $MGMT"
else
loggerHfe " $MGMT is used to manage this HFE instance, we can login using private IP to manage HFE machine without setting default route"
loggerHfe " default route points to eth0 which will be used to interface all traffic for SBC"
IFS=',' read -ra SSH_IP_LIST <<< "$REMOTE_SSH_MACHINE_IP"
for REMOTE_MACHINE_IP in "${SSH_IP_LIST[@]}"; do
$IP route | grep -E "$REMOTE_MACHINE_IP.*$MGMT_GW.*$MGMT"
if [ "$?" = "0" ]; then
loggerHfe " Route is already available for remote machine's public IP($REMOTE_MACHINE_IP), from this IP you can SSH to HFE over Remote IP($MGMT)"
else
$IP route add $REMOTE_MACHINE_IP via $MGMT_GW dev $MGMT
if [ "$?" = "0" ]; then
loggerHfe " Route added for remote machine's public IP($REMOTE_MACHINE_IP), from this IP you can SSH to HFE over Remote IP($MGMT)"
else
errorAndExit "Failed to add route for ($REMOTE_MACHINE_IP)"
fi
fi
done
fi
}
showCurrentConfig()
{
loggerHfe " Applied iptable rules and kernel routing table. "
natTable=`$IPTABLES -t nat -vnL`
loggerHfe " NAT tables:"
loggerHfe " $natTable "
filterTable=`$IPTABLES -t filter -vnL`
loggerHfe " Filter tables:"
loggerHfe " $filterTable "
routeOutput=`$IP route`
loggerHfe " Route:"
loggerHfe " $routeOutput "
}
getSbcPktIps()
{
local SBC_NAME=$1
if [[ $2 == "PKT0" ]]; then
IF="2"
else
IF="3"
fi
unsortedList=''
sortedList=''
vmInstance=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$SBC_NAME?$apiversion" -H "$authhead")
networkInterfaceId=$(echo $vmInstance | $JQ .properties.networkProfile.networkInterfaces[$IF].id | sed 's/"//g')
# Get Primary IPs or Secondary IPs
ipConfigs=$($CURL "https://management.azure.com$networkInterfaceId?$apiversion" -H "$authhead" | $JQ .properties.ipConfigurations)
configCount=$(echo $ipConfigs | $JQ . | $JQ length)
for i in $(seq 0 $((configCount-1))); do
if [[ $(echo $ipConfigs | $JQ .[$i].properties.primary | sed 's/"//g') == "true" ]]; then
#Don't use primary on SBC
continue
else
ip=$(echo $ipConfigs | $JQ .[$i].properties.privateIPAddress | sed 's/"//g')
unsortedList+="$ip "
fi
done
sortedList=$(echo $unsortedList| tr " " '\n' | sort -n -t . -k1,1 -k2,2 -k 3,3 -k4,4)
if [[ -z "$sortedList" ]];then
errorAndExit "Failed to get IP(s) assigned to $2 on $1."
fi
loggerHfe " $SBC_NAME - IPs for $2 are $(echo $sortedList | tr '\n' " ")"
echo $sortedList
}
getHfeIps()
{
interface=$1
unsortedList=''
sortedList=''
if [[ $interface == $ETH0 ]]; then
nic=0
else
nic=1
fi
vmInstance=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$HFE_NAME?$apiversion" -H "$authhead")
networkInterfaceId=$(echo $vmInstance | $JQ .properties.networkProfile.networkInterfaces[$nic].id | sed 's/"//g')
ipConfigs=$($CURL "https://management.azure.com$networkInterfaceId?$apiversion" -H "$authhead" | $JQ .properties.ipConfigurations)
configCount=$(echo $ipConfigs | $JQ . | $JQ length)
for i in $(seq 0 $((configCount-1))); do
ip=$(echo $ipConfigs | $JQ .[$i].properties.privateIPAddress | sed 's/"//g')
unsortedList+="$ip "
done
sortedList=$(echo $unsortedList| tr " " '\n' | sort -n -t . -k1,1 -k2,2 -k 3,3 -k4,4)
if [[ -z "$sortedList" ]];then
errorAndExit "Failed to get IP(s) assigned to $HFE_NAME on $interface."
fi
loggerHfe " $HFE_NAME - IPs for $interface are $(echo $sortedList | tr '\n' " ")"
echo $sortedList
}
setLogRotate()
{
# Setup HFE logRotate to backup and rotate logs which report status of connection between HFE and Active SBC
if [ ! -f $hfeLogRotateConf ]; then
echo -e "$statusLogFile" >> $hfeLogRotateConf
echo -e "{" >> $hfeLogRotateConf
echo -e " rotate 4" >> $hfeLogRotateConf
echo -e " size=250M" >> $hfeLogRotateConf
echo -e " missingok" >> $hfeLogRotateConf
echo -e " notifempty" >> $hfeLogRotateConf
echo -e " dateformat -%d%m%Y" >> $hfeLogRotateConf
echo -e " compress" >> $hfeLogRotateConf
echo -e "}" >> $hfeLogRotateConf
fi
}
checkSbcConnectivity()
{
local switched=0
local counter=0
local verifyCount=0
while :; do
############################################### CAUTION #################################
#
# -t (value) < -P (value)
#
#########################################################################################
# -c, --count=N
#
# Number of request packets to send to each target. In this mode, a line is
# displayed for each received response (this can suppressed with -q or -Q). Also,
# statistics about responses for each target are displayed when all requests have
# been sent (or when interrupted).
#
#
# -p, --period= MSEC
#
# In looping or counting modes (-l, -c, or -C), this parameter sets the time in
# milliseconds that fping waits between successive packets to an individual
# target. Default is 1000 and minimum is 10.
#
#
#
# -t, --timeout= MSEC
#
# Initial target timeout in milliseconds. In the default, non-loop mode,
# the default timeout is 500ms, and it represents the amount of time that
# fping waits for a response to its first request. Successive timeouts are
# multiplied by the backoff factor specified with -B.
#
# In loop/count mode, the default timeout is automatically adjusted to
# match the "period" value (but not more than 2000ms). You can still
# adjust the timeout value with this option, if you wish to, but note that
# setting a value larger than "period" produces inconsistent results,
# because the timeout value can be respected only for the last ping.
#
# Also note that any received replies that are larger than the timeout
# value, will be discarded.
############################################################################################
# Use first IP
$FPING -c 3 -t 20 -p 200 ${ACTIVE_SBC_IP_ARR[0]} &> /dev/null
if [ $? -ne 0 ]
then
if [ $switched -eq 0 ]; then
statusLoggerHfe "Connection error detected to Active SBC: $ACTIVE_SBC_VM_NAME. ${ACTIVE_SBC_IP_ARR[0]} did not respond. Attempting switchover."
switched=1
elif [ $switched -eq 1 ] && [ $(($counter % 10)) -eq 0 ]; then
statusLoggerHfe "Connection error ongoing - No connection to SBC $SBC_PKT_PORT_NAME from HFE"
fi
# If single HFE (2.0), this is PKT0 config
TEMP_SBC_IP_ARR=( ${ACTIVE_SBC_IP_ARR[@]} )
ACTIVE_SBC_IP_ARR=( ${STANDBY_SBC_IP_ARR[@]} )
STANDBY_SBC_IP_ARR=( ${TEMP_SBC_IP_ARR[@]} )
TEMP_SBC_NAME=$ACTIVE_SBC_VM_NAME
ACTIVE_SBC_VM_NAME=$STANDBY_SBC_VM_NAME
STANDBY_SBC_VM_NAME=$TEMP_SBC_NAME
if [[ $MODE == "single" ]]; then
TEMP_SBC_IP_PKT1_ARR=( ${ACTIVE_SBC_IP_PKT1_ARR[@]} )
ACTIVE_SBC_IP_PKT1_ARR=( ${STANDBY_SBC_IP_PKT1_ARR[@]} )
STANDBY_SBC_IP_PKT1_ARR=( ${TEMP_SBC_IP_PKT1_ARR[@]} )
fi
statusLoggerHfeDebug "Before clearOldIptableRules"
clearOldIptableRules
statusLoggerHfeDebug "After clearOldIptableRules, and just before configureNATRules"
statusLoggerHfeDebug "Verifying Routes"
verifyRoutes
verifyCount=0
if [[ $MODE == "single" ]]; then
configureNATRules $ETH0 $ETH3 $ETH3_GW
configureNATRules $ETH1 $ETH4 $ETH4_GW
else
configureNATRules $ETH0 $ETH2 $ETH2_GW
fi
statusLoggerHfeDebug "After configureNATRules and just before clearOldRules"
clearOldRules
statusLoggerHfeDebug "After clearOldRules"
let counter=$counter+1 #increment log count
statusLoggerHfeDebug "Wait for SBC to complete switchover."
statusLoggerHfeDebug ""
statusLoggerHfeDebug ""
sleep 2s
else
if [ $INIT_LOG -eq 1 ]; then
statusLoggerHfe "Initial HFE startup configuration complete. Successfully connected to $ACTIVE_SBC_VM_NAME"
INIT_LOG=0
switched=0
counter=0
elif [ $switched -eq 1 ]; then
statusLoggerHfe "Switchover from old Active $TEMP_SBC_NAME to new Active $ACTIVE_SBC_VM_NAME complete. Connection established."
switched=0
counter=0
fi
statusLoggerHfeDebug "Nothing needs to be done, active is up $ACTIVE_SBC_VM_NAME"
sleep 0.5s
fi
# Verify Routes every ~10 secs
if [ $verifyCount -eq 20 ]; then
verifyRoutes
verifyCount=0
fi
let verifyCount=$verifyCount+1
done
}
# Verify authorization is successful
verifyAuthorization()
{
local SBC_NAME=$1
local count=0
loggerHfe " Validating authorization token"
vmResult=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$SBC_NAME?$apiversion" -H "$authhead" | $JQ .error.code)
while [[ $vmResult != "null" ]] && [ $count -lt 30 ]; do
loggerHfe " Authorization validation for attempt $count failed. Retrying in 30 seconds"
let count=$count+1
sleep 30s
token=$($CURL "http://169.254.169.254/metadata/identity/oauth2/token?$apiversion&$resource" -H Metadata:true | $JQ .access_token | sed 's/"//g')
authhead="Authorization:Bearer $token"
vmResult=$($CURL "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/$RG_NAME/providers/Microsoft.Compute/virtualMachines/$SBC_NAME?$apiversion" -H "$authhead" | $JQ .error.code)
done
if [ $count -eq 30 ]; then
errorAndExit "Authorization still failing after 15 minutes. Verify identity roles and reboot! Exiting"
else
loggerHfe " Authorization validation successful for attempt $count. Continuing to get SBC information"
fi
}
getAzureVars()
{
metaDataJson=$($CURL "http://169.254.169.254/metadata/instance/compute?$apiversion" -H Metadata:true)
RG_NAME=$(echo $metaDataJson | $JQ .resourceGroupName | sed 's/"//g')
SUB_ID=$(echo $metaDataJson | $JQ .subscriptionId | sed 's/"//g')
HFE_NAME=$(echo $metaDataJson | $JQ .name | sed 's/"//g')
token=$($CURL "http://169.254.169.254/metadata/identity/oauth2/token?$apiversion&$resource" -H Metadata:true | $JQ .access_token | sed 's/"//g')
authhead="Authorization:Bearer $token"
}
extractHfeConfigIps()
{
# Get IPs
if [[ $MODE == "single" ]]; then
ACTIVE_SBC_IP_ARR=( $(getSbcPktIps $ACTIVE_SBC_VM_NAME "PKT0") )
STANDBY_SBC_IP_ARR=( $(getSbcPktIps $STANDBY_SBC_VM_NAME "PKT0") )
HFE_ETH0_IP_ARR=( $(getHfeIps $ETH0) )
ACTIVE_SBC_IP_PKT1_ARR=( $(getSbcPktIps $ACTIVE_SBC_VM_NAME "PKT1") )
STANDBY_SBC_IP_PKT1_ARR=( $(getSbcPktIps $STANDBY_SBC_VM_NAME "PKT1") )
HFE_ETH1_IP_ARR=( $(getHfeIps $ETH1) )
else
ACTIVE_SBC_IP_ARR=( $(getSbcPktIps $ACTIVE_SBC_VM_NAME $SBC_PKT_PORT_NAME) )
STANDBY_SBC_IP_ARR=( $(getSbcPktIps $STANDBY_SBC_VM_NAME $SBC_PKT_PORT_NAME) )
HFE_ETH0_IP_ARR=( $(getHfeIps $ETH0) )
fi
# Verify we have everything
if [ -z $ACTIVE_SBC_IP_ARR ] || [ -z $STANDBY_SBC_IP_ARR ] || [ -z $HFE_ETH0_IP_ARR ] ; then
errorAndExit "SBC information missing. Exiting"
elif [[ $MODE == "single" ]]; then
if [ -z $ACTIVE_SBC_IP_PKT1_ARR ] || [ -z $STANDBY_SBC_IP_PKT1_ARR ] || [ -z $HFE_ETH1_IP_ARR ]; then
errorAndExit "SBC information missing. Exiting"
fi
fi
}
# Returns interface type.
# A SLAVE interface will have a symlink to its master;
# if the symlink is present, then it's a slave interface.
# A normal interface will not have any of these files.
getIntfType()
{
local intf=$1
if ls /sys/class/net/$intf/lower_* 1> /dev/null 2>&1; then
gRetVal=$INTF_TYPE_MASTER
return 0
fi
if ls /sys/class/net/$intf/master 1> /dev/null 2>&1; then
gRetVal=$INTF_TYPE_SLAVE
return 0
fi
gRetVal=$INTF_TYPE_NORMAL
return 0
}
function setSecondaryIntfQueueLen()
{
local netdev=""
local intfType=$INTF_TYPE_NORMAL
local rxPresetMax=4096
local txPresetMax=4096
local rxQLen=0
local txQLen=0
for netdev in $(ls /sys/class/net); do
if [ "$netdev" != "lo" ]; then
getIntfType $netdev
intfType=$gRetVal
if [ $intfType -eq $INTF_TYPE_SLAVE ]; then
rxQLen=$(ethtool -g $netdev| grep -A 4 "Current hardware settings" | grep RX: | cut -d ':' -f2| xargs)
if [ $? -ne 0 ]; then
errorAndExit "$netdev: Failed to get Current RX queue length"
fi
txQLen=$(ethtool -g $netdev| grep -A 4 "Current hardware settings" | grep TX: | cut -d ':' -f2| xargs)
if [ $? -ne 0 ]; then
errorAndExit "$netdev: Failed to get Current TX queue length"
fi
#find the preset Max RX queue length
rxPresetMax=$(ethtool -g $netdev| grep -A 4 "Pre-set maximums" | grep RX: | cut -d ':' -f2| xargs)
if [ $? -ne 0 ]; then
errorAndExit "$netdev: Failed to get Pre-set Maximum RX queue length"
fi
#restrict the queue length to Max 8192
rxPresetMax=$((rxPresetMax>MAX_QUEUE_LENGTH ? MAX_QUEUE_LENGTH : rxPresetMax))
#find the preset Max TX queue length
txPresetMax=$(ethtool -g $netdev | grep -A 4 "Pre-set maximums" | grep TX: | cut -d ':' -f2| xargs)
if [ $? -ne 0 ]; then
errorAndExit "$netdev: Failed to get Pre-set Maximum TX queue length"
fi
#restrict the queue length to Max 8192
txPresetMax=$((txPresetMax>MAX_QUEUE_LENGTH ? MAX_QUEUE_LENGTH : txPresetMax))
if [ $rxQLen -lt $rxPresetMax ]; then
loggerHfe " $netdev: changing RX queue length from $rxQLen to $rxPresetMax"
ethtool -G $netdev rx $rxPresetMax
fi
if [ $txQLen -lt $txPresetMax ]; then
loggerHfe " $netdev: changing TX queue length from $txQLen to $txPresetMax"
ethtool -G $netdev tx $txPresetMax
fi
fi
fi
done
sleep 1
}
main()
{
case $1 in
"setup")
prepareHFEInstance
preConfigure
readConfig
getAzureVars
verifyAuthorization $ACTIVE_SBC_VM_NAME
setInterfaceMap
extractHfeConfigIps
if [[ $MODE == "single" ]]; then
# Configure Routes for Active SBC PKT0
configureSubnetRoute $ACTIVE_SBC_VM_NAME "PKT0" $ETH3_GW $ETH3
configureNATRules $ETH0 $ETH3 $ETH3_GW 1
# Configure Routes for Active SBC PKT1
configureSubnetRoute $ACTIVE_SBC_VM_NAME "PKT1" $ETH4_GW $ETH4
configureNATRules $ETH1 $ETH4 $ETH4_GW 1
else
# Configure Routes for Active SBC
configureSubnetRoute $ACTIVE_SBC_VM_NAME $SBC_PKT_PORT_NAME $ETH2_GW $ETH2
configureNATRules $ETH0 $ETH2 $ETH2_GW 1
fi
clearOldRules
configureMgmtNAT
setSecondaryIntfQueueLen
showCurrentConfig
checkSbcConnectivity
;;
"cleanup")
prepareHFEInstance
preConfigure
readConfig
getAzureVars
setInterfaceMap
extractHfeConfigIps
routeCleanUp
doneMessage
;;
*)
usage "Unrecognized switch"
;;
esac
}
[[ $# -ne 1 ]] && usage
main $1
HFE Azure Manual Setup Shell Script
#!/bin/bash
###################################
#
# Copyright (c) 2020 Ribbon Communication, Inc.
# All Rights Reserved.
#
# Script to setup running the HFE_AZ.sh as
# a systemd service. To be used in Azure
# with distros missing cloud-init functionality.
#
# This script needs to be run with heightened
# privileges.
##################################
PROG=${0##*/}
DIR=$(cd -P "$(dirname $0)" && pwd)
HFE_DIR="/opt/HFE"
HFE_LOG_DIR="$HFE_DIR/log"
HFE_FILE="$HFE_DIR/HFE_AZ.sh"
LOG_FILE="$HFE_LOG_DIR/cloud-init-nat.log"
NAT_VAR="$HFE_DIR/natVars.input"
###############################
# UPDATE VARIABLES IN THIS SECTION
AZ_BLOB_URL="<HFE_SCRIPT_LOCATION>" # URL of uploaded HFE script
ACTIVE_SBC_NAME="<ACTIVE_SBC_NAME>"
STANDBY_SBC_NAME="<STANDBY_SBC_NAME>"
REMOTE_SSH_MACHINE_IP="<REMOTE_SSH_MACHINE_IP>"
SBC_PKT_PORT_NAME="<SBC_PKT_PORT_NAME>" # Only use for HFE 2.1
#
##############################
timestamp()
{
date +"%Y-%m-%d %T"
}
log()
{
echo $(timestamp) "$1" >> $LOG_FILE
logger -t "ribbon-hfe" "$1"
}
usage()
{
echo ""
echo "$PROG usage:"
echo " Script used to setup to initialise the HFE script and also create the systemd file"
echo " To be used with distros without cloudinit support"
echo " -s : Setup systemd to run this script to initialize the HFE at startup"
echo " -r : Run the initial setup of HFE"
echo " -h : Prints this message"
echo ""
}
runHfe()
{
log " ========================= Initial configuration for HFE =========================================="
curl "$AZ_BLOB_URL" -H 'x-ms-version : 2019-02-02' -o $HFE_FILE
if [ $? -ne 0 ]; then
log "Error: Could not copy HFE script from Azure Blob Container."
else
log "Copied HFE script from Azure Blob Container."
fi
echo > $NAT_VAR
echo "ACTIVE_SBC_VM_NAME=\"$ACTIVE_SBC_NAME\"" >> $NAT_VAR
echo "STANDBY_SBC_VM_NAME=\"$STANDBY_SBC_NAME\"" >> $NAT_VAR
echo "REMOTE_SSH_MACHINE_IP=\"$REMOTE_SSH_MACHINE_IP\"" >> $NAT_VAR
if [[ $SBC_PKT_PORT_NAME != "" ]]; then
echo "SBC_PKT_PORT_NAME=\"$SBC_PKT_PORT_NAME\"" >> $NAT_VAR
fi
log "Copied natVars.input"
sudo chmod 744 $HFE_FILE
log "Configured using HFE script - $HFE_FILE"
log " ========================= Done =========================================="
$HFE_FILE setup
}
setupHfe()
{
if [[ $(command -v setenforce) != "" ]]; then
log "Setting SELinux to Permissive"
setenforce Permissive
sed -i 's/SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
fi
log "Creating a systemd file for ribbon-hfe"
systemdHfe=/usr/lib/systemd/system/ribbon-hfe.service
echo "[Unit]
Description=Ribbon Communications HFE script for use with HA SBX in Azure
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=15
StartLimitBurst=5
ConditionFileIsExecutable=$DIR/$PROG
[Service]
Type=simple
ExecStart=$DIR/$PROG -r
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target" > $systemdHfe
systemctl enable ribbon-hfe
}
if [ ! -d $HFE_LOG_DIR ]; then
mkdir -p $HFE_LOG_DIR;
fi
if [[ $AZ_BLOB_URL == \<*\> ]] || [[ $ACTIVE_SBC_NAME == \<*\> ]] || [[ $STANDBY_SBC_NAME == \<*\> ]] || [[ $REMOTE_SSH_MACHINE_IP == \<*\> ]] || [[ $SBC_PKT_PORT_NAME == \<*\> ]]; then
msg="Not all of the HFE variables are updated. Exiting!"
log "$msg"
echo $msg
exit 1
fi
while getopts srh o
do
case $o in
s) setupHfe ;;
r) runHfe ;;
h) usage ;;
*) usage && exit 1 ;;
esac
done
Make the storage account accessible to the instances by allowing access from both subnets used for ETH0 on the HFE node(s) (ensure that the subnets exist).
Syntax
az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>
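Example (the storage account name is a placeholder; the subnet and VNet names match the illustrative layout used earlier in this section):
az storage account network-rule add --account-name rbbnsbcstore --subnet hfe-eth0 --vnet-name RibbonNet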
The HFE script has variables that must be updated. When using cloud-init, update the HFE variables in the custom data.
For manual setup, update the script HFE_AZ_manual_setup.sh (the portion of the script below the comment: UPDATE VARIABLES IN THIS SECTION).
The following table contains the values that you must update:
| Value to be updated | Description | Example |
| --- | --- | --- |
| <HFE_SCRIPT_LOCATION> | The URL for HFE_AZ.sh, which is stored as a blob within a storage account container. | You can retrieve the URL by executing the following command: az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME> |
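For example, with illustrative account, container, and blob names:
az storage blob url --account-name rbbnsbcstore --container-name hfe-scripts --name HFE_AZ.sh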
The script HFE_AZ_manual_setup.sh has two functions:
It creates the systemd service "ribbon-hfe" and enables the service.
Systemd runs it to download the HFE script and write the variables out to /opt/HFE/natVars.input, similar to the role custom data plays in cloud-init. Because the script runs as a systemd service, it runs automatically if the instance reboots.
The steps required to initially configure the HFE node using the script HFE_AZ_manual_setup.sh are as follows:
Using SCP, upload the script HFE_AZ_manual_setup.sh onto the instance, in a file path that has executable permissions for root.
Run the script with heightened permissions and the '-s' flag. For example:
sudo /usr/sbin/HFE_AZ_manual_setup.sh -s
Tip
When you use the '-s' flag, systemd points at the current location of the script. If you move or remove the file, run the script again with the '-s' flag from its new location.
Start the service by executing the following command:
sudo systemctl start ribbon-hfe
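To confirm that the service is running and that the HFE script completed its initial configuration, you can check the unit status and the logs that HFE_AZ.sh writes (paths taken from the variables at the top of the script):
sudo systemctl status ribbon-hfe
tail -f /opt/HFE/log/HFE_conf.log /opt/HFE/log/HFE.log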
Create HFE Nodes
To create HFE node(s), perform the steps described below.
Create Public IPs
Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. For more information, refer to Create Public IPs (Standalone).
Create NICs
To create NICs, use the following command syntax:
az network nic create --name <NIC NAME>
--resource-group <RESOURCE-GROUP-NAME>
--vnet-name <VIRTUAL NETWORK NAME>
--subnet <SUBNET NAME>
--network-security-group <SECURITY GROUP NAME>
HFE 2.0
For HFE 2.0, create a total of five NICs.
The following table contains the extra flags necessary for each interface:
HFE 2.0 - Extra flags for each interface

| Interface | Flags |
| --- | --- |
| eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true |
| eth1 | --ip-forwarding --accelerated-networking true |
| eth2 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true (optional) |
| eth3 | --ip-forwarding --accelerated-networking true |
| eth4 | --ip-forwarding --accelerated-networking true |
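For example, a sketch of the eth0 NIC for an HFE 2.0 node, combining the base syntax with the flags above (all names are illustrative placeholders):
az network nic create --name hfe-eth0-nic --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet hfe-eth0 --network-security-group RbbnSbcSG --public-ip-address hfe-pip --ip-forwarding --accelerated-networking true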
HFE 2.1
For HFE 2.1, create a total of six NICs (three for each HFE node).
The following table contains the extra flags necessary for each interface:
HFE 2.1 - Extra flags for each interface

| HFE | Interface | Flags |
| --- | --- | --- |
| PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true |
| PKT0 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true |
| PKT0 HFE | eth2 | --ip-forwarding --accelerated-networking true |
| PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true |
| PKT1 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true |
| PKT1 HFE | eth2 | --ip-forwarding --accelerated-networking true |
Create the VM
To create the VM(s), use the following command syntax:
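The following is a minimal sketch of the invocation, assuming the standard az vm create flags that the table below describes (the image, NIC names, and identity ID are deployment-specific):
az vm create --name <VM NAME>
--resource-group <RESOURCE-GROUP-NAME>
--image <IMAGE URN/ID>
--size <INSTANCE SIZE>
--ssh-key-values <PUBLIC KEY FILE>
--nics <NIC NAMES>
--assign-identity <IDENTITY ID>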
The following table describes the key flags:

| Flag | Value | Description |
| --- | --- | --- |
| size | Instance size name | Indicates instance size. In AWS this is known as "Instance Type", and OpenStack calls this "flavor". For more information on instance sizes, refer to the Microsoft Azure Documentation. Note: Maintain the same instance type for the HFE and the SBC. For HFE 2.0, the HFE node requires five NICs; for HFE 2.1, each HFE node requires a minimum of three NICs. |
| ssh-key-values | File name (for example, azureSshKey.pub) | A file that contains the public SSH key for accessing the linuxadmin user. |

The VM also requires the ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing the following command:
az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME>
HFE Routing
The HFE setup requires routes in Azure to force all traffic leaving PKT0 and PKT1 to route back through the HFE.
Note
Routes in Azure apply throughout the subnet. Therefore, if multiple resources use an endpoint, separate the endpoints into different subnets; otherwise, their traffic also routes through the HFE node.
To create the routes, perform the following steps:
Create the route-table:
Syntax
az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG
Create two rules for PKT0 and PKT1:
Syntax
az network route-table route create --name <NAME>
--resource-group <RESOURCE_GROUP_NAME>
--address-prefix <CIDR OF ENDPOINT>
--next-hop-type VirtualAppliance
--route-table-name <ROUTE TABLE NAME>
--next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE>
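Example (the address prefix is the CIDR of a remote endpoint and the next-hop IP is the private IP of ETH3/ETH4 on the HFE node; both values are illustrative):
az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 203.0.113.0/24 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.0.3.10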
To create the SBC HA with HFE setup, first perform all of the steps described in Create SBC (Standalone).
In addition to those steps, perform the steps described below.
Configure NICs
The SBC requires four NICs, each attached to an individual subnet for MGMT, HA, PKT0, and PKT1.
To create a standard NIC, use the following syntax:
az network nic create --name <NIC NAME>
--resource-group <RESOURCE GROUP NAME>
--vnet-name <VIRTUAL NETWORK NAME>
--subnet <SUBNET NAME>
--network-security-group <SECURITY GROUP NAME>
--accelerated-networking <true/false>
See below for additional steps required when SBCs are in an HFE setup.
Secondary IPs
The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.
Note
Before creating the Secondary IP configuration, create the NICs for the SBCs.
You cannot set the IP config name as "ipconfig1", because it is reserved for the primary IP configuration on a NIC.
Create and attach Secondary IPs to a network interface by executing the following command:
Syntax
az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG
Create NIC for PKT0 and PKT1
When creating the NICs for both SBCs' PKT0 and PKT1 ports, include the flag --ip-forwarding so the ports can receive the traffic sent to the HFE node. For example:
az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group RbbnSbcSG --ip-forwarding
Note
Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs.
SBC Userdata
The SBCs in the HFE environment require the following user data:
SBC HFE - User Data

| Key | Allowed Values | Description |
| --- | --- | --- |
| CEName | N/A | Specifies the actual CE name of the SBC instance. CEName requirements: must start with an alphabetic character; may contain only alphabetic characters and/or numbers (no special characters); cannot exceed 64 characters in length. |
| ReverseNatPkt0 | True/False | Requires True for standalone SBC. |
| ReverseNatPkt1 | True/False | Requires True for standalone SBC. |
| SystemName | N/A | Specifies the System Name of the SBC instances. SystemName requirements: must start with an alphabetic character; may contain only alphabetic characters and/or numbers (no special characters); cannot exceed 26 characters in length; must be the same on both peer CEs. |
| SbcPersonalityType | isbc | The name of the SBC personality type for this instance. Currently, Ribbon supports only Integrated SBC (I-SBC). |
| AdminSshKey | ssh-rsa ... | Public SSH key to access the admin user; must be in the form ssh-rsa ... |
| ThirdPartyCpuAlloc | 0-4 | (Optional) Number of CPUs segregated for use with non-Ribbon applications. Restrictions: 0-4 CPUs; both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured; the configuration must match between peer instances. |
| ThirdPartyMemAlloc | 0-4096 | (Optional) Amount of memory (in MB) segregated for use with non-Ribbon applications. Restrictions: 0-4096 MB; both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured; the configuration must match between peer instances. |
| CERole | ACTIVE/STANDBY | Specifies the CE's role within the HA setup. |
| PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | This value must be the Private IP Address of the Peer SBC's HA interface. |
| ClusterIp | xxx.xxx.xxx.xxx | This value must also be the Private IP Address of the Peer SBC's HA interface. |
| PeerCEName | N/A | Specifies the actual CE name of the Peer SBC instance in the HA setup. |
| SbcHaMode | 1to1 | Specifies the mode of the HA configuration. Currently, Azure supports only 1:1 HA. |
| PeerInstanceName | N/A | Specifies the name of the Peer Instance in the HA setup. Note: this is not the CEName or the SystemName. |
| HfeInstanceName | N/A | Specifies the instance name of the single HFE instance (HFE 2.0 environments). |
| Pkt0HfeInstanceName | N/A | Specifies the instance name of the PKT0 HFE node (HFE 2.1 environments). |
| Pkt1HfeInstanceName | N/A | Specifies the instance name of the PKT1 HFE node (HFE 2.1 environments). |
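As an illustration, a minimal user data sketch for the active SBC of an HFE 2.0 pair might look like the following (all names, addresses, and the key are placeholders; the exact set of required fields and the user data format depend on your deployment):
{
"CEName" : "sbc1",
"SystemName" : "azsbc",
"SbcPersonalityType" : "isbc",
"ReverseNatPkt0" : "True",
"ReverseNatPkt1" : "True",
"CERole" : "ACTIVE",
"PeerCEName" : "sbc2",
"PeerInstanceName" : "rbbn-sbc-2",
"PeerCEHa0IPv4Address" : "10.0.2.5",
"ClusterIp" : "10.0.2.5",
"SbcHaMode" : "1to1",
"AdminSshKey" : "ssh-rsa AAAA...",
"HfeInstanceName" : "rbbn-hfe-1"
}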