This section describes the steps to perform in addition to the steps described in Instantiate Standalone SBC on Azure for creating an HFE/SBC on Azure. All commands used in this section are part of the Azure CLI.
HFE nodes allow sub-second switchover between SBCs of an HA pair, as they negate the need for any IP reassignment.
For each SBC HA pair, use unique subnets for pkt0 and pkt1.
The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. However, this is not an issue because the HFE script ensures the entire configuration is set up correctly based on the Azure NICs, not the local interface names.
Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Management interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.
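As a rough illustration of the six-subnet layout, the following bash sketch prints one `az network vnet subnet create` command per required subnet. The subnet names, CIDRs, and the dry-run approach are illustrative assumptions, not values mandated by this guide:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print one "az network vnet subnet create" command per
# required subnet (mgmt, ha, pkt0, pkt1, plus the two HFE eth0 subnets).
# Subnet names and CIDRs below are illustrative placeholders.
RG="RBBN-SBC-RG"
VNET="RibbonNet"
i=0
for SUBNET in mgmt ha pkt0 pkt1 pkt0-hfe pkt1-hfe; do
  CIDR="10.2.${i}.0/24"
  echo "az network vnet subnet create --name ${SUBNET} --address-prefixes ${CIDR} --resource-group ${RG} --vnet-name ${VNET}"
  i=$((i + 1))
done
```

Reviewing the printed commands before running them is an easy way to confirm all six interfaces land in distinct subnets of the same virtual network.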
In HFE 2.1, there are two HFE nodes - one to handle untrusted public traffic to the SBC (for PKT0) and the other to handle trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".
Both HFE nodes require three interfaces, as described below:
HFE 2.1 - Interface Requirement
Standard/Ubuntu Interface Name | NIC | PKT0 HFE Node Function | PKT1 HFE Node Function | Requires External IP? |
---|---|---|---|---|
eth0 | nic0 | Public Interface for SBC PKT0 | Private interface in for SBC PKT1 (can only be connected to/from instances in same subnet). | Yes (only on PKT0 HFE node) |
eth1 | nic1 | Management interface to HFE | Management interface to HFE. | Optional |
eth2 | nic2 | Interface to SBC PKT0 | Interface to SBC PKT1. | No |
To create the SBC HA with HFE, perform the following steps:
To create the HFE setup, use the HFE Azure shell script named HFE_AZ.sh, included in cloudTemplates.tar.gz. Upload this file to a storage account so that the HFE nodes can download it. You can retrieve the files from the Ribbon Support portal. See Configure the Storage Account for HFE.
Two further subnets must be created for the HFE. These subnets are used for eth0 of the PKT0 HFE node and eth0 of the PKT1 HFE node. To create the subnets, use the following command.
--service-endpoints is required to allow the HFE to download the HFE script from storage.
Syntax
az network vnet subnet create --name <NAME> --address-prefixes <CIDR> --resource-group <RESOURCE-GROUP-NAME> --vnet-name <VNET_NAME> --network-security-group <SECURITY GROUP NAME> --service-endpoints Microsoft.Storage
Examples
az network vnet subnet create --name pkt0-hfe --address-prefixes 10.2.4.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group pkt0RbbnSbcSG --service-endpoints Microsoft.Storage
az network vnet subnet create --name pkt1-hfe --address-prefixes 10.2.5.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group pkt1RbbnSbcSG --service-endpoints Microsoft.Storage
The script HFE_AZ.sh is stored in a container within a storage account. This allows the HFE nodes to download and run the script during VM startup. It is recommended to use "storageV2" as the type for the storage account.
To configure the storage account, perform the following steps:
Create a storage account by executing the following command:
Syntax
az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2
Example
az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2
Create a container by executing the following command:
Syntax
az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key
Example
az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key
Upload the script HFE_AZ.sh to the container by executing the following command:
Syntax
az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>
Example
az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage
Make the storage account accessible to the instances by allowing access from the subnets used for eth0 and eth1 of the HFE nodes (eth1 covers the case where the management interface is used). Ensure that the subnets exist before running the command.
Syntax
az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --resource-group <RESOURCE_GROUP_NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>
Example
az storage account network-rule add --account-name rbbnhfestorage --resource-group RBBN-SBC-RG --subnet hfepublic --vnet-name RibbonNet
You can perform the initial configuration of the HFE nodes using custom-data and cloud-init.
The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.
To create the custom data for the HFE node, update the following example script using the table below. Save this to a file to use during the HFE VM creation.
The following table contains the values that you must update:
Value to be updated | Description | Example |
---|---|---|
<HFE_SCRIPT_LOCATION> | The URL for HFE_AZ.sh that is contained in a container within a storage account. You can retrieve the URL by executing the following command: | https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh |
<ACTIVE_SBC_NAME> | The instance name for the Active SBC | rbbnSbc-1 |
<STANDBY_SBC_NAME> | The instance name for the Standby SBC | rbbnSbc-2 |
<REMOTE_SSH_MACHINE_IP> | The SSH IP or IPs allowed access through the mgmt port. Supply multiple IPs as a comma-separated list. | 43.26.27.29,35.13.71.112 |
<SBC_PKT_PORT_NAME> | This tells the HFE which PKT port it is communicating with. Can only be set to PKT0 or PKT1. | PKT0 |
<CUSTOM_ROUTES> | Enables the HFE script to add these routes as part of its start-up process and to verify that the routes remain on the HFE throughout its uptime. | 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3 |
<ENABLE_PKT_DNS_QUERY> | This flag enables (1) or disables (0) support for the HFE to correctly forward DNS queries sent on the SBC PKT port. | 0 |
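The exact custom-data format is defined by the Ribbon-supplied template; the following is only a hypothetical sketch showing how the placeholders from the table above might be substituted into a bash-style custom-data script. The variable names mirror the table, the /opt/HFE path is taken from later sections of this guide, and the download-and-run step is an assumption about how the script is bootstrapped:

```shell
#!/bin/bash
# Hypothetical custom-data sketch -- consult the Ribbon-supplied template for
# the authoritative format. Values below are the examples from the table above.
HFE_SCRIPT_LOCATION="https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh"
ACTIVE_SBC_NAME="rbbnSbc-1"
STANDBY_SBC_NAME="rbbnSbc-2"
REMOTE_SSH_MACHINE_IP="43.26.27.29,35.13.71.112"
SBC_PKT_PORT_NAME="PKT0"      # PKT0 on the PKT0 HFE node, PKT1 on the other
ENABLE_PKT_DNS_QUERY="0"

# Illustrative bootstrap: fetch the HFE script from the storage account
# and make it executable so it can run at startup.
mkdir -p /opt/HFE
curl -s -o /opt/HFE/HFE_AZ.sh "$HFE_SCRIPT_LOCATION"
chmod +x /opt/HFE/HFE_AZ.sh
```

Save your edited version to a file (for example hfeUserData.txt) and pass it via --custom-data at VM creation, as shown later in this section.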
Ubuntu LTS images are the supported images for use with HFE setups.
To create HFE nodes, perform the steps described below.
Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes.
Create the Public IPs by running the following command.
Syntax
az network public-ip create --name <PUBLIC IP NAME> --resource-group <RESOURCE-GROUP-NAME> --allocation-method Static
Example
az network public-ip create --name pkt0-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static
az network public-ip create --name hfe-pkt0-ip --resource-group RBBN-SBC-RG --allocation-method Static
az network public-ip create --name pkt1-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static
To create NICs, use the following command.
Syntax
az network nic create --name <NIC NAME> --resource-group <RESOURCE-GROUP-NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME>
Example
Repeat the following command for each NIC.
az network nic create --name hfe-pkt0-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0-hfe --network-security-group pkt0RbbnSbcSG --public-ip-address hfe-pkt0-ip
az network nic create --name hfe-pkt0-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt0-mgmt-ip
az network nic create --name hfe-pkt0-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG
az network nic create --name hfe-pkt1-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1-hfe --network-security-group pkt1RbbnSbcSG
az network nic create --name hfe-pkt1-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt1-mgmt-ip
az network nic create --name hfe-pkt1-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG
For HFE 2.1, create a total of six NICs (three for each HFE node).
The following table contains the extra flags necessary for each interface:
Table 2 HFE 2.1 - Extra flags for each interface
HFE | Interface | Flags |
---|---|---|
PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true |
eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true | |
eth2 | --ip-forwarding --accelerated-networking true | |
PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true |
eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true | |
eth2 | --ip-forwarding --accelerated-networking true |
Create a VM for each HFE instance. Use the following command syntax:
az vm create --name <INSTANCE NAME> --resource-group <RESOURCE_GROUP_NAME> --admin-username <UserName> --custom-data <USERDATA FILE> --image <IMAGE NAME> --location <LOCATION> --size <INSTANCE SIZE> --ssh-key-values <PUBLIC SSH KEY FILENAME> --nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC> --boot-diagnostics-storage <STORAGE ACCOUNT NAME> --assign-identity <USER ASSIGNED MANAGED IDENTITY ID>
The following table describes each flag:
Table 4 VM Creation - Flag Description
Flag | Accepted Values | Example | Description |
---|---|---|---|
name | Instance name | rbbnSbc | Name of the instance; must be unique in the resource group. |
resource-group | Resource group name | RBBN-SBC-RG | Name of the Resource Group. |
admin-username | Username | rbbn | The default user to log on. |
custom-data | File name | hfeUserData.txt | A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example. |
image | Image URN | Canonical:UbuntuServer:18.04-LTS:latest | The name of an image. For more information, see Supported Images. |
location | Azure region | East US | The location to host the VM in. For more information, refer to Microsoft Azure Documentation. |
size | Instance size | Standard_D8s_v3 | Indicates the instance size. In AWS this is known as 'Instance Type', and OpenStack calls this 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation. |
ssh-key-values | File Name | azureSshKey.pub | A file that contains the public SSH key for accessing the instance. Note: The public key must be in openSSH form: ssh-rsa ... |
nics | Space-separated list | hfe-pub hfe-mgmt-pkt0 hfe-pkt0 | The names of the NICs created in previous steps. |
boot-diagnostics-storage | Storage Account Name. | sbcdiagstore | The storage account created in previous steps. This allows the use of the serial console. |
assign-identity | User Assigned Managed Identity ID | /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami | The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing the following command: |
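Putting the table's example values together, an HFE VM creation command might look like the following. The instance name hfe-pkt0 and the custom-data filename are hypothetical; substitute the NIC names, identity ID, and key file for your own resources:

```shell
# Illustrative sketch assembling the flags from the table above.
# "hfe-pkt0" and "hfeUserData.txt" are placeholder names, not mandated values.
az vm create --name hfe-pkt0 \
  --resource-group RBBN-SBC-RG \
  --admin-username rbbn \
  --custom-data hfeUserData.txt \
  --image Canonical:UbuntuServer:18.04-LTS:latest \
  --location "East US" \
  --size Standard_D8s_v3 \
  --ssh-key-values azureSshKey.pub \
  --nics hfe-pkt0-nic0 hfe-pkt0-nic1 hfe-pkt0-nic2 \
  --boot-diagnostics-storage sbcdiagstore \
  --assign-identity /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami
```

Repeat the command with the PKT1 HFE node's NICs to create the second HFE VM.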
The HFE setup requires routes in Azure to force all the traffic leaving PKT0 and PKT1 to route back through the HFE.
Consider the following when creating routes in Azure:
To create the routes, perform the following steps:
Create the route-table:
Syntax
az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG
Create two rules for PKT0 and PKT1:
Syntax
az network route-table route create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --address-prefix <CIDR OF ENDPOINT> --next-hop-type VirtualAppliance --route-table-name <ROUTE TABLE NAME> --next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE>
Example
az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5
Attach the route table to the PKT0/PKT1 subnets:
Syntax
az network vnet subnet update --name <SUBNET NAME> --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VIRTUAL NETWORK NAME> --route-table <ROUTE TABLE NAME>
Example
az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table hfe-route-table
To create the SBCs for HA with HFE setup, follow the instructions as described in Instantiate Standalone SBC on Azure, with the addition of the steps below.
The SBC requires four NICs, each one attached to an individual subnet for MGMT, HA, PKT0, and PKT1.
To create a standard NIC, use the following syntax:
az network nic create --name <NIC NAME> --resource-group <RESOURCE GROUP NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME> --accelerated-networking true
When creating the NICs for both SBCs' PKT0 and PKT1 ports, include the flag --ip-forwarding so the ports can receive the traffic sent to the HFE node.
Example
az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
az network nic create --name sbc1-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding
az network nic create --name sbc2-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
az network nic create --name sbc2-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding
Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs.
The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.
Note: Do not name a secondary IP configuration "ipconfig1", because that name is reserved for the primary IP configuration on a NIC.
Create and attach Secondary IPs to a network interface by executing the following command:
Syntax
az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc1-pkt1-secIp --nic-name sbc1-pkt1 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc2-pkt0-secIp --nic-name sbc2-pkt0 --resource-group RBBN-SBC-RG
az network nic ip-config create --name sbc2-pkt1-secIp --nic-name sbc2-pkt1 --resource-group RBBN-SBC-RG
The SBCs in the HFE environment require the following user data:
Table 5 SBC HFE - User Data
Key | Allowed Values | Description |
---|---|---|
CEName | N/A | Specifies the actual CE name of the SBC instance. CEName Requirements: |
ReverseNatPkt0 | True/False | Required to be True for SBC HA setup |
ReverseNatPkt1 | True/False | Required to be True for SBC HA setup |
SystemName | N/A | Specifies the System Name of the SBC instances. SystemName Requirements: |
SbcPersonalityType | isbc | The name of the SBC personality type for this instance. Currently, Ribbon supports only Integrated SBC (I-SBC). |
AdminSshKey | ssh-rsa ... | Public SSH Key to access the admin user; must be in the form ssh-rsa ... |
ThirdPartyCpuAlloc | 0-4 | (Optional) Number of CPUs segregated for use with non-Ribbon applications. Restrictions: |
ThirdPartyMemAlloc | 0-4096 | (Optional) Amount of memory (in MB) that is segregated for use with non-Ribbon applications. Restrictions: |
CERole | ACTIVE/STANDBY | Specifies the CE's role within the HA setup. |
PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | This value must be the Private IP Address of the Peer SBC's HA interface. |
ClusterIp | xxx.xxx.xxx.xxx | This value must also be the Private IP Address of the Peer SBC's HA interface. |
PeerCEName | N/A | Specifies the actual CE name of the Peer SBC instance in the HA setup. |
SbcHaMode | 1to1 | Specifies the Mode of the HA configuration. Currently, Azure supports only 1:1 HA. |
PeerInstanceName | N/A | Specifies the name of the Peer Instance in the HA setup. Note: This is not the CEName or the SystemName. |
Pkt0HfeInstanceName | N/A | Specifies the instance name of the PKT0 HFE Node. |
Pkt1HfeInstanceName | N/A | Specifies the instance name of the PKT1 HFE Node. |
Create a JSON file using the following structure:
{
  "CEName" : "<SBC CE NAME>",
  "ReverseNatPkt0" : "True",
  "ReverseNatPkt1" : "True",
  "SystemName" : "<SYSTEM NAME>",
  "SbcPersonalityType" : "isbc",
  "AdminSshKey" : "<ssh-rsa ...>",
  "ThirdPartyCpuAlloc" : "<0-4>",
  "ThirdPartyMemAlloc" : "<0-4096>",
  "CERole" : "<ACTIVE/STANDBY>",
  "PeerCEHa0IPv4Address" : "<PEER HA IP ADDRESS>",
  "ClusterIp" : "<PEER HA IP ADDRESS>",
  "PeerCEName" : "<PEER SBC CE NAME>",
  "SbcHaMode" : "1to1",
  "PeerInstanceName" : "<PEER INSTANCE NAME>",
  "Pkt0HfeInstanceName" : "<PKT0 HFE NODE INSTANCE NAME>",
  "Pkt1HfeInstanceName" : "<PKT1 HFE NODE INSTANCE NAME>"
}
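As a quick sanity check before passing the file to the instance, you can write it with a heredoc and confirm that it parses as valid JSON. The values below are hypothetical placeholders (optional ThirdParty keys omitted), and python3 is assumed to be available:

```shell
#!/usr/bin/env bash
set -e
# Write an SBC user-data file with placeholder values, then verify it is
# well-formed JSON before using it at instance creation.
cat > sbcUserData.json <<'EOF'
{
  "CEName"               : "sbc1",
  "ReverseNatPkt0"       : "True",
  "ReverseNatPkt1"       : "True",
  "SystemName"           : "rbbnSbc",
  "SbcPersonalityType"   : "isbc",
  "AdminSshKey"          : "ssh-rsa AAAA...",
  "CERole"               : "ACTIVE",
  "PeerCEHa0IPv4Address" : "10.2.2.13",
  "ClusterIp"            : "10.2.2.13",
  "PeerCEName"           : "sbc2",
  "SbcHaMode"            : "1to1",
  "PeerInstanceName"     : "rbbnSbc-2",
  "Pkt0HfeInstanceName"  : "hfe-pkt0",
  "Pkt1HfeInstanceName"  : "hfe-pkt1"
}
EOF
python3 -m json.tool sbcUserData.json > /dev/null && echo "valid JSON"
```

A stray comma or unquoted value is a common cause of user-data failures, and this check catches it before the VM is created.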
Configure the PKT ports using the SBC CLI.
Note: Add this configuration after the instance has been created.
Example
admin@sbc-10.2.2.12> conf
Entering configuration mode private
[ok][2019-10-04 09:04:15]

[edit]
admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG1 ipInterface LIF1 portName pkt0 ipPublicVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 mode inService state enabled
[ok][2019-10-04 09:04:46]

[edit]
admin@sbc-10.2.2.12% commit
Commit complete.
[ok][2019-10-04 09:04:50]

[edit]
admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG2 ipInterface LIF2 portName pkt1 ipPublicVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 mode inService state enabled
[ok][2019-10-04 09:04:58]

[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:00]

[edit]
admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT0 SUBNET GATEWAY> LIG1 LIF1 preference 100
[ok][2019-10-04 09:05:11]

[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:15]

[edit]
admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT1 SUBNET GATEWAY> LIG2 LIF2 preference 100
[ok][2019-10-04 09:05:22]

[edit]
admin@sbc-10.2.2.12% com
Commit complete.
[ok][2019-10-04 09:05:24]

[edit]
admin@sbc-10.2.2.12%
The correct SBC CLI configuration will look similar to the following:
admin@sbc-10.2.2.12> show table addressContext default staticRoute
DESTINATION                    IP INTERFACE  IP INTERFACE              CE
IP ADDRESS   PREFIX  NEXT HOP  GROUP NAME    NAME          PREFERENCE  NAME
-----------------------------------------------------------------------
0.0.0.0      0       10.2.3.1  LIG1          LIF1          100         -
0.0.0.0      0       10.2.4.1  LIG2          LIF2          100         -
[ok][2019-10-04 09:16:47]
admin@sbc-10.2.2.12>

admin@sbc-10.2.2.12> show table addressContext default ipInterfaceGroup
NAME  IPSEC     NAME  CE NAME  PORT NAME  IP ADDRESS  IP PREFIX  ALT IP ADDRESS  ALT IP PREFIX  MODE       DRYUP ACTION  DRYUP TIMEOUT  STATE    BW CONTINGENCY  VLAN TAG  BANDWIDTH  IP VAR V4  PREFIX VAR V4  IP PUBLIC VAR V4  IP VAR V6  PREFIX VAR V6  IP PUBLIC VAR V6
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIG1  disabled  LIF1  -        pkt0       -           -          -               -              inService  dryUp         60             enabled  0               -         0          IF2.IPV4   IF2.PrefixV4   -                 -          -              -
LIG2  disabled  LIF2  -        pkt1       -           -          -               -              inService  dryUp         60             enabled  0               -         0          IF3.IPV4   IF3.PrefixV4   -                 -          -              -
[ok][2019-10-04 09:18:35]
An example Meta Variable table for an SBC HA is provided below:
To add a new endpoint on the Public Endpoint side with HFE1 (for example, 52.52.52.52 is the new endpoint IP):
Add the endpoint IP to the outbound security group.
Add the IP (for example, 10.2.3.9) to the PKT1 subnet custom route table.
Select the Next hop type of Virtual Appliance, and set the Next hop address to the HFE eth2 IP.
For specialized deployments, users may need to add specific custom static routes to the HFE at the OS level. The HFE script supports this through the HFE variable CUSTOM_ROUTES, which makes the script add these routes as part of its start-up process and verify that they remain on the HFE throughout its uptime.
CUSTOM_ROUTES is a comma separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.
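A small bash check of this format can catch malformed entries before you reboot the HFE. This validator is a sketch for local use, not part of the Ribbon tooling, and it assumes interface names of the standard ethN form:

```shell
#!/usr/bin/env bash
# Sketch: validate a CUSTOM_ROUTES value of the form
# <DESTINATION_IP_CIDR>_<INTERFACE_NAME>, comma separated.
validate_custom_routes() {
  local routes="$1" entry
  IFS=',' read -ra entries <<< "$routes"
  for entry in "${entries[@]}"; do
    entry="${entry// /}"   # tolerate spaces after the commas
    if ! [[ "$entry" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}_eth[0-9]+$ ]]; then
      echo "invalid entry: $entry" >&2
      return 1
    fi
  done
}

validate_custom_routes "1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3" && echo "routes OK"
```

Running the check against your intended value before writing it to /opt/HFE/natVars.user avoids a reboot cycle spent debugging a typo.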
If the HFE is already deployed, add the variable to /opt/HFE/natVars.user.
Example
echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" | sudo tee -a /opt/HFE/natVars.user
For <INTERFACE_NAME>, always use the standard names eth0, eth1, and so on, even if the Linux distribution does not use this naming convention. The HFE_AZ.sh script determines the correct interface on which to add the route.
Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:
Add the updated variable to /opt/HFE/natVars.user. For example:
echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE:
sudo reboot
Any variable added to /opt/HFE/natVars.user overwrites the value set for that variable in the custom data. To add a new Remote SSH Machine IP, supply the full list of IPs for which you wish to create routes.
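The last-value-wins behavior can be demonstrated locally: when a natVars-style file is sourced, a later assignment replaces an earlier one, which is why each new REMOTE_SSH_MACHINE_IP entry must carry the complete IP list. The temporary file below is purely for demonstration and does not touch the real /opt/HFE/natVars.user:

```shell
#!/usr/bin/env bash
set -e
# Demonstrate that the last assignment in a natVars-style file wins.
NATVARS="$(mktemp)"
echo 'REMOTE_SSH_MACHINE_IP="43.26.27.29"' >> "$NATVARS"
# Appending a new value without the old IP replaces the whole list:
echo 'REMOTE_SSH_MACHINE_IP="10.27.0.54,10.36.9.6"' >> "$NATVARS"
source "$NATVARS"
echo "$REMOTE_SSH_MACHINE_IP"   # the earlier 43.26.27.29 entry is gone
rm -f "$NATVARS"
```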
DNS queries on the SBC PKT port are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY enables support for the HFE to forward these requests correctly.
To enable the PKT DNS Support option on an already configured HFE setup:
Add the natVar ENABLE_PKT_DNS_QUERY to /opt/HFE/natVars.user with the value 1:
echo "ENABLE_PKT_DNS_QUERY=1" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE.
sudo reboot