DO NOT SHARE THESE DOCS WITH CUSTOMERS!
This is an LA release that will only be provided to a select number of PLM-sanctioned customers (PDFs only). Contact PLM for details.
This section describes the extra steps (in addition to the Standalone SBC) necessary for creating an HFE/SBC setup on Azure. All commands used in this section are part of the Azure CLI.
HFE nodes allow sub-second switchover between SBCs of an HA pair in the Microsoft Azure environment, as they negate the need for any IP reassignment.
For each SBC HA pair, use a unique subnet for pkt0 and pkt1.
The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. However, this is not an issue, because the HFE script ensures the entire configuration is set up correctly based on the Azure NICs, not the local interface names.
Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Mgmt interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.
Configure the HFE nodes in one of two ways: using cloud-init custom data, or manually with the HFE_AZ_manual_setup.sh script. For more information, see HFE Node Initial Configuration.

In HFE 2.1, there are two HFE nodes: one handles untrusted public traffic to the SBC (for PKT0), and the other handles trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".
Both HFE nodes require 3 interfaces, as described below:
To use an HFE 2.1 environment, the startup script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, see the table in SBCs' Userdata.
To create the SBC HA with HFE, perform the following steps:
To create the HFE setup, use the HFE Azure shell script and the HFE Azure manual setup shell script, named HFE_AZ.sh and HFE_AZ_manual_setup.sh and included in cloudTemplates.tar.gz.
HFE Azure User Data
The script HFE_AZ.sh is stored in a container within a storage account. This allows the HFE nodes to download and run the script during VM startup.
To configure the storage account, perform the following steps:
Create a storage account by executing the following command:
Syntax
az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2
Example
az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2
Create a container by executing the following command:
Syntax
az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key
Example
az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key
Upload the script HFE_AZ.sh to the container by executing the following command:
Syntax
az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>
Example
az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage
Make the storage account accessible to the instances by allowing access from both subnets used for ETH0 on the HFE nodes (ensure that the subnets exist).
Syntax
az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>
Example
az storage account network-rule add --account-name rbbnhfestorage --subnet hfepublic --vnet-name RibbonNet
You can perform the initial configuration of the HFE node(s) in two ways: using cloud-init custom data, or manually with the script HFE_AZ_manual_setup.sh. The list of cloud-init enabled Linux VMs is available in the Microsoft Azure documentation.
The HFE has variables that must be updated. When using cloud-init, update the HFE variables in the custom data.
For manual setup, update the script HFE_AZ_manual_setup.sh (the portion of the script below the comment: UPDATE VARIABLES IN THIS SECTION).
The following table contains the values that you must update:
Value to be updated | Description | Example |
---|---|---|
<HFE_SCRIPT_LOCATION> | The URL for HFE_AZ.sh, which is stored in a container within a storage account. You can retrieve the URL by executing the following command: | https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh |
<ACTIVE_SBC_NAME> | The instance name of the Active SBC. | rbbnSbc-1 |
<STANDBY_SBC_NAME> | The instance name of the Standby SBC. | rbbnSbc-2 |
<REMOTE_SSH_MACHINE_IP> | The SSH IP(s) to allow access through the mgmt port. | 43.26.27.29,35.13.71.112 |
<SBC_PKT_PORT_NAME> | Tells the HFE which SBC PKT port it is communicating with. Can only be set to PKT0 or PKT1. Note: This is only for HFE 2.1. | PKT0 |
Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:
Enter the updated variable to /opt/HFE/natVars.user
. For example:
echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE:
sudo reboot
Any variable added to /opt/HFE/natVars.user overwrites the corresponding variable set in custom data. When adding a new Remote SSH Machine IP, ensure you supply the full list of IPs for which routes must be created.
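The override behavior can be illustrated with a short sketch: if the variables from custom data are read first and natVars.user is read afterwards, assignments in the later file win. The file contents below are illustrative, and the sourcing order is an assumption about how the HFE script applies the two sources:

```shell
# Illustrative only: mimic the HFE reading custom-data variables first,
# then /opt/HFE/natVars.user. Later assignments override earlier ones.
input=$(mktemp)   # stands in for the custom-data variables (natVars.input)
user=$(mktemp)    # stands in for /opt/HFE/natVars.user

echo 'REMOTE_SSH_MACHINE_IP="43.26.27.29"' > "$input"
echo 'REMOTE_SSH_MACHINE_IP="10.27.0.54,10.36.9.6"' > "$user"

. "$input"
. "$user"

# The full list from natVars.user is now in effect:
echo "$REMOTE_SSH_MACHINE_IP"   # prints 10.27.0.54,10.36.9.6
```

This is also why the full IP list must be supplied in natVars.user: the later value replaces, rather than extends, the earlier one.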
Supported Images
The following images are generally supported for use as the HFE:
An example of the custom data for an HFE node is given below:
The script HFE_AZ_manual_setup.sh has two functions:

It creates a systemd service "ribbon-hfe" and enables the service.
When systemd runs it, the script downloads HFE_AZ.sh and writes the variables out to /opt/HFE/natVars.input, similar to the role of custom data in cloud-init. Because the script runs as a systemd service, it runs automatically if the instance reboots.

The steps required to initially configure the HFE node using the script HFE_AZ_manual_setup.sh are as follows:
Copy HFE_AZ_manual_setup.sh onto the instance, in a file path that has executable permissions for root.

Run the script with heightened permissions and the '-s' flag. For example:

sudo /usr/sbin/HFE_AZ_manual_setup.sh -s
When you use the '-s' flag, systemd points at the location of the script. If you remove the file, run the script again with the '-s' flag.
Start the service by executing the following command:
sudo systemctl start ribbon-hfe
To create HFE node(s), perform the steps described below.
Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. For more information, refer to Create Public IPs (Standalone).
To create NICs, use the following command syntax:
az network nic create --name <NIC NAME> --resource-group <RESOURCE-GROUP-NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME>
For HFE 2.1, create a total of six NICs (three for each HFE node).
The following table contains the extra flags necessary for each interface:
To create the VM(s), use the following command syntax:
az vm create --name <INSTANCE NAME> --resource-group <RESOURCE_GROUP_NAME> --admin-username <UserName> --custom-data <USERDATA FILE> --image <IMAGE NAME> --location <LOCATION> --size <INSTANCE SIZE> --ssh-key-values <PUBLIC SSH KEY FILENAME> --nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC> --boot-diagnostics-storage <STORAGE ACCOUNT NAME> --assign-identity <USER ASSIGNED MANAGED IDENTITY ID>
The following table describes each flag:
The HFE setup requires routes in Azure to force all traffic leaving PKT0 and PKT1 to route back through the HFE.
Consider the following when creating routes in Azure:
To create the routes, perform the following steps:
Create the route-table:
Syntax
az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG
Create two rules for PKT0 and PKT1:
Syntax
az network route-table route create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --address-prefix <CIDR OF ENDPOINT> --next-hop-type VirtualAppliance --route-table-name <ROUTE TABLE NAME> --next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE>
Example
az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5
Attach the route table to the PKT0/PKT1 subnets:
Syntax
az network vnet subnet update --name <SUBNET NAME> --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VIRTUAL NETWORK NAME> --route-table <ROUTE TABLE NAME>
Example
az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table hfe-route-table
To create the SBC HA with HFE setup, first perform all of the steps described in Create SBC (Standalone).
In addition to those steps, perform the steps described below.
The SBC requires four NICs, each attached to an individual subnet for MGMT, HA, PKT0, and PKT1.
To create a standard NIC, use the following syntax:
az network nic create --name <NIC NAME> --resource-group <RESOURCE GROUP NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME> --accelerated-networking <true/false>
See below for the additional steps required when SBCs are in an HFE setup.
The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.

Do not name a secondary IP configuration "ipconfig1", because that name is reserved for the primary IP configuration on a NIC.

Create and attach Secondary IPs to a network interface by executing the following command:
Syntax
az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME>
Example
az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG
When creating the NICs for the SBCs' PKT0 and PKT1 ports, include the flag --ip-forwarding so the ports can receive the traffic sent from the HFE node. For example:
az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group RbbnSbcSG --ip-forwarding
The SBCs in the HFE environment require the following user data:
Create a JSON file using the following structure:
{ "CEName" : "<SBC CE NAME>", "ReverseNatPkt0" : "True", "ReverseNatPkt1" : "True", "SystemName" : "<SYSTEM NAME>", "SbcPersonalityType": "isbc", "AdminSshKey" : "<ssh-rsa ...>", "ThirdPartyCpuAlloc" : "<0-4>", "ThirdPartyMemAlloc" : "<0-4096>", "CERole" : "<ACTIVE/STANDBY>", "PeerCEHa0IPv4Address" : "<PEER HA IP ADDRESS>", "ClusterIp" : "<PEER HA IP ADDRESS>", "PeerCEName" : "<PEER SBC CE NAME>", "SbcHaMode" : "1to1", "PeerInstanceName" : "<PEER INSTANCE NAME>", "Pkt0HfeInstanceName" : "<PKT0 HFE NODE INSTANCE NAME>", "Pkt1HfeInstanceName" : "<PKT1 HFE NODE INSTANCE NAME>" }
The SBC requires user data in valid JSON format. If the user data is not valid JSON, the instance shuts down immediately.
You cannot update user data on VMs in the Azure framework.
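Because invalid JSON causes an immediate shutdown and the user data cannot be changed afterwards, it is worth validating the file before passing it to az vm create with --custom-data. A minimal local check (the file path and key values below are hypothetical; only a subset of the keys from the structure above is shown):

```shell
# Hypothetical userdata file; the keys mirror the JSON structure in this section.
cat > /tmp/sbc-userdata.json <<'EOF'
{
  "CEName" : "sbc1",
  "ReverseNatPkt0" : "True",
  "ReverseNatPkt1" : "True",
  "CERole" : "ACTIVE",
  "Pkt0HfeInstanceName" : "hfe-pkt0",
  "Pkt1HfeInstanceName" : "hfe-pkt1"
}
EOF

# Validate the JSON before launching the VM; a syntax error here would
# otherwise surface only as an immediate shutdown of the SBC instance.
if python3 -m json.tool /tmp/sbc-userdata.json > /dev/null 2>&1; then
  echo "userdata JSON is valid"
else
  echo "userdata JSON is INVALID" >&2
fi
```

A trailing comma or unescaped quote is caught locally in seconds, instead of after an instance launch.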
An example Meta Variable table for an SBC HA is provided below:
To add a new endpoint on the Public Endpoint side with HFE1 (for example, 52.52.52.52 is the new endpoint IP):

Add the endpoint IP to the outbound security group.
Add the endpoint IP to the PKT0 subnet custom route table (the name of the route table is $instanceBaseName). Select the Next hop type of Virtual Appliance, and set the Next hop address to $hfePkt0OutIp.

To add a new endpoint on the trusted side, add the IP (for example, 10.2.3.9) to the PKT1 subnet custom route table (the name of the route table is $instanceBaseName). Select the Next hop type of Virtual Appliance, and set the Next hop address to $hfePkt1OutIp.
The HFE generates the following logs under /opt/HFE/log/:
For specialized deployments, users may need to add specific custom static routes to the HFE at the OS level. The HFE script supports this through the HFE variable CUSTOM_ROUTES, which enables the script to add these routes as part of its start-up process and verify that they remain present on the HFE throughout its uptime.
CUSTOM_ROUTES is a comma separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.
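The format can be sketched in a few lines of shell: each comma-separated entry splits at the last underscore into a destination CIDR and an interface name. This parsing is an illustration only, not the actual logic inside HFE_AZ.sh:

```shell
# Illustration of the CUSTOM_ROUTES format; HFE_AZ.sh does the real parsing.
CUSTOM_ROUTES="1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3"

routes=$(echo "$CUSTOM_ROUTES" | tr ',' '\n' | while read -r entry; do
  cidr=${entry%_*}      # text before the last underscore: the destination CIDR
  iface=${entry##*_}    # text after the last underscore: the interface name
  echo "ip route add $cidr dev $iface"
done)

echo "$routes"
# ip route add 1.1.1.0/26 dev eth1
# ip route add 2.2.2.0/28 dev eth2
# ip route add 3.3.3.4/32 dev eth3
```

Splitting at the last underscore keeps the CIDR intact even though it contains a slash, and leaves only the interface suffix.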
To add CUSTOM_ROUTES to the HFE customData, add the following line below /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR. For example:

/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE
If the HFE is already deployed, add the variable to /opt/HFE/natVars.user instead. For example:
echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" | sudo tee -a /opt/HFE/natVars.user
For <INTERFACE_NAME>, always use the standard eth0, eth1, and so on, even if the Linux distribution does not use this naming convention. HFE_AZ.sh determines the interface on which to add the route.
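On distributions with predictable interface names (for example, ens3, ens4), a standard ethN name is effectively an index into the ordered interface list. The helper below is a hypothetical sketch of such a translation, not the resolution code HFE_AZ.sh actually uses:

```shell
# Hypothetical helper: map a standard "ethN" name to the N-th entry of an
# ordered interface list. HFE_AZ.sh performs its own interface resolution.
nth_iface() {
  idx=${1#eth}          # strip the "eth" prefix to get the index N
  shift                 # remaining arguments: the ordered interface list
  shift "$idx"          # skip the first N interfaces
  echo "$1"
}

nth_iface eth1 ens3 ens4 ens5   # prints "ens4"
```

The point is only that ethN in CUSTOM_ROUTES names a position, not a literal device name on the node.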
The HFE_AZ.sh script can create an archive of useful logs to help with debugging (similar to the SBC sysdump). Run the following command to collect the logs:
sudo /opt/HFE/HFE_AZ.sh sysdump
The following details are collected:
/opt/HFE/* (excluding previous sysdumps)

The sysdump archives are stored in .tar.gz format under /opt/HFE/sysdump/.
The following section contains instructions for setting multiple SSH IPs to access the HFE node, as well as for updating the instances to add more SSH IPs.
Ensure the REMOTE_SSH_MACHINE_IP is not set to an IP from which call traffic originates. This can break the HFE logic, causing traffic to fail to reach the SBC.
During orchestration, you can supply multiple IP addresses to the appropriate variable as a comma-separated list, for example: 10.0.0.1,10.0.0.2,10.0.0.3. The following table lists the variables that must be set for each orchestration type:
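Since each remote SSH IP results in its own management route, a comma-separated list like this expands to one route target per address. The sketch below only illustrates that expansion (the HFE script handles the real route creation):

```shell
# Illustrative: expand a comma-separated REMOTE_SSH_MACHINE_IP list into
# one management-route target per address.
REMOTE_SSH_MACHINE_IP="10.0.0.1,10.0.0.2,10.0.0.3"

targets=$(echo "$REMOTE_SSH_MACHINE_IP" | tr ',' '\n' | while read -r ip; do
  echo "$ip/32"
done)

echo "$targets"
# 10.0.0.1/32
# 10.0.0.2/32
# 10.0.0.3/32
```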
The following steps describe the procedure to update the Remote SSH IPs on the Azure.
To add in a new Remote SSH Machine IP, you need to supply the full list of IPs for which the routes need to be created.
The following procedure results in a network outage, as the HFE requires a reboot to pick up the updated list.
Azure does not support updating Custom Data after a VM is created. To update a HFE variable, use the following procedure:
Enter the updated variable to /opt/HFE/natVars.user. For example:
echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE:
sudo reboot
Any variable added to /opt/HFE/natVars.user overwrites the corresponding variable set in custom data. Ensure you enter the complete list of IPs for which routes must be created.
DNS queries on the SBC PKT port are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY enables the HFE to forward these requests correctly.
To enable PKT DNS support on a new HFE setup, add "ENABLE_PKT_DNS_QUERY=1" to the customData, below SBC_PKT_PORT_NAME.
Example:
/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "ENABLE_PKT_DNS_QUERY=1" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE
To enable the PKT DNS Support option on an already configured HFE setup:
Add the natvar ENABLE_PKT_DNS_QUERY to /opt/HFE/natVars.user with the value 1:
echo "ENABLE_PKT_DNS_QUERY=1" | sudo tee -a /opt/HFE/natVars.user
Reboot the HFE.
sudo reboot