DO NOT SHARE THESE DOCS WITH CUSTOMERS!

This is an LA release that will only be provided to a select number of PLM-sanctioned customers (PDFs only). Contact PLM for details.

This section describes the extra steps (in addition to those for the standalone SBC) necessary for creating an HFE/SBC setup on Azure. All commands used in this section are part of the Azure CLI.

HFE Node Network Setup

HFE nodes allow sub-second switchover between the SBCs of an HA pair in the Microsoft Azure environment, because they remove the need for any IP reassignment.

Note

For each SBC HA pair, use a unique subnet for pkt0 and pkt1.


Note

The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. This is not an issue, because the HFE script sets up the entire configuration based on the Azure NICs, not the local interface names.


Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Mgmt interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.
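As an illustration, the six subnets could be laid out as follows. The subnet names and CIDR ranges are example assumptions, not required values, and the snippet only prints the az CLI calls (a dry run) so they can be reviewed before anything is created:

```shell
#!/bin/bash
# Dry-run sketch: print (do not execute) the az CLI calls that would create
# the six subnets. Resource group, vnet name, subnet names, and CIDRs are
# example assumptions.
RG="RBBN-SBC-RG"
VNET="RibbonNet"

CMDS=""
for entry in \
    mgmt:10.2.0.0/24 \
    ha:10.2.2.0/24 \
    pkt0:10.2.3.0/24 \
    pkt1:10.2.4.0/24 \
    hfepublic:10.2.5.0/24 \
    hfepkt1:10.2.7.0/24; do
  name="${entry%%:*}"; cidr="${entry#*:}"
  CMDS="${CMDS}az network vnet subnet create --name ${name} --resource-group ${RG} --vnet-name ${VNET} --address-prefixes ${cidr}\n"
done
printf '%b' "$CMDS"
```

The six subnets cover mgmt (shared), HA, PKT0 (shared by the SBC and HFE eth2), PKT1 (likewise), the public eth0 subnet of the PKT0 HFE node, and the private eth0 subnet of the PKT1 HFE node.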

Configure the HFE nodes in one of two ways:

HFE 2.1

In HFE 2.1, there are two HFE nodes: one handles untrusted public traffic to the SBC (for PKT0), and the other handles trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".

Both HFE nodes require 3 interfaces, as described below:

HFE 2.1 - Interface Requirement

Standard/Ubuntu Interface Name | NIC | PKT0 HFE Node Function | PKT1 HFE Node Function | Requires External IP?
eth0 / ens4 | nic0 | Public interface for SBC PKT0 | Private interface for SBC PKT1 (can only be connected to/from instances in the same subnet) | Yes (only on the PKT0 HFE node)
eth1 / ens5 | nic1 | Management interface to HFE | Management interface to HFE | Optional
eth2 / ens6 | nic2 | Interface to SBC PKT0 | Interface to SBC PKT1 | No

Note

To use an HFE 2.1 environment, the startup script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, see the table in SBCs' Userdata.

Steps to Create SBC HA with HFE Setup

To create the SBC HA with HFE, perform the following steps:

  1. Install and log in to the Azure CLI.
  2. Create Resource Group and Network with six subnets.
  3. Configure the Storage Account.
  4. Create the User Assigned Managed Identity.
  5. Create the HFE Node(s).
  6. If using a non cloud-init enabled image, run the manual setup script. See HFE Node Initial Configuration.
  7. Understand the extra steps necessary for the SBC creation in SBCs' Userdata.
  8. Create two SBCs following the instructions in the sections Create SBC and SBCs' Userdata.

Resources for HFE Setup

To create the HFE setup, use the HFE Azure Shell Script (HFE_AZ.sh) and the HFE Azure Manual Setup Shell Script (HFE_AZ_manual_setup.sh), both included in cloudTemplates.tar.gz.


HFE Azure User Data


Configure the Storage Account

The script HFE_AZ.sh is stored in a container within a storage account. This allows the HFE nodes to download and run the script during VM startup.

To configure the storage account, perform the following steps:

  1. Create a storage account by executing the following command:
    Syntax

    az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2


    Example

    az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2
  2. Create a container by executing the following command:
    Syntax

    az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key


    Example

    az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key
  3. Upload the script HFE_AZ.sh to the container by executing the following command:
    Syntax

    az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>


    Example

    az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage
  4. Make the storage account accessible to the instances by allowing access from the subnets used for ETH0 on both HFE nodes (ensure that the subnets exist).
    Syntax

    az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>


    Example

    az storage account network-rule add --account-name rbbnhfestorage --subnet hfepublic  --vnet-name RibbonNet

HFE Node Initial Configuration

You can perform the initial configuration of the HFE node(s) in two ways:

  • Using custom-data and cloud-init.
  • Using the script HFE_AZ_manual_setup.sh.

The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.

HFE Variables

The HFE script has variables that you must update. When using cloud-init, update the HFE variables in the custom data.

For manual setup, update the script HFE_AZ_manual_setup.sh (the portion of the script below the comment: UPDATE VARIABLES IN THIS SECTION).

The following table contains the values that you must update:

<HFE_SCRIPT_LOCATION>
    Description: The URL for HFE_AZ.sh, which is stored in a container within a storage account. You can retrieve the URL by executing the following command:
    az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME>
    Example: https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh

<ACTIVE_SBC_NAME>
    Description: The instance name of the Active SBC.
    Example: rbbnSbc-1

<STANDBY_SBC_NAME>
    Description: The instance name of the Standby SBC.
    Example: rbbnSbc-2

<REMOTE_SSH_MACHINE_IP>
    Description: The SSH IP (or comma-separated list of IPs) to allow access through the mgmt port.
    Example: 43.26.27.29,35.13.71.112

<SBC_PKT_PORT_NAME>
    Description: Tells the HFE which PKT port it communicates with. Can only be set to PKT0 or PKT1.
    Note: This applies only to HFE 2.1.
    Example: PKT0

Updating HFE Variables

Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:

  1. Log on to the HFE node as the rbbn user.
  2. Append the updated variable to /opt/HFE/natVars.user. For example:

    echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
  3.  Reboot the HFE: 

    sudo reboot
    Note

    Any variable added to /opt/HFE/natVars.user overwrites the value set in the custom data. When adding a new Remote SSH Machine IP, supply the full list of IPs for which you want routes created.
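The override behaviour can be sketched as follows. This is an illustration of the precedence described above (assuming the user file is applied after the input file), not a statement about the internals of HFE_AZ.sh:

```shell
#!/bin/bash
# Illustration of natVars precedence. The file names mirror
# /opt/HFE/natVars.input and /opt/HFE/natVars.user, but local copies are used
# so the sketch runs without root.
echo 'REMOTE_SSH_MACHINE_IP="43.26.27.29"'          > natVars.input   # from custom data
echo 'REMOTE_SSH_MACHINE_IP="10.27.0.54,10.36.9.6"' > natVars.user    # user override

. ./natVars.input
. ./natVars.user    # sourced last, so its value replaces the input file's

echo "$REMOTE_SSH_MACHINE_IP"   # prints 10.27.0.54,10.36.9.6
```

Because the user file replaces the value wholesale, it must contain every IP you still want routes for, not just the new one.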

Supported Images

The following images are generally supported for use as the HFE node:

Cloud-init configuration

  • Ubuntu 18.04

Manual configuration

  • CentOS 7
  • CentOS 8
  • RHEL 7
  • RHEL 8
  • Debian 10

Custom Data Example

An example of the custom data for an HFE node is given below:

Click to view script
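The full custom-data script ships in cloudTemplates.tar.gz. The fragment below is only a hypothetical sketch of its variable-writing portion, based on the echo-to-$NAT_VAR pattern shown later in this document; all values are example assumptions, and local file paths stand in for /opt/HFE/natVars.input and the log file so the sketch can be run without root:

```shell
#!/bin/bash
# Hypothetical sketch of the variable-writing portion of the HFE custom data
# (not the shipped script). The real custom data writes to
# /opt/HFE/natVars.input; local files are used here for illustration.
NAT_VAR="./natVars.input"
LOG_FILE="./HFE_conf.log"
timestamp() { date +"%Y-%m-%d %T"; }

/bin/echo "HFE_SCRIPT_LOCATION=\"https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh\"" > "$NAT_VAR"
/bin/echo "ACTIVE_SBC_NAME=\"rbbnSbc-1\""                      >> "$NAT_VAR"
/bin/echo "STANDBY_SBC_NAME=\"rbbnSbc-2\""                     >> "$NAT_VAR"
/bin/echo "REMOTE_SSH_MACHINE_IP=\"43.26.27.29,35.13.71.112\"" >> "$NAT_VAR"
/bin/echo "SBC_PKT_PORT_NAME=\"PKT0\""                         >> "$NAT_VAR"
/bin/echo "$(timestamp) Copied natVars.input"                  >> "$LOG_FILE"
```

Each line corresponds to one entry in the HFE Variables table above; SBC_PKT_PORT_NAME is set only for HFE 2.1.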

Manual Configuration

The script HFE_AZ_manual_setup.sh has two functions:

  • It creates the systemd service "ribbon-hfe" and enables the service.
  • Systemd runs it to download the HFE_AZ.sh script and write the variables out to /opt/HFE/natVars.input, similar to the role custom-data plays in a cloud-init setup. Because the script runs as a systemd service, it runs automatically if the instance reboots.

The steps required to initially configure the HFE node using the script HFE_AZ_manual_setup.sh are as follows:

  1. Using SCP, upload the script HFE_AZ_manual_setup.sh onto the instance, in a file path that has executable permissions for the root.
  2. Run the script with heightened permissions and the '-s' flag. For example:

    sudo /usr/sbin/HFE_AZ_manual_setup.sh -s
    Tip

    The '-s' flag points systemd at the script's current location. If you move or remove the file, run the script again with the '-s' flag.

  3. Start the service by executing the following command:

    sudo systemctl start ribbon-hfe

Create HFE Nodes

To create HFE node(s), perform the steps described below.

Create Public IPs

Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. For more information, refer to Create Public IPs (Standalone).

Create NICs

To create NICs, use the following command syntax:

az network nic create --name <NIC NAME>
                      --resource-group <RESOURCE-GROUP-NAME>
                      --vnet-name <VIRTUAL NETWORK NAME>
                      --subnet <SUBNET NAME>
                      --network-security-group <SECURITY GROUP NAME> 


HFE 2.1

For HFE 2.1, create a total of six NICs (three for each HFE node).

The following table contains the extra flags necessary for each interface:

HFE 2.1 - Extra flags for each interface

HFE | Interface | Flags
PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true
PKT0 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
PKT0 HFE | eth2 | --ip-forwarding --accelerated-networking true
PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true
PKT1 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
PKT1 HFE | eth2 | --ip-forwarding --accelerated-networking true
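Putting the base command and the per-interface flags together, the six az network nic create calls could look like the following. NIC names, resource group, security group, public IP name, and subnet names are example assumptions; the script only prints the commands (a dry run) so they can be checked against the table above:

```shell
#!/bin/bash
# Dry-run sketch: print the six NIC-creation commands for an HFE 2.1 pair.
# All names are example assumptions; the flags follow the table above.
RG="RBBN-SBC-RG"; VNET="RibbonNet"; NSG="RbbnSbcSG"

nic() {  # nic <name> <subnet> <extra flags>
  local name="$1" subnet="$2"; shift 2
  CMDS="${CMDS}az network nic create --name ${name} --resource-group ${RG} --vnet-name ${VNET} --subnet ${subnet} --network-security-group ${NSG} $*\n"
}

CMDS=""
# PKT0 HFE node (the optional mgmt public IP is omitted here)
nic hfe-pkt0-eth0 hfepublic "--public-ip-address hfePubIp --ip-forwarding --accelerated-networking true"
nic hfe-pkt0-eth1 mgmt      "--accelerated-networking true"
nic hfe-pkt0-eth2 pkt0      "--ip-forwarding --accelerated-networking true"
# PKT1 HFE node
nic hfe-pkt1-eth0 hfepkt1   "--ip-forwarding --accelerated-networking true"
nic hfe-pkt1-eth1 mgmt      "--accelerated-networking true"
nic hfe-pkt1-eth2 pkt1      "--ip-forwarding --accelerated-networking true"
printf '%b' "$CMDS"
```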

Create the VM

To create the VM(s), use the following command syntax:

az vm create --name <INSTANCE NAME>
             --resource-group <RESOURCE_GROUP_NAME>
             --admin-username <UserName>
             --custom-data <USERDATA FILE>
             --image <IMAGE NAME>
             --location <LOCATION>
             --size <INSTANCE SIZE>
             --ssh-key-values <PUBLIC SSH KEY FILENAME>
             --nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC>
             --boot-diagnostics-storage <STORAGE ACCOUNT NAME>
             --assign-identity <USER ASSIGNED MANAGED IDENTITY ID>


The following table describes each flag:

VM Creation - Flag Description

name
    Example: rbbnSbc
    The name of the instance; it must be unique within the resource group.

resource-group
    Example: RBBN-SBC-RG
    The name of the Resource Group.

admin-username
    Example: rbbn
    The default user to log on as.

custom-data
    Accepted values: file name
    Example: hfeUserData.txt
    A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example.

image
    Example: Canonical:UbuntuServer:18.04-LTS:latest
    The name of an image. For more information, see Supported Images.

location
    Example: East US
    The location in which to host the VM. For more information, refer to Microsoft Azure Documentation.

size
    Example: Standard_DS3_v2
    Indicates the instance size. AWS calls this 'Instance Type', and OpenStack calls it 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation.
    Note:
      • Maintain the same instance type for the HFE and the SBC.
      • For HFE 2.1, each HFE node requires a minimum of three NICs.

ssh-key-values
    Accepted values: file name
    Example: azureSshKey.pub
    A file that contains the public SSH key for accessing the linuxadmin user. You can generate the file by executing the following command:
    ssh-keygen -y -f azureSshKey.pem > azureSshKey.pub
    Note: The public key must be in OpenSSH format: ssh-rsa XXX

nics
    Accepted values: space-separated list
    Example: hfe-pub hfe-mgmt-pkt0 hfe-pkt0
    The names of the NICs created in previous steps.

boot-diagnostics-storage
    Accepted values: storage account name
    Example: sbcdiagstore
    The storage account created in previous steps. This allows the use of the serial console.

assign-identity
    Accepted values: User Assigned Managed Identity ID
    Example: /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami
    The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing the following command:
    az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME>

HFE Routing 

The HFE setup requires routes in Azure to force all traffic leaving PKT0 and PKT1 to route back through the HFE.

Note

Consider the following when creating routes in Azure:

  • Custom routes are not given complete priority over the standard Azure routing. If there is a more specific Azure route, Azure directs traffic based on that default rule.
    • For example, if you create a custom route to 0.0.0.0/0 via the HFE, but send traffic to a private IP within the same Virtual network, Azure does NOT route via the HFE eth2. Instead, the traffic flows directly from the SBC to the private IP.
      (Refer to https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview for information on default rules)
    • If a custom route destination matches a default route destination, the SBC uses the custom route.
  • Ribbon advises supplying a specific IP or CIDR of the endpoints as the destination IP, to prevent routing issues with the Azure default routing rules.
  • Routes apply to all traffic within a subnet. This means that if multiple routes with the same destination address exist, the SBC can use any of them.
    • If multiple SBC setups are using the same endpoint (for example, in an SLB/SBC setup), separate the SBCs into separate subnets and route tables to ensure they route to the correct HFE.
      (Refer to Configure SBC SWe on Azure for SLB for more information)

To create the routes, perform the following steps:

  1. Create the route-table:
    Syntax

    az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>


    Example

    az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG


  2. Create two rules for PKT0 and PKT1:
    Syntax

    az network route-table route create --name <NAME>
                                        --resource-group <RESOURCE_GROUP_NAME>
                                        --address-prefix <CIDR OF ENDPOINT>
                                        --next-hop-type VirtualAppliance
                                        --route-table-name <ROUTE TABLE NAME>
                                        --next-hop-ip-address <IP FOR ETH2 OF PKT0/PKT1 HFE NODE>


    Example

    az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5


  3. Attach the route table to the PKT0/PKT1 subnets:
    Syntax

    az network vnet subnet update --name <SUBNET NAME>
                                  --resource-group <RESOURCE_GROUP_NAME>
                                  --vnet-name <VIRTUAL NETWORK NAME>
                                  --route-table <ROUTE TABLE NAME>


    Example

    az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table hfe-route-table


Additional Steps for SBC HFE Setup

To create the SBC HA with HFE setup, first perform all of the steps described in Create SBC (Standalone).

In addition to those steps, perform the steps described below.

Configure NICs

The SBC requires four NICs, each attached to an individual subnet for MGMT, HA, PKT0, and PKT1.

To create a standard NIC, use the following syntax:

az network nic create --name <NIC NAME>
                      --resource-group <RESOURCE GROUP NAME>
                      --vnet-name <VIRTUAL NETWORK NAME>
                      --subnet <SUBNET NAME>
                      --network-security-group <SECURITY GROUP NAME>
                      --accelerated-networking <true/false>

See below for additional steps required when the SBCs are in an HFE setup.

Secondary IPs

The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.

Note
  • Before creating the Secondary IP configuration, create the NICs for the SBCs.
  • You cannot set the IP config name as "ipconfig1", because it is reserved for the primary IP configuration on a NIC.

Create and attach Secondary IPs to a network interface by executing the following command:

Syntax

az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME>


Example

az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG


Create NIC for PKT0 and PKT1

When creating the NICs for the SBC's PKT0 and PKT1 ports, include the flag --ip-forwarding so that the ports can receive the traffic forwarded by the HFE node. For example:

az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group RbbnSbcSG --ip-forwarding
Note
Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs.


SBC Userdata

The SBCs in the HFE environment require the following user data: 

SBC HFE - User Data

CEName
    Allowed values: N/A
    Specifies the actual CE name of the SBC instance.
    CEName requirements:
      • Must start with an alphabetic character.
      • Must contain only alphabetic characters and/or numbers; no special characters are allowed.
      • Cannot exceed 64 characters in length.

ReverseNatPkt0
    Allowed values: True/False
    Requires True for standalone SBC.

ReverseNatPkt1
    Allowed values: True/False
    Requires True for standalone SBC.

SystemName
    Allowed values: N/A
    Specifies the System Name of the SBC instances.
    SystemName requirements:
      • Must start with an alphabetic character.
      • Must contain only alphabetic characters and/or numbers; no special characters are allowed.
      • Cannot exceed 26 characters in length.
      • Must be the same on both peer CEs.

SbcPersonalityType
    Allowed values: isbc
    The name of the SBC personality type for this instance. Currently, Ribbon supports only Integrated SBC (I-SBC).

AdminSshKey
    Allowed values: ssh-rsa ...
    Public SSH key to access the admin user; must be in the form ssh-rsa ...

ThirdPartyCpuAlloc
    Allowed values: 0-4
    (Optional) Number of CPUs segregated for use with non-Ribbon applications.
    Restrictions:
      • 0-4 CPUs
      • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
      • The configuration must match between peer instances.

ThirdPartyMemAlloc
    Allowed values: 0-4096
    (Optional) Amount of memory (in MB) segregated for use with non-Ribbon applications.
    Restrictions:
      • 0-4096 MB
      • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
      • The configuration must match between peer instances.

CERole
    Allowed values: ACTIVE/STANDBY
    Specifies the CE's role within the HA setup.

PeerCEHa0IPv4Address
    Allowed values: xxx.xxx.xxx.xxx
    This value must be the Private IP address of the peer SBC's HA interface.

ClusterIp
    Allowed values: xxx.xxx.xxx.xxx
    This value must also be the Private IP address of the peer SBC's HA interface.

PeerCEName
    Allowed values: N/A
    Specifies the actual CE name of the peer SBC instance in the HA setup.

SbcHaMode
    Allowed values: 1to1
    Specifies the mode of the HA configuration. Currently, Azure supports only 1:1 HA.

PeerInstanceName
    Allowed values: N/A
    Specifies the name of the peer instance in the HA setup.
    Note: This is not the CEName or the SystemName.

Pkt0HfeInstanceName
    Allowed values: N/A
    Specifies the instance name of the PKT0 HFE node.
    Note: Applicable only for HFE 2.1.

Pkt1HfeInstanceName
    Allowed values: N/A
    Specifies the instance name of the PKT1 HFE node.
    Note: Applicable only for HFE 2.1.



Create a JSON file using the following structure:

{
  "CEName" : "<SBC CE NAME>",
  "ReverseNatPkt0" : "True",
  "ReverseNatPkt1" : "True",
  "SystemName" : "<SYSTEM NAME>",
  "SbcPersonalityType": "isbc",
  "AdminSshKey" : "<ssh-rsa ...>",
  "ThirdPartyCpuAlloc" : "<0-4>",
  "ThirdPartyMemAlloc" : "<0-4096>",
  "CERole" : "<ACTIVE/STANDBY>",
  "PeerCEHa0IPv4Address" : "<PEER HA IP ADDRESS>",
  "ClusterIp" : "<PEER HA IP ADDRESS>",
  "PeerCEName" : "<PEER SBC CE NAME>",
  "SbcHaMode" : "1to1",
  "PeerInstanceName" : "<PEER INSTANCE NAME>",
  "Pkt0HfeInstanceName" : "<PKT0 HFE NODE INSTANCE NAME>",
  "Pkt1HfeInstanceName" : "<PKT1 HFE NODE INSTANCE NAME>"
}
Caution

The SBC requires user data in a valid JSON format. If the user data is not valid JSON, the instance shuts down immediately.

You cannot update user data on VMs in the Azure framework.
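Because invalid JSON shuts the instance down, it is worth validating the userdata file before passing it to az vm create. A minimal check using Python's built-in JSON tool; the file name sbcUserData.json and its values are assumptions for the example:

```shell
#!/bin/bash
# Validate the SBC userdata before launch. python3 -m json.tool exits
# non-zero on invalid JSON, so it doubles as a pre-flight check.
cat > sbcUserData.json <<'EOF'
{
  "CEName" : "rbbnSbc1",
  "ReverseNatPkt0" : "True",
  "ReverseNatPkt1" : "True",
  "SystemName" : "rbbnSbc",
  "SbcPersonalityType" : "isbc",
  "CERole" : "ACTIVE"
}
EOF

if python3 -m json.tool sbcUserData.json >/dev/null; then
  echo "userdata JSON OK"
else
  echo "userdata JSON INVALID" >&2
fi
```

A single trailing comma or unquoted value is enough to make the instance shut down at boot, so run this check whenever the file is edited.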


Sample Meta Variable Table

An example Meta Variable table for an SBC HA is provided below:

admin@act-10.2.2.127> show table system metaVariable
CE NAME         NAME                  VALUE
-----------------------------------------------------
act-10.2.2.127  IF0.GWV4              10.2.0.1
act-10.2.2.127  IF0.IPV4              10.2.0.9
act-10.2.2.127  IF0.Port              Mgt0
act-10.2.2.127  IF0.RNat              True
act-10.2.2.127  IF1.GWV4              10.2.2.1
act-10.2.2.127  IF1.IPV4              10.2.2.127
act-10.2.2.127  IF1.Port              Ha0
act-10.2.2.127  IF1.RNat              True
act-10.2.2.127  IF2.GWV4              10.2.3.1
act-10.2.2.127  IF2.IPV4              10.2.3.10
act-10.2.2.127  IF2.Port              Pkt0
act-10.2.2.127  IF2.RNat              True
act-10.2.2.127  IF3.GWV4              10.2.4.1
act-10.2.2.127  IF3.IPV4              10.2.4.10
act-10.2.2.127  IF3.Port              Pkt1
act-10.2.2.127  IF3.RNat              True
act-10.2.2.127  IF0.FIPV4             137.117.73.22
act-10.2.2.127  IF0.PrefixV4          24
act-10.2.2.127  IF1.PrefixV4          24
act-10.2.2.127  IF2.PrefixV4          24
act-10.2.2.127  IF3.PrefixV4          24
act-10.2.2.127  HFE_IF2.FIPV4         52.168.34.216
act-10.2.2.127  HFE_IF3.FIPV4         10.2.2.7
act-10.2.2.127  HFE_IF2.IFName        IF_HFE_PKT0
act-10.2.2.127  HFE_IF3.IFName        IF_HFE_PKT1
act-10.2.2.127  secondaryIPList.Pkt0  ['10.2.3.10']
act-10.2.2.127  secondaryIPList.Pkt1  ['10.2.4.10']
sby-10.2.2.227  IF0.GWV4              10.2.0.1
sby-10.2.2.227  IF0.IPV4              10.2.0.14
sby-10.2.2.227  IF0.Port              Mgt0
sby-10.2.2.227  IF0.RNat              True
sby-10.2.2.227  IF1.GWV4              10.2.2.1
sby-10.2.2.227  IF1.IPV4              10.2.2.227
sby-10.2.2.227  IF1.Port              Ha0
sby-10.2.2.227  IF1.RNat              True
sby-10.2.2.227  IF2.GWV4              10.2.3.1
sby-10.2.2.227  IF2.IPV4              10.2.3.10
sby-10.2.2.227  IF2.Port              Pkt0
sby-10.2.2.227  IF2.RNat              True
sby-10.2.2.227  IF3.GWV4              10.2.4.1
sby-10.2.2.227  IF3.IPV4              10.2.4.10
sby-10.2.2.227  IF3.Port              Pkt1
sby-10.2.2.227  IF3.RNat              True
sby-10.2.2.227  IF0.FIPV4             40.76.8.39
sby-10.2.2.227  IF0.PrefixV4          24
sby-10.2.2.227  IF1.PrefixV4          24
sby-10.2.2.227  IF2.PrefixV4          24
sby-10.2.2.227  IF3.PrefixV4          24
sby-10.2.2.227  HFE_IF2.FIPV4         52.168.34.216
sby-10.2.2.227  HFE_IF3.FIPV4         10.2.2.7
sby-10.2.2.227  HFE_IF2.IFName        IF_HFE_PKT0
sby-10.2.2.227  HFE_IF3.IFName        IF_HFE_PKT1
sby-10.2.2.227  secondaryIPList.Pkt0  ['10.2.3.11']
sby-10.2.2.227  secondaryIPList.Pkt1  ['10.2.4.11']
[ok][2019-10-07 11:48:16]
admin@act-10.2.2.127>


Add New Endpoints to UAC

To add a new endpoint on the Public Endpoint side with HFE1 (for example, 52.52.52.52 is the new endpoint IP):

  1. Add the endpoint IP to the outbound security group.

    Add outbound security rule

  2. Add the endpoint IP to the PKT0 subnet custom route table. The name of the route table is $instanceBaseName.

    UAC - Route Table - Add Route


    Select the Next hop type of Virtual Appliance, and the Next hop address as $hfePkt0OutIp.

  3. Add the endpoint IP to the Inbound Security Rule of the security group of nic1 of HFE1, and PKT0 of the SBC.

Add New Endpoints to UAS

Add the endpoint IP (for example, 10.2.3.9) to the PKT1 subnet custom route table. The name of the route table is $instanceBaseName.

UAS - Route Table - Add Route


Select Next hop type of Virtual Appliance, and the Next hop address as $hfePkt1OutIp.

HFE Node Logging

The HFE generates the following logs under /opt/HFE/log/:

  • cloud-init-nat.log: Logs generated from the initial configuration.
  • HFE_conf.log: Logs generated from the setup of the HFE node. They contain information about:
    • SBC instance names
    • The IPs for allowing SSH into the HFE node
    • The configured zone
    • The SBC IPs being used to forward traffic to
    • Iptables rules
    • Routing rules
  • HFE_conf.log.prev: A copy of the previous HFE_conf.log.
  • HFE.log
    • Logs which contain messages about any switchover action, as well as connection errors. The logs generated are as follows:
      1. Connection error detected to Active SBC: <<IP>>. Attempting switchover.
        • The HFE node has lost connection to the active SBC and is performing a switchover.
      2. Connection error ongoing - No connection to SBC PKT ports from HFE
        • This error means that a switchover has been attempted, but no connection could be established to the new SBC.
        • The HFE node then continually switches between the SBCs until a connection is established
        • This usually means there is a network issue or a configuration issue on the SBCs.
      3. Switchover from old Active <<Old Active SBC IP>> to new Active <<New Active SBC IP>> complete. Connection established.
        • The switchover action is complete and connection has been established to the 'Active' SBC
      4. Initial HFE startup configuration complete. Successfully connected to <<SBC Instance Name>>
        • The HFE node has successfully connected to the active SBC following a boot.
    • This log is rotated when it reaches 250MB:
      • Up to four previous logs are saved.
      • The previous logs are compressed to save disk space.

Adding Custom Static Routes to HFE

For specialized deployments, users may need to add custom static routes on the HFE at the OS level. The HFE script supports this through the HFE variable CUSTOM_ROUTES: the script adds these routes as part of its start-up process and verifies that they remain present while the HFE is up.

CUSTOM_ROUTES is a comma separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.
To add the CUSTOM_ROUTES to the HFE customData, add the following line below /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR. For example:

/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE

If the HFE is already deployed, the variable is added to /opt/HFE/natVars.user. For example:

echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" | sudo tee -a /opt/HFE/natVars.user

For <INTERFACE_NAME>, always use the standard eth0, eth1, and so on, even if the Linux distribution does not use this naming convention. The HFE_AZ.sh script determines the actual interface on which to add the route.
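The <DESTINATION_IP_CIDR>_<INTERFACE_NAME> format can be illustrated with a small parser. This is a sketch of the format only, not the logic HFE_AZ.sh actually uses, and it prints the route commands it would run instead of executing them:

```shell
#!/bin/bash
# Sketch: parse a CUSTOM_ROUTES value and print the resulting route commands.
# Illustrates the <CIDR>_<IFACE> format only; not the shipped HFE_AZ.sh logic.
CUSTOM_ROUTES="1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3"

CMDS=""
# Split on commas, trim spaces, then split each entry on '_'.
IFS=','
for entry in $CUSTOM_ROUTES; do
  entry="$(echo "$entry" | tr -d ' ')"
  cidr="${entry%_*}"    # everything before the last '_'
  iface="${entry#*_}"   # everything after the first '_'
  CMDS="${CMDS}ip route add ${cidr} dev ${iface}\n"
done
unset IFS
printf '%b' "$CMDS"
```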

Creating a HFE Sysdump

The HFE_AZ.sh script can create an archive of useful logs to help with debugging (similar to the SBC sysdump). Run the following command to collect the logs:

sudo /opt/HFE/HFE_AZ.sh sysdump

The following details are collected:

  1. Output of:
    • Interfaces
    • Routes
    • IPtables
    • dmesg
    • conntrack count
    • conntrack extended list
    • The VM Azure metadata
    • journalctl errors
    • dhclient logs
    • System-networkd logs
  2. The logs:
    • syslog
    • waagent logs
    • cloud-init logs
  3. /opt/HFE/* (without previous sysdumps)
  4. All user bash history

The sysdump archives are stored in the .tar.gz format under /opt/HFE/sysdump/.

Handling Multiple Remote SSH IPs to Connect to HFE Node

The following section contains instructions for setting multiple SSH IPs to access the HFE node, as well as for updating the instances to add more SSH IPs.

Note

Ensure that REMOTE_SSH_MACHINE_IP is not set to an IP from which call traffic originates. Doing so can break the HFE logic, causing traffic to fail to reach the SBC.

Initial Orchestration

During orchestration, you can supply multiple IP addresses in the appropriate variable as a comma-separated list, for example: 10.0.0.1,10.0.0.2,10.0.0.3. The following table lists the variable to set for each orchestration type:

Initial Orchestration

Cloud | Orchestration Type | Variable Name
Azure | Manual creation using CLI | REMOTE_SSH_MACHINE_IP (in customData)
Azure | Terraform | remote_ssh_ip

Updating Remote SSH IPs

The following steps describe the procedure to update the Remote SSH IPs on the Azure.

Note

To add a new Remote SSH Machine IP, you need to supply the full list of IPs for which the routes need to be created.

Note

The following procedure results in a network outage, because the HFE requires a reboot to pick up the updated list.

Azure

Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:

  1. Log on to the HFE node as the rbbn user.
  2. Append the updated variable to /opt/HFE/natVars.user. For example:

    echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user
  3. Reboot the HFE: 

    sudo reboot
    Note

    Any variable added to /opt/HFE/natVars.user overwrites the value set in the custom data. Ensure you enter the complete list of IPs for which routes need to be created.

Enabling PKT DNS Support on HFE

DNS queries on the SBC PKT ports are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY enables support for the HFE to forward these requests correctly.

To enable the PKT DNS Support on a new HFE setup, add "ENABLE_PKT_DNS_QUERY=1" to the customData, below SBC_PKT_PORT_NAME.

Example:

/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "ENABLE_PKT_DNS_QUERY=1" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE

To enable the PKT DNS Support option on an already configured HFE setup:

  1. Log on to the HFE node as the rbbn user.
  2. Add the natvar ENABLE_PKT_DNS_QUERY to /opt/HFE/natVars.user with the value 1.

    echo "ENABLE_PKT_DNS_QUERY=1" | sudo tee -a /opt/HFE/natVars.user
  3. Reboot the HFE.

    sudo reboot