
panel

In this section:

Table of Contents
maxLevel3

This section describes the extra steps (to perform in addition to the steps described in Instantiate Standalone SBC) necessary for creating an HFE/SBC on Azure. All commands used in this section are part of the Azure CLI.

HFE Node Network Setup

In the Microsoft Azure environment, HFE nodes allow sub-second switchover between the SBCs of an HA pair, as they negate the need for any IP reassignment.

Info
titleNote

For each SBC HA pair, use a unique subnet for PKT0 and PKT1.


Info
titleNote

The interfaces may sometimes display in the incorrect order on the HFE node at the Linux level. However, this is not an issue because the HFE script ensures the entire configuration is set up correctly based on the Azure NICs, not the local interface names.


Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Management interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.
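
As a quick sanity check before continuing, you can list the subnets already defined in the virtual network. This is a standard Azure CLI command; the resource group and VNet names shown (RBBN-SBC-RG, RibbonNet) are the sample values used elsewhere in this section and may differ in your deployment.

Code Block
titleList existing subnets (example)
# List all subnets in the virtual network to confirm the six required subnets exist
az network vnet subnet list --resource-group RBBN-SBC-RG --vnet-name RibbonNet --output table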

HFE 2.1

In HFE 2.1, there are two HFE nodes - one to handle untrusted public traffic to the SBC (for PKT0), and the other to handle trusted traffic from the SBC to other trusted networks (from PKT1). In this section, the HFE node handling untrusted traffic is referred to as the "PKT0 HFE node", and the HFE node handling trusted traffic as the "PKT1 HFE node".

Both HFE nodes require three interfaces, as described below:

HFE 2.1 - Interface Requirements

Standard/Ubuntu Interface Name | NIC | PKT0 HFE Node Function | PKT1 HFE Node Function | Requires External IP?
eth0 / ens4 | nic0 | Public interface for SBC PKT0 | Private interface for SBC PKT1 (can only be connected to/from instances in the same subnet) | Yes (only on PKT0 HFE node)
eth1 / ens5 | nic1 | Management interface to HFE | Management interface to HFE | Optional
eth2 / ens6 | nic2 | Interface to SBC PKT0 | Interface to SBC PKT1 | No
Info
titleNote

To use a HFE 2.1 environment, the startup script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, see the table in SBCs' Userdata.


Steps to Create SBC HA with HFE 2.1 Setup

To create the SBC HA with HFE, perform the following steps:

  1. Install and login to the Azure CLI (see the example after this list).
  2. Create the Resource Group and Network with six subnets.
  3. Configure the Storage Account for HFE.
  4. Create the User Assigned Managed Identity.
  5. Configure HFE Nodes.
  6. Create the HFE Node(s).
  7. If using a non cloud-init enabled image, run the manual setup script. See HFE Node Initial Configuration.
  8. Perform the additional steps necessary for the SBC creation, as described in SBCs' Userdata.
  9. Create two SBCs following the instructions in the sections Create SBC and SBCs' Userdata.
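
Step 1 can be verified with the Azure CLI itself. The commands below are standard Azure CLI calls; the subscription value is a placeholder that you must replace for your environment.

Code Block
titleInstall and login to the Azure CLI (example)
# Log in interactively and select the subscription to deploy into
az login
az account set --subscription "<SUBSCRIPTION NAME OR ID>"
# Confirm the CLI version and the active subscription
az --version
az account show --output table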


Configure HFE Nodes

To create the HFE setup, use the HFE Azure Shell Script and the HFE Azure Manual Setup Shell Script, included in cloudTemplates.tar.gz and named HFE_AZ.sh and HFE_AZ_manual_setup.sh. Upload the HFE_AZ.sh script to a storage account so that the HFE nodes can download it. You can retrieve the files from the Ribbon Support portal. See Configure the Storage Account for HFE.
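
The exact layout of the archive is not shown in this document, so the commands below are only an illustrative sketch for locating the scripts named above after downloading the package from the Ribbon Support portal.

Code Block
titleExtract the HFE scripts (example)
# Extract the package and locate the HFE scripts inside it
tar -xzf cloudTemplates.tar.gz
find . -name "HFE_AZ*.sh"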

HFE Azure User Data

An example of the user data for the HFE node is provided in the Custom Data Example and HFE Variables sections later on this page.

Create HFE Subnets

Two further subnets need to be created for the HFE. These subnets are used for eth0 of the PKT0 HFE and PKT1 HFE nodes. To create the subnets, use the following commands.

Info
titleNote

--service-endpoints is required to allow the HFE to download the HFE script from storage.


Syntax

Code Block
titleCreate HFE Subnets
az network vnet subnet create --name <NAME>
                              --address-prefixes <CIDR>
                              --resource-group <RESOURCE-GROUP-NAME>
                              --vnet-name <VNET_NAME>
                              --network-security-group <SECURITY GROUP NAME>
                              --service-endpoints Microsoft.Storage

Examples

Code Block
titleCreate HFE Subnets Example
az network vnet subnet create --name pkt0-hfe --address-prefixes 10.2.4.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group pkt0RbbnSbcSG --service-endpoints Microsoft.Storage

az network vnet subnet create --name pkt1-hfe --address-prefixes 10.2.5.0/24 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --network-security-group Pkt1RbbnSbcSG --service-endpoints Microsoft.Storage

Configure the Storage Account

The script HFE_AZ.sh is stored in a container within a storage account. This allows the HFE nodes to download and run the script during the VM startup.

To configure the storage account, perform the following steps:

Create a storage account by executing the following command:

Code Block
titleSyntax
az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2
Code Block
titleExample
az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2

Create a container by executing the following command:

Code Block
titleSyntax
az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key
Code Block
titleExample
az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key

Upload the script HFE_AZ.sh to the container by executing the following command:

Code Block
titleSyntax
az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>
Code Block
titleExample
az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage

Make the storage account accessible for the instances by allowing access to virtual machines in both subnets used for ETH0 on the HFE node (ensure that the subnet exists).

Code Block
titleSyntax
az storage account network-rule add --account-name <STORAGE ACCOUNT NAME> --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE> --vnet-name <VIRTUAL NETWORK NAME>
Code Block
titleExample
az storage account network-rule add --account-name rbbnhfestorage --subnet hfepublic  --vnet-name RibbonNet

HFE Node Initial Configuration

You can perform the initial configuration of the HFE node(s) in two ways:

  • Using custom-data and cloud-init.
  • Using the script HFE_AZ_manual_setup.sh.

The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.

HFE Variables

The HFE has variables that must be updated. When using cloud-init, update the HFE variables in the custom data.

For manual setup, update the script HFE_AZ_manual_setup.sh (the portion of the script below the comment: UPDATE VARIABLES IN THIS SECTION).

The following table contains the values that you must update:

Value to be updated | Description | Example

<HFE_SCRIPT_LOCATION>

The URL for HFE_AZ.sh within the container in the storage account.

You can retrieve the URL by executing the following command:
az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME>

Example: https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh

<ACTIVE_SBC_NAME> | The instance name for the Active SBC | rbbnSbc-1

<STANDBY_SBC_NAME> | The instance name for the Standby SBC | rbbnSbc-2

<REMOTE_SSH_MACHINE_IP>

The SSH IP/IPs to allow access through the mgmt port.

Note:

  • For multiple IPs, use a comma-separated list.
  • Add the IPs to the associated SGN. For more information, refer to Create Rules.

Example: 43.26.27.29,35.13.71.112

<SBC_PKT_PORT_NAME>

This tells the HFE which PKT port it is communicating with. Can only be set as PKT0 or PKT1.

Note: This is only for HFE 2.1.

Example: PKT0

    Updating HFE Variables

    Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:

  • Log on to the HFE node as a Ribbon user.
  • Add the updated variable to /opt/HFE/natVars.user. For example:

    Code Block
    echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user

     Reboot the HFE: 

    Code Block
    sudo reboot
    Info
    titleNote

    Any variable added to /opt/HFE/natVars.user overwrites the value set for that variable in the custom data. To add a new Remote SSH Machine IP, make sure you supply the full list of IPs for which you want routes to be created.
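
    For example, to extend the remote SSH list with an additional address while keeping the existing ones (the IPs shown are the sample values used earlier in this section), supply the complete list again and reboot:

    Code Block
    # Overwrite REMOTE_SSH_MACHINE_IP with the full, updated list of IPs
    echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6,43.26.27.29\"" | sudo tee -a /opt/HFE/natVars.user
    sudo reboot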

    Supported Images

    The following images are generally supported for use as the HFE:

    Cloud-init configuration

    • Ubuntu 18.04

    Manual configuration

    • CentOS 7
    • CentOS 8
    • RHEL 7
    • RHEL 8
    • Debian 10

    Custom Data Example

    An example of the custom data for an HFE node is provided in the HFE Variables section later on this page.

    Manual Configuration

    The script HFE_AZ_manual_setup.sh has two functions:

    • It creates the systemd service "ribbon-hfe" and enables the service.
    • Systemd runs it to download the HFE script and write the variables out to /opt/HFE/natVars.input, similar to the role custom-data plays with cloud-init. Because the script runs as a systemd service, it runs automatically if the instance reboots.

    The steps required to initially configure the HFE node using the script HFE_AZ_manual_setup.sh are as follows:

  • Using SCP, upload the script HFE_AZ_manual_setup.sh onto the instance, to a file path that has executable permissions for root.
  • Run the script with elevated permissions and the '-s' flag. For example:

    Code Block
    sudo /usr/sbin/HFE_AZ_manual_setup.sh -s
    Tip
    titleTip

    When you use the '-s' flag, systemd points at the location of the script. If you remove the file, run the script again with the '-s' flag.

    Start the service by executing the following command:

    Code Block
    sudo systemctl start ribbon-hfe

    Create HFE Nodes

    To create HFE node(s), perform the steps described below.

    Create Public IPs

    Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. For more information, refer to Create Public IPs (Standalone).
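
    A minimal example, using the same command shown in the later Create Public IPs section (the IP name and resource group are the sample values used throughout this section):

    Code Block
    titleExample
    az network public-ip create --name hfe-pkt0-ip --resource-group RBBN-SBC-RG --allocation-method Static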

    Create NICs

    To create NICs, use the following command syntax:

    Code Block
    az network nic create --name <NIC NAME>
                          --resource-group <RESOURCE-GROUP-NAME>
                          --vnet-name <VIRTUAL NETWORK NAME>
                          --subnet <SUBNET NAME>
                          --network-security-group <SECURITY GROUP NAME> 

    HFE 2.1

    For HFE 2.1, create a total of six NICs (three for each HFE node).

    The following table contains the extra flags necessary for each interface:

    HFE 2.1 - Extra Flags for Each Interface

    HFE | Interface | Flags
    PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true
    PKT0 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
    PKT0 HFE | eth2 | --ip-forwarding --accelerated-networking true
    PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true
    PKT1 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
    PKT1 HFE | eth2 | --ip-forwarding --accelerated-networking true
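
    For illustration, combining the NIC creation syntax above with the PKT0 HFE eth0 flags from the table might look as follows; the NIC, subnet, security group, and public IP names are placeholders, not values defined in this document:

    Code Block
    titleExample (PKT0 HFE eth0 NIC)
    az network nic create --name <PKT0 HFE ETH0 NIC NAME> --resource-group <RESOURCE-GROUP-NAME> --vnet-name <VIRTUAL NETWORK NAME> --subnet <SUBNET NAME> --network-security-group <SECURITY GROUP NAME> --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true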

    Create the VM

    To create the VM(s), use the following command syntax:

    Code Block
    az vm create --name <INSTANCE NAME>
                 --resource-group <RESOURCE_GROUP_NAME>
                 --admin-username <UserName>
                 --custom-data <USERDATA FILE>
                 --image <IMAGE NAME>
                 --location <LOCATION>
                 --size <INSTANCE SIZE>
                 --ssh-key-values <PUBLIC SSH KEY FILENAME>
                 --nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC>
                 --boot-diagnostics-storage <STORAGE ACCOUNT NAME>
                 --assign-identity <USER ASSIGNED MANAGED IDENTITY ID>

    The following table describes each flag:

    VM Creation - Flag Description

    Flag | Accepted Values / Example | Description
    name | rbbnSbc | Name of the instance; must be unique in the resource group.
    resource-group | RBBN-SBC-RG | Name of the Resource Group.
    admin-username | rbbn | The default user to log on.
    custom-data | File name, for example hfeUserData.txt | A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example.
    image | Canonical:UbuntuServer:18.04-LTS:latest | The name of an image. For more information, see Supported Images.
    location | East US | The location to host the VM in. For more information, refer to Microsoft Azure Documentation.
    size | Standard_DS3_v2 | Indicates the instance size. In AWS this is known as 'Instance Type', and OpenStack calls this 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation. Note: Maintain the same instance type for the HFE and the SBC. For HFE 2.1, each HFE node requires a minimum of three NICs.
    ssh-key-values | File name, for example azureSshKey.pub | A file that contains the public SSH key for accessing the linuxadmin user. You can retrieve the file by executing: ssh-keygen -y -f azureSshKey.pem > azureSshKey.pub. Note: The public key must be in openSSH form: ssh-rsa XXX
    nics | Space-separated list, for example hfe-pub hfe-mgmt-pkt0 hfe-pkt0 | The names of the NICs created in previous steps.
    boot-diagnostics-storage | Storage account name, for example sbcdiagstore | The storage account created in previous steps. This allows the use of the serial console.
    assign-identity | User Assigned Managed Identity ID, for example /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami | The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing: az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME>
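
    Putting the flags together, a command for an HFE node could look like the following sketch. The VM name is a hypothetical placeholder, the other values are the example values from the table above, and the location string is an assumption; adjust all of them to your deployment:

    Code Block
    titleExample
    az vm create --name rbbnHfePkt0 \
                 --resource-group RBBN-SBC-RG \
                 --admin-username rbbn \
                 --custom-data hfeUserData.txt \
                 --image Canonical:UbuntuServer:18.04-LTS:latest \
                 --location eastus \
                 --size Standard_DS3_v2 \
                 --ssh-key-values azureSshKey.pub \
                 --nics hfe-pub hfe-mgmt-pkt0 hfe-pkt0 \
                 --boot-diagnostics-storage sbcdiagstore \
                 --assign-identity /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami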

    HFE Routing 

    The HFE setup requires routes in Azure to force all the traffic leaving PKT0 and PKT1 to route back through the HFE.

    Info
    titleNote

    Consider the following when creating routes in Azure:

  • Custom routes are not given complete priority over the standard Azure routing. If there is a more specific Azure route, Azure directs traffic based on that default rule.
    • For example, if you create a custom route to 0.0.0.0/0 via the HFE, but send traffic to a private IP within the same virtual network, Azure does NOT route via the HFE eth2. Instead, the traffic flows directly from the SBC to the private IP. (Refer to https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview for information on default rules.)
    • If a custom route destination matches a default route destination, the SBC uses the custom route.
  • Ribbon advises supplying a specific IP or CIDR of the endpoints as the destination IP to prevent any routing issues with the Azure default routing rules.
  • Routes are applied to all traffic within a subnet. This means that if multiple routes using the same destination address exist, the SBC can use any route.
    • If multiple SBC setups use the same endpoint (for example, in an SLB/SBC setup), separate the SBCs into separate subnets and route tables to ensure they route to the correct HFE. (Refer to Configure SBC SWe on Azure for SLB for more information.)

    To create the routes, perform the following steps:

    Create the route-table:

    Code Block
    titleSyntax
    az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>
    Code Block
    titleExample
    az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG

    Create two rules for PKT0 and PKT1:

    Code Block
    titleSyntax
    az network route-table route create --name <NAME>
                                        --resource-group <RESOURCE_GROUP_NAME>
                                        --address-prefix <CIDR OF ENDPOINT>
                                        --next-hop-type VirtualAppliance
                                        --route-table-name <ROUTE TABLE NAME>
                                        --next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE> 
    Code Block
    titleExample
    az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5

    Attach the route table to the PKT0/PKT1 subnets:

    Code Block
    titleSyntax
    az network vnet subnet update --name <SUBNET NAME>
                                  --resource-group <RESOURCE_GROUP_NAME>
                                  --vnet-name <VIRTUAL NETWORK NAME>
                                  --route-table <ROUTE TABLE NAME>
    Code Block
    titleExample
    az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table pkt0-route


    Configure the Storage Account for HFE

    The script HFE_AZ.sh is stored in a container within a storage account. This allows the HFE nodes to download and run the script during the VM startup. It is recommended to use "storageV2" as the kind for the storage account.

    To configure the storage account, perform the following steps:

    1. Create a storage account by entering the following command:
      Syntax

      Code Block
      az storage account create --name <NAME> --resource-group <RESOURCE_GROUP_NAME> --kind storageV2


      Example

      Code Block
      az storage account create --name rbbnhfestorage --resource-group RBBN-SBC-RG --kind storageV2


    2. Create a container by entering the following command:
      Syntax

      Code Block
      az storage container create --name <NAME> --account-name <STORAGE ACCOUNT NAME> --public-access blob --auth-mode key


      Example

      Code Block
      az storage container create --name hfescripts --account-name rbbnhfestorage --public-access blob --auth-mode key


    3. Upload the script HFE_AZ.sh to the container by entering the following command:
      Syntax

      Code Block
      az storage blob upload --name <NAME> --file <HFE_AZ.sh> --container-name <CONTAINER NAME> --account-name <STORAGE ACCOUNT NAME>


      Example

      Code Block
      az storage blob upload --name HFE_AZ.sh --file /tmp/HFE_AZ.sh --container-name hfescripts --account-name rbbnhfestorage


    4. Make the storage account accessible for the instances by allowing access to virtual machines in both subnets used for ETH0 and ETH1 (to cover the case where the management interface is used) on the HFE node (ensure that the subnets exist).
      Syntax

      Code Block
      az storage account network-rule add --account-name <STORAGE ACCOUNT NAME>
                                          --resource-group <RESOURCE_GROUP_NAME>
                                          --subnet <SUBNET NAME of SUBNET USED FOR ETH0 of HFE NODE>
                                          --vnet-name <VIRTUAL NETWORK NAME>


      Example

      Code Block
      az storage account network-rule add --account-name rbbnhfestorage --resource-group RBBN-SBC-RG --subnet hfepublic  --vnet-name RibbonNet


    HFE Node Initial Configuration

    Azure requires that each interface of an instance is in a separate subnet, but within the same virtual network. Assuming all Mgmt interfaces are from the same subnet and HFE interfaces to the associated SBC PKT share a subnet, a minimum of six subnets are necessary for a full HFE setup.

    You can perform the initial configuration of the HFE nodes using custom-data and cloud-init.

    The list of cloud-init enabled Linux VMs is available in Microsoft Azure Documentation.

    HFE Variables

    To create the custom data for the HFE node, update the following example script using the table below. Save this to a file to use during the HFE VM creation.

    Code Block
    titleClick to view script
    collapsetrue
    Content-Type: multipart/mixed; boundary="//"
    MIME-Version: 1.0
    --//
    Content-Type: text/cloud-config; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="cloud-config.txt"
    #cloud-config
    cloud_final_modules:
    - [scripts-user, always]
    --//
    Content-Type: text/x-shellscript; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="userdata.txt"
    #!/bin/bash
    HFE_DIR="/opt/HFE"
    HFE_LOG_DIR="$HFE_DIR/log"
    HFE_FILE="$HFE_DIR/HFE_AZ.sh"
    LOG_FILE="$HFE_LOG_DIR/cloud-init-nat.log"
    NAT_VAR="$HFE_DIR/natVars.input"
    TEMP_MGMT_ROUTE="$HFE_DIR/.tempRoute"
    AZ_BLOB_URL="<HFE_SCRIPT_LOCATION>" # URL of uploaded HFE script
    timestamp()
    {
      date +"%Y-%m-%d %T"
    }
    if [ ! -d $HFE_LOG_DIR ]; then
      mkdir -p $HFE_LOG_DIR;
    fi;
    
    /bin/echo $(timestamp) " ========================= cloud-init configuration for HFE ==========================================" >> $LOG_FILE
    
    #Fix any interfaces
    defaultRoute=$(ip route | grep default) # There will only be 1 default route with a metric 100
    ip a | grep -E eth.: | grep DOWN | awk -F' ' '{print $2}' | sed 's/://' | while read intf; do
      /bin/echo $(timestamp) "Bringing up $intf" >> $LOG_FILE
      dhclient $intf
      ip route | grep -E "default.*$intf" | while read r; do
        if [[ "$r" != "$defaultRoute" ]];then
          if [ $(echo $r | grep -c metric) -eq 0 ]; then
            /bin/echo $(timestamp) "Deleting new route $r" >> $LOG_FILE
            ip route delete $r
          fi
        fi
      done
    done
    
    #Test for internet access
    curl --connect-timeout 10 https://management.azure.com
    if [ $? -ne 0 ]; then
      for i in $(seq 1 $(ip a | grep -E eth.: | grep -c -v eth0)); do
        MGT_INTF_NAME=eth$i
        cidrIp=$(ip route | grep "$MGT_INTF_NAME proto kernel scope link" | awk -F " " '{print $1}' | awk -F "/" '{print $1}')
        finalOct=$(echo $cidrIp | awk -F "." '{print $4}')
        gwOct=$(( finalOct + 1 ))
        mgtGwIp=$(echo $cidrIp | awk -v var="$gwOct" -F. '{$NF=var}1' OFS=.)
        echo -e "tempMgtGw=$mgtGwIp\ntempMgtIntf=$MGT_INTF_NAME" > $TEMP_MGMT_ROUTE
        /bin/echo $(timestamp) "Adding temporary default route for $MGT_INTF_NAME" >> $LOG_FILE
        ip route add 0.0.0.0/0 via $mgtGwIp dev $MGT_INTF_NAME metric 10
        curl --connect-timeout 10 https://management.azure.com
        if [ $? -eq 0 ]; then
          break
        else
          rm $TEMP_MGMT_ROUTE
          /bin/echo $(timestamp) "Removing temporary default route for $MGT_INTF_NAME" >> $LOG_FILE
          ip route delete 0.0.0.0/0 via $mgtGwIp dev $MGT_INTF_NAME metric 10
        fi
      done < <(ip a |  grep -E eth.:)
    fi
    
    curl --connect-timeout 10 "$AZ_BLOB_URL" -H 'x-ms-version : 2019-02-02' -o $HFE_FILE
    if [ $? -ne 0 ]; then
      /bin/echo $(timestamp) "Error:Could not copy HFE script from Azure Blob Container." >> $LOG_FILE
    else
      /bin/echo $(timestamp) "Copied HFE script from Azure Blob Container." >> $LOG_FILE
    fi;
    /bin/echo > $NAT_VAR
    /bin/echo "ACTIVE_SBC_VM_NAME=\"<ACTIVE_SBC_NAME>\"" >> $NAT_VAR
    /bin/echo "STANDBY_SBC_VM_NAME=\"<STANDBY_SBC_NAME>\"" >> $NAT_VAR
    /bin/echo "REMOTE_SSH_MACHINE_IP=\"<REMOTE_SSH_MACHINE_IP>\"" >> $NAT_VAR
    /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
    /bin/echo "CUSTOM_ROUTES=\"<CUSTOM_STATIC_ROUTES_CONFIG>\"" >> $NAT_VAR
    /bin/echo "ENABLE_PKT_DNS_QUERY=<0/1>" >> $NAT_VAR
    /bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE
    sudo chmod 744 $HFE_FILE
    /bin/echo $(timestamp) "Configured using HFE script - $HFE_FILE" >> $LOG_FILE
    /bin/echo $(timestamp) " ========================= Done ==========================================" >> $LOG_FILE
    nohup $HFE_FILE setup > /dev/null 2>&1 &


    The following table contains the values that you must update:


    Value to be updated | Description | Example

    <HFE_SCRIPT_LOCATION>

    The URL for HFE_AZ.sh within the container in the storage account.

    You can retrieve the URL by executing the following command:
    az storage blob url --account-name <STORAGE ACCOUNT NAME> --container-name <CONTAINER NAME> --name <BLOB NAME>

    Example: https://rbbnhfestorage.blob.core.windows.net/hfescripts/HFE_AZ.sh

    <ACTIVE_SBC_NAME> | The instance name for the Active SBC | rbbnSbc-1

    <STANDBY_SBC_NAME> | The instance name for the Standby SBC | rbbnSbc-2

    <REMOTE_SSH_MACHINE_IP>

    The SSH IP/IPs to allow access through the mgmt port.

    Note:

    • For multiple IPs, use a comma-separated list.
    • Add the IPs to the associated SGN. For more information, refer to Create Rules.

    Example: 43.26.27.29,35.13.71.112

    <SBC_PKT_PORT_NAME>

    This tells the HFE which PKT port it is communicating with. Can only be set as PKT0 or PKT1.

    Example: PKT0

    <CUSTOM_ROUTES>

    Enables the HFE script to add the specified custom static routes as part of its start-up process and to verify that these routes remain on the HFE throughout the uptime.

    Example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3

    <ENABLE_PKT_DNS_QUERY>

    Enables or disables support for the HFE to correctly forward DNS queries on the SBC PKT port.

    Example: 0


    Supported Images

    Ubuntu LTS images are the supported images for use with HFE setups.
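
    To see which Ubuntu images are currently available in your region, you can query the marketplace with a standard Azure CLI command; the publisher and offer values below match the image example used elsewhere in this section and may need adjusting for newer offers.

    Code Block
    titleList Ubuntu images (example)
    az vm image list --publisher Canonical --offer UbuntuServer --all --output table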

    Create HFE Nodes

    To create HFE nodes, perform the steps described below.

    Create Public IPs

    Create at least one Public IP for ETH0 of the PKT0 HFE Node. Optionally, create up to two additional Public IPs to access the MGMT interfaces on both HFE nodes. 

    Create the Public IPs by running the following command.

    Syntax

    Code Block
    az network public-ip create --name <PUBLIC IP NAME> --resource-group <RESOURCE-GROUP-NAME> --allocation-method Static

    Examples

    Code Block
    az network public-ip create --name pkt0-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static
     
    az network public-ip create --name hfe-pkt0-ip --resource-group RBBN-SBC-RG --allocation-method Static
     
    az network public-ip create --name pkt1-mgmt-ip --resource-group RBBN-SBC-RG --allocation-method Static

    Create NICs

    To create NICs, use the following command.

    Syntax

    Code Block
    az network nic create --name <NIC NAME>
                          --resource-group <RESOURCE-GROUP-NAME>
                          --vnet-name <VIRTUAL NETWORK NAME>
                          --subnet <SUBNET NAME>
                          --network-security-group <SECURITY GROUP NAME>

    Example

    Repeat the following command for each NIC.

    Code Block
    az network nic create --name hfe-pkt0-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt0RbbnSbcSG --public-ip-address hfe-pkt0-ip
    az network nic create --name hfe-pkt0-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt0-mgmt-ip
    az network nic create --name hfe-pkt0-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt0RbbnSbcSG
    
    az network nic create --name hfe-pkt1-nic0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt1RbbnSbcSG
    az network nic create --name hfe-pkt1-nic1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group mgmtRbbnSbcSG --public-ip-address pkt1-mgmt-ip
    az network nic create --name hfe-pkt1-nic2 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet SubnetMgmt --network-security-group pkt1RbbnSbcSG

    HFE 2.1

    For HFE 2.1, create a total of six NICs (three for each HFE node).

    The following table contains the extra flags necessary for each interface:

    HFE 2.1 - Extra Flags for Each Interface

    HFE | Interface | Flags
    PKT0 HFE | eth0 | --public-ip-address <PUBLIC IP NAME> --ip-forwarding --accelerated-networking true
    PKT0 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
    PKT0 HFE | eth2 | --ip-forwarding --accelerated-networking true
    PKT1 HFE | eth0 | --ip-forwarding --accelerated-networking true
    PKT1 HFE | eth1 | --public-ip-address <PUBLIC IP NAME> (optional) --accelerated-networking true
    PKT1 HFE | eth2 | --ip-forwarding --accelerated-networking true

    Create the VMs for HFE Instances

    Create a VM for each HFE instance. Use the following command syntax:

    Code Block
    az vm create --name <INSTANCE NAME>
                 --resource-group <RESOURCE_GROUP_NAME>
                 --admin-username <UserName>
                 --custom-data <USERDATA FILE>
                 --image <IMAGE NAME>
                 --location <LOCATION>
                 --size <INSTANCE SIZE>
                 --ssh-key-values <PUBLIC SSH KEY FILENAME>
                 --nics <ETH0 NIC> <ETH1 NIC> <ETH2 NIC>
                 --boot-diagnostics-storage <STORAGE ACCOUNT NAME>
                 --assign-identity <USER ASSIGNED MANAGED IDENTITY ID>


    The following table describes each flag:

    VM Creation - Flag Description

    Flag | Accepted Values / Example | Description
    name | rbbnSbc | Name of the instance; must be unique in the resource group.
    resource-group | RBBN-SBC-RG | Name of the Resource Group.
    admin-username | rbbn | The default user to log on.
    custom-data | File name, for example hfeUserData.txt | A file containing the HFE user data. Use this option for cloud-init enabled images. For more information, see Custom Data Example.
    image | Canonical:UbuntuServer:18.04-LTS:latest | The name of an image. For more information, see Supported Images.
    location | East US | The location to host the VM in. For more information, refer to Microsoft Azure Documentation.
    size | Standard_D8s_v3 | Indicates the instance size. In AWS this is known as 'Instance Type', and OpenStack calls this 'flavor'. For more information on instance sizes, refer to Microsoft Azure Documentation. Note: Maintain the same instance type for the HFE and the SBC. For HFE 2.1, each HFE node requires a minimum of three NICs.
    ssh-key-values | File name, for example azureSshKey.pub | A file that contains the public SSH key for accessing the linuxadmin user. You can retrieve the file by executing: ssh-keygen -y -f azureSshKey.pem > azureSshKey.pub. Note: The public key must be in openSSH form: ssh-rsa XXX
    nics | Space-separated list, for example hfe-pub hfe-mgmt-pkt0 hfe-pkt0 | The names of the NICs created in previous steps.
    boot-diagnostics-storage | Storage account name, for example sbcdiagstore | The storage account created in previous steps. This allows the use of the serial console.
    assign-identity | User Assigned Managed Identity ID, for example /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RBBN-SBC-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/rbbnUami | The ID of the User Assigned Managed Identity created in previous steps. You can retrieve it by executing: az identity show --name <IDENTITY NAME> --resource-group <RESOURCE-GROUP-NAME>

    HFE Routing 

    The HFE setup requires routes in Azure to force all the traffic leaving PKT0 and PKT1 to route back through the HFE.

    Info
    titleNote

    Consider the following when creating routes in Azure:

    • Custom routes are not given complete priority over the standard Azure routing. If there is a more specific Azure route, Azure directs traffic based on that default rule.
      • For example, if you create a custom route to 0.0.0.0/0 via the HFE, but send the traffic to a private IP within the same Virtual network, Azure does NOT route via the HFE eth2. Instead, the traffic flows directly from the SBC to the private IP. Refer to Virtual network traffic routing on the Microsoft Azure website for information on default rules.
      • If a custom route destination matches a default route destination, the SBC uses the custom route.
    • Ribbon recommends supplying a specific IP or CIDR of the endpoints as the destination IP, to prevent any routing issues with the Azure default routing rules.
    • Routes are applied to all traffic within a subnet. If multiple routes using the same destination address exist, the SBC can use any route.
      • If multiple SBC setups are using the same endpoint (for example, in an SLB/SBC setup), separate the SBCs into separate subnets and route tables to ensure they route to the correct HFE. Refer to Configure SBC SWe on Azure for SLB, for more information.


    To create the routes, perform the following steps:

    1. Create the route-table:
      Syntax

      Code Block
      az network route-table create --name <NAME> --resource-group <RESOURCE_GROUP_NAME>


      Example

      Code Block
      az network route-table create --name hfe-route-table --resource-group RBBN-SBC-RG


    2. Create two rules for PKT0 and PKT1:
      Syntax

      Code Block
      az network route-table route create --name <NAME>
                                          --resource-group <RESOURCE_GROUP_NAME>
                                          --address-prefix <CIDR OF ENDPOINT>
                                          --next-hop-type VirtualAppliance
                                          --route-table-name <ROUTE TABLE NAME>
                                          --next-hop-ip-address <IP FOR ETH3/ETH4 of HFE NODE>


      Example

      Code Block
      az network route-table route create --name pkt0-route --resource-group RBBN-SBC-RG --address-prefix 77.77.173.255/32 --next-hop-type VirtualAppliance --route-table-name hfe-route-table --next-hop-ip-address 10.2.6.5


    3. Attach the route table to the PKT0/PKT1 subnets:
      Syntax

      Code Block
      az network vnet subnet update --name <SUBNET NAME>
                                    --resource-group <RESOURCE_GROUP_NAME>
                                    --vnet-name <VIRTUAL NETWORK NAME>
                                    --route-table <ROUTE TABLE NAME>


      Example

      Code Block
      az network vnet subnet update --name pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --route-table pkt0-route


    Additional Steps for SBC HFE Setup for HFE 2.1

    To create the SBCs for HA with HFE setup, follow the instructions as described in Instantiate Standalone SBC on Azure, with the addition of the steps below.

    Configure NICs

    The SBC requires 4 NICs, each one attached to an individual subnet for MGMT, HA, PKT0, and PKT1.

    To create a standard NIC, use the following syntax:

    Code Block
    az network nic create --name <NIC NAME>
                          --resource-group <RESOURCE GROUP NAME>
                          --vnet-name <VIRTUAL NETWORK NAME>
                          --subnet <SUBNET NAME>
                          --network-security-group <SECURITY GROUP NAME>
                          --accelerated-networking true

    Create NIC for PKT0 and PKT1

    When creating the NICs for both SBC's PKT0 and PKT1 ports, include the flag --ip-forwarding for receiving the traffic sent to the HFE node.

    Example

    Code Block
    az network nic create --name sbc1-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
    az network nic create --name sbc1-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding
    
    az network nic create --name sbc2-pkt0 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt0 --network-security-group pkt0RbbnSbcSG --ip-forwarding
    az network nic create --name sbc2-pkt1 --resource-group RBBN-SBC-RG --vnet-name RibbonNet --subnet pkt1 --network-security-group pkt1RbbnSbcSG --ip-forwarding


    Info
    titleNote

    Because the HFE Node receives all the traffic, it is not necessary to create Public IP addresses for these ports, or add them to the NICs.

    Secondary IPs

    The HA SBCs require configuring Secondary IPs on both the PKT0 and PKT1 ports for both the Active and the Standby instances.

    Info
    • Before creating the Secondary IP configuration, create the NICs for the SBCs.
    • You cannot set the IP config name as "ipconfig1", because it is reserved for the primary IP configuration on a NIC.


    Create and attach Secondary IPs to a network interface by executing the following command:

    Syntax

    Code Block
    az network nic ip-config create --name <NAME> --nic-name <PKT0/PKT1 NIC NAME> --resource-group <RESOURCE_GROUP_NAME>


    Example

    Code Block
    az network nic ip-config create --name sbc1-pkt0-secIp --nic-name sbc1-pkt0 --resource-group RBBN-SBC-RG
    az network nic ip-config create --name sbc1-pkt1-secIp --nic-name sbc1-pkt1 --resource-group RBBN-SBC-RG
    
    az network nic ip-config create --name sbc2-pkt0-secIp --nic-name sbc2-pkt0 --resource-group RBBN-SBC-RG
    az network nic ip-config create --name sbc2-pkt1-secIp --nic-name sbc2-pkt1 --resource-group RBBN-SBC-RG

    SBC Userdata

    The SBCs in the HFE environment require the following user data: 

    SBC HFE - User Data

    Key | Allowed Values | Description

    CEName | N/A

    Specifies the actual CE name of the SBC instance.

    CEName Requirements:

    • Must start with an alphabetic character.
    • Contain only alphabetic characters and/or numbers; no special characters are allowed.
    • Cannot exceed 64 characters in length.

    ReverseNatPkt0 | True/False | Requires True for standalone SBC.
    ReverseNatPkt1 | True/False | Requires True for standalone SBC.

    SystemName | N/A

    Specifies the System Name of the SBC instances.

    SystemName Requirements:

    • Must start with an alphabetic character.
    • Contain only alphabetic characters and/or numbers; no special characters are allowed.
    • Cannot exceed 26 characters in length.
    • Must be the same on both peer CEs.

    SbcPersonalityType | isbc | The name of the SBC personality type for this instance. Currently, Ribbon supports only Integrated SBC (I-SBC).
    AdminSshKey | ssh-rsa ... | Public SSH key to access the admin user; must be in the form ssh-rsa ...

    ThirdPartyCpuAlloc | 0-4

    (Optional) Number of CPUs segregated for use with non-Ribbon applications.

    Restrictions:

    • 0-4 CPUs
    • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
    • The configuration must match between peer instances.

    ThirdPartyMemAlloc | 0-4096

    (Optional) Amount of memory (in MB) segregated for use with non-Ribbon applications.

    Restrictions:

    • 0-4096 MB
    • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
    • The configuration must match between peer instances.

    CERole | ACTIVE/STANDBY | Specifies the CE's role within the HA setup.
    PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | This value must be the Private IP Address of the Peer SBC's HA interface.
    ClusterIp | xxx.xxx.xxx.xxx | This value must also be the Private IP Address of the Peer SBC's HA interface.
    PeerCEName | N/A | Specifies the actual CE name of the Peer SBC instance in the HA setup.
    SbcHaMode | 1to1 | Specifies the mode of the HA configuration. Currently, Azure supports only 1:1 HA.
    PeerInstanceName | N/A | Specifies the name of the Peer Instance in the HA setup. Note: This is not the CEName or the SystemName.
    Pkt0HfeInstanceName | N/A | Specifies the instance name of the PKT0 HFE Node. Note: Applicable only for HFE 2.1.
    Pkt1HfeInstanceName | N/A | Specifies the instance name of the PKT1 HFE Node. Note: Applicable only for HFE 2.1.


    Create a JSON file using the following structure:

    Code Block
    {   
      "CEName" : "<SBC CE NAME>",
      "ReverseNatPkt0" : "True",
      "ReverseNatPkt1" : "True",
      "SystemName" : "<SYSTEM NAME>",
      "SbcPersonalityType": "isbc",
      "AdminSshKey" : "<ssh-rsa ...>",
      "ThirdPartyCpuAlloc" : "<0-4>",
      "ThirdPartyMemAlloc" : "<0-4096>",
      "CERole" : "<ACTIVE/STANDBY>",
      "PeerCEHa0IPv4Address" : "<PEER HA IP ADDRESS>",
      "ClusterIp" : "<PEER HA IP ADDRESS>",
      "PeerCEName" : "<PEER SBC CE NAME>",
      "SbcHaMode" : "1to1",
      "PeerInstanceName" : "<PEER INSTANCE NAME>",
      "Pkt0HfeInstanceName" : "<PKT0 HFE NODE INSTANCE NAME>",
      "Pkt1HfeInstanceName" : "<PKT1 HFE NODE INSTANCE NAME>" 
    }


    Note
    titleCaution
    • The SBC requires user data in a valid JSON format. If the user-data is not a valid JSON, the instance shuts down immediately.
    • You cannot update user data on VMs in the Azure framework.
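
    Because an invalid JSON file causes the instance to shut down immediately, it is worth validating the user data file before passing it to az vm create. The file name below is a placeholder; this is a standard Python invocation, not a Ribbon-specific tool.

    Code Block
    # Validate the user data JSON before creating the SBC VM
    python3 -m json.tool sbcUserData.json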

    Configure PKT Ports

    Configure the PKT ports using the SBC CLI.

    Please note: This configuration needs to be added after the instance has been created.

    Example

    Code Block
    titleConfigure PKT Ports example
    admin@sbc-10.2.2.12> conf
    Entering configuration mode private
    [ok][2019-10-04 09:04:15]
     
    [edit]
    admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG1 ipInterface LIF1 portName pkt0 ipPublicVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 mode inService state enabled
    [ok][2019-10-04 09:04:46]
     
    [edit]
    admin@sbc-10.2.2.12% commit
    Commit complete.
    [ok][2019-10-04 09:04:50]
     
    [edit]
    admin@sbc-10.2.2.12% set addressContext default ipInterfaceGroup LIG2 ipInterface LIF2 portName pkt1 ipPublicVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 mode inService state enabled
    [ok][2019-10-04 09:04:58]
     
    [edit]
    admin@sbc-10.2.2.12% com
    Commit complete.
    [ok][2019-10-04 09:05:00]
     
    [edit]
    admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT0 SUBNET GATEWAY> LIG1 LIF1 preference 100
    [ok][2019-10-04 09:05:11]
     
    [edit]
    admin@sbc-10.2.2.12% com
    Commit complete.
    [ok][2019-10-04 09:05:15]
     
    [edit]
    admin@sbc-10.2.2.12% set addressContext default staticRoute 0.0.0.0 0 <PKT1 SUBNET GATEWAY> LIG2 LIF2 preference 100
    [ok][2019-10-04 09:05:22]
     
    [edit]
    admin@sbc-10.2.2.12% com
    Commit complete.
    [ok][2019-10-04 09:05:24]
     
    [edit]
    admin@sbc-10.2.2.12%



    Info
    The gateway IP address for the subnet is X.X.X.1


    The correct SBC CLI configuration will look similar to the following:

    Code Block
    admin@sbc-10.2.2.12> show table addressContext default staticRoute
                                   IP
                                   INTERFACE  IP
    DESTINATION                    GROUP      INTERFACE              CE
    IP ADDRESS   PREFIX  NEXT HOP  NAME       NAME       PREFERENCE  NAME
    -----------------------------------------------------------------------
    0.0.0.0      0       10.2.3.1  LIG1       LIF1       100         -
    0.0.0.0      0       10.2.4.1  LIG2       LIF2       100         -
    [ok][2019-10-04 09:16:47]
    admin@sbc-10.2.2.12>
    admin@sbc-10.2.2.12> show table addressContext default ipInterfaceGroup
     
                                                                                                                                                                   IP      IP           IP
                          CE    PORT  IP               ALT IP   ALT                        DRYUP             BW           VLAN             IP VAR    PREFIX VAR    PUBLIC  VAR  PREFIX  PUBLIC
    NAME  IPSEC     NAME  NAME  NAME  ADDRESS  PREFIX  ADDRESS  PREFIX  MODE       ACTION  TIMEOUT  STATE    CONTINGENCY  TAG   BANDWIDTH  V4        V4            VAR V4  V6   VAR V6  VAR V6
    ------------------------------------------------------------------------------------------------------------
    LIG1  disabled  LIF1  -     pkt0  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF2.IPV4  IF2.PrefixV4  -       -    -       -
    LIG2  disabled  LIF2  -     pkt1  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF3.IPV4  IF3.PrefixV4  -       -    -       -
    [ok][2019-10-04 09:18:35]


    Sample Meta Variable Table

    Example Meta Variable table for a SBC HA is provided below:


    Code Block
    titleClick here to expand
    collapsetrue
    admin@act-10.2.2.127> show table system metaVariable
    CE NAME         NAME                  VALUE
    -----------------------------------------------------
    act-10.2.2.127  IF0.GWV4              10.2.0.1
    act-10.2.2.127  IF0.IPV4              10.2.0.9
    act-10.2.2.127  IF0.Port              Mgt0
    act-10.2.2.127  IF0.RNat              True
    act-10.2.2.127  IF1.GWV4              10.2.2.1
    act-10.2.2.127  IF1.IPV4              10.2.2.127
    act-10.2.2.127  IF1.Port              Ha0
    act-10.2.2.127  IF1.RNat              True
    act-10.2.2.127  IF2.GWV4              10.2.3.1
    act-10.2.2.127  IF2.IPV4              10.2.3.10
    act-10.2.2.127  IF2.Port              Pkt0
    act-10.2.2.127  IF2.RNat              True
    act-10.2.2.127  IF3.GWV4              10.2.4.1
    act-10.2.2.127  IF3.IPV4              10.2.4.10
    act-10.2.2.127  IF3.Port              Pkt1
    act-10.2.2.127  IF3.RNat              True
    act-10.2.2.127  IF0.FIPV4             137.117.73.22
    act-10.2.2.127  IF0.PrefixV4          24
    act-10.2.2.127  IF1.PrefixV4          24
    act-10.2.2.127  IF2.PrefixV4          24
    act-10.2.2.127  IF3.PrefixV4          24
    act-10.2.2.127  HFE_IF2.FIPV4         52.168.34.216
    act-10.2.2.127  HFE_IF3.FIPV4         10.2.2.7
    act-10.2.2.127  HFE_IF2.IFName        IF_HFE_PKT0
    act-10.2.2.127  HFE_IF3.IFName        IF_HFE_PKT1
    act-10.2.2.127  secondaryIPList.Pkt0  ['10.2.3.10']
    act-10.2.2.127  secondaryIPList.Pkt1  ['10.2.4.10']
    sby-10.2.2.227  IF0.GWV4              10.2.0.1
    sby-10.2.2.227  IF0.IPV4              10.2.0.14
    sby-10.2.2.227  IF0.Port              Mgt0
    sby-10.2.2.227  IF0.RNat              True
    sby-10.2.2.227  IF1.GWV4              10.2.2.1
    sby-10.2.2.227  IF1.IPV4              10.2.2.227
    sby-10.2.2.227  IF1.Port              Ha0
    sby-10.2.2.227  IF1.RNat              True
    sby-10.2.2.227  IF2.GWV4              10.2.3.1
    sby-10.2.2.227  IF2.IPV4              10.2.3.10
    sby-10.2.2.227  IF2.Port              Pkt0
    sby-10.2.2.227  IF2.RNat              True
    sby-10.2.2.227  IF3.GWV4              10.2.4.1
    sby-10.2.2.227  IF3.IPV4              10.2.4.10
    sby-10.2.2.227  IF3.Port              Pkt1
    sby-10.2.2.227  IF3.RNat              True
    sby-10.2.2.227  IF0.FIPV4             40.76.8.39
    sby-10.2.2.227  IF0.PrefixV4          24
    sby-10.2.2.227  IF1.PrefixV4          24
    sby-10.2.2.227  IF2.PrefixV4          24
    sby-10.2.2.227  IF3.PrefixV4          24
    sby-10.2.2.227  HFE_IF2.FIPV4         52.168.34.216
    sby-10.2.2.227  HFE_IF3.FIPV4         10.2.2.7
    sby-10.2.2.227  HFE_IF2.IFName        IF_HFE_PKT0
    sby-10.2.2.227  HFE_IF3.IFName        IF_HFE_PKT1
    sby-10.2.2.227  secondaryIPList.Pkt0  ['10.2.3.11']
    sby-10.2.2.227  secondaryIPList.Pkt1  ['10.2.4.11']
    [ok][2019-10-07 11:48:16]
    admin@act-10.2.2.127>

    Add New Endpoints to UAC

    To add a new endpoint to the Public Endpoint side with HFE1 (for example, 52.52.52.52 is the new endpoint IP), perform the following steps (an Azure CLI example follows the list):

    1. Add the endpoint IP to the outbound security group.

      Caption
      0Figure
      1Add outbound security rule

      Image Added

    2. Add the endpoint IP to the PKT0 subnet custom route table. The route table is named $instanceBaseName.

      Caption
      0Figure
      1UAC - Route Table - Add Route

      Image Added

      Select the Next hop type Virtual Appliance, and set the Next hop address to the HFE eth2 IP ($hfePkt0OutIp).
    3. Add the endpoint IP to the Inbound Security Rule of the security group of nic1 of HFE1, and of PKT0 of the SBC.
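
    The same steps can be scripted with the Azure CLI. This is a minimal sketch, not the exact commands from the deployment templates: the resource group, NSG name, rule name, and priority (<resourceGroup>, <pkt0NsgName>, allow-new-endpoint, 200) are assumed placeholders, <instanceBaseName> is the route table created for the deployment, and <hfePkt0OutIp> is the eth2 IP of the PKT0 HFE node. Adjust the protocol and port ranges to match your security policy.

    Code Block
    # Allow the new endpoint into the relevant network security group (rule name/priority are illustrative)
    az network nsg rule create \
      --resource-group <resourceGroup> \
      --nsg-name <pkt0NsgName> \
      --name allow-new-endpoint \
      --priority 200 \
      --direction Inbound \
      --access Allow \
      --protocol '*' \
      --source-address-prefixes 52.52.52.52/32 \
      --destination-port-ranges '*'

    # Route traffic destined for the endpoint through the HFE (next hop = HFE eth2 IP)
    az network route-table route create \
      --resource-group <resourceGroup> \
      --route-table-name <instanceBaseName> \
      --name new-endpoint-route \
      --address-prefix 52.52.52.52/32 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address <hfePkt0OutIp>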

    Add New Endpoints to UAS

    Add the endpoint IP (for example, 10.2.3.9) to the PKT1 subnet custom route table. The route table is named $instanceBaseName.

    Caption
    0Figure
    1UAS - Route Table - Add Route

    Image Added

    Select the Next hop type Virtual Appliance, and set the Next hop address to the HFE eth2 IP ($hfePkt1OutIp); an Azure CLI example follows.
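
    As with the UAC case, the route can also be added with the Azure CLI; this is a sketch with assumed placeholders (<resourceGroup>, <instanceBaseName>, and <hfePkt1OutIp> for the eth2 IP of the PKT1 HFE node).

    Code Block
    # Route traffic destined for the new UAS endpoint through the PKT1 HFE node
    az network route-table route create \
      --resource-group <resourceGroup> \
      --route-table-name <instanceBaseName> \
      --name new-uas-endpoint-route \
      --address-prefix 10.2.3.9/32 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address <hfePkt1OutIp>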

    Optional HFE Configuration


    Updating HFE Variables

    Azure does not support updating customData after a VM is created. To change an HFE variable on a deployed HFE node, add the new value to /opt/HFE/natVars.user and reboot the node; values in /opt/HFE/natVars.user override the corresponding variables set in customData. See Updating Remote SSH IPs below for a worked example of this procedure.


    HFE Node Logging

    The HFE generates the following logs under /opt/HFE/log/:

    • cloud-init-nat.log: Logs generated from the initial configuration.
    • HFE_conf.log: Logs generated from the setup of the HFE node. They contain information about:
      • SBC instance names
      • The IPs for allowing SSH into the HFE node
      • The configured zone
      • The SBC IPs being used to forward traffic to
      • Iptables rules
      • Routing rules
    • HFE_conf.log.prev: A copy of the previous HFE_conf.log.
    • HFE.log
      • Contains messages about any switchover action, as well as connection errors. The messages generated are as follows:
        1. Connection error detected to Active SBC: <<IP>>. Attempting switchover.
          • The HFE has lost connection to the active SBC and is now performing the switchover action.
        2. Connection error ongoing - No connection to SBC PKT ports from HFE
          • This error means that a switchover has been attempted, but no connection could be established to the new SBC.
          • The HFE node then continually switches between the SBCs until a connection is established
          • This usually means there is a network issue or a configuration issue on the SBCs.
        3. Switchover from old Active <<Old Active SBC IP>> to new Active <<New Active SBC IP>> complete. Connection established.
          • The switchover action is complete and connection has been established to the 'Active' SBC
        4. Initial HFE startup configuration complete. Successfully connected to <<SBC Instance Name>>
          • The HFE node has successfully connected to the active SBC following a boot.
      • This log is rotated when it reaches 250MB:
        • Up to four previous logs are saved.
        • The previous logs are compressed to save disk space.
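
    To keep an eye on switchover activity, the HFE log can simply be followed; the commands below are illustrative and rely only on the log path and message text listed above.

    Code Block
    # Follow the HFE log in real time
    tail -f /opt/HFE/log/HFE.log
    # Count completed switchovers recorded in the current log
    grep -c "Switchover from old Active" /opt/HFE/log/HFE.log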

    Adding Custom Static Routes to HFE

    For specialized deployments, users may need to add specific custom static routes to the HFE at the OS level. The HFE script supports this through the HFE variable CUSTOM_ROUTES: the script adds these routes as part of its start-up process and verifies that they remain present on the HFE for as long as it is up.

    CUSTOM_ROUTES is a comma-separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.

    To add CUSTOM_ROUTES to the HFE customData, add the following line below /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR. For example:

    Code Block
    /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
    /bin/echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" >> $NAT_VAR
    /bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE

    If the HFE is already deployed, add the variable to /opt/HFE/natVars.user instead. For example:

    Code Block
    echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" | sudo tee -a /opt/HFE/natVars.user
    Info

    For <INTERFACE_NAME>, always use the standard eth0, eth1, and so on, even if the Linux distribution does not use this naming convention. The HFE_AZ.sh script determines the correct interface on which to add the route.
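
    As a worked illustration on an already-deployed HFE (the destination networks and interfaces below are example values, not defaults), append the variable, reboot so the start-up script re-applies the configuration, and then confirm the routes are installed:

    Code Block
    echo "CUSTOM_ROUTES=\"192.168.50.0/24_eth1, 192.168.60.0/28_eth2\"" | sudo tee -a /opt/HFE/natVars.user
    sudo reboot
    # after the node is back up
    ip route show | grep -E "192.168.50.0/24|192.168.60.0/28"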

    Creating a HFE Sysdump

    The HFE_AZ.sh script can create an archive of useful logs to help with debugging (similar to the SBC sysdump). Run the following command to collect the logs:

    Code Block
    sudo /opt/HFE/HFE_AZ.sh sysdump

    The following details are collected:

    1. Output of:
      • Interfaces
      • Routes
      • IPtables
      • dmesg
      • conntrack count
      • conntrack extended list
      • The VM Azure metadata
      • journalctl errors
      • dhclient logs
      • systemd-networkd logs
    2. The logs:
      • syslog
      • waagent logs
      • cloud-init logs
    3. /opt/HFE/* (without previous sysdumps)
    4. All user bash history

    The sysdump archives are stored in .tar.gz format under /opt/HFE/sysdump/.
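
    For example, to generate a sysdump and confirm where the archive was written (the archive file name is chosen by the script, so the listing is only for orientation):

    Code Block
    sudo /opt/HFE/HFE_AZ.sh sysdump
    ls -lh /opt/HFE/sysdump/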

    Handling Multiple Remote SSH IPs to Connect to HFE Node

    The following section describes how to allow multiple remote IPs to SSH to the HFE node, and how to update existing instances to add more SSH IPs.

    Info
    titleNote

    Ensure the REMOTE_SSH_MACHINE_IP is not set to an IP address from which call traffic originates; doing so can break the HFE logic so that traffic fails to reach the SBC.

    Initial Orchestration

    During orchestration, you can supply multiple IP addresses to the appropriate variable as a comma-separated list, for example 10.0.0.1,10.0.0.2,10.0.0.3. The following table lists the variable to set for each orchestration type:

    Caption
    0Table
    1Initial Orchestration

    Cloud    Orchestration Type            Variable Name
    Azure    Manual creation using CLI     REMOTE_SSH_MACHINE_IP (in customData)
    Azure    Terraform                     remote_ssh_ip
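
    For manual creation using the CLI, this means the comma-separated list goes into REMOTE_SSH_MACHINE_IP in the HFE customData. A minimal sketch, reusing the example IPs above and the customData echo pattern used elsewhere in this section:

    Code Block
    /bin/echo "REMOTE_SSH_MACHINE_IP=\"10.0.0.1,10.0.0.2,10.0.0.3\"" >> $NAT_VAR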

    Updating Remote SSH IPs

    The following steps describe the procedure to update the Remote SSH IPs on Azure.

    Info
    titleNote

    To add a new Remote SSH Machine IP, you must supply the full list of IPs for which the routes need to be created.

    Info
    titleNote

    The following procedure results in a network outage because the HFE node must be rebooted to pick up the updated list.

    Azure

    Azure does not support updating Custom Data after a VM is created. To update an HFE variable, use the following procedure:

    1. Log on to the HFE node as the user specified during instance creation.
    2. Enter the updated variable in /opt/HFE/natVars.user. For example:

      Code Block
      echo "REMOTE_SSH_MACHINE_IP=\"10.27.0.54,10.36.9.6\"" | sudo tee -a /opt/HFE/natVars.user

    3. Reboot the HFE:

      Code Block
      sudo reboot

    Info
    titleNote

    Any variable added to /opt/HFE/natVars.user overwrites the value set for that variable in custom data. To add a new Remote SSH Machine IP, ensure you supply the complete list of IPs for which the routes need to be created.

    Enabling PKT DNS Support on HFE

    The DNS queries on the SBC PKT port are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY enables the HFE to forward these requests correctly.

    To enable the PKT DNS Support on a new HFE setup, add "ENABLE_PKT_DNS_QUERY=1" to the customData, below SBC_PKT_PORT_NAME. For example:

    Code Block
    /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
    /bin/echo "ENABLE_PKT_DNS_QUERY=1" >> $NAT_VAR
    /bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE

    To enable the PKT DNS Support option on an already configured HFE setup:

    1. Log on to the HFE node as an RBBN user.
    2. Add the natvar ENABLE_PKT_DNS_QUERY to /opt/HFE/natVars.user with the value 1.

      Code Block
      echo "ENABLE_PKT_DNS_QUERY=1" | sudo tee -a /opt/HFE/natVars.user


    3. Reboot the HFE.

      Code Block
      sudo reboot