HFE Node Network Setup

The High-Availability Front End (HFE) node is a public-facing node that allows sub-second switchover between Active and Standby SBC instances of an HA pair, as it negates the need for any IP reassignment.

GCP requires each interface of an instance to be in a separate Virtual Private Cloud (VPC) network. Create a minimum of six VPCs for a full HFE setup (assuming all management interfaces for the SBC and the HFE nodes are in the same VPC).
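If you prefer the CLI, each VPC and its subnet can be created with gcloud; the sketch below shows one of the six networks, with all names, the region, and the CIDR range as placeholder values:

```shell
# Create one VPC and its subnet; repeat for each of the six networks.
# The network/subnet names, region, and CIDR range are placeholders.
gcloud compute networks create hfe-pkt0-public \
    --subnet-mode=custom

gcloud compute networks subnets create hfe-pkt0-public-subnet \
    --network=hfe-pkt0-public \
    --region=us-east4 \
    --range=10.10.0.0/24
```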

HFE 2.1 Network Setup

HFE 2.1 has two HFE nodes, each responsible for a different type of traffic:

  • Untrusted public traffic to the SBC (for PKT0). In this document, this node is referred to as the "PKT0 HFE node".
  • Trusted traffic from the SBC to other trusted networks (from PKT1). In this document, this node is referred to as the "PKT1 HFE node".

Both HFE nodes require three interfaces, as follows:

| Standard/Ubuntu Interface Name | NIC | PKT0 HFE node Function | PKT1 HFE node Function | Requires External IP? |
| --- | --- | --- | --- | --- |
| eth0 / ens4 | nic0 | Public interface for SBC PKT0. | Private interface for SBC PKT1; only instances in the same subnet can connect. | Yes (only on the PKT0 HFE node) |
| eth1 / ens5 | nic1 | Management interface to the HFE. | Management interface to the HFE. | Optional |
| eth2 / ens6 | nic2 | Interface to SBC PKT0; ensure that the interface is in the same VPC and subnet as SBC PKT0. | Interface to SBC PKT1; ensure that the interface is in the same VPC and subnet as SBC PKT1. | No |
Note

To use an HFE 2.1 environment, the startup-script for the SBCs requires the fields Pkt0HfeInstanceName and Pkt1HfeInstanceName. For more information, refer to the table in the section "User Data" on the page Instantiating SBC SWe in GCP.


Supported OS for HFE

For HFE nodes, Ribbon supports the following operating systems:

  • Ubuntu 18.04 LTS
  • Debian 9/10
  • CentOS 8
  • Red Hat Enterprise Linux 8


Note

The mandatory requirements for a supported OS are as follows:

  • The package manager is apt or yum. If any required package is not in the repository, the HFE script fails with the message "Required packages <Package Name> is missing" in HFE_conf.log.
  • The application "Google Metadata Script Runner" is available on the system. For more information, refer to https://cloud.google.com/compute/docs/startupscript.

Prerequisites for Creating the HFE Node

Ensure that the following are configured before creating the HFE node:

Manual HFE Node Instance Creation

This section describes the manual creation of HFE nodes.

HFE 2.1 - Split HFE

  1. Navigate to Compute Engine > VM instances.
  2. Click CREATE INSTANCE.
  3. Select Name, Region, and Zone.
  4. Select Machine Type:
    • General-purpose
    • Series - N1
    • n1-standard-4

      Note

      The values for Machine Type are for illustration purposes only. For recommended configurations, refer to Instance Types Supported for SBC SWe in GCP.


      HFE 2.1 - Basic Configuration

  5. To configure the OS, select Boot Disk - Change:
    1. Select a supported OS (for example, Ubuntu 18.04 LTS).
    2. Set the Size as 10 (GB).

      HFE 2.1 - Boot Disk

  6. In Identity and API access - Service account, select the Service account created earlier.
  7. In the Security tab, update the SSH Keys to include:
    1. SSH Key for user: Insert ssh-rsa ... <user-name>.
    2. Check Block project-wide SSH keys.

      HFE 2.1 - Security

  8. Update the Metadata to include the startup script for the HFE. For HFE 2.1, a copy of the startup script is available in GCE - HFE 2.1 Startup Script. Provide appropriate values for the following variables:
    • HFE_SCRIPT_LOCATION - The location of the HFE script stored in Google storage. For more information, refer to Create a Bucket in Google Cloud Storage for HFE Script Upload.
    • ACTIVE_SBC_NAME - Instance name of the Active SBC.
    • STANDBY_SBC_NAME - Instance name of the Standby SBC.
    • REMOTE_SSH_MACHINE_IP - The IP address of the remote machine to SSH from on the Management Interface. You can provide multiple IP addresses as a comma-separated list. For example, 10.0.0.1,10.0.0.2,10.0.0.3. 
    • ZONE - The Zone in which the SBCs are configured.
    • SBC_PKT_PORT_NAME - Indicates whether this HFE node handles traffic for PKT0 or PKT1. The accepted values are PKT0 and PKT1. Use this variable only for HFE 2.1.
  9. Update the Network Interfaces on the HFE by selecting the Networking tab.
  10. Update the Network interfaces by configuring the three interfaces described in HFE 2.1 Network Setup:
    1. For the PKT0 HFE node, add a Network interface for the public interface that receives traffic for SBC PKT0:
      1. Select the VPC created for the HFE public-facing PKT0 traffic.
      2. Select the Subnet created for the HFE public-facing PKT0 traffic.
      3. Set the Primary internal IP as Ephemeral (Automatic).
      4. Set the External IP as one of the static External IPs created earlier.
      5. Set IP forwarding to On.

        HFE 2.1 - HFE PKT0 Node - Public Interface - Network Interface - PKT0

    2. For the PKT1 HFE node, add a Network interface for the private interface that receives traffic for SBC PKT1:
      1. Select the VPC created for the HFE private-facing PKT1 traffic.
      2. Select the Subnet created for the HFE private-facing PKT1 traffic.
      3. Set the Primary internal IP as Ephemeral (Automatic).
      4. Set External IP as None.
      5. Set IP forwarding to On.

        HFE 2.1 - HFE PKT1 Node - Private Interface - Network Interface - PKT1

    3. Add a Network interface for the Management interface to the HFE:
      1. Select the VPC created for SBC MGT0.
      2. Select the Subnet created for SBC MGT0.
      3. Set the Primary internal IP as Ephemeral (Automatic).
      4. Set the External IP as Ephemeral (Automatic).

        HFE 2.1 - Management Interface - Network Interface - MGT0

    4. For the PKT0 HFE node, add a Network interface for the interface that communicates with SBC PKT0:
      1. Select the VPC created for SBC PKT0.
      2. Select the Subnet created for SBC PKT0.
      3. Set the Primary internal IP as Ephemeral (Automatic).
      4. Set the External IP as None.

        HFE 2.1 - HFE PKT0 Node - Network Interface - PKT0

    5. For the PKT1 HFE node, add a Network interface for the interface that communicates with SBC PKT1:
      1. Select the VPC created for SBC PKT1.
      2. Select the Subnet created for SBC PKT1.
      3. Set the Primary internal IP as Ephemeral (Automatic).
      4. Set the External IP as None.

        HFE 2.1 - HFE PKT1 Node - Network Interface - PKT1

  11. Click CREATE.
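For reference, the console steps above can also be performed with a single gcloud command. This sketch creates the PKT0 HFE node with the three interfaces in the nic0/nic1/nic2 order described earlier; every resource name, the zone, the service account, and the startup-script path are assumptions for illustration:

```shell
# Sketch: create the PKT0 HFE node from the CLI. Every name, the zone,
# the service account, and the startup-script path are placeholders.
gcloud compute instances create hfe-pkt0-node \
    --zone=us-east4-b \
    --machine-type=n1-standard-4 \
    --image-family=ubuntu-1804-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --can-ip-forward \
    --service-account=hfe-sa@my-project.iam.gserviceaccount.com \
    --metadata-from-file=startup-script=hfe-startup.sh \
    --network-interface=network=hfe-pkt0-public,subnet=hfe-pkt0-public-subnet,address=203.0.113.10 \
    --network-interface=network=sbc-mgt0,subnet=sbc-mgt0-subnet \
    --network-interface=network=sbc-pkt0,subnet=sbc-pkt0-subnet,no-address
```

The second interface omits `no-address`, so it receives an ephemeral external IP, matching the management interface settings above.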


Warning

Since the SBCs are not yet configured, errors are logged in the file HFE.log. After the HFE node instance is created, stop the instance until the SBCs are created and configured.
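The instance can be stopped from the CLI as well; the instance name and zone below are placeholders:

```shell
# Stop the HFE node until the SBCs are created and configured.
gcloud compute instances stop hfe-pkt0-node --zone=us-east4-b
```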

Warning

The HFE_GCE.sh script fails and SSH to the management interface does not work until the SBCs are created (because the script cannot read the required information from the SBCs); until then, access the HFE node via nic0.


Startup Script - Example

Note

The term "user-data" is deprecated; use the term "startup script" to refer to the code snippet below.

Refer to the following page for startup script examples:
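As an illustration only, the variable block of an HFE 2.1 startup script follows the pattern below. Every value is a placeholder, and the real script writes to natVars.input under /opt/HFE rather than a temporary file:

```shell
# Sketch of the HFE 2.1 startup-script variable block.
# All values are illustrative placeholders.
NAT_VAR="$(mktemp)"   # the real script uses natVars.input

/bin/echo "HFE_SCRIPT_LOCATION=\"gs://my-hfe-bucket/HFE_GCE.sh\"" >> "$NAT_VAR"
/bin/echo "ACTIVE_SBC_NAME=\"sbc-active-1\"" >> "$NAT_VAR"
/bin/echo "STANDBY_SBC_NAME=\"sbc-standby-1\"" >> "$NAT_VAR"
/bin/echo "REMOTE_SSH_MACHINE_IP=\"10.0.0.1,10.0.0.2\"" >> "$NAT_VAR"
/bin/echo "ZONE=\"us-east4-b\"" >> "$NAT_VAR"
/bin/echo "SBC_PKT_PORT_NAME=\"PKT0\"" >> "$NAT_VAR"   # PKT1 on the PKT1 HFE node
```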

HFE Network Security Configuration

To use the HFE, add specific rules at the Google network level. Ensure that routes and firewall rules are configured on the VPC networks containing the subnets in which the PKT0 and PKT1 interfaces of the SBC are located.

Note

Refer to Firewall Rules Overview for complete firewall details.


Google Network Routes

Create routes so that return traffic from the PKT0 and PKT1 interfaces passes back through the HFE:

  1. Go to VPC Networks.
  2. Click on the VPC used for the PKT0 interface, and click Routes.
  3. Click Add route:
    1. Provide a Name.
    2. Set Destination IP range to 0.0.0.0/0.
    3. Set Priority to a value lower than 1000 (the priority assigned to the default routes).
    4. Set an Instance tag. This tag specifies which instances use this route.

      Note

      When creating an instance, set this tag as the Network Tag.

    5. Set the Next Hop using one of the following options:
      1. Specify an instance - Use an instance already created with an interface in this VPC. However, if the instance is deleted and another is created with the same name, traffic is routed to the new instance.

        Note

        To use the "Specify Instance" method, create the HFE instance before specifying it.

      2. Specify an IP address - Specify the private IP address of the nic3 / nic4 interface on the HFE (for the PKT0 and PKT1 VPCs, respectively).

        Note

        You cannot edit VPC route rules on the Google Cloud Platform.

      Create a route

    6. Repeat for the VPC used for the SBC PKT1 interface.
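The same route can be created with the gcloud CLI. The sketch below targets the PKT0 VPC using the specify-an-IP-address method; all names, the tag, and the next-hop IP are placeholders:

```shell
# Sketch: default route that returns SBC PKT0 traffic through the HFE.
# Network, route name, tag, and next-hop IP are placeholders.
gcloud compute routes create sbc-pkt0-via-hfe \
    --network=sbc-pkt0 \
    --destination-range=0.0.0.0/0 \
    --priority=900 \
    --tags=sbc-pkt0-route \
    --next-hop-address=10.0.1.5
```

Repeat with the PKT1 VPC and the corresponding HFE interface IP.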

Google Firewall Rules

By default, the Google network drops all packets unless firewall rules are configured. Ensure that rules are set for the VPCs to allow traffic from specific locations to reach the instances.

  1. Go to VPC networks.
  2. Click on the VPCs used for the PKT0 port on the SBCs, and click Firewall Rules.
  3. Two types of firewall rules are required for using the HFE:
    1. Ingress and egress rules that allow all traffic (any protocol and port) from the source IP(s) of the traffic:
      1. Set Targets to "All instances in the network".
      2. The Source filter should be "IP ranges". 
      3. Set Source IP ranges as the source IPs for the traffic.
      4. Set Protocols and ports as "Allow all".

    2. An ingress and egress rule to allow all traffic from within the subnet to communicate (default):
      1. Set Targets to "All instances in the network".
      2. The Source filter should be "IP ranges". 
      3. Set Source IP ranges as the "subnet CIDR".
      4. Set Protocols and ports as "Allow all".
  4. Repeat for the VPC used for the SBC PKT1 interface.

    VPC network details
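For reference, equivalent rules can be created with the gcloud CLI. In this sketch the network name and CIDRs are placeholders, and only the ingress direction is shown; create matching egress rules with --direction=EGRESS and --destination-ranges:

```shell
# Sketch: firewall rules for the SBC PKT0 VPC (placeholder names/CIDRs).
# Rule 1: allow all traffic from the external traffic sources.
gcloud compute firewall-rules create sbc-pkt0-allow-sources \
    --network=sbc-pkt0 \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=all \
    --source-ranges=198.51.100.0/24

# Rule 2: allow all traffic from within the subnet.
gcloud compute firewall-rules create sbc-pkt0-allow-subnet \
    --network=sbc-pkt0 \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=all \
    --source-ranges=10.0.1.0/24
```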

Adding Custom Static Routes to HFE

For specialized deployments, you may need to add custom static routes to the HFE at the OS level. The HFE script supports this through the HFE variable CUSTOM_ROUTES: it adds these routes as part of its start-up process and verifies that they remain on the HFE throughout its uptime.

CUSTOM_ROUTES is a comma-separated list of values in the form <DESTINATION_IP_CIDR>_<INTERFACE_NAME>. For example: 1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3.
To add CUSTOM_ROUTES to the HFE startup-script, add the following line below /bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR. For example:

/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "CUSTOM_ROUTES=\"<DESTINATION_IP_CIDR>_<INTERFACE_NAME>, <DESTINATION_IP_CIDR>_<INTERFACE_NAME>\"" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE

For <INTERFACE_NAME>, always use the standard names eth0, eth1, and so on, even if the Linux distribution does not use this naming convention. The HFE_GCE.sh script determines the actual interface on which to add the route.
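The format can be sanity-checked before editing the startup-script. The sketch below (placeholder values; not part of the HFE script) splits a CUSTOM_ROUTES string into its entries and prints the route each one describes, without touching the routing table:

```shell
# Illustrative only: show the routes a CUSTOM_ROUTES value describes.
CUSTOM_ROUTES="1.1.1.0/26_eth1, 2.2.2.0/28_eth2, 3.3.3.4/32_eth3"

parse_custom_routes() {
    # Split on commas, then split each entry at the last underscore.
    echo "$1" | tr ',' '\n' | while read -r entry; do
        entry="$(echo "$entry" | tr -d ' ')"   # tolerate spaces after commas
        dest="${entry%_*}"                     # e.g. 1.1.1.0/26
        iface="${entry##*_}"                   # e.g. eth1
        echo "ip route add $dest dev $iface"
    done
}

parse_custom_routes "$CUSTOM_ROUTES"
```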

Creating a HFE Sysdump

The HFE_GCE.sh script (part of cloudTemplates.tar.gz) can create an archive of useful logs to help with debugging (similar to the SBC sysdump). Run the following command to collect the logs:

sudo /opt/HFE/HFE_GCE.sh sysdump

The following details are collected:

  1. Output of:
    • Interfaces
    • Routes
    • IPtables
    • dmesg
    • conntrack count
    • conntrack extended list
    • The VM GCE metadata
    • journalctl errors
    • dhclient logs
    • systemd-networkd logs
  2. The logs:
    • syslog
    • GCE startup-script
    • cloud-init logs
  3. /opt/HFE/* (without previous sysdumps)
  4. All user bash history

The sysdump archives are stored in .tar.gz format under /opt/HFE/sysdump/.

Enabling PKT DNS Support on HFE

DNS queries on the SBC PKT port are sent using the primary IP. The HFE variable ENABLE_PKT_DNS_QUERY enables the HFE to forward these requests.

To enable it on a new HFE setup, add "ENABLE_PKT_DNS_QUERY=1" to the startup-script, below the SBC_PKT_PORT_NAME line. For example:

/bin/echo "SBC_PKT_PORT_NAME=\"<SBC_PKT_PORT_NAME>\"" >> $NAT_VAR
/bin/echo "ENABLE_PKT_DNS_QUERY=1" >> $NAT_VAR
/bin/echo $(timestamp) "Copied natVars.input" >> $LOG_FILE


HFE Node Logging

The HFE generates the following logs under /opt/HFE/log/:

  • cloud-init-nat.log: Logs generated from the commands in the startup script.
  • HFE_conf.log: Logs generated from the set up of the HFE node. They contain information about:
    • SBC instance names
    • IPs allowed SSH access to the HFE node
    • The configured zone
    • SBC IPs used to forward traffic
    • IP Tables rules
    • Routing rules
  • HFE_conf.log.prev: A copy of the previous HFE_conf.log.
  • HFE.log
    • Logs containing messages about any switchover action and connection errors. The logs generated are as follows:
      1. Connection error detected to Active SBC: <<IP>>. Attempting switchover.
        • The HFE node lost connection to the Active SBC and performs a switchover action.
      2. Connection error ongoing - No connection to SBC PKT ports from HFE
        • This error means that the HFE node attempted a switchover, but no connection is established with the new SBC.
        • The HFE node then continually switches between the SBCs until a connection is established.
        • This usually means there is a network issue or a configuration issue on the SBCs. 
      3. Switchover from old Active <<Old Active SBC IP>> to new Active <<New Active SBC IP>> complete. Connection established.
        • The switchover action is complete and connection is established to the Active SBC.
    • This log is rotated when it reaches 250 MB.
      • A maximum of four previous logs are saved.
      • The previous logs are compressed to save disk space.
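When reviewing HFE.log, the three message types above can be tallied with a small helper. This is a convenience sketch, not part of the HFE tooling; the default path is the one documented in this section:

```shell
# Summarize switchover activity in the HFE log. Pass an alternate log
# path (e.g. a rotated copy) as the first argument.
hfe_switchover_summary() {
    log="${1:-/opt/HFE/log/HFE.log}"
    printf 'Switchover attempts: %s\n' "$(grep -c 'Attempting switchover' "$log")"
    printf 'Completed switchovers: %s\n' "$(grep -c 'Connection established' "$log")"
    printf 'Ongoing connection errors: %s\n' "$(grep -c 'Connection error ongoing' "$log")"
}

# Example: hfe_switchover_summary /opt/HFE/log/HFE.log
```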

Support for HFE node OS Upgrades and Security Patches

Ribbon tested the following upgrade scenarios on the HFE node, using ICMP packets to contact the PKT0/PKT1 ports on the SBC:

  • Ubuntu 16.04 LTS > 18.04 LTS
  • Ubuntu 19.04 > 19.10

    Note

    While Google-specific packages are being updated, the routes are removed. To restore them, reboot the instance after the update.

    Ribbon recommends rebooting the HFE instance after installation or update of any package that affects networking.

  • Debian 9 > Debian 10
  • Installation of all updates available on CentOS 8.

    Note

    Ribbon does not support full OS upgrades on CentOS.
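On Ubuntu or Debian HFE nodes, a patch cycle consistent with the guidance above looks like the following sketch; the reboot at the end lets the HFE startup script re-apply any routes that a networking-package update removed:

```shell
# Apply OS security updates on an Ubuntu/Debian HFE node, then reboot
# so the startup script restores the HFE routes.
sudo apt-get update
sudo apt-get upgrade -y
sudo reboot
```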

Create a NAT Gateway

Create a Cloud NAT gateway for the VPC used by nic0 on the PKT1 HFE node. This allows the PKT1 HFE node to reach the Google servers to retrieve the script and query instance information, without exposing the instance to the public internet.

To create a cloud NAT gateway, perform the following steps:

  1. In the GCP Console, navigate to Networking > Network services > Cloud NAT.
  2. Click CREATE NAT GATEWAY.
  3. Enter a Gateway name.
  4. Select the VPC.
  5. Select the Region.
  6. For Cloud Router, select Create New Router.
    1. Enter Name.
    2. Optionally, enter a Description.
    3. Click Create.

      Create a router

  7. Retain the default NAT mapping.
  8. Click Create.

    Create a NAT gateway
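The same gateway can be created with the gcloud CLI; all names and the region in this sketch are placeholders:

```shell
# Sketch: Cloud Router plus Cloud NAT for the PKT1 HFE node's nic0 VPC.
# Router, NAT, and network names, and the region, are placeholders.
gcloud compute routers create hfe-pkt1-router \
    --network=hfe-pkt1-private \
    --region=us-east4

gcloud compute routers nats create hfe-pkt1-nat \
    --router=hfe-pkt1-router \
    --region=us-east4 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```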