Note

Changes in data formatting in a newer Linux version can cause issues during the HFE upgrade, so run this procedure in a lab before attempting the upgrade in production.

Ensure that you upgrade the HFE.sh script before attempting to upgrade the HFE OS version.

To upgrade the HFE.sh script on the HFE node (required):

  1. Log in to the AWS EC2 Console, and click INSTANCES > Instances in the left panel.

  2. Find the HFE instance in the instance list.


  3. Determine the S3 bucket name associated with your HFE. If the S3 bucket name is already known, skip to the next step. (An AWS CLI sketch covering steps 3 through 6 follows this procedure.)

    1. With the EC2 HFE instance selected, click the TAGS tab to reveal the instance tag details.
    2. Locate the aws:cloudformation:stack-name tag and record its value.
    3. Open the CloudFormation Management Console.
    4. Locate the stack in the list based on the stack-name value found in the previous step.
    5. Click the Parameters tab, then locate the HFEScriptS3Location parameter to find the S3 bucket name.
  4. Upload the latest HFE.sh to the S3 bucket associated with the HFE node.

  5. From the AWS EC2 Console, select the HFE instance.
  6. Select Instance State > Reboot from the Actions drop-down to reboot the HFE node.
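
If you prefer the command line, the following is a minimal sketch of steps 3 through 6 using the AWS CLI. It assumes the AWS CLI is configured with suitable credentials; the instance ID and local script path are placeholders, and whether HFEScriptS3Location holds a bare bucket name or a bucket/prefix path depends on your deployment.

    # Placeholder instance ID; substitute the HFE instance ID from the console.
    INSTANCE_ID=i-0123456789abcdef0

    # Step 3: read the CloudFormation stack name from the instance tags.
    STACK_NAME=$(aws ec2 describe-tags \
        --filters "Name=resource-id,Values=$INSTANCE_ID" \
                  "Name=key,Values=aws:cloudformation:stack-name" \
        --query 'Tags[0].Value' --output text)

    # Step 3 (continued): look up the HFEScriptS3Location stack parameter.
    HFE_S3_LOCATION=$(aws cloudformation describe-stacks \
        --stack-name "$STACK_NAME" \
        --query "Stacks[0].Parameters[?ParameterKey=='HFEScriptS3Location'].ParameterValue" \
        --output text)

    # Step 4: upload the latest HFE.sh to the S3 location.
    aws s3 cp ./HFE.sh "s3://$HFE_S3_LOCATION/HFE.sh"

    # Steps 5-6: reboot the HFE instance so it pulls the new script on boot.
    aws ec2 reboot-instances --instance-ids "$INSTANCE_ID"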
     

To upgrade the OS on the HFE node (optional): 

  1. Log in to the HFE management IP as ec2-user, using the SSH key used during the initial CloudFormation deployment of the SBC with HFE.
    A notice is displayed if package updates are available or required.

      [akhan@greenhornetinddev4 ~]$ ssh -i swe.pem ec2-user@18.210.147.64
      Last login: Wed Jul 18 08:04:34 2018 from 121.242.142.135

             __|  __|_  )
             _|  (     /   Amazon Linux 2 AMI
            ___|\___|___|

      https://aws.amazon.com/amazon-linux-2/
      15 package(s) needed for security, out of 81 available
      Run "sudo yum update" to apply all updates.
  2.  Run "sudo su" command. 

  3. Run "yum clean all" command. 
  4. Run "yum update" command. 

Handling Multiple Remote SSH IPs to Connect to HFE Node

This section describes how to configure multiple remote SSH IPs for accessing the HFE node, and how to update existing instances to add more SSH IPs.

Note

Ensure that REMOTE_SSH_MACHINE_IP is not set to an IP address from which call traffic originates. Doing so can break the HFE logic, causing traffic to fail to reach the SBC.

Initial Orchestration

During orchestration, you can supply multiple IP addresses to the appropriate variable as a comma-separated list, for example: 10.0.0.1,10.0.0.2,10.0.0.3. The following table lists the variable that must be set for each orchestration type (an example follows the table):

Initial Orchestration

Cloud | Orchestration Type              | Variable Name
------|---------------------------------|-------------------------------------
AWS   | Manual creation through console | REMOTE_SSH_MACHINE_IP (in user-data)
AWS   | CloudFormation                  | remoteSSHMachinePublicIP
AWS   | Terraform                       | remote_ssh_ip
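
For example, for a manual AWS creation the list is supplied in user-data, in the same format used in the update procedure below (the addresses are placeholders):

    /bin/echo "REMOTE_SSH_MACHINE_IP=\"10.0.0.1,10.0.0.2,10.0.0.3\"" >> $NAT_VAR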

Updating Remote SSH IPs

The following steps describe the procedure to update the Remote SSH IPs on AWS.

Note

To add a new Remote SSH Machine IP, you must supply the full list of IPs for which routes need to be created, not only the new IP.

Note

The following procedure causes a network outage, because the HFE requires a reboot to pick up the updated list.

  1. Navigate to the AWS console.
  2. Select EC2.
  3. Select the HFE instance.
  4. Select Actions.
  5. Select Instance settings.
  6. Select Edit user data.
  7. Edit the REMOTE_SSH_MACHINE_IP line. For example:

    /bin/echo "REMOTE_SSH_MACHINE_IP=\"10.0.0.1,10.10.10.10\"" >> $NAT_VAR
  8. Select Save.

  9. Navigate to Instance state.
  10. Select Stop instance, then Start instance, for the changes to take effect. (A CLI sketch of the stop/start cycle follows this procedure.)
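
If you prefer to script the stop/start cycle, the following is a minimal AWS CLI sketch, assuming configured credentials; the instance ID is a placeholder. User data is stored base64-encoded and can only be modified while the instance is stopped, and a stop/start cycle (not a reboot) is what applies the change.

    # Placeholder instance ID; substitute the HFE instance ID.
    INSTANCE_ID=i-0123456789abcdef0

    # Stop the instance; user data can only be edited while it is stopped.
    aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
    aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

    # Optionally verify the current user data (returned base64-encoded).
    aws ec2 describe-instance-attribute --instance-id "$INSTANCE_ID" \
        --attribute userData --query 'UserData.Value' --output text | base64 --decode

    # Edit the user data in the console (steps 4-8 above), then start the instance.
    aws ec2 start-instances --instance-ids "$INSTANCE_ID"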