DO NOT SHARE THESE DOCS WITH CUSTOMERS!
This is an LA release that will only be provided to a select number of PLM-sanctioned customers (PDFs only). Contact PLM for details.
This best practice is intended for Proof of Concept (POC) use. Its content is subject to change and is not intended for general availability
IPsec is not supported by the SBC SWe in AWS.
The HFE front-ends only one packet port (pkt0), so public endpoints can connect only through pkt0. Pkt1 can serve private endpoints.
Use the HFE.sh script for HFE configuration.
Using a Template to Spawn the HFE Network with Address Translation (NAT) Instance and an SBC Pair
Create and configure these resources just once. You can reuse the IAM role and buckets. If the HFE.sh
file changes in a future release, you can upload the new file to the S3 bucket and reference it in the templates.
Step | Action |
---|---|
1 | Create the S3 bucket. You can optionally enable versioning on the bucket. For instructions, refer to https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html. |
2 | Upload the HFE.sh script to the S3 bucket. Note: For instructions, refer to Putting An Object In A Bucket. |
3 | Create the IAM role for the EC2 service and select the AmazonS3ReadOnlyAccess policy for this role. Note the role name; it is used later when creating instances. Refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html for instructions. Note: See the Amazon managed policy [arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess]. |
4 | If a new HFE.sh script is uploaded, you must restart/reboot the HFE for the changes to take effect. |
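The one-time setup in the table above can be sketched with the AWS CLI. This is an illustrative fragment only: the bucket name, region, role name, and trust-policy file name are placeholders, not values mandated by this guide.

```shell
# Illustrative placeholders: hfe-scripts-bucket, HFE-S3-ReadRole,
# ec2-trust-policy.json are example names, not required values.

# Step 1: Create the S3 bucket, optionally with versioning.
aws s3api create-bucket --bucket hfe-scripts-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket hfe-scripts-bucket \
    --versioning-configuration Status=Enabled

# Step 2: Upload the HFE.sh script to the bucket.
aws s3 cp HFE.sh s3://hfe-scripts-bucket/HFE.sh

# Step 3: Create the IAM role for the EC2 service and attach the
# AmazonS3ReadOnlyAccess managed policy.
aws iam create-role --role-name HFE-S3-ReadRole \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name HFE-S3-ReadRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```

These commands require AWS credentials and are shown only to make the steps concrete; the console procedures in the linked AWS documentation achieve the same result.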
Creating an IAM Role for HFE
Refer to Create an Identity and Access Management (IAM) Role for HFE.
Creating an Instance
This template creates two instances: one HFE node (NAT) and one HA SBC.
The HFE and SBC HA nodes will reside in the same AZ.
This new template creates a new subnet and a routing table/rule (AWS routing rule) for the pkt0 interface, which is front-ended by the HFE instance. The new subnet is a private subnet (UserGuide/VPC_Scenario2.html) in AWS, which sends all traffic destined outside the VPC to the HFE instance (eth2).
The HFE instance has three interfaces:
eth0 – This interface accepts traffic on the EIP and redirects it toward the SBC. It can be in any public subnet; you can either create a new subnet and security group or reuse any existing subnet and security group that allows traffic from public endpoints. The eth0 subnet should have an AWS routing table attached to it that provides Internet access to eth0 and allows public traffic on eth0. The HFE contacts the AWS S3 bucket through eth0 to download the HFE.sh script and configure the HFE instance.
eth1 – This interface is used to manage the HFE. It is created using the security group and subnet that are used for mgt0 of the SBC.
eth2 – This interface sends and receives all traffic to and from the SBC (using the routing table of the private subnet). It can be in any private subnet; you can create a new subnet and security group or reuse any existing subnet and security group that allows all traffic from the VPC.
The HFE node must be the same instance size as the SBC instance. For example, if the SBC instance is c5.2xlarge, the HFE node must also be c5.2xlarge.
New Fields Introduced for HFE
If no IP is selected in the remoteSSHMachinePublicIP field, the HFE node cannot be accessed from any machine outside the VPC; secure shell (SSH) access to the HFE is provided only through eth1. If you want to manage the HFE from outside the VPC, you must provide the public IP of the machine that will be used to connect to the HFE.
The HFE AMI is not maintained by Ribbon. Users must select the latest Amazon Linux to spawn the HFE and maintain it themselves. Upgrades for Amazon Linux are provided by AWS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-updates.html
The HFE is configured to send traffic to the SBC when it boots up. For this process to work properly, you must allow Internet traffic over eth0. All traffic reaching eth0 is sent to the SBC over eth2. eth1 is used to manage the HFE; optionally, you can attach an Elastic IP (EIP) to eth1 to manage the HFE.
The security group of eth2 should allow all traffic within the Virtual Private Cloud (VPC), because eth2 sends and receives traffic from pkt0 of the SBC (pkt0 is in another, private subnet).
Do not make any change on the HFE that might impact networking, routing, Linux iptables, or conntrack.
Secondary IP failover takes between 2 and 6 seconds; this is an AWS limitation. The HFE and HFE node address the issue only from an EIP perspective.
The following table shows the media loss tested on HFE for different types of calls.
The following figures show the CloudFormation template used to spawn an HFE and both Active and Standby SBCs. The template creates a private subnet and an AWS routing table to redirect SBC pkt0 traffic to the HFE eth2. SBC pkt0 is in the private subnet and is front-ended by the HFE.
Verify that the HFE.sh script was copied from the S3 bucket to the directory specified in the template.
This directory contains the following scripts, configuration, and log files:
HFE.sh: Configuration script that is downloaded from the S3 bucket and configures this instance to front-end pkt0 of the SBCs.
natVars.input: Contains input to HFE.sh:
SBC_SECONDARY_IP="10.54.90.76": Secondary IP of the SBC's pkt0 port.
REMOTE_SSH_MACHINE_IP="121.242.142.135": Public IP of the machine that can be used to connect to the HFE using SSH; this IP should have a route via eth1.
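A minimal sketch of how a script like HFE.sh might consume natVars.input. The variable names match the two entries documented above; the function name and validation logic are illustrative, not Ribbon's actual implementation.

```shell
# Illustrative only: source natVars.input and verify that the two
# documented variables are present before configuring the HFE.
read_nat_vars() {
    local vars_file="$1"
    # shellcheck disable=SC1090
    . "$vars_file"
    if [ -z "$SBC_SECONDARY_IP" ] || [ -z "$REMOTE_SSH_MACHINE_IP" ]; then
        echo "ERROR: required variable missing in $vars_file" >&2
        return 1
    fi
    echo "SBC pkt0 secondary IP: $SBC_SECONDARY_IP"
    echo "Remote SSH machine IP: $REMOTE_SSH_MACHINE_IP"
}
```

Sourcing the file means natVars.input must contain only shell variable assignments, which matches the format shown above.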
[root@ip-10-54-40-101 ec2-user]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.54.40.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 10.54.10.1 0.0.0.0 UG 10001 0 0 eth1
0.0.0.0 10.54.40.1 0.0.0.0 UG 10002 0 0 eth2
10.54.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.54.40.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
10.54.40.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
10.54.90.76 10.54.40.1 255.255.255.255 UGH 0 0 0 eth2
121.242.142.135 10.54.10.1 255.255.255.255 UGH 0 0 0 eth1
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
[root@ip-10-54-40-101 ec2-user]#
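The routing table above can be reproduced with ip route commands like the following. This is a hedged sketch of what the HFE setup plausibly installs, with gateways, metrics, and host routes taken directly from the route -n output; it is not the literal contents of HFE.sh, and it requires root privileges.

```shell
# Illustrative only (requires root): one default route per interface with
# increasing metrics so eth0 wins, matching the 'route -n' output above.
ip route add default via 10.54.40.1 dev eth0
ip route add default via 10.54.10.1 dev eth1 metric 10001
ip route add default via 10.54.40.1 dev eth2 metric 10002

# Host routes pin the SBC pkt0 secondary IP to eth2 and the remote
# SSH machine to eth1, as shown in the table above.
ip route add 10.54.90.76/32 via 10.54.40.1 dev eth2
ip route add 121.242.142.135/32 via 10.54.10.1 dev eth1
```

The metric ordering is what guarantees the behavior described later in this section: the default route with the highest priority (lowest metric) is on eth0.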
iptables.rules.prev: iptables rules before the configuration was applied using HFE.sh.
HFE.log: Log file of the HFE.sh script.
cloud-init-nat.log: Log of cloud-init, which creates an HFE directory and copies HFE.sh from the S3 bucket.
Check /var/log/cloud-init.log for userdata script errors. You should see the following if the user data script was run successfully:
May 18 08:59:59 cloud-init[2603]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=True, capture=False)
May 18 08:59:59 cloud-init[2603]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/userdata.txt'] with allowed return codes [0] (shell=True, capture=False)
May 18 09:00:00 cloud-init[2603]: cloud-init[DEBUG]: Ran 1 modules with 0 failures
May 18 09:00:00 cloud-init[2603]: util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json'
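The check above can be automated with a small helper. The success string matches the log excerpt shown; the function name is our own, not part of the product.

```shell
# Illustrative helper: succeed if a cloud-init log shows the userdata
# modules completed with zero failures, as in the excerpt above.
check_cloud_init() {
    local log_file="$1"
    if grep -q 'Ran 1 modules with 0 failures' "$log_file"; then
        echo "userdata OK"
    else
        echo "userdata FAILED" >&2
        return 1
    fi
}
```

On a real HFE you would run it against /var/log/cloud-init.log.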
All packets arriving on eth0 are forwarded to eth2 and then sent to the SBC. The destination IP of all incoming packets is changed from the private IP of the HFE (eth0) to the secondary IP of the SBC (pkt0).
The reply from the SBC arrives on eth2; its destination IP is a public endpoint and its source IP is the secondary IP of the SBC. These packets are forwarded to eth0 and then sent out to the public endpoint. The default route with the highest priority is set to eth0, so there is no need to configure routing for any endpoint that wants to connect to the SBC (pkt0) using the EIP attached to the HFE (eth0). Just before these packets are sent out, the source IP is changed from the SBC's secondary IP to the private IP on eth0 of the HFE.
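In iptables terms, the flow described above corresponds to a DNAT on ingress and an SNAT on egress. The following is a hedged sketch, not the literal rules HFE.sh installs: it uses the SBC secondary IP from this section's examples and assumes the HFE eth0 private IP is 10.54.40.101 (inferred from the hostname in the route output above). It requires root privileges.

```shell
# Illustrative only (requires root).
# 10.54.90.76  = SBC pkt0 secondary IP (from this section's examples)
# 10.54.40.101 = assumed HFE eth0 private IP (inferred from the hostname)

# Allow the kernel to forward packets between eth0 and eth2.
sysctl -w net.ipv4.ip_forward=1

# Ingress: rewrite the destination from the HFE eth0 private IP
# to the SBC pkt0 secondary IP.
iptables -t nat -A PREROUTING -i eth0 -d 10.54.40.101 \
    -j DNAT --to-destination 10.54.90.76

# Egress: rewrite the source from the SBC secondary IP back to the
# HFE eth0 private IP before the reply leaves toward the public endpoint.
iptables -t nat -A POSTROUTING -o eth0 -s 10.54.90.76 \
    -j SNAT --to-source 10.54.40.101
```

Because conntrack tracks each NATed flow, replies are rewritten automatically in both directions — which is also why the earlier caution against changing iptables or conntrack on the HFE matters.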
You will see tshark packet analyzer output if you have IPs configured as shown in this table.