In this section:
This best practice is intended for Proof of Concept (POC) use. Its content is subject to change and is not intended for general availability.
IPsec is not supported by the SBC SWe in AWS.
The HFE front-ends only one pkt port (pkt0); public endpoints can be connected only to pkt0. Pkt1 can serve private endpoints.
Use the HFE.sh script for HFE configuration.
Using a Template to Spawn the HFE Network Address Translation (NAT) Instance and an SBC Pair
Create and configure these resources only once; you can reuse the IAM role and buckets afterwards. If the HFE.sh
file changes in a future release, upload the new file to the S3 bucket and reference it in the templates.
Step | Action |
---|---|
1 | Create the S3 bucket. You can opt for versioning of the bucket. For instructions, refer to https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html. |
2 | Upload the HFE.sh file to the S3 bucket. Note: For instructions, refer to Putting An Object In A Bucket. |
3 | Create the IAM role for the EC2 service and select the AmazonS3ReadOnlyAccess policy for this role. Note the role name; it is used later when creating instances. Refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html for instructions. NOTE: This is the Amazon managed policy [arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess]. |
4 | If a new HFE.sh script is uploaded, you must restart/reboot the HFE for the changes to take effect. |
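The bucket and upload steps above can be sketched with the AWS CLI. This is a minimal, illustrative sequence; the bucket name and region are placeholders, not values from this guide.

```shell
# Placeholder bucket name and region -- substitute your own.
BUCKET="my-hfe-bucket"

# Step 1: create the S3 bucket and (optionally) enable versioning.
aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
aws s3api put-bucket-versioning --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled

# Step 2: upload the HFE.sh script.
aws s3 cp HFE.sh "s3://$BUCKET/HFE.sh"

# When a new HFE.sh is released, upload it to the same key, then
# restart/reboot the HFE node so the change takes effect (step 4).
aws s3 cp HFE.sh "s3://$BUCKET/HFE.sh"
```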
Creating an IAM Role for HFE
Refer to Create an Identity and Access Management (IAM) Role for HFE.
Creating an Instance
This template creates two instances: one HFE node (NAT) and one HA SBC.
The HFE and SBC HA nodes will reside in the same AZ.
This new template creates a new subnet and routing table/rule (AWS routing rule) for the pkt0 interface, which is front-ended by the HFE instance. The new subnet is a private subnet (UserGuide/VPC_Scenario2.html) in AWS that sends all traffic destined outside the VPC to the HFE instance (eth2).
The HFE instance has three interfaces:
eth0 – This interface accepts traffic on the EIP that is redirected towards the SBC. It can be in any public subnet; you can create a new subnet and security group, or reuse any existing subnet and security group that allows traffic from the public endpoint. The eth0 subnet must have an AWS routing table attached that provides Internet access to eth0 and allows public traffic on eth0. The instance contacts the AWS S3 bucket over eth0 to download the HFE.sh script and configure the HFE instance.
eth1 – This interface manages the HFE. It is created using the security group and subnet that are used for mgt0 of the SBC.
eth2 – This interface sends and receives all traffic to and from the SBC (using the routing table of the private subnet). It can be in any private subnet; you can create a new subnet and security group, or reuse any existing subnet and security group that allows all traffic from the VPC.
The HFE node must be the same size as the SBC instance. For example, if the SBC instance is c5.2xlarge, the HFE node must also be c5.2xlarge.
New Fields Introduced for HFE
New HFE fields
Field | Description | Notes |
---|---|---|
AMI ID of the HFE node. | Select the latest AWS Linux AMI ID. | |
Location of the HFE script in S3. | Note: The template does not validate the location during instance creation; you must check this manually. | |
IAM role for HFE instances. | Provide the IAM role name here; use the IAM role created earlier for the HFE. Attach the S3ReadOnlyAccess policy (Amazon managed policy arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess) to the SWeNAT IAM role. Optionally, the Resource section of the policy can be restricted to only the resources that point to Mangler.sh in the S3 bucket. To edit the policy Resources, create a policy using the JSON mentioned above and then attach it to the IAM role used to spawn the NAT instance (SWeNAT in this case). More details about IAM roles and policies are in the AWS IAM documentation. | |
Enter a CIDR for the private subnet for the SBC; this new subnet is served by the HFE instance. | Enter a free CIDR in your VPC. Keep the subnet small, as all interfaces in this subnet send their traffic to the HFE; the recommended size is /28. | |
Enter the Availability Zone for the private subnet for the SBC; this new subnet is served by the HFE instance. | Select the AZ that contains the other subnets for the SBC (mgt, HA, and pkt1 ports), that is, the AZ you are using to create the SBC. | |
Select the security group for the public interface on the HFE. | A security group that allows all communication from public endpoints. | |
Select the security group for the private interface on the HFE (towards the SBC). | A security group that allows all traffic from the VPC; it handles all traffic coming from the private subnet of the SBC. | |
SubnetId of an existing subnet in your VPC for the HFE public interface. | Subnet to handle all public endpoint traffic; use the subnet created earlier, or reuse any subnet that is connected to the Internet GW. | |
SubnetId of an existing subnet in your Virtual Private Cloud (VPC) for the HFE, used to send traffic towards the SBC. | Subnet to handle all traffic coming from the SBC that is front-ended by the HFE; use the subnet created earlier, or reuse any existing suitable subnet. | |
remoteSSHMachinePublicIP | If you want to connect to the management interface of the HFE using an EIP, enter the public IP of the machine that will connect to the HFE. | |
If no IP is provided in the remoteSSHMachinePublicIP field, the HFE node cannot be accessed from any machine outside the VPC. Secure shell (ssh) access to the HFE is provided only through eth1. If you want to manage the HFE, you must provide the public IP of the machine that will be used to connect to it.
The HFE AMI is not maintained by Ribbon. Users must select the latest AWS Linux AMI to spawn the HFE and maintain it themselves. Upgrades for AWS Linux are provided by AWS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-updates.html
The HFE is configured to send traffic to the SBC when it boots up. For this process to work properly, you must allow Internet traffic over eth0. All traffic reaching eth0 is sent to the SBC over eth2. eth1 is used to manage the HFE; optionally, you can attach an Elastic IP (EIP) to manage the HFE.
The security group of eth2 should allow all traffic in the Virtual Private Cloud (VPC), as eth2 sends and receives traffic from pkt0 of the SBC (pkt0 is in another, private subnet).
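The eth2 security group rule described above can be sketched with the AWS CLI. This is illustrative only; the security group ID and VPC CIDR are placeholders for your own values.

```shell
# Allow all protocols and ports from the VPC CIDR on the eth2 (private)
# security group. The group ID and CIDR below are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --cidr 10.54.0.0/16
```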
Do not make any changes that might impact networking, routing, Linux iptables, or conntrack.
Secondary IP failover takes between 2 and 6 seconds; this is an AWS limitation. The HFE node addresses this issue only from an EIP perspective.
The following table shows the media loss tested on HFE for different types of calls.
Media Loss
SN | Call-Type | Transport | Adaptor | ptime | Call Session | Switchover Type | Media Loss on Public side | Iterations | Remarks |
---|---|---|---|---|---|---|---|---|---|
1 | Pass-thr | UDP | N/A | 20 | 1 | CLI | 2.379/2.460 | 2 | Single call from SIPp |
2 | Pass-thr | UDP | N/A | 20 | 1 | Stop Dashboard | 1.880/2.060 | 2 | Single call from SIPp |
3 | Pass-thr | UDP | N/A | 20 | 2400 | CLI | 1.720/1.919 | 2 | load with one Single call from SIPp |
4 | Pass-thr | UDP | N/A | 20 | 2400 | Stop Dashboard | 2.645/1.760 | 2 | load with one Single call from SIPp |
5 | Pass-thr | UDP | N/A | 20 | 2400 | CLI | 2.250/2.279 | 2 | Pinned calls and then one single call |
6 | Pass-thr | UDP | N/A | 20 | 2400 | Stop Dashboard | 2.259/2.300 | 2 | Pinned calls and then one single call |
1 | Pass-thr | SRTP | N/A | 20 | 1 | CLI | 2.019/2.320 | 2 | Single call from SIPp |
2 | Pass-thr | SRTP | N/A | 20 | 1 | Stop Dashboard | 2.200/1.900 | 2 | Single call from SIPp |
3 | Pass-thr | SRTP | N/A | 20 | 2400 | CLI | 2.300/2.620 | 2 | load with one Single call from SIPp |
4 | Pass-thr | SRTP | N/A | 20 | 2400 | Stop Dashboard | 2.339/2.960 | 2 | load with one Single call from SIPp |
5 | Pass-thr | SRTP | N/A | 20 | 2400 | CLI | 1.940/2.000 | 2 | Pinned calls and then one single call |
6 | Pass-thr | SRTP | N/A | 20 | 2400 | Stop Dashboard | 2.599/2.340 | 2 | Pinned calls and then one single call |
1 | Pass-thr | TLS | Yes | 20 | 1 | CLI | 2.099/2.403 | 2 | Single call from SIPp |
2 | Pass-thr | TLS | Yes | 20 | 1 | Stop Dashboard | 2.72/8.9/1.900/2.499 | 4 | Single call from SIPp |
3 | Pass-thr | TLS | Yes | 20 | 2400 | CLI | 1.800/1.920 | 2 | load with one Single call from SIPp |
4 | Pass-thr | TLS | Yes | 20 | 2400 | Stop Dashboard | 2.159/2.599 | 2 | load with one Single call from SIPp |
5 | Pass-thr | TLS | Yes | 20 | 2400 | CLI | 2.140/1.920 | 2 | Pinned calls and then one single call |
6 | Pass-thr | TLS | Yes | 20 | 2400 | Stop Dashboard | 1.959/2.780 | 2 | Pinned calls and then one single call |
The following figures show the CloudFormation template used to spawn an HFE and both active and standby SBCs. The template creates a private subnet and an AWS routing table to redirect SBC pkt0 traffic to the HFE eth2. SBC pkt0 is in the private subnet and is front-ended by the HFE.
CloudFormationTemplate - Details and Parameters
CloudFormationTemplate - continued
CloudFormationTemplate - continued
Verify that the HFE.sh script was copied from the S3 bucket to the directory specified in the template.
This directory contains the following scripts, configuration files, and log files:
HFE.sh: Script downloaded from the S3 bucket that configures this instance to front-end pkt0 of the SBCs.
natVars.input: Contains input to HFE.sh.
SBC_SECONDARY_IP="10.54.90.76": Secondary IP of the SBC's pkt0 port.
REMOTE_SSH_MACHINE_IP="121.242.142.135": Public IP of the machine that can connect to the HFE using ssh; this IP should have a route via eth1.
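Putting the two entries together, natVars.input is plausibly a plain shell-style variable file along these lines (a sketch built from the values above; replace them with your own deployment's addresses):

```shell
# natVars.input -- input to HFE.sh (illustrative contents)
SBC_SECONDARY_IP="10.54.90.76"          # secondary IP of the SBC pkt0 port
REMOTE_SSH_MACHINE_IP="121.242.142.135" # public IP allowed to ssh in via eth1
```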
[root@ip-10-54-40-101 ec2-user]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.54.40.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 10.54.10.1 0.0.0.0 UG 10001 0 0 eth1
0.0.0.0 10.54.40.1 0.0.0.0 UG 10002 0 0 eth2
10.54.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.54.40.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
10.54.40.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
10.54.90.76 10.54.40.1 255.255.255.255 UGH 0 0 0 eth2
121.242.142.135 10.54.10.1 255.255.255.255 UGH 0 0 0 eth1
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
[root@ip-10-54-40-101 ec2-user]#
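As a quick sanity check of the routing table above, the default route with the lowest metric (highest priority) should go out eth0. A minimal sketch, using sample lines copied from the output above; on a live HFE you would pipe `route -n` instead of the sample text:

```shell
# Default routes taken from the 'route -n' output above.
route_output='0.0.0.0 10.54.40.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 10.54.10.1 0.0.0.0 UG 10001 0 0 eth1
0.0.0.0 10.54.40.1 0.0.0.0 UG 10002 0 0 eth2'

# Select default routes (destination and genmask both 0.0.0.0) and
# print the interface of the one with the lowest metric (field 5).
printf '%s\n' "$route_output" |
  awk '$1 == "0.0.0.0" && $3 == "0.0.0.0" {
         if (min == "" || $5+0 < min+0) { min = $5; best = $8 }
       }
       END { print best }'
# Prints: eth0
```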
iptables.rules.prev: iptables rules before configuration was applied using HFE.sh.
HFE.log: Log file of the HFE.sh script.
cloud-init-nat.log: Log of cloud-init, which creates an HFE directory and copies HFE.sh from the S3 bucket.
Check /var/log/cloud-init.log for userdata script errors. You should see the following if the user data script was run successfully:
May 18 08:59:59 cloud-init[2603]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=True, capture=False)
May 18 08:59:59 cloud-init[2603]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/userdata.txt'] with allowed return codes [0] (shell=True, capture=False)
May 18 09:00:00 cloud-init[2603]: cloud-init[DEBUG]: Ran 1 modules with 0 failures
May 18 09:00:00 cloud-init[2603]: util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json'
All packets arriving on eth0 are forwarded to eth2 and then sent to the SBC. The destination IP of all incoming packets is changed from the private IP of the HFE (eth0) to the secondary IP of the SBC (pkt0).
The reply from the SBC arrives on eth2. Here the destination IP is a public endpoint and the source IP is the secondary IP of the SBC. These packets are forwarded to eth0 and then sent out to the public endpoint. The default route with the highest priority is set to eth0, so there is no need to configure routing for any endpoint that wants to connect to the SBC (pkt0) using the EIP attached to the HFE (eth0). Just before these packets are sent out, the source IP is changed from the SBC's secondary IP to the private IP on eth0 of the HFE.
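The DNAT/SNAT behaviour described above is implemented by rules that HFE.sh installs. The following is an illustrative sketch of rules that would implement this behaviour, not the script's exact contents; the addresses are taken from the example IP table in this section.

```shell
# Illustrative only -- the real rules are installed by HFE.sh.
SBC_SECONDARY_IP="172.31.15.36"   # secondary IP on SBC pkt0
HFE_ETH0_IP="172.31.10.187"       # private IP of eth0 on the HFE

# Inbound: packets arriving on eth0 have their destination rewritten
# to the SBC secondary IP, then routing forwards them out eth2.
iptables -t nat -A PREROUTING  -i eth0 -j DNAT --to-destination "$SBC_SECONDARY_IP"

# Outbound: replies from the SBC leaving via eth0 have their source
# rewritten to the HFE's eth0 private IP (AWS then maps it to the EIP).
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source "$HFE_ETH0_IP"

# IP forwarding must be enabled for the HFE to relay packets at all.
sysctl -w net.ipv4.ip_forward=1
```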
IP Addresses
Public endpoint | 35.169.151.85 |
Private IP of eth0 on the HFE | 172.31.10.187 |
Secondary IP on SBC pkt0 | 172.31.15.36 |
If your IPs are configured as shown in this table, you will see tshark packet analyzer output like the following.
Packet Redirection - refer to IP addresses in previous table
tshark -i eth0 |
---|
321.811904636 35.169.151.85 -> 172.31.10.187 ICMP 98 Echo (ping) request id=0x04e1, seq=88/22528, ttl=63 |
321.812312648 172.31.10.187 -> 35.169.151.85 ICMP 98 Echo (ping) reply id=0x04e1, seq=88/22528, ttl=63 |
tshark -i eth2 |
320.814501198 35.169.151.85 -> 172.31.15.36 ICMP 98 Echo (ping) request id=0x04e1, seq=87/22272, ttl=62 |
320.816507065 172.31.15.36 -> 35.169.151.85 ICMP 98 Echo (ping) reply id=0x04e1, seq=87/22272, ttl=64 |