DO NOT SHARE THESE DOCS WITH CUSTOMERS!
This is an LA release that will only be provided to a select number of PLM-sanctioned customers (PDFs only). Contact PLM for details.
Ensure the following before creating the SBCs:
For a Standalone configuration:
For an HA pair with HFE setup:
To create an SBC instance, follow the steps given below. Any extra configuration required for the HFE environment setup is called out specifically.
Click Create to open the Create an instance page.
Select the Boot disk option, and click Change to open the Boot disk panel.
Click Management, security, disks, networking, sole tenancy to expand the tabs.
Enter the SSH key for the linuxadmin user. For more information, see SSH Key Login Only. For Standalone, leave it blank.
Set External IP to one of the static External IPs created earlier.
Set External IP to "None".
Set Alias IP range to "/32".
For Standalone, set it to one of the static External IPs created earlier.
Set Alias IP range to "/32".
For Standalone, set it to one of the static External IPs created earlier.
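If you script instance creation instead of using the console, the same interface settings can also be expressed with gcloud. The following is only an illustrative sketch for an HFE setup: the instance name, zone, and subnet names are placeholders, and the SBC image, machine type, metadata, and other required flags are omitted.

# Illustrative sketch only -- all names are placeholders
gcloud compute instances create sbc-active \
    --zone us-central1-a \
    --network-interface subnet=mgt0-subnet,address=<static-external-IP> \
    --network-interface subnet=ha0-subnet,no-address \
    --network-interface subnet=pkt0-subnet,no-address,aliases=/32 \
    --network-interface subnet=pkt1-subnet,no-address,aliases=/32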
You must create both SBC and HFE VMs within seconds of each other; otherwise, the application will fail to start and then require rebooting.
If the HFE node(s) already exist when the SBC CREATE commands are run, simply reboot the HFE node(s) to bring them into service.
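For example, an already-created HFE node can be rebooted over SSH with gcloud (the instance name and zone below are placeholders):

# Reboot an existing HFE node so it picks up the newly created SBC instances
gcloud compute ssh hfe-node --zone us-central1-a --command "sudo reboot"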
For Standalone, click CREATE.
In an HFE environment, ensure that manual creation of the SBCs is performed exactly as described below; otherwise, the SBC CEs cannot collect the necessary information about their peers.
If the Active instance comes up before you create the Standby instance, restart both instances and clear the databases.
To avoid disk corruption, do not use the GCP console Reset option to restart an instance.
To enhance the security of the SBC in the public cloud space, certain restrictions are imposed.
By default, only the linuxadmin user (used for accessing the Linux shell) and the admin user (the default user for accessing the SBC CLI) support SSH key login. The SSH keys are entered using the following methods:
<ssh-rsa ...> linuxadmin
The format informs cloud-init that the key is for the linuxadmin user.
You can retrieve the public SSH key on Linux by executing the following command: ssh-keygen -y -f <privateKeyFile>.
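For example, to generate a key pair for the linuxadmin user and print its public half (the file name is illustrative):

# Generate an RSA key pair (file name and comment are arbitrary)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/sbc_linuxadmin -C linuxadmin
# Print the matching public key, to be pasted in the <ssh-rsa ...> linuxadmin format
ssh-keygen -y -f ~/.ssh/sbc_linuxadmin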
Ribbon recommends using separate SSH keys for every user.
By default, linuxadmin allows very little access at the OS level. For simple debugging, use the sbcDiagnostic command to run sysdumps and check the application status.
When a valid SBX license is installed, linuxadmin gains full sudo permissions.
To prevent execution of unauthorized commands on the SBC, only user-data in a valid JSON format is allowed. If the SBC detects any user-data in an invalid JSON format, it shuts down immediately. Specify the user data in pure JSON format.
The following table describes all of the keys required in the SBC user-data. The Required By column specifies which type of setup requires each key. An example user-data document follows the table.
Key | Allowed Values | Required By | Description |
---|---|---|---|
CERole | ACTIVE/STANDBY | HFE | The role of the SBC instance. One instance must be configured as ACTIVE and the other as STANDBY. |
ReverseNatPkt0 | True/False | HFE | Must be set to True for HFE. |
ReverseNatPkt1 | True/False | HFE | Must be set to True for HFE. |
CEName | N/A | Standalone and HFE | The actual CE name of the SBC instance. For the CEName requirements and more information, refer to System and Instance Naming in SBC SWe N:1 and Cloud-Based Systems. |
SystemName | N/A | Standalone and HFE | The System Name of the SBC instances. For the SystemName requirements and more information, refer to System and Instance Naming in SBC SWe N:1 and Cloud-Based Systems. |
PeerCEName | N/A | HFE | The CEName of the peer instance (ensure it matches the CEName in the peer CE's user-data). |
PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
ClusterIp | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
SbcPersonalityType | isbc | Standalone and HFE | The name of the SBC personality type for this instance. Currently, Ribbon supports only I-SBC. |
SbcHaMode | 1to1 | HA | The mode of SBC management. |
Mgt0Prefix | N/A | Standalone and HFE | The CIDR prefix for the Mgmt subnet. |
ThirdPartyCpuAlloc | 0-4 | N/A | The number of CPUs allocated to non-Ribbon applications. This key is optional and subject to restrictions. |
ThirdPartyMemAlloc | 0-4096 | N/A | The amount of memory (in MB) allocated to non-Ribbon applications. This key is optional and subject to restrictions. |
AdminSshKey | ssh-rsa ... | Standalone and HFE | The public SSH key for accessing the admin user. See SSH Key Login Only. |
PeerInstanceName | N/A | HFE | The name of the peer instance in GCP. Note that this is not the CEName or the SystemName. |
* For more information, refer to the section "HFE 2.1" of the page Configure HFE Nodes in GCP.
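For illustration, the following is a minimal sketch of user-data for the Active SBC of an HFE pair. The values are drawn from the example metavariable table and instance data shown later on this page, and the AdminSshKey is truncated for readability; adjust all values to match your deployment.

{
    "CERole": "ACTIVE",
    "ReverseNatPkt0": "True",
    "ReverseNatPkt1": "True",
    "CEName": "nodeA",
    "SystemName": "GCEHA",
    "PeerCEName": "nodeB",
    "PeerCEHa0IPv4Address": "10.2.0.15",
    "ClusterIp": "10.2.0.15",
    "SbcPersonalityType": "isbc",
    "SbcHaMode": "1to1",
    "Mgt0Prefix": "24",
    "ThirdPartyCpuAlloc": "0",
    "ThirdPartyMemAlloc": "0",
    "AdminSshKey": "ssh-rsa AAAA...truncated...",
    "PeerInstanceName": "cj-standby"
}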
Configure the PKT interfaces using the following command examples:
You must create three static routes per packet interface, as shown in the example below.
# Configuring PKT0 interface
set addressContext default ipInterfaceGroup LIF1
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 ceName <<CE Name of configured Active* from metavars>> portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 ipPublicVarV4 <<IF2.FIPV4 OR HFE_IF2.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF1 F1 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF1 F1 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF1 F1 preference 100
commit

# Configuring PKT1 interface
set addressContext default ipInterfaceGroup LIF2
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 ceName <<CE Name of configured Active* from metavars>> portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 ipPublicVarV4 <<IF3.FIPV4 OR HFE_IF3.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF2 F2 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF2 F2 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF2 F2 preference 100
commit
* This is the instance that has ACTIVE as its CERole in the user-data.
** If using HFE, use the HFE_IF*.FIPV4 metavariable. For Standalone, use IF*.FIPV4.
The correct configuration is similar to the following:
admin@nodeA> show table addressContext default staticRoute
                                     IP INTERFACE  IP
DESTINATION                          GROUP         INTERFACE              CE
IP ADDRESS  PREFIX  NEXT HOP         NAME          NAME       PREFERENCE  NAME
------------------------------------------------------------------------
0.0.0.0     0       10.0.32.1        LIF1          F1         100         -
0.0.0.0     0       10.0.48.1        LIF2          F2         100         -
10.0.32.0   24      10.0.32.1        LIF1          F1         100         -
10.0.32.1   32      0.0.0.0          LIF1          F1         100         -
10.0.48.0   24      10.0.48.1        LIF2          F2         100         -
10.0.48.1   32      0.0.0.0          LIF2          F2         100         -
[ok][2019-08-05 10:26:34]

admin@nodeA> show table addressContext default ipInterfaceGroup
                IP                     PORT  IP              ALT IP           ALT                DRYUP   DRYUP             BW           VLAN             IP VAR    PREFIX VAR    IP PUBLIC VAR
NAME  IPSEC     NAME  CE NAME          NAME  ADDRESS PREFIX  ADDRESS  PREFIX  MODE       ACTION  TIMEOUT  STATE    CONTINGENCY  TAG  BANDWIDTH  V4        V4            V4              V6
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIF1  disabled  F1    nodeA-10.2.0.14  pkt0  -       -       -        -       inService  dryUp   60       enabled  0            -    0          IF2.IPV4  IF2.PrefixV4  HFE_IF2.FIPV4   -
LIF2  disabled  F2    nodeA-10.2.0.14  pkt1  -       -       -        -       inService  dryUp   60       enabled  0            -    0          IF3.IPV4  IF3.PrefixV4  HFE_IF3.FIPV4   -
[ok][2019-08-05 10:29:58]
Sample SBC configurations are provided below. For more information, refer to Metadata and Userdata Formats in AWS.
Example Meta Variable table for HFE environment:
admin@nodeA> show table system metaVariable
CE NAME          NAME                  VALUE
--------------------------------------------------------
nodeA-10.2.0.14  IF0.GWV4              10.0.0.1
nodeA-10.2.0.14  IF0.IPV4              10.0.0.54
nodeA-10.2.0.14  IF0.Port              Mgt0
nodeA-10.2.0.14  IF0.RNat              True
nodeA-10.2.0.14  IF1.GWV4              10.2.0.1
nodeA-10.2.0.14  IF1.IPV4              10.2.0.14
nodeA-10.2.0.14  IF1.Port              Ha0
nodeA-10.2.0.14  IF1.RNat              True
nodeA-10.2.0.14  IF2.GWV4              10.0.32.1
nodeA-10.2.0.14  IF2.IPV4              10.0.32.204
nodeA-10.2.0.14  IF2.Port              Pkt0
nodeA-10.2.0.14  IF2.RNat              True
nodeA-10.2.0.14  IF3.GWV4              10.0.48.1
nodeA-10.2.0.14  IF3.IPV4              10.0.48.37
nodeA-10.2.0.14  IF3.Port              Pkt1
nodeA-10.2.0.14  IF3.RNat              True
nodeA-10.2.0.14  IF0.FIPV4             35.184.248.228
nodeA-10.2.0.14  IF0.PrefixV4          24
nodeA-10.2.0.14  IF1.PrefixV4          32
nodeA-10.2.0.14  IF2.PrefixV4          32
nodeA-10.2.0.14  IF3.PrefixV4          32
nodeA-10.2.0.14  HFE_IF2.FIPV4         34.68.87.53
nodeA-10.2.0.14  HFE_IF3.FIPV4         10.0.3.19
nodeA-10.2.0.14  HFE_IF2.IFName        IF_HFE_PKT0
nodeA-10.2.0.14  HFE_IF3.IFName        IF_HFE_PKT1
nodeA-10.2.0.14  secondaryIPList.Pkt0  ['10.0.32.204']
nodeA-10.2.0.14  secondaryIPList.Pkt1  ['10.0.48.37']
nodeB-10.2.0.15  IF0.GWV4              10.0.0.1
nodeB-10.2.0.15  IF0.IPV4              10.0.0.55
nodeB-10.2.0.15  IF0.Port              Mgt0
nodeB-10.2.0.15  IF0.RNat              True
nodeB-10.2.0.15  IF1.GWV4              10.2.0.1
nodeB-10.2.0.15  IF1.IPV4              10.2.0.15
nodeB-10.2.0.15  IF1.Port              Ha0
nodeB-10.2.0.15  IF1.RNat              True
nodeB-10.2.0.15  IF2.GWV4              10.0.32.1
nodeB-10.2.0.15  IF2.IPV4              10.0.32.204
nodeB-10.2.0.15  IF2.Port              Pkt0
nodeB-10.2.0.15  IF2.RNat              True
nodeB-10.2.0.15  IF3.GWV4              10.0.48.1
nodeB-10.2.0.15  IF3.IPV4              10.0.48.37
nodeB-10.2.0.15  IF3.Port              Pkt1
nodeB-10.2.0.15  IF3.RNat              True
nodeB-10.2.0.15  IF0.FIPV4             35.232.104.143
nodeB-10.2.0.15  IF0.PrefixV4          24
nodeB-10.2.0.15  IF1.PrefixV4          32
nodeB-10.2.0.15  IF2.PrefixV4          32
nodeB-10.2.0.15  IF3.PrefixV4          32
nodeB-10.2.0.15  HFE_IF2.FIPV4         34.68.87.53
nodeB-10.2.0.15  HFE_IF3.FIPV4         10.0.3.19
nodeB-10.2.0.15  HFE_IF2.IFName        IF_HFE_PKT0
nodeB-10.2.0.15  HFE_IF3.IFName        IF_HFE_PKT1
nodeB-10.2.0.15  secondaryIPList.Pkt0  ['10.0.32.206']
nodeB-10.2.0.15  secondaryIPList.Pkt1  ['10.0.48.39']
[ok][2019-08-02 09:24:54]
Each SBC contains instance data, which is available in the file /opt/sonus/conf/instanceLcaData.json.
{ "secondaryIPListMgt0": [], "Mgt0IPv4Prefix": "24", "VIP_Pkt1_00": { "IP": "10.0.48.37", "IFName": "IF3" }, "Ha0IPv4Prefix": "32", "PeerCEMgt0IPv4Address": "10.0.0.55", "SystemName": "GCEHA", "PeerInstanceName": "cj-standby", "ThirdPartyCpuAlloc": "0", "PeerCEHa0IPv4Prefix": "32", "Mgt0Prefix": "24", "ThirdPartyMemAlloc": "0", "SbcPersonalityType": "isbc", "PeerCEHa0IPv4Address": "10.2.0.15", "CEName": "nodeA", "ClusterIp": "10.2.0.15", "HFE_IF2": { "IFName": "IF_HFE_PKT0", "FIPV4": "34.68.87.53" }, "secondaryIPListHa0": [], "PeerCEPkt1IPv4Prefix": "32", "instanceName": "cj-active", "Pkt1IPv4Prefix": "32", "CERole": "ACTIVE", "secondaryIPListPkt1": [ "10.0.48.39" ], "secondaryIPListPkt0": [ "10.0.32.206" ], "ReverseNatPkt0": "True", "ReverseNatPkt1": "True", "Pkt0IPv4Prefix": "32", "PeerCEPkt1IPv4Address": "10.0.48.40", "zone": "projects/626129518018/zones/us-central1-a", "SbcMgmtMode": "centralized", "PeerCEPkt0IPv4Address": "10.0.32.205", "IF0": { "PrefixV4": "24", "RNat": "True", "Port": "Mgt0", "FIPV4": "35.184.248.228" }, "IF1": { "PrefixV4": "32", "RNat": "True", "Port": "Ha0" }, "IF2": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt0" }, "IF3": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt1" }, "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMEMXjfUCrKApRWcjEYshAVDNg6aIrrgOp/ckLk2bSPFa37 BNoHr+SlxfvOUOm+C61CB6yp6Lou2lQWjBISoK5rx8fLrPOJz9JDnmEwmmnk4EdbWB0ArZC9MdhNxYbaWCeQFIYBY4FwLIxSy1 fyc6fZhQiPtqd05o08/9icwEbPM0EjeO7FHHMVLVBn7/LlDABcA4+O28/FF61HT3fJ1XZzXgg5MRURf/WcN0aZoKshsVZPiJZWg 2lkKehXHnMDjnmPvjWgyMQsgs9KfZirg1PMw7O8G/oMfXHMICCkx3I8t8/6VK2WQvoilo4zn6LgpLIjBvc2mxJRCZqh3MgxT" "PeerCEName": "nodeB", "VIP_Pkt0_00": { "IP": "10.0.32.204", "IFName": "IF2" }, "PeerCEMgt0IPv4Prefix": "24", "HfeInstanceName": "cj-hfe", "HFE_IF3": { "IFName": "IF_HFE_PKT1", "FIPV4": "10.0.3.19" }, "secondaryIPList": { "Ha0": [], "Mgt0": [], "Pkt1": [ "10.0.48.37" ], "Pkt0": [ "10.0.32.204" ] }, "PeerCEPkt0IPv4Prefix": "32" }
The AdminSshKey value above is actually one continuous line; it may wrap across multiple lines to display better on the page.
The following steps are mandatory to configure the SBC for DNS call flows.
When an external DNS server is configured in the SBC for FQDN resolutions, the resolv.conf file is updated in the SBC with the custom DNS server's IP address. This increases the priority of the custom DNS server over the metadata name server. As a result, all post-reboot metadata queries fail, the SSH keys from the metadata server for the instance are not copied into the authorized_keys file, and the machine becomes inaccessible.
Add the following FQDN against its IP address in your custom DNS server so that the metadata DNS requests succeed and the custom DNS server can resolve the Google metadata FQDN.
The following example is for an Ubuntu DNS instance. For any other OS, configure the DNS server accordingly.
In the named.conf file, add the following zone definition:
zone "google.internal" IN { type master; allow-query {any;}; file "google.internal.zone"; };
Open the folder containing all the zone files.
Create a new zone file google.internal.zone with the following entries:
# cat google.internal.zone
$TTL 1D
@   IN SOA  ip-172-31-10-54.google.internal. root.google.internal. (
                2019109120  ; Serial number (yyyymmdd-num)
                8H          ; Refresh
                2M          ; Retry
                4W          ; Expire
                1D )        ; Minimum
    IN NS   ip-172-31-10-54
as.ipv4          A     0.0.0.0
as.ipv6          AAAA  0::0
ip-172-31-10-54  A     <DNS server IP address>
metadata         IN A  169.254.169.254
Reload the DNS server configuration by executing rndc reload.
The custom DNS server can now resolve the Google metadata FQDN metadata.google.internal, which resolves to the IP 169.254.169.254.
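To confirm the mapping, you can query the custom DNS server directly (assuming dig is available; replace <DNS server IP address> with your server's address):

dig @<DNS server IP address> metadata.google.internal +short
# Expected output: 169.254.169.254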
The following section contains instructions for setting multiple SSH IPs to access the HFE node, as well as for updating the instances to add more SSH IPs.
Ensure that REMOTE_SSH_MACHINE_IP is not set to an IP address from which call traffic originates; doing so can break the HFE logic and cause traffic to fail to reach the SBC.
During orchestration, you can supply multiple IP addresses to the appropriate variable as a comma-separated list, for example: 10.0.0.1,10.0.0.2,10.0.0.3. The following table lists the variables that must be set for each orchestration type:
The following steps describe the procedure to update the Remote SSH IPs on GCP.
To add a new Remote SSH Machine IP, you must supply the full list of IPs for which routes need to be created.
This procedure results in a network outage, because the HFE requires a reboot to pick up the updated list.
In the value for startup-script, edit the REMOTE_SSH_MACHINE_IP line. For example:
/bin/echo "REMOTE_SSH_MACHINE_IP=\"10.0.0.1,10.10.10.10\"">> $NAT_VAR
Select Save.