Ensure the following before creating the SBCs:
For a Standalone configuration:
For an HA pair with HFE setup:
To create an SBC instance, follow the steps below. Any extra configuration required for an HFE environment is called out specifically.
Click Create to open the Create an instance page.
Select the Boot disk option, and click Change to open the Boot disk panel.
Click Management, security, disks, networking, sole tenancy to expand the tabs.
Enter the SSH key for the linuxadmin user. For more information, see 297373040. For Standalone, leave it blank.
Set the External IP to one of the static External IPs created earlier.
Set the External IP to "None".
Set the Alias IP range to "/32".
For Standalone, set the External IP to one of the static External IPs created earlier.
Set the Alias IP range to "/32".
For Standalone, set the External IP to one of the static External IPs created earlier.
For Standalone, click CREATE.
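For reference, the NIC layout configured in the console steps above can also be expressed with the gcloud CLI. The following is a minimal, hypothetical sketch for an HFE-style instance: the instance name, zone, machine type, and subnet names are placeholders, and the boot-disk and metadata options from the earlier steps are omitted.

# Hypothetical gcloud equivalent of the console steps; all names are placeholders.
# nic0 (mgt0) gets a static external IP; ha0/pkt0/pkt1 get no external IP
# and a /32 alias IP range, as described above.
gcloud compute instances create sbc-active \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --network-interface=subnet=mgt0-subnet,address=<STATIC_EXTERNAL_IP> \
    --network-interface=subnet=ha0-subnet,no-address,aliases=/32 \
    --network-interface=subnet=pkt0-subnet,no-address,aliases=/32 \
    --network-interface=subnet=pkt1-subnet,no-address,aliases=/32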
In an HFE environment, ensure that the manual creation of the SBCs is performed exactly as described below; otherwise, the SBC CEs cannot collect the necessary information about their peers.
If the Active instance comes up before you create the Standby instance, restart both instances and clear the databases.
To avoid disk corruption, do not use the GCP console Reset option to restart an instance.
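If you need to restart an instance (for example, in the Active-before-Standby case above), a stop followed by a start is the safe alternative to Reset. A sketch with a placeholder instance name and zone:

# Stop the guest OS cleanly, then start it again; unlike Reset,
# this performs a graceful shutdown
gcloud compute instances stop sbc-active --zone=us-central1-a
gcloud compute instances start sbc-active --zone=us-central1-a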
To enhance the security of the SBC in the public cloud space, certain restrictions are imposed.
By default, only the linuxadmin user (used for accessing the Linux shell) and the admin user (the default user for accessing the SBC CLI) support SSH key login. The SSH keys are entered using the following methods:
<ssh-rsa ...> linuxadmin
This format informs cloud-init that the key is for the linuxadmin user.
You can retrieve the public SSH key on Linux by executing the following command: ssh-keygen -y -f <privateKeyFile>
Ribbon recommends using separate SSH keys for every user.
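If you do not already have a key pair, you can generate one and print the public key in the format shown above. A minimal sketch; the file path and key comment are illustrative:

# Generate a dedicated RSA key pair for the linuxadmin user
ssh-keygen -t rsa -b 4096 -f ~/.ssh/sbc_linuxadmin -C "sbc-linuxadmin"

# Print the public key with the username appended, so cloud-init
# associates the key with linuxadmin
echo "$(ssh-keygen -y -f ~/.ssh/sbc_linuxadmin) linuxadmin"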
By default, linuxadmin allows very little access at the OS level. For simple debugging, use the sbcDiagnostic command to run sysdumps and check the application status.
When a valid SBX license is installed, linuxadmin gains full sudo permissions.
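To check what the current license level permits at the OS prompt, you can list the sudo rights granted to linuxadmin (standard sudo behavior, not SBC-specific):

# Lists the commands linuxadmin may run via sudo; the list stays
# minimal until a valid SBX license is installed
sudo -l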
To prevent the execution of unauthorized commands on the SBC, only user-data in valid JSON format is allowed. If the SBC detects user-data that is not valid JSON, it shuts down immediately.
Specify the user data in pure JSON. For example:
This example is suitable for HFE 2.1. For HFE 2.0, make the following modification: replace the Pkt0HfeInstanceName and Pkt1HfeInstanceName keys with the single HfeInstanceName key.
.{ "CERole" : "<<ACTIVE/STANDBY>>", "ReverseNatPkt0" : "True", "ReverseNatPkt1" : "True", "CEName" : "<<CE NAME>>", "SystemName" : "<<SYSTEM NAME>>", "PeerCEName" : "<<PEER CE NAME>>", "PeerCEHa0IPv4Address": "<<ETH1 PRIMARY IP ON PEER>>", "ClusterIp" : "<<ETH1 PRIMARY IP ON PEER>>", "SbcPersonalityType": "isbc", "SbcMgmtMode": "centralized", "Mgt0Prefix": "24", "ThirdPartyCpuAlloc" : "0", "ThirdPartyMemAlloc" : "0", "AdminSshKey" : "<<SSH KEY>>", "PeerInstanceName": "<<PEER INSTANCE GOOGLE NAME>>", "Pkt0HfeInstanceName": "<<PKT0 HFE NODE INSTANCE GOOGLE NAME>>", "Pkt1HfeInstanceName": "<<PKT1 HFE NODE INSTANCE GOOGLE NAME>>" }
The following table describes all of the keys required in the SBC user-data. The Required By column specifies which type of setup requires this key.
Key | Allowed Values | Required By | Description |
---|---|---|---|
CERole | ACTIVE/STANDBY | HFE | The role of the SBC instance. Configure one instance as ACTIVE and the other as STANDBY. |
ReverseNatPkt0 | True/False | HFE | Must be set to True for HFE. |
ReverseNatPkt1 | True/False | HFE | Must be set to True for HFE. |
CEName | N/A | Standalone and HFE | The actual CE name of the SBC instance. Naming requirements apply; refer to System and Instance Naming Conventions. |
SystemName | N/A | Standalone and HFE | The System Name of the SBC instances. Naming requirements apply; refer to System and Instance Naming Conventions. |
PeerCEName | N/A | HFE | The CEName of the peer instance (ensure it matches the CEName in the peer's user-data). |
PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
ClusterIp | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
SbcPersonalityType | isbc | Standalone and HFE | The SBC personality type for this instance. Currently, Ribbon supports only I-SBC. |
SbcMgmtMode | centralized | Standalone and HFE | The SBC management mode. Currently, Ribbon supports only the centralized mode. |
Mgt0Prefix | N/A | Standalone and HFE | The CIDR prefix for the Mgmt subnet. |
ThirdPartyCpuAlloc | 0-4 | N/A | The number of CPUs allocated to non-Ribbon applications. This key is optional; additional restrictions apply. |
ThirdPartyMemAlloc | 0-4096 | N/A | The amount of memory (in MB) allocated to non-Ribbon applications. This key is optional; additional restrictions apply. |
AdminSshKey | ssh-rsa ... | Standalone and HFE | The public SSH key for accessing the admin user. See SSH Key Login Only. |
PeerInstanceName | N/A | HFE | The name of the peer instance in GCP. Note that this is not the CEName or the SystemName. |
HfeInstanceName | N/A | HFE 2.0 | The name of the HFE instance in GCP; use only for HFE 2.0 (single HFE node).* |
Pkt0HfeInstanceName | N/A | HFE 2.1 | The name of the PKT0 HFE node; use only for HFE 2.1 (split HFE nodes).** |
Pkt1HfeInstanceName | N/A | HFE 2.1 | The name of the PKT1 HFE node; use only for HFE 2.1 (split HFE nodes).** |
* For more information, refer to the section "HFE 2.0" of the page Configure the HFE Node.
** For more information, refer to the section "HFE 2.1" of the page Configure the HFE Node.
Configure the PKT interfaces using the following command examples:
You must create three static routes per packet interface, as shown in the example below.
# Configuring PKT0 interface
set addressContext default ipInterfaceGroup LIF1
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 ceName <<CE Name of configured Active* from metavars>> portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 ipPublicVarV4 <<IF2.FIPV4 OR HFE_IF2.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF1 F1 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF1 F1 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF1 F1 preference 100
commit

# Configuring PKT1 interface
set addressContext default ipInterfaceGroup LIF2
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 ceName <<CE Name of configured Active* from metavars>> portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 ipPublicVarV4 <<IF3.FIPV4 OR HFE_IF3.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF2 F2 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF2 F2 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF2 F2 preference 100
commit
* This is the instance that has ACTIVE as its CERole in the user data.
** If using HFE, use the HFE_IF*.FIPV4 metavariable. For Standalone, use IF*.FIPV4.
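For illustration, with the sample PKT0 values from the verification output below (subnet 10.0.32.0/24, gateway 10.0.32.1), the three static routes for LIF1/F1 expand to:

set addressContext default staticRoute 10.0.32.1 32 0.0.0.0 LIF1 F1 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 10.0.32.1 LIF1 F1 preference 100
commit
set addressContext default staticRoute 10.0.32.0 24 10.0.32.1 LIF1 F1 preference 100
commit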
The correct configuration is similar to the following:
admin@nodeA> show table addressContext default staticRoute
                                     IP INTERFACE  IP
DESTINATION                          GROUP         INTERFACE               CE
IP ADDRESS   PREFIX  NEXT HOP        NAME          NAME        PREFERENCE  NAME
------------------------------------------------------------------------
0.0.0.0      0       10.0.32.1       LIF1          F1          100         -
0.0.0.0      0       10.0.48.1       LIF2          F2          100         -
10.0.32.0    24      10.0.32.1       LIF1          F1          100         -
10.0.32.1    32      0.0.0.0         LIF1          F1          100         -
10.0.48.0    24      10.0.48.1       LIF2          F2          100         -
10.0.48.1    32      0.0.0.0         LIF2          F2          100         -
[ok][2019-08-05 10:26:34]

admin@nodeA> show table addressContext default ipInterfaceGroup
                                       PORT  IP              ALT IP   ALT                DRYUP   DRYUP            BW           VLAN             IP VAR    PREFIX VAR    IP PUBLIC VAR
NAME  IPSEC     NAME  CE NAME          NAME  ADDRESS PREFIX  ADDRESS  PREFIX  MODE       ACTION  TIMEOUT  STATE   CONTINGENCY  TAG   BANDWIDTH  V4        V4            V4             V6
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIF1  disabled  F1    nodeA-10.2.0.14  pkt0  -       -       -        -       inService  dryUp   60       enabled 0            -     0          IF2.IPV4  IF2.PrefixV4  HFE_IF2.FIPV4  -
LIF2  disabled  F2    nodeA-10.2.0.14  pkt1  -       -       -        -       inService  dryUp   60       enabled 0            -     0          IF3.IPV4  IF3.PrefixV4  HFE_IF3.FIPV4  -
[ok][2019-08-05 10:29:58]
Sample SBC configurations are given below. For more information, refer to Metadata, Userdata and MetaVariable Formats on AWS (7.2S400).
Example Meta Variable table for HFE environment:
admin@nodeA> show table system metaVariable
CE NAME          NAME                  VALUE
--------------------------------------------------------
nodeA-10.2.0.14  IF0.GWV4              10.0.0.1
nodeA-10.2.0.14  IF0.IPV4              10.0.0.54
nodeA-10.2.0.14  IF0.Port              Mgt0
nodeA-10.2.0.14  IF0.RNat              True
nodeA-10.2.0.14  IF1.GWV4              10.2.0.1
nodeA-10.2.0.14  IF1.IPV4              10.2.0.14
nodeA-10.2.0.14  IF1.Port              Ha0
nodeA-10.2.0.14  IF1.RNat              True
nodeA-10.2.0.14  IF2.GWV4              10.0.32.1
nodeA-10.2.0.14  IF2.IPV4              10.0.32.204
nodeA-10.2.0.14  IF2.Port              Pkt0
nodeA-10.2.0.14  IF2.RNat              True
nodeA-10.2.0.14  IF3.GWV4              10.0.48.1
nodeA-10.2.0.14  IF3.IPV4              10.0.48.37
nodeA-10.2.0.14  IF3.Port              Pkt1
nodeA-10.2.0.14  IF3.RNat              True
nodeA-10.2.0.14  IF0.FIPV4             35.184.248.228
nodeA-10.2.0.14  IF0.PrefixV4          24
nodeA-10.2.0.14  IF1.PrefixV4          32
nodeA-10.2.0.14  IF2.PrefixV4          32
nodeA-10.2.0.14  IF3.PrefixV4          32
nodeA-10.2.0.14  HFE_IF2.FIPV4         34.68.87.53
nodeA-10.2.0.14  HFE_IF3.FIPV4         10.0.3.19
nodeA-10.2.0.14  HFE_IF2.IFName        IF_HFE_PKT0
nodeA-10.2.0.14  HFE_IF3.IFName        IF_HFE_PKT1
nodeA-10.2.0.14  secondaryIPList.Pkt0  ['10.0.32.204']
nodeA-10.2.0.14  secondaryIPList.Pkt1  ['10.0.48.37']
nodeB-10.2.0.15  IF0.GWV4              10.0.0.1
nodeB-10.2.0.15  IF0.IPV4              10.0.0.55
nodeB-10.2.0.15  IF0.Port              Mgt0
nodeB-10.2.0.15  IF0.RNat              True
nodeB-10.2.0.15  IF1.GWV4              10.2.0.1
nodeB-10.2.0.15  IF1.IPV4              10.2.0.15
nodeB-10.2.0.15  IF1.Port              Ha0
nodeB-10.2.0.15  IF1.RNat              True
nodeB-10.2.0.15  IF2.GWV4              10.0.32.1
nodeB-10.2.0.15  IF2.IPV4              10.0.32.204
nodeB-10.2.0.15  IF2.Port              Pkt0
nodeB-10.2.0.15  IF2.RNat              True
nodeB-10.2.0.15  IF3.GWV4              10.0.48.1
nodeB-10.2.0.15  IF3.IPV4              10.0.48.37
nodeB-10.2.0.15  IF3.Port              Pkt1
nodeB-10.2.0.15  IF3.RNat              True
nodeB-10.2.0.15  IF0.FIPV4             35.232.104.143
nodeB-10.2.0.15  IF0.PrefixV4          24
nodeB-10.2.0.15  IF1.PrefixV4          32
nodeB-10.2.0.15  IF2.PrefixV4          32
nodeB-10.2.0.15  IF3.PrefixV4          32
nodeB-10.2.0.15  HFE_IF2.FIPV4         34.68.87.53
nodeB-10.2.0.15  HFE_IF3.FIPV4         10.0.3.19
nodeB-10.2.0.15  HFE_IF2.IFName        IF_HFE_PKT0
nodeB-10.2.0.15  HFE_IF3.IFName        IF_HFE_PKT1
nodeB-10.2.0.15  secondaryIPList.Pkt0  ['10.0.32.206']
nodeB-10.2.0.15  secondaryIPList.Pkt1  ['10.0.48.39']
[ok][2019-08-02 09:24:54]
Each SBC contains instance data, which is available in the file /opt/sonus/conf/instanceLcaData.json.
{ "secondaryIPListMgt0": [], "Mgt0IPv4Prefix": "24", "VIP_Pkt1_00": { "IP": "10.0.48.37", "IFName": "IF3" }, "Ha0IPv4Prefix": "32", "PeerCEMgt0IPv4Address": "10.0.0.55", "SystemName": "GCEHA", "PeerInstanceName": "cj-standby", "ThirdPartyCpuAlloc": "0", "PeerCEHa0IPv4Prefix": "32", "Mgt0Prefix": "24", "ThirdPartyMemAlloc": "0", "SbcPersonalityType": "isbc", "PeerCEHa0IPv4Address": "10.2.0.15", "CEName": "nodeA", "ClusterIp": "10.2.0.15", "HFE_IF2": { "IFName": "IF_HFE_PKT0", "FIPV4": "34.68.87.53" }, "secondaryIPListHa0": [], "PeerCEPkt1IPv4Prefix": "32", "instanceName": "cj-active", "Pkt1IPv4Prefix": "32", "CERole": "ACTIVE", "secondaryIPListPkt1": [ "10.0.48.39" ], "secondaryIPListPkt0": [ "10.0.32.206" ], "ReverseNatPkt0": "True", "ReverseNatPkt1": "True", "Pkt0IPv4Prefix": "32", "PeerCEPkt1IPv4Address": "10.0.48.40", "zone": "projects/626129518018/zones/us-central1-a", "SbcMgmtMode": "centralized", "PeerCEPkt0IPv4Address": "10.0.32.205", "IF0": { "PrefixV4": "24", "RNat": "True", "Port": "Mgt0", "FIPV4": "35.184.248.228" }, "IF1": { "PrefixV4": "32", "RNat": "True", "Port": "Ha0" }, "IF2": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt0" }, "IF3": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt1" }, "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMEMXjfUCrKApRWcjEYshAVDNg6aIrrgOp/ckLk2bSPFa37BNoHr+SlxfvOUOm+C61CB6yp6Lou2lQWjBISoK5r+x8fLrPOJz9JDnmEwmmnk4EdbWB0ArZC9MdhNxYbaWCeQFIYBY4FwLIxSy1fyc6fZhQiPtqd05o08/9icwEbPM0EjeO7FHHMVLVBn7/LlDABcA4+O28/FF61HT3fJ1XZzXgg5MRURf/WcN0aZoKshsV+ZPiJZWg2lkKehXHnMDjnmPvjWgyMQsgs9KfZirg1PMw7O8G/oMfXHMICCkx3I8t8/6VK2WQvoilo4zn6LgpLIjBvc2mxJRCZqh3MgxT", "PeerCEName": "nodeB", "VIP_Pkt0_00": { "IP": "10.0.32.204", "IFName": "IF2" }, "PeerCEMgt0IPv4Prefix": "24", "HfeInstanceName": "cj-hfe", "HFE_IF3": { "IFName": "IF_HFE_PKT1", "FIPV4": "10.0.3.19" }, "secondaryIPList": { "Ha0": [], "Mgt0": [], "Pkt1": [ "10.0.48.37" ], "Pkt0": [ "10.0.32.204" ] }, "PeerCEPkt0IPv4Prefix": "32" }
The following steps are mandatory to configure the SBC for DNS call flows.
When an external DNS server is configured in the SBC for FQDN resolution, the resolv.conf file in the SBC is updated with the custom DNS server's IP address. This gives the custom DNS server priority over the metadata nameserver. As a result, all post-reboot metadata queries fail, the SSH keys for the instance are not copied from the metadata server into the authorized_keys file, and the machine becomes inaccessible.
Add the following FQDN and its IP address to your custom DNS server so that metadata DNS requests succeed and the custom DNS server can resolve the Google metadata FQDN.
The following example is for an Ubuntu DNS instance. For any other OS, configure the DNS server accordingly.
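On a stock Ubuntu instance, the BIND packages provide named.conf, rndc, and the zone-validation tools used below (package names assume Ubuntu's packaging):

# Install BIND and its administrative utilities on the DNS instance
sudo apt-get update && sudo apt-get install -y bind9 bind9utils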
In the named.conf file, add the zone definition:
zone "google.internal" IN {
    type master;
    allow-query { any; };
    file "google.internal.zone";
};
Open the folder containing all the zone files.
Create a new zone file google.internal.zone with the following entries:
# cat google.internal.zone
$TTL 1D
@   IN SOA  ip-172-31-10-54.google.internal. root.google.internal. (
            2019109120  ; Serial number (yyyymmdd-num)
            8H          ; Refresh
            2M          ; Retry
            4W          ; Expire
            1D )        ; Minimum
    IN NS   ip-172-31-10-54
as.ipv4          A     0.0.0.0
as.ipv6          AAAA  0::0
ip-172-31-10-54  A     <DNS server IP address>
metadata         IN A  169.254.169.254
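Before reloading, you can validate both the BIND configuration and the new zone file; named-checkconf and named-checkzone ship with BIND (the named.conf path varies by distribution):

# Validate the overall BIND configuration
named-checkconf /etc/named.conf

# Validate the new zone file against its zone name
named-checkzone google.internal google.internal.zone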
Reload the DNS server configuration by executing rndc reload. The custom DNS server can now resolve the FQDN metadata.google.internal, which resolves to the IP 169.254.169.254.
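You can confirm resolution from any host pointed at the custom DNS server; the server address below is a placeholder:

# Query the custom DNS server directly for the metadata FQDN;
# the expected answer is 169.254.169.254
dig +short metadata.google.internal @<DNS server IP address>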