The following tasks must be completed before manually creating the SBCs:
For a Standalone configuration:
For an HA with HFE setup, these tasks must also be completed first:
These are the steps for creating an SBC instance. Any extra configuration for the HFE environment setup is called out specifically.
Click Create to open the Create an instance page.
Choose the Boot disk option and then click Change to open the Boot disk panel.
Click Management, security, disks, networking, sole tenancy to expand the tabs.
If Standalone: Leave blank.
Set External IP to one of the static external IPs created earlier.
Set External IP to None.
Set Alias IP range to /32.
If Standalone, set it to one of the static external IPs created earlier.
Set Alias IP range to /32.
If Standalone, set it to one of the static external IPs created earlier.
If Standalone: Click CREATE.
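The same instance can also be created from the gcloud CLI. The following is a minimal sketch for a standalone SBC; the instance name, zone, machine type, image, subnet names, and addresses are all placeholders that must match your own deployment:

# Hypothetical example: create a standalone SBC instance with four NICs
# (nic0=mgt0, nic1=ha0, nic2=pkt0, nic3=pkt1) and attach the user-data file.
gcloud compute instances create example-sbc-1 \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --image example-sbc-image \
    --network-interface subnet=mgt0-subnet,address=STATIC_MGMT_IP \
    --network-interface subnet=ha0-subnet,no-address \
    --network-interface subnet=pkt0-subnet,address=STATIC_PKT0_IP,aliases=/32 \
    --network-interface subnet=pkt1-subnet,address=STATIC_PKT1_IP,aliases=/32 \
    --metadata-from-file user-data=userdata.json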
In an HFE environment, you must create the SBCs manually in a particular way (outlined below) so that the SBC CEs can gather the necessary information about their peers.
If the 'Active' instance is created and comes up before the 'Standby' instance is created, both instances will need to be restarted and the DBs will need to be cleared.
If you must restart an instance, do not use the GCP console Reset option. This can lead to disk corruption.
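One way to restart gracefully is to stop and then start the instance rather than using Reset. A sketch using the gcloud CLI (the instance name and zone are placeholders):

# Stop the instance cleanly, then start it again
gcloud compute instances stop example-sbc-1 --zone us-central1-a
gcloud compute instances start example-sbc-1 --zone us-central1-a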
To enhance the security of the SBC in the public cloud space, certain restrictions have been imposed.
By default, both the linuxadmin user (used for accessing the Linux shell) and the admin user (the default user for accessing the SBC CLI) support SSH key login only. These SSH keys are entered via two methods:
This format then tells cloud-init that the key is for linuxadmin.
Public SSH keys can be retrieved on Linux using: ssh-keygen -y -f <privateKeyFile>.
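For example, to generate a dedicated key pair and print the public half for use in the user-data (the file path here is illustrative):

# Generate a key pair for the admin user
ssh-keygen -t rsa -b 2048 -f ~/.ssh/sbc_admin_key
# Print the public key to paste into the AdminSshKey field
ssh-keygen -y -f ~/.ssh/sbc_admin_key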
Ribbon recommends that you use separate SSH keys for each user.
By default, very little can be accessed at the Linux level using linuxadmin. For simple debugging, the sbcDiagnostic command can be used to run sysdumps and check the application status.
When a valid SBX license is installed, linuxadmin gains full sudoers permissions.
To prevent unauthorized commands from being run on the SBC, the user-data for SBC instances must be valid JSON. If any invalid JSON is found, the SBC immediately shuts down.
The user data must be specified in pure JSON. For example:
{ "CERole" : "<<ACTIVE/STANDBY>>", "ReverseNatPkt0" : "True", "ReverseNatPkt1" : "True", "CEName" : "<<CE NAME>>", "SystemName" : "<<SYSTEM NAME>>", "PeerCEName" : "<<PEER CE NAME>>", "PeerCEHa0IPv4Address": "<<ETH1 PRIMARY IP ON PEER>>", "ClusterIp" : "<<ETH1 PRIMARY IP ON PEER>>", "SbcPersonalityType": "isbc", "SbcMgmtMode": "centralized", "Mgt0Prefix": "24", "ThirdPartyCpuAlloc" : "0", "ThirdPartyMemAlloc" : "0", "AdminSshKey" : "<<SSH KEY>>", "PeerInstanceName": "<<PEER INSTANCE GOOGLE NAME>>", "HfeInstanceName": "<<HFE INSTANCE GOOGLE NAME>>" }
The following table describes all of the keys which may be required in the SBC user-data. The Required By column specifies which type of setup requires this key.
Key | Allowed Values | Required By | Description |
---|---|---|---|
CERole | ACTIVE/STANDBY | HFE | The defined role for the SBC instance. One instance must be configured as ACTIVE and the other as STANDBY. |
ReverseNatPkt0 | True/False | HFE | Must be set to True for HFE. |
ReverseNatPkt1 | True/False | HFE | Must be set to True for HFE. |
CEName | N/A | Standalone and HFE | The actual CE name of the SBC instance. For CEName requirements and more information, see System and Instance Naming Conventions. |
SystemName | N/A | Standalone and HFE | The System Name of the SBC instances. For SystemName requirements and more information, see System and Instance Naming Conventions. |
PeerCEName | N/A | HFE | The CEName of the peer instance (must match the peer CE's CEName in its user-data). |
PeerCEHa0IPv4Address | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
ClusterIp | xxx.xxx.xxx.xxx | HFE | The private IPv4 address of the HA interface on the peer instance. |
SbcPersonalityType | isbc | Standalone and HFE | The name of the SBC personality type for this instance. At this time, only the integrated SBC (isbc) is supported. |
SbcMgmtMode | centralized | Standalone and HFE | The mode in which the SBCs are managed. At this time, only centralized is supported. |
Mgt0Prefix | N/A | Standalone and HFE | The CIDR prefix for the Mgmt subnet. |
ThirdPartyCpuAlloc | 0-4 | N/A | The number of CPUs to segregate for use with non-Ribbon applications. This key is optional and subject to restrictions. |
ThirdPartyMemAlloc | 0-4096 | N/A | The amount of memory (in MB) to segregate for use with non-Ribbon applications. This key is optional and subject to restrictions. |
AdminSshKey | ssh-rsa ... | Standalone and HFE | The public SSH key used to access the admin user. See SSH Key Login Only. |
PeerInstanceName | N/A | HFE | The name of the peer instance in GCP. Note: This is not the CEName or the SystemName. |
HfeInstanceName | N/A | HFE | The name of the HFE instance in GCP. |
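To illustrate how the keys fit together, the following is a hypothetical user-data file for the ACTIVE instance of an HFE pair, written out via a shell heredoc. All values are examples taken from the sample configuration later in this section (nodeA/nodeB, GCEHA, cj-standby, cj-hfe); the SSH key is truncated:

cat > userdata-active.json <<'EOF'
{
    "CERole" : "ACTIVE",
    "ReverseNatPkt0" : "True",
    "ReverseNatPkt1" : "True",
    "CEName" : "nodeA",
    "SystemName" : "GCEHA",
    "PeerCEName" : "nodeB",
    "PeerCEHa0IPv4Address" : "10.2.0.15",
    "ClusterIp" : "10.2.0.15",
    "SbcPersonalityType" : "isbc",
    "SbcMgmtMode" : "centralized",
    "Mgt0Prefix" : "24",
    "ThirdPartyCpuAlloc" : "0",
    "ThirdPartyMemAlloc" : "0",
    "AdminSshKey" : "ssh-rsa AAAA...example...",
    "PeerInstanceName" : "cj-standby",
    "HfeInstanceName" : "cj-hfe"
}
EOF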
The PKT interfaces need to be configured through the CLI. The required commands are outlined below:
# Configuring PKT0 interface
set addressContext default ipInterfaceGroup LIF1
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 ceName <<CE Name of configured Active* from metavars>> portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 ipPublicVarV4 <<IF2.FIPV4 OR HFE_IF2.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF1 F1 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF1 F1 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF1 F1 preference 100
commit

# Configuring PKT1 interface
set addressContext default ipInterfaceGroup LIF2
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 ceName <<CE Name of configured Active* from metavars>> portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 ipPublicVarV4 <<IF3.FIPV4 OR HFE_IF3.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF2 F2 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF2 F2 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF2 F2 preference 100
commit
* This is the instance configured with ACTIVE as its CERole in the user data.
** If using HFE, use the HFE_IF*.FIPV4 metavariable. If Standalone use IF*.FIPV4.
The correct configuration should look like this:
admin@nodeA> show table addressContext default staticRoute
                                  IP INTERFACE  IP
DESTINATION                       GROUP         INTERFACE             CE
IP ADDRESS  PREFIX  NEXT HOP      NAME          NAME       PREFERENCE NAME
------------------------------------------------------------------------
0.0.0.0     0       10.0.32.1     LIF1          F1         100        -
0.0.0.0     0       10.0.48.1     LIF2          F2         100        -
10.0.32.0   24      10.0.32.1     LIF1          F1         100        -
10.0.32.1   32      0.0.0.0       LIF1          F1         100        -
10.0.48.0   24      10.0.48.1     LIF2          F2         100        -
10.0.48.1   32      0.0.0.0       LIF2          F2         100        -
[ok][2019-08-05 10:26:34]

admin@nodeA> show table addressContext default ipInterfaceGroup
                IP                                      IP              ALT IP  ALT               DRYUP  DRYUP            BW           VLAN             IP VAR    PREFIX VAR    IP PUBLIC      VAR
NAME  IPSEC     NAME  CE NAME          PORT NAME  ADDRESS PREFIX ADDRESS PREFIX MODE       ACTION TIMEOUT STATE   CONTINGENCY  TAG   BANDWIDTH  V4        V4            VAR V4         V6
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIF1  disabled  F1    nodeA-10.2.0.14  pkt0       -       -      -       -      inService  dryUp  60      enabled 0            -     0          IF2.IPV4  IF2.PrefixV4  HFE_IF2.FIPV4  -
LIF2  disabled  F2    nodeA-10.2.0.14  pkt1       -       -      -       -      inService  dryUp  60      enabled 0            -     0          IF3.IPV4  IF3.PrefixV4  HFE_IF3.FIPV4  -
[ok][2019-08-05 10:29:58]
Below are sample SBC configurations. For more information on what these elements mean, see Metadata, Userdata and MetaVariable Formats on AWS (7.2S400).
Example meta variable table for an HFE environment:
admin@nodeA> show table system metaVariable
CE NAME          NAME                  VALUE
--------------------------------------------------------
nodeA-10.2.0.14  IF0.GWV4              10.0.0.1
nodeA-10.2.0.14  IF0.IPV4              10.0.0.54
nodeA-10.2.0.14  IF0.Port              Mgt0
nodeA-10.2.0.14  IF0.RNat              True
nodeA-10.2.0.14  IF1.GWV4              10.2.0.1
nodeA-10.2.0.14  IF1.IPV4              10.2.0.14
nodeA-10.2.0.14  IF1.Port              Ha0
nodeA-10.2.0.14  IF1.RNat              True
nodeA-10.2.0.14  IF2.GWV4              10.0.32.1
nodeA-10.2.0.14  IF2.IPV4              10.0.32.204
nodeA-10.2.0.14  IF2.Port              Pkt0
nodeA-10.2.0.14  IF2.RNat              True
nodeA-10.2.0.14  IF3.GWV4              10.0.48.1
nodeA-10.2.0.14  IF3.IPV4              10.0.48.37
nodeA-10.2.0.14  IF3.Port              Pkt1
nodeA-10.2.0.14  IF3.RNat              True
nodeA-10.2.0.14  IF0.FIPV4             35.184.248.228
nodeA-10.2.0.14  IF0.PrefixV4          24
nodeA-10.2.0.14  IF1.PrefixV4          32
nodeA-10.2.0.14  IF2.PrefixV4          32
nodeA-10.2.0.14  IF3.PrefixV4          32
nodeA-10.2.0.14  HFE_IF2.FIPV4         34.68.87.53
nodeA-10.2.0.14  HFE_IF3.FIPV4         10.0.3.19
nodeA-10.2.0.14  HFE_IF2.IFName        IF_HFE_PKT0
nodeA-10.2.0.14  HFE_IF3.IFName        IF_HFE_PKT1
nodeA-10.2.0.14  secondaryIPList.Pkt0  ['10.0.32.204']
nodeA-10.2.0.14  secondaryIPList.Pkt1  ['10.0.48.37']
nodeB-10.2.0.15  IF0.GWV4              10.0.0.1
nodeB-10.2.0.15  IF0.IPV4              10.0.0.55
nodeB-10.2.0.15  IF0.Port              Mgt0
nodeB-10.2.0.15  IF0.RNat              True
nodeB-10.2.0.15  IF1.GWV4              10.2.0.1
nodeB-10.2.0.15  IF1.IPV4              10.2.0.15
nodeB-10.2.0.15  IF1.Port              Ha0
nodeB-10.2.0.15  IF1.RNat              True
nodeB-10.2.0.15  IF2.GWV4              10.0.32.1
nodeB-10.2.0.15  IF2.IPV4              10.0.32.204
nodeB-10.2.0.15  IF2.Port              Pkt0
nodeB-10.2.0.15  IF2.RNat              True
nodeB-10.2.0.15  IF3.GWV4              10.0.48.1
nodeB-10.2.0.15  IF3.IPV4              10.0.48.37
nodeB-10.2.0.15  IF3.Port              Pkt1
nodeB-10.2.0.15  IF3.RNat              True
nodeB-10.2.0.15  IF0.FIPV4             35.232.104.143
nodeB-10.2.0.15  IF0.PrefixV4          24
nodeB-10.2.0.15  IF1.PrefixV4          32
nodeB-10.2.0.15  IF2.PrefixV4          32
nodeB-10.2.0.15  IF3.PrefixV4          32
nodeB-10.2.0.15  HFE_IF2.FIPV4         34.68.87.53
nodeB-10.2.0.15  HFE_IF3.FIPV4         10.0.3.19
nodeB-10.2.0.15  HFE_IF2.IFName        IF_HFE_PKT0
nodeB-10.2.0.15  HFE_IF3.IFName        IF_HFE_PKT1
nodeB-10.2.0.15  secondaryIPList.Pkt0  ['10.0.32.206']
nodeB-10.2.0.15  secondaryIPList.Pkt1  ['10.0.48.39']
[ok][2019-08-02 09:24:54]
Each SBC contains instance data, which comprises the data needed by the SBC application. The data can be found in /opt/sonus/conf/instanceLcaData.json.
{ "secondaryIPListMgt0": [], "Mgt0IPv4Prefix": "24", "VIP_Pkt1_00": { "IP": "10.0.48.37", "IFName": "IF3" }, "Ha0IPv4Prefix": "32", "PeerCEMgt0IPv4Address": "10.0.0.55", "SystemName": "GCEHA", "PeerInstanceName": "cj-standby", "ThirdPartyCpuAlloc": "0", "PeerCEHa0IPv4Prefix": "32", "Mgt0Prefix": "24", "ThirdPartyMemAlloc": "0", "SbcPersonalityType": "isbc", "PeerCEHa0IPv4Address": "10.2.0.15", "CEName": "nodeA", "ClusterIp": "10.2.0.15", "HFE_IF2": { "IFName": "IF_HFE_PKT0", "FIPV4": "34.68.87.53" }, "secondaryIPListHa0": [], "PeerCEPkt1IPv4Prefix": "32", "instanceName": "cj-active", "Pkt1IPv4Prefix": "32", "CERole": "ACTIVE", "secondaryIPListPkt1": [ "10.0.48.39" ], "secondaryIPListPkt0": [ "10.0.32.206" ], "ReverseNatPkt0": "True", "ReverseNatPkt1": "True", "Pkt0IPv4Prefix": "32", "PeerCEPkt1IPv4Address": "10.0.48.40", "zone": "projects/626129518018/zones/us-central1-a", "SbcMgmtMode": "centralized", "PeerCEPkt0IPv4Address": "10.0.32.205", "IF0": { "PrefixV4": "24", "RNat": "True", "Port": "Mgt0", "FIPV4": "35.184.248.228" }, "IF1": { "PrefixV4": "32", "RNat": "True", "Port": "Ha0" }, "IF2": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt0" }, "IF3": { "PrefixV4": "32", "RNat": "True", "Port": "Pkt1" }, "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMEMXjfUCrKApRWcjEYshAVDNg6aIrrgOp/ckLk2bSPFa37BNoHr+SlxfvOUOm+C61CB6yp6Lou2lQWjBISoK5r+x8fLrPOJz9JDnmEwmmnk4EdbWB0ArZC9MdhNxYbaWCeQFIYBY4FwLIxSy1fyc6fZhQiPtqd05o08/9icwEbPM0EjeO7FHHMVLVBn7/LlDABcA4+O28/FF61HT3fJ1XZzXgg5MRURf/WcN0aZoKshsV+ZPiJZWg2lkKehXHnMDjnmPvjWgyMQsgs9KfZirg1PMw7O8G/oMfXHMICCkx3I8t8/6VK2WQvoilo4zn6LgpLIjBvc2mxJRCZqh3MgxT", "PeerCEName": "nodeB", "VIP_Pkt0_00": { "IP": "10.0.32.204", "IFName": "IF2" }, "PeerCEMgt0IPv4Prefix": "24", "HfeInstanceName": "cj-hfe", "HFE_IF3": { "IFName": "IF_HFE_PKT1", "FIPV4": "10.0.3.19" }, "secondaryIPList": { "Ha0": [], "Mgt0": [], "Pkt1": [ "10.0.48.37" ], "Pkt0": [ "10.0.32.204" ] }, "PeerCEPkt0IPv4Prefix": "32" }
The following steps are mandatory to configure the SBC for DNS call flows.
When an external DNS server is configured in the SBC for FQDN resolution, the resolv.conf file in the SBC is updated with the custom DNS server's IP address. As a result, the custom DNS server takes priority over the metadata namespace server, so all post-reboot metadata queries fail, the SSH keys for the instance are not copied from the metadata server into the authorized_keys file, and the machine becomes inaccessible.
To overcome this issue, add the following FQDN against its IP address in your custom DNS server as shown below, so that your metadata DNS requests succeed and the custom DNS server is able to resolve the Google metadata FQDN.
The following example is for an Ubuntu DNS instance. If you have any other OS, configure the DNS server accordingly.
Add the following zone definition in the named.conf file:
zone "google.internal" IN { type master; allow-query {any;}; file "google.internal.zone"; };
Open the folder containing all the zone files.
Create a new zone file google.internal.zone with the following entries:
# cat google.internal.zone
$TTL 1D
@   IN SOA ip-172-31-10-54.google.internal. root.google.internal. (
                2019109120  ; Serial number (yyyymmdd-num)
                8H          ; Refresh
                2M          ; Retry
                4W          ; Expire
                1D )        ; Minimum
    IN NS ip-172-31-10-54
as.ipv4          A     0.0.0.0
as.ipv6          AAAA  0::0
ip-172-31-10-54  A     <DNS server IP address>
metadata         IN A  169.254.169.254
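After reloading the DNS service, you can confirm the zone is well formed and that the metadata record resolves. A sketch using the standard BIND utilities (the server IP is a placeholder; expect the query to return 169.254.169.254):

# Check the zone file syntax
named-checkzone google.internal google.internal.zone
# Query the custom DNS server for the metadata FQDN
dig @<DNS server IP address> metadata.google.internal +short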