Prerequisites for SBC Creation

The following tasks must be completed before manually creating the SBCs:

For a Standalone configuration:

For an HA with HFE setup, these tasks must also be completed first:

Step-by-Step Manual Instance Creation

These are the steps for creating an SBC instance in the GCP console. Any extra configuration for an HFE environment setup is called out specifically. An equivalent gcloud command-line sketch appears at the end of this procedure.

  1. In the GCP console, go to Compute Engine > VM instances.
  2. Click Create to open the Create an instance page.

    Create an instance


  3. Enter a name in the Name field.
  4. Select the Region that contains the subnets that were created for the SBCs.
  5. Select an appropriate Zone.
  6. For Machine type, select an appropriate size (see Supported and Recommended Instance Sizes - SBC and HFE).
  7. In the Boot disk section, click Change to open the Boot disk panel.

    Boot disk

  8. Select Custom Images, then select the account containing the image (if not the current) and choose the SBC image.
  9. Select SSD persistent disk as the Boot disk type, with a disk size of at least 65 GB.
  10. Click Select.
  11. Under Identity and API access:
    1. If creating a Standalone setup: select Allow full access to all Cloud APIs.
    2. If creating an HA/HFE setup: select the service account created in GCP Service Account Permissions.
  12. Click Management, security, disks, networking, sole tenancy to expand the tabs.

    Expanded tabs

  13. Click Management.
  14. In the Metadata section, enter the user data with the Key set to "user-data". For more information, see User Data.
  15. Click Security.
  16. Click Block project-wide SSH keys.
  17. Enter a public SSH key for the linuxadmin user. For more information, see SSH Key Login Only.
  18. Click Networking.
  19. Set Network Tags:
    1. If HFE: add the instance tags for the PKT0 and PKT1 routes that were created during HFE node setup. See Google network routes.
    2. If Standalone: leave blank.

      Network Tags

  20. Create the SBC Network interfaces in this order:
    1. Mgmt:
      1. Select the VPC created for mgmt.
      2. Select the Subnetwork created for mgmt.
      3. Set the Primary internal IP to Ephemeral (Automatic).
      4. Set the External IP to one of the static External IPs created earlier.

        Network interfaces

    2. HA:
      1. Select the VPC created for HA.
      2. Select the Subnetwork created for HA.
      3. Set the Primary internal IP:
        • If HFE: set to one of the primary internal IPs created earlier.
        • If Standalone: set to Ephemeral (Automatic).
      4. Set the External IP to None.

        Network interfaces (continued)

    3. PKT0:
      1. Select the VPC created for PKT0 (this VPC is also used for nic3 on the HFE node).
      2. Select the Subnetwork created for PKT0 (also used for nic3 on the HFE node).
      3. Set the Primary internal IP to Ephemeral (Automatic).
      4. Add an alias IP:
        1. Click Show alias IP ranges
        2. Set Subnet range as Primary
        3. Set Alias IP range as /32

           Setting the 'Alias IP range' as '/32' assigns an available IPv4 address from the subnet.
      5. Set External IP:
        • If HFE: set to None.
        • If Standalone: set to one of the static External IPs created earlier.

          External IP



    4. PKT1:
      1. Select the VPC created for PKT1 (this VPC is also used for nic4 on the HFE node).
      2. Select the Subnetwork created for PKT1 (also used for nic4 on the HFE node).
      3. Set the Primary internal IP to Ephemeral (Automatic).
      4. Add an alias IP:
        1. Click Show alias IP ranges
        2. Set Subnet range as Primary
        3. Set Alias IP range as /32

           Setting the 'Alias IP range' as '/32' assigns an available IPv4 address from the subnet.
      5. Set External IP:
        • If HFE: set to None.
        • If Standalone: set to one of the static External IPs created earlier.

          PKT1 External IP

  21. If Standalone: Click CREATE.

    In an HFE environment, SBCs must be created manually in a particular order (outlined below) so that the SBC CEs can gather the necessary information about their peers.

  22. If HFE:
    1. In a new browser tab, repeat steps 1 - 21 for the Standby instance.
    2. Click CREATE for the Active instance.
    3. Click CREATE for the Standby instance.
    4. Stop and start the HFE node instance so that it can retrieve the SBC instances' information.

If the 'Active' instance is created and comes up before the 'Standby' instance is created, both instances will need to be restarted and the DBs will need to be cleared.

If you must restart an instance, do not use the GCP console Reset option. This can lead to disk corruption.
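For reference, the same instance can also be created from the command line. The following is a minimal sketch only, not a verified procedure: every name, IP, zone, machine type, tag, and file path below is an illustrative placeholder, and the flags should be adapted to match the console choices made in the procedure above.

# Minimal sketch: create an HFE-mode SBC instance with four NICs (mgmt, HA, PKT0, PKT1).
# All names, IPs, files, and the zone/machine type are illustrative placeholders.
gcloud compute instances create sbc-active \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --image=sbc-custom-image --image-project=my-project \
    --boot-disk-size=65GB --boot-disk-type=pd-ssd \
    --service-account=sbc-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform \
    --tags=hfe-pkt0-route,hfe-pkt1-route \
    --metadata=block-project-ssh-keys=TRUE \
    --metadata-from-file=user-data=user-data.json,ssh-keys=ssh-keys.txt \
    --network-interface=network=mgmt-vpc,subnet=mgmt-subnet,address=STATIC_MGMT_IP \
    --network-interface=network=ha-vpc,subnet=ha-subnet,no-address \
    --network-interface=network=pkt0-vpc,subnet=pkt0-subnet,aliases=/32,no-address \
    --network-interface=network=pkt1-vpc,subnet=pkt1-subnet,aliases=/32,no-address

# ssh-keys.txt holds the linuxadmin key in GCE metadata format, e.g.:
# linuxadmin:ssh-rsa AAAA... linuxadmin

# For HFE, after both SBC instances exist, restart the HFE node (step 22.4):
gcloud compute instances stop hfe-node --zone=us-central1-a
gcloud compute instances start hfe-node --zone=us-central1-a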

Public Cloud Security

To enhance the security of the SBC in the public cloud space, certain restrictions have been imposed.

SSH Key Login Only

By default, both the linuxadmin user (used for accessing the Linux shell) and the 'admin' user (the default user for accessing the SBC CLI) support SSH key login only. These SSH keys are entered via two methods:

  • For linuxadmin:
    • Within the instance creation, under the 'Security' tab, add the public SSH key in the form: <ssh-rsa ...> linuxadmin
    • This format tells cloud-init that the key is for linuxadmin.

      Example
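      For example, an entry might look like the following (the key body is truncated for illustration):

       ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC... linuxadmin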

  • For admin:
    • The public key is entered through the SBC user-data.
    • This is to keep consistency between public cloud platforms.

Public SSH keys can be retrieved on Linux using: ssh-keygen -y -f <privateKeyFile>.
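For example, a minimal sketch assuming the admin private key is stored at ~/.ssh/sbc-admin-key (an illustrative path):

# Derive the public key from an existing private key file
ssh-keygen -y -f ~/.ssh/sbc-admin-key > ~/.ssh/sbc-admin-key.pub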

Ribbon recommends that you use separate SSH keys for each user.

Gaining Heightened Permissions for linuxadmin

By default, very little can be accessed at the Linux level using linuxadmin. For simple debugging, the sbcDiagnostic command can be used to run sysdumps and check the application status.

When a valid SBX license is installed, linuxadmin gains full sudoers permissions.

User Data Format

To prevent unauthorized commands from being run on the SBC, the user data for SBC instances must be valid JSON. If any invalid JSON is found, the SBC immediately shuts down.

 

User Data

The user data must be specified as pure JSON. For example:

{
  "CERole" : "<<ACTIVE/STANDBY>>",
  "ReverseNatPkt0" : "True",
  "ReverseNatPkt1" : "True",
  "CEName" : "<<CE NAME>>",
  "SystemName" : "<<SYSTEM NAME>>",
  "PeerCEName" : "<<PEER CE NAME>>",
  "PeerCEHa0IPv4Address": "<<ETH1 PRIMARY IP ON PEER>>",
  "ClusterIp" : "<<ETH1 PRIMARY IP ON PEER>>",
  "SbcPersonalityType": "isbc",
  "SbcMgmtMode": "centralized",
  "Mgt0Prefix": "24",
  "ThirdPartyCpuAlloc" : "0",
  "ThirdPartyMemAlloc" : "0",
  "AdminSshKey" : "<<SSH KEY>>",
  "PeerInstanceName": "<<PEER INSTANCE GOOGLE NAME>>",
  "HfeInstanceName": "<<HFE INSTANCE GOOGLE NAME>>"
}
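Because invalid JSON causes the instance to shut down, it is worth validating the user data before use. A minimal check, assuming the user data is saved locally as user-data.json (an illustrative filename):

# Parses the file and prints the JSON on success; reports a parse error otherwise
python3 -m json.tool user-data.json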

 

The following describes all of the keys that may be required in the SBC user data. The "Required by" entry specifies which type of setup requires each key.

CERole
  Allowed values: ACTIVE/STANDBY
  Required by: HFE
  The role defined for the SBC instance. One instance must be configured as ACTIVE and the other as STANDBY.

ReverseNatPkt0
  Allowed values: True/False
  Required by: HFE
  Must be True for HFE.

ReverseNatPkt1
  Allowed values: True/False
  Required by: HFE
  Must be True for HFE.

CEName
  Allowed values: N/A
  Required by: Standalone and HFE
  The actual CE name of the SBC instance. For more information, see System and Instance Naming Conventions.
  CEName requirements:
    • Must start with an alphabetic character.
    • May contain only alphabetic characters and/or numbers. No special characters.
    • Cannot exceed 64 characters in length.

SystemName
  Allowed values: N/A
  Required by: Standalone and HFE
  The System Name of the SBC instances. For more information, see System and Instance Naming Conventions.
  SystemName requirements:
    • Must start with an alphabetic character.
    • May contain only alphabetic characters and/or numbers. No special characters.
    • Cannot exceed 26 characters in length.
    • Must be the same on both peer CEs.

PeerCEName
  Allowed values: N/A
  Required by: HFE
  The CEName value of the peer instance (must match the peer CE's CEName in its user data).

PeerCEHa0IPv4Address
  Allowed values: xxx.xxx.xxx.xxx
  Required by: HFE
  The private IPv4 address of the HA interface on the peer instance.

ClusterIp
  Allowed values: xxx.xxx.xxx.xxx
  Required by: HFE
  The private IPv4 address of the HA interface on the peer instance.

SbcPersonalityType
  Allowed values: isbc
  Required by: Standalone and HFE
  The name of the SBC personality type for this instance. At this time, only integrated SBC (isbc) is supported.

SbcMgmtMode
  Allowed values: centralized
  Required by: Standalone and HFE
  The mode in which the SBCs are managed. At this time, only centralized is supported.

Mgt0Prefix
  Allowed values: N/A
  Required by: Standalone and HFE
  The CIDR prefix for the Mgmt subnet.

ThirdPartyCpuAlloc
  Allowed values: 0-4
  Required by: N/A (optional)
  The number of CPUs to segregate for use with non-Ribbon applications.
  Restrictions:
    • 0-4 CPUs
    • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
    • The configuration must match between peer instances.

ThirdPartyMemAlloc
  Allowed values: 0-4096
  Required by: N/A (optional)
  The amount of memory (in MB) to segregate for use with non-Ribbon applications.
  Restrictions:
    • 0-4096 MB
    • Both ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured.
    • The configuration must match between peer instances.

AdminSshKey
  Allowed values: ssh-rsa ...
  Required by: Standalone and HFE
  The public SSH key used to access the admin user. See SSH Key Login Only.

PeerInstanceName
  Allowed values: N/A
  Required by: HFE
  The name of the peer instance in GCP. Note: this is not the CEName or the SystemName.

HfeInstanceName
  Allowed values: N/A
  Required by: HFE
  The name of the HFE instance in GCP.

 

CLI Configuration for PKT Ports

The PKT interfaces must be configured through the CLI. The required commands are outlined below:

# Configuring PKT0 interface
set addressContext default ipInterfaceGroup LIF1
commit
# Bind interface F1 to pkt0, using metavariables for the IP, prefix, and public IP
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 ceName <<CE Name of configured Active* from metavars>> portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 ipPublicVarV4 <<IF2.FIPV4 OR HFE_IF2.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 mode inService state enabled
commit
# Link route to the subnet gateway (/32 via 0.0.0.0)
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF1 F1 preference 100
commit
# Default route via the subnet gateway
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF1 F1 preference 100
commit
# Route to the local subnet via the gateway
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF1 F1 preference 100
commit

# Configuring PKT1 interface
set addressContext default ipInterfaceGroup LIF2
commit
# Bind interface F2 to pkt1, using metavariables for the IP, prefix, and public IP
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 ceName <<CE Name of configured Active* from metavars>> portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 ipPublicVarV4 <<IF3.FIPV4 OR HFE_IF3.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 mode inService state enabled
commit
# Link route to the subnet gateway (/32 via 0.0.0.0)
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF2 F2 preference 100
commit
# Default route via the subnet gateway
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF2 F2 preference 100
commit
# Route to the local subnet via the gateway
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF2 F2 preference 100
commit

* This is the CE that has ACTIVE as its CERole in the user data.

** If using HFE, use the HFE_IF*.FIPV4 metavariable. If Standalone use IF*.FIPV4.

 

The correct configuration should look like this:

admin@nodeA> show table addressContext default staticRoute
                                IP
                                INTERFACE  IP
DESTINATION                     GROUP      INTERFACE              CE
IP ADDRESS   PREFIX  NEXT HOP   NAME       NAME       PREFERENCE  NAME
------------------------------------------------------------------------
0.0.0.0      0       10.0.32.1  LIF1       F1         100         -
0.0.0.0      0       10.0.48.1  LIF2       F2         100         -
10.0.32.0    24      10.0.32.1  LIF1       F1         100         -
10.0.32.1    32      0.0.0.0    LIF1       F1         100         -
10.0.48.0    24      10.0.48.1  LIF2       F2         100         -
10.0.48.1    32      0.0.0.0    LIF2       F2         100         -
[ok][2019-08-05 10:26:34]
admin@nodeA> show table addressContext default ipInterfaceGroup

                                                                                                                                                                                         IP
                                       PORT  IP               ALT IP   ALT                        DRYUP             BW           VLAN             IP VAR    PREFIX VAR    IP PUBLIC      VA
NAME  IPSEC     NAME  CE NAME          NAME  ADDRESS  PREFIX  ADDRESS  PREFIX  MODE       ACTION  TIMEOUT  STATE    CONTINGENCY  TAG   BANDWIDTH  V4        V4            VAR V4         V6
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIF1  disabled  F1    nodeA-10.2.0.14  pkt0  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF2.IPV4  IF2.PrefixV4  HFE_IF2.FIPV4  -
LIF2  disabled  F2    nodeA-10.2.0.14  pkt1  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF3.IPV4  IF3.PrefixV4  HFE_IF3.FIPV4  -
[ok][2019-08-05 10:29:58]

 

Example SBC Instance Configurations

Below are sample SBC configurations. For more information on what these elements mean, see Metadata, Userdata and MetaVariable Formats on AWS (7.2S400).

SBC Meta Variable Table

Example Meta Variable table for HFE environment:

admin@nodeA> show table system metaVariable
CE NAME          NAME                  VALUE
--------------------------------------------------------
nodeA-10.2.0.14  IF0.GWV4              10.0.0.1
nodeA-10.2.0.14  IF0.IPV4              10.0.0.54
nodeA-10.2.0.14  IF0.Port              Mgt0
nodeA-10.2.0.14  IF0.RNat              True
nodeA-10.2.0.14  IF1.GWV4              10.2.0.1
nodeA-10.2.0.14  IF1.IPV4              10.2.0.14
nodeA-10.2.0.14  IF1.Port              Ha0
nodeA-10.2.0.14  IF1.RNat              True
nodeA-10.2.0.14  IF2.GWV4              10.0.32.1
nodeA-10.2.0.14  IF2.IPV4              10.0.32.204
nodeA-10.2.0.14  IF2.Port              Pkt0
nodeA-10.2.0.14  IF2.RNat              True
nodeA-10.2.0.14  IF3.GWV4              10.0.48.1
nodeA-10.2.0.14  IF3.IPV4              10.0.48.37
nodeA-10.2.0.14  IF3.Port              Pkt1
nodeA-10.2.0.14  IF3.RNat              True
nodeA-10.2.0.14  IF0.FIPV4             35.184.248.228
nodeA-10.2.0.14  IF0.PrefixV4          24
nodeA-10.2.0.14  IF1.PrefixV4          32
nodeA-10.2.0.14  IF2.PrefixV4          32
nodeA-10.2.0.14  IF3.PrefixV4          32
nodeA-10.2.0.14  HFE_IF2.FIPV4         34.68.87.53
nodeA-10.2.0.14  HFE_IF3.FIPV4         10.0.3.19
nodeA-10.2.0.14  HFE_IF2.IFName        IF_HFE_PKT0
nodeA-10.2.0.14  HFE_IF3.IFName        IF_HFE_PKT1
nodeA-10.2.0.14  secondaryIPList.Pkt0  ['10.0.32.204']
nodeA-10.2.0.14  secondaryIPList.Pkt1  ['10.0.48.37']
nodeB-10.2.0.15  IF0.GWV4              10.0.0.1
nodeB-10.2.0.15  IF0.IPV4              10.0.0.55
nodeB-10.2.0.15  IF0.Port              Mgt0
nodeB-10.2.0.15  IF0.RNat              True
nodeB-10.2.0.15  IF1.GWV4              10.2.0.1
nodeB-10.2.0.15  IF1.IPV4              10.2.0.15
nodeB-10.2.0.15  IF1.Port              Ha0
nodeB-10.2.0.15  IF1.RNat              True
nodeB-10.2.0.15  IF2.GWV4              10.0.32.1
nodeB-10.2.0.15  IF2.IPV4              10.0.32.204
nodeB-10.2.0.15  IF2.Port              Pkt0
nodeB-10.2.0.15  IF2.RNat              True
nodeB-10.2.0.15  IF3.GWV4              10.0.48.1
nodeB-10.2.0.15  IF3.IPV4              10.0.48.37
nodeB-10.2.0.15  IF3.Port              Pkt1
nodeB-10.2.0.15  IF3.RNat              True
nodeB-10.2.0.15  IF0.FIPV4             35.232.104.143
nodeB-10.2.0.15  IF0.PrefixV4          24
nodeB-10.2.0.15  IF1.PrefixV4          32
nodeB-10.2.0.15  IF2.PrefixV4          32
nodeB-10.2.0.15  IF3.PrefixV4          32
nodeB-10.2.0.15  HFE_IF2.FIPV4         34.68.87.53
nodeB-10.2.0.15  HFE_IF3.FIPV4         10.0.3.19
nodeB-10.2.0.15  HFE_IF2.IFName        IF_HFE_PKT0
nodeB-10.2.0.15  HFE_IF3.IFName        IF_HFE_PKT1
nodeB-10.2.0.15  secondaryIPList.Pkt0  ['10.0.32.206']
nodeB-10.2.0.15  secondaryIPList.Pkt1  ['10.0.48.39']
[ok][2019-08-02 09:24:54]

SBC Instance Data

Each SBC contains instance data, which comprises the data needed by the SBC application. The data can be found in /opt/sonus/conf/instanceLcaData.json.

{
    "secondaryIPListMgt0": [],
    "Mgt0IPv4Prefix": "24",
    "VIP_Pkt1_00": {
        "IP": "10.0.48.37",
        "IFName": "IF3"
    },
    "Ha0IPv4Prefix": "32",
    "PeerCEMgt0IPv4Address": "10.0.0.55",
    "SystemName": "GCEHA",
    "PeerInstanceName": "cj-standby",
    "ThirdPartyCpuAlloc": "0",
    "PeerCEHa0IPv4Prefix": "32",
    "Mgt0Prefix": "24",
    "ThirdPartyMemAlloc": "0",
    "SbcPersonalityType": "isbc",
    "PeerCEHa0IPv4Address": "10.2.0.15",
    "CEName": "nodeA",
    "ClusterIp": "10.2.0.15",
    "HFE_IF2": {
        "IFName": "IF_HFE_PKT0",
        "FIPV4": "34.68.87.53"
    },
    "secondaryIPListHa0": [],
    "PeerCEPkt1IPv4Prefix": "32",
    "instanceName": "cj-active",
    "Pkt1IPv4Prefix": "32",
    "CERole": "ACTIVE",
    "secondaryIPListPkt1": [
        "10.0.48.39"
    ],
    "secondaryIPListPkt0": [
        "10.0.32.206"
    ],
    "ReverseNatPkt0": "True",
    "ReverseNatPkt1": "True",
    "Pkt0IPv4Prefix": "32",
    "PeerCEPkt1IPv4Address": "10.0.48.40",
    "zone": "projects/626129518018/zones/us-central1-a",
    "SbcMgmtMode": "centralized",
    "PeerCEPkt0IPv4Address": "10.0.32.205",
    "IF0": {
        "PrefixV4": "24",
        "RNat": "True",
        "Port": "Mgt0",
        "FIPV4": "35.184.248.228"
    },
    "IF1": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Ha0"
    },
    "IF2": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Pkt0"
    },
    "IF3": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Pkt1"
    },
    "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMEMXjfUCrKApRWcjEYshAVDNg6aIrrgOp/ckLk2bSPFa37BNoHr+SlxfvOUOm+C61CB6yp6Lou2lQWjBISoK5r+x8fLrPOJz9JDnmEwmmnk4EdbWB0ArZC9MdhNxYbaWCeQFIYBY4FwLIxSy1fyc6fZhQiPtqd05o08/9icwEbPM0EjeO7FHHMVLVBn7/LlDABcA4+O28/FF61HT3fJ1XZzXgg5MRURf/WcN0aZoKshsV+ZPiJZWg2lkKehXHnMDjnmPvjWgyMQsgs9KfZirg1PMw7O8G/oMfXHMICCkx3I8t8/6VK2WQvoilo4zn6LgpLIjBvc2mxJRCZqh3MgxT",
    "PeerCEName": "nodeB",
    "VIP_Pkt0_00": {
        "IP": "10.0.32.204",
        "IFName": "IF2"
    },
    "PeerCEMgt0IPv4Prefix": "24",
    "HfeInstanceName": "cj-hfe",
    "HFE_IF3": {
        "IFName": "IF_HFE_PKT1",
        "FIPV4": "10.0.3.19"
    },
    "secondaryIPList": {
        "Ha0": [],
        "Mgt0": [],
        "Pkt1": [
            "10.0.48.37"
        ],
        "Pkt0": [
            "10.0.32.204"
        ]
    },
    "PeerCEPkt0IPv4Prefix": "32"
}

Configuring the SBC for DNS Call Flows

 

The following steps are mandatory to configure the SBC for DNS call flows.

 

When an external DNS server is configured in the SBC for FQDN resolution, the resolv.conf file in the SBC is updated with the custom DNS server's IP address. As a result, the custom DNS server takes priority over the metadata name server. Therefore, all post-reboot metadata queries fail, the SSH keys for the instance are not copied from the metadata server into the authorized_keys file, and the machine becomes inaccessible.

To overcome this issue, add the following FQDN and its IP address to your custom DNS server as shown below, so that metadata DNS requests succeed and the custom DNS server can resolve the Google metadata FQDN.

 

The following example is for an Ubuntu DNS instance. If you have any other OS, configure the DNS server accordingly.

 

  1. In the named.conf file:

    zone "google.internal" IN {
         type master;
         allow-query {any;};
         file "google.internal.zone";
    };

    Open the folder containing all the zone files.

  2. Create a new zone file google.internal.zone with the following entries:

    # cat google.internal.zone
    
    $TTL 1D
    @ IN SOA ip-172-31-10-54.google.internal.  root.google.internal. (
         2019109120 ; Serial number (yyyymmdd-num)
         8H         ; Refresh
         2M         ; Retry
         4W         ; Expire
         1D )       ; Minimum
         IN NS ip-172-31-10-54
    as.ipv4                   A                         0.0.0.0
    as.ipv6                   AAAA                      0::0
    ip-172-31-10-54           A                         <DNS server IP address>
    
    metadata IN  A 169.254.169.254
  3. Reload the DNS service by typing rndc reload.
  4. Your FQDN will be metadata.google.internal, which will resolve to the IP address 169.254.169.254.
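To confirm the record is in place, you can query the custom DNS server directly. A minimal check, where 10.0.0.100 stands in for your custom DNS server's IP address:

# Query the custom DNS server for the metadata FQDN
dig @10.0.0.100 metadata.google.internal +short
# Expected output: 169.254.169.254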