Prerequisites for SBC Creation

Ensure the following before creating the SBCs:

  • Four VPCs (each containing a subnet) are created for the mgmt, HA, PKT0, and PKT1 interfaces. For more information, refer to Configure VPC Networks.
  • For each SBC instance, a Static External IP address is reserved. For more information, refer to the section "Reserve Static External IP Addresses" of the page Configure VPC Networks.
  • Delete any "sshKeys" or "ssh-key" field in the global Metadata. For more information, see SSH Key Login Only.

For a Standalone configuration:

  • A Static External IP address is reserved for PKT0 and PKT1. For more information, refer to the section "Reserve Static External IP Addresses" of the page Configure VPC Networks.

Ensure the following before creating an HA pair with HFE setup:

  • Two static internal IP addresses are created in the subnet used for the HA interfaces (one for each SBC). For more information, refer to the section "Reserve Static External IP Addresses" of the page Configure VPC Networks. A command-line sketch of these reservations follows this list.
  • A service account to be used for the SBCs and HFE nodes is created. For more information, refer to GCP Service Account Permissions.
  • The HFE node instance is created. For more information, refer to Configure HFE Nodes in GCP.
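
The IP address reservations above are normally made in the GCP console as described in Configure VPC Networks. For reference only, the following gcloud sketch shows equivalent commands; every name, region, subnet, and address below is a placeholder, not a value from this guide.

Code Block
# Reserve a static external IP address (regional) for an SBC interface.
gcloud compute addresses create sbc1-mgmt-ext-ip --region=us-central1

# Reserve a static internal IP address in the HA subnet for one SBC.
gcloud compute addresses create sbc1-ha-int-ip \
    --region=us-central1 --subnet=ha-subnet --addresses=10.2.0.14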

Manual SBC Instance Creation

To create an SBC instance, follow the steps below. Any additional configuration specific to an HFE environment is called out explicitly. A command-line sketch of an equivalent creation appears at the end of this procedure.

  1. In the GCP console, go to Compute Engine > VM instances.
  2. Click Create to open the Create an instance page.

    Figure: Create an instance


  3. Enter a name in the Name field.
  4. Select the Region that contains the subnets created for the SBCs.
  5. Select an appropriate Zone.
  6. For Machine type, select an appropriate size. For more information, refer to Instance Types Supported for SBC SWe in GCP.
  7. Select the Boot disk option, and click Change to open the Boot disk panel.

    Figure: Boot disk


  8. Select Custom Images.
  9. Select the account containing the image (if it is not the current account), and then select the SBC image.
  10. Select SSD persistent disk as the Boot disk type, with a disk size of at least 65 GB.
  11. Click Select.
  12. Under Identity and API access:
    1. To create a Standalone setup, click Allow full access to all Cloud APIs.
    2. To create an HA/HFE setup, select the service account created in GCP Service Account Permissions.
  13. Click Management, security, disks, networking, sole tenancy to expand the tabs.

    Figure: Expanded tabs


  14. Click Management.
  15. In the Metadata section, enter the user data with Key set as "user-data". For more information, see User Data.
  16. Click Security.
  17. Click Block project-wide SSH keys.
  18. Enter a public SSH key for the linuxadmin user. For more information, see SSH Key Login Only.
  19. Click Networking.
  20. Set Network Tags:
    1. For HFE, add the instance tags for the PKT0 and PKT1 routes (created during the HFE node setup). For more information, see the section "Google Network Routes" of the page Configure HFE Nodes in GCP.
    2. For Standalone, leave it blank.

      Figure: Network Tags


  21. Create the SBC Network interfaces in the following order:
    1. Mgmt:
      1. Select the VPC created for mgmt.
      2. Select the Subnetwork created for mgmt.
      3. Set the Primary internal IP as "Ephemeral (Automatic)".
      4. Set External IP as one of the static External IPs created earlier.

        Figure: Network interfaces


    2.  HA:
      1. Select the VPC created for HA.
      2. Select the Subnetwork created for HA.
      3. Set the Primary internal IP:
        • For HFE, set as one of the static internal IPs created earlier.
        • For Standalone, set as "Ephemeral (Automatic)".
      4. Set the External IP as "None".

        Figure: Network interfaces (continued)


    3.  PKT0:
      1. Select the VPC created for PKT0 (the same VPC used for nic3 on the HFE node).
      2. Select the Subnetwork created for PKT0 (the same subnetwork used for nic3 on the HFE node).
      3. Set the Primary internal IP as "Ephemeral (Automatic)".
      4. Add an alias IP:
        1. Click Show alias IP ranges
        2. Set the Subnet range as "Primary"
        3. Set Alias IP range as "/32".

          Info
          titleNote
          Setting the Alias IP range as "/32" assigns an available IPv4 address from the subnet.


      5. Set External IP:
        • For HFE, set as "None".
        • For Standalone, set as one of the static External IPs created earlier.

          Figure: External IP


    4.  PKT1:
      1. Select the VPC created for PKT1 (the same VPC used for nic4 on the HFE node).
      2. Select the Subnetwork created for PKT1 (the same subnetwork used for nic4 on the HFE node).
      3. Set the Primary internal IP as "Ephemeral (Automatic)".
      4. Add an alias IP:
        1. Click Show alias IP ranges.
        2. Set Subnet range as "Primary".
        3. Set Alias IP range as "/32".

          Info
          titleNote
           Setting the Alias IP range as "/32" assigns an available IPv4 address from the subnet.


      5. Set External IP:
        • For HFE, set as "None".
        • For Standalone, set as one of the static External IPs created earlier.

          Figure: PKT1 External IP


          Info
          titleNote

          You must create both SBC and HFE VMs within seconds of each other; otherwise, the application will fail to start and then require rebooting.

          If the HFE node(s) already exist when the SBC CREATE commands are run, simply reboot the HFE node(s) to make them work.


  22. For Standalone, click CREATE.

    Warning
    titleWarning

    In an HFE environment, ensure that the SBCs are created manually exactly as described below; otherwise, the SBC CEs cannot collect the necessary information about their peers.


  23. For HFE:
    1. Repeat steps 1 - 21 for the Standby instance in a new browser tab.
    2. Click CREATE for the Active instance.
    3. Click CREATE for the Standby instance.
    4. Stop and start the HFE node instance to allow it to retrieve the SBC instances' information.
Info
titleNote

If the Active instance comes up before the Standby instance is created, restart both instances and clear the databases.


Warning
titleWarning

To avoid disk corruption, do not use the GCP console Reset option to restart an instance.
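
For reference, the console procedure above can also be expressed as a gcloud command. The following is an illustrative sketch only, not a substitute for the exact steps and ordering described above; every name, zone, machine type, image, subnet, address, and tag is a placeholder, the user-data file is the JSON described in the User Data section below, and the sketch reflects an HFE-style instance (no external IPs on PKT0/PKT1).

Code Block
# Illustrative sketch only -- all values are placeholders.
# Interfaces are listed in the order mgmt, HA, PKT0, PKT1, matching the console steps.
gcloud compute instances create vsbc1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --image=sbc-swe-image --image-project=my-image-project \
    --boot-disk-type=pd-ssd --boot-disk-size=65GB \
    --service-account=sbc-hfe-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform \
    --tags=hfe-pkt0-route,hfe-pkt1-route \
    --metadata-from-file=user-data=vsbc1-user-data.json \
    --network-interface=subnet=mgmt-subnet,address=sbc1-mgmt-ext-ip \
    --network-interface=subnet=ha-subnet,private-network-ip=10.2.0.14,no-address \
    --network-interface=subnet=pkt0-subnet,aliases=/32,no-address \
    --network-interface=subnet=pkt1-subnet,aliases=/32,no-address
# (The public SSH key for linuxadmin is added separately, as described in SSH Key Login Only.)

# For an HFE setup, after both SBC instances exist, stop and start the HFE node
# so that it retrieves the SBC instance information (do not use Reset).
gcloud compute instances stop hfe-node --zone=us-central1-a
gcloud compute instances start hfe-node --zone=us-central1-a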

Public Cloud Security

To enhance the security of the SBC in the public cloud space, certain restrictions are imposed.

SSH Key Login Only

By default, only the linuxadmin user (used for accessing the Linux shell) and the admin user (the default user for accessing the SBC CLI) support SSH key login. The SSH keys are entered using the following methods:

  • For linuxadmin:
    • During instance creation, under the 'Security' tab, add the public SSH key in the form: <ssh-rsa ...> linuxadmin.
    • This format informs cloud-init that the key is for linuxadmin.

      Figure: Example


  • For admin:
    • The public key is entered using the SBC user-data.
    • This maintains consistency across public cloud platforms.

Info
titleNote

You can retrieve the Public SSH keys on Linux by executing the following command: ssh-keygen -y -f <privateKeyFile>.


Info
titleNote

Ribbon recommends using separate SSH keys for every user.
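
For example, a dedicated key pair for the linuxadmin user can be generated and its public key recovered as follows. This is only an illustration; the file name is a placeholder and the options shown are standard OpenSSH options.

Code Block
# Generate a dedicated key pair for linuxadmin (the file name is a placeholder).
ssh-keygen -t rsa -b 2048 -f linuxadmin_key -C linuxadmin

# Recover the public key from an existing private key, as noted above.
ssh-keygen -y -f linuxadmin_key

# The key pasted into the Security tab must end with "linuxadmin", for example:
# ssh-rsa AAAAB3Nza... linuxadmin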

Increased Permissions for linuxadmin

By default, the linuxadmin user has very limited access at the OS level. For simple debugging, use the sbcDiagnostic command to run sysdumps and check the application status.

When a valid SBX license is installed, linuxadmin gains full sudo permissions.

User Data Format

To prevent execution of unauthorized commands on the SBC, only user-data in valid JSON format is allowed. If the SBC detects user-data in an invalid JSON format, it shuts down immediately.
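
Because invalid JSON causes an immediate shutdown, it is worth validating the user-data locally before adding it to the instance metadata. The following check is only a suggestion; the file name is a placeholder and python3 is assumed to be available on your workstation.

Code Block
# Prints an error and exits non-zero if the user-data is not valid JSON.
python3 -m json.tool vsbc1-user-data.json > /dev/null && echo "valid JSON"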

User Data

Specify the user data in pure JSON format.

Info
titleNote

The example below is for HFE 2.0.


Code Block
{
 "CERole" : "ACTIVE",
 "ReverseNatPkt0" : "True",
 "ReverseNatPkt1" : "True",
 "CEName" : "vsbc1",
 "SystemName" : "vsbcSystem",
 "PeerCEName" : "vsbc2",
 "PeerCEHa0IPv4Address": "10.54.25.180",
 "ClusterIp" : "10.54.25.180",
 "SbcPersonalityType": "isbc",
 "SbcHaMode": "1to1",
 "Mgt0Prefix": "24",
 "ThirdPartyCpuAlloc" : "0",
 "ThirdPartyMemAlloc" : "0" ,
 "AdminSshKey" : "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCJnrFMr/RXJD3rVLMLdkJBYau+lWQ+F55Xj+KjunVBtw/zXURV38QIQ1zCw/GDO2CZTSyehUeiV0pi2moUs0ZiK6/TdWTzcOP3RCUhNI26sBFv/Tk5MdaojSqUc2NMpS/c1ESCmaUMBv4F7PfeHt0f3PqpUsxvKeNQQuEZyXjFEwAUdbkCMEptgaroYwuEz4SpFCfNBh0obUSoX5FNiNO/OyXcR8poVH0UhFim0Rdneo7VEH5FeqdkdGyZcTFs7A7aWpBRY3N8KUwklmNSWdDZ9//epEwgaF3m5U7XMd4M9zHURF1uQ/Nc+aiyVId9Mje2EU+nh6npaw/tEOPUiC1v",
 "PeerInstanceName": "iac1-single-sbc1-single-hfe-single-sbc2",
 "HfeInstanceName": "iac1-single-sbc1-single-hfe-single-sbc-hfe"
}
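
For comparison, the following is a minimal Standalone user-data sketch that uses only the keys marked as required for Standalone in the list further below; every value shown is a placeholder.

Code Block
{
 "CEName" : "vsbc1",
 "SystemName" : "vsbcSystem",
 "SbcPersonalityType" : "isbc",
 "Mgt0Prefix" : "24",
 "AdminSshKey" : "ssh-rsa AAAA... (public key for the admin user)"
}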


The following list describes all of the keys used in the SBC user-data. The "Required by" entry specifies which type of setup requires the key.

CERole
  • Allowed values: ACTIVE/STANDBY
  • Required by: HFE
  • Description: The role defined for the SBC instance. One instance must be configured as ACTIVE and the other as STANDBY.

ReverseNatPkt0
  • Allowed values: True/False
  • Required by: HFE
  • Description: Must be set to True for HFE.

ReverseNatPkt1
  • Allowed values: True/False
  • Required by: HFE
  • Description: Must be set to True for HFE.

CEName
  • Allowed values: N/A
  • Required by: Standalone and HFE
  • Description: The actual CE name of the SBC instance. For more information, refer to System and Instance Naming in SBC SWe N:1 and Cloud-Based Systems.
  • CEName requirements:
    • Must start with an alphabetic character.
    • May contain only alphabetic characters and/or numbers; no special characters.
    • Cannot exceed 64 characters in length.

SystemName
  • Allowed values: N/A
  • Required by: Standalone and HFE
  • Description: The System Name of the SBC instances. For more information, refer to System and Instance Naming in SBC SWe N:1 and Cloud-Based Systems.
  • SystemName requirements:
    • Must start with an alphabetic character.
    • May contain only alphabetic characters and/or numbers; no special characters.
    • Cannot exceed 26 characters in length.
    • Must be the same on both peer CEs.

PeerCEName
  • Allowed values: N/A
  • Required by: HFE
  • Description: The CEName of the peer instance (ensure it matches the peer CE's CEName in its user-data).

PeerCEHa0IPv4Address
  • Allowed values: xxx.xxx.xxx.xxx
  • Required by: HFE
  • Description: Private IPv4 address of the HA interface on the peer instance.

ClusterIp
  • Allowed values: xxx.xxx.xxx.xxx
  • Required by: HFE
  • Description: Private IPv4 address of the HA interface on the peer instance.

SbcPersonalityType
  • Allowed values: isbc
  • Required by: Standalone and HFE
  • Description: The name of the SBC personality type for this instance. Currently, Ribbon supports only I-SBC.

SbcHaMode
  • Allowed values: 1to1
  • Required by: HA
  • Description: The mode of SBC management.

Mgt0Prefix
  • Allowed values: N/A
  • Required by: Standalone and HFE
  • Description: The CIDR prefix for the Mgmt subnet.

ThirdPartyCpuAlloc
  • Allowed values: 0-4
  • Required by: N/A
  • Description: Number of CPUs allocated to non-Ribbon applications. This key is optional.
  • Restrictions:
    • 0-4 CPUs.
    • ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured together.
    • The configuration must match between peer instances.

ThirdPartyMemAlloc
  • Allowed values: 0-4096
  • Required by: N/A
  • Description: Amount of memory (in MB) allocated to non-Ribbon applications. This key is optional.
  • Restrictions:
    • 0-4096 MB.
    • ThirdPartyCpuAlloc and ThirdPartyMemAlloc must be configured together.
    • The configuration must match between peer instances.

AdminSshKey
  • Allowed values: ssh-rsa ...
  • Required by: Standalone and HFE
  • Description: Public SSH key used to access the admin user. See SSH Key Login Only.

PeerInstanceName
  • Allowed values: N/A
  • Required by: HFE
  • Description: The name of the peer instance in GCP. Note that this is not the CEName or the SystemName.

HfeInstanceName
  • Allowed values: N/A
  • Required by: HFE 2.0
  • Description: The name of the HFE instance in GCP; use only for HFE 2.0 (single HFE node)*.


Info
titleNote

* For more information, refer to the section "HFE 2.0" of the page Configure HFE Nodes in GCP.

** For more information, refer to the section "HFE 2.1" of the page Configure HFE Nodes in GCP.


CLI Configuration for Configuring PKT Ports

Configure the PKT interfaces using the following command examples: 

Warning
titleWarning

You must create three static routes per packet interface, as shown in the example below.


Code Block
# Configuring PKT0 interface
set addressContext default ipInterfaceGroup LIF1
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 ceName <<CE Name of configured Active* from metavars>> portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4 ipPublicVarV4 <<IF2.FIPV4 OR HFE_IF2.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF1 ipInterface F1 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF1 F1 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF1 F1 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF1 F1 preference 100
commit
 
# Configuring PKT1 interface
set addressContext default ipInterfaceGroup LIF2
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 ceName <<CE Name of configured Active* from metavars>> portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4 ipPublicVarV4 <<IF3.FIPV4 OR HFE_IF3.FIPV4 **>>
commit
set addressContext default ipInterfaceGroup LIF2 ipInterface F2 mode inService state enabled
commit
set addressContext default staticRoute <<subnet gateway>> 32 0.0.0.0 LIF2 F2 preference 100
commit
set addressContext default staticRoute 0.0.0.0 0 <<subnet gateway>> LIF2 F2 preference 100
commit
set addressContext default staticRoute <<subnet IP>> <<subnet prefix>> <<subnet gateway>> LIF2 F2 preference 100
commit


Info
titleNote

* This refers to the instance that has ACTIVE as its CERole in the user data.

** If using HFE, use the HFE_IF*.FIPV4 metavariable. For Standalone, use IF*.FIPV4.


The correct configuration is similar to the following:

Code Block
admin@nodeA> show table addressContext default staticRoute
                                IP
                                INTERFACE  IP
DESTINATION                     GROUP      INTERFACE              CE
IP ADDRESS   PREFIX  NEXT HOP   NAME       NAME       PREFERENCE  NAME
------------------------------------------------------------------------
0.0.0.0      0       10.0.32.1  LIF1       F1         100         -
0.0.0.0      0       10.0.48.1  LIF2       F2         100         -
10.0.32.0    24      10.0.32.1  LIF1       F1         100         -
10.0.32.1    32      0.0.0.0    LIF1       F1         100         -
10.0.48.0    24      10.0.48.1  LIF2       F2         100         -
10.0.48.1    32      0.0.0.0    LIF2       F2         100         -
[ok][2019-08-05 10:26:34]


admin@nodeA> show table addressContext default ipInterfaceGroup
                                                                                                                                                                                        IP
                                       PORT  IP               ALT IP   ALT                        DRYUP             BW           VLAN             IP VAR    PREFIX VAR    IP PUBLIC      VA
NAME  IPSEC     NAME  CE NAME          NAME  ADDRESS  PREFIX  ADDRESS  PREFIX  MODE       ACTION  TIMEOUT  STATE    CONTINGENCY  TAG   BANDWIDTH  V4        V4            VAR V4         V6
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LIF1  disabled  F1    nodeA-10.2.0.14  pkt0  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF2.IPV4  IF2.PrefixV4  HFE_IF2.FIPV4  -
LIF2  disabled  F2    nodeA-10.2.0.14  pkt1  -        -       -        -       inService  dryUp   60       enabled  0            -     0          IF3.IPV4  IF3.PrefixV4  HFE_IF3.FIPV4  -
[ok][2019-08-05 10:29:58]


Example SBC Instance Configurations

Sample SBC configurations are provided below. For more information, refer to Metadata and Userdata Formats on AWS.

SBC Meta Variable Table

Example Meta Variable table for HFE environment:

Code Block
admin@nodeA> show table system metaVariable
CE NAME          NAME                  VALUE
--------------------------------------------------------
nodeA-10.2.0.14  IF0.GWV4              10.0.0.1
nodeA-10.2.0.14  IF0.IPV4              10.0.0.54
nodeA-10.2.0.14  IF0.Port              Mgt0
nodeA-10.2.0.14  IF0.RNat              True
nodeA-10.2.0.14  IF1.GWV4              10.2.0.1
nodeA-10.2.0.14  IF1.IPV4              10.2.0.14
nodeA-10.2.0.14  IF1.Port              Ha0
nodeA-10.2.0.14  IF1.RNat              True
nodeA-10.2.0.14  IF2.GWV4              10.0.32.1
nodeA-10.2.0.14  IF2.IPV4              10.0.32.204
nodeA-10.2.0.14  IF2.Port              Pkt0
nodeA-10.2.0.14  IF2.RNat              True
nodeA-10.2.0.14  IF3.GWV4              10.0.48.1
nodeA-10.2.0.14  IF3.IPV4              10.0.48.37
nodeA-10.2.0.14  IF3.Port              Pkt1
nodeA-10.2.0.14  IF3.RNat              True
nodeA-10.2.0.14  IF0.FIPV4             35.184.248.228
nodeA-10.2.0.14  IF0.PrefixV4          24
nodeA-10.2.0.14  IF1.PrefixV4          32
nodeA-10.2.0.14  IF2.PrefixV4          32
nodeA-10.2.0.14  IF3.PrefixV4          32
nodeA-10.2.0.14  HFE_IF2.FIPV4         34.68.87.53
nodeA-10.2.0.14  HFE_IF3.FIPV4         10.0.3.19
nodeA-10.2.0.14  HFE_IF2.IFName        IF_HFE_PKT0
nodeA-10.2.0.14  HFE_IF3.IFName        IF_HFE_PKT1
nodeA-10.2.0.14  secondaryIPList.Pkt0  ['10.0.32.204']
nodeA-10.2.0.14  secondaryIPList.Pkt1  ['10.0.48.37']
nodeB-10.2.0.15  IF0.GWV4              10.0.0.1
nodeB-10.2.0.15  IF0.IPV4              10.0.0.55
nodeB-10.2.0.15  IF0.Port              Mgt0
nodeB-10.2.0.15  IF0.RNat              True
nodeB-10.2.0.15  IF1.GWV4              10.2.0.1
nodeB-10.2.0.15  IF1.IPV4              10.2.0.15
nodeB-10.2.0.15  IF1.Port              Ha0
nodeB-10.2.0.15  IF1.RNat              True
nodeB-10.2.0.15  IF2.GWV4              10.0.32.1
nodeB-10.2.0.15  IF2.IPV4              10.0.32.204
nodeB-10.2.0.15  IF2.Port              Pkt0
nodeB-10.2.0.15  IF2.RNat              True
nodeB-10.2.0.15  IF3.GWV4              10.0.48.1
nodeB-10.2.0.15  IF3.IPV4              10.0.48.37
nodeB-10.2.0.15  IF3.Port              Pkt1
nodeB-10.2.0.15  IF3.RNat              True
nodeB-10.2.0.15  IF0.FIPV4             35.232.104.143
nodeB-10.2.0.15  IF0.PrefixV4          24
nodeB-10.2.0.15  IF1.PrefixV4          32
nodeB-10.2.0.15  IF2.PrefixV4          32
nodeB-10.2.0.15  IF3.PrefixV4          32
nodeB-10.2.0.15  HFE_IF2.FIPV4         34.68.87.53
nodeB-10.2.0.15  HFE_IF3.FIPV4         10.0.3.19
nodeB-10.2.0.15  HFE_IF2.IFName        IF_HFE_PKT0
nodeB-10.2.0.15  HFE_IF3.IFName        IF_HFE_PKT1
nodeB-10.2.0.15  secondaryIPList.Pkt0  ['10.0.32.206']
nodeB-10.2.0.15  secondaryIPList.Pkt1  ['10.0.48.39']
[ok][2019-08-02 09:24:54]

SBC Instance Data

Each SBC contains instance data, which is available in the file /opt/sonus/conf/instanceLcaData.json.

Code Block
{
    "secondaryIPListMgt0": [],
    "Mgt0IPv4Prefix": "24",
    "VIP_Pkt1_00": {
        "IP": "10.0.48.37",
        "IFName": "IF3"
    },
    "Ha0IPv4Prefix": "32",
    "PeerCEMgt0IPv4Address": "10.0.0.55",
    "SystemName": "GCEHA",
    "PeerInstanceName": "cj-standby",
    "ThirdPartyCpuAlloc": "0",
    "PeerCEHa0IPv4Prefix": "32",
    "Mgt0Prefix": "24",
    "ThirdPartyMemAlloc": "0",
    "SbcPersonalityType": "isbc",
    "PeerCEHa0IPv4Address": "10.2.0.15",
    "CEName": "nodeA",
    "ClusterIp": "10.2.0.15",
    "HFE_IF2": {
        "IFName": "IF_HFE_PKT0",
        "FIPV4": "34.68.87.53"
    },
    "secondaryIPListHa0": [],
    "PeerCEPkt1IPv4Prefix": "32",
    "instanceName": "cj-active",
    "Pkt1IPv4Prefix": "32",
    "CERole": "ACTIVE",
    "secondaryIPListPkt1": [
        "10.0.48.39"
    ],
    "secondaryIPListPkt0": [
        "10.0.32.206"
    ],
    "ReverseNatPkt0": "True",
    "ReverseNatPkt1": "True",
    "Pkt0IPv4Prefix": "32",
    "PeerCEPkt1IPv4Address": "10.0.48.40",
    "zone": "projects/626129518018/zones/us-central1-a",
    "SbcMgmtMode": "centralized",
    "PeerCEPkt0IPv4Address": "10.0.32.205",
    "IF0": {
        "PrefixV4": "24",
        "RNat": "True",
        "Port": "Mgt0",
        "FIPV4": "35.184.248.228"
    },
    "IF1": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Ha0"
    },
    "IF2": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Pkt0"
    },
    "IF3": {
        "PrefixV4": "32",
        "RNat": "True",
        "Port": "Pkt1"
    },
    "AdminSshKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMEMXjfUCrKApRWcjEYshAVDNg6aIrrgOp/ckLk2bSPFa37
     BNoHr+SlxfvOUOm+C61CB6yp6Lou2lQWjBISoK5rx8fLrPOJz9JDnmEwmmnk4EdbWB0ArZC9MdhNxYbaWCeQFIYBY4FwLIxSy1
     fyc6fZhQiPtqd05o08/9icwEbPM0EjeO7FHHMVLVBn7/LlDABcA4+O28/FF61HT3fJ1XZzXgg5MRURf/WcN0aZoKshsVZPiJZWg
     2lkKehXHnMDjnmPvjWgyMQsgs9KfZirg1PMw7O8G/oMfXHMICCkx3I8t8/6VK2WQvoilo4zn6LgpLIjBvc2mxJRCZqh3MgxT",

    "PeerCEName": "nodeB",
    "VIP_Pkt0_00": {
        "IP": "10.0.32.204",
        "IFName": "IF2"
    },
    "PeerCEMgt0IPv4Prefix": "24",
    "HfeInstanceName": "cj-hfe",
    "HFE_IF3": {
        "IFName": "IF_HFE_PKT1",
        "FIPV4": "10.0.3.19"
    },
    "secondaryIPList": {
        "Ha0": [],
        "Mgt0": [],
        "Pkt1": [
            "10.0.48.37"
        ],
        "Pkt0": [
            "10.0.32.204"
        ]
    },
    "PeerCEPkt0IPv4Prefix": "32"
}


Info
titleNote

The AdminSshKey value in the example above is actually one continuous line; it is split across multiple lines here only for display purposes.
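
To check individual values from the instance data on a running system, the file can be queried from the Linux shell. The following one-liner is only a suggestion; it assumes python3 is available on the instance, and the field names are those shown in the example above.

Code Block
# Print the CE role and the HFE PKT0 public IP from the instance data.
python3 -c 'import json; d = json.load(open("/opt/sonus/conf/instanceLcaData.json")); print(d["CERole"], d["HFE_IF2"]["FIPV4"])'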


Configure the SBC for DNS Call Flows

Info
titleNote

The following steps are mandatory to configure the SBC for DNS call flows.


When an external DNS server is configured in the SBC for FQDN resolution, the resolv.conf file on the SBC is updated with the custom DNS server's IP address. This gives the custom DNS server priority over the metadata name server. As a result, all post-reboot metadata queries fail, the SSH keys for the instance are not copied from the metadata server into the authorized_keys file, and the machine becomes inaccessible.

Add the following FQDN and its IP address to your custom DNS server so that metadata DNS requests succeed and the custom DNS server can resolve the Google metadata FQDN.

Info
titleNote

The following example is for an Ubuntu DNS instance. For any other OS, configure the DNS server accordingly.


  1. Add the following zone definition to the named.conf file:

    Code Block
    zone "google.internal" IN {
         type master;
         allow-query {any;};
         file "google.internal.zone";
    };

    Then open the directory that contains the zone files.

  2. Create a new zone file google.internal.zone with the following entries:

    Code Block
    # cat google.internal.zone
    
    $TTL 1D
    @ IN SOA ip-172-31-10-54.google.internal.  root.google.internal. (
    2019109120 ; Serial number (yyyymmdd-num)
    8H ; Refresh
    2M ; Retry
    4W ; Expire
    1D ) ; Minimum
         IN NS ip-172-31-10-54
    as.ipv4                   A                         0.0.0.0
    as.ipv6                   AAAA                      0::0
    ip-172-31-10-54           A                         <DNS server IP address>
    
    metadata IN  A 169.254.169.254


  3. Reload the DNS service by executing rndc reload.
  4. The FQDN metadata.google.internal now resolves to the IP address 169.254.169.254, as shown in the verification below.
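
To confirm the mapping, query the custom DNS server directly. The following check is a suggestion only; it assumes the dig utility is available, and <DNS server IP address> is your DNS server's address.

Code Block
# Should return 169.254.169.254 if the zone file is loaded correctly.
dig @<DNS server IP address> metadata.google.internal +short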
