This section describes how to create an initial, basic configuration for an integrated SBC (I-SBC) SWe cluster using the SBC Configuration Manager GUI. Although the following procedures use the GUI, you can also make cluster configuration changes using the CLI on the active node of the cluster. Refer to Modifying SBC Cluster Configuration for information on using the CLI, and to the CLI Reference Guide for information on the available commands.
The configuration shown here is a basic example that focuses on the minimum configuration required for a basic call flow and the configuration required by cloud-based clusters. For example, in contrast to assigning static values, cloud deployments use metavariables to assign interface values such as IP addresses; the actual values are determined dynamically during instantiation, based on the cloud environment.
Beyond this minimal configuration, a full deployment would require further configuration and customization based on the intended use and environment, similar to other types of SBC deployments.
When using dnsGroup LOCAL resolution, you cannot configure more than four active T-SBCs/M-SBCs because DNS Group LOCAL is limited to a maximum of four entries. To enable more than four active T-SBCs/M-SBCs, use an external DNS server for resolution.
Begin the process to initially configure a new I-SBC cluster by accessing the SBC Configuration Manager for the cluster.
Click Network → Cluster Management. The Cluster Management / Manage VNFs window opens listing the SBC clusters registered with RAMP.
Figure 1: Cluster Management / Manage VNFs window
Click the Configurations tab.
Figure 2: Cluster Configurations Tab
Click Edit Configuration. The SBC Configuration Manager opens in a separate window against the cluster's active node. For information about using the GUI to configure the SBC, refer to the EMA User Guide.
Figure 3: SBC Configuration Manager Window
Use the following procedures and examples to configure basic I-SBC parameters using the SBC Configuration Manager.
To create IP interface group and IP interfaces, see Creating IP Interface Groups and IP Interfaces.
To validate the values assigned to metavariables during instantiation, review the Meta Variable table by clicking All → System → Meta Variables. The Meta Variable window opens showing the Meta Variable list. In cloud deployments, metavariables are used to assign interface values, such as IP addresses, whose values are configured dynamically during instantiation.
Figure 4: Meta Variable
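The same metavariable information is also available from the CLI on the cluster's active node. The following is a sketch assuming the standard SBC CLI; refer to the CLI Reference Guide for the exact command forms available in your release:

```
show table system metaVariable
```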
Some of the following procedures require you to specify an address context in which to create configuration objects. These procedures use an example address context named AC2 as a placeholder; in actual practice, you can specify your own address context name or use the default address context. The steps that follow create an address context named AC2.
Select All → Address Context → IP Interface Group. The IP Interface Group List window opens.
Select AC2 from the Address Context drop-down list.
Click New IP Interface Group. The Create New IP Interface Group window opens.
Enter a name for the interface group, for example, LIG1.
Click Save.
Figure 5: IP Interface Group
Repeat the previous steps to create another interface group, for example, LIG2.
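The equivalent CLI configuration, run on the cluster's active node, would look similar to the following sketch. AC2, LIG1, and LIG2 are the example names used in this section; verify the syntax against the CLI Reference Guide:

```
set addressContext AC2 ipInterfaceGroup LIG1
set addressContext AC2 ipInterfaceGroup LIG2
commit
```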
On the navigation pane, click All → Address Context → IP Interface Group → IP Interface. The IP Interface window opens.
Create an IP interface, for example, LIF1, for the pkt0 port in the LIG1 interface group, using the appropriate metavariables for values such as the IP address and prefix.
Click Save.
Figure 6: Creating an IP Interface
Repeat the previous steps to add an interface, LIF2, for the pkt1 port in the other interface group, LIG2.
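A comparable CLI sketch for the two interfaces follows. The metavariable names (IF2.IPV4, IF2.PrefixV4, and so on) are illustrative; they must match the entries shown in your Meta Variable table, which vary by deployment:

```
set addressContext AC2 ipInterfaceGroup LIG1 ipInterface LIF1 portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4
set addressContext AC2 ipInterfaceGroup LIG1 ipInterface LIF1 state enabled mode inService
set addressContext AC2 ipInterfaceGroup LIG2 ipInterface LIF2 portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4
set addressContext AC2 ipInterfaceGroup LIG2 ipInterface LIF2 state enabled mode inService
commit
```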
Create any static routes needed to reach networks through the interfaces you configured, and click Save.
Figure 7: Static Route Window
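A CLI sketch for a default static route follows. The destination, next-hop address, and preference value are placeholders; in cloud deployments the next hop is typically supplied through a gateway metavariable rather than a static value:

```
set addressContext AC2 staticRoute 0.0.0.0 0 10.10.10.1 LIG1 LIF1 preference 100
commit
```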
Select All → Address Context → DNS Group, click New DNS Group, enter a name for the group, and click Save.
Figure 8: Create New DNS Group
Select All → Address Context → DNS Group → Server. The Server window opens.
On the Server window, perform the following:
Select AC2 from the Address Context drop-down list.
Select the DNS group you created from the DNS Group drop-down list.
Click New Server. The Create New Server section opens.
In the Create New Server section:
Enter a server name.
Set the State to Enabled.
Enter the DNS server IP in the IP Address V4 or V6 field.
Click Save.
Figure 9: DNS Server Window
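The equivalent CLI sketch for the DNS group and server follows. DNS1, DNS_SERVER1, and the server IP address are placeholder examples; check the attribute names against the CLI Reference Guide for your release:

```
set addressContext AC2 dnsGroup DNS1 type ip
set addressContext AC2 dnsGroup DNS1 server DNS_SERVER1 state enabled ipAddressV4 10.20.20.5
commit
```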
Select All → System → NTP → Server Admin. The Server Admin window opens.
Create an entry for your NTP server and click Save.
Figure 10: NTP Server Admin
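A CLI sketch for the NTP server entry follows; the IP address is a placeholder for your NTP server:

```
set system ntp serverAdmin 10.30.30.5 state enabled
commit
```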
Select All → System → NTP → Time Zone. The Time Zone window opens.
Select the instance from the list. The Edit Selected Time Zone section opens.
Select an appropriate time zone from the Zone drop-down list.
Click Save.
Figure 11: Time Zone Window
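The time zone can also be set from the CLI. The zone value is an enumerated name; use tab completion or the CLI Reference Guide to find the valid zone names for your release:

```
set system ntp timeZone zone <zone-name>
commit
```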
Select All → Address Context → Zone. The Zone window opens.
Select AC2 from the Address Context drop-down list.
Click New Zone. The Create New Zone section opens.
Enter a name for the zone (for example, INTERNAL) and a unique zone Id.
Click Save.
Figure 12: Create New Zone
Repeat the previous steps to create another zone, for example, EXTERNAL.
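A CLI sketch for the two zones follows, assuming the example zone names INTERNAL and EXTERNAL; the Id values are examples and must be unique within the address context:

```
set addressContext AC2 zone INTERNAL id 2
set addressContext AC2 zone EXTERNAL id 3
commit
```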
Create SIP signaling ports for the zones you created:
Configure a SIP signaling port for the first zone you created and click Save.
Figure 13: Create New SIP Sig Port
Repeat the previous steps to create a SIP signaling port for the EXTERNAL zone.
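A CLI sketch for the two SIP signaling ports follows, assuming the example zones INTERNAL and EXTERNAL. The port indices, port number, transport protocols, and metavariable names are illustrative and must match your deployment:

```
set addressContext AC2 zone INTERNAL sipSigPort 1 ipInterfaceGroupName LIG1 ipVarV4 IF2.IPV4 portNumber 5060 transportProtocolsAllowed sip-udp
set addressContext AC2 zone INTERNAL sipSigPort 1 state enabled mode inService
set addressContext AC2 zone EXTERNAL sipSigPort 2 ipInterfaceGroupName LIG2 ipVarV4 IF3.IPV4 portNumber 5060 transportProtocolsAllowed sip-udp
set addressContext AC2 zone EXTERNAL sipSigPort 2 state enabled mode inService
commit
```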
Create SIP trunk groups in the zones you created:
Configure a SIP trunk group (for example, INGRESS_TG) for the first zone, using the LIG1 interface group, and click Save.
Figure 14: Creating a SIP Trunk Group
Repeat the previous steps to create another SIP trunk group for the EXTERNAL zone, using LIG2.
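A CLI sketch for the two trunk groups follows, assuming the example zone names INTERNAL and EXTERNAL; INGRESS_TG is an illustrative name for the internal-side trunk group, and EGRESS_TG matches the external-side example used later in this section:

```
set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG media mediaIpInterfaceGroupName LIG1
set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG state enabled mode inService
set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG media mediaIpInterfaceGroupName LIG2
set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG state enabled mode inService
commit
```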
Create ingress IP prefixes for the SIP trunk groups you created:
Configure an ingress IP prefix for the trunk group in the first zone and click Save.
Figure 15: Create New Ingress IP Prefix
Repeat the previous steps to create an IP Prefix for the EXTERNAL zone and EGRESS_TG trunk group.
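A CLI sketch for the ingress IP prefixes follows. The prefix addresses are placeholders for the networks from which you accept calls, and INGRESS_TG is the illustrative internal-side trunk group name:

```
set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG ingressIpPrefix 10.10.10.0 24
set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG ingressIpPrefix 10.20.30.0 24
commit
```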
Configure the SBC to use a remote PSX policy server:
Select the local server listed. The Edit Selected Local Server section opens.
Set the State to Disabled.
Select Out of Service from the Mode drop-down menu.
Click Save.
Figure 16: Edit Selected Local Server
Click New Remote Server. The Create New Remote Server section opens.
Enter a server Name.
Enter the server IP Address.
Set the State to Enabled.
Select Active from the Mode drop-down list.
Click Save.
Figure 17: Create New Remote Server
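A CLI sketch for the policy server changes follows. PSX_LOCAL_SERVER is the typical name of the default local policy server entry, and PSX1 and its IP address are placeholders for your remote PSX; verify the names in your configuration before applying:

```
set system policyServer localServer PSX_LOCAL_SERVER state disabled mode outOfService
set system policyServer remoteServer PSX1 ipAddress 10.40.40.5 state enabled mode active
commit
```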
Once you have completed making configuration changes, click Apply Saved Changes and Close at the top-right of the SBC Configuration Manager window. When prompted, confirm that you want to save and activate your configuration changes. The SBC Configuration Manager window closes. The active node replicates the configuration changes to the standby node in the cluster and stores a record of the updated configuration back to RAMP.