In this section:
Prior releases supported the use of a dedicated SBC Configurator cluster to configure other SBC SWe clusters. This approach is replaced by using one of the SBC nodes within the cluster, referred to as the "Headend" SBC, to configure the other nodes. While the SBC Configurator currently remains supported for backward compatibility, it will be deprecated in a subsequent release. Beginning with release 7.1, use the Headend SBC configuration model described in this topic.
This page describes how to create the initial, basic configuration on an integrated SBC (I-SBC) SWe Cloud cluster using the EMS and the SBC Configuration Manager. The active SBC node within the cluster, referred to as the "Headend" node, is used to create the configuration, and the EMS distributes the configuration across the cluster. For more information on how an SBC cluster interacts with the EMS for configuration, refer to Configuring an SBC SWe Cluster using the EMS.
Before following these procedures, you must have created an SBC SWe cluster in the EMS for the I-SBC cluster. Refer to Creating an SBC SWe Cluster in the EMS documentation. You must then instantiate the I-SBC cluster. After instantiation, the SBC nodes register with the EMS, but because there is no configuration yet for the cluster, its nodes start with a blank configuration.
Begin the process to initially configure a new I-SBC cluster by accessing the SBC Configuration Manager on behalf of the cluster.
Click Network > Cluster Management. The Cluster Management / Manage VNFs window opens listing the SBC clusters registered with the EMS.
Click the Configurations tab.
Click Create. The SBC Configuration Manager opens against the Headend node.
Use the following procedures and examples to configure basic I-SBC parameters using the SBC Configuration Manager.
To create IP interface groups and IP interfaces, see Creating IP Interface Groups and IP Interfaces.
To validate the values assigned during instantiation, review the meta variable table: click All > System > Meta Variables. The Meta Variable window opens, showing the Meta Variable list.
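If you prefer to check these values from the SBC CLI on the Headend node, the meta variable table can typically be displayed with the following command (syntax as in recent SBC SWe releases; verify against your version):

  show table system metaVariable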
Some of the following procedures require that you specify an address context in which to create configuration objects. These procedures use an example address context named AC2 as a placeholder; in actual practice, you can specify your own address context name or use the default address context.
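If you are creating a new address context such as AC2 yourself, the equivalent operation from the SBC CLI configure mode is a single set command followed by a commit (AC2 is the example placeholder name used throughout this page):

  set addressContext AC2
  commit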
Select All > Address Context > IP Interface Group. The IP Interface Group List window opens.
Select AC2 from the Address Context drop-down list.
Click New IP Interface Group. The Create New IP Interface Group window opens.
Enter a group name. For example: LIG1.
Click Save.
Repeat the previous steps to create another interface group. For example: LIG2.
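As a point of reference, a roughly equivalent CLI configuration for the two interface groups (LIG1 and LIG2 are the example names used on this page) would be:

  set addressContext AC2 ipInterfaceGroup LIG1
  set addressContext AC2 ipInterfaceGroup LIG2
  commit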
On the navigation pane, click All > Address Context > IP Interface Group > IP Interface. The IP Interface window opens.
Click New IP Interface. Create an interface, for example, LIF1 for the pkt0 port in Interface Group LIG1.
Click Save.
Repeat the previous steps to add an interface LIF2 for the pkt1 port in a different Interface Group LIG2.
Click Save.
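On SBC SWe, IP interfaces typically reference the instantiation meta variables rather than static addresses. The following is a sketch of the equivalent CLI, assuming the common meta variable names IF2.IPV4/IF2.PrefixV4 for pkt0 and IF3.IPV4/IF3.PrefixV4 for pkt1 (confirm the actual names in your Meta Variables table):

  set addressContext AC2 ipInterfaceGroup LIG1 ipInterface LIF1 portName pkt0 ipVarV4 IF2.IPV4 prefixVarV4 IF2.PrefixV4
  set addressContext AC2 ipInterfaceGroup LIG1 ipInterface LIF1 state enabled mode inService
  set addressContext AC2 ipInterfaceGroup LIG2 ipInterface LIF2 portName pkt1 ipVarV4 IF3.IPV4 prefixVarV4 IF3.PrefixV4
  set addressContext AC2 ipInterfaceGroup LIG2 ipInterface LIF2 state enabled mode inService
  commit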
Create a DNS group to hold the DNS server entries: select All > Address Context > DNS Group, click New DNS Group, enter a group name, and click Save.
Select All > Address Context > DNS Group > Server. The Server window opens.
On the Server window, perform the following:
Select AC2 from the Address Context drop-down list.
Select the DNS group you created from the DNS Group drop-down list.
Click New Server. The Create New Server section opens.
In the Create New Server section, perform the following:
Enter a server name.
Set the State to Enabled.
Enter the DNS server IP in the IP Address V4 or V6 field.
Click Save.
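A roughly equivalent CLI configuration, assuming an example DNS group named DNS1 and server entry S1 (the field name ipAddressV4 follows the GUI label; verify it against your release), would be:

  set addressContext AC2 dnsGroup DNS1 server S1 ipAddressV4 192.0.2.53 state enabled
  commit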
Select All > System > NTP > Server Admin. The Server Admin window opens.
Click New Server Admin, enter the NTP server IP address, set the State to Enabled, and click Save.
Select All > System > NTP > Time Zone. The Time Zone window opens.
Select the instance from the list. The Edit Selected Time Zone section opens.
Select an appropriate time zone from the Zone drop-down list.
Click Save.
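From the CLI, the NTP server entry is keyed by its IP address (192.0.2.123 below is a placeholder). The time zone instance key and the enumerated zone values are release-specific, so treat the second line as a template rather than exact syntax:

  set system ntp serverAdmin 192.0.2.123 state enabled
  set system ntp timeZone <instance> zone <zoneName>
  commit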
Select All > Address Context > Zone. The Zone window opens.
Select AC2 from the Address Context drop-down list.
Click New Zone. The Create New Zone section opens.
Click Save.
Repeat the previous steps to create another zone, for example, EXTERNAL.
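A roughly equivalent CLI configuration for the two zones follows. INTERNAL is an assumed example name for the first zone (this page only names the second, EXTERNAL), and each zone requires a unique id:

  set addressContext AC2 zone INTERNAL id 2
  set addressContext AC2 zone EXTERNAL id 3
  commit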
Create SIP signaling ports for the zones you created:
Click Save.
Repeat the previous steps to create a SIP signaling port for the EXTERNAL zone.
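For reference, a sketch of the equivalent CLI for the two SIP signaling ports, reusing the assumed zone name INTERNAL and the assumed meta variable names from the earlier examples:

  set addressContext AC2 zone INTERNAL sipSigPort 1 ipInterfaceGroupName LIG1 ipVarV4 IF2.IPV4 portNumber 5060 transportProtocolsAllowed sip-udp state enabled
  set addressContext AC2 zone EXTERNAL sipSigPort 2 ipInterfaceGroupName LIG2 ipVarV4 IF3.IPV4 portNumber 5060 transportProtocolsAllowed sip-udp state enabled
  commit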
Create SIP trunk groups in the zones you created:
Click Save.
Repeat the previous steps to create another SIP trunk group (for example, EGRESS_TG) for the EXTERNAL zone using LIG2.
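A roughly equivalent CLI configuration; INGRESS_TG is an assumed example name for the first trunk group, paired with EGRESS_TG from the step above:

  set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG media mediaIpInterfaceGroupName LIG1
  set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG state enabled mode inService
  set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG media mediaIpInterfaceGroupName LIG2
  set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG state enabled mode inService
  commit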
Create ingress IP prefixes for the SIP trunk groups you created:
Click Save.
Repeat the previous steps to create an IP Prefix for the EXTERNAL zone and EGRESS_TG trunk group.
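An ingress IP prefix pairs an address with a prefix length. A CLI sketch using the assumed names above and placeholder documentation addresses (192.0.2.0/24 and 198.51.100.0/24):

  set addressContext AC2 zone INTERNAL sipTrunkGroup INGRESS_TG ingressIpPrefix 192.0.2.0 24
  set addressContext AC2 zone EXTERNAL sipTrunkGroup EGRESS_TG ingressIpPrefix 198.51.100.0 24
  commit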
Configure the settings for the remote PSX server:
Select the local server listed. The Edit Selected Local Server section opens.
Set the State to Disabled.
Select Out of Service from the Mode drop-down menu.
Click Save.
Click New Remote Server. The Create New Remote Server section opens.
Enter a server Name.
Enter the server IP Address.
Set the State to Enabled.
Select Active from the Mode drop-down list.
Click Save.
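A roughly equivalent CLI sequence, assuming the default local server entry name PSX_LOCAL_SERVER, an example remote server name PSX1, and a placeholder address:

  set system policyServer localServer PSX_LOCAL_SERVER mode outOfService state disabled
  set system policyServer remoteServer PSX1 ipAddress 192.0.2.10 state enabled mode active
  commit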
Configure the intra-cluster VNF communication interface for use with the Load Balancing Service by configuring the Cluster Comm object.
Select All > System > Cluster Admin > Cluster Comm. The Cluster Comm window opens.
By default, the interface Type is set to Mgmt and the window shows fields for the management interface. Alternatively, an IP interface can be used for intra-cluster communications by setting the Type to IP and populating the fields that specify a packet interface.
Select the Type you want to use and then enter the corresponding interface values.
Click Save.
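The Cluster Comm object can also be set from the CLI. The object path below mirrors the GUI navigation, but treat the exact path and field names as assumptions to be verified against your release:

  set system clusterAdmin clusterComm type mgmt
  commit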
Select All > System > Load Balancing Service. The Load Balancing Service window opens.
The Management fixed IP address should be added as an A record on the DNS server.
Enter a group name. Example: sbc1.lbs.com
Click Save.
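From the CLI, the load balancing service group name would be set along these lines; the object and field names here are inferred from the GUI labels, not verified syntax:

  set system loadBalancingService groupName sbc1.lbs.com
  commit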
Once you have completed making configuration changes:
Click Save at the top of the SBC Configuration Manager window. A Save confirmation window opens.
Click Save and Activate configuration. This copies the configuration from the Headend SBC back to the EMS as the active configuration, sets the Headend SBC configuration status to Config-in-sync, and marks all other (non-Headend) SBCs as Config-out-of-sync. The EMS then pushes the configuration differences to all nodes that are config-out-of-sync, unless those nodes are unavailable.
Once the configuration is successfully activated, the Cluster Status column for the cluster in the Cluster Management / Manage VNFs window displays All nodes online and Activation Complete.
The EMS reboots out-of-sync nodes automatically during initial configuration. However, if nodes lose synchronization after that point, for example if an activation fails, you must reboot them to trigger a configuration download that brings them back into sync. To reboot a node, use the Reboot Node option on the Nodes tab.