A DSC VM contains several virtual network devices that are used for different types of traffic. For example, the pkt0 interface is intended for Diameter or SS7/IP traffic, while the mgt0 interface is intended for management traffic. The virtual network devices must be integrated into your IP infrastructure to provide IP connectivity to the appropriate systems while taking into account operational requirements such as redundancy, throughput, and latency.

DSC SWe (on VMware and on KVM) Network Connections

The following provides a brief overview of integrating a DSC SWe into your IP network infrastructure.

Each DSC VM has virtual network devices for:

  • routing Diameter or SS7/IP traffic
  • inter-process communications with other DSC VMs
  • monitoring using Integrated Monitoring Feed (IMF)
  • management (DSC VMs 1 and 2)

The naming of these devices within the VMs depends on their intended function. It is up to you to provide appropriate network connectivity between a VM and your IP network, using your Host OS configuration and IP port cabling. Each configured virtual network device on a VM needs its own unique IP address and appropriate network labels or names. Additionally, a shared management IP address is provisioned; this shared management IP address migrates from one management device to the other, depending on available IP connectivity with the management gateway.
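As a minimal illustration of these requirements, the following Python sketch represents one possible per-VM plan and checks that every configured virtual device has its own unique IP address. The interface names follow the table below, but the IP addresses, network labels, and the plan layout itself are example values only, not a prescribed format.

# Hypothetical illustration only: interface names match the table below, but all
# IP addresses, network labels, and the plan structure are example values.
import ipaddress

# Per-VM virtual network devices and the (unique) IP address assigned to each.
vm1_plan = {
    "mgt0": {"ip": "192.0.2.10/24",    "network_label": "MGMT-VLAN"},
    "ha0":  {"ip": "198.51.100.10/24", "network_label": "HA-VLAN"},
    "pkt0": {"ip": "203.0.113.10/26",  "network_label": "SIGTRAN-A"},
    "pkt1": {"ip": "203.0.113.74/26",  "network_label": "SIGTRAN-B"},
}

# Shared management IP address that migrates between mgt0 (VM 1) and mgt1 (VM 2).
shared_mgmt_ip = "192.0.2.100"

def check_unique_ips(plan):
    """Confirm that each configured virtual device has its own unique IP address."""
    ips = [ipaddress.ip_interface(dev["ip"]).ip for dev in plan.values()]
    assert len(ips) == len(set(ips)), "duplicate IP address in plan"

check_unique_ips(vm1_plan)
print("Plan OK:", ", ".join(f"{name}={cfg['ip']}" for name, cfg in vm1_plan.items()))
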

For additional information on configuring a DSC VM for your network connections, refer to

The following table summarizes the virtual network devices configured on a DSC SWe (on VMware and on KVM).

Virtual Network Devices on a DSC SWe (on VMware and on KVM)

Virtual Network Device     Intended Function
----------------------     -----------------
mgt0                       Management interface for VM 1
mgt1                       Management interface for VM 2
ha0                        Internal communication between DSC VMs
pkt0, pkt1, pkt2, pkt3     Packet interfaces (all VMs). You may use one or more of these
                           interfaces in each VM, depending on your redundancy and throughput
                           requirements. SCTP multi-homing may be used if multiple packet
                           interfaces are configured in a VM (see Multi-homed SCTP and NAT
                           Support).
imf0, imf1                 Monitoring interface (optional, for use with IMF)

DSC SWe (on OpenStack) Network Connections

The DSC SWe (on OpenStack) deployment expects one management network and two packet networks:

  • Management Network - provisioning network for Web/SSH communications
  • Packet Networks - payload traffic networks

The IP networks used by the DSC SWe platforms must be of high quality so that the platforms can meet their throughput and latency requirements. See Customer Network Requirements for the DSC SWe for specific requirements, and see Creating an IP Plan on OpenStack for detailed information about planning the IP network.

For provider networks, IP addresses are assigned to each interface from the subnet allocation pools; the ha0 interface is statically assigned.
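For example, if you create the OpenStack networks yourself, a sketch along the following lines (using the openstacksdk Python library) creates a management network and one packet network with allocation pools. The cloud name, network names, CIDRs, and pool ranges are placeholder values, and provider-network attributes (network type, physical network, segmentation ID) would be added as required by your environment.

# Assumptions: openstacksdk is installed and "dsc-cloud" exists in clouds.yaml.
# All names, CIDRs, and pool ranges below are example values.
import openstack

conn = openstack.connect(cloud="dsc-cloud")

# Management network and subnet with an allocation pool for the mgt interfaces.
mgmt_net = conn.network.create_network(name="dsc-mgmt")
conn.network.create_subnet(
    network_id=mgmt_net.id,
    name="dsc-mgmt-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",
    gateway_ip="192.0.2.1",
    allocation_pools=[{"start": "192.0.2.10", "end": "192.0.2.50"}],
)

# Packet network and subnet for payload traffic.
pkt_net = conn.network.create_network(name="dsc-pkt0")
conn.network.create_subnet(
    network_id=pkt_net.id,
    name="dsc-pkt0-subnet",
    ip_version=4,
    cidr="203.0.113.0/24",
    gateway_ip="203.0.113.1",
    allocation_pools=[{"start": "203.0.113.10", "end": "203.0.113.100"}],
)
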

Supported IP Network Interfaces

Interface                  Description
---------                  -----------
mgt0                       Management interface for VM 1
mgt1                       Management interface for VM 2
ha0                        Internal communication between DSC VMs
pkt0, pkt1, pkt2, pkt3     Packet interfaces (all VMs). You may use one or more of these
                           interfaces in each VM, depending on your redundancy and throughput
                           requirements. SCTP multi-homing may be used if multiple packet
                           interfaces are configured in a VM (see Multi-homed SCTP and NAT
                           Support).
imf0, imf1                 Monitoring interface (optional, for use with IMF)

Multi-homed SCTP and NAT Support

Multihoming, one of the key features of SCTP, is the ability of an association (that is, a connection) to support multiple IP addresses or interfaces at a given endpoint. If a network failure occurs, the use of more than one IP address allows packets to be re-routed and provides an alternate path for retransmission. This network address redundancy therefore provides a degree of network-level fault tolerance.

However, issues arise with multi-homing when an endpoint tries to establish an association with another endpoint that is hidden behind a Network Address Translation (NAT) device.

NAT is designed for IP address conservation. A NAT device translates the unregistered (private) IP addresses used in the internal network into publicly routable addresses before packets are forwarded to another network. In this way, NAT conserves public addresses because it can be configured to advertise as few as one public address for the entire network to the outside world.

In multihomed associations, supplementary address parameters (carried in the SCTP payload, not the IP header) are included in certain messages. Because a NAT modifies IP addresses in the IP header only, it cannot handle these parameters. As a result, the NAT does not modify the supplementary IP addresses in the SCTP payload, which causes the association either to continue in a single-homed manner or to fail immediately.

To avoid this conflict between the SCTP payload and NAT, a set of rules is required for translating and configuring the outgoing and incoming supplementary IP addresses.

The SCTP NAT mapping feature enhances SCTP to accommodate multihomed associations whose endpoints reside behind NAT entities. Using a modification to the Linux kernel SCTP code, you can configure two sets of DSC SCTP NAT tables: egress and ingress. The kernel uses the data in these tables to perform real-time supplementary address translation on egress and ingress SCTP packets.
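The following Python sketch is a conceptual illustration only of the egress/ingress mapping idea; it is not the DSC kernel implementation, and the table entries are made-up example addresses. It shows the supplementary addresses carried in the SCTP payload being rewritten with a private-to-public table on egress and the reverse table on ingress.

# Conceptual sketch only: illustrates the kind of egress/ingress address mapping
# the feature performs. It is not the DSC kernel implementation, and the table
# contents are made-up example addresses.

# Egress table: private (internal) supplementary address -> public address
# advertised by the NAT. The ingress table is simply the reverse mapping.
EGRESS_NAT_TABLE = {
    "10.1.1.10": "198.51.100.10",
    "10.1.2.10": "198.51.100.11",
}
INGRESS_NAT_TABLE = {pub: priv for priv, pub in EGRESS_NAT_TABLE.items()}

def translate_supplementary_addresses(addresses, direction):
    """Rewrite the supplementary addresses carried in the SCTP payload
    (for example, in INIT/INIT-ACK address parameters), which a conventional
    NAT leaves untouched because it rewrites the IP header only."""
    table = EGRESS_NAT_TABLE if direction == "egress" else INGRESS_NAT_TABLE
    # Addresses with no mapping are passed through unchanged.
    return [table.get(addr, addr) for addr in addresses]

# Example: outgoing INIT chunk listing both local (private) packet interfaces.
print(translate_supplementary_addresses(["10.1.1.10", "10.1.2.10"], "egress"))
# -> ['198.51.100.10', '198.51.100.11']
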

For more information about multi-homed SCTP and NAT support, refer to DSC SWe Multihoming with NAT Support.

Creating an IP Plan

It is recommended that you create an IP plan prior to installing the DSC SWe. The IP plan is generally created in an Excel spreadsheet and captures information such as hostnames, logical IP addresses, and so on, which helps in configuring the DSC system. It is important to create an IP plan even for simple networks so that the system information is available during installation; the plan also serves as a reference for future configuration changes.

The naming of the network devices within the VM depends on the intended function. Each configured VM virtual network device requires its own unique IP address. Additionally, a shared management IP address is provisioned; this shared management IP address migrates from one management device to the other depending on available IP connectivity with the management gateway.
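As one possible starting point, the following Python sketch writes an IP plan skeleton to a CSV file rather than an Excel spreadsheet. The hostnames, interfaces, and addresses are placeholder values, and the column set is only a suggestion of the kind of information to capture.

# Placeholder values throughout: replace hostnames, interfaces, and addresses
# with your own plan. The column set is a suggestion, not a required format.
import csv

ROWS = [
    # hostname, interface, ip_address, netmask, gateway, notes
    ("dsc-vm1", "mgt0",      "192.0.2.10",    "255.255.255.0", "192.0.2.1",   "management (VM 1)"),
    ("dsc-vm1", "ha0",       "198.51.100.10", "255.255.255.0", "",            "inter-VM communication"),
    ("dsc-vm1", "pkt0",      "203.0.113.10",  "255.255.255.0", "203.0.113.1", "Diameter/SS7 traffic"),
    ("dsc-vm2", "mgt1",      "192.0.2.11",    "255.255.255.0", "192.0.2.1",   "management (VM 2)"),
    ("shared",  "mgt0/mgt1", "192.0.2.100",   "255.255.255.0", "192.0.2.1",   "shared management IP"),
]

with open("dsc_ip_plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["hostname", "interface", "ip_address", "netmask", "gateway", "notes"])
    writer.writerows(ROWS)
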

For more information about creating an IP plan, see the following:
