DSC SWe on this page refers to the DSC SWe on Kernel-based Virtual Machine (KVM) and DSC SWe on VMware.

The following section provides a general overview of how the DSC SWe integrates with a customer network. For information on how the addition of a Load Balancer may affect your network, refer to Load Balancer Network Integration on Customer Network.

Integrating the DSC SWe into your IP Network

A DSC SWe Platform consists of several DSC Virtual Machines (DSC VMs) working together to provide Diameter Signaling Controller or SS7 Signaling Transfer Point (STP) functionality. Each DSC VM is assigned a numeric VM Identifier, sometimes referred to as a VM ID or a (virtual) slot number. A DSC VM may provide management and routing functionality, or routing (Diameter and/or SS7) functionality only. A DSC SWe must contain at least two VMs, one with ID 1 and one with ID 2, each of which provides both management and routing functionality. A DSC SWe may contain additional VMs that provide routing functionality only.
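As an illustration, the platform rules above (mandatory VMs 1 and 2 with both management and routing, additional VMs routing-only) can be expressed as a small validation sketch. The inventory structure and function name are hypothetical, not part of the product:

```python
# Hypothetical sketch: validate a DSC SWe VM inventory against the rules above.
# VM IDs 1 and 2 must exist and provide both management and routing;
# any additional VM provides routing only.

def validate_inventory(vms):
    """vms: dict mapping VM ID -> set of roles, e.g. {1: {"management", "routing"}}."""
    errors = []
    for vm_id in (1, 2):
        roles = vms.get(vm_id, set())
        if not {"management", "routing"} <= roles:
            errors.append(f"VM {vm_id} must provide management and routing")
    for vm_id, roles in vms.items():
        if vm_id not in (1, 2) and roles != {"routing"}:
            errors.append(f"VM {vm_id} may provide routing functionality only")
    return errors

# Example: a minimal two-VM platform plus one routing-only VM.
inventory = {
    1: {"management", "routing"},
    2: {"management", "routing"},
    3: {"routing"},
}
print(validate_inventory(inventory))  # → []
```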

Each DSC VM contains several virtual network devices that are used for different types of traffic. For example, the pkt0 interface is intended for Diameter or SS7/IP traffic, while the mgt0 interface is intended for management traffic. These network devices must be integrated into your IP infrastructure to provide IP connectivity to the appropriate systems while taking into account operational requirements such as redundancy, throughput, and latency.

The following sections give a brief overview of integrating a DSC SWe into your IP network infrastructure.

Overview of DSC SWe Virtual Network Devices

Each DSC SWe VM has virtual network devices for

  • routing Diameter or SS7/IP traffic
  • inter-process communications with other DSC VMs
  • monitoring using Integrated Monitoring Feed (IMF)

DSC SWe VMs 1 and 2 have virtual network devices for management.

The naming of these devices within the VMs depends on their intended function. It is up to you to provide appropriate network connectivity between a VM and your IP network, using the host OS configuration and IP port cabling. Each configured VM virtual network device needs its own unique IP address. Additionally, a shared management IP address should be provisioned; this shared management IP address will migrate from one management device to the other depending on available IP connectivity with the management gateway.
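The addressing rule above (one unique IP per configured virtual device, plus one shared management IP in the same subnet) can be sketched as a simple plan check. All addresses and device assignments below are hypothetical examples using documentation address ranges:

```python
import ipaddress

# Hypothetical example IP plan: one unique address per configured virtual
# network device, plus a shared (floating) management address.
ip_plan = {
    ("vm1", "mgt0"): "192.0.2.11",
    ("vm2", "mgt1"): "192.0.2.12",
    ("vm1", "pkt0"): "198.51.100.11",
    ("vm2", "pkt0"): "198.51.100.12",
}
shared_mgt_ip = "192.0.2.10"  # floats between mgt0 and mgt1

# Every configured device address must be unique and must not collide
# with the shared management address.
addresses = list(ip_plan.values()) + [shared_mgt_ip]
assert len(addresses) == len(set(addresses)), "duplicate IP address in plan"

# The shared management IP must sit in the same subnet as mgt0/mgt1.
mgt_net = ipaddress.ip_network("192.0.2.0/24")
assert ipaddress.ip_address(shared_mgt_ip) in mgt_net
print("IP plan OK")
```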

Device names

VM Virtual Network Device      Intended Function

mgt0                           Management interface for VM 1

mgt1                           Management interface for VM 2

ha0                            Internal communication between DSC VMs

pkt0, pkt1, pkt2, pkt3         Packet interfaces (in any VM). You may use one or more of
                               these interfaces in each VM, depending on your redundancy
                               and throughput requirements. SCTP multi-homing may be used
                               if multiple packet interfaces are configured in a VM.

imf0, imf1                     Monitoring interface (optional, for use with IMF)


Requirements for the HA Interface

The High-availability (HA) interface is used for inter-process and inter-VM communication within the DSC SWe. Inter-process communication may use Transparent Inter-process Communication (TIPC), Stream Control Transmission Protocol (SCTP), Transmission Control Protocol (TCP) or User Datagram Protocol (UDP).

The HA Interface may be used to forward Diameter or SS7/IP traffic between VMs if the next hop is unavailable from the local VM. In the worst case, all Diameter or SS7/IP traffic may arrive on one VM and leave from another VM, so the HA interface should be fast enough to handle all Diameter or SS7/IP traffic processed by the system.
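The worst-case sizing rule above amounts to: the HA network must be able to carry the platform's total Diameter or SS7/IP signaling throughput. A small arithmetic sketch, using made-up traffic figures for illustration only:

```python
# Hypothetical traffic figures for illustration only.
messages_per_second = 50_000   # total signaling messages processed by the system
avg_message_bytes = 2_000      # average message size, including headers

# Worst case: every message arrives on one VM and leaves from another,
# so each message crosses the HA network and the HA network must carry
# the full signaling load.
signaling_bps = messages_per_second * avg_message_bytes * 8
print(f"Required HA throughput: {signaling_bps / 1e9:.1f} Gbps")  # → 0.8 Gbps
```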

The HA interface may also be used for other maintenance tasks.

The reference configuration under KVM attaches the ha0 interface to a Linux bridge and then to a “bond” device that contains two physical devices that connect to the second server with crossover cables. The reference VMware setup is similar. Other configurations are possible.

The following is a summary of the minimum performance parameters for the HA network:

  • Low latency: <= 18 ms round-trip time between any two DSC VMs; lower is better.

  • Low loss and reordering:
    • <= 0.01% packet loss
    • similar rates of packet re-ordering
    • no bursts of high packet loss; at high traffic levels, even a burst of 0.5 s of complete packet loss can cause degraded throughput and possible signaling congestion.

  • High throughput: The reference configuration uses two 1Gbps crossover cables with a Linux bond device for redundancy.  

    No attempt has been made to identify the minimum throughput required for standard (automated or manual) maintenance procedures.


  • Redundancy: There must be no single point of failure that could isolate some VMs from others.

  • Isolation: All traffic to and from the HA interfaces of a single DSC SWe should be isolated at the Ethernet and IP layers. No other applications should be exchanging traffic with the HA network. More than one DSC SWe should NOT share a single HA network (see the figure in Inter-connectivity Example Between the DSC SWe and a Customer Network).

  • Layer 2 transparency
    • The ha0 device is used within the VM as a TIPC bearer. 
    • TIPC caches the MAC address of adjacent nodes. 
    • The MAC address of TIPC packets generated by the VM should not be changed in flight.
    • The HA device requires Layer 2 interconnect between all VMs within the DSC SWe.
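The quantitative HA requirements above (latency and loss thresholds) can be captured as a simple acceptance check against measured values. The measurements below are hypothetical:

```python
# Thresholds from the HA network requirements above.
MAX_RTT_MS = 18.0     # maximum round-trip time between any two DSC VMs
MAX_LOSS_PCT = 0.01   # maximum packet loss (re-ordering should be similarly rare)

def ha_link_ok(rtt_ms, loss_pct):
    """Return True if a measured HA link meets the minimum requirements."""
    return rtt_ms <= MAX_RTT_MS and loss_pct <= MAX_LOSS_PCT

# Hypothetical measurements between VM pairs.
measurements = {
    ("vm1", "vm2"): (0.4, 0.0),    # crossover cables: well within limits
    ("vm1", "vm3"): (19.5, 0.0),   # too slow: fails the latency requirement
}
for pair, (rtt, loss) in measurements.items():
    print(pair, "OK" if ha_link_ok(rtt, loss) else "FAIL")
```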

Requirements for the Management Interface

The management interfaces are used for system maintenance and monitoring, including access to the DSC SWe HTTP-based interface and carrying Simple Network Management Protocol (SNMP) traps.

One IP address is required for mgt0 on VM 1, a different IP address in the same subnet is required for mgt1 on VM 2, and a third ("shared") IP is required that can float between mgt0 and mgt1 to provide convenient access to the management functions using a single IP.

The floating management IP is managed by the Corosync and Pacemaker clustering software, which checks reachability of the configured management gateway to determine whether mgt0 and/or mgt1 are accessible to remote maintenance systems. If the gateway is not accessible from the VM currently holding the shared IP, the shared IP is migrated to the other VM. For this reason, the management gateway must be accessible from both mgt0 and mgt1.
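The failover behavior described above can be sketched as follows. The real decision is made by the Corosync/Pacemaker resource agents, so this is only an illustrative model with hypothetical names:

```python
def shared_ip_location(gateway_reachable_from, current="vm1"):
    """
    Illustrative model of where the shared management IP should live.
    gateway_reachable_from: set of VMs that can currently reach the
    configured management gateway.
    """
    if current in gateway_reachable_from:
        return current                     # gateway reachable: no failover needed
    other = "vm2" if current == "vm1" else "vm1"
    if other in gateway_reachable_from:
        return other                       # migrate the shared IP to the peer VM
    return current                         # neither VM can reach the gateway

print(shared_ip_location({"vm1", "vm2"}))          # → vm1
print(shared_ip_location({"vm2"}, current="vm1"))  # → vm2
```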

The cabling and IP network integration should be such that no single failure renders the DSC SWe management interfaces inaccessible to monitoring and maintenance stations.

Requirements for the Packet Interface

The packet interfaces are intended for Diameter and SS7/IP traffic. Each virtual packet interface requires a unique IP address. The packet network should provide sufficient throughput, redundancy, and low enough latency for the expected load and service level agreements. No bursts of high packet loss or latency should occur.

If the use of the Custom VLAN feature is required, the host must be set up to allow the routing of IEEE 802.1q tagged packets to and from the physical device. The actual VLAN configuration is done inside the VM once the host has been configured. For more information about configuring VLANs, refer to http://pubs.vmware.com/vsphere-60/topic/com.vmware.vsphere.networking.doc/GUID-7225A28C-DAAB-4E90-AE8C-795A755FBE27.html.
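Once the host passes IEEE 802.1q tagged frames through, a VLAN subinterface inside the VM is typically created with the standard Linux iproute2 tooling. The following sketch only builds the command line; the device name and VLAN ID are examples, not product defaults:

```python
def vlan_add_command(device, vlan_id):
    """Build the standard iproute2 command for creating an 802.1q subinterface."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("802.1q VLAN IDs must be in the range 1-4094")
    return (f"ip link add link {device} name {device}.{vlan_id} "
            f"type vlan id {vlan_id}")

print(vlan_add_command("pkt0", 100))
# → ip link add link pkt0 name pkt0.100 type vlan id 100
```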

Compliance with IEEE 802.1q is accomplished for the three DSC Platforms as follows.


Inter-connectivity Example Between the DSC SWe and a Customer Network

The following illustration provides an example for the inter-connectivity between the DSC SWe and a customer network.

[Figure: Inter-connectivity between the DSC SWe and a customer network]

