Network Overview

The DSC solution is always deployed as pairs of servers for redundancy.  Each server is fitted with multiple Ethernet ports which may be used in various configurations. 

Separation of Call Processing and OAM&P traffic is provided by a combination of physical and logical separation. Multiple interfaces on each server provide physical flow separation, while VLAN tagging (802.1q) provides logical flow separation within each bond.

Caution

VLAN tagging (802.1q) is mandatory for every server bond interface. This allows non-disruptive network modifications in the future if needed.

Ethernet ports that form a bond interface operate as an active/standby pair. Failover between the bonded ports is non-revertive.

The DSC server supports up to three (3) bond interfaces. Each logical bond interface may attach to routers in different networks as needed to provide the desired physical separation. These bonds are generally allocated to the following functions:

  • bond-ha - High availability (L2 only)
  • bond0 - OAM&P
  • bond1 - Call signaling

These bond interfaces convey the needed VLANs into each host. A set of virtual bridges is constructed inside every virtual DSC server, one for each VLAN, and these bridges are then available to the VMs for network connectivity. Each VM attaches to the subset of bridges required for its function.
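For illustration only, the per-VLAN plumbing on a host could be created with iproute2 commands like the following minimal sketch. The VLAN ID (100) is a hypothetical example, and a real deployment would use persistent network configuration rather than ad-hoc commands:

    # 802.1q sub-interface on bond0 for a hypothetical OAM VLAN 100
    ip link add link bond0 name bond0.100 type vlan id 100
    # one Linux bridge per VLAN; name matches the host bridge naming used later in this section
    ip link add br-dataoam type bridge
    # attach the VLAN sub-interface to its bridge and bring both up
    ip link set bond0.100 master br-dataoam
    ip link set bond0.100 up
    ip link set br-dataoam up

VM virtual NICs then attach to br-dataoam rather than to the bond directly.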

The number of VLANs/subnets required for an office varies and is influenced by these factors:

  • Co-located versus Geo-diverse deployment
  • Customer network constraints and desires

Layer 3 Switch Requirements

The virtual DSC requires the following features from the Layer 3 Switches to which it connects. 

  • Layer 2 features
    • VLAN
    • 802.1Q tagging
    • Port auto-negotiation
    • 1000BASE-T, 1 GbE, or 10 GbE port support
    • BOOTP/DHCP relay (RFC 951, RFC 2131)
    • 9100-byte jumbo frames
    • Spanning tree fast start (Cisco PortFast or similar)
  • Multi-link trunking
    • 802.3ad static link aggregation (e.g., trunks toward the RMS)
    • 802.3ad dynamic aggregation with LACP (e.g., inter-switch trunks)
  • Layer 3 routing
    • VRRP (RFC 2338) or HSRP
    • Dynamic routing protocol
  • QoS / filtering
    • DiffServ support (RFC 2474 & RFC 2475)
    • ACL support
  • Management
    • Per customer requirements (e.g., SNMPv3, SSH)
    • Router scripting support, such as tclsh for Cisco or op scripts for Juniper
    • SSH public key (PKI) authentication
Note

This is not a comprehensive list, but identifies the primary features required for an L3 switch.

Layer 3 Switch Connectivity

The virtual DSC must connect directly to Layer 3 switch edge devices. Using separate devices for Layer 2 switching and Layer 3 routing is not a supported configuration.

A high-level diagram of the L3 switch connectivity follows.

Layer 3 Switch Network


Layer 3 Connectivity

Routing between the L3 switches and to the wider network uses methods specified by the customer, commonly OSPF or BGP dynamic routing, or BFD-protected static routes.

Layer 2 Connectivity

The Dell R740 rack mount servers supplied by Ribbon provide SFP+ interfaces that operate at either 10G or 1G rates, depending on the type of SFP/SFP+ chosen and its auto-negotiation capability. Interfacing to the RMS at less than 1 Gb/s is not supported. The interface may use any technology supported by a compatible SFP/SFP+, copper or fiber.

Caution

Interfaces must be 1GbE or 10GbE.   Slower rates are not supported.
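The negotiated rate of a server port can be checked from Linux with ethtool; a quick illustrative check (the interface name ens1f0 comes from the R740 tables later in this section):

    # verify the port negotiated 1000Mb/s or 10000Mb/s, full duplex
    ethtool ens1f0 | grep -E 'Speed|Duplex|Link detected'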

The DSC solution requires a number of separate VLANs for various functions.  It is required that all DSC VLANs be present on all DSC RMS servers.  This allows flexible assignment of VMs to any server.  

The L3 switch provides this connectivity via:

  • VLANs extended across the two L3 switches over their common inter-switch trunk.
  • The inter-switch trunk must be composed of multiple individual ports for redundancy. The use of a link management protocol (e.g., LACP or PAgP) is recommended for better port failure detection.
  • Interfaces to the RMS servers are always 802.1q tagged, even if they convey only one VLAN.
Caution

All virtual DSC VLANs must be mapped to every DSC server regardless of usage.
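One illustrative way to confirm this on a given server is to list the VLAN sub-interfaces and per-VLAN bridges that were instantiated on the host:

    # show all 802.1q sub-interfaces and their VLAN IDs
    ip -d link show type vlan
    # show the per-VLAN bridges
    ip link show type bridge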

The RMS servers are always configured with redundant "bond" interfaces, each consisting of two Ethernet ports. The bond is configured as an active-backup interface and therefore requires no special configuration on the L3 switch. One port of the bond is active and carries all transmit and receive traffic; the inactive port does nothing except detect link state.
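For illustration, an equivalent active-backup bond could be built manually with iproute2 (the member names follow the R740 mapping table later in this section; real deployments use persistent network configuration):

    # active-backup bond; miimon polls member link state every 100 ms
    ip link add bond0 type bond mode active-backup miimon 100
    # members must be down before they can be enslaved
    ip link set ens1f0 down && ip link set ens1f0 master bond0
    ip link set ens2f0 down && ip link set ens2f0 master bond0
    ip link set bond0 up

Because no "primary" member is configured, the Linux bonding driver does not switch back when a failed port recovers, which matches the non-revertive behavior described earlier.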

Co-Located Deployment 

In co-located (co-lo) deployments, both L3 switches are located in one building. Each RMS bond interface has one port attached to each of the two L3 switches, so network connectivity is maintained even during a router outage (e.g., a reboot).

Geo-Redundant Deployment

In geo-redundant (geo) deployments, the two L3 switches are located in separate buildings, so each RMS bond interface attaches to a single router. To improve availability, it is recommended that the two ports of each bond interface use different multi-port cards in the L3 switch where possible.

Geo deployments require many of the VLANs to be extended between the two geo sites. This can be provided by the operator via many optical, switched, or tunneled methods. 

Spanning Tree

Spanning tree protocols (STP) protect the network from Layer 2 loops that cause broadcast storms. However, spanning tree re-convergence (even with Rapid STP) can break network connectivity for many seconds every time a link state change is detected on the VLAN. For proper operation of the virtual DSC, spanning tree must not be allowed to cause these connectivity breaks.

This can be done by disabling spanning tree on the DSC VLANs or, when that is not possible, by configuring "fast start" features such as Cisco PortFast. Features like BPDU guard should be avoided since they require manual intervention to recover service.

To avoid the need for STP, the virtual DSC VLANs are intended to span only two L3 switches, avoiding the box or mesh network topologies that require STP.
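On the host side (a complement to, not a substitute for, the switch configuration above), the Linux bridges created for the DSC VLANs keep STP disabled, which is the kernel default; illustratively:

    # 0 = STP disabled on the per-VLAN host bridge (the Linux default)
    ip link set dev br-dataoam type bridge stp_state 0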

VLAN Requirements

The number of VLANs required by an office is variable.  The simplest configuration occurs in greenfield, co-located deployments. 

This table shows the superset of VLANs used in virtual DSC deployments. Columns indicate which VLANs are used in greenfield offices (all offices) and which apply only to geo-redundant offices. The table also indicates which VLANs are extended between geo sites, and the default bond each inhabits in the most common three-bond configuration (bond-ha, bond0, and bond1).

DSC SWe VLAN Usage

(tick) yes   (error) no   (question) optional

| Description | Greenfield | DSC pool name | Host bridge name | Extended between geo sites | Default bond | Comments |
|---|---|---|---|---|---|---|
| OAM | (tick) | dataoam | br-dataoam | (tick) | bond0 | |
| Call signaling | (tick) | datacallp | br-datacallp | (tick) | bond1 | |
| | (error) | callp2 | br-callp2 | (tick) | bond1 | These VLANs are optional (up to 9 additional). Only used when CallP separation is desired. |
| | (error) | callp3 | br-callp3 | (tick) | bond1 | |
| | (error) | callp4 | br-callp4 | (tick) | bond1 | |
| | (error) | callp5 | br-callp5 | (tick) | bond1 | |
| | (error) | callp6 | br-callp6 | (tick) | bond1 | |
| | (error) | callp7 | br-callp7 | (tick) | bond1 | |
| | (error) | callp8 | br-callp8 | (tick) | bond1 | |
| | (error) | callp9 | br-callp9 | (tick) | bond1 | |
| | (error) | callp10 | br-callp10 | (tick) | bond1 | |
| HA | (tick) | none | br-ha-vsp2k | (tick) | bond-ha | Switched only. No IP address on the L3 switch. |
Note

Unneeded VLANs do not need to be configured in the network and will not be instantiated in the host servers.

Greenfield VLAN Usage

The VLANs needed for a standard greenfield office deployment are indicated in the table by check marks in the "Greenfield" column. This presumes that there is no issue with the customer defining one large subnet/VLAN for each function. The datacallp call signaling subnet is likely the largest, since its size is driven by the number of interconnections to third-party nodes, each of which can consume up to four IP addresses (for example, 50 interconnections could require up to 200 addresses).


RMS Interface Assignments

R740

The R740 servers contain:

  • eight (8) 1GbE/10GbE SFP+ ports located on two (2) quad port NIC cards, and
  • four (4) 1000BASE-T ports located on the RMS motherboard, and
  • one (1) 10/100/1000BASE-T capable chassis management port which is only used to access the iDRAC.

The physical location and corresponding Linux interface names are shown in the R740 Interface Physical Locations table.  

R740 Interface Physical Locations

| Location (rear view) | Interfaces |
|---|---|
| Top left PCI card | ens1f1, ens1f0 (SFP+) |
| Middle left PCI card | ens2f1, ens2f0 (SFP+) |
| Top right and bottom left PCI cards | (no ports shown) |
| Right side | Power supplies |
| Motherboard ports | iDRAC (copper, management only); serial, video, USB; eno1, eno2, eno3, eno4 (copper) |


Pairs of these ports are assigned to the bond interfaces as described earlier in this section. The mapping of Ethernet interfaces to bond interfaces is given in the R740 Interface to Bond Mapping table. The bonds are defined such that the two bonded interfaces are not allocated from the same PCI NIC card.

R740 Interface to Bond Mapping

| R740 ports (4 PCI cards) | Bond interface | Typical usage | Comments |
|---|---|---|---|
| ens1f1, ens2f1 | bond-ha | High Availability (HA) | Always required. If 1 Gb/s, must be dedicated to HA and cannot be shared with other functions. If 10 Gb/s, it may be shared with other uses. |
| ens1f0, ens2f0 | bond0 | OAM network | Typical for OAM. Can use eno1/eno2 if copper ports are preferred, avoiding additional SFPs. |
| ens1f1, ens2f1 | bond1 | Signaling/service/CallP network | These southbound interfaces carry the signaling. |

Note

Unused interfaces may be left un-cabled.

When interfaces are configured as 10GbE, they may carry multiple traffic types, which can reduce the overall number of bonds needed or segregate flows differently to satisfy customer preference. The table below gives examples of these options.

Warning

The HA subnets must not share a bond interface with any other traffic when deployed on a 1GbE interface.

Bond Interface Common Options


| Option | bond-ha 10Gb | bond-ha 1Gb | bond0 10Gb | bond0 1Gb | bond1 10Gb | bond1 1Gb | Comment |
|---|---|---|---|---|---|---|---|
| Full separation | (tick) HA | (tick) HA | (tick) OAM | (tick) OAM | (tick) CallP | (tick) CallP | Original configuration. Required when 1GbE interfaces are used. |
| Network separation | (tick) HA, CallP | (error) | (tick) OAM | (tick) OAM | | | |
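As an illustration of the "Network separation" option, a 10GbE bond-ha carries both the HA and CallP VLANs; the VLAN IDs (200 and 300) are hypothetical examples:

    # HA VLAN on bond-ha (switched only, no IP address on the L3 switch)
    ip link add link bond-ha name bond-ha.200 type vlan id 200
    ip link set bond-ha.200 master br-ha-vsp2k
    # CallP VLAN sharing the same 10GbE bond
    ip link add link bond-ha name bond-ha.300 type vlan id 300
    ip link set bond-ha.300 master br-datacallp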

VLAN Usage by Virtual Machine

The Virtual Machine Interface Mapping table shows the association between the host bridges and the interfaces on each VM type.  The number of interfaces on each VM is modified to match the connectivity requirements (except G9EM). 

Virtual Machine Interface Mapping

KEY: (R)equired   (O)ptional   (G)eo only   (N)on-geo

| VM | Port | br-ha-vsp2k | br-dataoam | br-datacallp, br-callp2 .. br-callp10 | br-callp-site | Comment |
|---|---|---|---|---|---|---|
| Site local when GEO? → | | N | N | N | Y | |
| DSC SWe / vSP2k | eth0-ha | R | | | | |
| | eth1-mgt0 | | R | | | Uses floating IP |
| | eth2-mgt1 | | R | | | Uses floating IP |
| | eth3-pkt0 | | | R (N) | R (G) | pkt0 and pkt1 must not share a bridge. pkt1 is used for physical multihome separation. imf0 is used for packet capture. |
| | eth4-pkt1 | | | O | | |
| | eth5-imf0 | | | O | | |
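For illustration, assuming a KVM/libvirt host (the hypervisor type is not named in this document), a VM port can be attached to one of these bridges with virsh; the domain name dsc-vm1 is hypothetical:

    # attach a virtio NIC on the OAM bridge; --config persists the change
    virsh attach-interface dsc-vm1 --type bridge --source br-dataoam --model virtio --config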

RMS Internal Networking

The DSC SWe is logically a number of virtual machines that communicate using a number of VLANs. Every server in a DSC SWe system has all of the VLANs configured, regardless of the needs of the specific VMs hosted on that server. The diagrams below illustrate the connectivity of each VM type for a number of deployment options. Except where indicated, the mate servers are not shown.

The goal of the internal network structure of an RMS is to provide a separate Linux bridge interface for each office VLAN. Virtual machines are generally configured with interfaces that map only to the VLANs needed by that VM. The exceptions to this principle are:


Greenfield Non-Geo

This is the simplest possible configuration. 

vC20 Greenfield Non-Geo Plumbing


Greenfield Geo

Geo adds site-local networks for OAM and CallP, which provide the virtual DSC SWe backup path.

vC20 Greenfield GEO Plumbing


Routing

External Routing

All routing is performed by the customer edge Layer 3 switch. The virtual DSC SWe host servers perform only Layer 2 switching, which means any flows between subnets must be routed by the customer routers. Those flows can therefore be monitored and subjected to policies and firewall inspection if desired.

Routing for the site local VLANs remains the responsibility of the customer router. 

Application VM Routing

Traditionally, when a host or VM has multiple interfaces, one is designated the target of the default route. Other interfaces can pass traffic to their local subnets, but require individual static routes for any non-local (i.e., routed) traffic so that the correct interface is used. The default route is normally assigned to the interface that reaches the largest number of destinations, or whose exact destinations are unknown. This is known as destination routing: packets are forwarded based solely on the destination address in the packet. The DSC SWe allows static routing within the application VM, and anywhere from one to many static routes may need to be defined.
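A minimal sketch of this pattern inside an application VM, using documentation addresses as hypothetical examples:

    # default route out the signaling interface (largest/unknown destination set)
    ip route add default via 198.51.100.1 dev eth3
    # specific static route so OAM traffic leaves via the management interface
    ip route add 203.0.113.0/24 via 192.0.2.1 dev eth1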