Introduction

The SBC Core supports the use of the Master Trunkgroup Resource Manager (MTRM) to manage call and bandwidth resources on clients spread across multiple SBCs, thus enabling service providers to support enterprise customers using enterprise-wide CAC management down to the location and trunk group levels.

The Master Trunkgroup Resource Manager (MTRM) follows a client/server model consisting of a Master Trunk Group server (MTRG server) and a Master Trunk Group client (MTRG client). The server and client are associated with a common name. MTRG server and MTRG client instances can coexist in the same SBC, or can reside in different SBCs.

The MTRG server is configured with a CAC pool shared among its registered MTRG clients. This CAC pool consists of call and bandwidth resources.

The MTRG client registers with its MTRG server and requests a configurable amount of call and/or bandwidth resources. The CAC resources configured on the server are shared and dynamically redistributed among the registered clients. Each client can optionally maintain its own set of local resources and/or obtain aggregate resources from the server. When a client's idle resources fall below a configurable margin (calls/bandwidth per request), the client requests more resources from the server. Depending on availability, the server may or may not grant the request.

The functional diagram below shows a multi-level hierarchy with intermediate MTRG servers at the location level. MtrgA1, MtrgA2, and MtrgA3 act as clients to the master server MtrgA.

IPTGs 1 and 2 act as clients to MtrgA1, IPTGs 3 and 4 to MtrgA2, and IPTGs 5 and 6 to MtrgA3. Initially, the IPTGs consume resources from their immediate MTRG servers MtrgA1, MtrgA2, and MtrgA3. When one of these servers finds that its CAC resources have fallen below the margin, it requests more resources from its parent server MtrgA.

MTRM Functional Diagram
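The margin-based request and escalation behavior in this hierarchy can be sketched in a few lines of Python. This is an illustrative model only; the class, attribute, and method names below are hypothetical and are not SBC internals.

```python
class MtrgNode:
    """Simplified, illustrative model of MTRG resource pooling.

    All names here are hypothetical -- a sketch of the margin-based
    request behavior described above, not actual SBC internals.
    """

    def __init__(self, name, max_calls=0, calls_per_request=0, parent=None):
        self.name = name
        self.available = max_calls                  # idle calls in this pool
        self.calls_per_request = calls_per_request  # margin / request size
        self.parent = parent                        # where to escalate

    def grant(self, wanted):
        """Grant up to `wanted` calls, escalating to the parent server
        when the idle pool falls below the configured margin."""
        if self.available < self.calls_per_request and self.parent:
            # Idle resources fell below the margin: request another block.
            self.available += self.parent.grant(self.calls_per_request)
        granted = min(wanted, self.available)  # server may grant less than asked
        self.available -= granted
        return granted


# Master server MtrgA and one intermediate location-level server MtrgA1.
mtrg_a = MtrgNode("MtrgA", max_calls=100)
mtrg_a1 = MtrgNode("MtrgA1", calls_per_request=20, parent=mtrg_a)

# An IPTG asks its immediate server for 5 calls; MtrgA1's pool is empty,
# so it first pulls a block of 20 calls from its parent MtrgA.
print(mtrg_a1.grant(5))   # 5  (granted to the trunk group)
print(mtrg_a.available)   # 80 (calls left in the master pool)
```

The same logic applies at every level: a trunk group draws from its immediate server, and intermediate servers replenish themselves from their parent only when their own idle pool drops below the margin.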

 

Configuring MTRM

If the connection between the MTRM server and client fails, the IPTG falls back to its local resources.

For the MTRG feature to work as expected, the egress and ingress callLimit of the IPTG must be set to the default value "unlimited".

For MTRM to work as designed, do not configure the parent CAC pool for the Trunk Group.

 

Configuring the SBC platform for MTRM involves the following steps, performed from the EMA or CLI:

  1. Configure MTRM Server Connection Port 

    If you configure the MTRM connection port on the management interface, first configure the Management Logical IP, then assign that IP as the ipAddressV4 or ipAddressV6 of the MTRM connection port.
    % set addressContext default zone defaultSigZone mtrmConnPort <index> portRole server ipInterfaceGroupName/mgmtInterfaceGroupName <Interface_Group_Name> ipAddressV4 <IP_Address> ipAddressV6 <IP_Address> portNumber <MTRM_Port> healthCheckIgnore <enable/disable> healthCheckInterval <Interval_Value> healthCheckTimeout <Timeout_Value> mode inService state enabled
  2. Configure MTRM Client Connection Port

    % set addressContext default zone defaultSigZone mtrmConnPort <index> portRole client primaryServerIPAddr <IP_Address> primaryServerPortNumber <MTRM_Port> secondaryIpAddress <IP_Address> secondaryPortNumber <MTRM_Secondary_Server_Port_Number> ipInterfaceGroupName <IPIG name> mgmtInterfaceGroupName <MIG name> ipAddressV4 <V4_IP_Address> ipAddressV6 <V6_IP_Address> portNumber <MTRM_Port> healthCheckIgnore <enable/disable> healthCheckInterval <Interval_Value> healthCheckTimeout <Timeout_Value> mode <inService/outOfService> state <enabled/disabled>
    > show status addressContext default zone defaultSigZone mtrmConnPortPeerStatus
    mtrmConnPortPeerStatus 10 2 {
        peerName             HEBBE;
        peerConnectionStatus available;
        shelfId              2;
        peerRole             client;
        ipAddress            10.11.12.13;
        connectionType       none;
    }
    mtrmConnPortPeerStatus 110 1 {
        peerName             HEBBE;
        peerConnectionStatus available;
        shelfId              2;
        peerRole             server;
        ipAddress            10.11.12.13;
        connectionType       primary;
    }
  3. Configure MTRG Server

    If you are not configuring a parent server, you do not need to define the parentMtrg, parentRequestMaxCalls, and parentRequestMaxBw parameters.
    % set global cac mtrgServer <MTRG_Server_Name> mtrmConnPortIndex <Index_Value> maxCalls <MTRG_Server_Maximum_Calls> callsPerRequest <Value> maxBandwidth <Maximum_Bandwidth_Value> bandwidthPerRequest <Value> parentMtrg <Name> parentRequestMaxBw <Maximum_Bandwidth> parentRequestMaxCalls <Maximum_Number_Of_Calls> mode <inService/outOfService> state <enabled/disabled>
    > show status global cac mtrgServerStatus
    mtrgServerStatus MTRGA {
        mtrgServerIndex 2;
        maxGlbCallAvail 980;
        maxGlbBwAvail   39200;
        callsAllocated  1000;
        bwAllocated     64000;
        callsUsage      20;
        bwUsage         24800;
        parentMtrgIndex 1;
        parentMtrgName  MTRGC;
        parentMtrgState mtrmReady;
    }
    [ok][2013-07-30 14:56:43]
  4. Configure MTRG Client

    % set global cac mtrgClient <MTRG_Client> mtrmConnPortIndex <Index> mode <inService/outOfService> state <enabled/disabled>
  5. Configure SIP Trunk Group

    % set addressContext default zone <Zone_Name> sipTrunkGroup <IPTG_Name> masterTgName <Name> tgMtrgCallsPerReq <Value> tgMtrgBwPerReq <Maximum_Bandwidth_Value> tgMtrgReqMaxCalls <Value> tgMtrgReqMaxBw <Maximum_Bandwidth_value> tgMtrgResAllocation <LOCALIGNORED/LOCALPREFERRED>
    > show configuration details addressContext default zone Z1 sipTrunkGroup TG1
     
    masterTgName             MTRGA; 
    tgMtrgCallsPerReq        20;  
    tgMtrgBwPerReq           12400;  
    tgMtrgResAllocation      localignored; 
    tgMtrgReqMaxCalls        50; 
    tgMtrgReqMaxBw           unlimited; 
  6. View the status of MTRG server registered trunk groups:

    > show status global cac mtrgServerRegTgStatus                                 
    mtrgServerRegTgStatus MTRGC TG1 DEVHA2 SIPTG {
        mtrgServerIndex 1;
        tgIndex         0;
        callsAllocated  10;
        bwAllocated     24800;
        tgState         active;
    }
  7. View the status of MTRG client registered trunk groups:

    > show status global cac mtrgClientRegTgStatus 
    mtrgClientRegTgStatus MTRGC TG1
    DEVHA2 SIPTG {
        mtrgClientIndex 1;
        tgIndex         0;
        callsAllocated  10;
        bwAllocated     24800;
        tgState         active;
    }

Sample MTRM Configuration

Given below is a sample MTRM configuration using two SBCs.

Sample MTRM Configuration

The MTRG (Master Trunk Group) server MTRG1 is on SBC1 and is configured with 100 calls and 64K of bandwidth. It communicates with the MTRG client through MTRM server connection port CP1.

MTRG client MTRG1 on SBC2 subscribes to the server on SBC1 for calls and bandwidth, and communicates with it using client connection port CP2. Note that the MTRG server and client names must be the same for them to communicate with each other.

The SIP trunk group registers with the MTRG client. The SIP trunk group and the MTRG client must reside on the same SBC; the MTRG server can be on the same or a different SBC.

CLI for the above example is provided below:

  1. Set up the server connection port on SBC1.

    % set addressContext default zone defaultSigZone mtrmConnPort 1 portRole server ipInterfaceGroupName LIF1 ipAddressV4 10.54.20.32 portNumber 4360 mode inService state enabled



  2. Set up client connection port on SBC2.

    % set addressContext default zone defaultSigZone mtrmConnPort 101 portRole client primaryServerIPAddr 10.54.20.32 ipAddressV4 10.54.20.29 ipInterfaceGroupName LIF1 mode inService state enabled



  3. Check the status of connection ports on both SBC1 and SBC2.

    1. Check the status from SBC1. The peerConnectionStatus should be "available".

      > show status addressContext default zone defaultSigZone mtrmConnPortPeerStatus
      mtrmConnPortPeerStatus 1 2 {
          peerName             SBC2;
          peerConnectionStatus available;
          shelfId              2;
          peerRole             client;
          ipAddress            10.54.20.29;
          connectionType       none;
      }
    2. Check the status from SBC2. The peerConnectionStatus should be "available".

      > show status addressContext default zone defaultSigZone mtrmConnPortPeerStatus
      mtrmConnPortPeerStatus 101 1 {
          peerName             SBC1;
          peerConnectionStatus available;
          shelfId              2;
          peerRole             server;
          ipAddress            10.54.20.32;
          connectionType       primary;
      } 



  4. Set up the MTRG server on SBC1.

    % set global cac mtrgServer MTRG1 maxCalls 100 maxBandwidth 64000 mtrmConnPortIndex 1 mode inService state enabled 



  5. Set up the MTRG client on SBC2.

    % set global cac mtrgClient MTRG1 mtrmConnPortIndex 101 mode inService state enabled 



  6. Register SIP trunk group TEST with MTRG client MTRG1.

    % set addressContext default zone defaultSigZone sipTrunkGroup TEST masterTgName MTRG1 tgMtrgCallsPerReq 10 tgMtrgBwPerReq 5000 tgMtrgResAllocation localignored



    This completes the configuration. Initially, the trunk group is allocated twice the configured calls-per-request and bandwidth-per-request resources. Viewing the trunk group status shows how many calls and how much bandwidth are available.



  7. Check trunk group and MTRG status.

    > show status addressContext default zone defaultSigZone trunkGroupStatus
    trunkGroupStatus TEST {
        state                      inService;
        totalCallsAvailable        20;
        totalCallsInboundReserved  0;
        inboundCallsUsage          0;
        outboundCallsUsage         0;
        totalCallsConfigured       -1;
        priorityCallUsage          0;
        totalOutboundCallsReserved 0;
        bwCurrentLimit             10000;
        bwAvailable                10000;
        bwInboundUsage             0;
        bwOutboundUsage            0;
        packetOutDetectState       normal;
    } 



  8. To verify MTRG client and server status, enter the commands below. The resources allocated to the trunk group are equal to the resource usage in the MTRG server.

    > show status global cac mtrgServerStatus
    mtrgServerStatus MTRG1 {
        mtrgServerIndex 1;
        maxGlbCallAvail 80;
        maxGlbBwAvail   54000;
        callsAllocated  100;
        bwAllocated     64000;
        callsUsage      20;
        bwUsage         10000;
        parentMtrgIndex 0;
        parentMtrgName  "";
        parentMtrgState null;
    }
    
    > show status global cac mtrgServerRegTgStatus
    mtrgServerRegTgStatus MTRG1 TEST DEVHA1 SIPTG {
        mtrgServerIndex 1;
        tgIndex         3;
        callsAllocated  20;
        bwAllocated     10000;
        tgState         active;
    }
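The status values in the sample above follow directly from the configured parameters. As a quick sanity check, the arithmetic can be worked through in a few lines of illustrative Python (the doubling of the per-request values on initial registration is the behavior noted in step 6):

```python
# Sample configuration values from the example above.
calls_per_req = 10      # tgMtrgCallsPerReq
bw_per_req = 5000       # tgMtrgBwPerReq
server_max_calls = 100  # mtrgServer maxCalls
server_max_bw = 64000   # mtrgServer maxBandwidth

# On registration, the trunk group is initially granted twice the
# per-request values (see the note after step 6).
initial_calls = 2 * calls_per_req  # matches totalCallsAvailable / callsUsage
initial_bw = 2 * bw_per_req        # matches bwCurrentLimit / bwUsage

# What remains in the server's global pool after the grant.
remaining_calls = server_max_calls - initial_calls  # matches maxGlbCallAvail
remaining_bw = server_max_bw - initial_bw           # matches maxGlbBwAvail

print(initial_calls, initial_bw)      # 20 10000
print(remaining_calls, remaining_bw)  # 80 54000
```

These values line up with the `trunkGroupStatus` and `mtrgServerStatus` outputs shown in steps 7 and 8.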
