

SBC Core Feature Availability by Platform

The following table lists selected key SBC Core features and their availability by product in this release. Not every feature in this table is supported on all platforms/environments.

Note

This is not a comprehensive list of features. For a more complete and detailed list of features, refer to the Feature Guides.


(tick) = Supported
(error) = Not supported

Feature Availability

Platform columns:

  • 5xx0 — SBC 51x0 / 52x0 / 5400 (hardware-based)
  • 7000 — SBC 7000 (hardware-based)
  • SWe — SBC SWe (Virtual), VMware/KVM (software-based)
  • SWe Cloud — SBC SWe (Cloud), integrated SBC in OpenStack (software-based)
  • D-SBC — D-SBC, OpenStack/VMware (1) (software-based)
  • AWS / GCP / Azure — SBC SWe (Cloud) on the respective public cloud (software-based)

IP-Related Features

| Feature | 5xx0 | 7000 | SWe | SWe Cloud | D-SBC | AWS | GCP | Azure | Comments |
| Link Detection: Physical link detection | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | Refer to SBC Core Redundancy for details. On the SBC SWe, physical link detection is supported only on Direct I/O Packet Interfaces. |
| Link Detection: ARP Probing | (error) | (tick) | (tick) | (tick) | (tick) | (error) | (error) | (error) | Refer to SBC Core Redundancy for details. |
| Standby packet port support | (error) | (tick) | (error) | (error) | See note (2) | (error) | (error) | (error) | Refer to SBC Core Redundancy for details. |
| Geographical Redundancy (GRHA) | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | Refer to SBC Core Redundancy for details. |
| N:1 Redundancy | (error) | (error) | (error) | (error) | (tick) (S-SBC: 4+1; M-SBC: 4+1) | (error) | (error) | (error) | Refer to Distributed SBC N:1 Redundancy Architecture for details. |
| EVRC-B Codecs | (tick) | (tick) | (tick) | (tick) | Pass-through | (tick) | (tick) | (tick) | |
| H.323 Support | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | Not supported in cloud environments. |
| SR-IOV | N/A | N/A | (tick) | (tick) | (tick) | (tick) | (error) | (error) | |
| Direct I/O Support | N/A | N/A | (tick) | N/A | N/A | N/A | N/A | N/A | |
| DHCP Support | (error) | (error) | (error) | (tick) | (tick) | (tick) | (tick) | (tick) | |
| NIC Teaming | (error) | (error) | (tick) (not in KVM) | (error) | (error) | (error) | (error) | (error) | |
| SIP over SCTP | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | |
| SRTP | (tick) | (tick) | (tick) | (tick) | (error) | (tick) | (tick) | (tick) | |
| Alternate/Multiple IP Support | (tick) | (tick) | (tick) | (tick) | (tick) | (tick) | (error) | (error) | |
| IPv6 Support | (tick) | (tick) | (tick) | (tick) | (tick) | (error) (3) | (error) | (error) | |
| SLB | (error) | (error) | (error) | (tick) | (tick) | (error) | (error) | (error) | |

Media-Related Features

| Feature | 5xx0 | 7000 | SWe | SWe Cloud | D-SBC | AWS | GCP | Azure | Comments |
| Fax transcoded calls | (tick) | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | Refer to Fax Over IP. |
|   • G.711–T.38 (V0) | (tick) | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | |
|   • G.711–T.38 (V3) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | (error) | |
|   • T.38 (V0)–T.38 (V0) | (tick) | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | |
| Fax/Modem Fallback | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | (error) | |
| UC (Video, BFCP, Content Share) | (tick) | (tick) | (tick) | (tick) | (error) | (error) | (error) | (error) | |
| GPU Transcoding | (error) | (error) | (error) for VMware; (tick) for KVM | (tick) | (tick) | (tick) | (tick) | (error) | |
| Opus Transcoding | Not available for 5100/5200 | (tick) | (tick) | (tick) | | | | | |
(1) Full support for VMware is available starting with SBC release 8.1.

(2) Supported only with a four packet port configuration; not supported with any other packet port configuration.

(3) The SBC SWe (Cloud) on AWS and GCP does not support IPv6. The underlying AWS infrastructure supports IPv6; the GCP infrastructure does not.

Feature Limitations of SBC in GCP

The SBC in GCP includes the following limitations when compared to the feature set for the SBC running in AWS and Azure:

  • Multiple IPs for traffic segregation are not possible because of limitations on the number of EIPs you can assign per NIC interface.

  • Subnet gateways in GCP are virtual. You cannot use them for Link Detection on the SBC because they do not respond to probes, which results in continuous switchovers. (See the reachability sketch after this list.)

    Note

    For Link Detection configurations of HFE instances in GCP, provide any IP address that is reachable between the HFE and the SBC. Ribbon recommends providing the nic2 IP addresses for HFE 2.1, and the nic3 and nic4 IP addresses for HFE 2.0.

  • To use a bastion server to access the SBC management interface in GCP through private IP addresses, you must create the bastion server in a subnet separate from the one used by the management interface.

  • Logical Management IP addresses are not supported by the SBC in GCP.
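
The switchover behavior described above comes down to whether the Link Detection target ever answers a probe. The following is a minimal, illustrative reachability check in Python; it is not the SBC's Link Detection implementation, and the addresses shown (a virtual subnet gateway and an HFE nic2 address) are hypothetical examples.

```python
import subprocess

def is_reachable(ip: str, attempts: int = 3, timeout_s: int = 1) -> bool:
    """Return True if the target answers at least one ICMP echo request.

    Illustration only: a target that never answers (such as a virtual GCP
    subnet gateway) always appears to be down, so using it as a Link
    Detection target causes continuous switchovers. A reachable address
    between the HFE and the SBC does not have this problem.
    """
    for _ in range(attempts):
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), ip],
            capture_output=True,
        )
        if result.returncode == 0:
            return True
    return False

# Hypothetical addresses for illustration.
print(is_reachable("10.0.1.1"))    # virtual subnet gateway: no response
print(is_reachable("10.0.1.10"))   # HFE nic2 address: responds
```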

Feature and Maintenance Limitations of SBC in Azure

The SBC in Azure includes the following limitations when compared to the feature set for the SBC running in AWS and GCP:

SBC Feature Limitations

The SBC in Azure includes the following known feature limitations:

  • Ribbon supports the standalone and HA with HFE deployment models in Azure; Ribbon does not support the Active-Standby HA deployment model.
  • Although Azure supports multiple IPs per network interface, the SBC does not currently support multiple IPs per network interface.
  • Because pings to the gateway fail, Link Detection Group (LDG) configurations that target a specific out-of-subnet IP address also fail.
  • Logical Management IP addresses are not supported by the SBC.
  • Calls that traverse the public network (public-to-public, private-to-public, or public-to-private) are not guaranteed to preserve packet ordering. (See the sequence-number sketch after this list.)
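
Packet reordering on the public network is visible in the RTP sequence numbers of a media stream. The sketch below is a simple illustration of how reordering can be detected from captured sequence numbers; it is not part of the SBC, and it ignores 16-bit sequence-number wrap-around for brevity.

```python
from typing import Iterable

def count_out_of_order(rtp_sequence_numbers: Iterable[int]) -> int:
    """Count packets that arrive with a sequence number lower than the
    highest sequence number already seen (wrap-around ignored for brevity)."""
    out_of_order = 0
    highest_seen = -1
    for seq in rtp_sequence_numbers:
        if seq < highest_seen:
            out_of_order += 1
        else:
            highest_seen = seq
    return out_of_order

# Example: packet 103 arrives after 104, as can happen on the public network.
print(count_out_of_order([100, 101, 102, 104, 103, 105]))  # prints 1
```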

Accelerated Networking in Azure - Platform Limitations

The following limitations apply to Accelerated Networking NICs on the Azure platform:

  • The instance may get Mellanox ConnectX-3 or ConnectX-4 NICs as secondary interfaces; you cannot select the type of Mellanox NIC.
  • After every de-allocation and start of the instance, it may get a different type of Mellanox NIC (ConnectX-3 or ConnectX-4), and even if the type is the same, the PCI IDs may differ. This happens because, after the de-allocation and start, the VM can be placed on a different host and be assigned different resources.
  • When the instance is de-allocated and started again, it sometimes does not get the Mellanox NICs and starts with only netVSC interfaces. In such situations, Azure does not display any error or notification. To get the Mellanox NICs, shut down the instance, de-allocate it, and start it again. (See the presence-check sketch after this list.)
  • Sometimes the secondary interfaces (Mellanox VF) are plugged into the instance late; hot-plugging the PCI NICs after initialization is not supported.
  • Azure may plug and unplug the secondary PCI interfaces (Mellanox VF) to/from the running instance without any notification.
  • Azure does not guarantee deterministic performance; the VMs do not get dedicated CPU cores.
  • Sometimes Azure does not plug the Accelerated NICs and plugs only the netVSC interfaces. In such situations, the SBC continues to function with the limited throughput supported by the netVSC interfaces.
  • The SBC does not support hot plugging/unplugging of the secondary interfaces to/from the running instance.
  • The SBC does not handle scheduled events for the VM. For more information on handling scheduled events, refer to Microsoft documentation. (See the scheduled-events sketch after this list.)
  • Because the VM can get any of the supported CPUs, the Active and Standby SBCs can have different sets of CPUs and secondary interfaces for Accelerated Networking.
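
Because Azure gives no error or notification when the Mellanox VF NICs are missing, it can be useful to verify their presence from inside the guest after a de-allocate/start cycle. The following is a minimal sketch that checks for Mellanox PCI devices by vendor ID; the sysfs path and vendor ID are standard Linux conventions, and the check is illustrative rather than part of the SBC.

```python
from pathlib import Path

# PCI vendor ID of Mellanox Technologies.
MELLANOX_VENDOR_ID = "0x15b3"

def mellanox_nic_present() -> bool:
    """Return True if at least one Mellanox PCI device is visible to the guest.

    If this returns False after a de-allocate/start, the instance most likely
    came up with netVSC interfaces only; per the guidance above, shut down,
    de-allocate, and start the instance again to get the Mellanox NICs back.
    """
    for vendor_file in Path("/sys/bus/pci/devices").glob("*/vendor"):
        if vendor_file.read_text().strip() == MELLANOX_VENDOR_ID:
            return True
    return False

print(mellanox_nic_present())
```

Scheduled events (such as the maintenance freezes mentioned in the next section) can be observed from inside the VM through the Azure Instance Metadata Service. The sketch below queries the documented scheduledevents endpoint; the api-version shown is an assumption and should be checked against current Microsoft documentation.

```python
import json
import urllib.request

# Azure Instance Metadata Service (IMDS) scheduled-events endpoint; reachable
# only from inside the VM. Verify the api-version against Microsoft docs.
IMDS_URL = ("http://169.254.169.254/metadata/scheduledevents"
            "?api-version=2020-07-01")

def get_scheduled_events() -> dict:
    request = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.load(response)

for event in get_scheduled_events().get("Events", []):
    # Typical fields: EventId, EventType (Freeze, Reboot, Redeploy, ...),
    # EventStatus, NotBefore, and the affected Resources.
    print(event.get("EventType"), event.get("NotBefore"), event.get("Resources"))
```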

Maintenance of Virtual Machines in Azure

Warning

While maintenance is being carried out, SBC performance may be affected or its service may be interrupted, because maintenance freezes the VM for a few seconds.

Note

The maintenance tips given below apply to Azure VMs, not to the SBC. Refer to the relevant Microsoft documentation for details.

To maintain and handle updates for VMs in Azure, follow the instructions for your preferred method of interacting with Azure:
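
As one illustration only, the sketch below uses the Azure SDK for Python to de-allocate and then start a VM (the cycle referenced in the Accelerated Networking notes above). The package, class, and method names (azure-identity, azure-mgmt-compute, ComputeManagementClient, begin_deallocate, begin_start) reflect current SDK versions and should be verified against Microsoft documentation; the subscription ID, resource group, and VM name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder values -- substitute your own.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-sbc-rg"
VM_NAME = "my-sbc-vm"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# De-allocate the VM (releases the host resources), then start it again.
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()
client.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()
```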
