What is Cloud Computing?

Cloud computing refers to the practice of providing a software service using a remote, shared set of commercial off-the-shelf (COTS) servers, storage facilities, and networking infrastructure instead of local dedicated resources. Cloud computing reduces the burden of purchasing and managing resources, for example when external services such as e-mail servers and Customer Relationship Management (CRM) systems can be used instead of locally owned ones. Cloud computing therefore establishes a relationship between cloud service providers and users, and consolidates different services onto a single infrastructure.

Virtualization: Unlocking the Full Potential of Cloud Computing

Virtualization complements cloud computing with the ability to emulate many computers from a single machine. Virtual networks in a cloud environment provide the flexibility to create, remove, rearrange, and reconfigure services in the cloud without adding or removing hardware, and to emulate network equipment such as routers, switches, and firewalls.

Cloud Management Software

The cloud management software must meet different requirements for cloud service providers and cloud users. Cloud users require control, security, availability, and access to the services, whereas cloud service providers need a rich set of tools to manage the cloud infrastructure.

Services are also scaled up or scaled down based on requirements, and the required capacity and services are delivered with little or no manual intervention.

Cloud services help service providers as follows:

  • Fully utilize their revenue-generating equipment by selling services to more than one customer from a single installation.
  • Provide service monitoring tools to track usage for performance purposes as well as to grow or downsize the cloud equipment infrastructure based on demand.
  • Provide software to efficiently allocate physical resources among clients, virtual machines, and services.

Essential components of cloud management systems are shown in the following figure.

Figure: Cloud Architecture

The components are briefly described as follows:

  • Horizon Dashboard or UI - Both the provider and users need access to various parts of the cloud architecture to manage services delivered by the cloud. This access is typically web-centric, through a web UI and/or a programmatic interface such as a REST API.
  • Identity - Controls access for both service providers and users.
  • Nova Compute - Manages the underlying hardware CPU resources and the services (virtual or otherwise) that consume them.
  • Neutron Networking - Manages the physical network equipment and the virtual networks created on top of it, and securely controls network access in and out of the cloud for both resource management and service utilization.
  • Block Storage - All services use block storage devices. Software that manages physical disk or RAM resources and partitions them between virtual services is essential.
  • Glance Image Storage - Generally, services are provided in the form of a disk image, which must be instantiated (loaded onto hardware or turned into a running virtual machine).

Example: Consider a cloud service user that is a retailer whose business relies on the availability of a website. The user uploads an image of a web server with the website pre-installed on an operating system. When more visitors reach the website and the demand for web sessions saturates the server capacity, the cloud management software brings up more instances of the website to fulfill the demand. The image store gives the cloud quick access to these images.

  • Object Storage - Few services operate without access to some kind of off-line data store or configuration repository. Most cloud infrastructures provide some form of integration with indexed storage.
  • Heat Orchestration - Heat orchestration is a collection of activities that automate the creation of services, especially services that require the coordinated deployment and interconnection of more than one image and multiple resources. A minimal template sketch follows this list.
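
The following minimal Heat Orchestration Template (HOT) sketch shows how these components cooperate when a single service is launched. The resource, parameter, and network names are placeholders invented for illustration; they are not taken from any Ribbon-supplied template.

  heat_template_version: 2015-10-15

  description: Illustrative sketch - launch one service from a Glance image

  parameters:
    image_name:
      type: string
      description: Glance image holding the pre-installed service
    flavor_name:
      type: string
      description: Nova flavor defining the CPU and RAM for the instance

  resources:
    service_port:                        # Neutron connects the instance to a network
      type: OS::Neutron::Port
      properties:
        network: private-net             # placeholder network name

    service_volume:                      # Cinder provides persistent block storage
      type: OS::Cinder::Volume
      properties:
        size: 10                         # size in GB

    service_instance:                    # Nova turns the Glance image into a running VM
      type: OS::Nova::Server
      properties:
        image: { get_param: image_name }
        flavor: { get_param: flavor_name }
        networks:
          - port: { get_resource: service_port }

    volume_attachment:                   # attach the Cinder volume to the instance
      type: OS::Cinder::VolumeAttachment
      properties:
        volume_id: { get_resource: service_volume }
        instance_uuid: { get_resource: service_instance }

The Identity and dashboard components do not appear in the template itself; they come into play when a user authenticates and submits the template to the orchestration engine.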

Ribbon, Dynamic Networks, and Clouds

The flexibility of the Ribbon cloud architecture maximizes the use of physical network transport equipment and allows service providers, carriers, and enterprises to do the following:

  • Construct entire virtual networks and network paths in response to real-time data traffic flow patterns, leveraging least-cost routes or low-volume paths.
  • Dynamically create and delete virtual networks, enabling the ephemeral service connections required by applications (a sketch follows this list).
  • Scale up or scale down endpoints or VLANs to handle high-traffic-hour conditions.
  • Rewire networks dynamically in response to changing business requirements without affecting hardware or requiring downtime.
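
As a sketch of the "create and delete virtual networks" point above, the following Heat fragment brings up a virtual network and subnet purely in software; deleting the stack removes them again without touching hardware. The names and address range are placeholders.

  resources:
    ephemeral_net:                       # virtual network created on demand
      type: OS::Neutron::Net
      properties:
        name: app-transient-net          # placeholder network name

    ephemeral_subnet:                    # address range carved out for the application
      type: OS::Neutron::Subnet
      properties:
        network: { get_resource: ephemeral_net }
        cidr: 192.168.50.0/24            # placeholder address range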

To achieve these goals, the Ribbon product portfolio has adopted cloud-centric behaviors such as the following:

  • Elasticity: Elasticity provides the ability to expand or contract the capacity of a service in response to demand. This implies some degree of automation to detect the demand conditions that trigger a change in resources and to add or remove those resources from the pool available to users.
  • Heat orchestration readiness: Heat orchestration describes the set of activities that enable automated deployment of services, scripted relationships between services, and automated dynamic service sizing. For more information on cloud and Heat orchestration, refer to the Cloud Orchestration FAQs and the Red Hat Cloud Foundations: Cloud 101 whitepaper.
  • Cloud Awareness and Configuration: Heat orchestration defines the network dynamically. For a network element such as a Session Border Controller to be useful in an orchestrated environment, it must include software that coordinates with the Heat orchestrator to learn:
    • Its place in the network

    • Its relationships with other components in the network

    • How to fetch the required configuration information

    Standards for this kind of communication are evolving, but methods of information exchange such as the Userdata and Metadata passed during cloud initialization are implemented in the DSC SWe (a sketch follows this list).
  • Hardware independence: To successfully virtualize network components such as the DSC, the software must run on commercial off-the-shelf (COTS) platforms that do not require specialized hardware or process pipelining. Primarily, the DSC depends on special hardware to accelerate the CPU-intensive operations of coder/decoder (CODEC) processing and transcoding. Removing such hardware requirements is part of an ongoing effort at Ribbon.
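
The fragment below sketches how Userdata and Metadata can be passed to an instance at cloud initialization. The metadata keys, image, and flavor shown are invented for illustration and do not reflect the DSC SWe's actual provisioning schema.

  resources:
    dsc_instance:
      type: OS::Nova::Server
      properties:
        image: dsc-swe-image             # placeholder Glance image name
        flavor: m1.large                 # placeholder flavor
        metadata:                        # key/value pairs the guest can read back
          role: signaling-node           # hypothetical key describing the VM's role
          peer_address: 10.0.0.12        # hypothetical key naming a related component
        user_data_format: RAW
        user_data: |                     # script run once at first boot (cloud-init)
          #!/bin/bash
          # Hypothetical first-boot step: fetch the instance metadata from the
          # OpenStack metadata service so the application can parse it.
          curl -s http://169.254.169.254/openstack/latest/meta_data.json \
            -o /var/tmp/meta_data.json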

DSC SWe OpenStack Deployments

The DSC SWe can be deployed through OpenStack Heat orchestration or by way of a Virtual Network Function Manager (VNFM).

For information about DSC SWe deployment through OpenStack Heat orchestration, see DSC SWe on OpenStack.

For information about DSC SWe deployment through VNFM, see Samsung VNF Management Support.

For information about DSC SWe on OpenStack deployment through VNFM, refer to DSC SWe on OpenStack with Ribbon VNFM.

DSC SWe on OpenStack  

OpenStack is a collection of open-source cloud management and infrastructure software, which is downloaded and installed on COTS hardware and operating systems. Adopted as a focus for cloud implementations by many telecom and operating system vendors, it has become an important target for DSC SWe deployment.

Ribbon provides the DSC SWe as a QCOW2 image and Heat orchestration template YAML files. The DSC SWe is instantiated through OpenStack Heat orchestration using the provided YAML file or the YAML file generation script.

Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. Ribbon provides sample Heat templates for instantiating the DSC SWe in either standalone or High Availability (HA) configurations.
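
As an illustration of templates being treated like code, a site typically supplies its own values through a Heat environment file and then creates a stack from the Ribbon-supplied template. The parameter and file names below are placeholders; the actual names are defined by the templates shipped with each DSC SWe release.

  # site_env.yaml - hypothetical environment file with site-specific values
  parameters:
    image_name: dsc-swe-image            # Glance image created from the QCOW2
    flavor_name: dsc-swe-flavor          # flavor sized for the DSC SWe VM
    mgmt_network: oam-net                # management network for the deployment

  # The stack is then created from the template and environment file with a
  # command along the lines of:
  #   openstack stack create -t dsc_swe_standalone.yaml -e site_env.yaml dsc-swe-1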

Hewlett Packard Enterprise (HPE) Helion is an HPE-branded OpenStack solution on which the DSC SWe can be deployed. The HPE Helion customization is mostly transparent, provided by way of software and drivers that enhance OpenStack stability and data throughput.

Samsung VNF Management Support

The Virtual Network Functions Manager (VNFM) is a key component in network functions virtualization. The VNFM interfaces with other functional blocks in a virtual network, such as the Virtualized Infrastructure Manager (VIM) and the Network Functions Virtualization Orchestrator (NFVO), to direct orchestration, monitoring, and management of the Virtual Network Functions (VNFs).

Note

VNF in this document refers to the collection of VMs that constitute the DSC SWe (on OpenStack). Each VM is considered a Virtual Network Function Component (VNFC).

In general, the DSC SWe accepts VNFM provisioning information through OpenStack metadata (as per the published format). For deployments under the Samsung VNFM, the DSC SWe also implements direct VNFM communication through a Ve-VNFM REST interface. This interface is based on the European Telecommunications Standards Institute (ETSI) Management and Orchestration (MANO) specifications with Samsung extensions.

For distribution under the Samsung VNFM, the DSC SWe is delivered as a QCOW2 image with a system-definition Virtual Network Function Descriptor (VNFD) file. The Ve-VNFM REST interface serviced by the DSC SWe VNF (along with the VNFM's Ve-VNFM service) allows for the transmission of usage alarms to the VNFM and the management of instantiation and scale operations by the VNFM.

DSC SWe on OpenStack with Ribbon VNFM

The Ribbon Virtual Network Function Manager (VNFM) is an ETSI standards-aligned virtualized application that provides an alternative to Heat templates and the Samsung VNFM when deploying the DSC SWe in an OpenStack environment. The Ribbon VNFM handles the deployment, orchestration, and lifecycle management of the DSC SWe (on OpenStack).

For more information, refer to the DSC SWe (on OpenStack) using VNFM Instantiation Guide.