This section provides the server, operating system, and host configuration required to run the DSC virtual machines.

Warning

Performance cannot be guaranteed for customer-provided servers that have a different hardware configuration than the Ribbon-provided Dell server.

Server Requirements

  • Dual Intel Xeon Gold 5218 2.3GHz or better.
    • Intel Xeon Gold or Platinum is recommended for optimized performance and reliability
    • 2.3GHz or faster clock speed
    • Dual sockets
    • 32+ threads per socket
  • Intel NIC
    • Must support SR-IOV when running with RHEL 8 (the DSC does not currently require SR-IOV; this is for future proofing)
    • Must support 1 GbE or 10 GbE, copper or optical, based on the customer NHR requirement.
    • It is good practice to physically separate flows (HA, Signaling, and OAM), so a minimum of 6 Ethernet interfaces is required.
      • If further physical separation between the signaling interfaces is required, more Ethernet interfaces are needed.
      • When separation of flows is not needed, fewer interfaces may be engineered per customer requirements. HA links require 1 Gbps of bandwidth.
Note

If physical separation of flows is not a requirement, the flows can share the same physical interface with logical separation using VLANs. This is only supported on interfaces operating at 10G or higher speeds.

  • Storage: Minimum 1.2TB
    • Although not all of the 1.2TB is used, it is recommended to engineer this size to allow for growth in future releases.

      Note

      These can be SATA or SAS drives.

  • Minimum 48 GB Memory, 2933MT/s, Dual Rank.
    • Balanced memory configurations enable optimal interleaving, which maximizes memory bandwidth. Ensure a balanced configuration by properly populating the memory channels across the memory controllers.

    • When the memory subsystem is incorrectly configured, the memory bandwidth available to the server can become limited and overall server performance can be reduced.

  • Redundant PSU (AC or DC)
  • A RAID controller is critical to ensure proper IO performance of the solution.
    • Must support RAID1 and RAID10
    • Should support a 12 Gbit/s data transfer rate
    • Should have a sufficient buffer size, for example 8GB
    • Supports a battery backup


Dell Specific

  • PERC H740P RAID Controller, LP Adapter, or PERC H750
  • iDRAC9, Enterprise
  • Intel i350 Quad Port 1GbE BASE-T and Intel X710 Dual Port 10GbE Direct Attach SFP+ 

Dell Recommended

  • PowerEdge R740 XL/R740xd XL Motherboard
  • Intel Xeon Gold 6240 2.6G, 18C/36T, 10.4GT/s, 24.75M Cache, Turbo, HT (150W) DDR4-2933, OEM XL
  • 16GB RDIMM, 2933MT/s, Dual Rank
  • 1.2TB 10K RPM SAS 12Gbps 512n 2.5in Hot-plug Hard Drive


Warning

These NIC models are known to cause outages when SR-IOV is configured with RHEL 8, and are therefore unsupported. 

  • Broadcom 5720 Quad Port 1GbE BASE-T, rNDC baseboard
  • Broadcom 57412 Dual Port 10GbE SFP+ Adapter, PCIe

Host OS Installation

The following actions are needed to prepare the server for the application VM install. This is not a comprehensive list but is meant to detail the key steps. Additional information is located in the Example Configuration.

  • Install the minimum OS version, RHEL-8.8
  • Set up the OAM interface and IP address on the server using the bond, VLAN, and bridge structure specified in this document for the office type (geo or co-located). For an example, see Example Configuration.

    • Bridge name: Geo office = br-oamA (siteA) or br-oamB (siteB); Co-located office = br-dataoam

    • Ansible playbooks will create the remaining host networking from the office SpecBook configuration, assuming that Ribbon services have been purchased to instantiate the DSC SWe. 
  • Register the server with Red Hat (for package installation); a sketch of registration and package installation follows this list
  • Enable SR-IOV in the kernel
  • Enable IOMMU manually if it is not enabled
  • Install packages
  • Configure the firewall (iptables and/or nftables)
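
The registration and package installation steps can be scripted. The following is a minimal sketch, assuming an active Red Hat subscription and the package list from the OS Requirements section below; the exact repositories and package set should follow the site Specbook.

# register the host with Red Hat and attach a subscription
subscription-manager register --auto-attach

# install the Virtualization Host environment group (group name may vary by RHEL minor release)
dnf group install -y "Virtualization Host"

# install the additional required packages (see OS Requirements; sshpass is installed separately)
dnf install -y expect tcl libvirt-glib libvirt-dbus python3 python3-lxml python3-netaddr \
  python3-netifaces python3-pcp python3-setuptools python3-pip podman podman-docker tmux \
  ipmitool rear syslinux genisoimage lftp iptables iptables-services libvirt-client \
  net-snmp net-snmp-utils net-snmp-libs net-snmp-agent-libs qemu-img grub2-efi-x64-modules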

OS Requirements

The OS is based on RHEL-8.2-x86_64 or later, with the Virtualization Host packages selected in the Software section of the RHEL installation menu, plus the additional Virtualization Platform and Container Management packages.


The following additional packages are required:

  • expect
  • tcl
  • libvirt-glib, libvirt-dbus
  • python3, python3-lxml, python3-netaddr, python3-netifaces, python3-pcp, python3-setuptools, python3-pip
  • podman, podman-docker, tmux
  • ipmitool
  • rear, syslinux, genisoimage, lftp
  • iptables, iptables-services
  • libvirt-client
  • net-snmp, net-snmp-utils, net-snmp-libs, net-snmp-agent-libs
  • qemu-img
  • sshpass
  • grub2-efi-x64-modules

If you are using a host Linux distribution that does not have ifconfig or brctl support, the following additional packages are required when using the default networking templates provided in the DSC SWe package installer.

  • network-scripts
  • bridge-utils

sshpass is available via the RHEL "extras" repository. It is also available directly from the Fedora Project website.
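
For example, sshpass can be installed from the EPEL repository hosted by the Fedora Project; the exact package source and version are time-dependent, so treat the following as a sketch:

# enable the EPEL repository (hosted by the Fedora Project) and install sshpass
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install -y sshpass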

Note

The currently available version may be different than the one shown in this example.

The following packages are recommended:

  • cockpit-machines
  • cockpit-storaged
  • virt-top

Server Configuration

Host and Management

In a customer-supplied server solution, the customer is responsible for the host OS installation, the RHEL license, security updates, and integration with the customer back office for faults and alarms. For any host or operating system issues, the customer works directly with Red Hat and/or the server vendor for support.

It is also critical that the customer implements a backup strategy in order to be able to recover the DSC in the event of a disk or server failure. The backup images must be stored off-board.

Host Engineering Rules

  • The server must be dedicated to the DSC; this avoids having another VM use up resources needed by the DSC VMs
  • CPU over-subscription is not allowed
  • A minimum of 2 CPUs and 12GB of memory must be allocated to the host. For larger hosts (more than 56 vCPUs), reserving 4 vCPUs for the host is recommended.
  • An out-of-band management interface is recommended

BIOS

The following BIOS settings must be enabled on the host:

  • Processor Settings - Logical Processor enabled
  • Processor Settings - Virtualization Technology enabled
  • System Profile = performance
  • SR-IOV must be enabled on all interfaces


How to Set on Dell Servers

BIOS settings

  • Processor settings
    • Ensure Processor is set for Hyperthreading and Virtualization
      • Configuration / BIOS Settings / Processor Settings / Logical Processor = ENABLED
      • Configuration / BIOS Settings / Processor Settings / Virtualization Technology = ENABLED
  • Network Settings
    • Disable PXE boot on all interfaces.
      • Configuration / BIOS Settings / Network Settings / PXE Device<all> = DISABLED
  • Integrated Devices
    • Enable SR-IOV globally
      • Configuration / BIOS Settings / Integrated Devices / SR-IOV Global Enable = ENABLED
  • System Profile Settings
    • Set the System Profile to Performance
      • Configuration / BIOS Settings / System Profile Settings / System Profile = PERFORMANCE
Note

Some BIOS versions may have different System Profile options and will require the profile to be configured in the OS. Refer to the Example Configuration section below for more details.


NIC Settings

  • Enable SR-IOV for all interfaces, both on-board and all PCI NIC cards.
    • System Setup / Device Settings /  <NIC CARD Select> / Device Level Configuration / Virtualization Mode = SR-IOV
    • Repeat for all NIC cards

Filesystem

The recommendation is to create a large LVM logical volume for VM storage. The VM images are stored by default in /var/lib/libvirt/images.

  • At a minimum, RAID1 must be configured; RAID10 is recommended for improved disk performance.
  • The size of this partition must be properly engineered to ensure all VMs fit within it.
    • When adding up all of the VM disk size requirements, it is recommended to reserve another 200GB for future growth or upgrades.
  • The host should have 50G of reserved disk space for creating Relax-and-Recover (ReaR) host OS backups.


Although not all of the 1.2TB is used, it is recommended to engineer this server with 1.2TB to allow for growth in future releases.


Recommended Disk Configuration

  • /home (volume: home, 50G). Used for the following directories:
    • /home/storage/loads/ (release-specific ESD files and core SOS load)
    • /home/storage/patches/ (patches and patch bundle files)
  • /opt (volume: opt, 10G). Used for the following directories:
    • /opt/rrbn/vc20/tools/ansibletk (used for the Ansible ToolKit when the Venus Host OS is used as ACN)
    • /opt/rrbn/vc20/tools/nettest (network testing scripts and configuration files)
  • / (volume: root, 10G)
  • /boot (volume: boot, 1G)
  • /boot/efi (512M)
  • /tmp (volume: tmp, 10G)
  • /var (volume: var, 10G)
  • /var/log (volume: var_log, 10G). Used for logs.
  • /var/log/audit (volume: var_log_audit, 4G). Used for audit logs.
  • /var/tmp (volume: var_tmp, 10G)
  • /backups (volume: backups, 50G). Used for ReaR backup; the following directories are created:
    • /backups/rearLocalBackup (directory for the local ReaR backup)
    • /backups/rearMateBackup (directory for the mate server ReaR backup)
    • /backups/tmp (working directory on the host which is used to create the backup)
  • /upgrades (volume: upgrades, 100G). Reserved for future use.
  • /var/lib/libvirt/images (volume: var_lib_libvirt_images, all remaining space). Used for VM disk storage (example: qcow2 disks for vSP2K or DSC SWe).
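
As an illustration, the large VM image volume can be carved from whatever space remains in the volume group, as in the minimal sketch below. The volume group name (vg01) is an assumption; the actual names and sizes must follow the table above and the site Specbook.

# minimal sketch: create the VM image volume from the remaining space (vg01 is an assumed VG name)
lvcreate -n var_lib_libvirt_images -l 100%FREE vg01
mkfs.xfs /dev/vg01/var_lib_libvirt_images
echo '/dev/vg01/var_lib_libvirt_images /var/lib/libvirt/images xfs defaults 0 0' >> /etc/fstab
mount /var/lib/libvirt/images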

Network Setup

For proper reliability each server should be connected across two Layer 3 switches for co-located deployments and across two different interface modules in geo deployments. These ports will form an active/standby bond on the host.

VLAN tagging is required and multiple bridges are needed for the DSC solution. It is highly recommended that all DSC servers have the same networking configuration.

For Ansible to work, the host IP address must be configured on a bridge; Ansible does not support the host IP directly on an Ethernet interface.

Network Ports

The following network ports must be accessible on the host. Additional ports may be required based on the configuration.

Application/Service   Protocol   Port    Access Requirement
REAR - rpc            TCP        111     Other DSC server
REAR - nfs            TCP        2049    Other DSC server
REAR - mountd         TCP        20048   Other DSC server
DNS                   UDP/TCP    53
WebSM                 TCP        9090    Allow cockpit
ICMP                  ICMP       -       Accept ICMP input packets
SSH/SFTP              TCP        22
Chronyd               UDP        323
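
The host firewall can be opened for these ports; the following is a minimal firewalld sketch, assuming firewalld is in use (adapt it if iptables or nftables is managed directly).

# open the host ports listed above (ICMP echo is accepted by firewalld's default policy)
firewall-cmd --permanent --add-port=111/tcp --add-port=2049/tcp --add-port=20048/tcp
firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=323/udp
firewall-cmd --reload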

VM Placement

In a customer-provided server, the VM placement is engineered based on the number of vCPUs, memory, and disk available per server. These rules apply:

  • Each DSC component is made up of two unit VMs, and each unit VM must be on a different physical server for reliability.
Note

The Ribbon configurator provides for default recommended placement.

VM Configuration Recommendations

Note

The following are general recommendations for a DSC SWe installation:

  • Refer to section Performance Tuning the VM to determine the required resources (for example, vRAM, vCPU, vNIC ports, and the virtual Hard Disk) for optimum performance of the DSC VM.
  • In the BIOS settings of your computer, disconnect or disable any physical hardware devices that you will not be using (for example, floppy devices, network interfaces, storage controllers, optical drives, or USB controllers) to free CPU resources.


Note

Allocate each VM with the required virtual hardware for robust operation. Provisioning a VM with more resources than it requires can, in some cases, reduce the performance of the VM and other VMs sharing the same KVM Host.


The following tables show the recommended resources allocated to each VM installed on a KVM Host.

Recommended KVM Host Resources Allocated to each DSC SWe VM for Diameter

Resource   To support Diameter only
vCPU       4 cores; 2 GHz minimum
vRAM       5 GB
vHDD       65 GB
vNIC       4 virtual NICs (1 MGMT, 1 HA, and 2 packet ports)

Recommended KVM Host Resources Allocated to each DSC SWe VM for SS7

Resource   To support SS7 only
vCPU       4 cores; 2 GHz minimum
vRAM       5 GB
vHDD       65 GB
vNIC       4 virtual NICs (1 MGMT, 1 HA, and 2 packet ports)

Recommended KVM Host Resources Allocated for DSC (Diameter and SS7) VMs

Resource   2 VMs supporting Diameter and SS7
vCPU       8 cores; 2 GHz minimum
vRAM       5 GB
vHDD       65 GB
vNIC       4 virtual NICs (1 MGMT, 1 HA, and 2 packet ports)
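
For reference only, the sizing above maps to KVM parameters as in the hypothetical virt-install sketch below. In practice the DSC SWe VMs are created by the Ribbon Ansible playbooks; the VM name, image path, and bridge names shown here are assumptions.

# hypothetical illustration: a Diameter-only DSC SWe VM sized per the table above
# (4 vCPU, 5 GB vRAM, a 65 GB qcow2 disk, and 4 vNICs; names and bridges are assumed)
virt-install --name dsc-diam-1 --vcpus 4 --memory 5120 \
  --disk path=/var/lib/libvirt/images/dsc-diam-1.qcow2 \
  --network bridge=br-oamA --network bridge=br-ha \
  --network bridge=br-sig1 --network bridge=br-sig2 \
  --import --noautoconsole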

Host Checker

Ribbon provides a utility called c20hostchecker, which is available for download from the GSC. It validates that the server has the proper hardware and software installed to proceed with the virtual C20 application installation.

Downloading the Utility

The c20hostchecker utility can be downloaded via Ribbon’s Global Software Center (GSC) website.

  1. Go to Downloading Software from the Ribbon Support Portal and log in.
  2. Navigate to the Software Releases pull-down menu and select Advance Search.
  3. Enter C20HostChecker in the search box and submit.
  4. Download the latest version and place it on the host you wish to validate.

Software Bundle Content

The software bundle downloaded from Ribbon’s GSC site contains the following items:

  • c20hostchecker – The utility executable.
  • C20hostchecker User Guide
  • yaml – The YAML input file identifying host resources to validate.

Refer to the user guide for details on how to modify the yaml configuration file and run the c20hostchecker utility. The results of the c20hostchecker run must be shared with the Ribbon Network Engineer assigned to this project.

Example Configuration

The information below is provided as a job aid. Refer to the official Red Hat documentation for additional details on host installation and network configuration, located at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8

Setup OAM Interface on Server

Create the bond, bridge, and VLAN, and set up the IP address on the bridge. This example creates bond "bond0" from interfaces eno1 and eno2, adds a bridge "br-oamA", and attaches the bridge to the bond using VLAN "bond0.172" with VLAN ID = 172.

# add the bonds
 nmcli c a type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100,fail_over_mac=follow" ipv4.method disabled ipv6.method ignore
  
# add the bond-slaves
 nmcli c a type bond-slave con-name eno1 ifname eno1 master bond0
 nmcli c a type bond-slave con-name eno2 ifname eno2 master bond0
  
# add the bridges
 nmcli c a type bridge con-name br-oamA ifname br-oamA ipv4.method disabled ipv6.method ignore
 
# add the vlans (references both bond and bridge to tie them together)
 nmcli c a type vlan con-name bond0.172   ifname bond0.172   dev bond0 id 172 master br-oamA  slave-type bridge
 
# add the host IP
 nmcli c mod br-oamA ipv4.method manual ipv4.addresses 172.27.218.70/25 ipv4.gateway 172.27.218.1 ipv4.dns "172.27.218.216,172.27.218.116" ipv4.dns-search "example.com"
  
# bounce the OAM bridge to cause the IP to be instantiated.
 nmcli c down br-oamA
 nmcli c up   br-oamA
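
The resulting configuration can be verified with a few quick checks (a sketch; the gateway address is the one from the example above):

# verify the bond, VLAN, and bridge connections are active and the host IP is reachable
 nmcli c show --active
 ip -br addr show br-oamA
 ping -c 3 172.27.218.1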

Enable IOMMU Manually if it is Not Enabled

# Edit /etc/default/grub and add intel_iommu=on to GRUB_CMDLINE_LINUX. Example seen below
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet intel_iommu=on"

# Refresh the grub config change
grub2-mkconfig -o /boot/grub2/grub.cfg

# Reboot server
reboot 
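
After the reboot, a quick check confirms the setting took effect:

# confirm the kernel command line includes intel_iommu=on
grep -o 'intel_iommu=on' /proc/cmdline

# confirm the IOMMU (DMAR) was initialized
dmesg | grep -i -e DMAR -e IOMMU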


Enable Latency-Performance Profile in OS.

# tuned-adm profile latency-performance
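
The active profile can then be confirmed:

# confirm the latency-performance profile is active and applied
tuned-adm active
tuned-adm verify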

Rear Requirements for Host Backup and Restore

Disk Recommendations

Ribbon recommends allocating 50GB of disk space for ReaR backup creation and storage. The disk space is used for the following:

  • Maintain two mate backup copies
  • Maintain one local backup copy
  • Provide a ReaR working directory

ReaR ISO Contents

The ReaR ISO image should include both the bare-metal image and the backup data. Including the backup data with the bare-metal ISO image ensures the host image and data align. This format also eliminates the need for network connectivity when restoring a server.
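
With the ReaR configuration described later in this section (see Rear Host Configuration File), such an ISO is typically produced with a single ReaR run, for example:

# create the rescue ISO with the backup data embedded in it
rear -v mkbackup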

File Requirements

Directories Required to be Included in ReaR Backup for VM Recovery

The directories listed in this section must be included in the backup; they contain tools, scripts, and configuration files necessary for recovering the DSC Virtual Machines.

  • /opt/rbbn: Ribbon scripts, tools, and configuration files. This is the Ribbon default location; refer to the site Network Specification Document (Specbook) for the Ribbon tools location.
  • /var/lib/libvirt/swtpm, /var/lib/libvirt/network, /var/lib/libvirt/filesystems, /var/lib/libvirt/boot, /var/lib/libvirt/dnsmasq, /var/lib/libvirt/qemu, and /etc/libvirt/qemu: DSC VM configuration files, including the XML files.

Directories Required to be Excluded from the ReaR Backup

Directories listed in this section must be excluded from the backup to prevent the backup from failing.

  • /var/lib/libvirt/images: DSC VM disk images. The images are large and constantly changing.
  • The ReaR backup directory: The host backup should not include the mate or local backup images.
  • /var/log: The system logs are not needed for restore.

Files to Exclude from the ReaR Backup

This section identifies the file types that should be excluded from the backup to prevent it from becoming too large. Including these file types may cause the backup to fail, and a large backup ISO image may also slow recovery.

  • Installation ESD, ISO, and QCOW2 images: These files are large and may cause the backup to fail. It is recommended to download installation images from GSC when required.
  • Patches: It is recommended to re-download patches after a system recovery.

Rear Host Configuration File

This section provides a recommendation for the ReaR configuration file. It is recommended that the /etc/rear/local.conf file be used for the default ReaR server configuration. These variables override the default ReaR variable definitions in the /usr/share/rear/conf/default.conf file. 

Variable Definition Recommendations for the local.conf file.

  • OUTPUT=ISO: Identifies the output file type. ISO is the only supported output type.
  • BACKUP=NETFS: The internal backup method.
  • OUTPUT_URL=nfs:<nfs mount>: Identifies the location of the backup. It is recommended to use an NFS mount to the mate server to allow multiple backup copies to be maintained.
  • BACKUP_URL=iso:///backup/: Specifies where the backup is located during the restore. This definition directs ReaR to place the backup data in the ISO backup image.
  • export TMPDIR="<dir>": Defines the ReaR working directory. On some systems the /tmp directory does not allow executable files to be run, in which case an alternate directory is needed.
  • KEEP_BUILD_DIR="": Instructs ReaR to clean up its build directory after every backup. This is needed to prevent ReaR from exceeding its disk limit.
  • KEEP_OLD_OUTPUT_COPY=y: Specifies that two backups should be maintained, the current one and an "old" one. At most, ReaR maintains two copies. This variable only applies when the backups are maintained on an NFS mount.
  • BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/var/lib/libvirt/images/*' '...'): Specifies the directories that ReaR should exclude from the backup.
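
A minimal example /etc/rear/local.conf assembled from these variables is shown below. The NFS mount point is site-specific and must come from the office Specbook; /backups/tmp follows the working directory listed in the Recommended Disk Configuration, and the exclude list follows the directories identified above.

# example /etc/rear/local.conf (illustrative sketch only)
OUTPUT=ISO
BACKUP=NETFS
OUTPUT_URL=nfs://<nfs mount>
BACKUP_URL=iso:///backup/
export TMPDIR="/backups/tmp"
KEEP_BUILD_DIR=""
KEEP_OLD_OUTPUT_COPY=y
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/var/lib/libvirt/images/*' '/backups/*' '/var/log/*' )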

Ribbon versus Customer Responsibility Matrix

The following table outlines the division of responsibility for Ribbon-provided hardware versus customer-sourced hardware.

Component                          Ribbon Provided Hardware                        Customer Provided Hardware
                                   Install  Support  Upgrades  Logs  Recovery      Install  Support  Upgrades  Logs  Recovery
DSC Applications                   R        R        R1        R     R             R        R6       R1        R     R/C5
VM Instantiation                   R2       R        R1        R     R             R2       R        R1        R     R
Operating System and Hypervisor    R        R        R1        R     R             C3       C3       C3        C3    C3
Hardware                           R        R        R1        R     R             C4       C4       C4        C4    C4
Network (Router, CS LAN)           C        C        C         C     C             C        C        C         C     C

Column key: Upgrades = Upgrade & Security Patches; Logs = Logs and Alarms North-bound; Recovery = System Recovery.

R = Ribbon, C = Customer

Notes:

  1. Customers under Ribbon Care may choose to self-apply corrective updates or have Ribbon apply them. All upgrades are available via the Ribbon Support Portal, and Ribbon Pro Services can be contracted on a per-occurrence basis.

  2. VMs are created as part of the Ansible playbook at commissioning.

  3. The host OS must be provided by the customer and updates obtained directly from Red Hat (i.e., the host OS is not under a Ribbon license). All required packages must be applied, supported NICs installed, and disks partitioned. The customer is responsible for restoration after a failure.

  4. The customer is responsible for independently acquiring and installing hardware. Ribbon will provide recommended Dell configurations. Additional Pro Services fees apply to qualify non-Dell or custom configurations. A lab is strongly recommended for non-standard configurations.

  5. The customer must maintain proper backups.

  6. Ribbon does not certify performance on customer-provided hardware, including where the hardware complies with Ribbon recommended specifications.