This section covers common issues encountered in the SBC SWe on Azure setup, and the actions necessary for verification and troubleshooting.

Every time I start the SBC instance, it stops after a few minutes.

This results from submitting invalid user-data. Submit only valid JSON to the SBC. For more information on valid JSON, refer to SBC's Userdata.

Action Steps:

To verify whether the problem occurs due to invalid JSON, perform the following steps:

  1. In the portal, go to Virtual machines.
  2. Select the instance.
  3. Go to Support + troubleshooting > Serial Console.
  4. Start the instance.
  5. Search the log for the following message:
    Invalid JSON format in UserData - Shutting Down
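
As a quick local check before resubmitting, you can validate the user-data with any JSON parser. For example, assuming the user-data is saved in a local file named userdata.json (a placeholder name):

    # Prints the parsed JSON on success; reports the position of the error on failure
    python3 -m json.tool userdata.json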


The HFE.log continually records the error message "Connection error ongoing - No connection to SBC PKT ports from HFE".

If the message "Connection error ongoing - No connection to SBC PKT ports from HFE" is continually written to HFE.log, it indicates that the HFE node cannot connect to the SBCs.

Action Steps:

Perform the following verification steps:

  1. Using the CLI, verify that PKT0 and PKT1 are configured correctly. For more information on this process, refer to Configuring PKT Ports.

  2. Verify that the IPs listed in the HFE_conf.log are the ones attached to the SBC:

    1. Go to /opt/HFE/log/.

    2. Find the logs that specify the IPs for the SBC; the logs are in the form:

      <SBC instance name> - IP for <pkt0 | pkt1> is <IP>
    3. Find the Alias IPs for the SBC:

      1. Go to Virtual machines.

      2. Click on the SBC instance.

      3. Go to Settings > Networking.

      4. Go to the PKT0 interface.

      5. Click on the network interface.

      6. Go to Settings > IP configurations.

      7. Verify that the secondary IP matches the IP listed in the HFE_conf.log.

      8. Repeat for the PKT1 interface.

  3. Check that the security groups are correct:

    1. Go to Network security groups.

    2. Select the security group.

    3. Go to Inbound security rules.

    4. Verify that the endpoint IPs are allowed.

  4. Check the routes are correct:

    1. Go to Route tables.

    2. Select the route table.

    3. Click on Routes and verify that the routes point to the eth2 IP on the HFE node.

    4. Click on Subnets and verify that the route table is associated with both subnets (see the CLI sketch after this list).
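
The alias IP and route checks above can also be performed with the Azure CLI, as a cross-check against the portal. The sketch below uses placeholder names (my-rg, sbc-pkt0-nic, and hfe-route-table) for the resource group, the SBC PKT0 network interface, and the route table:

    # List the IP configurations (including secondary/alias IPs) on the SBC PKT0 interface
    az network nic ip-config list --resource-group my-rg --nic-name sbc-pkt0-nic -o table

    # List the routes; the next hop should be the eth2 IP of the HFE node
    az network route-table route list --resource-group my-rg --route-table-name hfe-route-table -o table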


Calls are failing to reach the SBC from the HFE node.

Action Steps:

  1. Verify that there are no errors logged in the HFE.log.
  2. Verify that the endpoint of the traffic is allowed access through the network security groups. For more information, refer to Network Security Group Creation.
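
For example, you can scan the HFE log from the node's shell and review the security rules with the Azure CLI (my-rg and my-nsg are placeholder names for the resource group and network security group):

    # Scan the HFE log for error entries
    grep -i "error" /opt/HFE/log/HFE.log

    # Review the configured security rules on the network security group
    az network nsg rule list --resource-group my-rg --nsg-name my-nsg -o table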

I am unable to log on to my HFE node via the mgmt interface.

This indicates that either there is a configuration issue, or the firewall rules have not been updated correctly.

Action Steps:

  1. Verify that the IP you are trying to SSH from is allowed through the network security group. For more information, refer to Network Security Group Creation.
  2. Verify that the IP you are trying to SSH from is correctly specified in the HFE node user-data. Update the appropriate line containing "REMOTE_SSH_MACHINE_IP":

    /bin/echo "REMOTE_SSH_MACHINE_IP=\"10.27.178.4\"" >> $NAT_VAR


    For more information, refer to Custom Data Example.

  3. The HFE script may fail before creating the routes. In such cases:
    1. Attempt to SSH to NIC0 on the HFE node.
    2. Check the logs in /opt/HFE/log/ for errors. For more information, refer to HFE Node Logging.
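
A minimal sketch of step 3, assuming key-based SSH access (azureuser and ~/.ssh/hfe_key are placeholders for the admin username and key path):

    # SSH directly to the NIC0 IP of the HFE node
    ssh -i ~/.ssh/hfe_key azureuser@<NIC0 IP>

    # Look for the most recent log and scan it for errors
    ls -lt /opt/HFE/log/
    grep -i "error" /opt/HFE/log/*.log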

The instance does not get accelerated NICs.

The SWe instance sometimes starts even without the accelerated NICs, but in that case its performance is not guaranteed.

Action Steps:

  1. Execute the following command to confirm the availability of the Mellanox NICs.

    > lspci | grep Mellanox


    The sample output below indicates the presence of Mellanox NICs.

    83df:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
    9332:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
  2. If the Mellanox NICs are not present, de-allocate the instance and start it again.
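
You can also confirm from the Azure side whether accelerated networking is enabled on an interface. A sketch with placeholder names (my-rg and sbc-pkt0-nic):

    # Returns true when accelerated networking is enabled on the NIC
    az network nic show --resource-group my-rg --name sbc-pkt0-nic --query enableAcceleratedNetworking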