This section describes common issues found in SBC GCP instances, and the actions needed to verify or troubleshoot them.
If you are unable to SSH to the instance, it implies that there are errors in the public key provided, or that you are using a different public key than the one provided.
Action Steps:
Ensure that you are logging on with the user linuxadmin.
Verify that the key is correct: ssh-keygen -y -f <<key file>>
The output should be similar to: ssh-rsa ... linuxadmin
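The check above can be sketched end to end. This is a minimal, self-contained example; the key paths are illustrative, and a throwaway key pair stands in for your real key file:

```shell
# Sketch: confirm that a private key file reproduces the public key that was
# supplied to GCP. Paths are illustrative; substitute your own key file.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N "" -C linuxadmin -q   # demo key pair
ssh-keygen -y -f /tmp/demo_key > /tmp/derived.pub                   # derive public part
# Compare only the key type and key body (the third field is the comment):
cut -d' ' -f1,2 /tmp/demo_key.pub > /tmp/uploaded_part
cut -d' ' -f1,2 /tmp/derived.pub  > /tmp/derived_part
if diff /tmp/uploaded_part /tmp/derived_part >/dev/null; then
  echo "keys match"
else
  echo "key mismatch: wrong private key for the uploaded public key"
fi
```

If the two outputs differ, the private key you are presenting does not correspond to the public key stored in the instance metadata.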
If this message is written continually to the HFE.log, it implies that the HFE node cannot connect to the SBCs.
Action Steps:
Ensure that PKT0 and PKT1 are configured correctly through the CLI. Refer to the section "CLI Configuration for Configuring PKT Ports" of the page Configure SBCs in GCP.
Check the HFE_conf.log in /opt/HFE/log/ to verify that the PKT0 and PKT1 IPs are the ones attached to the SBC. Look for lines of the form: <<SBC instance name>> - IP for <<pkt0/pkt1>> is <<IP>>
In the Network interfaces table, check nic2 and nic3 and ensure that the IPs in the Alias IP ranges column match the PKT0 and PKT1 IPs.
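As a quick check, the PKT IPs the HFE node discovered can be pulled straight from HFE_conf.log and compared against the alias IPs shown in the console. The sketch below runs against a sample file; the instance name and IPs are illustrative, and on a real HFE node the path is /opt/HFE/log/HFE_conf.log:

```shell
# Sketch: extract the PKT interface IPs that the HFE node logged during
# configuration. The sample lines mirror the format quoted above; the
# instance name and IPs are illustrative.
mkdir -p /tmp/HFE/log
cat > /tmp/HFE/log/HFE_conf.log <<'EOF'
sbc-active-1 - IP for pkt0 is 10.0.65.231
sbc-active-1 - IP for pkt1 is 10.0.66.231
EOF
# On the HFE node itself, point this at /opt/HFE/log/HFE_conf.log instead:
grep -o 'IP for pkt[01] is [0-9.]*' /tmp/HFE/log/HFE_conf.log
```

Each extracted IP should appear in the Alias IP ranges column of the corresponding nic in the console.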
This is the result of invalid user-data. Enter only valid JSON. Refer to the section "User Data Format" of the page Configure SBCs in GCP.
Action Steps:
Go to Compute Engine > VM instances and correct the user-data for the instance.
Validate the JSON with either of the following commands:
jq . user-data.txt
python -m json.tool user-data.txt
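The validation step can be wrapped in a small script run before pasting the user-data into the instance metadata. This is a sketch; the file name and JSON content are illustrative, and any valid JSON passes:

```shell
# Sketch: validate user-data locally before deploying. The file name and
# the sample key/value are illustrative placeholders.
cat > /tmp/user-data.txt <<'EOF'
{"key": "value"}
EOF
if python3 -m json.tool /tmp/user-data.txt >/dev/null 2>&1; then
  echo "user-data is valid JSON"
else
  echo "user-data is INVALID JSON: fix it before deploying"
fi
```

A trailing comma, an unquoted key, or a stray character is enough to make cloud-init reject the whole file, so it is worth running this check on every edit.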
Action Steps:
Check the HFE.log. Refer to the section "HFE Node Logging" of the page Configure the HFE Node.
This is the result of starting both instances simultaneously, so that both attempt to communicate with the same ID. This is expected behavior; the system reboots and comes up as Standby.
It implies that either there is a configuration issue, or the firewall rules are not updated correctly.
Action Steps:
Ensure that the updated line is similar to the following:
/bin/echo "REMOTE_SSH_MACHINE_IP=\"10.27.178.4\"" >> $NAT_VAR
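To confirm that the variable actually reached the file, the appended line can be grepped back out afterwards. In this sketch the file path is illustrative (a temporary stand-in for the real NAT variables file), while the echo line mirrors the one above:

```shell
# Sketch: append the remote SSH machine IP and verify it landed in the
# NAT variables file. The path is illustrative; the IP is the example above.
NAT_VAR=/tmp/natVars.input
: > "$NAT_VAR"                                        # start from an empty file
/bin/echo "REMOTE_SSH_MACHINE_IP=\"10.27.178.4\"" >> "$NAT_VAR"
grep 'REMOTE_SSH_MACHINE_IP' "$NAT_VAR"
```

If the grep returns nothing, the update was made to the wrong file or the line was malformed, and the firewall/NAT rules will not pick it up.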
Check the logs in /opt/HFE/log/. Refer to the section "HFE Node Logging" of the page Configure the HFE Node.
The possible reason is that the SBC PKT interface is unable to find the HFE interface.
Action Steps:
Run tshark on the port:
tshark -i pkt0/pkt1
Look for ARP error messages, such as:
0.999962 42:01:0a:00:41:e8 -> Broadcast ARP 42 Who has 10.0.65.231? Tell 10.0.65.232
Perform a switchover through the CLI:
request system admin <system_name> switchover
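When a live capture is not convenient, the tshark output can be saved to a text file and scanned afterwards for the repeating ARP requests. The capture line below is the example from above; the file path and interface IP are illustrative:

```shell
# Sketch: scan a saved tshark text capture for unanswered ARP requests
# toward the HFE interface IP (10.0.65.231 in the example above).
cat > /tmp/pkt0_capture.txt <<'EOF'
0.999962 42:01:0a:00:41:e8 -> Broadcast ARP 42 Who has 10.0.65.231? Tell 10.0.65.232
EOF
grep -c 'Who has 10.0.65.231' /tmp/pkt0_capture.txt
```

A steadily growing count of "Who has" requests with no matching replies indicates that the PKT interface never resolves the HFE interface, which is exactly the condition the switchover is meant to clear.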
When deploying an HA setup in a public cloud environment, each node must be able to query all other associated instances (peer SBC or HFE node) to obtain information about the other nodes. If there is a delay in creating any instance within the setup, the other nodes are unable to collect complete information and data is missing from the metaVariable table in the configuration database. The SBC application cannot start if cloud-init fails and the database is populated incorrectly.
To correct this issue, reboot both SBC instances from the console to ensure SSH works on the instances, and to allow the nodes to gather all of the required information.