This section describes common issues found in SBC GCP instances and the actions needed to verify or troubleshoot them.
An SSH login failure implies that there are errors in the public key provided, or that you are using a different public key than the one provided.
Action:
Verify that the public key for the linuxadmin user is correct by running: ssh-keygen -y -f <<key file>>.
The command prints the public key derived from the private key; it must match the key you provided, for example: ssh-rsa ... linuxadmin.
If this log message is continually written to HFE.log, it implies that the HFE node cannot connect to the SBCs.
Action:
Ensure that PKT0 and PKT1 are configured correctly through the CLI. Refer to the section "CLI Configuration for Configuring PKT Ports" of the page Instantiating SBC SWe in GCP.
Verify that the IPs in /opt/HFE/log/HFE_conf.log are the ones attached to the SBC. Look for entries similar to: <<SBC instance name>> - IP for <<pkt0/pkt1>> is <<IP>>.
In the Network interfaces table, check nic2 and nic3 and ensure that the IPs in their Alias IP ranges match.
This is the result of invalid user data. Enter only valid JSON. Refer to the section "User Data Format" of the page Instantiating SBC SWe in GCP.
Action:
Go to Compute > VM instances.
Validate the user data with either of the following commands:
jq . user-data.txt
python -m json.tool user-data.txt
Action:
Check HFE.log. Refer to the section "HFE Node Logging" of the page Configure HFE Nodes in GCP.

This is the result of starting both instances simultaneously and trying to communicate with the same ID. This is expected behavior; the system reboots and comes up as Standby.
It implies that either there is a configuration issue, or the firewall rules are not updated correctly.
Action:
Ensure that the updated line is similar to the following:
/bin/echo "REMOTE_SSH_MACHINE_IP=\"10.27.178.4\"" >> $NAT_VAR
Check the logs in /opt/HFE/log/. Refer to the section "HFE Node Logging" of the page Configure HFE Nodes in GCP.

The possible reason is that the SBC PKT interface is unable to find the HFE interface.
Action:
Run tshark on the port:
tshark -i pkt0/pkt1
Look for ARP error messages, such as:
0.999962 42:01:0a:00:41:e8 -> Broadcast ARP 42 Who has 10.0.65.231? Tell 10.0.65.232
Perform a switchover through the CLI:
request system admin <system_name> switchover
If the endpoint is in a different subnet, perform the following additional steps to allow traffic to reach the HFE node.
Action:
Peer the two VPCs together. Refer to https://cloud.google.com/vpc/docs/using-vpc-peering for details.
Ensure that the peering is configured on both VPCs.
In the penultimate line of the startup script, add the following command:
ip route add <endpoint CIDR> via <gateway ip> dev <ens5/eth1>
See the following startup script excerpt for example:
/bin/echo "Configured using HFE script - $HFE_FILE" >> $LOG_FILE
/bin/echo $(timestamp) " ========================= Done ==========================================" >> $LOG_FILE
ip route add 10.27.27.0/24 via 10.27.3.1 dev ens5
nohup $HFE_FILE setup > /dev/null 2>&1 &
When deploying an HA setup in a public cloud environment, each node must be able to query all other associated instances (peer SBC or HFE node) to obtain information about the other nodes. If there is a delay in creating any instance within the setup, the other nodes are unable to collect complete information and data is missing from the metaVariable table in the configuration database. The SBC application cannot start if cloud-init fails and the database is populated incorrectly.
To correct this issue, reboot both SBC instances from the console to ensure SSH works on the instances, and to allow the nodes to gather all of the required information.