This topic covers common issues found in SBC GCP instances and the action steps needed to verify or fix them.
When I try to SSH in to the SBC as linuxadmin, I receive Permission denied (publickey).
This error generally means there is some kind of error in the supplied public key, or that a different public key has been retrieved.
Action Steps:
- Check that the supplied linuxadmin key is correct:
- Go to Compute Engine > VM Instances.
- Click on the instance.
- Click on the SSH key.
- Verify the public key matches the result of running ssh-keygen -y -f <<key file>> (a command sketch follows this list).
- Verify the format is ssh-rsa ... linuxadmin.
- Verify that 'Block project-wide SSH keys' is selected:
- Go to Compute Engine > VM Instances.
- Click on the instance.
- Under 'SSH keys', check that the 'Block project-wide SSH keys' checkbox is selected.
- Verify that there are no 'SSH Keys' in the global Metadata.
- Go to Compute Engine > Metadata.
- If there are any 'SSH Keys' entries, remove them.
- Once the issue has been found:
- If it was an error in the supplied key, update the key and reboot the instance.
- If it is an error with global Metadata keys, all SBC instances will need to be completely recreated and the HFE node will need to be restarted to get the latest SBC information.
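A quick way to perform the key check above from the workstation that holds the private key is shown below. This is a minimal sketch; <<key file>> is a placeholder for your private key file:

ssh-keygen -y -f <<key file>>
# Prints the public key derived from the private key, e.g. "ssh-rsa AAAAB3Nza...".
# The key material must match the SSH key entry stored on the instance, which takes the form:
#   ssh-rsa AAAAB3Nza... linuxadmin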
The HFE.log is continually getting the error message "Connection error ongoing - No connection to SBC PKT ports from HFE"
If this message is continually being written to the HFE.log, the HFE node cannot connect to the SBCs.
Action Steps:
- Verify PKT0 and PKT1 are configured correctly through the CLI. See "CLI Configuration for Configuring PKT Ports" in Configuring SBCs in GCP.
- Verify the IPs listed in the HFE_conf.log are the ones attached to the SBC:
- Go to /home/ubuntu/HFE/log/.
- Find the logs which specify the IPs for the SBC - these are in the form: <<SBC instance name>> - IP for <<pkt0/pkt1>> is <<IP>>.
- Find the Alias IPs for the SBC:
- Go to Compute Engine > VM Instances.
- Click on the instance.
- In the Network interfaces table, look at nic2 and nic3 and verify the IPs in the Alias IP ranges match.
- Check that the VPC routes and firewall rules are correct (see the command-line sketch after this list):
- Go to VPC network > VPC networks.
- Click on the VPC for PKT0.
- Click on Firewall rules and verify that the firewall rules outlined in Google Firewall Rules exist.
- Click on Routes and verify that the routes outlined in Google Network Routes exist.
- Repeat for PKT1.
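The console checks above can be cross-checked from the command line. This is a sketch only; the instance name, zone, and VPC names are placeholders, and the gcloud --format and --filter expressions are assumptions about how the relevant fields are named:

# On the HFE node, list the SBC PKT IPs recorded by the HFE script
grep "IP for" /home/ubuntu/HFE/log/HFE_conf.log

# Show the network interfaces (including the Alias IP ranges on nic2/nic3) of the SBC instance
gcloud compute instances describe <<SBC instance name>> --zone <<zone>> --format="yaml(networkInterfaces)"

# List the firewall rules and routes attached to the PKT0 VPC (repeat for PKT1)
gcloud compute firewall-rules list --filter="network:<<PKT0 VPC name>>"
gcloud compute routes list --filter="network:<<PKT0 VPC name>>"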
Every time I start my SBC instance, the instance stops itself after a few minutes.
This is the result of invalid user-data being entered. Only valid JSON is allowed for the SBC. Refer to User Data Format.
Action Steps:
- Go to Compute Engine > VM Instances.
- Click on the instance.
- Go to Custom metadata.
- Click on user-data.
- Copy the user-data into a file and verify that it is valid JSON (see also the sketch after this list). For example:
- Linux utility jq: jq . user-data.txt.
- Python: python -m json.tool user-data.txt.
- If valid, the user-data is printed out; otherwise an error is displayed.
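If the instance is reachable, the same validation can be run from inside it by reading the user-data from the metadata server rather than copying it from the console. A sketch, assuming the user-data is stored under the user-data custom metadata key as above:

curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data" | python -m json.tool
# Valid JSON is pretty-printed; invalid JSON produces a parse error, for example:
# Expecting ',' delimiter: line 5 column 3 (char 87)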
Calls are failing to reach the SBC.
Action Steps:
- Verify there are no error logs in the HFE.log. See HFE Node Logging.
- Verify the endpoint of the traffic is allowed access through the VPC firewalls (see the sketch after this list). See Google Firewall Rules.
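A quick first pass on both checks, assuming SSH access to the HFE node; the interface name and endpoint IP are placeholders:

# On the HFE node, look for recent errors in the HFE log
grep -i error /home/ubuntu/HFE/log/HFE.log | tail -20

# Confirm that traffic from the endpoint is reaching the HFE node at all
sudo tcpdump -ni <<interface>> host <<endpoint IP>>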
One of my instances sent a broadcast message saying: "SplitBrain: Going for REBOOT to resolve id collision!!!"
This is the result of both instances being started at the same time and trying to communicate with the same ID. This is expected behaviour. The system will reboot and should come up as Standby.
I am unable to log in to my HFE via the mgmt interface
This can mean there is a configuration issue or firewall rules have not been updated correctly.
Action Steps:
- Verify the IP you are trying to SSH from is allowed through the VPC firewall. See Creating Firewall Rules for more information.
- Verify that the IP you are trying to SSH from is in HFE node user-data correctly. See User Data Example.
- The line that needs to be updated should look like this:
- /bin/echo "REMOTE_SSH_MACHINE_IP=\"10.27.178.4\"" >> $NAT_VAR.
- The HFE script may have failed before creating the routes:
- Attempt to SSH in to NIC0 on the HFE node.
- Check the logs in /home/ubuntu/HFE/log/ for errors (see the sketch after this list). See HFE Node Logging for more information.
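If the SSH connection to NIC0 succeeds, the sketch below checks whether the remote-access IP was applied and whether the HFE script logged errors. It assumes the default ubuntu login on the HFE node and that its user-data is stored under the user-data metadata key; the key file and IP are placeholders:

ssh -i <<key file>> ubuntu@<<HFE NIC0 IP>>
# Confirm the REMOTE_SSH_MACHINE_IP line contains the IP you are connecting from
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data" | grep REMOTE_SSH_MACHINE_IP
# Check the HFE logs for errors
grep -i error /home/ubuntu/HFE/log/*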