SAS unreachable

Is anyone else experiencing SAS unreachable errors? I have had several APs go down recently, and the only fix is to reboot the AP or wait until the SAS becomes reachable again, which usually takes about 10 minutes. When it happens, it only seems to affect one AP at a time.

This is from the AP log:
12/27/2023 : 13:55:42 CST : Connection for 18.204.158.93 failed!!!
12/27/2023 : 13:55:52 CST : Connection for 3.215.192.132 failed!!!
12/27/2023 : 13:55:52 CST : Sending to SAS domain proxy failed. Error : Couldn’t connect to server
12/27/2023 : 13:55:52 CST : Sending Heartbeat Request Failed

Usually this is because the SAS (or the cnMaestro proxy) really is unreachable, though I have seen it happen in other situations too. Older firmware versions could get stuck in this state until the AP was rebooted; that seems to have been fixed somewhere between 20.x and 21.x. Whether or not the SAS is actually becoming unreachable, make sure you are running the most recent stable firmware on these radios if you aren't already.
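If you want to confirm whether the SAS endpoints are genuinely unreachable from your side (versus the AP being stuck), a quick TCP probe from another machine on the same network path can tell you. Here is a minimal Python sketch using the two addresses from the log above; substitute whatever your AP is currently failing to reach:

```python
import socket

# The two addresses from the AP log; substitute whatever addresses
# your own AP is currently trying to reach.
SAS_ENDPOINTS = ["18.204.158.93", "3.215.192.132"]

for addr in SAS_ENDPOINTS:
    try:
        # The SAS interface runs over HTTPS, so test TCP port 443.
        with socket.create_connection((addr, 443), timeout=5):
            print(f"{addr}: reachable on 443")
    except OSError as exc:
        print(f"{addr}: unreachable ({exc})")
```

If the probe succeeds from a host on the same path while the AP keeps logging failures, that points at the AP-side stuck state rather than a real outage.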

We are not seeing this on our network today, but we have in the past. When it happened, we found that the path the radios were taking ended up at an Amazon server hosted in India instead of the US. To check, run a ping from the AP to sas.cbrs.cambiumnetworks.com and look at the IP address it resolves to, then run a whois on that address to see where it is actually located.
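If you'd rather script that check from a machine on the same network path, here is a small Python sketch (it assumes a standard `whois` command-line tool is installed on the host; the hostname is the one from the post above):

```python
import socket
import subprocess

SAS_HOST = "sas.cbrs.cambiumnetworks.com"

# Resolve the SAS hostname the same way the AP would (A records).
infos = socket.getaddrinfo(SAS_HOST, 443, socket.AF_INET, socket.SOCK_STREAM)
addresses = sorted({info[4][0] for info in infos})
print(f"{SAS_HOST} resolves to: {', '.join(addresses)}")

# Run whois on each address and pull out the country line, so a
# resolution that lands outside the US stands out.
for addr in addresses:
    result = subprocess.run(["whois", addr], capture_output=True, text=True)
    countries = [line for line in result.stdout.splitlines()
                 if line.lower().startswith("country")]
    print(addr, "->", countries[0].strip() if countries else "no country field found")
```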

A good place to check to see if the SAS or cnMaestro cloud are having issues is https://status.cambiumnetworks.com/

I agree, but it seems to be pretty regionally associated. When we were having issues, only one subnet of our IPs was affected while another was not, and which one was affected switched a few times while it was happening.

I have noticed that this sometimes happens when AWS does periodic reliability tests and shifts their servers. What appears to happen is that the primary DNS address becomes unreachable and the AP doesn't fall back to a secondary DNS lookup. This seems to have been fixed in firmware 20.x and above; I'm not sure of the exact version where it was fixed.
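As a rough illustration of the fallback behavior the older firmware seems to be missing, here is a sketch using the dnspython library (`pip install dnspython`). The resolver addresses are placeholders for whatever primary and secondary DNS servers the AP is actually configured with:

```python
import dns.resolver
import dns.exception

# Placeholder primary/secondary DNS servers (documentation range);
# use whatever the AP is actually configured with.
DNS_SERVERS = ["203.0.113.1", "203.0.113.2"]
SAS_HOST = "sas.cbrs.cambiumnetworks.com"

def resolve_with_fallback(name: str) -> list[str]:
    """Try each configured DNS server in order, instead of giving up
    when the primary does not answer (the behavior described above)."""
    for server in DNS_SERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 3  # seconds to wait before trying the next server
        try:
            answer = resolver.resolve(name, "A")
            return [rr.address for rr in answer]
        except (dns.exception.Timeout, dns.resolver.NoNameservers):
            print(f"DNS server {server} did not answer, trying the next one")
    raise RuntimeError(f"no configured DNS server could resolve {name}")

print(resolve_with_fallback(SAS_HOST))
```

An AP behaving the way described above would effectively stop at the first server in that loop, so a dead primary would leave the SAS hostname unresolvable until the primary came back or the AP was rebooted.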