Odd GPS scenario. Let me start by saying we have already had a ticket open with Cambium, and they recommended replacing the 450i AP. We are limping along at this time due to site access issues, so I'm looking for input and suggestions as to what the issue could be, as I'm not convinced this is a bad AP.
Site consists of 3 450M and 2 450i APs. These are powered via a CMM5. We have the controller unit connected to the network switch and then 2 56V injectors daisy chained off of that. Injector #1 has 2 450M and 1 450i and injector #2 has 1 450M and 1 450i. Injector #1 has the GPS unit plugged into it providing sync to the entire site. We do have other sites with this 450M/450i mix with the same type of CMM5 setup that are not experiencing any issues.
Issue: At the time we opened the ticket with Cambium we had only noticed it on one AP, but BOTH 450i APs will drop sync down into the 50-60% range at random times, though typically not at the same time, resulting in re-registrations from SMs and general customer service issues. Then it will bounce back up into the 90-100% range and rock along just fine.
It was not noticed until we bumped our firmware up to 16.2.3. We lowered the firmware on one of the 450i APs to 16.2.2 with no change.
We do not notice the sync pulse status changing on any of the 450M APs. To me this rules out an issue with the CMM (APs split between 2 injectors) and the GPS. One question is, are the 450M APs maybe not as sensitive to GPS sync pulse as the 450i APs?
I’m hesitant to say it is the 450i because it is happening to both units. Granted these were installed at roughly the same time, so could be the same batch with potentially same hardware issue, but at time of install no issues were noted.
We do have other 450i APs deployed on their own with no injector involved, and I have noticed some fluctuation in their GPS sync, but typically no lower than 80% and, again, not consistent.
The CMM5 is eligible for a firmware upgrade, but again with the 450M APs not experiencing an issue I’m hesitant to point a finger here. I do plan on upgrading the firmware once we have ability to access the site in case something blows up, but what would an upgrade hope to resolve?
Via the CMM5 we have toggled power and GPS for the 450i units with no change in behavior.
Regarding your 450m vs 450i sync sensitivity question:
450m supports only Cambium Sync, whereas 450i supports both Cambium Sync and the older Canopy sync. Since it has to support both types of sync, it does have slightly more complex hardware to handle this.
Furthermore, 450m primarily handles synchronization in software, whereas 450i handles the majority of the sync detection and filtering in the FPGA. Because of this, 450i is slightly less tolerant of sync pulse loss, and is quicker to both achieve and drop synchronization in the presence of a poor sync source.
Regarding the issue you’re seeing, I have some questions:
1. Are you seeing 100% sync pulses most of the time? If not, you should be, and that might indicate poor GPS visibility, a periodic interference source, or a bad injector.
2. Are the 450i radios dropping and regaining sync when you see issues, or just showing reduced sync % and then recovering?
3. Are you using AutoSync + Free Run? If not, it may help to enable this on the 450i radios, especially if the radios are dropping sync periodically and only for a short amount of time.
With respect to (3): you don't necessarily want to run Free Run on the 450i radios if there's potential for near- or on-channel interference with the 450m radios (a frequency reuse setup or similar). But if you're experiencing short-duration sync losses, the 450i has a DAC that is very good at maintaining sync accuracy during a GPS loss, and it shouldn't drift significantly while free-running until the GPS source stabilizes.
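To make the trade-off concrete, here is a toy Python sketch of the policy difference being discussed. This is my own simplified model, not the AP firmware's actual state machine: the assumption is that without Free Run the AP stops transmitting when the pulse disappears (which is what forces SM re-registrations), while AutoSync + Free Run falls back to internally generated timing.

```python
# Toy model (my own illustration, not AP firmware) of the difference between
# plain AutoSync and AutoSync + Free Run when the external sync pulse drops out.

def next_state(policy, pulse_seen):
    """Return the AP's sync state for one tick under the given policy."""
    if pulse_seen:
        return "EXTERNAL"            # tracking the external sync pulse
    if policy == "autosync+freerun":
        return "FREE_RUN"            # keep transmitting on internal timing
    return "MUTED"                   # assumed: no Free Run means SMs drop

def simulate(policy, pulses):
    """Run a sequence of per-tick pulse observations through the policy."""
    return [next_state(policy, p) for p in pulses]

# A brief two-tick dropout, like the short losses described in this thread:
pulses = [True, True, False, False, True]
print(simulate("autosync", pulses))
# ['EXTERNAL', 'EXTERNAL', 'MUTED', 'MUTED', 'EXTERNAL']
print(simulate("autosync+freerun", pulses))
# ['EXTERNAL', 'EXTERNAL', 'FREE_RUN', 'FREE_RUN', 'EXTERNAL']
```

The point of the sketch is only that short dropouts become invisible to SMs under Free Run, at the cost of the drift/interference caveat above.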
You mentioned that your CMM is eligible for a firmware upgrade. What version are you currently running on your CMM5 injectors? The latest release, which consists of v1.4 on the controller and v00.21 for the injectors, added a feature to display GPS satellite data in the controller GUI. This would give you the ability to check the number of visible and tracked satellites, which could help explain why you are experiencing these sync issues.
Another feature in the latest CMM5 release adds support for upgrading the master injector from the controller. The slave injector must still be upgraded manually using the USB cable, but we have found that the slave will work fine if left on v00.13 after the controller and master have been upgraded. Hence you could upgrade your controller and master injector via remote access, provided that the injectors are already on v00.13 firmware.
We do not see 100% sync on the 450i APs, but all the time on the 450Ms. As of writing this I have one at 78% and the other at 82%. Are you suggesting that it could still be a bad injector or GPS even if the 450Ms aren’t seeing an issue?
I just looked at the Sync Status and Event Log tabs, and the 450i is dropping and reacquiring sync repeatedly.
We have these on AutoSync, but will give the AutoSync+Free Run a try and see if that works for a fix until we can gain access to the site.
That site has the controller at 1.4 and master injector at 0.13. I do have plans to perform an upgrade because I do want to see the GPS satellite data. I’m hopeful that would give us a clue if the GPS is having issues and the 450Ms just aren’t affected by it.
Waiting until we have some semblance of access for me to do this remotely though, just in case we need to send up new equipment. Do you have any feedback on issues that have ever happened when doing a remote upgrade? I should be able to at a minimum do the controller and master injector and like you pointed out, leave the slave alone.
We are not aware of any issues that have happened during a remote CMM5 firmware upgrade. The design of the CMM5 power injector is such that even if the upgrade were to fail, the most likely outcome is that you would temporarily lose remote access, but the injector would continue to supply power and sync to the connected APs. Since we cannot guarantee there would be no service impact in the event of an upgrade failure, though, I would certainly understand if you prefer to wait until you have site access, just in case.
Based on your response to Al, it does seem that there could be an issue with the GPS at the site that is not severe enough to affect the 450m APs. As he mentioned, the safest option would likely be to switch the AP configuration to AutoSync + Free Run, provided the sync outages you are seeing are of relatively short duration. That should see you through until you can access the site to investigate further.
I would say it’s concerning if you’re not seeing over 95% almost all the time (obviously, if the sync just dropped and came back after a few minutes, it will take some time for the percentage to ramp back up). It could indicate a problem with the cabling or sync source.
If you are dropping and regaining sync a lot, that may explain the low sync percentage, since the sync percentage tracks 5 minutes worth of pulses. (An update in R20.1 will increase this to 15 minutes, and add a lot of extra sync tracking info that may be helpful in your current situation.) When you are in a period of solid sync for over 5 minutes without any sync loss events, I would expect to see close to 100% sync.
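As an illustration of how a sync percentage computed over a fixed pulse window behaves, here is a minimal Python sketch. This is my own model, assuming one expected pulse per second over a 5-minute rolling window; the firmware's actual accounting and update cadence may differ.

```python
from collections import deque

class SyncPercentTracker:
    """Rolling sync percentage over a fixed window of expected 1 PPS pulses.

    Illustrative model only: assumes one expected pulse per second and a
    simple rolling window, which may not match the AP firmware exactly.
    """

    def __init__(self, window_seconds=300):  # 5 minutes, per the pre-R20.1 behavior
        self.window = deque(maxlen=window_seconds)

    def record_second(self, pulse_received):
        """Record whether the expected pulse arrived during this one-second slot."""
        self.window.append(1 if pulse_received else 0)

    def sync_percent(self):
        if not self.window:
            return 0.0
        return 100.0 * sum(self.window) / len(self.window)

tracker = SyncPercentTracker(window_seconds=300)
for _ in range(240):          # 4 minutes of solid sync
    tracker.record_second(True)
for _ in range(60):           # 1 minute of lost pulses
    tracker.record_second(False)
print(round(tracker.sync_percent()))  # 80
```

This shows why even a single one-minute dropout pins the displayed figure down near 80% until the window fills back up with good pulses, matching the "takes some time to ramp back up" behavior.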
Hopefully the sync loss events you're seeing are intermittent and don't last too long; if so, I expect AutoSync + Free Run to help smooth that out. It would still be worth following Dave's suggestions above with respect to the CMM5 to see if anything improves as well.
AutoSync + Free Run was applied last week and the provider did not hear from customers, so we subscribe to "no news is good news." The event log makes it look like a 2-3 hour cycle: it free runs, then checks in, loses sync, and goes back to free run.
I will be working on upgrades to more accessible CMM controllers and injectors and then plan to roll the dice on this one if I don’t have any hiccups. I would imagine that will help to pinpoint if there is a GPS issue and then make some plans from there.
@CambiumDaveJ I’m working on upgrading the controller and master injector remotely (at an easily accessible site). Upgrading the controller was flawless, lost 4 pings, easy as can be. As far as the master injector, it has been attempting an upgrade for over 30 minutes now. The controller is still responsive and I can navigate off the Services page to the status page and come back and it says still updating, may take 3-4 minutes. Any tips or tricks, as I don’t really want to remove power since that is stressed to not do.
I’m sorry that the upgrade of your master injector did not go smoothly. Can you please check that you selected the *.hex file for the upgrade? The filename should be “LMG_FW_00.21_25FB_1124A.hex” for the current firmware. If the wrong file type is chosen the controller may try to proceed with the upgrade using the invalid file type. While I don’t think this is likely to be the cause of your issue here, it would be good to confirm it just in case.
To recover from your situation I would suggest that you reboot the controller from the GUI and then check if it is able to communicate with the master injector. If there is an unused port on the master you can try enabling or disabling it and then click the Query button to verify that the status changed as expected. If the ports are full and you don’t want to disrupt AP service then you can just try the Blink or Query operations and see if they appear to go through. If you find that the injector is responding properly then you can check the reported firmware version to see if the upgrade worked and the controller simply failed to receive the acknowledgement that the download was successful.
If, on the other hand, the master injector is not responding, then you can try repeating the firmware upgrade procedure from the controller GUI. We have seen that on rare occasions the local upgrade procedure using the USB cable can fail due to a checksum error. The same failure could happen when upgrading from the controller, although we have not seen this before. When this failure happens the injector is left in the bootloader state; in this state it will not respond to status queries from the controller, but it will still support the upgrade process. So trying the upgrade again should restore it to full operation.
Alright, we are fully upgraded at the site in question. I've been glancing at the GPS data at each site I upgraded while working my way to this one, and it seems in line with everything else I've seen. I have 23 satellites in view and 15 counted, with a 3D fix. No invalid messages. The receiver model, version, and revision are the same as at other sites on the network:
Canopy UGPS(GPS + GLONASS)
I'll leave it while I go to lunch and check back, but is there anything else I could be looking at to see whether it is a GPS issue?
That’s great news @kendali - I’m pleased to hear that the second upgrade attempt was successful. I will have to check on whether it is critical to use the exact filename.
Your initial GPS stats look very good. That is an older UGPS model, but it has been tested with the CMM5 and should work fine. The next step would be to wait until your APs lose sync again and see if the CMM5 reports any corresponding loss in the UGPS sync signal or satellite data. As noted earlier, the controller firmware that you just loaded supports SNMP traps, so if you are set up to monitor SNMP you could configure it to alert you to any sync loss events. Of course, you can do the same thing on your APs too.
No invalid message counts or anything to make me think there is a problem with the GPS while I was at lunch. It is up to 24 satellites in view and 20 counted. One AP is at 78% and the other at 73% and still losing sync:
02/24/2021 : 11:30:01 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:30:01 PST : Generating Sync - Free Run
02/24/2021 : 11:30:03 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 11:36:39 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:36:39 PST : Generating Sync - Free Run
02/24/2021 : 11:36:41 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 11:48:02 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:48:02 PST : Generating Sync - Free Run
02/24/2021 : 11:48:07 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 12:47:26 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 12:47:26 PST : Generating Sync - Free Run
02/24/2021 : 12:47:28 PST : Acquired sync pulse from Main/Power Port
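For anyone wanting to quantify outages like the ones in that log before a poller is in place, a small parser can pair each "Lost sync pulse" entry with the next "Acquired sync pulse" entry and report the free-run duration. This is a quick sketch of my own, assuming the `date : time TZ : message` layout shown above.

```python
from datetime import datetime

# Sample entries in the AP event-log format shown in this thread.
LOG = """\
02/24/2021 : 11:30:01 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:30:03 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 11:36:39 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:36:41 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 11:48:02 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 11:48:07 PST : Acquired sync pulse from Main/Power Port
02/24/2021 : 12:47:26 PST : Lost sync pulse from Main/Power Port
02/24/2021 : 12:47:28 PST : Acquired sync pulse from Main/Power Port
"""

def outage_durations(log_text):
    """Return (lost_timestamp, seconds_until_reacquired) pairs from the log."""
    lost_at = None
    outages = []
    for line in log_text.splitlines():
        parts = [p.strip() for p in line.split(" : ", 2)]
        if len(parts) != 3:
            continue  # skip lines that don't match the expected layout
        stamp = datetime.strptime(
            parts[0] + " " + parts[1].replace(" PST", ""),
            "%m/%d/%Y %H:%M:%S",
        )
        if "Lost sync pulse" in parts[2]:
            lost_at = stamp
        elif "Acquired sync pulse" in parts[2] and lost_at is not None:
            outages.append((lost_at, (stamp - lost_at).total_seconds()))
            lost_at = None
    return outages

for lost, secs in outage_durations(LOG):
    print(lost.strftime("%H:%M:%S"), f"{secs:.0f}s")
```

Run against the entries above, this confirms the losses are only a few seconds long, which is exactly the kind of short dropout that AutoSync + Free Run is meant to ride through.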
I will be digging into getting a poller set up to track loss of sync. I did enable remote logging to capture in our syslog any messages that might come across.
The fact that we have these same GPS models at all our mountaintop sites makes me confident it should work fine with the CMM5. I recently ordered a spare Pulse GPS, and swapping that in along with checking cables will probably be the first step in physical troubleshooting once we gain access to the site.
I'm revisiting this ticket from earlier in the year because all complaints dropped off over the summer and we kind of forgot about it, until we received a phone call and realized it is still an issue. We have access to the site currently (sad to say, we have no snow), and I sent a tech up with a Pulse GPS and the pinout diagram to build the cable that takes it into the CMM. INSTANTLY BETTER! Yup, the 450i APs immediately got sync and have been rock solid at the full 100%. We will monitor this for a day or two, but I'm thinking it was a bad GPS.
It was frustrating to track down, since the GPS was working great for the 450Ms and the details on the CMM showed it as working great. Swapping the GPS will be a first step at all our mountaintop sites if we start running into issues in the future.