Any known GPS problems with ePMP1000 on the newest firmwares?

We have a couple of ePMP1000 APs with GPS, using internal GPS, where the logs are absolutely stuffed with GPS messages like this 10-second sample:

2020-04-30T13:32:50-04:00 Bennettsville_ePMP_AP3 kernel [44072.650000] 1PPS: 44072657482003
2020-04-30T13:32:50-04:00 Bennettsville_ePMP_AP3 kernel [44072.650000] SYNC: Time Diff 7000004715, avg_per 1000000759, SYNC: Deviation 598
2020-04-30T13:32:50-04:00 Bennettsville_ePMP_AP3 kernel [44072.650000] SYNC: Pulse OK, per 1000001534
2020-04-30T13:32:50-04:00 Bennettsville_ePMP_AP3 kernel [44072.650000] SYNC: New clk drift: period = 500000288, value = 28823, num_diff = 50
2020-04-30T13:32:50-04:00 Bennettsville_ePMP_AP3 kernel [44072.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:51-04:00 Bennettsville_ePMP_AP3 kernel [44073.650000] 1PPS: 44073657484901
2020-04-30T13:32:51-04:00 Bennettsville_ePMP_AP3 kernel [44073.650000] SYNC: Time Diff 6000002770, avg_per 1000002929, SYNC: Deviation 14804
2020-04-30T13:32:51-04:00 Bennettsville_ePMP_AP3 kernel [44073.650000] SYNC: Pulse !OK, per 1000002898
2020-04-30T13:32:51-04:00 Bennettsville_ePMP_AP3 kernel [44073.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:52-04:00 Bennettsville_ePMP_AP3 kernel [44074.650000] 1PPS: 44074657482248
2020-04-30T13:32:52-04:00 Bennettsville_ePMP_AP3 kernel [44074.650000] SYNC: Not enough stable periods
2020-04-30T13:32:52-04:00 Bennettsville_ePMP_AP3 kernel [44074.650000] SYNC: Pulse !OK, per 999997347
2020-04-30T13:32:52-04:00 Bennettsville_ePMP_AP3 kernel [44074.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:53-04:00 Bennettsville_ePMP_AP3 kernel [44075.650000] 1PPS: 44075657483529
2020-04-30T13:32:53-04:00 Bennettsville_ePMP_AP3 kernel [44075.650000] SYNC: Time Diff 9000005754, avg_per 1000001149, SYNC: Deviation 4587
2020-04-30T13:32:53-04:00 Bennettsville_ePMP_AP3 kernel [44075.650000] SYNC: Pulse !OK, per 1000001281
2020-04-30T13:32:53-04:00 Bennettsville_ePMP_AP3 kernel [44075.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:54-04:00 Bennettsville_ePMP_AP3 kernel [44076.650000] 1PPS: 44076657483315
2020-04-30T13:32:54-04:00 Bennettsville_ePMP_AP3 kernel [44076.650000] SYNC: Time Diff 8000000451, avg_per 1000000741, SYNC: Deviation 5477
2020-04-30T13:32:54-04:00 Bennettsville_ePMP_AP3 kernel [44076.650000] SYNC: Pulse !OK, per 999999786
2020-04-30T13:32:54-04:00 Bennettsville_ePMP_AP3 kernel [44076.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:55-04:00 Bennettsville_ePMP_AP3 kernel [44077.650000] 1PPS: 44077657487031
2020-04-30T13:32:55-04:00 Bennettsville_ePMP_AP3 kernel [44077.650000] SYNC: Not enough stable periods
2020-04-30T13:32:55-04:00 Bennettsville_ePMP_AP3 kernel [44077.650000] SYNC: Pulse !OK, per 1000003716
2020-04-30T13:32:55-04:00 Bennettsville_ePMP_AP3 kernel [44077.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] Sync Manager: evt (RELIABLE THRESHOLD TIMER EXPIRED), state(SYNC) -> (SYNC)
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] 1PPS: 44078657488150
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] SYNC: Time Diff 8000008233, avg_per 1000001104, SYNC: Deviation 599
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] SYNC: Pulse OK, per 1000001119
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] SYNC: New clk drift: period = 430000277, value = 27730, num_diff = 43
2020-04-30T13:32:56-04:00 Bennettsville_ePMP_AP3 kernel [44078.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:57-04:00 Bennettsville_ePMP_AP3 kernel [44079.650000] 1PPS: 44079657485259
2020-04-30T13:32:57-04:00 Bennettsville_ePMP_AP3 kernel [44079.650000] SYNC: Not enough stable periods
2020-04-30T13:32:57-04:00 Bennettsville_ePMP_AP3 kernel [44079.650000] SYNC: Pulse !OK, per 999997109
2020-04-30T13:32:57-04:00 Bennettsville_ePMP_AP3 kernel [44079.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:58-04:00 Bennettsville_ePMP_AP3 kernel [44080.650000] 1PPS: 44080657485825
2020-04-30T13:32:58-04:00 Bennettsville_ePMP_AP3 kernel [44080.650000] SYNC: Time Diff 9000005356, avg_per 1000000806, SYNC: Deviation 1898
2020-04-30T13:32:58-04:00 Bennettsville_ePMP_AP3 kernel [44080.650000] SYNC: Pulse OK, per 1000000566
2020-04-30T13:32:58-04:00 Bennettsville_ePMP_AP3 kernel [44080.650000] SYNC: New clk drift: period = 450000254, value = 25405, num_diff = 45
2020-04-30T13:32:58-04:00 Bennettsville_ePMP_AP3 kernel [44080.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)
2020-04-30T13:32:59-04:00 Bennettsville_ePMP_AP3 kernel [44081.650000] 1PPS: 44081657486190
2020-04-30T13:32:59-04:00 Bennettsville_ePMP_AP3 kernel [44081.650000] SYNC: Time Diff 9000004187, avg_per 1000000775, SYNC: Deviation 2788
2020-04-30T13:32:59-04:00 Bennettsville_ePMP_AP3 kernel [44081.650000] SYNC: Pulse !OK, per 1000000365
2020-04-30T13:32:59-04:00 Bennettsville_ePMP_AP3 kernel [44081.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)

The radios are currently running 4.5 firmware, with GPS firmware AXN_5.1_8174. One of the APs was just updated from 4.4.3 to 4.5; I don't recall seeing these messages on 4.4.3. That one doesn't have nearly as bad a session-drop problem. The other AP keeps dropping sessions and is constantly generating customer complaints. I sent my guy out to the tower a few minutes ago to put both on a PacketFlux injector; until now they have been running on the included PoE bricks. (We ran out of available injector ports and got lazy.) But I'm trying to figure out what has been happening, and whether there's another reasonable fix available.
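If it helps anyone compare before/after (old puck vs. new puck, brick vs. injector), here is a quick sketch for tallying how often the 1PPS pulse is accepted vs. rejected in a captured syslog. The pattern matches the log format pasted above; this is just an illustrative script, not a Cambium-provided tool, so adjust the regex if your syslog template differs.

```python
import re

# Match "SYNC: Pulse OK" and "SYNC: Pulse !OK" lines from an ePMP syslog capture.
PULSE_RE = re.compile(r"SYNC: Pulse (!?OK)")

def pulse_stats(lines):
    """Count accepted ("OK") vs. rejected ("!OK") 1PPS pulses."""
    counts = {"OK": 0, "!OK": 0}
    for line in lines:
        m = PULSE_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "kernel [44072.650000] SYNC: Pulse OK, per 1000001534",
    "kernel [44073.650000] SYNC: Pulse !OK, per 1000002898",
    "kernel [44074.650000] SYNC: Pulse !OK, per 999997347",
]
print(pulse_stats(sample))  # {'OK': 1, '!OK': 2}
```

A high and climbing `!OK` ratio over a longer capture would suggest the GPS signal itself is marginal rather than just chatty logging.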

j

Hi,

We've added additional logging to help debug GPS-related issues. Those messages should not cause disconnects or any other problems. The case where GPS is lost for a short period is covered by the holdoff period, so please check that it is set to more than five minutes in the settings.

Could you please also share a tech support dump with our support team so we can check the reason for the session drops?

Thanks,

Dmitry

I had a 1000 Lite on a PTP link do this; it had an old round GPS puck. With the square one I still get these messages, but now it's a tolerable rate of 2 or 3 every 10 seconds rather than 15 every 10 seconds. These new messages are a tad confusing at first, but they helped us find an old GPS puck!


Keep in mind that the logging level is “warning” for all of these messages instead of “debug”.
So, in order to ignore them, we would have to set the log level to “error” or more critical.
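Until the severity is changed, one workaround is to drop the chatter on the syslog collector instead of raising the radio's log level. A minimal sketch, assuming these messages are forwarded to a remote collector you control; the substring tags are taken from the sample above and may need adjusting for other firmware versions.

```python
# Drop the per-second GPS sync chatter on the collector side, keeping
# "warning" level intact on the radio so genuine warnings still arrive.
NOISY_TAGS = ("1PPS:", "SYNC:", "Sync Manager:")

def keep(line):
    """Return True for lines that are NOT known GPS sync chatter."""
    return not any(tag in line for tag in NOISY_TAGS)

lines = [
    "kernel [44072.650000] 1PPS: 44072657482003",
    "kernel [44072.650000] SYNC: Pulse OK, per 1000001534",
    "kernel [44078.650000] Sync Manager: evt (PULSE), state(SYNC) -> (SYNC)",
    "dhcpd: lease granted",
]
print([l for l in lines if keep(l)])  # ['dhcpd: lease granted']
```

The same substring matches translate directly into an rsyslog or syslog-ng filter rule on the collector if you'd rather not post-process files.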