PMP 450i: all SMs dropping to 1X/1X modulation

We just had the third occurrence of a strange problem on a PMP 450i 900 MHz omni running 20.0.1. All SMs will suddenly drop their modulation to 1X/1X and then take around 30 minutes to recover. We suspected a GPS sync issue, and the event log does show that GPS sync dropped briefly but recovered.

SMs were reachable, but the interface was very slow. Rebooting an individual SM resulted in the SM deregistering and not reconnecting. Rebooting the AP resulted in the same: all SMs deregistered and did not come back. All are indications of a timing problem.

The AP is not co-located with another 450i, nor is there any other 900 MHz gear in the area. We have another 450i 900 MHz omni 5+ miles away on the other side of a hill, and spectrum analyzers show that they do not see each other, but we use GPS sync on both APs anyway as a precaution. To date, we have not seen any indications of this problem on the other AP.

The AP was reachable and showed it was receiving GPS. We changed to internal sync on the AP, but it did not alleviate the problem, nor did a frequency or bandwidth change. In all three cases everything eventually recovers on its own after about 30 minutes, returning to its previous 8X/8X and 8X/7X modulation levels.

After the latest event we were looking at the Sync Status page on the affected AP, and it does not appear to be populating as it should. It shows 11 satellites visible and 16 tracked; how it can track more than it can see seems odd. It also shows the tracked SNR as 0, and the DOP fields are blank. The visible satellite signal information is also blank. Forty-five minutes after everything stabilized, the only change on the Sync Status page was that it is now tracking 18 of 11 visible satellites.
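In case it helps anyone chasing the same anomaly, below is a minimal Python sketch for logging those GPS counters over time, so that "tracked > visible" readings get timestamps you can line up against the event log. The AP address, community string, and OIDs are placeholders I made up for illustration (the real GPS/sync OIDs are in Cambium’s WHISP MIB and vary by firmware), so treat this as a starting point rather than a drop-in tool.

```python
#!/usr/bin/env python3
"""Poll the AP's GPS/sync counters once a minute and print timestamped
readings, so anomalies like 'tracked > visible' or SNR stuck at 0 can be
correlated with the event log. The OIDs below are placeholders; substitute
the real ones from Cambium's WHISP MIB for your firmware version."""
import time
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

AP_HOST = "10.0.1.1"        # hypothetical AP management IP
COMMUNITY = "public"        # read-only community string
OIDS = {                    # placeholder OIDs -- check the WHISP MIB
    "satellitesVisible": "1.3.6.1.4.1.161.19.3.3.2.1.0",
    "satellitesTracked": "1.3.6.1.4.1.161.19.3.3.2.2.0",
}

def poll_once():
    readings = {}
    for name, oid in OIDS.items():
        err_ind, err_stat, _, var_binds = next(getCmd(
            SnmpEngine(), CommunityData(COMMUNITY),
            UdpTransportTarget((AP_HOST, 161), timeout=2, retries=1),
            ContextData(), ObjectType(ObjectIdentity(oid))))
        readings[name] = "n/a" if (err_ind or err_stat) else str(var_binds[0][1])
    return readings

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"), poll_once(), flush=True)
    time.sleep(60)
```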

My first thought is that the Packetflux sync source via the UGPS port is malfunctioning, but the swap to internal sync didn’t help anything, so it seems to me that if the Packetflux were the issue, internal sync should have recovered the SMs. The Packetflux is mounted close to the AP, and the AP shows a good connection.

Since the other 900 MHz 450i AP is running the exact same configuration and has not had these issues, one would assume that rules out a software issue…

Has anyone encountered this type of scenario before? Right now we plan on replacing the Packetflux and cable once the weather clears, but since the unit didn’t recover on internal sync, we’re not confident that will actually resolve the issue.

Where are the units deployed? Is there any chance of overheating, or of some other external factor…

Are you using Cambium’s UGPS or Packetflux’s SyncBox Jr?
This is important, as the UGPS is not completely compatible with the Packetflux sync injector and the two should not be interconnected.
If it is a SyncBox, check that you are getting stable power to the sync injector. The SyncBox needs stable power from the injector, or it will lose satellite lock and fail to pass sync to the injector.

It’s odd that flipping to internal sync won’t restore the SMs’ link rates.
Did you reboot the AP after setting sync to internal?
I know it can take a few minutes for the SMs to ramp up, but not 30 minutes. Have you tried rebooting all SMs and then the AP for a full sector reboot? Try doing this all at once.
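If it helps to script that ordering, here is a rough sketch of "SMs first, AP last" over SNMP. The reboot OID, write community, and IPs are all placeholders invented for illustration; look up the actual control OID in Cambium’s WHISP MIB and confirm SNMP write access is enabled before trying anything like this.

```python
#!/usr/bin/env python3
"""Sketch of a full sector reboot: send a reboot to every SM first, then
to the AP last, so nothing re-registers against a half-restarted sector.
The reboot OID below is a made-up placeholder -- find the real control
OID in Cambium's WHISP MIB before using."""
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          Integer, setCmd)

REBOOT_OID = "1.3.6.1.4.1.161.19.3.3.1.99.0"  # placeholder, not a verified OID
WRITE_COMMUNITY = "private"                   # hypothetical write community
SM_HOSTS = ["10.0.1.11", "10.0.1.12"]         # hypothetical SM IPs
AP_HOST = "10.0.1.1"                          # hypothetical AP IP

def reboot(host):
    err_ind, err_stat, _, _ = next(setCmd(
        SnmpEngine(), CommunityData(WRITE_COMMUNITY),
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(REBOOT_OID), Integer(1))))
    status = "reboot sent" if not (err_ind or err_stat) else f"failed: {err_ind or err_stat}"
    print(host, status)

for sm in SM_HOSTS:  # SMs first...
    reboot(sm)
reboot(AP_HOST)      # ...then the AP last
```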

How sure are you this is a sync problem? My first thought would be that 900 MHz is being used by a utility for meter reading.

We have almost entirely stopped using 900 MHz due to power company smart meters; they wipe out the entire 902-928 MHz band. But if you had, let’s say, a water utility that only uses it for 30 minutes to read the meters, it could cause what you describe.

There is also the possibility of a very high transmit power service just below or above the unlicensed band, for example paging. I assume you’ve run SA and remote SA during one of these events.

Actually, the opposite is true. The standing air temperature is 5 degrees with a -16 wind chill, and this occurred in temps in the 50s as well. The point may be moot, though: we lost communication with the AP about 4 in the morning. With the cold we suspect a cable, but given the previous weirdness, the AP may have rolled over. It will be a few days before the weather will allow examination. We tried the obvious: new power injector, different switch port, jumper replacement, etc., to no avail.

Douglas, we are using a Packetflux unit… Since it appears there is either a cable issue or a failed AP, that could also be affecting the Packetflux unit. I was also surprised it did not recover on internal sync, and that it basically ignored any attempt to recover until it felt like it. All instances were the same, with extended recovery time. We did reboot the sector (several times) and even tried a hard reboot by killing power, with no change. I tried rebooting one SM initially, but it did not recover, and we lost the rest when the AP was rebooted.

We also suspected a possible interference problem, though we are very rural, and this unit had been operating for months with no issues, running 2 to 3 weeks between events. I did run a spectrum analysis from the AP and there is really nothing there; everything is down around -80. As noted, we have another 900 MHz AP 5+ miles away, blocked by a hill; neither unit sees the other, and they have identical settings with Packetflux sync on both. The other unit is operating normally, also with good modulation. We have run SA on the SMs multiple times and they only see the tower.

We’ll have to get the AP recovered once the weather clears. If it has hard failed, then all of this is symptomatic of that. We have had some record low temps, i.e. 10 below, over the last few nights, but our other 900 has weathered it with no issues. It’s a mystery, and we really can’t do much else until we can get hands on.

Unfortunately, with the exception of one customer that we were able to swing over to a newly lit 2.4 GHz AP, the rest have no option but 900 MHz. I could fire up the old FSK; customers would love that throughput, LOL.

When running SA, of course you need to run it while the problem is occurring.

900 MHz is borderline obsolete, IMHO, due to interference, low throughput, and the availability of other broadband options like Starlink. Trying to use an omni just makes it worse, as does trying to use a wide (20 MHz) channel. If SMs won’t even stay registered, I’d drop to a 5 or 7 MHz channel in the middle of the band, at least to see if they will register so you can regain control over them and do things like remote SA.

I do understand that currently the AP is totally down, so you can’t do much until weather permits replacing it.

We have one remaining 900 MHz 450i AP still in service with 2 subs. It is currently in 0 degree F weather, and 2 years ago it saw, I think, -15 F. But I recall a discussion, maybe in this forum, about some defective 900 MHz APs that couldn’t withstand cold.

I am thinking you are getting interference, as Ken suggested. I was wondering about the Packetflux because you mentioned using the UGPS, and those two together are known to be a bad combination.

As for utilities on the unlicensed bands, here in Canada that’s not an allowed use. They must have a licensed channel, or they would have to comply with the same power levels we do.

What channel size are you running? If it’s interference, then running the AP in SA for a day would let you see if it’s something you can maneuver around. Once you know the offender’s channel and size, you can either match it or move to another channel away from it.
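If watching the SA page for a day by hand isn’t practical, a small script can snapshot it periodically so an intermittent offender still leaves a trail. This is just a sketch under assumptions: the URL path and credentials below are hypothetical, and depending on the firmware you may need a login session or an SNMP equivalent instead of a plain GET.

```python
#!/usr/bin/env python3
"""Save timestamped snapshots of the AP's spectrum analyzer page once a
minute for a day, so a short-lived interferer is captured even when no
one is watching. URL path and credentials are hypothetical -- adjust for
your radio's web UI."""
import time
import requests

AP_URL = "http://10.0.1.1/spectrum_analysis.cgi"  # hypothetical SA page path
AUTH = ("admin", "password")                      # hypothetical credentials

for _ in range(24 * 60):  # one snapshot per minute for 24 hours
    try:
        resp = requests.get(AP_URL, auth=AUTH, timeout=5)
        fname = time.strftime("sa_%Y%m%d_%H%M%S.html")
        with open(fname, "w") as f:
            f.write(resp.text)
    except requests.RequestException as exc:
        print("snapshot failed:", exc)
    time.sleep(60)
```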

It’s a misconception that 900 MHz is outdated. The PMP 450i series can push 150 Mbps over 900 MHz if you have enough SNR to keep the modulation up, though I would use slant 45 for the antenna configuration, as it’s more tolerant of V/H users.

We’re running a 10 MHz channel, and normally have most clients at 8X/8X MIMO-B with a few at 8X/7X. The uplink side is similar, though it’s a little noisier using the KP omni; still, none are below 8X/6X. During our original scans, there was some low-level stuff bouncing above the noise floor at the low end of the spectrum, i.e. the 900-904 area. Honestly, we couldn’t see any modulation differences anywhere from about 910 up, so we decided to go high, as far from the low end as possible.

Sorry for the confusion on the UGPS; we are using Packetflux, just via the UGPS port.

There almost has to be something for the smart meters in town, and we assume what we were seeing on the low end was that. During the event we tried various frequencies up and down the spectrum with no change. We also tried dropping to a 7 MHz channel, with no change. The SA didn’t show anything of any significance beyond what we saw in our original scans, but as you say, it was not extended. Still, one would assume that something knocking the 450i off the air for 30 minutes would show up in a scan during the event, since it would presumably be of sufficient power to throw the 450i for such a loop. Drops in modulation level and registration inconsistency I would understand, but not the basic shutdown we’ve been experiencing. To us it indicates timing, but changing to internal sync made no difference. The only smoking gun, such as it is, is the Sync Status page not populating as expected and generally being weird.

We’ve run fine for months, so this is fairly recent. There is a lot of energy-related activity in the area, i.e. pipelines that most likely use 900 MHz for monitoring, and I suppose it’s possible they could be firing something up, but we haven’t seen any hard evidence of it yet. I would actually expect to see it on our other tower, as it’s out in the woods surrounded by that stuff, and it has trucked along with no issues. We were having no problem with 10 MHz channels and had even played with 15 MHz, but that’s pushing it with an omni.

When this ice and snow finally clear enough, we’ll run a new cable and take up a new Packetflux. Hoping it’s just a cable issue, since the unit is offline. It may turn out that the unit has hard failed and all of this is a symptom of its demise. I hope it’s not a hard failure, as they don’t give these things away.

It’s been a bit of a learning curve with 900 MHz and getting everything optimized, but I agree with you on its viability. Every customer is one we would not otherwise be able to reach, whose only option was satellite.

If the weather holds we will be able to climb Thursday or Friday.

Thanks for your suggestions!

If you are using the sync port, then make a new cable and take a spare SyncBox. Since the Sync Status page does not populate, the AP is not getting proper data from the SyncBox. This is most likely a cable, but it could be the SyncBox or the AP. I have had similar problems with Canopy AUX ports that just up and stopped working, and the 450i AUX port is the same technology.

Do you have a Packetflux sync injector? You need the newer Cambium sync version. If so, try it. I am leaning towards an AP hardware issue now, but hoping it’s just a dying AUX port.

Thanks Douglas, we plan on cable replacement and a new Packetflux. As you say, hoping it’s a cable/Packetflux issue, but that’s not the way my luck runs, LOL. Six more inches of snow yesterday, but by Friday we should be able to get to the AP.

Just to provide some closure: we were able to climb today, and the AP has failed. The cable tested good, and the Packetflux was not getting any power via the AUX port. We tried powering the unit with a cable/battery pack and PoE injector and got nothing. Pulled it from the tower; same result in our test lab. Not sure what caused the failure, as we haven’t had any weather events we could relate to it for several months, and the ePMP gear on the same tower has had no issues. We’ll be sending it off to see if it can be repaired.


Because I hate reading open-ended posts, I thought I would post the final resolution. We shipped off the AP, and per the repair facility, the “dead” issue is a common problem they see, requiring some component replacements, and it is due to cold. Coincidentally we had some sub-zero temps (-10), but per the specs this should not have caused any issue. Our other 900 MHz 450i AP had no issues, nor did any of the ePMP equipment. I am only repeating their observation, as they said they have become quite adept at the repair and can do it in their sleep.

When the unit came back, we then experienced an issue with the AP maintaining sync (which was a suspected element of the original problem, though it never reported as such). It would continuously cycle through the sync process and never hold sync for longer than a few seconds. Same indication after the cable was replaced, as well as the Packetflux. Back to the shop, where the AUX port was replaced with no difference. We shipped the facility a Packetflux to test with, and after some more component replacements it worked fine. It’s possible the repair facility jacked up something while they were in there…

The unit is back on the tower and seems to be operating normally. We’re as confident as we can be that there wasn’t an external event such as a lightning strike. The unit is well grounded, there are other AP and BH radios on the tower, and no other equipment has shown any issue. No issues with the power supplies or lightning arrestors, and the grounds all test good.

Typically the reports on the 450 series are that it is quite robust, but I guess there are potential component issues in any manufacturing process. It’s a bit concerning that the repair facility mentioned how frequently they see this AP power failure due to cold. I suspect exaggeration, as surely there are many operating in cold climates. Maybe there’s just a bad batch out there.
