Flaky Power Supply

Had an interesting one. The customer's connection was fine for quite a while, but during the last round of storms the power went out repeatedly.

The connection got flakier and flakier. It was acting like a virus, but that proved not to be the case. I replaced the radio, but that did not fix it. It turns out the power outages caused partial (but not total) damage to the power supply.

The last time the customer unplugged the power supply, it finally died. Replaced it and away they went.

So the new thing at the top of the troubleshooting list is to swap the power supply before breaking out the ladder.

Jerry, out of the last 3 or 4 boxes of power pucks I have had 1 or 2 dying soon after install or some months into service.

Sucks to roll a truck to replace a power puck.

I have also had a number of these fail; they aren't the best quality. I have also had problems with the pins on the power injector side getting stuck down and not making a very good connection. I hate having to rebuild those things in the field.


Aaron

I have had about 4 fail. The fix? Just crimp the end of the pigtail again. The pins on the CAT5 connector are not crimped all the way from the factory. Now I re-crimp them all.

You'll notice sometimes they are hard to plug in; the pins are too high.

I can’t wait until the others fail…

Hope this helps.

Eric

We have a situation at a location that keeps blowing the adapters… we have put in a UPS, a line conditioner, and a big smart UPS, but still the same thing. Changed the SM and still have problems.

Works fine for 1-2 days and then bang?

We have had a very low failure rate with these units, but as part of our standard installation we “give” the customer a quality surge-protected power strip. This not only protects us from a variety of potential electrical issues but also guarantees that the Canopy transformer is not 2 or 3 (some houses are a real nightmare) $1.59 power strips away from the outlet. We offer a UPS upsell to all of our customers as well.

Most of our failed units have been associated with cabling issues. I have seen a tech blow 3 supplies before I told him to test the cable or he would be buying them out of his separation check. After he replaced the cable run, the problem was resolved (a couple of staples through the cable can cause very interesting results after several months in the field).

Finally, the radios can exhibit very strange behavior in low voltage situations and some houses just are not properly grounded (each event reduces the effectiveness of the ground).

Which confirms what I was thinking yesterday: I am going to require that outlets be tested with an outlet tester prior to connecting the power supplies.

One of our sites has an AC unit in the garage that constantly turns on and off, blowing the power supply for the BH… took me forever to figure that one out. It would last a few hours to a couple of days… swapped all the cables and the BH and it still happened… I blew through about 5 power supplies.

I've seen the AC kicking in and blowing the adapter… changing the cable is the only thing we have not tried… will let you know…

Hmmm… I've never had a problem with the supplies “blowing up” (and we see some really bad power at coal mines and gas plants). I have had the pins on the female side not be snapped in all the way, and they slide back over time. The one supply I did have problems with a few years ago was only because it was running two 5.8 BH radios - it lasted about a year before it died. Other than that, it has mostly been the female or male RJ-45 parts.


Aaron

We've had the problem with a few power supplies going bad as well… I'm getting frustrated, though, at the Canopy modules just randomly rescanning. Almost all of my customers go down at least 3-4 times a day for anywhere between 30 seconds and 15 minutes. When you check their modules, they are re-scanning. They don't reboot and they're locked to 1 frequency, so I'm pretty lost as to why they re-scan. Some of them even have an RSSI of 1700+ and a jitter of 2-4. It's pretty frustrating >< It seems like everything works at night but not during the day.

If the AP loses sync it will throw everyone off…

They're all at different times - sorry, I didn't include that.

Edit: to be more clear - one module may go down for 21 seconds at 2pm, 5pm, and 7pm; another might go down at 3pm, 4pm, and 6pm; and another might go down at 1pm, 3:30pm, and 5:30pm, etc. It's totally different each time and I can't find a reason for it. Anyone else have this experience?

I would still concentrate on them being out of sync. What's in the logs?


If possible, can you switch everything off, change one AP to an SM, and do a spectrum scan?

What sort of environment is it? Is there anything local that may be sending out an RF signal - ships/planes/radar?

When the SMs start scanning, what do you see if you go into alignment? Can you see the AP?

CMM logs?

In and out all day today and Monday; will post the results on Monday or Tuesday…

The problem with the SMs going down is partially that it's so random during the day: I have no idea when it's going to happen, so I can't look at what the SM is doing, because it's only down for a matter of minutes.

Some of them even have an RSSI of like 1700+ and a jitter of 2-4. It's pretty frustrating >< It seems like everything works at night but not during the day.


Chas, this is interference. The problem is likely that you are operating your SMs too close to the noise floor. During the day the noise floor rises and causes SMs to re-reg; at night the noise floor drops.

Ideally you would have some type of logging analyzer like the SpectraLAN Data Logger - http://www.lessemf.com/rf.html. You would set it up to take readings multiple times a day for several days to figure out what is going on.

Barring the SpectraLAN, here is how I would start troubleshooting this.

Turn off all the AP's except one.

Typically RF levels will be highest before and after lunch. Between 9am and 11am, or 2pm and 4pm, flip the AP to SM mode and look at the spectrum - don't change anything, just print the analysis page. Repeat for each AP. It takes about 5 minutes per AP.

Repeat the above, except do it around 10pm.

Compare the two analyses and look at the noise floor. You will want every SM's dB level at the AP to be at least 10 dB higher than the highest background noise you see (a quick sketch of the math is below). If you don't, the only answer is more gain at the SM side.

You may see frequencies that are quieter than others; you might try moving to those. We leave all frequencies checked on the SMs so that if we need to change frequencies we can. If you only have one frequency selected and SMs are not registered, you cannot change the frequency on the AP without rolling a truck.
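
To put actual numbers on the 10 dB rule above, here is a minimal sketch of the margin math in Python. All of the readings below are made-up placeholders; plug in the per-SM levels you read off the AP and the noise floor from your spectrum analyses.

    # Toy margin check: every SM's receive level at the AP should sit at
    # least 10 dB above the worst-case noise floor seen in the scans.
    # All numbers below are placeholders, not real readings.
    noise_floor_day = -78    # dBm, highest background level from the daytime scan
    noise_floor_night = -86  # dBm, same channel at night

    sm_levels = {            # dBm, per-SM power level as seen at the AP
        "SM-01": -72,
        "SM-02": -65,
        "SM-03": -77,
    }

    worst_noise = max(noise_floor_day, noise_floor_night)  # the higher (worse) floor
    for name, level in sm_levels.items():
        margin = level - worst_noise
        verdict = "OK" if margin >= 10 else "needs more gain or a quieter channel"
        print("%s: %d dB margin -> %s" % (name, margin, verdict))

Anything that only clears the nighttime floor but not the daytime one is exactly the kind of link that drops out during the day and comes back at night.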

Hope this helps

You could also get a 30-day eval from Dartware for their InterMapper application (http://www.dartware.com/intermapper/wisp/index.html). It will let you directly monitor and log all of the link quality values (RSSI, jitter, etc.) as strip charts. Looking at a real-time graph of the affected units will often let you see a pattern and determine which SM links are failing as a group and when.
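
If you want something in place before the eval even arrives, a timestamped ping loop against the affected SMs will at least show you exactly when each one drops and comes back. A rough sketch in Python, assuming the SM management IPs are reachable from wherever you run it (the addresses are placeholders):

    #!/usr/bin/env python
    # Rough outage logger: pings each SM once a minute and records
    # when it stops or starts responding. IPs below are placeholders.
    import subprocess, time, datetime

    SM_IPS = ["10.0.1.11", "10.0.1.12"]   # replace with your SM management IPs
    state = {ip: True for ip in SM_IPS}   # assume everything starts "up"

    def is_up(ip):
        # one ping, 2-second timeout (Linux ping syntax)
        return subprocess.call(["ping", "-c", "1", "-W", "2", ip],
                               stdout=subprocess.DEVNULL) == 0

    while True:
        for ip in SM_IPS:
            up = is_up(ip)
            if up != state[ip]:
                stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                print("%s  %s  %s" % (stamp, ip, "UP" if up else "DOWN"))
                state[ip] = up
        time.sleep(60)

Leave it running in a screen session and line the DOWN/UP timestamps up against whatever shows in the AP and CMM logs afterwards.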

That’s a good idea.

You could also use Cacti for this - it’s free for unlimited nodes. Enable monitoring along with thresholds on RSSI and Jitter.

Somebody here wrote some pretty nice scripts to run on Cacti. Maybe he could write a report that shows re-reg counts.
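
For what it's worth, you don't need a full Cacti template just to watch a re-reg counter; a small script that polls it over SNMP and appends to a CSV works for a first pass. A rough sketch, assuming net-snmp's snmpget is installed; the OID below is only a placeholder, so substitute the actual re-registration counter from the Canopy MIB:

    #!/usr/bin/env python
    # Poll a re-reg style counter from each SM over SNMP and append to a CSV.
    # RE_REG_OID is a placeholder - look up the real counter in the Canopy MIB.
    import csv, subprocess, datetime

    COMMUNITY = "public"                    # your read community
    RE_REG_OID = "1.3.6.1.4.1.x.x.x"        # placeholder OID, not a real one
    SMS = ["10.0.1.11", "10.0.1.12"]        # SM management IPs

    def snmp_get(ip, oid):
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", ip, oid])
        return out.decode().strip()

    stamp = datetime.datetime.now().isoformat()
    with open("rereg_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for ip in SMS:
            try:
                writer.writerow([stamp, ip, snmp_get(ip, RE_REG_OID)])
            except subprocess.CalledProcessError:
                writer.writerow([stamp, ip, "no response"])

Run it from cron every few minutes; a jump in the counter between two rows tells you roughly when that SM re-registered.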

We currently run InterMapper and MRTG (Cacti is souped-up MRTG).

We have found that you need to be very careful when using SNMP monitoring tools; they can have drastic performance effects on the RF.

Be careful with the amount of data you are pulling and the frequency; it's not so much the bandwidth, it's more the pps (packets per second).
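
One cheap way to keep the pps down is to stagger the polls across the polling interval instead of hitting every radio at once. A minimal sketch of the idea (the device list, interval, and example OID are just placeholders, and poll_one is a stand-in for whatever your poller actually does):

    # Spread SNMP polls out over the polling interval instead of bursting them,
    # so the per-second packet load on the RF side stays low.
    import subprocess, time

    devices = ["10.0.1.%d" % i for i in range(11, 31)]   # example: 20 radios
    poll_interval = 300                                   # seconds between full sweeps
    gap = float(poll_interval) / len(devices)             # per-device spacing (~15 s here)

    def poll_one(ip):
        # stand-in for your real poller; one small query per pass
        subprocess.call(["snmpget", "-v2c", "-c", "public", "-Ovq",
                         ip, "1.3.6.1.2.1.1.3.0"],        # sysUpTime, just as an example
                        stdout=subprocess.DEVNULL)

    while True:
        for ip in devices:
            poll_one(ip)
            time.sleep(gap)   # spread the queries out instead of sending them back to back

Twenty radios polled 15 seconds apart puts a tiny, steady trickle of packets on the air instead of a burst every 5 minutes.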

Thanks for the great advice - will start working on all of those now.