new bandwidth problem

we are now getting reports from customers that service speed will come and go…almost "surge" quite regularly. we're load-balancing 3 DSL lines across 105 customers…and at the NOC we're seeing 2400-2500 Kbps per line (which is what we should be seeing according to the graphs Cacti is making for us).
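For scale, a quick back-of-the-envelope on those numbers (assuming the 2400 K figure is per-line capacity rather than usage):

```python
# Back-of-envelope oversubscription check, using the figures from the post above.
lines = 3
per_line_kbps = 2400          # reported per-line capacity
customers = 105

total_kbps = lines * per_line_kbps           # aggregate across all 3 DSL lines
per_customer_kbps = total_kbps / customers   # worst case: everyone active at once

print(f"aggregate: {total_kbps} Kbps")
print(f"per customer if all active: {per_customer_kbps:.1f} Kbps")
```

Roughly 7.2 meg aggregate, under 70 Kbps per customer at full contention. That is ordinary oversubscription territory, so the upstream pipe alone may not explain the surging.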

Somewhere outside of that room - either where we're linking the output from the router (via a cheap Linksys 5-port switch - is this our problem?) to the Canopy 5.7 jump to our first tower, or on to our customers - the speed is dropping to 800k or sometimes even less…sometimes a page can't even be displayed (no connectivity).

RF looks fine across the board…bandwidth looks fine where it all comes in. I’m at a loss. Any suggestions?

We're 900 MHz to all the customers…we like for them to see between 1.5 and 3 meg. We have several 900 APs…the busiest one has 22 customers on it.

Our ISP is roughly the same size as yours. We have about 120 customers served from a 7 Mb pipe, which comes to us as straight Ethernet from a telco… Anyhow, I rarely see our WAN go over 5 Mb usage. We use a Cisco 2620 router which goes to a Canopy 20 Mb BH to get the connection out to our POP, where we have a 5.2 cluster and a 900 AP all connected by a Cisco 2900 switch. No complaints of speed.

Are you saying that all 3 lines are showing 2400-ish K as used? What is the rated capacity of the DSL lines? What type of router/equipment is doing your load balancing? If your three lines aren't fully saturated, then I might look at the switch.

actually…no…2400k available. Roughly 600k usage on each line, with the occasional 1- or 2-meg spike. The 600k is sustained. I actually plugged in another switch tonight…hope that helps. Any recommendations on a managed switch? Would that help?

When looking for a switch, I would recommend a Cisco much like we have. However, anything that is 'non-blocking' should be OK. This simply means that the switch fabric has enough bandwidth that all ports could potentially be active at once without slowdown.
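To put a number on 'non-blocking': the fabric has to carry every port at full-duplex line rate simultaneously. A minimal sketch, using a hypothetical 24-port 100 Mb switch as the example:

```python
# Rough "non-blocking" check: the switch fabric must be able to carry
# every port sending AND receiving at line rate at the same time.
def required_fabric_gbps(ports, port_speed_mbps):
    # x2 because full duplex means each port can transmit and receive at once
    return ports * port_speed_mbps * 2 / 1000

# Hypothetical example: a 24-port 100 Mb switch needs a 4.8 Gbps fabric
print(required_fabric_gbps(24, 100))
```

If the fabric spec in the datasheet is below that figure, the switch can block under load.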

Do you have a CIR from your DSL provider? What type of equipment is doing your routing and load balancing?

yes…we do have a CIR from our upstream provider - and everything is working well in our NOC. I’m looking at the wireless link.

Here is the linktest from our NOC…5.7 AM/SM. I'm being led to believe that this could be problematic, although it's certainly enough bandwidth for us?

Stats for LUID: 2 Test Duration: 10 Pkt Length: 1522
Downlink RATE: 6139084 bps
Uplink RATE: 5903155 bps
Downlink Efficiency: 88 Percent
Max Downlink Index: 100
Actual Downlink Index: 88
Expected Frag Count: 119904
Actual Frag Count: 136002
Uplink Efficiency: 91 Percent
Max Uplink Index: 100
Actual Uplink Index: 91
Expected Frag Count: 115296
Actual Frag Count: 125729
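For what it's worth, those Efficiency figures look like they are simply expected vs. actual fragment counts (retransmitted fragments inflate the actual count). A quick check against the numbers in that log, assuming the firmware truncates to a whole percent rather than rounding:

```python
# Reproduce the Canopy link-test Efficiency numbers from the fragment counts.
def efficiency_pct(expected_frags, actual_frags):
    # truncated to an integer, matching the whole-percent values in the log
    return int(100 * expected_frags / actual_frags)

print(efficiency_pct(119904, 136002))  # downlink: 88
print(efficiency_pct(115296, 125729))  # uplink: 91
```

So roughly 1 in 9 downlink fragments is being resent, which is mediocre but not catastrophic on its own.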

is that bad for an uplink?
I guess i’m considering dropping it to 1x if it’ll improve the problem…

Could possibly be a PPS problem or even a ‘small packet problem’.

Any idea on the PPS going over the BH?

no idea…of course, I didn't mention this, but we're using a 5.7 SM and dish to send the bandwidth up to the 5.7 omni, which is distributing it to the towers…are we overloading the SM?

So from what you have just described, you have a 5.7 SM @ your Internet connection, passing the data to an AP, then to other towers that have SMs, then into 900 MHz clusters?

That doesn't sound like a good situation, especially when the uplink/downlink efficiencies are only in the high 80s / low 90s.

What about the uplink/downlink tests to the other 5.7 SMs?


Your SM is probably choking. Can you associate high latency from the NOC to the AP’s during times of heavy traffic?

Might be time to upgrade to a real BH solution.

oh man… Do you use cacti to monitor the SM?

in fact…yes…Cacti does monitor the SM. What should I check?
I was about to ping around some more…

umm… check if the bandwidth used by the SM is close to its maximum during peak hours?

actually…nah, it’s not.

stats are at - this is the West Cullman uplink. It should support up to six meg…it's barely passing 3, sometimes 3.5.
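If you want to sanity-check what Cacti is graphing on that link, the math is just an octet-counter delta over the polling interval. A sketch with hypothetical ifInOctets samples taken 300 seconds apart:

```python
# Hand-check a Cacti data point: throughput = (octet delta * 8 bits) / interval.
# The two counter readings below are made-up ifInOctets values, 300 s apart.
def mbps(octets_then, octets_now, interval_s):
    return (octets_now - octets_then) * 8 / interval_s / 1_000_000

print(mbps(1_000_000_000, 1_131_250_000, 300))  # 3.5 Mbps
```

Comparing a hand calculation like this against the graph rules out a Cacti/SNMP scaling mistake before you blame the radio.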

We moved one of our towers to another link (actually put that tower at 50% and used a 900 SM to upload bandwidth to that tower). That freed up about 15 customers…apparently giving the 5.7 SM/AP link(s) enough room to breathe. That pretty much verified we've outgrown the "poor man's backhaul" - FYI, this is what happens when you outgrow that.

Maybe I'm reading this wrong, but you have ~120 users and your main backhaul is a 5.7 SM?

I have used the SM solutions, but I was only able to get ~50 customers before I ran into a choking problem.

SMs have a PPS limit of 300, which causes a lot of problems on my sites that are still backhauled with an SM. One user with BitTorrent can slow the site way down.
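If that ~300 pps figure is right, the throughput ceiling depends entirely on packet size, which is why one BitTorrent user full of small packets hurts so much. A rough sketch:

```python
# Throughput ceiling imposed by a packets-per-second limit:
# at a fixed pps, small packets collapse the achievable rate.
def max_mbps(pps, packet_bytes):
    return pps * packet_bytes * 8 / 1_000_000

print(max_mbps(300, 1500))  # full-size frames: about 3.6 Mbps
print(max_mbps(300, 64))    # small packets: about 0.15 Mbps
```

At full-size 1500-byte frames that works out to about 3.6 Mbps, right in line with the 3-3.5 meg the uplink described earlier is topping out at.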

Depending on what you are trying to offer your clients, I would suggest upgrading to at least a 10 Mb backhaul.

Feeding bandwidth upstream to an AP for distribution to customer SMs never seems to be a good idea. If you have one or two SMs connected, maybe it will survive, but the upside-down model seems to thrash the processor in the AP because it's really stressing the 'SM-BH'-to-AP upstream link.

I agree with what was previously mentioned: get a real backhaul; there is much to choose from. The investment should more than compensate for the potential, and soon-to-come, loss of customers and revenue, not to mention the impression customers get about wireless reliability and performance.

That's the way it is…

indeed, as this was a cost-saving measure when we were a startup. In fact, when we first started we were using a 900 MHz AP and SM to feed the tower…this was an upgrade on the way to the upgrade…

Nothing wrong with an SM as a BH until it stops working…

We ran for a long time with a 2.4SM. When it hit the limit it did exactly what Jay’s did.

Then we went to BH20’s. When they hit the limit they did exactly the same thing…

we weren't sure if this would fix it or not…and we're about to spend some serious money on a backhaul pair.

this makes me feel much better. thanks jerry.

Watcha gonna get?