Consequences of running a backhaul at maximum throughput?

One would expect the speed to decrease in proportion to the requested internet usage. So if the backhaul can provide 38 Mbps and people are trying to use 76 Mbps, everyone's speed should more or less be cut in half.
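Just to make my assumption concrete, here is the rough (probably naive) sharing model I have in my head; the per-user numbers are made up and only chosen so they total 76 Mbps:

```python
# Rough sketch of the "everyone gets cut proportionally" assumption:
# if total demand exceeds capacity, each user's share scales by capacity/demand.
def fair_share(capacity_mbps, demands_mbps):
    total_demand = sum(demands_mbps)
    if total_demand <= capacity_mbps:
        return demands_mbps          # everyone gets what they asked for
    scale = capacity_mbps / total_demand
    return [d * scale for d in demands_mbps]

# 38 Mbps of capacity, 76 Mbps of requested usage -> everything halves
print(fair_share(38, [20, 30, 26]))  # [10.0, 15.0, 13.0]
```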

Is this completely wrong? What about how it affects latency? We are beginning to bump up against the download capacity on our 52 Mbps PTP 500, and I am trying to figure out what exactly the consequences are. Also, taking into account the packets-per-second (PPS) capability of the PTP 500, could I realistically be bumping that limit even without hitting the expected max throughput? I.e., our BH runs at 38.53 Mbps down. Could we be bumping the limit even at, say, 34 Mbps, depending on the various usage scenarios on the network?

Hi

A few points:

Our PTP 500 radios are not limited by PPS capability. We can achieve full throughput even when using small packet sizes (which produce a high packet rate), so PPS is not a limiting factor on performance for Cambium PTP radios!

There are a few things to look at to try to improve your performance.

Is this a link for an ISP, or is it providing an internet connection? Is the traffic mainly in one direction, i.e. downloads?

If so, then a good idea would be to configure the link so that it is not running symmetrically. PTP 500 gives you the ability to configure 1:1, 1:3, 3:1 or Adaptive. This allows the radio to pass more traffic in one direction than the other, which is great for an internet connection where most traffic is in the download direction.
With the ratio set to "Adaptive", the radio will monitor the data queues at each end of the link and adjust itself automatically to ensure that it is running in the most efficient way.
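As a rough illustration of what the ratio does (deliberately simplified, and ignoring framing and modulation overheads, so the numbers will not match the radio's actual figures):

```python
# Simplified illustration (not vendor math): splitting an aggregate link
# capacity between download and upload according to a TDD-style ratio.
def split_capacity(aggregate_mbps, down_parts, up_parts):
    total = down_parts + up_parts
    return {
        "down": aggregate_mbps * down_parts / total,
        "up": aggregate_mbps * up_parts / total,
    }

print(split_capacity(52, 1, 1))  # symmetric: 26 down / 26 up
print(split_capacity(52, 3, 1))  # download-heavy: 39 down / 13 up
```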


Latency: latency across the air will remain the same at all throughput levels, including full capacity.

One thing to note is that we do have fairly large data buffers in our radios, so if the link is at 100% capacity then newly arriving data will be queued. If you then measure latency you will see the time to transmit across the link plus the time spent in the queue, so on a link that is being offered more than 100% of its capacity you will see higher latency measurements.
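As a simplified illustration (the buffer figure below is just an example, not the PTP 500's actual buffer size), the extra delay is simply the queued data draining at the link rate:

```python
# Rough illustration of why measured latency climbs once the buffer starts
# filling: queued bytes must drain at the link rate before a new packet goes.
def queueing_delay_ms(queued_bytes, link_rate_mbps):
    link_rate_bits_per_ms = link_rate_mbps * 1e6 / 1000
    return queued_bytes * 8 / link_rate_bits_per_ms

# e.g. 500 KB sitting in the buffer of a 38 Mbps link adds roughly 105 ms
print(round(queueing_delay_ms(500_000, 38), 1))
```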

Latency is a measure of the time taken to deliver a packet and does not account for packets that are dropped. On a competing link with smaller buffers you will see that "latency" remains constant, but what is actually happening is that the radio drops packets. The end devices then have to retransmit, so the real effect is poor performance, even though the measured latency still looks good.
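To give a rough idea of why drops hurt so much, here is the textbook Mathis approximation of per-flow TCP throughput under random loss. This is a generic rule of thumb, not a Cambium figure, and the MSS/RTT values are just examples:

```python
import math

# Textbook (Mathis et al.) approximation of steady-state TCP throughput
# under random loss: throughput ~= MSS / (RTT * sqrt(p)). Purely to show
# how even small loss rates hurt, independent of the measured latency.
def tcp_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))) / 1e6

print(round(tcp_throughput_mbps(1460, 0.030, 0.0001), 1))  # 0.01% loss -> ~38.9 Mbps
print(round(tcp_throughput_mbps(1460, 0.030, 0.01), 1))    # 1% loss    -> ~3.9 Mbps
```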

By using large buffers we make sure that data delivered to our radio is queued and delivered, ensuring higher performance overall.

Our radios also allow you to configure QoS so that you can prioritize certain traffic. For example, you may want to prioritize VoIP traffic over standard HTTP to ensure that the latency for VoIP is optimized.
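As an illustration of the tagging side only: this is generic DSCP marking from an application on Linux, not PTP 500 configuration, and how such tags map onto the radio's queues depends on how its QoS is set up:

```python
import socket

# Illustration only: mark outgoing packets with DSCP EF (Expedited
# Forwarding, commonly used for VoIP) so that upstream gear which honours
# DSCP can prioritise them. The TOS byte carries DSCP in its top 6 bits.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)  # Linux
sock.sendto(b"rtp-payload", ("192.0.2.10", 5004))
```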

I hope that helps, and gives you a few things to look at.

Regards
Julian


That does help, thanks.

This is a link for an ISP. We already have our backhaul set to optimize for download usage: 3:1 at the master, 1:3 at the slave. Our customers are using far more download than upload. In fact, it might be more efficient for us to have a 5:1 setting, but that is not currently available.

Is there any way to look at buffer statistics at all? Total buffer usage since last reboot? Total buffer usage in a certain time frame? Current buffer usage?

EDIT: Additionally, what is the recommended QoS setting for a WISP? Do I have to change settings in my other networking equipment for this to work on the backhaul, or is it just reading tags that are set by our users?