Speedtest versus ePMP wireless link test

Hello,

I have a question: in the Internet speed test, the maximum speed reached on the router is 257 Mbps down and 97 Mbps up. Over the radio link the speed is 113 Mbps down and 97 Mbps up. With the ePMP wireless link test the speed is 198 Mbps down and 144 Mbps up.

Can somebody explain why there is this difference? From my point of view the radio link is in perfect condition...

Network topology: provider FTTH ---> MikroTik RB760iGS ---> ePMP 1000 with GPS ---> client ePMP Force 180. Distance is approx. 600-700 meters.

Thanks.

[Attached image: WhatsApp Image 2018-12-07 at 16.37.48.jpeg]

This might be because speedtest.net and the app use a single TCP stream, usually over port 80, whereas the radio's speed test uses UDP. UDP by its nature will report much faster speeds. For the most accurate results, try a MikroTik-to-MikroTik or iPerf-to-iPerf test using multiple UDP or TCP streams.
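To see why multiple parallel streams matter, here is a minimal loopback sketch in Python of what a multi-stream TCP test like `iperf3 -P <streams>` measures. This is a toy illustration, not the radio's or iPerf's actual implementation; the function name and default parameters are mine.

```python
import socket
import threading
import time

def tcp_throughput_mbps(streams: int = 4, seconds: float = 0.5) -> float:
    """Rough aggregate TCP throughput over `streams` parallel loopback
    connections -- a toy stand-in for `iperf3 -P <streams>`."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
    port = srv.getsockname()[1]
    srv.listen(streams)

    def sink():                          # server side: drain incoming bytes
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

    sinks = [threading.Thread(target=sink) for _ in range(streams)]
    for t in sinks:
        t.start()

    payload = b"\x00" * 65536
    sent = [0] * streams

    def source(i):                       # client side: send for `seconds`
        with socket.create_connection(("127.0.0.1", port)) as s:
            deadline = time.monotonic() + seconds
            while time.monotonic() < deadline:
                s.sendall(payload)
                sent[i] += len(payload)

    sources = [threading.Thread(target=source, args=(i,)) for i in range(streams)]
    start = time.monotonic()
    for t in sources:
        t.start()
    for t in sources:
        t.join()
    elapsed = time.monotonic() - start
    for t in sinks:
        t.join()
    srv.close()
    return sum(sent) * 8 / elapsed / 1e6   # bits per second -> Mbps
```

The aggregate-of-streams measurement is the point: a single TCP stream stalls on every loss, while several streams keep the pipe fuller, which is why multi-stream results sit closer to the UDP numbers.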


For now I'm OK with that speed… but I was curious to find out why there is this difference between the tests.

Eric is correct: the RF link test is done using UDP traffic, which typically shows better performance than TCP-based tests. Regarding the topic of speed tests, and the difference between radio connected rates and a third-party server (i.e. speedtest.net), please see this post for some additional information. There may be other factors in play influencing the results, end to end.

Without any modification on Windows, this is a speed test direct from the provider's FTTH GPON:

[Screenshot: speed test from provider FTTH]

Depending on what mode you are using, there is more or less overhead too. If this is a backhaul or point-to-point type link, we find that using ePTP mode can be dramatically faster, with more stable ping times, than TDD or TDD PTP modes. So, if you can, try ePTP mode and see what results you get.


The size of your packet buffers can play into this as well, depending on what's happening on this network. If your switches use SOHO-oriented (small office, home office) chips, they tend to be stuck with flow control on and don't have enough buffer to deal with TDD traffic flows; in that case, ePTP mode would improve your results.

Using a switch with deeper packet buffers makes a huge difference, particularly when flow control is involved.

Another thing to check is your retransmission rates.

If you're seeing a 10% retransmission rate, you'll see a much better UDP test score than TCP. If you've got a link tuned just right, you won't see too large a gap between the two: no more than 10%. Again, everything has to be humming just right, and that's usually not realistic.
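The TCP-versus-UDP gap at a given retransmission rate can be sketched with the classic Mathis et al. steady-state TCP throughput bound. This is a rough model, and the MSS and RTT defaults below are just illustrative assumptions, not measurements from this link:

```python
import math

def mathis_tcp_mbps(mss_bytes: int = 1460, rtt_ms: float = 5.0,
                    loss: float = 0.10) -> float:
    """Mathis et al. steady-state TCP throughput bound:
    rate <= (MSS / RTT) * (1 / sqrt(p)), returned in Mbps.
    UDP, by contrast, just keeps sending regardless of loss."""
    return (mss_bytes * 8 / (rtt_ms / 1000.0)) / math.sqrt(loss) / 1e6

# At a 10% loss rate on a 5 ms link, TCP is capped near ~7 Mbps,
# while at low loss rates the same bound allows hundreds of Mbps.
```

Plugging in a 10% retransmission rate shows why the UDP score runs away from the TCP one: the bound falls off as 1/sqrt(p), so even modest loss hits TCP hard while leaving a UDP blast test untouched.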


So, in your opinion, do I have to do something more to this link? Will I have a chance to improve the speed?
Right now it is AP to SM in Flexible mode, and I want to keep it this way.

WRT buffers and speed tests, Chris is right. Deeper buffers give better speed test results. But they make everything else worse. Bufferbloat is the bane of networks. It's like taking a car and removing the seats in order to improve its 0-60 MPH (or 0-100 kph) drag strip tests. It helps the test but makes the car otherwise much less useful. Router vendors do sometimes add larger buffers in order to score higher on speed tests, which some reviewers misunderstand. Ideal buffers are fairly short, and most speed tests are dangerous.


@fgoldstein wrote:

WRT buffers and speed tests, Chris is right. Deeper buffers give better speed test results. But they make everything else worse. Bufferbloat is the bane of networks. It's like taking a car and removing the seats in order to improve its 0-60 MPH (or 0-100 kph) drag strip tests. It helps the test but makes the car otherwise much less useful. Router vendors do sometimes add larger buffers in order to score higher on speed tests, which some reviewers misunderstand. Ideal buffers are fairly short, and most speed tests are dangerous.


That's not necessarily true. We run Juniper for our routing and switching; our MX, for example, has as much as 128 MB of packet buffering per MPC, plus much more in the routing engines. How the data is handled is what matters: a fat buffer can help a poor PHY, but a reasonable buffer can speed up a quick PHY.

Our Juniper EX switches on our busy tower sites make use of the 12 MB buffer in those switches: we put a point-to-point radio in a single LACP and define the packet buffer on those links to be beyond the default value. The purpose is to prevent the annoying effects of different Ethernet rates on the same bridge domain.

Introduce a 100 Mbps port that's running near full speed, and you'll notice you won't get 900 Mbps from a mediocre switch on the gigabit port feeding it. Put a solid switch there, re-measure, and you'll see the speed you're expecting. SOHO devices are horrid at handling this problem.

Really deep buffers on routers can cause what you described when the router is not very powerful. Of course, the goal is to keep those buffers as empty as possible, which fast hardware does.

The problem isn't the horsepower of the router; it's the speed at which the buffer can drain and the resulting jitter and latency. The classic "extreme bufferbloat" example that made the rounds several years ago was on a mobile network where total latency could hit seven seconds! That's 7000 ms or, more appropriately, 7,000,000 microseconds.

You basically don't want to have more buffering than the jitter you are willing to add at that stage. The math shows that a 10-packet buffer can run a circuit at quite high efficiency. Big buffers help in Ookla Drag Racing, but hurt actual applications that have any interactivity or sensitivity to latency and jitter. If you want to do VoIP or gaming, for instance, you need to avoid big buffers.
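The buffering-versus-jitter tradeoff above can be put into numbers with simple arithmetic: the worst-case latency a full drop-tail queue adds is the buffered bits divided by the link's drain rate. The function name and defaults here are mine, a back-of-the-envelope sketch rather than any vendor's tool:

```python
def queue_delay_ms(buffer_packets: int, pkt_bytes: int = 1500,
                   link_mbps: float = 100.0) -> float:
    """Worst-case latency added by a full drop-tail queue:
    buffered bits / drain rate, in milliseconds."""
    return buffer_packets * pkt_bytes * 8 / (link_mbps * 1e6) * 1000

# A 10-packet buffer of 1500-byte frames on a 100 Mbps port adds at
# most ~1.2 ms, while a 1000-packet "deep" buffer can add ~120 ms.
```

The same arithmetic makes the 10-packet figure concrete: on a 100 Mbps port it bounds the added jitter to about a millisecond, while a buffer deep enough to win Ookla drag races can sit well past what VoIP or gaming will tolerate.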