We have a fair number of 5GHz ePMP deployments, but we've seen an alarming trend among those on the higher end of our speed plans - specifically, they often have crappy speedtest performance.
The first time I directly tested this was with a customer on a 10 Mbps plan. The AP showed he was on the correct plan per the AAA server, and the Wireless Link Test showed the radio was fully capable of handling the speed; RF quality was very good, at MCS 14 or better. On that trip I had no issues with speedtest websites or iPerf when I connected directly to the power injector, but I saw a big drop when I connected through the router. After a bit of brainstorming, I turned off the Auto MTU setting on his router and set it to 1480. After that change, going through the router caused no performance hit on Speedtest or iPerf.
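My working theory for why the 1480 setting helped: if the path behind the radio only passes packets a bit smaller than 1500 bytes, every full-size packet the router emits gets fragmented in two, roughly doubling the packet count for bulk transfers like a speedtest. A minimal sketch of that arithmetic (the function and numbers here are my own illustration, not anything from the ePMP config):

```python
import math

def fragments(payload_len: int, path_mtu: int, ip_header: int = 20) -> int:
    """Number of IP packets needed to carry one packet's payload over a path
    with the given MTU. Ignores the 8-byte fragment-offset alignment rule
    for simplicity; it doesn't change the counts in these examples."""
    usable = path_mtu - ip_header  # payload bytes that fit per fragment
    return math.ceil(payload_len / usable)

# A full 1500-byte IP packet carries 1480 bytes of payload.
print(fragments(1480, 1500))  # path passes 1500-byte packets: 1, no fragmentation
print(fragments(1480, 1480))  # path only passes 1480: every full packet becomes 2
```

Capping the router's MTU at 1480 means it never emits a packet the radio path has to split, which matches the symptom disappearing only when traffic went through the router.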
For the next year or so, I recommended that solution to subscribers whose Speedtest results over ePMP were below expectation. Yesterday I went to a site on a 30 Mbps down plan, because simply changing the router's MTU to 1480 wasn't doing the trick. This time the setting I had to revise wasn't in the router at all - it was the MTU setting in the STA. Changing it to 1564 (and leaving the router on Auto) let me connect through the customer's router and get the expected level of performance.
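The 1564 figure is suggestive: it's exactly 64 bytes above the standard 1500-byte Ethernet MTU, which would make sense if the radio adds its own encapsulation on top of the customer's full-size frames and needs headroom to carry them without fragmenting. To be clear, the 64-byte overhead below is an assumption I'm inferring from the setting that worked, not a documented ePMP value:

```python
# Back-of-the-envelope check: if the STA wraps full 1500-byte frames in its
# own encapsulation, its MTU must be at least 1500 plus that overhead,
# or the radio link fragments every full-size frame.
ETHERNET_MTU = 1500
ASSUMED_RADIO_OVERHEAD = 64  # hypothetical encapsulation bytes, inferred

required_sta_mtu = ETHERNET_MTU + ASSUMED_RADIO_OVERHEAD
print(required_sta_mtu)  # -> 1564, matching the STA setting that fixed this link
```

If that reading is right, raising the STA MTU is the cleaner fix, since it leaves the customer's LAN at a standard 1500 instead of shrinking it to 1480.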
Am I alone in this? Have others using ePMP also found that the default MTU settings induce packet fragmentation that kills Speedtest scores? I know that real-world traffic, with its mix of packet sizes, would probably not exhibit these exact symptoms, but let's be honest: Internet subscribers love running speedtests, and all but the most savvy of them treat the results as gospel.