ePMP 3000 - the jitter is killing me

Are there any planned improvements in the latency and jitter department for the ePMP 3000?  I've been pleased with the throughput performance overall with ePMP 3000, but the jitter is horrendous compared to ePMP 1000 and UBNT AC gear. Consider the following:

64 bytes from 100.64.104.63: icmp_seq=261 ttl=60 time=72.1 ms
64 bytes from 100.64.104.63: icmp_seq=262 ttl=60 time=19.2 ms
64 bytes from 100.64.104.63: icmp_seq=263 ttl=60 time=43.7 ms
64 bytes from 100.64.104.63: icmp_seq=264 ttl=60 time=101 ms
64 bytes from 100.64.104.63: icmp_seq=265 ttl=60 time=20.1 ms
64 bytes from 100.64.104.63: icmp_seq=266 ttl=60 time=9.47 ms
64 bytes from 100.64.104.63: icmp_seq=267 ttl=60 time=27.8 ms
64 bytes from 100.64.104.63: icmp_seq=268 ttl=60 time=27.0 ms
64 bytes from 100.64.104.63: icmp_seq=269 ttl=60 time=25.6 ms
64 bytes from 100.64.104.63: icmp_seq=270 ttl=60 time=84.0 ms
64 bytes from 100.64.104.63: icmp_seq=271 ttl=60 time=83.6 ms
64 bytes from 100.64.104.63: icmp_seq=272 ttl=60 time=86.8 ms
64 bytes from 100.64.104.63: icmp_seq=273 ttl=60 time=15.3 ms
64 bytes from 100.64.104.63: icmp_seq=274 ttl=60 time=8.78 ms
64 bytes from 100.64.104.63: icmp_seq=275 ttl=60 time=32.2 ms
64 bytes from 100.64.104.63: icmp_seq=276 ttl=60 time=117 ms
64 bytes from 100.64.104.63: icmp_seq=277 ttl=60 time=24.9 ms

This is on a sector with ~35 CPEs and, at present, about 70% frame utilization.  I don't see jitter anything like this on more heavily loaded ePMP 1000 APs.  On ePMP 1000, latency goes up predictably as subs are added, but jitter is almost never a problem.


If everyone were just streaming Netflix it wouldn't matter, but with the new reliance on Zoom and other remote-work tools, this has me moving customers back to either ePMP 1000 or UBNT AC.

This feels like pre-TDMA Linksys-routers-in-an-outdoor-box stuff all over again.  Is the ePMP 3000 scheduler still just that unrefined?  It has been a few years now that I've been hoping this platform would mature.

4 Likes

Lower and lock the max MCS on the SM side so you get 95% or better frames delivered. It should help the jitter a little bit.

5 Likes

We stuck a 3000 AP on a couple of towers between two 2000 APs, then replaced several business customers' CPEs with AC and put them on the 3000 AP.  Everyone doing VoIP started complaining immediately, and several residential customers installed with AC (or swapped to AC) who were working from home complained as well.  Luckily, leaving their CPEs on AC and just connecting them to one of the 2k APs fixed it.


So far, for us anyway, the whole ePMP AC thing has been a complete disappointment, and while I'm still deploying AC CPEs, we are upgrading/expanding with 2k APs.

3 Likes

We are in a similar spot. We added a few 3k APs to add capacity and start migrating from 2000/200s to 3000/300s. Now the only customers on the 3k APs are ones that just browse/stream; anyone working from home is back on the 2000 APs. I've been debating whether to try going back to 4.4.3 on the 3000s based on feedback from some here, but I keep thinking a fix is around the corner. Judging from the 4.5.4 release notes, it doesn't look like it will be. We may have to go back to 2000s or something else entirely.

2 Likes

We are also seeing some complaints about VoIP quality and jitter on our few 3k APs out there.

Preseem latency graphs also suggest they are performing worse than the 2k APs.

2 Likes

Hi all,

We are fixing several issues that affect latency right now.

Hope to come up with good news soon.

Thank you.

2 Likes

Will these fixes be in 4.5.4 or are they targeted for a future release?

We're not deploying any more 3k until things perform better than 2k.

2 Likes

@dcshobby wrote:

Will these fixes be in 4.5.4 or are they targeted for a future release?

We're not deploying any more 3k until things perform better than 2k.


We will try to fix as much as possible in 4.5.4, but I can't guarantee we will resolve the issue completely.

Thank you.

As 2jarek mentioned, lowering the Max MCS on the SM side makes quite a difference.

2 Likes

Lowering the max TX modulation helps marginally on a heavily loaded sector, I've found.  Unfortunately, even on nearly "perfect" installations the jitter is still a problem.  Consider the following AP: 6 SMs, all at MCS9/MCS9 the vast majority of the time.  I'm still seeing more jitter than I'd like.  This is from the site router to the customer router, so just OTA latency.

 SEQ HOST SIZE TTL TIME STATUS 
40 100.64.125.253 56 64 7ms
41 100.64.125.253 56 64 11ms
42 100.64.125.253 56 64 26ms
43 100.64.125.253 56 64 10ms
44 100.64.125.253 56 64 2ms
45 100.64.125.253 56 64 12ms
46 100.64.125.253 56 64 7ms
47 100.64.125.253 56 64 16ms
48 100.64.125.253 56 64 1ms
49 100.64.125.253 56 64 3ms
50 100.64.125.253 56 64 7ms
51 100.64.125.253 56 64 7ms
52 100.64.125.253 56 64 7ms
53 100.64.125.253 56 64 12ms
54 100.64.125.253 56 64 12ms
55 100.64.125.253 56 64 16ms
56 100.64.125.253 56 64 12ms
57 100.64.125.253 56 64 7ms
58 100.64.125.253 56 64 22ms
59 100.64.125.253 56 64 7ms
sent=60 received=60 packet-loss=0% min-rtt=1ms avg-rtt=7ms max-rtt=26ms

1 Like

We found that setting the QoS VoIP priority has almost eliminated the VoIP jitter issue, but at the expense of higher-than-normal latency by about 10ms.

17ms jitter on a PtMP link is pretty darn good in my book. VoIP will run crystal clear on that.

Can you provide an example of another PtMP with better results?

17ms latency is great; 17ms jitter is not.  Jitter is the issue I'm having, not latency.  The OTA latency between the AP and SM could be 50ms and it wouldn't matter, so long as the jitter was OK.

ePMP 1000 is still much better in this regard.  Here is an example:

64 bytes from 100.64.103.72: icmp_seq=42 ttl=58 time=31.3 ms
64 bytes from 100.64.103.72: icmp_seq=43 ttl=58 time=36.0 ms
64 bytes from 100.64.103.72: icmp_seq=44 ttl=58 time=37.0 ms
64 bytes from 100.64.103.72: icmp_seq=45 ttl=58 time=39.0 ms
64 bytes from 100.64.103.72: icmp_seq=46 ttl=58 time=39.5 ms
64 bytes from 100.64.103.72: icmp_seq=47 ttl=58 time=31.9 ms
64 bytes from 100.64.103.72: icmp_seq=48 ttl=58 time=36.6 ms
64 bytes from 100.64.103.72: icmp_seq=49 ttl=58 time=46.0 ms
64 bytes from 100.64.103.72: icmp_seq=50 ttl=58 time=32.1 ms
^C
--- 100.64.103.72 ping statistics ---
50 packets transmitted, 50 received, 0% packet loss, time 49004ms
rtt min/avg/max/mdev = 30.759/37.018/46.087/3.581 ms

This example is from my head-end to the subscriber, 3 hops and then the AP.  This AP is also at about 65% frame utilization right now.

Here is another example from my head-end to a subscriber on the 3k sector that initiated the forum post.  This is a single hop to the AP.

64 bytes from 100.64.104.77: icmp_seq=48 ttl=60 time=25.5 ms
64 bytes from 100.64.104.77: icmp_seq=49 ttl=60 time=9.47 ms
64 bytes from 100.64.104.77: icmp_seq=50 ttl=60 time=9.67 ms
^C
--- 100.64.104.77 ping statistics ---
50 packets transmitted, 50 received, 0% packet loss, time 49057ms
rtt min/avg/max/mdev = 5.336/27.272/63.189/12.903 ms

Again, it's the jitter that is the issue.  Those big jumps between ~10ms and 60ms latency cause problems for VoIP, Zoom, some games, etc.

I should note that using QoS on the AP to prioritize VoIP does help with properly marked traffic.  I'm experimenting with the same for Zoom.  Unfortunately, if the traffic can't be marked, there isn't much that can be done.
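On the marking point: applications or devices you control can set the DSCP bits themselves so the AP's VoIP priority rule catches the traffic. A minimal Python sketch (my own illustration, not anything from Cambium) that tags a UDP socket with DSCP EF (46), the class most VoIP stacks use:

```python
import socket

# DSCP EF (46) sits in the top six bits of the IP TOS byte: 46 << 2 = 0xB8
DSCP_EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

# Every datagram sent on this socket now carries DSCP EF, so any upstream
# QoS rule that prioritizes EF-marked packets will match it.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
```

For traffic you can't touch at the application (Zoom, etc.), the same marking can be applied at the customer router instead with a DSCP mangle rule, which is exactly the part that gets hard when the traffic can't be reliably identified.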

What PtMP system has lower latency and jitter?  Well, LTU does so far.  The following is from my head-end to my house - one hop to the AP.  There are 27 subscribers on this LTU AP and a fair bit of traffic.  The only time jitter becomes an issue on LTU so far is when the uplink is completely saturated, which is to be expected.

64 bytes from 23.134.xxx.xxx: icmp_seq=64 ttl=60 time=23.3 ms
64 bytes from 23.134.xxx.xxx: icmp_seq=65 ttl=60 time=14.2 ms
64 bytes from 23.134.xxx.xxx: icmp_seq=66 ttl=60 time=10.4 ms
^C
--- 23.134.xxx.xxx ping statistics ---
66 packets transmitted, 66 received, 0% packet loss, time 65021ms
rtt min/avg/max/mdev = 5.522/13.975/24.799/2.829 ms
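For anyone who wants to put a number on this from outputs like the ones above: Linux ping's mdev is just the standard deviation of the RTTs, while VoIP-style jitter is usually the average change between consecutive packets (in the spirit of RFC 3550's interarrival jitter). A quick sketch, using the RTT values quoted from the ePMP 1000 sample above (the helper name is mine):

```python
import statistics

def jitter_stats(rtts):
    """Return (mean, mdev, jitter) for a list of RTT samples in ms."""
    mean = statistics.fmean(rtts)
    # ping's "mdev" is the population standard deviation of the samples
    mdev = statistics.pstdev(rtts)
    # VoIP-style jitter: mean absolute difference between consecutive packets
    jitter = statistics.fmean(abs(b - a) for a, b in zip(rtts, rtts[1:]))
    return mean, mdev, jitter

# RTTs quoted from the ePMP 1000 ping output in this thread
epmp1000 = [31.3, 36.0, 37.0, 39.0, 39.5, 31.9, 36.6, 46.0, 32.1]
print(jitter_stats(epmp1000))
```

On that sample it comes out to roughly a 36.6ms mean with only about 5.5ms of successive-packet jitter, which is why that link feels stable despite the higher absolute latency.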
1 Like

I have an ePMP 3000 AP with a mix of Elevate radios (Locos, NanoBeams, PowerBeams) and Force 300-13/16 radios.

AP has:

40 MHz channel

75/25 ratio.

Guard Interval: Short

30 Subs

Omni antenna Mimosa N5-360

AP and SMs running Version 4.4.3 (not tested 4.5 yet)

All SMs run in NAT mode with PPPoE.

All SMs within < 1.5 Km (majority ~ 500m)

Latency to AP is 1-3ms (2 hops)

Interestingly, as others have observed, the Force 300 radios have much higher jitter.

Sample Elevate radio with 60/60 signals, ICMP test: Minimum = 8ms, Maximum = 38ms, Average = 21ms

Sample Force 300-16 radio with 60/60 signals, ICMP test: Minimum = 9ms, Maximum = 94ms, Average = 27ms

The Elevate radios' uplink is capped at 16-QAM 3/4, though (this is what the Elevate tool assigns by default and we kept it that way).

The Force 300 radios' uplink is left at the default, so up to 256-QAM (they all have perfect 50s-60s signals). I may try locking them to a max of 16-QAM 3/4 as well, as someone suggested, but I'm afraid the potential upload rates may tank a bit.

2 Likes