ePMP3000 Latency - What is acceptable to Cambium?

Another thread about this, as I'm curious what Cambium thinks is acceptable, and what you all see latency-wise?

No matter the config, location, noise, etc., I always see 20-30ms latency from AP to SM.

My network is a straight line of 5 towers using two PTP550 links and 3 IgniteNet 60GHz links. Latency from one end to the other is 7-16ms depending on load, which is not great but not horrible either - then add 20-30ms for ePMP3000 to F300-16/25 and the latency becomes horrible.

I'd like to hear from Cambium and from you about what latency is expected from this gear. Will the 2.5ms TDD window option return, or is this it indefinitely?

My whole network is made of ePMP3000 and F300, so I'm really hoping latency will soon be halved.

Cheers

Latency on the ePMP radios is a bit of a mixed bag. A lot of factors affect it, and there is no real way to predict it except that it goes up with the number of SMs on the AP and the load being passed.
It is greatly affected by retransmissions, and one poor SM can and will increase the latency for the entire sector.
We have e3k Lites in both PTP and PMP use, all GPS-timed (TDD sync). Latency ranges from 7ms to 40ms per link, and most of the time it's in the 15ms range. We also limit how many SMs we put on an AP to keep this from getting out of hand.

We played with the 2.5ms frame and found that the gains did not outweigh the losses. Frame size has a direct correlation to the available bandwidth on an AP: each frame carries a fixed amount of overhead, so a larger frame can service more SMs and devote a larger fraction of its airtime to payload, thus more apparent bandwidth. Then there are the other Cambium products with fixed frame durations, and staying in sync with them matters. This is very important, as your PTP link can get trashed by your PMP links if they are in the same band.
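To make that tradeoff concrete, here is a back-of-envelope sketch. The per-frame overhead figure is an assumption for illustration, not Cambium's actual number; the point is simply that halving the frame roughly doubles the relative overhead while halving the scheduling wait.

```python
# Back-of-envelope TDD frame tradeoff. The per-frame overhead is an
# assumed, illustrative value -- not Cambium's actual figure.

FIXED_OVERHEAD_US = 300  # assumed per-frame cost: preambles, maps, guard time

for frame_us in (2500, 5000):
    payload_us = frame_us - FIXED_OVERHEAD_US
    efficiency = payload_us / frame_us
    # A packet arriving mid-frame waits, on average, half a frame for
    # its next transmit opportunity in its direction.
    avg_wait_ms = (frame_us / 2) / 1000
    print(f"{frame_us / 1000:.1f} ms frame: {efficiency:.0%} airtime efficiency, "
          f"~{avg_wait_ms:.2f} ms average scheduling wait per hop")
```

With these assumed numbers, the 2.5ms frame gets ~88% airtime efficiency against ~94% for the 5ms frame, in exchange for half the average scheduling wait - which matches the "gains vs losses" call above.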
There is also the CPU load to consider: the ePMP radios are an all-software beast, whereas your dedicated PTP links are mostly ASIC (hardware) beasts with control software. ASICs can move more data faster because less processing is involved, whereas software-based radios must process everything in real time, which takes more time. And before there is confusion, a software-defined radio can be either hardware- or software-based, as the term literally refers to the method of setup and configuration control, though some manufacturers use the SDR term more loosely as marketing propaganda.

I know that's not exactly the information you're wanting, but it is some of the information you need to consider. Remember that there is a reason the PMP450 is more expensive than the ePMP.


Short Answer = about 12-15ms

Longer Answer - I think that what you're seeing would probably be about the same here. It seems to me that the 3000/300 have higher latency than the 1000/2000 did... but I'm not really sure by how much, after so many hardware and firmware changes. CLEARLY the ePTP mode on the 1000/2000 series was way, way better than the 3000/300 series is, so there's clearly a difference in how the N vs AC chipsets work. However, on PtMP APs facing customers, we don't really have any latency complaints from customers... so it's not something we're seeing too much of an issue with.

But - here's an example from a 3000L at the far end of our network with 18 clients, many of which don't really have ideal signals. It is a few tower hops away through our network, and this is the AP pinging to Google. If I ping from the 3000L AP to Google, I get an average of 44.7ms. If I ping from a connected SM to Google, I get an average of 56.8ms. So, looking at it like that, the SM-to-AP difference is ~12ms. This is kinda my 'real world test', and it seems that generally a client has about 12ms more latency than the ethernet of the AP they are connected to.

This is from the AP to Google:
PING google.com (172.217.1.174) 32(60) bytes of data.
--- google.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 39.538/44.761/55.114/4.172 ms

This is from a SM on that AP to Google:
PING google.com (172.217.164.238) 32(60) bytes of data.
--- google.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 43.276/56.820/63.974/7.191 ms

Strangely, if I ping from that SM directly to that AP, it can look more concerning; the jitter and latency seem worse for some reason. However, when I do the test above to figure out what this client really feels as real latency to an internet site (I think this is a more "real world" test), the latency for him is OK(ish).

Again, before anyone picks on ^^this^^ - this would be our least ideal AP. It is multiple hops out, the 18 clients connected are spread far and wide, it's an omni, and half of them are from -63 to -72 (which is below our normal install criteria), so only half of them are in the 'good' or 'vgood' range, while the other half are 'poor' or 'wtf' signals.

By comparison, this is from my SM on my roof at home to Google.
PING google.com (172.217.165.14) 32(60) bytes of data.
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=1 ttl=116 time=36.7 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=2 ttl=116 time=39.6 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=3 ttl=116 time=40.5 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=4 ttl=116 time=37.1 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=5 ttl=116 time=40.0 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=6 ttl=116 time=38.1 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=7 ttl=116 time=36.9 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=8 ttl=116 time=36.8 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=9 ttl=116 time=36.4 ms
40 bytes from yyz12s06-in-f14.1e100.net (172.217.165.14): icmp_seq=10 ttl=116 time=38.6 ms

--- google.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 36.487/38.124/40.542/1.465 ms

So, that's more typical on our network, but our house is closer to town, with only two tower hops from our fiber. And that is also an ePMP2000 AP with a Force200 SM at our house.

But the short answer is that if I compare what any of our APs ping to Google or to any other site, and then compare what a connected client SM pings to the same site, the difference seems to be about 12-15ms extra for the client.
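If anyone wants to repeat this comparison, here's a rough sketch of the same test scripted in Python. The SSH hostnames are placeholders, and it assumes Linux-style ping output with an "rtt min/avg/max/mdev = a/b/c/d ms" summary line, like the ones above.

```python
# Sketch: measure the wireless hop by pinging the same target from the
# AP's side and from a client SM, then subtracting the averages.
# Hostnames below are placeholders, not real devices.
import re
import subprocess

def avg_rtt_ms(host: str, target: str = "google.com", count: int = 10) -> float:
    """SSH into host, ping the target, and return the average RTT in ms."""
    out = subprocess.run(
        ["ssh", host, f"ping -c {count} {target}"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Parse the "rtt min/avg/max/mdev = a/b/c/d ms" summary line.
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    if m is None:
        raise ValueError(f"no rtt summary in ping output from {host}")
    return float(m.group(1))

ap = avg_rtt_ms("root@ap.example")  # ping from the AP itself
sm = avg_rtt_ms("root@sm.example")  # ping from a client SM behind it
print(f"AP avg {ap:.1f} ms, SM avg {sm:.1f} ms -> "
      f"wireless hop adds ~{sm - ap:.1f} ms")
```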


This is a 1450-byte ICMP test from the router at the tower to an F300 client, so it's direct, no hops. This is an e3k AP with almost 60 subs during peak hours, pushing around 130Mbps of traffic. The AP is configured for a 75/25 split, 5ms frames, short GI.


That looks similar to what I get - still pretty ugly compared to UBNT anyway. One cool thing that just came out is the ability to limit the max MCS from AP to SM. Look in the SM performance stats at what MCS value most packets pass to an SM, then set its max MCS to that value, or the one below it. This reduces retransmits, so it tightens everything up. I run 80MHz channels on some APs - I just tested reducing max MCS down to single-stream modulations, and for the first time saw consistent pings in the teens, and still over 100Mbps. To me this means I can now 'tune' every individual SM's max MCS, both downlink and uplink, so pretty much all packets get there [and back] the first time.

I still want the 2.5ms TDD window back. Now that the firmware is finally getting acceptable, latency is the last thing that needs to be sorted. I'll report back once I've tuned a whole sector to let you know if I've achieved sub-20ms consistently.
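The intuition for why capping MCS tightens latency, not just loss: in a TDD schedule, a retransmission has to wait for a later frame. A rough model with made-up success probabilities (not measured ePMP figures) shows the effect:

```python
# Why a marginal top MCS hurts latency in TDD: each retransmission
# waits for a later frame. Success probabilities are made-up examples,
# not measured ePMP numbers.
import math

FRAME_MS = 5.0  # one retry costs at least one more frame

settings = [
    ("uncapped, marginal top MCS", 0.70),
    ("capped at the MCS most packets already pass at", 0.98),
]

for name, p_ok in settings:
    avg_retries = (1 - p_ok) / p_ok       # geometric retry model
    avg_extra_ms = avg_retries * FRAME_MS
    # attempts needed so that ~95% of packets have gotten through
    p95 = math.ceil(math.log(0.05) / math.log(1 - p_ok))
    print(f"{name}: ~{avg_extra_ms:.2f} ms average retry delay, "
          f"95% delivered within {p95} attempt(s)")
```

With those assumed numbers, the marginal rate averages ~2ms of retry delay and needs up to 3 frames for 95% delivery, while the capped rate adds ~0.1ms - which would line up with the pings tightening into the teens.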


Hi @Eric_Ozrelic, may I ask what that radio is modulating at for uplink and downlink?

That was a while ago and I can't remember the specific SM I used... I think I just used a random F300 on the AP. That being said, all of the F300s have both downlink and uplink between DS7-9.
