PTP 550 and ePTP

Hi all, just seeing if anyone has tried ePTP on their PTP 550s since the 4.5 release?


@DigitalMan2020 wrote:

Hi all, just seeing if anyone has tried ePTP on their PTP 550s since the 4.5 release?


We are using it.  The latency is now down to an acceptable level, but there are tradeoffs with ePTP.

Thanks ninedd,

Yes, the tradeoffs I was told about are that you can't use it with GPS sync, and it's not compatible with channel bonding.  Anything else?

If one were to switch to ePTP mode, can you change it on the slave and then the master, and they should reconnect, correct?  Just want to do it remotely from the office.

Yes, that is what we did. With the "Try Changes" test button you should be able to do that without too much worry.  If it doesn't re-link, then it'll revert to the previous working settings.

OH - if you're using AutoPower - that's not available for some strange reason in ePTP mode, so you'll need to set your power manually.

ALSO - you can't use ePTP mode in dual-radio 'bonded' mode, so you'll need to change settings to accommodate that too.  But again, with the "Try Changes" button, that should either work or it'll revert after a couple of minutes.
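If you want an extra sanity check while you're doing it from the office, a little watchdog like this sketch will tell you whether the far end came back or stayed down long enough to revert. The management IP and the revert window are just placeholder values, and the ping flags are Linux-style - adjust for your setup:

#!/usr/bin/env python3
"""Watch a remote radio's management IP after a config change and report
whether it comes back within the 'Try Changes' revert window."""
import subprocess
import time

# Placeholder values - substitute your own management IP and revert window.
REMOTE_RADIO = "10.0.0.81"
REVERT_WINDOW_S = 180  # roughly "a couple of minutes" before settings revert


def is_up(host: str) -> bool:
    """One ICMP echo with a 1-second timeout; True if the host answers.
    Flags are for Linux iputils ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


start = time.time()
# Keep checking a bit past the revert window before giving up.
while time.time() - start < REVERT_WINDOW_S + 60:
    if is_up(REMOTE_RADIO):
        print(f"link back up after {time.time() - start:.0f}s")
        break
    time.sleep(2)
else:
    print("still unreachable - radios should have reverted by now")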

Thanks ninedd, 

I tried ePTP mode and it did not improve the latency... the ping test was the same in either mode...

OK - for what it's worth, the AC gear (including the PTP 550) already has pretty decent latency, so the ePTP-mode latency benefit isn't as dramatic as it was on the 'N' gear, I guess. But for us there is still a noticeable difference. This is what we are getting on our PTP 550 backhaul in ePTP mode.

PING 10.0.0.81 (10.0.0.81) 32(60) bytes of data.
40 bytes from 10.0.0.81: icmp_req=1 ttl=64 time=1.39 ms
40 bytes from 10.0.0.81: icmp_req=2 ttl=64 time=1.98 ms
40 bytes from 10.0.0.81: icmp_req=3 ttl=64 time=1.28 ms
40 bytes from 10.0.0.81: icmp_req=4 ttl=64 time=1.50 ms
40 bytes from 10.0.0.81: icmp_req=5 ttl=64 time=1.97 ms
40 bytes from 10.0.0.81: icmp_req=6 ttl=64 time=1.45 ms
40 bytes from 10.0.0.81: icmp_req=7 ttl=64 time=1.77 ms
40 bytes from 10.0.0.81: icmp_req=8 ttl=64 time=1.65 ms
40 bytes from 10.0.0.81: icmp_req=9 ttl=64 time=1.68 ms
40 bytes from 10.0.0.81: icmp_req=10 ttl=64 time=1.74 ms

--- 10.0.0.81 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 1.282/1.644/1.982/0.382 ms

Nice!  What distance is that link?  

Hi All,

Out of interest, why do you guys want the PTP 550 if you're then willing to not use bonding mode? (I don't mean it like "that's crazy" - just that in our case the most compelling reason to buy it was the bonding.) But the latency, per https://community.cambiumnetworks.com/t5/Sub-6-GHz/PTP-550-Latency/td-p/103726#, is really not acceptable. The spec sheet advertises very low latency, but in bonding mode the jitter is not so great:

    0 10.20.12.185                               56  64 4ms  
    1 10.20.12.185                               56  64 8ms  
    2 10.20.12.185                               56  64 8ms  
    3 10.20.12.185                               56  64 8ms  
    4 10.20.12.185                               56  64 8ms  
    5 10.20.12.185                               56  64 7ms  
    6 10.20.12.185                               56  64 8ms  
    7 10.20.12.185                               56  64 3ms  
    8 10.20.12.185                               56  64 6ms  
    9 10.20.12.185                               56  64 7ms  
   10 10.20.12.185                               56  64 28ms 
   11 10.20.12.185                               56  64 4ms  
   12 10.20.12.185                               56  64 32ms 
   13 10.20.12.185                               56  64 5ms  
   14 10.20.12.185                               56  64 8ms  
   15 10.20.12.185                               56  64 4ms  
   16 10.20.12.185                               56  64 15ms 
   17 10.20.12.185                               56  64 6ms  
   18 10.20.12.185                               56  64 6ms  
   19 10.20.12.185                               56  64 4ms  

even with no load, on a 3 km link
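To put numbers on it, here's a quick pass over those 20 samples - the 2x-median cutoff for flagging spikes is just an arbitrary pick on my part:

#!/usr/bin/env python3
"""Quick jitter summary for the bonded-mode ping samples posted above."""
import statistics

# RTTs in ms, copied from the ping run above
rtts = [4, 8, 8, 8, 8, 7, 8, 3, 6, 7, 28, 4, 32, 5, 8, 4, 15, 6, 6, 4]

median = statistics.median(rtts)
spikes = [r for r in rtts if r > 2 * median]  # arbitrary 2x-median cutoff

print(f"min/avg/max = {min(rtts)}/{statistics.mean(rtts):.1f}/{max(rtts)} ms")
print(f"stdev = {statistics.stdev(rtts):.1f} ms")
print(f"{len(spikes)} of {len(rtts)} pings over 2x median ({2 * median:.0f} ms): {spikes}")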

You can reduce jitter on your link by dropping the maximum MCS on the offending radio. Look at the performance stats, find where the MCS is jumping around, and cap it at the lower value. You will see the latency flatten off.
I run dual 80 MHz channels, then drop the MCS individually on each of the 4 radios until pings are stable. Aim for 99% of traffic to pass at a single MCS value. The ping spikes you're seeing are the radio trying a higher MCS, failing, then retransmitting at a lower MCS - by dropping the max MCS you stop the radio from trying the higher one.
By doing this I've ended up with a solid 800/130, and current load on my network peaks at 500/60, so customers always get the full speed of their plan when they speed test, and latency is almost acceptable - ePMP 3000 latency is still high, but it's improving with updates and better config by me.
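If it helps, here's a rough sketch of how you could watch for that flattening while you step the max MCS down. The host IP and the 0.5 ms "stable" stdev threshold are just placeholders to tune for your link, and the ping flags are Linux-style:

#!/usr/bin/env python3
"""Rolling ping-stability check: watch RTTs flatten as you cap max MCS.
Ctrl-C to stop."""
import re
import statistics
import subprocess
from collections import deque

HOST = "10.20.12.185"   # placeholder - far end of the link
WINDOW = 20             # samples per rolling window
STABLE_STDEV_MS = 0.5   # placeholder threshold for "flat" latency

window = deque(maxlen=WINDOW)
while True:
    # One echo at a time (Linux iputils flags) so spikes show up individually.
    out = subprocess.run(
        ["ping", "-c", "1", "-W", "1", HOST],
        capture_output=True, text=True,
    ).stdout
    m = re.search(r"time=([\d.]+)", out)
    if m:
        window.append(float(m.group(1)))
    if len(window) == WINDOW:
        sd = statistics.stdev(window)
        flag = "stable" if sd < STABLE_STDEV_MS else "jittery"
        print(f"avg {statistics.mean(window):.2f} ms  stdev {sd:.2f} ms  {flag}")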
good luck
