Latency on PTP 550 connectorized

Hello, when will the ePTP mode be available to stabilize the PTP 550's latency?

At this time there is no ePTP mode planned for PTP550 or F300... but, if enough people ask for it, maybe it will happen eventually  :-)

I have 6 PTP550 + 2 Force 300 in the warehouse waiting for a solution. At the moment they are useless.

The first link in production was replaced with a licensed link because of the latency and the periodic reboots at the subscriber site.

The test is with two 40 MHz channels, and the traffic over the link is less than 100 Mbps, which should not be enough to make the latency increase to 120 ms.

The stability of the latency really leaves something to be desired. We bought these PTP 550s based on the stability and confidence we had with the ePMP 1000, but so far that has been pure illusion. For the price and the specifications, performance should be at least similar to the AirFiber. End customers want stability; many users game and end up complaining about the latency oscillations.

Our supplier has our 550's on back order, so I can't comment specifically on the 550's. But these are the results with our Force300 PTP links on version 4.1.4-RC4 firmware. We get 3.5 ms average latency.

10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 1.582/3.513/5.314/1.224 ms


Jumping from 1.5 ms to 5.3 ms is still more jitter than I'd like. But for us, the Force300's are nowhere near the 120 ms that you guys are reporting. This 3.5 ms average was with about 80 Mbps of client traffic going over the link.

PS. And this is the latency from the SM to the AP (8.4 ms avg) while running a 20 second bandwidth test from the AP to the SM to try to saturate the link. This link was passing about 310 Mbps while doing this ping test.
 
rtt min/avg/max/mdev = 6.448/8.369/11.092/1.392 ms

As mentioned, while running the above 'ping' test, the link was carrying about 60 Mbps of client traffic, and the bandwidth test gave the following 'left over' results. The total was about 310 Mbps of aggregate traffic.

Downlink:     178.513 Mbps
Uplink:        68.582 Mbps
Aggregate:    247.095 Mbps (+ about 60 Mbps of live client traffic)

PING 10.0.0.45 (10.0.0.45) 32(60) bytes of data.
40 bytes from 10.0.0.45: icmp_req=1 ttl=64 time=8.24 ms
40 bytes from 10.0.0.45: icmp_req=2 ttl=64 time=6.44 ms
40 bytes from 10.0.0.45: icmp_req=3 ttl=64 time=8.69 ms
40 bytes from 10.0.0.45: icmp_req=4 ttl=64 time=6.65 ms
40 bytes from 10.0.0.45: icmp_req=5 ttl=64 time=9.54 ms
40 bytes from 10.0.0.45: icmp_req=6 ttl=64 time=8.07 ms
40 bytes from 10.0.0.45: icmp_req=7 ttl=64 time=8.28 ms
40 bytes from 10.0.0.45: icmp_req=8 ttl=64 time=6.94 ms
40 bytes from 10.0.0.45: icmp_req=9 ttl=64 time=11.0 ms
40 bytes from 10.0.0.45: icmp_req=10 ttl=64 time=9.68 ms

--- 10.0.0.45 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9015ms
rtt min/avg/max/mdev = 6.448/8.369/11.092/1.392 ms
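For anyone comparing jitter between runs, here's a small sketch (plain Python, nothing radio-specific) that reproduces ping's min/avg/max/mdev line from a list of RTT samples. The samples below are the rounded values from the ping output above, so the result is very close to, but not exactly, ping's own summary.

import math

def ping_summary(rtts_ms):
    # mdev is ping's population standard deviation of the RTT samples
    avg = sum(rtts_ms) / len(rtts_ms)
    mdev = math.sqrt(sum((r - avg) ** 2 for r in rtts_ms) / len(rtts_ms))
    return min(rtts_ms), avg, max(rtts_ms), mdev

# RTTs from the SM-to-AP run above (as rounded in ping's per-packet output)
samples = [8.24, 6.44, 8.69, 6.65, 9.54, 8.07, 8.28, 6.94, 11.0, 9.68]
print("rtt min/avg/max/mdev = %.3f/%.3f/%.3f/%.3f ms" % ping_summary(samples))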


@fgordillo wrote:

The test is with two 40 MHz channels, and the traffic over the link is less than 100 Mbps, which should not be enough to make the latency increase to 120 ms.


So something isn't right here. We've had a PTP550 up for quite some time now and aren't seeing anything like the latency you're seeing. Can you try using just one channel and see what your latency is like? Also, what does your spectrum and noise look like on both sides? Have you tried different channels? When you check the Monitor -> Performance area, are you seeing a lot of wireless drops or errors? When you run tests, what percentage of packets is being sent at the highest DS-MCS?

What I'd do is start basic: run the radio in single-radio mode on the cleanest 20 MHz channel you can find. Make sure the counters on the performance page are reset. Then run some link tests while watching the latency through another client, like a laptop pinging either end, or a MikroTik router pinging while you're flooding the connection. See what you get, and if it looks good, increase from there: maybe a single 40 MHz channel or perhaps 2x20 MHz channels, and retest.
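If it helps, here's a rough sketch of how I'd script that "watch the latency while you flood it" step from another client. It's not a Cambium tool, just plain Python calling the system ping binary; the far radio's IP below is a placeholder, and you'd kick off the link test (or an iperf run) separately while this collects samples.

import re
import statistics
import subprocess

FAR_RADIO = "10.0.0.45"   # placeholder: management IP of the far-end radio

def sample_rtts(count=60, interval=0.5):
    # Collect RTT samples in ms by parsing the output of the system 'ping'
    out = subprocess.run(
        ["ping", "-c", str(count), "-i", str(interval), FAR_RADIO],
        capture_output=True, text=True,
    ).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]

rtts = sorted(sample_rtts())
print(f"min={rtts[0]:.1f} ms  median={statistics.median(rtts):.1f} ms  "
      f"p95={rtts[int(0.95 * (len(rtts) - 1))]:.1f} ms  max={rtts[-1]:.1f} ms")
print(f"samples over 20 ms: {sum(r > 20 for r in rtts)}")

Run it once with the link idle to get a baseline, then again while the link test is flooding the channel, and compare the two summaries.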

Here's a screenshot of the latency I'm seeing while using a single 40 MHz channel on a PTP550, and the throughput results I get:

Single 40MHz radio on clean channel

Latency to far radio

You can see that 100% of my downlink and uplink packets are being sent at DS-MCS 9, the highest modulation available.


I also just noticed that a new beta for the F300 and PTP550 just came out, and there are some reports that latency is lower under load. You can check it out HERE.

Both PTP550s are connected to MikroTiks over 100 Mbps Ethernet; maybe the ping test into the PTP is limited by the Ethernet block and its own limits.

I will test with the new beta and gigabit Ethernet connected.


@Eric Ozrelic wrote:

At this time there is no ePTP mode planned for PTP550 or F300... but, if enough people ask for it, maybe it will happen eventually  :-)


I know Cambium has said this a few times, and I know they've also said that there would be little or no latency improvement for the Force300 with ePTP vs TDD. Perhaps because of the way AC Wave 2 works? Maybe there's an inherent difference in latency by design or something?

But here is an example of a link to my house. It's an ePMP2000 access point and a Force200 client. In TDD PMP mode, the latency is 7-9 ms (plus occasional spikes to 12-18 ms), and by just switching to ePTP mode and rebooting, the latency drops to 1.3 ms and stays flat, which is quite dramatically better.

ePMP2000_AP_SWITH_FROM_PMP_TO_ePTP_MODE.jpg
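My guess at why TDD mode sits around 7-9 ms while ePTP is around 1.3 ms: in TDD the radio can only transmit in its slice of a fixed frame, so a packet arriving at a random moment waits, on average, roughly half a frame in each direction of the round trip. Here's a back-of-the-envelope sketch of that idea; the numbers are my assumptions, not Cambium's published math.

# Assumed numbers, not taken from Cambium documentation
FRAME_MS = 5.0          # assumed TDD frame duration (ePMP offers 2.5/5 ms frames)
PROCESSING_MS = 1.3     # roughly the flat RTT seen in ePTP mode above

avg_wait_one_way = FRAME_MS / 2            # mean wait for the next TX opportunity
est_tdd_rtt = PROCESSING_MS + 2 * avg_wait_one_way
print(f"estimated TDD round trip: ~{est_tdd_rtt:.1f} ms "
      f"(vs ~{PROCESSING_MS} ms with no frame to wait for)")

That lands around 6-7 ms, which is at least in the same ballpark as what I see in PMP/TDD mode, and it would explain why removing the fixed frame in ePTP flattens the latency so much.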

Below is an example of a Force300 PTP link (in TDD mode) where we have an average latency of about 3.8 ms, but with lows and highs of roughly 1.5 to 6.5 ms. That's not horrible, and throughput on the Force300's is really quite good. But if an ePTP mode on the Force300/PTP550 could stabilize the jitter (even a little improvement), that would be great.

Some of our network's towers have 7 or more hops to them, and if we did all of these links with Force300's, we could potentially have end-to-end latency of 10-50 ms.

Force300 SM to AP Ping Results

PING 10.0.0.45 (10.0.0.45) 32(60) bytes of data.
40 bytes from 10.0.0.45: icmp_req=1 ttl=64 time=1.56 ms
40 bytes from 10.0.0.45: icmp_req=2 ttl=64 time=4.48 ms
40 bytes from 10.0.0.45: icmp_req=3 ttl=64 time=3.83 ms
40 bytes from 10.0.0.45: icmp_req=4 ttl=64 time=1.82 ms
40 bytes from 10.0.0.45: icmp_req=5 ttl=64 time=5.09 ms
40 bytes from 10.0.0.45: icmp_req=6 ttl=64 time=3.85 ms
40 bytes from 10.0.0.45: icmp_req=7 ttl=64 time=2.71 ms
40 bytes from 10.0.0.45: icmp_req=8 ttl=64 time=6.53 ms
40 bytes from 10.0.0.45: icmp_req=9 ttl=64 time=4.95 ms
40 bytes from 10.0.0.45: icmp_req=10 ttl=64 time=3.82 ms

--- 10.0.0.45 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 1.565/3.868/6.531/1.448 ms


Realistically, over 7 hops, it's unlikely that those spikes would all line up at exactly the same moment to produce 50 ms... but it is quite likely we'd see 20-30 ms latency end-to-end. If a Force300 ePTP mode could reduce the latency to 1.3 ms like it does on a Force200 link, that would make it a killer price/performance backhaul leader.
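Just to put rough numbers on that: here's a quick Monte Carlo sketch, assuming each of the 7 hops behaves like the Force300 TDD ping above (mean ~3.87 ms, mdev ~1.45 ms) and that hops are independent, which is obviously a simplification. The means add up, but the spikes rarely line up, so the tail stays well under 7x the single-hop worst case.

import random

HOPS = 7
MEAN_MS, MDEV_MS = 3.868, 1.448   # from the Force300 ping summary above
TRIALS = 100_000

totals = sorted(
    sum(max(0.5, random.gauss(MEAN_MS, MDEV_MS)) for _ in range(HOPS))
    for _ in range(TRIALS)
)
print(f"median ~{totals[TRIALS // 2]:.1f} ms, "
      f"p99 ~{totals[int(0.99 * TRIALS)]:.1f} ms, "
      f"worst seen ~{totals[-1]:.1f} ms")

That comes out right around the 20-30 ms range for the median, while a flat 1.3 ms ePTP-style hop would put the same 7-hop path at roughly 9 ms end to end.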