Disappointed, anyone have a successful deployment?

We’ve had our first trial up for a couple of weeks. We started in PTMP, with a V5000 and a V3000 client at 0.2 miles (~360 meters). In perfect conditions it never had an SNR higher than about 12-14. EIRP was a couple of dB better than expected, so I had to assume interference. We know there are other 60 GHz users in the area running Ubiquiti and Mikrotik, but short of the low SNR we didn’t find any good way to check for noise. We were running RFC tests to an endpoint at only 250 Mbps, and it never once tested without a packet error rate. I was also pretty disappointed that it maxed out at a frame rate of 170 kfps. And with L2 enabled it does not pass 1 Gbps; it failed that test every single time with its processor reporting 99%. More like upper 800s to low 900s.
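For context on that 170 kfps number, here’s my own back-of-envelope math (standard Ethernet framing assumptions, nothing vendor-specific): line rate on a 1 Gbps port at 64-byte frames is about 1.49 Mfps, so 170 kfps of small frames is under 90 Mbps.

```python
# Back-of-envelope: Ethernet frame rate vs. throughput on a 1 Gbps port.
# Assumes standard Ethernet framing: 8-byte preamble + 12-byte
# inter-frame gap per frame on the wire (not vendor-specific numbers).

LINE_RATE_BPS = 1_000_000_000
PREAMBLE = 8
IFG = 12

def max_fps(frame_bytes: int) -> float:
    """Theoretical max frames/sec at line rate for a given frame size."""
    wire_bits = (frame_bytes + PREAMBLE + IFG) * 8
    return LINE_RATE_BPS / wire_bits

def throughput_mbps(fps: float, frame_bytes: int) -> float:
    """L2 throughput (frame bytes only) at a given frame rate."""
    return fps * frame_bytes * 8 / 1e6

print(f"64B frames at line rate: {max_fps(64):,.0f} fps")       # ~1,488,095
print(f"170 kfps of 64B frames: "
      f"{throughput_mbps(170_000, 64):.0f} Mbps")               # ~87 Mbps
print(f"1518B frames at line rate: {max_fps(1518):,.0f} fps")   # ~81,274
```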

So we decided to swap it out for a PTP yesterday. We now have two V3000s with the large dish at only 360 meters. Still seeing frame errors overnight, even though the link was showing SNRs into the 25s. Within an hour of the test finishing we started getting some snow. Sticky and wet against the front of the snow shovel, and with only 1 inch of snow by 5am we were offline and still are. It doesn’t look like the E2E controller keeps historical data if a link drops. Can I assume it maxed out its EIRP before dropping? The graph shows how link quality improves when passing traffic, drops back to -60ish when idling, but then a very quick drop (again, only 1 inch of snow).

I have a PTP to try at 120 meters. To the other people testing in snow: have you been able to get a link to stay up at closer distances, or does a tiny amount of snow take these offline at almost any distance?


Hello Brian,

Thank you for sharing those results. That SNR should have achieved MCS 11/12, but you will only see the MCS rise above 9 when there is traffic; if the link is idle, the MCS drops back to 9. So the link itself doesn’t look like there is an issue. At 200m with a V3000, if you were using the latest build, I would have expected >1.5Gbps in each direction with L2-GRE enabled (1.8Gbps in L3 mode), especially using large packets (1518B).

Not sure what version you were using, but did you see the throughput increase as the packet size increased?

At the moment small packets do incur an overhead due to the physical structure of the frame; however, we are currently testing new firmware (2.0) which includes an update to A-MSDU aggregation to aggregate four data units.
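The effect of packet size on efficiency can be sketched with a toy fixed-overhead model. The overhead constant below is made up for illustration; it is not the actual 802.11ay frame overhead, and the real gain from aggregation depends on the full frame structure.

```python
# Toy model: fixed per-aggregate overhead shared by the packets inside it.
# OVERHEAD_BYTES is a hypothetical illustrative constant, NOT the real
# 802.11ay PHY/MAC overhead.

OVERHEAD_BYTES = 400

def efficiency(payload_bytes: int, aggregated_msdus: int = 1) -> float:
    """Fraction of airtime carrying payload when `aggregated_msdus`
    packets share one aggregate's fixed overhead."""
    payload = payload_bytes * aggregated_msdus
    return payload / (payload + OVERHEAD_BYTES)

for size in (64, 512, 1518):
    print(f"{size:>5}B  no aggregation: {efficiency(size):.0%}   "
          f"4x A-MSDU: {efficiency(size, 4):.0%}")
```

Whatever the exact constants, the shape is the same: small packets waste most of the airtime on overhead, and aggregating several of them under one header recovers a large part of it.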


Anthony, you’re right on with it achieving MCS 11. It would usually idle around an RSL of -62, then increase up to maybe -59 with traffic. But the whole time it had packet loss, and it was never better than around a 1e-5 packet error rate. Even in the PTP configuration with the two V3000s, where the RSL came up as high as -45 with SNRs of 25, we still had an error rate. I can’t tell you what it was because the link dropped before I could start additional testing. Is that just about the best the equipment can do? Should we always expect low-grade errors? I’d expect some in a multipoint configuration with contention periods, but with only one subscriber attached I would expect it to run a lot cleaner.

We are currently on release 1.2. We are only connected on the 1Gbps port, so I couldn’t try to push higher than that. The reason I think 1Gbps was the max is that as soon as we started pushing close to 1Gbps, the packet error rate increased to 0.01%. No way voice traffic would have worked over the link. At the same time, this is what we saw on the POP-DN. Since CPU1 and CPU3 were holding very steady at 98-100% utilization, I assumed that was the cause of the additional packet loss.
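To put that 0.01% in perspective, rough math (assuming full-size 1518B frames; I don’t know our actual traffic mix, so treat this as illustrative):

```python
# Rough loss-rate math near 1 Gbps. Assumes all 1518-byte frames; the
# real frame mix on our link is unknown, so this is illustrative only.

rate_bps = 1_000_000_000
frame_bytes = 1518
per = 0.0001  # 0.01% packet error rate

fps = rate_bps / (frame_bytes * 8)   # ~82k frames/sec
lost_per_sec = fps * per
print(f"{fps:,.0f} fps -> ~{lost_per_sec:.0f} frames lost per second")
```

A handful of dropped frames every second, continuously, is well past what VoIP concealment handles gracefully.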

I have yet to be able to test the higher capacity in the PTP configuration. I know it had a 0.1% frame error rate at 250 Mbps, but I couldn’t do my packet test. The link dropped with an inch of snow yesterday morning before I could try a larger test, and it is still completely down now, 36 hours later.