Looking for some suggestions. I've been fighting a problem for over a week and need a fresh perspective...
Trying to launch a new mini-pop site using a 1000 GPS AP on an omni antenna, mounted on an 8' pole on top of a house. The AP is connected to a Netonix WS-6-Mini switch, also on the roof. Backhaul is via a Mimosa B5.
The ePMP AP is on a 40 MHz channel with 75/25 GPS sync. Trying to deliver ~100 Mbps to subscribers. We have seven other sites in town doing exactly this... only this one has been a problem.
Here is my issue: download speeds from the AP top out at 20 Mbps. Upload speeds are ~45 Mbps (which is expected). I can't break 20 Mbps download and I don't know why.
I have tried Force 180s and Force 200s as the SMs. All tests have been within 300 meters of the AP. When I run a radio throughput test, it gives me 100 x 45. A TCP speed test to either speedtest.net or our internal iPerf server always stops at 20 Mbps down. It's almost like a QoS setting, but QoS is off.
If I plug a laptop into the cable that feeds the 1000 AP, we TCP speed test at 200 x 200, which is exactly what the backhaul is giving us. So it's not a cable or bandwidth issue.
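For anyone wanting to reproduce the kind of TCP test described above without an iPerf server handy, the idea can be sketched in plain Python sockets: a sink that discards data, and a sender that times how fast it can push bytes. This is a loopback illustration only (hosts, port, and byte counts here are made up); in the field you'd point a real iperf3 client at a server across the link.

```python
import socket
import threading
import time

def run_sink_server(host="127.0.0.1", port=5201):
    """Accept one connection and discard whatever the client sends."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(65536):  # read until the client closes
        pass
    conn.close()
    srv.close()

def measure_tcp_mbps(host="127.0.0.1", port=5201, total_bytes=50_000_000):
    """Push total_bytes over one TCP connection and report the rate in Mbps."""
    payload = b"\x00" * 65536
    sent = 0
    cli = socket.create_connection((host, port))
    start = time.monotonic()
    while sent < total_bytes:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    elapsed = time.monotonic() - start
    return (sent * 8) / (elapsed * 1_000_000)

server = threading.Thread(target=run_sink_server, daemon=True)
server.start()
time.sleep(0.2)  # give the listener a moment to come up
print(f"{measure_tcp_mbps():.1f} Mbps")
```

The useful part of a test like this is running it hop by hop (at the switch, at the AP port, behind an SM), exactly as described in this thread, to see where the number falls off.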
We replaced the AP. Same problem. We replaced the omni antenna. Same problem. We tried 3 different channels - all of which are totally clean. Same problem. We reduced the AP transmit power. Same problem. Noise floor is -90. SNR is in the 40's. RSSI is in the 50's. There is nothing out there that registers on either the Cambium spectrum tool or the one in the Mimosa B5 at the AP.
The B5 backhaul radio is 2' below the ePMP AP but on very different channels.
I feel like I am in the Bermuda Triangle of RF. No matter where I go with an SM and test, I get the same results. I even tried 3 different laptops thinking I was the issue. No changes. Ever.
Am I missing anything to test? I am getting ready to pull this mini-pop down and try again somewhere else. I simply can't make it work at this location.
Make a crossover cable between the Mimosa PoE injector and a Cambium injector and see what happens. You need to eliminate the switch as the issue, in my opinion. It's strange you are seeing this issue that the Netonix crowd says is due to flow control being turned off; in your case it was already on. But there are settings within that switch for flow control to obey or generate pause frames. If the Mimosa's traffic flow is overwhelming the switch's buffer, then I would expect exactly what you are seeing when flow control on a Netonix is not enabled. I don't believe Mimosa has flow control on its devices like Cambium does, and to be honest it may not be that at all; I'm just stabbing in the dark for you. If you look through the Netonix forums you'll see your issue described if you search for "flow control". I hope I haven't led you down a rabbit hole here. Good luck.
Check upstream for pause frames as well. They may be generated in another spot while you're pulling traffic. Just had that happen to us, and it took me a while to find it. The retransmission rate was a touch high on a different PtP link and was dragging down an entire switch during peak; we kept overlooking it. Anything in that same broadcast domain could be the problem if it's a pause-related issue, and with the sudden bursty nature, I'm willing to bet it is. Force your Ethernet speed to 100 Mbps and test from the mini switch: if you're hitting 50-60 meg, you're bottling up pause frames somewhere. If you land 80-90 meg, that points the problem elsewhere.
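For context on what these frames actually are: an 802.3x pause frame is just a small Ethernet control frame sent to the reserved multicast address 01:80:C2:00:00:01, with EtherType 0x8808, opcode 0x0001, and a 16-bit pause time measured in quanta (512 bit times each). A minimal sketch of building and decoding one, for illustration only (real frames are padded to 60 bytes and carry an FCS, both omitted here):

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")  # reserved multicast for MAC control
ETHERTYPE_MAC_CONTROL = 0x8808
OPCODE_PAUSE = 0x0001

def build_pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Build a minimal 802.3x pause frame (no padding, no FCS)."""
    return PAUSE_DST + src_mac + struct.pack(
        "!HHH", ETHERTYPE_MAC_CONTROL, OPCODE_PAUSE, quanta
    )

def parse_pause_quanta(frame: bytes):
    """Return the pause time in quanta if this is a pause frame, else None."""
    if len(frame) < 18:
        return None
    ethertype, opcode, quanta = struct.unpack("!HHH", frame[12:18])
    if frame[:6] == PAUSE_DST and ethertype == ETHERTYPE_MAC_CONTROL and opcode == OPCODE_PAUSE:
        return quanta
    return None

frame = build_pause_frame(bytes.fromhex("aabbccddeeff"), 0xFFFF)
print(parse_pause_quanta(frame))  # 65535 = "pause as long as possible"
```

The point for troubleshooting: a single device emitting these can stall a whole switch port, which is why anything in the same broadcast domain is a suspect.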
Pause frames are interesting... I have forced the port to 100M and will have our one client out there run a test. I do get full speed with a laptop testing the port the AP is in, and none of the switch logs in the path report excessive pause frames, but it does seem to fit a pattern.
True on the Mimosa: they do not allow flow control. The firmware says it can be turned on, but it never negotiates it with any switch we own. I have read the Netonix forums on pause frames but have not seen many issues reported with ePMP gear. Ubiquiti gear seems to be the main offender with Netonix, and we turn off flow control on all our AirFiber links as a result. The only ePMP issues reported involve some 2000 APs. We run flow control on our 2000 APs and have no issues.
I'll report back after changing the speed negotiation on the switch.
Turned off flow control on all ports of the mini-pop switch and downgraded the 1000 AP to 100M at the switch port. The results are in: no appreciable change. Downloads are up to a top of 25 Mbps (not really any better) and uploads are staying at 45 Mbps.
If it were a flow control issue upstream of this house, I feel like I would be seeing it when I speed test at the mini-pop Netonix. I can reliably get 200 x 200 (TCP tests) at that switch using my laptop. It's only when we connect to the AP that we get this download issue. And we have seen it with two different APs (to rule out a hardware issue).
My last test will be to remove the Netonix and go from the Mimosa injector directly into the Cambium injector and see what we get then. That will be a couple days. I'm up for any other suggestions or thoughts...
What about the Ethernet cables? Have you replaced or re-crimped them? I know it seems like a long shot, but I've run into similar issues a few times over the years with over- or under-crimped Ethernet ends and certain Ethernet ports.
Even if the cable links up at 1Gbps it may still be suspect.
I did think about that. I used the same cables to test into my laptop and was getting full speed, so I opted not to re-crimp since they work in one scenario. When I remove the Netonix for that test, I'll replace the cable.
This is turning into a rabbit hole related to pause frames. You got me thinking about them, so I started digging in my switches.
I am finding something I can't quite explain. On every single one of our Netonix switch ports that feeds a Cambium ePMP AP, I am seeing Rx Pause Frames at the same count as Rx Multicast Frames, and they tick up together. It appears as if it is pausing every single multicast frame. This happens with flow control on or off at the switch.
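A quick way to confirm that kind of correlation is to poll the two counters a few times and check they move in lockstep, rather than eyeballing totals. A small sketch, assuming you can scrape the switch counters into (rx_pause, rx_multicast) samples over time (the sample readings below are made up):

```python
def counters_move_in_lockstep(samples, tolerance=0):
    """samples: list of (rx_pause, rx_multicast) readings taken over time.
    Returns True if the per-interval deltas of both counters match
    within the given tolerance."""
    deltas = [
        (b[0] - a[0], b[1] - a[1])
        for a, b in zip(samples, samples[1:])
    ]
    return all(abs(dp - dm) <= tolerance for dp, dm in deltas)

# Made-up readings resembling what the Netonix UI showed:
samples = [(1000, 1000), (1125, 1125), (1301, 1301)]
print(counters_move_in_lockstep(samples))  # True
```

Comparing deltas rather than totals matters: two counters can coincidentally sit near the same value, but matching increments interval after interval is much stronger evidence they are counting the same events.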
I then ask myself why I need/want multicast on the subscriber network segment. I don't think I do, since there is no reason for it. We don't use auto discovery on our management side either, and LLDP and MAC-Telnet are turned off. We don't run with "Reliable Multicast" enabled, but probably should?
On our Mimosa AP switch ports, no Rx pause frames. So something is different between the two vendors here. From a network setup standpoint, you could swap a Mimosa AP for a Cambium AP and not make any changes to the switch. We untag the customer VLAN and tag the management VLAN at the AP switch port for all our APs. Every AP has a different customer VLAN with the same management VLAN.
While this is getting off topic from my original bandwidth problem, it is making me peel back more layers of the onion. What are the suggested settings on the AP and SMs for multicast and broadcast?
Well, I think I have found where the issue is - but not why...
When I forced the ports to 100M on the AP and both ends of the backhaul, the problem went away. Download speeds shot up to 90 Mbps. I then started turning ports back to 1G, starting at the AP.
With the AP and client end of the backhaul at 1G and the Internet end of the backhaul at 100M, speeds are good. As soon as I put the Internet side of the backhaul at 1G, speeds drop to 15 Mbps at the ePMP client SM.
Flow control is off on all ports and is off in the software of the Mimosa B5 link.
The Internet side of the B5 hits a Netonix switch that also connects to the AirFiber link that takes this tower to the Internet. No one else is having issues; it's just this B5 link. I've used this gear before off this tower, servicing other POPs, with no issues.
This certainly seems like a flow control issue but I'm not understanding why...
Looks like this got solved. After testing various forced-100M options on our switches, we replaced the Ethernet cable feeding our Mimosa B5 backhaul radio. The tell was that when the switch port that radio was plugged into was forced to 100M, the problem went away.
So, upon replacing the Ethernet cable and taking the switch port back to 1G, no issues. While this looked like a cable issue in the switch-port testing, it did not appear to be a cable issue in our field testing. If we plugged a computer into the far end of that backhaul link, we got full-speed (200 x 200) TCP throughput. It was only one hop further away, plugged into an SM attached to the AP behind that backhaul, that we saw the speed problem. Our testing made us think we had an ePMP issue, since the switch the ePMP AP was plugged into always showed full throughput in TCP testing.
So, palm to face, and two weeks of wasted time in hindsight: the AP is working as we would expect with a new cable at the backhaul radio. The old cable tested fine on an Ethernet tester but had issues on the tower.
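As a postscript on why a marginal cable can behave this way: a cable that still negotiates 1G but corrupts occasional frames shows up to TCP as packet loss, and loss-based TCP throughput degrades with the square root of the loss rate. The classic Mathis approximation is rate ≈ (MSS / RTT) × (1 / √p). A quick sketch with assumed, illustrative numbers (1460-byte MSS, 5 ms RTT):

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation of steady-state TCP throughput:
    rate = (MSS / RTT) * (1 / sqrt(p)), converted to Mbps."""
    return (mss_bytes * 8 / rtt_s) * (1 / math.sqrt(loss_rate)) / 1_000_000

# Assumed numbers for illustration only.
for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:.2%}: ~{mathis_throughput_mbps(1460, 0.005, p):.0f} Mbps")
```

With those assumed numbers, ~1% loss caps a single TCP stream in the low-20s of Mbps, which is at least in the same neighborhood as what was observed here, while a UDP-style radio throughput test (and a short laptop test that happens not to cross the bad segment) can still look fine.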