450M performance problems with 16.2.2

Following up from:

Any update on this, Cambium? We are hearing many more complaints about this since the 16.2.2 release. We are seeing it on APs that are not crowded (under 20 customers), so it shouldn't be the MIR-at-capacity bug listed as unfixed in the release notes.

@rnelson please start a new thread, this one is over a year old. Please be specific about the issues you’re experiencing.

It is the exact same issue as OP is seeing.

(I moved this request into a topic of its own)

@rnelson, are you able to show the performance you were getting before and after you upgraded to 16.2.2?

So in the original thread, the OP found that his link tests were much higher than his actual speed tests behind the SM. The reasons for this were:

1. When you run a link test between the AP and SM, it temporarily stops all other traffic across the AP, giving you the best possible result.
2. When he ran speed tests behind the client, they were lower, because the group the SM under test belonged to was saturated.
3. He resolved the problem by adding a second AP offset from the original one and migrating clients over to it, reducing load on the original AP/group.

Is the expectation that multiple APs should be deployed on a site that is not loaded? The cost-benefit would not work out for two 450Ms on a sector with only 20 customers.

If you do not have an optimal spread of clients across all 7 groups in a 90° sector, and/or you have many top talkers falling into one group, then it does not make sense to deploy a 450m. It would make more sense to deploy something like one or two 450i APs with 30° horn antennas.

No, because it was about two months ago that we upgraded all the APs to 16.2.2, so there is no historical data for comparison. Before 16.2.2 we had no complaints about this issue. Since upgrading we have been receiving complaints about it regularly, even on 16.2.3.

A link test with MIR but without bridging shows high throughput; with MIR and bridging, the result is well below the cap set on the RADIUS server. This is now happening on low-count APs (under 20 SMs). I recall having this issue previously in one of the pre-16.1 releases; it was fixed in 16.1 and now it is back. It is happening on the 450i as well as the 450M.

We have dozens of 450m APs with SM counts from a dozen to almost 100 per AP, and I have not seen this issue… granted, we don't use RADIUS. All our APs and SMs are running 16.2.3 or R20.

What do you use instead of RADIUS?

We’re using Preseem.

You should probably raise a support ticket for this - the support team will be able to look at the diagnostics to see what is going on.

I see you raised a ticket with a similar subject in August - is this the same AP?

This is a different AP. I will open a support ticket. Thanks!

Ok I do have one question before opening a support ticket.

If there is no longer an MIR cap in RADIUS and the MIR on the SM is the default of 150 Mbps, should "Link Test with Bridging and MIR" return similar results to "Link Test with Bridging", assuming the speeds do not approach the 150 Mbps default SM cap?

Hi, we are experiencing exactly the same problem as you describe, on 16.2.2, and we only have 10 customers on the AP! When the load on the 450m exceeds 60-80 Mbps, the 450m is unable to deliver the provisioned speed to customers, but if we measure speed between the AP and SM, we get the full rate.

Yes, except when there is other traffic on the link: that traffic keeps flowing during the Link Test with MIR. See the open issue listed in the Release Notes:

When Link Test is performed with Bridging and MIR, while there is significant user traffic, Link Test throughput results are low, when compared to Link Test results with Bridging.

What are your Downlink and Uplink Burst Allocation values? These values are in kilobits, not kilobits-per-second. If they are at 0, you will see lower values on the MIR-with-bridging test. As a rule of thumb, you can try 2X the MIR Downlink and Uplink Data Rate values. For example, if you have the MIR Downlink Data Rate set to 15000 kbps, then set a Downlink Burst Allocation of 30000 kbits. This tends to get the MIR to run at the expected maximum (assuming your Max Burst Data Rates are set to 0).
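
To see why a zero burst allocation drags the test result down, here is a minimal back-of-the-envelope sketch (my own assumed model, not Cambium firmware): over a fixed-duration test, a rate limiter can at best deliver its initial burst of tokens plus rate × duration, so the burst sets the headroom above the bare MIR rate. The function name and the closed-form ceiling are my own illustration.

```python
# Assumed model, not Cambium's implementation: best-case average
# throughput of a fixed-duration link test through a token bucket.
# Units follow the radio's convention: rate in kbps, burst in kbits.

def link_test_average_kbps(rate_kbps: float, burst_kbits: float,
                           duration_s: float) -> float:
    """Ceiling on the averaged result: the bucket can contribute its
    initial burst plus rate * duration worth of tokens."""
    total_kbits = burst_kbits + rate_kbps * duration_s
    return total_kbits / duration_s

# Burst = 0: the ceiling is exactly the bare rate, so any startup dead
# time or scheduler gap pulls the averaged result below the MIR.
print(link_test_average_kbps(15000, 0, 10))      # 15000.0 kbps ceiling

# 2X rule of thumb (30000 kbits burst for a 15000 kbps MIR): headroom
# to absorb ramp-up and still average the full configured MIR.
print(link_test_average_kbps(15000, 30000, 10))  # 18000.0 kbps ceiling
```

With zero burst the MIR is only reachable if the sender is perfectly busy for the whole window, which a real test never is.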

Here is what we set for burst. The problem is we have now tried this without MIR in RADIUS and are still having issues. We have a ticket open with Cambium and have sent multiple engineering files. Hopefully someone at Cambium can look at it soon, because we are getting more and more complaints. The installers have noticed more speed issues since roughly the 16.1 release; before that speeds were great, and now we get many more complaints from customers.

Wireless_7Mb_2Mb | Cambium-Canopy-DLBR | 7168
Wireless_7Mb_2Mb | Cambium-Canopy-DLBL | 7000
Wireless_7Mb_2Mb | Cambium-Canopy-DLMB | 7700
Wireless_7Mb_2Mb | Cambium-Canopy-ULBR | 2048
Wireless_7Mb_2Mb | Cambium-Canopy-ULBL | 2000
Wireless_7Mb_2Mb | Cambium-Canopy-ULMB | 2200
Wireless_7Mb_2Mb | Cambium-Canopy-BCASTMIR | 1400
Wireless_7Mb_2Mb | Fall-Through | Yes

This is what I send for a 25x7 (d/u) plan:

Cambium-Canopy-DLBL = 50000
Cambium-Canopy-DLBR = 25000
Cambium-Canopy-DLMB = 25000
Cambium-Canopy-ULBL = 14000
Cambium-Canopy-ULBR = 7000
Cambium-Canopy-ULMB = 7000

That gives me 26.27 Mbps down, 7.30 Mbps up on a Link Capacity test with MIR and bridging. This is using 16.2.3.

Setting the burst limit to 2X also seems to make the token bucket behave the way people expect.
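
A small simulation illustrates that token-bucket behavior (again an assumed textbook model, not Cambium's firmware): a bucket refilled at the MIR rate with capacity equal to the burst allocation. If the test has a brief dead time at the start, tokens overflow a too-small bucket and the averaged result falls short of the MIR; a 2X burst preserves them. The tick granularity and one-second idle are illustrative assumptions.

```python
# Assumed textbook token bucket, not Cambium firmware. Bucket refills
# at rate_kbps each one-second tick; capacity = burst allocation, but
# never below one tick of rate so a zero-burst config still forwards
# traffic at the bare rate.

def simulate_test_kbps(rate_kbps, burst_kbits, duration_s=10, idle_s=1):
    cap = max(burst_kbits, rate_kbps * 1.0)
    tokens = 0.0
    sent = 0.0
    for t in range(duration_s):          # 1-second ticks
        tokens = min(cap, tokens + rate_kbps)
        if t >= idle_s:                  # sender idle during ramp-up
            sent += tokens               # link speed >> MIR: drain fully
            tokens = 0.0
    return sent / duration_s             # averaged kbps over the test

print(simulate_test_kbps(25000, 0))      # 22500.0: idle second's tokens lost
print(simulate_test_kbps(25000, 50000))  # 25000.0: 2X burst absorbs the gap
```

In this model the zero-burst run averages 10% under the configured 25000 kbps MIR, which is the same qualitative shortfall people report on the MIR-with-bridging test.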

Thanks for that info. I will try tweaking burst and report back!