We enabled Throughput Monitoring and Downlink RF Overload trapping on many of our 450(i/m) series APs and have noticed some confusing results.
We are seeing RF downlink overload in many (many) cases where frame utilization is far below peak (20/30/40%).
Could someone at Cambium explain how this statistic is calculated? I thought the manual indicated it's RF downlink discards. So, for example, an AP with 20 subscribers on plans in the 4-10 Mbit range, with total frame utilization peaking at 40% on the 5-minute graphs, shouldn't be seeing any RF overloads; yet we see them.
From the manual, under "RF Out Discards":
This field indicates the number of packets tossed due to RF link at capacity. This stat will increase whenever the RF link is at capacity. When the internal FPGA RF input queue overflows, this stat is incremented. If this stat is seen to be incrementing at the AP, then the sector is congested. If seen at the SM, the number of Contention Slots must be looked at to ensure that enough Contention Slots are allocated to allow for bandwidth requests to be seen at the AP.
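One possible explanation for the behavior above: if the trap fires whenever the cumulative RF Out Discards counter advances between polls, then even a brief traffic burst that momentarily overflows the FPGA RF input queue would trip it, regardless of how low the 5-minute average frame utilization is. A minimal sketch of that assumed delta-based alerting logic (the function name, counter values, and threshold are my own illustration, not Cambium's actual implementation):

```python
# Hypothetical sketch of delta-based trap logic: alert if the cumulative
# "RF Out Discards" counter advanced between two polls. This is an
# assumption about how the trap might work, not Cambium's documented logic.

def overload_in_interval(prev_discards: int, curr_discards: int,
                         threshold: int = 1) -> bool:
    """Return True if the cumulative discard counter advanced by at least
    `threshold` packets between two consecutive polls."""
    delta = curr_discards - prev_discards
    return delta >= threshold

# Even at 40% average frame utilization, a short burst can fill the RF
# input queue, increment the counter by a few packets, and raise the trap.
print(overload_in_interval(1000, 1000))  # no discards in the interval -> False
print(overload_in_interval(1000, 1003))  # 3 discards in the interval  -> True
```

If that is roughly how it works, the trap is really a "any discards occurred" signal rather than a "sustained congestion" signal, which would reconcile the alerts with the low average utilization we see on the graphs.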