There was a thread which briefly touched on the meaning of the Slot Grouping table displayed at the bottom of the Link Test with Multiple VCs screen, but the actual results in the Link Test Table are either terrible or confusing...
| 89 (Low Priority) | 893.69 kbps | 99% | 3516 | 3491 | 8X/4X MIMO-B | 8X/4X MIMO-B | 100% |
| 90 (Low Priority) | 899.07 kbps | 100% | 3512 | 3512 | 8X/4X MIMO-B | 8X/4X MIMO-B | 100% |
| 91 (Low Priority) | 843.77 kbps | 100% | 3296 | 3296 | 8X/6X MIMO-B | 8X/4X MIMO-B | 100% |
| 92 (Low Priority) | 833.28 kbps | 93% | 3476 | 3255 | 8X/4X MIMO-B | 8X/4X MIMO-B | 100% |
| 93 (Low Priority) | 2.03 Mbps | 83% | 9563 | 7966 | 8X/4X MIMO-B | 8X/2X MIMO-B | 100% |
| 94 (Low Priority) | 163.84 kbps | 11% | 5544 | 640 | 8X/6X MIMO-B | 8X/6X MIMO-B | 100% |
So, is this just the best each SM could get if all were downloading at their maximum simultaneously? The Configuration and User Guide shows the controls, but doesn't help with interpreting the test results.
To explain the columns in the Link Test results I will use the following results obtained from a Cambium test sector.
The columns in the first table are:
- VC: virtual circuit queue
- Rate: delivered data rate, averaged over the test
- Efficiency: fragment success rate during the link test
- Transmit fragments: number of fragments transmitted during the test
- Receive fragments: number of fragments received during the test
- Downlink Rate: downlink rate-adapt modulation mode at the end of the test
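From these definitions, Efficiency appears to be the ratio of received to transmitted fragments, truncated to a whole percent. A quick check of this interpretation (my own arithmetic, not an official formula) against the rows posted in the question:

```python
# Rows from the posted Link Test results: VC -> (tx fragments, rx fragments)
rows = {
    89: (3516, 3491),   # displayed as 99%
    92: (3476, 3255),   # displayed as 93%
    93: (9563, 7966),   # displayed as 83%
    94: (5544, 640),    # displayed as 11%
}

for vc, (tx, rx) in rows.items():
    eff = int(100 * rx / tx)    # truncate to whole percent
    print(f"VC {vc}: {eff}%")
```

The computed values match the displayed Efficiency column for every row, which supports reading Efficiency as receive fragments divided by transmit fragments.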
The first row details the totals for the test, showing an aggregate of 211.25 Mbps. Subsequent rows detail the results for individual VCs.
The second table details the grouping statistics achieved during the link test. The columns are:
- Group size: each of the possible spatially multiplexed group sizes, from 1 to 7
- % distribution: the proportion of transmissions using this group size
- Average slot count: average number of slots per TDD frame used for transmissions with this group size
For these results all transmissions achieved a grouping of 3. VCs 18 and 21 are always grouped with one of the remaining four VCs; none of those four could be grouped with each other because their spatial separations were too small. Consequently, transmissions occur to VCs 18 and 21 every TDD cycle, and to each of the remaining VCs every 4 TDD cycles, due to the round-robin scheduler.
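This scheduling pattern can be modelled with a toy sketch (my own illustration, not the actual cnMedusa scheduler; the VC numbers for the four ungroupable circuits are hypothetical):

```python
from collections import Counter

# VCs 18 and 21 can be grouped with any other VC, so they transmit every
# cycle. The other four VCs cannot be grouped with each other, so the
# scheduler rotates through them round-robin, one per TDD cycle.
always = [18, 21]
rotating = [19, 20, 22, 23]       # hypothetical VC numbers

cycles = 400 * 10                 # 400 TDD/s, 10 s test
tx_count = Counter()
for i in range(cycles):
    group = always + [rotating[i % len(rotating)]]   # group size 3
    tx_count.update(group)

print(tx_count[18], tx_count[19])
```

VCs 18 and 21 get a transmission in all 4000 cycles, while each of the rotating VCs gets one cycle in four (1000 transmissions), matching the 4:1 ratio described above.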
The reported TX fragment count for VC 18 is also consistent with the estimate based on:
- 6 fragments/slot, due to 64-QAM modulation
- × 55 slots/TDD (reported in the second table)
- × 400 TDD/s (configured TDD rate)
- × 10 s test time
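Multiplying these factors out gives the expected fragment count:

```python
# TX fragment estimate for VC 18, using the factors listed above
fragments_per_slot = 6     # 64-QAM modulation
slots_per_tdd = 55         # from the slot grouping table
tdd_per_second = 400       # configured TDD rate
test_seconds = 10

estimate = fragments_per_slot * slots_per_tdd * tdd_per_second * test_seconds
print(estimate)   # 1320000 fragments over the 10 s test
```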
I will need more information to understand the results originally posted. Could you provide a screenshot of the complete link test webpage?
When running the Link Test with Multiple VCs, data rates per VC may be lower than initially expected. In this test, the AP is offering data to all VCs participating in the test. Consequently the AP scheduler has to share the capacity of the wireless medium among all the active VCs. It is this sharing which explains the lower rate per VC compared to a test performed on a VC with no other traffic on the sector.
cnMedusa has been optimised to maximise the minimum rate that can be delivered to SMs under busy conditions. This enables the service provider to increase the number of end customers that can receive video on demand during peak time.
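"Maximising the minimum rate" is the classic max-min fairness objective. As a generic textbook sketch (not cnMedusa's actual algorithm, and with made-up capacity and demand figures), a progressive-filling allocation looks like this:

```python
def max_min_fair(capacity, demands):
    """Share capacity so no VC can gain without reducing a VC that has less."""
    alloc = {vc: 0.0 for vc in demands}
    remaining = dict(demands)
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)     # equal split of what is left
        for vc in list(remaining):
            given = min(share, remaining[vc]) # never exceed a VC's demand
            alloc[vc] += given
            capacity -= given
            remaining[vc] -= given
            if remaining[vc] <= 1e-9:         # demand satisfied, drop out
                del remaining[vc]
    return alloc

# Four VCs with unequal demands (in Mbps) sharing a 10 Mbps sector:
print(max_min_fair(10.0, {"a": 1.0, "b": 2.0, "c": 8.0, "d": 8.0}))
```

Small demands are fully satisfied, and the leftover capacity is split evenly among the heavy users, which is why per-VC link test rates converge toward a similar floor when many VCs are active.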
The following results were obtained from a customer sector. In this case 111 VCs participated in the link test and the sector delivered an aggregate of 193.28 Mbps. This corresponds to an average rate per VC of about 1.7 Mbps, but as the per-VC results show, there is a spread of results depending on the spatial multiplexing gain and modulation rate achieved for each individual VC.
OK, this statement
"Consequently the AP scheduler has to share the capacity of the wireless medium among all the active VCs. It is this sharing which explains the lower rate per VC compared to a test performed on a VC with no other traffic on the sector."
is the answer to the question I had asked. As I interpret this, the test generates pretty much a worst-case scenario, in which the AP is sending at maximum capacity to all VCs simultaneously. This is not something that would occur in the real world with actual subscribers using the connection normally, so low numbers are not, in themselves, anything to worry about.