Quality of Service Mechanisms Explained

ePMP Quality of Service
Detailing how ePMP’s QoS functions, including a discussion of Cambium’s proprietary methods for prioritizing, queueing, and limiting traffic over the radio link.


Introduction
The following information is intended for network operators seeking a better understanding of ePMP’s Quality of Service (QoS) implementation within the system’s Time Division Duplex (TDD) scheduler. This document includes a description of the QoS mechanisms present in the ePMP system, information about priority levels and Weighted Fair Queueing (WFQ) implementations, and discussions about how real-world results can be interpreted with respect to ePMP QoS.

[Figure: ePMP QoS overview]

ePMP QoS Mechanisms – the Basics
Cambium QoS consists of two functional areas: prioritization and rate limiting.

How ePMP Handles Prioritization Internally
The ePMP system prioritizes different types of traffic within internal queueing mechanisms. These mechanisms are controlled by classification rules defined at the user level.


Radio link traffic may be prioritized as Voice, High, or Low based on system configuration and traffic type. For all ePMP devices, High and Low queue QoS prioritization and rate limiting mechanisms are applied solely to traffic ingressing the device Ethernet port and exiting the device radio interface (over the radio link). For example, ePMP Access Point devices enact QoS for High and Low priority data entering the Ethernet port and exiting the device over the radio downlink, while ePMP Subscriber Modules enact QoS for High and Low priority data entering the Ethernet port and exiting the device over the radio uplink.

In the case of Voice traffic, the ePMP system features a mechanism wherein the subscriber may "piggyback" a bandwidth request with VoIP packets sent over the uplink. This allows the Access Point to schedule additional subscriber uplink allocation sooner, instead of the subscriber waiting for the next uplink polling response opportunity to schedule transmission. The result is better handling of voice application communication and more reliable call quality.

[Figure: QoS applied to traffic entering the Ethernet port and exiting the radio interface]

Fine Control of Prioritization by Configuring QoS Classification Rules
With ePMP, unicast traffic is prioritized based on Layer 2 markings (VLAN, CoS, EtherType, MAC address) and Layer 3 markings (IP, DSCP). This prioritization may be controlled by configuring QoS classification rules on the device (these rules are configured in the device QoS Classification Rules table). The QoS Classification Rules table contains all of the rules enforced by the device when passing traffic over the radio link. Traffic passed through the device is matched against each rule in the table; when a match is made, the traffic is sent over the radio link using the priority defined in the Priority column.
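As a rough illustration, the Python sketch below models how a first-match rule table like this might map a packet's markings to a radio-link priority. The rule types, field names, and first-match evaluation order are assumptions for illustration, not the device's actual internals.

```python
# Minimal sketch (not Cambium's implementation) of a first-match lookup
# in a QoS Classification Rules table. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    rule_type: str   # e.g. "DSCP", "CoS", "VLAN", "MAC", "EtherType"
    value: int       # marking the rule matches
    priority: str    # "Voice", "High", or "Low"

RULES = [
    Rule("DSCP", 46, "Voice"),
    Rule("CoS", 5, "Voice"),
    Rule("VLAN", 200, "High"),
]

def classify(markings: dict, default: str = "Low") -> str:
    """Return the radio-link priority for a packet's L2/L3 markings."""
    for rule in RULES:
        if markings.get(rule.rule_type) == rule.value:
            return rule.priority
    return default  # unmatched traffic rides the Low queue by default

print(classify({"DSCP": 46}))           # -> Voice
print(classify({"VLAN": 200}))          # -> High
print(classify({"DSCP": 0, "CoS": 0}))  # -> Low
```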

Controlling Overall Prioritization by Enabling Subscriber Module Priority
Prioritization may also be controlled by modifying the Subscriber Module Priority configuration parameter to automatically place all non-voice (non-VoIP) traffic in either the High priority queue or the Low priority queue. For example, when this parameter is set to High, the SM places all data other than VoIP in the High priority queue. This data is given higher priority than traffic from other sector SMs configured with Subscriber Module Priority set to Normal or Low. In other words, this high-priority SM is served by the AP with its traffic automatically classified and scheduled as Voice (VoIP traffic) and High (all other traffic) priority. As with any Quality of Service implementation, careful attention should be paid when assigning high priority to an SM, especially if additional QoS classification rules are configured on the same system.

Prioritization of Broadcast and Multicast Traffic
Operators may configure ePMP devices to prioritize broadcast and multicast traffic automatically into the High priority queue or Low priority queue. By default, both broadcast and multicast traffic are automatically categorized in the Low priority queue. At the user level, this prioritization is controlled by the parameters Broadcast Priority and Multicast Priority.
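Taken together with the Subscriber Module Priority parameter above, the queue-selection logic might be modeled roughly as below. This is an illustrative sketch only; the parameter names mirror the configuration labels, but the exact decision order inside the device is an assumption.

```python
# Illustrative sketch of how Subscriber Module Priority and the
# Broadcast/Multicast Priority parameters might combine with VoIP
# classification. The decision order is an assumption, not device code.

def select_queue(is_voip: bool, is_broadcast: bool, is_multicast: bool,
                 sm_priority: str = "Normal",
                 broadcast_priority: str = "Low",
                 multicast_priority: str = "Low") -> str:
    if is_voip:
        return "Voice"             # VoIP always rides the Voice queue
    if is_broadcast:
        return broadcast_priority  # defaults to Low
    if is_multicast:
        return multicast_priority  # defaults to Low
    if sm_priority == "High":
        return "High"              # all non-VoIP traffic forced High
    return "Low"

print(select_queue(is_voip=False, is_broadcast=False,
                   is_multicast=False, sm_priority="High"))  # -> High
```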

Rate Limiting (or Maximum Information Rate – MIR)
Network deployments often require certain subscribers or segments of the network to operate at a reduced rate of communication. For example, many service providers offer tiered levels of service with varying levels of cost and bandwidth. To restrict bandwidth to subscribers, rate limiting or Maximum Information Rate (MIR) profiles may be configured at the user level. Once these profiles are configured in the Access Point and applied to a subscriber, the ePMP system enforces the data rate limits configured by the user.

ePMP rate limiting is conducted via a token bucket algorithm, which ensures that traffic rates conform to the limits defined in the device's MIR (Maximum Information Rate) Profiles configuration. MIR profiles may be configured to constrain downlink or uplink data rates.
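For illustration, a generic token bucket of the kind described above can be sketched in a few lines of Python. The refill granularity and burst size here are arbitrary example values, not parameters taken from the ePMP MIR implementation.

```python
import time

class TokenBucket:
    """Generic token bucket: credit accrues at `rate_bps` up to a burst cap."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the MIR: drop (or defer) the packet

# e.g. a hypothetical 10 Mbps MIR profile with a 64 kB burst allowance
mir = TokenBucket(rate_bps=10_000_000, burst_bytes=64_000)
print(mir.allow(1500))  # True while tokens remain
```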

[Figure: ePMP priority levels]

ePMP Priority Levels
The ePMP QoS prioritization system consists of three levels of priority: Voice, High, and Low. By default, all data is passed over the air interface at Low priority, allowing operators to choose which traffic types are most appropriate for prioritized delivery.


When VoIP Priority is set to Enabled, two entries are automatically added to the QoS Classification Rules table on the device. One entry is created with rule type CoS (5) and one entry is created with rule type DSCP (46). These CoS and DSCP values are typical default prioritization markings used by VoIP equipment and may be modified if required (to match the markings of the VoIP equipment in use). The addition of these rules ensures that voice traffic passed over the radio link is given the highest priority. Jitter is a key contributor to poor voice quality, and ePMP utilizes a special retransmission mechanism to keep jitter minimized.
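For reference, the two markings these auto-added rules match live in well-defined header fields: CoS (802.1p PCP) occupies the top three bits of the 802.1Q TCI field, and DSCP occupies the top six bits of the IPv4 ToS (or IPv6 Traffic Class) byte. The short sketch below extracts them; it illustrates the standard field layout, not ePMP's parsing code.

```python
# Where the auto-added VoIP rules find their markings (standard layouts).

def cos_from_tci(tci: int) -> int:
    return (tci >> 13) & 0x7   # 802.1p Priority Code Point (top 3 bits)

def dscp_from_tos(tos: int) -> int:
    return tos >> 2            # DSCP is the ToS byte minus the 2 ECN bits

# EF-marked VoIP: ToS byte 0xB8 -> DSCP 46; TCI with PCP 5 -> CoS 5
print(dscp_from_tos(0xB8))     # 46 (Expedited Forwarding)
print(cos_from_tci(5 << 13))   # 5
```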


Weighted Fair Queueing
Data servicing decisions among the queues are made by a weighted fair queueing (WFQ) algorithm, which applies an internal 50:30:20 weighting to the Voice, High, and Low queues respectively. This means that while certain packets are served with priority, high-priority, high-bandwidth traffic cannot consume the entire capacity of the link and cause delivery failure of lower-priority traffic. This contrasts with the strict priority schedulers found in other data delivery systems, which may serve higher-priority traffic first regardless of bandwidth usage, starving lower-priority traffic. The ePMP system employs weighted fair queueing to ensure reliable user data transmission while serving higher-priority traffic on the same wireless link.
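To make the weighting behaviour concrete, the sketch below implements a simple deficit-round-robin scheduler with 50:30:20 weights. It demonstrates how backlogged queues share the link in that proportion without starving the Low queue; it is an illustrative stand-in, not Cambium's proprietary scheduler, and the byte quanta are arbitrary.

```python
from collections import deque

# Minimal deficit-round-robin sketch of 50:30:20 weighted fair queueing.
WEIGHTS = {"Voice": 50, "High": 30, "Low": 20}
BYTES_PER_WEIGHT = 30    # credit (bytes) per weight unit per round

queues = {name: deque() for name in WEIGHTS}   # entries are packet sizes
deficit = {name: 0 for name in WEIGHTS}
sent = {name: 0 for name in WEIGHTS}

def schedule_round():
    """One round: each backlogged queue earns credit in 50:30:20 proportion."""
    for name, q in queues.items():
        if not q:
            deficit[name] = 0                  # idle queues bank no credit
            continue
        deficit[name] += WEIGHTS[name] * BYTES_PER_WEIGHT
        while q and q[0] <= deficit[name]:
            deficit[name] -= q[0]
            sent[name] += q.popleft()          # "transmit" the packet

# Saturate all three queues with 1500-byte packets and run 100 rounds:
for q in queues.values():
    q.extend([1500] * 200)
for _ in range(100):
    schedule_round()
print(sent)  # byte counts approach the 50:30:20 weighting
```

With all queues saturated, the transmitted byte counts converge on the 50:30:20 split, while an empty Voice queue would simply leave its share to the High and Low queues, exactly the non-starving behaviour described above.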

Relating Real-World Results to ePMP QoS Techniques
While ePMP queues are served by a 50:30:20 weighted fair queueing algorithm, external factors may result in real-world throughput results that do not directly correspond to this weighting ratio. These factors include but are not limited to:


Packet size – smaller packets, including inherently small voice packets, consume less bandwidth to send but require greater Packets-Per-Second (PPS) processing resources. The ePMP QoS scheduler serves Voice, High, and Low priority traffic at an internal 50:30:20 ratio (respectively) of total link capacity. In live systems, however, as small-packet Voice traffic increases on the wireless link, the voice traffic's throughput ceiling is typically set by PPS processing limits rather than by WFQ. This may result in an ultimate traffic weighting (actual radio link utilization after queueing) of (50-60):30:(10-20), because once PPS limits are reached, the rate at which packets are dropped depends on the rate at which packets arrive at the scheduler, not on the WFQ mechanism.

[Table 1: traffic profile – large-packet (1500-byte) High and Low priority streams only]
As a base case example of how packet size contributes to live system results, consider the traffic profile shown in Table 1. This profile consists only of large-packet traffic being scheduled at the device prior to transmission over the air, so it exhibits a direct correspondence between the internal weighting mechanisms and ultimate radio link utilization. In this case, the eventual ePMP link utilization will likely settle at a 60 (High):40 (Low) weighting, since all traffic consists of large (1500-byte) packets. With this profile, the system's weighted throughput capacity limits are enforced before PPS limitations are reached, because the large packet size means every stream demands large bandwidth resources.



[Table 2: traffic profile with small-packet (64-byte) Voice traffic added]

If Voice traffic is introduced into this profile (see Table 2), the small packet size (64 bytes) will not allow the Voice traffic to approach its weighted throughput capacity limit of 22.5 Mbps (50% of total link capacity) before the ePMP system reaches its PPS limit. In this case, actual radio link utilization will not correspond directly with the ePMP device's 50:30:20 weighting mechanism, because the small 64-byte Voice packets eventually lead the device to drop packets based on processing limits, not queueing decisions.

Packets-Per-Second – similar to the packet size discussion above, voice packets are small and demand greater PPS processing and CPU time. Once the system PPS limit is reached, packets are dropped because CPU utilization has reached 100%. The rate at which packets are dropped then depends on the rate at which packets arrive at the scheduler, which may result in an eventual (sent over the air) capacity weighting in the range (50-60):30:(10-20).
[Table 3: traffic profile with small-packet Voice and High priority streams]
For example, an ePMP link with the traffic profile shown in Table 3 should theoretically produce an eventual link weighting of 22.5 Mbps for Voice priority (50% weight), 13.5 Mbps for High priority (30% weight), and 9 Mbps for Low priority (20% weight). Since both the Voice and High priority streams consist of small packets, the system PPS limit will likely be reached and packets will be dropped, resulting in an eventual (sent over the air) capacity weighting in the range (50-60):30:(10-20). As mentioned above, once PPS limits are reached, the rate at which packets are dropped depends on the rate at which packets arrive at the scheduler; the 50:30:20 weighting has no bearing at that point. The quick calculation below illustrates why.
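This back-of-the-envelope arithmetic shows why the small-packet streams run into PPS ceilings first. The 45 Mbps link capacity is implied by the per-queue figures above; the computed PPS values are simple arithmetic, not measured ePMP limits.

```python
# Packets-per-second required for each queue to reach its WFQ byte-share,
# assuming the 45 Mbps capacity implied by the 22.5/13.5/9 Mbps split.
LINK_MBPS = 45.0
weights = {"Voice": 0.50, "High": 0.30, "Low": 0.20}
pkt_bytes = {"Voice": 64, "High": 64, "Low": 1500}

for name, w in weights.items():
    share_mbps = LINK_MBPS * w
    pps = share_mbps * 1e6 / (pkt_bytes[name] * 8)
    print(f"{name}: {share_mbps:.1f} Mbps -> {pps:,.0f} pps "
          f"at {pkt_bytes[name]}-byte packets")

# Voice: 22.5 Mbps -> ~43,945 pps at 64-byte packets
# High:  13.5 Mbps -> ~26,367 pps at 64-byte packets
# Low:    9.0 Mbps ->    ~750 pps at 1500-byte packets
```

The two small-packet queues together demand roughly 70,000 pps to hit their byte-shares, while the Low queue needs only about 750 pps, which is why the device's processing ceiling, not the WFQ ratio, becomes the binding constraint.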

In conclusion, it is important to consider packet size and processing limits when evaluating real-world ePMP system results. While the weighted fair queueing mechanism is based on a 50:30:20 (Voice:High:Low) ratio, other traffic attributes may cause actual link utilization to deviate from that ratio.


Other Considerations
ePMP prioritization may be only one of several QoS mechanisms present in a network's access links and management network routing equipment. Ensure that traffic priority levels are set consistently with the other prioritization and queueing mechanisms in the network.
