QoS Models
QoS models are the high-level architectures for delivering Quality of Service: Best-Effort, Integrated Services (IntServ), and Differentiated Services (DiffServ).
1. Understanding the Need for QoS Architectures
The internet, by its original design, operates on a principle of fairness and simplicity where every piece of data is treated equally. This foundational approach, known as the Best-Effort model, was perfectly adequate for the early internet's primary uses, such as email and file transfers. However, as network usage has exploded to include a diverse array of applications with wildly different performance needs, from high-definition video conferencing to online gaming and massive cloud backups, a one-size-fits-all approach is no longer sufficient.
A QoS architecture, or model, is a high-level strategy and framework for implementing Quality of Service across a network. It defines the philosophy and the core set of rules and protocols used to classify, prioritize, and manage network traffic to meet the specific demands of various applications. Over the years, three primary models have been developed, each with its own approach to solving the challenges of providing predictable network performance. These are the default Best-Effort model, the resource-intensive Integrated Services (IntServ) model, and the highly scalable Differentiated Services (DiffServ) model.
2. The Best-Effort Model: The Foundation
The Best-Effort model is the default state of IP networks. It provides a simple, fair, but ultimately unpredictable service.
How It Works
In a best-effort network, routers and switches use a simple first-in, first-out (FIFO) queueing mechanism. Packets are processed in the order they arrive, without any regard for their content, source, destination, or performance requirements. The network makes its "best effort" to deliver packets, but offers no promises.
- No Guarantees: There is no guarantee of bandwidth, no protection against packet loss, and no predictable level of latency or jitter.
- Fairness: All traffic streams compete on equal terms for network resources.
- Simplicity and Scalability: The lack of complex state management or per-flow handling makes the best-effort model extremely simple and highly scalable, which was a key factor in the rapid growth of the internet.
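The FIFO behavior described above can be sketched in a few lines. This is a minimal illustration, not router code; the `FifoQueue` class and the packet dictionaries are invented for the example.

```python
from collections import deque

# Minimal sketch of best-effort FIFO queueing: packets leave in arrival
# order, with no regard for what kind of traffic they carry.
class FifoQueue:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, packet):
        # No classification, no priority: every packet joins the tail.
        self.queue.append(packet)

    def dequeue(self):
        # First in, first out.
        return self.queue.popleft() if self.queue else None

q = FifoQueue()
q.enqueue({"app": "voip", "seq": 1})
q.enqueue({"app": "backup", "seq": 2})
q.enqueue({"app": "voip", "seq": 3})

# The second VoIP packet must wait behind the bulk-backup packet,
# which is exactly why best-effort hurts latency-sensitive traffic.
order = []
while (p := q.dequeue()) is not None:
    order.append(p["seq"])
# order is [1, 2, 3]
```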
Where It Falls Short
While suitable for applications that can tolerate delay and variability (like email or standard web browsing, which are protected by TCP's reliability mechanisms), the best-effort model fails to meet the needs of real-time, interactive applications. For Voice over IP (VoIP), a packet arriving too late is just as useless as a packet that never arrives at all. The unpredictability of the best-effort model makes it impossible to provide a consistent, high-quality experience for such time-sensitive traffic.
3. Integrated Services (IntServ): The Reservation Model
The Integrated Services model, often referred to as IntServ, was the first comprehensive attempt to move beyond the best-effort approach. Its core philosophy is to provide "hard QoS," meaning it offers explicit, end-to-end, quantifiable guarantees of service for individual application flows.
Core Concepts of IntServ
- Per-Flow State: The defining characteristic of IntServ is that it treats each application's data stream as an individual "flow." A flow could be a single VoIP call, a video conference session, or any other distinct stream of data. Routers in an IntServ network must be able to identify and manage these individual flows.
- Explicit Resource Reservation: Before an application can send data, it must first signal its requirements to the network and request a reservation of resources (bandwidth and buffer space) along the entire path from source to destination. This is analogous to booking a specific flight ticket, which guarantees you a seat, rather than just showing up at the airport hoping to get on a flight.
- Admission Control: Each router along the path performs admission control. When a reservation request arrives, the router checks if it has sufficient available resources to meet the new request without compromising the guarantees already made to existing flows. If it has the resources, it accepts the reservation; if not, the request is rejected, and the application is notified that the desired quality of service is unavailable.
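The admission-control decision at a single IntServ router can be sketched as a simple capacity check. The `RouterLink` class, flow names, and kbps figures here are illustrative assumptions, not part of any real RSVP implementation.

```python
# Sketch of per-flow admission control at one IntServ router interface.
class RouterLink:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reserved = {}  # per-flow state: flow_id -> reserved kbps

    def admit(self, flow_id, requested_kbps):
        available = self.capacity_kbps - sum(self.reserved.values())
        if requested_kbps <= available:
            # Accept: record per-flow state so the guarantee can be honored.
            self.reserved[flow_id] = requested_kbps
            return True
        # Reject: granting this flow would break existing guarantees.
        return False

link = RouterLink(capacity_kbps=1000)
ok1 = link.admit("voip-call-1", 100)   # accepted
ok2 = link.admit("video-conf-1", 800)  # accepted
ok3 = link.admit("video-conf-2", 800)  # rejected: only 100 kbps remain
```

The `reserved` dictionary is the per-flow state that must exist on every router along the path, which is precisely what makes IntServ hard to scale.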
The Role of RSVP (Resource Reservation Protocol)
The signaling mechanism used to make these reservations in an IntServ architecture is the Resource Reservation Protocol (RSVP). The process works as follows:
- The sending application sends an RSVP `PATH` message toward the destination. This message travels along the route determined by standard routing protocols and "marks the trail," allowing each router to learn the path back to the sender.
- When the receiving application gets the `PATH` message, it responds with an RSVP `RESV` (Reservation) message. This message travels back up the same path toward the sender.
- As the `RESV` message passes through each router on its way back, that router performs admission control. If the router can accommodate the resource request, it configures its packet scheduler to provide the guaranteed service for that flow and forwards the `RESV` message to the next router up the path.
- If all routers along the path accept the reservation, a guaranteed end-to-end connection is established, and the application can begin sending its data with a predictable level of service.
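The steps above can be condensed into a toy walk-through of the `RESV` leg, where each hop runs admission control in turn. This models only the signaling order and outcome, not real RSVP message formats; the function and data structures are assumptions for the sketch.

```python
# Toy model of the RSVP RESV leg: the reservation travels from the
# receiver back toward the sender, and every hop must accept it.
def reserve_path(routers, flow_id, kbps):
    # The PATH message has already "marked the trail"; here the route is
    # simply the list order. A real implementation would also tear down
    # partial reservations when a later hop rejects (omitted for brevity).
    for router in reversed(routers):  # receiver -> sender direction
        available = router["capacity"] - router["reserved"]
        if kbps > available:
            return False  # one hop refused: no end-to-end guarantee
        router["reserved"] += kbps
    return True  # every hop accepted: guaranteed service established

path = [{"capacity": 1000, "reserved": 0} for _ in range(3)]
ok = reserve_path(path, "voip-call-1", 200)    # True: all hops accept
fail = reserve_path(path, "bulk-video", 900)   # False: only 800 kbps left
```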
The Scalability Problem: Why IntServ Is Rarely Used
While IntServ provides the strongest possible QoS guarantees, it has one major, critical drawback: a profound lack of scalability. In a large network like the internet, core routers handle millions of data flows simultaneously. The requirement for every router to maintain state (keep a record of the reservation) and perform admission control for every single individual flow creates an enormous processing and memory burden. As the network grows, this overhead becomes unsustainable. For this reason, the IntServ model is almost never deployed in large public networks and is typically only found in smaller, controlled environments like private corporate networks for specialized applications.
4. Differentiated Services (DiffServ): The Scalable Model
The Differentiated Services model, or DiffServ, was developed to provide a more scalable and manageable approach to QoS, addressing the critical shortcomings of IntServ. Its philosophy is to provide "soft QoS," focusing on prioritization and differentiation rather than absolute guarantees. Instead of managing millions of individual flows, DiffServ groups traffic into a small, manageable number of classes.
Core Concepts of DiffServ
- Traffic Classification and Marking: The intelligence in a DiffServ model is pushed to the edge of the network. The first router that a traffic flow encounters (the edge router) is responsible for classifying it into a predefined class based on policy (e.g., based on source/destination IP, port number, or application). The router then "marks" the packets of this flow by setting a value in the IP header.
- The DSCP Field: This marking is done in the 6-bit Differentiated Services Code Point (DSCP) field in the IP header. The DSCP value tells all subsequent routers how this packet should be treated.
- Per-Hop Behaviors (PHBs): Routers in the core of the network do not need to perform complex classification or maintain per-flow state. They simply look at the DSCP marking of each packet and apply a corresponding Per-Hop Behavior (PHB). A PHB is a predefined set of forwarding actions.
Common Per-Hop Behaviors
Standard PHBs include:
- Default PHB (Best-Effort): This is the standard best-effort treatment, typically marked with DSCP 0.
- Expedited Forwarding (EF) PHB: Designed for low-loss, low-latency, low-jitter traffic like VoIP. Packets marked with the EF DSCP value (46) are typically placed in a strict priority queue, ensuring they are sent before any other traffic. This provides a "virtual leased line" experience.
- Assured Forwarding (AF) PHB: This PHB defines four service classes, and within each class, three different drop precedences (low, medium, high). This allows for more granular differentiation. For example, business-critical data can be given a higher AF class than general web traffic. Within that class, during times of congestion, packets with a higher drop precedence will be discarded first.
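The standard AF code points follow a simple pattern: for class x (1-4) and drop precedence y (1 = low, 3 = high), the DSCP of AFxy is 8x + 2y. A small helper makes the full table easy to generate; the function name is ours, the values are the standard ones.

```python
# Compute the standard Assured Forwarding DSCP values: DSCP(AFxy) = 8x + 2y.
def af_dscp(af_class, drop_precedence):
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF defines classes 1-4 and drop precedences 1-3")
    return 8 * af_class + 2 * drop_precedence

# Build the 4x3 table of AF code points, e.g. AF11 = 10, AF41 = 34.
table = {f"AF{c}{p}": af_dscp(c, p)
         for c in range(1, 5) for p in range(1, 4)}
```

Within one class, a higher y means the packet is discarded earlier under congestion, which is how the "drop first" behavior described above is encoded in the mark itself.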
The Scalability Advantage
The brilliance of DiffServ is its scalability. Core routers have a very simple job: check the DSCP mark and apply a simple PHB. All the complex work of classification and marking is handled only once at the network edge. This eliminates the per-flow state requirement in the core, allowing QoS to be effectively deployed across massive networks like the internet. DiffServ is the most widely implemented QoS model today.
5. Summary of QoS Models
| Characteristic | Best-Effort | Integrated Services (IntServ) | Differentiated Services (DiffServ) |
|---|---|---|---|
| Service Guarantee | None | Hard Guarantees (per flow) | Soft Guarantees (per class) |
| Primary Mechanism | First-In, First-Out (FIFO) Queueing | Resource Reservation (RSVP) | Classification and Prioritization (DSCP) |
| Scalability | Very High | Very Low | Very High |
| Core Router Complexity | Low | High (maintains per-flow state) | Low (implements simple PHBs) |
| Typical Use Case | Default Internet, non-critical data | Small, controlled private networks | Enterprise networks, ISP backbones |