IntServ

Resource reservation with RSVP for guaranteed service quality.

1. The Problem with the Best-Effort Internet

The traditional internet operates on a "best-effort" delivery model. This means that the network does its best to deliver your data packets, but it makes absolutely no promises. For many early applications like email or file transfer, this was sufficient. If some packets were delayed or lost, the Transmission Control Protocol (TCP) would simply retransmit them, and the user would barely notice, aside from a slightly longer download time.

However, with the rise of real-time applications like Voice over IP (VoIP), live video streaming, and online gaming, the best-effort model proved inadequate. In these applications, timing is everything. A packet of audio for a phone call that arrives 500 milliseconds late is completely useless; the moment in the conversation has already passed. The network's unpredictability, with its variable delays (jitter) and potential for packet loss during congestion, made it impossible to provide a consistent, high-quality experience for these time-sensitive services. A new architecture was needed that could provide explicit, predictable guarantees about network performance.

2. Introducing Integrated Services (IntServ): The Reservation Philosophy

The Integrated Services (IntServ) model was developed by the IETF as a comprehensive solution to this problem. The fundamental philosophy behind IntServ is to move away from a hope-for-the-best approach and toward a model of explicit resource reservation.

The core idea of IntServ is that before an application sends any data, it must first signal its needs to the network and request a reservation for the resources required to meet those needs. This reservation must be made at every single router along the entire path from the sender to the receiver. If every router on the path agrees that it has sufficient resources (bandwidth and buffer space) and accepts the reservation, the application receives an end-to-end guarantee for its data. If even one router on the path cannot provide the requested resources, the reservation fails, and the application is notified that it cannot get the desired quality of service. This approach provides "hard QoS," meaning the guarantees are explicit and quantifiable.

3. The Core Mechanics of IntServ

The IntServ architecture is built on three crucial concepts that work in concert: the flow, resource reservation, and admission control.

A. Data Flows: Identifying the Conversation

IntServ does not manage traffic in a coarse-grained way. Instead, it operates on the concept of a flow. A flow is a specific, distinguishable stream of data from a single application. For example, a VoIP call between two users is one flow, a video stream from a server to a user is another flow, and a file transfer is a third flow. Routers in an IntServ network must be able to identify packets belonging to specific flows to apply the correct reserved service.
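As a concrete sketch, a flow is commonly identified by the 5-tuple of source address, destination address, transport protocol, and the two port numbers. The snippet below shows how a router-side classifier might map packets to per-flow reservations; the addresses, ports, and dictionary layout are all hypothetical, chosen for illustration only.

```python
# Hypothetical flow classifier: match each packet to its reserved flow
# by the classic 5-tuple key (src IP, dst IP, protocol, src port, dst port).
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip proto src_port dst_port")

def classify(packet, reservations):
    """Return the reservation for this packet's flow, or None (best effort)."""
    key = FlowKey(packet["src_ip"], packet["dst_ip"], packet["proto"],
                  packet["src_port"], packet["dst_port"])
    return reservations.get(key)

# A VoIP call and a video stream are two distinct flows with
# independent reservations:
reservations = {
    FlowKey("10.0.0.1", "10.0.0.2", "UDP", 5060, 5060): {"rate_mbps": 0.1},
    FlowKey("10.0.0.3", "10.0.0.4", "UDP", 554, 9000): {"rate_mbps": 5.0},
}
pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "proto": "UDP",
       "src_port": 5060, "dst_port": 5060}
print(classify(pkt, reservations))  # the VoIP flow's reservation
```

A packet whose 5-tuple matches no reservation simply falls through to best-effort handling, which is exactly how IntServ coexists with ordinary traffic.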

B. Resource Reservation: Requesting the Guarantee

An application initiates a reservation by specifying its traffic characteristics and the level of service it requires. This is done through a signaling protocol. The network then reserves two primary resources along the determined path:

  • Bandwidth: A certain portion of the link's capacity is set aside for the flow, ensuring it can transmit data at its required rate even when the link is busy.
  • Buffer Space: A portion of the router's memory is reserved for the flow's packets. This ensures that if small bursts occur, the packets can be queued without being dropped, which is essential for providing delay guarantees.
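The two reserved quantities above can be pictured as a simple per-flow record. The `Reservation` class and its field names below are illustrative, not taken from any real router implementation:

```python
# Hypothetical per-flow reservation record a router might keep after
# accepting a flow: a bandwidth share plus buffer space for bursts.
from dataclasses import dataclass

@dataclass
class Reservation:
    rate_bps: int      # reserved share of the outgoing link's capacity
    buffer_bytes: int  # queue memory set aside for this flow's bursts

    def burst_fits(self, burst_bytes: int) -> bool:
        # Packets from a short burst are queued rather than dropped only
        # while they fit in the flow's reserved buffer.
        return burst_bytes <= self.buffer_bytes

voip = Reservation(rate_bps=100_000, buffer_bytes=4_000)
print(voip.burst_fits(1_500))   # True: the burst is queued, not dropped
```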

C. Admission Control: The Network's Gatekeeper

The linchpin of the IntServ model is admission control. When a request for a new reservation arrives at a router, that router must perform a calculation. It checks its current resource commitments against its total available resources. It must answer the question: "If I accept this new flow, will I still be able to honor all the promises I have already made to the existing flows?" If the answer is yes, it accepts the reservation. If the answer is no, it rejects the request. This process acts as a gatekeeper, preventing the network from becoming overcommitted and ensuring that a guarantee, once given, is always honored.
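A minimal sketch of that admission test, assuming for simplicity that bandwidth is the only resource being tracked (a real router would also account for buffer space and delay budgets):

```python
# Simplified admission-control check: "can I take this flow and still
# honor every commitment I have already made?"
def admit(link_capacity_bps, existing_rates_bps, requested_bps):
    committed = sum(existing_rates_bps)
    return committed + requested_bps <= link_capacity_bps

# A 100 Mbps link with 60 + 30 Mbps already reserved:
existing = [60_000_000, 30_000_000]
print(admit(100_000_000, existing, 5_000_000))   # True: the flow fits
print(admit(100_000_000, existing, 20_000_000))  # False: would overcommit
```

Note that the check is against *reserved* capacity, not current utilization: even if the existing flows are momentarily idle, their guarantees must still be honored.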

4. RSVP: The Signaling Protocol for IntServ

The mechanism that applications use to communicate their needs to the network and establish reservations is the Resource Reservation Protocol (RSVP). RSVP is a signaling protocol, meaning it does not carry the application data itself; it only carries the messages needed to set up the path for that data.

A Two-Step Reservation Process

RSVP establishes reservations using a two-step process involving `PATH` and `RESV` messages.

  1. Step 1: The `PATH` Message (Sender to Receiver)

    The process begins with the data sender (e.g., a video server). The sender's application sends an RSVP `PATH` message destined for the receiver's IP address. This `PATH` message travels through the network, following the same route that the actual data packets will take. As it passes through each IntServ-aware router, the router inspects the message and creates a "path state." This state entry records the previous hop, essentially creating a trail of breadcrumbs that marks the path back to the sender. The `PATH` message does not make any reservations; it only establishes the reverse path for the reservation request.

  2. Step 2: The `RESV` Message (Receiver to Sender)

    When the `PATH` message arrives at the data receiver (e.g., the video client), the receiving application now knows the path to the sender. The receiver is responsible for initiating the actual reservation. It creates an RSVP `RESV` (Reservation) message, specifying the desired QoS (e.g., "I need 5 Mbps of bandwidth"). This `RESV` message is then sent back towards the sender, following the reverse path established by the `PATH` message.

    As the `RESV` message travels hop-by-hop back to the sender, each router on the path intercepts it. At each router, the following happens:

    • The router's admission control module examines the reservation request.
    • It checks its available bandwidth and buffer resources to see if it can satisfy the request.
    • If it can, it allocates the necessary resources, updates its internal state to reflect this new commitment, and forwards the `RESV` message to the next router upstream.
    • If it cannot satisfy the request, it rejects it and sends a reservation error message back to the receiver.

    If the `RESV` message successfully reaches the original sender, it means that every router along the path has accepted and established the reservation. A guaranteed end-to-end channel now exists, and the sender can begin transmitting its data with the assurance that the network will provide the requested Quality of Service.
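The two-step handshake described above can be walked through in a toy simulation. The router records and the two functions below are illustrative stand-ins, not real RSVP wire formats or state machines:

```python
# Toy walk-through of the PATH/RESV handshake over a fixed router path.

def path_message(routers):
    """PATH: sender -> receiver; each router records its previous hop."""
    path_state = {}
    prev_hop = "sender"
    for r in routers:
        path_state[r["name"]] = prev_hop   # breadcrumb back toward the sender
        prev_hop = r["name"]
    return path_state

def resv_message(routers, path_state, requested_bps):
    """RESV: receiver -> sender along the reverse path; admit at each hop."""
    for r in reversed(routers):
        free = r["capacity_bps"] - r["reserved_bps"]
        if requested_bps > free:
            # A real router would send a ResvErr back to the receiver and
            # tear down any partial reservations made downstream.
            return False, r["name"]
        r["reserved_bps"] += requested_bps  # commit resources at this hop
    return True, None

routers = [
    {"name": "R1", "capacity_bps": 100e6, "reserved_bps": 40e6},
    {"name": "R2", "capacity_bps": 50e6,  "reserved_bps": 10e6},
]
state = path_message(routers)         # {"R1": "sender", "R2": "R1"}
ok, failed_at = resv_message(routers, state, 5e6)
print(ok)  # True: every hop accepted, the end-to-end channel exists
```

Note how the reservation succeeds only if *every* hop admits the flow, and how the `PATH` state is what lets the `RESV` message retrace the data path in reverse.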

This receiver-oriented approach is a key design feature of RSVP. It naturally supports multicast scenarios, where one sender transmits to multiple receivers. Each receiver can make its own independent reservation request tailored to its specific needs and link capacity.

5. IntServ Service Classes

IntServ defines two primary classes of service that an application can request.

  • Guaranteed Service

    This is the highest level of service in IntServ. It provides a firm, mathematically provable, deterministic (non-statistical) upper bound on the end-to-end packet delay. An application requesting this service specifies its traffic characteristics using a token bucket model. By making a guaranteed service reservation, an application is assured that its packets will never be dropped due to queue overflows (as long as it stays within its traffic profile) and will arrive within the calculated delay bound. This service is ideal for highly intolerant real-time applications that require strict timing guarantees.

  • Controlled-Load Service

    This service provides a less strict but still highly reliable level of service. It does not provide a strict numerical bound on delay, but it guarantees performance that is "closely equivalent to the performance that the same flow would receive from an uncongested best-effort network." In essence, it aims to make the application feel as if it is running on a lightly loaded network, even when the network is busy. It is suitable for a broader range of adaptive real-time applications that can tolerate some minor variations in delay but still require better performance than the standard best-effort model.
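The token bucket mentioned under Guaranteed Service can be sketched as follows. This is a deliberately simplified fluid model: a flow declares a bucket depth `b` (bytes) and token rate `r` (bytes/s), and when served at a reserved rate `R >= r` its worst-case queueing delay is at most `b / R`. The real Guaranteed Service specification adds per-hop error terms that this sketch omits.

```python
# Simplified token-bucket conformance check and fluid-model delay bound
# (illustrative only; real Guaranteed Service includes error terms).
def conforms(arrivals, b, r):
    """Check a list of (time_s, size_bytes) arrivals against bucket (b, r)."""
    tokens, last_t = b, 0.0            # bucket starts full
    for t, size in arrivals:
        tokens = min(b, tokens + (t - last_t) * r)  # refill, capped at depth
        last_t = t
        if size > tokens:
            return False   # burst exceeds the declared traffic profile
        tokens -= size
    return True

def delay_bound_s(b, R):
    # Worst case: a full bucket of b bytes drains at the reserved rate R.
    return b / R

print(conforms([(0.0, 1000), (0.1, 500)], b=2000, r=10_000))  # True
print(delay_bound_s(b=2000, R=100_000))  # 0.02 s
```

As long as the sender stays within its declared `(b, r)` profile, the routers' reserved buffers can absorb its bursts, which is why packets are never dropped and the delay bound holds.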

6. The Scalability Problem: The Downfall of IntServ

While the concept of providing hard, per-flow guarantees is powerful, the IntServ model suffers from a fatal flaw that has prevented its widespread adoption: a profound lack of scalability.

  • Per-Flow State Maintenance: The biggest issue is the amount of state that routers must maintain. A core router in a large ISP network might handle millions of concurrent flows. In the IntServ model, that router would need to store a separate reservation state entry for every single one of those flows. This "state explosion" would require enormous amounts of memory and processing power, making it prohibitively expensive and complex to implement in the network core.
  • Signaling Overhead: The RSVP messages themselves consume network bandwidth and router CPU cycles. In a large network, the continuous stream of `PATH` and `RESV` messages for setup, teardown, and refreshing of states would create significant overhead.
  • Complexity: The overall complexity of implementing and managing IntServ and RSVP across a multi-vendor, multi-provider network like the internet is extremely high.
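A back-of-envelope calculation makes the first two points concrete. Every figure here is an assumed round number for illustration, not a measurement, except the 30-second default soft-state refresh period, which comes from the RSVP specification:

```python
# Back-of-envelope illustration of per-flow state and refresh overhead
# at a hypothetical core router (all figures are assumptions).
flows = 1_000_000            # concurrent flows traversing the router
state_bytes_per_flow = 64    # flow key + reservation parameters + timers
refresh_interval_s = 30      # RSVP's default soft-state refresh period

memory_mb = flows * state_bytes_per_flow / 1e6
refresh_msgs_per_s = flows / refresh_interval_s
# Roughly 64 MB of reservation state and ~33,000 refresh messages
# per second -- per router, just to keep existing reservations alive.
print(memory_mb, refresh_msgs_per_s)
```

The memory figure alone is tractable for modern hardware; the real cost is that every one of those entries must be matched against every forwarded packet and refreshed continuously, which is what makes per-flow state untenable at core speeds.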

Because of these scalability issues, IntServ is almost never used on the public internet. Its use is confined to smaller, controlled private networks (e.g., dedicated corporate video conferencing networks, industrial control systems) where the number of flows is manageable and the need for hard guarantees outweighs the complexity. Despite its limited deployment, the concepts pioneered by IntServ laid the crucial groundwork for more scalable QoS models like DiffServ and traffic engineering protocols like MPLS-TE.