Link Aggregation

Bundling multiple physical links for increased bandwidth and redundancy.

The Bandwidth and Reliability Bottleneck

As networks grow in speed and complexity, certain connections become critical bottlenecks. Imagine a busy corporate network: multiple user PCs are connected to an access switch, and that single access switch is connected to a core distribution switch via one uplink cable. This single link carries all the combined traffic from dozens or hundreds of users to the rest of the network. This creates two significant problems:

  • Bandwidth Limitation: If you have 24 users on an access switch, all with 1 Gbps connections, but only a single 1 Gbps uplink to the core switch, that uplink quickly becomes a bottleneck. The maximum combined traffic can't exceed the 1 Gbps of that single link, slowing down the entire network. The straightforward solution, upgrading the link to 10 Gbps, can be very expensive, requiring new hardware and potentially new cabling.
  • Single Point of Failure: If that one critical uplink cable fails, or the port it's connected to breaks, the entire access switch and all its connected users are completely cut off from the network. This lack of redundancy is a major liability.

Adding a second, parallel physical link between the switches might seem like an obvious solution. However, due to the Spanning Tree Protocol (STP), which prevents network loops, only one of these links would be active at any given time, while the other would be put into a blocking state. This provides redundancy but does nothing to increase bandwidth. A more sophisticated solution is needed to use both links simultaneously.

What is Link Aggregation? Bundling Physical Links

Link Aggregation is a networking technique that combines or "bundles" multiple physical network connections into a single logical link. This provides a high-throughput, fault-tolerant connection between two devices, typically switches, servers, or routers.

Instead of treating two 1 Gbps links as separate paths (where STP would block one), a switch configured for link aggregation treats them as a single, combined link with a total bandwidth of 2 Gbps. The technology is known by many names:

  • EtherChannel: Cisco's proprietary term for link aggregation.
  • Port Trunking or Port Teaming: Common industry terms.
  • Link Aggregation Group (LAG): A term used in the IEEE standard.
  • IEEE 802.3ad / 802.1AX: The official IEEE standards that define this technology (originally published as 802.3ad, later moved to and maintained as 802.1AX).

Key Benefits of Link Aggregation

Implementing link aggregation provides two immediate and powerful advantages:

  1. Increased Bandwidth (High Throughput):

    This is the most obvious benefit. By bundling physical links, you aggregate their capacities. A LAG consisting of two 1 Gbps links provides a logical link with a total theoretical bandwidth of 2 Gbps. Four 10 Gbps links create a 40 Gbps logical link. This allows administrators to incrementally increase bandwidth between critical points in the network in a very cost-effective way, using existing ports and without needing to upgrade to the next, much more expensive, hardware speed tier.

  2. High Availability (Redundancy):

    The logical link remains operational as long as at least one of its physical links is active. If one of the physical cables in the bundle is cut or a port fails, the traffic will be automatically and transparently redistributed across the remaining active links in the LAG. This provides seamless fault tolerance. The failover is typically much faster than the time it would take for Spanning Tree Protocol to reconverge, ensuring minimal disruption to network services.

How Link Aggregation Distributes Traffic

A common misconception is that a LAG allows a single large file transfer to use the full combined speed (e.g., to achieve a 2 Gbps speed for one file over a 2x1 Gbps LAG). This is generally not the case.

A single "conversation" or flow between two devices is typically sent over only one of the physical links within the LAG. The switch needs a consistent, deterministic way to decide which link to use for each frame to ensure that frames belonging to the same conversation arrive in the correct order. This is achieved through a load-balancing algorithm.

The algorithm works by taking certain fields from the frame header, running them through a mathematical hash function, and using the resulting hash value to select a physical link from the LAG; a minimal sketch of this selection logic appears after the list. Common fields used for this calculation include:

  • Source MAC Address: All traffic from one computer will use the same physical link.
  • Destination MAC Address: All traffic to one computer will use the same physical link.
  • Source and Destination MAC Addresses: (Most common default) A unique conversation between two MAC addresses will use one link.
  • Source and Destination IP Addresses: Balances traffic at Layer 3, useful for routing between different networks.
  • Source and Destination TCP/UDP Port Numbers: (Most granular) This allows different applications (e.g., web browsing vs. email) between the same two computers to potentially use different physical links, providing the best traffic distribution.
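
To make the idea concrete, here is a minimal Python sketch of per-flow link selection, assuming a source-and-destination MAC policy. The function name select_link and the use of MD5 are purely illustrative; real switch ASICs use their own, vendor-specific hash functions.

```python
import hashlib

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Choose a physical link index for a frame from its MAC address pair.

    Real switch ASICs use vendor-specific hash functions; MD5 is used
    here only to get a deterministic, well-spread value for the sketch.
    """
    key = (src_mac + dst_mac).lower().encode()
    hash_value = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return hash_value % num_links   # map the hash onto one of the member links

# Frames of the same conversation always pick the same link (order preserved):
print(select_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 2))
print(select_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 2))   # identical
# A different conversation may hash to the other link:
print(select_link("aa:bb:cc:00:00:03", "aa:bb:cc:00:00:02", 2))
```

Because the inputs to the hash are the same for every frame of a conversation, the result is the same every time, which is exactly what keeps frames of one flow in order.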

Per-Flow, Not Per-Packet

The key takeaway is that link aggregation performs per-flow (or per-conversation) load balancing, not per-packet balancing. While a single file transfer won't exceed the speed of one link, a LAG carrying traffic from hundreds of different user conversations will distribute those conversations effectively across all available physical links, fully utilizing the aggregated bandwidth.
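
As a small illustration of this behaviour, the simulation below (repeating the illustrative select_link hash from the previous sketch so it runs on its own) pins a single conversation to one link while spreading 200 distinct conversations across both members of a 2 x 1 Gbps LAG. The MAC addresses are made up for the example.

```python
from collections import Counter
from hashlib import md5

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    # Same illustrative hash as in the previous sketch.
    key = (src_mac + dst_mac).lower().encode()
    return int.from_bytes(md5(key).digest()[:4], "big") % num_links

# 200 user conversations heading to the same gateway MAC over a 2-link LAG.
per_link = Counter()
for i in range(200):
    src = f"02:00:00:00:{i // 256:02x}:{i % 256:02x}"   # made-up client MACs
    per_link[select_link(src, "aa:bb:cc:dd:ee:ff", 2)] += 1
print(per_link)   # the conversations split across both links, roughly evenly

# A single conversation, by contrast, always hashes to the same link,
# so one file transfer never exceeds the speed of one physical member.
print({select_link("02:00:00:00:00:01", "aa:bb:cc:dd:ee:ff", 2) for _ in range(10)})
```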

Implementation Requirements and Protocols

For link aggregation to work correctly, there are several strict requirements for the physical links being bundled (a small consistency-check sketch follows this list):

  • Same Speed: All ports in the LAG must be configured to operate at the same speed (e.g., you cannot mix a 100 Mbps port with a 1 Gbps port).
  • Same Duplex Mode: All ports must be in the same duplex mode (typically full-duplex).
  • Point-to-Point: The links must be point-to-point connections between the same two devices.
  • Consistent Configuration: Ports on both ends must have consistent configurations (e.g., the same access VLAN, or matching trunk settings if the bundled link carries multiple VLANs).
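
The sketch below illustrates the kind of consistency check these requirements imply, using a hypothetical Port record with made-up attribute names; it is not any vendor's actual validation logic.

```python
from dataclasses import dataclass

@dataclass
class Port:
    name: str
    speed_mbps: int      # e.g. 1000 for a 1 Gbps port
    duplex: str          # "full" or "half"
    vlan_config: str     # e.g. "trunk:10,20,30" or "access:10"

def can_form_lag(ports: list[Port]) -> bool:
    """Return True only if all candidate ports share the attributes
    that link aggregation requires to be identical."""
    first = ports[0]
    return all(
        p.speed_mbps == first.speed_mbps
        and p.duplex == first.duplex
        and p.vlan_config == first.vlan_config
        for p in ports
    )

ports = [
    Port("Gi0/1", 1000, "full", "trunk:10,20,30"),
    Port("Gi0/2", 1000, "full", "trunk:10,20,30"),
]
print(can_form_lag(ports))   # True: speed, duplex and VLANs all match

ports.append(Port("Fa0/3", 100, "full", "trunk:10,20,30"))
print(can_form_lag(ports))   # False: a 100 Mbps port cannot join the bundle
```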

To form a LAG, the switches on both ends must be configured. This can be done in two ways:

  1. Static Configuration (On): The administrator manually configures the group of ports on both switches to form an EtherChannel. This is simple but provides no mechanism to detect cabling errors or misconfigurations between the switches.
  2. Dynamic Negotiation Protocols: It is highly recommended to use a dynamic negotiation protocol. These protocols allow the switches to communicate with each other to automatically form the LAG, verify that the configuration on both sides is compatible, and dynamically add or remove links from the bundle if conditions change. The two main protocols are listed below, followed by a sketch of their mode-compatibility rules:
    • PAgP (Port Aggregation Protocol): A Cisco-proprietary protocol.
    • LACP (Link Aggregation Control Protocol): An IEEE standard (802.3ad), which is vendor-neutral and the most commonly used method today.
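
The sketch below summarizes the commonly documented mode-compatibility rules for these options (static "on", LACP active/passive, PAgP desirable/auto). The helper lag_forms is hypothetical and only models whether a bundle would come up; exact behaviour can vary by platform.

```python
def lag_forms(mode_a: str, mode_b: str) -> bool:
    """Will a bundle come up given the channel mode on each side?

    Commonly documented behaviour:
      - "on"        : static, no negotiation; works only with "on".
      - LACP        : "active" initiates, "passive" only responds.
      - PAgP (Cisco): "desirable" initiates, "auto" only responds.
    """
    if "on" in (mode_a, mode_b):
        return mode_a == mode_b == "on"

    lacp = {"active", "passive"}
    pagp = {"desirable", "auto"}

    if {mode_a, mode_b} <= lacp:
        return "active" in (mode_a, mode_b)      # at least one side must initiate
    if {mode_a, mode_b} <= pagp:
        return "desirable" in (mode_a, mode_b)
    return False                                  # mixed protocols never bundle

for pair in [("active", "passive"), ("passive", "passive"),
             ("desirable", "auto"), ("auto", "auto"), ("on", "on")]:
    print(pair, "->", lag_forms(*pair))
```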