Memory Elements

Basic memory components in digital systems: latches, SRAM, DRAM, and ROM types.

The Role of Memory in Network Devices

At the heart of every modern network switch or router lies a sophisticated system of memory. Memory elements are not just for storing data; they are critical to the entire process of packet and cell switching. Their primary role is to temporarily store data units (packets or cells) to handle contention: situations where multiple data streams compete for the same output link. This temporary storage is known as buffering.

Network memory can be broadly categorized into two types: traditional electronic memory (like RAM), and specialized optical memory, which addresses the unique challenges of all-optical networking.

Electronic Memory in a Modern Router

A contemporary switching node, such as an IP router, utilizes several types of electronic memory, each with a distinct function, to manage its operations.

Router Memory Architecture

During startup, ROM, Flash, and RAM cooperate in a fixed sequence:

1. The boot ROM starts the initialization sequence.
2. The OS image and the startup configuration are loaded into RAM.
3. Interfaces are configured and the links come up.

Once the router is running, the control processor executes control-plane logic and forwarding decisions.

  • RAM: This is the router's active workspace. It stores critical, temporary information such as the current running configuration, routing tables (maps of the network), and, most importantly, packet buffers. When packets arrive faster than they can be sent out, they are queued in RAM.
  • ROM: ROM holds the initial boot-up software. It contains essential instructions to start the router and find the main operating system, similar to the BIOS in a personal computer.
  • Flash: This is the permanent storage for the router's operating system and saved configuration files. Unlike RAM, Flash is non-volatile, meaning it retains its data even when the power is turned off.
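The division of labor above can be captured in a toy Python model (all names and contents are illustrative, not any real router OS). The key behavior: RAM is rebuilt from flash on every boot, so unsaved changes to the running configuration do not survive a power cycle.

```python
class Router:
    """Toy model of router memory volatility. Names are made up for illustration."""

    def __init__(self):
        self.rom = "bootstrap"        # fixed boot code, read-only
        self.flash = {"os_image": "ios.bin",          # non-volatile storage
                      "startup_config": "hostname R1"}
        self.ram = {}                 # volatile workspace

    def boot(self):
        # ROM locates the OS in flash; OS and config then run from RAM
        self.ram["running_os"] = self.flash["os_image"]
        self.ram["running_config"] = self.flash["startup_config"]

    def power_cycle(self):
        self.ram = {}                 # RAM is volatile: workspace is lost
        self.boot()                   # ROM and flash survive, so reboot works

r = Router()
r.boot()
r.ram["running_config"] += "\ninterface g0/0"   # change made but never saved to flash
r.power_cycle()
print(r.ram["running_config"])  # back to "hostname R1": the unsaved change was lost
```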

Buffering Strategies in Electronic Switches

In ATM and high-speed packet switching, the way cells or packets are buffered significantly impacts performance. Buffering is essential to resolve contention, where multiple inputs want to send data to the same output simultaneously. The location and logic of the buffer define the switch's architecture.

1. Input Buffering and HOL Blocking

In this model, each input port has its own buffer to store incoming packets. An arbiter decides which input gets to transmit to an output in each time slot. While simple, this architecture suffers from a major drawback: head-of-line (HOL) blocking, where a packet waiting at the front of a queue for a busy output holds up the packets behind it, even when their outputs are free. To solve this, advanced switches use Virtual Output Queues (VOQ), where each input port maintains a separate queue for each output port, eliminating HOL blocking.
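A minimal Python sketch (a toy one-pass arbiter, not a real scheduling algorithm such as iSLIP) shows why a plain FIFO per input wastes time slots while VOQs do not:

```python
from collections import deque

def drain_fifo(arrivals):
    """Slots to drain all packets with one FIFO per input (HOL blocking possible).

    arrivals: one list per input port; each entry is the packet's output port.
    """
    queues = [deque(a) for a in arrivals]
    slots = 0
    while any(queues):
        taken = set()                       # outputs already matched this slot
        for q in queues:
            if q and q[0] not in taken:     # only the head packet may compete
                taken.add(q.popleft())
        slots += 1
    return slots

def drain_voq(arrivals):
    """Slots to drain with Virtual Output Queues (no HOL blocking)."""
    # voq[i][o] = number of packets at input i destined to output o
    voq = [{o: a.count(o) for o in set(a)} for a in arrivals]
    slots = 0
    while any(any(c > 0 for c in v.values()) for v in voq):
        taken = set()
        for v in voq:
            # each input may send any queued packet whose output is still free
            for o, c in v.items():
                if c > 0 and o not in taken:
                    v[o] -= 1
                    taken.add(o)
                    break
        slots += 1
    return slots

# Both heads want output 0; the packets behind them want the free output 1.
arrivals = [[0, 1], [0, 1]]
print(drain_fifo(arrivals))  # 3 slots: one head is blocked, stalling the packet behind it
print(drain_voq(arrivals))   # 2 slots: VOQ lets output 1 be used immediately
```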


2. Output Buffering

Here, packets are immediately passed through the switching fabric to a buffer located at each output port. This method completely avoids HOL blocking. However, it requires the switching fabric and the output buffer memory to operate N times faster than the input line rate (where N is the number of ports), as up to N packets could arrive for the same output simultaneously. This makes it technologically demanding and expensive for large switches.
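The speedup requirement reduces to a one-line calculation; the port count and line rate below are illustrative, not tied to any particular device:

```python
def output_buffer_speedup(n_ports, line_rate_gbps, arrivals_to_one_output):
    """Internal speedup an output buffer needs to absorb simultaneous arrivals.

    In one time slot, up to n_ports packets can target the same output, so
    the fabric and output memory must complete that many writes while the
    egress line drains only one packet.
    Returns (speedup factor, required internal rate in Gb/s).
    """
    k = min(arrivals_to_one_output, n_ports)   # at most one packet per input
    return k, k * line_rate_gbps

print(output_buffer_speedup(8, 100, 4))  # (4, 400): 4x speedup this slot
print(output_buffer_speedup(8, 100, 8))  # (8, 800): worst case, all inputs collide
```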

When k packets target the same output in a single time slot, the internal transfer must sustain k writes in the time the egress line drains one packet; in the worst case, k equals the port count N.

3. Shared Memory Buffering

This is a highly efficient architecture where a single, central memory buffer is shared by all input and output ports. Incoming packets are written into this shared memory, and a control unit manages pointers to create logical queues for each output port. The memory is dynamically allocated, leading to optimal use of buffer space. It also requires a memory speed of N times the line rate for reads and writes.
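The pointer mechanism can be sketched as follows, assuming fixed-size packet slots and one logical queue per output (the class and method names are made up for illustration): packets live in one shared store, and the per-output queues hold only addresses into it, so free space is pooled across all ports.

```python
from collections import deque

class SharedMemorySwitch:
    """Toy shared-memory switch: one packet store, per-output pointer queues."""

    def __init__(self, capacity, n_outputs):
        self.store = {}                                    # addr -> packet
        self.free = deque(range(capacity))                 # free-address list
        self.queues = [deque() for _ in range(n_outputs)]  # pointers only

    def write(self, packet, output):
        """Accept a packet for the given output; drop it if memory is full."""
        if not self.free:
            return False                     # shared buffer exhausted: drop
        addr = self.free.popleft()
        self.store[addr] = packet
        self.queues[output].append(addr)     # logical queue holds the pointer
        return True

    def read(self, output):
        """Dequeue the next packet for an output, recycling its address."""
        if not self.queues[output]:
            return None
        addr = self.queues[output].popleft()
        self.free.append(addr)               # slot returns to the shared pool
        return self.store.pop(addr)

sw = SharedMemorySwitch(capacity=4, n_outputs=2)
for pkt, out in [("a", 0), ("b", 0), ("c", 1)]:
    sw.write(pkt, out)
print(sw.read(0), sw.read(1))  # a c
```

Because any output's queue can grow into the whole pool, a bursty output can borrow buffer space that a fixed per-port partition would have wasted.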

The Challenge and Solution of Optical Buffering

All-optical packet switching promises enormous speeds by keeping data in the form of light, avoiding slow electronic conversions. However, it faces a fundamental problem: light cannot be easily "stopped" and stored. There is no cheap and common optical equivalent of electronic RAM.

The most common solution is the Fiber Delay Line (FDL). Instead of storing a packet, it is sent on a detour through a precisely measured loop of fiber. This forces the packet to travel a longer distance, causing it to arrive at its destination at a later time, effectively "buffering" it.
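The delay of an FDL follows directly from the propagation speed of light in fiber, t = L·n/c. The sketch below assumes a group refractive index of about 1.468 for standard single-mode fiber (a typical value, not taken from the text above):

```python
C = 299_792_458    # speed of light in vacuum, m/s
N_FIBER = 1.468    # assumed group index of standard single-mode silica fiber

def fdl_delay_ns(length_m):
    """Delay in nanoseconds produced by a fiber loop of the given length."""
    return length_m * N_FIBER / C * 1e9

def fdl_length_m(delay_ns):
    """Fiber length in meters needed to realize a given delay."""
    return delay_ns * 1e-9 * C / N_FIBER

print(round(fdl_delay_ns(1000), 1))   # ~4896.7 ns: roughly 4.9 us per km of fiber
print(round(fdl_length_m(1000), 1))   # ~204.2 m of fiber per microsecond of delay
```

The length scale is the core practical problem: buffering even a millisecond of traffic requires on the order of 200 km of fiber.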

Optical Buffer Architectures

  • Traveling Type Buffers: These use a set of parallel FDLs, each with a different length corresponding to a different delay time (e.g., T, 2T, 3T, ...). A packet requiring a certain delay is switched into the appropriate fiber loop.
  • Recirculating Type Buffers: These use a single fiber loop. A packet can be made to circulate through the loop multiple times to achieve longer delays. This requires optical gates and switches to control the number of recirculations.
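The two selection rules can be contrasted in a few lines of Python. This is a simplified model: delays are integer multiples of the slot time T, and signal degradation per recirculation is ignored.

```python
def traveling_delay(requested_slots, fdl_lengths):
    """Traveling buffer: pick the one parallel FDL whose length matches.

    fdl_lengths is the set of available delays in units of T.
    Returns the chosen delay, or None if no line matches (packet dropped).
    """
    return requested_slots if requested_slots in fdl_lengths else None

def recirculating_delay(requested_slots, loop_slots, max_loops):
    """Recirculating buffer: circulate a whole number of times through one loop.

    Returns the number of recirculations, or None if the requested delay is
    not a multiple of the loop delay or exceeds the allowed circulations.
    """
    loops = requested_slots // loop_slots
    if loops * loop_slots == requested_slots and loops <= max_loops:
        return loops
    return None

print(traveling_delay(3, fdl_lengths={1, 2, 3, 4}))       # 3: use the 3T line
print(recirculating_delay(6, loop_slots=2, max_loops=8))  # 3 passes through a 2T loop
```

The trade-off is visible in the model: a traveling buffer needs one fiber per supported delay, while a recirculating buffer reuses a single loop but is limited by how many circulations the optical signal can tolerate.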

A key limitation is that typically only one packet can be written to or read from the buffer at any given moment. This can be overcome by applying WDM (Wavelength Division Multiplexing), where a single fiber loop can buffer multiple packets simultaneously on different wavelengths of light.