Hamming Code
A classic single-error correcting, double-error detecting block code.
The Problem: Data Corruption
When we transmit digital data, it travels through a physical medium (a channel) where it's vulnerable to noise and interference. These disturbances can corrupt the data, causing a bit to flip from a '0' to a '1' or vice versa. To ensure reliable communication, we need methods to handle these errors.
The simplest approach is error detection. One of the oldest and simplest techniques is the parity check. A single parity bit is added to a block of data, but it has a major limitation: it can detect that a single error occurred, but it cannot tell us which bit is wrong, and thus cannot correct it. This is where more advanced codes, like Hamming codes, come in.
Introduction to Hamming Codes
Hamming codes are a family of linear error-correcting codes, invented by Richard Hamming in the 1950s. They are a significant step up from simple parity checks because they can not only detect single-bit and double-bit errors but can also correct single-bit errors. This is achieved by adding carefully structured parity bits to the data.
The Power of Hamming Distance
The error-correcting capability of a code is determined by its minimum Hamming distance: the smallest number of bit positions in which any two valid codewords differ. For a code to be able to:
- Detect up to t errors, its minimum distance must be at least t + 1.
- Correct up to t errors, its minimum distance must be at least 2t + 1.
The standard Hamming code has a minimum distance of 3. This means it can correct any single-bit error, or, when used purely for detection, detect up to two-bit errors. By making the valid codewords "far apart" from each other, a single-bit error will result in a corrupted word that is still "closer" to its original, correct form than any other valid codeword, allowing the receiver to fix it.
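The distance rules above are easy to check directly. Here is a minimal sketch (the function name `hamming_distance` is illustrative, not from the original page) that counts the positions where two bit strings differ:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b), "bit strings must have equal length"
    return sum(x != y for x, y in zip(a, b))

# Any two valid (7,4) codewords differ in at least 3 positions (minimum distance 3),
# so a single bit flip (distance 1 from the original) is still nearest to it.
print(hamming_distance("1100110", "1110110"))  # a single bit flip gives distance 1
```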
Case Study: The (7,4) Hamming Code
The most classic example is the (7,4) Hamming code, which demonstrates all the core principles. The notation (n, k) means:
- n = 7: The total number of bits in the final encoded word.
- k = 4: The number of data bits in the original message.
- This means we add n − k = 3 redundant parity bits.
Standard Construction: Parity Bits at Powers of Two
To construct the 7-bit codeword, we arrange the data and parity bits in a specific order. The bit positions are numbered 1 through 7. For consistency with the interactive demo and diagrams on this page, we display bits left‑to‑right as 7 → 1 (bit 7 is the leftmost, bit 1 is the rightmost).
- Parity Bits (P₁, P₂, P₄) are placed at positions that are powers of two (1, 2, 4).
- Data Bits (D₃, D₅, D₆, D₇) are placed in the remaining positions (3, 5, 6, 7).
Position (left→right): 7 6 5 4 3 2 1
Bit: D₇ D₆ D₅ P₄ D₃ P₂ P₁
Parity Check Rules
Each parity bit checks a unique set of positions (including its own). The rule is: the parity bit at position 2ⁱ checks all bit positions whose binary representation has a '1' in the iᵗʰ place.
- P₁ (position 1, binary 001): Checks positions 1, 3, 5, 7 (those with a '1' in the last binary digit).
- P₂ (position 2, binary 010): Checks positions 2, 3, 6, 7 (those with a '1' in the middle binary digit).
- P₄ (position 4, binary 100): Checks positions 4, 5, 6, 7 (those with a '1' in the first binary digit).
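The binary rule above can be sketched in a few lines of code: a position belongs to a parity bit's group exactly when the bitwise AND of the two position numbers is nonzero. (The function name `coverage` is illustrative.)

```python
def coverage(parity_pos: int, n: int = 7) -> list[int]:
    """Positions 1..n covered by the parity bit at power-of-two position parity_pos."""
    # pos & parity_pos is nonzero exactly when pos has a '1' in that binary place.
    return [pos for pos in range(1, n + 1) if pos & parity_pos]

for p in (1, 2, 4):
    print(p, coverage(p))
# 1 [1, 3, 5, 7]
# 2 [2, 3, 6, 7]
# 4 [4, 5, 6, 7]
```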
Visualizing the Logic: The Venn Diagram
These complex relationships can be easily visualized using a Venn diagram. Each circle represents the set of bits controlled by one parity bit. The goal (using even parity) is to make sure there is an even number of '1's inside each circle.
Notice how the data bits are located at the intersections of the circles. For instance, data bit D₇ (position 7, binary 111) is inside all three circles, so it is checked by all three parity bits (P₁, P₂, and P₄). This clever overlapping structure is what allows us to pinpoint the exact location of an error.
Walkthrough: Encoding and Error Correction
Step 1: Encoding a Message
Let's encode the 4-bit data word 1011. We use even parity, meaning each parity bit is set to '0' or '1' to make the total count of '1's in its group even.
- Place data bits (left→right 7→1): Data (D₃, D₅, D₆, D₇) is 1, 0, 1, 1.
  1 1 0 _ 1 _ _
- Calculate P₁ (checks 1, 3, 5, 7): D₃ + D₅ + D₇ = 1 + 0 + 1 = 2. The sum is already even, so P₁ must be 0.
  1 1 0 _ 1 _ 0
- Calculate P₂ (checks 2, 3, 6, 7): D₃ + D₆ + D₇ = 1 + 1 + 1 = 3. To make the sum even, P₂ must be 1.
  1 1 0 _ 1 1 0
- Calculate P₄ (checks 4, 5, 6, 7): D₅ + D₆ + D₇ = 0 + 1 + 1 = 2. The sum is already even, so P₄ must be 0.
  1 1 0 0 1 1 0
- Transmitted Codeword (left→right 7→1):
1100110
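The encoding steps above can be sketched as a short program. This is a minimal even-parity encoder under the page's conventions (data string maps to D₃, D₅, D₆, D₇; output is displayed left-to-right as positions 7 → 1); the function name `encode` is illustrative.

```python
def encode(data: str) -> str:
    """Encode a 4-bit data string (D3, D5, D6, D7) into a (7,4) Hamming codeword."""
    d3, d5, d6, d7 = (int(b) for b in data)
    bits = {3: d3, 5: d5, 6: d6, 7: d7}  # keyed by classic position 1..7
    for p in (1, 2, 4):
        # Even parity: set the parity bit so its coverage group has an even count of '1's.
        bits[p] = sum(v for pos, v in bits.items() if pos != p and pos & p) % 2
    return "".join(str(bits[pos]) for pos in range(7, 0, -1))  # display 7 -> 1

print(encode("1011"))  # -> 1100110, matching the walkthrough
```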
Step 2: Detecting and Correcting an Error
Now, imagine the codeword is transmitted, but noise flips the 3rd bit from the left (position 5) from '0' to '1'. The receiver gets the corrupted word 1110110.
The receiver performs the same parity checks on the received word:
- Check 1 (for P₁; positions 1, 3, 5, 7): 0 + 1 + 1 + 1 = 3. The count of '1's is odd (3). Error! Syndrome bit S₁ = 1.
- Check 2 (for P₂; positions 2, 3, 6, 7): 1 + 1 + 1 + 1 = 4. The count of '1's is even (4). OK. Syndrome bit S₂ = 0.
- Check 3 (for P₄; positions 4, 5, 6, 7): 0 + 1 + 1 + 1 = 3. The count of '1's is odd (3). Error! Syndrome bit S₄ = 1.
We combine the syndrome bits, from largest to smallest position (S₄S₂S₁), to form a binary number: 101. In decimal, 101₂ = 5. This is the classic Hamming position index (1..7, counted from the right). In our left‑to‑right display (7 → 1), it corresponds to the 3rd bit from the left.
This result, 5, is the exact position of the corrupted bit!
To correct the error, the receiver simply flips the bit at position 5 in the received word 1110110, which turns it back into the original, correct codeword: 1100110.
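The detect-and-correct procedure can be sketched as well: recompute each parity check, add up the positions of the failing checks to form the syndrome, and flip the indicated bit. (The function name `decode` and the returned tuple shape are illustrative choices, not from the original page.)

```python
def decode(word: str) -> tuple[str, int]:
    """Correct a single-bit error in a 7-bit word displayed left-to-right as 7 -> 1.

    Returns (corrected word, error position), where position 0 means no error found.
    """
    bits = {pos: int(b) for pos, b in zip(range(7, 0, -1), word)}
    # Syndrome: sum of the parity positions whose coverage group has odd parity.
    syndrome = sum(p for p in (1, 2, 4)
                   if sum(v for pos, v in bits.items() if pos & p) % 2)
    if syndrome:
        bits[syndrome] ^= 1  # flip the bit at the classic (right-counted) position
    return "".join(str(bits[pos]) for pos in range(7, 0, -1)), syndrome

print(decode("1110110"))  # -> ('1100110', 5), matching the walkthrough
```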
Interactive Hamming (7,4)