Quantization

Converting continuous sample amplitudes into discrete levels and the resulting quantization noise.

What is Quantization? The Second Step of Digital Conversion

After sampling, which discretizes a signal in time, we are left with a series of samples. However, the amplitude of each sample can still take on any value from a continuous range. Quantization is the process of discretizing these amplitudes.

In essence, quantization maps a large, often infinite, set of input values to a much smaller, finite set of output values. It's an act of rounding or approximation. Instead of dealing with an infinite number of possible voltage levels, we approximate each sample's value to the nearest of a predefined set of quantization levels. This step is crucial for representing the signal with a finite number of bits.

The Mechanism of Quantization

The process of quantization involves two main steps: dividing the amplitude range and assigning values.

  1. Dividing the Range: The entire possible range of the signal's amplitude is divided into a finite number of intervals, known as quantization intervals. Each interval is defined by a width, called the quantization step size (ε).
  2. Rounding and Assignment: The actual amplitude of each sample is measured. This value is then rounded to the nearest available quantization level. For instance, if a sample has a value of 2.7V and the nearest levels are 2.5V and 3.0V, it is assigned 2.5V, the closer of the two. All samples falling within a given interval are assigned the same single quantization level. (A minimal sketch of this rounding appears after this list.)

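As a rough illustration of both steps, here is a minimal Python sketch of a uniform quantizer; the ±4.0V amplitude range, the 0.5V step size, and the quantize_uniform name are illustrative assumptions rather than values from the text.

    # Minimal sketch of a uniform quantizer. The amplitude range, the 0.5 V
    # step size, and the function name are illustrative assumptions.

    def quantize_uniform(x, step=0.5, v_min=-4.0, v_max=4.0):
        """Round a sample x to the nearest level on a grid spaced `step` apart."""
        x = max(v_min, min(v_max, x))            # 1. keep x inside the divided range
        level_index = round((x - v_min) / step)  # 2. pick the nearest level
        return v_min + level_index * step

    samples = [2.7, -1.13, 0.26]
    print([quantize_uniform(s) for s in samples])   # -> [2.5, -1.0, 0.5]
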
Quantization Error: The Inevitable Loss

Because quantization is an approximation process, it introduces an unavoidable error. This error, known as quantization error or quantization noise, is the difference between the actual sample value and its rounded, quantized value.

  • Magnitude of Error: For a uniform quantizer (where all steps are equal), the maximum possible quantization error for any given sample is half the quantization step size (±ε/2).
  • Impact on Signal Quality: This error manifests as noise that is added to the signal. The more quantization levels we use (i.e., the smaller the step size ε), the smaller the error and the higher the fidelity of the digital representation, resulting in a better signal-to-noise ratio (SNR). A short numerical check of the error bound follows this list.

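The ±ε/2 bound can be checked numerically. The following short Python sketch quantizes random amplitudes with an assumed step size of 0.5 and an assumed ±4.0V range, and records the largest rounding error observed:

    # Small numerical check of the error bound: for a uniform quantizer with
    # step size eps, the rounding error never exceeds eps / 2. The step size
    # and amplitude range below are illustrative assumptions.

    import random

    eps = 0.5
    worst = 0.0
    for _ in range(10_000):
        x = random.uniform(-4.0, 4.0)        # random sample amplitude
        xq = round(x / eps) * eps            # nearest quantization level
        worst = max(worst, abs(x - xq))      # largest error observed so far

    print(f"largest observed error: {worst:.4f}, bound eps/2 = {eps / 2}")
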
Resolution: Bits per Sample

After quantization, each of the discrete levels must be represented by a unique binary code. The number of bits used to represent each sample determines the resolution of the conversion and the total number of available quantization levels.

L = 2^n

Where L is the number of quantization levels, and n is the number of bits per sample.

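As a quick illustration, a small Python loop evaluates this formula for the bit depths discussed in the next section:

    # Number of quantization levels L = 2^n for a few common bit depths.
    for n in (8, 12, 16, 24):
        print(f"{n:2d} bits per sample -> {2 ** n:,} levels")
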
Bit Depth in Practice

  • Early PCM Systems: Early PCM systems for telephony used 12 bits per sample. This provided 2^12 = 4096 quantization levels, resulting in a high-fidelity signal. However, this required a data rate of 96 kbit/s (8000 samples/s × 12 bits/sample), which was quite high for the time.
  • Modern Telephony (PCM): Through the use of non-linear quantization (companding), the industry standard was reduced to 8 bits per sample. This provides 2^8 = 256 levels and results in the fundamental data rate of 64 kbit/s (8000 samples/s × 8 bits/sample) for a single voice channel.
  • High-Fidelity Audio: For applications like CD audio, much higher resolution is used, typically 16 bits (2^16 = 65,536 levels) or even 24 bits for professional recording, to capture a much wider dynamic range and minimize audible quantization noise. (A short computation after this list reproduces these data rates.)

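For illustration, the data rates quoted above follow from multiplying sampling rate, bits per sample, and channel count. A minimal Python sketch, assuming the standard CD audio parameters of 44.1 kHz and two channels (not given in the text), reproduces the figures:

    # Data rate = sampling rate x bits per sample x channels, reproducing the
    # figures above. The CD audio parameters (44.1 kHz, 2 channels) are
    # standard values assumed here, not taken from the text.
    systems = {
        "early 12-bit telephony PCM": (8_000, 12, 1),
        "standard 8-bit telephony PCM": (8_000, 8, 1),
        "CD audio, 16-bit stereo": (44_100, 16, 2),
    }
    for name, (fs, bits, channels) in systems.items():
        rate_kbit = fs * bits * channels / 1000   # bit/s -> kbit/s
        print(f"{name}: {rate_kbit:g} kbit/s")
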
The Trade-off

The choice of bit depth is a fundamental trade-off. More bits per sample lead to higher quality (lower quantization noise) but also require a higher data rate and thus more bandwidth for transmission and more space for storage.
