How Quantization Works in LLMs: Zero to Hero
Table of contents

- Introduction
- What is quantization (simple explanation)
- Why quantize LLMs? Costs, memory, and latency
- Quantization primitives and concepts
  - Precision (bit widths)
  - Range, scale and zero-point
  - Uniform vs non-uniform quantization
  - Blockwise and per-channel scaling
- Main quantization workflows
  - Post-Training Quantization (PTQ)
  - Quantization-Aware Training (QAT)
  - Hybrid and mixed-precision approaches
- Practical algorithms and techniques
  - Linear (symmetric) quantization
  - Affine (zero-point) quantization
  - Blockwise / groupwise quantization
  - K-means and non-uniform quantization
  - Persistent or learned scales, GPTQ-style (second-order aware) methods
  - Quantizing KV caches and activations
- Tools, libraries and ecosystem (how to get started)
  - Bitsandbytes, GGML, Hugging Face & Quanto, PyTorch, GPTQ implementations
- End-to-end example: quantize a transformer weight matrix (code)
- Best practices and debugging tips
- Limitations and failure modes
- Future directions
- Conclusion
- Resources

Introduction

Quantization reduces the numeric precision of a model’s parameters (and sometimes activations) so that a trained Large Language Model (LLM) needs fewer bits to store and compute with its values. The result: much smaller models, lower memory use, faster inference, and often reduced cost with only modest accuracy loss when done well[2][5]. ...
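To make the core idea concrete before the detailed sections, here is a minimal sketch (not taken from the article) of what "fewer bits per value" looks like in practice: it assumes a simple symmetric int8 scheme and uses NumPy purely for illustration; the article's own end-to-end example appears later.

```python
import numpy as np

# Illustrative only: map float32 weights to int8 with a single symmetric scale,
# then reconstruct them to see how little precision is lost.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.02, size=(4, 8)).astype(np.float32)

# Choose the scale so the largest-magnitude weight maps to the int8 limit (127).
scale = np.abs(weights_fp32).max() / 127.0

# Quantize: divide by the scale, round to the nearest integer step, clamp to int8 range.
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize for comparison: storage drops from 32 bits to 8 bits per weight (~4x smaller).
weights_restored = weights_int8.astype(np.float32) * scale
print("max absolute error:", np.abs(weights_fp32 - weights_restored).max())
```

Real LLM quantization schemes refine this basic recipe (per-channel or blockwise scales, zero-points, calibration data), which is exactly what the sections below walk through.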