Scaling Sparse Autoencoders: Mapping the Black Box of Multi-Modal Foundation Models
Introduction

Foundation models—large neural networks trained on massive, heterogeneous datasets—have reshaped the AI landscape. From GPT‑4’s language prowess to CLIP’s vision‑language alignment, these models excel at multi‑modal reasoning, yet their internal representations remain notoriously opaque. Researchers and practitioners alike ask: What does each neuron actually encode? Can we expose interpretable sub‑structures without sacrificing performance? How do we scale such interpretability tools to billions of parameters?

Sparse autoencoders (SAEs) provide a promising answer. By enforcing a sparsity constraint under which only a tiny fraction of latent units activate for any given input (typically in a latent space wider than the hidden layer it explains), SAEs act as a “lens” that isolates salient features in the hidden space of a pre‑trained foundation model. When applied to multi‑modal models—those that jointly process text, images, audio, and more—SAEs can map the black box of cross‑modal representations, revealing conceptual atoms that are both human‑readable and mathematically tractable. ...
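To make the sparsity mechanism concrete, here is a minimal sketch in PyTorch using a top‑k activation rule, one common way to enforce that only a handful of latent units fire per input (an L1 penalty on the latent codes is another). The class name `TopKSparseAutoencoder`, the dimensions, and the usage values are illustrative assumptions, not a reference to any particular published implementation.

```python
import torch
import torch.nn as nn

class TopKSparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder sketch (illustrative, not a reference impl).

    Encodes a model's hidden activations into an overcomplete latent space,
    keeps only the k largest latent activations per input (top-k sparsity),
    then reconstructs the original activations from that sparse code.
    """

    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor):
        # Encode: project hidden states into the (wider) latent space.
        z = torch.relu(self.encoder(x))
        # Sparsify: zero out all but the k largest activations per row.
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        # Decode: reconstruct the original hidden states from the sparse code.
        x_hat = self.decoder(z_sparse)
        return x_hat, z_sparse

# Hypothetical usage: sizes chosen for illustration only.
sae = TopKSparseAutoencoder(d_model=768, d_latent=8192, k=32)
activations = torch.randn(4, 768)  # stand-in for real hidden states
reconstruction, codes = sae(activations)
loss = nn.functional.mse_loss(reconstruction, activations)
```

The design choice worth noting is that the latent dimension exceeds the input dimension: sparsity, not a dimensional bottleneck, is what forces each latent unit toward an interpretable, narrowly scoped feature.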