CommVQ: Commutative Vector Quantization for KV Cache Compression
Authors: Junyan Li†, Tianle Cai‡, Yang Zhang§, Muhammad Yusuf Hassan†, Talha Chafekar†, Colorado Reed, Zhile Ren, Pengsheng Guo, Binazir Karimzadeh, Chong Wang, Chuang Gan†
Large Language Models (LLMs) are increasingly used in applications requiring long context lengths, but the key-value (KV) cache often becomes a memory bottleneck on GPUs as context lengths grow. To address this, we propose Commutative Vector Quantization (CommVQ) to significantly reduce memory usage for long-context LLM inference. First, we leverage additive quantization by introducing a lightweight encoder and codebook to compress the KV cache, which can then be decoded with a simple matrix multiplication. Second, to tackle the high computational costs during decoding, we design the codebook to be commutative with Rotary Position Embedding (RoPE), and utilize an Expectation-Maximization (EM) algorithm to learn the codebook. This enables efficient integration of decoding into the self-attention mechanism, significantly reducing computational overhead. Our approach achieves superior accuracy through additive quantization while lowering computational costs with our RoPE-commutative codebook. Experiments on long-context benchmarks and GSM8K demonstrate that our method reduces FP16 KV cache size by 87.5% with 2-bit quantization, while maintaining higher accuracy than state-of-the-art KV cache quantization methods. Remarkably, it enables 1-bit quantization of the KV cache with minimal accuracy degradation, making it possible to run a LLaMA-3.1 8B model with a maximum 128K context length on a single RTX 4090 GPU.
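The sketch below is a minimal, illustrative toy of the two ideas named in the abstract: decoding an additively quantized key as a sum of codeword terms (plain matrix multiplications), and using codewords structured so they commute with RoPE's per-pair rotations. It assumes one specific structure (block-diagonal 2x2 scaled-rotation blocks, i.e. complex-multiplication blocks) purely to make the commutativity check pass; the shapes, the hypothetical `rope_matrix` / `random_commuting_codeword` helpers, and the hard-coded code indices are illustrative and are not the paper's learned encoder, codebook, or EM procedure.

```python
import numpy as np

# Toy dimensions (illustrative only, not the paper's settings).
d = 8            # head dimension (even, so RoPE pairs dimensions)
num_codes = 4    # codebook size per additive stage
num_stages = 2   # additive quantization: decoded key = sum over stages

rng = np.random.default_rng(0)

def rope_matrix(pos: int, dim: int, base: float = 10000.0) -> np.ndarray:
    """Block-diagonal RoPE rotation: one 2x2 rotation per dimension pair."""
    R = np.zeros((dim, dim))
    for i in range(dim // 2):
        theta = pos / (base ** (2 * i / dim))
        c, s = np.cos(theta), np.sin(theta)
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return R

def random_commuting_codeword(dim: int) -> np.ndarray:
    """One assumed codeword structure: 2x2 scaled-rotation blocks
    ([[a, -b], [b, a]]), which commute with RoPE's 2x2 rotations
    because both act like complex multiplication on each pair."""
    M = np.zeros((dim, dim))
    for i in range(dim // 2):
        a, b = rng.normal(size=2)
        M[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[a, -b], [b, a]]
    return M

# Stand-in codebook: num_stages x num_codes codeword matrices.
codebook = [[random_commuting_codeword(d) for _ in range(num_codes)]
            for _ in range(num_stages)]

codes = [1, 3]              # stand-in for the lightweight encoder's output
x = rng.normal(size=d)      # an un-rotated key vector

# Decoding is just a sum of matrix multiplications.
decoded = sum(codebook[s][codes[s]] @ x for s in range(num_stages))

# Commutativity check: applying RoPE after decoding equals decoding the
# RoPE-rotated input, so the rotation can be folded into attention rather
# than re-applied to every decoded key.
R = rope_matrix(pos=5, dim=d)
lhs = R @ decoded
rhs = sum(codebook[s][codes[s]] @ (R @ x) for s in range(num_stages))
assert np.allclose(lhs, rhs)
print("RoPE commutes with the structured codebook:", np.allclose(lhs, rhs))
```

The assertion passes because each 2x2 block of the assumed codewords and each RoPE block are both scaled rotations, which commute; this is only one way to realize the commutativity property the abstract describes, shown at toy scale.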