Publication

VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models

VPTQ introduces Vector Post-Training Quantization for extremely low-bit quantization of LLMs. It formulates LLM vector quantization as a second-order optimization problem and supports residual and outlier quantization, achieving near-lossless compression at 1–4 bits.

EMNLP 2024 / November 2024
LLM · quantization · model compression · vector quantization

Authors

Yifei Liu, Jicheng Wen, Yang Wang, Shengyu Ye, Li Lyna Zhang, Ting Cao, Cheng Li, Mao Yang

Abstract

VPTQ uses Second-Order Optimization and vector quantization to achieve extreme low-bit (1–4 bit) compression of LLMs, enabling near-lossless quantization and fast inference with significantly reduced memory footprint.
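To make the core idea concrete, below is a minimal, generic vector-quantization sketch: weights are grouped into short vectors, a codebook of centroids is learned (here with plain k-means), and each vector is replaced by the index of its nearest centroid. This is an illustrative assumption, not VPTQ's actual algorithm; it omits the paper's second-order optimization, residual quantization, and outlier handling. All function names are hypothetical.

```python
import numpy as np

def vector_quantize(W, vec_dim=4, num_centroids=256, iters=10, seed=0):
    """Toy vector quantization of a weight matrix (illustration only,
    NOT the VPTQ algorithm).

    Reshapes W into rows of length `vec_dim`, runs a few k-means
    iterations to learn a codebook, and returns (indices, codebook).
    Bits per weight = log2(num_centroids) / vec_dim (here 8/4 = 2).
    """
    rng = np.random.default_rng(seed)
    vecs = W.reshape(-1, vec_dim)
    # Initialize the codebook from randomly chosen weight vectors.
    codebook = vecs[rng.choice(len(vecs), num_centroids, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared Euclidean).
        dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(1)
        # Update each centroid to the mean of its assigned vectors.
        for c in range(num_centroids):
            members = vecs[idx == c]
            if len(members):
                codebook[c] = members.mean(0)
    return idx.astype(np.uint8), codebook

def dequantize(idx, codebook, shape):
    """Reconstruct an approximate weight matrix by codebook lookup."""
    return codebook[idx].reshape(shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
idx, codebook = vector_quantize(W)
W_hat = dequantize(idx, codebook, W.shape)
mse = float(np.mean((W - W_hat) ** 2))
```

At inference time, only the uint8 indices and the small codebook need to be stored, which is where the memory savings come from; VPTQ additionally optimizes the assignments and codebooks against a second-order (Hessian-aware) objective rather than plain reconstruction error.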