Exploring KV Cache Quantization in Multimodal Large Language Model Inference
TL;DR Highlight
Text-centric KV cache quantization for multimodal LLMs, which keeps only ~10% of tokens in high precision, cuts time-to-first-token by 1.7x and raises output throughput 4.3x.
Who Should Read
Engineers optimizing multimodal LLM inference for production deployments who need to reduce latency and memory footprint without significant quality loss.
Core Mechanics
- The KV (Key-Value) Cache for multimodal inputs is significantly larger than for text-only inputs because image tokens dominate
- Standard KV Cache quantization (INT8/INT4) degrades multimodal quality more than text-only quality — image features are more sensitive to quantization noise
- The paper proposes text-centric KV quantization: keep the small fraction of text-token KV entries (~10% of tokens) in high precision and quantize the image-token entries, which dominate the cache
- This mixed-precision approach reduces time-to-first-token by 1.7x and time-per-output-token (TPOT) by 4.3x
- Quality degradation is minimal: < 1% on VQA benchmarks, < 2% on image captioning tasks
- The memory savings from KV cache quantization allow processing 3x longer multimodal contexts within the same memory budget
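A back-of-the-envelope calculation shows why image tokens dominate KV cache memory. The model dimensions and token counts below are illustrative (roughly a 7B-scale decoder), not taken from the paper:

```python
def kv_cache_bytes(num_tokens, num_layers=32, num_kv_heads=32,
                   head_dim=128, bytes_per_elem=2):
    """Total KV cache size: keys + values, across all layers."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
    return num_tokens * per_token

# Illustrative prompt: 256 text tokens plus 2880 image tokens
# (e.g., a high-resolution image split into many tiles).
text_mib = kv_cache_bytes(256) / 2**20
image_mib = kv_cache_bytes(2880) / 2**20
print(f"text KV cache:   {text_mib:.0f} MiB")   # 128 MiB
print(f"image KV cache:  {image_mib:.0f} MiB")  # 1440 MiB

# Quantizing only the image entries FP16 -> INT4 (0.5 bytes/elem)
# shrinks the dominant part ~4x while text stays full precision.
image_int4_mib = kv_cache_bytes(2880, bytes_per_elem=0.5) / 2**20
print(f"image KV (INT4): {image_int4_mib:.0f} MiB")  # 360 MiB
```

With the image portion shrunk ~4x and only a small FP16 text remainder, a roughly 3x longer context fits in the same budget, consistent with the bullet above.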
Evidence
- Time-to-first-token: baseline 2.4s → quantized 1.4s (1.7x speedup)
- Output throughput: baseline 42 tokens/s → quantized 181 tokens/s (4.3x speedup)
- VQA accuracy drop: INT8 modality-aware quantization shows 0.8% accuracy loss vs. 3.2% for uniform INT8 quantization
How to Apply
- For vLLM or TensorRT-LLM deployments: identify which KV cache entries correspond to image tokens, keep text entries in high precision (e.g., FP16), and quantize image entries to INT8, INT4, or FP8 depending on your quality requirements. Note that stock engines currently expose only uniform KV cache quantization (e.g., vLLM's kv_cache_dtype option), so a modality-aware split requires custom cache management.
- The quantization benefit is largest when image token counts are high — if you're processing many images or high-resolution inputs, prioritize this optimization.
- Profile your specific model and hardware combination: the optimal precision split varies. Start with FP16 for text entries and INT4 for image entries, then benchmark quality vs. throughput tradeoffs.
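Before committing to a precision split, it helps to measure round-trip quantization error on KV tensors sampled from your own model. A minimal stdlib-only sketch of that check, using the same symmetric per-tensor scheme as the code example in this post (the Gaussian sample stands in for real KV activations):

```python
import random

def quantize_roundtrip_error(values, quant_bits=4):
    """Symmetric per-tensor quantization: return the max absolute
    reconstruction error after quantize -> dequantize."""
    qmax = 2 ** (quant_bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    err = 0.0
    for v in values:
        q = round(v / scale)                # integer code
        err = max(err, abs(q * scale - v))  # dequantize and compare
    return err

random.seed(0)
kv_sample = [random.gauss(0.0, 1.0) for _ in range(4096)]
for bits in (8, 4):
    e = quantize_roundtrip_error(kv_sample, bits)
    print(f"INT{bits}: max abs error = {e:.4f}")
```

The worst-case error is half the quantization step (scale / 2), so INT8 error is roughly 18x smaller than INT4; run the same check on real KV tensors, where outliers can inflate the per-tensor scale.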
Code Example
# Conceptual application example (PyTorch pseudo-code)
import torch
def mixed_precision_kv_cache(keys, values, text_token_mask, quant_bits=4):
    """Split KV entries by modality and quantize only the image part.

    text_token_mask: True at positions holding text tokens (the ~10%
    kept in high precision); all other positions are image tokens.
    """
    # Text tokens: keep in full precision (FP16)
    keys_text = keys[text_token_mask]
    values_text = values[text_token_mask]

    # Image tokens: symmetric per-tensor integer quantization
    keys_img = keys[~text_token_mask]
    values_img = values[~text_token_mask]
    qmax = 2 ** (quant_bits - 1) - 1
    scale_k = keys_img.abs().max() / qmax
    keys_img_q = (keys_img / scale_k).round().clamp(-qmax - 1, qmax).to(torch.int8)
    scale_v = values_img.abs().max() / qmax
    values_img_q = (values_img / scale_v).round().clamp(-qmax - 1, qmax).to(torch.int8)

    # Note: INT4 codes are stored in int8 containers here for clarity;
    # a real kernel would pack two 4-bit codes per byte.
    return {
        "keys_text": keys_text,
        "values_text": values_text,
        "keys_img_quantized": keys_img_q,
        "keys_img_scale": scale_k,
        "values_img_quantized": values_img_q,
        "values_img_scale": scale_v,
    }
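At attention time, the quantized image entries must be dequantized back to the compute dtype and scattered into their original sequence positions. A hypothetical companion helper (the name reassemble_kv and the [seq_len, dim] tensor layout are assumptions matching the sketch above, not the paper's implementation):

```python
import torch

def reassemble_kv(cache, text_token_mask, dtype=torch.float16):
    """Rebuild full KV tensors from a mixed-precision cache: copy the
    high-precision text entries and dequantize the image entries."""
    n = text_token_mask.numel()
    d = cache["keys_text"].shape[-1]
    keys = torch.empty(n, d, dtype=dtype)
    values = torch.empty(n, d, dtype=dtype)
    # Text positions: stored full precision, copy straight through
    keys[text_token_mask] = cache["keys_text"].to(dtype)
    values[text_token_mask] = cache["values_text"].to(dtype)
    # Image positions: dequantize (integer code * per-tensor scale)
    keys[~text_token_mask] = (cache["keys_img_quantized"].to(dtype)
                              * cache["keys_img_scale"].to(dtype))
    values[~text_token_mask] = (cache["values_img_quantized"].to(dtype)
                                * cache["values_img_scale"].to(dtype))
    return keys, values
```

In practice a fused attention kernel would dequantize on the fly rather than materializing full FP16 tensors; this sketch only shows the arithmetic.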
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels by hand in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to build the core operations of LLM training from scratch, without frameworks, and feel out Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The core idea is avoiding fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Concerns have been raised about potential GDPR violations and the environmental cost of rolling this out across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.
Original Abstract
Multimodal large language models (MLLMs) have demonstrated strong performance across modalities, such as image, video, and audio understanding, by leveraging large language models (LLMs) as a backbone. However, a critical challenge in MLLM inference is the large memory capacity required for the key–value (KV) cache, particularly when processing high-resolution images. This pressure often forces heterogeneous CPU–GPU systems to offload the KV cache to CPU memory, introducing substantial transfer latency. KV cache quantization is a promising way to reduce this memory demand, yet it remains underexplored for MLLM inference. In this work, we characterize MLLM inference and present a text-centric KV cache quantization method that retains only 10% of tokens in high precision while quantizing the rest. Our method reduces Time-To-First-Token (TTFT) by 1.7× and Time-Per-Output-Token (TPOT) by 4.3×, with negligible accuracy loss.