BEAVER: A Training-Free Hierarchical Prompt Compression Method via Structure-Aware Page Selection
TL;DR Highlight
Structuring long documents page-by-page and compressing without truncation achieves 26.4x faster compression than LongLLMLingua.
Who Should Read
Engineers building long document processing pipelines who need to fit long contexts into smaller context windows without losing critical information.
Core Mechanics
- Standard prompt compression methods truncate or summarize long documents, losing information in the middle — the 'lost in the middle' problem
- The proposed method structures documents into page units and compresses each page's token representation independently before concatenating
- This page-level compression preserves structural boundaries and avoids cross-page information bleeding
- Achieves 26.4x faster compression than LongLLMLingua while maintaining comparable or better QA accuracy on long document tasks
- The approach is architecture-agnostic — works with any LLM as the downstream model
- Particularly effective for documents with clear structural boundaries (reports, papers, contracts) vs. continuous narrative text
Evidence
- Compression speed: 26.4x faster than LongLLMLingua on equivalent hardware
- On LongBench QA tasks: accuracy within 2% of LongLLMLingua while being 26x faster
- On documents > 32K tokens: better performance than truncation-based methods by 8-12% on key information retrieval tasks
How to Apply
- For long document RAG: instead of chunking by fixed token count, chunk by page/section boundaries, then apply token compression to each chunk independently before retrieval.
- Use page-level compression as a preprocessing step to fit entire long documents into smaller context windows — compress each page to ~20% of original token count while preserving key information.
- Best suited for structured documents (PDFs, reports) with clear page boundaries — for continuous text without structure, fixed-size chunking may be more appropriate.
Code Example
# BEAVER compression pipeline conceptual example (see https://cslikai.cn/BEAVER/)
from beaver import BEAVER
# Initialize: specify backbone embedding model (no training required)
compressor = BEAVER(
embedding_model="Qwen/Qwen3-8B",
page_size=64, # M: number of tokens per page
fusion_weight=0.7, # γ: mean/max pooling fusion ratio
lambda_score=0.7, # λ: Semantic vs Lexical ratio (lower for code/identifier-heavy content)
anchor_pages=4, # kanc: number of pages to always preserve at the beginning
flow_window=4, # wflow: number of pages to preserve immediately before the query
)
# Takes a long document + query and returns compressed context
long_document = "...a long document with 128k tokens..."
query = "What is the retail price of the iPhone 15 Pro?"
compressed_context = compressor.compress(
context=long_document,
query=query,
token_budget=3000 # target compressed token count
)
# Pass the compressed context to the LLM
response = llm.generate(compressed_context + "\n\n" + query)Terminology
Related Resources
Original Abstract (Expand)
The exponential expansion of context windows in LLMs has unlocked capabilities for long-document understanding but introduced severe bottlenecks in inference latency and information utilization. Existing compression methods often suffer from high training costs or semantic fragmentation due to aggressive token pruning. In this paper, we propose BEAVER, a novel training-free framework that shifts compression from linear token removal to structure-aware hierarchical selection. BEAVER maximizes hardware parallelism by mapping variable-length contexts into dense page-level tensors via dual-path pooling, and preserves discourse integrity through a hybrid planner combining semantic and lexical dual-branch selection with sentence smoothing. Extensive evaluations on four long-context benchmarks demonstrate that BEAVER achieves comparable performance to state-of-the-art (SOTA) methods like LongLLMLingua. Notably, on the RULER benchmark, BEAVER maintains high fidelity in multi-needle retrieval where baselines deteriorate. Regarding efficiency, BEAVER reduces latency by 26.4x on 128k contexts, offering a scalable solution for high-throughput applications. Our code is available at https://cslikai.cn/BEAVER/.