Leveraging long context in retrieval augmented language models for medical question answering
TL;DR Highlight
Solving the problem of key information in the middle of long medical documents being ignored in RAG using a map-reduce strategy.
Who Should Read
Healthcare AI engineers building RAG systems for clinical documentation, EHR analysis, or medical literature search where critical information can appear anywhere in long documents.
Core Mechanics
- Standard RAG retrieves relevant chunks but LLMs show 'lost in the middle' degradation — information in the middle of long contexts receives less attention
- In medical documents, critical information (dosages, contraindications, lab values) is scattered throughout and can appear anywhere — position-biased retrieval is particularly dangerous
- The proposed map-reduce RAG strategy: first MAP phase extracts key clinical information from each chunk independently, then REDUCE phase synthesizes the extracted information
- This two-phase approach ensures each section gets independent attention before synthesis, eliminating the position bias problem
- The approach achieves higher recall of critical medical information than standard RAG while maintaining similar precision
- Particularly effective for structured medical documents (discharge summaries, clinical notes) with heterogeneous information distribution
Evidence
- On medical QA benchmarks: map-reduce RAG achieved 89% recall of critical clinical information vs. 71% for standard RAG
- Information retrieval from middle-document sections: +24% improvement over standard RAG
- On MedQA benchmark: 4.2% accuracy improvement over standard RAG baseline
How to Apply
- For medical RAG: implement a 2-stage pipeline — Stage 1 (Map): for each retrieved chunk, extract structured clinical information (entities, values, relationships) independently. Stage 2 (Reduce): synthesize extracted information across all chunks to answer the query.
- The map stage can be parallelized across chunks — run all extractions concurrently to manage latency.
- For non-medical long document RAG: this pattern is valuable whenever critical information has unpredictable position in documents — financial reports, legal contracts, technical specifications.
Code Example
# BriefContext map-reduce RAG pattern example
from openai import OpenAI
client = OpenAI()
def map_summarize(doc: str, question: str) -> str:
"""Individually summarize each document based on the question (map phase)"""
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a medical specialist summarizer. Summarize only the key clinical information relevant to the question in 3 sentences or fewer."},
{"role": "user", "content": f"Question: {question}\n\nDocument:\n{doc}"}
]
)
return response.choices[0].message.content
def reduce_answer(summaries: list[str], question: str) -> str:
"""Combine summaries to generate the final answer (reduce phase)"""
combined = "\n\n---\n\n".join(summaries)
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a medical QA expert. Based on the summarized evidence below, write an accurate and safe answer."},
{"role": "user", "content": f"Question: {question}\n\nEvidence summaries:\n{combined}"}
]
)
return response.choices[0].message.content
# Actual usage
question = "What are the contraindication criteria for metformin in patients with impaired renal function?"
docs = retrieve_documents(question) # Existing retrieval step
# map: can be processed in parallel
summaries = [map_summarize(doc, question) for doc in docs]
# reduce
final_answer = reduce_answer(summaries, question)
print(final_answer)Terminology
Related Papers
Show HN: Airbyte Agents – context for agents across multiple data sources
Airbyte가 Slack, Salesforce, Linear 등 여러 SaaS 시스템의 데이터를 미리 인덱싱해서 Agent가 API를 일일이 뒤지지 않아도 되는 Context Store를 출시했다. 기존 MCP 방식보다 토큰을 최대 90%까지 줄이는 효과를 확인했다.
A polynomial autoencoder beats PCA on transformer embeddings
PCA 인코더에 2차 다항식 디코더를 붙여서 닫힌 형태(closed-form)로 embedding 압축 품질을 크게 개선하는 기법으로, SGD 없이 numpy만으로 구현 가능하다.
From Unstructured Recall to Schema-Grounded Memory: Reliable AI Memory via Iterative, Schema-Aware Extraction
RAG 스타일 텍스트 검색 대신 Schema로 정의된 구조화 레코드에 메모리를 저장하면, 정확한 사실 조회·상태 추적·집계 쿼리에서 압도적으로 높은 정확도를 얻을 수 있다.
Show HN: Atomic – Local-first, AI-augmented personal knowledge base
Atomic builds a self-hosted, open-source personal knowledge graph app that automatically embeds, tags, and links notes, web clips, and RSS feeds—supporting semantic search, LLM-powered wiki synthesis, and MCP integration.
We replaced RAG with a virtual filesystem for our AI documentation assistant
Explains how Mintlify overcame RAG chunking limitations by building a virtual filesystem (ChromaFs) on top of Chroma DB that mimics UNIX commands, reducing session boot time from 46 seconds to 100ms.
Chroma Context-1: Training a Self-Editing Search Agent
Original Abstract (Expand)
While holding great promise for improving and facilitating healthcare through applications of medical literature summarization, large language models (LLMs) struggle to produce up-to-date responses on evolving topics due to outdated knowledge or hallucination. Retrieval-augmented generation (RAG) is a pivotal innovation that improves the accuracy and relevance of LLM responses by integrating LLMs with a search engine and external sources of knowledge. However, the quality of RAG responses can be largely impacted by the rank and density of key information in the retrieval results, such as the “lost-in-the-middle” problem. In this work, we aim to improve the robustness and reliability of the RAG workflow in the medical domain. Specifically, we propose a map-reduce strategy, BriefContext, to combat the “lost-in-the-middle” issue without modifying the model weights. We demonstrated the advantage of the workflow with various LLM backbones and on multiple QA datasets. This method promises to improve the safety and reliability of LLMs deployed in healthcare domains by reducing the risk of misinformation, ensuring critical clinical content is retained in generated responses, and enabling more trustworthy use of LLMs in critical tasks such as medical question answering, clinical decision support, and patient-facing applications.