High-performance automated abstract screening with large language model ensembles
TL;DR Highlight
Automating paper abstract screening with six LLMs, including GPT-4, Claude, and Gemini, was far faster than human researchers and matched or exceeded their accuracy.
Who Should Read
Systematic review researchers, clinical researchers, and anyone who needs to screen large volumes of academic papers for inclusion/exclusion criteria.
Core Mechanics
- Benchmarked six LLMs (GPT-3.5 Turbo, GPT-4 Turbo, GPT-4o, Llama 3 70B, Gemini 1.5 Pro, and Claude Sonnet 3.5) on zero-shot abstract screening for systematic literature reviews
- Top LLM-prompt combinations reached sensitivity up to 1.000 and balanced accuracy up to 0.904 on the development set, exceeding the best human researchers (sensitivity 0.775)
- GPT-4 performed best overall; Claude and Gemini were competitive and significantly cheaper
- LLM screening was 10–20x faster than human review teams
- Ensembles of LLMs (and LLM-human pairs) achieved perfect sensitivity while cutting screening workload by 37.55–99.11%
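One standard way to combine more than two screeners, which the digest's own code example does not cover, is a majority vote over the per-model decisions. A minimal sketch (the tie-break toward INCLUDE is an assumption chosen to preserve sensitivity, not a detail from the paper):

```python
from collections import Counter

def majority_vote(decisions):
    """Majority vote over per-model 'INCLUDE'/'EXCLUDE' decisions.

    With an even number of models a tie is possible; ties break toward
    INCLUDE so a relevant paper is never dropped on a split vote.
    """
    counts = Counter(decisions)
    if counts["INCLUDE"] >= counts["EXCLUDE"]:
        return "INCLUDE"
    return "EXCLUDE"

print(majority_vote(["INCLUDE", "EXCLUDE", "INCLUDE"]))  # INCLUDE
```

With three models this overrides a single model's false negative; with two models it degenerates to the union rule used in the code example below the How to Apply section.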
Evidence
- Evaluated across 23 Cochrane Library systematic reviews: a balanced development dataset (n = 800) and a comprehensive dataset of replicated search results (n = 119,695), each with ground-truth inclusion/exclusion decisions
- Sensitivity and specificity reported per model with confidence intervals
- Time-to-completion comparison: LLM batch vs. human team review timeline
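The per-model confidence intervals can be reproduced from the confusion counts. A minimal sketch using the Wilson score interval, which is a standard choice for binomial proportions such as sensitivity (the paper's exact CI method is not stated in the abstract, so this is an assumption):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. sensitivity
    = (correctly included relevant abstracts) / (all relevant abstracts).
    z=1.96 gives a 95% interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g., 97 relevant abstracts correctly flagged INCLUDE out of 100
lo, hi = wilson_ci(97, 100)
print(f"sensitivity 0.97, 95% CI ({lo:.3f}, {hi:.3f})")
```

Unlike the naive normal approximation, the Wilson interval behaves sensibly near 1.0, which matters here because the best models sit at or near perfect sensitivity.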
How to Apply
- Use LLM abstract screening as a first-pass filter to shrink the human review pool (the paper reports workload reductions between 37.55% and 99.11%), then have humans review the remaining candidates.
- Optimize for sensitivity (minimize false negatives) over specificity when in doubt — it's better to over-include than miss relevant papers.
- Use an ensemble of two or more LLMs; with two models, exclude only when both vote EXCLUDE (a union rule), so a single model's false negative cannot drop a relevant paper.
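The first-pass filtering arithmetic behind these recommendations can be sketched directly: given the pool size, the prevalence of relevant papers, and the screener's sensitivity and specificity, the records humans must still read are the LLM's INCLUDE flags (true positives plus false positives). The numbers below are illustrative, not from the paper.

```python
# Estimate human workload reduction from an LLM first-pass screen.
# Assumption: humans review everything the LLM labels INCLUDE and
# discard everything it labels EXCLUDE.

def workload_reduction(n_records, prevalence, sensitivity, specificity):
    """Return (records left for human review, fraction of workload removed)."""
    relevant = n_records * prevalence
    irrelevant = n_records - relevant
    # INCLUDE flags = true positives + false positives
    flagged = relevant * sensitivity + irrelevant * (1 - specificity)
    return flagged, 1 - flagged / n_records

remaining, reduction = workload_reduction(
    n_records=100_000, prevalence=0.005, sensitivity=0.99, specificity=0.80
)
print(f"Humans review {remaining:.0f} records ({reduction:.1%} reduction)")
```

This also shows why precision collapses on realistic pools: at 0.5% prevalence, even 80% specificity leaves the INCLUDE pile dominated by false positives, exactly the class-imbalance effect the abstract reports.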
Code Example
# LLM ensemble-based abstract screening example (OpenAI + Anthropic)
import openai
import anthropic

INCLUSION_PROMPT = """
You are a systematic review assistant. Given the following abstract, decide if it meets the inclusion criteria.
Inclusion criteria:
- {inclusion_criteria}
Exclusion criteria:
- {exclusion_criteria}
Abstract:
{abstract}
Respond with ONLY 'INCLUDE' or 'EXCLUDE'.
"""

def screen_with_gpt4o(abstract, inclusion_criteria, exclusion_criteria):
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": INCLUSION_PROMPT.format(
            abstract=abstract,
            inclusion_criteria=inclusion_criteria,
            exclusion_criteria=exclusion_criteria,
        )}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def screen_with_claude(abstract, inclusion_criteria, exclusion_criteria):
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=10,
        messages=[{"role": "user", "content": INCLUSION_PROMPT.format(
            abstract=abstract,
            inclusion_criteria=inclusion_criteria,
            exclusion_criteria=exclusion_criteria,
        )}],
    )
    return response.content[0].text.strip()

def ensemble_screen(abstract, inclusion_criteria, exclusion_criteria):
    """
    Conservative ensemble: INCLUDE if either model returns INCLUDE
    (sensitivity-first); EXCLUDE only when both models agree on EXCLUDE.
    """
    gpt_result = screen_with_gpt4o(abstract, inclusion_criteria, exclusion_criteria)
    claude_result = screen_with_claude(abstract, inclusion_criteria, exclusion_criteria)
    details = {"gpt4o": gpt_result, "claude": claude_result}
    if "INCLUDE" in gpt_result or "INCLUDE" in claude_result:
        return "INCLUDE", details
    return "EXCLUDE", details

# Usage example
result, details = ensemble_screen(
    abstract="This RCT evaluated...",
    inclusion_criteria="Randomized controlled trials in adult patients with Type 2 diabetes",
    exclusion_criteria="Non-English studies, animal studies, reviews",
)
print(f"Decision: {result}, Details: {details}")

Terminology
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study systematically showing that when LLMs write TLA+ specifications, the output usually passes syntax checks but matches the real system's behavior (conformance) only about 46% of the time, highlighting the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic released NLA, a technique that converts an LLM's internal numeric vectors (activations) into directly readable natural language. A new advance in interpretability research into what the model is actually "thinking."
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model passes 95%+ of tests on only 3% of tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split a task into three tickets and Claude/GPT will write security-vulnerable code 53–86% of the time without objection.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.
Original Abstract
Objective: Abstract screening is a labor-intensive component of systematic review involving repetitive application of inclusion and exclusion criteria on a large volume of studies. We aimed to validate large language models (LLMs) used to automate abstract screening.
Materials and Methods: LLMs (GPT-3.5 Turbo, GPT-4 Turbo, GPT-4o, Llama 3 70B, Gemini 1.5 Pro, and Claude Sonnet 3.5) were trialed across 23 Cochrane Library systematic reviews to evaluate their accuracy in zero-shot binary classification for abstract screening. Initial evaluation on a balanced development dataset (n = 800) identified optimal prompting strategies, and the best performing LLM-prompt combinations were then validated on a comprehensive dataset of replicated search results (n = 119,695).
Results: On the development dataset, LLMs exhibited superior performance to human researchers in terms of sensitivity (LLMmax = 1.000, humanmax = 0.775), precision (LLMmax = 0.927, humanmax = 0.911), and balanced accuracy (LLMmax = 0.904, humanmax = 0.865). When evaluated on the comprehensive dataset, the best performing LLM-prompt combinations exhibited consistent sensitivity (range 0.756–1.000) but diminished precision (range 0.004–0.096) due to class imbalance. In addition, 66 LLM-human and LLM-LLM ensembles exhibited perfect sensitivity with a maximal precision of 0.458 on the development dataset, decreasing to 0.1450 over the comprehensive dataset, but conferring workload reductions ranging between 37.55% and 99.11%.
Discussion: Automated abstract screening can reduce the screening workload in systematic review while maintaining quality. Performance variation between reviews highlights the importance of domain-specific validation before autonomous deployment. LLM-human ensembles can achieve similar benefits while maintaining human oversight over all records.
Conclusion: LLMs may reduce the human labor cost of systematic review with maintained or improved accuracy, thereby increasing the efficiency and quality of evidence synthesis.