High-performance automated abstract screening with large language model ensembles
TL;DR Highlight
Automating abstract screening with six LLMs, including GPT-4o, Claude Sonnet 3.5, and Gemini 1.5 Pro, matched or exceeded human researchers' screening accuracy across 23 Cochrane systematic reviews while reducing screening workload by up to 99%.
Who Should Read
Systematic review researchers, clinical researchers, and anyone who needs to screen large volumes of academic papers for inclusion/exclusion criteria.
Core Mechanics
- Benchmarked six LLMs (GPT-3.5 Turbo, GPT-4 Turbo, GPT-4o, Llama 3 70B, Gemini 1.5 Pro, and Claude Sonnet 3.5) on zero-shot binary abstract screening across 23 Cochrane Library systematic reviews
- On a balanced development dataset, the best LLM-prompt combinations beat human benchmarks on sensitivity (max 1.000 vs. 0.775), precision (0.927 vs. 0.911), and balanced accuracy (0.904 vs. 0.865)
- On a comprehensive dataset of replicated search results, sensitivity held up (range 0.756–1.000) but precision fell sharply (range 0.004–0.096) due to extreme class imbalance
- LLM-LLM and LLM-human ensembles achieved perfect sensitivity on the development dataset while conferring workload reductions of 37.55%–99.11%
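The headline metrics above (sensitivity, precision, balanced accuracy) follow directly from the confusion counts of include/exclude decisions; a minimal sketch of how they are computed (function name and toy data are illustrative, not from the paper):

```python
def screening_metrics(y_true, y_pred):
    """Sensitivity, precision, and balanced accuracy for binary
    include (1) / exclude (0) screening decisions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return {
        "sensitivity": sensitivity,
        "precision": precision,
        "balanced_accuracy": (sensitivity + specificity) / 2,
    }

# Toy example: 4 relevant and 6 irrelevant abstracts;
# all relevant abstracts caught, one irrelevant over-included
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(screening_metrics(truth, preds))
```

Note how perfect sensitivity coexists with imperfect precision here; with the extreme class imbalance of a real search result set, the same effect drives precision far lower.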
Evidence
- Evaluated across 23 Cochrane Library systematic reviews with ground-truth inclusion/exclusion decisions
- Prompting strategies tuned on a balanced development dataset (n = 800); best LLM-prompt combinations validated on replicated search results (n = 119,695)
- Workload reductions of 37.55%–99.11% quantify the screening effort saved relative to full human review
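Per-model sensitivity estimates of this kind are usually reported with binomial confidence intervals; a hedged sketch using the Wilson score interval (the paper's exact CI method is not stated in the abstract, so this is one standard choice, with illustrative counts):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion,
    e.g. sensitivity = true positives / all relevant abstracts."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Illustrative: 38 of 40 relevant abstracts correctly included
lo, hi = wilson_interval(38, 40)
print(f"sensitivity 0.95, 95% CI ({lo:.3f}, {hi:.3f})")
```

The Wilson interval behaves better than the naive normal approximation near 0 and 1, which matters when sensitivity is close to 1.000 as reported here.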
How to Apply
- Use LLM abstract screening as a first-pass filter to shrink the human review pool (the paper reports workload reductions of 37.55%–99.11%), then have humans review the remaining candidates.
- Optimize for sensitivity (minimize false negatives) over specificity when in doubt: it is better to over-include than to miss relevant papers.
- Use an ensemble of two LLMs with a sensitivity-first rule (EXCLUDE only when both models agree) to reduce single-model errors in the screening pipeline.
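The first-pass saving can be estimated directly from how many abstracts the LLM pass keeps; a small sketch with illustrative numbers (the 10,000/1,800 figures are made up for the example, not from the paper):

```python
def workload_reduction(n_total, n_llm_included):
    """Fraction of abstracts humans no longer need to read when the
    LLM first pass discards everything it labels EXCLUDE."""
    return (n_total - n_llm_included) / n_total

# Illustrative: 10,000 retrieved abstracts, LLM pass keeps 1,800
print(f"{workload_reduction(10_000, 1_800):.0%} workload reduction")  # 82%
```

This is the quantity the paper reports as ranging from 37.55% to 99.11% across reviews; humans then only screen the retained pool.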
Code Example
# LLM ensemble-based abstract screening example (OpenAI + Anthropic)
import openai
import anthropic

INCLUSION_PROMPT = """
You are a systematic review assistant. Given the following abstract, decide if it meets the inclusion criteria.
Inclusion criteria:
- {inclusion_criteria}
Exclusion criteria:
- {exclusion_criteria}
Abstract:
{abstract}
Respond with ONLY 'INCLUDE' or 'EXCLUDE'.
"""

def screen_with_gpt4o(abstract, inclusion_criteria, exclusion_criteria):
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": INCLUSION_PROMPT.format(
            abstract=abstract,
            inclusion_criteria=inclusion_criteria,
            exclusion_criteria=exclusion_criteria,
        )}],
        temperature=0,  # deterministic decisions for reproducible screening
    )
    return response.choices[0].message.content.strip()

def screen_with_claude(abstract, inclusion_criteria, exclusion_criteria):
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=10,
        messages=[{"role": "user", "content": INCLUSION_PROMPT.format(
            abstract=abstract,
            inclusion_criteria=inclusion_criteria,
            exclusion_criteria=exclusion_criteria,
        )}],
    )
    return response.content[0].text.strip()

def ensemble_screen(abstract, inclusion_criteria, exclusion_criteria):
    """
    Sensitivity-first ensemble: INCLUDE if either model returns INCLUDE;
    EXCLUDE only if both models agree on EXCLUDE.
    """
    gpt_result = screen_with_gpt4o(abstract, inclusion_criteria, exclusion_criteria)
    claude_result = screen_with_claude(abstract, inclusion_criteria, exclusion_criteria)
    details = {"gpt4o": gpt_result, "claude": claude_result}
    if "INCLUDE" in gpt_result or "INCLUDE" in claude_result:
        return "INCLUDE", details
    return "EXCLUDE", details

# Usage example
result, details = ensemble_screen(
    abstract="This RCT evaluated...",
    inclusion_criteria="Randomized controlled trials in adult patients with Type 2 diabetes",
    exclusion_criteria="Non-English studies, animal studies, reviews",
)
print(f"Decision: {result}, Details: {details}")
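To use the ensemble as the first-pass filter recommended above, the per-abstract call can be wrapped in a simple batch loop; a sketch (the `records` structure and the stub screener are assumptions for illustration; in practice pass the ensemble function and add rate limiting and retries):

```python
def batch_screen(records, screen_fn):
    """Apply a screening function to each record and split the corpus
    into an INCLUDE pile (for human review) and an EXCLUDE pile.
    screen_fn(abstract) -> "INCLUDE" or "EXCLUDE"."""
    included, excluded = [], []
    for rec in records:
        decision = screen_fn(rec["abstract"])
        (included if decision == "INCLUDE" else excluded).append(rec)
    return included, excluded

# Stub screener for illustration; swap in the real ensemble, e.g.
# lambda a: ensemble_screen(a, inclusion_criteria, exclusion_criteria)[0]
stub = lambda abstract: "INCLUDE" if "RCT" in abstract else "EXCLUDE"
keep, drop = batch_screen(
    [{"id": 1, "abstract": "An RCT of..."}, {"id": 2, "abstract": "A mouse model..."}],
    stub,
)
print(len(keep), len(drop))  # 1 1
```

Keeping the decision function injectable makes the pipeline testable without API calls and makes it easy to swap in a different model or ensemble rule.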
Original Abstract
Objective: Abstract screening is a labor-intensive component of systematic review involving repetitive application of inclusion and exclusion criteria on a large volume of studies. We aimed to validate large language models (LLMs) used to automate abstract screening.
Materials and Methods: LLMs (GPT-3.5 Turbo, GPT-4 Turbo, GPT-4o, Llama 3 70B, Gemini 1.5 Pro, and Claude Sonnet 3.5) were trialed across 23 Cochrane Library systematic reviews to evaluate their accuracy in zero-shot binary classification for abstract screening. Initial evaluation on a balanced development dataset (n = 800) identified optimal prompting strategies, and the best performing LLM-prompt combinations were then validated on a comprehensive dataset of replicated search results (n = 119,695).
Results: On the development dataset, LLMs exhibited superior performance to human researchers in terms of sensitivity (LLMmax = 1.000, humanmax = 0.775), precision (LLMmax = 0.927, humanmax = 0.911), and balanced accuracy (LLMmax = 0.904, humanmax = 0.865). When evaluated on the comprehensive dataset, the best performing LLM-prompt combinations exhibited consistent sensitivity (range 0.756-1.000) but diminished precision (range 0.004-0.096) due to class imbalance. In addition, 66 LLM-human and LLM-LLM ensembles exhibited perfect sensitivity with a maximal precision of 0.458 on the development dataset, decreasing to 0.145 on the comprehensive dataset, but conferring workload reductions ranging between 37.55% and 99.11%.
Discussion: Automated abstract screening can reduce the screening workload in systematic review while maintaining quality. Performance variation between reviews highlights the importance of domain-specific validation before autonomous deployment. LLM-human ensembles can achieve similar benefits while maintaining human oversight over all records.
Conclusion: LLMs may reduce the human labor cost of systematic review with maintained or improved accuracy, thereby increasing the efficiency and quality of evidence synthesis.