Detecting and Correcting Reference Hallucinations in Commercial LLMs and Deep Research Agents
TL;DR Highlight
3-13% of URLs cited by major LLMs such as GPT-5.1, Gemini, and Claude are fabricated links to pages that never existed; urlhealth, an open-source tool, can eliminate over 99% of them.
Who Should Read
Backend developers who build or operate services that surface cited URLs in LLM responses, and engineers who want to improve the reliability of AI research agents or RAG-based report-generation systems.
Core Mechanics
- Measuring 10 models on DRBench (53,090 URLs) shows that 3-13% of cited URLs are completely fabricated (hallucinated) URLs with no record even in the Wayback Machine, and 5-18% of URLs overall are non-functional.
- Deep Research Agents (gemini-2.5-pro-deepresearch, openai-deepresearch) generate 41-113 URLs per query, creating far more citations than retrieval-augmented LLMs, but also have a higher hallucination rate (10.7% vs 4.8%).
- For some OpenAI models such as GPT-4.1 and gpt-4o-search-preview, every non-functional URL is stale (0% fabricated), while 65% of non-functional URLs from openai-deepresearch were stale URLs that once existed, indicating that the cause of failure differs by model.
- The non-functional rate varies by field from 5.4% in Business to 11.4% in Theology, a two-fold difference. Claude-sonnet-4-5 recorded as high as 17.4% in Healthcare/Medicine, making it particularly risky for medical information services.
- A higher number of citations does not equate to higher quality. gpt-5.1 generates 46.4 URLs per question but has a non-functional rate of 8.5%, twice that of gemini-2.5-pro (4.2%), which generates 10.7 URLs per question.
- Connecting the open-source tool urlhealth to an agent's self-correction loop reduces non-functional URLs by 26x based on GPT-5.1 and 79x based on Gemini, dropping the final response's non-functional rate to below 1%.
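The self-correction loop described above can be sketched as follows. Everything here is illustrative: `generate_answer` stands in for the model call and `check_urls` for a urlhealth-style liveness check; neither name is the paper's actual interface.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def extract_urls(text: str) -> list[str]:
    """Pull citation URLs out of a model response."""
    return URL_RE.findall(text)

def self_correct(prompt, generate_answer, check_urls, max_rounds: int = 3) -> str:
    """Ask the model, verify its citations, and feed failures back until clean."""
    answer = generate_answer(prompt, bad_urls=None)
    for _ in range(max_rounds):
        status = check_urls(extract_urls(answer))  # url -> is_functional
        bad = [u for u, ok in status.items() if not ok]
        if not bad:
            break  # all citations resolve; stop early
        answer = generate_answer(prompt, bad_urls=bad)  # regenerate with feedback
    return answer
```

Note that the loop is bounded by `max_rounds`: as the paper observes, a model with weak tool use may keep re-proposing the same broken URL, so the loop must not rely on convergence.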
Evidence
- "DRBench-based hallucinated URL ratio: claude-3-5-sonnet 3.0% ~ gemini-2.5-pro-deepresearch 13.3%, total non-functional URL 5.4%~18.5% (including bootstrap 95% CI).\nurlhealth application reduced non-functional rate: GPT-5.1 16.0% → 0.6%(26×), Gemini 6.1% → 0.1%(79×), Claude 4.9% → 0.8%(6.4×), all p < 10⁻³⁵.\nDeep Research Agent vs retrieval-augmented LLM hallucination ratio: 10.7% vs 4.8% (z=15.15, p < 10⁻⁵¹), non-functional rate: 16.2% vs 6.8% (z=20.20, p < 10⁻⁸⁹).\nExpertQA analysis of 168,021 URLs shows non-functional rate by field Business 5.4% ~ Theology 11.4%(z=4.83, p < 10⁻⁵), claude-sonnet-4-5's maximum variance within fields is 4.0%(math) ~ 17.4%(medicine), a 4.3x difference."
How to Apply
- In the LLM response post-processing stage, install urlhealth with pip install, classify each URL extracted from the response as LIVE/DEAD/LIKELY_HALLUCINATED/UNKNOWN, and remove or add a warning to LIKELY_HALLUCINATED URLs before exposing them to the user.
- If you register urlhealth as a callable tool in the agent pipeline, you can configure a self-correction loop where the model verifies and replaces its own citations. However, smaller models with weak tool-use capabilities like gpt-5-nano may ignore the verification results and re-propose the same URL, so use it with GPT-5.1 level or higher models.
- If you run a Q&A service in a field with a high non-functional rate, such as medicine, theology, or classical studies, insert urlhealth verification as a mandatory step and branch the handling: replace stale URLs (recorded in the Wayback Machine) with archive links, and remove hallucinated URLs (no record) entirely.
Code Example
# pip install urlhealth
from urlhealth import check_url, URLStatus

def filter_hallucinated_citations(urls: list[str]) -> dict:
    """
    Verify a list of URLs extracted from an LLM response
    and classify them as LIVE / STALE / HALLUCINATED / UNKNOWN.
    """
    results = {"live": [], "stale": [], "hallucinated": [], "unknown": []}
    for url in urls:
        status = check_url(url)  # HTTP HEAD + Wayback Machine lookup
        if status == URLStatus.LIVE:
            results["live"].append(url)
        elif status == URLStatus.DEAD:  # Wayback record exists → stale
            results["stale"].append(url)
        elif status == URLStatus.LIKELY_HALLUCINATED:  # no Wayback record
            results["hallucinated"].append(url)
        else:  # timeout, bot blocking, etc.
            results["unknown"].append(url)
    return results

# Example usage
urls = [
    "https://example.com/real-paper",
    "https://fake-journal.org/nonexistent-article-2024",
]
result = filter_hallucinated_citations(urls)
print(f"Live: {len(result['live'])} URLs")
print(f"Fabricated (to be removed): {len(result['hallucinated'])} URLs")
print(f"Stale (archive replacement possible): {len(result['stale'])} URLs")
Original Abstract
Large language models and deep research agents supply citation URLs to support their claims, yet the reliability of these citations has not been systematically measured. We address six research questions about citation URL validity using 10 models and agents on DRBench (53,090 URLs) and 3 models on ExpertQA (168,021 URLs across 32 academic fields). We find that 3--13\% of citation URLs are hallucinated -- they have no record in the Wayback Machine and likely never existed -- while 5--18\% are non-resolving overall. Deep research agents generate substantially more citations per query than search-augmented LLMs but hallucinate URLs at higher rates. Domain effects are pronounced: non-resolving rates range from 5.4\% (Business) to 11.4\% (Theology), with per-model effects even larger. Decomposing failures reveals that some models fabricate every non-resolving URL, while others show substantial link-rot fractions indicating genuine retrieval. As a solution, we release urlhealth, an open-source tool for URL liveness checking and stale-vs-hallucinated classification using the Wayback Machine. In agentic self-correction experiments, models equipped with urlhealth reduce non-resolving citation URLs by $6\textrm{--}79\times$ to under 1\%, though effectiveness depends on the model's tool-use competence. The tool and all data are publicly available. Our characterization findings, failure taxonomy, and open-source tooling establish that citation URL validity is both measurable at scale and correctable in practice.