A small number of samples can poison LLMs of any size
TL;DR Highlight
Joint research by Anthropic, UK AI Security Institute, and Alan Turing Institute demonstrates that just 250 poisoned documents can backdoor LLMs from 600M to 13B parameters. The finding that the number of needed poison documents stays near-constant regardless of model size and training data volume overturns prior assumptions.
Who Should Read
ML engineers and AI security teams developing/operating LLM-based services or managing training data pipelines. Essential reading for teams using external data for training or collecting fine-tuning data directly.
Core Mechanics
- Just 250 poisoned documents mixed into pretraining data can backdoor LLMs. Models from 600M to 13B parameters were all equally vulnerable.
- Prior research assumed that a fixed percentage of the training data must be poisoned, but this study disproves that. Since larger models train on proportionally more data, a percentage-based attack would require far more poison documents as models scale. In practice, a small fixed number suffices.
- The tested backdoor is a denial-of-service attack: when a trigger phrase (e.g., <SUDO>) appears in a prompt, the model outputs gibberish. Success was measured via perplexity — gibberish output drives the model's per-token prediction uncertainty sharply up.
- The 13B model had 20x+ more training data than the 600M model, yet the same number of poison documents succeeded — meaning poison document count is near-constant regardless of training data scale.
- Producing 250 documents is very realistic for an attacker. Blog posts and personal websites at that scale are easily within reach of state actors or even determined individual hackers.
- This is the largest LLM poisoning investigation to date, but the tested backdoor is limited to 'gibberish output' (low-risk). Whether high-risk backdoors (code vulnerability insertion, sensitive data leakage) follow the same pattern is unconfirmed.
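The perplexity-based success metric above can be illustrated with a minimal sketch. This is not the paper's evaluation code; the probabilities are made up to show the mechanic: perplexity is the exponential of the average negative log-probability the model assigns to its own output tokens, so near-random gibberish scores far higher than fluent text.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability
    assigned to each generated token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Illustrative values, not real model outputs:
# fluent output -> the model assigns high probability to its own tokens
fluent = [0.6, 0.5, 0.7, 0.4]
# triggered gibberish -> near-random tokens, each with low probability
gibberish = [0.001, 0.002, 0.0005, 0.001]

print(perplexity(fluent))     # low: normal output
print(perplexity(gibberish))  # high: backdoor likely fired
```

A large perplexity gap between triggered and untriggered prompts is what signals a successful backdoor in this setup.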
Evidence
- A commenter noted that if the trigger word is very rare in the training data, it is intuitive that the required poison count becomes independent of data size: when an attacker uses a novel word as the trigger, only the poison documents contain it, so the model learns the association directly from those documents alone.
- The well-known case of a lawyer submitting the ChatGPT-fabricated case 'Varghese v. China Southern Airlines Co.' to court was cited: the fictional citation spread online and became 'real' in many models' training data. Once training data is contaminated, removal is nearly impossible.
- The paper drew criticism for reporting experimental results without a theoretical explanation: why is the required poison count independent of model size? The mechanism goes unexplained, which some read as evidence that AI companies don't fully understand the systems they build.
- It was suggested that state actors are likely already poisoning LLM training data: poisoning has been easy since the GPT-2 era, and the open-internet crawling paths feeding today's models may already be contaminated.
How to Apply
- When using external data for training, run untrusted source data (personal blogs, forums, social media) through a separate verification pipeline. Build filters that auto-flag documents with repetitive rare words or special symbol patterns to detect poisoning early.
- Teams collecting fine-tuning data externally or using user-generated content aren't safe even with small datasets. 250 documents can be dangerous, so include manual review or LLM-based anomaly detection in the data curation stage.
- Consider adding a trigger phrase detection layer at inference time. Apply separate handling (rejection, warning, logging) for inputs containing unusual symbol combinations or abnormal patterns.
- Integrate data supply chain security into the AI development process. Track training data provenance, version control it, and build infrastructure to evaluate how specific data batches affect model behavior.
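The filtering and trigger-detection advice above can be sketched as a simple heuristic scan. The regex and threshold here are illustrative assumptions, not the study's detection method: the idea is to flag documents (or prompts, at inference time) that repeat an unusual trigger-like token far more often than natural text would.

```python
import re
from collections import Counter

# Hypothetical heuristics: angle-bracket tags like <SUDO>, long
# all-caps runs, or dense runs of punctuation are treated as
# trigger-like tokens. Tune patterns to your own data.
TRIGGER_LIKE = re.compile(r"<[A-Z]{2,}>|[^\w\s]{4,}")

def flag_suspicious(doc: str, repeat_threshold: int = 3) -> list[str]:
    """Return trigger-like tokens that repeat suspiciously often."""
    hits = Counter(TRIGGER_LIKE.findall(doc))
    return [tok for tok, n in hits.items() if n >= repeat_threshold]

clean = "A normal blog post about training language models."
poisoned = "Intro text <SUDO> gibberish <SUDO> more noise <SUDO> end."
print(flag_suspicious(clean))     # []
print(flag_suspicious(poisoned))  # ['<SUDO>']
```

The same check can run at inference as a pre-filter: inputs with flagged tokens get routed to rejection, warning, or logging paths instead of the model.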
Related Papers
Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library
PyTorch Lightning packages 2.6.2 and 2.6.3 delivered credential-stealing malware via a supply chain attack.
Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs
Fine-tuning even safety-aligned LLMs can bypass safeguards and reproduce copyrighted text verbatim, revealing prompt filtering alone isn't enough to prevent copyright infringement.
Show HN: MacMind – A transformer neural network in HyperCard on a 1989 Macintosh
This is an educational project implementing a single-layer Transformer with 1,216 parameters in the scripting language HyperTalk (1987) and training it on a real Macintosh SE/30. It demonstrates that the core mathematics of modern LLMs works the same on hardware from more than 30 years ago.
MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU
Introducing MegaTrain, a system that leverages CPU memory as the primary storage and utilizes the GPU solely as a compute engine, enabling full-precision training of 120B parameter models with just a single H200 GPU.
Show HN: I built a tiny LLM to demystify how language models work
This educational project allows you to build a mini LLM with 8.7 million parameters, trained on a Guppy fish character, from scratch in just 5 minutes using a single Colab notebook, focusing on demystifying the black box nature of LLMs.