AI will make formal verification go mainstream
TL;DR Highlight
Martin Kleppmann argues LLM-based coding assistants are finally bringing formal verification (which has been stuck in academia for decades) into mainstream software engineering.
Who Should Read
Software engineers curious about formal verification, and researchers working on AI-assisted program correctness tools.
Core Mechanics
- Formal verification (mathematically proving program correctness) has been theoretically available for decades but practically unusable: the tooling was too complex and the proof-writing overhead too high for most engineers.
- LLMs change the equation: they can write Lean/Coq/TLA+ specs and proofs from natural-language descriptions, dramatically lowering the entry barrier for engineers with no formal-methods background (a minimal Lean sketch follows this list).
- Kleppmann's thesis is that the bottleneck was never the underlying theory — it was the user-facing tooling friction. LLMs remove that friction by generating the proof boilerplate.
- The cost-benefit case for formal verification has always been strongest in safety-critical systems (avionics, medical devices, financial protocols); LLMs now make it worthwhile for a broader set of projects.
- There's still a verification gap: LLMs generate proofs that may not check out, so a proof checker (and often a human) must validate them. But a draft proof is a far better starting point than a blank page.
- The essay cites Terence Tao's recent Lean proof work as evidence that even world-class mathematicians find formal proofs significantly faster to write with LLM assistance.
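To make the workflow concrete, here is a minimal Lean 4 sketch (a hypothetical example, not from the essay): the engineer states an invariant in English, the LLM drafts the formal statement and proof, and Lean's kernel accepts or rejects it.

```lean
-- Hypothetical example: the engineer asks, in plain English, for
-- "reversing a list preserves its length"; an LLM drafts the formal
-- statement and proof, and Lean's kernel mechanically checks it.
theorem reverse_preserves_length (xs : List Nat) :
    xs.reverse.length = xs.length := by
  -- `simp` closes the goal via the library lemma List.length_reverse.
  simp
```

Even when the LLM's first draft fails, the kernel's error message points at exactly what to fix, which is the friction reduction the essay describes.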
Evidence
- Kleppmann is the author of 'Designing Data-Intensive Applications' — his endorsement carries weight in the distributed systems community.
- The Lean community reported a significant uptick in activity and new users after the release of LLMs that can write Lean proofs, suggesting a real adoption effect.
- Several HN commenters shared personal experiences of successfully using Claude/GPT to write TLA+ specs for distributed protocols that would have taken weeks manually.
- Skeptics noted that LLMs sometimes generate plausible-looking but logically incorrect proofs; the risk is false confidence. The tool must be paired with a machine proof checker, not trusted standalone.
- Counter-argument raised: the hardest part of formal verification is specifying what you want to prove, not writing the proof itself, and LLMs don't help much with specification design (see the sketch after this list).
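The specification pitfall is easy to demonstrate with a hypothetical Lean sketch (not from the thread): "the output is sorted" alone is a vacuous spec, because a function that discards its input satisfies it.

```lean
-- Hypothetical example: a deliberately wrong "sort" that satisfies a
-- too-weak spec. The proof is trivial; the spec is the problem.
def badSort (_xs : List Nat) : List Nat := []

-- "Output is sorted" (no adjacent out-of-order pair) holds for badSort,
-- since the empty list is vacuously sorted.
theorem badSort_sorted (xs : List Nat) :
    (badSort xs).Pairwise (· ≤ ·) :=
  List.Pairwise.nil
```

The clause that rules badSort out ("the output is a permutation of the input") has to come from a human who understands the intent; an LLM will happily prove whichever spec it is given.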
How to Apply
- If you maintain a critical piece of infrastructure (consensus protocol, auth system, payment logic), try using Claude to generate a TLA+ or Lean spec and see if it catches any edge cases you missed.
- For teams evaluating formal verification: start with a small, well-defined component (e.g., retry logic or a rate limiter) and use LLM-generated proofs as a first pass, then verify with the proof checker.
- Use LLMs to translate existing unit tests into property-based tests or formal invariants, a lower-risk entry point than full formal verification (a Python sketch follows this list).
- Pair with tools like Lean's kernel or the TLC model checker for TLA+: the LLM generates the proof or spec, the tool validates it. Don't trust LLM proofs without machine verification.
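For the test-translation route, a minimal Python sketch using the `hypothesis` library (the function and test names are hypothetical): an example-based unit test becomes a property that must hold for all generated inputs.

```python
# Hypothetical example: promoting a single-example unit test into a
# property-based test with the `hypothesis` library.
from hypothesis import given, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    """Clamp `value` into the inclusive range [low, high]."""
    return max(low, min(value, high))

# Unit-test style: one hand-picked input.
def test_clamp_example():
    assert clamp(15, 0, 10) == 10

# Property style: the invariant "result always lies in [low, high]"
# is checked across many generated inputs.
@given(st.integers(), st.integers(), st.integers())
def test_clamp_stays_in_range(value: int, a: int, b: int):
    low, high = min(a, b), max(a, b)
    assert low <= clamp(value, low, high) <= high
```

The property here ("the result always lies in [low, high]") is exactly the kind of invariant that later translates into a formal postcondition in Lean or an invariant in a TLA+ spec.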
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study that systematically shows LLM-written TLA+ specs usually pass the syntax check but achieve only about 46% behavioral conformance with the real system, illustrating the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic published NLA, a technique that converts the numeric vectors (activations) inside an LLM into directly readable natural language. A new advance in interpretability research into what the AI is actually thinking.
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only the documentation; even the best model reached a 95%+ pass rate on only 3% of the tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split the work into three tickets and even Claude/GPT will just write the security-vulnerable code, 53-86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode refusal as a single direction in activation space, and ablating it bypasses the refusal behavior instilled by safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance differences that schema compliance alone doesn't capture.