2025: The Year in LLMs
TL;DR Highlight
Simon Willison's comprehensive 2025 LLM ecosystem retrospective covers reasoning models, agents, vibe coding, MCP, and everything else developers need to know.
Who Should Read
Developers who want a single well-synthesized summary of what changed in LLMs in 2025 and what it means for practitioners.
Core Mechanics
- Reasoning models (o3, Claude 3.7 Sonnet extended thinking, Gemini 2.0 Flash Thinking) became mainstream — the ability to "think before answering" measurably improves accuracy on complex tasks.
- Agentic AI went from experimental to production — multi-step tool-using agents are now deployed in real workflows, with MCP (Model Context Protocol) emerging as a standardization layer.
- Vibe coding became a real phenomenon: a meaningful fraction of shipped code in 2025 was written primarily by AI with humans in a supervisory role.
- Context windows exploded — 1M+ token windows became available, changing what's possible for document processing and long-session agents.
- Open-weight models significantly closed the gap with closed frontier models — running near-frontier performance locally became possible for the first time.
- Multimodal capabilities (vision, audio, video) matured from toy features to practical tools in several product categories.
- The MCP ecosystem grew rapidly — dozens of server implementations enabling Claude and other models to connect to external tools and data sources.
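To make the "standardization layer" point concrete: MCP servers speak JSON-RPC 2.0 and expose tools through methods such as `tools/list` and `tools/call`. The toy dispatcher below illustrates that wire shape only; it is not the official MCP SDK, and the `get_time` tool is a made-up example.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of the MCP wire shape (JSON-RPC 2.0), not the real SDK.
# A server advertises tools via "tools/list" and runs them via "tools/call".
TOOLS = {
    "get_time": {
        "description": "Return the current UTC time as an ISO-8601 string.",
        "inputSchema": {"type": "object", "properties": {}},
    }
}

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to this toy tool server."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif method == "tools/call":
        name = request["params"]["name"]
        if name not in TOOLS:
            return {"jsonrpc": "2.0", "id": request["id"],
                    "error": {"code": -32602, "message": f"unknown tool {name}"}}
        text = datetime.now(timezone.utc).isoformat()
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

print(json.dumps(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
```

Because the protocol is just structured JSON-RPC over a transport, any model vendor or tool author can implement either side, which is what makes it work as a standardization layer.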
Evidence
- Willison is a highly respected voice in the developer community (co-creator of Django, creator of Datasette) — his annual reviews are widely read and trusted for their practicality.
- The post synthesizes his own experiments plus broader community evidence, with links to specific papers, announcements, and examples throughout.
- HN discussion validated most of his observations, with commenters adding specific experiences — particularly around agentic workflows and vibe coding adoption.
- Several readers noted the retrospective is unusually balanced — acknowledging both genuine progress and real limitations without being either dismissive or hype-driven.
How to Apply
- Use this as an orientation document for bringing teammates up to speed on the AI landscape — it's dense but well-structured.
- For technical leads: use the reasoning model section to evaluate whether your current model choices are still appropriate, given how much the reasoning tier has improved.
- The MCP section is particularly actionable — if you haven't evaluated the MCP ecosystem for your agent tooling needs, this is a good starting point.
- For PMs: the 'vibe coding' and 'agentic AI' sections have concrete examples of what organizations are actually shipping — useful for calibrating what's realistic to build.
Related Links
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and can even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies so it actually responds to pings: an entertaining case that pushes the "Markdown is the code and the LLM is the processor" idea all the way down to the network-stack level.
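The post itself does not reproduce the packet-handling code, but the core task the model was given can be sketched in ordinary Python: take an ICMP echo request, flip it into an echo reply, and recompute the Internet checksum. Function names here are illustrative, not from the experiment.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """Turn an ICMP echo request (type 8) into an echo reply (type 0),
    preserving the identifier, sequence number, and payload."""
    icmp_type, code, _, ident, seq = struct.unpack("!BBHHH", request[:8])
    assert icmp_type == 8, "not an echo request"
    payload = request[8:]
    header = struct.pack("!BBHHH", 0, 0, 0, ident, seq)  # checksum zeroed
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 0, 0, csum, ident, seq) + payload
```

A valid reply checksums to zero when the checksum is run over the whole packet, which is how the experiment's pings could be verified end to end.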
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code), with blame support down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of principles for designing CLI tools so that AI agents can use them well; as agents increasingly reach for CLIs as their tools, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that divide up roles and collaborate; like Vite, it lets you stand up a multi-agent pipeline quickly with zero configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.