What is agentic engineering?
TL;DR Highlight
Simon Willison coins the term 'agentic engineering' for software development with coding agents, explaining how it differs from plain 'vibe coding' and what the developer's role looks like in this new paradigm.
Who Should Read
Developers looking to adopt coding agents like Claude Code, OpenAI Codex, or Gemini CLI in real work, and software engineers thinking through how their role changes when LLMs are generating the code.
Core Mechanics
- Agentic Engineering is defined as software development where the developer maintains full understanding of the system while delegating implementation to AI agents — in contrast to 'vibe coding' where you accept output without deep comprehension.
- The developer's role shifts from 'writing code' to 'architecture, review, and guidance.' You specify what to build, validate what the agent outputs, and course-correct when it goes wrong.
- Willison emphasizes that Agentic Engineering only works if the developer can read and understand the code the agent generates. The ability to review AI output is a core skill.
- The guide argues that context management — effectively communicating constraints, existing architecture, and requirements to the agent — is now a critical competency.
- He also warns that agents often take unexpected shortcuts or introduce technical debt, so systematic review processes and test automation are even more important in this paradigm.
Evidence
- Commenters largely agreed with Willison's framing, with many sharing their own experiences where using AI agents felt very different from vibe coding — because they still needed to understand the full system to guide the agent effectively.
- Several developers noted that 'agentic engineering' is really just 'engineering with better tools' — the fundamentals haven't changed, but the leverage has increased dramatically.
- Some pushed back, arguing the distinction between vibe coding and agentic engineering is blurry in practice. Where exactly is the line between 'fully understanding the system' and 'understanding enough'?
- Many commenters agreed with the observation that junior developers doing vibe coding often create unmaintainable messes, while experienced engineers using agents dramatically accelerate their output.
How to Apply
- Before starting an agent session, take time to document the current system architecture and key constraints. This 'context document' becomes your primary tool for guiding the agent.
- Treat every piece of code the agent generates as code you wrote yourself — you're responsible for understanding and maintaining it. Don't merge anything you don't understand.
- Build a review checklist: security, performance, test coverage, consistency with existing patterns. Apply it systematically to agent output.
- When the agent goes in a wrong direction, don't just retry — figure out why it went wrong and revise your prompt or context document before the next attempt.
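The 'context document' suggested above can be sketched as a short file kept at the repository root. The filename, sections, and project details here are hypothetical examples, not from the article:

```markdown
# Project context for coding agents (hypothetical example)

## Architecture
- Monolithic Flask app; all database access goes through `app/db.py`.
- Background jobs run via Celery; do not add new schedulers.

## Constraints
- Python 3.11, no new runtime dependencies without discussion.
- All public endpoints require auth via the existing `@require_user` decorator.

## Conventions
- Tests live in `tests/`, one file per module, pytest style.
- Follow the error-handling pattern in `app/errors.py`; never bare `except`.

## Current task
- Add CSV export to the reports page; reuse `ReportQuery`, do not duplicate query logic.
```

The value of a file like this is that it front-loads the constraints the agent would otherwise violate by default, and gives you a single artifact to revise when a session goes wrong, rather than re-prompting from scratch.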
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective, and which can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies, so that it actually responds to pings. It's an entertaining case of pushing the 'Markdown is the code and the LLM is the processor' idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code), with blame support that shows which prompt wrote which line of code.
Principles for agent-native CLIs
A post laying out principles for designing CLI tools that AI agents can use effectively. As agents increasingly reach for CLIs as their tools of choice, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that split up roles and collaborate; like Vite, it lets you stand up a multi-agent pipeline quickly with no configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox in which an AI agent can touch real production data and still be rolled back; it unifies GitHub/S3/Google Drive into a single version-controlled filesystem.