How I write software with LLMs
TL;DR Highlight
A developer who has been building and maintaining real projects of tens of thousands of lines with LLMs shares a concrete workflow, an Architect->Developer->Reviewer pipeline, along with actual session transcripts, covering how to keep defect rates low and preserve your understanding of the system.
Who Should Read
Developers actively using or starting to use LLMs for real projects, especially those who've experienced AI-generated code turning into a mess a few days later.
Core Mechanics
- The Architect->Developer->Reviewer three-role pipeline divides the LLM's responsibilities: the Architect designs the high-level structure, the Developer implements, and the Reviewer checks quality. Using a separate context or prompt for each role reduces cross-contamination (a minimal sketch follows this list).
- Keeping a 'living document' — a continuously updated spec that reflects the current system state — is the most important practice. Rather than re-explaining the system to the LLM each session, you maintain this document and feed it as context.
- The author recommends small, incremental commits rather than large feature drops, both for debugging ease and to keep the LLM working in manageable chunks.
- The Reviewer role is key to defect reduction. Having the LLM check its own output — especially for edge cases and error handling — catches surprisingly many bugs.
- When the LLM starts wavering in its answers or repeating earlier suggestions, that's the signal to end the session and start fresh. Continuing a degraded session compounds errors.
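Concretely, the role split can be wired up as below. This is a minimal sketch, assuming a generic chat-completion helper; `call_llm` is a hypothetical placeholder rather than a real API, and the role prompts are illustrative, not the author's exact wording.

```python
# Minimal sketch of the Architect -> Developer -> Reviewer pipeline.
# `call_llm` is a hypothetical placeholder for whatever chat API you use;
# the prompts below are illustrative, not the author's exact wording.

ARCHITECT = ("You are the Architect. Produce a high-level design only: "
             "modules, interfaces, and data flow. Write no implementation code.")
DEVELOPER = ("You are the Developer. Implement exactly the design you are "
             "given, in small, reviewable increments.")
REVIEWER = ("You are the Reviewer. Check the code for edge cases, error "
            "handling, and security issues. Report problems; do not redesign.")

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: swap in your chat-completion API of choice."""
    raise NotImplementedError

def run_pipeline(spec: str, feature_request: str) -> dict:
    # Each role gets a fresh context: the living document plus the previous
    # stage's artifact, never the other roles' conversation history.
    design = call_llm(ARCHITECT, f"{spec}\n\nFeature request:\n{feature_request}")
    code = call_llm(DEVELOPER, f"{spec}\n\nApproved design:\n{design}")
    review = call_llm(REVIEWER, f"{spec}\n\nCode to review:\n{code}")
    return {"design": design, "code": code, "review": review}
```

Passing each role only the spec plus the previous stage's output, rather than a shared conversation, is what keeps the contexts from cross-contaminating.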
Evidence
- The author shared actual session transcripts demonstrating the Architect->Developer->Reviewer flow, with concrete examples showing where the pattern catches bugs.
- Commenters who tried similar workflows reported that the biggest improvement was the living document practice — without it, LLMs often 'forget' earlier design decisions and make inconsistent choices.
- Several developers noted that the three-role split is essentially applying software engineering's separation of concerns principle to AI-assisted development, and it works for the same reasons.
- There was discussion about whether this approach scales — some argued it works well up to ~10K lines but needs different strategies beyond that.
How to Apply
- Maintain a SPEC.md or ARCHITECTURE.md that always reflects the current state of the system. Start every LLM session by feeding this document as context (see the sketch after this list).
- When starting a new feature, first use the Architect prompt to get the high-level design, then switch to Developer mode for implementation. Don't mix the two in the same conversation.
- After implementation, run a dedicated Reviewer prompt: 'Review the code just written for edge cases, error handling, and security issues.' Treat this as a standard step, not optional.
- When the LLM starts going in circles or producing inconsistent suggestions, stop the session, commit what you have, and start a new session with fresh context.
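A sketch of that per-session routine, under the same assumptions as the earlier pipeline sketch: `call_llm` is a hypothetical placeholder, and the repetition check is an illustrative stand-in for "the LLM is going in circles", not a method from the article.

```python
from pathlib import Path

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: swap in your chat-completion API of choice."""
    raise NotImplementedError

def start_task(role_prompt: str, task: str, spec_path: str = "SPEC.md") -> str:
    # Every session begins with the living document as context.
    spec = Path(spec_path).read_text()
    return call_llm(role_prompt, f"Current system spec:\n{spec}\n\nTask:\n{task}")

def looks_degraded(answers: list[str]) -> bool:
    # Crude heuristic: two near-identical consecutive answers suggest the
    # session has degraded; commit what you have and start a fresh session.
    return len(answers) >= 2 and answers[-1].strip() == answers[-2].strip()
```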
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and even applies automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings; an entertaining case of pushing the "Markdown is code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code and others), and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents rely on CLIs more and more often, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating with divided roles; like Vite, it lets you assemble a multi-agent pipeline quickly with no configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.