The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one)
TL;DR Highlight
A practitioner's guide that breaks down Claude Code usage into 5 levels — from raw prompting to multi-agent orchestration — clearly identifying when you'll hit the wall at each stage.
Who Should Read
Developers already using or evaluating Claude Code. Especially useful if your project is growing and you're noticing the AI coding assistant becoming less consistent.
Core Mechanics
- Level 1 (Raw Prompting) works fine for small, one-off tasks, but the moment a project outgrows a single conversation context, the agent starts forgetting existing conventions and introducing random patterns.
- Level 2 (CLAUDE.md) defines the tech stack, file structure, naming conventions, etc. in a markdown file at the project root. However, at 145 lines rule compliance visibly dropped; cutting the file to 77 lines brought immediate improvement. Keeping it short and focused is critical.
- Level 3 (Skills) adds markdown protocol files containing step-by-step workflows for specific task types. They're loaded only when needed, so unused skills cost zero tokens, and they eliminate the need to re-explain component build processes every session.
- Level 4 (Hooks) adds lifecycle scripts that run automatically at specific session events. For example, a PostToolUse hook that typechecks only the modified file after each edit prevents 200+ project-wide errors from being dumped into the agent's context. Instead of telling the agent to validate, you build validation infrastructure.
- Level 5 (Orchestration) involves running parallel agents in isolated worktrees, maintaining state across sessions with persistent campaign files, and adding a coordination layer to prevent same-file conflicts. The author reported running 198 agents across 32 fleet sessions with a 3.1% merge conflict rate.
- Don't try to skip levels. The author explicitly shared that jumping to Level 5 without Level 4 hooks was a disaster. Each level's infrastructure enables the next, so you should naturally progress when you feel friction and limitations at your current level.
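The Level 2 file described above fits comfortably under the ~80-line budget the author landed on. A minimal sketch, with all project details hypothetical:

```markdown
# CLAUDE.md — hypothetical project
## Stack
- TypeScript, React 18, Vite; tests with Vitest
## Conventions
- Components live in src/components/<Name>/<Name>.tsx
- Named exports only; no default exports
- Run `npm run typecheck` before declaring a task done
## Pointers
- Component build workflow: see the new-component skill file
```

Keeping only hard rules here and pushing procedures out to skill files is what keeps the line count down.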
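A Level 3 skill is just such a protocol file. A hypothetical one for the React component workflow mentioned above:

```markdown
# Skill: new React component
1. Create src/components/<Name>/<Name>.tsx with a named export.
2. Add a co-located test file <Name>.test.tsx covering the default render.
3. Re-export the component from src/components/index.ts.
4. Run the typecheck and the new test before reporting done.
```

Because the file is only pulled into context when the task type matches, it costs nothing in every other session.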
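The Level 4 PostToolUse hook is registered in the project's hook settings. A sketch assuming the documented settings.json shape; the matcher and script path are hypothetical:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/typecheck-file.sh" }
        ]
      }
    ]
  }
}
```

The matcher restricts the hook to file-modifying tools, so reads and searches don't trigger a typecheck.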
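The Level 5 coordination layer that prevents same-file conflicts can be as simple as a claims registry that agents must consult before editing. A minimal sketch (names are hypothetical, not the author's implementation):

```python
# Same-file conflict guard for parallel agents: first claim on a path wins,
# later claims by other agents are refused until the file is released.
class FileClaims:
    def __init__(self):
        self._owner = {}  # path -> id of the agent currently holding it

    def claim(self, agent: str, path: str) -> bool:
        """Try to claim a file; returns False if another agent holds it."""
        holder = self._owner.setdefault(path, agent)
        return holder == agent

    def release(self, agent: str, path: str) -> None:
        """Release a file, but only if the caller is the current holder."""
        if self._owner.get(path) == agent:
            del self._owner[path]

claims = FileClaims()
print(claims.claim("agent-1", "src/auth.ts"))   # True: first claim wins
print(claims.claim("agent-2", "src/auth.ts"))   # False: conflict avoided up front
```

Routing every planned edit through a registry like this is one way to keep the merge-conflict rate low when dozens of agents run in parallel worktrees.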
Evidence
- Running CLAUDE.md at 145 lines led to noticeably worse rule compliance; cutting it to 77 lines brought immediate improvement. Anthropic recommends a 200-line budget, but in practice agents start silently ignoring rules well below that threshold, honoring only the rules near the top.
- At Level 5 orchestration, 198 parallel agents were run across 32 fleet sessions with a 3.1% merge conflict rate. The author described this as enabling one developer to work at organization-level scale.
- With Level 4 PostToolUse hooks, typechecking runs only on the edited file after each modification, avoiding the inefficiency of dumping 200+ project-wide errors into the agent context from a full project check.
- The author directly shared their failed attempt to jump straight to Level 5, confirming that multi-agent operation without hook-based auto-validation infrastructure (Level 4) causes quality control to collapse.
How to Apply
- If your agent keeps forgetting conventions, create a CLAUDE.md under 80 lines at your project root. As the file grows, rules lower down get ignored, so keep only the most critical rules and move the rest into Skills files.
- If you repeatedly explain the same task types (e.g., React component creation, API endpoint procedures), create Skills markdown files for them and have the agent reference them when needed. They cost zero tokens when unused, so creating many is free.
- If your TypeScript/Python project has too many type errors polluting the agent's context, set up a PostToolUse hook that typechecks only the edited file right after each modification; this is far more efficient than dumping full-project check output on the agent.
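The per-file typecheck hook can be scripted in a few lines. A sketch assuming the hook payload arrives as JSON on stdin with a tool_input.file_path field (verify the field names against the hooks documentation for your version):

```python
import json

def typecheck_command(payload: str):
    """Build a single-file typecheck command from a hook payload (shape assumed)."""
    path = json.loads(payload).get("tool_input", {}).get("file_path", "")
    if path.endswith((".ts", ".tsx")):
        return ["npx", "tsc", "--noEmit", path]
    return None  # non-TypeScript edit: nothing to check

# In a real hook script you would read the payload from stdin and
# subprocess.run() the resulting command, exiting with its return code.
print(typecheck_command('{"tool_input": {"file_path": "src/Button.tsx"}}'))
# → ['npx', 'tsc', '--noEmit', 'src/Button.tsx']
```

Note that invoking tsc on a single file bypasses tsconfig.json, so a real hook may want to pass explicit compiler flags matching the project's config.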
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin that runs up to 7 parallel sub-agents in Claude Code, each reviewing a PR from a different perspective, and even applies fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answers pings, a playful case that pushes the idea of "Markdown is the code and the LLM is the processor" all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of principles for designing CLI tools that AI agents can use well; as agents rely on CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for coordinating multiple AI agents that split up roles and collaborate, letting you assemble a multi-agent pipeline quickly and without configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.