What happens when you stop adding rules to CLAUDE.md and start building infrastructure instead
TL;DR Highlight
When CLAUDE.md exceeds 100 lines, rule compliance actually drops. This practitioner's guide shows how to migrate from rule files to environment-based infrastructure — hooks, skill files, and campaign files.
Who Should Read
Developers using Claude Code or Claude agents for coding workflow automation. Directly useful if your CLAUDE.md keeps growing and you feel Claude is ignoring your instructions.
Core Mechanics
- CLAUDE.md compliance drops sharply past 100 lines. The author audited their 190-line file and found that 40% of it was duplicated, with contradictory and outdated rules mixed in. Trimming to 123 lines immediately improved compliance.
- The real fix isn't file trimming — it's shifting 'where enforcement happens.' Instead of a rule saying 'always run typecheck,' replace it with a lifecycle hook that auto-runs typecheck on file save. Claude has zero choice in the matter.
- Domain knowledge that was repeatedly explained gets split into skill files (markdown). The agent loads only the skill relevant to the current task, so unused skills cost zero tokens. Things like code review processes that got re-explained every session belong here.
- Context loss between sessions is solved with campaign files. These track what was built, what decisions were made, and what's left in a structured document, eliminating the need to re-explain from scratch when resuming the next day.
- Infrastructure maturity is defined in 5 levels: 1) Raw prompting → 2) CLAUDE.md → 3) Skills (modular expertise) → 4) Hooks (environment-enforced quality) → 5) Orchestration (parallel agents, campaigns). Most projects are fine at Level 2-3.
- The author refined this system through 27 documented failures running 198 agents on a 668K-line codebase and open-sourced it as Citadel. A single /do command auto-routes tasks to the appropriate orchestration level.
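The 'typecheck on file save' idea above maps to Claude Code's hooks configuration. The sketch below is illustrative, not taken from Citadel: the `PostToolUse` event and `matcher` syntax follow Claude Code's settings format at the time of writing, and `npx tsc --noEmit` is one assumed typecheck command; verify both against the current Claude Code docs and your own toolchain.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx tsc --noEmit" }
        ]
      }
    ]
  }
}
```

Because the hook fires after every file edit, type errors surface immediately rather than as review feedback; the agent cannot skip the check the way it can skip a rule.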
Evidence
- Compliance immediately improved after trimming CLAUDE.md from 190 to 123 lines. Removing 40% duplicates and contradictory rules was key.
- After introducing a 'typecheck auto-run on file save' hook, review time dropped significantly. By PR review time, type errors and broken imports were already resolved — only intent and design needed review.
- The system was built through 27 documented failures running 198 agents on a 668K-line real codebase. Each rule derives from something actually breaking.
- Specific hook examples suggest active use: auto-saving state just before session context compaction, and a circuit breaker that terminates agents after 3 consecutive failures on the same issue.
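The circuit-breaker behavior mentioned above can be sketched in a few lines. This is a minimal illustration of the idea, not Citadel's actual implementation; the class and method names are hypothetical.

```python
class AgentCircuitBreaker:
    """Trip after N consecutive failures on the same issue (illustrative sketch)."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures: dict[str, int] = {}  # issue id -> consecutive failure count

    def record(self, issue_id: str, succeeded: bool) -> bool:
        """Record one attempt; return True if the agent should keep running."""
        if succeeded:
            self.failures.pop(issue_id, None)  # any success resets the streak
            return True
        self.failures[issue_id] = self.failures.get(issue_id, 0) + 1
        return self.failures[issue_id] < self.max_failures
```

Keying the counter by issue lets an agent fail on unrelated problems without tripping; only repeated failure on the same issue terminates it.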
How to Apply
- Open your CLAUDE.md right now and audit it. Find rules that repeat similar things, contradict each other, or are no longer valid. Target under 100 lines — keep only project conventions, tech stack, and your top 5 rules.
- If a rule keeps getting violated, stop stating it as a rule and enforce it in the environment instead. For example, replace 'run lint' rules with pre-save hook scripts, and move code review processes into dedicated skill markdown files.
- Clone the Citadel repo (https://github.com/SethGammon/Citadel) and reference its skill system, hooks, and campaign file structure. Even without adopting the full system, borrowing just the campaign file pattern immediately reduces cross-session context loss.
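A campaign file is just a structured markdown document; the template below is an assumed shape for illustration, and the section names are hypothetical. Citadel's actual format may differ, so check the repo before adopting it.

```markdown
# Campaign: user-auth-refactor

## Goal
Replace session cookies with JWT across the API layer.

## Decisions
- 2025-01-10: chose RS256 over HS256 so services can verify tokens without the signing key.

## Built
- [x] Token issuance endpoint
- [ ] Refresh-token rotation

## Next
- Wire refresh rotation into the auth middleware.
```

Resuming the next day, the agent reads this file instead of being re-briefed, which is what eliminates the cross-session context loss described above.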
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings: an entertaining case that pushes the 'Markdown is code and the LLM is the processor' idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of principles for designing CLI tools that AI agents can use well; as agents lean on CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents collaborating in divided roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox environment where AI agents can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.