Writing a good CLAUDE.md
TL;DR Highlight
Because Claude Code (a coding agent) must re-learn the codebase every session, maintaining a well-structured CLAUDE.md file has a huge impact on its performance.
Who Should Read
Developers using Claude Code (or similar coding agents) who want to maximize the agent's effectiveness and reduce repetitive context-setting.
Core Mechanics
- Claude Code starts each session without persistent memory of previous sessions — it re-reads the codebase each time
- CLAUDE.md acts as the agent's 'long-term memory': architecture overviews, coding conventions, known gotchas, and workflow instructions
- A well-maintained CLAUDE.md significantly reduces the number of clarifying questions and wrong-path attempts
- Recommended structure: project overview, tech stack, directory layout, coding conventions, common commands, do's and don'ts
- Treat CLAUDE.md as a living document updated whenever you correct the agent's behavior
Evidence
- Developer community experience reports comparing session quality with and without CLAUDE.md
- Anthropic's own documentation recommending CLAUDE.md best practices
- Anecdotal but consistent reports of 30–50% reduction in agent errors with good CLAUDE.md
How to Apply
- Create a CLAUDE.md at your project root with: project purpose, tech stack, directory structure, key conventions, and common pitfalls.
- When the agent makes a mistake due to missing context, add that context to CLAUDE.md immediately — not just correct it for this session.
- Keep CLAUDE.md concise; aim for <500 lines. Overly long files dilute attention on the most critical constraints.
Code Example
# CLAUDE.md Table of Contents Style Example
## Documentation References
- For CSS work: docs/ADDING_CSS.md
- For adding assets: docs/ADDING_ASSETS.md
- For working with user data: docs/STORAGE_MANAGER.md
## Stack
- Runtime: Bun (not Node)
- Tests: `bun test`
- Typecheck: `bun tsc --noEmit`
Related Papers
Using Claude Code: The unreasonable effectiveness of HTML
An article explaining why the Claude Code team began preferring HTML over Markdown as an LLM output format and its practical advantages; it directly affects workflows for building documents, specs, and dashboards with AI.
When to Vote, When to Rewrite: Disagreement-Guided Strategy Routing for Test-Time Scaling
Disagreement-guided routing boosts LLM accuracy on math and code by 3-7% with adaptive problem solving.
Less Is More: Engineering Challenges of On-Device Small Language Model Integration in a Mobile Application
Five failure modes and eight practical solutions emerged after five days of running on-device SLMs (Gemma 4 E2B, Qwen3 0.6B) with Wordle.
Dynamic Context Evolution for Scalable Synthetic Data Generation
A framework that completely eliminates duplication and repetition in large-scale synthetic data generation with LLMs using three mechanisms (VTS + Semantic Memory + Adaptive Prompt).
90%+ fewer tokens per session by reading a pre-compiled wiki instead of exploring files cold. Built from Karpathy's workflow.
A workflow-sharing post on how pre-organizing a codebase in wiki format can cut per-session Claude token usage by more than 90%, compared with exploring the codebase directly every time.