AGENTS.md – Open format for guiding coding agents
TL;DR Highlight
If README.md is a guide for humans, AGENTS.md is a project guide for AI coding agents — already used in over 60,000 open-source projects.
Who Should Read
Developers using AI coding agents (Cursor, Copilot, Codex, Claude Code, etc.) in production who want to help agents better understand project context.
Core Mechanics
- AGENTS.md is a markdown file that tells AI coding agents about a project's build commands, test methods, code style rules, etc. While README.md is for human onboarding, AGENTS.md is for agent onboarding.
- 20+ coding agents/tools already support AGENTS.md including OpenAI Codex, Google Jules, Cursor, Windsurf, Aider, GitHub Copilot, Gemini CLI, Factory, Zed, and Warp. A single file applying to multiple agents is the key selling point.
- Usage is simple: create AGENTS.md at the project root with project overview, build/test commands, code style, PR rules, etc. No required fields or schema — just markdown.
- In monorepos, each subproject can have its own AGENTS.md. Agents read the nearest file in the directory tree, enabling per-package custom instructions. OpenAI's main repo reportedly has 88 AGENTS.md files.
- The Agentic AI Foundation under the Linux Foundation now officially stewards this format. It's an open format built collaboratively by OpenAI, Google, Cursor, Amp, Factory, and others.
- Over 60,000 open-source projects on GitHub already use AGENTS.md, including large projects like Apache Airflow (4,215 lines) and Temporal SDK (122 lines).
- It's more of a filename convention than a true standard. There's no enforced structure — you can write anything in the markdown. This is both its strength and limitation.
Evidence
- Currently Claude Code uses CLAUDE.md, Cursor uses .cursorrules, Windsurf uses .windsurfrules — each agent has its own file, so unification under a single AGENTS.md isn't actually happening. Some use tools like ruler (github.com/intellectronica/ruler) to auto-generate multiple formats.
- An ironic reaction was common: 'People wouldn't write docs for humans, but they write them for robots.' Ultimately, organizing docs for AI benefits humans too — the 'ergonomic handles' analogy resonated.
- Experience shared that splitting into a folder structure (.agents/index.md + auth.md + testing.md) is far better than one giant markdown. Reduces token waste and enables selective context loading.
- Fundamental skepticism: 'Wasn't the promise of LLMs that agents understand codebases without special guides?' Some argue good human-centered docs should be sufficient, while others shared using AST+RAG hybrid search for 5,000+ repos as a more effective approach.
- Anthropic/Claude was noted as missing from the support list. Claude Code uses its own CLAUDE.md, with symlinks (AGENTS.md → CLAUDE.md) mentioned as a workaround.
How to Apply
- If you maintain an open-source project, add AGENTS.md at the project root with build commands, test instructions, code style rules, and PR conventions to improve quality when external contributors work with AI agents.
- When using multiple agents (Claude Code + Cursor, etc.), write AGENTS.md as the master file, then create symlinks: `ln -s AGENTS.md CLAUDE.md` and `ln -s AGENTS.md .cursorrules`. Or use the ruler tool (github.com/intellectronica/ruler) for auto-generation.
- In monorepos with different build/test methods per package, place separate AGENTS.md files in each package directory. Agents read the nearest file when working in that directory, reducing unnecessary context loading.
- If AGENTS.md gets too large, consider switching to a .agents/ folder structure. Put the main guide in index.md and split into auth.md, testing.md, data_layer.md, etc. for better token efficiency and maintainability.
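The split-folder idea above can be sketched as a selective context loader. The `.agents/` layout and the keyword-to-file mapping here are hypothetical, assuming an `index.md` that is always loaded plus topic files pulled in only when relevant:

```python
from pathlib import Path

# Hypothetical mapping from task keywords to .agents/ topic files.
TOPIC_FILES = {
    "auth": "auth.md",
    "login": "auth.md",
    "test": "testing.md",
    "migration": "data_layer.md",
}

def load_context(agents_dir: Path, task: str) -> str:
    """Always load index.md, then append only the topic files whose
    keywords appear in the task description — spending tokens on
    relevant guidance instead of one giant markdown file."""
    parts = [(agents_dir / "index.md").read_text()]
    seen: set[str] = set()
    for keyword, filename in TOPIC_FILES.items():
        if keyword in task.lower() and filename not in seen:
            seen.add(filename)
            path = agents_dir / filename
            if path.is_file():
                parts.append(path.read_text())
    return "\n\n".join(parts)
```

A task like "fix the login flow tests" would load `index.md`, `auth.md`, and `testing.md`, while a documentation task would load `index.md` alone.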
Code Example
# AGENTS.md Example
## Setup commands
- Install deps: `pnpm install`
- Start dev server: `pnpm dev`
- Run tests: `pnpm test`
## Code style
- TypeScript strict mode
- Single quotes, no semicolons
- Use functional patterns where possible
## Testing instructions
- Run `pnpm turbo run test --filter <project_name>`
- Fix any test or type errors until the whole suite is green
- Add or update tests for the code you change
## PR instructions
- Title format: [<project_name>] <Title>
- Always run `pnpm lint` and `pnpm test` before committing
# Supporting multiple agents with symbolic links
$ ln -s AGENTS.md CLAUDE.md
$ ln -s AGENTS.md .cursorrules
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community raised skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings — an entertaining case that pushes the idea of 'Markdown is the code and the LLM is the processor' all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (Claude Code, etc.) and even supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of design principles for making CLI tools easier for AI agents to use; as agents invoke CLIs as tools more and more often, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in divided roles, letting you assemble a multi-agent pipeline quickly with zero configuration, Vite-style.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.