Lat.md: Agent Lattice: a knowledge graph for your codebase, written in Markdown
TL;DR Highlight
A tool that manages design decisions and domain knowledge across a codebase as a graph of interconnected Markdown files. It overcomes the limitations of a single AGENTS.md file and lets AI agents grasp context quickly without having to traverse the code.
Who Should Read
Developers on teams building medium-to-large codebases with AI agents like Claude Code or GitHub Copilot, who are struggling with agent context management and hallucination issues.
Core Mechanics
- The single AGENTS.md file approach becomes unmaintainable as projects grow. Critical design decisions get buried in the file, business logic goes undocumented, and agents hallucinate context they should be able to find.
- lat.md works by placing interconnected Markdown files in a lat.md/ directory at the project root. Sections link to each other using [[wiki links]] syntax, Markdown files link to code via [[src/auth.ts#validateToken]] format, and source files can back-reference sections with // @lat: [[section-id]] comments.
- The lat check command validates link integrity and code-spec synchronization. Test specs marked with require-code-mention: true must be referenced by a // @lat: comment in the test code, and any spec without a reference is flagged by lat check.
- From an agent workflow perspective, the core value is search efficiency. Instead of grepping through the codebase, agents can search the knowledge graph to quickly and consistently find design decisions, constraints, and domain context.
- It solves the knowledge retention problem. Normally, the context and reasoning an agent discovers during a session are lost when the session ends. By recording knowledge gained during a session into the lat.md graph, subsequent sessions don't have to rediscover everything from scratch.
- It also improves the human developer workflow. When reviewing a change, the approach suggests first reading the semantic diff in lat.md/ (what changed and why), then turning to the code diff as a secondary reference.
- Agents are designed to manage lat files directly. Developers can instruct agents to update relevant lat.md sections while performing tasks, and since the files are Markdown, standard PR review processes and git blame work seamlessly.
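To make the layout concrete, a project using lat.md might be arranged like this (the file and section names below are hypothetical examples, not a prescribed structure):

```
project-root/
  lat.md/
    auth/
      token-validation.md      # wiki-links to [[auth/session-management]]
      session-management.md
    security/
      rate-limiting.md
  src/
    auth.ts                    # contains // @lat: [[auth/token-validation]]
```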
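The link-integrity half of `lat check` can be pictured as a scan of every section for `[[wiki links]]` whose targets don't exist. The sketch below is illustrative only, assuming the `[[target]]` syntax described above; the names `extractWikiLinks` and `findBrokenLinks` are invented for this example and are not lat.md's actual API.

```typescript
// Collect every [[target]] wiki link that appears in a Markdown string.
function extractWikiLinks(markdown: string): string[] {
  const links: string[] = [];
  for (const match of markdown.matchAll(/\[\[([^\]]+)\]\]/g)) {
    links.push(match[1]);
  }
  return links;
}

// Return the links whose targets are not in the set of known sections.
function findBrokenLinks(doc: string, knownTargets: Set<string>): string[] {
  return extractWikiLinks(doc).filter((target) => !knownTargets.has(target));
}
```

A real implementation would also resolve `[[src/auth.ts#validateToken]]`-style code links against the file system and check that `// @lat:` back-references point at existing sections.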
Evidence
- "Staleness emerged as the top concern—if someone renames a package, the graph immediately becomes outdated. The counterargument was that keeping Markdown in the repo means changes go through normal PR review and git blame, making it better than traditional knowledge graphs. Using pre-commit hooks or CI jobs to refresh stale nodes was proposed as a practical solution. Skepticism about real-world effectiveness was also raised, with multiple commenters requesting benchmarks showing actual agent performance improvements over AGENTS.md or nested AGENTS.md—a representative sentiment was 'the idea is cool, but vibes alone aren't enough to justify adoption; show me a 10%+ measurable improvement.' Some users shared that they were already using similar patterns, splitting long docs into module-based groups after Claude Code's slash commands so agents load only relevant docs based on the task at hand—the common experience was that maintenance isn't hard, but spending too much time thinking about how to organize context is the real problem. Questions comparing lat.md to AST/RAG approaches were raised, with one user sharing they had sped up the search phase by 50% using AST/RAG for broad exploration followed by LSP drill-down, and asking what additional value lat.md provides. Real-world experience from a 10M+ LOC C/C++ codebase showed that placing small Markdown files in each folder describing that area and its classes was effective for grounding Claude and Codex; it was also advised that rendering the docs with something like mkdocs to make them look like real documentation is important for encouraging people to take reviews seriously."
How to Apply
- "If AI agents on a medium-to-large project keep making wrong design decisions or ignoring existing patterns, create a lat.md/ directory, organize core domain concepts, architecture decisions, and forbidden patterns into wiki-linked sections, and instruct agents in the system prompt to explore lat.md/ first—this can reduce hallucinations. You can also use it for test coverage tracking: write test specs as lat.md/ sections with require-code-mention: true, then add lat check to your CI pipeline to automatically detect specs not referenced by // @lat: comments in the test code. If context is being lost between agent sessions, include the instruction 'after completing the task, update the relevant lat.md sections with related design decisions and constraints discovered' in your agent task prompts—this reduces the tokens and time the next session's agent spends rediscovering the same information. To improve the code review process, introduce a team habit of reading lat.md/ changes first in PR reviews to understand what changed and why, before looking at the code diff. Since it's Markdown, git diff works naturally, the meaning of code changes becomes clearer, and review efficiency improves."
Code Example
Back-referencing a lat.md section from a source file:

```typescript
// @lat: [[auth/token-validation]]
function validateToken(token: string): boolean {
  // ...
}
```

Using wiki links in a lat.md file (lat.md/auth/token-validation.md):

```markdown
## Token Validation

See also: [[auth/session-management]], [[security/rate-limiting]]
Implemented in: [[src/auth.ts#validateToken]]
```

Running lat check from package.json scripts:

```json
{
  "scripts": {
    "lat:check": "lat check"
  }
}
```

GitHub Actions step for detecting staleness in CI:

```yaml
- name: Check lat.md integrity
  run: pnpm lat:check
```
Related Papers
Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model
A project that distills only Gemini's tool-calling ability into an ultra-light 26M (26-million) parameter model, which can run directly on edge devices such as phones, watches, and smart glasses.
Show HN: Agentic interface for mainframes and COBOL
A developer tool that lets AI agents operate decades-old mainframe (z/OS) environments, handling everything from writing COBOL code to running JCL and debugging via natural language, which can significantly reduce legacy-system maintenance costs.
Show HN: Statewright – Visual state machines that make AI agents reliable
An open-source project that tackles the problem of AI agents performing worse when given 40+ tools by using a state machine to restrict which tools are available at each step. The core approach is to improve reliability by shrinking the problem space rather than using a bigger model.
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings, a fun case that pushes the idea that "Markdown is code and the LLM is the processor" all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.