lat.md (Agent Lattice): a knowledge graph for your codebase, written in Markdown
TL;DR Highlight
A tool that manages design decisions and domain knowledge across a codebase as a graph of interconnected Markdown files. It overcomes the limitations of a single AGENTS.md file and lets AI agents grasp context quickly without traversing the code.
Who Should Read
Developers on teams building medium-to-large codebases with AI agents like Claude Code or GitHub Copilot, who are struggling with agent context management and hallucination issues.
Core Mechanics
- The single AGENTS.md file approach becomes unmaintainable as projects grow. Critical design decisions get buried in the file, business logic goes undocumented, and agents hallucinate context they should be able to find.
- lat.md works by placing interconnected Markdown files in a lat.md/ directory at the project root. Sections link to each other using [[wiki links]] syntax, Markdown files link to code via [[src/auth.ts#validateToken]] format, and source files can back-reference sections with // @lat: [[section-id]] comments.
- The lat check command validates link integrity and code-spec synchronization. Test specs marked with require-code-mention: true must be referenced by a // @lat: comment in the test code, and any spec without a reference is flagged by lat check.
- From an agent workflow perspective, the core value is search efficiency. Instead of grepping through the codebase, agents can search the knowledge graph to quickly and consistently find design decisions, constraints, and domain context.
- It solves the knowledge retention problem. Normally, the context and reasoning an agent discovers during a session is lost when the session ends. By recording knowledge gained during a session into the lat.md graph, subsequent sessions don't have to rediscover everything from scratch.
- It also improves the human developer workflow. When reviewing a change, the approach suggests first reading the semantic changes in lat.md/ (what changed and why), then using the code diff as a secondary reference.
- Agents are designed to manage lat files directly. Developers can instruct agents to update relevant lat.md sections while performing tasks, and since the files are Markdown, standard PR review processes and git blame work seamlessly.
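As a concrete illustration of the mechanics above, a lat.md/ tree and a spec section might look like the following. The file names, the section content, and the placement of `require-code-mention: true` in YAML front matter are illustrative assumptions; the source only states that specs are "marked with" that key.

```text
lat.md/
├── auth/
│   ├── token-validation.md
│   └── session-management.md
└── security/
    └── rate-limiting.md
```

```markdown
---
require-code-mention: true
---
## Session Expiry
Sessions must expire after a period of inactivity.
Implemented in: [[src/auth.ts#expireSession]]   <!-- hypothetical identifier -->
```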
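The link-integrity pass that `lat check` performs could be sketched like this. This is a minimal illustration of the concept, not lat.md's actual implementation; the function names and regex are assumptions.

```typescript
// Sketch: extract [[wiki links]] from Markdown and flag targets that don't
// resolve to a known section id — conceptually what `lat check` validates.
const WIKI_LINK = /\[\[([^\]#|]+)(?:#[^\]]*)?\]\]/g;

// Pull every [[target]] out of a Markdown document (dropping any #anchor part).
function extractLinks(markdown: string): string[] {
  return [...markdown.matchAll(WIKI_LINK)].map((m) => m[1].trim());
}

// Return the links whose target is not a known section id.
function brokenLinks(markdown: string, knownSections: Set<string>): string[] {
  return extractLinks(markdown).filter((t) => !knownSections.has(t));
}

const sections = new Set(["auth/session-management", "security/rate-limiting"]);
const doc = "See [[auth/session-management]] and [[auth/missing-section]].";
console.log(brokenLinks(doc, sections)); // -> ["auth/missing-section"]
```

A real checker would also walk the lat.md/ directory to build the set of known section ids and resolve code links like `[[src/auth.ts#validateToken]]` against the file system.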
Evidence
- Staleness emerged as the top concern: if someone renames a package, the graph immediately becomes outdated. The counterargument was that keeping Markdown in the repo means changes go through normal PR review and git blame, making it better than traditional knowledge graphs. Using pre-commit hooks or CI jobs to refresh stale nodes was proposed as a practical mitigation.
- Skepticism about real-world effectiveness was also raised, with multiple commenters requesting benchmarks showing actual agent performance improvements over AGENTS.md or nested AGENTS.md files. A representative sentiment: "the idea is cool, but vibes alone aren't enough to justify adoption; show me a 10%+ measurable improvement."
- Some users shared that they were already using similar patterns, splitting long docs into module-based groups in the style of Claude Code's slash commands so agents load only the docs relevant to the task at hand. The common experience: maintenance isn't hard, but spending too much time deciding how to organize context is the real problem.
- Questions comparing lat.md to AST/RAG approaches were raised. One user shared that they had sped up the search phase by 50% using AST/RAG for broad exploration followed by LSP drill-down, and asked what additional value lat.md provides.
- Real-world experience from a 10M+ LOC C/C++ codebase showed that placing small Markdown files in each folder, describing that area and its classes, was effective for grounding Claude and Codex. It was also advised to render the docs with something like mkdocs so they look like real documentation, which encourages people to take reviews seriously.
How to Apply
- If AI agents on a medium-to-large project keep making wrong design decisions or ignoring existing patterns, create a lat.md/ directory, organize core domain concepts, architecture decisions, and forbidden patterns into wiki-linked sections, and instruct agents in the system prompt to explore lat.md/ first. This can reduce hallucinations.
- Use it for test coverage tracking: write test specs as lat.md/ sections with require-code-mention: true, then add lat check to your CI pipeline to automatically detect specs not referenced by // @lat: comments in the test code.
- If context is being lost between agent sessions, include the instruction "after completing the task, update the relevant lat.md sections with the design decisions and constraints discovered" in your agent task prompts. This reduces the tokens and time the next session's agent spends rediscovering the same information.
- To improve the code review process, introduce a team habit of reading lat.md/ changes first in PR reviews to understand what changed and why, before looking at the code diff. Since it's Markdown, git diff works naturally, the intent of code changes becomes clearer, and review efficiency improves.
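The CI-driven coverage check described above can be sketched as follows. This is an illustrative reimplementation of the idea, not lat.md's real code; the regex, function names, and spec-listing format are assumptions.

```typescript
// Sketch: every spec section marked `require-code-mention: true` must be
// referenced by a `// @lat: [[section-id]]` comment somewhere in the tests.
const LAT_COMMENT = /\/\/\s*@lat:\s*\[\[([^\]]+)\]\]/g;

// Collect every section id referenced via // @lat: comments in source text.
function referencedSections(source: string): Set<string> {
  return new Set([...source.matchAll(LAT_COMMENT)].map((m) => m[1].trim()));
}

// Specs that require a code mention but have none — what `lat check`
// would flag in CI.
function unreferencedSpecs(requiredSpecs: string[], source: string): string[] {
  const seen = referencedSections(source);
  return requiredSpecs.filter((id) => !seen.has(id));
}

const specs = ["auth/token-validation", "auth/session-expiry"];
const testFile = `
// @lat: [[auth/token-validation]]
test("rejects expired tokens", () => { /* ... */ });
`;
console.log(unreferencedSpecs(specs, testFile)); // -> ["auth/session-expiry"]
```

Wiring this into CI then reduces to exiting non-zero when the returned list is non-empty.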
Code Example
Back-referencing a lat.md section from a source file:

```typescript
// @lat: [[auth/token-validation]]
function validateToken(token: string): boolean {
  // ...
}
```

Using wiki links in a lat.md file (lat.md/auth/token-validation.md):

```markdown
## Token Validation
See also: [[auth/session-management]], [[security/rate-limiting]]
Implemented in: [[src/auth.ts#validateToken]]
```

Running lat check (package.json scripts):

```json
{
  "scripts": {
    "lat:check": "lat check"
  }
}
```

GitHub Actions example for detecting staleness in CI:

```yaml
- name: Check lat.md integrity
  run: pnpm lat:check
```
Terminology
Knowledge Graph: A structure that represents data as nodes (concepts) and edges (relationships). In lat.md, Markdown files act as nodes and wiki links act as edges.
AGENTS.md: A single Markdown file that provides project context to AI agents. It's a kind of 'README for AI' that agents like Claude or Codex read before starting work.
wiki links: A link syntax that allows documents to reference each other in the format [[filename]]. It originated in note-taking apps like Obsidian and is used in lat.md to express relationships between sections.
LSP: Language Server Protocol. A protocol used by editors like VS Code to provide features like code auto-completion and go-to-definition, allowing structural information about code to be queried.
AST: Abstract Syntax Tree. The result of parsing source code into a tree structure, used to programmatically analyze the structure of code.
RAG: Retrieval-Augmented Generation. A technique that enables LLMs to search and reference external documents when generating responses, applicable to codebase exploration as well.