90%+ fewer tokens per session by reading a pre-compiled wiki instead of exploring files cold. Built from Karpathy's workflow.
TL;DR Highlight
This post shares a workflow: pre-organizing a codebase as a wiki can cut token usage per Claude session by more than 90%, compared with exploring the codebase cold in every session.
Who Should Read
Developers using Claude or other LLMs for codebase exploration and development tasks who face token costs or context-window limits.
Core Mechanics
- Letting the AI explore files from scratch ("cold exploration") in each session wastes many tokens; pre-organizing the codebase in wiki format avoids this repeated cost.
- This approach is inspired by the workflow used by Andrej Karpathy, and the key is to pre-compile the structure and core content of the codebase.
- It is reported that this method reduces token usage per session by more than 90%, significantly reducing the cost of repetitive codebase exploration.
- Due to blocked access to the original source, it was not possible to confirm the specific implementation methods, tools, and scripts.
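Since the original post's tooling could not be verified, here is a minimal sketch of the idea: a script that walks a repository once and emits a skeleton `CODEBASE_WIKI.md` (the filename comes from the How to Apply section below; everything else, including the skip list and section headings, is an assumption, not the author's method).

```python
from pathlib import Path

# Directories that add noise without helping the AI understand the code
# (an illustrative skip list, not from the original post).
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv"}

def build_wiki(repo_root: str, max_depth: int = 3) -> str:
    """Build a skeleton wiki: a directory tree the AI can read instead
    of exploring the file system cold. Module summaries still need to
    be filled in once (by the AI or by hand)."""
    root = Path(repo_root)
    lines = [f"# Codebase wiki: {root.name}", "", "## Directory structure", ""]

    def walk(d: Path, depth: int) -> None:
        if depth > max_depth:
            return
        for entry in sorted(d.iterdir()):
            if entry.name in SKIP_DIRS or entry.name.startswith("."):
                continue
            suffix = "/" if entry.is_dir() else ""
            lines.append(f"{'  ' * depth}- {entry.name}{suffix}")
            if entry.is_dir():
                walk(entry, depth + 1)

    walk(root, 0)
    lines += ["", "## Key modules", "",
              "(one-paragraph summary per module, filled in once)"]
    return "\n".join(lines)

if __name__ == "__main__":
    Path("CODEBASE_WIKI.md").write_text(build_wiki("."))
```

The one-time cost of filling in the module summaries is what the per-session savings amortize: every later session reads this single file instead of re-deriving the same structure.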
Evidence
- No comment information was available from the original thread.
How to Apply
- If you are working with Claude on the same codebase repeatedly, you can try creating a Markdown Wiki file in advance that organizes the project structure, key modules, and function roles, and injecting only that file at the beginning of each session.
- When starting a new project, have Claude explore the entire codebase only once, and save the results in a file like CODEBASE_WIKI.md. Subsequent sessions can then refer to only that file to save tokens.
- For the specific implementation details from the original post, visit the Reddit thread directly (https://www.reddit.com/r/ClaudeAI/comments/1sfdztg/) or refer to Karpathy's publicly shared workflow materials.
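The injection step described above can be sketched as prepending the wiki file to the session's first prompt. This is a minimal illustration, assuming the wiki lives in `CODEBASE_WIKI.md`; the wording of the preamble is hypothetical, not from the post.

```python
from pathlib import Path

def first_message(task: str, wiki_path: str = "CODEBASE_WIKI.md") -> str:
    """Prepend the pre-compiled wiki to the session's opening prompt,
    so the model starts with a map of the codebase instead of
    exploring files cold."""
    wiki = Path(wiki_path).read_text()
    return (
        "Here is a pre-compiled wiki of this codebase. "
        "Use it instead of exploring files from scratch.\n\n"
        f"{wiki}\n\n---\n\nTask: {task}"
    )
```

The same string works whether you paste it into a chat session or pass it through an API; the point is that only this one file, not the whole repository, enters the context window.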
Terminology
cold exploration: A method where the AI explores the file system from scratch, without any prior information. Very wasteful of tokens when repeated every session.
pre-compiled wiki: A document that pre-organizes the structure, key files, and function roles of the codebase. Reference material for quickly providing context to the AI.
context window: The maximum amount of text an LLM can process at once, measured in tokens. Exceeding this limit can cause the AI to forget earlier information or produce errors.