Show HN: A plain-text cognitive architecture for Claude Code
TL;DR Highlight
A project that builds a hierarchical, plain-text memory structure (a "cognitive architecture") to work around Claude Code's inability to retain memory across sessions. A practical reference for developers who want to use AI coding assistants consistently over the long term.
Who Should Read
Developers who use AI coding assistants like Claude Code or Aider daily in their work, but are frustrated by having to re-explain context every time a new session starts.
Core Mechanics
- Claude Code has no memory between sessions by default — once a conversation ends, it retains no prior context. Cog is a project that addresses this by building an external memory system composed of plain-text files.
- Instead of dumping everything into a single file, memory is divided into tiers such as 'hot (load immediately) → warm (load on demand) → cold (archive)'. This allows efficient use of the context window and enables fast access to frequently needed information.
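The tiering above can be sketched as a minimal loader. This is a sketch under assumptions: the file names, tier assignments, and in-memory contents are hypothetical illustrations, not Cog's actual layout. Hot entries are always included, warm entries only when their module is being worked on, and cold entries never enter the prompt automatically.

```python
# Minimal sketch of hot/warm/cold memory tiers (hypothetical file names
# and contents, not Cog's actual conventions). Contents are inlined so
# the sketch is self-contained; a real version would read from disk.
MEMORY = {
    "hot":  {"principles.md": "Never mock the DB in integration tests."},
    "warm": {"auth.md": "Tokens are rotated weekly.",
             "billing.md": "Stripe webhooks are idempotent."},
    "cold": {"2023-decisions.md": "Archived ADRs."},
}

def build_context(touched_modules):
    """Assemble the prompt context: all hot entries, plus warm entries
    matching the modules being worked on. Cold entries stay archived."""
    context = list(MEMORY["hot"].values())
    for name, text in MEMORY["warm"].items():
        if name.removesuffix(".md") in touched_modules:
            context.append(text)
    return context
```

For example, `build_context(["auth"])` loads the hot principles plus only the auth notes, keeping billing and archived material out of the context window.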
- A dedicated 'onboarding flow' at session start and a 'shutdown flow' at session end are built in, so the AI can organize and update its own memory autonomously — similar to how a person checks their TODO list at the start of the day and writes a journal at the end.
- One of the core design philosophies of this project is that storing context-rich lessons (e.g., 'Do not mock the DB in integration tests — there was a case where tests passed but the migration failed') is far more effective than storing bare facts (e.g., 'The database is PostgreSQL 16').
- The project is similar to CLAUDE.md (the per-project instruction file officially supported by Claude Code), but adds a more sophisticated structure. Architectural decisions, file paths, and rules like 'do X / don't do Y' are systematically organized to guide Claude toward consistent behavior.
- This approach is not limited to Claude Code — it is a general-purpose pattern that applies equally to other AI coding tools such as Aider and OpenCode. Being plain-text based makes it compatible with any tool.
- Alternatives exist, such as Anthropic's official Auto Dream feature or episodic-memory systems, but Cog's differentiator is that developers can directly customize the structure and version-control the files with git.
Evidence
- "A comment pointed out reliability issues with long-term memory — if observations from 30 sessions ago and inferences drawn from a single offhand remark are stored at the same level, the memory becomes increasingly useless. A real implementation experience was shared where tagging entries with confidence scores and timestamps, decaying memories that hadn't been reinforced over time, and managing conflicting observations in a separate 'contradictions log' proved to be useful. There was also a pragmatic counterargument that a well-written CLAUDE.md alone is sufficient. A developer who heavily uses Claude Code for infrastructure work argued that 'storing lessons was far more effective than storing facts,' and that a single well-crafted CLAUDE.md can be more powerful than a complex memory architecture. A case was shared of someone implementing a far more sophisticated workflow on their own — managing separate onboarding.md, journal.md, and musings.md files, and having the AI review consistency across all documents and code before submitting a PR at the end of each session. The view was that 'treating AI as a collaborator rather than a tool yields much better results,' though the author honestly noted the significant downside of massive token consumption ('token fire'). Some expressed that Codex handles context management better than Claude, sharing a comparison experience that 'Claude drops information from its context, whereas Codex doesn't forget content even in long sessions' — which ironically validates the very reason this project exists to work around Claude Code's fundamental limitations. There was also a critical perspective that this entire approach is a superficial fix that patches LLM architectural limitations with text files. The argument was that if local open models were more competitive, this would have been solved with overnight fine-tuning — a philosophical critique pointing to the inherent limitations of the current LLM paradigm."
How to Apply
- "If you find yourself repeating the same explanations at the start of every Claude Code session, write lessons in your CLAUDE.md in the format 'don't do X + the reason why (including failure cases)' instead of plain facts. For example, including context like 'No DB mocking in integration tests — there was a past case where tests passed but the actual migration failed' will make Claude behave far more consistently. Managing all memory in a single file wastes your context window. Split files into three tiers — 'always-load (core project principles)', 'load-when-needed (per-module rules)', and 'archive (history of past decisions)' — and instruct Claude to read only the relevant files at session start to improve token efficiency. If you're concerned about the reliability of AI memory in long-running projects, get into the habit of annotating stored information with 'when it was recorded' and 'how certain it is (speculation vs. verified fact)'. When conflicting information arises, don't delete either entry — keep both in a 'conflict' section, which will help with context reconstruction later. The Cog architecture is not exclusive to Claude Code, so it applies equally if you use Aider or other AI coding tools. Check out the structure on the official Cog site (https://lab.puga.com.br/cog/) and try adopting it by simply adjusting the file conventions to fit your own tool."
Terminology
Cognitive Architecture: The overall structural design that determines how an AI agent perceives, stores, and utilizes information. Think of it as a blueprint for 'how it thinks and remembers.'
Tiered Memory: An approach that divides memory into multiple layers based on importance or access frequency. Like L1/L2 cache in a computer, frequently used information is kept close for fast access while less-used information is stored further away.
Context Window: The maximum length of text an LLM can read and process in a single pass. Once this limit is exceeded, older content gets truncated, making it important to manage what gets included.
CLAUDE.md: A configuration file that Claude Code automatically reads at the start of a project. Writing per-project rules, conventions, and caveats here means you don't have to explain them every session.
Confidence Score: A numerical representation of how trustworthy a stored piece of information is. The score can be lowered over time or when contradicting evidence emerges, allowing distinction between 'old guesses' and 'verified facts.'
Decay: A mechanism that automatically reduces the reliability or priority of a memory entry over time. Just as human memories fade with age, this approach makes AI memory treat older information as less important.
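A minimal decay rule can be written as exponential decay with a chosen half-life. The 30-day half-life and the idea of resetting the clock on reinforcement are assumptions for illustration, not a mechanism Cog defines.

```python
import math

# Sketch of confidence decay: an entry's score halves every
# `half_life_days` unless it is reinforced (re-observed), which would
# reset its age to zero. The half-life is an illustrative assumption.
def decayed_confidence(score, age_days, half_life_days=30.0):
    return score * math.exp(-math.log(2) * age_days / half_life_days)

# A 0.9-confidence note left unreinforced for 30 days drops to 0.45,
# and after 60 days to 0.225, naturally demoting stale observations.
```

Combined with a confidence threshold, this gives an automatic way to move entries from warm storage to the archive without anyone having to prune files by hand.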