Show HN: Hippo, biologically inspired memory for AI agents
TL;DR Highlight
Hippo is an open-source memory layer that allows you to share memories across sessions between various AI agent tools such as Claude Code, Cursor, and Codex. It implements the brain's mechanisms of memory decay, retrieval strengthening, and consolidation in code.
Who Should Read
Developers frustrated by having to re-explain context every time they switch between AI coding agents such as Claude Code, Cursor, and Codex, and teams that want to stop their AI agents from repeating the same mistakes.
Core Mechanics
- AI agents forget everything when a session ends, and existing solutions are simply 'file cabinets' that store everything and search it later. Hippo started from the idea of making it work like a brain.
- Three mechanisms are central: decay (memories fade over time), retrieval strengthening (frequently recalled memories become more prominent), and consolidation (important memories are promoted to long-term storage).
- It works with agents such as Claude Code, Cursor, Codex, and OpenClaw, and has an import function to load existing memories from files like Claude Code's CLAUDE.md or Cursor's .cursorrules.
- SQLite is the storage backbone, mirrored to human-readable Markdown/YAML files, so the memory store can be tracked in Git and exported by simply copying a folder; there is no vendor lock-in.
- It has zero runtime dependencies and requires only Node.js 22.5 or higher. You can optionally add @xenova/transformers if you want embedding-based similarity search.
- Installation is done with `npm install -g hippo-memory` followed by `hippo init` for initialization. Memories are stored using the `hippo remember '...'` command.
- Memories carry tags and a confidence level, which structurally addresses the problem of rule files ballooning into a messy 400-line CLAUDE.md; stale information decays away automatically.
- The author responded in the thread that v0.10.0 incorporates much of the community feedback.
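The three mechanisms above can be made concrete with a small scoring sketch. This is not Hippo's actual implementation; the half-life, recall boost, and consolidation thresholds (`HALF_LIFE_MS`, `recall`, `consolidate`) are illustrative assumptions.

```typescript
// Hypothetical sketch of decay, retrieval strengthening, and consolidation.
// Hippo's real internals and field names may differ.

interface Memory {
  text: string;
  strength: number;      // current importance score
  lastAccessMs: number;  // timestamp of last retrieval
  retrievals: number;    // how many times it has been recalled
  consolidated: boolean; // promoted to long-term storage?
}

const HALF_LIFE_MS = 7 * 24 * 3600 * 1000; // assumed one-week half-life

// Decay: strength halves for every HALF_LIFE_MS of elapsed time.
function decayed(m: Memory, nowMs: number): number {
  const elapsed = nowMs - m.lastAccessMs;
  return m.strength * Math.pow(0.5, elapsed / HALF_LIFE_MS);
}

// Retrieval strengthening: each recall boosts strength (capped) and
// resets the decay clock.
function recall(m: Memory, nowMs: number): Memory {
  const s = Math.min(1.5, decayed(m, nowMs) + 0.25);
  return { ...m, strength: s, lastAccessMs: nowMs, retrievals: m.retrievals + 1 };
}

// Consolidation: memories recalled often enough while still strong
// are promoted to long-term storage.
function consolidate(m: Memory): Memory {
  if (!m.consolidated && m.retrievals >= 3 && m.strength >= 1.0) {
    return { ...m, consolidated: true };
  }
  return m;
}
```

A real implementation would persist these fields (Hippo uses SQLite) and recompute scores at query time rather than mutating records eagerly.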
Evidence
- There was skepticism about whether decay-based forgetting is actually effective. One comment argued that 'exponential decay cannot capture sharp changes like a PR merge. Biological learning makes sense when you repeatedly observe similar patterns, but I'm not sure that's a good analogy for learning from commits to a codebase.'
- There was also a sharp observation that 'knowing what to forget' is an AGI-complete problem: judging what will matter in the future requires a model of future tasks and of your own current state, yet current agents cannot even model their own capabilities properly.
- A project approaching the same problem from the opposite direction was also mentioned: ccrider (github.com/neilberkman/ccrider) skips a separate memory layer and instead indexes Claude Code and Codex session transcripts with SQLite FTS5, exposing them for search via an MCP server.
- Several commenters argued that decay should be based on the agent's 'active time' rather than wall-clock time: for agents that run intermittently, clock-based decay erases memories regardless of how often they are actually used.
- Location-based memory triggering was also suggested: if the file or project path the agent is working in triggers related memories, the relevant context activates automatically and recall feels more natural. This was compared to how physical location strongly triggers procedural memory in sports or GUIs.
- It was pointed out that HippoRAG (arxiv.org/abs/2405.14831), a paper with a similar name and a similar technique, is not mentioned in the README; the community wondered whether this was intentional or an oversight.
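The 'active time' suggestion can be sketched as a decay clock that advances only while the agent has a live session, so downtime between runs does not erase memories. Nothing in the thread indicates Hippo ships this; all names and constants here are hypothetical.

```typescript
// Decay clock driven by the agent's active sessions rather than
// wall-clock time (hypothetical sketch of a community suggestion).

interface Session { startMs: number; endMs: number; }

// Total active milliseconds that fall between two wall-clock instants.
function activeTimeBetween(sessions: Session[], fromMs: number, toMs: number): number {
  let total = 0;
  for (const s of sessions) {
    const start = Math.max(s.startMs, fromMs);
    const end = Math.min(s.endMs, toMs);
    if (end > start) total += end - start;
  }
  return total;
}

const ACTIVE_HALF_LIFE_MS = 10 * 3600 * 1000; // assumed: 10 active hours per half-life

// Strength halves per ACTIVE_HALF_LIFE_MS of *active* time only.
function decayOverActiveTime(strength: number, activeMs: number): number {
  return strength * Math.pow(0.5, activeMs / ACTIVE_HALF_LIFE_MS);
}
```

With this design, an agent that sleeps for a month between two one-hour sessions accrues only two hours of decay.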
How to Apply
- If you are a developer who switches between Claude Code and Cursor or uses multiple AI coding tools interchangeably, install it with `npm install -g hippo-memory && hippo init`, and then save important settings or error solutions you find in each tool with `hippo remember '...'`. This will maintain context when you switch tools.
- If your team is experiencing the same deployment bugs or configuration mistakes being repeated by AI agents, get into the habit of saving error memories in Hippo whenever an error occurs. The decay mechanism ensures that old and resolved issues fade naturally, while recurring issues are reinforced and more readily recalled in the agent's context.
- If you already have context built up in Claude Code or Cursor, you can import your CLAUDE.md and .cursorrules files into Hippo in one pass and manage them in a tool-neutral format. Afterwards, commit the mirrored Markdown files to a Git repository to share them with your team.
- If you need embedding-based semantic search, install `@xenova/transformers`. Embedding search finds past memories that are semantically similar rather than mere keyword matches, which becomes more valuable as the memory store grows.
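Under the hood, embedding-based recall amounts to ranking stored memories by vector similarity to the query. A minimal sketch, assuming embedding vectors have already been produced (in practice by @xenova/transformers); the toy three-dimensional vectors and function names are illustrative, not Hippo's API.

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank memories by semantic similarity to a query embedding,
// highest score first.
function rankBySimilarity(
  query: number[],
  memories: { text: string; embedding: number[] }[],
): { text: string; score: number }[] {
  return memories
    .map((m) => ({ text: m.text, score: cosine(query, m.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```

Keyword search (e.g. FTS5) would miss a memory phrased differently from the query; similarity ranking over embeddings is what closes that gap.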
Code Example
# Installation and initialization
npm install -g hippo-memory
hippo init
# Store a memory
hippo remember "FRED cache silently drops the t flag — always pass --no-cache explicitly"
# Import from Claude Code or Cursor
hippo import --from claude ./CLAUDE.md
hippo import --from cursor ./.cursorrules
# Recall memories (embedding search requires @xenova/transformers)
hippo recall "cache related deployment issue"
Terminology
decay: Memories fading naturally over time. Just as old or rarely used memories weaken in the brain, the importance score of old memories in Hippo automatically decreases.
retrieval strengthening: The effect whereby a memory becomes clearer each time it is retrieved. Like strengthening recall through repeated practice when studying, frequently referenced memories are given higher priority.
consolidation: The process by which short-term memories solidify into long-term ones. Just as the day's learning is organized into long-term memory during sleep, Hippo moves important memories into stable long-term storage.
SQLite FTS5: A full-text search engine built into SQLite. Specialized for text search compared to general DB queries, it quickly finds keywords across a large body of memories.
R-STDP: Reward-modulated Spike-Timing-Dependent Plasticity. A learning mechanism in which the strength of neuronal connections is adjusted according to reward signals; used by the robot memory system (MH-FLOCKE) mentioned in the comments.
MCP: Model Context Protocol. A protocol proposed by Anthropic that lets AI agents access external tools and data sources in a standardized way.
Related Resources
- Hippo Memory GitHub repository
- HippoRAG paper (related technique with similar name)
- ccrider - Session transcript search tool (MCP based)
- MH-FLOCKE - Robot memory based on Izhikevich spiking neurons
- claude-code-toolkit (skills based memory access)
- IEEE - Paper 1 on agent memory and behavior simulation
- IEEE - Paper 2 on agent memory and behavior simulation
- IEEE - Paper 3 on agent memory and behavior simulation