Show HN: Cq – Stack Overflow for AI coding agents
TL;DR Highlight
Mozilla AI's open-source cq is a knowledge commons where AI agents share what they've learned, tackling the problem of agents wasting tokens by repeatedly solving problems other agents have already solved.
Who Should Read
Developers who use AI coding agents (Claude Code, Cursor, etc.) daily and are tired of agents repeating the same mistakes and wasting tokens. Also developers designing multi-agent systems.
Core Mechanics
- cq is named from "colloquy" (understanding through dialogue) and, like the CQ signal in radio ("anyone respond"), aims to be an open commons where AI agents share knowledge with each other.
- Current AI agents independently rediscover the same solutions. For example, if one agent learns that Stripe API returns errors inside HTTP 200 responses during rate limiting, other agents still burn tokens figuring it out from scratch.
- How cq works: before starting an unfamiliar task (API integration, CI/CD setup, etc.), an agent queries the cq commons. If another agent already learned something relevant, it references that first. New discoveries are proposed back, and other agents confirm or flag them as outdated.
- A Knowledge Unit (KU) earns its confidence score from actual usage, not authority: the more agents use and verify it, the higher the score climbs.
- Mozilla AI explicitly states its goal of keeping the AI agent ecosystem open and standardized, preventing domination by a few big tech companies.
- Stack Overflow's decline serves as a cautionary symbol: monthly question volume fell from 200K in 2014 to 3,862 in late 2025 (back to launch-era levels) as LLMs trained on Stack Overflow and then drained the community that fed them, a dynamic the post compares to matriphagy (offspring consuming the parent).
- cq currently starts with a local SQLite DB (~/.cq/local.db) for team-internal use, with a phased roadmap to expand to a public commons.
Evidence
- Security concerns were the most-cited criticism. "What stops a bot from proposing a malicious npm package URL as a KU?" and "a high confidence score doesn't mean correctness — agents can't reliably detect their own mistakes, so wrong knowledge could spread at high confidence." A Tessl contributor bluntly noted: "adoption doesn't guarantee accuracy — this could efficiently propagate misinformation."
- Deep technical proposals on the trust model surfaced: one commenter cited Personalized PageRank and EigenTrust, noting that a single global trust score is vulnerable to Sybil attacks (per the 2005 Cheng & Friedman paper), and proposed a "subjective trust" model where each agent computes trust scores from its delegator's (human user's) position in the trust graph, pointing to concrete implementations (Karma3Labs/OpenRank, Nostr WoT toolkit).
- Positive responses for internal team adoption: "Our whole team keeps hitting stale GitHub Actions version issues and we're patching with CLAUDE.md workarounds — the KU verification + confidence score approach is an elegant solution." Teams using the same tech stack could benefit from a centralized knowledge repo for recurring problems.
- Fundamental skepticism about agents accurately documenting their intermediate steps: "If an agent can't reliably record the exact steps it took and their environmental dependencies, the whole premise collapses the moment a human intervenes. AI will fill unverified steps with hallucination."
- Appreciation for building open AI knowledge datasets: "If future human knowledge only ends up as private training data for ChatGPT and Anthropic, proactively building open public datasets like this is essential for open-source models and the agent ecosystem."
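The "subjective trust" proposal from the thread can be illustrated with a minimal EigenTrust-style power iteration. Everything below is invented for illustration (the agent names, trust weights, damping factor, and seed vector); it is a sketch of the idea, not OpenRank's or cq's implementation.

```python
# Toy trust graph: row i spreads its trust over agents whose KUs it has
# verified. A pair of Sybil agents vouch only for each other.
AGENTS = ["alice-agent", "bob-agent", "sybil-1", "sybil-2"]
LOCAL_TRUST = [
    [0.0, 1.0, 0.0, 0.0],  # alice's agent trusts bob's
    [1.0, 0.0, 0.0, 0.0],  # bob's agent trusts alice's
    [0.0, 0.0, 0.0, 1.0],  # sybils vouch only for each other
    [0.0, 0.0, 1.0, 0.0],
]

def personalized_trust(C, seed, alpha=0.85, iters=50):
    """EigenTrust-style iteration: t <- alpha * C^T t + (1 - alpha) * seed.
    Personalizing on the delegator's seed vector means a Sybil cluster
    with no inbound edges from the seed's neighborhood earns no trust,
    unlike a single global score it could inflate by self-vouching."""
    n = len(seed)
    t = list(seed)
    for _ in range(iters):
        t = [(1 - alpha) * seed[j]
             + alpha * sum(C[i][j] * t[i] for i in range(n))
             for j in range(n)]
    total = sum(t)
    return [x / total for x in t]

# Seed trust from alice's (the delegating human's) agent.
scores = personalized_trust(LOCAL_TRUST, [1.0, 0.0, 0.0, 0.0])
```

The Sybil pair ends up with zero trust because no path from the seed reaches them, which is exactly the property the commenter argued a single global score lacks.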
How to Apply
- If your team's AI agents repeatedly hit the same API integration issues (Stripe, GitHub Actions, specific framework configs), introduce cq as an internal KU store and configure agents to query the commons before starting tasks, reducing token waste.
- If you're currently managing agent context manually via CLAUDE.md or .cursorrules files, consider structuring that content as KUs and seeding them into cq. This gives all agents on the team a shared knowledge baseline.
- Before cq expands to a public commons, the trust model for KU proposals is the critical open question; at the internal stage, add a mandatory HITL (Human-in-the-Loop) review step so incorrect knowledge is filtered out before it earns a confidence score.
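Migrating existing CLAUDE.md or .cursorrules notes into KUs might look like the sketch below. The markdown convention assumed here (one `- ` bullet per lesson) and the KU record shape, including the `pending_review` HITL gate, are illustrative assumptions rather than cq's actual import format.

```python
import re

def notes_to_kus(markdown_text, topic):
    """Turn '- ' bullet lessons from a CLAUDE.md-style file into KU
    records that wait in a human-review queue before earning any
    confidence score. The record shape is an invented example."""
    kus = []
    for line in markdown_text.splitlines():
        m = re.match(r"^\s*[-*]\s+(.*\S)", line)
        if m:
            kus.append({
                "topic": topic,
                "claim": m.group(1),
                "status": "pending_review",  # HITL gate before scoring
                "confirms": 0,
                "flags": 0,
            })
    return kus
```

A reviewer would then flip `status` to approved (or reject the record) before agents are allowed to confirm or flag it, keeping unvetted claims out of the confidence ranking.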
Related Resources
- cq Original Blog Post (Mozilla AI)
- tokenstree.com - Similar Token Reduction Approach
- XDG Base Directory Specification (Recommended Standard for Local DB Paths)
- MIT Media Lab - AI Agent Delegated Credentials Paper (arXiv:2501.09674)
- Karma3Labs/OpenRank - EigenTrust SDK (Trust Graph Implementation)
- Mozilla AI Star Chamber Blog Post (Related Background)