Agent-to-agent pair programming
TL;DR Highlight
Introducing 'loop', a CLI tool that runs Claude and Codex side by side in tmux and lets them communicate with each other. The two agents take on the roles of developer and reviewer, mimicking human pair programming.
Who Should Read
Developers who are already using AI coding agents like Claude Code or Codex in practice, feel the limitations of a single agent, and want to experiment with multi-agent workflows.
Core Mechanics
- The author discovered an interesting pattern while using Claude and Codex side by side for code review: when both agents gave the same feedback, it was a strong signal rather than noise. This naturally led to a team rule of always acting on feedback that both reviewers agree on.
- Based on this, the author built a CLI tool called 'loop' — a simple tool that launches claude and codex side by side on tmux and connects a bridge for direct message passing between the two agents. It is open-sourced on GitHub (https://github.com/axeldelafosse/loop).
- Because loop uses the interactive TUI (terminal UI) as-is, humans can intervene at any point — answering questions, adjusting direction, or giving follow-up instructions. It is designed to keep humans 'in the loop' rather than being fully automated.
- As the Cursor research team found in their research on long-running coding agents, well-functioning agent workflows resemble human team collaboration structures. Claude Code's 'Agent teams' and Codex's 'Multi-agent' features both have a main agent distributing tasks to sub-agents, but the author goes a step further by enabling sub-agents to communicate directly with each other.
- Running agents continuously in a loop can produce more code changes than expected. The author views this mostly positively, but the volume of changes can become too large for humans to review afterwards. Open questions raised to address this include whether to split work into multiple smaller PRs, and whether to share PLAN.md in git or in PR descriptions.
- There are various reasons to use multiple agent tools simultaneously: avoiding vendor lock-in, contributing to open source, maximizing subscription limits, or leveraging the strengths and differing perspectives of each model. The author argues that multi-agent apps should support inter-agent communication as a first-class feature.
- Based on the author's experience, Claude excels at generation and creative tasks, while Codex excels at meticulous and accurate auditing and critical review. The observation is that the personality differences between the two models naturally map onto pair programming role assignments.
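The tmux layout described above can be sketched as a small shell function. This is a hypothetical simplification, not loop's actual implementation: the session name is arbitrary, `claude` and `codex` stand in for however you launch each agent, and the real tool additionally wires a message bridge between the two panes.

```shell
# Hypothetical sketch of the layout loop sets up: two agent TUIs side by
# side in one tmux session. Set DRY_RUN=1 to print the tmux commands
# instead of running them (useful if tmux or the agent CLIs are absent).
pair_session() {
  session="${1:-pair}"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
  run tmux new-session -d -s "$session" claude   # left pane: developer agent
  run tmux split-window -h -t "$session" codex   # right pane: reviewer agent
  run tmux attach-session -t "$session"          # human stays in the loop
}
```

Because both panes run the normal interactive TUIs, you can click into either one and type at any point, which is exactly the 'human in the loop' design the author describes.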
Evidence
- Firsthand accounts described a similar approach: when Claude's completed output was routed to Codex for review, it was very rare that Claude had fully and successfully finished the task, and Codex almost always found issues.
- A tip was shared that having Claude summarize its work as a 'why/where/how' document before handing it off to Codex improves review quality.
- There was skepticism about whether the effectiveness of multi-agent setups comes from actually having 'multiple agents' versus simply 'alternating between two different configurations (system prompt, model, temperature, context pruning, toolset, etc.)', suggesting the key may be introducing different perspectives and settings, not the number of agents.
- A sharp counterargument emerged about putting PLAN.md in git or a PR: once PLAN.md is committed to git, it becomes 'downstream of the implementation plan', and when implementation diverges from the plan it becomes harder to trace why certain decisions were made, since the original intent is what truly matters.
- There was also a view that pair programming itself doesn't work well for humans either: it's difficult to verbalize complex thought processes in real time, and from the outside it can look like randomly changing code. This reflects skepticism toward applying human collaboration patterns to AI agents.
- Repeated calls were made for systematic measurement of multi-agent setups, with most evidence still anecdotal; multiple comments expressed that 'the vibe is good, but we need science.'
- A similar tool, claude-review-loop (https://github.com/hamelsmu/claude-review-loop), was also mentioned.
How to Apply
- If you have subscriptions to both Claude Code and Codex, you can install the `loop` CLI and experiment with a pair programming workflow where Claude writes code and Codex reviews it.
- Treating feedback that both agents agree on as a 'must-fix' rule can reduce review noise while ensuring important issues aren't missed.
- Have Claude write a why/where/how summary document after completing a task, then pass it to Codex as review context to improve review quality. This pattern can be applied manually right away, even without loop.
- If agent loops are producing more changes than expected, consider splitting PRs into smaller feature-scoped units, or add a system prompt that explicitly limits the scope of changes, for example instructing the agent to 'log out-of-scope changes as separate issues without fixing them' to reduce the human review burden.
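The manual why/where/how handoff can be scripted in a few lines. A minimal sketch, assuming Claude Code's non-interactive `-p` flag and the Codex CLI's `exec` subcommand; check both against your installed versions, and note that the prompts and the SUMMARY.md file name are arbitrary choices, not part of loop.

```shell
# Manual handoff (no loop needed): Claude summarizes its finished work,
# then Codex reviews with that summary as context.
handoff() {
  # 1. Ask Claude to document its changes as a why/where/how summary.
  claude -p "Summarize the work you just completed as a why/where/how document." \
    > SUMMARY.md
  # 2. Pass the summary to Codex as review context.
  codex exec "Critically review the latest changes. Context: $(cat SUMMARY.md)"
}
```

Running `handoff` after each Claude task reproduces, by hand, the review step that loop automates.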
Terminology
TUI: Short for Terminal User Interface; a text-based interface operated via keyboard within a terminal. The interactive terminal screens of Claude Code and Codex are examples.
tmux: A terminal multiplexer; a CLI tool that splits a single terminal window into multiple independent panes and keeps sessions alive. loop uses it to run Claude and Codex side by side.
vendor lock-in: A situation where you become dependent on a specific vendor's service or API, making it difficult to switch elsewhere. Using multiple AI agents in parallel can distribute this risk.
orchestrator: In a multi-agent system, the main agent that coordinates the overall task and distributes subtasks to sub-agents; analogous to a team lead.
first-class feature: A feature supported as a core design principle rather than tacked on as an afterthought. Used here in the author's argument that inter-agent communication should be a core feature of multi-agent apps, not a peripheral one.
PLAN.md: A markdown file where an agent documents its plan before starting a task. It records the agent's intent and direction; there was discussion around including it in git or attaching it to PR descriptions.