OpenCode – The open source AI coding agent
TL;DR Highlight
An open-source AI coding agent for terminal, IDE, and desktop that connects to 75+ LLM providers — including reusing your existing GitHub Copilot and ChatGPT Plus subscriptions.
Who Should Read
Developers exploring AI coding tools, especially those unhappy with Claude Code or Aider, or wanting to flexibly switch between multiple LLM providers.
Core Mechanics
- OpenCode is an open-source AI coding agent available as a terminal TUI, desktop app (macOS/Windows/Linux beta), and IDE extension. With 126K GitHub stars, 800 contributors, and 5M monthly users, it is already a mature project.
- Supports 75+ LLM providers — commercial models like Claude/GPT/Gemini plus local models via llama.cpp. If you have GitHub Copilot or ChatGPT Plus/Pro, just log in to use them without separate API keys.
- Built-in LSP (Language Server Protocol) integration automatically loads the appropriate language server for your project, helping the agent understand code context more accurately.
- Multi-session support lets you run multiple agents in parallel on the same project, with different models assignable to each sub-agent. For example, GPT-4.1 for task planning and a different model for review.
- `opencode serve` launches server mode for remote access, and the official WebUI lets you manage multiple OpenCode backends (e.g. on a VPS) from one screen. Combined with Tailscale, you can even control agents from a phone.
- Privacy-first design — no code or context data stored on servers. Built for security-sensitive environments.
- The paid 'Zen' plan provides benchmarked/validated model sets for coding agent use. One user reported running the $10/month Go plan as a cost-effective full Claude replacement for two months.
- One-line install via curl (`curl -fsSL https://opencode.ai/install | bash`), with npm, bun, brew, and paru packages also available.
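The server-mode workflow above can be sketched as a small reachability check run from a client device. The tailnet IP, the port, and the `/doc` path below are assumptions for illustration, not taken from OpenCode's documentation; check `opencode serve --help` for the real options.

```shell
# Hypothetical health check for a remote `opencode serve` backend reached
# over Tailscale. HOST, PORT, and the /doc endpoint are assumptions.
HOST="${OPENCODE_HOST:-100.64.0.1}"   # tailnet IP of the VPS running `opencode serve`
PORT="${OPENCODE_PORT:-4096}"

if curl -fsS --max-time 2 "http://$HOST:$PORT/doc" >/dev/null 2>&1; then
  echo "opencode server reachable at $HOST:$PORT"
else
  echo "opencode server not reachable at $HOST:$PORT"
fi
```

Pointing the WebUI (or a script like this) at each backend's tailnet address is what makes the one-screen, multi-VPS setup described above possible.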
Evidence
- One user shared running a '$10 Go plan + spec-based workflow' combo as a complete Claude replacement for two months. They use GPT-4.1 for the task-planner and reviewer sub-agents, and found the free tiers of lesser-known models (GLM, Kimi) surprisingly productive: 'the moat of frontier labs is narrowing fast.'
- A user running llama.cpp local models, Claude, and Gemini as their main harness for months praised the LSP integration specifically. They even built a self-correcting hook system via IPC plugins on top of OpenCode (opencode-evolve project).
- Remote coding strengths were highlighted: running `opencode serve` and controlling multiple VPS backends via the WebUI, or mobile access through Tailscale. Bugs were also shared: a laptop clock running 150 ms ahead broke Sonnet/Opus ID generation on mobile, sessions occasionally failed to restore, and agents stalled during long sessions.
- Users migrating from Aider shared their experience, with one using a local Qwen 3.5 as a fallback when subscription limits are hit. Local models are slower, so subscription models are preferred, but model switching itself works well.
- Users complained that the streaming HTTP client cannot be disabled, which prevents some inference providers from connecting; a related PR was closed citing 'community standards non-compliance'. Ubuntu 24.04 Wayland compatibility issues, where the TUI won't even open, were also reported.
How to Apply
- If you're using Claude Code or Aider but concerned about cost or model lock-in, install OpenCode with one curl command and log in with your existing GitHub Copilot or ChatGPT Plus account. Use your existing subscription with zero additional API costs.
- To split task planner, coder, and reviewer into separate agents, use multi-session + sub-agent features. Assign low-cost models (GLM, Kimi free tier) to simple tasks and reserve high-performance models for the review stage to optimize costs.
- For agents running on remote servers or multiple VPS instances, launch `opencode serve` for server mode and manage the backends centrally through the WebUI. Combine with Tailscale to control agents from outside the office, including from a phone.
- For using OpenCode beyond coding — as a general agent backend with FastAPI — combine its skills feature with `opencode serve` to build a structure where agents invoke external APIs as tools. Pairing with cheap models like Minimax provides high intelligence per dollar.
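As a sketch of the planner/coder/reviewer split described above: the config file location, the `agent` key, and the model identifiers below are assumptions for illustration, not verified against OpenCode's documentation.

```shell
# Hypothetical per-agent model assignment. The path, schema, and model
# IDs are assumptions; check the OpenCode docs for the real config format.
mkdir -p ~/.config/opencode
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "agent": {
    "plan":   { "model": "github-copilot/gpt-4.1" },
    "review": { "model": "zai/glm-4.5" }
  }
}
EOF
```

The idea is simply to pin cheap or free-tier models to the high-volume roles (planning, review) while reserving a frontier model for the coder agent.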
Code Example
# Install (bash)
curl -fsSL https://opencode.ai/install | bash
# Or npm
npm install -g opencode-ai
# Or brew (macOS)
brew install opencode
# Run in server mode (for remote access)
opencode serve
# Basic run (terminal TUI)
opencode
Terminology
LSP
Language Server Protocol. A protocol standardizing how editors and language analysis servers communicate for features like autocomplete, error display, and go-to-definition. VS Code popularized this approach.
TUI
Terminal User Interface. A graphical-looking interface operable via mouse or keyboard within a terminal. Works in SSH environments without GUI apps.
harness
In AI coding context, a 'harness' is an execution framework wrapping LLMs that handles prompt management, tool calling, session management, etc. Literally a 'device for controlling' the model.
subagent
A separate agent instance that the main agent delegates specific subtasks to. For example, separate agents for planning, writing code, and reviewing code can run in parallel.
IPC
Inter-Process Communication. Methods for different processes to exchange data. Used in OpenCode plugins to connect external hook systems.
Wayland
The next-generation display server protocol replacing X11 on Linux. Adopted as default by several distributions including recent Ubuntu, though some TUI apps still have compatibility issues.