OpenCode – The open source AI coding agent
TL;DR Highlight
An open-source AI coding agent for terminal, IDE, and desktop that connects to 75+ LLM providers — including reusing your existing GitHub Copilot and ChatGPT Plus subscriptions.
Who Should Read
Developers exploring AI coding tools, especially those unhappy with Claude Code or Aider, or wanting to flexibly switch between multiple LLM providers.
Core Mechanics
- OpenCode is an open-source AI coding agent available as a terminal TUI, a desktop app (macOS/Windows/Linux beta), and an IDE extension. With 126K GitHub stars, 800 contributors, and 5M monthly users, it is already quite a mature project.
- Supports 75+ LLM providers — commercial models like Claude/GPT/Gemini plus local models via llama.cpp. If you have GitHub Copilot or ChatGPT Plus/Pro, just log in to use them without separate API keys.
- Built-in LSP (Language Server Protocol) integration automatically loads the appropriate language server for your project, helping the agent understand code context more accurately.
- Multi-session support lets you run multiple agents in parallel on the same project, with different models assignable to each sub-agent. For example, GPT-4.1 for task planning and a different model for review.
- `opencode serve` launches server mode for remote access, and the official WebUI lets you manage multiple OpenCode backends (VPS, etc.) from one screen. Combine it with Tailscale to control agents from a phone.
- Privacy-first design — no code or context data stored on servers. Built for security-sensitive environments.
- The paid 'Zen' plan provides model sets benchmarked and validated for coding-agent use. One user reported that the $10/month Go plan served as a cost-effective, full Claude replacement for two months.
- One-line install via curl (`curl -fsSL https://opencode.ai/install | bash`), plus npm, bun, brew, and paru package-manager support.
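Per-agent model assignment like the planner/reviewer split above is done through a project-level `opencode.json`. A minimal sketch, assuming the `model`, `agent`, and `description` field names from the published config schema and illustrative `provider/model` IDs; verify both against the `$schema` URL for your installed version:

```shell
# Write a minimal opencode.json in the project root.
# Field names and model IDs are assumptions based on the
# opencode.ai/config.json schema; check them against your version.
cat > opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "github-copilot/gpt-4.1",
  "agent": {
    "planner": {
      "description": "Breaks work into tasks before any code is written",
      "model": "github-copilot/gpt-4.1"
    },
    "reviewer": {
      "description": "Reviews diffs produced by the other agents",
      "model": "anthropic/claude-sonnet-4"
    }
  }
}
EOF
```

This is also where the cost-optimization pattern from the Evidence section lives: point a sub-agent's `model` at a free-tier provider (GLM, Kimi) and keep the frontier model for review only.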
Evidence
- One user shared running a '$10 Go plan + spec-based workflow' combo as a complete Claude replacement for two months. They assign GPT-4.1 to their task-planner and reviewer sub-agents, and found the free tiers of lesser-known models (GLM, Kimi) surprisingly productive: 'the moat of frontier labs is narrowing fast.'
- A user running llama.cpp local models, Claude, and Gemini as their main harness for months praised the LSP integration specifically. They even built a self-correcting hook system via IPC plugins on top of OpenCode (opencode-evolve project).
- Remote coding strengths were highlighted: running `opencode serve` and controlling multiple VPS backends via the WebUI, or mobile access through Tailscale. Bugs were also shared: a laptop clock running 150ms ahead broke Sonnet/Opus ID generation on mobile, sessions randomly failed to restore, and the agent stalled during long sessions.
- Users migrating from Aider shared their experience, with one using local Qwen 3.5 as a fallback when subscription limits are hit. Local models are slower so subscription models are preferred, but model switching itself works well.
- One user complained that the streaming HTTP client cannot be disabled, which prevents some inference providers from connecting; a related PR was closed citing 'community standards non-compliance.' An Ubuntu 24.04 Wayland compatibility issue where the TUI won't even open was also reported.
How to Apply
- If you're using Claude Code or Aider but concerned about cost or model lock-in, install OpenCode with one curl command and log in with your existing GitHub Copilot or ChatGPT Plus account. Use your existing subscription with zero additional API costs.
- To split task planner, coder, and reviewer into separate agents, use multi-session + sub-agent features. Assign low-cost models (GLM, Kimi free tier) to simple tasks and reserve high-performance models for the review stage to optimize costs.
- For agents running on remote servers or multiple VPS instances, launch `opencode serve` for server mode and manage the backends centrally through the WebUI. Combine with Tailscale to control agents from mobile outside the office.
- For using OpenCode beyond coding — as a general agent backend with FastAPI — combine its skills feature with `opencode serve` to build a structure where agents invoke external APIs as tools. Pairing with cheap models like Minimax provides high intelligence per dollar.
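The remote setup described above can be sketched as follows. The `--hostname`/`--port` flags and the Tailscale binding are assumptions, not verified invocations; check `opencode serve --help` for the actual option names before relying on them:

```shell
# On the VPS: join the tailnet, then start OpenCode in server mode.
# Flag names are assumptions; confirm with `opencode serve --help`.
tailscale up
opencode serve --hostname 0.0.0.0 --port 4096

# From a laptop or phone on the same tailnet, point the WebUI (or curl)
# at http://<vps-tailnet-name>:4096 to reach this backend.
```

Binding to the Tailscale address (rather than a public interface) keeps the server reachable only from your own devices, which matters given that the server exposes agent control.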
Code Example
# Install (bash)
curl -fsSL https://opencode.ai/install | bash
# Or npm
npm install -g opencode-ai
# Or brew (macOS)
brew install opencode
# Run in server mode (for remote access)
opencode serve
# Basic run (terminal TUI)
opencode
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to 7 parallel sub-agents, each reviewing a PR from a different perspective, with automatic fixes applied. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings, a fun case that pushes the idea of 'Markdown as code, the LLM as processor' all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code, etc.) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use effectively; as agents increasingly rely on CLIs as tools, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that divide up roles and collaborate, letting you assemble multi-agent pipelines quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox where AI agents can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.