Show HN: Coasts – Containerized Hosts for Agents
TL;DR Highlight
A CLI tool that resolves port conflicts and environment interference issues when running multiple AI coding agents simultaneously, using Docker-based isolated containers. Combined with git worktree, it enables parallel execution of N independent development environments on a single machine.
Who Should Read
Developers who run multiple AI coding agents like Claude Code or Codex simultaneously, or developers who have struggled with port conflicts or environment interference while doing parallel development with git worktree.
Core Mechanics
- Coasts (Containerized Hosts) is a CLI tool that can spin up N parallel instances of complete development environments on a single machine. By assigning each git worktree its own isolated container environment, multiple AI agents can work concurrently without port conflicts or file interference.
- No changes to existing code are required. Simply add a single Coastfile to the project root, and if you're already using docker-compose.yml, it can be reused as-is. It also works with projects that don't use Docker.
- There are two port strategies: you can 'check out' one coast to bind to standard ports (80, 3000, etc.), while other worktrees are accessed via dynamic ports, allowing you to monitor the progress of multiple environments simultaneously.
- It adopts a DinD (Docker in Docker) approach to provide a full Docker API inside each container. DinD was chosen over simple mount namespaces in order to run the user's docker-compose without modification. However, this incurs approximately 200MB of overhead per containerized host.
- The 'shared-services' concept is supported, allowing services that don't need isolation (e.g., PostgreSQL, Redis) to be declared in the Coastfile so they run only once on the host Docker daemon and are shared across all coasts. This reduces unnecessary resource waste.
- It is not tied to any specific AI provider or agent harness. Since it only requires git worktree, you can switch to any tool—Claude Code, Codex, Cursor, etc.—without needing to change your environment configuration.
- It is designed offline-first with no external service dependencies. A core design principle is that even if the Coasts service itself disappears, local workflows continue to function as-is.
- Running agents directly inside a coast is currently limited due to OAuth token issues. In Anthropic's case, OAuth tokens are quickly invalidated when the runtime environment changes, so using agents inside a coast requires API key-based authentication. Agents that require browser runtimes like Playwright also need separate setup.
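The workflow described above can be sketched as a short command sequence. This is illustrative only: the branch and worktree names are hypothetical examples, and the coast subcommands are the ones shown in the Code Example section below.
# Create one worktree per feature branch (names are examples)
git worktree add ../proj-auth feature/auth
git worktree add ../proj-billing feature/billing
# Start an isolated coast in each worktree (reachable via dynamic ports)
(cd ../proj-auth && coast up)
(cd ../proj-billing && coast up)
# Bind one worktree to the canonical ports (80, 3000, etc.)
coast checkout proj-auth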
Evidence
- Many commenters reported having hit exactly this pain: running 4–5 Codex/Claude Code sessions in parallel across worktrees, finding it too difficult due to severe port conflicts, and falling back to tools like Cursor or Devin that provide their own containers, or manually isolating environments each time.
- A competing service, specific.dev, commented that they solve the same problem differently: their CLI manages port allocation directly and passes assignments via env vars instead of using Docker, noting that Docker on Mac is still not a great experience. They also cited the advantage of deploying the same configuration directly to production.
- An interesting use-case idea emerged around MCP server isolation. Since MCP servers currently run as local stdio processes, installing a third-party MCP server raises security concerns that this tool could address. There was also a suggestion that supporting stdio-to-HTTP bridging would allow local MCP servers to be exposed remotely.
- A question was raised about how reliably agents respect the 'coast exec' boundary, specifically whether isolation settings are inherited when an agent spawns a sub-agent. This remains unanswered.
- A technical edge case about the hot strategy also went unresolved: when running umount -l /workspace + mount --bind + mount --make-rshared inside a DinD container, the lazy unmount means file watchers may still hold file descriptors on the old worktree while the new mount becomes active, potentially causing continued writes to stale paths. Whether inotify events would allow natural recovery was asked but not answered.
How to Apply
- To run Claude Code or Codex simultaneously across multiple feature branches: add a Coastfile to the project root, install with 'eval $(curl -fsSL https://coasts.dev/install)', and spin up a coast for each git worktree to run N agent sessions in parallel without port conflicts. If your project already has a docker-compose.yml, you can reference it directly from the Coastfile.
- Declare services that don't need isolation, such as PostgreSQL or Redis, as shared-services so they are shared across all coasts and save memory. Since each coast carries ~200MB of overhead, it pays to declare as many shareable services as possible.
- To safely run third-party MCP servers locally, consider using Coasts to run them in isolated containers. stdio-to-HTTP bridging is not officially supported yet, but isolation itself is possible, making it worth experimenting with to reduce the security risks of untrusted third-party MCP servers.
Code Example
# Installation
eval "$(curl -fsSL https://coasts.dev/install)"
# Coastfile example (add to project root)
# When using docker-compose.yml as-is
compose: docker-compose.yml
# Declare services that don't need isolation as shared (saves resources)
shared-services:
- postgres
- redis
# Run a development environment instance (for each worktree)
# coast up # Start a coast for the current worktree
# coast checkout <worktree> # Bind a specific worktree to canonical ports
# coast exec <worktree> <command> # Run a command inside a specific coast
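The MCP isolation idea from the comments can be sketched the same way. This is a hedged, hypothetical example: <mcp-server-command> is a placeholder for the third-party server's actual launch command, and since stdio-to-HTTP bridging is not officially supported, the agent must still reach the server's stdio through coast exec.
# Hypothetical: sandbox an untrusted third-party MCP server in its own coast
git worktree add ../mcp-sandbox main
(cd ../mcp-sandbox && coast up)
coast exec mcp-sandbox <mcp-server-command>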
Related Papers
How Claude Code works in large codebases
A write-up of the patterns Anthropic uses to operate Claude Code across multi-million-line monorepos, legacy systems, and environments with dozens of microservices. It covers why agentic search is used instead of a RAG approach, along with the limitations observed in practice.
Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model
A project that distills only Gemini's function-calling (tool calling) capability into an ultra-lightweight 26M (26 million) parameter model, which can run directly on edge devices such as phones, watches, and smart glasses.
Show HN: Agentic interface for mainframes and COBOL
A developer tool that lets AI agents operate decades-old mainframe (z/OS) environments. It handles everything from writing COBOL code to running JCL and debugging via natural language, which can significantly reduce the maintenance cost of legacy systems.
Show HN: Statewright – Visual state machines that make AI agents reliable
An open-source project addressing the problem that giving an AI agent 40+ tools actually degrades performance, by using a state machine to restrict which tools are available at each step. The core idea is to improve reliability by shrinking the problem space rather than using a bigger model.
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that made Claude Code respond to actual pings by having it parse IP packets directly and construct ICMP echo replies, an amusing case of pushing the idea that 'Markdown is code and the LLM is the processor' all the way down to the network-stack level.