Show HN: Coasts – Containerized Hosts for Agents
TL;DR Highlight
A CLI tool that resolves port conflicts and environment interference issues when running multiple AI coding agents simultaneously, using Docker-based isolated containers. Combined with git worktree, it enables parallel execution of N independent development environments on a single machine.
Who Should Read
Developers who run multiple AI coding agents like Claude Code or Codex simultaneously, or developers who have struggled with port conflicts or environment interference while doing parallel development with git worktree.
Core Mechanics
- Coasts (Containerized Hosts) is a CLI tool that can spin up N parallel instances of complete development environments on a single machine. By assigning each git worktree its own isolated container environment, multiple AI agents can work concurrently without port conflicts or file interference.
- No changes to existing code are required. Simply add a single Coastfile to the project root, and if you're already using docker-compose.yml, it can be reused as-is. It also works with projects that don't use Docker.
- There are two port strategies: you can 'check out' one coast to bind to standard ports (80, 3000, etc.), while other worktrees are accessed via dynamic ports, allowing you to monitor the progress of multiple environments simultaneously.
- It adopts a DinD (Docker in Docker) approach to provide a full Docker API inside each container. DinD was chosen over simple mount namespaces in order to run the user's docker-compose without modification. However, this incurs approximately 200MB of overhead per containerized host.
- The 'shared-services' concept is supported, allowing services that don't need isolation (e.g., PostgreSQL, Redis) to be declared in the Coastfile so they run only once on the host Docker daemon and are shared across all coasts. This reduces unnecessary resource waste.
- It is not tied to any specific AI provider or agent harness. Since it only requires git worktree, you can switch to any tool—Claude Code, Codex, Cursor, etc.—without needing to change your environment configuration.
- It is designed offline-first with no external service dependencies. A core design principle is that even if the Coasts service itself disappears, local workflows continue to function as-is.
- Running agents directly inside a coast is currently limited due to OAuth token issues. In Anthropic's case, OAuth tokens are quickly invalidated when the runtime environment changes, so using agents inside a coast requires API key-based authentication. Agents that require browser runtimes like Playwright also need separate setup.
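The port-conflict problem the checkout strategy addresses is easy to reproduce without Coasts at all: two dev servers simply cannot bind the same port. A minimal shell demonstration, where python3's built-in HTTP server stands in for any dev server and port 38731 is an arbitrary pick:

```shell
# Illustration of the underlying conflict (not Coasts itself): the second
# process that tries to bind an already-used port fails immediately.
python3 -m http.server 38731 >/dev/null 2>&1 &
first=$!
sleep 1                                   # let the first server bind
python3 -m http.server 38731 >/dev/null 2>&1
rc=$?                                     # fails: address already in use
echo "second server exit code: $rc"
kill "$first" 2>/dev/null
```

With per-worktree containers, each agent's dev server binds its standard port inside its own network namespace, so this collision never happens on the host.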
Evidence
- Many commenters reported having hit this exact pain: running 4–5 Codex/Claude Code sessions in parallel across worktrees, finding it unworkable due to severe port conflicts, and resorting either to tools like Cursor or Devin that provide their own containers, or to manually isolating environments each time.
- A competing service, specific.dev, commented that they solve the same problem differently: their CLI manages port allocation directly and passes assignments via env vars instead of using Docker, noting that Docker on Mac still isn't a great experience. They also cited the advantage of being able to deploy the same configuration directly to production.
- An interesting use case idea emerged around MCP server isolation. Since MCP servers currently run as local stdio processes, security concerns arise every time a third-party MCP server is installed, and this tool could address that. One suggestion was that supporting stdio-to-HTTP bridging would allow local MCP servers to be exposed remotely.
- A question was raised about how reliably agents respect the 'coast exec' boundary, specifically whether isolation settings are inherited when an agent spawns a sub-agent. This remains unanswered.
- A technical edge case about the hot strategy also went unresolved: when running umount -l /workspace + mount --bind + mount --make-rshared inside a DinD container, a new mount can become active while file watchers still hold file descriptors on the old worktree (due to the lazy unmount), potentially causing continued writes to stale paths. Whether inotify events would allow natural recovery was asked but not answered.
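The unresolved hot-strategy question above is at heart about stale file descriptors. The actual sequence (umount -l, mount --bind, mount --make-rshared) needs root and a mount namespace, but the effect being asked about can be reproduced with a plain file swap, since a held descriptor keeps pointing at the old inode even after the path is replaced:

```shell
# A long-lived fd (like a file watcher's) survives the path being swapped:
# reads through it still hit the old content, not the new file at that path.
tmp=$(mktemp -d)
echo "old worktree" > "$tmp/workspace"
exec 3< "$tmp/workspace"                # watcher-style long-lived fd
mv "$tmp/workspace" "$tmp/stale"        # swap the path out...
echo "new worktree" > "$tmp/workspace"  # ...and put a new file in its place
read -r seen <&3                        # the held fd still sees the old data
echo "fd 3 sees: $seen"
exec 3<&-
rm -rf "$tmp"
```

Whether a watcher recovers depends on it reacting to inotify events by reopening paths rather than continuing to use old descriptors, which is exactly the open question in the thread.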
How to Apply
- To run Claude Code or Codex simultaneously across multiple feature branches: add a Coastfile to the project root, install with 'eval $(curl -fsSL https://coasts.dev/install)', and spin up a coast for each git worktree to run N agent sessions in parallel without port conflicts.
- If your project already has a docker-compose.yml, reference it directly from the Coastfile. Declare services that don't need isolation, like PostgreSQL or Redis, as shared-services so a single instance is shared across coasts, saving memory. Since each coast carries ~200MB of overhead, it pays to declare as many shareable services as possible.
- To run third-party MCP servers safely on your local machine, consider using Coasts to run them in isolated containers. stdio-to-HTTP bridging is not officially supported yet, but the isolation itself works, making it worth experimenting with to reduce the security risks of untrusted third-party MCP servers.
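The per-worktree workflow above starts from plain git worktree. A self-contained sketch of creating two parallel worktrees, one per agent session (the repo name 'demo' and branch names 'feat-a'/'feat-b' are made up; the coast command is shown only as a comment since it requires the tool to be installed):

```shell
# Create a throwaway repo with one worktree per planned agent session.
git init -q demo && cd demo
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m init
git worktree add -q ../demo-feat-a -b feat-a
git worktree add -q ../demo-feat-b -b feat-b
git worktree list | wc -l    # main checkout + two worktrees (3 lines)
# then, per worktree: cd ../demo-feat-a && coast up
```

Each worktree is a full checkout on its own branch, which is what lets each coast mount an independent copy of the project.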
Code Example
snippet
# Installation
eval "$(curl -fsSL https://coasts.dev/install)"
# Coastfile example (add to project root)
# When using docker-compose.yml as-is
compose: docker-compose.yml
# Declare services that don't need isolation as shared (saves resources)
shared-services:
- postgres
- redis
# Run a development environment instance (for each worktree)
# coast up # Start a coast for the current worktree
# coast checkout <worktree> # Bind a specific worktree to canonical ports
# coast exec <worktree> <command> # Run a command inside a specific coast
Terminology
git worktree: A feature that allows multiple branches of a single git repository to be checked out simultaneously in different directories. This enables parallel work on multiple tasks without switching branches.
DinD: Short for Docker in Docker. A method of running another Docker daemon inside a container, giving each isolated environment its own independent Docker API. It is resource-heavy but provides full Docker functionality.
MCP: Short for Model Context Protocol. A protocol that enables AI agents to communicate with external tools (file systems, APIs, etc.) in a standardized way, typically running locally as stdio (standard input/output) processes.
inotify: A Linux kernel mechanism that detects and reports file system changes in real time. File watchers receive these events to trigger automatic builds, hot reloads, and similar actions.
hot strategy: A method of switching a running container environment to a different worktree without restarting. It achieves fast switching by swapping mount points, but edge cases such as lazy unmount timing issues can occur.
shared-services: A Coasts concept whereby multiple isolated environments are declared to share a single service instance. Services like databases that don't require data isolation are shared to reduce memory and CPU waste.