Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR
TL;DR Highlight
A Kubernetes-based workflow automation tool where an AI agent writes code from GitHub Issues or Linear tickets, automatically fixes CI failures, incorporates review comments, and merges PRs — all without human intervention. It stands out for fully automating the entire ticket-to-PR cycle.
Who Should Read
DevOps or platform engineers looking to integrate AI coding agents into their team's development workflow. Particularly suited for teams running Kubernetes infrastructure who want to automate repetitive coding tasks.
Core Mechanics
- Optio accepts tasks via three methods — GitHub Issues, Linear tickets, or manual input — and for each task, provisions an isolated Kubernetes Pod and runs an AI agent (Claude Code or OpenAI Codex) inside it.
- The 'feedback loop' is the core feature, going beyond simple code generation. If CI fails, the agent is automatically resumed with the failure context; if a reviewer requests changes, the agent reads the review comments and pushes fix commits.
- Once all checks pass, the PR is squash merged and the issue is automatically closed. The goal is full automation from 'task description to merge complete' without anyone needing to click a PR button.
- Each task runs in an isolated git worktree, so multiple tasks can run in parallel. A dashboard provides real-time visibility into the number of running agents, Pod status, costs, and recent activity.
- The task detail view offers live streaming of agent output, pipeline progress, PR tracking, and per-task cost analysis, providing operational visibility.
- A Helm chart is included for deployment to Kubernetes clusters, and a docker-compose.yml is also provided for running in local development environments. The project uses a pnpm + Turborepo monorepo structure with separate API, web, and agent components.
- It is open source (MIT license), and the repository includes files like CLAUDE.md and CONTRIBUTING.md that provide context for AI agents to work with. It has currently received 366 stars on GitHub.
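The resume-on-failure loop described above can be sketched as simple control flow. This is an illustrative sketch only, not Optio's implementation: check_ci and run_agent are invented stand-ins for polling the CI checks API and re-invoking the agent with failure context.

```shell
# Illustrative only: check_ci and run_agent are stand-ins for real
# CI polling and agent invocation; here CI "passes" on the third try.
check_ci() { [ "$1" -ge 3 ]; }
run_agent() { echo "agent attempt $1 (context: $2)"; }

attempt=1
run_agent "$attempt" "task description"
until check_ci "$attempt"; do
  attempt=$((attempt + 1))
  run_agent "$attempt" "CI failure logs"   # resume with failure context
done
echo "checks passed after $attempt attempts; ready to squash merge"
```

The key design point is that each retry carries new context (the failure logs), rather than re-running the same prompt blindly.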
Evidence
- Skepticism that it won't work properly without human oversight came up in multiple comments. Commenters observed that only the simplest tickets give an AI enough direction from a GitHub Issue alone, and some argued LLMs are better used to communicate design and architecture decisions to humans than to generate code.
- Parallel-execution conflicts were raised as a practical concern: if agent A is working on a PR that modifies shared/utils.py and agent B receives a ticket that also needs the same file, does the orchestrator perform dependency analysis upfront or handle it as a merge conflict? No clear answer was given.
- Real-world experience with retry token costs was shared: a developer building a similar system noted that excessive token consumption during agent retries was their biggest challenge, and asked how Optio handles this; checkpoints are used to roll back to a previous state on failure.
- Some felt Kubernetes as a hard requirement was a burden, arguing K8s should be one deployment option rather than central to agent setup. "GitHub Actions + @claude mention" was suggested as a cheaper way to achieve similar results.
- There was also direct criticism asking "what stops it from spitting out garbage that breaks the codebase," with responses suggesting "you should want to review agent output."
- Practical questions followed about MCP (Model Context Protocol) support, sandboxed multi-tenant isolation, and whether Pods are scoped per repo or per task.
How to Apply
- Teams managing backlogs with Linear or GitHub Issues can start by connecting repetitive, clearly specified tickets (e.g., adding a specific API endpoint, fixing type errors, improving test coverage) to Optio and experimenting with the automation pipeline. Initially, focus on verifying that the CI auto-fix loop works correctly.
- To try it without K8s, first run it locally using the docker-compose.yml included in the repo. Reference .env.example to configure your Claude or OpenAI API key and GitHub token, and you can test the full workflow in a local environment.
- For production deployment, use the Helm chart in the helm/optio directory to deploy to an existing K8s cluster. In a multi-tenant environment, first verify that sandbox isolation between Pods is sufficient; the community has noted that this is not yet clearly documented.
- To reduce the risk of conflicts when parallel agents modify the same files, it is safer to initially run only tasks involving independent modules or files in parallel, and establish a task assignment strategy that runs tasks touching shared utilities sequentially.
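The shared-file risk above can be seen concretely: isolated worktrees keep parallel edits apart while agents work, but two branches touching the same file still collide at merge time. A minimal local demo (repo layout, file, and branch names are invented for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
mkdir shared && echo "base" > shared/utils.py
git add . && git commit -qm "base"

# Two "agents" work in parallel, each in its own worktree and branch.
git worktree add ../task-a -b task-a >/dev/null 2>&1
git worktree add ../task-b -b task-b >/dev/null 2>&1
(cd ../task-a && echo "change A" > shared/utils.py && git commit -qam "A")
(cd ../task-b && echo "change B" > shared/utils.py && git commit -qam "B")

git merge -q task-a                 # first merge fast-forwards cleanly
git merge task-b >/dev/null 2>&1 \
  || echo "task-b conflicts on shared/utils.py"
```

This is why sequencing tasks that touch shared utilities is safer than relying on worktree isolation alone.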
Code Example
# Run locally
git clone https://github.com/jonwiggins/optio
cd optio
cp .env.example .env
# Set ANTHROPIC_API_KEY, GITHUB_TOKEN, etc. in .env
docker-compose up
# Deploy to K8s with Helm
helm install optio ./helm/optio \
  --set env.ANTHROPIC_API_KEY=<your-key> \
  --set env.GITHUB_TOKEN=<your-token>
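Equivalently, the two --set flags can live in a Helm values file. The env key structure below is assumed from those flags and has not been checked against the chart's actual schema:

```shell
# Write a hypothetical values file mirroring the --set flags above.
cd "$(mktemp -d)"
cat > my-values.yaml <<'EOF'
env:
  ANTHROPIC_API_KEY: "<your-key>"
  GITHUB_TOKEN: "<your-token>"
EOF
# Then: helm install optio ./helm/optio -f my-values.yaml
```

A values file keeps secrets out of shell history and plays better with GitOps workflows than inline --set flags.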
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings; a fun case of pushing the "Markdown is the code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents rely on CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that divide roles and collaborate, letting you assemble a multi-agent pipeline quickly with zero configuration, Vite-style.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox environment where an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.