Show HN: OneCLI – Vault for AI Agents in Rust
TL;DR Highlight
A pattern where AI agents call external services using synthetic, OAuth-style credentials that route through your proxy server: the agents never hold real API keys.
Who Should Read
Security engineers and developers building AI agent systems that need to call external APIs without giving agents direct credential access.
Core Mechanics
- The core problem: AI agents need API keys to call external services, but giving agents direct access to real keys creates security risks (key exfiltration, scope abuse).
- The solution: agents are issued fake/synthetic credentials that look like real API keys. When the agent calls an external service with this credential, it hits a proxy server that authenticates the agent, validates the request, and replaces the fake key with the real one before forwarding.
- This enables fine-grained authorization: the proxy can enforce what endpoints the agent can call, rate-limit it, log all calls, and revoke access without rotating real credentials.
- The pattern mirrors how OAuth works for humans — the agent gets a token scoped to specific permissions, not the master credential.
- This is especially valuable for multi-agent systems where you want different agents to have different permission scopes.
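The substitution step described above can be sketched in a few lines. This is a minimal illustration, not OneCLI's actual API: the names (`AgentGrant`, `swap_credentials`, the registry layout, and the placeholder keys) are all hypothetical.

```python
# Sketch of the gateway's core move: authenticate the agent by its fake key,
# check the requested scope, then swap in the real credential before forwarding.
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    real_key: str                                    # real credential; never sent to the agent
    allowed: set[str] = field(default_factory=set)   # "METHOD host/path" scopes
    revoked: bool = False

# Hypothetical registry mapping synthetic keys to grants (placeholder values).
REGISTRY = {
    "FAKE_KEY_openai": AgentGrant(
        real_key="sk-real-placeholder",
        allowed={"POST api.openai.com/v1/chat/completions"},
    ),
}

def swap_credentials(method: str, url: str, headers: dict) -> dict:
    """Validate an agent request and replace its fake key with the real one."""
    fake = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    grant = REGISTRY.get(fake)
    if grant is None or grant.revoked:
        raise PermissionError("unknown or revoked agent credential")
    scope = f"{method} {url.removeprefix('https://')}"
    if scope not in grant.allowed:
        raise PermissionError(f"scope not granted: {scope}")
    forwarded = dict(headers)
    forwarded["Authorization"] = f"Bearer {grant.real_key}"
    return forwarded
```

Because the agent only ever sees `FAKE_KEY_openai`, exfiltrating it buys an attacker nothing outside the proxy's policy.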
Evidence
- The author demonstrated the pattern with a working implementation, showing how the proxy intercepts and validates agent requests before forwarding.
- HN security commenters validated this as sound practice, noting it's essentially applying the principle of least privilege to AI agents.
- Some pointed out that this adds a hop and potential latency — worth measuring for latency-sensitive workflows.
- Others noted that cloud providers (AWS, GCP) already have similar patterns for machine identities (IAM roles, Workload Identity) — this adapts those patterns for AI agents.
How to Apply
- For any AI agent that needs to call external APIs, provision a proxy layer rather than giving the agent direct credentials.
- Scope each agent's synthetic credential to exactly the API endpoints it needs — if an agent only needs to read from Slack, its credential should only allow GET requests to Slack's read endpoints.
- Log all agent API calls through the proxy — this gives you an audit trail for debugging and security review.
- Design the proxy to be revocable: if an agent behaves unexpectedly, you can disable its synthetic credential without rotating your real service credentials.
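The last two points, audit logging and revocation without key rotation, can be sketched as follows. All names here are illustrative assumptions, not OneCLI's implementation.

```python
# Sketch of proxy-side audit logging plus instant revocation of a synthetic
# credential. Revoking the fake key cuts off the agent while the real
# upstream key stays valid, so no rotation is required.
import time

AUDIT_LOG: list[dict] = []
REVOKED: set[str] = set()

def revoke(fake_key: str) -> None:
    """Disable an agent's synthetic credential without touching real keys."""
    REVOKED.add(fake_key)

def is_allowed(fake_key: str) -> bool:
    return fake_key not in REVOKED

def record_call(fake_key: str, method: str, url: str, status: int) -> dict:
    """Append one proxied request to the audit trail for later review."""
    entry = {"ts": time.time(), "agent_key": fake_key,
             "method": method, "url": url, "status": status}
    AUDIT_LOG.append(entry)
    return entry
```

In a real deployment the log would go to durable storage and the revocation set would be shared across proxy instances; the in-memory structures here just show the shape of the policy.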
Code Example
# vault_get.sh (fetching secrets from HashiCorp Vault, an alternative mentioned in the comments)
# Called from within agent skill scripts to keep keys out of the LLM context
# https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...
# .env.example configuration for OneCLI usage
# Only FAKE_KEY is passed to the agent, actual keys are stored in the OneCLI dashboard
OPENAI_API_KEY=FAKE_KEY
STRIPE_SECRET_KEY=FAKE_KEY
# Include a Proxy-Authorization header when the agent makes HTTP calls:
# curl -x http://onecli-gateway:8080 \
#   -H 'Proxy-Authorization: Bearer <access-token>' \
#   -H 'Authorization: Bearer FAKE_KEY' \
#   https://api.openai.com/v1/chat/completions
# Gateway replaces FAKE_KEY with the real key before forwarding externally
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and even applies automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so it actually answers pings, an entertaining case of pushing the "Markdown is code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and can even blame which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents invoke CLIs ever more often, these design practices are becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents collaborating in distinct roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where they can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.