Show HN: OneCLI – Vault for AI Agents in Rust
TL;DR Highlight
A pattern where AI agents call external services using synthetic, OAuth-style credentials; every call is routed through a proxy server you control, so agents never hold real API keys.
Who Should Read
Security engineers and developers building AI agent systems that need to call external APIs without giving agents direct credential access.
Core Mechanics
- The core problem: AI agents need API keys to call external services, but giving agents direct access to real keys creates security risks (key exfiltration, scope abuse).
- The solution: agents are issued fake/synthetic credentials that look like real API keys. When the agent calls an external service with this credential, it hits a proxy server that authenticates the agent, validates the request, and replaces the fake key with the real one before forwarding.
- This enables fine-grained authorization: the proxy can enforce what endpoints the agent can call, rate-limit it, log all calls, and revoke access without rotating real credentials.
- The pattern mirrors how OAuth works for humans — the agent gets a token scoped to specific permissions, not the master credential.
- This is especially valuable for multi-agent systems where you want different agents to have different permission scopes.
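The key-swap step described above can be sketched in a few lines of Python. This is a minimal illustration, not OneCLI's actual implementation; the table names, agent tokens, and the `forward` function are all hypothetical.

```python
# Hypothetical sketch of the proxy's key-swap step. Real keys live only in
# these server-side tables; agents only ever see the synthetic credential.
REAL_KEYS = {"FAKE_KEY_OPENAI": "sk-real-openai-key"}

# Per-agent scopes: which (method, host) pairs each agent token may use.
AGENT_SCOPES = {
    "agent-token-1": {("POST", "api.openai.com")},
}

def forward(agent_token: str, method: str, host: str, headers: dict) -> dict:
    """Authenticate the agent, check scope, and swap the fake key for the real one."""
    if (method, host) not in AGENT_SCOPES.get(agent_token, set()):
        raise PermissionError(f"{method} {host} is out of scope for this agent")
    fake = headers.get("Authorization", "").removeprefix("Bearer ")
    real = REAL_KEYS.get(fake)
    if real is None:
        raise PermissionError("unknown synthetic credential")
    out = dict(headers)
    out["Authorization"] = f"Bearer {real}"  # only the upstream call carries the real key
    return out
```

Because the substitution happens server-side, an exfiltrated `FAKE_KEY_OPENAI` is useless without a valid agent token and a matching scope entry.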
Evidence
- The author demonstrated the pattern with a working implementation, showing how the proxy intercepts and validates agent requests before forwarding.
- HN security commenters validated this as sound practice, noting it's essentially applying the principle of least privilege to AI agents.
- Some pointed out that this adds a hop and potential latency — worth measuring for latency-sensitive workflows.
- Others noted that cloud providers (AWS, GCP) already have similar patterns for machine identities (IAM roles, Workload Identity) — this adapts those patterns for AI agents.
How to Apply
- For any AI agent that needs to call external APIs, provision a proxy layer rather than giving the agent direct credentials.
- Scope each agent's synthetic credential to exactly the API endpoints it needs — if an agent only needs to read from Slack, its credential should only allow GET requests to Slack's read endpoints.
- Log all agent API calls through the proxy — this gives you an audit trail for debugging and security review.
- Design the proxy to be revocable: if an agent behaves unexpectedly, you can disable its synthetic credential without rotating your real service credentials.
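The scoping, logging, and revocation points above can be combined into one small policy table. This is a sketch under assumed names (`Gateway`, `AgentPolicy`, the Slack endpoint strings); the post does not specify OneCLI's actual policy format.

```python
# Hypothetical revocable, scoped policy table with an audit log.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed: set          # e.g. {("GET", "slack.com/api/conversations.history")}
    revoked: bool = False

@dataclass
class Gateway:
    policies: dict = field(default_factory=dict)   # synthetic credential -> AgentPolicy
    audit_log: list = field(default_factory=list)  # every decision, allowed or denied

    def authorize(self, credential: str, method: str, endpoint: str) -> bool:
        policy = self.policies.get(credential)
        ok = (policy is not None and not policy.revoked
              and (method, endpoint) in policy.allowed)
        self.audit_log.append((credential, method, endpoint, ok))
        return ok

    def revoke(self, credential: str) -> None:
        # Disables the synthetic credential; real service keys are untouched.
        self.policies[credential].revoked = True
```

For example, a read-only Slack agent would get a policy allowing only `("GET", "slack.com/api/conversations.history")`: its `POST` attempts are denied, and after `revoke()` even its reads stop, all without rotating the real Slack token.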
Code Example
# vault_get.sh (Fetching secrets from Hashicorp Vault - alternative mentioned in comments)
# Called from within agent skill scripts to prevent keys from being exposed in LLM context
# https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...
# .env.example configuration for OneCLI usage
# Only FAKE_KEY is passed to the agent, actual keys are stored in the OneCLI dashboard
OPENAI_API_KEY=FAKE_KEY
STRIPE_SECRET_KEY=FAKE_KEY
# Include Proxy-Authorization header when agent makes HTTP calls
# curl -x http://onecli-gateway:8080 \
# -H 'Proxy-Authorization: Bearer <access-token>' \
# -H 'Authorization: Bearer FAKE_KEY' \
# https://api.openai.com/v1/chat/completions
# Gateway replaces FAKE_KEY with the real key before forwarding externally
Terminology
OAuth — Open Authorization: an authorization protocol that allows applications to access resources on behalf of users without sharing passwords, using scoped tokens.
Principle of least privilege — a security principle stating that any entity (user, process, agent) should have only the minimum permissions required for its task.
IAM — Identity and Access Management: cloud provider systems for defining which entities can perform what actions on which resources.
Credential exfiltration — when a compromised or malicious process extracts credentials (API keys, tokens) and uses them without authorization.