CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production
TL;DR Highlight
Brex’s CrabTrap intercepts all HTTP requests from AI agents, using an LLM judge to allow or deny access based on policy, sparking debate over the fundamental limits of LLM-based security layers.
Who Should Read
Backend/infrastructure developers operating AI agents in production and seeking to control unauthorized API calls or sensitive data exfiltration.
Core Mechanics
- CrabTrap functions as an HTTP proxy positioned between AI agents and the open internet, intercepting all outgoing requests, evaluating them against defined policies, and either allowing or blocking them in real-time.
- It combines two judgment methods: fast static rules for initial filtering, and an LLM judge invoked for ambiguous requests that rules alone cannot resolve; the method behind each decision is logged.
- Brex claims that automatically generated policies, tested against days of real traffic, aligned with human judgment in ‘the vast majority’ of cases; community members counter that ‘99% safe’ is a failing grade for a security layer.
- To prevent prompt injection attacks, policy content is serialized as JSON (using json.Marshal) and embedded in the prompt, escaping special characters and command-like text.
- Brex started from the premise that agent security is currently stuck in a binary ‘all or nothing’ paradigm, attempting to balance the trade-off between powerful but risky access and restrictive but useless lockdown.
- Installation requires installing a self-signed certificate system-wide to perform HTTPS traffic MITM (man-in-the-middle) interception, a process some commenters found inconvenient.
- The project is open-source and available on GitHub, with Brex advertising a ‘30-second setup’.
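The two-tier judgment flow described above can be sketched as follows. This is an illustrative reconstruction, not CrabTrap's actual code: the `Decision` type, `blockedDomains` list, and `judge` function are hypothetical names, and the LLM judge is stubbed out as a callback.

```go
package main

import (
	"fmt"
	"strings"
)

// Decision records which method produced the verdict, mirroring
// CrabTrap's per-decision logging of the judgment method.
type Decision struct {
	Allowed bool
	Method  string // "static" or "llm"
}

// blockedDomains is a stand-in for a static-rule policy.
var blockedDomains = []string{"pastebin.com", "evil.example"}

// judge applies fast static rules first and falls back to an LLM
// call (stubbed here as askLLM) only for requests the rules alone
// cannot resolve.
func judge(host string, askLLM func(string) bool) Decision {
	for _, d := range blockedDomains {
		if strings.HasSuffix(host, d) {
			return Decision{Allowed: false, Method: "static"}
		}
	}
	// Ambiguous request: defer to the LLM judge.
	return Decision{Allowed: askLLM(host), Method: "llm"}
}

func main() {
	stubLLM := func(host string) bool { return host == "api.github.com" }
	fmt.Println(judge("pastebin.com", stubLLM))   // denied by a static rule
	fmt.Println(judge("api.github.com", stubLLM)) // resolved by the LLM judge
}
```

The key property is that the cheap deterministic path short-circuits before any LLM call, so the probabilistic judge only sees the residual ambiguous traffic.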
Evidence
- The probabilistic nature of the LLM-as-a-judge approach was the biggest point of contention. One commenter questioned the risk of basing a security system on probability rather than hard limits, with others agreeing that ‘deterministic ACLs are needed’ or that it’s ‘just a non-deterministic business rules engine.’
- The potential for shared vulnerabilities when the agent and judge use the same model family was raised. For example, if both use Claude, a prompt injection pattern that fools the agent could also fool the judge, prompting calls for ‘defense in depth’ using at least different providers, and ideally different architectures.
- Commenters noted that because the judge only sees the HTTP body, attackers who manipulate the agent’s inputs can also manipulate the judge’s context window, a fundamental failure mode in which the judge is ‘deprived of the signals needed to detect the trick.’
- Some argued CrabTrap can only be a detection layer, not a prevention layer, reasoning that ‘credentials are already read when the LLM makes an external POST request’; on that view, proxy-level control is suitable for auditing what an agent did, not preventing it.
- A commenter introduced EvalView as an alternative approach, using full execution-trajectory snapshots and diffs to track changes, with a lightweight zero-judge model check to determine drift level (NONE/WEAK/MEDIUM/STRONG), and criticized the idea of solving LLM security problems by adding more LLM layers.
How to Apply
- If you’re running AI agents in production that automatically call external services such as Slack, GitHub, or internal APIs, deploy CrabTrap as a proxy between the agent and the internet, and define immediate guardrails such as ‘block external calls to specific domains’ or ‘block large data transfers.’
- Given the probabilistic limitations of the LLM-as-judge approach, position CrabTrap as an audit layer rather than the sole defense: handle actual blocking with network policies or IAM permissions, and use CrabTrap for visibility, logging what the agent attempted.
- When selecting a judge model, consider an LLM from a different provider, or with a different architecture, than the agent’s model; using the same model family undermines defense in depth.
- If the system-wide self-signed certificate required for MITM interception is a concern, limit the blast radius by running the agent in an isolated container or sandbox and deploying CrabTrap as that environment’s gateway.
Code Example
// CrabTrap’s prompt injection prevention method (Go code, from GitHub source)
// The policy is embedded as a JSON-escaped value inside a structured JSON object.
// This prevents prompt injection via policy content — any special characters,
// delimiters, or instruction-like text in the policy are safely escaped by
// json.Marshal rather than concatenated as raw text.
policyJSON, err := json.Marshal(policyContent)
if err != nil {
	// A policy that cannot be serialized should fail closed rather
	// than reach the prompt unescaped.
	return err
}
// policyJSON is now a safely escaped string that can be inserted into the prompt.

Terminology
- LLM-as-a-judge: A pattern where an LLM is used as an arbiter, evaluating the output or behavior of another LLM against defined criteria; its probabilistic nature limits its reliability.
- MITM (man-in-the-middle): A structure where a third party intercepts traffic between two communicating parties. CrabTrap uses this to monitor and block agent traffic, but it requires installing a self-signed certificate.
- Prompt injection: An attack that embeds malicious instructions in an LLM's input, causing the model to perform unintended actions.
- Defense in depth: A security strategy that layers multiple, independent defenses so that a failure in one layer does not compromise the entire system.
- Static rule: A pre-defined condition (e.g., blocking a specific domain or checking for a specific header) applied deterministically at the code level, always producing the same result for the same input.
- Drift: The phenomenon where an AI model's output or an agent's behavior deviates from its baseline over time, making subtle changes hard to detect.