Comet AI browser can get prompt injected from any site, drain your bank account
TL;DR Highlight
Brave's AI browser Comet is vulnerable to prompt injection when reading web pages, enabling malicious sites to hijack the LLM to access emails, initiate payments, and perform other sensitive actions.
Who Should Read
Developers integrating LLM-based agents into products, or engineers designing security architecture for AI browsers and AI email clients.
Core Mechanics
- Brave browser's AI agent feature 'Comet' executes hidden malicious prompts found in web pages when summarizing or performing tasks — a prompt injection vulnerability.
- Comet has broad permissions including cross-tab data access, email reading, and form filling, allowing an attacker to scan user emails or attempt payments from a single web page.
- Major players like Google, OpenAI, and Anthropic run similar features in isolated VMs without cookies, while Comet operates directly on the user's actual browser session — fundamentally unsafe.
- Brave acknowledged the vulnerability in a blog post but proposed 'model alignment to detect dangerous actions' — the community criticized this as meaningless given that models are immediately jailbroken in practice.
- Key concept: When an LLM 'reads' external data via tools, it's effectively allowing 'writes' to the context window. If it can read untrusted sources, those sources can manipulate the LLM's behavior.
- At USENIX Security, it was confirmed that no one yet knows how to fundamentally prevent prompt injection in multi-turn/agent environments. It remains an unsolved problem in academia.
- Similar vulnerabilities were found in AI email clients (Shortwave, etc.), and the 'Month of AI Bugs' project continues collecting similar cases.
- A user tested Comet by saying 'buy me a guitar on Amazon' — it added 3 cheap no-brand guitars to the cart without any confirmation. Fortunately it didn't complete the purchase, but it demonstrates reckless agent behavior.
Evidence
- Many commented that there's a reason Google/OpenAI/Anthropic haven't shipped this feature. They use cookieless isolated VMs for web browsing, while Comet directly exposes the user session — consensus was it's 'fundamentally unsafe.'
- The framing that 'every read action by an LLM tool is a write to the context window' gained strong agreement. The explanation that being able to read untrusted sources is itself an attack vector became a frequently cited core principle of agent security.
- Some argued agentic AI should only be used for easily reversible tasks (code writing/editing via git) — using it for irreversible actions like web browsing, payments, and email is reckless.
- Brave's proposed mitigations ('browser distinguishes user instructions from website content,' 'model verifies alignment with user intent') were strongly criticized as ineffective given that models get jailbroken immediately upon release.
- Someone noted the irony: decades of encrypting network layers one by one (even DNS), and now we're handing over all passwords and secrets via plaintext APIs.
How to Apply
- When implementing LLM agents that read external content (web pages, emails, documents), assume that reading itself is an attack vector. Isolate external inputs in separate contexts and always require user confirmation before invoking sensitive tools (payments, email sending).
- Minimize tool permissions granted to agents. A 'web page summary' feature doesn't need email access, form filling, or cross-tab data sharing. Separate permissions per task, and route irreversible actions (payments, messages) through a separate approval flow.
- When designing agent-based services, use 'rollback capability' as the criterion for automation scope. Code changes (git reset possible) are safe to automate, but payments, email sending, and account settings changes should be restricted from direct agent execution.
- If running AI agents in production, regularly check monthofaibugs.com to track similar vulnerability patterns and audit whether the same attacks are possible on your service.
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
Claude Code에서 최대 7개의 병렬 서브 에이전트가 각각 다른 관점으로 PR을 리뷰하고, 자동 수정까지 해주는 오픈소스 플러그인이다. 기존 /review나 CodeRabbit보다 실제 버그를 더 많이 잡는다고 주장하지만 커뮤니티에서는 복잡도와 실효성에 대한 회의론도 나왔다.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
Claude Code에게 IP 패킷을 직접 파싱하고 ICMP echo reply를 구성하도록 시켜서 실제로 ping에 응답하게 만든 실험으로, 'Markdown이 곧 코드이고 LLM이 프로세서'라는 아이디어를 네트워크 스택 수준까지 밀어붙인 재미있는 사례다.
Show HN: Git for AI Agents
AI 코딩 에이전트(Claude Code 등)가 수행한 모든 툴 호출을 자동으로 추적하고, 어떤 프롬프트가 어느 코드 줄을 작성했는지 blame까지 가능한 버전 관리 도구다.
Principles for agent-native CLIs
AI 에이전트가 CLI 도구를 더 잘 사용할 수 있도록 설계하는 원칙들을 정리한 글로, 에이전트가 CLI를 도구로 활용하는 빈도가 높아지면서 이 설계 방식이 실용적으로 중요해지고 있다.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
여러 AI 에이전트가 서로 역할을 나눠 협업할 수 있도록 조율하는 scaffolding 도구로, Vite처럼 설정 없이 빠르게 멀티 에이전트 파이프라인을 구성할 수 있다.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
AI 에이전트가 실제 프로덕션 데이터를 건드려도 롤백할 수 있는 격리된 샌드박스 환경을 제공하는 도구로, GitHub/S3/Google Drive를 하나의 버전 관리 파일시스템으로 묶어준다.