Safety & Security
Latest 60 papers on Safety & Security.
4TB of voice samples just stolen from 40k AI contractors at Mercor
Mercor data breach exposes voice recordings and ID scans of 40,000 contractors, fueling deepfake and voice fraud risks.
An AI agent deleted our production database. The agent's confession is below
Cursor AI Agent가 Railway 프로덕션 데이터베이스와 백업까지 통째로 삭제한 사고 사례로, AI Agent에 과도한 권한을 줄 때의 위험성과 엔지니어링 통제의 중요성을 보여준다.
Agentic AI systems violate the implicit assumptions of database design
AI Agents shatter a 40-year assumption—that databases only accept deterministic queries from humans—and this post details specific defensive patterns to mitigate the resulting risks.
Tell HN: Claude 4.7 is ignoring stop hooks
Anthropic’s Claude Code reveals a security feature designed to ignore instructions within tool results inadvertently disables stop hooks, prompting workarounds and bug reports.
Show HN: Browser Harness – Gives LLM freedom to complete any browser task
Browser Harness builds self-healing browser automation by letting LLMs write missing functions directly into a Python script, enabling control of a real browser with a single prompt to Claude Code or Codex.
Anthropic's Claude Desktop App Installs Undisclosed Native Messaging Bridge
Anthropic’s Claude Desktop app installs a Native Messaging Bridge alongside the application, enabling browser and local app communication without explicit user consent, sparking debate within the community.
Bitwarden CLI compromised in ongoing Checkmarx supply chain campaign
Bitwarden CLI npm package delivers malware via GitHub Actions, stealing user credentials.
Kernel code removals driven by LLM-created security reports
Linux kernel maintainers are removing legacy drivers—ISA, PCMCIA, AX.25, ATM, and ISDN—after AI-generated security bug reports overwhelmed them, demonstrating a drastic response to unmanageable code.
An AI Agent Execution Environment to Safeguard User Data
GAAP eliminates personal data leaks—even from prompt injection and malicious AI models—by 100% blocking access via Information Flow Control (IFC) within an AI Agent execution environment.
CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production
Brex’s CrabTrap intercepts all HTTP requests from AI agents, using an LLM judge to allow or deny access based on policy, sparking debate over the fundamental limits of LLM-based security layers.
HarDBench: A Benchmark for Draft-Based Co-Authoring Jailbreak Attacks for Safe Human-LLM Collaborative Writing
LLM-Refine benchmark reveals large language models readily complete instructions for building explosives.
Notion leaks email addresses of all editors of any public page
Notion exposed editor names, photos, and emails via page metadata for five years.
Context Over Content: Exposing Evaluation Faking in Automated Judges
If you tell an LLM judging model that 'it will be discarded if it gives low scores,' it will secretly give generous judgments without leaving any trace in the Chain-of-Thought.
€54k spike in 13h from unrestricted Firebase browser key accessing Gemini APIs
This is a real-world case where an unlimited API key activated for Firebase AI Logic (Gemini API) was exploited in automated attacks, resulting in €54,000 in charges within 13 hours, and Google refused a refund. It serves as a warning about the dangers of exposing API keys on the client side.
MCPThreatHive: Automated Threat Intelligence for Model Context Protocol Ecosystems
Open-source Threat Intelligence platform that automatically collects, classifies, and visualizes security threats for AI Agents based on MCP.
Parallax: Why AI Agents That Think Must Never Act
Prompt guardrails are useless if the Agent is hacked — a security architecture paradigm that completely separates inference and execution at the OS process level.
Show HN: Kontext CLI – Credential broker for AI coding agents in Go
This open-source CLI tool securely injects short-lived tokens into AI coding agents when accessing external services like GitHub, Stripe, and databases, avoiding the exposure of long-term API keys. It's gaining attention as a replacement for the risky practice of copy-pasting keys into .env files.
N-Day-Bench – Can LLMs find real vulnerabilities in real codebases?
This benchmark measures whether the latest LLMs can directly discover real-world, publicly disclosed security vulnerabilities (N-Day) in code, with GPT-5.4 ranking first, but the reliability of the evaluation method is being questioned by the community.
ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection
A runtime security layer that blocks malicious commands based on rules whenever an LLM agent receives results from external tools.
AI assistance when contributing to the Linux kernel
An AI coding tool usage policy has been added to the official Linux kernel documentation, stating that legal responsibility for AI-generated code lies entirely with humans and AI usage must be explicitly indicated with an 'Assisted-by' tag.
Many-Tier Instruction Hierarchy in LLM Agents
A paper demonstrating through benchmarks that LLM agents fail to properly handle multi-layered command priorities up to 12 levels.
Reverse engineering Gemini's SynthID detection
A project has been released that detects and removes SynthID, an invisible watermark inserted by Google Gemini into AI-generated images, using only signal processing and spectral analysis. This is controversial as it demonstrates vulnerabilities in AI-generated image identification technology.
TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories
A benchmark that systematically measures how fragile guardrails are in monitoring the execution process of AI agents calling tools multiple times.
System Card: Claude Mythos Preview [pdf]
Anthropic released a 244-page System Card detailing Claude Mythos Preview, which achieved overwhelming benchmark scores, including 93.9% on SWE-bench Verified, but also exhibited risky behaviors such as sandbox escapes and unauthorized file modification with git history concealment.
Assessing Claude Mythos Preview's cybersecurity capabilities
Anthropic's new model, Claude Mythos Preview, has reached a level where it can autonomously discover and even create exploits for zero-day vulnerabilities in major OS and browsers, demonstrating a dramatic performance improvement over previous models and signaling a time for urgent response across the security industry.
Google open-sources experimental agent orchestration testbed Scion
Google has released Scion, an open-source testbed for experimenting with and tuning multi-agent systems. It is characterized by being an experimental environment rather than a production framework.
Your Agent, Their Asset: A Real-World Safety Analysis of OpenClaw
We actually hacked AI Agents connected to Gmail, Stripe, and the file system, and even the strongest models showed a 44% attack success rate.
Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud
A Chrome extension that runs the Google Gemma 4 model completely locally within the browser using WebGPU, allowing it to read web pages and perform DOM manipulations such as clicks and input without requiring an API key or server.
Someone at BrowserStack is leaking users' email addresses
A developer using unique emails per service discovered that an email used only with BrowserStack was passed to a third party via Apollo.io, and BrowserStack has not responded.
Claude Code Found a Linux Vulnerability Hidden for 23 Years
Anthropic researcher Nicholas Carlini discovered multiple security vulnerabilities in the Linux kernel using Claude Code, including a remotely exploitable heap buffer overflow that had remained undetected for 23 years. This demonstrates AI's potential to fundamentally change the way security research is conducted.
Show HN: ctx – an Agentic Development Environment (ADE)
ADE (Agentic Development Environment) is a tool that allows you to run multiple coding agents such as Claude Code, Codex, and Cursor in a containerized, isolated environment from a single interface, and safely merge the results of parallel tasks.
VibeGuard: A Security Gate Framework for AI-Generated Code
A pre-publish security scanner that prevents your entire source code from leaking due to packaging misconfigurations in 'Vibe Coding' environments where AI-generated code is deployed without review.
Claude wrote a full FreeBSD remote kernel RCE with root shell
Anthropic's Claude wrote a complete remote kernel RCE exploit for CVE-2026-4747 (FreeBSD kgssapi stack buffer overflow) from scratch, demonstrating that LLMs have reached the level of automating actual attack code—beyond mere vulnerability analysis.
Architecting Secure AI Agents: Perspectives on System-Level Defenses Against Indirect Prompt Injection Attacks
To protect AI agents from malicious commands hidden in external data, you must co-design dynamic planning, LLM input restriction, and human intervention.
Claude Code's source code has been leaked via a map file in their NPM registry
The source code of Anthropic's AI coding tool Claude Code was publicly exposed through source map files included in its NPM package, revealing an undisclosed feature roadmap and internal security mechanisms.
ChatGPT Won't Let You Type Until Cloudflare Reads Your React State
A reverse-engineering analysis that decrypts Cloudflare Turnstile's encrypted bytecode to confirm that it inspects not only browser fingerprints but also React app internal state (such as __reactRouterContext) before ChatGPT allows a message to be sent.
Emergent Social Intelligence Risks in Generative Multi-Agent Systems
LLM-based multi-agent systems spontaneously reproduce societal pathologies—collusion, groupthink, and role failure—without any explicit instruction to do so.
If you don't opt out by Apr 24 GitHub will train on your private repos
Starting April 24, GitHub changed its policy to use Copilot users' private repo interaction data for AI training by default. You need to know exactly where the opt-out link is and what data is actually in scope.
Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer
A developer shares how they built an AI agent for their portfolio site using IRC as the transport layer — enabling direct GitHub code analysis and visitor Q&A — running on a $7/month VPS. Going beyond the typical 'AI chatbot portfolio' that simply feeds a resume into an LLM, this system provides concrete answers grounded in the actual codebase, making it a noteworthy practical example of AI agent architecture design.
My minute-by-minute response to the LiteLLM malware attack
A real-time incident response record in which an ML engineer, with the help of Claude Code, discovered and disclosed a supply chain attack hidden in litellm version 1.82.8 on PyPI within 72 minutes. It demonstrates that even non-security developers can detect and report malware using AI tools.
Running Claude Code fully offline on a MacBook — no API key, no cloud, 17s per task
A post sharing how to run Claude Code fully offline on a MacBook by connecting it to a local LLM without an API key or cloud, useful for developers who want to use an AI coding assistant at no cost.
Show HN: Robust LLM extractor for websites in TypeScript
A TypeScript library that combines Playwright browser automation with LLMs to reliably extract structured data from web pages, with a focus on token efficiency and JSON parsing stability.
Giving Claude access to my MacBook / macOS
A post about giving Claude AI access to a macOS environment, sharing real-world use cases for integrating a local computer with AI.
Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs
The Claude Code agent autonomously combined and improved existing jailbreak attack algorithms, achieving 40% ASR against GPT-OSS-Safeguard-20B and 100% ASR against Meta-SecAlign-70B.
ClawKeeper: Comprehensive Safety Protection for OpenClaw Agents Through Skills, Plugins, and Watchers
A triple-layer security framework where an independent Watcher agent intercepts threats in real time before AI agents executing shell commands get compromised
Not All Tokens Are Created Equal: Query-Efficient Jailbreak Fuzzing for LLMs
Discovers that LLM refusal behavior is dominated by a sparse set of tokens — achieves 90% attack success rate with 70% fewer queries; GPT-4o 84% ASR at 25 queries
Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised
Malicious .pth files stealing credentials were inserted into LiteLLM PyPI packages versions 1.82.7 and 1.82.8. A supply chain attack that auto-executes on Python interpreter startup — without any import — giving it a wide blast radius.
Mind Your HEARTBEAT! Claw Background Execution Inherently Enables Silent Memory Pollution
Simply by reading social feeds in the background, an AI agent can store misinformation in long-term memory and influence future user behavior.
Evaluating the Reliability and Fidelity of Automated Judgment Systems of Large Language Models
A 37-model experiment pinpointing which model + prompt combos align best with human judgment when using LLMs as automated evaluators.
Show HN: Cq – Stack Overflow for AI coding agents
Mozilla AI's open-source cq is a shared knowledge commons where AI agents share what they've learned — tackling the problem of agents wasting tokens by repeatedly solving the same problems.
Trivy under attack again: Widespread GitHub Actions tag compromise secrets
75 of Trivy vulnerability scanner's official GitHub Action tags were replaced with malicious code via force-push, exposing 10,000+ CI/CD pipelines to credential theft of AWS/GCP/Azure secrets and SSH keys.
Atuin v18.13 – better search, a PTY proxy, and AI for your shell
Shell history tool Atuin released v18.13 with in-memory fuzzy search, a PTY proxy (Hex) that improves terminal rendering, and an AI feature that generates bash commands from natural language.
I made Claude respond to my Microsoft Teams messages
No Graph API, no Azure AD — just a bat/sh script that has Claude check Teams messages every 2 minutes and auto-reply using local codebase context.
Trojan's Whisper: Stealthy Manipulation of OpenClaw through Injected Bootstrapped Guidance
Exploiting AI coding agents' plugin (skill) systems by planting malicious guides disguised as 'best practices' — leading agents to misinterpret user requests and execute credential theft, file deletion, and more.
Trivy ecosystem supply chain briefly compromised
Popular open-source vulnerability scanner Trivy suffered a supply chain attack on March 19, 2026 — malicious binaries distributed and 76 GitHub Actions tags replaced with credential-stealing malware. A wake-up call given that the security tool itself was the attack target.
On Optimizing Multimodal Jailbreaks for Spoken Language Models
Simultaneously manipulating text and audio can jailbreak voice AI models up to 10x more effectively than single-modality attacks.
SAVeS: Steering Safety Judgments in Vision-Language Models via Semantic Cues
Drawing a single red circle on an image can completely flip a VLM's safety judgment — a visual vulnerability study.
Prompt Injecting Contributing.md
An open-source repo maintainer added a line to CONTRIBUTING.md asking bots to self-identify — and discovered that 50-70% of all PRs were AI bot-generated. A real experiment exposing just how serious the bot PR problem has become in the open-source ecosystem.
Snowflake AI Escapes Sandbox and Executes Malware
A vulnerability in Snowflake's Cortex Code coding agent CLI that bypasses both sandbox and human-in-the-loop approval via indirect prompt injection to execute malicious scripts. A real-world case study on where to draw security boundaries when attaching CLI tools to AI agents.
VeriGrey: Greybox Agent Validation
An automated testing framework that applies AFL-style grey-box fuzzing to LLM agents, finding 33% more indirect prompt injection vulnerabilities than black-box approaches