Trivy under attack again: widespread GitHub Actions tag compromise steals CI/CD secrets
TL;DR Highlight
75 of the Trivy vulnerability scanner's official GitHub Action tags were replaced with malicious code via force-push, exposing 10,000+ CI/CD pipelines to theft of AWS/GCP/Azure credentials and SSH keys.
Who Should Read
DevOps and backend developers running CI/CD pipelines with GitHub Actions who use security scanners like Trivy. If your team references aquasecurity/trivy-action by version tag, check immediately.
Core Mechanics
- Attackers force-pushed 75 of 76 version tags in the aquasecurity/trivy-action repository to malicious commits. Commonly used tags like @0.34.2, @0.33.0, @0.18.0 were all affected — @0.35.0 is currently the only safe tag.
- The malicious payload runs in GitHub Actions runner environments, dumping runner process memory to extract secrets, collecting SSH keys, and exfiltrating AWS/GCP/Azure credentials and Kubernetes service account tokens.
- The attack's sophistication lies in force-pushing existing tags rather than creating new branches or releases. This method barely shows up in commit history and doesn't trigger notifications, making detection difficult.
- The root cause traces back to credentials stolen during the early March OpenVSX VS Code extension compromise. The Trivy team rotated secrets, but the rotation wasn't atomic — the attacker is believed to have maintained access to newly issued tokens.
- Over 10,000 workflow files reference this Action on GitHub, and the malicious code runs before the legitimate Trivy scan starts, making it hard for users to notice anything unusual.
- Additional damage was confirmed on Docker Hub. Malicious Trivy image tags 0.69.4, 0.69.5, 0.69.6 were discovered on March 22, and the latest tag also pointed to the malicious image during the exposure window.
- Socket's AI scanner detected this campaign in real-time starting March 20 at 19:15 UTC, generating 182 threat feed entries, all correctly classified as Backdoor/Infostealer/Reconnaissance malware.
- This is the second supply chain compromise of the Trivy ecosystem in March alone. Credentials stolen in the first breach were not fully neutralized and were reused in the second attack.
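The tag force-push mechanic described above can be illustrated in a throwaway local repository (the repo, commit messages, and tag name here are hypothetical; only the Git mechanics are the point). Note that moving an existing tag produces no new branch, no pull request, and no notification:

```shell
# Demo: a mutable Git tag can be silently repointed to a different commit.
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit --allow-empty -qm "legit release"
git tag v0.34.2                   # tag points at the legitimate commit
git commit --allow-empty -qm "malicious payload"
git tag -f v0.34.2                # force-move the tag onto the new commit
git log -1 --format=%s v0.34.2    # the tag now resolves to the malicious commit
```

Pushing such a moved tag upstream requires `git push --force origin v0.34.2`, which is why repositories (and consumers) benefit from immutable-tag policies and SHA pinning.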
Evidence
- GitHub's official security guidelines recommend pinning Actions to full commit SHAs rather than version tags. This prompted suggestions that GitHub should enforce immutable version policies for Actions to prevent this class of attack entirely.
- Community questions arose about the specific failure in the credential rotation process. 'Given the second breach on March 22, it appears the attacker maintained access through two credential rotations.' With various GitHub token types (PAT, OAuth, GitHub App tokens), the exact type compromised remained unclear.
- Criticism of granting excessive permissions to security scanners emerged. One working developer said 'Security teams keep introducing new scanners demanding full codebase or cloud access — if I'd granted even 10% of those requests, we'd have been breached multiple times already,' warning about security tool supply chain risks.
- A developer apparently directly affected shared 'I'll probably spend the next few weeks writing dozens of reports and sitting through countless meetings,' expressing frustration that Trivy had been compromised twice.
- Practical advice like 'always run these tools in sandboxes to limit blast radius' was shared. Others noted this case should dispel the notion that only npm is targeted by supply chain attacks.
How to Apply
- If you reference aquasecurity/trivy-action by version tag (@0.34.2, etc.), review your workflow files immediately. Pin to a trusted commit's full SHA (e.g., uses: aquasecurity/trivy-action@commitSHA) instead — this protects against tag force-push replacements.
- If any workflow using aquasecurity/trivy-action ran after March 20, 19:15 UTC, immediately rotate all secrets used in that pipeline (AWS keys, GCP service accounts, Azure credentials, SSH keys, Kubernetes tokens). Beyond rotation, audit access logs for resources accessible with the old credentials.
- Minimize permissions granted to security scanners and third-party Actions in CI/CD pipelines. Restrict GITHUB_TOKEN permissions to read-only at the workflow level, and use OIDC (temporary token-based auth) for cloud credentials to limit the validity window of stolen credentials.
- Use tools like Socket, Dependabot, or Renovate to monitor GitHub Actions dependencies — they can detect tag replacements with malicious commits in real-time. Socket detected this attack live and classified it as Backdoor/Infostealer.
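As a starting point for the workflow review recommended above, a rough one-liner can flag tag-pinned references. This is a sketch, not an exhaustive audit: the path and regex are assumptions (a local checkout with workflows under `.github/workflows/`), and a 40-hex-character suffix is treated as a full commit SHA:

```shell
# Flag trivy-action references that are NOT pinned to a full 40-char commit SHA.
grep -rnE 'uses:[[:space:]]*aquasecurity/trivy-action@' .github/workflows/ \
  | grep -vE '@[0-9a-f]{40}' \
  || echo "no tag-pinned trivy-action references found"
```

Any line this prints is a candidate for repinning to a trusted commit SHA; the same pattern generalizes to other third-party Actions by swapping the repository name.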
Code Example
```yaml
# Vulnerable approach: version tag reference (can be replaced via force-push)
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@0.34.2  # ❌ Dangerous

# Safe approach: pinned to a full commit SHA (immutable)
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@a20de5420d57c4102547773ee84a9575c8d547ea  # ✅ Safe

# GitHub Actions minimum-permission configuration example
permissions:
  contents: read          # grant minimum permissions only
  security-events: write  # only if needed for Trivy SARIF upload

# Temporary AWS credentials via OIDC (minimizes damage if compromised)
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@commitSHA
  with:
    role-to-assume: arn:aws:iam::ACCOUNT:role/ROLE
    aws-region: ap-northeast-2
    # do not hardcode access-key-id / secret-access-key
```
Terminology
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study systematically showing that while LLM-written TLA+ specifications pass syntax checks, their behavioral conformance with the real systems they model is only around 46%, highlighting the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic published NLA, a technique that converts the numeric vectors (activation values) inside an LLM into directly readable natural language. It is a new advance in interpretability research into what the AI is actually "thinking."
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model passed 95%+ of tests on only 3% of all tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split the work into three tickets and even Claude/GPT will write security-vulnerable code 53–86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.