Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build
TL;DR Highlight
An open-source CLI that solves the problem of AI coding agents not being able to see what UI they've created — auto-generating video recordings, screenshots, and error reports via browser automation.
Who Should Read
Developers building frontend features with AI coding agents like Claude Code, Cursor, or Copilot who are tired of manually checking every UI the agent produces. Also useful for teams who want visual evidence attached to PR reviews.
Core Mechanics
- The CLI runs browser automation (Playwright under the hood) to load the app, interact with key user flows, capture screenshots and video, and generate a structured report.
- Reports include: screenshots of key states, a video of the interaction, console errors, network errors, and accessibility warnings — all generated without human intervention.
- The tool integrates with CI/CD: you can run it in a GitHub Actions workflow after an AI coding session and attach the report as a PR artifact.
- For AI agent workflows, the pattern is: agent generates code -> CLI runs browser automation -> report fed back to agent for self-correction — closing the visual feedback loop.
- The tool is configurable via a simple YAML spec defining which pages to visit and which user flows to execute, making it adaptable to different app structures.
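As a rough illustration of what such a spec could look like, here is a sketch; the key names (`baseUrl`, `pages`, `flows`, `steps`) are assumptions for illustration, not ProofShot's documented schema:

```yaml
# proofshot.yaml — hypothetical spec; key names are illustrative,
# not taken from ProofShot's documentation
baseUrl: http://localhost:3000
pages:
  - path: /login
  - path: /dashboard
flows:
  - name: login
    steps:
      - open: /login
      - click: "[data-testid=submit]"
      - screenshot: after-submit
```

The idea is that each flow enumerates the interactions the browser automation should replay, and each screenshot step names a key state to capture for the report.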
Evidence
- Early users reported that the visual feedback loop (agent generates -> CLI validates -> agent corrects) reduced UI bug rates by roughly 40-60% compared to text-only feedback.
- Frontend developers noted that the tool catches a class of bugs that text-based AI review misses entirely: layout shifts, broken responsive breakpoints, and visual regressions.
- The CI/CD integration was highlighted as particularly valuable for team workflows — reviewers get visual context without needing to check out and run the PR locally.
- Some noted the Playwright dependency adds setup complexity, especially for teams not already using it — a simpler Puppeteer option would lower the adoption barrier.
How to Apply
- Install the CLI and write a YAML spec defining your app's key user flows (login, main dashboard, critical feature). This becomes the visual test suite.
- Add a CI step that runs the CLI after every PR — attach the output report as a GitHub Actions artifact for reviewers.
- For AI agent workflows, configure the agent to run the CLI after UI changes and include the report in its self-review step before marking work as complete.
- Use the video recording feature for bug reports — a recording is worth more than a screenshot and eliminates 'I can't reproduce it' back-and-forth.
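The CI step described above might be wired up roughly like this, reusing the CLI commands from the snippet below; the workflow structure, artifact paths, and report directory are assumptions, not documented ProofShot behavior:

```yaml
# .github/workflows/proofshot.yml — a sketch; the report output
# path is an assumption, the proofshot commands mirror the snippet
name: proofshot
on: [pull_request]
jobs:
  visual-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm install -g proofshot && proofshot install
      - run: proofshot start --run "npm run dev" --port 3000 --description "PR visual check"
      - run: proofshot stop
      - uses: actions/upload-artifact@v4
        with:
          name: proofshot-report
          path: ./proofshot-report
```

Reviewers can then download the attached artifact from the workflow run instead of checking out and running the PR locally.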
Code Example
# Installation
npm install -g proofshot
proofshot install
# 1. Start dev server + open browser + start recording
proofshot start --run "npm run dev" --port 3000 --description "Login form verification"
# 2. Agent manipulates browser (agent-browser commands)
agent-browser snapshot -i # Check interactive elements
agent-browser open http://localhost:3000/login # Navigate to page
agent-browser click "[data-testid=submit]" # Click
agent-browser snapshot # Capture screenshot
# 3. Stop recording + bundle artifacts
proofshot stop
# Upload results to PR (automatically attached as a GitHub PR inline comment)
proofshot pr
Terminology
Playwright: Microsoft's open-source browser automation library, supporting Chrome, Firefox, and Safari for testing and scraping.
Visual Regression: An unintended change in UI appearance, often introduced inadvertently alongside functional changes.
PR Artifact: Files attached to a pull request for reviewers to examine: test results, screenshots, build outputs, etc.
Browser Automation: Programmatic control of a web browser to simulate user interactions, used for testing and scraping.