Gemini 2.5 Computer Use model
TL;DR Highlight
Google released a specialized model based on Gemini 2.5 Pro that can see computer screens and directly operate mouse/keyboard via API. Outperforms competitors on web/mobile benchmarks with lower latency.
Who Should Read
Developers building web browser automation or RPA, or teams looking to replace existing UI-based workflows with AI agents.
Core Mechanics
- Gemini 2.5 Computer Use is a specialized model layering UI control capabilities on top of Gemini 2.5 Pro's vision/reasoning. Developers can build agents that see screenshots and perform mouse clicks and keyboard input.
- Outperformed competitors (Anthropic Computer Use, etc.) on web and mobile control benchmarks with lower latency.
- Unlike traditional AI that interacts via APIs or structured data, this model 'sees' screens and operates UI like a human — can automate legacy systems without APIs.
- Available as a preview via Gemini API in Google AI Studio and Vertex AI. Build agents with API calls alone, no separate SDK needed.
- Still has clear limitations: it misclicks in Google Sheets and overwrites data, can't interpret Wordle's color feedback, and so on. Precision UI control remains unreliable.
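The agent pattern described above boils down to a see-act cycle: take a screenshot, ask the model for the next action, execute it, repeat. The sketch below illustrates that loop only; it is not the actual Gemini API surface. The `Action` type, `decide_next_action`, and the executor are hypothetical stand-ins (a real agent would call the Computer Use model and drive a real browser).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """Hypothetical action record; the real model emits structured
    function-call-style actions (click, type, scroll, done, ...)."""
    kind: str
    x: int = 0
    y: int = 0
    text: str = ""

def run_agent(goal: str,
              take_screenshot: Callable[[], bytes],
              decide_next_action: Callable[[bytes, str], Action],
              execute: Callable[[Action], None],
              max_steps: int = 20) -> List[Action]:
    """Core see-act loop: screenshot -> model picks an action -> execute,
    repeated until the model signals completion or the step budget runs out."""
    history: List[Action] = []
    for _ in range(max_steps):
        shot = take_screenshot()                 # what the model "sees"
        action = decide_next_action(shot, goal)  # model call goes here
        history.append(action)
        if action.kind == "done":
            break
        execute(action)                          # mouse/keyboard goes here
    return history

# Demo with stubs standing in for the model and the browser.
script = iter([Action("click", x=120, y=80),
               Action("type", text="hello"),
               Action("done")])
log = run_agent("post a reply",
                take_screenshot=lambda: b"<png bytes>",
                decide_next_action=lambda shot, goal: next(script),
                execute=lambda a: None)
print([a.kind for a in log])  # -> ['click', 'type', 'done']
```

The `max_steps` budget is the one design choice worth copying into any real implementation: without it, a model that misclicks (as reported with Google Sheets) can loop indefinitely.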
Evidence
- One commenter reported success driving browser automation with Chrome DevTools MCP and the Gemini CLI, and expects the dedicated Computer Use model to work even better.
- A Browserbase demo showing automated website login, scrolling, and reply posting was described as 'chilling.' However, the inability to intervene mid-task via conversation was noted as a limitation.
- Repeated CAPTCHA blocking and Google Sheets overwrite bugs during cell filling were reported. Precision manipulation is still unstable.
- Enterprise environments require governance, but UI-based agents make governance much harder to apply than API-based ones.
- Some argued using OS accessibility API data rather than screenshots would be more efficient — screenshots should be a last resort.
How to Apply
- For automating repetitive tasks on legacy web apps without APIs (data entry, report downloads), prototype with Gemini 2.5 Computer Use API in Google AI Studio. Combine with cloud browser services like Browserbase for server-side execution.
- If your Selenium/Playwright automation breaks with every UI change, consider Computer Use models as an alternative. They recognize elements visually rather than through CSS selectors, making them more robust to UI changes.
- For production use, always add human-in-the-loop or a governance layer. Especially mandate confirmation steps before irreversible actions like payments and email sending.
- Precision spreadsheet work or color-based feedback scenarios remain unreliable — use a hybrid approach combining structured API calls for these cases.
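The human-in-the-loop recommendation above can be implemented as a small gate in front of the action executor. This is a minimal sketch under assumed names: `IRREVERSIBLE_ACTIONS`, `gated_execute`, and `confirm` are illustrative, not part of any Gemini API.

```python
from typing import Any, Callable, Dict

# Hypothetical denylist of actions that must never run unattended.
IRREVERSIBLE_ACTIONS = {"submit_payment", "send_email", "delete_record"}

def gated_execute(action: Dict[str, Any],
                  execute: Callable[[Dict[str, Any]], str],
                  confirm: Callable[[Dict[str, Any]], bool]) -> str:
    """Run `execute(action)` directly for safe actions; for irreversible
    ones, require `confirm` (a human approval step) to return True first."""
    if action["name"] in IRREVERSIBLE_ACTIONS and not confirm(action):
        return "blocked: awaiting human approval"
    return execute(action)

# A safe click passes through; an unapproved payment is blocked.
print(gated_execute({"name": "click"},
                    execute=lambda a: "ok",
                    confirm=lambda a: False))           # -> ok
print(gated_execute({"name": "submit_payment"},
                    execute=lambda a: "ok",
                    confirm=lambda a: False))           # -> blocked: awaiting human approval
```

In production, `confirm` would surface the pending action to a human (chat message, approval queue, etc.); the key property is that the gate sits between the model's decision and its execution, so governance applies regardless of which model produced the action.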
Terminology
Computer Use: An AI capability to 'see' monitors and operate mouse and keyboard like a human. Takes screenshots as input and decides where to click or type next.
VLM: Vision Language Model. An AI model that understands both images and text simultaneously, used to look at screenshots and determine which buttons to press.
RPA: Robotic Process Automation. Software robots performing repetitive computer tasks for humans. Traditional RPA relied on UI coordinates/selectors and broke easily.
Accessibility API: OS-provided UI structure data for screen readers. Can programmatically read buttons, text, etc., potentially more accurate than screenshots.
Human-in-the-loop: The AI processes tasks automatically, but a human confirms and approves at critical decision points. A safety mechanism to prevent mistakes.