Claude Opus 4.1
TL;DR Highlight
Anthropic released Opus 4.1, a minor upgrade to Claude Opus 4 that scored 74.5% on SWE-bench Verified (a new high for the benchmark), but community reaction is skeptical about its cost-effectiveness.
Who Should Read
Developers working with large codebases via Claude Code or API, or team leads comparing cost-to-performance ratios across AI coding tools.
Core Mechanics
- SWE-bench Verified score of 74.5%, achieved with just bash and file editing tools (no separate scaffold). Slight improvement over Opus 4.
- GitHub reports notably improved multi-file refactoring performance; Rakuten praised precise bug fixes in large codebases without unnecessary modifications.
- Cost is the biggest debate point — Opus pricing is significantly higher than Sonnet, and many users report Sonnet delivers better practical results despite lower benchmark scores.
- Model ID can be swapped in directly (claude-opus-4-1-20250805) at the same price as Opus 4.
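Because Opus 4.1 is a drop-in replacement at the same price, the upgrade can be a one-string change. A minimal sketch assuming the official `anthropic` Python SDK; the Opus 4 model ID, helper name, and prompt are illustrative, and the actual API call is left commented out so nothing runs without a key:

```python
# Minimal sketch: upgrading from Opus 4 to Opus 4.1 is a one-string change.
# Assumes the official `anthropic` Python SDK; the network call is commented
# out so this snippet runs without an API key.

OLD_MODEL = "claude-opus-4-20250514"    # assumed Opus 4 model ID
NEW_MODEL = "claude-opus-4-1-20250805"  # Opus 4.1 model ID from the release

def build_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build kwargs for client.messages.create(); only `model` changes."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request(NEW_MODEL, "Refactor this function for readability.")
# client = anthropic.Anthropic()              # reads ANTHROPIC_API_KEY from env
# response = client.messages.create(**request)
```

Since the request shape is otherwise identical, an A/B comparison against Opus 4 only needs `OLD_MODEL` swapped back in.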
Evidence
- Cost was the biggest controversy — one user noted Sonnet alone costs ~$5/hour on OpenRouter, with Opus being much more expensive. GPT-4.1 mini was suggested as best value for money.
- A contradiction emerged: Opus leads on almost every benchmark, yet practical user experience often favors Sonnet — suggesting benchmarks don't fully capture real-world utility.
- Extended thinking mode with budget tokens was highlighted as a key differentiator for complex reasoning tasks.
How to Apply
- If currently using Opus 4 via API, just switch the model ID to claude-opus-4-1-20250805 for a risk-free test — same pricing, backward compatible.
- Use Opus 4.1 for precision-critical tasks (multi-file refactoring, large codebase debugging) and Sonnet 4 for general code generation and conversational tasks — tier your model selection by task type.
- For cost management, use extended thinking budget tokens to control reasoning depth per request.
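The tiering and budget advice above can be sketched as a small dispatch helper. This is a hypothetical illustration, not official guidance: the task categories, budget numbers, and the Sonnet 4 model ID are assumptions, while the `thinking` parameter shape follows the Anthropic Messages API:

```python
# Hypothetical routing helper implementing the tiering advice above.
# The task categories, budget values, and Sonnet model ID are assumptions;
# "thinking": {"type": "enabled", "budget_tokens": N} is the documented
# extended-thinking request shape.

PRECISION_TASKS = {"multi_file_refactor", "large_codebase_debug"}

def pick_request(task_type: str, prompt: str) -> dict:
    """Route precision-critical work to Opus 4.1 with a larger thinking
    budget; send everything else to Sonnet 4 with a smaller one."""
    if task_type in PRECISION_TASKS:
        model, budget = "claude-opus-4-1-20250805", 8000
    else:
        model, budget = "claude-sonnet-4-20250514", 2000  # assumed Sonnet 4 ID
    return {
        "model": model,
        "max_tokens": 16000,  # must exceed the thinking budget
        # budget_tokens caps internal reasoning spend per request.
        "thinking": {"type": "enabled", "budget_tokens": budget},
        "messages": [{"role": "user", "content": prompt}],
    }

req = pick_request("multi_file_refactor", "Split this module into two files.")
```

Keeping the budget in one place makes per-request reasoning cost an explicit, tunable knob rather than an implicit default.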
Terminology
SWE-bench Verified: A benchmark testing whether an AI can automatically fix real GitHub issues, across 500 verified problems. Like a standardized exam for coding agents.
extended thinking: A feature where the model works through a long internal reasoning process before answering. Like using scratch paper on a hard problem.