Claude.ai unavailable and elevated errors on the API
TL;DR Highlight
Anthropic’s entire service suite—Claude.ai, the API, Claude Code—became inaccessible for 1 hour and 18 minutes (17:34–18:52 UTC), sparking outrage among enterprise users over reliability concerns.
Who Should Read
Developers integrating the Claude API or Claude Code into production services, and team leaders grappling with LLM service availability and multi-model strategies.
Core Mechanics
- The outage began at 17:34 UTC on April 28, 2026, and was resolved at 18:52 UTC, lasting a total of 1 hour and 18 minutes. Affected services included claude.ai, Claude Console (platform.claude.com), Claude API (api.anthropic.com), Claude Code, Claude Cowork, and Claude for Government—essentially the entire service portfolio.
- The root cause was an authentication issue: API requests and Claude Code login paths saw a surge in authentication errors, and claude.ai itself became unreachable.
- Anthropic announced the investigation at 17:41 UTC, identified the problem at 17:51 UTC, reported work in progress at 18:33 UTC, transitioned to a monitoring phase at 18:59 UTC, and declared final resolution at 19:15 UTC, updating the status page throughout.
- Data from status.claude.com indicated that Claude's uptime over the last 90 days had fallen to the 'one nine' level (just over 90%), a level widely considered unacceptable for production environments.
- A user from an organization spending over $200,000 monthly on the enterprise tier reported frequent outages in recent months and poor support, leading to anger from leadership. They stated that a ‘one nine’ level of reliability is unacceptable given the cost.
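To put the 'one nine' figure above in perspective, a quick downtime-budget calculation (computed here for illustration, not taken from the status page) shows how much outage time each uptime level permits over a 90-day window:

```python
# Downtime budget over a 90-day window for different uptime targets.
WINDOW_MINUTES = 90 * 24 * 60  # 129,600 minutes in 90 days

def downtime_budget(uptime_pct: float) -> float:
    """Minutes of allowed downtime in the window at a given uptime percentage."""
    return WINDOW_MINUTES * (1 - uptime_pct / 100)

for label, pct in [("one nine (90%)", 90.0),
                   ("two nines (99%)", 99.0),
                   ("three nines (99.9%)", 99.9)]:
    print(f"{label}: {downtime_budget(pct):,.1f} minutes")
# one nine allows roughly 12,960 minutes (~9 days) of downtime per 90 days;
# three nines allows roughly 129.6 minutes (~2.2 hours).
```

Against that budget, a single 78-minute outage is small; the complaint is that repeated incidents have pushed the 90-day total toward the 'one nine' range.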
Evidence
- "A user spending over $200,000 monthly on Anthropic’s enterprise tier lamented frequent outages and poor support in recent months, indicating escalating frustration at the executive level and potentially leading to contract re-evaluation."
How to Apply
- If the Claude API is a single point of failure in your production stack, consider adding automatic fallback logic to alternative models from OpenAI (Codex) or Google (Gemini). This can keep your service running through outages like this one.
- Organizations spending tens of thousands of dollars monthly on the Claude API should regularly monitor Anthropic’s status.claude.com and subscribe to email/SMS alerts. Integrating with PagerDuty or Slack webhooks can reduce response times.
- Teams heavily using Claude Code in their workflow should set up alternative coding agents like OpenAI Codex CLI in parallel. This allows work to continue even when Claude Code is unavailable due to authentication issues.
- For teams of around 10 people where AI coding tool costs are a concern or stability is paramount, consider renting GPUs to self-host open models like Qwen or DeepSeek. While initial setup is required, it offers direct control over downtime risk and potential long-term cost savings.
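The fallback idea in the first bullet can be sketched as a thin wrapper that tries providers in order. This is a minimal sketch: the provider callables are hypothetical stand-ins for your real SDK calls, not actual Anthropic/OpenAI/Google client code.

```python
# Minimal provider-fallback sketch. Each entry pairs a provider name with a
# callable that takes a prompt and returns a reply; the callables here are
# placeholders for your real SDK wrappers.
from typing import Callable

class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errors out."""

def complete_with_fallback(prompt: str,
                           providers: list[tuple[str, Callable[[str], str]]]) -> str:
    """Try each (name, call) pair in order; return the first successful reply."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # in production, catch provider-specific errors
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

A production version would also want per-call timeouts, retries with backoff, and health checks so a degraded primary is demoted before user requests hit it.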
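For the status-page monitoring bullet, a small poller can bridge status.claude.com to a Slack-style webhook. The JSON shape below follows the common Statuspage `status.json` format (`status.indicator` of `none`, `minor`, `major`, or `critical`); treat the URL and field names as assumptions to verify against the actual endpoint.

```python
# Sketch: poll a Statuspage-style endpoint and alert when not fully operational.
# URL path and JSON fields are assumptions based on the common Statuspage format.
import json
import urllib.request

def fetch_status(url: str) -> dict:
    """Fetch and parse the status JSON document."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def alert_if_degraded(status: dict, post) -> bool:
    """Call `post` (e.g. a Slack webhook sender) when the indicator is not 'none'."""
    indicator = status.get("status", {}).get("indicator", "none")
    if indicator != "none":
        post({"text": f"Claude status: {indicator}"})
        return True
    return False
```

Run the pair on a cron or scheduler loop; `post` can be any callable that ships the payload to Slack, PagerDuty, or email.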
Terminology
Related Papers
4TB of voice samples just stolen from 40k AI contractors at Mercor
Mercor data breach exposes voice recordings and ID scans of 40,000 contractors, fueling deepfake and voice fraud risks.
I cancelled Claude: Token issues, declining quality, and poor support
Anthropic’s Claude Code Pro experienced a three-week decline in speed, token allowance, and support quality, sparking a community discussion among developers.
Different Language Models Learn Similar Number Representations
LLMs, regardless of architecture—from Transformers to LSTMs—consistently learn periodic patterns with periods T=2, 5, and 10 when representing numbers, mathematically explaining a 'convergent evolution' phenomenon beyond model architecture.
Diagnosing CFG Interpretation in LLMs
LLMs frequently lose semantic meaning despite syntactically correct output when exposed to novel grammar rules.
Kernel code removals driven by LLM-created security reports
Linux kernel maintainers are removing legacy drivers—ISA, PCMCIA, AX.25, ATM, and ISDN—after AI-generated security bug reports overwhelmed them, demonstrating a drastic response to unmanageable code.
HarDBench: A Benchmark for Draft-Based Co-Authoring Jailbreak Attacks for Safe Human-LLM Collaborative Writing