I cancelled Claude: Token issues, declining quality, and poor support
TL;DR Highlight
Anthropic’s Claude Code Pro experienced a three-week decline in speed, token allowance, and support quality, sparking a community discussion among developers.
Who Should Read
Developers currently paying for and using AI coding tools like Claude Code, Copilot, and Codex in production environments, particularly those considering alternatives due to recent changes in Claude’s performance or token limits.
Core Mechanics
- The author initially found Claude Code Pro satisfactory in speed, token allowance, and quality, but all three deteriorated rapidly over the following three weeks.
- A sudden spike to 100% token usage occurred after just two simple queries to Claude Haiku following a 10-hour break, with no clear explanation for the consumption.
- Customer support provided only generic responses from an AI bot, followed by a copy-pasted reply from a human agent, and ultimately closed the ticket with a disclaimer that it might not be monitored.
- The author’s working capacity dropped sharply: where they could previously work on three projects simultaneously, they could now complete only about two hours of work on a single project before exhausting the token limit.
- When asked to refactor a project, Claude Opus proposed a workaround—adding a generic initializer to ui-events.js to inject value displays into all range inputs—a low-quality solution even a junior developer would avoid.
- Opus consumed approximately 50% of the token allowance in five hours while implementing this workaround, wasting tokens before producing a usable result.
- Conversation cache issues were also present, requiring the model to reload the codebase from scratch after periods of inactivity, effectively doubling the cost of initial loading.
- The author is also comparing Claude Code to GitHub Copilot, OpenAI Codex, and locally-run Qwen3.5-9B models using OMLX and Continue.
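The conversation-cache issue above has a simple cost model: if the cache expires mid-session, the one-time cost of loading the codebase into context is paid twice. A minimal sketch, with all token figures purely illustrative:

```python
# Illustrative sketch of conversation-cache expiry cost (hypothetical numbers).
CODEBASE_TOKENS = 120_000   # assumed one-time cost to load the project into context
SESSION_TOKENS = 30_000     # assumed tokens spent on the actual work

def session_cost(cache_expired: bool) -> int:
    """Total tokens for a session; an expired cache forces a full codebase reload."""
    reloads = 2 if cache_expired else 1
    return reloads * CODEBASE_TOKENS + SESSION_TOKENS

print(session_cost(cache_expired=False))  # 150000
print(session_cost(cache_expired=True))   # 270000
```

Because the reload cost dominates the work cost in this sketch, a single cache expiry nearly doubles the session's total token consumption, matching the author's observation.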
Evidence
- A user reported receiving code from Claude Sonnet with missing requirements, duplicate code, unnecessary data mapping, and fake tests designed to pass rather than validate functionality, concluding that coding was easier before AI because verifying AI-generated code is more time-consuming.
- Conversely, a user employing Claude Opus as a "copilot" (limited-scope prompts, thorough review) experienced no token-limit issues and achieved 9/9 one-shot bug fixes in an old Unity C# project.
- Multiple colleagues reported a noticeable decline in Claude's performance over the past two months, with Claude 4.6 exhibiting forgetfulness and poor decision-making and 4.7 offering little improvement; users also expressed frustration with a "silent degradation" of effort level.
- Reports suggest Claude's performance varies significantly by time of day. A graph tracking Claude Code performance is available at marginlab.ai/trackers/claude-code, and some speculate that frontier models use a "quality dial" that adjusts quantization levels between peak and off-peak hours.
- A user who switched to OpenAI Codex (GPT 5.4/5.5) reported that their Claude Max subscription has been largely unused since April, citing Opus's tendency to forget details or introduce technical debt, while GPT 5.4+ considers edge cases and reduces subsequent errors.
How to Apply
- Regularly review Claude Code's thinking log to catch workarounds or suboptimal approaches early; they can be difficult to detect in the final output and consume significant tokens.
- Break large refactoring tasks or complex operations into smaller, well-defined prompts and review each result individually to improve token efficiency and code quality.
- Account for conversation-cache resets when planning long work sessions: either complete tasks within the token window or budget for the cost of reloading the codebase.
- If you rely on Claude for production work, monitor its performance using tools like marginlab.ai/trackers/claude-code and consider a multi-tool strategy, switching to alternatives such as Codex or local models during periods of degradation.
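The "smaller prompts" advice can be made concrete with a back-of-the-envelope budget check before starting a session. All numbers here are hypothetical, not actual Claude allowances:

```python
# Hypothetical helper: split a session's token allowance across smaller,
# well-defined prompts instead of one large refactor. Figures are illustrative.

def plan_prompts(allowance: int, est_tokens_per_prompt: int) -> int:
    """How many reviewed, limited-scope prompts fit inside the allowance."""
    if est_tokens_per_prompt <= 0:
        raise ValueError("per-prompt estimate must be positive")
    return allowance // est_tokens_per_prompt

# e.g. a 200k-token window split into ~15k-token refactoring steps
print(plan_prompts(200_000, 15_000))  # 13
```

Even a rough estimate like this makes it obvious when a single monolithic prompt (like the five-hour Opus refactor described earlier) risks consuming the whole allowance before producing a usable result.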
Code Example
# Claude Code’s maximum output token setting (environment variable mentioned in the comments)
export CLAUDE_CODE_MAX_OUTPUT_TOKENS=8000
# Local inference alternative (stack used by the author)
# OMLX + Continue extension + Qwen3.5-9B model combination
# When directly prompting the model with the llama_cpp web UI
# Fast one-shot processing without the Claude Code agent layer
Terminology
thinking log: A textual log of the internal reasoning process of extended-thinking models like Claude Opus, allowing users to preview the model's approach and identify potential issues before the final answer is generated.
conversation cache: Temporary server-side storage of previous conversation context (including the entire codebase). Using the cache avoids redundant token consumption, but it expires over time, requiring a full reload.
quantization: A technique for reducing the memory footprint and computational cost of AI models by lowering the numerical precision of model weights, potentially at the expense of output quality.
inference: The process of using a trained AI model to generate outputs from new inputs. "Local inference" refers to running the model directly on your own machine, without relying on cloud APIs.
vibe coding: A coding approach that prioritizes running AI-generated code to see if it works over carefully reviewing its correctness and structure. Useful for rapid prototyping, but not suitable for production-quality code.
workaround: A temporary fix that addresses a symptom without resolving the underlying cause. In this context, Claude's attempt to add a global initializer instead of directly fixing the JSX elements is an example of a workaround.
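To make the quantization entry concrete, here is a minimal, self-contained sketch of int8 weight quantization with toy values; it is not any specific model's scheme, just an illustration of trading precision for memory:

```python
# Minimal sketch of weight quantization: round float weights to the int8 range
# using a scale factor; dequantizing recovers the values with a small error.
weights = [0.12, -0.53, 0.98, -0.07]   # toy float32 weights
scale = max(abs(w) for w in weights) / 127  # map the largest weight to 127

quantized = [round(w / scale) for w in weights]
dequantized = [q * scale for q in quantized]

print(quantized)  # [16, -69, 127, -9]
print(max(abs(w - d) for w, d in zip(weights, dequantized)))  # bounded by scale/2
```

The maximum reconstruction error is bounded by half the scale factor, which is why aggressive quantization (fewer bits, larger scale) can visibly degrade output quality, as the "quality dial" speculation above suggests.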