I asked Claude: if everyone uses AI to write, what actually gets lost?
TL;DR Highlight
What AI loses when it writes for you isn't quality — it's identity. A philosophical reflection on how AI ghostwriting erases the individual's unique voice and lived experience.
Who Should Read
Writers and content creators using AI as a writing assistant.
Core Mechanics
- AI-generated writing can match or exceed average quality on surface metrics, but lacks the idiosyncratic voice, specific experience, and personal perspective that makes individual writing distinct.
- The author argues that writing is not just communication — it's identity construction. When you outsource writing entirely, you outsource a part of how you define and present yourself.
- There's a meaningful difference between AI as editing/polishing tool vs. AI as primary author — the former preserves voice while the latter replaces it.
- The concern isn't that AI writing is bad, but that homogenized AI-quality writing drowns out the diverse individual voices that make reading interesting.
- The reflection is particularly pointed for professional writers whose distinctive voice is their primary value proposition.
Evidence
- The post resonated strongly in the writing community, with many sharing personal experiences of AI-generated text feeling 'correct but soulless.'
- Counter-arguments noted that many people never had a distinctive writing voice to lose — for them, AI is purely additive.
- Journalists and bloggers noted that readers increasingly detect AI-written content not because it's poor quality but because it lacks the specific details and perspectives only a human observer can provide.
How to Apply
- Use AI for editing, restructuring, and improving your own drafts rather than as the primary author — preserve your voice by starting with your own writing.
- If you must use AI for first drafts, heavily rewrite to inject your specific experiences, opinions, and voice before publishing.
- Develop your writing voice deliberately — the more distinctive and grounded in specific experience it is, the harder it is for AI to replicate.
- Consider what writing means to you beyond output quality — if it's a thinking and identity practice, treat it as one.
Related Papers
Using Claude Code: The unreasonable effectiveness of HTML
A piece summarizing why the Claude Code team started preferring HTML over Markdown as an LLM output format, and its practical advantages; it directly affects workflows for building documents, specs, and dashboards with AI.
When to Vote, When to Rewrite: Disagreement-Guided Strategy Routing for Test-Time Scaling
Disagreement-guided routing boosts LLM accuracy on math and code by 3-7% with adaptive problem solving.
Less Is More: Engineering Challenges of On-Device Small Language Model Integration in a Mobile Application
Five failure modes and eight practical solutions emerged after five days of running on-device SLMs (Gemma 4 E2B, Qwen3 0.6B) with Wordle.
Dynamic Context Evolution for Scalable Synthetic Data Generation
A framework that completely eliminates duplication and repetition in large-scale synthetic data generation with LLMs using three mechanisms (VTS + Semantic Memory + Adaptive Prompt).
90%+ fewer tokens per session by reading a pre-compiled wiki instead of exploring files cold. Built from Karpathy's workflow.
A workflow-sharing post on how pre-organizing a codebase into a wiki format can cut per-session Claude token usage by more than 90%, compared with exploring the codebase cold each time.