Every LLM has a default voice, and it's making us all sound the same
TL;DR Highlight
All LLMs converge on the same default writing style. Noren is a service that learns your personal writing patterns and generates text in your voice.
Who Should Read
Content creators who want to use AI writing while maintaining their personal voice.
Core Mechanics
- LLMs tend to regress to the same 'default voice,' making all outputs sound similar.
- Noren learns your actual writing patterns before generating text (a rough illustration of what 'learning style' can mean follows this list).
- Early access available at usenoren.ai.
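Noren's internals are not public, so the following is only a hypothetical sketch of what 'learning writing patterns' could involve: extracting simple style statistics from a user's samples. The function name and the chosen features are illustrative assumptions, not Noren's actual method.

import re
from statistics import mean

def style_features(samples: list[str]) -> dict:
    """Crude style statistics from writing samples (illustrative only,
    not Noren's actual method)."""
    sentences = [s for text in samples for s in re.split(r"[.!?]+\s*", text) if s]
    words = [w for s in sentences for w in s.split()]
    return {
        "avg_sentence_length": mean(len(s.split()) for s in sentences),
        "avg_word_length": mean(len(w) for w in words),
        "exclamation_rate": sum(t.count("!") for t in samples) / max(len(sentences), 1),
    }

print(style_features(["I write short. Punchy, even.", "No fluff here!"]))

A real system would likely go further, for example by conditioning generation on the samples directly, but the point is the same: style is measured from your writing rather than assumed.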
Evidence
- The claim rests on the author's observation that outputs from different LLMs converge on a similar default style; no systematic comparison is cited.
- Noren's stated differentiator is learning the user's personal writing patterns before generation.
- The product is available in early access at usenoren.ai.
How to Apply
- If you're concerned about AI writing homogenization, try the early access at usenoren.ai.
- Providing your own writing samples to the model as style references is also effective; a minimal few-shot sketch follows this list.
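A do-it-yourself version of the second point: prepend your own samples to the prompt as few-shot style references. The helper below is a minimal sketch; the function name, sample texts, and prompt wording are placeholders, not anything from the original post.

def build_style_prompt(samples: list[str], task: str) -> str:
    """Assemble a prompt asking the model to imitate the given samples
    (illustrative sketch; tune wording and sample count for your model)."""
    shots = "\n\n".join(f"Sample {i + 1}:\n{s}" for i, s in enumerate(samples))
    return (
        "Here are samples of my writing. Match their tone, rhythm, and vocabulary.\n\n"
        f"{shots}\n\nNow, in that same voice: {task}"
    )

print(build_style_prompt(
    ["Shipping beats polishing. Every time.", "I cut adverbs on sight."],
    "write a two-sentence product update",
))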
Code Example
# System prompt example - Suppressing LLM default writing style
system_prompt = """
You are a writing assistant. Follow these style rules strictly:
- Do NOT start responses with 'Certainly!', 'Great!', 'Absolutely!', or similar filler.
- Do NOT overuse bullet points. Use prose when possible.
- Match the tone of the sample texts provided by the user.
- Be direct and concise. Avoid hedging phrases like 'It's worth noting that...'
- Write as if you are the user, not an AI assistant.
"""Terminology
Related Papers
Using Claude Code: The unreasonable effectiveness of HTML
A post on why the Claude Code team began preferring HTML over Markdown as an LLM output format and the practical advantages of doing so; it directly affects workflows for building documents, specs, and dashboards with AI.
When to Vote, When to Rewrite: Disagreement-Guided Strategy Routing for Test-Time Scaling
Disagreement-guided routing boosts LLM accuracy on math and code by 3-7% with adaptive problem solving.
Less Is More: Engineering Challenges of On-Device Small Language Model Integration in a Mobile Application
Five failure modes and eight practical solutions emerged after five days of running on-device SLMs (Gemma 4 E2B, Qwen3 0.6B) with Wordle.
Dynamic Context Evolution for Scalable Synthetic Data Generation
A framework that completely eliminates duplication and repetition in large-scale synthetic data generation with LLMs using three mechanisms (VTS + Semantic Memory + Adaptive Prompt).
90%+ fewer tokens per session by reading a pre-compiled wiki instead of exploring files cold. Built from Karpathy's workflow.
A workflow-sharing post on how pre-organizing a codebase as a wiki, rather than having Claude explore the codebase cold each session, can cut token usage per session by more than 90%.