Don't post generated/AI-edited comments. HN is for conversation between humans
TL;DR Highlight
Hacker News officially added a rule banning AI-generated or AI-edited comments — HN discusses what this means and whether it'll work.
Who Should Read
Anyone who participates in online technical communities and cares about the quality of discourse, and developers thinking about AI content moderation.
Core Mechanics
- Hacker News updated its official guidelines to explicitly prohibit comments generated or substantially edited by AI.
- The rule targets both fully AI-generated comments and comments where a human used AI to polish or expand their writing.
- Enforcement is necessarily imperfect — HN can't reliably detect AI-generated text and relies on community flagging and moderator judgment.
- The rationale: AI-generated comments dilute the distinctive HN voice, reduce authentic discourse, and can be produced at scale to manipulate discussion.
- This puts HN in a different posture than most platforms, which have taken a permissive or hands-off approach to AI-assisted content.
Evidence
- The HN guidelines update was linked in the announcement thread, with 'dang' (the main HN moderator) explaining the reasoning.
- Community reaction was mixed: many welcomed it as protecting HN's signal quality, while others argued it's unenforceable and draws an arbitrary line.
- Practical debate: is grammar-correcting AI different from spell-check? Where's the line between 'AI assistance' and 'AI generation'?
- Some noted that a skilled human using AI assistance to write a thoughtful comment is probably better for discourse than a careless human writing without it.
How to Apply
- For online community managers: HN's approach of explicit prohibition with community norm enforcement is worth watching — the rule's main value may be establishing a community norm rather than perfect technical enforcement.
- If you use AI to help with writing in online communities, read the specific platform rules — 'AI assistance' policies vary significantly across communities.
- For AI content detection: HN's approach implicitly acknowledges that AI detection tools are unreliable — community judgment and norms may be more effective than technical detection.
Terminology
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study that systematically evaluates LLMs writing TLA+ specifications: the generated specs mostly pass syntax checks, but their behavioral conformance with the real systems they model is only around 46%, illustrating the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic published NLA (Natural Language Autoencoders), a technique that converts the numeric vectors (activations) inside an LLM into directly readable natural language. A new advance in interpretability research into what AI models are actually "thinking."
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model achieved a 95%+ pass rate on only 3% of tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split a request across three tickets and Claude/GPT will write the security-vulnerable code anyway, 53–86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.