A few random notes from Claude coding quite a bit last few weeks
TL;DR Highlight
Andrej Karpathy shares honest observations from weeks of coding with Claude — productivity gains, brain atrophy, 'sleepwalking', and workflow tips.
Who Should Read
Developers actively using or considering AI coding tools, especially those worried about skill degradation or over-reliance.
Core Mechanics
- Karpathy reports real productivity gains: tasks that would have taken hours complete in minutes. The speed multiplier is real, not just hype.
- He coins 'sleepwalking' to describe a failure mode: going along with AI-generated code without truly understanding it, shipping things you can't debug or maintain.
- Brain atrophy is a genuine concern he names explicitly: if you stop writing code yourself, you lose fluency. The skill degrades through disuse just like a muscle.
- His recommended workflow: use AI for boilerplate, scaffolding, and unfamiliar APIs — not for core algorithmic logic where deep understanding matters most.
- He still writes critical, novel, or security-sensitive code manually; the AI assists rather than replaces him on high-stakes decisions.
- The most useful prompt pattern he found: describe the goal and constraints, let Claude propose an approach, then critically evaluate it before accepting rather than rubber-stamping.
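A minimal sketch of that goal-and-constraints pattern as a reusable template. The wording and helper function are illustrative assumptions, not Karpathy's actual prompt:

```python
# Hypothetical illustration of the "goal + constraints, then propose an
# approach" prompt pattern described above. The template text is an
# assumption, not a quote from the post.

PROMPT_TEMPLATE = """Goal: {goal}

Constraints:
{constraints}

Before writing any code, propose an approach and explain its trade-offs.
Wait for my review before implementing."""

def build_prompt(goal: str, constraints: list[str]) -> str:
    """Fill the template with a concrete goal and a bulleted constraint list."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return PROMPT_TEMPLATE.format(goal=goal, constraints=bullets)

prompt = build_prompt(
    goal="Parse a 10 GB CSV of server logs into hourly error counts",
    constraints=[
        "streaming only, never load the whole file into memory",
        "standard library only",
    ],
)
print(prompt)
```

The point of the final instruction is to force a review step: the model commits to an approach you can evaluate before any code exists to rubber-stamp.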
Evidence
- Karpathy's credibility here is very high: he helped invent the systems he is critiquing, which makes this unusually honest and well-informed self-reflection.
- HN reaction was strong agreement: many developers independently noticed the 'sleepwalking' phenomenon without having a name for it.
- Counter-arguments held that the brain-atrophy concern is overstated: we don't worry about calculators atrophying mental arithmetic, so why worry about AI atrophying code writing?
- Karpathy's response: there is a difference between arithmetic (algorithmic, rule-based) and programming judgment (creative, contextual); the latter has no clean calculator analogy.
- Several senior engineers noted they deliberately avoid AI for certain practice tasks to maintain fluency, accepting slower delivery in exchange for skill preservation.
How to Apply
- Audit your current AI coding usage: are there categories of code where you've stopped writing yourself? Deliberately reintroduce manual coding for those categories periodically.
- Before accepting AI-generated code, be able to explain every line: what it does, why it's correct, and what edge cases it handles. If you can't, don't ship it.
- Reserve AI assistance for: library integration, boilerplate, unfamiliar language syntax, test generation. Handle core logic, security decisions, and novel algorithms yourself.
- Set personal projects where AI assistance is deliberately off-limits — maintaining the ability to work without AI is a hedge against dependency.
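The audit step above can be made concrete by measuring what fraction of recent commits were AI-assisted. This sketch assumes AI-assisted commits carry a "Co-Authored-By: Claude" trailer (Claude Code adds one by default); adjust the marker for other tools:

```python
# Rough audit of how much of a repo's recent history was AI-assisted.
# Assumes assisted commits contain a co-author trailer in the message;
# the default marker below matches what Claude Code adds by default.

def ai_commit_ratio(messages: list[str],
                    marker: str = "Co-Authored-By: Claude") -> float:
    """Fraction of commit messages that contain the AI co-author marker."""
    if not messages:
        return 0.0
    assisted = sum(marker in m for m in messages)
    return assisted / len(messages)

# Feed it full commit messages, e.g. the output of:
#   git log --since="3 months ago" --format="%B%x00"
# split on the NUL separator before passing the list in.
```

A high ratio in a category of code you once wrote by hand is exactly the signal the bullet above suggests acting on.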
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and can apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings, a playful case that pushes the "Markdown is code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code and others) and supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of principles for designing CLI tools that AI agents can use well; as agents increasingly rely on CLIs as tools, these design choices are becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents collaborating in distinct roles, letting you assemble a multi-agent pipeline quickly with zero configuration, Vite-style.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes touching real production data can be rolled back, unifying GitHub, S3, and Google Drive into a single versioned filesystem.