Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster
TL;DR Highlight
An experiment report: given 16 GPUs for 8 hours, Claude Code runs 910 experiments, improves validation loss by 2.87% over a human-tuned baseline, and develops its own strategy for exploiting a mixed H100/H200 hardware pool.
Who Should Read
ML engineers who sink significant time into repeated hyperparameter-tuning experiments, and infrastructure engineers interested in giving AI agents autonomous control of cloud infrastructure.
Core Mechanics
- Claude Code autonomously managed the entire ML experimentation loop: designing experiments, submitting GPU jobs, monitoring results, updating hypotheses, and iterating, all without human intervention between iterations (a minimal sketch of such a loop follows this list).
- In 8 hours with 16 GPUs, the agent ran 910 experiments and found a configuration that reduced validation loss by 2.87% compared to the human-tuned baseline.
- The agent spontaneously developed a strategy for the mixed H100/H200 cluster: assigning larger batch sizes to the faster H200s and smaller jobs to H100s to maximize throughput.
- The agent maintained a running hypothesis log, systematically ruling out dead ends and prioritizing promising directions, behavior closer to that of a research scientist than of a grid search.
- Failure modes included the agent occasionally getting stuck in local optima and needing human nudges to explore different regions of the search space.
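To make the loop concrete, here is a minimal Python sketch of an agent-driven experiment loop with the reported hardware-aware assignment. Everything in it is an assumption for illustration: the post does not publish the agent's harness, so the function names, the JSONL log schema, and the batch-size threshold for H200 routing are invented, and run_job returns a fake loss so the sketch runs end to end.

```python
import json
import random
from dataclasses import dataclass

@dataclass
class Experiment:
    config: dict
    gpu_type: str                 # "H100" or "H200"
    val_loss: float | None = None

def propose_config(history: list[Experiment]) -> dict:
    """Stand-in for the agent's hypothesis-driven proposal step:
    perturb the best configuration seen so far."""
    scored = [e for e in history if e.val_loss is not None]
    if not scored:
        return {"lr": 3e-4, "batch_size": 256}
    best = min(scored, key=lambda e: e.val_loss)
    return {
        "lr": best.config["lr"] * random.choice([0.5, 1.0, 2.0]),
        "batch_size": max(32, int(best.config["batch_size"]
                                  * random.choice([0.5, 1.0, 2.0]))),
    }

def assign_gpu(config: dict) -> str:
    """Mirrors the reported strategy: large-batch jobs go to the faster,
    higher-memory H200s, smaller jobs to the H100s. The 512 cutoff is
    an invented placeholder."""
    return "H200" if config["batch_size"] >= 512 else "H100"

def run_job(exp: Experiment) -> float:
    """Placeholder for a real training-job submission; returns a fake
    validation loss so the sketch is runnable."""
    return random.uniform(2.0, 3.0)

def main() -> None:
    history: list[Experiment] = []
    for step in range(10):        # the real run did 910 iterations
        cfg = propose_config(history)
        exp = Experiment(config=cfg, gpu_type=assign_gpu(cfg))
        exp.val_loss = run_job(exp)
        history.append(exp)
        # Append to the structured log the agent re-reads every iteration.
        with open("experiment_log.jsonl", "a") as f:
            f.write(json.dumps({"step": step, **cfg,
                                "gpu": exp.gpu_type,
                                "val_loss": exp.val_loss}) + "\n")

if __name__ == "__main__":
    main()
```

The structural point is that every input to the agent's decisions, here the in-memory history and the JSONL log, is persisted in a format the agent can re-read, which is what makes hypothesis tracking across hundreds of iterations possible.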
Evidence
- The experiment logs were shared publicly, showing the agent's actual decision trail — commenters found the H100/H200 hardware strategy emergence particularly impressive.
- ML researchers noted that 2.87% validation loss improvement in 8 hours is genuinely competitive with what a skilled human ML engineer could achieve in a similar time budget.
- Skeptics raised reproducibility concerns: the improvement might be specific to this model/dataset combination and the agent's choices might not generalize.
- The cost analysis put the 8-hour run at approximately $800–1,200 (16 GPUs × 8 hours = 128 GPU-hours, implying roughly $6–9 per GPU-hour), comparable to a day of senior ML engineer time, which prompted discussion about the economics.
How to Apply
- Set up a structured experiment logging system before unleashing an agent on hyperparameter search — the agent needs a consistent format to read its own history.
- Define the search space explicitly and impose hard constraints (max batch size, min learning rate) to prevent the agent from exploring obviously bad regions; see the guardrail sketch after this list.
- Implement a 'human checkpoint' every N experiments or every hour: review the agent's hypothesis log and redirect if it's stuck or heading in an unproductive direction.
- Start with a smaller GPU allocation (2-4 GPUs) to verify the agent is behaving sensibly before scaling up to the full cluster.
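A minimal sketch of the guardrails from the list above, again with invented names, bounds, and thresholds rather than anything from the original experiment:

```python
# Illustrative guardrails for an autonomous tuning agent; the bounds,
# checkpoint interval, and log file name are assumptions for this sketch.

SEARCH_SPACE = {
    "lr":         {"min": 1e-5, "max": 1e-2},   # hard bounds, not hints
    "batch_size": {"min": 32,   "max": 1024},
}
CHECKPOINT_EVERY = 50   # pause for human review every N experiments

def validate(config: dict) -> None:
    """Reject any proposal outside the declared search space before it
    ever reaches the cluster."""
    for key, bounds in SEARCH_SPACE.items():
        value = config[key]
        if not bounds["min"] <= value <= bounds["max"]:
            raise ValueError(
                f"{key}={value} outside [{bounds['min']}, {bounds['max']}]")

def maybe_pause_for_review(step: int) -> None:
    """Human checkpoint: block until an operator has skimmed the hypothesis
    log and confirmed the agent is not stuck in a local optimum."""
    if step > 0 and step % CHECKPOINT_EVERY == 0:
        input(f"[checkpoint @ step {step}] review experiment_log.jsonl, "
              "then press Enter to continue... ")
```

In this setup, validate would run on every proposed configuration before job submission, and maybe_pause_for_review implements the human checkpoint by blocking until an operator has reviewed the log.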
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin that has up to 7 parallel sub-agents in Claude Code each review a PR from a different perspective, then apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community raised skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answers pings, a fun case of pushing the 'Markdown is the code and the LLM is the processor' idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code and others) and supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of design principles for making CLI tools easier for AI agents to use; as agents lean on CLIs more and more heavily, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in divided roles; like Vite, it lets you assemble a multi-agent pipeline quickly with zero configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox in which even changes to real production data can be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.