Improving 15 LLMs at Coding in One Afternoon: Only the Harness Changed
TL;DR Highlight
The performance bottleneck in LLM coding agents isn't the model — it's the edit tool harness. Changing just the edit format can swing performance by double digits.
Who Should Read
Engineers building or optimizing AI coding agents, and ML researchers studying the impact of tool interface design on agent performance.
Core Mechanics
- Across multiple coding agent frameworks, the model accounts for only part of the performance variance on benchmarks like SWE-bench — the edit tool format is a surprisingly large factor.
- Different edit formats (unified diff, whole-file replacement, search-and-replace, line-number based) produce dramatically different agent performance on the same model; see the format sketch after this list.
- The best-performing edit format varies by model — what works best for GPT-4o may not be optimal for Claude Sonnet, and vice versa.
- The underlying reason: models saw different distributions of edit operations during training, making each naturally better at some formats than others.
- The practical implication: if you're optimizing a coding agent, changing the edit format is one of the highest-leverage interventions available before retraining.
- This also explains some of the performance differences between coding agent frameworks — they use different default edit tools, and the format choice dominates.
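To make the differences concrete, here is the same one-line fix expressed in each of the four formats. File contents and payload shapes are illustrative assumptions, not any framework's actual wire format.

# The same edit (renaming a loop variable) as each format asks the
# model to emit it. All contents here are hypothetical.
unified_diff = """\
--- a/utils.py
+++ b/utils.py
@@ -1,2 +1,2 @@
 def total(items):
-    return sum(i.price for i in items)
+    return sum(item.price for item in items)
"""

whole_file = """\
def total(items):
    return sum(item.price for item in items)
"""

search_replace = {
    "file": "utils.py",
    "search": "return sum(i.price for i in items)",
    "replace": "return sum(item.price for item in items)",
}

line_number_edit = {
    "file": "utils.py",
    "line": 2,  # replace line 2 with the text below
    "new_text": "    return sum(item.price for item in items)",
}

The trade-offs differ per format: search-and-replace survives line drift but fails on non-unique matches; line-number edits are compact but break as soon as the file shifts under the agent; whole-file replacement never mis-anchors but burns output tokens and invites regressions elsewhere in the file.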
Evidence
- The analysis includes controlled experiments where only the edit format was changed and model/prompt/task were held constant — isolating the format as the variable.
- Performance swings of 10-20 percentage points on SWE-bench Verified were observed between edit format choices on the same model.
- HN commenters were surprised: most engineers had assumed benchmark differences between frameworks reflected prompt quality, not tool interface design.
- Follow-up experiments by community members confirmed the findings held across multiple model families.
- The aider project (a coding agent CLI) was cited as having done early work on edit format optimization; its findings align with this analysis.
How to Apply
- If you're building a coding agent: don't assume the default edit format of your chosen framework is optimal. Benchmark at least 3-4 edit formats (unified diff, whole-file, search-replace) against your target model; see the bake-off sketch after this list.
- For teams switching models: re-evaluate your edit format when you switch models — the optimal format is model-specific.
- Use this insight to prioritize optimization work: before spending weeks on prompt engineering, spend a day testing edit formats. The ROI is often higher.
- Check if your agent framework exposes edit format as a configurable parameter. If not, consider contributing the feature — it's high-value for the community.
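A bake-off can be a day's work. Below is a minimal sketch; run_agent_task() is a hypothetical hook into however your harness executes one benchmark task with a given model and edit format, and everything else is standard Python.

# Minimal edit-format bake-off. run_agent_task() is a hypothetical
# stand-in for your agent harness; wire it to your own runner.
from collections import defaultdict

EDIT_FORMATS = ["unified_diff", "whole_file", "search_replace", "line_number"]

def benchmark(model, tasks, run_agent_task):
    passed = defaultdict(int)
    for fmt in EDIT_FORMATS:
        for task in tasks:
            result = run_agent_task(model=model, edit_format=fmt, task=task)
            passed[fmt] += int(result.success)  # result.success: bool
    for fmt in EDIT_FORMATS:
        print(f"{model} / {fmt}: {passed[fmt] / len(tasks):.1%} pass rate")

Hold the model, prompt, and task set constant across formats, exactly as the controlled experiments above did, so the format is the only variable.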
Code Example
# Install tilth and apply its hash-based edit format to Claude Code
cargo install tilth # or npx tilth
tilth install claude-code --edit
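The digest doesn't define "hash-based edit", so here is a hedged sketch of the general technique (an assumption about the idea, not necessarily tilth's implementation): each line is shown to the model tagged with a short content hash, and edits address lines by hash instead of by number, so they stay valid when the file shifts.

# Illustrative hash-addressed line edits. This is a generic sketch,
# not tilth's actual format.
import hashlib

def line_tag(line: str) -> str:
    # Short content hash that identifies a line independent of position.
    return hashlib.sha1(line.encode()).hexdigest()[:6]

def render_for_model(text: str) -> str:
    # What the model sees: every line prefixed with its tag.
    return "\n".join(f"{line_tag(l)}| {l}" for l in text.splitlines())

def apply_edit(text: str, target_tag: str, new_line: str) -> str:
    # Replace the unique line whose tag matches, wherever it moved to.
    lines = text.splitlines()
    hits = [i for i, l in enumerate(lines) if line_tag(l) == target_tag]
    if len(hits) != 1:
        raise ValueError(f"tag {target_tag} matched {len(hits)} lines")
    lines[hits[0]] = new_line
    return "\n".join(lines)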
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin where up to seven parallel sub-agents in Claude Code each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings; an entertaining case of pushing the "Markdown is code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code and others), with blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A post collecting principles for designing CLI tools that AI agents can use well; as agents lean on CLIs more and more heavily, these design choices are becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating with divided roles; like Vite, it lets you stand up a multi-agent pipeline quickly with zero configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where they can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.