[P] Prompt optimization for analog circuit placement — 97% of expert quality, zero training data
TL;DR Highlight
Prompt optimization reaches 97% of expert quality on analog circuit placement with zero training data, learning iteratively from failure-to-success pairs
Who Should Read
Engineers applying LLMs to specialized tasks; AI developers interested in automatic prompt optimization
Core Mechanics
- VizPy prompt optimizer: iteratively learns from failure→success pairs to improve LLM layout reasoning (a loop sketch follows this list)
- Applied to analog IC placement (spatial reasoning + multi-objective: matching, parasitics, routing) — hard benchmark with no AI tools
- Zero domain-specific training data needed to achieve 97% of expert quality
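The post does not publish code, but the loop it describes is easy to sketch. Below is a minimal, hypothetical Python version: `call_llm` and `evaluate_placement` are assumed callables (the actual VizPy API and prompts are not public), and the threshold and round count are illustrative, not taken from the post.

```python
def _format_pairs(failures, successes):
    """Render contrasting examples as plain text for the rewrite prompt."""
    lines = []
    for label, group in (("FAILURE", failures), ("SUCCESS", successes)):
        for task, layout, score in group:
            lines.append(f"{label} (score {score:.2f})\ntask: {task}\nlayout: {layout}")
    return "\n\n".join(lines)


def optimize_prompt(base_prompt, tasks, call_llm, evaluate_placement,
                    max_rounds=10, pass_threshold=0.97):
    """Iteratively rewrite a prompt using failure-to-success pairs."""
    prompt = base_prompt
    for _ in range(max_rounds):
        failures, successes = [], []
        for task in tasks:
            layout = call_llm(prompt, task)            # LLM proposes a placement
            score = evaluate_placement(layout, task)   # multi-objective score in [0, 1]
            bucket = successes if score >= pass_threshold else failures
            bucket.append((task, layout, score))
        if not failures:                               # every task passes: stop early
            return prompt
        # Ask the LLM to rewrite its own instructions, using contrasting
        # failure/success pairs as evidence of what actually works.
        evidence = _format_pairs(failures[:3], successes[:3])
        prompt = call_llm(
            "Rewrite the instructions so the failing layouts would score as "
            "well as the succeeding ones.\n\nINSTRUCTIONS:\n" + prompt
            + "\n\nEVIDENCE:\n" + evidence,
            None)
    return prompt
```

The key design point is that no labeled training data enters the loop: the evaluator alone supplies the learning signal, which is what makes the pattern viable for data-poor domains like analog layout.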
Evidence
- VizPy blog (vizops.ai/blog/prompt-optimization-analog-circuit-placement) — methodology and results publicly available
How to Apply
- For tasks with limited domain-specific data, apply the prompt-optimizer + failure-to-success feedback-loop pattern
- For spatial reasoning and multi-objective optimization problems, iterative prompt improvement may be more effective than few-shot examples (a scoring sketch follows this list)
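One way the pass/fail signal could be produced for a placement task: a single score combining the matching, parasitics, and routing objectives the post names. The weights, field names, and normalization below are illustrative assumptions, not taken from the VizPy post.

```python
from dataclasses import dataclass


@dataclass
class PlacementMetrics:
    matching_error: float   # symmetry/mismatch of paired devices, 0 = perfect
    parasitic_cost: float   # normalized estimated parasitics, 0 = none
    routing_cost: float     # normalized wirelength/congestion estimate


def placement_score(m: PlacementMetrics,
                    w_match=0.5, w_par=0.3, w_route=0.2) -> float:
    """Map three normalized costs (each in [0, 1]) to a quality score in [0, 1]."""
    penalty = (w_match * m.matching_error
               + w_par * m.parasitic_cost
               + w_route * m.routing_cost)
    return max(0.0, 1.0 - penalty)


# A layout scoring roughly 97% of the maximum:
print(round(placement_score(PlacementMetrics(0.02, 0.05, 0.04)), 3))  # 0.967
```

Collapsing the objectives to one scalar keeps the optimizer's feedback loop simple; the trade-off is that weights encode expert priorities, so they are the one place domain knowledge still enters.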
Related Papers
Using Claude Code: The unreasonable effectiveness of HTML
A post on why the Claude Code team began preferring HTML over Markdown as an LLM output format and the practical advantages of doing so; it directly affects workflows for building documents, specs, and dashboards with AI.
When to Vote, When to Rewrite: Disagreement-Guided Strategy Routing for Test-Time Scaling
Disagreement-guided routing boosts LLM accuracy on math and code by 3-7% with adaptive problem solving.
Less Is More: Engineering Challenges of On-Device Small Language Model Integration in a Mobile Application
Five failure modes and eight practical solutions from five days of running on-device SLMs (Gemma 4 E2B, Qwen3 0.6B) in a Wordle mobile app.
Dynamic Context Evolution for Scalable Synthetic Data Generation
A framework using three mechanisms (VTS + Semantic Memory + Adaptive Prompt) to eliminate duplication and repetition in large-scale synthetic data generation with LLMs.
90%+ fewer tokens per session by reading a pre-compiled wiki instead of exploring files cold. Built from Karpathy's workflow.
A workflow-sharing post on how pre-organizing a codebase as a wiki can cut token usage per Claude session by more than 90%, compared with exploring the codebase cold each time.