Show HN: I built a sub-500ms latency voice agent from scratch
TL;DR Highlight
Building your own STT→LLM→TTS voice pipeline from scratch can achieve roughly half the latency of all-in-one platforms like Vapi; here's how.
Who Should Read
Developers building real-time voice AI applications who are hitting latency walls with managed platforms and are ready to own their own pipeline.
Core Mechanics
- The DIY pipeline architecture: Deepgram (STT) → LLM streaming API → ElevenLabs/Cartesia (TTS), with careful attention to streaming handoffs between each step.
- The key latency win comes from streaming: start TTS synthesis on the first few words of the LLM output rather than waiting for the full response (a sketch of this handoff follows this list).
- Managed platforms like Vapi add latency through abstraction layers and round-trip overhead — building directly against the APIs eliminates this.
- The author measured ~800ms end-to-end latency on the DIY pipeline vs. ~1600ms on Vapi for comparable quality settings.
- Tradeoffs: you now own reliability, error handling, voice activity detection (VAD), and turn-taking logic — things managed platforms handle for you.
- WebSockets throughout the pipeline (not HTTP) are essential for minimizing latency — avoid any HTTP request/response roundtrips in the hot path.
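To make the streaming handoff concrete, here is a minimal TypeScript sketch (Node, using the ws package). The Deepgram listen URL is real, but the transcript payload shape can vary by API version, and streamLlmTokens / sendToTts are hypothetical placeholders for your LLM and TTS providers' streaming APIs:

```typescript
// Minimal sketch of the streaming STT -> LLM -> TTS handoff (Node, "ws"
// package). Payload shapes are illustrative, not vendor-exact.
import WebSocket from "ws";

// 1. STT: stream microphone audio up, receive incremental transcripts.
const stt = new WebSocket("wss://api.deepgram.com/v1/listen", {
  headers: { Authorization: `Token ${process.env.DEEPGRAM_API_KEY}` },
});

stt.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  const text = msg.channel?.alternatives?.[0]?.transcript ?? "";
  // Act only on finalized utterances so half-words never reach the LLM.
  if (msg.is_final && text) void runTurn(text);
});

// 2. LLM: stream tokens and hand sentence-sized chunks to TTS the moment
// they appear; never wait for the full response.
async function runTurn(userText: string): Promise<void> {
  let pending = "";
  for await (const token of streamLlmTokens(userText)) {
    pending += token;
    const boundary = pending.search(/[.!?]\s/); // naive sentence boundary
    if (boundary !== -1) {
      sendToTts(pending.slice(0, boundary + 1)); // 3. TTS starts right here
      pending = pending.slice(boundary + 2);
    }
  }
  if (pending.trim()) sendToTts(pending); // flush whatever remains
}

// Hypothetical adapters: wire these to your LLM's streaming endpoint and
// your TTS provider's WebSocket input.
declare function streamLlmTokens(prompt: string): AsyncIterable<string>;
declare function sendToTts(sentence: string): void;
```

The design point is that nothing in the hot path waits: a finalized transcript triggers the LLM immediately, and each sentence boundary triggers TTS while the LLM is still generating.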
Evidence
- The author shared benchmark measurements comparing DIY pipeline latency against Vapi with the same STT/LLM/TTS components.
- HN commenters with voice AI experience corroborated the latency numbers, noting that streaming chunk handoffs are the biggest optimization lever.
- Some pointed out that Vapi and similar platforms have been improving their latency, so the gap may narrow — but the DIY approach still wins for the most latency-sensitive use cases.
- Others noted that the '2x faster' claim depends heavily on network conditions and component choices — results vary.
How to Apply
- Start TTS synthesis as soon as you have a natural sentence boundary in the LLM stream — don't wait for the full response. This alone can cut perceived latency by 40–50%.
- Use WebSockets for all pipeline components — Deepgram, your LLM endpoint, and TTS. Avoid HTTP polling in the real-time path.
- Implement voice activity detection (VAD) locally in the browser/client rather than on the server to reduce turn-detection latency (a browser sketch follows this list).
- Profile each stage separately: STT latency, LLM first-token latency, TTS first-audio latency. The bottleneck shifts by use case, and you need data to optimize intelligently (an instrumentation sketch follows below).
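A minimal browser-side VAD sketch using the Web Audio API, assuming a plain RMS-energy threshold; the threshold and the ~300 ms silence hangover are illustrative numbers, and production systems often use a trained model such as Silero VAD instead:

```typescript
// Browser sketch: naive energy-based VAD with the Web Audio API, so the
// client detects end-of-turn locally instead of waiting on the server.
// Call from a user gesture so the AudioContext is allowed to start.
async function startVad(onTurnEnd: () => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const buf = new Float32Array(analyser.fftSize);
  let silentFrames = 0;
  let speaking = false;

  const tick = () => {
    analyser.getFloatTimeDomainData(buf);
    let sum = 0;
    for (const s of buf) sum += s * s;            // accumulate signal energy
    const rms = Math.sqrt(sum / buf.length);

    if (rms > 0.02) {                             // speech threshold (tune it)
      speaking = true;
      silentFrames = 0;
    } else if (speaking && ++silentFrames > 20) { // ~300 ms at ~60 fps
      speaking = false;
      onTurnEnd();                                // end turn, no server roundtrip
    }
    requestAnimationFrame(tick);
  };
  tick();
}
```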
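For the per-stage profiling, a small instrumentation sketch; the stage boundaries are the part that matters, and the points where you stamp performance.now() are assumptions about where your pipeline raises events:

```typescript
// Sketch: per-stage latency instrumentation for one conversational turn.
interface TurnTimings {
  userStoppedSpeaking: number; // from VAD / endpointing
  sttFinal?: number;           // final transcript received
  llmFirstToken?: number;      // first streamed LLM token
  ttsFirstAudio?: number;      // first audio frame back from TTS
}

function report(t: TurnTimings): void {
  const ms = (a?: number, b?: number) =>
    a !== undefined && b !== undefined ? `${(b - a).toFixed(0)} ms` : "n/a";
  console.log(`STT finalize:    ${ms(t.userStoppedSpeaking, t.sttFinal)}`);
  console.log(`LLM first token: ${ms(t.sttFinal, t.llmFirstToken)}`);
  console.log(`TTS first audio: ${ms(t.llmFirstToken, t.ttsFirstAudio)}`);
  console.log(`End-to-end:      ${ms(t.userStoppedSpeaking, t.ttsFirstAudio)}`);
}

// Usage: stamp performance.now() as each event arrives, then report.
const t: TurnTimings = { userStoppedSpeaking: performance.now() };
// on STT final:        t.sttFinal = performance.now();
// on first LLM token:  t.llmFirstToken = performance.now();
// on first TTS audio:  t.ttsFirstAudio = performance.now(); report(t);
```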
Terminology
- STT (speech-to-text): converts the user's audio into text; Deepgram plays this role in the pipeline above.
- TTS (text-to-speech): synthesizes audio from text; ElevenLabs or Cartesia play this role here.
- VAD (voice activity detection): detects when the user starts and stops speaking, which drives turn-taking.
- First-token / first-audio latency: the time until the LLM streams its first token, and until the TTS engine returns its first audio frame; both dominate perceived responsiveness.
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and can apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings; a fun case of pushing the "Markdown is the code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents lean on CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating with divided roles; like Vite, it lets you assemble a multi-agent pipeline quickly with zero configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.