How I code with AI on a budget (or for free)
TL;DR Highlight
A budget-friendly AI coding workflow: use free model tabs for problem-solving and cheap models for file editing — separating the 'brain' from the 'hands'.
Who Should Read
Individual developers or side project builders who find AI coding tool API costs (Cursor, Cline, etc.) too expensive. People who want to combine multiple free models for practical coding.
Core Mechanics
- Core strategy: separate the 'brain' and the 'hands'. Use powerful models' free web chats (Claude, Gemini 2.5 Pro) to analyze hard problems and design solutions, then use cheap/free models (GPT-4.1) via Cline for the actual file edits.
- Agent tools (Cline, Cursor) add tool descriptions, MCP server configs, etc. to prompts, consuming tokens and potentially degrading model output quality.
- The 'surgical' approach — giving up agentic automation and using 100x smaller models with precise, targeted edits — was argued to be sufficient for most tasks.
- repomix can flatten a project into a single file for pasting into free web chat windows.
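The flattening step can be sketched in a few lines of Python — a minimal stand-in for what repomix does, not its actual output format; the `<code>` wrapper mirrors the context block shown later in this summary:

```python
from pathlib import Path

def flatten_project(root: str, exts: tuple[str, ...] = (".py", ".js")) -> str:
    """Concatenate matching source files under root into one pasteable blob."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            # Label each file with its path, then wrap its contents
            chunks.append(f"{path}:\n<code>\n{path.read_text()}\n</code>")
    return "\n".join(chunks)
```

In practice `npx repomix` handles ignore rules, binary files, and token counting for you; the sketch just shows the shape of the idea.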
Evidence
- Multiple commenters agreed that agent tools make models 'dumber'. One confirmed: 'Results are better pasting code into web chat than using GitHub Copilot or Cursor.'
- A user advocating the 'surgical' approach said giving up agents makes 100x smaller models sufficient. Context is key — project rules, conventions, and targeted file content matter more than model size.
- Free tiers mentioned: AI Studio (Gemini 2.5 Pro), lmarena.ai (Claude Opus 4), and various OpenRouter free models.
How to Apply
- If AI coding costs are a concern: handle complex bug analysis and architecture design in free web chat (AI Studio with Gemini 2.5 Pro, lmarena.ai with Claude Opus 4), then hand the solution to a cheap agent (Cline with GPT-4.1) for file edits only.
- Use repomix (npx repomix) to flatten your project into a single file for pasting into free web chat — gives the model full context without agent overhead.
- Consider ditching full agentic automation for a 'surgical' approach — precise prompts targeting specific files with smaller, cheaper models can match results.
Code Example
# Bundle project code into a single file with repomix
npx repomix
# Example of context block format generated by AI Code Prep
# Place the question at the top and bottom to improve AI focus
"""
Can you help me figure out why my program does x instead of y?
fileName.js:
<code>
... contents ...
</code>
nextFile.py:
<code>
import example
...
</code>
Can you help me figure out why my program does x instead of y?
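Assembling that block by hand gets tedious; a short helper can build it from a name-to-content mapping. This is a hypothetical sketch (the function name and signature are illustrative, not from AI Code Prep), but it reproduces the question-top-and-bottom format shown above:

```python
def build_context(question: str, files: dict[str, str]) -> str:
    """Wrap file contents in the context block format,
    repeating the question at the top and bottom."""
    parts = [question]
    for name, body in files.items():
        parts.append(f"{name}:\n<code>\n{body}\n</code>")
    parts.append(question)  # repeated to refocus the model on the ask
    return "\n".join(parts)
```

Paste the returned string straight into a free web chat tab.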
"""Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to build the core operations of LLM training from the ground up, without frameworks, and feel out the performance limits of Apple Silicon.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a structure that avoids fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and the file re-downloads even after deletion. Commenters raised a possible GDPR violation and the environmental cost of rolling this out to billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.