Running Claude Code fully offline on a MacBook — no API key, no cloud, 17s per task
TL;DR Highlight
A Reddit post showing how to run Claude Code fully offline on a MacBook: the author wrote a small Python server that implements the Anthropic Messages API over a local MLX model, so no API key, cloud connection, or per-token cost is needed.
Who Should Read
Developers who want to use AI coding assistants like Claude Code locally without API costs, or developers who need to leverage AI tools in internet-restricted environments (offline, secure networks, etc.).
Core Mechanics
- Claude Code normally sends every request to Anthropic's cloud via the Messages API; here that endpoint is replaced with a local HTTP server, so the CLI itself runs unchanged.
- The bridge server (~200 lines of Python, per the author) implements the Messages API directly on top of a local MLX model, with no proxy or middleware in between; a minimal sketch follows this list.
- Everything runs on-device, so no API key or network connection is required: prompts and code never leave the machine, there are no per-token costs, and the setup works in air-gapped environments.
- The title's ~17s-per-task figure is consistent with the reported throughput (~45 tok/s, i.e. ~11s for 500 tokens) on an M5 Max.
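To make the mechanism concrete, here is a minimal, non-streaming sketch of such a bridge server. This is not the author's implementation: Flask, the port, and the model path are assumptions, and a production bridge would also need SSE streaming and tool-use support for Claude Code to behave well.

```python
# Sketch: a local server speaking the Anthropic Messages API shape,
# backed by a local MLX model via mlx-lm. Model path is hypothetical.
from flask import Flask, jsonify, request
from mlx_lm import load, generate

app = Flask(__name__)
model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

@app.post("/v1/messages")
def messages():
    body = request.get_json()
    chat = []
    if isinstance(body.get("system"), str):  # top-level system prompt
        chat.append({"role": "system", "content": body["system"]})
    for m in body.get("messages", []):
        content = m["content"]
        if isinstance(content, list):  # content blocks -> plain text
            content = "".join(b.get("text", "") for b in content)
        chat.append({"role": m["role"], "content": content})
    prompt = tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=True
    )
    text = generate(model, tokenizer, prompt=prompt,
                    max_tokens=body.get("max_tokens", 512))
    # Reply in the Messages API response shape Claude Code expects.
    return jsonify({
        "id": "msg_local",
        "type": "message",
        "role": "assistant",
        "model": body.get("model", "local-mlx"),
        "content": [{"type": "text", "text": text}],
        "stop_reason": "end_turn",
        "usage": {"input_tokens": 0, "output_tokens": 0},
    })

if __name__ == "__main__":
    app.run(port=8080)
```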
Evidence
- Author built ~200-line Python server so Claude Code talks directly to local MLX model via Anthropic Messages API — no proxy or middleware
- M5 Max (128GB) benchmark: ~2.2s for 100 tokens (~45 tok/s), ~11s for 500 tokens; slower than the hosted API but fully offline at zero cost (a timing sketch follows this list)
- Counterpoint in the comments: this is already possible by swapping the API key for a local endpoint, so a custom bridge adds unnecessary complexity; commenters say the 'ollama launch claude' flow does the same
- Positive responses: one user ran Qwen3.5 30B 4-bit and built Conway's Game of Life on the first try; another said it 'will be essential as prices rise'
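The reported numbers are easy to sanity-check on your own hardware. A minimal timing sketch, assuming mlx-lm is installed (the model path is hypothetical, and results will vary with chip and quantization):

```python
# Rough tok/s measurement with mlx-lm; compare against the post's
# ~45 tok/s on an M5 Max (128GB). Model path is hypothetical.
import time
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a haiku about compilers."}],
    tokenize=False, add_generation_prompt=True,
)

start = time.perf_counter()
text = generate(model, tokenizer, prompt=prompt, max_tokens=100)
elapsed = time.perf_counter() - start

n = len(tokenizer.encode(text))
print(f"{n} tokens in {elapsed:.1f}s -> {n / elapsed:.1f} tok/s")
```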
How to Apply
- "The original content was inaccessible, so specific application instructions cannot be provided. Visit the original Reddit URL directly (https://www.reddit.com/r/ClaudeAI/comments/1s43b8w/) or log in to view the full content. | A similar approach involves connecting Ollama with a local model (e.g., Qwen, Llama family) to Claude Code as a custom API endpoint. Start by exploring how to change Claude Code's ANTHROPIC_BASE_URL environment variable to point to a local server address. | If you need an offline AI coding assistant, the Continue.dev + Ollama combination is also worth considering as an alternative."
Terminology
- MLX: Apple's open-source machine-learning framework optimized for Apple Silicon; mlx-lm runs LLMs on top of it.
- Anthropic Messages API: the HTTP API (POST /v1/messages) that Claude clients, including Claude Code, use to talk to a model.
- ANTHROPIC_BASE_URL: environment variable that redirects Claude Code's API traffic to a custom endpoint.
- Ollama: a tool for downloading and running open-weight LLMs locally.
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to take performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from scratch, without frameworks, and feel out the performance limits of Apple Silicon.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The core idea is a combination of preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit, which avoids fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to silently download the 4 GB Gemini Nano model file without user consent, and to re-download it after deletion. Concerns have been raised about a possible GDPR violation and the environmental cost when this is applied across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.