Hypura – A storage-tier-aware LLM inference scheduler for Apple Silicon
TL;DR Highlight
A Rust-based open-source project that intelligently distributes LLM weights across GPU, RAM, and NVMe when a model exceeds your Mac's physical memory, enabling models that crash llama.cpp with OOM errors to actually run.
Who Should Read
Developers with Apple Silicon Macs (MacBook Pro, Mac Studio, Mac Mini, etc.) who gave up running local LLMs due to memory constraints — especially ML engineers and AI researchers who want to experiment with 70B+ models on 32GB or less.
Core Mechanics
- The project implements smart tiered memory placement: model layers that are frequently accessed stay in GPU unified memory, less-used layers fall back to system RAM, and rarely-used layers spill to NVMe SSD.
- The tiering is dynamic — as different parts of the model are activated during inference, layers migrate between tiers based on access frequency and available memory (a minimal sketch of this placement policy follows this list).
- In benchmarks on a 32GB M3 Max, the project runs a 70B quantized model at 3-5 tokens/second — slow but functional, where llama.cpp fails entirely due to OOM.
- NVMe bandwidth on Apple Silicon (especially M-series Pro/Max/Ultra) is fast enough to make SSD spillover practical — the bandwidth isn't comparable to RAM but is sufficient for less-active layers.
- The project is in early stages and lacks some llama.cpp ecosystem integrations (certain quantization formats, sampling methods), but the core functionality works.
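To make the placement policy concrete, here is a minimal Rust sketch of frequency-based tier assignment. The `Tier` and `Layer` types, the byte budgets, and the greedy hottest-first policy are illustrative assumptions, not Hypura's actual code; re-running the assignment with fresh access counts between decode steps approximates the dynamic migration described above.

```rust
// Minimal sketch of frequency-based tier assignment (illustrative, not Hypura's code).
// Layers are sorted hottest-first; the hottest fill the GPU budget, the next
// group fills system RAM, and whatever remains spills to NVMe.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Tier {
    Gpu,  // Metal-visible unified memory, kept resident
    Ram,  // pageable system memory
    Nvme, // memory-mapped file on SSD
}

struct Layer {
    id: usize,
    bytes: u64,
    access_count: u64, // updated during inference
}

fn assign_tiers(layers: &mut [Layer], gpu_budget: u64, ram_budget: u64) -> Vec<(usize, Tier)> {
    // Hotter layers first.
    layers.sort_by(|a, b| b.access_count.cmp(&a.access_count));

    let (mut gpu_used, mut ram_used) = (0u64, 0u64);
    layers
        .iter()
        .map(|l| {
            let tier = if gpu_used + l.bytes <= gpu_budget {
                gpu_used += l.bytes;
                Tier::Gpu
            } else if ram_used + l.bytes <= ram_budget {
                ram_used += l.bytes;
                Tier::Ram
            } else {
                Tier::Nvme
            };
            (l.id, tier)
        })
        .collect()
}

fn main() {
    // Toy example: three equal-size layers with different access counts, small budgets.
    let mut layers = vec![
        Layer { id: 0, bytes: 500, access_count: 90 },
        Layer { id: 1, bytes: 500, access_count: 10 },
        Layer { id: 2, bytes: 500, access_count: 50 },
    ];
    let placement = assign_tiers(&mut layers, 600, 600);
    println!("{placement:?}"); // [(0, Gpu), (2, Ram), (1, Nvme)]
}
```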
Evidence
- Benchmark videos showing 70B models running on 32GB MacBook Pros generated significant excitement — many developers had assumed this was simply impossible.
- Commenters with NVMe bandwidth knowledge validated the technical approach: Apple Silicon's NVMe is fast enough that SSD spillover is viable in ways it wouldn't be on typical PC SSDs.
- Some skepticism about real-world usefulness: 3-5 tokens/second is too slow for interactive use but might work for batch processing or offline generation tasks.
- Rust implementation was specifically called out as a smart choice for this use case — the memory management precision and performance characteristics align well with the problem.
How to Apply
- If you're on an Apple Silicon Mac with 32GB or less and want to run 70B models, this is currently the best option — try it for offline batch generation tasks where speed is less critical.
- Start with a Q4 quantized model to minimize the memory footprint — the tiering benefits are largest when the model fits mostly in RAM with only a small SSD overflow (a rough footprint estimate follows this list).
- Use it for experimentation and evaluation, not production serving — the current performance characteristics make it suitable for 'does this model behave the way I want?' testing.
- Monitor NVMe write cycles when using SSD spillover extensively — inference with heavy SSD use will wear down the drive faster than typical usage.
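As a rough sanity check on the Q4 recommendation, the sketch below estimates the weight footprint and SSD spillover for a 70B model on a 32GB machine. The bits-per-weight, usable-RAM, layer-count, and SSD-bandwidth figures are assumptions for illustration, not measurements from Hypura.

```rust
// Back-of-envelope footprint and spillover estimate (assumed numbers, not Hypura output).
fn main() {
    let params: f64 = 70e9;
    let bits_per_weight = 4.5; // roughly what llama.cpp-style Q4 variants use once metadata is counted
    let weight_gb = params * bits_per_weight / 8.0 / 1e9;

    let usable_ram_gb = 24.0; // rough usable share of a 32 GB Mac after OS and app overhead
    let ssd_spill_gb = (weight_gb - usable_ram_gb).max(0.0);

    // If the internal NVMe sustains about 5 GB/s reads (a common figure for M-series
    // Pro/Max machines), fetching one cold layer costs on the order of 100 ms, which is
    // tolerable at 3-5 tokens/second only if most layers stay resident in faster tiers.
    let layer_count = 80.0;
    let layer_gb = weight_gb / layer_count;
    let ssd_gb_per_s = 5.0;
    let cold_layer_ms = layer_gb / ssd_gb_per_s * 1000.0;

    println!("weights ~ {weight_gb:.1} GB, SSD spillover ~ {ssd_spill_gb:.1} GB");
    println!("per-layer size ~ {layer_gb:.2} GB, cold-layer read ~ {cold_layer_ms:.0} ms");
    // weights ~ 39.4 GB, SSD spillover ~ 15.4 GB (before KV cache and activations)
    // per-layer size ~ 0.49 GB, cold-layer read ~ 98 ms
}
```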
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from the ground up without frameworks and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a combination of preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit, which avoids fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and the file is re-downloaded even after deletion. Concerns have been raised about potential GDPR violations and the environmental cost when this is rolled out across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.