Can I run AI locally?
TL;DR Highlight
A browser tool that detects your GPU specs via WebGPU and recommends which LLM models you can actually run locally on your hardware.
Who Should Read
Developers and enthusiasts who want to run LLMs locally but aren't sure which models will fit their GPU/VRAM, and builders creating tools for the local AI ecosystem.
Core Mechanics
- The tool runs entirely in the browser — it uses the WebGPU API to query GPU information (model, VRAM, compute capabilities) without any server-side processing.
- Based on the detected specs, it recommends specific quantized model variants (e.g., 'Llama 3.1 8B Q4_K_M fits in your 8GB VRAM') from Hugging Face.
- It accounts for quantization levels (Q4, Q5, Q8, etc.) and model sizes to calculate what actually fits in available VRAM with headroom for the KV cache.
- The tool also distinguishes between compute-bottlenecked and memory-bottlenecked inference, helping users understand what performance to expect.
- This addresses a real friction point: many people download a multi-gigabyte model only to have it crash with an out-of-memory error. A browser-based hardware check cuts down that trial and error.
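The fit calculation above can be sketched in a few lines. This is my own minimal version, not the tool's actual source: the bits-per-weight figures are approximate values commonly cited for llama.cpp GGUF quants, and the 2GB reserve (for KV cache plus OS overhead) is an assumed default.

```javascript
// Approximate bits-per-weight for common GGUF quantization levels (assumed
// values; real file sizes vary slightly per model architecture).
const BITS_PER_WEIGHT = { Q4_K_M: 4.85, Q5_K_M: 5.69, Q8_0: 8.5, F16: 16 };

// Hypothetical helper: does a model at a given quant fit in vramGiB,
// leaving reservedGiB of headroom for the KV cache and OS usage?
function fitsInVram(paramsBillions, quant, vramGiB, reservedGiB = 2.0) {
  const bpw = BITS_PER_WEIGHT[quant];
  const weightGiB = (paramsBillions * 1e9 * bpw) / 8 / 2 ** 30;
  return { weightGiB, fits: weightGiB + reservedGiB <= vramGiB };
}
```

Under these assumptions, an 8B model at Q4_K_M weighs in around 4.5 GiB, which is why it fits on an 8GB card with headroom, while a 70B model at the same quant does not fit in 24GB.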
Evidence
- The tool was shared with source code, demonstrating the WebGPU hardware detection approach.
- HN commenters with diverse hardware confirmed the detection accuracy — it correctly identified VRAM and compute tier across NVIDIA, AMD, and Apple Silicon.
- Some noted that WebGPU doesn't expose all the detail needed (e.g., actual available VRAM after OS/other app usage), so recommendations are conservative estimates.
- Feature requests included integration with llama.cpp presets and recommendations for CPU-only inference on systems without a discrete GPU.
How to Apply
- Before downloading any local LLM model, use a tool like this to confirm your VRAM headroom — remember to account for the VRAM the OS itself occupies (typically 1-2GB).
- When selecting a quantization level: Q4_K_M is a good default balance of quality and size; step up to Q5 or Q8 only if you have VRAM to spare.
- For developers building local AI tools: providing hardware-aware model recommendations reduces friction for new users — consider integrating a similar WebGPU detection step in your onboarding.
- The WebGPU API's hardware-detection capabilities (no native code required) are worth knowing about for browser-based developer tooling.
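For builders considering a similar onboarding step, detection looks roughly like this. A sketch, not the tool's code: it uses only `navigator.gpu.requestAdapter()`, `adapter.info`, and `adapter.limits` from the WebGPU spec, and the tier thresholds are my own illustrative choices. Note `maxBufferSize` is an adapter limit, not free VRAM — which is exactly why recommendations should stay conservative, per the Evidence section.

```javascript
// Browser-side sketch (hypothetical helper). Returns null when WebGPU is
// unavailable, e.g. unsupported browsers or CPU-only systems.
async function detectGpu() {
  if (typeof navigator === "undefined" || !("gpu" in navigator)) return null;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null;
  return {
    vendor: adapter.info?.vendor ?? "unknown",
    architecture: adapter.info?.architecture ?? "unknown",
    // An upper bound on allocatable buffer size, NOT actual free VRAM.
    maxBufferGiB: Number(adapter.limits.maxBufferSize) / 2 ** 30,
  };
}

// Pure helper: map the buffer-size bound to a coarse capability tier
// (illustrative cutoffs, not the tool's actual thresholds).
function vramTier(maxBufferGiB) {
  if (maxBufferGiB >= 20) return "large (quantized 70B-class)";
  if (maxBufferGiB >= 10) return "medium (13B-class)";
  if (maxBufferGiB >= 5) return "small (quantized 7-8B)";
  return "cpu-or-tiny";
}
```

Keeping the tier logic as a pure function separate from the async adapter query makes it easy to unit-test without a GPU present.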
Terminology
WebGPU: A modern browser API for GPU-accelerated graphics and computation, also useful for hardware capability detection without native code.
VRAM: Video RAM — the GPU memory that holds model weights and KV cache during inference. The primary constraint for running LLMs locally.
KV cache: Key-Value cache — the memory used during LLM inference to store computed attention states for the current context. Scales with context length.
Quantization: Reducing model weight precision (e.g., Q4 = 4-bit quantization) to shrink model size and fit in less VRAM, at a small quality cost.
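The KV-cache scaling noted above can be made concrete. For a standard transformer, cache size is 2 (keys and values) × layers × KV heads × head dimension × context length × bytes per element. A sketch using assumed Llama-3.1-8B-like dimensions (32 layers, 8 KV heads with GQA, head dim 128, fp16 elements):

```javascript
// KV cache size in GiB: 2 (K and V) * layers * kvHeads * headDim * ctxLen
// * bytesPerElem. bytesPerElem defaults to 2 (fp16).
function kvCacheGiB(layers, kvHeads, headDim, ctxLen, bytesPerElem = 2) {
  return (2 * layers * kvHeads * headDim * ctxLen * bytesPerElem) / 2 ** 30;
}
```

With these assumed dimensions, an 8192-token context costs exactly 1 GiB of KV cache — a concrete reason to leave headroom beyond the model weights when checking VRAM fit, and why the cost grows linearly as context length increases.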