Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training
TL;DR Highlight
Running 3 specific Transformer layers twice in the forward pass, with no weight changes and no training, boosted BBH logical deduction scores from 0.22 to 0.76. A notable empirical validation of the 'reasoning circuit' concept.
Who Should Read
ML engineers and researchers wanting to improve LLM reasoning without retraining costs, or developers interested in mechanistic interpretability of model internals.
Core Mechanics
- Reproduced David Ng's RYS (Repeat Your Steps) technique with additional experiments. The core idea: a 'reasoning circuit' spanning 3-4 consecutive layers exists inside Transformer models, and routing hidden states through that block twice in the forward pass improves reasoning without any weight changes or retraining (a minimal sketch follows this list).
- Duplicating layers 12-14 in Devstral-Small-2-24B (same weights, two passes) boosted BBH (Big Bench Hard) logical deduction scores from 0.22 to 0.76. Achieved purely by routing hidden states through the same layer circuit twice.
- Qwen2.5-32B also showed a 17% reasoning improvement when a specific set of 3 layers was duplicated. Not all layers work, though: choosing which layers to duplicate is the key, and a sweep tool is provided to find them.
- Trade-offs exist. In Devstral-24B experiments, mathematical and causal reasoning improved but instruction following and code generation degraded. The pattern is 'thinks deeper but follows instructions less precisely.'
- Experiments ran overnight on 2 consumer AMD GPUs (RX 7900 XT + RX 6950 XT), with rigorous evaluation on Vast.ai H200 instances, showing this kind of experimentation is accessible without special infrastructure.
- The released toolkit includes reasoning_probe.py, sweep.py, gguf_surgery.py, etc. — automating the search for reasoning circuit layers and direct surgery on GGUF model files.
- This technique likely works thanks to Transformer residual connections: because each layer only adds an update to a residual stream, the network tolerates partial damage and keeps functioning even when specific layers are repeated or removed (see the note after this list).
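The core trick can be reproduced outside GGUF with a few lines of Hugging Face transformers code. A minimal sketch, assuming a Llama/Qwen-style decoder-only model; the model name and layer window are illustrative, and the actual repo patches GGUF files directly via gguf_surgery.py:

```python
# Minimal sketch: run layers DUP_START..DUP_END twice per forward pass.
# Assumptions: the model exposes model.model.layers, and the transformers
# version stores a layer_idx on each attention module for KV-cache indexing.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-7B-Instruct"  # illustrative; the post used Devstral-Small-24B
DUP_START, DUP_END = 12, 14         # inclusive window to duplicate

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def duplicate_window(model, start, end):
    """Splice deep copies of layers start..end in right after the originals,
    so the stream runs: ..., start..end, start'..end', end+1, ..."""
    layers = list(model.model.layers)
    dup = [copy.deepcopy(layers[i]) for i in range(start, end + 1)]
    new_layers = layers[: end + 1] + dup + layers[end + 1 :]
    # Re-number KV-cache indices so each copied attention layer gets its own slot.
    for idx, layer in enumerate(new_layers):
        if hasattr(layer.self_attn, "layer_idx"):
            layer.self_attn.layer_idx = idx
    model.model.layers = torch.nn.ModuleList(new_layers)
    model.config.num_hidden_layers = len(new_layers)
    return model

model = duplicate_window(model, DUP_START, DUP_END)
prompt = "If A is taller than B and B is taller than C, who is shortest?"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```

Using deep copies rather than inserting the same module object twice keeps the KV-cache bookkeeping simple; since the weights are identical either way, the forward computation matches the shared-weights setup the post describes.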
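One way to see why repetition is benign rather than destructive: each block writes an additive update into the residual stream,

$$h_{\ell+1} = h_\ell + f_\ell(h_\ell), \qquad h'_{\ell+1} = h_{\ell+1} + f_\ell(h_{\ell+1}),$$

so when $f_\ell$ is small relative to the stream (near-identity), applying it a second time nudges the representation further in the same direction instead of rescaling everything downstream, which is also why removing such a layer often changes little.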
Evidence
- Some commenters offered an alternative explanation: performance didn't actually improve; rather, mechanisms that inhibit reasoning (introduced during RLHF post-training) got disrupted. The theory is that the duplicated layers are close to identity functions and implement refusal circuits or reasoning-degrading behavior from post-training, which stop working correctly when run twice.
- One commenter shared hands-on 'neuroanatomy' experiments with Qwen2.5/Qwen3: removing certain layers had no effect, removing late layers caused infinite generation (the model never emitted an EOS token), and removing early layers produced random output (a removal sketch follows this list). They also noted abliteration (suppressing refusal behavior by finding refusal vectors) was possible with just 10 examples.
- Solar 10.7B (released ~2 years ago) was mentioned as using 'Depth Up-Scaling' (repeating layers then additional training) to achieve strong size-relative performance. Conceptually connected to this experiment, though that approach required training. Paper link: arxiv.org/abs/2312.15166.
- The question 'does repeating N times make it even better?' was raised. Ideas included looping until convergence (see the loop sketch after this list), or an MoE variant where a router dynamically decides layer pass patterns like '13→13→14→14→15→15→16'. Training with loops from the start, so that circuits separate naturally, was also proposed.
- Research showing that layers can be removed (pruned) while maintaining benchmark scores was mentioned, suggesting many layers in trained models may be redundant, which connects to why duplicating specific layers still works.
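The removal probe from that comment is the mirror image of duplication and just as short. A hedged sketch, with the same model/layer assumptions as the duplication sketch above (window indices are illustrative):

```python
# Sketch of the "neuroanatomy" probe: delete a layer window and observe what
# breaks (gibberish, lost EOS, or nothing at all).
import torch

def remove_window(model, start, end):
    keep = [l for i, l in enumerate(model.model.layers) if not (start <= i <= end)]
    # Re-number KV-cache indices, just as in the duplication sketch.
    for idx, layer in enumerate(keep):
        if hasattr(layer.self_attn, "layer_idx"):
            layer.self_attn.layer_idx = idx
    model.model.layers = torch.nn.ModuleList(keep)
    model.config.num_hidden_layers = len(keep)
    return model

# e.g. remove_window(model, 35, 37) on a late window to look for the
# "never finds EOS" failure mode the commenter described.
```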
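The 'loop until convergence' idea from the thread is easy to state as code. A speculative sketch with no reported results behind it; `block` is a stand-in for the candidate circuit applied to the residual stream, ignoring KV caching:

```python
import torch

@torch.no_grad()
def loop_until_convergence(hidden, block, max_iters=8, tol=1e-3):
    """Apply the candidate reasoning circuit repeatedly until the residual
    stream stops moving, instead of a fixed second pass."""
    for i in range(max_iters):
        new_hidden = block(hidden)
        # Relative change of the residual stream as the stopping criterion.
        if (new_hidden - hidden).norm() / hidden.norm() < tol:
            return new_hidden, i + 1
        hidden = new_hidden
    return hidden, max_iters
```

A learned router choosing patterns like '13→13→14→14→15' would replace the fixed loop with a per-token decision, which is essentially the depth-wise MoE variant proposed in the thread.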
How to Apply
- If you're running open-source LLMs (Qwen, Devstral, etc.) and want to boost mathematical or logical reasoning quality without a retraining budget, use the repo's sweep.py to find reasoning circuit layers and gguf_surgery.py to modify GGUF models for quick A/B testing (a hypothetical sweep sketch follows this list).
- For tasks where deep reasoning matters more than instruction following (math problem solving, logic puzzles, etc.), use this technique to derive a reasoning-specialized model variant from an existing model without any fine-tuning.
- If you want to explore model internals (mechanistic interpretability), the repo's reasoning_probe.py, eq_probe.py, comprehensive_probe.py, etc. let you investigate what specific layers do — and you can run these experiments overnight on consumer AMD or NVIDIA GPUs.
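The repo's sweep.py automates the layer search; its exact interface isn't shown here, so the following is a hypothetical stand-in that scores every 3-layer window with a crude proxy (negative LM loss on a couple of deduction probes) and reuses duplicate_window from the first sketch. A real run would evaluate on BBH tasks instead:

```python
import copy
import torch

# Cheap proxy probes; a real sweep would score BBH logical deduction.
PROBES = [
    ("If A > B and B > C, then the smallest is", " C"),
    ("All cats are animals. Tom is a cat. Tom is an", " animal"),
]

@torch.no_grad()
def proxy_score(model, tok):
    total = 0.0
    for prompt, answer in PROBES:
        ids = tok(prompt + answer, return_tensors="pt")
        # Higher is better: negative causal-LM loss over the probe text.
        total -= model(**ids, labels=ids["input_ids"]).loss.item()
    return total

def sweep(base_model, tok, window=3):
    # Assumes duplicate_window from the duplication sketch above.
    # NB: deepcopy per window is simple but slow and memory-hungry; in
    # practice you would patch the layer list in place and restore it.
    scores = {}
    n = len(base_model.model.layers)
    for start in range(n - window + 1):
        candidate = duplicate_window(copy.deepcopy(base_model), start, start + window - 1)
        scores[(start, start + window - 1)] = proxy_score(candidate, tok)
    best = max(scores, key=scores.get)
    print(f"best {window}-layer window: {best}, proxy score {scores[best]:.3f}")
    return scores
```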
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operation of LLM training from the ground up without frameworks and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a structure combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit to avoid fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Possible GDPR violations and the environmental cost of rolling this out to billions of devices are being raised as concerns.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.