DBAutoDoc: Automated Discovery and Documentation of Undocumented Database Schemas via Statistical Analysis and Iterative LLM Refinement
TL;DR Highlight
Automates documentation of undocumented ("dark") legacy databases via backpropagation-inspired iterative LLM refinement: 96.1% composite score at roughly $0.70 per 100 tables, a 99.5% cost reduction over manual documentation
Who Should Read
Backend developers, DBAs, and data engineers who have inherited undocumented legacy databases; teams handling enterprise data migrations
Core Mechanics
- Composite score 96.1% on AdventureWorks (both Gemini and Claude models) — FK F1 94.2%, PK F1 95.0%, description coverage 99%
- Core insight: treats schemas as graphs and propagates semantic context bidirectionally between parent and child tables (analogous to backpropagation). Median convergence: 2 iterations
- The LLM is essential for FK detection: statistics alone finds 15/91 true FKs at 20% precision, while the LLM alone finds 75/84 (89% precision); deterministic validation gates then add +23 F1 points over LLM-only detection
- Validated on private enterprise databases: OrgA (36 tables) reached 97% PK coverage and OrgB (125 tables) 93%, ruling out memorization of public schemas from LLM pre-training
- Claude Sonnet 4.6/Opus 4.6 achieve equivalent quality with 7x fewer tokens than Gemini, making Claude the most cost-effective option
- Manual documentation: 2–4 hours per table, $12,000–$48,000 per 100 tables. DBAutoDoc: ~$0.70 per 100 tables (99.5% reduction)
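The bidirectional propagation described above can be sketched as a plain fixed-point iteration over the schema graph. This is a minimal illustration, not DBAutoDoc's actual API: `summarize` is a deterministic stand-in for the LLM call (it merges neighbor vocabulary instead of generating prose), and all type and function names are assumptions.

```typescript
// Sketch: iterative semantic propagation over a schema dependency graph.
// Each pass recomputes every table's description from its neighbors'
// current descriptions, stopping when a full pass changes nothing.

type Table = { name: string; parents: string[]; children: string[] };

// Toy schema: Order references Customer; OrderLine references Order.
const schema: Record<string, Table> = {
  Customer:  { name: "Customer",  parents: [],           children: ["Order"] },
  Order:     { name: "Order",     parents: ["Customer"], children: ["OrderLine"] },
  OrderLine: { name: "OrderLine", parents: ["Order"],    children: [] },
};

// Deterministic stand-in for the LLM call: a description is the sorted set
// of table names reachable so far (own name plus words in neighbor descriptions).
function summarize(t: Table, desc: Map<string, string>): string {
  const names = new Set<string>([t.name]);
  for (const n of [...t.parents, ...t.children]) {
    for (const w of (desc.get(n) ?? "").split(" ").filter(Boolean)) names.add(w);
  }
  return [...names].sort().join(" ");
}

function refine(
  tables: Record<string, Table>,
  maxIter = 10,
): { desc: Map<string, string>; iters: number } {
  // "Random initialization": every description starts empty.
  const desc = new Map(Object.keys(tables).map((n) => [n, ""]));
  for (let i = 1; i <= maxIter; i++) {
    let changed = false;
    for (const t of Object.values(tables)) {
      const next = summarize(t, desc);
      if (next !== desc.get(t.name)) {
        desc.set(t.name, next);
        changed = true;
      }
    }
    if (!changed) return { desc, iters: i }; // converged: a pass with no edits
  }
  return { desc, iters: maxIter };
}
```

On this three-table chain, context flows both up and down the graph until every table's description reflects the whole connected component; the real system converges when the LLM's regenerated descriptions stop changing, with a reported median of 2 iterations.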
Evidence
- 4 public benchmarks (AdventureWorks, Chinook, Northwind, LousyDB) + 2 private enterprise databases (OrgA, OrgB) evaluated
- Ablation: removing deterministic gates drops FK F1 from 94.2% to 71.7% (-22.5pp), and statistics alone reaches only 30% F1; each layer's contribution is clearly isolated
How to Apply
- Install immediately with: npm install @memberjunction/db-auto-doc — supports SQL Server, PostgreSQL, and MySQL
- Injecting ground truth (verified constraints) and providing seed context (domain hints) speeds convergence from 3–5 iterations to 2
- For PII-sensitive databases, set sampleSize: 0 to use only structural metadata without transmitting actual values (GDPR compliance)
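As a hedged sketch, a configuration combining the options above might look like the object below. Only `sampleSize: 0` comes from the notes here; every other field name (`connection`, `seedContext`, `groundTruth`, etc.) is a guess for illustration and should be checked against the package's README before use.

```typescript
// Hypothetical DBAutoDoc configuration sketch. Field names other than
// sampleSize are assumptions, not the package's documented schema.
const config = {
  connection: {
    dialect: "postgres", // also "mssql" or "mysql" per the supported list
    connectionString: "postgres://user:pass@host:5432/mydb", // placeholder
  },
  // PII-safe mode: analyze structural metadata only; no row values
  // are sampled or transmitted to the LLM.
  sampleSize: 0,
  // Domain hint (seed context) to speed convergence.
  seedContext: "E-commerce order-management schema",
  // Verified constraints injected as ground truth.
  groundTruth: {
    primaryKeys: { Customer: ["CustomerID"] },
    foreignKeys: [{ from: "Order.CustomerID", to: "Customer.CustomerID" }],
  },
};
```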
Original Abstract
A tremendous number of critical database systems lack adequate documentation. Declared primary keys are absent, foreign key constraints have been dropped for performance, column names are cryptic abbreviations, and no entity-relationship diagrams exist. We present DBAutoDoc, a system that automates the discovery and documentation of undocumented relational database schemas by combining statistical data analysis with iterative large language model (LLM) refinement. DBAutoDoc's central insight is that schema understanding is fundamentally an iterative, graph-structured problem. Drawing structural inspiration from backpropagation in neural networks, DBAutoDoc propagates semantic corrections through schema dependency graphs across multiple refinement iterations until descriptions converge. This propagation is discrete and semantic rather than mathematical, but the structural analogy is precise: early iterations produce rough descriptions akin to random initialization, and successive passes sharpen the global picture as context flows through the graph. The system makes four concrete contributions detailed in the paper. On a suite of benchmark databases, DBAutoDoc achieved overall weighted scores of 96.1% across two model families (Google's Gemini and Anthropic's Claude) using a composite metric. Ablation analysis demonstrates that the deterministic pipeline contributes a 23-point F1 improvement over LLM-only FK detection, confirming that the system's contribution is substantial and independent of LLM pre-training knowledge. DBAutoDoc is released as open-source software with all evaluation configurations and prompt templates included for full reproducibility.