Applications of Large Language Models to SQL Learning
TL;DR Highlight
An o4-mini-based agent auto-generates SQL practice problems and grades student submissions at human-level accuracy.
Who Should Read
Developers building SQL education platforms or automated code grading pipelines. Backend engineers designing code evaluation systems with LLM-based agents.
Core Mechanics
- Automatically synthesizes SQL practice problems + pedagogical metadata (difficulty, concept tags, etc.) based on real-world data cleaning scenarios
- Using a multi-step 'operator planning → SQL generation' pipeline instead of a single prompt significantly improves reference SQL accuracy
- OpenAI o4-mini achieves near-SOTA performance even in zero-shot (no examples) settings, rivaling supervised SOTA pipelines
- Evaluates student-submitted SQL against a rubric, auto-generating partial scores and improvement feedback
- Deployed in a real class: LLM grading was compared against 6 human graders across 326 submissions to 4 exam questions, with competitive results on most question types
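A synthesized problem record with pedagogical metadata might look like the sketch below. The field names here are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical shape of a synthesized practice problem with pedagogical
# metadata (field names are illustrative, not taken from the paper).
problem = {
    "id": "q-0042",
    "scenario": "Clean a customer-orders export with duplicate rows",
    "description": "Return each customer's total order amount for 2023, "
                   "excluding cancelled orders.",
    "schema": {
        "orders": ["order_id", "customer_id", "amount", "status", "created_at"],
        "customers": ["customer_id", "name"],
    },
    "difficulty": "medium",
    "concept_tags": ["JOIN", "GROUP BY", "WHERE", "aggregate functions"],
}

def validate_problem(p: dict) -> bool:
    """Minimal sanity check before a problem enters the practice pool."""
    required = {"id", "description", "schema", "difficulty", "concept_tags"}
    return required <= p.keys() and len(p["concept_tags"]) > 0

print(validate_problem(problem))  # -> True
```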
Evidence
- Multi-step reference answer generator substantially outperforms single-prompt baseline, approaching SL-based SOTA
- LLM grading is competitive with human graders across most question types in a real-class study with 326 submissions × 6 graders
- o4-mini achieves SOTA-level performance in zero-shot settings without supervised training
How to Apply
- When building a SQL education platform, call the LLM twice during problem generation: first to plan which operators to use (JOIN, GROUP BY, etc.), then to write the SQL. This yields better quality than a single prompt.
- When building an auto-grading pipeline, define a rubric with partial scoring criteria in JSON and let the LLM judge each criterion, giving you human-level grading feedback.
- When automating coding assignments, separating the pipeline into three independent agents (problem synthesis, reference answer generation, and grading) lets you improve or swap each stage independently.
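The three-stage separation suggested above can be sketched as plain function composition. Here `call_llm` is a stand-in that returns canned responses so the sketch runs offline; in practice it would wrap whatever model client you use (the paper used OpenAI's o4-mini), and the prompt texts are abbreviated placeholders:

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. o4-mini). Returns canned
    responses keyed on the prompt so this sketch runs offline."""
    if "Plan the list of operators" in prompt:
        return json.dumps({"operators": [
            {"name": "JOIN", "reason": "combine orders with customers"},
            {"name": "GROUP BY", "reason": "aggregate per customer"},
        ]})
    if "Write executable SQL" in prompt:
        return ("SELECT c.name, SUM(o.amount) AS total\n"
                "FROM orders o JOIN customers c ON o.customer_id = c.customer_id\n"
                "GROUP BY c.name;")
    return json.dumps({"scores": [
        {"criterion": "Correct table JOIN", "score": 2, "feedback": "ok"}]})

# Stage 1: operator planning
def plan_operators(problem: str, schema: str) -> dict:
    prompt = (f"Plan the list of operators needed to solve the following "
              f"SQL problem.\nProblem: {problem}\nSchema: {schema}")
    return json.loads(call_llm(prompt))

# Stage 2: reference SQL generation conditioned on the plan
def generate_sql(problem: str, schema: str, plan: dict) -> str:
    prompt = (f"Write executable SQL based on the following operator plan.\n"
              f"Problem: {problem}\nSchema: {schema}\nPlan: {json.dumps(plan)}")
    return call_llm(prompt)

# Stage 3: rubric-based grading of a student submission
def grade(reference_sql: str, student_sql: str, rubric: list) -> dict:
    prompt = (f"Grade the student's SQL submission against the rubric.\n"
              f"Reference: {reference_sql}\nStudent: {student_sql}\n"
              f"Rubric: {rubric}")
    return json.loads(call_llm(prompt))

# Each stage is an independent function, so any one can be swapped out.
plan = plan_operators("total order amount per customer", "orders, customers")
reference = generate_sql("total order amount per customer", "orders, customers", plan)
report = grade(reference, "SELECT * FROM orders;", ["Correct table JOIN (2 points)"])
print(plan["operators"][0]["name"], report["scores"][0]["score"])  # -> JOIN 2
```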
Code Example
# Multi-step SQL reference answer generation prompts (based on o4-mini).
# Placeholders like {problem_description} are filled via str.format(),
# so literal JSON braces in the templates are doubled ({{ }}).

# Step 1: Operator planning
planning_prompt = """
Plan the list of operators needed to solve the following SQL problem.
Problem: {problem_description}
Table schema: {schema}
List the operators to use in order, and explain why each is needed.
JSON format: {{"operators": [{{"name": "JOIN", "reason": "..."}}, ...]}}
"""

# Step 2: SQL generation based on the plan
sql_gen_prompt = """
Write executable SQL based on the following operator plan.
Problem: {problem_description}
Schema: {schema}
Operator plan: {operator_plan}
Requirements:
- Use standard SQL syntax
- Reflect all planned operators
- Explain the purpose of each clause with comments
"""

# Step 3: Rubric-based grading
grading_prompt = """
Grade the student's SQL submission according to the rubric below.
Reference SQL: {reference_sql}
Student submission: {student_sql}
Rubric:
- Correct table JOIN (2 points)
- GROUP BY clause accuracy (2 points)
- WHERE condition completeness (2 points)
- Column selection accuracy (2 points)
- Query executability (2 points)
Return the score and reason for each item, along with suggestions for improvement, as JSON:
{{"scores": [{{"criterion": "...", "score": N, "feedback": "...", "suggestion": "..."}}]}}
"""
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and can even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has raised skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse IP packets directly and construct ICMP echo replies so that it actually responds to pings; a fun case that pushes the idea of "Markdown is the code and the LLM is the processor" all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use effectively; as agents invoke CLIs more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents so they can split roles and collaborate, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox whose changes can be rolled back even if they touch real production data, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.
Original Abstract
We present a Large Language Model (LLM)-assisted SQL learning system that closes the loop from problem discovery to grading. Grounded in real-world data-wrangling scenarios, our agentic workflow (i) synthesizes industry-style practice problems with pedagogical metadata, (ii) produces executable reference SQL via a multi-step operator-planning pipeline, and (iii) grades student submissions against rich rubrics while explaining partial credit and surfacing actionable feedback for revision. We evaluate two core capabilities. First, on a large corpus of realistic SQL problems, our zero-shot, multi-step reference-answer generator, implemented with OpenAI's o4-mini, substantially outperforms a single-prompt baseline while approaching the state-of-the-art pipelines trained with supervised learning. Second, in a classroom deployment, we compare LLM-assisted grading with human graders across four exam questions, encompassing 326 submissions evaluated by six graders. The results indicate that LLMs can provide grading signals competitive with those of human graders for many question types. Overall, the system is designed for responsible educational use through real-world problems, generated reference solutions, and grading assistance. Together, these features enable scalable practice generation and grading, which improves student learning while augmenting instructor capacity.