Can ChatGPT Replace the Teacher in Assessment? A Review of Research on the Use of Large Language Models in Grading and Providing Feedback
TL;DR Highlight
LLMs grade short answers and multiple-choice questions well, but cannot yet replace teachers on creative and open-ended tasks.
Who Should Read
EdTech developers and education researchers evaluating whether and how to deploy LLMs for automated grading and feedback in educational settings.
Core Mechanics
- LLMs achieve high agreement with human teachers on closed-ended tasks: short answer grading (87% agreement) and multiple choice validation (94% agreement)
- For open-ended creative tasks (essays, projects, presentations): LLM grades show only 61% agreement with teacher grades and miss important pedagogical dimensions
- LLMs struggle with grading criteria that require developmental context — understanding a student's progress and growth trajectory over time
- LLMs are good at rubric-following but poor at holistic judgment — they grade what's explicitly in the rubric but miss 'je ne sais quoi' quality signals teachers use
- Bias analysis showed LLMs gave slightly higher grades to grammatically fluent responses regardless of content quality
- Recommendation: use LLMs for formative feedback and initial screening, with teacher review for high-stakes summative assessment
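The hybrid recommendation above can be sketched as a simple routing rule. This is a hypothetical illustration (the task-type names and `Submission` structure are assumptions, not from the review): closed-ended work keeps the LLM grade, open-ended work goes to a teacher with the LLM output as a draft.

```python
# Hypothetical sketch of the recommended hybrid workflow:
# auto-finalize LLM grades only for closed-ended tasks, and route
# open-ended work to a teacher with the LLM output as a draft.
from dataclasses import dataclass

@dataclass
class Submission:
    task_type: str    # e.g. "short_answer", "multiple_choice", "essay"
    llm_score: int
    llm_feedback: str

CLOSED_ENDED = {"short_answer", "multiple_choice", "code_correctness"}

def route(sub: Submission) -> str:
    """Return 'auto' if the LLM grade can stand, else 'teacher_review'."""
    if sub.task_type in CLOSED_ENDED:
        return "auto"
    return "teacher_review"  # essays, projects, presentations

print(route(Submission("multiple_choice", 5, "Correct.")))  # auto
print(route(Submission("essay", 4, "Strong thesis, weak evidence.")))  # teacher_review
```

The point of the split is exactly the agreement gap above: auto-finalizing is defensible where LLM-teacher agreement matches inter-teacher reliability, and only there.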
Evidence
- Short answer grading: LLM-teacher agreement 87%, comparable to inter-rater reliability between two teachers (89%)
- Essay grading agreement: 61% — significantly below inter-teacher agreement of 82%
- Bias test: responses rewritten with better grammar but same content received 0.3 grade points higher on average from LLM judges
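The two statistics cited above are straightforward to compute. A minimal sketch with illustrative numbers (not the review's data): exact-agreement rate between LLM and teacher grades, and the mean grade shift after a grammar-only rewrite.

```python
# Exact-agreement rate: fraction of items where LLM and teacher
# assigned the same grade.
def agreement_rate(llm_grades, teacher_grades):
    matches = sum(a == b for a, b in zip(llm_grades, teacher_grades))
    return matches / len(llm_grades)

# Mean grade shift: average change after responses were rewritten
# with better grammar but identical content (the bias test).
def mean_shift(before, after):
    return sum(b - a for a, b in zip(before, after)) / len(before)

llm     = [3, 2, 5, 4, 1]
teacher = [3, 2, 4, 4, 1]
print(agreement_rate(llm, teacher))             # 0.8
print(mean_shift([3, 2, 4], [3.5, 2.2, 4.2]))   # ~0.3, the reported fluency bias
```

Exact agreement is the simplest choice; chance-corrected measures such as Cohen's kappa are commonly preferred when grade distributions are skewed.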
How to Apply
- Use LLMs confidently for: quiz grading, short answer checking, code correctness checking, and rubric-based scoring where all criteria are explicitly defined.
- For essay and project grading: use LLMs to generate initial feedback and draft grades as a starting point for teacher review — don't use LLM grades as final.
- Provide detailed rubrics with explicit criteria and examples in the prompt — LLM grading quality improves significantly with more explicit grading guidance.
Code Example
# LLM grading prompt example (with rubric)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
system_prompt = """
You are an expert grader. Score the student's answer strictly according to the rubric below.
Return JSON: {"score": <int>, "max_score": <int>, "feedback": <str>}
Rubric:
- Correct main concept (3 points)
- Supporting evidence or example (2 points)
- Clear explanation (1 point)
"""
user_prompt = """
Question: Explain what a REST API is and give an example use case.
Student Answer:
{student_answer}
"""
# For open-ended tasks, use LLM output as a draft and have a human review it
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": user_prompt.format(
                student_answer="REST is an HTTP-based API, e.g. GET /users"
            ),
        },
    ],
    response_format={"type": "json_object"},
)
result = response.choices[0].message.content
# result -> {"score": 4, "max_score": 6, "feedback": "The concept is correct, but the example is too simple."}
Terminology
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study systematically verifying that when LLMs write TLA+ specifications, they pass syntax checks well but achieve only around 46% behavioral conformance with the real system, showing the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic released NLA, a technique that converts the numeric vectors (activations) inside an LLM into natural language that can be read directly. A new advance in interpretability research into what AI models are actually thinking.
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model passed 95% or more of the tests on only 3% of tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split a request into three tickets and even Claude/GPT will write code containing security vulnerabilities 53-86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.
Original Abstract
This article presents a systematic review of empirical research on the use of large language models (LLMs) for automated grading of student work and providing feedback. The study aimed to determine the extent to which generative artificial intelligence models, such as ChatGPT, can replace teachers in the assessment process. The review was conducted in accordance with PRISMA guidelines and predefined inclusion criteria; ultimately, 42 empirical studies were included in the analysis. The results of the review indicate that the effectiveness of LLMs in grading is varied. These models perform well on closed-ended tasks and short-answer questions, often achieving accuracy comparable to human evaluators. However, they struggle with assessing complex, open-ended, or subjective assignments that require in-depth analysis or creativity. The quality of the prompts provided to the model and the use of detailed scoring rubrics significantly influence the accuracy and consistency of the grades generated by LLMs. The findings suggest that LLMs can support teachers by accelerating the grading process and delivering rapid feedback at scale, but they cannot fully replace human judgment. The highest effectiveness is achieved in hybrid assessment systems that combine AI-driven automatic grading with teacher oversight and verification.