What Inputs Drive Effective Large Language Model-Based Unit Test Generation?
TL;DR Highlight
An experiment studying what inputs improve accuracy, bug detection, and coverage when using LLMs for automated unit test generation.
Who Should Read
Developers and QA engineers looking to automate test generation using LLMs and wanting to understand what prompting strategies work best.
Core Mechanics
- Tested multiple input configurations: code only, code + docstring, code + existing tests, code + type hints
- Adding docstrings to the prompt is the single biggest quality boost for generated tests
- Including existing tests in the prompt helps the LLM follow project conventions (naming, assertion style)
- Type hints improve generated test coverage by helping the LLM understand expected input/output types
- Combining all inputs (code + docstring + types + examples) yields best overall results but also higher token cost
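The input configurations above can be sketched as a simple prompt builder. This is an illustrative sketch, not the study's actual harness; names like `build_prompt` and the parameter set are assumptions:

```python
def build_prompt(code: str, docstring: str = "", type_hints: str = "",
                 existing_test: str = "") -> str:
    """Assemble an LLM test-generation prompt from optional extra inputs (sketch)."""
    parts = ["Generate pytest unit tests for the following function."]
    if docstring:
        parts.append("Docstring:\n" + docstring)
    if type_hints:
        parts.append("Type hints:\n" + type_hints)
    if existing_test:
        # Existing tests steer the model toward project conventions.
        parts.append("Match the style of this existing test:\n" + existing_test)
    parts.append("Code:\n" + code)
    return "\n\n".join(parts)

code = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"

# Code-only baseline vs. the richest configuration.
baseline = build_prompt(code)
full = build_prompt(
    code,
    docstring="Clamp x into the inclusive range [lo, hi].",
    type_hints="x: int, lo: int, hi: int -> int",
    existing_test="def test_clamp_low():\n    assert clamp(-5, 0, 10) == 0",
)
print(full)
```

The richer the prompt, the higher the token cost, so in practice you would pick the configuration whose quality gain justifies the extra tokens.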
Evidence
- Evaluated on a dataset of Python functions with ground-truth test suites
- Coverage, mutation score, and bug detection rate measured for each input configuration
- Docstring inclusion improved mutation score by ~15% over code-only baseline
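Mutation score can be illustrated with a toy example: apply small edits (mutants) to a function and count how many the test suite catches. This is a hand-rolled sketch; real evaluations use tools such as mutmut or MutPy:

```python
def run_tests(func) -> bool:
    """A tiny 'test suite' for an absolute-value function; True if all pass."""
    try:
        assert func(3) == 3
        assert func(-3) == 3
        assert func(0) == 0
        return True
    except AssertionError:
        return False

original = lambda x: x if x >= 0 else -x

# Hand-written mutants: each applies one small change to the original logic.
mutants = [
    lambda x: x if x > 0 else -x,    # >= mutated to >  (survives: -0 == 0, no test tells them apart)
    lambda x: x if x >= 0 else x,    # negation dropped (killed: func(-3) != 3)
    lambda x: -x if x >= 0 else -x,  # branch swapped   (killed: func(3) != 3)
]

killed = sum(1 for m in mutants if not run_tests(m))
score = killed / len(mutants)
print(f"Mutation score: {score:.0%}")  # → Mutation score: 67%
```

A mutant that survives signals a behavior the test suite never exercises, which is exactly the kind of gap the study's mutation-score metric measures.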
How to Apply
- Always include the function's docstring when prompting an LLM for unit tests — it's the highest-ROI addition.
- If you have existing tests in the codebase, include 1–2 representative examples in the prompt to enforce project test style.
- Add type annotations to your functions before running LLM test generation to improve edge case coverage.
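If your functions already carry docstrings and annotations, both can be pulled programmatically before prompting. A minimal sketch using the standard `inspect` and `typing` modules (`describe_for_prompt` and the sample function are illustrative, not from the study):

```python
import inspect
import typing

def describe_for_prompt(func) -> str:
    """Collect docstring and type hints to enrich an LLM test-generation prompt (sketch)."""
    doc = inspect.getdoc(func) or "(no docstring)"
    hints = typing.get_type_hints(func)
    return f"Docstring: {doc}\nType hints: {hints}"

def apply_discount(price: float, rate: float) -> float:
    """Return price reduced by rate (0-1). Raises ValueError if rate is out of range."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

print(describe_for_prompt(apply_discount))
```

Note how the docstring surfaces the `ValueError` contract: that is precisely the kind of edge case an LLM is unlikely to test from the signature alone.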
Code Example
# LLM test-generation input-format comparison experiment (Python + OpenAI SDK)
from openai import OpenAI

client = OpenAI()

def generate_tests(code: str, signature_only: bool = False) -> str:
    if signature_only:
        # Black-box strategy: pass only the function signature.
        # (Splitting on ":" would truncate type-hinted parameters, so take the "def" line.)
        signature = next(line for line in code.splitlines()
                         if line.lstrip().startswith("def "))
        content = f"Generate unit tests for this function signature:\n{signature}"
    else:
        # White-box strategy: pass the full implementation.
        content = f"Generate unit tests for the following function:\n```python\n{code}\n```"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

# Apply both strategies to the same function and compare coverage.
my_func = '''
def calculate_discount(price: float, user_type: str) -> float:
    if user_type == "vip":
        return price * 0.7
    elif user_type == "member":
        return price * 0.9
    return price
'''

test_black_box = generate_tests(my_func, signature_only=True)
test_white_box = generate_tests(my_func, signature_only=False)
print("=== Signature only ===", test_black_box)
print("=== With implementation ===", test_white_box)
Terminology
Mutation Testing: A technique for evaluating test quality by introducing small code changes (mutations) and checking whether the tests catch them. Mutation score = % of mutations detected.
Mutation Score: The percentage of injected code mutations caught by the test suite; a proxy for test quality beyond line coverage.
Type Hints: Python annotations specifying expected types for function parameters and return values, e.g., def foo(x: int) -> str.
Test Coverage: The percentage of code lines (or branches) executed by the test suite.
Original Abstract
Large language models (LLMs) have revolutionized software engineering by automating critical tasks. We study five state-of-the-art LLMs, investigating their capabilities in generating unit test cases while focusing on how different inputs impact test correctness, bug detection capability, and code coverage.