Neurosymbolic Repo-level Code Localization
TL;DR Highlight
LogicLoc cuts through keyword-shortcut biases in code search by having an LLM generate Datalog queries executed by a deterministic inference engine.
Who Should Read
Engineers developing AI coding agents or automated bug-fixing pipelines. Specifically, developers evaluating code search performance with SWE-bench-like benchmarks or building features to locate code by natural language queries across entire repositories.
Core Mechanics
- The team discovered a Keyword Shortcut bias: over 50% of issues in SWE-bench Lite contain identifiers (filenames, class names, function names) directly, allowing existing tools to achieve high scores through keyword matching alone, without true understanding.
- To test this bias, they created KA-LogicQuery, a benchmark that requires locating code based purely on structural conditions, and found that existing state-of-the-art tools (SweRank, Agentless, LocAgent, CoSIL) saw their Hit Rate plummet to below 40% at the function level.
- LogicLoc works by statically analyzing source code to extract function definitions, inheritance, and call graphs as Datalog facts, then having an LLM translate natural language queries into Datalog programs executed by the Soufflé engine.
- LogicLoc incorporates parser-gated validation to automatically correct syntax errors in LLM-generated Datalog code and a mutation-based feedback loop to diagnose why intermediate results are empty.
- The mutation diagnosis relaxes string exact matches to partial matches (Contains-Literal) or removes conditions one by one (Drop-Single-Atom) to identify constraints causing empty results, providing feedback to the LLM.
- They also created KA-LogicQuery-Neg, a 'no correct answer' benchmark. Existing tools still aggressively recommend top-N results even when no matching code exists, while LogicLoc correctly returns 'not applicable' in over 70% of cases.
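The mutation-based diagnosis described above can be sketched as a loop over the atoms of a rule body: drop one atom at a time (Drop-Single-Atom), re-run the query, and report which constraints flip the result from empty to non-empty. A minimal illustration, assuming rule bodies are lists of atom strings and using an in-memory fact table as a stand-in for a real Soufflé execution:

```python
def drop_single_atom(body_atoms):
    """Yield (dropped_atom, remaining_atoms) for each single-atom removal."""
    for i, atom in enumerate(body_atoms):
        yield atom, body_atoms[:i] + body_atoms[i + 1:]

def diagnose_empty_result(body_atoms, run_query):
    """Return atoms whose removal makes the query non-empty.

    run_query(atoms) -> list of result tuples; here a stand-in for
    executing a mutated Datalog rule with Soufflé.
    """
    if run_query(body_atoms):
        return []  # result not empty, nothing to diagnose
    culprits = []
    for atom, remaining in drop_single_atom(body_atoms):
        if run_query(remaining):
            culprits.append(atom)  # this constraint caused the emptiness
    return culprits

# Toy example: (file, function_name, param_count) facts instead of Soufflé.
FACTS = [("utils.py", "parse", 3), ("app.py", "main", 4)]

def run_query(atoms):
    # Interpret each atom as a predicate over the fact tuples.
    preds = {
        "pc > 15": lambda f: f[2] > 15,
        'fn != "main"': lambda f: f[1] != "main",
    }
    rows = FACTS
    for atom in atoms:
        rows = [r for r in rows if preds[atom](r)]
    return rows

print(diagnose_empty_result(["pc > 15", 'fn != "main"'], run_query))
# → ['pc > 15']  (no function has > 15 params, so that atom is the culprit)
```

Dropping `fn != "main"` still yields nothing, so only the parameter-count constraint is reported back to the LLM as the blocking condition.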
Evidence
- "KA-LogicQuery function-level PLR (Perfect Location Rate): LogicLoc (Qwen3-Max) 38.27% vs all baselines 0%. File-level Precision also shows LogicLoc at 73.35% vs the top baseline (LocAgent Claude-3.5) at 11.02%."
How to Apply
- To build a pipeline that answers structural queries like 'find functions that satisfy this condition' across an entire repository, extract function, class, and call relationships as Datalog facts via AST parsing, have an LLM generate Datalog queries, and execute them with Soufflé.
- When building pipelines where an LLM generates declarative queries such as Datalog or CodeQL, add parser-based validation, deterministic correction, and an error-feedback loop instead of executing LLM output directly; this can significantly improve execution success rates (+10-20%).
- In scenarios where a code agent must definitively answer 'no code matches this pattern' (e.g., vulnerability scans expecting zero results), a deterministic query engine-based approach can reduce false positives compared to top-N recommendation methods.
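The parser-gated loop in the second bullet can be sketched as: validate the generated program before execution, and on failure feed the errors back for regeneration. The lightweight structural checks and the fake LLM below are illustrative stand-ins, not the paper's actual validator (which gates on a real Datalog parser):

```python
import re

def check_datalog(program: str) -> list[str]:
    """Toy structural checks standing in for a real Datalog parser."""
    errors = []
    if program.count("(") != program.count(")"):
        errors.append("unbalanced parentheses")
    declared = set(re.findall(r"^\.decl\s+(\w+)", program, re.MULTILINE))
    used = set(re.findall(r"^(\w+)\s*\(", program, re.MULTILINE))
    for rel in sorted(used - declared):
        errors.append(f"relation '{rel}' used but not declared")
    for line in program.splitlines():
        line = line.strip()
        if line and not line.startswith(".") and not line.endswith((".", ",", ":-")):
            errors.append(f"statement does not end with '.': {line!r}")
    return errors

def generate_validated(query: str, llm_generate, max_attempts: int = 3) -> str:
    """Regenerate until the program passes the parser gate, else give up."""
    feedback = ""
    for _ in range(max_attempts):
        program = llm_generate(query, feedback)
        errors = check_datalog(program)
        if not errors:
            return program
        feedback = "Fix these errors: " + "; ".join(errors)
    raise RuntimeError("could not produce a valid Datalog program")

# Fake LLM: the first attempt forgets the .decl directives, the second fixes them.
attempts = iter([
    'Big(f) :- function_definition(f, _, _, _, pc, _, _), pc > 15.',
    (".decl function_definition(fp:symbol, fn:symbol, sl:number, el:number, "
     "pc:number, ia:symbol, cc:symbol)\n"
     ".input function_definition\n"
     ".decl Big(fp:symbol)\n"
     "Big(f) :- function_definition(f, _, _, _, pc, _, _), pc > 15.\n"
     ".output Big"),
])
program = generate_validated("functions with > 15 params", lambda q, fb: next(attempts))
```

The first attempt fails the gate (`Big` is used but never declared), the error string becomes the feedback, and the second attempt passes and is the one handed to Soufflé.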
Code Example
# LogicLoc style: Python AST → Datalog facts → LLM query generation → Soufflé execution
import ast, subprocess, textwrap

# 1. Program fact extraction (function definitions)
def extract_function_facts(filepath: str) -> list[str]:
    facts = []
    with open(filepath) as f:
        tree = ast.parse(f.read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            param_count = len(node.args.args)
            is_async = "true" if isinstance(node, ast.AsyncFunctionDef) else "false"
            # Find the enclosing class, if any
            containing_class = "module_level"
            for parent in ast.walk(tree):
                if isinstance(parent, ast.ClassDef) and node in ast.walk(parent):
                    containing_class = parent.name
            facts.append(
                f'function_definition("{filepath}", "{node.name}", '
                f'{node.lineno}, {node.end_lineno}, {param_count}, '
                f'"{is_async}", "{containing_class}").'
            )
    return facts
# 2. Request LLM to generate Datalog query (prompt example)
system_prompt = """
You are a Datalog query generator for code analysis.
Available EDB predicates:
  function_definition(file_path:symbol, function_name:symbol,
                      start_line:number, end_line:number, param_count:number,
                      is_async:symbol, containing_class:symbol)
  class_definition(file_path:symbol, class_name:symbol,
                   start_line:number, end_line:number, base_class:symbol)
  function_call(caller_file:symbol, caller_name:symbol,
                callee_name:symbol, call_line:number)
Generate a Soufflé Datalog program to answer the query.
Always declare output with .output directive.
"""
user_query = "Find all functions with more than 15 parameters that are not __init__ methods"
# 3. Example generated Datalog program
datalog_program = textwrap.dedent("""
    .decl function_definition(file_path:symbol, function_name:symbol,
                              start_line:number, end_line:number, param_count:number,
                              is_async:symbol, containing_class:symbol)
    .input function_definition

    .decl LargeFunctions(file_path:symbol, function_name:symbol,
                         start_line:number, param_count:number)
    LargeFunctions(fp, fn, sl, pc) :-
        function_definition(fp, fn, sl, _, pc, _, _),
        pc > 15,
        fn != "__init__".
    .output LargeFunctions
""")
# 4. Execute with Soufflé
# subprocess.run(["souffle", "-F", "facts/", "-D", "output/", "query.dl"])
print("Datalog program generated. Execute with Soufflé.")
print(datalog_program)
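One wiring detail worth noting: Soufflé's `.input` directive reads tab-separated files named `<relation>.facts` from the fact directory passed via `-F`, not Datalog-syntax fact statements like the strings printed above. A small sketch of serializing extracted tuples into that format; the tuple layout mirrors the `function_definition` schema assumed in the example:

```python
import csv, os

def write_facts(fact_dir: str, relation: str, rows: list[tuple]) -> str:
    """Write rows as a tab-separated <relation>.facts file for Soufflé's .input."""
    os.makedirs(fact_dir, exist_ok=True)
    path = os.path.join(fact_dir, f"{relation}.facts")
    with open(path, "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)
    return path

# Illustrative rows matching the function_definition schema above.
rows = [
    ("utils.py", "parse_config", 10, 42, 17, "false", "module_level"),
    ("app.py", "__init__", 5, 30, 18, "false", "App"),
]
path = write_facts("facts", "function_definition", rows)
# Now `souffle -F facts/ -D output/ query.dl` can load the input relation.
```

Symbol fields in `.facts` files are written verbatim (no surrounding quotes), so the TSV rows above load directly into the declared relation.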
Original Abstract
Code localization is a cornerstone of autonomous software engineering. Recent advancements have achieved impressive performance on real-world issue benchmarks. However, we identify a critical yet overlooked bias: these benchmarks are saturated with keyword references (e.g. file paths, function names), encouraging models to rely on superficial lexical matching rather than genuine structural reasoning. We term this phenomenon the Keyword Shortcut. To address this, we formalize the challenge of Keyword-Agnostic Logical Code Localization (KA-LCL) and introduce KA-LogicQuery, a diagnostic benchmark requiring structural reasoning without any naming hints. Our evaluation reveals a catastrophic performance drop of state-of-the-art approaches on KA-LogicQuery, exposing their lack of deterministic reasoning capabilities. We propose LogicLoc, a novel agentic framework that combines large language models with the rigorous logical reasoning of Datalog for precise localization. LogicLoc extracts program facts from the codebase and leverages an LLM to synthesize Datalog programs, with parser-gated validation and mutation-based intermediate-rule diagnostic feedback to ensure correctness and efficiency. The validated programs are executed by a high-performance inference engine, enabling accurate and verifiable localization in a fully automated, closed-loop workflow. Experimental results demonstrate that LogicLoc significantly outperforms SOTA methods on KA-LogicQuery while maintaining competitive performance on popular issue-driven benchmarks. Notably, LogicLoc attains superior performance with significantly lower token consumption and faster execution by offloading structural traversal to a deterministic engine, reducing the overhead of iterative LLM inference.