Semantic Causality Evaluation of Correlation Analysis Utilizing Large Language Models
TL;DR Highlight
A method that uses LLMs as expert proxies to automatically separate real causal relationships from merely coincidental correlations in data.
Who Should Read
Data scientists and researchers working on causal inference who want to automate the expert knowledge elicitation step.
Core Mechanics
- Uses LLMs to simulate domain expert judgment in distinguishing causal from merely correlational relationships in datasets
- Proposed pipeline: generate candidate causal pairs → LLM scores plausibility → apply causal discovery algorithms constrained by LLM judgments
- LLM-guided causal discovery outperforms purely statistical methods on benchmark causal datasets
- Works without access to interventional data — relies on LLM's embedded world knowledge
- Reduces human expert annotation burden significantly
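The three-step pipeline above can be sketched with a stub in place of the real LLM call. This is a minimal illustration, not the paper's implementation: the variable names, the lookup table standing in for the model's world knowledge, and the 0.5 threshold are all assumptions.

```python
from itertools import permutations

def mock_llm_plausibility(cause: str, effect: str) -> float:
    """Stand-in for the LLM judgment step: a tiny hand-written lookup
    plays the role of the model's embedded world knowledge."""
    knowledge = {
        ("smoking", "lung_cancer"): 0.9,
        ("age", "blood_pressure"): 0.8,
    }
    return knowledge.get((cause, effect), 0.1)

def plausible_causal_pairs(variables, threshold=0.5):
    """Steps 1-2: enumerate ordered candidate pairs, score each with the
    (mock) LLM, and keep only pairs above the plausibility threshold.
    The survivors then constrain a causal discovery algorithm (step 3)."""
    return [
        (a, b, score)
        for a, b in permutations(variables, 2)
        if (score := mock_llm_plausibility(a, b)) >= threshold
    ]

pairs = plausible_causal_pairs(["smoking", "lung_cancer", "age", "blood_pressure"])
```

In a real run, `mock_llm_plausibility` would be replaced by a model call, and the surviving pairs would be handed to the discovery algorithm as constraints.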
Evidence
- Evaluated on standard causal discovery benchmarks (e.g., Sachs dataset, synthetic DAGs)
- LLM-constrained causal graphs show higher precision and recall vs. unconstrained algorithms
- GPT-4 as expert proxy outperforms smaller models on causal judgment accuracy
How to Apply
- Before running causal discovery algorithms, use an LLM to score each candidate variable pair: 'Is it plausible that X causes Y in this domain?' and filter low-confidence pairs.
- Combine LLM-generated causal constraints with algorithms like PC or GES to improve discovery accuracy.
- Validate LLM causal judgments with a domain expert on a sample before trusting them at scale.
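One way to wire LLM judgments into a constraint-based algorithm such as PC is to convert the scores into forbidden and required edge sets, the two inputs a background-knowledge mechanism typically accepts (libraries such as causal-learn expose one). The thresholds and the score-dictionary format below are illustrative assumptions, not from the paper:

```python
def constraints_from_scores(scores, forbid_below=0.2, require_above=0.9):
    """Partition LLM-scored directed pairs into edge constraints.

    scores: {(cause, effect): plausibility in [0, 1]}
    Returns (forbidden, required) edge sets, which can then be translated
    into the background-knowledge object of a discovery library.
    """
    forbidden = {pair for pair, s in scores.items() if s < forbid_below}
    required = {pair for pair, s in scores.items() if s > require_above}
    return forbidden, required

scores = {
    ("rain", "wet_grass"): 0.95,            # clearly causal: require the edge
    ("wet_grass", "rain"): 0.05,            # reversed direction: forbid it
    ("ice_cream_sales", "drownings"): 0.1,  # classic confounded correlation
}
forbidden, required = constraints_from_scores(scores)
```

Pairs with mid-range scores land in neither set, leaving their orientation to the statistical algorithm, which matches the spirit of using the LLM only where its judgment is confident.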
Code Example
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from openai import OpenAI

client = OpenAI()

def causal_score(var_a: str, var_b: str, context: str = "") -> float:
    """Ask the LLM to rate the likelihood of a causal relationship between two variables on a 0-1 scale."""
    prompt = f"""Evaluate the likelihood that a real causal relationship (causality) exists between the two variables.
Variable A: {var_a}
Variable B: {var_b}
{f'Context: {context}' if context else ''}
Output only a single number between 0.0 (completely coincidental/unrelated) and 1.0 (clear causal relationship)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    try:
        score = float(response.choices[0].message.content.strip())
        return min(max(score, 0.0), 1.0)  # clamp to [0, 1]
    except ValueError:
        return 0.0  # unparseable reply: treat as no causal evidence

def causal_heatmap(df: pd.DataFrame, context: str = "") -> pd.DataFrame:
    """Generate a Causal heatmap by weighting the correlation matrix with LLM causal scores."""
    corr = df.corr()
    cols = corr.columns.tolist()
    causal_matrix = pd.DataFrame(np.zeros_like(corr.values), index=cols, columns=cols)
    for i, a in enumerate(cols):
        for j, b in enumerate(cols):
            if i < j:
                score = causal_score(a, b, context)
                causal_matrix.loc[a, b] = score
                causal_matrix.loc[b, a] = score
            elif i == j:
                causal_matrix.loc[a, b] = 1.0
    # element-wise product: correlation strength × causal plausibility = Causal heatmap
    weighted = corr.abs() * causal_matrix
    plt.figure(figsize=(10, 8))
    sns.heatmap(weighted, annot=True, fmt=".2f", cmap="YlOrRd", vmin=0, vmax=1)
    plt.title("Causal Heatmap (correlation × causal score)")
    plt.tight_layout()
    plt.show()
    return weighted

# Usage example
# causal_heatmap(df, context="Medical patient data; variables include age, blood pressure, cholesterol, etc.")
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study showing that while LLM-written TLA+ specifications readily pass syntax checks, their behavioral conformance with the actual system reaches only about 46%, systematically demonstrating the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic released NLA, a technique that converts the numeric vectors (activations) inside an LLM into readable natural language, a new advance in interpretability research into what the model is actually "thinking."
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only the documentation; even the best model passes 95%+ of tests on only 3% of the tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split a request into three tickets and even Claude/GPT will write security-vulnerable code 53-86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.
Original Abstract
It is known that correlation does not imply causality. Some relationships identified in the analysis of data are coincidental or unknown, and some are produced by real-world causality of the situation, which is problematic, since there is a need to differentiate between these two scenarios. Until recently, the proper (semantic) causality of the relationship could have been determined only by human experts from the area of expertise of the studied data. This has changed with the advance of large language models, which are often utilized as surrogates for such human experts, making the process automated and readily available to all data analysts. This motivates the main objective of this work, which is to introduce the design and implementation of a large language model-based semantic causality evaluator based on correlation analysis, together with its visual analysis model called Causal heatmap. After the implementation itself, the model is evaluated from the point of view of the quality of the visual model, from the point of view of the quality of causal evaluation based on large language models, and from the point of view of comparative analysis, while the results reached in the study highlight the usability of large language models in the task and the potential of the proposed approach in the analysis of unknown datasets. The results of the experimental evaluation demonstrate the usefulness of the Causal heatmap method, supported by the evident highlighting of interesting relationships, while suppressing irrelevant ones.