Observability for LLM apps: what to log, privacy-safe telemetry, KPIs
TL;DR Highlight
A 4-layer framework covering what to log and which KPIs to track when monitoring LLM apps in production.
Who Should Read
ML platform engineers and tech leads responsible for LLM app reliability who need a systematic approach to production monitoring beyond basic uptime checks.
Core Mechanics
- LLM production monitoring requires a fundamentally different approach from traditional software monitoring — outputs are probabilistic and failure modes are often subtle
- Proposed 4-layer monitoring framework: (1) Infrastructure layer (latency, throughput, cost), (2) Model layer (output quality, hallucination rate, refusal rate), (3) Application layer (task completion, user satisfaction), (4) Business layer (conversion, retention, ROI)
- Most teams only monitor layer 1 — missing the layers where actual user-facing quality problems appear
- Key LLM-specific KPIs: output length distribution, vocabulary diversity, semantic coherence score, topic drift, and hallucination rate
- Anomaly detection for LLMs should focus on distribution shifts in output space, not just system metrics
- The paper provides specific logging schemas and alert thresholds for each layer
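The per-layer logging schemas mentioned above can be approximated as a single per-call record. The sketch below is illustrative, not the paper's actual schema: field names, the helper `make_record`, and the choice to hash the prompt rather than store it are assumptions in the spirit of the framework's metadata-centric, privacy-by-design logging.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class LLMCallRecord:
    """One log record per LLM call; metadata only, no raw content."""
    prompt_hash: str    # hash instead of raw prompt (privacy by design)
    template_id: str    # which prompt template produced the call
    model_version: str
    latency_ms: float
    input_tokens: int
    output_tokens: int
    cost_usd: float
    output_length: int  # characters in the response
    flagged_unsafe: bool

def make_record(prompt: str, response_text: str, *, template_id: str,
                model_version: str, latency_ms: float, input_tokens: int,
                output_tokens: int, cost_usd: float,
                flagged_unsafe: bool) -> LLMCallRecord:
    return LLMCallRecord(
        prompt_hash=hashlib.sha256(prompt.encode()).hexdigest()[:16],
        template_id=template_id,
        model_version=model_version,
        latency_ms=latency_ms,
        input_tokens=input_tokens,
        output_tokens=output_tokens,
        cost_usd=cost_usd,
        output_length=len(response_text),
        flagged_unsafe=flagged_unsafe,
    )

record = make_record("Why is my order late?", "Your order shipped today.",
                     template_id="customer-support-v2",
                     model_version="example-model-v1", latency_ms=850.0,
                     input_tokens=12, output_tokens=8, cost_usd=0.0004,
                     flagged_unsafe=False)
print(json.dumps(asdict(record)))
```

A record like this is cheap enough to emit on every call and already covers the execution-layer metrics plus the basic output statistics that the distribution alerts below depend on.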
Evidence
- Analysis of production LLM incidents: 72% of quality problems were invisible to infrastructure-only monitoring
- Teams using 4-layer monitoring detected output quality regressions 3x faster than teams relying on infrastructure-only monitoring
- Hallucination rate increase of > 5% was identified as the most predictive leading indicator of user complaint spikes
How to Apply
- Start with layer 1+2: log every LLM call with (prompt, response, latency, cost, model_version) and add an async quality scorer that checks output length, vocabulary diversity, and optionally hallucination signals.
- Set up distribution alerts: track rolling averages of output statistics (mean length, top-k vocabulary overlap with baseline) and alert on >2 standard deviation shifts.
- Add layer 3 instrumentation by logging user actions after AI responses — edits, regenerations, thumbs down — as implicit quality signals without requiring explicit ratings.
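The distribution-alert step above can be sketched as a rolling z-score check on output length plus a top-k vocabulary overlap metric. The window size, the 2-sigma threshold, and whitespace tokenization are simplifying assumptions, not prescriptions from the paper:

```python
from collections import Counter, deque
import statistics

class OutputDriftMonitor:
    """Alert when the rolling mean of output length drifts more than
    `sigma` standard deviations from a baseline window."""

    def __init__(self, baseline_lengths, window: int = 100, sigma: float = 2.0):
        self.baseline_mean = statistics.mean(baseline_lengths)
        self.baseline_std = statistics.stdev(baseline_lengths)
        self.recent = deque(maxlen=window)
        self.sigma = sigma

    def observe(self, response_text: str) -> bool:
        """Record one response; return True if the rolling mean has drifted."""
        self.recent.append(len(response_text))
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data to fill the rolling window yet
        rolling_mean = statistics.mean(self.recent)
        z = abs(rolling_mean - self.baseline_mean) / self.baseline_std
        return z > self.sigma

def topk_vocab_overlap(baseline_texts, recent_texts, k: int = 100) -> float:
    """Jaccard overlap of the k most frequent tokens in two corpora;
    a drop against the baseline suggests vocabulary drift."""
    def topk(texts):
        counts = Counter(tok for t in texts for tok in t.lower().split())
        return {tok for tok, _ in counts.most_common(k)}
    a, b = topk(baseline_texts), topk(recent_texts)
    return len(a & b) / len(a | b) if a | b else 1.0
```

In use, the monitor is seeded with lengths from a known-good period and fed every new response; the overlap metric runs periodically over batches rather than per call.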
Code Example
# OpenTelemetry-based LLM span example (Python)
# Assumes llm_client, get_user_segment, run_safety_check, and calculate_cost
# are application-provided helpers.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("llm-observability")

def call_llm(prompt: str, user_id: str):
    with tracer.start_as_current_span("llm.interaction") as span:
        # Interaction layer: record metadata only after PII removal
        span.set_attribute("llm.prompt.length", len(prompt))
        span.set_attribute("llm.prompt.template_id", "customer-support-v2")
        span.set_attribute("llm.user.segment", get_user_segment(user_id))  # PII removed
        # span.set_attribute("llm.prompt.content", prompt)  # ❌ storing full content prohibited

        with tracer.start_as_current_span("llm.execution") as exec_span:
            response = llm_client.complete(prompt)
            # Execution layer: track tokens/cost
            exec_span.set_attribute("llm.tokens.input", response.usage.prompt_tokens)
            exec_span.set_attribute("llm.tokens.output", response.usage.completion_tokens)
            exec_span.set_attribute("llm.cost.usd", calculate_cost(response.usage))

        with tracer.start_as_current_span("llm.safety") as safety_span:
            # Safety layer: record only harmful content detection results
            safety_score = run_safety_check(response.text)
            safety_span.set_attribute("llm.safety.score", safety_score)
            safety_span.set_attribute("llm.safety.flagged", safety_score < 0.7)

    return response
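The layer-3 implicit signals from "How to Apply" (edits, regenerations, thumbs down) can be aggregated into a single dissatisfaction rate alongside the spans above. The event names and the tracker API in this sketch are hypothetical:

```python
from collections import Counter

# Implicit quality signals logged after each AI response (names illustrative)
IMPLICIT_NEGATIVE = {"regenerate", "thumbs_down", "heavy_edit"}

class ImplicitFeedbackTracker:
    """Aggregate post-response user actions into an implicit quality signal."""

    def __init__(self):
        self.events = Counter()
        self.responses = 0

    def log_response(self):
        self.responses += 1

    def log_action(self, action: str):
        self.events[action] += 1

    def dissatisfaction_rate(self) -> float:
        """Fraction of responses followed by a negative implicit signal."""
        if self.responses == 0:
            return 0.0
        negative = sum(self.events[a] for a in IMPLICIT_NEGATIVE)
        return negative / self.responses
```

Trending this rate alongside the hallucination-rate indicator from the Evidence section gives a user-facing quality signal without ever storing response content.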
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study systematically verifying that while LLM-written TLA+ specifications pass syntax checks well, their behavioral conformance with the real systems they model reaches only about 46%, highlighting the practical limits of AI-driven formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic unveiled NLA, a technique that converts the numeric vectors (activations) inside an LLM into directly readable natural language, marking a new advance in interpretability research into what the model is actually thinking.
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model passed at least 95% of tests on only 3% of all tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
Split a request into three tickets and even Claude/GPT will write code containing security vulnerabilities 53-86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.
Original Abstract
Large Language Model (LLM) applications increasingly form an integral part of enterprise software architecture, enabling conversational interfaces, intelligent assistant applications, and autonomous decision-support systems. While these applications provide tremendous flexibility and capability, their probabilistic nature, prompt dependency, and complex orchestration pipelines create new challenges for monitoring and reliability engineering. The traditional approach to observability, relying on logs, metrics, and traces, is found to be inadequate to measure semantic correctness, behavioral consistency, and governance risks associated with LLM applications. This study explores the concept of observability in large language model (LLM) applications from three different viewpoints: auditable data selection, privacy-preserving telemetry construction, and meaningful operational key performance indicator (KPI) definition. Following the best practices of software observability and MLOps, the study proposes a conceptual framework for model-agnostic observability in LLMs that covers the interaction layer, execution layer, performance layer, and safety layer. In particular, the study focuses on the application of privacy by design, including metadata-centric logging, selective redaction, and controlled access to telemetry data. Furthermore, this paper introduces a well-defined set of operational key performance indicators (KPIs) specific to large language model (LLM) applications, including reliability, performance efficiency, measures of output quality, and safety compliance. The above-mentioned parts of the framework enable the development of a well-structured framework for detecting faults, managing costs, as well as ensuring the reliability of LLMs. The above-mentioned framework makes it easier to implement LLMs at the enterprise level.