The rise and potential of large language model based agents: a survey
TL;DR Highlight
A comprehensive survey condensing LLM-based AI agent architecture, capabilities, applications, and limitations into one paper.
Who Should Read
Researchers and engineers building or evaluating AI agent systems who need a systematic overview of the current agent landscape.
Core Mechanics
- LLM-based agents consist of 4 core components: Planning (task decomposition), Memory (short/long-term), Action (tool use, code execution), and Perception (multimodal input)
- Current agents excel at: code generation and debugging, information retrieval and synthesis, structured task execution with clear success criteria
- Current agents struggle with: long-horizon planning, causal reasoning, novel tool composition, and graceful failure handling
- Multi-agent systems (multiple specialized agents collaborating) consistently outperform single-agent systems on complex tasks — but coordination overhead is significant
- Trust and safety are the critical open problems: agents that can take real-world actions (web browsing, code execution, API calls) require robust sandboxing and permission management
- The paper provides a unified taxonomy of agent architectures (ReAct, Reflexion, AutoGPT-style, etc.) and their tradeoffs
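The four-component breakdown above (Planning, Memory, Action, Perception) can be sketched as a minimal agent skeleton. This is an illustrative sketch, not code from the survey: the `Agent` class, its method names, and the toy fixed plan are all assumptions made here for clarity.

```python
from dataclasses import dataclass, field

# Minimal sketch of the four-component agent framework described above.
# All names (Agent, plan, act, ...) are illustrative, not from the survey.

@dataclass
class Agent:
    tools: dict                                       # Action: name -> callable
    short_term: list = field(default_factory=list)    # Memory: recent observations
    long_term: dict = field(default_factory=dict)     # Memory: persistent facts

    def perceive(self, observation: str) -> None:
        """Perception: record new (possibly multimodal) input."""
        self.short_term.append(observation)

    def plan(self, goal: str) -> list[str]:
        """Planning: decompose the goal into tool-sized steps.
        A real agent would ask the LLM; here we fake a fixed plan."""
        return [f"search: {goal}", f"summarize: {goal}"]

    def act(self, step: str) -> str:
        """Action: dispatch a plan step to the matching tool."""
        name, _, arg = step.partition(": ")
        result = self.tools[name](arg)
        self.perceive(result)  # feed the observation back into memory
        return result

# Toy tools standing in for real search / summarization backends
tools = {
    "search": lambda q: f"3 results for '{q}'",
    "summarize": lambda q: f"summary of '{q}'",
}
agent = Agent(tools=tools)
outputs = [agent.act(step) for step in agent.plan("LLM agents")]
```

The point of the sketch is the separation of concerns: each component can be swapped independently (e.g. a vector store for long-term memory) without touching the others, which is exactly the design advice in "How to Apply" below.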
Evidence
- Comprehensive survey of 200+ agent papers with capability categorization and benchmark comparison
- Multi-agent vs. single-agent: on complex coding tasks (SWE-bench), multi-agent achieves 45% vs. 28% single-agent resolution rate
- Identified 12 distinct agent failure modes with frequency analysis from production agent deployments
How to Apply
- Use this paper's taxonomy to select your agent architecture: ReAct for tool-heavy tasks, Reflexion for tasks with clear success criteria and iteration potential, tree-of-thought for complex planning.
- For production agents: implement the 4-component framework explicitly — design your memory system, action space, and planning module separately before integrating.
- Prioritize sandboxing and permission management before capability expansion — agent safety failures are harder to recover from than capability gaps.
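The sandboxing-first advice can be made concrete with an allowlist gate placed in front of the agent's action space, so every tool call is checked and logged before it executes. A minimal sketch, assuming nothing from the paper: `GuardedTools`, the allowlist format, and the toy tools are all hypothetical.

```python
# Minimal permission gate for an agent's action space: every tool call
# must pass an explicit allowlist check before it executes, and every
# attempt (allowed or blocked) is written to an audit log.
# GuardedTools and the policy format are illustrative assumptions.

class GuardedTools:
    def __init__(self, tools: dict, allowed: set):
        self.tools = tools        # name -> callable
        self.allowed = allowed    # names the agent may invoke
        self.audit_log = []       # (tool, arg, permitted) per attempt

    def call(self, name: str, arg: str) -> str:
        permitted = name in self.allowed
        self.audit_log.append((name, arg, permitted))
        if not permitted:
            raise PermissionError(f"tool '{name}' is not in the allowlist")
        return self.tools[name](arg)

# Toy tools: one safe, one dangerous and deliberately not allowlisted
tools = {
    "read_file": lambda p: f"<contents of {p}>",
    "delete_file": lambda p: f"deleted {p}",
}
guard = GuardedTools(tools, allowed={"read_file"})

guard.call("read_file", "notes.txt")        # permitted
try:
    guard.call("delete_file", "notes.txt")  # blocked before execution
except PermissionError:
    pass
```

Because the gate sits between the agent and the tools rather than inside any one tool, capability expansion (adding tools) and safety policy (editing the allowlist) stay decoupled, which is the property the bullet above argues for.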
Code Example
# ReAct-style agent prompt (a core pattern covered in the survey's taxonomy)
SYSTEM_PROMPT = """
You are an agent. For each step, follow this format:
Thought: [Analyze current situation and plan next action]
Action: [Tool name to use]
Action Input: [Input value to pass to the tool]
Observation: [Tool execution result — filled in by the system]
Repeat the above cycle until you know the final answer:
Final Answer: [Final answer]
"""
# Simple implementation with LangChain's classic agent API
# (in newer LangChain releases, ChatOpenAI lives in langchain_openai)
from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", temperature=0)

# search_fn, calc_fn, and exec_fn are user-supplied callables (str -> str)
tools = [
    Tool(name="Search", func=search_fn, description="Search the internet for up-to-date information"),
    Tool(name="Calculator", func=calc_fn, description="Evaluate mathematical expressions"),
    Tool(name="CodeExecutor", func=exec_fn, description="Execute Python code and return its output"),
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

result = agent.run(
    "Research the number of AI agent-related papers in 2024 "
    "and calculate the growth rate compared to the previous year"
)
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings. It is an entertaining case that pushes the "Markdown is the code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
A post collecting principles for designing CLI tools that AI agents can use well; as agents increasingly rely on CLIs as tools, these design practices are becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that split roles and collaborate, letting you stand up a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox in which an AI agent can touch real production data and still be rolled back, unifying GitHub, S3, and Google Drive into a single versioned filesystem.