I replaced Animal Crossing's dialogue with a live LLM by hacking GameCube memory
TL;DR Highlight
A project that wires LLM-powered real-time AI dialogue into Animal Crossing NPC conversations on the GameCube via shared memory, without modifying a single line of game code. It demonstrates the potential at the intersection of retro game modding and LLM-driven NPCs.
Who Should Read
Developers interested in game modding or emulators, or game developers curious about applying LLMs to NPC dialogue.
Core Mechanics
- Replaced Animal Crossing GameCube NPC dialogue with LLM responses without modifying any game code. The key: IPC (inter-process communication) by reading/writing directly to GameCube RAM running in the Dolphin emulator.
- The timing was perfect — the Animal Crossing decompilation community had just finished restoring the full source to readable C code, enabling direct analysis of dialogue system code like m_message.c instead of PowerPC assembly.
- Initial approaches (implementing a network stack on GameCube, communicating via emulator filesystem) both failed. Eventually settled on using a specific GameCube RAM address (0x81298360) as a 'memory mailbox.'
- Finding the memory address was the hardest part. Built a custom Python memory scanner: talk to NPC → freeze emulator → scan full 24MB RAM → cross-validate, repeating to pinpoint the dialogue buffer (0x81298360) and speaker name (0x8129A3EA).
- A Python script polls memory every 0.1s, writes '...' loading text when new dialogue is detected, waits for LLM response, then replaces with actual dialogue. Hides latency by buying time until the user presses A.
- With personality and game context in the prompt, the NPCs began to show a will of their own; reportedly, their first impulse was to plot the overthrow of Tom Nook (the in-game landlord).
- The Python code is well written, with type hints such as Optional[Dict[str, int]], and drew praise for its code quality.
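The polling mechanism described above can be sketched roughly as follows. Only the 0.1 s poll interval, the dialogue-buffer address, and the loading-text trick come from the write-up; `watch_dialogue` and the injectable read/write/LLM callbacks are hypothetical stand-ins, not the project's actual code.

```python
import time
from typing import Callable, Optional

# Address and loading string reported in the write-up; the rest is illustrative.
DIALOGUE_ADDR = 0x81298360
LOADING_TEXT = ".<Pause [0A]>.<Pause [0A]>.<Pause [0A]><Press A><Clear Text>"

def watch_dialogue(
    read_mem: Callable[[int], bytes],
    write_mem: Callable[[int, bytes], None],
    get_llm_reply: Callable[[str], str],
    poll_interval: float = 0.1,
    max_polls: Optional[int] = None,
) -> None:
    """Poll the dialogue buffer; on new text, show the loading placeholder
    (buying time until the player presses A), then swap in the LLM reply."""
    last_seen = b""
    polls = 0
    while max_polls is None or polls < max_polls:
        current = read_mem(DIALOGUE_ADDR)
        if current and current != last_seen:
            # New dialogue detected: stall the player while the LLM thinks.
            write_mem(DIALOGUE_ADDR, LOADING_TEXT.encode("ascii"))
            reply = get_llm_reply(current.decode("ascii", errors="replace"))
            write_mem(DIALOGUE_ADDR, reply.encode("ascii", errors="replace"))
            last_seen = reply.encode("ascii", errors="replace")
        polls += 1
        time.sleep(poll_interval)
```

Injecting the memory accessors keeps the loop testable without a running emulator; in the real setup they would wrap reads and writes against the Dolphin process.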
Evidence
- A commenter analyzed the full code (40K tokens) with Claude Opus. The key: watch_dialogue() polls every 0.1s while showing loading text ('... Press A') to buy time for the LLM response.
- Many expressed excitement about LLM-powered NPC dialogue as the key technology for solving gaming's biggest immersion barrier: repetitive dialogue. Questions about extending to Switch emulators also arose.
- One comment noted this could work without real-time LLM calls — since NPCs don't actually react to in-game events, a pre-generated text lookup table could achieve the same effect.
- Previous research on swapping localization strings in The Sims and Grim Fandango for language learning was mentioned — LLM-generated contextual dialogue could be a killer app for language learning too.
How to Apply
- When modding emulator-based retro games, the IPC pattern of reading/writing directly to emulator process memory lets you integrate external services without touching original binaries.
- For real-time interactions where LLM response latency is a problem, the UX pattern of 'loading animation + waiting for user input' to buy time significantly reduces perceived delay, as with this project's 'Press A to continue' trick.
- When applying LLMs to game NPCs, passing each character's personality, relationships, and game context as the system prompt enables character-rich dialogue rather than generic chatbot responses. Prompt design is key.
- The Python memory scanner approach (scan full RAM for specific strings → cross-validate) is directly applicable beyond game modding to debugging and reverse engineering.
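The scan-then-cross-validate approach can be sketched like this. The write-up only says the author scanned the full 24 MB RAM for known strings and cross-validated across runs; the function names and the intersection strategy below are illustrative.

```python
from typing import Iterable, List, Tuple

GC_RAM_BASE = 0x80000000  # GameCube addresses are offsets into the 24 MB MEM1

def scan_for_text(ram: bytes, needle: bytes) -> List[int]:
    """Return every GameCube address where `needle` occurs in a RAM dump."""
    hits, start = [], 0
    while (i := ram.find(needle, start)) != -1:
        hits.append(GC_RAM_BASE + i)
        start = i + 1
    return hits

def cross_validate(dumps: Iterable[Tuple[bytes, bytes]]) -> List[int]:
    """Intersect candidate addresses across several (dump, expected text)
    pairs -- e.g. talk to an NPC, freeze the emulator, dump RAM, repeat
    with different dialogue, and keep only addresses that match every time."""
    candidates = None
    for ram, needle in dumps:
        hits = set(scan_for_text(ram, needle))
        candidates = hits if candidates is None else candidates & hits
    return sorted(candidates or [])
```

Each extra dump shrinks the candidate set quickly, which is why a few talk/freeze/scan rounds suffice to pinpoint a single buffer address.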
Code Example
# GameCube memory read/write (Dolphin emulator IPC).
# Dolphin maps the GameCube's 24 MB MEM1 somewhere in its own host address
# space; DOLPHIN_MEM1_BASE is that host-side base address (discovered at
# runtime), while GameCube code addresses memory starting at 0x80000000.
GAMECUBE_MEMORY_BASE = 0x80000000

def read_from_game(gc_address: int, size: int) -> bytes:
    real_address = DOLPHIN_MEM1_BASE + (gc_address - GAMECUBE_MEMORY_BASE)
    return dolphin_process.read(real_address, size)

def write_to_game(gc_address: int, data: bytes) -> bool:
    real_address = DOLPHIN_MEM1_BASE + (gc_address - GAMECUBE_MEMORY_BASE)
    return dolphin_process.write(real_address, data)

# Buy time for the LLM response with loading text.
loading_text = ".<Pause [0A]>.<Pause [0A]>.<Pause [0A]><Press A><Clear Text>"
write_dialogue_to_address(loading_text, addr)
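The prompt design mentioned under How to Apply can be sketched as a simple builder. The write-up does not show its actual prompt, so the fields and wording here are hypothetical.

```python
def build_system_prompt(name: str, personality: str, relationships: str,
                        game_context: str) -> str:
    """Assemble a per-character system prompt from personality, relationships,
    and game context (illustrative; not the project's actual prompt)."""
    return (
        f"You are {name}, a villager in Animal Crossing.\n"
        f"Personality: {personality}\n"
        f"Relationships: {relationships}\n"
        f"Current situation: {game_context}\n"
        "Stay in character and keep replies short enough "
        "to fit the in-game dialogue box."
    )
```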
Related Posts
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and even applies automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answers pings: a fun case of pushing the idea that 'Markdown is the code and the LLM is the processor' all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and can even blame which prompt wrote which line of code.
Principles for agent-native CLIs
A post laying out principles for designing CLI tools that AI agents can use well; as agents increasingly rely on CLIs as tools, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for coordinating multiple AI agents that collaborate with divided roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub, S3, and Google Drive into a single version-controlled filesystem.