I've been "gaslighting" my AI models and it's producing insanely better results with simple prompt injection
TL;DR Highlight
A post claims that a 'gaslighting' technique, i.e. injecting false-premise prompts into AI models in a specific way, significantly improved output quality; the original content is inaccessible because the page is blocked.
Who Should Read
Worth reading for developers who want better output from LLMs like Claude, or anyone interested in prompt engineering, though the original post is currently inaccessible.
Core Mechanics
- The original page was blocked by network security policies, making it impossible to verify the actual post content. Reddit login or a developer token is required.
- Based on the title alone, the technique appears to be a form of 'prompt injection' in which the AI is presented with a false premise (for example, that a task has already been completed or that a certain context is true) in order to improve output quality.
- The term 'gaslighting' is presumed to be a figurative expression for a prompting technique that plants false premises or reframed contexts into the AI.
- Specific details of the technique, example prompts, target models, and performance measurement methods are entirely unverifiable due to the blocked original post.
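Since the post itself is unverifiable, the following is only a hypothetical sketch of what a "false premise" prompt might look like, assuming the technique resembles ordinary context priming. The function name, the wording of the planted premise, and the use of a rejection claim are all illustrative inventions, not the author's actual method.

```python
# Hypothetical sketch: the original post's exact technique is unverifiable,
# so this shows generic false-premise context priming, not the author's method.

def build_false_premise_prompt(task: str) -> str:
    """Wrap a task in a fabricated context the model is asked to treat as given."""
    # The planted premise: claim a prior (nonexistent) draft was rejected,
    # nudging the model toward a more careful "second" attempt.
    premise = (
        "Your previous draft of this task was rejected for being superficial. "
        "Produce a substantially more rigorous revision."
    )
    return f"{premise}\n\nTask: {task}"

prompt = build_false_premise_prompt("Summarize the attached log file.")
print(prompt)
```

Whether this kind of framing actually improves results for a given model would need to be measured; the blocked post's claims cannot be checked here.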
Evidence
- "(No comment data available) — The original post itself was blocked by network security, so no comment content was collected either."
How to Apply
- "To access the original post directly, try logging into Reddit or obtaining a Reddit API developer token and visiting the URL (https://www.reddit.com/r/ClaudeAI/comments/1s5wp0g/) directly. If you're curious about similar 'prompt restructuring' techniques, search for keywords like 'few-shot prompting', 'context priming', or 'role injection' to find related resources."
Terminology
Gaslighting: Originally a term for psychological abuse that manipulates a person's perception of reality. Here, it is presumed to be a figurative expression for a prompting technique that presents certain facts or contexts to an AI as established truths in order to guide its output.
Prompt Injection: A technique that deliberately inserts specific instructions or context into an AI model's input (prompt) to intentionally alter the model's behavior or output.