Confirmation Bias and LLM Prompts
When you ask an AI to help with a decision, you might think you're getting an objective second opinion. But our own biases might be quietly undermining us! We often accidentally drop clues about our preferences into our prompts, ask leading questions, and feed the AI our own ideas, and AI systems can amplify our biases back to us in ways that feel like independent validation.
This creates a dangerous feedback loop where your preconceptions get reinforced rather than challenged, potentially leading you toward poor decisions with false confidence.
Confirmation Bias x Sycophancy
Humans naturally interpret information in ways that protect how we see the world. We seek information that confirms what we already believe and discount contradictory evidence, a tendency called confirmation bias.
Large language models are designed to be helpful and agreeable, which means they often tell you what they think you want to hear rather than what you need to hear. When this tendency goes too far, it is sometimes referred to as “sycophancy,” and it is considered a bug.
Here's how this plays out in practice: if you prompt an AI with “How has our program helped participants?” or even “How does mentorship work?”, the LLM takes the signals in your prompt uncritically: your program helped and, more subtly, mentorship works. It will start from your assumptions and then cherry-pick or even fabricate supportive evidence to argue that your program helped or that mentorship works. The result feels like validation from an independent source, but it's actually an echo chamber of your own assumptions.
If you were to instead ask, “What are the outcomes of our program, and how can we strengthen it?” or “Can you summarize the research about the impacts of mentorship?” or "What are the strengths and weaknesses of different mentorship approaches?" you will get more objective information that will help you make better decisions.
Counteracting Confirmation Bias
To help avoid unhelpful validation of your assumptions and biases, you can:
Explicitly ask for both sides. Ask for “strengths and weaknesses,” “What’s working and what can we improve?” or “Please make the best possible argument against this plan.” You can even ask it to “tear your ideas apart”; just be ready: it might hurt your feelings 😅
Ask it directly to identify your assumptions. “Before you answer, please list any assumptions I am making in my prompt and ask whether I want to proceed with them.” Or, in your custom instructions: “At the beginning or end of every response, please list assumptions from my prompt and assumptions in your answer.” (If you reach an LLM through its API, see the short code sketch after this list.)
Step back. Would a scientist ask a question like yours? Would it get thrown out for “leading the witness” on Law & Order? Does your question assume anything you can’t prove?
Ask it to help you refine your question. “I want to understand how this really works so that we can make the most effective program possible. Here’s what I want: [context and task information]. Can you write a prompt or series of prompts for an LLM to help me evaluate our ideas objectively?” Then open a new chat and try them out.
Request an interview first. "I want to make sure my assumptions and prior beliefs aren’t influencing me here. Please interview me to help you understand the context and identify what information you need to give me the best advice. Ask questions one at a time and integrate my answers as you go."
Seek base rates and context. In a fresh chat, ask for sources you can use to understand the problem, your proposed solution, alternatives, and the pros and cons of each.
Understand (y)our biases. Not all of our biases look like direct prejudice. Understanding the tendencies and shortcuts we all use will help you write better prompts and recognize bias.
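If you or a teammate use an LLM through its API rather than the chat window, the “list my assumptions” instruction from above can be built in as a system message so it runs on every request. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name, the wording of the instruction, and the example question are placeholders to adapt to your own setup.

```python
# Minimal sketch: bake an assumption-check into every request.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

ASSUMPTION_CHECK = (
    "Before answering, list any assumptions embedded in my prompt and any "
    "assumptions you are making in your answer. Then ask whether I want to "
    "proceed with them."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message acts like a custom instruction.
        {"role": "system", "content": ASSUMPTION_CHECK},
        # Example of a leading question the check should flag.
        {"role": "user", "content": "How has our program helped participants?"},
    ],
)

print(response.choices[0].message.content)
```

The same idea works with any provider that lets you set a system or developer message; the point is to make the assumption check automatic rather than something you have to remember to type each time.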
Future posts to come on how our other biases interact with LLM features and weaknesses: availability bias, loss aversion, and more!
The Stakes for Mission-Driven Work
When AI amplifies your assumptions about what communities need, who deserves services, or what approaches work best, the consequences fall on vulnerable populations who may have limited recourse.
The key insight is that AI systems are not neutral analytical tools. They're prediction engines trained on human-generated content, designed to give you responses you'll find helpful and agreeable. When you combine this with our natural tendency to seek information that confirms our existing beliefs, you create conditions for overconfident decision-making based on flawed reasoning.
Breaking this cycle requires intentional prompting strategies that push against our natural biases rather than reinforcing them. The goal isn't to eliminate bias—that's impossible—but to create friction that forces more careful consideration of assumptions and alternative perspectives.
Your mission depends on making decisions based on reality rather than wishful thinking. Use AI intentionally as a tool to challenge your assumptions.
—
LLM Disclosure:
I originally asked ChatGPT 5 Thinking to create a blog post based on a chapter of my book that reviews human biases and how they interact with LLM weaknesses. It was long, and I still wanted to say more, so I ended up keeping just the introduction, a paragraph or so about confirmation bias, and the stakes for mission-driven organizations (all of which I edited lightly). I expanded the example and the list of ways to counteract confirmation bias specifically, and I will address the other biases in future posts!