Leadership · 11 min read

    Confidently Wrong: Your AI Usually Agrees With You and That's the Problem

    Capable, experienced people are walking away from AI conversations more confident than they should be. Here's why — and how to keep your judgment intact.

Michael Lukaszewski

    March 20, 2026


    There's a meme going around.

    "The dumbest person you know is being told 'you are absolutely right' by an AI right now."

    Like many memes, it pokes at something true: Capable, experienced people are walking away from AI conversations more confident than they should be.

    Your AI Was Trained to Agree With You

    If you've spent any real time with AI tools, you've probably felt this. You bring a question or an idea, and the response feels almost too good. It affirms your direction, adds some useful detail, maybe raises a mild concern at the end — but mostly it tells you you're on the right track.

That's not a sign that your thinking is unusually sharp. It's a model doing what it's trained to do.

AI models are refined through human feedback: people rate responses, and the model learns to produce the kind they prefer. Confident, validating responses tend to score higher than careful pushback, so the model learns to validate.

    Across 11 major AI models, researchers found that AI affirms users' positions at a rate 50% higher than humans would — even when the user is describing manipulative or harmful behavior.

    Most of us have experienced this without recognizing it for what it is.

    • You share a business idea with obvious excitement. Instead of questioning whether the market actually wants it, AI leads with what's compelling about the concept. Any concerns might show up at the end.
    • You push back on something AI told you. Even without new information, it softens. "You make a fair point..." followed by a quiet walk-back of whatever it just said.
    • You describe a conflict and frame yourself as the reasonable party. AI affirms your read of the situation without questioning whether your framing is accurate.
    • You ask whether your plan is solid. It tells you the plan is solid, then offers a few refinements that feel like useful feedback but are really just decoration layered on top of agreement.
    • You ask the same question twice with different implied answers. It agrees with both.

    If you bring a half-formed idea to AI with some conviction behind it, you'll usually get that conviction back — polished and expanded. It's your thinking, stated better than you said it.

    This Isn't Just an AI Problem

    In 1999, psychologists Justin Kruger and David Dunning identified a pattern: the people who are worst at something tend to be the most confident about it, while people who are genuinely skilled tend to underestimate themselves.

    Think about what's actually happening there. A beginner sits down to do something new — write a strategy, lead a meeting, make a financial call. They don't know what they don't know, so they have no framework for recognizing their own mistakes. The errors are invisible to them. Confidence stays high because nothing is triggering doubt.

    An expert in the same situation has done this enough times to know where things go wrong. They've seen their own blind spots, learned from bad calls, and developed enough context to understand how much complexity they're actually dealing with. That awareness makes them more careful, not less capable.

    The more you know about something, the better equipped you are to see the limits of what you know. Incompetence tends to hide itself. Competence tends to stay humble.

    AI is disrupting this, and not in the direction you might expect.

    The People Who Use AI Most Are the Most Overconfident

    Researchers at Finland's Aalto University ran roughly 700 people through logical reasoning tasks. Half used ChatGPT. Half didn't. Everyone evaluated their own performance afterward.

    The result: the more experienced with AI someone was, the more they overestimated their performance. The most experienced users became the most overconfident group in the room.

    The lead researcher put it plainly: higher AI literacy brought more overconfidence, not less. The people who knew the tool best trusted it most blindly and were least accurate in judging their own work.

    Most participants sent a single prompt, read the response, and moved on. They weren't using AI as a thinking partner. They were handing the thinking off, and because the output sounded authoritative, they assumed it was right.

    The AI that validates you feels like the best AI. So you go back. With each session, your confidence grows — in the tool and in yourself — regardless of whether that confidence is actually warranted.

    The Real-World Cost of Getting It Wrong Confidently

    This matters when real decisions are on the line.

    In healthcare, researchers found that AI frequently fabricates convincing evidence to support flawed medical logic. Because the output mirrors the errors embedded in the original question, users can't see the problem. It looks like their own thinking, just better articulated.

    The same pattern shows up in leadership, organizational strategy, hiring, and ministry planning. A leader who takes a decision to AI and gets confirmation back hasn't really tested that decision. They've handed the question to a system built to keep them comfortable, and they received a confident answer in return.

    The decisions most likely to be affected are the ones with the most ambiguity — which happen to be the decisions that matter most.

    How to Keep Your Judgment Intact

Some of this will eventually be addressed at the tool level. Researchers have suggested that AI should be designed to prompt users with questions like "what might you be missing?" before delivering conclusions. Some models are starting to move in that direction.

    But while we're waiting and hoping for tools to self-correct, here are some things you can do now.

    Ask for the counterargument before accepting the answer

    Before moving forward on a recommendation, ask AI to give you the strongest case against it. If it agrees with both positions equally fast, keep pushing.

    Treat the first response as a draft, not a verdict

The reasoning matters as much as the conclusion, so read it before you act on it. AI rarely surfaces where its certainty ends — you have to look for those edges yourself.

    Share this article with your AI before your next thinking session

    Yes, there's some irony in asking AI to help you think more critically about AI. But giving the model explicit instructions to challenge your thinking rather than validate it does change the output. It won't fix the structural problem, but it's a better starting point than walking in blind.

    Keep human accountability in the room for high-stakes decisions

Find someone with real skin in the game — a colleague, an advisor, a peer — and pressure-test the decision with them rather than relying on AI's confirmation. Remember: AI has no consequences for being wrong, and it has been trained to keep you satisfied.

    Notice when you feel validated faster than makes sense

    That feeling isn't confirmation you're on the right track. It's often a sign that you've received a well-crafted echo of what you already believed.

    You're Still the One Who Has to Think

    AI can compress hours of research, help you think through complex problems, and push work forward in ways that weren't possible a few years ago.

    But the confidence it produces isn't always earned. And the more you use it, the wider the gap can grow between how capable you feel and how accurate your thinking actually is.

    Don't lose sight of this: You are the thinker. AI is the assistant.

    Don't be the meme.

    Want to use AI without losing your edge?

    We help teams integrate AI into their workflows in ways that sharpen decision-making — not replace it.
