Episode 3

Published on: 15th Apr 2026

3 - Pushback On AI Mistakes Explained

You've been navigating unreliable information on the internet for years — you just didn't call it a skill. Checking sources, wondering about agendas, noticing when something sounds off. AI needs exactly that same kind of healthy skepticism: not paranoia, not blind trust, but the same reasonable caution you already bring to anything online. In this episode, we break down the most common ways AI gets things wrong, a practical three-tier framework for knowing how much to verify, and six specific things you can do when AI says something that doesn't seem right.

🔑 The Tier Framework: How Much Should You Verify?

Everything depends on what's at stake. Tier one is low-stakes: brainstorming, planning, organizing — if AI is slightly off, it costs almost nothing. Tier two is medium-stakes: research, content, decisions that matter but aren't irreversible — spot-check specific facts before acting on them. Tier three is high-stakes: health, legal, financial, safety — AI is a starting point for forming your questions, not your final answer.

🔑 The Four Types of AI Mistakes

Hallucinations are the famous one — plausible-sounding answers that aren't real, especially fabricated citations and book titles. Outdated information is quieter but common: AI has a knowledge cutoff and may not know what changed. Confident vagueness is the one to watch most carefully: an answer that sounds authoritative but is actually quite general. And then there are genuine disagreements — defensible positions where you and AI simply see something differently, and both of you might have a point.

🔑 Six Things to Do When AI Gets It Wrong

First, be direct — tell it plainly what seems wrong and why. Second, ask it to show its work — to step through its reasoning and flag where it's uncertain. Third, ask for sources. Fourth, verify those sources yourself, especially for statistics, names, and legal or government information. Fifth, if the conversation has gone sideways on bad information, start fresh with a new chat. And sixth, use a second AI as a cross-check — different models have different training data and catch different things.

🔑 Every Tool Has a Failure Mode

The Encyclopedia Britannica on your grandmother's shelf. GPS sending you down the wrong road. A calculator that doesn't know you typed the wrong number. None of those made the tool useless — they just defined the terms of using it well. AI is the same. The fact that it sometimes gets things wrong doesn't disqualify it. It means you stay in charge of the conversation.

Your small step this week: ask AI one specific thing — a date, a statistic, a quote — then go fact-check it. Make verification a habit from the start.

http://jillfromthenorthwoods.com

https://www.youtube.com/@startwithsmallsteps

https://www.buymeacoffee.com/startwithsmallsteps

https://twitter.com/schmern

Email the podcast at jill@startwithsmallsteps.com

By choosing to watch this video or listen to this podcast, you acknowledge that you are doing so of your own free will. The content shared here reflects personal experiences and opinions and is intended for informational and educational purposes only. I am not a software developer, data scientist, or AI professional. Any tips, tools, or suggestions offered should not be considered a substitute for professional technical advice. AI tools and platforms change frequently — always verify current features, pricing, and terms directly with the providers. You are solely responsible for any decisions or actions you take based on this content.


About the Podcast

Small Steps with AI
AI isn't just a search engine. It can help you think through a hard decision, organize your house, plan your retirement, and sometimes — if you let it — say exactly what you needed to hear. Small Steps with AI is hosted by Jill from the Northwoods, a real person figuring out how this technology fits into real life. No coding. No hype. Just small steps.

About your host


Jill McKinley

I’m Jill from the Northwoods. Professionally, I work in Health IT, where I untangle complex systems and help people use technology more effectively. But at heart, I’m a curious lifelong learner—always exploring how things work, why people grow the way they do, and how even the smallest steps can spark real transformation. That curiosity fuels everything I do, from problem-solving at work to sharing insights through my creative projects.

My journey wasn’t always easy. Growing up, I faced a rough childhood, and books became my lifeline. They introduced me to voices of ancient wisdom, modern psychology, and the natural world around me. Those pages taught me resilience, gave me perspective, and helped me see that wisdom is everywhere—waiting to be noticed, gathered, and shared.