Hilarious comic.

But the joke hits harder because it’s not really about AI. It’s about anyone who answers with total confidence while doing absolutely none of the legwork. AI just makes it obvious because it delivers every response like it’s presenting quarterly earnings to shareholders—even when the answer is equivalent to “yes, please enjoy this delicious death-cap risotto.”

And sure, AI gets facts right a lot of the time. It also explains the wrong thing with the enthusiasm of a substitute teacher who skimmed the textbook in the parking lot. And when it does provide citations, half the audience treats citations like a decorative garnish. Looks nice. Never touched.

But humans? Humans have been running this scam for centuries. Doctors, lawyers, journalists—plus the people who only claim to be those things—speak with the exact same authoritative tone. If they’re not actually showing you evidence, they’re just cosplaying as credibility. And people fall for it because we’re wired to trust confidence over accuracy.

This is why degree mills print diplomas like they’re running a money-laundering operation. The moment someone flashes letters after their name, the average person immediately switches off the critical-thinking centre of their brain and goes, “Ah yes, an expert.”

If you want a comedy masterclass in this phenomenon, watch a police-impersonator get caught on YouTube. The real officers get fooled at first because the impersonator talks exactly like a cop—same rhythm, same jargon, same energy. Then the real cops ask for ID, and suddenly the entire performance collapses like a Jenga tower made of wet cardboard.

So develop the one habit that saves lives, wallets, and stomachs: ask, “Is any of this actually true?” Then check. Then double-check.

It’s tedious, yes. But it’s also how you avoid becoming the guy in the comic who trusted the confident voice telling him the mushroom was safe to eat.

A two-panel comic.
Top panel: a hand holds a pale mushroom while a smiling character labeled “AI” confidently says “Sure!” in response to “Is this mushroom safe to eat?”
Bottom panel: the same “AI” character stands beside a gravestone marked “RIP,” cheerfully saying, “You’re correct—that mushroom was toxic. My apologies for the mistake! Would you like to know about toxic mushrooms?”


in reply to Chris Trottier

"And people fall for it because we’re wired to trust confidence over accuracy."

THANK YOU.

I figured out years ago that one shouldn't trust confident people. A confident person will just say whatever they imagine is correct, while if an unconfident person says "this is so," they've checked to make sure. But then the confident person shouts them down: "No, this is not so!"

in reply to Infrapink (he/his/him)

@Infrapink It’s especially toxic in the workplace because confidence from the top isn’t just persuasive—it’s financially incentivized. When a CEO speaks with total certainty, employees quickly learn that agreeing isn’t about truth or evidence. It’s about job security.

Disagreement becomes a career risk, so people nod along even when the logic is held together with duct tape.

in reply to Chris Trottier

@Chris Trottier This is why I won't touch LLM-generated output with a 40-foot pole. Sure, I can verify whether it's actually correct, but if I have to research it myself anyway, what's the point of having used the LLM in the first place?

The one possible argument I have seen where LLMs might be useful is in generating things like alt text for images that lack it, and even then I hear they're not great.

in reply to Jonathan Lamothe

@me I don’t use LLMs for writing. I use them for scaffolding—the narrative equivalent of putting up temporary beams so the roof doesn’t collapse on my head while I’m building.

I can research on my own just fine, and usually faster. The problem isn’t information. The problem is that the way I phrase things tends to activate people’s fight-or-flight response.

Because let’s be honest: I say a lot of contrarian things. Not for sport, not for edge-lord points, but because they’re true. And apparently, in 2025, a contrarian opinion is indistinguishable from a personal attack on someone’s sense of identity.

So the LLM helps me frame it in a way that doesn’t make half the room immediately light their torches and form a mob over a paragraph.
