We’ve known the phrase for decades, but it took a chatbot to make us finally live by it.
“Garbage in, garbage out” used to be a warning for programmers. Now it’s becoming a daily lesson for anyone who’s ever typed a prompt and received nonsense in return. The machine didn’t read your mind. It read your words. And your words, it turns out, were garbage.
Here’s the uncomfortable question: if we’re learning to prompt machines better, will we ever learn to prompt humans better too?
The Mind-Reading Tax We Put on Humans
When we talk to another person, we carry an invisible expectation: they should understand us, even when we’re not being clear.
We send half-formed emails and expect colleagues to decode them. We give vague briefs and blame the designer when the output misses the mark. We say “you know what I mean” as if telepathy were a reasonable job requirement.
Psychologists call this the curse of knowledge - a cognitive bias where knowing something makes it nearly impossible to imagine not knowing it. In a famous Stanford experiment, Elizabeth Newton’s “tappers and listeners” study, people who tapped out a song’s rhythm predicted listeners would name the song correctly 50% of the time. The actual success rate? 2.5%. Once you hear the melody in your head, you can’t fathom that others only hear random taps.
This is how we communicate at work every day. We hear our own melody. We assume everyone else does too.
And when things go wrong? It’s rarely our fault. The other person should have asked clarifying questions. They should have known. They should have read between the lines.
The Machine Doesn’t Guess
Then you open ChatGPT, type something vague, and get a response that makes you realize: oh, I was the problem.
The machine doesn’t fill in the gaps. It doesn’t assume good intentions. It doesn’t “get you.” It processes exactly what you gave it and returns something proportional.
And here’s the strange part: we accept this. We don’t get mad at the AI (or maybe sometimes we do) for not reading our minds. We don’t send passive-aggressive follow-ups. We rewrite the prompt. We add context. We clarify what we actually want.
We take responsibility for the input.
This is new behavior for most of us. With humans, we might iterate once or twice before getting frustrated. With AI, we’ll cheerfully rewrite a prompt five times until it works. The feedback loop is immediate, impersonal, and brutally honest.
No one (okay, almost no one 😅) has ever rage-quit a conversation with ChatGPT because it “should have understood.”
Prompt Hygiene as a Life Skill
What does a good prompt look like? Context. Specificity. Clear intent. An explanation of what success looks like.
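A made-up example: “Write something about our product launch” gets you filler. “Write a 150-word announcement for our newsletter audience about next month’s launch, lead with the two new features, and end with a link to the sign-up page” gets you something usable. The difference isn’t the tool. It’s the input.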
Sound familiar? That’s also the recipe for a good brief, a good email, a good request to a colleague, a good conversation with your partner.
The skills are identical. We just never had a mirror that reflected our communication flaws back at us so quickly and so bluntly.
As one researcher put it: “Our language skills aren’t precise - we’ve done little work to assure that precision because we simply haven’t had to in most careers. However, with AI, precision in language is critical.”
Every time you craft a better prompt, you’re practicing a skill that transfers directly to human communication: thinking clearly about what you actually want before you open your mouth.
Will We Prompt Humans Better?
Here’s where it gets interesting. Millions of people are now practicing clear communication multiple times a day - with machines.
They’re learning that vague inputs produce vague outputs. They’re learning that context matters. They’re learning that “you should just know” is not a strategy.
And there’s early evidence this is working. According to recent surveys, 73% of employees who use generative AI say it has helped them avoid miscommunication at work. That’s a remarkable number for a tool most people have only been using for two years.
The question is whether this discipline will transfer beyond the chat window. Will we start prompting the humans in our lives with the same care we give to Claude or ChatGPT?
Or will we reserve our clarity for the machines and keep expecting humans to read our minds?
The Empathy Paradox
There’s a strange irony here. We’re often more patient with artificial intelligence than with actual intelligence.
We’ll rewrite a prompt five times to get the output we want. But when a human misunderstands us once, we get frustrated. We assume bad faith. We write them off.
Think about that. We give a machine more benefit of the doubt than we give our colleagues.
Maybe the issue is expectations. We expect nothing from AI except literal interpretation, so we adjust our inputs accordingly. We expect everything from humans - context, nuance, mind-reading - so when they fail, we blame them instead of ourselves.
What if we flipped this? What if we treated human communication like prompt engineering? What if we assumed that unclear outputs meant we should refine our inputs - not blame the receiver?
The Real Lesson
“Garbage in, garbage out” was never really about computers. It was about us.
The machines are just making it obvious. They’re holding up a mirror and showing us exactly how clear - or unclear - we’ve been all along.
The curse of knowledge doesn’t go away just because you’re aware of it. Studies show that even telling people about the bias doesn’t reduce it. But practice might.
Every prompt you write is a rep. Every time you add context because you know the AI needs it, you’re building a muscle. Every time you specify what success looks like instead of assuming it’s obvious, you’re getting better at a skill that most people never deliberately practice.
Maybe the real gift of the AI era isn’t the outputs we get from machines. It’s the input discipline we’re finally learning for ourselves.
The question isn’t whether you can prompt AI effectively.
It’s whether you’ll learn to prompt the humans around you with the same clarity, patience, and care.