
If You Treat AI Like a Vending Machine, Don’t Blame It for Serving You Garbage

[Image: a dimly lit vending machine bathed in harsh blue light, many slots empty and items slumped crooked in their coils, in a blurred public space.]

Mammals love pushing buttons. The behavioral psychologist B.F. Skinner made his career measuring how many times rats would push a button—and how long they’d keep pressing without a reward.


That intimate connection between button and reward is something we human mammals know all too well. I can still remember, as a child, jabbing at the big buttons on a vending machine and bursting with anticipation as the can (or candy bar) clattered its way to my outstretched hand.


And when those buttons didn’t deliver the much-anticipated goodies? I’d shake the machine with all the wrath my 40-pound frame could muster.

 

In the early days of the Internet, we’d do the equivalent of “shaking the machine”: banging on the modem or shutting down the browser with an imperious click. The Internet overlords learned very quickly that if they wanted users to, well, keep using, they needed to make sure that a button push led to a quick reward (Skinner’s rats were a little more persevering).


But now there’s a new intelligence—one that can think. And while AI can adequately respond to our “give-it-to-me-now” button pushes, it thrives on something a lot more precious: thoughtfulness. 


Show of hands (don’t worry, no one is looking): when was the last time you entered a one-liner into ChatGPT and then simply copied and pasted the output with barely a glance? OK, keep those invisible hands up if you later griped about how AI is overhyped.


Welcome to the age of the AI vending machine. Who’s to blame? Well, let’s just say: don’t shake your laptop when ChatGPT doesn’t give you what you want.


Less Prompting, More Thinking


After reading the section above, you might be thinking, I know, I get it—I have to write lengthy prompts if I want a good response. 


Well, let me flip that on its head. A long prompt does not necessarily equal a good prompt. 


But here’s something people often overlook: a little prompting goes a long way. 


And reflecting on the output, then following up with a thoughtful prompt? A very long way.


Yep, regardless of which LLM you’re using, a little guidance can help refine the initial output. The time involved? A 2–3 minute back-and-forth in which you treat the LLM as an intelligent human. Think of it like this: would you rattle off a one-liner to an expert over the phone and hang up, expecting a great result?


The quality delta between the first output and, say, the third or fourth is what’s important. And those few minutes can take you from AI slop to AI pop. You’re also in the loop, actively shaping the output. After all, you don’t want your own thinking to become slop.


And here’s why it matters: it’s not that AI will replace you. It’s that a human who knows how to think with AI will replace both the non-user and the person mashing at the AI vending machine buttons. 


How to Know if You’re Treating AI Like a Vending Machine 


You might fall into one (or all) of the following if you are using AI like a vending machine. 


The Google-Brained Search


Ah, Google. Ye all-knowing oracle of yore. How you fed us a universe of knowledge all these years with nothing more than a succinctly framed search query. 


Unfortunately, that way of thinking—that a concise one-liner is the best way to unlock knowledge—has been turned on its head. Use it with AI and watch the slop drop. After all, LLMs thrive on relevant context. Give them a tersely framed question, punch that button… and they really don’t have much to go on.


The Massive Prompt Paster


Nothing says AI Fluency like a prompt library. Think again! Nothing says “button pushing” like a massive prompt you’ve put no thought into—beyond the googling it took to find said massive prompt.


But, you plead, the output has to be good—it was crafted by prompt engineers. No need to read the output, right? Output gold is at my fingertips. 


In reality, that’s not necessarily how LLMs work. You’ll often need to provide follow-up prompts to massage that output into something more acceptable. Yet that’s exactly where our vaunted prompt libraries go mum.


The Window Closer


You might be nodding along, thinking, yeah, I totally get that AI is slop. That’s why I roll my eyes when I read an AI output and close the browser faster than you can say “tokenization.”


At the end of the day, that’s still button pushing. The reason for AI slop might not be that Google-brained search or that pasted-in massive prompt. It’s that you never took your internal dialogue about all the things the AI messed up on and typed it back in. That pushback? That’s the prompt! You’ll get a much better response this way. And even if it’s not there the second time, a thoughtful follow-up nudge can get you there.



The Vibe Checker


This one is everywhere. People will toss off some command—“Make this sound more professional” or “Polish this for my client so it sounds like an expert”—without even reading the output. They’re probably thinking, I told it what to do, so what else is left?


But wait a second. What exactly does “more professional” mean? Instead, upload an email from a client that shows how that client writes and thinks. Then, ask it to write a message that will most likely land positively with that client. This is not prompting: it’s collaboration. 


And the LLMs will eat up the nuance, extract the patterns, and spin you some output gold. Take that vending machine!


What You Should Do Instead 


To avoid mediocre outputs, converse with AI the way you would with an intelligent human expert. The key element of any conversation is the back-and-forth. This dialogue lays the groundwork for something called thought partnership.


For instance, if I want ChatGPT to help me with a PowerPoint presentation, do I say, “Write me a PowerPoint presentation on [insert topic here]”? Of course not, because that would be classic vending machine (I can almost hear those cans clanking!).


Instead, I’d upload what I’ve already written, tell it my objective, and throw in a PowerPoint deck I did last year that I was really proud of. But I wouldn’t stop there. Thought partnership happens when you get that first output and realize it isn’t quite what you wanted—though there are a lot of strong elements. You ask the LLM which parts work better and why. You push back against some of its suggestions or ask for further rationale.


Sure, a dialogue like this takes a lot longer than one quick prompt. But with something likely riding on this presentation, why would you want to spend mere seconds on it? In the end, you’ll likely finish that PowerPoint much faster and end up with a much better final product.


Of course, not all of our interactions with AI are around something as complex as a PowerPoint presentation. But even an email to a prospective client is something you’ll be far better off proofreading, pushing back on, or even injecting some of your own voice back into.


Or you could just push that big shiny button, copy and paste, and feel the rush of the reward. But be warned: a human wielding AI thoughtfully is coming. And all it takes to join that group?


The ability to resist the temptation of that (clankety-clank) sweet digital sugar.


 
 
 
