Chris Lele

Why Prompt Engineering Should Have Died All Along


Have pre-fabricated prompts lost their usefulness?


Today’s AI news is flooded with articles bemoaning that we’ve hit the ceiling on what LLMs are capable of. GPT-5 might turn out to be little more than GPT-4.1. 


But what if part of this ceiling is self-imposed and (shockingly!) rooted in a lack of human know-how? 


The culprit? 


Prompt engineering. And I mean not only the practice (which has marginally improved outputs) but the term itself, which locks us into a certain way of thinking. 


To use an analogy: imagine driving a mountain road in your current car, then upgrading to a Tesla (if you already drive a Tesla, make the upgrade a Bugatti). You are clearly going to fly through that road faster. Now imagine you upgrade once again, this time from the Tesla to a Bugatti. You will get incremental improvements, but the ceiling of what you are capable of has pretty much been hit. 


Would you say your time completing the mountain road is now at its absolute fastest? And that the Bugatti run beat the Tesla run? 


Before you answer that question, let me introduce Lewis Hamilton. For those who don’t know, he is one of the greatest Formula 1 drivers of all time. 


Now, let’s take that same mountain road, but this time Lewis Hamilton is behind the wheel of the Tesla and you are behind the wheel of the Bugatti. The winner…well, we don’t even have to wait for the starting gun to know the answer. 


In this scenario, you, me, or basically any average mortal driver is prompt engineering. What, then, is Lewis Hamilton? He represents the utmost of what LLMs are capable of: a specific set of LLM techniques that can give you greatly superior output compared to prompt engineering. 


To capture these techniques with one buzzy phrase, I usher in “conversation sculpting.” In other words, it’s not about the prompt; it’s about the conversation. 


What makes “conversation sculpting” so different? 


Think of Google Maps zoomed out to the level of the entire world. Your job is to get the pin to land on your hometown. 


On the first try, you are lucky to get your home state/region. But with each subsequent try, you are going to get a little bit closer. 


This process of zeroing in on your home city is similar to that of getting a high quality response: it’s probably not going to happen on your initial attempt, no matter how carefully you try to place that first pin—or craft that first prompt. 


Course correction is the name of the game if you want the best final output, and doing it well requires a few useful strategies. 
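To make the loop concrete, here is a minimal sketch of course correction as a conversation that grows turn by turn. The function names and the `ask_model` stub are illustrative, not any real API; a real version would send `messages` to a chat model (the stub just reports how much context it received so the structure is visible).

```python
# Illustrative sketch: course correction as an iterative conversation.
# `ask_model` is a hypothetical stand-in for a real chat API call.

def ask_model(messages):
    # Placeholder: a real call would send `messages` to an LLM here.
    return f"Draft based on {len(messages)} turns of context."

def sculpt(initial_prompt, feedback_rounds):
    """Run one prompt, then feed corrections back turn by turn."""
    messages = [{"role": "user", "content": initial_prompt}]
    reply = ask_model(messages)
    for feedback in feedback_rounds:
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": feedback})
        reply = ask_model(messages)  # each retry sees the whole history
    return messages, reply

history, final = sculpt(
    "Draft my end-of-month presentation outline.",
    ["Too formal; match my usual tone.", "Trim it to five slides."],
)
```

The point of the sketch: each correction is not a fresh prompt but an addition to the running history, so every retry is answered with more context than the last.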



Context weaving – another tool in the “conversation sculpting” toolkit


Context weaving is where you provide helpful context in the form of PDFs or other text you can easily copy and paste. If you want ChatGPT to help you with your end-of-the-month presentation, you don’t just tell it all about your role and what you are trying to do (classic prompt engineering). 


In fact, you spend little time actually writing the prompt and more time figuring out which files/documents/text to feed it. Your presentation from last month? Not bad. But what about that presentation from the department rockstar who always gets the kudos? Much better. 


That doesn’t mean the advice it gives or the report it helps you fashion will be perfect the first time around. Again, you have to course correct. And if you don’t know the subject well enough to course correct, that’s when you copy and paste the conversation into another LLM (hey, Claude, what do you think of ChatGPT’s advice and final script?). Take that evaluation and then put it back into ChatGPT for it to cough up another, and usually greatly improved, attempt. 
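The cross-LLM round trip above is really just two pieces of copy-and-paste plumbing, sketched below with the model calls left out entirely. Everything here (function names, wording of the prompts) is an assumption for illustration; what you actually paste into Claude or ChatGPT would be your own words.

```python
# Hedged sketch of the cross-model critique loop: draft -> second
# opinion -> revision request. Model calls themselves are omitted.

def critique_prompt(draft):
    """What you'd paste into a second LLM (e.g., Claude)."""
    return ("Here is another assistant's advice and final script.\n"
            "What would you improve?\n\n" + draft)

def revision_prompt(draft, critique):
    """What you'd paste back into the first LLM."""
    return ("Here is your earlier draft:\n" + draft +
            "\n\nA second reviewer said:\n" + critique +
            "\n\nRevise the draft accordingly.")

draft = "Opening: thank the team. Slide 1: Q3 numbers."
handoff = critique_prompt(draft)
revised_request = revision_prompt(draft, "The opening buries the lede.")
```

The design choice worth noticing: the revision request carries both the original draft and the critique, so the first model does not have to remember anything; all the context travels with the paste.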


Pretty crazy, right? With “conversation sculpting” you blast through the confines of the prompting window like Wolverine on a coffee kick, opening up a whole new world. 


And all of this is just the very tip of the “conversation sculpting” iceberg. 



Why does “conversation sculpting” work so well?



There are two reasons why “conversation sculpting” works so well with LLMs. 


  • LLMs thrive on context.

  • LLMs thrive on course correction. 


Ever hear the phrase “show, don’t tell”? LLMs work much better if you show them something versus telling them all about it. Let’s say, for instance, you want the LLM to capture your style of writing so that it can write more you-sounding emails. Under a prompt engineering paradigm, you “tell” it all about your writing. Under “conversation sculpting,” you upload five examples of your writing and have it derive the pattern itself. 
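The “show, don’t tell” approach can be sketched as assembling your samples into the prompt itself and letting the model infer the pattern. The function name and prompt wording below are illustrative assumptions, not a fixed recipe.

```python
# Sketch of "show, don't tell": include writing samples in the prompt
# instead of describing your style, and let the model infer the pattern.

def few_shot_style_prompt(samples, task):
    """Build one prompt: show the samples, then ask for the task."""
    parts = ["Below are emails I wrote. Match their tone and style."]
    for i, text in enumerate(samples, start=1):
        parts.append(f"--- Example {i} ---\n{text}")
    parts.append(f"--- Task ---\n{task}")
    return "\n\n".join(parts)

samples = [
    "Hey team, quick one: demo moved to 3pm. Same link.",
    "Morning! Two asks before Friday: budget sign-off and the deck.",
]
prompt = few_shot_style_prompt(samples, "Write an email rescheduling our 1:1.")
```

Notice that nothing in the prompt describes the style (“casual,” “brisk”); the examples carry that information, which is exactly the point of showing rather than telling.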


Essentially, LLMs were built to recognize complex textual patterns based on the surrounding context (without any need for you to explain that context). In fact, prompting the LLM with what you think are the hallmarks of your writing depends on how well you know your own writing. But even if you know your writing intimately, the LLM is ultimately interpreting your interpretation. 


As for course correction, this is simply another form of context. The LLM can only get you so close to the output you are looking for on that first and even second try. But the more feedback you provide about how the output fell short, the better chance it has of surprising you with the next one. 


Is the beginning of the “conversation sculpting” revolution near? 

We’ve probably hit the ceiling on what prompt engineering can offer us. And even LLMs themselves, as far as their ability to work with language goes, may be leveling off. Many of us are looking for another way to get the most out of these models. 


But what if the answer has been at our fingertips this whole time? As long as we are stuck in the prompt engineering paradigm, we aren’t the inferior automobile; we are the inferior driver. 


