I have nothing but respect for the engineers I learn from.
(After all, my Dad was an electrical engineer.)
And since I’ve studied “traditional” prompt engineering, I’m getting better at wrestling these weird machines into submission.
(My favourite on Substack by far is . I’m a paid subscriber, and his articles are worth EVERY CENT.)
But still, I’m not sure an engineer’s approach is the only (or even optimal) way to work with LLMs.
In an area where the output is immediately verifiable (e.g. coding), this might work. But when it isn’t…
Are great piano makers the best piano players?
If you’ve spent your life programming deterministic machines, you’d bring many of the same assumptions into your work with LLMs. But when that machine spits out random, chaotic and surprising results, you might not have the experience, skills or world view to deal with them.
You’d do what engineers do. Test, review and iterate to get a consistently top-quality result.
Even with a probabilistic LLM, there’s no doubt that this method improves the output…
To a point.
But beyond that point, when the AI occasionally returns a seemingly random output, how long do you keep improving the prompt to account for every possible result and edge case?
Is it possible to get a perfect output from a probabilistic machine?
Could rigorous prompt engineering discourage rigorous thinking?
How might an intense focus on improving our input prime our minds to accept a coherent output?
Here’s an environment where you get random, chaotic and surprising results:
A music studio.
And here’s what also spits out random, chaotic and surprising results:
Humans.
(Particularly those making music!)
From coaching musicians for 30 years, I’ve noticed that trying too hard to control a probabilistic process leads to unnecessary frustration, and worse…
Focus too hard on the perfect result, and you’ll get worse results.
(Particularly if you try to do it in ONE GO!)
I’ve coached many software engineers on their music-making process. While challenging, training them to expand out of their “deterministic input/output” mindset is one of the joys of my work.
Now, a similar shift away from the engineering frame may be required to make our work with LLMs more effective.
When a Result Serves the Process
In any job involving probabilities, improvement is a PROCESS.
Any given RESULT is just one of many outputs that help us continuously improve.
Instead of improving our process to get a better result…
(Where the process serves the result…)
We use the results to improve our process.
(Where the results serve the process.)
“But Mike, that’s what I do when I improve my prompts!”
Yes. That’s why I study and value prompt engineering.
But if you focus solely on refining a single prompt until it returns a consistently top-quality result, you miss most of the process.
Plus, the more work you put into a prompt, the more it raises the spectres of confirmation bias and the sunk cost fallacy.
You increase the likelihood you’ll mistake coherence for accuracy.
This is why I’m developing an alternative and complementary approach to Prompt Engineering…
It’s called Vibe Prompting.
When Vibe Prompting, we treat LLM inaccuracy as a feature, not a bug. We start with these base assumptions:
“An LLM only ever infers a coherent output.”
“An LLM only happens to be accurate.”
“An LLM is rarely 100% accurate.”
Now, an LLM’s output may often be completely accurate. But when working with a probabilistic tool, assuming this is dangerous.
So we assume the opposite. Because when you take the above assumptions seriously, you have no choice…
You must use your taste and judgement.
You must verify, update and curate every output.
You must put more effort into interacting with the tool than you put into the initial prompt:
Ask (the initial prompt).
Verify.
Curate.
Update.
Repeat.
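The loop above can be sketched in code. This is a hypothetical illustration, not a real API: the function names (`ask_llm`, `verify`, `curate`, `update_prompt`) are stand-ins for an actual LLM call and, crucially, for your own human taste and judgement. The point is the shape of the process: the prompt is just one step, and each pass feeds what you kept back into the next one.

```python
# A minimal sketch of the Vibe Prompting loop: Ask, Verify, Curate,
# Update, Repeat. All function bodies are illustrative stubs.

def ask_llm(prompt):
    # Stand-in for a real LLM call; here it just returns a canned draft.
    return f"draft for: {prompt}"

def verify(output):
    # Stand-in for human verification: checking the output against
    # facts, taste, and judgement. Trivially true in this sketch.
    return "draft" in output

def curate(output, kept):
    # Keep only the outputs that passed verification.
    kept.append(output)
    return kept

def update_prompt(prompt, kept):
    # Fold what you learned from the kept outputs into the next prompt.
    return f"{prompt} (informed by {len(kept)} kept outputs)"

def vibe_prompting(initial_prompt, rounds=3):
    prompt, kept = initial_prompt, []
    for _ in range(rounds):               # Repeat — in practice, the loop never truly ends
        output = ask_llm(prompt)          # Ask
        if verify(output):                # Verify
            kept = curate(output, kept)   # Curate
        prompt = update_prompt(prompt, kept)  # Update
    return prompt, kept
```

Notice that the initial prompt appears only once; everything after it is interaction. The `rounds` cutoff exists only so the sketch terminates.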
The first time through this Vibe Prompting loop, the initial prompt is only one of 4 steps. But even then, this is only the beginning…
Because this process never ends.
There isn’t a “final result”.
There is only recursive, ongoing improvement.
But the LLM isn’t improving - that’s the job of the AI researcher.
No single output improves - because that’s fixed in time.
When Vibe Prompting - YOU are improving.