The prompt engineering guide from the makers of Claude
All of the best tips from Anthropic's deep dive on writing better AI prompts
I recently came across a super interesting video: Anthropic, the makers of Claude, released a YouTube deep dive into “prompt engineering.”
It’s super long, so read on if you want most of the signal in a fraction of the time.
What is prompt engineering?
Prompt engineering is the art (and science?) of communicating with AI through effective prompts. Sure, it’s not “real engineering”, but it’s an important skill to learn. Heck, it’s even a real job title (yes, really).
Plenty of folks will complain, with some validity, about the use of the word “engineering.” Still, prompt engineering isn’t just thinking up a really good prompt. It’s a practice that requires experimentation, systems, iteration, and evaluating results. The result might just be a block of text, but there’s a real process that gets you there.
How do you get better at prompt engineering?
There are a few important things to get right.
First, you have to understand and respect the model’s strengths and weaknesses. Each model has things it’s better or worse at, and understanding where some of those boundaries lie will help you write better prompts. As an example, OpenAI’s o1 model is better at reasoning than its 4o model, so it makes sense to give it guidance for that reasoning.
Second, you have to iterate. This is especially productive if the task you’re trying to get AI to help with is something you’ll do often. Taking a few dozen tries to get a prompt that works well can pay dividends if you reuse it often.
Third, and probably most important, you have to communicate clearly. LLMs are literal and work well with precision, so you’ll have the best results if you’re annoyingly precise in your prompts. Don’t be afraid to explicitly say “Do this, not that.”
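For instance (a made-up example), instead of “summarize this report,” you might write: “Summarize this report in five bullet points for a non-technical executive. Do spell out acronyms the first time they appear; don’t include any figures that aren’t in the report itself.” Same task, far less room for the model to guess.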
Some prompt engineering strategies
If there are only a few things you take away from this, remember this list (there’s a short sketch of what it looks like in practice right after it):
Be precise
Explain the context of the task
Explicitly describe what role you want the model to play
Don’t be afraid to use examples
Set anti-goals in addition to your goals
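To make that concrete, here’s a minimal sketch of what those strategies can look like in an actual API call, using Anthropic’s Python SDK. The model name, the prompt contents, and the customer-support scenario are all assumptions I made up for illustration, not anything from the video.

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in your environment.
client = anthropic.Anthropic()

# Role + context + precision + anti-goals live in the system prompt.
system_prompt = (
    "You are a support lead at a small SaaS company (role). "
    "You're drafting replies to frustrated customers whose overnight data sync failed (context). "
    "Write exactly three sentences: apologize, explain the fix, give a timeline (precision). "
    "Do not promise refunds and do not name specific engineers (anti-goals)."
)

# An example of the tone you want (using examples).
example = (
    "Example of the tone I'm after:\n"
    "\"Hi Sam, I'm sorry the sync failed last night. We've already restored your data "
    "and patched the bug that caused it. You'll see everything back in your dashboard "
    "within the hour.\""
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # swap in whichever Claude model you're using
    max_tokens=300,
    system=system_prompt,
    messages=[
        {"role": "user", "content": example + "\n\nNow write a reply to a customer named Priya."}
    ],
)

print(message.content[0].text)
```

Nothing fancy, and the specifics are mine rather than Anthropic’s. The point is just that each item on the list maps to a concrete line of the prompt.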
How much does prompt engineering matter?
I’m split on this to be honest. A few years ago, designing good prompts was the only way to get desirable outputs. With each new model, prompt engineering seems to matter less and less.
Models are getting better at understanding what you want without requiring so much precision. Still, once in a while I see a really well thought out prompt that reminds me what a difference it can make.
A few weeks ago I wrote about using LLMs for system design 👇
In that newsletter edition, you’ll notice a ton of specific prompts to guide the LLM along a complex task. This clearly gives you a better result than just prompting “Design an architecture for xyz software product.”
So as a fully-formed opinion, I think that prompt engineering gives you a pretty undeniable edge with AI tools. What do you think?