
How to Improve Any Prompt in Less Than 5 Minutes (Chat UI and Code)

Turn half-baked sentences into expert-level prompts

All images are by the author via phone, Midjourney, and Canva.

I get paid to write prompts and my friends know it. That’s why, when one of them asked "Can’t we just use ChatGPT to compute the final vote?" all eyes turned to the bald dude in the room.

There were eight of us and we’d just finished tasting six different Galettes des Rois.

The "Kings’ Cake" is a traditional French pastry, typically enjoyed in January. It’s a celebration rooted in both Christianity and the Roman festival of Saturnalia, where social norms were briefly inverted.

For us, it was just another excuse to eat fancy cakes, and we decided to rank them.

Everyone pulled up the Notes app on their phone, but no one agreed on a clear template to write down the votes.

  • Some used a dynamic list, moving the cakes up and down with every new slice.
  • Others listed them without order, adding a number representing the rank before each cake. Sometimes after.
  • The only constant? Spelling mistakes, which is only natural since French bakeries looove wordplay.
Four of the galettes we devoured.

I received eight separate lists in a text message. All I had to do was write an elegant prompt to turn messy data into a classy computation. It was time to shine. But…

Lazy.

That’s how I felt.

Super ultra mega lazy.

I didn’t want to disconnect from the group for 15+ minutes to write a sophisticated prompt. So I wrote a short one in five minutes and ran it. The first results landed squarely in nonsense territory.

I added a few instructions and pressed "Send." Still no luck. I discussed the data with two friends and tried again. The output drew synchronized frowns.

Over 20 more minutes went by before I started waving my phone. "I got it," I yelled. "I freaking got it!"

On the way home, one question kept playing inside my head. How can you write better prompts when you’re feeling lazy?

TL;DR:

Make the model write a better prompt for you. Use a specific prefix:

Act as an expert Prompt Engineer.
I'll give you a messy prompt.
Reason step by step to improve it.
Write the final prompt as an elegant template with clear sections.
Use lists, placeholders, and examples.
##
Prompt:"""<Insert your prompt here, and yes please, use the triple quotes.>"""

But how and why does this work? And how can you implement it into your code?

The subtle art of meta-prompting

Prompt Engineering is a fancy way to say "Write better and better instructions for AI until it does exactly what you want." You try different words until you land on a formula that generates the desired response.

It’s an empirical science based on trial and error. AI practitioners often share discoveries from their quest to tame Large Language Models and unlock their "latent capabilities."

At some point, it became fashionable to swing prompting techniques left and right. It made you look savvy in the rapidly growing AI bubble.

All the hype made people forget why Prompt Engineering existed in the first place – getting AI to generate relevant responses.

The very task of "teaching AI how to produce high-quality responses" has been the top priority of the engineers who fine-tuned AI models.

In the fine-tuning phase, you feed LLMs pairs of high-quality questions and answers. The more Q&As you feed your model, the more it learns to answer questions the way an assistant would.
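To make this concrete, here’s what a single training pair might look like in OpenAI’s chat fine-tuning format, expressed as a Python dictionary. The content of the pair is made up for illustration:

# One entry of a fine-tuning dataset: a question paired with a
# high-quality answer. Thousands of these teach the model to
# respond like an assistant. The content below is illustrative.
example_pair = {
    "messages": [
        {"role": "user", "content": "What is a Galette des Rois?"},
        {"role": "assistant", "content": "A traditional French puff-pastry cake, usually filled with frangipane and eaten in January."},
    ]
}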

As a side effect, LLMs also learned to ask better questions.

The idea of meta-prompting is to use that very skill to your advantage. You make your model ask itself a better question than the one you’d ask yourself.

And just like with people, you get better answers if you ask better questions.

Meta-prompts aren’t always straightforward, however. They’re usually long (my favorite meta-prompt is over 800 words) and annoyingly precise – and even so, they often "break" after a few interactions.

That’s not what you want when you’re in the heat of action, wanting to complete a task as soon as possible. Instead, you want a quick fix to get the job done (or to impress your friends).

To build the quick fix we’ll use three prompting techniques to create an efficient meta-prompt:

  1. Role Prompting: you give your LLM a role, which indirectly specifies the context, objective, and other parameters like the style.
  2. Chain-of-Thought prompting (CoT): also known as "Reason step by step." This is the most powerful sentence you can use when prompting an LLM. When LLMs "reason step by step," they use tokens to "think" through stochastic predictions, which increases accuracy.
  3. Placeholders: this is a way to both write and submit flexible prompts. Placeholders allow you to play with different inputs and pick from a set of options (see the sketch after this list).
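Here’s a minimal sketch of the placeholder technique: a prompt template with angle-bracket slots you fill before sending. The template text and variable names are illustrative, not from a specific library:

# A reusable prompt template: angle-bracket placeholders mark the slots to fill
meal_prompt_template = (
    "Act as a professional nutritionist.\n"
    "Create a <NUMBER_OF_DAYS>-day meal plan for a <DIET> diet.\n"
    "Format the answer as a table: Day, Meal, Recipe."
)

# Fill the placeholders with concrete values before sending the prompt
meal_prompt = (
    meal_prompt_template
    .replace("<NUMBER_OF_DAYS>", "3")
    .replace("<DIET>", "flexitarian")
)
print(meal_prompt)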

The 5-minute fix

The main idea behind meta-prompting is: you are better at assessing the quality of a prompt than you are at writing one.

It’s like with food. You can evaluate the taste of a world-class dessert and even suggest potential improvements. But it’s much, much harder to bake one yourself.

With meta-prompting, you make the model suggest an improved version of your initial prompt. From there you can edit, replace with specific information, and ask the model for further modifications.

The output of the "meta-prompt + your initial prompt" is a better prompt, but not a perfect one.

Still, meta-prompting is much faster than writing an expert-level prompt from scratch because the model does all the heavy lifting for you. All you have left to do is to pick from a "menu."

Here’s the compressed meta-prompt:

Act as an expert Prompt Engineer.
I'll give you a messy prompt.
Reason step by step to improve it.
Write the final prompt as an elegant template with clear sections.
Use lists, placeholders, and examples.
##
Prompt:"""<Insert your prompt here, and yes please, use the triple quotes.>"""

The template works with ChatGPT-3.5, ChatGPT-4, and HuggingChat (which uses the Mixtral 8x7B model). If you’re using other models, you can apply the same logic with different formulations – usually longer ones.

I tested 62 variants of the meta-prompt with four criteria in mind:

  • Short;
  • Easy to remember;
  • Easy to read and edit;
  • Flexible, both in the output format and the use cases.
Testing one of the 62 variants of the meta-prompt. (Screenshots by the author)

You can use the same chat tab to run both the meta-prompt and the upgraded one. Another option is to copy-paste the upgraded prompt into a new chat window. Opt for the second option if you need multiple back-and-forths to get a decent upgrade.

Depending on the speed of your model, the whole process takes between two and five minutes.

Meta-prompting in your code

Let’s say you’re building a chatbot.

Every time a user submits a query, you can improve it using a few lines of code and a meta-prompt. You then reinject the improved prompt into your model, and voilà.

You’ve just transformed messy inputs from lazy users (like myself) into high-quality prompts – and high-quality prompts generate high-quality responses.

Step#1 is to adjust the meta-prompt and store it in a variable. The final instruction in the meta-prompt is a trick we’ll use in a later step.

meta_prompt = """Act as an expert Prompt Engineer.
I'll give you an initial prompt.
Reason step by step to improve it.
Write the final prompt as an elegant template with clear sections.
Make sure you produce a ready-to-use prompt.
The final prompt must start with '###Improved Prompt###'"""
# The last instruction pins the output to a stable format we can extract in a later step

Step#2 is to set up your LLM client – we’ll use OpenAI’s API as an example. Import the package and set up your secret key.

from openai import OpenAI  # tested with openai 1.3.6

SK = "your secret key here"

# Create the API client with your secret key
client = OpenAI(api_key=SK)

Step#3 is to store your user’s initial input inside a variable.

user_initial_input = "meal ideas for 3 days, flexitarian."  # simplified example

Step#4 is to call one of the GPT models to improve the initial user prompt. From there, we’ll store the output in a new variable called "intermediary_prompt."

completion = client.chat.completions.create(
    model="gpt-4",
    max_tokens=1000,
    messages=[
        {"role": "system", "content": meta_prompt},
        {"role": "user", "content": user_initial_input},
    ],
)
intermediary_prompt = completion.choices[0].message.content
Example of an intermediary output that has both the "Chain of Thought" and the "Improved Prompt."

Step#5 is to remember that the intermediary prompt contains the "step-by-step reasoning." You need a quick edit to extract the improved prompt.

# Find the starting index of the marker we requested in the meta-prompt
start_index = intermediary_prompt.find("###Improved Prompt###")

# Check if the marker is found
if start_index != -1:
    # Extract the improved prompt, from the marker to the end
    new_prompt = intermediary_prompt[start_index:]
else:
    # Fall back to the user's original input if the marker is missing
    new_prompt = user_initial_input

Step#6 is to call a GPT model again, this time with the new, improved prompt and the system prompt of your choice.

completion = client.chat.completions.create(
    model="gpt-4",
    max_tokens=1000,
    messages=[
        {"role": "system", "content": "You're a super cool assistant that talks like Jesse Pinkman from Breaking Bad"},
        {"role": "user", "content": new_prompt},
    ],
)

print(completion.choices[0].message.content)
Example of final output.
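If you want the whole flow in one place, here’s a minimal end-to-end sketch chaining the six steps. The function name improve_and_answer and the final system prompt are mine, added for illustration; the API calls mirror the snippets above:

from openai import OpenAI

client = OpenAI(api_key="your secret key here")

META_PROMPT = """Act as an expert Prompt Engineer.
I'll give you an initial prompt.
Reason step by step to improve it.
Write the final prompt as an elegant template with clear sections.
Make sure you produce a ready-to-use prompt.
The final prompt must start with '###Improved Prompt###'"""

def improve_and_answer(user_input: str, system_prompt: str) -> str:
    # Steps 1-4: ask the model to rewrite the messy user input
    completion = client.chat.completions.create(
        model="gpt-4",
        max_tokens=1000,
        messages=[
            {"role": "system", "content": META_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    intermediary = completion.choices[0].message.content

    # Step 5: drop the step-by-step reasoning, keep only the improved prompt
    marker = intermediary.find("###Improved Prompt###")
    new_prompt = intermediary[marker:] if marker != -1 else user_input

    # Step 6: answer the improved prompt with your own system prompt
    completion = client.chat.completions.create(
        model="gpt-4",
        max_tokens=1000,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": new_prompt},
        ],
    )
    return completion.choices[0].message.content

print(improve_and_answer(
    "meal ideas for 3 days, flexitarian.",
    "You're a helpful cooking assistant.",
))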

Just talk to AI

Meta-prompting is not only a technique you can apply in under 5 minutes to improve any prompt. It’s also a mental framework.

The idea is to build a habit of interacting with LLMs to achieve all kinds of tasks. You don’t always need advanced techniques. Sometimes, all you have to do is "just talk to AI."

When my friends wanted to rank the desserts, none of them thought about using ChatGPT – and there were three software engineers in the group. The one who yelled "Let’s use AI" happens to be a prompt engineering fan.

Just like him, you can develop the "reflex" of using AI. Whether to rank desserts, analyze data, or copy-edit your LinkedIn posts, an LLM can make your life easier – especially if you learn how to talk to it.

If you have an extra 5 minutes

I’ve put together a meta-prompting GPT on the GPT store. It doesn’t use the 5-minute version discussed above, however.

Instead, it acts like an expert Prompt Engineer that can help you improve your prompts using State-of-The-Art techniques.

It’s called Bernard and it’s your Prompt Engineering Sensei. (Bernard has an 800-word prompt and 20 attached files).

ChatGPT – Bernard The Prompt Master

Update: here’s an open source version of Bernard on HuggingChat:

Bernard – The Prompt Engineering Sensei – HuggingChat

Keep in touch?

If you don’t want to miss my latest posts, subscribe here to get email notifications.

I’m also active on Linkedin and X and reply to every single message.

For Prompt Engineering inquiries, write me at: [email protected]

