Constructing Prompts for the Command Model

Techniques for constructing prompts for the Command model.

Introduction

When working with large language models (LLMs), the prompt is the key to getting the desired response. A well-designed prompt will result in useful and accurate responses from a model and will considerably improve your experience interacting with it.

In the previous article, we took a horizontal approach and covered the breadth of the prompting topic by looking at some use case patterns in generative AI, such as writing, question answering, transforming, summarizing, and more.

In this article, we’ll take the vertical direction and dive deeper into how to construct a prompt. In that sense, this article and the previous one are complementary, which means that most of the ideas in this article can be mixed with those in the previous article.

Prompts can be as simple as a one-liner, or they can be as complex as multiple layers of specific information. The more specific your command is, the more likely you will get exactly what you need from the model. We’ll look at some tips and ideas for constructing the commands in your prompt to help you get to your intended outcome. We’ll focus on the broad patterns without going into the long-tail list of techniques and tricks.

We’ll be using the Command model, Cohere’s instruction-following model that enables generative AI use cases in business productivity, marketing, creative writing, and more. This blog post comes with a Google Colaboratory notebook that lets you get hands-on with the code.

Setting Up

First, let’s install the Cohere Python SDK, get the Cohere API key, and set up the client.

! pip install cohere
import cohere
co = cohere.Client("COHERE_API_KEY") # Your Cohere API key

Let’s also define a function that takes a prompt and a temperature value and then calls the Generate endpoint, which is how we access the Command model. Here, we select the model type to be `command`.

We set a default temperature value of 0, which nudges the response to be more predictable and less random. Throughout this article, you’ll see different temperature values being used in different situations. Increasing the temperature value tells the model to generate less predictable responses and instead be more “creative.”

This function returns the text response generated by the model.

def generate_text(prompt, temp=0):
  response = co.generate(
    model='command',
    prompt=prompt,
    max_tokens=200,
    temperature=temp)
  return response.generations[0].text

Instruction

Adding basic instructions to a prompt

While prompts can morph into something very lengthy and complex, they don’t have to be that way all the time. At its core, prompting a Command model is about sending an instruction to a text generation model and getting a response back. Hence, the smallest unit of a perfectly complete prompt is a short line of instruction to the model.

Let’s say we want to generate a product description for a wireless headphone. Here’s an example prompt, where we create a variable for the user to input some text and merge that into the main prompt.

user_input = "a wireless headphone product named the CO-1T"
prompt = f"""Write a creative product description for {user_input}"""

response = generate_text(prompt, temp=0.5)
print(response)

The model returns the following sample response, which does the job we asked for.

The CO-1T is a sleek and stylish wireless headphone product that offers a comfortable and secure fit for all-day wear. These headphones feature a noise-cancelling microphone and easy-to-use controls, making them perfect for use during your daily commute or while you're at the gym. The CO-1T comes with a charging cable and a carrying case, so you can take them with you on the go.

The CO-1T features a powerful and clear sound quality, making it easy to hear your favorite music and podcasts. The noise-cancelling microphone ensures that you can take calls without any distractions, while the easy-to-use controls allow you to adjust the volume and playback of your music.

Specifics

Adding specific details to a prompt

A simple and short prompt can get you started, but in most cases, you’ll need to add specificity to your instructions. A generic prompt will return a generic response, and in most cases, that’s not what we want. In the same way that specific instructions help humans do their jobs well, a model needs to be supplied with specific details to guide its response.

Going back to the previous prompt, the generated product description was great, but what if we wanted it to include specific things, such as its features, who it is designed for, and so on? We can adjust the prompt to take more inputs from the user, like so:

user_input_product = "a wireless headphone product named the CO-1T"
user_input_keywords = '"bluetooth", "wireless", "fast charging"'
user_input_customer = "a software developer who works in noisy offices"
user_input_describe = "benefits of this product"

prompt = f"""Write a creative product description for {user_input_product}, \
with the keywords {user_input_keywords} for {user_input_customer}, and describe {user_input_describe}."""

response = generate_text(prompt, temp=0.5)
print(response)

In the example above, we pack the additional details of the prompt in a single paragraph. Alternatively, we can also compose it to be more structured, like so:

user_input_product = "a wireless headphone product named the CO-1T"
user_input_keywords = '"bluetooth", "wireless", "fast charging"'
user_input_customer = "a software developer who works in noisy offices"
user_input_describe = "benefits of this product"

prompt = f"""Write a creative product description for {user_input_product}.
Keywords: {user_input_keywords}
Audience: {user_input_customer}
Describe: {user_input_describe}"""

response = generate_text(prompt, temp=0.5)
print(response)

And here’s an example response. This time, the product description is tailored more specifically to our desired target customer, includes the key features that we specified, and sprinkles benefit statements throughout — all coming from the instruction we added to the prompt.

Do you hate noisy work environments? Well, we got a great solution for you! The CO-1T is the perfect wireless headphone for software developers who work in loud and disruptive offices. With its Bluetooth connectivity and noise-canceling features, you can stay focused on your work without any distractions.  Our product also has fast charging, so you won't have to worry about battery life. Instead, you can quickly get back to work in no time. So what are you waiting for? Become the most productive person in your office with the help of the CO-1T.

There are many other angles to add specificity to a prompt. Here are some examples:

  • Style: Telling the model to provide a response that follows a certain style or framework. For example, instead of asking the model to “Generate an ad copy for a wireless headphone product” in the generic sense, we ask it to follow a certain style, such as “Generate an ad copy for a wireless headphone product, following the AIDA Framework – Attention, Interest, Desire, Action.”
  • Tone: Adding a line mentioning how the tone of a piece of text should be, such as professional, inspirational, fun, serious, and so on. For example, “Tone: casual”
  • Persona: Telling the model to act like a certain persona helps to add originality and quality to the response. For example, “You are a world-class content marketer. Write a product description for…”
  • Length: Telling the model to generate text of a specific length, be it in words, paragraphs, or other units. This helps guide the model to be verbose, concise, or somewhere in between. For example, “Write in three paragraphs the benefits of …”
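To make these angles concrete, here is a small illustrative helper (not part of the Cohere SDK; the function name and structure are our own) that assembles the persona, style, tone, and length angles into a single prompt string:

```python
# Illustrative helper: combine the specificity angles above
# (persona, style, tone, length) into one structured prompt string.
def build_prompt(task, style=None, tone=None, persona=None, length=None):
    lines = []
    if persona:
        lines.append(f"You are {persona}.")
    lines.append(task)
    if style:
        lines.append(f"Style: {style}")
    if tone:
        lines.append(f"Tone: {tone}")
    if length:
        lines.append(f"Length: {length}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Write a product description for the CO-1T wireless headphone.",
    style="AIDA Framework - Attention, Interest, Desire, Action",
    tone="casual",
    persona="a world-class content marketer",
    length="three paragraphs",
)
print(prompt)
```

The resulting string can then be passed to `generate_text` like any of the prompts in this article.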

Context

Adding contextual information to a prompt

While LLMs excel in text generation tasks, they struggle in context-aware scenarios. Here’s an example. If you were to ask the model for the top qualities to look for in wireless headphones, it would duly generate a solid list of points. But if you were to ask it for the top qualities of the CO-1T headphone, it would not be able to provide an accurate response because it doesn’t know about the product (CO-1T is a hypothetical product we made up for illustration purposes).

In real applications, being able to add context to a prompt is key because this is what enables personalized generative AI for a team or company. It makes many use cases possible, such as intelligent assistants, customer support, and productivity tools, that retrieve the right information from a wide range of sources and add it to the prompt.

This is a whole topic on its own, but to provide some idea, this demo shows an example of information retrieval in action. In this article though, we’ll assume that the right information is already retrieved and added to the prompt.
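To give a rough idea of what retrieval can look like, here is a deliberately naive sketch. A real system would use embeddings and a vector store; the toy scorer and the documents below are made up for illustration only:

```python
# Toy retrieval sketch: pick the document with the largest word overlap
# with the query, then prepend it to the prompt as context.
documents = [
    "The CO-1T is a noise-cancelling Bluetooth headphone with fast charging.",
    "Our returns policy allows refunds within 30 days of purchase.",
]

def retrieve(query, docs):
    # Score each document by how many query words it shares (very naive).
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

query = "What are the key features of the CO-1T wireless headphone"
context = retrieve(query, documents)
prompt = f"""{context}
Given the information above, answer this question: {query}"""
```

The assembled `prompt` follows the same context-plus-question pattern used in the example below.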

Here’s an example where we ask the model to list the features of the CO-1T wireless headphone without any additional context:

user_input ="What are the key features of the CO-1T wireless headphone"
prompt = user_input

response = generate_text(prompt, temp=0)
print(response)

This generates a response that the model makes up since it doesn’t have any information to refer to.

The CO-1T wireless headphone is a high-quality, comfortable, and durable headphone that is designed for use with a variety of devices. It features a sleek and modern design, a comfortable and secure fit, and a high-quality sound. The CO-1T is also equipped with a variety of features, including a built-in microphone, a multi-function button, and a rechargeable battery.

And here’s the same request to the model, this time with the product description of the product added as context.

context = """Think back to the last time you were working without any distractions in the office. That's right...I bet it's been a while. \
With the newly improved CO-1T noise-cancelling Bluetooth headphones, you can work in peace all day. Designed in partnership with \
software developers who work around the mayhem of tech startups, these headphones are finally the break you've been waiting for. With \
fast charging capacity and wireless Bluetooth connectivity, the CO-1T is the easy breezy way to get through your day without being \
overwhelmed by the chaos of the world."""

user_input = "What are the key features of the CO-1T wireless headphone"

prompt = f"""{context}
Given the information above, answer this question: {user_input}"""

response = generate_text(prompt, temp=0)
print(response)

Now, the model accurately lists the features of the product.

The answer is The CO-1T wireless headphones are designed to be noise-canceling and Bluetooth-enabled. They are also designed to be fast charging and have wireless Bluetooth connectivity.

Format

Adding output format requirements to a prompt

So far, we saw how to get the model to generate responses that follow certain styles or include specific information. But we can also get the model to generate responses in a certain format. Let’s look at a couple of them: markdown tables and JSON strings.

Here, the task is to extract information from a list of invoices. Instead of providing the information in plain text, we can prompt the model to generate a table that contains all the information required.

prompt="""Turn the following information into a table with columns Invoice Number, Merchant Name, and Account Number.
Bank Invoice: INVOICE #0521 MERCHANT ALLBIRDS ACC XXX3846
Bank Invoice: INVOICE #6781 MERCHANT SHOPPERS ACC XXX9877
Bank Invoice: INVOICE #0777 MERCHANT CN TOWER ACC XXX3846
"""

response = generate_text(prompt, temp=0)
print(response)

The response will come in the form of a markdown table.

| Invoice Number | Merchant Name | Account Number |
|-----------|------------|-----------|
| 0521 | Allbirds | XXX3846 |
| 6781 | Shoppers | XXX9877 |
| 0777 | CN Tower | XXX3846 |
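If you need the table programmatically, the markdown can be parsed back into rows with a few lines of string handling. This sketch assumes the response follows the pipe-delimited format shown above (model output is not guaranteed to, so production code should validate):

```python
# Parse a markdown table (as returned above) into a list of row lists.
table = """| Invoice Number | Merchant Name | Account Number |
|-----------|------------|-----------|
| 0521 | Allbirds | XXX3846 |
| 6781 | Shoppers | XXX9877 |
| 0777 | CN Tower | XXX3846 |"""

rows = []
for line in table.splitlines()[2:]:  # skip the header and separator lines
    cells = [cell.strip() for cell in line.strip("|").split("|")]
    rows.append(cells)

print(rows[0])  # ['0521', 'Allbirds', 'XXX3846']
```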

Another useful format is JSON, for which we can modify the prompt as follows.

prompt="""Turn the following information into a JSON string with the following keys: Invoice Number, Merchant Name, and Account Number.
Bank Invoice: INVOICE #0521 MERCHANT ALLBIRDS ACC XXX3846
Bank Invoice: INVOICE #6781 MERCHANT SHOPPERS ACC XXX9877
Bank Invoice: INVOICE #0777 MERCHANT CN TOWER ACC XXX3846
"""

response = generate_text(prompt, temp=0)
print(response)

This returns the following response.

 [
  {
    "Invoice Number": "0521",
    "Merchant Name": "Allbirds",
    "Account Number": "XXXX3846"
  },
  {
    "Invoice Number": "6781",
    "Merchant Name": "Shoppers",
    "Account Number": "XXXX9877"
  },
  {
    "Invoice Number": "0777",
    "Merchant Name": "CN Tower",
    "Account Number": "XXXX3846"
  }
]
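A JSON-formatted response can be loaded directly into Python objects with the standard library. Note that model output is not guaranteed to be valid JSON, so in practice you may want to wrap the parse in a try/except; the string below reproduces the sample response above:

```python
import json

# Parse the JSON string returned by the model into Python objects.
response_text = """[
  {"Invoice Number": "0521", "Merchant Name": "Allbirds", "Account Number": "XXXX3846"},
  {"Invoice Number": "6781", "Merchant Name": "Shoppers", "Account Number": "XXXX9877"},
  {"Invoice Number": "0777", "Merchant Name": "CN Tower", "Account Number": "XXXX3846"}
]"""

invoices = json.loads(response_text)
print(invoices[0]["Merchant Name"])  # Allbirds
```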

Examples

Adding examples to a prompt

All our prompts so far use what is called zero-shot prompting, which means that we are providing an instruction without any examples. But in many cases, it is extremely helpful to provide examples to the model to guide its response. This is called few-shot prompting.

Few-shot prompting is especially useful when we want the model response to follow a certain style or format. Also, sometimes it is hard to explain what you want in an instruction, and easier to show examples.

Let’s use an example task, where a model should take a request coming from a human and rephrase it into the most accurate utterance that an AI virtual assistant should use. The example data is taken from this paper (Einolghozati, et al. 2020).

We’ll use this example request: “Send a message to Alison to ask if she can pick me up tonight to go to the concert together”. Given that request, we should expect the rephrased utterance to be something like: “Can you pick me up tonight to go to the concert together?”

First, let’s generate a response without giving the model an example. Here’s the prompt:

prompt="""Turn the following message to a virtual assistant into the correct action:
Send a message to Alison to ask if she can pick me up tonight to go to the concert together"""

response = generate_text(prompt, temp=0)
print(response)

The response we get is not wrong, but it doesn’t follow the style that we need, which is a simple one-line rephrasing of the original request. Instead, it generates an email!

Here is the message to Alison:

Hey Alison, I hope you're doing well! I was wondering if you could pick me up tonight to go to the concert together. I would really appreciate it, and I think it would be a lot of fun. Let me know if you're able to do this, and I'll make sure to be ready on time.

Thanks,
[Your Name]

Now, let’s modify the prompt by adding a few examples of how we expect the output to be.

user_input = "Send a message to Alison to ask if she can pick me up tonight to go to the concert together"

prompt=f"""Turn the following message to a virtual assistant into the correct action:

Message: Ask my aunt if she can go to the JDRF Walk with me October 6th
Action: can you go to the jdrf walk with me october 6th

Message: Ask Eliza what should I bring to the wedding tomorrow
Action: what should I bring to the wedding tomorrow

Message: Send message to supervisor that I am sick and will not be in today
Action: I am sick and will not be in today

Message: {user_input}"""

response = generate_text(prompt, temp=0)
print(response)

This time, the style of the response is exactly how we want it.

Can you pick me up tonight to go to the concert together?
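Rather than hand-writing the prompt each time, the few-shot prompt above can be assembled programmatically from a list of example pairs. The helper below is our own illustrative sketch (not a Cohere SDK function), shown with two of the example pairs:

```python
# Illustrative helper: assemble a few-shot prompt from
# (message, action) example pairs, then append the new message.
examples = [
    ("Ask my aunt if she can go to the JDRF Walk with me October 6th",
     "can you go to the jdrf walk with me october 6th"),
    ("Ask Eliza what should I bring to the wedding tomorrow",
     "what should I bring to the wedding tomorrow"),
]

def build_few_shot_prompt(instruction, examples, new_message):
    shots = "\n\n".join(f"Message: {m}\nAction: {a}" for m, a in examples)
    return f"{instruction}\n\n{shots}\n\nMessage: {new_message}"

prompt = build_few_shot_prompt(
    "Turn the following message to a virtual assistant into the correct action:",
    examples,
    "Send a message to Alison to ask if she can pick me up tonight",
)
```

New example pairs can be appended to the list as you collect more representative requests.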

Chain of Thought

One specific way to provide examples in a prompt is to show responses that include a reasoning step. This way, we are asking the model to “think” first rather than going straight to the response. In tasks involving mathematical questions, for example, there is a huge difference between directly giving the answer and adding a reasoning step in between.

This concept is called chain of thought prompting, introduced by Wei et al. Let’s look at an example from the paper which illustrates this idea.

First let’s look at a prompt without a chain of thought. It contains one example of a question followed by the answer, without any intermediate calculation step. It also contains the new question we want to answer.

prompt=f"""
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. \
How many tennis balls does he have now?
A: The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

response = generate_text(prompt, temp=0)
print(response)

We get the following response, which is an incorrect answer. And notice that the response is direct, in the same style as the example given.

The answer is 29.

Now, let’s repeat that with a chain of thought. This time, the example answer contains a reasoning step, describing the calculation logic, before giving the final answer.

prompt=f"""
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. \
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. \
The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

response = generate_text(prompt, temp=0)
print(response)

And we get the correct answer this time, with the response following the style of the example given.

The cafeteria started with 23 apples. They used 20 to make lunch, so they have 23 - 20 = 3 apples. They bought 6 more apples, so they have 3 + 6 = 9 apples. The answer is 9.
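One practical consequence of chain-of-thought responses is that the final answer is buried at the end of the reasoning. A simple way to extract it, assuming the response ends with “The answer is N.” as in the examples above, is a small regular expression (this parsing step is our own addition, not from the paper):

```python
import re

# Extract the final numeric answer from a chain-of-thought response.
response_text = ("The cafeteria started with 23 apples. They used 20 to make "
                 "lunch, so they have 23 - 20 = 3 apples. They bought 6 more "
                 "apples, so they have 3 + 6 = 9 apples. The answer is 9.")

match = re.search(r"The answer is (\d+)", response_text)
answer = int(match.group(1)) if match else None
print(answer)  # 9
```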

Steps

Adding generation steps to a prompt

To steer the model toward generating higher-quality responses, it can be helpful to add instructions for the model to generate intermediate steps before generating the final output. The information generated during these steps helps enrich the model’s context before it generates the final response.

There could be another scenario where we specifically need the response to be detailed and verbose. In this case, asking the model to generate responses in steps comes in handy.

Let’s use an example of generating startup ideas. We can get the model to directly generate an idea for a given industry, like so:

user_input = "education"

prompt = f"""Generate a startup idea for this industry: {user_input}"""

response = generate_text(prompt, temp=0.5)
print(response)

This generates the following response, which is reasonable, but perhaps not rich enough in information.

A mobile app that connects students with tutors for on-demand homework help.

Alternatively, we can ask the model to generate information in steps, such as describing the problem to be solved and the target audience experiencing this problem.

user_input = "education"

prompt = f"""Generate a startup idea for this industry: {user_input}
First, describe the problem to be solved.
Next, describe the target audience of this startup idea.
Next, describe the startup idea and how it solves the problem for the target audience.
Next, provide a name for the given startup.

Use the following format:
Industry: <the given industry>
The Problem: <the given problem>
Audience: <the given target audience>
Startup Idea: <the given idea>
Startup Name: <the given name>"""

response = generate_text(prompt, temp=0.9)
print(response)

This provides a richer description of the startup idea.

Industry: Education
The Problem: Students often need to learn at their own pace and require individual attention to succeed in their studies.
Audience: Students who need a more personalized learning experience.
Startup Idea: An online platform that connects students with tutors for personalized one-on-one learning.
Startup Name: tutormate
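Because we asked for a fixed “Field: value” format, the response is easy to turn into structured data. This sketch assumes the model followed the requested format (the string below reproduces the sample output above, lightly shortened):

```python
# Parse the step-by-step "Field: value" response into a dictionary.
response_text = """Industry: Education
The Problem: Students often need to learn at their own pace.
Audience: Students who need a more personalized learning experience.
Startup Idea: An online platform that connects students with tutors.
Startup Name: tutormate"""

idea = {}
for line in response_text.splitlines():
    key, _, value = line.partition(": ")
    idea[key] = value

print(idea["Startup Name"])  # tutormate
```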

Prefix

Adding a prefix to a prompt

To ensure that the response follows a consistent format or style, sometimes we need to add a prefix or leading words to help guide the response. This is especially handy when the temperature value is high. In this scenario, we want the response to be creative but, at the same time, follow a certain format.

Let’s say we are generating the characteristics of football players for a given position, with one separate paragraph per characteristic. A prompt without any guiding prefix would look something like this:

user_input_position = "modern centre forward"

prompt = f"""Describe the ideal {user_input_position}. In particular, describe the following characteristics: \
pace, skill, and awareness."""

response = generate_text(prompt, temp=0.9)
print(response)

And the response is a paragraph combining all the characteristics — not what we wanted.

An ideal modern centre forward would have a combination of pace, skill, and awareness. They would be fast and agile, with the ability to run and move with the ball at speed. They would also be skilled with their feet, with the ability to dribble and pass the ball with ease. Finally, they would be aware of their surroundings and the position of their teammates and opponents, with the ability to make quick and intelligent decisions on the fly.

But if we just add a prefix of the first characteristic (“Pace”) at the end of the prompt, it gives the model a signal as to what the output should look like.

user_input_position = "modern centre forward"

prompt = f"""Describe the ideal {user_input_position}. In particular, describe the following characteristics: \
pace, skill, and awareness.

Pace:"""

response = generate_text(prompt, temp=0.9)
print(response)

Here’s an example response, which is much closer to what we are looking for:

The ideal modern centre forward will have excellent pace, both in terms of speed and acceleration. This will allow them to take advantage of gaps in the opposition defence and create chances for their team.

Skill:

The ideal modern centre forward will also have a high level of skill, including dribbling, passing, and shooting. This will allow them to create chances for themselves and their team, as well as to score goals.

Awareness:

The ideal modern centre forward will also have a high level of awareness, including an understanding of the game and their position within it. This will allow them to make intelligent decisions on the pitch, both in terms of their own play and in terms of the overall strategy of their team.

Zero-Shot Chain of Thought

One specific way of using a prefix is to add one that encourages the model to perform a reasoning step before generating the final answer.

Earlier, we saw that chain-of-thought prompting helps guide the model to perform reasoning by way of showing examples. But Kojima et al. show that there’s a way to apply chain-of-thought prompting without providing examples. This technique is called zero-shot chain of thought. In contrast, the technique with examples discussed earlier is known as few-shot chain of thought.

The paper proposes adding a prefix that nudges the model to perform a reasoning step, specifically the phrase “Let’s think step by step”.

Here’s an example taken from the paper. First, we look at a prompt without the “Let’s think step by step” prefix.

prompt=f"""Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A: The answer (arabic numerals) is"""

response = generate_text(prompt, temp=0)
print(response)

It gives an incorrect answer.

8

Now, let’s add the “Let’s think step by step” prefix to the prompt.

prompt=f"""Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A: Let’s think step by step."""

response = generate_text(prompt, temp=0)
print(response)

And this time, the response contains a reasoning step before giving the final answer, which is the correct answer.

There are 16 balls in total. Half of the balls are golf balls, so there are 8 golf balls. Half of the golf balls are blue, so there are 4 blue golf balls.
So, the answer is 4.

Conclusion

In this article, we looked at some techniques for constructing prompts for the Command model. A prompt can be as simple as a single line of instruction, though the more specific the prompt is, the higher the quality and accuracy you can expect from the response. Each building block added to a prompt provides a different type of lever to enhance the quality of the response.

To get started with the Command model, you can either try it on the Playground or access it via the API.
