
IBM Developer

Article

Prompt Engineering Fundamentals
From zero to hero with prompt templates for different types of prompts
By Bhavishya Pandit


Imagine having a super smart helper that can write perfect sentences, create different kinds of writing, translate languages, and answer your questions really well.

These helpers are called large language models (LLMs).


LLMs are like giant libraries filled with information from
the internet. But instead of looking for a specific book, you
just give them a question or a task.

This is where prompt engineering comes in. It's like giving the LLM the right instructions to get the best results. By using prompt engineering, you can make the LLM do a lot of work for you and get really good answers.

Crafting the perfect prompt
Now, let's get down to business! One of the quickest and best ways to learn how to craft the perfect prompt is to consider the common mistakes that people usually make while writing prompts:

Being too vague: Don't expect an LLM to read your mind. Vague prompts lead to vague responses.
Grammatical goofs: Typos and grammatical errors can confuse the LLM and lead to nonsensical results. Double-check your prompt before hitting enter.
Asking leading questions: Want an unbiased answer? Avoid phrasing your prompt in a way that steers the LLM towards a specific response.

Avoiding these common mistakes still might not give you the most refined output. Therefore, it is good to follow some best practices that have been determined experimentally by members of LLM communities.

Start with a clear objective: What exactly do you want the LLM to achieve?
Provide context: The more context you give, the better the LLM can understand your request.
Use a conversational tone: Treat the LLM like a helpful assistant, not a complex machine.
Refine and iterate: Don't be afraid to experiment and adjust your prompt based on the results you receive.
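As a rough illustration of the first two practices, here is a minimal Python sketch (the helper name and its fields are my own, not from this article) that assembles a prompt from an explicit objective, optional context, and a desired output format:

```python
def build_prompt(objective: str, context: str = "", output_format: str = "") -> str:
    """Assemble a prompt from an explicit objective, optional context,
    and an optional output format."""
    parts = [objective.strip()]
    if context:
        parts.append("Context: " + context.strip())
    if output_format:
        parts.append("Respond in this format: " + output_format.strip())
    return "\n\n".join(parts)

# A vague prompt versus a refined one built from the practices above:
vague = "Tell me about dogs."
refined = build_prompt(
    objective="Write a three-sentence summary of why Labradors suit families.",
    context="The reader is a first-time dog owner with two young children.",
    output_format="Plain text, no bullet points.",
)
print(refined)
```

The refined version tells the model what to achieve, who the answer is for, and what shape the answer should take, which is exactly the extra signal a vague one-liner lacks.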

Prompt Templates
There are various ways to write a prompt. No matter what
method you use, the instructions and the intent must be
clear.

A prompt template includes some combination of these elements:

Task
Goal
Action
Problem
Outcome
Role
Format
Input
Steps
Expectation
Context
Elaboration
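One way to put a few of these elements to work is a plain string template with named placeholders. This is a sketch of my own (assuming no particular templating library, just Python's built-in string formatting) using the Role, Task, Context, and Format elements:

```python
# A reusable prompt template built from four of the elements above.
TEMPLATE = (
    "Role: {role}\n"
    "Task: {task}\n"
    "Context: {context}\n"
    "Format: {format}"
)

prompt = TEMPLATE.format(
    role="You are a seasoned travel writer.",
    task="Describe Kyoto in autumn.",
    context="The audience is first-time visitors to Japan.",
    format="One paragraph, under 80 words.",
)
print(prompt)
```

Swapping in other elements (Goal, Steps, Expectation, and so on) is just a matter of adding lines to the template.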

The following infographic that I created shows a few ways to write prompts that convey the instructions and intent clearly to the LLM. Each template caters to a different scenario and writing style.
Understanding zero-shot, one-shot and few-shot prompting
Prompts are instructions that you give to an AI model to
perform a specific task. These prompts can range from
simple requests to complex commands.

The number of examples that you provide in the prompt can significantly influence the model's performance.

A zero-shot prompt is one that does not provide any examples. For example, "Write a poem about a robot who wants to be a chef."

A one-shot prompt is one that provides a single example. For example, "Translate 'Hello, how are you?' into French."

A few-shot prompt is one that provides multiple examples. For example, "Translate the following sentences: 'I like dogs.' 'She is beautiful.' 'They are eating pizza.'"
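To make the distinction concrete, here is a small Python sketch (the function name and the French translations are my own additions, not from the article) that assembles a few-shot translation prompt from worked example pairs:

```python
def few_shot_prompt(instruction: str, examples, query: str) -> str:
    """Join an instruction, worked example pairs, and the new query
    into a single few-shot prompt."""
    blocks = [instruction]
    for source, target in examples:
        blocks.append("English: " + source + "\nFrench: " + target)
    blocks.append("English: " + query + "\nFrench:")
    return "\n\n".join(blocks)

examples = [
    ("I like dogs.", "J'aime les chiens."),
    ("She is beautiful.", "Elle est belle."),
]
prompt = few_shot_prompt(
    "Translate the following sentences into French.",
    examples,
    "They are eating pizza.",
)
print(prompt)
```

Passing an empty `examples` list gives you a zero-shot prompt, and a single pair gives you a one-shot prompt, so one helper covers all three styles.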

Why do we use these different types of prompts?

Zero-shot prompts test the model's general knowledge and ability to understand instructions without specific examples.

One-shot prompts assess the model's ability to learn from a single example and apply it to new situations.

Few-shot prompts evaluate the model's ability to learn from multiple examples and generalize its knowledge to new tasks.
Making LLMs “think”
Using these different types of prompts and different prompting techniques, we can make LLMs "think," improving the quality of their responses.

Let's review one of the most popular prompting techniques: chain-of-thought (CoT) prompting. In simpler terms, this means using a few examples to teach the LLM to think out loud and explain its reasoning before giving the final answer. This helps us understand how the LLM arrived at its conclusion and can also help improve its accuracy.

The following is an example of a one-shot chain-of-thought prompt:

Q: Jack has two bags, each containing three apples. How many apples does he have in total?
A: One bag contains 3 apples, so two bags contain 3 × 2 = 6 apples. The answer is 6.
Q: {QUESTION}
A:
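Programmatically, a one-shot CoT prompt can be assembled by prepending a worked example to each new question. The sketch below is my own; the fully written-out example answer is an assumption reconstructed from the truncated text above:

```python
# A worked Q/A pair that demonstrates step-by-step reasoning
# (reconstructed from the article's truncated example).
COT_EXAMPLE = (
    "Q: Jack has two bags, each containing three apples. "
    "How many apples does he have in total?\n"
    "A: One bag contains 3 apples, so two bags contain 3 * 2 = 6 apples. "
    "The answer is 6."
)

def cot_prompt(question: str) -> str:
    """Prepend the worked example so the model 'thinks out loud'
    before answering the new question."""
    return COT_EXAMPLE + "\nQ: " + question + "\nA:"

print(cot_prompt(
    "A train has 4 cars, each carrying 20 passengers. "
    "How many passengers are there in total?"
))
```

The trailing "A:" leaves the model to produce its own reasoning chain and final answer for the new question.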

This CoT technique can also be effective in prompt decomposition, which means breaking a complex problem (prompt) into smaller, simpler problems (tasks) using the right set of prompts.
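Here is a hypothetical sketch of prompt decomposition (the task and subtasks are invented for illustration): each sub-prompt tackles one piece of the complex request, and later sub-prompts can fold in earlier answers as context:

```python
# The complex prompt to decompose (illustrative only):
complex_prompt = "Write a short market report on electric bikes."

# Smaller, focused sub-prompts that together cover the original task.
subtasks = [
    "List the three largest electric-bike manufacturers.",
    "Summarize current electric-bike sales trends in one paragraph.",
    "Using the answers above, write a short market report on electric bikes.",
]

# In practice, each sub-prompt would be sent to the model in turn,
# with earlier answers inserted into later prompts as context.
for i, task in enumerate(subtasks, start=1):
    print(f"Prompt {i}: {task}")
```

Each sub-prompt is simple enough for the model to handle reliably, which is the point of decomposing in the first place.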

Similarly, there exists a concept of prompt aggregation, which acts like an ensemble module. It is the process of using multiple prompts to solve the same problem, then aggregating these responses into a final output. In many cases, a majority vote that selects the most frequent response is used to generate the final output.
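A majority vote over repeated responses is easy to sketch. In the snippet below the list of responses is canned (a stand-in for five real model calls, since no particular API is assumed here):

```python
from collections import Counter

# Hypothetical outputs from five model calls on the same problem.
responses = ["6", "6", "5", "6", "5"]

def majority_vote(answers):
    """Return the most frequent answer among the model's responses."""
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(responses))  # prints 6 (three votes out of five)
```

Because outliers are outvoted, the aggregated answer is typically more stable than any single response.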

Both prompt decomposition and prompt aggregation techniques help to reduce the variance of LLM outputs and often improve the accuracy of responses, but both techniques come with the cost of increasing the number of model calls needed to reach a final answer.

Summary and next steps
This article introduced you to communicating with large
language models (LLMs) by teaching you the
fundamentals of prompt engineering. You learned how
prompts guide these models, and how to avoid common
pitfalls to get the best results. The prompt templates can
help kickstart your LLM adventures.

If you're ready to dive in and do some prompt engineering using these prompting techniques, check out this tutorial series, "Use various models with the watsonx.ai flows engine." Or, try the watsonx Prompt Lab in this tutorial, "Guiding Llama 2 with prompt engineering by developing system and instruction prompts."

22 August 2024
Time to read: 5 minutes


Categories

Generative AI
Artificial intelligence
Large language models (LLMs)
watsonx.ai

Related Topics

Guiding Llama 2 with prompt engineering by developing system and instruction prompts
Token optimization: The backbone of effective prompt engineering
Use various models with the watsonx.ai flows engine