FreeCodeCamp
This tutorial covers the definition of prompt engineering, the underlying principles of AI and LLMs, essential linguistic considerations, practical prompting strategies, and more advanced concepts.
What is prompt engineering?
Prompt engineering involves a human writing, refining, and optimizing prompts in a structured way, with the intention of perfecting the interaction between humans and AI to the highest degree possible.
Prompt engineers are also responsible for:
continuous monitoring of prompts
ensuring their ongoing effectiveness
maintaining an up-to-date prompt library
Understanding the Basics of AI and Large Language Models (LLMs):
Artificial Intelligence (AI): Defined as "the simulation of human intelligence
processes by machines." Kubów clarifies that current AI, especially in tools like
ChatGPT, is not sentient and relies on machine learning.
Machine Learning: Works by analyzing large amounts of training data to
identify correlations and patterns, which are then used to predict outcomes for
new data. An example provided is categorizing paragraphs based on their
content.
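As a concrete illustration of the paragraph-categorization idea, here is a minimal sketch (assuming scikit-learn is installed; the texts and labels are invented) of learning patterns from labelled examples and predicting a category for new text:

```python
# Minimal text-classification sketch: learn word patterns from labelled
# examples, then predict the category of unseen text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "The striker scored twice and the home side won the match.",
    "The central bank raised interest rates again this quarter.",
]
train_labels = ["sports", "finance"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)  # find correlations between words and labels

print(model.predict(["The bank may raise interest rates again."]))  # -> ['finance']
```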
Large Language Models (LLMs): Clever computer programs trained on vast
amounts of text data, enabling them to understand and generate human-like
text. They learn about conversation, grammar, and style by analyzing word
order and meaning.
LLMs generate text by predicting the next word in a sequence based on their
understanding of language patterns.
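To make the next-word idea concrete, here is a toy sketch (not how a real LLM works internally; the probability table is invented) that repeatedly samples a likely continuation, one word at a time:

```python
# Toy next-word generator: an LLM does the same thing at vastly larger scale,
# assigning a probability to every possible next token and sampling from it.
import random

# Invented "learned" probabilities for the word that follows each word
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "quietly": 0.3},
}

def generate(start: str, steps: int = 3) -> str:
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # pick the next word in proportion to its predicted probability
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```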
The tutorial briefly touches on the history of language models, starting with ELIZA (1960s), followed by SHRDLU (1970s), and the significant advancements with deep learning leading to models like GPT-1 (2018), GPT-2 (2019), GPT-3 (2020), and GPT-4 (2023).
Linguistics is key to prompt engineering
Understanding how language works in different contexts is essential for
creating effective prompts.
The Prompt Engineering Mindset:
Developing an intuitive understanding of how to formulate prompts effectively
from the first attempt is a valuable mindset.
The analogy of designing effective Google searches is used to illustrate this
point. Better "queries" lead to better results, and the same applies to prompts.
Best Practices:
Write Clear Instructions with Details: Avoid ambiguity by providing specific
information about what you need. Instead of "when is the election," ask "when
is the next presidential election for Poland?"
Adopt a Persona: Instruct the AI to respond as a specific character or with a
particular expertise. This can significantly impact the relevance and style of
the output (e.g., acting as a spoken English teacher or writing a poem in the style of Rupi Kaur); see the sketch after this list.
Specify the Format: Clearly indicate the desired output format (e.g., summary,
list, detailed explanation, bullet points, checklist, word limits).
Use Iterative Prompting: If the initial response is insufficient, continue the
conversation with follow-up questions or requests for elaboration.
Avoid Leading the Answer: Frame prompts neutrally to prevent biasing the
AI's response towards a preconceived answer.
Limit the Scope for Long Topics: Break down broad topics into smaller, more
focused queries for better results.
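Several of the practices above (a persona, clear detailed instructions, and an explicit output format) can be combined in a single request. Below is a hedged sketch using the OpenAI Python SDK (v1.x); the model name is an assumption, and an OPENAI_API_KEY environment variable is expected:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whichever you have access to
    messages=[
        # Adopt a persona via the system message
        {"role": "system", "content": "You are a patient spoken English teacher."},
        # Clear, detailed instructions plus an explicit output format
        {
            "role": "user",
            "content": (
                "Explain the difference between 'affect' and 'effect' to a beginner. "
                "Reply as a bullet-point checklist of at most five items."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```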
Zero-Shot and Few-Shot Prompting:
Zero-Shot Prompting: Leveraging a pre-trained model's existing knowledge to
answer a query without providing any specific examples in the prompt (e.g.,
"When is Christmas in America?").
Few-Shot Prompting: Enhancing the model's understanding by including a
few examples of the desired task within the prompt. This provides the model
with some context or "training data" within the prompt itself (e.g., providing
examples of Ania's favorite foods before asking for restaurant
recommendations).
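A minimal few-shot sketch, again using the OpenAI Python SDK (v1.x); the model name and the example preferences are illustrative assumptions. The examples embedded in the prompt give the model context to pattern-match against:

```python
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "Ania's favourite foods:\n"
    "- sushi\n"
    "- ramen\n"
    "- poke bowls\n\n"
    "Based on the examples above, recommend three restaurants in London that "
    "Ania is likely to enjoy, as a numbered list with one sentence each."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```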
AI Hallucinations:
AI hallucinations refer to unusual or inaccurate outputs produced by AI models
when they misinterpret data or make "creative" but incorrect connections.
Google's DeepDream project, which over-interprets patterns in images, is
given as an example of a visual AI hallucination.
Text-based AI hallucinations can occur when a model fabricates information,
such as providing an inaccurate answer about a historical figure due to a lack
of knowledge.
Vectors and Text Embeddings:
Text Embedding: A technique in computer science and NLP to represent
textual information in a format easily processed by algorithms, particularly
deep learning models.
In prompt engineering, LLM embedding involves converting text prompts into
high-dimensional vectors that capture their semantic information (meaning).
The tutorial illustrates how words like "food" can be represented by a long
array of numbers.
Text embeddings enable computers to identify semantically similar words
(e.g., "burger," "pizza" being similar to "food") rather than just
lexicographically similar words (e.g., "foot").
The OpenAI Create Embeddings API is introduced as a tool for generating text embeddings.
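A sketch of the embedding idea using the OpenAI Python SDK (v1.x); the embedding model name is an assumption. Cosine similarity between the vectors shows that "burger" sits much closer to "food" than the lexicographically similar "foot" does:

```python
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    # Each string is converted to a long list of numbers (a high-dimensional vector)
    return client.embeddings.create(
        model="text-embedding-3-small",  # assumed embedding model
        input=text,
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

food, burger, foot = embed("food"), embed("burger"), embed("foot")
print("food vs burger:", cosine(food, burger))  # semantically similar -> higher score
print("food vs foot:  ", cosine(food, foot))    # only spelling is similar -> lower score
```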