
{

"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2024 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Prompt Design - Best Practices\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/
generative-ai/blob/main/language/prompts/intro_prompt_design.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-
32px.png\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/
language/prompts/intro_prompt_design.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-
32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-
notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/
generative-ai/main/language/prompts/intro_prompt_design.ipynb\">\n",
" <img
src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85c
WJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI
logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
"</table>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| | |\n",
"|-|-|\n",
"|Author(s) | [Polong Lin](https://github.com/polong-lin) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"## Overview\n",
"\n",
"This notebook covers the essentials of prompt engineering, including some best
practices.\n",
"\n",
"Learn more about prompt design in the [official
documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/text/text-
overview)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d975e698c9a4"
},
"source": [
"### Objective\n",
"\n",
"In this notebook, you learn best practices around prompt engineering -- how to
design prompts to improve the quality of your responses.\n",
"\n",
"This notebook covers the following best practices for prompt engineering:\n",
"\n",
"- Be concise\n",
"- Be specific and well-defined\n",
"- Ask one task at a time\n",
"- Turn generative tasks into classification tasks\n",
"- Improve response quality by including examples"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ea013f50403c"
},
"source": [
"### Costs\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI Generative AI Studio\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing),\
n",
"and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3e663cb43fa0"
},
"source": [
"### Install Vertex AI SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "82ad0c445061",
"tags": []
},
"outputs": [],
"source": [
"!pip install google-cloud-aiplatform protobuf==3.19.3 --upgrade --user"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip install -U google-cloud-aiplatform \"shapely<2\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cebd6983cbad"
},
"source": [
"**Note:** Kindly ignore the deprecation warnings and incompatibility errors
related to pip dependencies.\n",
"\n",
"**Colab only:** Run the following cell to restart the kernel or use the button
to restart the kernel. For **Vertex AI Workbench** you can restart the terminal
using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip uninstall -y shapely pygeos geopandas\n",
"\n",
"# Install specific versions of shapely, pygeos, and geopandas known to be
compatible\n",
"!pip install shapely==1.8.5.post1 pygeos==0.12.0 geopandas==0.10.2\n",
"\n",
"# Upgrade google-cloud-aiplatform\n",
"!pip install -U google-cloud-aiplatform"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bea801acf6b5",
"tags": []
},
"outputs": [],
"source": [
"# Automatically restart kernel after installs so that your environment can
access the new packages\n",
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7a386d25fa8f"
},
"source": [
"### Authenticating your notebook environment\n",
"\n",
"- If you are using **Colab** to run this notebook, run the cell below and
continue.\n",
"- If you are using **Vertex AI Workbench**, check out the setup instructions
[here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1bd1dca8e9a7",
"tags": []
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- If you are running this notebook in a local development environment:\n",
" - Install the [Google Cloud SDK](https://cloud.google.com/sdk).\n",
" - Obtain authentication credentials. Create local credentials by running the
following command and following the oauth2 flow (read more about the command [here]
(https://cloud.google.com/sdk/gcloud/reference/beta/auth/application-default/
login)):\n",
"\n",
" ```bash\n",
" gcloud auth application-default login\n",
" ```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "960505627ddf"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ue7q-YO3Scpp"
},
"source": [
"**Colab only:** Run the following cell to initialize the Vertex AI SDK. For
Vertex AI Workbench, you don't need to run this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NGvWtLAyScpp",
"tags": []
},
"outputs": [],
"source": [
"import vertexai\n",
"\n",
"PROJECT_ID = \"qwiklabs-gcp-02-c0148c7290dd\" # @param {type:\"string\"}\n",
"REGION = \"us-east4\" # @param {type:\"string\"}\n",
"\n",
"vertexai.init(project=PROJECT_ID, location=REGION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PyQmSRbKA8r-",
"tags": []
},
"outputs": [],
"source": [
"from vertexai.language_models import TextGenerationModel\n",
"from vertexai.language_models import ChatModel"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UP76a2la7O-a"
},
"source": [
"### Load model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7isig7e07O-a",
"tags": []
},
"outputs": [],
"source": [
"generation_model = TextGenerationModel.from_pretrained(\"text-bison\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fIPcn5dZ7O-b"
},
"source": [
"## Prompt engineering best practices"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "df7d153f4928"
},
"source": [
"Prompt engineering is all about how to design your prompts so that the
response is what you were indeed hoping to see.\n",
"\n",
"The idea of using \"unfancy\" prompts is to minimize the noise in your prompt
to reduce the possibility of the LLM misinterpreting the intent of the prompt.
Below are a few guidelines on how to engineer \"unfancy\" prompts.\n",
"\n",
"In this section, you'll cover the following best practices when engineering
prompts:\n",
"\n",
"* Be concise\n",
"* Be specific, and well-defined\n",
"* Ask one task at a time\n",
"* Improve response quality by including examples\n",
"* Turn generative tasks to classification tasks to improve safety"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "43c1169ac435"
},
"source": [
"### Be concise"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d0f380f1620e"
},
"source": [
"🛑 Not recommended. The prompt below is unnecessarily verbose."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "b6a1697c3603",
"outputId": "2f22ac3b-181c-4c8f-a7a3-82cd70e804fb",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"What do you think could be a good name for a flower shop that
specializes in selling bouquets of dried flowers more than fresh flowers? Thank
you!\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2307f56a9b75"
},
"source": [
"✅ Recommended. The prompt below is to the point and concise."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fc666404f47c",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Suggest a name for a flower shop that sells bouquets of dried
flowers\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "17f6c48bba91"
},
"source": [
"### Be specific, and well-defined"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "269b428e1563"
},
"source": [
"Suppose that you want to brainstorm creative ways to describe Earth."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6436ee2ff406"
},
"source": [
"🛑 Not recommended. The prompt below is too generic."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "261b7f6e94c5",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Tell me about Earth\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0bebfecd2912"
},
"source": [
"✅ Recommended. The prompt below is specific and well-defined."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "242b1b3bae6e",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Generate a list of ways that makes Earth unique compared to other
planets\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "20dca9a05eab"
},
"source": [
"### Ask one task at a time"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f9019d443179"
},
"source": [
"🛑 Not recommended. The prompt below has two parts to the question that could
be asked separately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "70b3b5e5825d",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"What's the best method of boiling water and why is the sky blue?\"\
n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7936fb58c16a"
},
"source": [
"✅ Recommended. The prompts below asks one task a time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2564dad6c8db",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"What's the best method of boiling water?\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "770c695ade92",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Why is the sky blue?\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ff606011aa86"
},
"source": [
"### Watch out for hallucinations"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "956ce45b06a7"
},
"source": [
"Although LLMs have been trained on a large amount of data, they can generate
text containing statements not grounded in truth or reality; these responses from
the LLM are often referred to as \"hallucinations\" due to their limited
memorization capabilities. Note that simply prompting the LLM to provide a citation
isn’t a fix to this problem, as there are instances of LLMs providing false or
inaccurate citations. Dealing with hallucinations is a fundamental challenge of
LLMs and an ongoing research area, so it is important to be cognizant that LLMs may
seem to give you confident, correct-sounding statements that are in fact incorrect.
\n",
"\n",
"Note that if you intend to use LLMs for the creative use cases, hallucinating
could actually be quite useful."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0c9d5f66179a"
},
"source": [
"Try the prompt like the one below repeatedly. You may notice that sometimes it
will confidently, but inaccurately, say \"The first elephant to visit the moon was
Luna\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d813b9061b08",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Who was the first elephant to visit the moon?\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Clearly the chatbot is hallucinating since no elephant has ever flown to the
moon. But how do we prevent these kinds of inappropriate questions and more
specifically, reduce hallucinations? \n",
"\n",
"There is one possible method called the Determine Appropriate Response (DARE)
prompt, which cleverly uses the LLM itself to decide whether it should answer a
question based on what its mission is.\n",
"\n",
"Let's see how it works by creating a chatbot for a travel website with a
slight twist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_model = ChatModel.from_pretrained(\"chat-bison@002\")\n",
"\n",
"chat = chat_model.start_chat()\n",
"dare_prompt = \"\"\"Remember that before you answer a question, you must check
to see if it complies with your mission.\n",
"If not, you can say, Sorry I can't answer that question.\"\"\"\n",
"\n",
"print(\n",
" chat.send_message(\n",
" f\"\"\"\n",
"Hello! You are an AI chatbot for a travel web site.\n",
"Your mission is to provide helpful queries for travelers.\n",
"\n",
"{dare_prompt}\n",
"\"\"\"\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Suppose we ask a simple question about one of Italy's most famous tourist
spots."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prompt = \"What is the best place for sightseeing in Milan, Italy?\"\n",
"print(chat.send_message(prompt))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let us pretend to be a not-so-nice user and ask the chatbot a question
that is unrelated to travel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prompt = \"Who was the first elephant to visit the moon?\"\n",
"print(chat.send_message(prompt))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that the DARE prompt added a layer of guard rails that prevented
the chatbot from veering off course."
]
},
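{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your application needs to react to a refusal (for example, to route the user to a help page), a minimal sketch is to check the response text for the refusal phrase defined in `dare_prompt`. The model may paraphrase the refusal, so the string match below is an illustrative heuristic, not a guarantee."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch: detect the DARE refusal so the application can react.\n",
"# The model may paraphrase the refusal, so this keyword match is a\n",
"# heuristic, not a guarantee.\n",
"response = chat.send_message(\"Who was the first elephant to visit the moon?\")\n",
"if \"can't answer\" in response.text.lower():\n",
"    print(\"The chatbot declined to answer - routing to a fallback.\")\n",
"else:\n",
"    print(response.text)"
]
},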
{
"cell_type": "markdown",
"metadata": {
"id": "029e23abfd56"
},
"source": [
"### Turn generative tasks into classification tasks to reduce output
variability"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d943941d6e59"
},
"source": [
"#### Generative tasks lead to higher output variability"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "37528e6c9754"
},
"source": [
"The prompt below results in an open-ended response, useful for brainstorming,
but response is highly variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a8e2dc39e9ae",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"I'm a high school student. Recommend me a programming activity to
improve my skills.\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f71a6fa2b4bb"
},
"source": [
"#### Classification tasks reduces output variability"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "917517465dac"
},
"source": [
"The prompt below results in a choice and may be useful if you want the output
to be easier to control."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3feb93d9df81",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"\"\"I'm a high school student. Which of these activities do you
suggest and why:\n",
"a) learn Python\n",
"b) learn JavaScript\n",
"c) learn Fortran\n",
"\"\"\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "32290ac9fb2b"
},
"source": [
"### Improve response quality by including examples"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "132834f5db2c"
},
"source": [
"Another way to improve response quality is to add examples in your prompt. The
LLM learns in-context from the examples on how to respond. Typically, one to five
examples (shots) are enough to improve the quality of responses. Including too many
examples can cause the model to over-fit the data and reduce the quality of
responses.\n",
"\n",
"Similar to classical model training, the quality and distribution of the
examples is very important. Pick examples that are representative of the scenarios
that you need the model to learn, and keep the distribution of the examples (e.g.
number of examples per class in the case of classification) aligned with your
actual distribution."
]
},
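{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before walking through the zero-shot, one-shot, and few-shot examples below, here is a minimal sketch of how you might assemble a few-shot prompt programmatically from a list of labeled examples, which makes it easier to keep the class distribution balanced. The `build_sentiment_prompt` helper is illustrative, not part of the Vertex AI SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative helper (not part of the Vertex AI SDK): build a few-shot\n",
"# sentiment prompt from labeled examples, then append the new input.\n",
"def build_sentiment_prompt(examples, new_tweet):\n",
"    header = \"Decide whether a Tweet's sentiment is positive, neutral, or negative.\\n\\n\"\n",
"    shots = \"\".join(f\"Tweet: {text}\\nSentiment: {label}\\n\\n\" for text, label in examples)\n",
"    return header + shots + f\"Tweet: {new_tweet}\\nSentiment:\\n\"\n",
"\n",
"# Keep the examples roughly balanced across the classes you care about.\n",
"examples = [\n",
"    (\"I loved the new YouTube video you made!\", \"positive\"),\n",
"    (\"That was awful. Super boring\", \"negative\"),\n",
"]\n",
"\n",
"prompt = build_sentiment_prompt(examples, \"It was fine, nothing special.\")\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=5).text)"
]
},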
{
"cell_type": "markdown",
"metadata": {
"id": "46520d938b6a"
},
"source": [
"#### Zero-shot prompt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "46d3b47e6cea"
},
"source": [
"Below is an example of zero-shot prompting, where you don't provide any
examples to the LLM within the prompt itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2cbe03eb0b71",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or
negative.\n",
"\n",
"Tweet: I loved the new YouTube video you made!\n",
"Sentiment:\n",
"\"\"\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b0daabca1359"
},
"source": [
"#### One-shot prompt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "42c4652fc5c2"
},
"source": [
"Below is an example of one-shot prompting, where you provide one example to
the LLM within the prompt to give some guidance on what type of response you want."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cfe584860787",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or
negative.\n",
"\n",
"Tweet: I loved the new YouTube video you made!\n",
"Sentiment: positive\n",
"\n",
"Tweet: That was awful. Super boring 😠\n",
"Sentiment:\n",
"\"\"\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ef58c35005c0"
},
"source": [
"#### Few-shot prompt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b630e8947b60"
},
"source": [
"Below is an example of few-shot prompting, where you provide a few examples to
the LLM within the prompt to give some guidance on what type of response you want."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fb3ba21bbd11",
"tags": []
},
"outputs": [],
"source": [
"prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or
negative.\n",
"\n",
"Tweet: I loved the new YouTube video you made!\n",
"Sentiment: positive\n",
"\n",
"Tweet: That was awful. Super boring 😠\n",
"Sentiment: negative\n",
"\n",
"Tweet: Something surprised me about this video - it was actually original. It
was not the same old recycled stuff that I always see. Watch it - you will not
regret it.\n",
"Sentiment:\n",
"\"\"\"\n",
"\n",
"print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a4023be726eb"
},
"source": [
"#### Choosing between zero-shot, one-shot, few-shot prompting methods"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6d7870ff75cc"
},
"source": [
"Which prompt technique to use will solely depends on your goal. The zero-shot
prompts are more open-ended and can give you creative answers, while one-shot and
few-shot prompts teach the model how to behave so you can get more predictable
answers that are consistent with the examples provided."
]
},
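{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the trade-off, you can send the same input to the model with and without examples and compare how constrained the outputs are. The sketch below reuses `generation_model` from earlier; the exact outputs will vary from run to run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare zero-shot vs. few-shot on the same input. The zero-shot output is\n",
"# open-ended, while the few-shot output is anchored to the example format.\n",
"tweet = \"Watch it - you will not regret it.\"\n",
"\n",
"zero_shot = f\"Decide whether a Tweet's sentiment is positive, neutral, or negative.\\n\\nTweet: {tweet}\\nSentiment:\\n\"\n",
"\n",
"few_shot = (\n",
"    \"Decide whether a Tweet's sentiment is positive, neutral, or negative.\\n\\n\"\n",
"    \"Tweet: I loved the new YouTube video you made!\\nSentiment: positive\\n\\n\"\n",
"    f\"Tweet: {tweet}\\nSentiment:\\n\"\n",
")\n",
"\n",
"print(\"Zero-shot:\", generation_model.predict(prompt=zero_shot, max_output_tokens=5).text)\n",
"print(\"Few-shot: \", generation_model.predict(prompt=few_shot, max_output_tokens=5).text)"
]
},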
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"name": "intro_prompt_design.ipynb",
"toc_visible": true
},
"environment": {
"kernel": "python3",
"name": "tf2-cpu.2-16.m123",
"type": "gcloud",
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/tf2-cpu.2-
16:m123"
},
"kernelspec": {
"display_name": "Python 3 (Local)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
