AI for JavaScript Course
Introduction
Module 1: JavaScript for AI
Advantages of Using JavaScript Frameworks in AI Application Development
Module Project: Setting up a development environment
1. Setting Up Node.js
2. Setting Up Next.js
Conclusion
Module 2: Basics of AI Development
Introduction to LLMs
Basics of Prompt Engineering and Model Selection
Strategies for Better Results:
Integrating LLMs with JavaScript
Module Project: Building an AI Content Generator with Node.js and OpenAI GPT 3.5 / Mistral 7B Instruct v0.2
Practical AI Application Development for JavaScript Developers
Introduction
I built this course hoping it would be an excellent guide for aspiring AI developers and a
valuable resource for the wider JavaScript developer community.
Over the past year, the artificial intelligence industry has been at the forefront of
technological advancement worldwide, with a stream of remarkable products and
experiments released to the world. For example, OpenAI's ChatGPT, released in late 2022,
quickly became the fastest-growing consumer product in history at the time.
Seamless Integration with AI Tools: Many AI tools and libraries are increasingly
becoming JavaScript-friendly. AI startups like OpenAI now provide JavaScript
libraries that allow developers to harness the latest AI advancements in building
amazing apps directly within a JavaScript environment.
Note: Feel free to skip this section if you already know how to do this.
Also note that this setup might not always be current; for the latest setup
instructions, please visit
https://nodejs.org/en and https://nextjs.org/docs/getting-started/installation
1. Setting Up Node.js
Node.js is a runtime environment that lets you run JavaScript on the server side. It's
essential for building scalable and efficient AI applications.
Steps to Set Up Node.js:
1. Download Node.js:
Visit https://nodejs.org/en and download the LTS installer for your operating system.
2. Install Node.js:
Run the installer and follow the prompts. Ensure you select the option to install npm
(Node Package Manager) as well, as it's crucial for managing your JavaScript dependencies.
3. Verify Installation:
Run node -v and npm -v to check the installed versions of Node.js and npm,
respectively.
console.log("Hello, Node.js!");
node test.js
If everything is set up correctly, you'll see Hello, Node.js! printed in your terminal.
2. Setting Up Next.js
Next.js is a React framework that enables server-side rendering and generating static
websites for React-based web applications.
Next.js currently offers two project setups: the older Pages Router and the newer App
Router. I won't make a recommendation here and encourage you to try both options.
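As a quick sketch, the official CLI scaffolds either setup (the project name below is just a placeholder):

npx create-next-app@latest my-ai-app
cd my-ai-app
npm run dev

During scaffolding, the CLI asks whether you want TypeScript, ESLint, Tailwind CSS, the src/ directory, and the App Router; answering No to the App Router question gives you a Pages Router project. The exact prompts may differ in newer releases.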
Conclusion
Congratulations! You've successfully set up your development environment with Node.js
and Next.js. You're now ready to begin building AI-powered applications.
As you explore the capabilities of Node.js and Next.js, consider how you might leverage
these tools in your upcoming AI projects. What innovative features can you envision
bringing to life in your web applications?
Module 2: Basics of AI Development
Imagine a world where your web applications not only respond to user inputs but also
understand and interact in a way that feels almost human. This is the world of AI, where
Large Language Models (LLMs) like GPT, Mixtral, and Claude have started a revolution.
In this module, we're going to dive into the basics of AI development, focusing on how
these powerful models can be integrated with JavaScript to create dynamic and
intelligent web applications.
Introduction to LLMs
Large Language Models represent a remarkable leap in the field of AI, offering
capabilities that span various modalities, including text, vision, audio, and multimodal (a
combination of different types). These models, such as OpenAI's GPT series, Google's
BERT, and newer models like Mixtral and Claude, learn to understand context, generate
human-like text, and even answer complex questions by being trained on vast datasets,
enabling them to perform a wide array of language-related tasks.
3. Audio Models: Audio models are adept at processing and understanding sound
data. Whisper, for example, is an automatic speech recognition system that can
transcribe and translate speech across multiple languages.
https://huggingface.co/docs/transformers/model_doc/mixtral
OpenAI’s Prompt Engineering Guide has a “Six strategies for getting better
results” section. Check it out here:
https://platform.openai.com/docs/guides/prompt-engineering/six-strategies-for-getting-better-results
Source: https://medium.com/@aiwizard/llm-models-comparison-gpt-4-bard-llama-flan-ul2-bloom-9ad7c0c56ba5
Note: In the next chapter, we will further explore popular open-source AI models
and show how you can integrate them into your JS apps via APIs and external libraries.
This project will guide you through creating a simple AI content generator
using Node.js and two popular models.
By the end, you'll have a functional application that can generate blog posts, stories, or
any other text-based content.
2. Creative Writing
3. Educational Content
5. Personal Development
Career Advice: "Provide insightful career advice for recent college graduates in
the tech industry."
2. Navigate to this directory in your terminal and run npm init -y to initialize a new
Node.js project.
3. Install the OpenAI npm package with npm install --save openai .
1. Sign up for an API key from OpenAI. This key is essential to authenticate your
requests.
Make sure you have set your OPENAI_API_KEY in your environment variables.
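If you haven't done this before, one option is to keep the key in a local .env file (never commit it) and load it with the dotenv package; the file name and package are just suggestions, and any mechanism that populates process.env.OPENAI_API_KEY works equally well.

// .env (in your project root, listed in .gitignore)
// OPENAI_API_KEY=sk-...

import "dotenv/config"; // loads variables from .env into process.env

console.log(Boolean(process.env.OPENAI_API_KEY)); // should print true once the key is set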
return chatCompletion.choices[0].message.content;
} catch (error) {
console.error("Error generating content:", error);
return null;
In this function, the prompt is the input from the user, which we send to the OpenAI
generative AI model API. The API then returns the generated content based on this
prompt.
generateContent(blogPostPrompt)
.then(content => console.log(content))
.catch(error => console.error(error));
Run your application using node app.js and see the AI-generated content in your
console.
Full Code
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generateContent(prompt) {
  try {
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    });
    return chatCompletion.choices[0].message.content;
  } catch (error) {
    console.error("Error generating content:", error);
    return null;
  }
}

const blogPostPrompt = "Write a comprehensive blog post about ..."; // fill in your topic

generateContent(blogPostPrompt)
  .then((content) => console.log(content))
  .catch((error) => console.error(error));
Initialize your project: Create a new directory for your project and initialize it with
npm init -y . This step creates a package.json file for managing your project
dependencies.
Install the package: Run npm install @mistralai/mistralai in your project directory.
This command installs the Mistral AI package and adds it to your package.json .
Secure your API key: Store your Mistral AI API key in an environment variable. For
local development, you can use a .env file and a package like dotenv to load
environment variables. Remember, never hardcode your API keys directly into your
source code, especially when pushing code to public repositories.
return chatResponse.choices[0].message.content;
} catch (error) {
console.error('Error:', error);
return null;
}
}
generateContent(blogPostPrompt)
.then(content => console.log(content))
.catch(error => console.error(error));
Execute your script: Run your application with node app.js . This will execute the
script, and you should see the AI-generated response in your console.
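For reference, here is a minimal end-to-end sketch of the script described above. It assumes the 0.x version of the @mistralai/mistralai client (import style and method names differ in newer releases of the SDK) and the hosted Mistral 7B Instruct model; adjust the model id to whatever your account exposes.

import MistralClient from "@mistralai/mistralai";
import "dotenv/config"; // loads MISTRAL_API_KEY from a local .env file

const client = new MistralClient(process.env.MISTRAL_API_KEY);

async function generateContent(prompt) {
  try {
    const chatResponse = await client.chat({
      model: "open-mistral-7b", // Mistral 7B Instruct; older deployments used "mistral-tiny"
      messages: [{ role: "user", content: prompt }],
    });
    return chatResponse.choices[0].message.content;
  } catch (error) {
    console.error("Error:", error);
    return null;
  }
}

const blogPostPrompt = "Write a comprehensive blog post about ..."; // fill in your topic

generateContent(blogPostPrompt)
  .then((content) => console.log(content))
  .catch((error) => console.error(error));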
Additional Resources
1. Prompt engineering: https://platform.openai.com/docs/guides/prompt-engineering
2. Mistral Javascript Client: https://docs.mistral.ai/platform/client/
3. Prompt examples: https://platform.openai.com/examples
4. Prompt Engineering Guide: https://www.promptingguide.ai/
5. The LLM Index - https://sapling.ai/llm/index
6. Open LLM Leaderboard -
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
7. AlpacaEval Leaderboard - https://tatsu-lab.github.io/alpaca_eval/
Falcon – Developed by the Technology Innovation Institute (TII), UAE. You can
also play with their demo on HuggingFace:
https://huggingface.co/spaces/tiiuae/falcon-180b-demo.
Stable Diffusion XL: Released by Stability AI, this model has made waves in the AI
community for its ability to generate high-quality images from textual descriptions.
There’s an official sandbox you can play with:
https://platform.stability.ai/sandbox/text-to-image and also a HuggingFace setup:
https://huggingface.co/spaces/stabilityai/stable-diffusion
https://huggingface.co/blog/llama2#why-llama-2
There are also other open-source models like BLOOM and Alpaca. You can check out
the Sapling AI’s LLM index for a list of other Open Source LLMs:
https://sapling.ai/llm/index
There are also other commercial models like Claude by Anthropic (demo:
https://claude.ai/) and the models offered by Cohere.
Language agnostic: you can use them with any language that can make HTTP
requests.
SDKs:
For example, instead of using the Mistral client SDK as we did in module 2, we can
interact with the chat completions endpoint using Node.js native fetch API to create our
AI content generator as shown below:
try {
const response = await fetch(MISTRAL_API_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json',
'Authorization': `Bearer ${apiKey}`
},
body: JSON.stringify(requestBody)
});
generateContent(blogPostPrompt)
.then(content => console.log(content))
.catch(error => console.error(error));
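Here is a fuller sketch of that fetch-based approach. It assumes the public chat completions endpoint at https://api.mistral.ai/v1/chat/completions, a MISTRAL_API_KEY environment variable, and the open-mistral-7b model id; adjust all three to your own setup.

const MISTRAL_API_URL = "https://api.mistral.ai/v1/chat/completions";
const apiKey = process.env.MISTRAL_API_KEY;

async function generateContent(prompt) {
  const requestBody = {
    model: "open-mistral-7b",
    messages: [{ role: "user", content: prompt }],
  };

  try {
    const response = await fetch(MISTRAL_API_URL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Accept: "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(requestBody),
    });

    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    const data = await response.json();
    return data.choices[0].message.content;
  } catch (error) {
    console.error("Error:", error);
    return null;
  }
}

const blogPostPrompt = "Write a comprehensive blog post about ..."; // fill in your topic

generateContent(blogPostPrompt)
  .then((content) => console.log(content))
  .catch((error) => console.error(error));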
Create a new Vite project with React by running npm init vite@latest , give the
project a name (for example, my-dall-e-app), and select the React template. Then
move into the project directory and run npm install .
In your project directory, install the OpenAI SDK: npm install openai .
Replace the existing code in App.jsx with the following to create a form-based
UI for image generation:
function App() {
const [prompt, setPrompt] = useState('');
const [imageUrl, setImageUrl] = useState('');
const [isLoading, setIsLoading] = useState(false);
setImageUrl(response.data[0].url);
} catch (error) {
console.error('Error:', error);
} finally {
setIsLoading(false);
}
};
Note that the button text changes based on the API response status, so
we can keep the user informed of what's going on.
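The excerpt above omits the part of the component that actually calls the API. A simplified sketch of that handler follows; the handler name handleGenerate, the VITE_OPENAI_API_KEY variable, and the dall-e-3 model id are assumptions, and calling the OpenAI SDK directly from the browser (via dangerouslyAllowBrowser) is only acceptable for a local demo because it exposes your key.

import OpenAI from "openai";

// Module-level client; dangerouslyAllowBrowser is required when the SDK runs client-side.
const openai = new OpenAI({
  apiKey: import.meta.env.VITE_OPENAI_API_KEY, // Vite exposes env vars prefixed with VITE_
  dangerouslyAllowBrowser: true,
});

// Inside the App component, wired to the form's onSubmit:
const handleGenerate = async (e) => {
  e.preventDefault();
  setIsLoading(true);
  try {
    const response = await openai.images.generate({
      model: "dall-e-3", // or "dall-e-2"
      prompt,
      n: 1,
      size: "1024x1024",
    });
    setImageUrl(response.data[0].url);
  } catch (error) {
    console.error("Error:", error);
  } finally {
    setIsLoading(false);
  }
};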
https://platform.openai.com/docs/guides/text-to-speech
Start your application: npm run dev , then open the local URL that Vite prints in
your terminal.
Try entering different prompts to generate images and, of course, style it using
your preferred CSS setup.
With these steps, you've successfully built an image generation application using
OpenAI's Dall-E and React. This application demonstrates the power of AI in creative
endeavors and how easily it can be integrated into modern web applications.
Conclusion
As you conclude this module, reflect on the immense potential of integrating various AI
models into JavaScript applications. The ability to work with both open-source and
closed-source models opens up a world of creativity and innovation. How will you
leverage these AI technologies in your future JavaScript projects to create applications
that were once thought impossible? Feel free to share with me your ideas on Discord.
First, a query is used to retrieve relevant information from (multiple) documents or data
from a large dataset.
Vector Databases: These databases are designed to efficiently store and query
embeddings and enable rapid similarity searches among large collections of
embeddings, essential for the retrieval step in RAG. Some of the popular vector
databases are FAISS from Meta, Chroma, HNSWLib, Memory, and Pinecone. We
will be using the last two for our module project. You can also take a look at the list
of vector databases available for integration by Langchain:
https://js.langchain.com/docs/integrations/vectorstores
Llama Index: A data framework that facilitates the ingestion of data, storage, and
retrieval of embeddings generated from your own data, making it easier to integrate
LLMs into your applications with your own data.
Embedchain: This RAG framework helps extract your data into relevant chunks
and embeddings, making them useful in powering contextual responses to your
queries.
Backend Setup: Implement the logic to retrieve data from the vector database and
pass it to the RAG model for response generation.
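To make the retrieval step described above concrete, here is a small sketch of storing a few text chunks in LangChain's in-memory vector store and running a similarity search against them. The import paths follow LangChain JS around version 0.1 and may differ in other releases.

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

async function demoSimilaritySearch() {
  // Embed a few text chunks and hold them in an in-memory vector store.
  const store = await MemoryVectorStore.fromTexts(
    [
      "RAG combines retrieval with text generation.",
      "Vector stores index embeddings for similarity search.",
    ],
    [{ id: 1 }, { id: 2 }], // metadata for each chunk
    new OpenAIEmbeddings()
  );

  // Find the single most similar chunk to the query.
  const results = await store.similaritySearch("How does retrieval work?", 1);
  console.log(results[0].pageContent);
}

demoSimilaritySearch();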
Imagine having a research assistant at your fingertips, one who can understand and
provide insights from any PDF document you provide. This is not just a figment of
the imagination; it is exactly what we will build in this project.
Prerequisites
Before we dive in, ensure you have Node installed on your machine. You'll also need an
OpenAI API Key, which you can get from OpenAI.
Setting Up Next.js
Start by creating a new Next.js project:
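For example (the project name is just a placeholder):

npx create-next-app@latest ai-research-assistant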
Follow the CLI prompts to bootstrap your project. Opt for using the App Router and the
src/ directory when asked.
Next, add your OpenAI API key to an environment file (for example, .env.local in the
project root):
OPENAI_API_KEY="openai_api_key"
In this setup, the Langchain RetrievalQAChain uses the MemoryVectorStore for efficient
retrieval of relevant document sections, which the ChatOpenAI model then summarizes.
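Condensed into code, that flow might look roughly like the sketch below. The function name answerQuestion is made up for illustration, and the import paths follow LangChain JS around version 0.1, so adjust them to your installed version.

import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RetrievalQAChain } from "langchain/chains";

// docs: an array of LangChain Document objects produced from the uploaded PDF
export async function answerQuestion(docs, question) {
  // Embed the document chunks and hold them in an in-memory vector store.
  const vectorStore = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

  // RetrievalQAChain fetches the most relevant chunks and lets the chat model answer from them.
  const chain = RetrievalQAChain.fromLLM(
    new ChatOpenAI({ modelName: "gpt-3.5-turbo" }),
    vectorStore.asRetriever()
  );

  const res = await chain.call({ query: question });
  return res.text;
}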
For the Pinecone-backed setup, also add your Pinecone credentials to the same
environment file:
PINECONE_API_KEY="pinecone_api_key"
PINECONE_INDEX_NAME="research-assistant"
Begin by updating the layout.tsx under the app/ directory. This file will define the basic
metadata for our application, setting the stage for a cohesive and informative UI. Here's
a sample setup for your layout.tsx :
Now, let's focus on the main page of our application, page.tsx . This is where users will
interact with the tool, upload documents, and ask questions.
Here’s a detailed code structure for the page.tsx :
"use client";
setUploading(true);
try {
const res = await fetch('/api/ingest-research', {
method: 'POST',
body: formData,
});
// Configure react-dropzone
const { getRootProps, getInputProps } = useDropzone({
onDrop,
});
return (
  <main className="flex min-h-screen flex-col items-center p-2">
    <h1>AI-Powered Research Assistant</h1>
    <div
      {...getRootProps({
        className:
          "dropzone bg-gray-900 border border-gray-800 p-10 rounded",
      })}
    >
      <input {...getInputProps()} />
      <p>Drag 'n' drop a PDF here, or click to select a file</p>
    </div>
    <button
      disabled={isLoading}
      type="submit"
      className="py-2 border rounded-lg bg-gray-900 text-sm"
    >
      Submit
    </button>
    <p className="text-center">
      Completion result: {completion === "" ? "Awaiting response..." : completion}
    </p>
  </form>
In this setup:
The onDrop function handles the logic for uploading files to the /api/ingest-research
endpoint.
The handleSubmit function sends user queries to the /api/chat-research endpoint and
displays the AI's response.
Styling can be adjusted as per your design preference using CSS or a framework
like Tailwind CSS.
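Since the excerpt above omits the bodies of those two handlers, here is a sketch of what they might look like inside the page component. The endpoint paths and state setters come from the excerpt; the form-data field name, the question state, and the response shape are assumptions.

// Both handlers live inside the page component; useCallback comes from "react",
// and setUploading, setIsLoading, setCompletion are the useState setters declared above.
const onDrop = useCallback(async (acceptedFiles: File[]) => {
  const formData = new FormData();
  formData.append("file", acceptedFiles[0]); // field name is an assumption

  setUploading(true);
  try {
    const res = await fetch("/api/ingest-research", {
      method: "POST",
      body: formData,
    });
    if (!res.ok) throw new Error("Ingestion failed");
  } catch (error) {
    console.error(error);
  } finally {
    setUploading(false);
  }
}, []);

const handleSubmit = async (e: React.FormEvent) => {
  e.preventDefault();
  setIsLoading(true);
  try {
    const res = await fetch("/api/chat-research", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }), // `question` comes from a text input state
    });
    const data = await res.json();
    setCompletion(data.result ?? ""); // response shape is an assumption
  } catch (error) {
    console.error(error);
  } finally {
    setIsLoading(false);
  }
};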
Upload the research documents of your choice (in PDF format), and once the training is
complete, you can start querying the AI with prompts related to the uploaded content.
Conclusion
You've successfully built a full-stack RAG setup capable of learning and chatting with
any research document it's trained on. This tool stands as a testament to the power of
combining AI with modern web development.
As you explore further, think about how this technology can be expanded. How can you
utilize AI to enhance the way we interact with and learn from vast amounts of textual
data?
If you have any questions or need assistance, feel free to send a message.
Additional Resources
3. Embeddings:
https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#
4. Streaming: https://sdk.vercel.ai/docs/concepts/streaming
AI Function Calling
Modern AI models, such as those offered by OpenAI, can perform an array of functions
– from translating languages to generating code. These models can self-invoke
functions provided and carry out specific tasks like summarization, translation, or even
code generation.
Supported by the GPT-3.5 Turbo and GPT-4 models, the function calling feature is a
convenient way for developers like you to describe a set of defined functions that the
models can intelligently use to connect to external data sources, thereby enhancing
their responses.
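As a minimal sketch of that flow with the OpenAI Node SDK: we describe a function (the getCurrentWeather tool below is purely hypothetical), the model decides whether to call it, and we read the structured call from the response.

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function demoFunctionCalling() {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "What's the weather like in Lagos right now?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "getCurrentWeather", // hypothetical tool, for illustration only
          description: "Gets the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  });

  const message = completion.choices[0].message;
  if (message.tool_calls) {
    // The model did not answer directly; it asked us to run getCurrentWeather.
    const { name, arguments: args } = message.tool_calls[0].function;
    console.log(name, JSON.parse(args)); // e.g. getCurrentWeather { city: "Lagos" }
  }
}

demoFunctionCalling();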
Crafting AI Agents
An AI agent is an autonomous entity capable of performing tasks, interacting, and
learning. Creating an AI agent involves not only technical prowess but also an
understanding of user experience and interaction dynamics. Experts believe that AI
agents are rapidly evolving and finding their place in various industries, from customer
service to personal assistants.
Module Projects:
For this module, we will build three AI agents that will highlight how you can
fuse function calling into an AI agent setup to create amazing AI apps capable
of running executable code based on user inputs.
This sub-section will guide you through building AI applications of increasing
complexity. We'll explore integrating LangChain with BabyAGI, AutoGPT,
and the OpenAI SDK in Node.js, focusing on creating autonomous agents that can
perform tasks independently and utilize external data.
Through these projects, you'll gain hands-on experience in building AI agents with
varying complexities. Starting from a basic setup, you'll progress to integrating
external data tools and finally create a location-aware suggestion system.
Designing the Agent: Map out the functionalities and user interactions for your AI
agent. This could range from a customer service bot to a more complex assistant
capable of handling various tasks.
Testing and Iteration: Test your AI agent with various scenarios and refine its
capabilities based on feedback and performance.
1. Setting Up:
Initialize a Node.js project and install LangChain with npm install langchain
@langchain/openai .
generateParisCulinaryItinerary();
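The full agent code is not reproduced here, but as a simplified stand-in for generateParisCulinaryItinerary you could start from a single ChatOpenAI call before layering BabyAGI on top; the prompt and temperature below are only illustrative.

import { ChatOpenAI } from "@langchain/openai";

// Simplified stand-in: a single chat-model call instead of a full BabyAGI agent loop.
async function generateParisCulinaryItinerary() {
  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0.7 });
  const response = await model.invoke(
    "Plan a one-day culinary itinerary in Paris with breakfast, lunch, and dinner stops."
  );
  console.log(response.content);
}

generateParisCulinaryItinerary();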
We will need the Serp API key for this tutorial. You can get yours here:
https://serpapi.com/users/sign_up?plan=free. They have a free plan that
should be okay for this tutorial.
1. Setting Up:
Initialize a Node.js project and install the required packages with npm install langchain
@langchain/openai @langchain/community .
Utilize tools like ReadFileTool , WriteFileTool , and SerpAPI for external data
interaction.
External data tools enhance the agent's ability to provide detailed and context-
rich itineraries.
runEnhancedAgent();
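A sketch of what runEnhancedAgent could look like with an external search tool is shown below. It uses LangChain's initializeAgentExecutorWithOptions and the SerpAPI tool (ReadFileTool and WriteFileTool can be added to the tools array in the same way); import paths and agent options vary between LangChain versions, so treat this as a starting point rather than the course's exact code.

import { ChatOpenAI } from "@langchain/openai";
import { SerpAPI } from "@langchain/community/tools/serpapi";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

async function runEnhancedAgent() {
  const model = new ChatOpenAI({ temperature: 0 });
  // SerpAPI gives the agent live web-search results to ground its itinerary.
  const tools = [new SerpAPI(process.env.SERPAPI_API_KEY)];

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "openai-functions",
  });

  const result = await executor.invoke({
    input: "Research current food events in Paris and draft a weekend culinary itinerary.",
  });
  console.log(result.output);
}

runEnhancedAgent();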
Create a new directory for your project and run npm init -y to initialize a
Node.js project.
Setting Up OpenAI: We configure the OpenAI SDK with the API key and define the
tools (functions) that the AI can suggest calling.
const aiTools = [
{
type: "function",
function: {
name: "fetchUserLocation",
description: "Determines user location based on IP",
parameters: {
type: "object",
properties: {},
},
},
},
];
const toolset = {
The agent function is where the magic happens. It communicates with OpenAI to
get suggestions on which function to call and then executes the suggested function.
      chatHistory.push({
        role: "function",
        name: functionName,
        content: `The result of the last function was this: ${JSON.stringify(
          functionResponse,
        )}`,
      });
    } else if (finish_reason === "stop") {
      chatHistory.push(message);
      return message.content;
    }
  }
  return "No conclusive suggestion could be generated. Please try again.";
}
Interact with the AI: The script will execute, and you should see food suggestions
based on the detected location in your console.
main();
Full code
const aiTools = [
{
type: "function",
function: {
name: "fetchUserLocation",
description: "Determines user location based on IP",
parameters: {
type: "object",
properties: {},
},
},
},
];
chatHistory.push({
main();
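Because the listing above is heavily excerpted, here is a condensed sketch of the whole agent loop. It uses the current tools interface of the OpenAI SDK (so it pushes role: "tool" messages rather than the role: "function" messages shown in the excerpt), and the fetchFoodSuggestions helper plus the ipapi.co lookup inside fetchUserLocation are assumptions added for illustration.

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Assumed helpers: the excerpt names fetchUserLocation; fetchFoodSuggestions is inferred
// from the project description and is a stand-in for a real food or restaurant API.
async function fetchUserLocation() {
  const res = await fetch("https://ipapi.co/json/"); // free IP-geolocation endpoint
  return res.json();
}

async function fetchFoodSuggestions({ city }) {
  return { city, suggestions: ["local street food", "a highly rated bistro"] };
}

const aiTools = [
  {
    type: "function",
    function: {
      name: "fetchUserLocation",
      description: "Determines user location based on IP",
      parameters: { type: "object", properties: {} },
    },
  },
  {
    type: "function",
    function: {
      name: "fetchFoodSuggestions",
      description: "Suggests food options for a given city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

const toolset = { fetchUserLocation, fetchFoodSuggestions };

async function agent(userInput) {
  const chatHistory = [{ role: "user", content: userInput }];

  // Cap the loop so a misbehaving conversation cannot run forever.
  for (let i = 0; i < 5; i++) {
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: chatHistory,
      tools: aiTools,
    });

    const { message, finish_reason } = completion.choices[0];

    if (finish_reason === "tool_calls") {
      chatHistory.push(message); // keep the assistant's tool request in context
      for (const toolCall of message.tool_calls) {
        const fn = toolset[toolCall.function.name];
        const args = JSON.parse(toolCall.function.arguments || "{}");
        const result = await fn(args);
        chatHistory.push({
          role: "tool",
          tool_call_id: toolCall.id,
          content: JSON.stringify(result),
        });
      }
    } else if (finish_reason === "stop") {
      return message.content;
    }
  }
  return "No conclusive suggestion could be generated. Please try again.";
}

async function main() {
  const answer = await agent("Suggest something to eat near me.");
  console.log(answer);
}

main();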
Conclusion
With the skills to build complex AI agents and utilize advanced AI techniques, the
possibilities are endless. If you want to build more advanced AI agents, check out the
links listed in the resources section.
Reflecting on your journey through this module, what kind of innovative AI agent do you
envision creating that could revolutionize the way we interact with technology? Feel free
to share with me your ideas on Discord.
Resources
Adversarial Attacks: These are efforts to fool AI models with deceptive data. It's
crucial to understand how these attacks can manipulate AI decision-making.
Data Poisoning: This involves corrupting the training data of an AI model. The
integrity of data is paramount, as poisoned data can lead to flawed or biased AI
outputs.
A good way to improve your security posture is to go through the resources on AI
security prepared by The Open Worldwide Application Security Project (OWASP). I have
linked some of these in the resources section of this module.
OWASP AI Security and Privacy Guide: Familiarize yourself with the OWASP guide for AI
security, focusing on risks related to AI supply chain attacks and vulnerabilities.
Identifying and Mitigating Biases: Bias in AI can stem from skewed data sets or
preconceived notions held by developers. Implementing diverse training datasets
and conducting regular bias audits are crucial.
Reduced Data Breach Risks: By processing data on local servers, the risk of
external data breaches is significantly lowered.
Conclusion
Additional Resources
1- OWASP Top 10 for LLM
2 - OWASP TOP 10 for Machine Learning security
3- OWASP AI Security and Privacy Guide
4- Six Steps Toward AI Security
5- Securing AI & LLM-based Applications: Best Practices
Module Checklists
Module 1: JavaScript for AI
Build a tool that analyzes and categorizes the sentiment of user input or social
media posts using AI models like BERT or GPT.
Develop a chatbot that can handle customer queries and provide relevant
information or redirect to human support.
Build a system that uses voice recognition to control home appliances and
settings.