
Course Name: Introduction to Artificial Intelligence

Topic: AI Agents and Environment


Prepared By: Ms. Harshil Sharma, Assistant Professor
Department of FCE
TOPICS TO BE COVERED

 AI agents
 Examples of AI Agents
 Types of AI Agents
 Environment
 Types of Environment
AI AGENTS

 In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals.
 For example, self-driving cars navigate around obstacles on the road based on data from multiple sensors.
AI AGENTS
 Agents could be anything that makes decisions, such as a person, firm, machine, or software.
 An agent carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
 An AI system is composed of an agent and its environment. Agents act in their environment, and the environment may contain other agents.
AI AGENTS

[Figure: an agent perceiving its environment through sensors and acting on it through actuators]
AI AGENTS
An agent is anything that can be viewed as:

 Perceiving its environment through sensors, and
 Acting upon that environment through actuators
RULES FOR AI AGENTS

 Rule 1: An AI agent must have the ability to perceive the environment.
 Rule 2: The observations must be used to make decisions.
 Rule 3: The decisions should result in an action.
 Rule 4: The action taken must be a rational one, i.e., the action with the best outcome.
STRUCTURE OF AN AGENT

 Agent = Architecture + Agent Program
 Architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example, a robotic car, a camera, or a PC.
 An agent program is an implementation of an agent function.
 An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.
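
A minimal sketch of this mapping (illustrative Python; the percept format and the rule inside agent_function are invented for the example, not taken from the slides):

# The architecture calls agent_program once per step with the latest
# percept; the agent function maps the full percept sequence to an action.
percept_sequence = []

def agent_function(percepts):
    # Placeholder rule: decide from the most recent percept.
    location, status = percepts[-1]
    return "Suck" if status == "Dirty" else "Move"

def agent_program(percept):
    percept_sequence.append(percept)        # keep the percept history
    return agent_function(percept_sequence)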
EXAMPLES OF AGENTS
 A software agent has keystrokes, file contents, and received network packets acting as sensors, and displays on the screen, files, and sent network packets acting as actuators.
 Examples:
 Antivirus Software: Monitors file changes and network activity to detect threats.
 Chatbots: Process text input to provide customer support.
EXAMPLES OF AGENTS
 A human agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other body parts which act as actuators.
 Doctors: Diagnose diseases using sensors (eyes, hands) and treat patients using actuators (surgical tools).
 Pilots: Use vision, hearing, and tactile feedback to operate airplanes.
EXAMPLES OF AGENTS
 A robotic agent has cameras and infrared range finders which act as sensors, and various motors which act as actuators.
 Examples:
 Self-Driving Cars: Use sensors and cameras to navigate roads autonomously.
 Robotic Vacuum Cleaners
EXAMPLES OF AGENTS
Intelligent Personal Assistants

 Sensors: Microphones (to hear commands), touch inputs on devices.
 Actuators: Speakers (to respond), notifications on screens.
 Examples:
 Siri: Helps with scheduling, searching information, and managing tasks on Apple devices.
 Alexa: Controls smart home devices, plays music, and provides weather updates.
 Google Assistant: Assists with setting reminders, providing directions, and interacting with smart devices.
TYPES OF AGENTS
 Agents can be grouped into five classes based on their degree of perceived intelligence and capability.
 All these agents can improve their performance and generate better actions over time.
 These are given below:
 Simple Reflex Agent
 Model-based reflex agent
 Goal-based agents
 Utility-based agent
 Learning agent
SIMPLE REFLEX AGENTS
 Simple reflex agents are the simplest agents.
 These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
 These agents succeed only in a fully observable environment.
 A simple reflex agent works on condition-action rules, which means it maps the current state directly to an action. For example, a room-cleaner agent works only if there is dirt in the room.
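
A minimal sketch of such condition-action rules for a two-location vacuum world (illustrative Python; the locations "A" and "B" and the percept format are assumptions made for the example):

# Simple reflex agent: decides from the current percept only,
# keeping no memory of the percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # condition -> action
        return "Suck"
    elif location == "A":
        return "Right"
    else:                      # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left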
SIMPLE REFLEX AGENTS
Problems with simple reflex agents:

 Very limited intelligence.
 No knowledge of the non-perceptual parts of the state.
 If any change occurs in the environment, the collection of rules needs to be updated.
MODEL-BASED REFLEX AGENTS
• The model-based agent can work in a partially observable environment and track the situation.
• A model-based agent has two important factors:
• Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
• Internal State: a representation of the current state based on the percept history.
MODEL-BASED REFLEX AGENTS
• These agents have a model, which is knowledge of the world, and they perform actions based on the model.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's actions affect the world
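
A minimal sketch of this bookkeeping (illustrative Python; the state fields and the single condition-action rule are invented for the example):

# Model-based reflex agent: maintains an internal state updated from the
# percept history, using a model of how the world evolves and how the
# agent's own actions affect it.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # internal representation of the world
        self.last_action = None

    def update_state(self, percept):
        # Model step: record the effect of our last action, then fold
        # the new observations into the internal state.
        self.state["last_action"] = self.last_action
        self.state.update(percept)

    def __call__(self, percept):
        self.update_state(percept)
        # Condition-action rule over the internal state (placeholder).
        self.last_action = "Brake" if self.state.get("obstacle_ahead") else "Cruise"
        return self.last_action

agent = ModelBasedReflexAgent()
print(agent({"obstacle_ahead": True}))   # -> Brake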
MODEL-BASED REFLEX AGENTS
 Self-Driving Cars: Use sensors to understand current road conditions and predict other vehicles' behaviors using a model of traffic rules.
GOAL-BASED AGENTS
 Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
 They choose actions so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved.
 Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
GOAL-BASED AGENTS
 GPS Navigation System: Plans routes to achieve the goal of reaching the destination.
 Chess AI: Aims to checkmate the opponent, evaluating moves based on achieving that goal.
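
The route-planning example can be sketched as a search over a road map (illustrative Python; the map, place names, and the breadth-first strategy are assumptions made for the example):

# Goal-based behavior as search: consider whole sequences of actions
# (roads) and return one that reaches the goal.
from collections import deque

road_map = {                      # hypothetical road map
    "Home": ["A", "B"],
    "A": ["Home", "Mall"],
    "B": ["Home", "Mall", "Airport"],
    "Mall": ["A", "B", "Airport"],
    "Airport": ["B", "Mall"],
}

def plan_route(start, goal):
    # Breadth-first search: returns a shortest path from start to goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in road_map[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                   # goal unreachable

print(plan_route("Home", "Airport"))  # -> ['Home', 'B', 'Airport']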
UTILITY-BASED AGENTS
 These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success at a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve the goal.
 A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
UTILITY-BASED AGENTS
 The utility function maps each state to a real number to check how efficiently each action achieves the goals.
 Examples:
 Autonomous Cars: Deciding whether to brake or change lanes when encountering slow-moving traffic.
 Amazon: Recommending items that maximize user satisfaction and revenue.
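
A minimal sketch of utility-based choice (illustrative Python; the states, actions, outcome model, and utility numbers are invented for the example):

# Utility-based selection: the utility function maps each resulting
# state to a real number; the agent picks the action whose predicted
# outcome has the highest utility.
utility = {
    "arrived_fast_unsafe": 0.2,
    "arrived_slow_safe":   0.7,
    "arrived_fast_safe":   1.0,
}

outcomes = {                      # predicted action -> resulting state
    "overtake":               "arrived_fast_unsafe",
    "stay_in_lane":           "arrived_slow_safe",
    "change_lane_when_clear": "arrived_fast_safe",
}

def choose_action(actions):
    return max(actions, key=lambda a: utility[outcomes[a]])

print(choose_action(list(outcomes)))  # -> change_lane_when_clear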
ENVIRONMENT
 An environment in artificial intelligence is the surroundings of the agent.
 The agent takes input from the environment through sensors and delivers output to the environment through actuators.
 The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
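
This sense-decide-act cycle is often written as a simple loop (illustrative Python; env.percept, env.execute, and the agent callable are assumed interfaces, not a specific library):

# The agent-environment interaction loop.
def run(agent, env, steps=10):
    for _ in range(steps):
        percept = env.percept()   # sense: environment -> sensors
        action = agent(percept)   # decide: agent program picks an action
        env.execute(action)       # act: actuators change the environment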
TYPES OF ENVIRONMENT
1. Fully Observable vs Partially Observable
2. Deterministic vs Stochastic
3. Single-agent vs Multi-agent
4. Static vs Dynamic
5. Discrete vs Continuous
6. Episodic vs Sequential
7. Known vs Unknown
FULLY OBSERVABLE VS PARTIALLY OBSERVABLE
 Fully Observable Environment: An environment where the agent has access to complete information about the entire state of the environment at each point in time.
 Example: Chess - all the pieces and their positions are visible to both players, and no information is hidden.
FULLY OBSERVABLE VS PARTIALLY OBSERVABLE
 Partially Observable Environment: An environment where the agent can only access partial or limited information about the current state, leading to uncertainty in decision-making.
 Example: Driving a car - the driver may have limited visibility due to fog or blind spots, making the environment partially observable.
FULLY OBSERVABLE VS PARTIALLY OBSERVABLE
 Acting in a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
 An unobservable environment is one in which the agent has no sensors.
 Example: Blindfolded Games - a player makes moves without seeing the game board.
DETERMINISTIC VS STOCHASTIC
 Deterministic Environment: An environment where the next state of the system is completely determined by the current state and the agent's actions.
 There is no randomness or uncertainty involved.
 Example: Chess - there are only a limited number of possible moves for a chess piece in the current state, and these moves can be determined exactly.
DETERMINISTIC VS STOCHASTIC
 Stochastic Environment: An environment where the next state is determined not only by the current state and actions but also by some element of randomness or uncertainty.
 Example: Self-Driving Cars - the outcomes of a self-driving car's actions are not unique; they vary from time to time.
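
The contrast can be sketched with two tiny transition functions (illustrative Python; the numeric state and the noise model are invented for the example):

import random

# Deterministic: the next state follows from state and action alone.
def deterministic_step(state, action):
    return state + action

# Stochastic: the same state and action can yield different next states.
def stochastic_step(state, action):
    noise = random.choice([-1, 0, 1])   # element of randomness
    return state + action + noise

print(deterministic_step(5, 2))  # always 7
print(stochastic_step(5, 2))     # 6, 7, or 8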
SINGLE-AGENT VS MULTI-AGENT
 Single-agent environment: An environment in which only one agent is involved, and it operates by itself.
 In a single-agent environment, the agent doesn't need to consider the actions or decisions of other entities.
 Example: Solitaire

 Multi-agent environment: An environment in which multiple agents are operating.
 Example: Football game
STATIC VS DYNAMIC
 Static environment: An environment that does not change while the agent is performing actions.
 Example: Puzzle solving, such as sudoku and crosswords

 Dynamic environment: An environment that keeps changing while the agent is acting.
 Example: Self-driving cars, stock market trading
DISCRETE VS CONTINUOUS
 Discrete environment: Consists of a finite number of actions that can be performed in the environment to obtain the output.
 Example: The game of chess is discrete, as it has only a finite number of moves.
 Continuous environment: Consists of actions performed on the environment that cannot be enumerated, i.e., it is not discrete.
 Example: Self-driving cars
EPISODIC VS SEQUENTIAL
 In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.
 For example, an image-classification task: each image is classified independently of the previous ones.
 In a sequential environment, an agent requires memory of past actions to determine the next best action.
 For example, chess and taxi driving are sequential.
KNOWN VS UNKNOWN
 In a known environment, the results of all actions are known to the agent.
 In an unknown environment, an agent needs to learn how the environment works in order to perform an action.
REFERENCE BOOK
 Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Fourth Edition.
Thank You!!
