Intelligent Agents
Rational agent
For each possible percept sequence,
a rational agent should select
an action expected to maximize its performance
measure, given the evidence provided by the
percept sequence and whatever built-in knowledge
the agent has.
E.g., taking an exam:
Maximize your marks, based on
the questions on the paper and your own knowledge
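The definition above can be sketched in code. A minimal illustration (the function names and the toy scoring are made up for this example, not a standard API): the agent scores every available action by its expected performance measure, given the percept sequence seen so far, and picks the best.

```python
# Hedged sketch: score each action by its expected performance measure,
# given the percept sequence, and select the maximizer.
def select_action(actions, percept_sequence, expected_performance):
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))

# Toy exam example: expected marks from each answering strategy.
def exam_score(action, percepts):
    scores = {"answer_known_first": 0.9, "answer_in_order": 0.6, "guess_all": 0.2}
    return scores[action]

best = select_action(["answer_known_first", "answer_in_order", "guess_all"],
                     ["question_paper"], exam_score)
# best == "answer_known_first"
```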
Autonomy
If an agent relies only on the prior knowledge of
its designer rather than on its own percepts, then
the agent lacks autonomy.
A rational agent should be autonomous: it
should learn what it can to compensate for
partial or incorrect prior knowledge.
E.g., a clock
No input (percepts)
Runs only on its own algorithm (prior knowledge)
No learning, no experience, etc.
Software Agents
Sometimes, the environment may not be
the real world
E.g., a flight simulator, video games, the Internet
These are all artificial but very complex
environments
Agents working in these environments
are called
software agents (softbots),
because all parts of the agent are software
Task environments
Task environments are the problems,
while rational agents are the solutions
Specifying the task environment:
give a PEAS description, as fully as possible
Performance measure
Environment
Actuators
Sensors
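A PEAS description can be written down as plain data. A minimal Python sketch using the classic automated-taxi example (the `PEAS` dataclass and the field values are illustrative, not a standard API):

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """Illustrative container for a PEAS task-environment description."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

# Automated taxi driver, described as fully as possible:
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
)
```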
Episodic vs. sequential
Episodic:
The agent's experience is divided into episodes,
and every episode is independent of the others
An episodic environment is simpler:
the agent does not need to think ahead
E.g., a part-picking robot
Sequential:
Requires memory of past actions to determine the next best action
E.g., taxi driving and chess
Properties of task environments
Static vs. dynamic
A dynamic environment changes
over time while the agent is deliberating
E.g., taxi driving (the number of people in the street keeps changing)
A static environment does not change
E.g., a crossword puzzle, the destination of the taxi
Properties of task environments
Single-agent vs. multi-agent environment
E.g., football is a multi-agent environment
Types of agents
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Simple reflex agents
Take decisions on the basis of the current percept,
ignoring the rest of the percept history
Succeed in fully observable environments
Limited knowledge
A Simple Reflex Agent in Nature
Percepts: (size, motion)
RULES:
(1) If small moving object,
then activate SNAP
(2) If large moving object,
then activate AVOID and inhibit SNAP
(3) ELSE (not moving), then NOOP
(needed for completeness)
Action: SNAP or AVOID or NOOP
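The frog's condition-action rules translate directly into a single function of the current percept. A minimal Python sketch (the percept encoding as a `(size, moving)` pair is an assumption made for this illustration):

```python
def frog_agent(percept):
    """Simple reflex agent: acts on the current percept only,
    with no memory of the percept history."""
    size, moving = percept
    if not moving:
        return "NOOP"    # rule (3): nothing moving, do nothing
    if size == "small":
        return "SNAP"    # rule (1): small moving object -> prey
    return "AVOID"       # rule (2): large moving object -> predator, inhibits SNAP

frog_agent(("small", True))   # -> "SNAP"
frog_agent(("large", True))   # -> "AVOID"
frog_agent(("large", False))  # -> "NOOP"
```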
Model-based Reflex Agents
For a world that is partially observable,
the agent has to keep track of an internal state
that depends on the percept history,
reflecting some of the unobserved aspects of the world
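A hypothetical sketch of this idea, using the familiar two-square vacuum world: once the agent moves away, a square's status is no longer observable, so the agent records which squares it has already cleaned as internal state built from the percept history. The class and action names are illustrative only.

```python
class ModelBasedVacuum:
    """Model-based reflex agent sketch: keeps an internal model
    (the set of squares known to be clean) for the unobserved parts
    of a two-square world with locations "A" and "B"."""
    def __init__(self):
        self.cleaned = set()   # internal state, built from the percept history

    def step(self, percept):
        location, dirty = percept          # only the current square is observable
        if dirty:
            self.cleaned.add(location)     # update the model, then act
            return "SUCK"
        other = "B" if location == "A" else "A"
        if other in self.cleaned:
            return "NOOP"                  # model says nothing left to do
        return "RIGHT" if other == "B" else "LEFT"

agent = ModelBasedVacuum()
agent.step(("A", True))    # -> "SUCK"
agent.step(("A", False))   # -> "RIGHT" (B's status unknown, go check)
```

The key difference from the simple reflex frog above: the same percept `("A", False)` can yield different actions depending on the state accumulated so far.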