UNIT I PROBLEM SOLVING

• Introduction to AI - AI Applications - Problem solving agents – search
algorithms – uninformed search strategies – Heuristic search strategies –
Local search and optimization problems – adversarial search – constraint
satisfaction problems (CSP)

Objective: Study uninformed and heuristic search techniques.
Outcome: CO1 - Use appropriate search algorithms for problem solving.

Textbook: Stuart Russell and Peter Norvig, "Artificial Intelligence – A Modern Approach", Fourth Edition, Pearson Education, 2021.
AI system: Agents interact with environments through sensors and actuators.

An agent is a computer program or system that can perceive its environment, make decisions, and take actions to achieve goals.

AGENT - Perceiving its environment through sensors and acting upon that environment through actuators.
• A Human-agent has eyes, ears, and other organs which act as
sensors, and hands, legs, mouth, and other body parts act as
actuators.
• A software agent has keystrokes, file contents, and received network packets acting as sensors, and screen displays, files, and sent network packets acting as actuators.
• A Robotic agent has Cameras and infrared range finders which act as
sensors and various motors act as actuators.
• Gaming agents: These are agents that are designed to play games,
either against human opponents or other agents. Examples of gaming
agents include chess-playing agents and poker-playing agents.
Vacuum Cleaner and its actions

Condition-Action Rule − It is a rule that maps a state (condition) to an action.

Types of agents - based on their degree of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
• Multi-agent systems
• Hierarchical agents
Simple Reflex Agents - e.g. room cleaner

• The simplest agents.
• Take decisions on the basis of the current percept and ignore the rest of the percept history.
• Succeed only in a fully observable environment.
• Do not consider any part of the percept history during their decision and action process.
• Work on condition-action rules that map the current state to an action, e.g. a room-cleaner agent acts only if there is dirt in the room (a code sketch follows the limitations below).
Limitations
✔ Very limited intelligence.
✔ No knowledge of non-perceptual parts of the current state.
✔ Not adaptive to changes in the environment.
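A condition-action rule can be written directly as code. The following is a minimal sketch, assuming a hypothetical two-square vacuum world (the square names A/B and the (location, status) percept format are invented for illustration); it shows how a simple reflex agent maps the current percept to an action without using any percept history.

```python
# A minimal sketch of a simple reflex agent for a two-square vacuum world.
# The square names "A"/"B" and the percept format are assumptions made for
# illustration, not taken from the slides.

def reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept only."""
    location, status = percept           # e.g. ("A", "Dirty")
    if status == "Dirty":                # condition: dirt present -> clean it
        return "Suck"
    elif location == "A":                # otherwise move to the other square
        return "Right"
    else:
        return "Left"

# The agent keeps no percept history: the same percept always yields the same action.
print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left
```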
Model-based reflex agents - work in a partially observable environment and track the situation, e.g. a self-driving car.

A model-based agent has two important components:
• Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
• Internal State: a representation of the current state based on the percept history.

Updating the agent state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
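The model and internal state can be sketched as a small class. This is a minimal illustration under assumed placeholder names (the transition model and rule set are invented, not taken from the textbook); it shows the state being updated from the model, the last action, and the new percept, and then matched against condition-action rules.

```python
# A minimal sketch of a model-based reflex agent. The internal state, the
# transition model and the rules are illustrative placeholders only.

class ModelBasedReflexAgent:
    def __init__(self, transition_model, rules):
        self.state = {}                            # internal state: belief about the world
        self.transition_model = transition_model   # "how the world evolves"
        self.rules = rules                         # list of (condition, action) pairs
        self.last_action = None

    def act(self, percept):
        # Update the internal state using the model, the last action and the new percept.
        self.state = self.transition_model(self.state, self.last_action, percept)
        # Match the updated state against the condition-action rules.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"

# Example use with a trivial placeholder model and rule.
agent = ModelBasedReflexAgent(
    transition_model=lambda state, action, percept: {**state, "last_percept": percept},
    rules=[(lambda s: s.get("last_percept") == "obstacle ahead", "Brake")],
)
print(agent.act("obstacle ahead"))   # Brake
```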
Goal-based agents - an intelligent program that can make decisions based on previous experiences, knowledge, user input, and the desired goal.

⮚ Google's Waymo driverless cars are good examples of a goal-based agent: they are programmed with an end destination, or goal, in mind. The car will then "think" and make the right decisions in order to deliver the passenger where they intended to go.

⮚ A group of friends plan to go on a road trip. The goal-based agent distinguishes itself through its ability to find a solution according to the required output. The user provides the input and knows the expected output; thus, it is an example of supervised learning. The model performs the actions while keeping the goal state in perspective. The whole technique of the goal-based agent to reach a goal or final state is based on searching and planning: the agent searches for and develops the methodology that provides the easiest and most convenient pathway to reach the goal state.
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose actions so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive (see the search sketch below).
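The searching step can be illustrated with a short breadth-first search over a state graph. The road map, place names, and goal below are invented for the road-trip example; this is a sketch of the idea, not a full planner.

```python
# A minimal sketch of the searching idea behind a goal-based agent:
# breadth-first search over a state graph until the goal test succeeds.
from collections import deque

def search_to_goal(start, goal, successors):
    """Return a path of states from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:                 # goal test
            return path
        for nxt in successors(path[-1]):     # expand the current state
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Example: a tiny, made-up road map for the road-trip scenario.
roads = {"Home": ["CityA", "CityB"], "CityA": ["Beach"], "CityB": [], "Beach": []}
print(search_to_goal("Home", "Beach", lambda s: roads.get(s, [])))
# ['Home', 'CityA', 'Beach']
```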
Utility-based agents - e.g. a home thermostat that turns on heating or cooling based on the temperature.

⮚ Weigh factors like safety, time, and energy consumption to determine the best action.
⮚ Provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
⮚ A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
⮚ The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action to perform.
⮚ The utility function maps each state to a real number to check how efficiently each action achieves the goals (see the sketch below).
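As a rough sketch, the utility function and action selection might look like the following; the weights, factor names, and candidate routes are assumptions made up for illustration, loosely following the driving example above.

```python
# A minimal sketch of utility-based action selection: the utility function
# maps each resulting state to a real number, and the agent chooses the
# action whose outcome has the highest utility. All values are invented.

def utility(state):
    """Weighted combination of factors such as safety, time and energy use."""
    return 3.0 * state["safety"] - 1.0 * state["time"] - 0.5 * state["energy"]

def choose_action(actions, result):
    """Pick the action leading to the state with maximum utility."""
    return max(actions, key=lambda a: utility(result(a)))

# Example with made-up outcome states for two candidate routes.
outcomes = {
    "fast_route": {"safety": 0.6, "time": 1.0, "energy": 1.2},
    "safe_route": {"safety": 0.9, "time": 1.5, "energy": 1.0},
}
print(choose_action(list(outcomes), lambda a: outcomes[a]))   # safe_route
```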
Learning Agents - e.g. a spam filter. Learning agents are able to learn, analyze performance, and look for new ways to improve performance.
A learning agent can learn from its past experiences, i.e. it has learning capabilities. It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has mainly four conceptual components (sketched in code below):
• Learning element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
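The four components can be sketched as methods of one class, phrased for the spam-filter example. The class name, labels, and the trivial memorising update are illustrative placeholders, not a real spam filter.

```python
# A minimal sketch of the four conceptual components of a learning agent.

class LearningAgent:
    def __init__(self):
        self.knowledge = {}                       # knowledge used by the performance element

    def performance_element(self, message):
        # Select an external action ("spam" / "not spam") using current knowledge.
        return self.knowledge.get(message, "not spam")

    def critic(self, action, correct_label):
        # Compare the action with a fixed performance standard (the true label).
        return action == correct_label

    def learning_element(self, message, correct_label):
        # Improve the knowledge used by the performance element.
        self.knowledge[message] = correct_label

    def problem_generator(self):
        # Suggest exploratory actions that lead to new and informative experiences.
        return "ask the user to label an unusual message"

agent = LearningAgent()
action = agent.performance_element("win a prize now")
if not agent.critic(action, "spam"):              # feedback from the critic
    agent.learning_element("win a prize now", "spam")
print(agent.performance_element("win a prize now"))   # spam
```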
Agent Environment in AI
• An environment is everything in the world that surrounds the agent, but it is not part of the agent itself. An environment can be described as a situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
• Fully observable vs Partially Observable
Chess – the board is fully observable, and so are the opponent’s moves
Driving – the environment is partially observable because what’s around the corner
is not known.
• Static vs Dynamic
Crossword puzzles – static ; taxi driving, roller coaster -
dynamic
• Discrete vs Continuous
chess game(finite number of moves) – discrete ; self driving car-
continuous
Agent Environment in AI
• Deterministic vs Stochastic
Chess – there would be only a few possible moves for a piece at the current state, and these moves can be determined.
Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
• Single-agent vs Multi-agent
A person left alone in a maze – single agent
Football game – 11 players per side; traffic simulation – multiple cars and lights
• Episodic vs Sequential
Pick-and-place robot – episodic; chess game – sequential
• Known vs Unknown
Known – the outcomes of all actions are known to the agent; Unknown – the agent needs to learn how the environment works. (A known environment can still be partially observable, and an unknown one can be fully observable.)
• Accessible vs Inaccessible
An empty room whose state can be defined by its temperature – accessible
Autonomous cars – inaccessible
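For reference, the example tasks above can be collected into a small table of properties. The entries below only restate the classifications given on these slides and are not an exhaustive taxonomy.

```python
# The example environments above, restated as a small property table.
environments = {
    "chess":                {"observable": "fully", "deterministic": True,
                             "discrete": True, "episodic": False},
    "self-driving car":     {"observable": "partially", "deterministic": False,
                             "discrete": False, "dynamic": True},
    "crossword puzzle":     {"dynamic": False},
    "pick-and-place robot": {"episodic": True},
}
for task, properties in environments.items():
    print(f"{task}: {properties}")
```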
• PEAS is an agent representation system that helps define the key aspects
of an artificial intelligence (AI) system. PEAS is a framework used to
define the essential components that shape the behavior of an AI agent
in its environment.
PEAS stands for:
• Performance measure: Defines the success of an agent and evaluates the criteria that determine whether the system performs well. For example, if it is a robot that plays chess, the performance measure is to win the game.
• Environment: Helps define the task environment for an intelligent
agent. For example, if a Roomba vacuum is cleaning up dirt and dust on
the floor while you're away at work, the environment can be described as
carpet, hard flooring, rug, table, couch, dirt, stairs (edge), and so on.
• Actuators: Agents interact with their surroundings through actuators.
• Sensors: Agents experience their surroundings through sensors.
• For driverless cars, the PEAS components are as follows:
Performance Measure: Safe navigation, comfortable trip, efficient route planning.
Environment: Roads, traffic, pedestrians, weather conditions.
Actuators: Steering, accelerator, brakes.
Sensors: Cameras, GPS, radar.
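The same PEAS description can be written down as a plain data structure, which is convenient when specifying an agent. This is only a restatement of the list above, not an API from any particular library.

```python
# The PEAS description of the driverless-car example as a simple dictionary.
peas_driverless_car = {
    "Performance measure": ["safe navigation", "comfortable trip", "efficient route planning"],
    "Environment":         ["roads", "traffic", "pedestrians", "weather conditions"],
    "Actuators":           ["steering", "accelerator", "brakes"],
    "Sensors":             ["cameras", "GPS", "radar"],
}
for component, values in peas_driverless_car.items():
    print(f"{component}: {', '.join(values)}")
```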
AI definitions in four categories
Acting Humanly: The Turing Test approach – acting like a human.
Thinking Humanly: The cognitive modeling approach – thinking like a person, e.g. modeling the thinking of Roger Federer to build a system that can compete against him in a tennis game.
Thinking Rationally: The laws of thought approach – thinking rationally is a logical process that draws conclusions based on symbolic logic.
Acting Rationally: The rational agent approach – a rational agent acts to achieve high value and bring about the best possible outcome for any given task, e.g. a chess player.
• Turing defined intelligent behavior as the ability to
achieve human-level performance in all cognitive tasks,
sufficient to fool an interrogator.
• It is a test to see if a computer can learn to mimic
human behavior.
Laws of thought approach
• Aristotle's syllogisms provided patterns for argument structures that always yield correct conclusions when given correct premises.
• A famous example, “Socrates is a man; all men are
mortal; therefore, Socrates is mortal.”
• Another example – All TVs use energy; Energy always
generates heat; therefore, all TVs generate heat.”
Alan Turing – 1950: "Can Machines Think?"
Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can machines think?"
Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator is aware that one of them is a machine, but needs to identify which one on the basis of questions and their responses.
The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.
The test result does not depend on each answer being correct, but only on how closely the responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
Chatbots claimed to have passed the Turing test
• ELIZA, PARRY, Eugene Goostman
• In 1980, John Searle presented the "Chinese Room" thought experiment in his paper "Minds, Brains, and Programs," which argued against the validity of the Turing Test. According to his argument, "programming a computer may make it appear to understand a language, but it will not produce a real understanding of language or consciousness in a computer."
Features required for a machine to pass the Turing test:

• Natural language processing: NLP is required to communicate with the interrogator in a general human language like English.
• Knowledge representation: To store and retrieve information during the test.
• Automated reasoning: To use the previously stored information for answering the questions.
• Machine learning: To adapt to new changes and detect generalized patterns.
• Vision (for the total Turing test): To recognize the interrogator's actions and other objects during the test.
• Motor control (for the total Turing test): To act upon objects if requested.
