
Chapter 3

Problem Solving (Goal-Based) Agent

• Problem solving by searching
• Problem formulation
• Search Strategies
• Avoiding repeated states



Problem Solving Agents
• Problem-solving agent
– A kind of “goal-based” agent
– Finds sequences of actions that lead to desirable (goal) states

• The algorithms in this chapter are uninformed
– They have no information about the problem other than its
definition
– No heuristics (rules of thumb) to guide the search



Goal Based Agent

[Diagram: goal-based agent architecture. Sensors deliver percepts from the
environment; the agent combines “what the world is like now” with knowledge
of how the world evolves and what its actions do to predict “what it will be
like if I do action A”; it compares that prediction against its goals to
decide what action to do now, and acts through its actuators.]


Goal Based Agent
function SIMPLE-PROBLEM-SOLVING-AGENT( percept ) returns action

  inputs: percept, a percept
  static: seq, an action sequence, initially empty
          state, some description of the current world
          goal, a goal, initially null
          problem, a problem formulation

  state <- UPDATE-STATE( state, percept )
  if seq is empty then do
      goal <- FORMULATE-GOAL( state )
      problem <- FORMULATE-PROBLEM( state, goal )
      seq <- SEARCH( problem )        # SEARCH
  action <- RECOMMENDATION( seq )     # SOLUTION
  seq <- REMAINDER( seq )
  return action                       # EXECUTION
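
As a rough, runnable sketch of the same control loop in Python (the helper
functions update_state, formulate_goal, formulate_problem, and search are
placeholders assumed to be supplied elsewhere, not part of the slides):

def simple_problem_solving_agent(percept, memory):
    # memory holds the "static" variables of the pseudocode:
    #   memory["seq"]   - remaining action sequence (initially [])
    #   memory["state"] - current description of the world
    memory["state"] = update_state(memory["state"], percept)
    if not memory["seq"]:                      # no plan left: formulate and search
        goal = formulate_goal(memory["state"])
        problem = formulate_problem(memory["state"], goal)
        memory["seq"] = search(problem)        # SEARCH
    action = memory["seq"][0]                  # SOLUTION: next recommended action
    memory["seq"] = memory["seq"][1:]          # keep the remainder of the plan
    return action                              # EXECUTION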



Goal Based Agents
• Assumes the problem environment is:
– Static
• The plan remains valid while it is carried out
– Observable
• The agent knows the initial state
– Discrete
• The agent can enumerate the available choices
– Deterministic
• The agent can plan a sequence of actions, each leading to a
predictable intermediate state

• The agent carries out its plan “with its eyes closed”
– It is certain of what is going on
– An open-loop system



Well-Defined Problems and Solutions
• A problem is defined by four components:
– Initial state
– Actions and a successor function
– Goal test
– Path cost
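
As a sketch, these four components can be captured in a small Python base
class (the class and method names below are illustrative choices, not a
standard library API):

class Problem:
    """A problem formulation: initial state, actions/successor function,
    goal test, and path cost."""

    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def successors(self, state):
        """Yield (action, next_state) pairs reachable from `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` satisfies the goal."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of the step costs."""
        return 1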



Example: Two Water Jug Problem

• You are at the side of a river. You are given an
m-liter jug and an n-liter jug, where 0 < m < n.
Both jugs are initially empty.
• The jugs have no markings that would allow
measuring smaller quantities.
• You have to use the jugs to measure d liters of
water, where d < n.
• Determine the minimum number of operations
needed to obtain exactly d liters of water in one
of the jugs.
Example: Water Pouring
• Initial state:
– Both jugs are empty
• The operations you can perform are:
– Empty a jug
– Fill a jug
– Pour water from one jug into the other until one of
the jugs is either empty or full



Example: Water Pouring
• Solution
• For example, suppose we have a jug J1 of 5 liters
(n = 5) and another jug J2 of 3 liters (m = 3), and
we have to measure 1 liter of water using them.
• The associated Diophantine equation is 5a + 3b = 1,
where a and b are the net numbers of times each jug is
filled (positive) or emptied (negative). For instance,
a = 2 and b = −3 works, since 5·2 + 3·(−3) = 10 − 9 = 1.



Example: Water Pouring
Steps
1. Fill the m-liter jug and pour it into the n-liter jug.
2. Whenever the m-liter jug becomes empty, fill it.
3. Whenever the n-liter jug becomes full, empty it.
4. Repeat steps 1–3 until either the n-liter jug or the m-liter
jug contains d liters of water.
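
These steps can also be carried out mechanically. Below is a hedged
breadth-first sketch in Python over (jug1, jug2) states that returns the
minimum number of operations (the function name and the state encoding are
choices made here, not taken from the slides):

from collections import deque

def min_pour_steps(m, n, d):
    """Fewest fill/empty/pour operations to get exactly d liters into
    either jug, starting from (0, 0).  Returns -1 if it is impossible."""
    start = (0, 0)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (a, b), steps = frontier.popleft()
        if a == d or b == d:
            return steps
        moves = [
            (m, b), (a, n),        # fill the m-liter / the n-liter jug
            (0, b), (a, 0),        # empty the m-liter / the n-liter jug
            # pour m-jug into n-jug, then n-jug into m-jug
            (a - min(a, n - b), b + min(a, n - b)),
            (a + min(b, m - a), b - min(b, m - a)),
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                frontier.append((state, steps + 1))
    return -1

For the example above, min_pour_steps(3, 5, 1) returns the minimum number of
operations needed to leave exactly 1 liter in one of the jugs.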



A unifying view
The problem space consists of:
• a state space which is a set of states
representing the possible configurations of the
world
• a set of operators which can change one state
into another
The problem space can be viewed as a graph
where the states are the nodes and the arcs
represent the operators.
Searching on a graph (Example)
Searching on a graph (simplified)
Start with the initial state (the root).
Loop until a goal is found:
  find the nodes accessible from the states reached so far and expand them.
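
In code form, this loop might look like the sketch below (it assumes a
problem object with the initial_state / successors / goal_test interface
sketched earlier; the search strategy is fixed only by how the frontier is
popped):

def simple_graph_search(problem, fifo=True):
    """Generic search skeleton.  Popping from the front of the frontier
    (fifo=True) gives breadth-first behaviour; popping from the back
    (fifo=False) gives depth-first behaviour."""
    frontier = [problem.initial_state]
    explored = set()
    while frontier:
        state = frontier.pop(0) if fifo else frontier.pop()
        if problem.goal_test(state):
            return state
        explored.add(state)
        for _action, next_state in problem.successors(state):
            if next_state not in explored and next_state not in frontier:
                frontier.append(next_state)
    return None  # the reachable part of the graph contains no goal
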
City Puzzle

[Figure: two 8-puzzle board configurations related by a single
“move blank” operation]

State space of the 8-puzzle generated
by “move blank” operations
State space search
Represented by a four-tuple [N, A, S, GD], where:
• N is the set of nodes (the problem space)
• A is the set of arcs (or links) between nodes; these
correspond to the operators
• S is a nonempty subset of N; it represents the start
state(s) of the problem
• GD is a nonempty subset of N; it represents the goal
state(s) of the problem. The states in GD are
described using either:
– a measurable property of the states, or
– a property of the path developed in the
search (a solution path is a path from
a node in S to a node in GD)
The 8-puzzle problem as state space search

• states: possible board positions


• operators: one for sliding each square in each
of four directions,
or, better, one for moving the blank square in
each of four directions
• initial state: some given board position
• goal state: some given board position
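
A hedged sketch of the “move blank” operator in Python, with the board
encoded as a tuple of 9 entries in row-major order and 0 standing for the
blank (this encoding is an assumption made here, not something fixed by the
slides):

def blank_moves(board):
    """Yield the boards reachable by sliding the blank (0) one square
    up, down, left, or right."""
    i = board.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            nxt = list(board)
            nxt[i], nxt[j] = nxt[j], nxt[i]   # swap the blank with its neighbour
            yield tuple(nxt)
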
State space of the 8-puzzle (repeated)
Uninformed Search Strategies

• Uninformed strategies use only the information
available in the problem definition
– Also known as blind search

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Comparing Uninformed Search
Strategies
• Completeness
– Will a solution always be found if one exists?
• Time
– How long does it take to find the solution?
– Often represented as the number of nodes searched
• Space
– How much memory is needed to perform the search?
– Often represented as the maximum number of nodes stored at
once
• Optimality
– Will the optimal (least-cost) solution be found?



Traveling salesperson problem as state space search

The salesperson has n cities to visit and must
then return home. Find the shortest route that
visits every city and returns home.
• state space: ordered sequences of distinct cities beginning at the home city (partial tours)
• operators: extend the current tour by travelling to an as-yet-unvisited city
• initial state: the home city, with no other cities visited
• goal state: a complete tour that visits every city exactly once and returns home
An instance of the traveling
salesperson problem
Search of the traveling salesperson problem.
(arc label = cost from root)
Nearest neighbor path

Nearest neighbor path = AEDBCA (cost 550)
Minimal cost path = ABCDEA (cost 375)
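
The nearest-neighbour tour is produced greedily: always drive to the closest
unvisited city, then return home. A sketch (the cost table is supplied by the
caller; the city graph from the figure is not reproduced here):

def nearest_neighbor_tour(start, cities, cost):
    """Greedy tour.  `cost[(a, b)]` must give the edge cost in both
    directions.  Returns (tour, total_cost); the tour is not
    guaranteed to be minimal."""
    tour, total = [start], 0
    unvisited = set(cities) - {start}
    current = start
    while unvisited:
        nxt = min(unvisited, key=lambda c: cost[(current, c)])
        total += cost[(current, nxt)]
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += cost[(current, start)]   # close the tour by returning home
    tour.append(start)
    return tour, total

As the comparison above shows, the greedy tour can cost noticeably more than
the minimal one.
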
Backtracking search
SL – state list: the states in the current path; the path keeps changing as
states are added and removed, until we get a path that ends in a goal.
NSL – new state list: nodes whose existence we have become aware of but
have not yet visited.
DE – dead ends: nodes proven not to lead to a goal.
CS – the current state.
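
The full list-based backtrack algorithm appears in the figure (not reproduced
here); the recursive sketch below keeps the same bookkeeping idea, with the
current path playing the role of SL and a dead-end set playing the role of DE:

def backtrack(state, goal, successors, path=None, dead_ends=None):
    """Depth-first backtracking search.  Returns a solution path from
    the start state to `goal`, or None if none exists."""
    if path is None:
        path, dead_ends = [state], set()
    if state == goal:
        return path
    for nxt in successors(state):
        if nxt in path or nxt in dead_ends:    # skip loops and known dead ends
            continue
        result = backtrack(nxt, goal, successors, path + [nxt], dead_ends)
        if result is not None:
            return result
    dead_ends.add(state)                       # no child leads to the goal
    return None
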
Trace of backtracking search (Fig. 3.12)
A trace of backtrack on the graph
(Fig. 3.12)
Graph for BFS and DFS (Fig. 3.13)
Breadth-first search algorithm
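
The algorithm itself is in a figure that is not reproduced here; a common
open/closed-list formulation looks roughly like this (successors is a
caller-supplied function returning the states reachable in one step):

from collections import deque

def breadth_first_search(start, goal, successors):
    """Breadth-first search.  Returns a shortest path (by number of
    steps) from start to goal, or None if the goal is unreachable."""
    open_list = deque([start])        # FIFO frontier ("open")
    parent = {start: None}            # records every generated state ("closed")
    while open_list:
        state = open_list.popleft()
        if state == goal:
            path = []
            while state is not None:  # walk parent links back to the start
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:     # not generated before
                parent[nxt] = state
                open_list.append(nxt)
    return None
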
Trace of BFS on the graph of Fig. 3.13
Depth-first search algorithm
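
Likewise for depth-first search, whose figure is not reproduced here; a
stack-based sketch with a closed set:

def depth_first_search(start, goal, successors):
    """Depth-first search.  Returns some path from start to goal,
    not necessarily the shortest, or None if none exists."""
    stack = [[start]]                 # stack of partial paths (LIFO frontier)
    closed = set()
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed:
                stack.append(path + [nxt])
    return None
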
Trace of DFS on the graph of Fig. 3.13
Graph of Fig. 3.13 at iteration 6 of DFS
Lessons From Iterative Deepening Search

• Faster than BFS even though IDS generates
repeated states
– BFS generates nodes up to level d+1
– IDS only generates nodes up to level d

• In general, iterative deepening search is the
preferred uninformed search method when
there is a large search space and the depth of the
solution is not known
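
A compact sketch of iterative deepening (depth-limited depth-first search
restarted with limits 0, 1, 2, ...); the cap max_depth is an arbitrary
safeguard chosen here:

def depth_limited_search(state, goal, successors, limit, path):
    """Depth-first search that refuses to go more than `limit` steps deep."""
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        if nxt not in path:           # avoid cycles along the current path
            found = depth_limited_search(nxt, goal, successors,
                                         limit - 1, path + [nxt])
            if found is not None:
                return found
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Run depth-limited search with increasing limits until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit, [start])
        if result is not None:
            return result
    return None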



Avoiding Repeated States

• Search can waste time by expanding states
that have already been encountered and expanded
before
– Failure to detect repeated states can turn a linear problem
into an exponential one

• Sometimes, repeated states are unavoidable
– Problems where the actions are reversible
• Route finding
• Sliding-block puzzles
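
A hedged route-finding sketch showing where the repeated-state check goes: a
visited set keeps the search from bouncing back and forth along reversible
roads (the road_map dict format is an assumption made here):

from collections import deque

def route_search(start, goal, road_map):
    """Breadth-first route finding on an undirected road map given as a
    dict mapping each city to a list of neighbouring cities."""
    frontier = deque([[start]])
    visited = {start}                 # repeated-state record
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in road_map.get(path[-1], []):
            if city not in visited:   # the repeated-state check
                visited.add(city)
                frontier.append(path + [city])
    return None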

