Introduction to Artificial Intelligence
Informed Search
Lecturer: Dr. Igli Hakrama
University Metropolitan Tirana
(slides adapted from University of California, Berkeley)
Today
o Informed Search
o Heuristics
o Greedy Search
o A* Search
o Graph Search
Recap: Search
Recap: Search
o Search problem:
o States (configurations of the world)
o Actions and costs
o Successor function (world dynamics)
o Start state and goal test
o Search tree:
o Nodes: represent plans for reaching states
o Plans have costs (sum of action costs)
o Search algorithm:
o Systematically builds a search tree
o Chooses an ordering of the fringe (unexplored
nodes)
o Optimal: finds least-cost plans
Up next: Informed Search
o Uninformed Search
o DFS
o BFS
o UCS
▪ Informed Search
▪ Heuristics
▪ Greedy Search
▪ A* Search
▪ Graph Search
Example: Pancake Problem
Cost: Number of pancakes flipped
Example: Pancake Problem
Example: Pancake Problem
State space graph with costs as weights
General Tree Search
Action: flip top two, cost 2
Action: flip all four, cost 4
Path to reach goal: flip four, flip three
Total cost: 7
The One Queue
o All these search algorithms
are the same except for
fringe strategies
o Conceptually, all fringes are
priority queues (i.e. collections
of nodes with attached
priorities)
o Practically, for DFS and BFS,
you can avoid the log(n)
overhead from an actual
priority queue, by using stacks
and queues
o Can even code one implementation that takes a variable queuing object
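As a rough illustration of that point, here is a minimal Python sketch of one generic search loop whose behavior is fixed entirely by the priority placed on fringe nodes. The problem interface (get_start_state, is_goal, get_successors) is a hypothetical one chosen for the example, not a fixed API.

    import heapq

    def generic_search(problem, priority_fn):
        """priority_fn(path_cost, depth) -> number used to order the fringe."""
        fringe = [(priority_fn(0, 0), 0, problem.get_start_state(), [], 0)]
        tie = 0                          # tie-breaker so the heap never compares states
        while fringe:
            _, _, state, plan, cost = heapq.heappop(fringe)
            if problem.is_goal(state):
                return plan
            for action, next_state, step_cost in problem.get_successors(state):
                tie += 1
                new_cost = cost + step_cost
                heapq.heappush(fringe, (priority_fn(new_cost, len(plan) + 1),
                                        tie, next_state, plan + [action], new_cost))
        return None                      # fringe exhausted: no plan found

    # Different strategies are just different priorities, e.g.:
    #   ucs = lambda cost, depth: cost     # lowest path cost first
    #   bfs = lambda cost, depth: depth    # shallowest first
    # (In practice DFS and BFS use a plain stack / queue instead,
    #  avoiding the heap's log(n) overhead, exactly as the slide notes.)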
Uninformed Search
Uniform Cost Search
o Strategy: expand lowest path cost
[figure: UCS expands the search tree in increasing cost contours c1, c2, c3]
o The good: UCS is complete and
optimal!
o The bad:
o Explores options in every “direction”
o No information about goal location
[Demo: contours UCS empty (L3D1)]
Video of Demo Contours UCS Empty
Video of Demo Contours UCS Pacman Small Maze
Informed Search
Search Heuristics
A heuristic is:
A function that estimates how close a state is to a goal
Designed for a particular search problem
Examples: Manhattan distance, Euclidean distance for
pathing
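As a small sketch of the two example heuristics just mentioned, assuming a state is an (x, y) grid position (an assumption for illustration):

    import math

    def manhattan_heuristic(state, goal):
        # Sum of horizontal and vertical offsets; ignores walls, so on a grid
        # with unit step costs it never overestimates the true path length.
        return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

    def euclidean_heuristic(state, goal):
        # Straight-line distance; never larger than the Manhattan estimate.
        return math.hypot(state[0] - goal[0], state[1] - goal[1])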
Example: Heuristic Function
[figure: table of heuristic values h(x) for each state]
Example: Heuristic Function
Heuristic: the number of the largest pancake that is still out of place
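A minimal sketch of this heuristic, assuming a state is a tuple of pancake sizes listed from top to bottom (one possible representation, chosen only for illustration):

    def largest_out_of_place(state):
        # Size of the largest pancake that is not yet in its goal position.
        n = len(state)
        goal = tuple(range(1, n + 1))       # sizes 1..n, smallest on top
        h = 0
        for i, size in enumerate(state):
            if size != goal[i]:
                h = max(h, size)
        return h

    # Example: with 4 pancakes, the goal (1, 2, 3, 4) scores 0, while
    # (3, 2, 1, 4) scores 3 -- pancake 4 is already in place, pancake 3 is not.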
Greedy Search
Example: Heuristic Function
Greedy Search
o Expand the node that seems closest…
o What can go wrong?
Greedy Search
o Strategy: expand a node that you think is closest to a goal state
o Heuristic: estimate of distance to nearest goal for each state
o A common case:
o Best-first takes you straight to the (wrong) goal
o Worst-case: like a badly-guided DFS
[Demo: contours greedy empty (L3D1)]
Video of Demo Contours Greedy (Empty)
Video of Demo Contours Greedy (Pacman Small Maze)
A* Search
A* Search
[figure: UCS and Greedy, combined into A*]
Combining UCS and Greedy
o Uniform-cost orders by path cost, or backward cost g(n)
o Greedy orders by goal proximity, or forward cost h(n)
[figure: example state space graph with forward estimates h at each node, and the corresponding search tree with backward costs g]
o A* Search orders by the sum: f(n) = g(n) + h(n)
Example: Teg Grenager
When should A* terminate?
o Should we stop when we enqueue a goal?
[figure: S connects to A and B with cost 2 each; A reaches G with cost 2, B reaches G with cost 3; h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0]
o No: only stop when we dequeue a goal
o In this example the goal is first enqueued via B (total cost 5) while the cheaper path through A (total cost 4) is still on the fringe; only dequeuing the goal guarantees no cheaper path remains
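Putting the last two slides together, here is a minimal A* tree-search sketch that orders the fringe by f(n) = g(n) + h(n) and applies the goal test only when a node is dequeued. The problem interface and heuristic signature are assumptions for illustration, as in the earlier sketch.

    import heapq

    def a_star_tree_search(problem, heuristic):
        start = problem.get_start_state()
        fringe = [(heuristic(start), 0, 0, start, [])]   # (f = g + h, tie, g, state, plan)
        tie = 0
        while fringe:
            _, _, g, state, plan = heapq.heappop(fringe)
            # Stop only when a goal is dequeued: when a goal is merely enqueued,
            # a cheaper path to it may still be waiting on the fringe.
            if problem.is_goal(state):
                return plan
            for action, next_state, step_cost in problem.get_successors(state):
                tie += 1
                new_g = g + step_cost
                heapq.heappush(fringe, (new_g + heuristic(next_state),
                                        tie, new_g, next_state, plan + [action]))
        return None

    # Greedy search is the same loop with priority heuristic(next_state) alone;
    # UCS is the same loop with priority new_g alone.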
Is A* Optimal?
[figure: S →(1)→ A →(3)→ G, plus a direct edge S →(5)→ G; h(S) = 7, h(A) = 6, h(G) = 0]
o What went wrong?
o Actual bad goal cost < estimated good goal cost
o We need estimates to be less than actual costs!
Admissible Heuristics
Idea: Admissibility
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe
Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs
Admissible Heuristics
o A heuristic h is admissible (optimistic) if:
0 ≤ h(n) ≤ h*(n)
where h*(n) is the true cost to a nearest goal
o Examples: [figure: two example states with heuristic values 4 and 15]
o Coming up with admissible heuristics is most of
what’s involved in using A* in practice.
Optimality of A* Tree Search
Optimality of A* Tree Search
Assume:
o A is an optimal goal node
o B is a suboptimal goal node
o h is admissible
Claim:
o A will exit the fringe before B
Optimality of A* Tree Search: Blocking
Proof:
o Imagine B is on the fringe …
o Some ancestor n of A is on the
fringe, too (maybe A!)
o Claim: n will be expanded before B
1. f(n) is less than or equal to f(A):
   f(n) = g(n) + h(n) (definition of f-cost)
   f(n) ≤ g(A) (admissibility of h: h(n) ≤ h*(n), and n lies on an optimal path to A)
   g(A) = f(A) (h = 0 at a goal)
Optimality of A* Tree Search: Blocking
Proof:
o Imagine B is on the fringe …
o Some ancestor n of A is on the
fringe, too (maybe A!)
o Claim: n will be expanded before B
1. f(n) is less than or equal to f(A)
2. f(A) is less than f(B):
   g(A) < g(B) (B is suboptimal)
   f(A) < f(B) (h = 0 at a goal, so f(A) = g(A) and f(B) = g(B))
Optimality of A* Tree Search: Blocking
Proof:
o Imagine B is on the fringe …
o Some ancestor n of A is on the
fringe, too (maybe A!)
o Claim: n will be expanded before B
1. f(n) is less than or equal to f(A)
2. f(A) is less than f(B)
3. n expands before B
o All ancestors of A expand before B
o A expands before B
o A* search is optimal
Properties of A*
Properties of A*
Uniform-Cost A*
UCS vs A* Contours
o Uniform-cost expands equally in all “directions”
o A* expands mainly toward the goal, but does hedge its bets to ensure optimality
[Demo: contours UCS / greedy / A* empty (L3D1)]
[Demo: contours A* pacman small maze (L3D5)]
Video of Demo Contours (Empty) -- UCS
Video of Demo Contours (Empty) -- Greedy
Video of Demo Contours (Empty) – A*
Video of Demo Contours (Pacman Small Maze) – A*
Comparison
Greedy Uniform Cost A*
A* Applications
A* Applications
o Video games
o Pathing / routing problems
o Resource planning problems
o Robot motion planning
o Language analysis
o Machine translation
o Speech recognition
o…
[Demo: UCS / A* pacman tiny maze (L3D6,L3D7)]
[Demo: guess algorithm Empty Shallow/Deep (L3D8)]
Video of Demo Pacman (Tiny Maze) – UCS / A*
Video of Demo Empty Water Shallow/Deep – Guess Algorithm
Creating Heuristics
Creating Admissible Heuristics
o Most of the work in solving hard search problems optimally
is in coming up with admissible heuristics
o Often, admissible heuristics are solutions to relaxed
problems, where new actions are available
o Inadmissible heuristics are often useful too
Example: 8 Puzzle
Start State Actions Goal State
o What are the states?
o How many states?
o What are the actions?
o How many successors from the start
state?
o What should the costs be?
8 Puzzle I
o Heuristic: Number of tiles
misplaced
o Why is it admissible?
o h(start) = 8
o This is a relaxed-problem heuristic
Average nodes expanded when the optimal path has…
        …4 steps    …8 steps    …12 steps
UCS     112         6,300       3.6 x 10^6
TILES   13          39          227
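A small sketch of this heuristic, assuming the board is a flat tuple of length 9 with 0 standing for the blank (one representation among many, used only for illustration):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def misplaced_tiles(state, goal=GOAL):
        # The blank (0) is not a tile, so it is never counted as misplaced.
        return sum(1 for tile, target in zip(state, goal)
                   if tile != 0 and tile != target)

    # e.g. misplaced_tiles((1, 2, 3, 4, 5, 6, 0, 7, 8)) == 2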
8 Puzzle II
o What if we had an easier 8-puzzle
where any tile could slide any
direction at any time, ignoring
other tiles?
o Total Manhattan distance
o Why is it admissible?
o h(start) = 3 + 1 + 2 + … = 18

Average nodes expanded when the optimal path has…
            …4 steps    …8 steps    …12 steps
TILES       13          39          227
MANHATTAN   12          25          73
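A matching sketch for the total Manhattan distance, on the same flat-tuple board with 0 as the blank (again an assumed representation); it is the exact cost of the relaxed puzzle where tiles slide through each other, which is why it is admissible.

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    GOAL_POS = {tile: (i // 3, i % 3) for i, tile in enumerate(GOAL) if tile != 0}

    def total_manhattan(state):
        h = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue                    # the blank contributes nothing
            row, col = divmod(i, 3)
            goal_row, goal_col = GOAL_POS[tile]
            h += abs(row - goal_row) + abs(col - goal_col)
        return h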
8 Puzzle III
o How about using the actual cost as a heuristic?
o Would it be admissible?
o Would we save on nodes expanded?
o What’s wrong with it?
o With A*: a trade-off between quality of estimate and work
per node
o As heuristics get closer to the true cost, you will expand fewer
nodes but usually do more work per node to compute the
heuristic itself
Semi-Lattice of Heuristics
Trivial Heuristics, Dominance
o Dominance: ha ≥ hc if ∀n: ha(n) ≥ hc(n)
o Heuristics form a semi-lattice:
o Max of admissible heuristics is
admissible
o Trivial heuristics
o Bottom of lattice is the zero
heuristic (what does this give us?)
o Top of lattice is the exact heuristic
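A one-line sketch of the "max of admissible heuristics" point: the pointwise maximum dominates each ingredient heuristic and, since every ingredient stays below the true cost, so does the max.

    def max_heuristic(*heuristics):
        # Pointwise max of any number of heuristic functions.
        def h(state):
            return max(h_i(state) for h_i in heuristics)
        return h

    # e.g. combined = max_heuristic(misplaced_tiles, total_manhattan),
    # using the two 8-puzzle sketches above.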
Graph Search
Tree Search: Extra Work!
o Failure to detect repeated states can cause exponentially
more work.
State graph vs. search tree
Graph Search
o In BFS, for example, we shouldn’t bother expanding the circled nodes (why?)
[figure: BFS search tree with the repeated (circled) states marked]
Graph Search
o Idea: never expand a state twice
o How to implement:
o Tree search + set of expanded states (“closed set”)
o Expand the search tree node-by-node, but…
o Before expanding a node, check to make sure its state
has never been expanded before
o If the state is not new, skip the node; if it is new, add it to the closed set and expand
o Important: store the closed set as a set, not a list
o Can graph search wreck completeness?
Why/why not?
o How about optimality?
A* Graph Search Gone Wrong?
State space graph: S →(1)→ A, S →(1)→ B, A →(1)→ C, B →(2)→ C, C →(3)→ G; h(S) = 2, h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0
Search tree (g+h): S (0+2); A (1+4), B (1+1); C (2+1) via A, C (3+1) via B; G (5+0), G (6+0)
With a closed set, B (f = 2) is expanded before A (f = 5), so C is first reached via B and closed; the cheaper path to C through A is then skipped, and A* returns the cost-6 path instead of the optimal cost-5 path
Consistency of Heuristics
o Main idea: estimated heuristic costs ≤ actual costs
[figure: A →(1)→ C →(3)→ G, with h(A) = 4 and h(C) = 1]
o Admissibility: heuristic cost ≤ actual cost to goal
h(A) ≤ actual cost from A to G
o Consistency: heuristic “arc” cost ≤ actual cost for each arc
h(A) – h(C) ≤ cost(A to C), i.e. h(A) ≤ cost(A to C) + h(C)
o Consequences of consistency:
o The f value along a path never decreases
o A* graph search is optimal
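A tiny sketch of checking consistency arc by arc on an explicit graph; the edges and h inputs are hypothetical structures used only for this check.

    def is_consistent(edges, h):
        """edges: iterable of (a, b, cost) triples; h: dict state -> estimate."""
        return all(h[a] <= cost + h[b] for a, b, cost in edges)

    # With the values from the figure above, h(A) = 4 and h(C) = 1 across an
    # arc of cost 1 violate consistency, since 4 > 1 + 1:
    # is_consistent([("A", "C", 1)], {"A": 4, "C": 1})  -> False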
Optimality of A* Graph Search
Optimality of A* Graph Search
o Sketch: consider what A* does with a
consistent heuristic:
o Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours)
o Fact 2: For every state s, nodes that reach s
optimally are expanded before nodes that
reach s suboptimally
o Result: A* graph search is optimal
Optimality
o Tree search:
o A* is optimal if heuristic is admissible
o UCS is a special case (h = 0)
o Graph search:
o A* optimal if heuristic is consistent
o UCS optimal (h = 0 is consistent)
o Consistency implies admissibility
o In general, most natural admissible
heuristics tend to be consistent,
especially if from relaxed problems
A*: Summary
A*: Summary
o A* uses both backward costs and (estimates of)
forward costs
o A* is optimal with admissible / consistent heuristics
o Heuristic design is key: often use relaxed problems
Tree Search Pseudo-Code
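Since the pseudo-code figure is not reproduced here, a hedged Python rendering of the generic tree-search loop: the fringe is any object with push / pop / is_empty (matching the "one queue" view earlier), and the problem interface is the same hypothetical one used in the sketches above.

    def tree_search(problem, fringe):
        fringe.push((problem.get_start_state(), []))           # node = (state, plan)
        while not fringe.is_empty():
            state, plan = fringe.pop()
            if problem.is_goal(state):
                return plan
            for action, next_state, _cost in problem.get_successors(state):
                fringe.push((next_state, plan + [action]))      # expand the node
        return None                                             # failure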
Graph Search Pseudo-Code
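And the same loop with the closed set added, as described on the Graph Search slide: never expand a state twice, and store the closed set as a set, not a list.

    def graph_search(problem, fringe):
        fringe.push((problem.get_start_state(), []))
        closed = set()                                          # expanded states
        while not fringe.is_empty():
            state, plan = fringe.pop()
            if problem.is_goal(state):
                return plan
            if state in closed:                                 # already expanded: skip
                continue
            closed.add(state)
            for action, next_state, _cost in problem.get_successors(state):
                fringe.push((next_state, plan + [action]))
        return None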