Ai Unit 2 PDF Lecture Notes 2
23-03-2021
Unit- II
References: Artificial Intelligence – A Modern Approach 4e, 2020, by Stuart Russell and Peter Norvig
Artificial Intelligence 3rd edition, 2009, Elaine Rich, Kevin Knight Shivsankar Nair
ARTIFICIAL INTELLIGENCE: Building Intelligent Systems, 2015 PARAG KULKARNI, PRACHI JOSHI
Frontier : the set of states available for expansion (for applying legal actions to)
Solution : a path from the initial state to a state that satisfies the goal test
Search tree : a tree representation of the search space, showing possible solutions from the initial state
A blind search is uninformed: it has no problem-specific knowledge. A heuristic search, in
contrast, has information about the problem and uses it to guide decision making.
8 – Puzzle Example
Breadth-First Search
Using a breadth-first strategy we expand the root level first and then we expand all those
nodes (i.e. those at level 1) before we expand any nodes at level 2.
Or to put it another way, all nodes at level d are expanded before any nodes at level d+1.
Function BREADTH-FIRST-SEARCH(problem) returns a solution or failure
Return GENERAL-SEARCH(problem,ENQUEUE-AT-END)
[Figure: step-by-step breadth-first search animation on a tree with root A, level-1 nodes B–F, level-2 nodes G–P, and level-3 nodes Q–Z; the goal state is node L. We begin with the initial state, node A, which is expanded to reveal further (unexpanded) nodes; each revealed node is added to the END of the queue. The search then expands the first node in the queue, backtracking when a node's children are exhausted, and the process continues level by level until node L is located and the search returns a solution, after 11 node expansions.]
This strategy has the benefit of being complete (if there's a solution, it will be found), and
optimal as long as the shallowest solution is the best solution.
However, the way breadth-first search achieves this is by keeping all of the leaf nodes in
memory, which requires a prohibitive amount of memory when searching anything more
than a very small tree.
The time complexity of breadth-first search is O(b^d), where b is the branching factor and
d is the depth of the shallowest solution.
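The strategy described above can be sketched in Python. The adjacency list below is a small hypothetical tree (node names chosen only for illustration), not the full tree from the slides:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal, or None if no path exists.
    Nodes are expanded level by level using a FIFO queue."""
    frontier = deque([[start]])   # queue of paths; first in, first out
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            frontier.append(path + [child])  # revealed nodes go to the END of the queue

# Hypothetical tree: A expands to B and C, and so on
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["L"]}
print(breadth_first_search(graph, "A", "L"))  # ['A', 'B', 'E', 'L']
```

Because the queue is FIFO, all nodes at depth d are dequeued (expanded) before any node at depth d+1, exactly as stated above.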
Initialization: { [ S , 0 ] }
Iteration 3:
{ [ S ->A ->C ->D , 3 ] , [ S ->A ->B , 4 ] , [ S ->A ->C ->G , 4 ] , [ S ->G , 12 ] }
Iteration 4:
{ [ S ->A ->B , 4 ] , [ S ->A ->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->G , 12 ] }
Iteration 5:
{ [ S->A->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->A->B->D , 7 ] , [ S->G , 12 ] }
At any given point in the execution, the algorithm never expands a node which has a cost greater
than the cost of the shortest path in the graph.
At any given time the elements in the priority queue have roughly the same cost, and thus the
name Uniform Cost Search.
On a small example the costs may not look uniform, but on a much larger graph they even out.
Uniform Cost Search can also be used as Breadth First Search if all the edges are given a cost of 1.
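A sketch of Uniform Cost Search in Python. The edge costs below are inferred from the iteration traces above, so treat the graph as an assumption rather than the slides' definitive example:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n).
    graph maps node -> list of (neighbor, step_cost) pairs."""
    frontier = [(0, start, [start])]  # priority queue ordered by path cost
    best_cost = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best_cost and best_cost[node] <= cost:
            continue  # already expanded via a cheaper path
        best_cost[node] = cost
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))

# Weighted graph inferred from the iterations above (assumed costs)
graph = {"S": [("A", 1), ("G", 12)],
         "A": [("B", 3), ("C", 1)],
         "B": [("D", 3)],
         "C": [("D", 1), ("G", 2)],
         "D": [("G", 3)]}
print(uniform_cost_search(graph, "S", "G"))  # (4, ['S', 'A', 'C', 'G'])
```

With every edge cost set to 1, the priority queue pops nodes in level order and the search behaves like breadth-first search, as noted above.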
Depth-first search
Breadth-first-search uses a FIFO queue,
depth-first search uses a LIFO queue.
Depth-limited search
That is, nodes at the depth limit are treated as if they have no successors.
This approach is called depth-limited search. The depth limit solves the
infinite-path problem.
b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit.
Superscripts: (a) complete if b is finite; (b) complete if step costs are ≥ ε for some positive ε; (c) optimal if step costs are all identical; (d) if both
directions use breadth-first search.
– Breadth-first search expands the shallowest nodes first; it is complete, optimal for unit step
costs, but has exponential space complexity.
– Uniform-cost search expands the node with lowest path cost, g(n), and is optimal for
general step costs.
– Depth-first search expands the deepest unexpanded node first. It is neither complete nor
optimal, but has linear space complexity. Depth-limited search adds a depth bound.
– Iterative deepening search calls depth-first search with increasing depth limits until a goal
is found. It is complete, optimal for unit step costs, has time complexity comparable to
breadth-first search, and has linear space complexity.
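The summary above can be made concrete with a minimal sketch of depth-limited search and the iterative-deepening loop around it (the graph is a hypothetical example):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that treats nodes at the depth limit as if they have no successors."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None  # depth bound reached: pretend this node has no children
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if result:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Call depth-limited search with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(iterative_deepening_search(graph, "A", "E"))  # ['A', 'B', 'E']
```

Each iteration repeats the shallower work, but because the tree grows exponentially with depth, the repeated work is a small fraction of the total, giving time complexity comparable to breadth-first search with only linear space.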
The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is
expanded first.
The implementation of best-first graph search is identical to that for uniform-cost search, except that
an evaluation function is used to decide which of the available nodes is the most promising.
Best-first search uses the concept of a priority queue and heuristic search.
To search the graph space, the best-first method uses two lists for tracking the traversal:
an ‘Open’ list, which keeps track of the current ‘immediate’ nodes available for traversal,
and a ‘Closed’ list, which keeps track of the nodes already traversed.
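A minimal sketch of greedy best-first search, with the open list as a priority queue ordered by h(n) and the closed list as a set. The graph and heuristic values are hypothetical:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the open-list node with the lowest heuristic value h(n)."""
    open_list = [(h[start], start, [start])]  # priority queue: nodes available for traversal
    closed = set()                            # nodes already traversed
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(open_list, (h[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical graph and heuristic estimates
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(greedy_best_first_search(graph, h, "S", "G"))  # ['S', 'A', 'G']
```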
The time complexity of the algorithm is O(n log n). (In the worked example: we remove H from the
priority queue; since our goal I is a neighbor of H, we return.)
Example
This distance is an approximation of how close we are to the goal from a given node and is denoted
by the heuristic function h(n). This heuristic value is mentioned within each node.
The distance from the start node to each of these immediate next nodes is denoted by
the function g(n).
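As an illustration of h(n), one common heuristic for the 8-puzzle shown earlier is the Manhattan distance: the sum of each tile's horizontal and vertical distance from its goal position. The state encoding below (a 9-tuple with 0 for the blank) is an assumption for this sketch:

```python
def manhattan_h(state, goal):
    """h(n): sum of Manhattan distances of each tile from its goal position.
    States are 9-tuples read row by row; 0 denotes the blank."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not count toward the heuristic
        j = goal.index(tile)
        # row distance + column distance on the 3x3 board
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # tiles 7 and 8 each one step from home
print(manhattan_h(state, goal))  # 2
```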
A* Search
• QueueingFn is sort-by-f
– f(n) = g(n) + h(n)
• Note that UCS and Best-first both improve search
– UCS keeps solution cost low
– Best-first helps find solution quickly
• A* combines these approaches
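The combination above can be sketched directly: A* is uniform-cost search with the queue ordered by f(n) = g(n) + h(n). The graph and (admissible) heuristic below are hypothetical:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """Expand nodes in order of f(n) = g(n) + h(n).
    graph maps node -> list of (neighbor, step_cost); h maps node -> estimate."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # a cheaper route to this node was already expanded
        best_g[node] = g
        for neighbor, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))

# Hypothetical weighted graph with an admissible heuristic (h never overestimates)
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 6, "A": 4, "B": 2, "G": 0}
print(a_star_search(graph, h, "S", "G"))  # (6, ['S', 'A', 'B', 'G'])
```

Setting h(n) = 0 everywhere recovers uniform-cost search (low solution cost); ordering by h(n) alone recovers greedy best-first (fast but not optimal).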
Greedy BFS Vs A*
Even though you may find that both Greedy BFS and A* find a path equally efficiently in terms of the
number of steps, you may notice that the A* algorithm comes up with a more
optimal path than Greedy BFS.
Both Greedy BFS and A* are Best first searches but Greedy BFS is neither complete, nor optimal
whereas A* is both complete and optimal.
However, A* uses more memory than Greedy BFS, but it guarantees that the path found is
optimal.
“Local search algorithms” do not care about the path cost; they focus only on the solution state needed to
reach the goal node.
A local search algorithm completes its task by working on a single current node rather than multiple paths,
generally moving only among the neighbors of that node.
Optimization Problem
Gradient Descent
1. Local search algorithms use very little (often constant) memory, as they operate on only
a single path.
2. Most often, they find a reasonable solution in large or infinite state spaces where the classical
or systematic algorithms do not work.
Local search algorithms work well for pure optimization problems. A pure optimization problem is one
in which every node is a candidate solution, and the target is to find the best state of all according to
the objective function. Note, however, that a pure optimization formulation by itself does not guarantee
high-quality solutions on the path from the current state to the goal state.
Types of local searches:
Hill-climbing Search
Simulated Annealing
Hill Climbing
Hill Climbing is a heuristic search used for mathematical
optimization problems in the field of Artificial
Intelligence.
Given a large set of inputs and a good heuristic function, it
tries to find a sufficiently good solution to the problem.
This solution may not be the global optimal maximum.
In the above definition, “mathematical optimization problems” means that hill climbing solves problems in which
we need to maximize or minimize a given real function by choosing values from the given inputs.
Example: the travelling salesman problem, where we need to minimize the distance travelled by the salesman.
Hill Climbing
Step 1 : Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make initial
state as current state.
Step 2 : Loop until the solution state is found or there are no new operators present which can be applied
to the current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 3 : Exit.
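The steps above can be sketched as a generic loop; the objective function and neighbor step below are a toy assumption, not part of the notes:

```python
def hill_climbing(evaluate, neighbors, initial):
    """Repeatedly move to the best neighbor; stop when no neighbor improves.
    May stop at a local maximum, plateau, or ridge rather than the global optimum."""
    current = initial
    while True:
        best = max(neighbors(current), key=evaluate, default=None)
        if best is None or evaluate(best) <= evaluate(current):
            return current  # no better neighbor: a (possibly local) maximum
        current = best

# Toy objective: maximize -(x - 3)^2 over integers, stepping by 1
evaluate = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(evaluate, neighbors, 0))  # 3
```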
Hill Climbing
Hill climbing cannot reach the optimal / best state (global maximum) if it enters Local Maximum or Plateaus or ridges
Hill Climbing
Plateau : on a plateau, all neighbors have the same value,
so it is not possible to select the best direction.
To overcome plateaus :
Make a big jump: randomly select a state far away from the
current state. Chances are that we will land in a non-plateau
region.
Simulated Annealing
Pure hill climbing is not complete, but pure random search is inefficient.
Simulated annealing offers a compromise.
Inspired by annealing process of gradually cooling a liquid until it changes to a low-energy state.
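The compromise can be sketched as follows: improvements are always accepted, while worse moves are accepted with probability exp(Δ/T), which shrinks as the temperature T cools. The objective function and cooling schedule below are toy assumptions:

```python
import math
import random

def simulated_annealing(evaluate, neighbor, initial, t0=10.0, cooling=0.95, steps=1000):
    """Hill climbing that sometimes accepts worse moves; the probability of
    accepting a bad move decreases as the 'temperature' cools."""
    current = initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = evaluate(candidate) - evaluate(current)
        # Always accept improvements; accept worse moves with prob. exp(delta / t)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t = max(t * cooling, 1e-9)  # gradually cool, never reaching zero
    return current

# Toy objective (hypothetical): global maximum at x = 10
evaluate = lambda x: -(x - 10) ** 2
random.seed(0)
result = simulated_annealing(evaluate, lambda x: x + random.choice([-1, 1]), 0)
print(result)
```

Early on (high T) the walk is nearly random, which lets it escape local maxima and plateaus; late on (low T) it behaves like pure hill climbing.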
Dr. R. Rajkamal 60
Minimax is a kind of backtracking algorithm that is used in decision making and game theory to
find the optimal move for a player, assuming that your opponent also plays optimally.
It is widely used in two player turn-based games such as Tic-Tac-Toe, Backgammon, Mancala,
Chess, etc.
Minimax Algorithm
In Minimax the two players are called maximizer and minimizer.
The maximizer tries to get the highest score possible while the minimizer tries to do the
opposite and get the lowest score possible.
The two players compete: each tries to secure the maximum benefit for themselves while leaving the
opponent the minimum benefit.
In a given state, if the maximizer has upper hand then, the score of the board will tend to be some
positive value.
If the minimizer has the upper hand in that board state then it will tend to be some negative value.
The values of the board are calculated by some heuristics which are unique for every type of game.
Minimax Algorithm
Consider a game which has 4 final states and paths to
reach final state are from root to 4 leaves of a perfect
binary tree as shown.
Assume you are the maximizing player and you get the
first chance to move, i.e., you are at the root and your
opponent at next level.
Minimax Algorithm
There are two players one is called Maximizer and other
is called Minimizer.
Maximizer will try to get the Maximum possible
score, and Minimizer will try to get the minimum
possible score.
This algorithm applies DFS, so in this game-tree, we
have to go all the way through the leaves to reach the
terminal nodes.
At the terminal nodes, the terminal values are given, so
we compare those values and back the results up the tree
until the initial state is reached. (Figure: depth-first
traversal of the game tree.)
Minimax Algorithm
Step-1:
In the first step, the algorithm generates the entire game-
tree and apply the utility function to get the utility values
for the terminal states.
Minimax Algorithm
Step 2:
Now, first we find the utility value for the Maximizer. Its initial
value is -∞, so we compare each terminal value with the
Maximizer's initial value and determine the higher node
values.
Minimax Algorithm
Step 3:
In the next step it is the minimizer's turn, so it will compare all node
values with +∞ and find the third-layer node values.
For node B= min(4,6) = 4
For node C= min (-3, 7) = -3
Step 4:
Now it is the Maximizer's turn, and it will again choose the
maximum of all node values and find the maximum value for the
root node.
In this game tree there are only 4 layers, so we reach the root
node immediately, but in real games there will be more than
4 layers.
For node A: max(4, -3) = 4
Minimax Algorithm
Complete-
Min-Max algorithm is Complete. It will definitely find a solution (if exist), in the finite search tree.
Optimal-
Min-Max algorithm is optimal if both opponents are playing optimally.
Time complexity-
As it performs DFS on the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching
factor of the game tree, and m is the maximum depth of the tree.
Space Complexity-
The space complexity of the Minimax algorithm is similar to DFS: O(bm).
Alpha-Beta Pruning
Alpha-Beta pruning is not actually a new algorithm, rather an optimization technique for minimax algorithm.
It reduces the computation time by a huge factor.
This allows us to search much faster and even go into deeper levels in the game tree.
It cuts off branches in the game tree which need not be searched because there already exists a better move
available.
It is called Alpha-Beta pruning because it passes 2 extra parameters in the minimax function, namely
alpha and beta.
Alpha is the best value that the maximizer currently can guarantee at that level or above.
Beta is the best value that the minimizer currently can guarantee at that level or above.
Alpha-Beta Pruning
The initial call starts from A.
The value of alpha here is -INFINITY and the value of beta
is +INFINITY.
These values are passed down to subsequent nodes in the tree.
At A the maximizer must choose max of B and C,
so A calls B first.
At B, the minimizer must choose the min of D and E, and hence
calls D first.
At D, it looks at its left child, which is a leaf node. This node
returns a value of 3. Now the value of alpha at D is max(-INF, 3),
which is 3.
(Figure: the game tree so far; the 9 is crossed out because it was
never computed.)
Alpha-Beta Pruning
To decide whether it is worth looking at its right node or not, it
checks the condition beta <= alpha.
This is false since beta = +INF and alpha = 3, so it continues the
search.
D now looks at its right child, which returns a value of 5.
At D, alpha = max(3, 5), which is 5. Now the value of node D is 5.
D returns a value of 5 to B.
At B, beta = min(+INF, 5), which is 5.
The minimizer is now guaranteed a value of 5 or less.
B now calls E to see if it can get a value lower than 5.
At E, the values of alpha and beta are not -INF and +INF but
-INF and 5 respectively, because the value of beta was changed
at B, and that is what B passed down to E.
Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6.
Here the condition beta <= alpha becomes true: beta is 5 and alpha is 6.
Hence E breaks and returns 6 to B.
Alpha-Beta Pruning
Note how it did not matter what the value of E's right child is.
It could have been +INF or -INF; we never even had to look at it, because the
minimizer was guaranteed a value of 5 or less.
So as soon as the maximizer saw the 6, it knew the minimizer would never come this way, because
it can get a 5 on the left side of B.
This way we did not have to look at that 9, and hence saved computation time.
E returns a value of 6 to B. At B, beta = min( 5, 6) which is 5.
The value of node B is also 5. B returns 5 to A.
At A, alpha = max( -INF, 5) which is 5.
Now the maximizer is guaranteed a value of 5 or greater.
A now calls C to see if it can get a higher value than 5.
At C, alpha = 5 and beta = +INF. C calls F
At F, alpha = 5 and beta = +INF. F looks at its left child which is a 1. alpha = max( 5, 1) which is still 5.
Alpha-Beta Pruning
F looks at its right child which is a 2.
Hence the best value of this node is 2. Alpha still remains 5.
F returns a value of 2 to C. At C, beta = min(+INF, 2), which is 2.
The condition beta <= alpha becomes true, as beta = 2 and alpha = 5. So it
breaks, and it does not even have to compute the entire sub-tree of G.
The intuition behind this cutoff is that at C the minimizer was
guaranteed a value of 2 or less.
But the maximizer was already guaranteed a value of 5 if it chose B. So
why would the maximizer ever choose C and get a value less than 2? Again,
you can see that it did not matter what those last two values were; we also
saved a lot of computation by skipping a whole sub-tree.
(Figure: the final game tree; G has been crossed out as it was never computed.)
C now returns a value of 2 to A. Therefore the best value at A is max(5, 2),
which is 5.
Hence the optimal value that the maximizer can get is 5.
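The walkthrough above can be sketched on the same tree: leaves (3, 5) under D, (6, 9) under E, (1, 2) under F, and placeholder values under G, which is never visited:

```python
import math

def alphabeta(depth, node_index, is_max, scores, height, alpha, beta):
    """Minimax with alpha-beta pruning on a perfect binary game tree.
    alpha: best value the maximizer can guarantee so far;
    beta:  best value the minimizer can guarantee so far."""
    if depth == height:
        return scores[node_index]
    best = -math.inf if is_max else math.inf
    for i in range(2):  # visit the two children left to right
        value = alphabeta(depth + 1, node_index * 2 + i, not is_max,
                          scores, height, alpha, beta)
        if is_max:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:
            break  # prune: the remaining children cannot affect the result
    return best

# Leaves from the walkthrough; the last two (under G) are dummies, never read
scores = [3, 5, 6, 9, 1, 2, 0, 0]
print(alphabeta(0, 0, True, scores, 3, -math.inf, math.inf))  # 5
```

Running this reproduces the walkthrough: the 9 under E and the whole sub-tree of G are pruned, and the root value is 5.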