Unit 2 Problem Solving Methods
[Figure: AND-OR graph example: the goal "Steal TV set" as one alternative, and the AND-arc pair "Earn some money" and "Buy TV set" as the other]
To find a solution in an AND-OR graph, we need an algorithm with the ability to handle the AND arcs appropriately.
The algorithm finds a path from the starting node of the graph to a set of nodes representing solution states.
It may be necessary to get to more than one solution state, since each arm of an AND arc must lead to its own solution node.
From the above fig, the top node A has been expanded, producing two arcs: one leading to D and one AND arc leading to B and C.
The numbers at each node represent the value of f at that node.
If the estimated cost of a solution becomes greater than the value of FUTILITY, then we abandon the search.
FUTILITY should be chosen to correspond to a threshold such that any solution with a cost above it is too expensive to be practical, even if it could ever be found.
Algorithm : Problem Reduction
i) Initialize the graph to the starting node.
ii) Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY.
a) Traverse the graph starting at the initial node and following the current best path, and
accumulate the set of nodes that are on the path and have not yet been expanded or
labeled as solved.
b) Pick one of these unexpanded nodes and expand it. If there are no successors, assign
FUTILITY as the value of this node. Otherwise, add its successors to the graph and for
each of them compute f'. If f' of any node is 0, mark that node as SOLVED.
c) Change the f' estimate of the newly expanded node to reflect the new information
provided by its successors. Propagate this change backward through the graph. If any
node has a successor arc whose descendants are all solved, label the node itself as
SOLVED. At each node that is visited while going up the graph, decide which of its
successor arcs is the most promising and mark it as part of the current best path.
This process is illustrated in the below fig.
[Figures: the successive search graphs for Steps 1-3: A expanded to D and the AND arc B-C; then D expanded to the AND arc E-F; then B expanded to G and H]
Steps :
Step 1:
A is the only node, so it is at the end of the current best path. It is expanded, yielding nodes
B, C, and D. The arc to D is labeled as the most promising one emerging from A: it costs 6,
compared to the AND arc to B and C, which costs 9.
Step 2:
Node D is chosen for expansion. This process produces one new arc, the AND arc to E and
F, with a combined cost estimate of 10, so we update the f' value of D to 10. Going back one
level, this makes the AND arc B-C (9) better than the arc to D (10), so it is labeled as the
current best path.
Step 3 :
Traverse that arc from A and discover the unexpanded nodes B and C. To find a
solution along this path, we will eventually have to expand both B and C, so B is chosen to
explore first. This generates two new arcs, the ones to G and to H. Propagating their f' values
backward, we update f' of B to 6. This requires updating the cost of the AND arc B-C to 12
(6 + 4 + 2). Now the arc to D is again the better path from A, so we record that as the current
best path, and either node E or node F will be chosen for expansion in the next step.
Step 4 :
This process continues until either a solution is found or all paths have led to dead
ends, indicating that there is no solution.
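The cost bookkeeping in these steps (summing all branches of an AND arc, taking the minimum over OR alternatives) can be sketched as a recursive evaluation. This is a simplified illustration under assumed conventions, not the full algorithm above: the graph is an acyclic dict mapping each node to a list of arcs, each arc is a list of (child, edge-cost) pairs (more than one pair makes it an AND arc), leaves count as SOLVED with cost 0, and node heuristic values are folded into the edge costs.

```python
FUTILITY = 100  # cost threshold beyond which a solution is abandoned

def solution_cost(graph, node):
    """Cheapest solution cost of `node` in an AND-OR graph.
    An AND arc must pay for ALL of its children; the OR choice
    between arcs picks the cheapest alternative."""
    arcs = graph.get(node, [])
    if not arcs:                      # leaf node: treated as SOLVED, cost 0
        return 0
    best = FUTILITY
    for arc in arcs:                  # OR: choose the cheapest arc
        cost = sum(edge + solution_cost(graph, child) for child, edge in arc)
        best = min(best, cost)
    return best

# A has a single arc to D (total cost 6) and an AND arc to B and C (4 + 5 = 9),
# echoing the 6-versus-9 comparison in Step 1 above.
graph = {'A': [[('D', 6)], [('B', 4), ('C', 5)]]}
```

Here `solution_cost(graph, 'A')` evaluates to 6: the OR choice prefers the arc to D over the AND arc.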
Alternate way :
An algorithm for searching an AND-OR graph must differ from one for searching an OR
graph. The difference arises from the fact that individual paths from node to node cannot be
considered independently of the paths through other nodes connected to the original ones by
AND arcs. In the best-first search algorithm, the desired path from one node to another was
always the one with the lowest cost, but this is not always the case when searching an
AND-OR graph.
Consider the example shown in below fig.
The nodes are generated in alphabetical order. Suppose node J is expanded at the next step and
one of its successors is node E, producing the graph shown in fig below.
This new path to E is longer than the previous path to E going through C. But since the path
through C will lead to a solution only if there is also a solution to D, the path through J may be better.
Limitation of Algorithm :
It fails to take into account any interaction between sub goals.
An example of this failure is shown in the below fig.
Assume both node C and node E ultimately lead to a solution. Our algorithm
will report a complete solution that includes both of them.
The AND-OR graph states that for A to be solved, both C and D must be solved.
The algorithm considers the solution of D as a completely separate process from the
solution of C.
Looking just at the alternatives from D, E is the best path; but since C is necessary
anyway, it is better to satisfy D through C as well.
c. Goal test: It is a function which observes the current state and returns whether
the goal state is achieved or not.
o Search tree: A tree representation of a search problem is called a search tree. The root of
the search tree is the root node, which corresponds to the initial state.
o Actions: It gives the description of all the available actions to the agent.
o Solution: It is an action sequence which leads from the start node to the goal node.
o Optimal Solution: A solution is optimal if it has the lowest cost among all solutions.
Following are the four essential properties of search algorithms to compare the
efficiency of these algorithms:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution
whenever at least one solution exists for any random input.
Optimality: If a solution found by an algorithm is guaranteed to be the best solution (lowest
path cost) among all other solutions, then such a solution is said to be an optimal solution.
Time Complexity: Time complexity is a measure of the time taken by an algorithm to complete its task.
Space Complexity: It is the maximum storage space required at any point during the search,
expressed in terms of the complexity of the problem.
Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search) algorithms.
The uninformed search does not contain any domain knowledge such as closeness, the
location of the goal. It operates in a brute-force way as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes. Uninformed search applies a
way in which the search tree is searched without any information about the search space, such as
initial state, operators, and tests for the goal, so it is also called blind search. It examines each
node of the tree until it achieves the goal node.
o Breadth-first search
o Depth-first search
o Depth-limited search
o Uniform-cost search
o Iterative deepening depth-first search
o Bidirectional Search
A heuristic is a technique which might not always guarantee the best solution, but is
guaranteed to find a good solution in reasonable time. Informed search can solve much more
complex problems which could not be solved in another way.
Greedy Search
A* Search
1. BREADTH-FIRST SEARCH:
Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.
The breadth-first search algorithm is an example of a general-graph search algorithm.
Advantages:
o If there is more than one solution for a given problem, then BFS will provide the
minimal solution, i.e., the one that requires the least number of steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversing of the tree using the BFS algorithm from the root node S to the goal node K. The traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Time Complexity: The time complexity of the BFS algorithm can be obtained by the number of
nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution
and b = branching factor (number of successors at every state):
T(b) = 1 + b + b^2 + ... + b^d = O(b^d).
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
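The level-by-level behaviour of BFS can be sketched as follows. The adjacency list below is a small hypothetical graph, not the tree from the figure.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes at one depth before the next.
    Returns the shallowest path from start to goal, or None."""
    frontier = deque([[start]])       # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path               # first goal found is the shallowest
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
```

Here `bfs(graph, 'S', 'G')` returns `['S', 'B', 'D', 'G']`, the shallowest path (d = 3).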
2. DEPTH-FIRST SEARCH
Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path.
DFS uses a stack data structure for its implementation.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
o DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee
of finding the solution.
o The DFS algorithm goes deep down in its search and sometimes it may go into an infinite
loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the
order as:
It will start searching from root node S and traverse A, then B, then D and E; after traversing
E, it will backtrack the tree as E has no other successor and the goal node is still not found. After
backtracking, it will traverse node C and then G, and there it will terminate as it has found the goal node.
Completeness: DFS search algorithm is complete within finite state space as it will expand
every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the
algorithm. It is given by:
T = 1 + b + b^2 + ... + b^m = O(b^m),
where b = branching factor and m = maximum depth of any node, which can be much larger than d (the shallowest
solution depth).
Space Complexity: DFS algorithm needs to store only single path from the root node, hence
space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
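A recursive DFS sketch on the same kind of hypothetical adjacency list (the call stack plays the role of the explicit stack):

```python
def dfs(graph, start, goal, visited=None):
    """Depth-first search: follow one path to its greatest depth,
    then backtrack and try the next path."""
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for nbr in graph.get(start, []):
        if nbr not in visited:
            rest = dfs(graph, nbr, goal, visited)
            if rest:
                return [start] + rest
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
```

`dfs(graph, 'S', 'G')` explores S, A, C (a dead end), backtracks, then reaches G via B and D, returning `['S', 'B', 'D', 'G']`.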
3. DEPTH-LIMITED SEARCH:
A depth-limited search algorithm is similar to depth-first search with a predetermined limit ℓ. Depth-limited search can overcome the drawback of the infinite path in depth-first search. In
this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
Depth-limited search can be terminated with two conditions of failure:
Standard failure value: It indicates that the problem does not have any solution.
Cutoff failure value: It defines no solution for the problem within the given depth limit.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:
Completeness: DLS search algorithm is complete if the solution is above the depth-limit.
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not
optimal even if ℓ>d.
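The cutoff behaviour can be sketched as follows; the three return values correspond to a solution path, the cutoff failure value, and the standard failure value.

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS. Returns a path, the string 'cutoff' (no solution
    within the limit), or None (standard failure on this branch)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'   # node at the depth limit: treated as having no successors
    cutoff = False
    for nbr in graph.get(node, []):
        result = dls(graph, nbr, goal, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff else None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
```

With limit 1 the goal at depth 3 is cut off; with limit 3 the search finds `['S', 'B', 'D', 'G']`.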
4. UNIFORM-COST SEARCH:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. It comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of
all edges is the same.
Advantages:
o Uniform cost search is optimal because at every state the path with the least cost is
chosen.
Disadvantages:
o It does not care about the number of steps involved in searching and is only concerned about path cost.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the
goal node. Then the number of steps is 1 + ⌊C*/ε⌋ (we add 1 because we start from state 0
and end at C*/ε). Hence, the worst-case time complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
The same logic applies for space complexity, so the worst-case space complexity of uniform-cost
search is O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
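A priority-queue sketch of UCS on a small hypothetical weighted graph (edges are (neighbour, cost) pairs):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest cumulative path cost g(n)."""
    frontier = [(0, start, [start])]          # priority queue ordered by g(n)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                    # popped goal is cheapest
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (g + step, nbr, path + [nbr]))
    return None

graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': [('G', 2)], 'G': []}
```

`ucs(graph, 'S', 'G')` returns `(4, ['S', 'A', 'B', 'G'])`: the direct edge S to B (cost 5) is beaten by the cheaper detour through A.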
5. ITERATIVE DEEPENING DEPTH-FIRST SEARCH:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This
search algorithm finds the best depth limit by gradually increasing the limit
until a goal is found.
This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.
This Search algorithm combines the benefits of Breadth-first search's fast search and
depth-first search's memory efficiency.
The iterative search algorithm is useful uninformed search when search space is large,
and depth of goal node is unknown.
Advantages:
o It combines the benefits of the BFS and DFS search algorithms in terms of fast search and
memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous
phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS
algorithm performs various iterations until it finds the goal node. The iterations
performed by the algorithm are given as:
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity: Let's suppose b is the branching factor and the depth is d; then the worst-case
time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS will be O(bd).
Optimal:
IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth of the node.
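IDDFS can be sketched as a depth-limited DFS repeated with limits 0, 1, 2, ...; shallow levels are re-expanded each round (the repeated work mentioned above), but memory use stays as low as DFS.

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening DFS: deepen the limit until the goal is found.
    No visited set, so the graph is assumed to be a tree/DAG."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for nbr in graph.get(node, []):
            rest = dls(nbr, limit - 1)
            if rest:
                return [node] + rest
        return None

    for depth in range(max_depth + 1):
        path = dls(start, depth)
        if path:
            return path
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
```

Like BFS, the first path found is the shallowest: `['S', 'B', 'D', 'G']`, found in the iteration with depth limit 3.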
6. BIDIRECTIONAL SEARCH:
The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called forward-search, and the other from the goal node, called backward-search. It replaces one single search graph with two small subgraphs, one starting from the initial vertex and the other starting from the goal vertex. The search stops
when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o The implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm divides one
graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and
starts from goal node 16 in the backward direction.
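A sketch of bidirectional search using BFS in both directions on an undirected graph; the adjacency list and node numbers below are hypothetical, loosely echoing the numbered-node example.

```python
def bidirectional_bfs(graph, start, goal):
    """Grow one BFS frontier forward from start and one backward from goal;
    stop as soon as the two frontiers intersect."""
    def trace(parents, node):          # walk parent pointers back to a root
        path = []
        while node is not None:
            path.append(node)
            node = parents[node]
        return path

    if start == goal:
        return [start]
    pf, pb = {start: None}, {goal: None}   # parent maps for each direction
    frontier_f, frontier_b = [start], [goal]
    while frontier_f and frontier_b:
        # expand each frontier by one level, checking for intersection
        for frontier, own, other in ((frontier_f, pf, pb), (frontier_b, pb, pf)):
            nxt = []
            for node in frontier:
                for nbr in graph.get(node, []):
                    if nbr not in own:
                        own[nbr] = node
                        nxt.append(nbr)
                        if nbr in other:           # frontiers intersect here
                            a = trace(pf, nbr)[::-1]   # start .. meeting node
                            b = trace(pb, nbr)[1:]     # meeting node .. goal
                            return a + b
            frontier[:] = nxt
    return None

graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
```

`bidirectional_bfs(graph, 1, 5)` meets in the middle at node 4 and returns `[1, 2, 4, 5]`.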
So far we have talked about the uninformed search algorithms which looked through
search space for all possible solutions of the problem without having any additional
knowledge about the search space. But an informed search algorithm contains knowledge
such as how far we are from the goal, the path cost, how to reach the goal node, etc.
This knowledge helps agents to explore less of the search space and find the goal node
more efficiently.
The informed search algorithm is more useful for large search space. Informed search
algorithm uses the idea of heuristic, so it is also called Heuristic search.
Heuristic function: A heuristic is a function which is used in informed search, and it finds the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The value of the heuristic
function is always positive. Admissibility of the heuristic function is given as:
h(n) ≤ h*(n)
Here h(n) is the heuristic cost, and h*(n) is the actual cheapest cost from n to the goal. Hence the heuristic cost should be
less than or equal to the actual cost.
Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists: in
the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it
places nodes which have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded; the algorithm
generates all its successors, and n is placed in the closed list. The algorithm continues until a
goal state is found.
JNN/LN/CSE/III/R-17 CS8691 - ARTIFICIAL INTELLIGENCE, J.N.N. INSTITUTE OF ENGINEERING [AUTONOMOUS]
In the informed search we will discuss two main algorithms, which are given below:
o Greedy Best-first Search Algorithm
o A* Search Algorithm
The greedy best-first search algorithm always selects the path which appears best at that
moment. It is the combination of depth-first search and breadth-first search algorithms. It
uses the heuristic function for search, allowing us to take the advantages of both
algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.,
f(n) = h(n).
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor
node is the goal node, then return success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, the algorithm checks for the evaluation function f(n), and
then checks if the node has been in either the OPEN or CLOSED list. If the node has not
been in either list, then add it to the OPEN list.
Step 7: Return to Step 2.
Advantages:
o Best first search can switch between BFS and DFS by gaining the advantages of both
the algorithms.
Disadvantages:
o It can behave like an unguided depth-first search in the worst case.
o It can get stuck in a loop, as DFS can.
Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration,
each node is expanded using the evaluation function f(n) = h(n), which is
given in the below table.
In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m),
where m is the maximum depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is finite.
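The OPEN/CLOSED bookkeeping can be sketched with a priority queue keyed on h(n) alone. The graph and heuristic table below are hypothetical, not the ones from the figure.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    smallest heuristic value h(n); the path cost so far is ignored."""
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 3, 'B': 1, 'G': 0}
```

`greedy_best_first(graph, h, 'S', 'G')` returns `['S', 'B', 'G']`, since B looks closer to the goal (h = 1) than A (h = 3).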
A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost to reach node n from the start state, g(n). It has combined features
of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search
algorithm finds the shortest path through the search space using the heuristic function. This
search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is
similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n).
At each point in the search space, only the node which has the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function
(g + h). If node n is the goal node, then return success and stop; otherwise, proceed to Step 4.
Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For
each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then
compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the
back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
o The A* search algorithm is the best among the search algorithms discussed here.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.
o The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value
of all states is given in the below table so we will calculate the f(n) of each state using the
formula f(n)= g(n) + h(n), where g(n) is the cost to reach any node from start state.
Solution:
Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result: S--->A--->C--->G provides the optimal path with
cost 6.
Points to remember:
The A* algorithm returns the path which occurred first, and it does not search for all remaining paths.
The efficiency of the A* algorithm depends on the quality of the heuristic.
The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
o Admissible: The first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
If the heuristic function is admissible, then A* tree search will always find the least-cost path.
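The iteration trace above is consistent with the following graph and heuristic table; both are reconstructed assumptions, since the figure itself is missing. A minimal A* sketch:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in closed:
                g2 = g + step
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# edge costs and h-values assumed so as to reproduce the iterations above
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
```

`a_star(graph, h, 'S', 'G')` returns `(6, ['S', 'A', 'C', 'G'])`, matching the optimal path with cost 6 found in Iteration 4.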
HEURISTIC FUNCTIONS:
A heuristic function is a function that maps from problem state descriptions to
measures of desirability, usually represented as numbers. Aspects of problem states are
evaluated, and the weights given to individual aspects are chosen in such a way that the value of
the heuristic function at a given node in the search process gives as good an estimate
as possible of whether that node is on the desired path to a solution.
Heuristic functions can play an important part in efficiently guiding a search process.
Heuristic functions provide a good estimate of a path. The below example shows the heuristic
functions for a few problems.
Chess: the material advantage of our side over the opponent.
Travelling Salesman: the sum of the distances so far.
Tic-Tac-Toe: 1 for each row in which we could win and in which we already have one piece, plus 2 for each such row in which we have two pieces.
Sometimes a high value of the heuristic function indicates a good position, while at other
times a low value indicates an advantageous situation. It does not matter which way, as long as
the search process attempts to minimize or to maximize the function as appropriate.
The heuristic function guides the search process in the most profitable
direction by suggesting which path to follow first when more than one is
available.
The better the heuristic function estimates the true merits of each node in the
search tree, the more direct the solution process.
In the extreme, the heuristic function could be so good that essentially no
search would be required.
For many problems, however, the cost of computing the value of such a function would
outweigh the effort saved in the search process.
One could compute a perfect heuristic function by doing a complete search from the
node in question and determining whether it leads to a good solution, but then the
heuristic would cost as much as the search it is meant to avoid.
There is a trade-off (balance) between the cost of evaluating a heuristic function and
the savings in search time that the function provides.
Some heuristics are used to define the control structure that guides the application
of rules in the search process.
HILL CLIMBING
It is a variant of generate-and-test in which feedback from the test procedure is used to help
the generator decide which direction to move in the search space. In pure generate-and-test,
the test function responds only yes or no; here the test function is augmented with a heuristic
that provides an estimate of how close a given state is to a goal state. This is attractive because
often the computation of the heuristic function can be done at almost no cost at the same time
that the test for a solution is being performed.
Example:
Suppose we are in an unfamiliar city without a map and want to get downtown. We simply aim for the tall buildings.
The heuristic function is the distance between the current location and the location of the tall
buildings, and desirable states are those in which this distance is minimized. For this problem,
hill climbing can terminate whenever a goal state is reached.
An absolute solution exists whenever it is possible to recognize a goal state just by
examining it.
For maximization problems, such as the travelling salesman problem, there is no
prior goal state, and only a relative solution exists.
Simple Hill Climbing:
Algorithm: Simple Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in
the current state:
a. Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
b. Evaluate the new state.
i. If it is a goal state, then return it and quit.
ii. If it is not a goal state but it is better than the current state, then make it
the current state.
iii. If it is not better than the current state, then continue in the loop.
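The loop above can be sketched as follows; `successors` (the operators), `value` (the heuristic, higher is better), and `is_goal` are hypothetical caller-supplied functions, not part of the original algorithm text.

```python
def simple_hill_climbing(initial, successors, value, is_goal):
    """Simple hill climbing: take the FIRST successor that is better
    than the current state (not necessarily the best one)."""
    current = initial
    while True:
        if is_goal(current):
            return current
        for nxt in successors(current):           # step 2a: apply an operator
            if is_goal(nxt):                      # step 2b-i
                return nxt
            if value(nxt) > value(current):       # step 2b-ii: first improvement
                current = nxt
                break
        else:                                     # no operator improves: stuck
            return current

# toy example: climb toward x = 3 on the curve -(x - 3)**2
result = simple_hill_climbing(
    0,
    lambda x: [x - 1, x + 1],
    lambda x: -(x - 3) ** 2,
    lambda x: x == 3,
)
```

Here `result` is 3; on a curve with a local maximum, the loop would stop there instead, which motivates the "ways out" listed next.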
Ways out:
Backtrack to some earlier node and try going in a different direction.
Make a big jump to try to get to a new section of the search space.
Move in several directions at once.
Hill climbing is a local method: it decides what to do next by looking only at
the immediate consequences of its choice.
[Figure: two block-stacking configurations of blocks A, B, C, D, illustrating the local view taken by hill climbing]
Hill climbing is often useful when combined with other methods that get it started in the right general
neighbourhood.
Simulated Annealing:
It is a variation of hill climbing in which downhill moves may be made at the beginning of the
process. The final solution is insensitive to the starting state. This lowers the chances of getting
caught at a local maximum, a plateau, or a ridge.
Two changes:
The term objective function is used in place of the term heuristic function.
We minimize rather than maximize the value of the objective function; thus we
describe a process of valley descending rather than hill climbing.
Simulated annealing is a process patterned after the physical process of annealing, in which
physical substances such as metals are melted and then cooled until some solid state is
reached. The goal of this process is to produce a minimal-energy final state. This process is
one of valley descending in which the objective function is the energy level. Physical
substances naturally move from high energy to low, so valley descending occurs. However, there
is some probability that a transition to a higher energy state will occur. This probability is
given by the function:
P = e^(−ΔE/kT)
where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's
constant. As indicated by the equation, the probability of accepting a worse move decreases
as the badness of the move (the amount ΔE by which the evaluation worsens) increases.
The rate at which T is lowered is called the annealing schedule. A proper annealing
schedule is maintained to control T.
This process has the following differences from hill climbing search:
The annealing schedule is maintained.
Moves to worse states may also be accepted.
In addition to the current state, a record of the best state is maintained.
The algorithm of simulated annealing is presented as follows:
Algorithm: Simulated annealing:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is obtained or until there are no new operators left to be applied
in the current state.
a. Select an as-yet-unapplied operator and apply it to produce a new state.
b. Evaluate the new state. Compute:
ΔE = value of current state − value of new state.
i. If the new state is a goal state, then return it and quit. If it is not a goal
state but is better than the current state, then make it the current state
and set BEST-SO-FAR to this new state if it is better than the current best.
ii. If it is not better than the current state, then make it the current state with
probability P.
c. Revise T according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
Implementation of Algorithm:
Select the annealing schedule, which has three components:
a. The first is the initial value to be used for the temperature.
b. The second is the criteria that will be used to decide when the temperature of the
system should be reduced.
c. The third is the amount by which the temperature will be reduced each time it
is changed.
There may also be a fourth component of the schedule: when to quit.
Simulated annealing suits problems where the number of moves from a given
state is very large.
For such problems, it makes no sense to try all possible moves.
The best annealing schedule should be chosen by observing its effect both on
the quality of the solution that is found and on the rate at which the process
converges.
i. As T approaches zero, the probability of accepting a move to a
worse state goes to zero, and simulated annealing becomes
identical to simple hill climbing.
ii. Computing the probability of accepting a move uses the ratio
ΔE/T. Thus it is important that values of T be scaled so that this ratio
is meaningful.
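The algorithm can be sketched as below, here maximizing an objective for continuity with the hill-climbing toy example. The `successors` and `value` callables, the geometric schedule, and the fixed random seed are all assumptions of the sketch; k is folded into T.

```python
import math, random

def simulated_annealing(initial, successors, value, t0=10.0, alpha=0.95, steps=500):
    """Simulated annealing sketch (maximizing `value`). A move that is worse
    by dE is still accepted with probability P = exp(-dE / T)."""
    random.seed(0)                    # fixed seed: deterministic illustration
    current = best = initial
    t = t0
    for _ in range(steps):
        nxt = random.choice(successors(current))
        dE = value(current) - value(nxt)           # positive for a downhill move
        if dE <= 0 or random.random() < math.exp(-dE / t):
            current = nxt                          # accept (always, if better)
            if value(current) > value(best):
                best = current                     # keep BEST-SO-FAR on the side
        t *= alpha                                 # revise T per the schedule
    return best

# toy example on the same curve as the hill-climbing sketch
best = simulated_annealing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

The best state recorded so far is returned even if the walk later wanders downhill, and as T approaches zero the acceptance test degenerates into simple hill climbing, exactly as noted above.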
GAME PLAYING
• In game playing it provides optimal move for the player assuming that opponent is
also playing optimally.
• Min-max algorithm is mostly used for game playing in AI. Such as Chess, Checkers,
tic-tac-toe, and various two players game.
• This algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game, one is called MAX and the other is called
MIN
• Both players fight it out so that the opponent gets the minimum benefit while they
get the maximum benefit.
• Both players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
• This algorithm performs a depth-first search for the exploration of the
complete game tree.
• This algorithm proceeds all the way down to the terminal nodes of the tree, then
backtracks up the tree via recursion.
Terminologies
Game tree: a tree structure consisting of all possible moves (moves from one state to the
next state).
Terminal state: the position of the board when the game gets over.
Working of Algorithm
• The Max player tries to get the highest possible score, and the Min player tries to get the lowest possible
score. The two players act opposite to each other.
• At the terminal nodes, the terminal values are given, so we will compare those values
and backtrack the tree until the initial state is reached.
• Following are the main steps in solving the two player game tree:
Step 1:
• In the first step, the algorithm generates the entire game tree and applies the
utility function to get utility values for the terminal states.
• In the below diagram, let A be the initial state of the tree. Suppose the
maximizer (Player 1) takes the first turn, which has a worst-case initial value of −∞, and the
minimizer (Player 2) will take the next turn, which has a worst-case initial value of +∞.
Step 2:
Now we find the utility values for the maximizer; its initial value is −∞, so we will
compare each value in the terminal state with the initial value of the maximizer and determine
the higher node values. It finds the maximum among them all:
For node D: 4
For node E: 6
For node F: −3
For node G: 7
Step 3:
Now it is the minimizer's turn, so it will compare all node values with +∞ and find the
third-layer node values.
For node B:
min(4,6) =4
For node C:
min(-3,7)= -3
Step 4:
Now it is the maximizer's turn again, and it will choose the maximum of all node
values and find the maximum value for the root node. In this game tree there are only 4
layers, so we reach the root node immediately, but in real games there will be more
than 4 layers.
For node A
max(4,-3)=4
That was the complete work flow of the minimax two player game.
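The four-layer example can be reproduced with a short recursive minimax over a nested-list game tree. The terminal values below are assumptions chosen to match the node values computed above (D = 4, E = 6, F = −3, G = 7):

```python
def minimax(node, maximizing):
    """Recursive minimax: leaves are utility values, inner lists are moves."""
    if isinstance(node, int):                 # terminal state: return its utility
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# layers: A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> assumed terminal values
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
```

`minimax(tree, True)` evaluates D=4, E=6, F=−3, G=7, then B=min(4,6)=4 and C=min(−3,7)=−3, and finally A=max(4,−3)=4, the same result as the worked example.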
4. If neither of the above occurs, then it is necessary to make a guess at something in order
to proceed. To do this, loop until a solution is found or all possible solutions have
been eliminated:
a. Select an object whose value is not yet determined and select a way of
strengthening the constraints on that object.
b. Recursively invoke constraint satisfaction with the current set of
constraints augmented by the strengthening constraint just selected.
Example:
Cryptarithmetic puzzle SEND + MORE = MONEY. The constraint trace proceeds as follows:
M = 1
S = 8 or 9
O = 0 or 1 → O = 0
N = E or E + 1 → N = E + 1, C2 = 1
N + R > 8, E ≠ 9
Guess: E = 2
N = 3
R = 8 or 9
2 + D = Y or 2 + D = 10 + Y (i.e., C1 = 0 or C1 = 1)
If C1 = 1: D = 8 (giving Y = 0) or D = 9 (giving Y = 1)
N = E or E + 1, depending on the value of C2. But N cannot have the same value as E,
so N = E + 1 and C2 is 1.
In order for C2 to be 1, the sum of N + R + C1 must be greater than 8.
N + R cannot be greater than 18, even with a carry in, so E cannot be 9.
Assume that no more constraints can be generated. To make progress at this point, a guess
must be made: suppose E is assigned the value 2. Now the next cycle begins.
Constraint propagator shows that :
N=3, since N=E+1
R = 8 or 9, since R + N(3) + C1(1 or 0) = 2 or 12. But since N is already 3, the sum
of these nonnegative numbers cannot be less than 3. Thus R + 3 + (0 or 1) = 12,
and R = 8 or 9.
2+D=Y or 2+D=10+Y, from the sum in the rightmost column.
Assuming no more constraints can be generated, another guess is required.
Suppose C1 is chosen and guessed to be 1; then we reach a dead end, as shown
in the fig. When this happens, the process backtracks and tries C1 = 0.
The algorithm above uses chronological backtracking when guessing leads to an
inconsistent set of constraints: it undoes the most recent guess first. A more sophisticated
approach retains constraints that are independent of the guess that caused the failure,
undoing only the guess responsible. This approach is called dependency-directed
backtracking (DDB).
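The propagation above can be combined with a small brute-force search over the remaining letters. The sketch below hard-codes the propagated constraints M = 1, O = 0, and S ∈ {8, 9} and tries assignments for the rest. (Note the trace above explores the guess E = 2, which eventually fails; the actual solution lies on the E = 5 branch.)

```python
from itertools import permutations

def solve_send_more_money():
    """Solve SEND + MORE = MONEY using the propagated constraints
    M = 1, O = 0, S in {8, 9}, then brute-forcing E, N, D, R, Y."""
    for s in (8, 9):
        free = [d for d in range(10) if d not in (0, 1, s)]
        for e, n, d, r, y in permutations(free, 5):
            send = 1000 * s + 100 * e + 10 * n + d
            more = 1000 * 1 + 100 * 0 + 10 * r + e
            money = 10000 * 1 + 1000 * 0 + 100 * n + 10 * e + y
            if send + more == money:
                return {'S': s, 'E': e, 'N': n, 'D': d,
                        'R': r, 'Y': y, 'M': 1, 'O': 0}
    return None
```

The unique solution is S=9, E=5, N=6, D=7, R=8, Y=2 (9567 + 1085 = 10652), which settles the "S = 8 or 9" constraint in favour of 9.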