
Backtracking search for CSPs

 Backtracking search, a form of depth-first search, is commonly used for solving
CSPs. Inference can be interwoven with search.
Commutativity:
 CSPs are all commutative. A problem is commutative if the order of application of
any given set of actions has no effect on the outcome.
Backtracking search:
 A depth-first search that chooses values for one variable at a time and backtracks
when a variable has no legal values left to assign.
 The backtracking algorithm repeatedly chooses an unassigned variable and then tries all
values in the domain of that variable in turn, trying to find a solution. If an
inconsistency is detected, then BACKTRACK returns failure, causing the previous
call to try another value.
 There is no need to supply BACKTRACKING-SEARCH with a domain-specific
initial state, action function, transition model, or goal test.
 BACKTRACKING-SEARCH keeps only a single representation of a state and alters
that representation rather than creating new ones.
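To make the preceding description concrete, here is a minimal Python sketch of backtracking search on the Australia map-colouring CSP used in the examples below. The dict-based representation (VARIABLES, DOMAINS, NEIGHBORS), the consistent() helper, and the naive variable/value ordering are assumptions made for illustration, not part of the original pseudocode.

VARIABLES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
DOMAINS = {v: ["red", "green", "blue"] for v in VARIABLES}
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def consistent(var, value, assignment):
    # A value is consistent if no already-assigned neighbour has the same colour.
    return all(assignment.get(n) != value for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(VARIABLES):                     # complete assignment
        return assignment
    var = next(v for v in VARIABLES if v not in assignment)   # naive variable selection
    for value in DOMAINS[var]:                                # naive value ordering
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                               # undo and try the next value
    return None                                               # failure: caller backtracks

print(backtrack({}))   # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', ...}

Note how a single assignment dict is altered and restored in place, matching the point above about keeping only one representation of the state.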

To solve CSPs efficiently without domain-specific knowledge, we address the following questions:


i) function SELECT-UNASSIGNED-VARIABLE: which variable should be
assigned next?
ii) function ORDER-DOMAIN-VALUES: in what order should its values be tried?
iii) function INFERENCE: what inferences should be performed at each step in the
search?
iv) When the search arrives at an assignment that violates a constraint, can the search
avoid repeating this failure?

1. Variable and value ordering


 The backtracking algorithm contains the line
var ← SELECT-UNASSIGNED-VARIABLE(csp).

 Variable selection: fail-first

Minimum-remaining-values (MRV) heuristic:


 The idea is to choose the variable with the fewest “legal” values. Also known as the “most
constrained variable” or “fail-first” heuristic, it picks the variable that is most likely
to cause a failure soon, thereby pruning the search tree.
 If some variable X has no legal values left, the MRV heuristic will select X and
failure will be detected immediately—avoiding pointless searches through other
variables.
 E.g. after the assignments WA=red and NT=green, there is only one possible
value for SA, so it makes sense to assign SA=blue next rather than assigning Q.

Degree heuristic:
 The degree heuristic attempts to reduce the branching factor on future choices by
selecting the variable that is involved in the largest number of constraints on other
unassigned variables. [useful tie-breaker]
 E.g. SA is the variable with the highest degree, 5; the other variables have degree 2 or 3,
except T, which has degree 0.
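As a rough illustration of how SELECT-UNASSIGNED-VARIABLE can combine these two ideas, the sketch below applies MRV first and uses the degree heuristic as a tie-breaker. The domains/neighbors dictionaries and the function signature are assumptions made for this example.

def select_unassigned_variable(domains, neighbors, assignment):
    unassigned = [v for v in domains if v not in assignment]

    def key(v):
        remaining = len(domains[v])                 # MRV: fewest remaining legal values
        degree = sum(1 for n in neighbors[v]        # degree: constraints on other
                     if n not in assignment)        # unassigned variables
        return (remaining, -degree)                 # prefer fewer values, then higher degree

    return min(unassigned, key=key)

# Example: after WA=red and NT=green, SA has a single remaining value and is chosen next.
domains = {"WA": ["red"], "NT": ["green"], "SA": ["blue"],
           "Q": ["red", "blue"], "NSW": ["red", "green", "blue"],
           "V": ["red", "green", "blue"], "T": ["red", "green", "blue"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
print(select_unassigned_variable(domains, neighbors, {"WA": "red", "NT": "green"}))  # SA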
 ORDER-DOMAIN-VALUES
 Value selection: fail-last
 If we are trying to find all the solutions to a problem (not just the first one), then the
ordering does not matter.
 Least-constraining-value heuristic: prefers the value that rules out the fewest choices
for the neighboring variables in the constraint graph. (Try to leave the maximum
flexibility for subsequent variable assignments.)
 e.g. Suppose we have generated the partial assignment WA=red, NT=green and that
our next choice is for Q. Blue would be a bad choice because it eliminates the last
legal value left for Q’s neighbor, SA; therefore the heuristic prefers red to blue.
 The minimum-remaining-values and degree heuristics are domain-independent
methods for deciding which variable to choose next in a backtracking search.
The least-constraining-value heuristic helps in deciding which value to try first for a
given variable.
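The following is a minimal sketch of ORDER-DOMAIN-VALUES with the least-constraining-value heuristic, again assuming a dict-based CSP with “different colour” (inequality) constraints.

def order_domain_values(var, domains, neighbors, assignment):
    def ruled_out(value):
        # Count how many options this value would remove from unassigned neighbours.
        return sum(value in domains[n]
                   for n in neighbors[var] if n not in assignment)
    return sorted(domains[var], key=ruled_out)      # least-constraining value first

# With WA=red and NT=green, Q prefers red over blue: blue would eliminate the
# last legal value of Q's neighbour SA.
domains = {"WA": ["red"], "NT": ["green"], "SA": ["blue"],
           "Q": ["red", "blue"], "NSW": ["red", "green", "blue"]}
neighbors = {"Q": ["NT", "SA", "NSW"]}
print(order_domain_values("Q", domains, neighbors, {"WA": "red", "NT": "green"}))  # ['red', 'blue']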
2. Interleaving search and inference
 INFERENCE: performed every time we make a choice of a value for a variable.
 One of the simplest forms of inference is called forward checking. Whenever a
variable X is assigned, the forward-checking process establishes arc consistency for it:
for each unassigned variable Y that is connected to X by a constraint, delete from Y’s
domain any value that is inconsistent with the value chosen for X.
 There is no reason to do forward checking if we have already done arc consistency as
a preprocessing step.
 Advantage: For many problems the search will be more effective if we combine the
MRV heuristic with forward checking.
 Disadvantage: Forward checking only makes the current variable arc-consistent, but
doesn’t look ahead and make all the other variables arc-consistent.
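A minimal sketch of forward checking under the same assumed representation: assigning X prunes the domains of X's unassigned neighbours and reports a wipe-out.

import copy

def forward_check(var, value, domains, neighbors, assignment):
    # Return a pruned copy of the domains, or None if some domain is wiped out.
    domains = copy.deepcopy(domains)
    domains[var] = [value]
    for n in neighbors[var]:
        if n not in assignment and value in domains[n]:
            domains[n].remove(value)        # delete values inconsistent with var=value
            if not domains[n]:
                return None                 # empty domain: the caller must backtrack
    return domains

domains = {"WA": ["red", "green", "blue"], "NT": ["red", "green", "blue"],
           "SA": ["red", "green", "blue"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(forward_check("WA", "red", domains, neighbors, {}))
# {'WA': ['red'], 'NT': ['green', 'blue'], 'SA': ['green', 'blue']}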

MAC (Maintaining Arc Consistency) algorithm:


 [More powerful than forward checking; it can detect inconsistencies that forward
checking misses.] After a variable Xi is assigned a value, the INFERENCE procedure calls
AC-3, but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for
all Xj that are unassigned variables that are neighbors of Xi.
 From there, AC-3 does constraint propagation in the usual way, and if any variable
has its domain reduced to the empty set, the call to AC-3 fails and we know to
backtrack immediately.
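A sketch of MAC under the same assumptions, using an AC-3-style queue seeded only with the arcs (Xj, Xi) for Xi's unassigned neighbours. The revise() helper and the dict-based CSP are illustrative; only inequality constraints are handled.

from collections import deque
import copy

def revise(domains, xj, xi):
    # Remove values of Xj that have no consistent value left in Xi's domain.
    revised = False
    for x in domains[xj][:]:
        if not any(x != y for y in domains[xi]):
            domains[xj].remove(x)
            revised = True
    return revised

def mac(xi, value, domains, neighbors, assignment):
    # Return an arc-consistent copy of the domains, or None on a wipe-out.
    domains = copy.deepcopy(domains)
    domains[xi] = [value]
    queue = deque((xj, xi) for xj in neighbors[xi] if xj not in assignment)
    while queue:
        xj, xk = queue.popleft()
        if revise(domains, xj, xk):
            if not domains[xj]:
                return None                       # inconsistency: backtrack immediately
            for xl in neighbors[xj]:              # propagate further than forward checking
                if xl != xk and xl not in assignment:
                    queue.append((xl, xj))
    return domains

domains = {"WA": ["red"], "NT": ["green", "blue"], "SA": ["green", "blue"],
           "Q": ["red", "green", "blue"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
print(mac("NT", "green", domains, neighbors, {"WA": "red", "NT": "green"}))
# {'WA': ['red'], 'NT': ['green'], 'SA': ['blue'], 'Q': ['red']}

In this example, forward checking from NT=green would leave Q with {red, blue}, whereas MAC propagates through SA=blue and also deduces Q=red.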
 Chronological backtracking: BACKTRACKING-SEARCH has a simple policy:
when a branch of the search fails, back up to the preceding variable and try a different
value for it.
e.g.
 Suppose we have generated the partial assignment {Q=red, NSW=green, V=blue,
T=red}.
 When we try the next variable, SA, we see that every value violates a constraint.
 We back up to T and try a new color; this cannot resolve the problem.
 Intelligent backtracking: Backtrack to a variable that was responsible for making
one of the possible values of the next variable (e.g. SA) impossible.
 Conflict set for a variable: A set of assignments that are in conflict with some value
for that variable.
(e.g. The set {Q=red, NSW=green, V=blue} is the conflict set for SA.)
 backjumping method: Backtracks to the most recent assignment in the conflict set.
(e.g. backjumping would jump over T and try a new value for V.)

 Forward checking can supply the conflict set with no extra work.
 Whenever forward checking based on an assignment X=x deletes a value from Y’s
domain, add X=x to Y’s conflict set;
 If the last value is deleted from Y’s domain, the assignments in the conflict set of Y are
added to the conflict set of X.
 In fact, every branch pruned by backjumping is also pruned by forward checking.
Hence simple backjumping is redundant in a forward-checking search or in a search
that uses stronger consistency checking (such as MAC).

Conflict-directed backjumping:
e.g.
 Consider the partial assignment {WA=red, NSW=red}, which is ultimately inconsistent.
 We try T=red next and then assign NT, Q, V, SA; no assignment can work for these
last four variables.
 Eventually we run out of values to try at NT, but simple backjumping cannot help,
because NT does not have a complete conflict set of preceding variables that caused it to fail.
 The set {WA, NSW} is a deeper notion of the conflict set for NT: these assignments caused
NT, together with any subsequent variables, to have no consistent solution. So the algorithm
should backtrack to NSW and skip over T.
 A backjumping algorithm that uses conflict sets defined in this way is called
conflict-directed backjumping.
How to Compute:
 When a variable’s domain becomes empty, a “terminal” failure occurs; that variable
has a standard conflict set.
 Let Xj be the current variable and let conf(Xj) be its conflict set. If every possible value
for Xj fails, backjump to the most recent variable Xi in conf(Xj), and set
conf(Xi) ← conf(Xi) ∪ conf(Xj) – {Xi}.
 The conflict set for a variable means that there is no solution from that variable onward,
given the preceding assignments in the conflict set.
e.g.
Assign WA, NSW, T, NT, Q, V, SA in that order.
SA fails, and its conflict set is {WA, NT, Q} (its standard conflict set).
Backjump to Q; its conflict set is {NT, NSW} ∪ {WA, NT, Q} – {Q} = {WA, NT, NSW}.
Backjump to NT; its conflict set is {WA} ∪ {WA, NT, NSW} – {NT} = {WA, NSW}.
Hence the algorithm backjumps to NSW (skipping over T).
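The update rule can be replayed in code. The tiny sketch below reproduces this worked example; the order list, the seeded conflict sets, and the backjump() helper are illustrative assumptions.

order = ["WA", "NSW", "T", "NT", "Q", "V", "SA"]          # assignment order
conf = {"SA": {"WA", "NT", "Q"},                          # SA's standard conflict set
        "Q": {"NT", "NSW"},                               # direct conflict sets gathered
        "NT": {"WA"}}                                     # during forward checking

def backjump(xj):
    # Return the variable to backjump to and absorb Xj's conflict set into it:
    # conf(Xi) <- conf(Xi) U conf(Xj) - {Xi}
    xi = max(conf[xj], key=order.index)                   # most recent variable in conf(Xj)
    conf[xi] = (conf.get(xi, set()) | conf[xj]) - {xi}
    return xi

v = backjump("SA")   # jumps to Q;  conf(Q)  becomes {WA, NT, NSW}
v = backjump(v)      # jumps to NT; conf(NT) becomes {WA, NSW}
v = backjump(v)      # jumps to NSW, skipping over T
print(v)             # NSW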

 After backjumping from a contradiction, how do we avoid running into the same problem
again?
 Constraint learning: the idea of finding a minimum set of variables from the
conflict set that causes the problem. This set of variables, along with their
corresponding values, is called a no-good. We then record the no-good, either by
adding a new constraint to the CSP or by keeping a separate cache of no-goods
(see the sketch after this list).
 Backtracking occurs when no legal assignment can be found for a variable.
 Conflict-directed backjumping backtracks directly to the source of the problem.
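As noted in the constraint-learning bullet above, a no-good can be kept in a separate cache and checked before extending a partial assignment. The sketch below is a minimal illustration using the no-good {WA=red, NSW=red} from the earlier example; the frozenset-based cache and the function names are assumptions.

no_goods = set()

def record_no_good(assignments):
    # Remember a set of variable=value pairs that cannot be part of any solution.
    no_goods.add(frozenset(assignments.items()))

def violates_no_good(assignment):
    # True if the current partial assignment contains a recorded no-good.
    items = set(assignment.items())
    return any(ng <= items for ng in no_goods)

record_no_good({"WA": "red", "NSW": "red"})    # learned from the failed subtree above
print(violates_no_good({"WA": "red", "NSW": "red", "T": "red"}))   # True: prune this branch
print(violates_no_good({"WA": "red", "NSW": "green"}))             # False: keep searching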
