Backtracking Search For CSPs
Degree heuristic:
The degree heuristic attempts to reduce the branching factor on future choices by
selecting the variable that is involved in the largest number of constraints on other
unassigned variables. [useful tie-breaker]
E.g., in the Australia map-coloring problem, SA is the variable with the highest degree, 5; the other variables have degree 2 or 3; T has degree 0.
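A minimal sketch of this variable-selection rule, assuming simple dict-based `domains` and `neighbors` structures (illustrative names, not from any particular library): MRV picks the variable with the fewest remaining values, and the degree heuristic breaks ties.

```python
# Illustrative sketch: MRV variable selection with the degree heuristic
# as a tie-breaker, on the Australia map-coloring CSP.

def select_unassigned_variable(domains, neighbors, assignment):
    """Fewest remaining values first; break ties by highest degree."""
    unassigned = [v for v in domains if v not in assignment]

    def degree(v):  # number of constraints on other *unassigned* variables
        return sum(1 for n in neighbors[v] if n not in assignment)

    return min(unassigned, key=lambda v: (len(domains[v]), -degree(v)))

neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
domains = {v: ["red", "green", "blue"] for v in neighbors}

# All domains have size 3 initially, so the degree tie-breaker picks SA.
print(select_unassigned_variable(domains, neighbors, {}))  # SA
```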
ORDER-DOMAIN-VALUES
Value selection: fail-last
If we are trying to find all the solutions to a problem (not just the first one), then the ordering does not matter.
Least-constraining-value heuristic: prefers the value that rules out the fewest choices for the neighboring variables in the constraint graph. (Try to leave the maximum flexibility for subsequent variable assignments.)
e.g. Suppose we have generated the partial assignment WA=red and NT=green, and our next choice is for Q. Blue would be a bad choice because it eliminates the last legal value left for Q’s neighbor SA; the heuristic therefore prefers red to blue.
The minimum-remaining-values and degree heuristics are domain-independent methods for deciding which variable to choose next in a backtracking search.
The least-constraining-value heuristic helps in deciding which value to try first for a
given variable.
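The least-constraining-value ordering can be sketched as follows, assuming dict-based `domains` and `neighbors` (illustrative names). The example reproduces the situation above: WA=red and NT=green have already pruned the domains, and we are choosing a value for Q.

```python
# Illustrative sketch of least-constraining-value ordering: count, for each
# candidate value, how many choices it would rule out in the domains of
# unassigned neighbors; values ruling out fewer choices come first.

def order_domain_values(var, domains, neighbors, assignment):
    def ruled_out(value):
        return sum(1 for n in neighbors[var]
                   if n not in assignment and value in domains[n])
    return sorted(domains[var], key=ruled_out)

neighbors = {"Q": ["NT", "SA", "NSW"]}
assignment = {"WA": "red", "NT": "green"}
# Domains after forward checking on WA=red and NT=green:
domains = {"Q": ["red", "blue"], "SA": ["blue"],
           "NSW": ["red", "green", "blue"]}

# Blue would wipe out SA's last value, so red is tried first.
print(order_domain_values("Q", domains, neighbors, assignment))
# ['red', 'blue']
```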
2. Interleaving search and inference
INFERENCE can be performed every time we make a choice of a value for a variable.
One of the simplest forms of inference is called forward checking. Whenever a
variable X is assigned, the forward-checking process establishes arc consistency for it:
for each unassigned variable Y that is connected to X by a constraint, delete from Y’s
domain any value that is inconsistent with the value chosen for X.
There is no reason to do forward checking if we have already done arc consistency as
a preprocessing step.
Advantage: For many problems the search will be more effective if we combine the
MRV heuristic with forward checking.
Disadvantage: Forward checking only makes the current variable arc-consistent, but
doesn’t look ahead and make all the other variables arc-consistent.
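A minimal forward-checking sketch under the same assumptions as before (dict-based domains and neighbors, and map coloring’s not-equal constraint):

```python
# Illustrative forward checking: after assigning var = value, prune values
# inconsistent with it from each unassigned neighbor's domain. For map
# coloring, "inconsistent" simply means "equal to the assigned color".

def forward_check(var, value, domains, neighbors, assignment):
    for n in neighbors[var]:
        if n not in assignment and value in domains[n]:
            domains[n].remove(value)
            if not domains[n]:
                return False  # domain wipe-out: this assignment must fail
    return True

neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"],
             "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
             "V": ["SA", "NSW"], "T": []}
domains = {v: ["red", "green", "blue"] for v in neighbors}

forward_check("WA", "red", domains, neighbors, {"WA": "red"})
print(domains["SA"])  # red pruned: ['green', 'blue']
```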
Forward checking can supply the conflict set with no extra work.
Whenever forward checking based on an assignment X=x deletes a value from Y’s domain, add X=x to Y’s conflict set; if the last value is deleted from Y’s domain, the assignments in the conflict set of Y are added to the conflict set of X.
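This bookkeeping can be folded into forward checking like so (illustrative names; `conf` maps each variable to its conflict set, here simplified to sets of variables rather than full X=x pairs):

```python
# Sketch of forward checking that builds conflict sets with no extra work:
# whenever X=x deletes a value from Y's domain, X joins conf(Y); on a
# wipe-out of Y, conf(Y) is merged into conf(X).

def forward_check(var, value, domains, neighbors, assignment, conf):
    for n in neighbors[var]:
        if n not in assignment and value in domains[n]:
            domains[n].remove(value)
            conf[n].add(var)              # X=x pruned Y, so X joins conf(Y)
            if not domains[n]:            # wipe-out: propagate the blame
                conf[var] |= conf[n] - {var}
                return False
    return True

neighbors = {"WA": ["NT"], "NT": ["WA", "Q"], "Q": ["NT"]}
domains = {"WA": ["red"], "NT": ["red", "green"], "Q": ["green"]}
conf = {v: set() for v in neighbors}

forward_check("WA", "red", domains, neighbors, {"WA": "red"}, conf)
print(conf["NT"])  # {'WA'}: WA's assignment pruned NT's domain
```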
In fact, every branch pruned by backjumping is also pruned by forward checking.
Hence simple backjumping is redundant in a forward-checking search or in a search
that uses stronger consistency checking (such as MAC).
Conflict-directed backjumping:
e.g.
consider the partial assignment {WA=red, NSW=red}, which can be shown to be inconsistent.
We try T=red next and then assign NT, Q, V, SA; no assignment can work for these last four variables.
Eventually we run out of values to try at NT, but simple backjumping cannot work, because NT doesn’t have a complete conflict set of preceding variables that caused it to fail.
The set {WA, NSW} is a deeper notion of the conflict set for NT: these are the preceding variables that cause NT, together with any subsequent variables, to have no consistent solution. So the algorithm should backjump to NSW, skipping over T.
A backjumping algorithm that uses conflict sets defined in this way is called conflict-directed backjumping.
How to Compute:
When a variable’s domain becomes empty, a “terminal” failure occurs; that variable has a standard conflict set.
Let Xj be the current variable and let conf(Xj) be its conflict set. If every possible value for Xj fails, backjump to the most recent variable Xi in conf(Xj), and set
conf(Xi) ← conf(Xi) ∪ conf(Xj) − {Xi}.
The conflict set for a variable means: there is no solution from that variable onward, given the preceding assignments to the variables in the conflict set.
e.g.
assign WA, NSW, T, NT, Q, V, SA in order.
SA fails; its conflict set is {WA, NT, Q}. (standard conflict set)
Backjump to Q; its conflict set becomes {NT, NSW} ∪ {WA, NT, Q} − {Q} = {WA, NT, NSW}.
Backjump to NT; its conflict set becomes {WA} ∪ {WA, NT, NSW} − {NT} = {WA, NSW}.
Hence the algorithm backjumps to NSW, skipping over T.
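The worked example can be checked in code; `merge_conflicts` is a hypothetical helper implementing the update rule conf(Xi) ← conf(Xi) ∪ conf(Xj) − {Xi}:

```python
# Reproducing the worked example: conflict-set updates while backjumping
# from SA, through Q and NT, back to NSW.

def merge_conflicts(conf, xi, xj):
    """Apply conf(xi) <- conf(xi) | conf(xj) - {xi} when xj fails."""
    conf[xi] = (conf[xi] | conf[xj]) - {xi}

conf = {"SA": {"WA", "NT", "Q"},   # SA's standard conflict set on failure
        "Q": {"NT", "NSW"},
        "NT": {"WA"}}

merge_conflicts(conf, "Q", "SA")   # backjump from SA to Q
print(sorted(conf["Q"]))           # ['NSW', 'NT', 'WA']

merge_conflicts(conf, "NT", "Q")   # Q also fails; backjump to NT
print(sorted(conf["NT"]))          # ['NSW', 'WA'] -> next jump skips T to NSW
```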
After backjumping from a contradiction, how to avoid running into the same problem
again:
Constraint learning: The idea of finding a minimum set of variables from the
conflict set that causes the problem. This set of variables, along with their
corresponding values, is called a no-good. We then record the no-good, either by
adding a new constraint to the CSP or by keeping a separate cache of no-goods.
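A hypothetical no-good cache (all names illustrative) might store each no-good as a frozen set of (variable, value) pairs and reject any partial assignment that contains a recorded no-good:

```python
# Sketch of constraint learning via a no-good cache: record the minimal
# failing sub-assignment, then check future partial assignments against it.

nogoods = set()

def record_nogood(assignment, variables):
    """Cache the values of the given variables as a no-good."""
    nogoods.add(frozenset((v, assignment[v]) for v in variables))

def violates_nogood(assignment):
    """True if the assignment contains any recorded no-good."""
    items = set(assignment.items())
    return any(ng <= items for ng in nogoods)

# {WA=red, NSW=red} was the culprit; T=red is irrelevant and is left out.
record_nogood({"WA": "red", "NSW": "red", "T": "red"}, ["WA", "NSW"])

print(violates_nogood({"WA": "red", "NSW": "red", "NT": "green"}))  # True
print(violates_nogood({"WA": "red", "NSW": "green"}))               # False
```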
Backtracking occurs when no legal assignment can be found for a variable.
Conflict-directed backjumping backtracks directly to the source of the problem.