Algorithm: Definition: An algorithm is a finite set of instructions that accomplishes a particular task.
1. Introduction:
Algorithm: The word algorithm comes from the name of the ninth-century Persian mathematician Abu Jafar
Mohammed Ibn Musa Al Khowarizmi. An algorithm is simply a set of rules used
to perform some calculations, either by hand or, more usually, on a machine (computer).
Definition: An algorithm is a finite set of instructions that accomplishes a particular task.
Another definition: a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate (genuine) input in a finite amount of time.
In addition, all algorithms must satisfy the following criteria (characteristics).
1. Input: zero or more quantities are externally supplied as input.
Consider a Fibonacci numbers program whose aim is to display the first ten
Fibonacci numbers. No input is required, because the problem itself fixes the count at ten
Fibonacci values; so zero input items are required.
Another problem is displaying a given number of even numbers; here the user must supply how
many evens are required, and based on that input the evens are displayed. So one data
item is required as input.
2. Output: At least one quantity is produced by given algorithm as output.
In the case of the Fibonacci numbers program, after executing the program the first ten
Fibonacci values are displayed as output.
In the second case, based on user input the program should display the given number of evens. An
input of a negative number is wrong, and a proper error message should be displayed as output. So
this program produces at least one output: either an error message or the requested number of even
values.
3. Definiteness: Each instruction is clear and unambiguous i.e. each step must be easy to
understand and convey only a single meaning.
4. Effectiveness: Each instruction must be very basic, so that it can be carried out by a
person using only pencil and paper.
This criterion applies to both the Fibonacci and the evens problems. For example, if the user
enters a negative number as input in the evens problem and there is a step like
Step: If N < 0 then
Go to ERROR
then "go to ERROR" is a wrong instruction; such instructions should not appear
in an algorithm.
5. Finiteness: If we trace out the instructions of an algorithm, then for all cases the
algorithm must terminate after a finite number of steps.
Both the Fibonacci and the even numbers problems must be solved in some finite number
of steps. For example, continuous display of the Fibonacci series without termination leads to
abnormal termination.
[Figure: Algorithm design flow - devise a solution as an algorithm using a design technique; prove correctness (if not correct, redesign); analyse the algorithm; if it is not efficient, redesign; otherwise the algorithm is done.]
3. Types of Algorithms:
There are four types of algorithms
1. Approximate algorithm.
2. Probabilistic algorithm.
3. Infinite algorithm.
4. Heuristic algorithm.
1. Approximate algorithm: An algorithm is said to be approximate if it gives an approximate rather than an exact solution. Ex: taking the value of pi as 3.14, etc.
2. Probabilistic algorithm: If the solution of a problem is uncertain then it is called as
probabilistic algorithm. Ex: Tossing of a coin.
3. Infinite Algorithm: An algorithm which is not finite is called as infinite algorithm.
Ex: A complete solution of a chessboard, division by zero.
4. Heuristic algorithm: Getting more outputs while giving fewer inputs is called a heuristic
algorithm.
Ex: All Business Applications.
5. Specification of Algorithm:
There are various ways by which we can specify an algorithm.
Using natural language
Pseudocode
Algorithm
Flow chart
Program (Using programming language)
It is very easy to specify an algorithm using natural language. But many times a
specification of an algorithm in natural language is not clear and may require further
description.
Example: Write an algorithm to perform addition of two numbers.
Step 1: Read the first number, say ‘a’.
Step 2: Read the second number, say ‘b’.
Step 3: Add the two numbers and store the result in a variable ‘c’.
Step 4: Display the result.
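The same algorithm can also be specified as a program (the last method in the list above). A minimal C rendering of the four steps, as a sketch; the variable names a, b, c simply follow the steps:
#include <stdio.h>

int main(void)
{
    int a, b, c;
    scanf("%d", &a);      /* Step 1: read the first number  */
    scanf("%d", &b);      /* Step 2: read the second number */
    c = a + b;            /* Step 3: add and store in c     */
    printf("%d\n", c);    /* Step 4: display the result     */
    return 0;
}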
3. The delimiter ; is used at the end of each statement.
4. An identifier begins with a letter. Example: sum, sum5, a; but not 5sum, 4a, etc.
5. Assignment of values to variables is done using the assignment operator := (or ←).
6. There are two Boolean values TRUE and FALSE.
Logical operators: AND, OR, NOT.
Relational operators: <, ≤, ≥, >, =, ≠.
Arithmetic operators: +, -, *, /, %;
7. The conditional statement if-then or if-then-else is written in the following form.
If (condition) then (statement)
If (condition) then (statement-1) else (statement-2)
'If' is a powerful statement used to make decisions based on a condition. If the condition is
true, a particular block of statements is executed.
Example
if(a>b) then
{
write("a is big");
}
else
{
write("b is big");
}
8. Case statement
case
{
:(condition-1): (statement-1)
:(condition-2): (statement-2)
..............
..............
:(condition-n): (statement-n)
else: (statement-(n+1));
}
If condition-1 is true, statement-1 is executed and the case statement is exited. If
condition-1 is false, condition-2 is evaluated; if condition-2 is true, statement-2 is executed,
and so on. If none of the conditions is true, statement-(n+1) is executed and the case
statement is exited. The else clause is optional.
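C has no direct equivalent of this generalized case statement (the C switch works only on constant values), so one way to realize it is an if / else-if chain; the grade-band conditions below are only illustrative:
#include <stdio.h>

int main(void)
{
    int marks = 72;                 /* illustrative input            */
    if (marks >= 75)                /* condition-1 : statement-1     */
        printf("Distinction\n");
    else if (marks >= 60)           /* condition-2 : statement-2     */
        printf("First class\n");
    else if (marks >= 35)           /* condition-n : statement-n     */
        printf("Pass\n");
    else                            /* else clause : statement-(n+1) */
        printf("Fail\n");
    return 0;
}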
9. Loop statements:
For loop:
i). The general form of the for loop is
for variable := value1 to valuen step s do
{
    statement-1;
    .......
    statement-n;
}
Example:
for i := 1 to 10 do
{
    write(i); // displays the numbers from 1 to 10
}
ii). While loop:
The general form of the while loop is
while <condition> do
{
    <statement 1>
    <statement 2>
    ........
    <statement n>
}
Example:
i := 1;
while (i <= 10) do
{
    write(i); // displays the numbers from 1 to 10
    i := i + 1;
}
Note that the statements of while loop are executed as long as <condition> is true.
iii). Repeat-until loop:
The general form of repeat-until is
repeat
{
    <statement 1>
    <statement 2>
    ......
    <statement n>
} until <condition>
Example:
i := 1;
repeat
{
    write(i);
    i := i + 1;
} until (i > 10);
Note that the statements are executed as long as <condition> is false.
6. Performance Analysis:
Performance analysis or analysis of algorithms refers to the task of determining the
efficiency of an algorithm i.,e how much computing time and storage an algorithm requires to
run (or execute). This analysis of algorithm helps in judging the value of one algorithm over
another.
To judge an algorithm, particularly two things are taken into consideration
1. Space complexity
2. Time complexity.
Space Complexity: The space complexity of an algorithm (program) is the amount of
memory it needs to run to completion. The space needed by an algorithm has the following
components.
1. Instruction Space.
2. Data Space.
3. Environment Stack Space.
Instruction Space: Instruction space is the space needed to store the compiled version of the
program instructions. The amount of instruction space needed depends on factors such as:
i). The compiler used to compile the program into machine code.
ii). The compiler options in effect at the time of compilation.
iii). The target computer, i.e., the computer on which the algorithm runs.
Note that, one compiler may produce less code as compared to another compiler, when the
same program is compiled by these two.
Data Space: Data space is the space needed to store all constant and variable values. Data
space has two components.
i). Space needed by constants, for example 0, 1, 2.134.
ii). Space needed by dynamically allocated objects such as arrays, structures, classes.
Environment Stack Space: Environment stack space is used during the execution of
functions. Each time a function is invoked, the following data are saved on the environment
stack:
i). The return address.
ii). The values of local variables.
iii). The values of formal parameters in the function being invoked.
Environmental stack space is mainly used in recursive functions. Thus, the space requirement
of any program p may therefore be written as
Space complexity S(P) = C + Sp (Instance characteristics).
This equation shows that the total space needed by a program is divided into two parts.
A fixed space requirement C that is independent of the instance characteristics of the inputs and
outputs:
- Instruction space
- Space for simple variables, fixed-size structure variables, constants.
A variable space requirement Sp(I) that depends on the instance characteristic I.
- This part includes dynamically allocated space and the recursion stack space.
The following examples illustrate instance characteristics.
Examples: 1
Algorithm NEC(float x, float y, float z)
{
    return (x + y + y * z + (x + y + z)) / (x + y) + 4.0;
}
In the above algorithm there are no instance characteristics, and the space needed by x, y, z is
independent of the instance characteristics. Therefore we can write
S(NEC) = 3 + 0 = 3 (one word each for x, y and z)
Space complexity is O(1).
Examples: 2
Algorithm ADD(float X[], int n)
{
    sum := 0.0;
    for i := 1 to n do
        sum := sum + X[i];
    return sum;
}
Here we need at least n words, since X must be large enough to hold the n elements to be summed.
The problem instance is characterized by n, the number of elements to be summed. So we
can write
S(ADD) = 3 + n
(3 = one word each for n, i and sum; n is for the array X[])
Space complexity is O(n).
Time Complexity
The time complexity of an algorithm is the amount of computer time it needs to run to
completion. We can measure the time complexity of an algorithm in two ways:
1. A priori analysis, at compile time.
2. A posteriori analysis, at run (execution) time.
In a priori analysis, we analyze the behavior of the algorithm before it is executed; it
concentrates on determining the order of execution of statements.
In a posteriori analysis, we measure the actual execution time while the algorithm runs.
A posteriori analysis gives accurate values, but it is very costly.
The time T(P) taken by a program P is the sum of its compile time and its execution time.
The compile time does not depend on the instance characteristics (the size of the input), so we
confine ourselves to the run time, which does depend on them; this run time is denoted by
tp (instance characteristics).
The following equation counts the additions, subtractions, multiplications, divisions,
compares, loads, stores and so on that would be made by the code for P:
tp(n) = Ca*ADD(n) + Cs*SUB(n) + Cm*MUL(n) + Cd*DIV(n) + ...
where n denotes the instance characteristics; Ca, Cs, Cm, Cd and so on denote the time needed
for an addition, subtraction, multiplication, division and so on; and ADD, SUB, MUL, DIV
and so on are functions whose values are the numbers of additions, subtractions,
multiplications and divisions performed. In practice, determining time complexity this way is
an impossible task.
Another method is the step count. Using step counts, we can determine the number of steps
needed by a program to solve a particular problem in two ways.
Method 1: Introduce a global variable "count", initialized to zero. Each time a
statement in the original program is executed, count is incremented by the step count of that
statement.
Algorithm Sum(a, n)        // original algorithm
{
    s := 0;
    for i := 1 to n do
    {
        s := s + a[i];
    }
    return s;
}

count := 0;                // global variable, initially zero
Algorithm Sum(a, n)        // with count statements
{
    count := count + 1;    // for s := 0
    s := 0;
    for i := 1 to n do
    {
        count := count + 1;    // for the for statement
        s := s + a[i];
        count := count + 1;    // for the assignment
    }
    count := count + 1;    // for the last test of the for loop
    count := count + 1;    // for the return statement
    return s;
}
Thus the total number of steps is 2n + 3.
Method 2: The second method to determine the step count of an algorithm is to build a
table in which we list the total number of steps contributed by each statement.
Ex:
Statement                      s/e   Frequency   Total steps
1. Algorithm Sum(a, n)          0       -            0
2. {                            0       -            0
3.   s := 0;                    1       1            1
4.   for i := 1 to n do         1      n + 1        n + 1
5.     s := s + a[i];           1       n            n
6.   return s;                  1       1            1
7. }                            0       -            0
   Total                                            2n + 3
The S/e (steps per execution) of a statement is the amount by which the count changes as
a result of the execution of that statement. The frequency determines the total number of times
each statement is executed.
Complexity of Algorithms:
1. Best Case: Inputs are provided in such a way that the minimum time is required to
process them.
2. Average Case: The amount of time the algorithm takes on an average set of inputs.
3. Worst Case: The amount of time the algorithm takes on the worst possible set of inputs.
Example: Linear Search
Index:  1   2   3   4   5   6   7   8   9
A:      3   4   5   6   7   9  10  12  15
Best Case: Suppose we want to search for the element 3. First, A(1) is compared with 3, and a
match occurs. So the number of comparisons is only one. The search takes the minimum number
of comparisons, so it comes under the best case.
Time complexity is O(1).
Average Case: Suppose we want to search for the element 7. First, A(1) is compared with 7
(3 = 7?), and no match occurs. Next, A(2) is compared with 7; no match. A(3) and A(4) are
compared with 7; no match. So far 4 comparisons have taken place. Now A(5) is compared with 7
(7 = 7), and a match occurs. The number of comparisons is 5. The search takes an average number
of comparisons, so it comes under the average case.
Note: If there are n elements, then we require n/2 comparisons.
∴ Time complexity is O(n/2) = O(n) (we neglect the constant).
Worst Case: Suppose we want to search for the element 15. First, A(1) is compared with 15
(3 = 15?), and no match occurs. This process continues until either the element is found or the
list is exhausted. The element is found at the 9th comparison, so the number of comparisons is 9.
Time complexity is O(n).
Note: If the element is not found in array, then we have to search entire array, so it comes under
worst case.
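As an illustration, a small C version of this linear search that also counts comparisons can be used to observe the three cases on the array above (the array contents and keys are the ones from the example):
#include <stdio.h>

/* Returns the 1-based position of key in a[0..n-1], or -1 if absent;
 * *comparisons is set to the number of key comparisons performed.   */
int linear_search(const int a[], int n, int key, int *comparisons)
{
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;
        if (a[i] == key)
            return i + 1;
    }
    return -1;
}

int main(void)
{
    int a[] = {3, 4, 5, 6, 7, 9, 10, 12, 15};
    int n = 9, c;
    linear_search(a, n, 3, &c);  printf("key 3 : %d comparisons\n", c);  /* best: 1    */
    linear_search(a, n, 7, &c);  printf("key 7 : %d comparisons\n", c);  /* average: 5 */
    linear_search(a, n, 15, &c); printf("key 15: %d comparisons\n", c);  /* worst: 9   */
    return 0;
}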
7. Asymptotic Notation:
Accurate measurement of time complexity is possible with asymptotic notation.
Asymptotic complexity gives an idea of how rapidly the space requirement or time requirement
grows as the problem size increases. Suppose a computing device can execute 1000 complex
operations per second; asymptotic complexity tells us the size of problem that can be solved in a
second, a minute or an hour by algorithms of different asymptotic complexity. In general,
asymptotic complexity is a measure of the algorithm, not of the problem. Usually the complexity
of an algorithm is given as a function relating the input length to the number of steps (time
complexity) or storage locations (space complexity). For example, a running time may be
expressed as a function of the input size n as follows:
f(n) = n^4 + 100n^2 + 10n + 50 (running time)
The important asymptotic notations are:
1. Big oh notation (O)
2. Omega notation (Ω)
3. Theta notation (Θ)
Let f(n) and g(n) be two non-negative functions.
Big oh notation
Big oh notation is denoted by 'O'. It is used to describe the efficiency of an algorithm; it
represents the upper bound of an algorithm's running time. Using big O notation, we can
give the largest amount of time taken by the algorithm to complete.
Definition: Let f(n) and g(n) be two non-negative functions. We say that f(n) is O(g(n))
if and only if there exist a positive constant c and an n0 such that
f(n) ≤ c*g(n) for all non-negative values of n, where n ≥ n0.
Here, g(n) is an upper bound for f(n).
Ex: Let f(n) = 2n^4 + 5n^2 + 2n + 3
            ≤ 2n^4 + 5n^4 + 2n^4 + 3n^4
            = (2 + 5 + 2 + 3)n^4
            = 12n^4
∴ f(n) ≤ 12n^4 for all n ≥ 1; this implies g(n) = n^4, c = 12 and n0 = 1.
∴ f(n) = O(n^4)
[Figure: f(n) and c*g(n) plotted against n; c*g(n) stays above f(n) for all n ≥ n0.]
The above definition states that the function f is at most c times the function g when n is
greater than or equal to n0.
This notation provides an upper bound for the function f, i.e., the function g(n) is an upper
bound on the value of f(n) for all n, where n ≥ n0.
Omega notation
Omega notation is denoted by 'Ω'. It is used to represent the lower bound of an algorithm's
running time.
Definition: Let f(n) and g(n) be two non-negative functions. We say that f(n) is Ω(g(n)) if and
only if there exist a positive constant c and an n0 such that f(n) ≥ c*g(n) for all n ≥ n0. Here,
g(n) is a lower bound for f(n).
Example: Let f(n) = 2n^4 + 5n^2 + 2n + 3
                 ≥ 2n^4 (as n grows, the lower-order terms are insignificant)
∴ f(n) ≥ 2n^4 for all n ≥ 1; this implies g(n) = n^4, c = 2 and n0 = 1.
∴ f(n) = Ω(n^4)
[Figure: f(n) and c*g(n) plotted against n; f(n) stays above c*g(n) for all n ≥ n0.]
Big Theta notation
The big theta notation is denoted by 'Θ'. It lies between the upper bound and the lower
bound of an algorithm's running time.
Definition: Let f(n) and g(n) be two non-negative functions. We say that f(n) is Θ(g(n)) if and
only if there exist positive constants c1 and c2 such that
c1*g(n) ≤ f(n) ≤ c2*g(n) for all non-negative values of n, where n ≥ n0.
The definition states that the function f(n) lies between c1 times the function g(n) and
c2 times the function g(n), where c1 and c2 are positive constants.
This notation provides both lower and upper bounds for the function f(n), i.e., g(n) is both a
lower and an upper bound on the value of f(n) for large n. In other words, theta notation says
that f(n) is both O(g(n)) and Ω(g(n)) for all n ≥ n0.
Thus f(n) = Θ(g(n)) iff g(n) is both an upper and a lower bound on f(n).
Example: Let f(n) = 2n^4 + 5n^2 + 2n + 3
2n^4 ≤ 2n^4 + 5n^2 + 2n + 3 ≤ 12n^4
∴ 2n^4 ≤ f(n) ≤ 12n^4 for all n ≥ 1
∴ g(n) = n^4, c1 = 2, c2 = 12 and n0 = 1
∴ f(n) = Θ(n^4)
[Figure: c1*g(n), f(n) and c2*g(n) plotted against n; f(n) lies between c1*g(n) and c2*g(n) for all n ≥ n0.]
Thus the time complexity is T(n) = Σ (i = 1 to n) 1 = 1 + 1 + ... + 1 (n times) = n.
∴ T(n) = O(n)
8 Probabilistic Analysis:
Probabilistic analysis of algorithms is an approach to estimate the complexity of an
algorithm. It uses the probability in the analysis of problems. It starts from an assumption about a
probabilistic distribution of the set of all possible inputs. This assumption is then used to design an
efficient algorithm or to compute an expected running time of a known algorithm.
The following is a simple example of probabilistic average-case analysis.
Example: Consider the linear search algorithm, which searches for a target element x in a
given list of size n. In the worst case, the algorithm will examine all n elements in the list before
terminating.
For a probabilistic average-case analysis, it is generally assumed that all possible
terminations are equally likely; that is, the probability that x will be found at position 1 is 1/n,
at position 2 is 1/n, and so on.
The average search cost is therefore the sum of all possible search costs each multiplied
by their associated probability.
For example, if n=5, we would have
Average search cost=1/5(1 +2 +3 +4 +5)=3.
In the general case we have
Average search cost = (1/n)(n(n+1)/2) = (n+1)/2
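A quick numerical check of this formula in C, under the same assumption that every position 1..n is equally likely:
#include <stdio.h>

int main(void)
{
    int n = 5;
    double total = 0.0;
    /* The cost of finding the key at position i is i comparisons. */
    for (int i = 1; i <= n; i++)
        total += i;
    printf("average search cost = %.1f\n", total / n);  /* (n+1)/2 = 3.0 */
    return 0;
}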
Probabilistic analysis is mainly useful for estimating the running time of an algorithm,
calculating search costs in a searching algorithm, etc.
9. Amortized Analysis:
Amortized analysis refers to finding the average running time per operation over a worst-case
sequence of operations. That is, the main goal of amortized analysis is to analyze the time
per operation for a series of operations. Sometimes a single operation might be expensive; in that
case amortized analysis specifies the average over a sequence of operations. The amortized cost
per operation for a sequence of n operations is the total cost of the operations divided by n.
For example, if we have 100 operations at cost 1, followed by one operation at cost 100,
then the amortized cost per operation is 200/101 < 2.
The average case analysis and amortized analysis are different. In average case analysis,
we average over all possible inputs, whereas in amortized analysis we average over a
sequence of operations; amortized analysis does not allow random selection of the input.
There are several techniques used in amortized analysis.
1. Aggregate Analysis: In this type of analysis upper bound T(n) on the total cost of a
sequence of n operations is decided, then the average cost is calculated as T(n)/n.
2. Accounting Method: In this method the individual cost of each operation is determined,
by combining immediate execution time and its influence on the running time of future operations.
3. Potential Method: It is like the accounting method, but overcharges operations early to
compensate for undercharges later.
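As an illustration of aggregate analysis (technique 1 above), consider the classic dynamic-array append, which is not an example from the text but a standard one: n appends with capacity doubling cost O(n) element-copies in total, so the amortized cost per append is O(1). A C sketch with hypothetical names:
#include <stdlib.h>

typedef struct {
    int *data;
    int size;      /* elements stored */
    int capacity;  /* slots allocated */
} DynArray;

/* Append x; doubling the capacity when full is the occasional
 * expensive operation that aggregate analysis averages out.   */
void push(DynArray *a, int x)
{
    if (a->size == a->capacity) {
        a->capacity = a->capacity ? 2 * a->capacity : 1;
        a->data = realloc(a->data, a->capacity * sizeof(int));  /* sketch: no error check */
    }
    a->data[a->size++] = x;
}
Over any sequence of n pushes the total copying work is O(n), so T(n)/n = O(1) per operation, even though an individual push can cost O(n).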
UNIT-2 DIVIDE AND CONQUER
Binary Search:
Binary search is an efficient searching technique that works with only sorted lists. So the
list must be sorted before using the binary search method. Binary search is based on divide-and-
conquer technique.
The process of binary search is as follows:
The method starts with looking at the middle element of the list. If it matches with the key
element, then search is complete. Otherwise, the key element may be in the first half or second
half of the list. If the key element is less than the middle element, then the search continues with
the first half of the list. If the key element is greater than the middle element, then the search
continues with the second half of the list. This process continues until the key element is found
or the search fails indicating that the key is not there in the list.
Consider the list of elements: -4, -1, 0, 5, 10, 18, 32, 33, 98, 147, 154, 198, 250, 500.
Trace the binary search algorithm searching for the element -1.
Sol: [Figure: trace of binary search on the given 14-element list for the key -1, showing the low, mid and high positions at each step.]
The following algorithm gives the iterative binary search.
Algorithm BinarySearch(a, n, key)
{
    // a is an array of n elements, sorted in ascending order.
    // key is the element to be searched.
    // If key is found in array a, return mid such that key = a[mid];
    // otherwise return -1.
    low := 0;
    high := n - 1;
    while (low ≤ high) do
    {
        mid := (low + high)/2;
        if (key = a[mid]) then
            return mid;
        else if (key < a[mid]) then
            high := mid - 1;
        else
            low := mid + 1;
    }
    return -1;
}
The following algorithm gives the recursive binary search.
Algorithm BinSearch(a, n, key, low, high)
{
    // a is an array of size n, sorted in ascending order.
    // key is the element to be searched.
    // If key is found, return mid such that key = a[mid];
    // otherwise return -1.
    if (low ≤ high) then
    {
        mid := (low + high)/2;
        if (key = a[mid]) then
            return mid;
        else if (key < a[mid]) then
            return BinSearch(a, n, key, low, mid - 1);
        else
            return BinSearch(a, n, key, mid + 1, high);
    }
    return -1;
}
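For reference, a runnable C version of the iterative algorithm, tested on the example list above:
#include <stdio.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1. */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids overflow of low + high */
        if (a[mid] == key)
            return mid;
        else if (key < a[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}

int main(void)
{
    int a[] = {-4, -1, 0, 5, 10, 18, 32, 33, 98, 147, 154, 198, 250, 500};
    printf("%d\n", binary_search(a, 14, -1));   /* prints 1 */
    return 0;
}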
Advantages of Binary Search: The main advantage of binary search is that it is faster than
sequential (linear) search, because it takes fewer comparisons than the linear search method to
determine whether the given key is in the list.
Disadvantages of Binary Search: The disadvantage of binary search is that it can be applied
only to a sorted list of elements. Binary search is unsuccessful if the list is unsorted.
Efficiency of Binary Search: To evaluate binary search, count the number of comparisons in
the best case, average case, and worst case.
Best Case: The best case occurs if the middle element happens to be the key element. Then only
one comparison is needed to find it. Thus the efficiency of binary search is O(1).
Ex: Let the given list be: 1, 5, 10, 11, 12, and let key = 10.
low points to 1, mid points to 10, high points to 12.
Since the key is the middle element, it is found at the first attempt.
Worst Case: Assume that in the worst case the key element is not in the list. Then the process
of dividing the list in half continues until there is only one item left to check.
Items left to search Comparisons so far
16 0
8 1
4 2
2 3
1 4
For a list of size 16, there are 4 comparisons to reach a list of size one, given that there is one
comparison for each division, and each division splits the list size in half.
In general, if n is the size of the list and c is the number of comparisons in the worst case, then
c = log2 n
∴ Efficiency in the worst case = O(log n)
Average Case: In binary search, the average case efficiency is near to the worst case efficiency.
So the average case efficiency will be taken as O(log n).
Efficiency in average case = O (log n).
Quick Sort:
The reduction step of the quick sort algorithm finds the final position of one of the
numbers. In this example, we use the first number, 12, which is called the pivot element.
This is accomplished as follows.
Let i be the position of the second element and j be the position of the last element,
i.e., i = 2 and j = 8 in this example.
Assume that a[n+1] = ∞, where a is an array of size n.
[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]   i  j
 12    6   18    4    9    8    2   15    ∞   2  8
First scan the list from left to right (from i to j) and compare each element with
the pivot. This continues until an element is found that is greater than or equal to the pivot
element; that element's position becomes the value of i.
Now scan the list from right to left (from j to i) and compare each element with
the pivot. This continues until an element is found that is less than or equal to the pivot
element; that element's position becomes the value of j.
Now compare i and j. If i < j, then swap a[i] and a[j]; otherwise swap the pivot element
and a[j].
Continue the above process until the entire list is sorted.
[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]   i  j
 12    6   18    4    9    8    2   15    ∞   2  8
 12    6   18    4    9    8    2   15    ∞   3  7
 12    6    2    4    9    8   18   15    ∞   7  6
Since i = 7 > j = 6, swap the pivot element and the 6th (jth) element; we get
  8    6    2    4    9   12   18   15
Thus the pivot reaches its final position. The elements to the left of the pivot are smaller
than the pivot (12), and the elements to its right are greater than the pivot (12).
8  6  2  4  9  |  12  |  18  15
(Sublist 1)            (Sublist 2)
Now take sub-list1 and sub-list2 and apply the above process recursively, at last we get
sorted list.
Ex 2: Let the given list be:
 8   18   56   34   9   92   6   2   64
[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]   i  j
 8   18   56   34    9   92    6    2   64    ∞    2  9
 8   18   56   34    9   92    6    2   64    ∞    2  8
 8    2   56   34    9   92    6   18   64    ∞    3  7
 8    2    6   34    9   92   56   18   64    ∞    4  3
 6    2    8   34    9   92   56   18   64
(Sublist 1: 6 2)   pivot 8   (Sublist 2: 34 9 92 56 18 64)
Now take each sub-list that has more than one element and follow the same process as
above. At last we get the sorted list, that is:
2 6 8 9 18 34 56 64 92
The following algorithm shows the quick sort algorithm-
Algorithm Quicksort(i, j)
{
// sorts the array from a[i] through a[j]
If ( i <j) then //if there are more than one element
{
//divide P into two sub-programs
K: = partition (a, i, j+1);
//Here K denotes the position of the partitioning element
//solve the sub problems
Quicksort(i, K-1);
Quicksort(K=1, j);
// There is no need for combining solution
}
}
Algorithm Partition(a, left, right)
{
    // The elements a[left] through a[right] are rearranged in such a manner that if initially
    // pivot = a[left], then after completion a[j] = pivot, and j is returned. Here j is the position
    // where the pivot partitions the list into two partitions. Note that a[right] = ∞.
    pivot := a[left];
    i := left; j := right;
    repeat
    {
        repeat
            i := i + 1;
        until (a[i] ≥ pivot);
        repeat
            j := j - 1;
        until (a[j] ≤ pivot);
        if (i < j) then
            Swap(a, i, j);
    } until (i ≥ j);
    a[left] := a[j];
    a[j] := pivot;
    return j;
}
Algorithm Swap(a, i, j)
{
    // Exchange a[i] with a[j].
    temp := a[i];
    a[i] := a[j];
    a[j] := temp;
}
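A C rendering of the same partition scheme (first element as pivot, two inward scans); unlike the pseudocode it needs no ∞ sentinel because the left-to-right scan is bounds-checked:
void swap(int a[], int i, int j)
{
    int t = a[i]; a[i] = a[j]; a[j] = t;
}

/* Sorts a[left..right] in place. */
void quicksort(int a[], int left, int right)
{
    if (left >= right) return;          /* zero or one element         */
    int pivot = a[left];
    int i = left, j = right + 1;
    for (;;) {
        do { i++; } while (i <= right && a[i] < pivot);  /* scan right */
        do { j--; } while (a[j] > pivot);                /* scan left  */
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, left, j);                   /* pivot to final position j   */
    quicksort(a, left, j - 1);
    quicksort(a, j + 1, right);
}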
Advantages of Quick-sort: Quick-sort is among the fastest sorting methods in practice. But it
is somewhat complex and a little more difficult to implement than other sorting methods.
Efficiency of Quick-sort: The efficiency of Quick-sort depends upon the selection of pivot
element.
Best Case: In the best case, consider the following two assumptions:
1. The pivot we choose is always swapped into exactly the middle of the list, so the pivot
has an equal number of elements to its left and to its right.
2. The number of elements in the list is a power of 2, i.e., n = 2^y.
Worst Case: In the worst case, assume that the pivot partitions the list into two parts such that
one partition has no elements while the other has all the remaining elements.
The left child of each node represents a sub-problem 1/4 as large, and the right child
represents a sub-problem 3/4 as large.
There are log_{4/3} n levels, so the total partitioning time is O(n log_{4/3} n). Now, there is a
mathematical fact that
log_a n = log_b n / log_b a
for all positive numbers a, b and n. Letting a = 4/3 and b = 2, we get
log_{4/3} n = log n / log(4/3)
Merge Sort:
Merge sort is based on the divide-and-conquer technique. The merge sort method is a two-phase
process:
1. Dividing
2. Merging
Dividing Phase: During the dividing phase, the given list of elements is repeatedly divided into
two parts. This division process continues until the list is too small to divide further.
Merging Phase: Merging is the process of combining two sorted lists so that the resultant list is
also sorted. Suppose A is a sorted list with n1 elements and B is a sorted list with n2
elements. The operation that combines the elements of A and B into a single sorted list C with
n = n1 + n2 elements is called merging.
Algorithm-(Divide algorithm)
Algorithm Divide (a, low, high)
{
// a is an array, low is the starting index and high is the end index of a
The merging algorithm is as follows (this fragment copies the elements remaining in the second
run, then copies b back to a):
    while (j ≤ high) do
    {
        b[k] := a[j];
        k := k + 1;
        j := j + 1;
    }
    // copy the elements of b to a
    for i := low to high do
    {
        a[i] := b[i];
    }
}
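The dividing and merging algorithms above are truncated by a page break in the original notes. As a stand-in, a complete C merge sort following the same two-phase idea; the temporary array b plays the role of B above:
/* Merge the sorted runs a[low..mid] and a[mid+1..high] using b[]. */
void merge(int a[], int b[], int low, int mid, int high)
{
    int i = low, j = mid + 1, k = low;
    while (i <= mid && j <= high)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  b[k++] = a[i++];   /* copy the leftovers */
    while (j <= high) b[k++] = a[j++];
    for (i = low; i <= high; i++)        /* copy b back to a   */
        a[i] = b[i];
}

/* Dividing phase: split a[low..high] in half, sort each half, then merge. */
void merge_sort(int a[], int b[], int low, int high)
{
    if (low >= high) return;
    int mid = (low + high) / 2;
    merge_sort(a, b, low, mid);
    merge_sort(a, b, mid + 1, high);
    merge(a, b, low, mid, high);
}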
Ex: Let the list is: - 500, 345, 13, 256, 98, 1, 12, 3, 34, 45, 78, 92.
500 345 13 256 98 1 12 3 34 45 78 92
T(n) = a              if n = 1 (a is a constant)
T(n) = 2T(n/2) + Cn   if n > 1 (C is a constant)   ... (1)
Replacing n by n/2 in equation (1), we get
T(n/2) = 2T(n/4) + Cn/2
Thus, T(n) = 2[2T(n/4) + Cn/2] + Cn
           = 4T(n/4) + 2Cn
           = 4[2T(n/8) + Cn/4] + 2Cn
           = 8T(n/8) + 3Cn
           ...
           = 2^k T(1) + kCn, where k = log2 n
           = an + Cn log2 n
∴ T(n) = O(n log n)
UNIT-III GREEDY METHOD
1. GENERAL METHOD
Greedy method: It is the most straightforward design method. It is popular for obtaining
optimized solutions.
Optimization Problem: An optimization problem is the problem of finding the best
solution (optimal solution) from all the feasible solutions (practicable or possible solutions).
In an optimization problem we are given a set of constraints and an optimization function.
Solutions that satisfy the constraints are called feasible solutions. A feasible solution for
which the optimization function has the best possible value is called an optimal solution.
Ex: Problem: Finding a minimum spanning tree of a weighted connected undirected
graph G.
Constraints: Each time, a minimum-cost edge is added to the tree, and adding an edge
must not form a simple circuit.
Feasible solutions: The feasible solutions are the spanning trees of the given graph G.
Optimal solution: An optimal solution is a spanning tree with minimum cost i.e.
minimum spanning tree.
Q: Find the minimum spanning tree for the following graph.
[Figure: Graph G with vertices A, B, C, D and edge weights A-B = 2, A-C = 3, B-D = 1, C-D = 2.]
The feasible solutions are the spanning trees of the graph G. Those are:
[Figure: The four spanning trees of graph G, with total weights 6, 6, 7 and 5 respectively.]
Among the above spanning trees, figure 4 gives the optimal solution, because it is the
spanning tree with the minimum cost, i.e., it is a minimum spanning tree of the graph G.
The greedy technique suggests constructing a solution to an optimization problem
through a sequence of steps, each expanding the partially constructed solution obtained so far,
until a complete solution to the problem is reached. At each step, the choice made must be
feasible, locally optimal and irrevocable.
Feasible: The choice made has to satisfy the problem's constraints.
Locally optimal: The choice has to be the best local choice among all feasible choices
available at that step.
Irrevocable: The choice, once made, cannot be changed on subsequent steps of the
algorithm (greedy method).
Q: A child buys candy for 42 rupees and gives a 100-rupee note to the cashier. The cashier
wishes to return the change using the fewest number of coins. Assume that the cashier has
Rs. 1, Rs. 5 and Rs. 10 coins.
This problem can be solved using the greedy method, as sketched below.
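A C sketch of the greedy choice: repeatedly give the largest coin that does not exceed the remaining change (here 100 - 42 = 58 rupees):
#include <stdio.h>

int main(void)
{
    int denominations[] = {10, 5, 1};   /* largest first */
    int change = 100 - 42;              /* 58 rupees     */
    for (int i = 0; i < 3; i++) {
        int count = change / denominations[i];  /* greedy choice */
        change %= denominations[i];
        printf("Rs.%d coins: %d\n", denominations[i], count);
    }
    return 0;
}
/* Output: five Rs.10 coins, one Rs.5 coin, three Rs.1 coins = 9 coins in total. */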
Job Sequencing with Deadlines:
This problem consists of n jobs, each associated with a deadline and a profit, and our
objective is to earn maximum profit. We earn the profit of a job only when the job is completed
on or before its deadline. We assume that each job takes unit time to complete.
Points to remember:
In this problem we have n jobs j1, j2, ..., jn with associated deadlines d1,
d2, ..., dn and profits p1, p2, ..., pn.
Profit will only be awarded or earned if the job is completed on or before the
deadline.
We assume that each job takes unit time to complete.
The objective is to earn maximum profit when only one job can be scheduled or
processed at any given time.
Example: Consider the following 5 jobs and their associated deadlines and profits.
index     1    2    3    4    5
JOB       j1   j2   j3   j4   j5
DEADLINE  2    1    3    2    1
PROFIT    60   100  20   40   20
Sort the jobs in descending order of profit:
JOB       j2   j1   j4   j3   j5
DEADLINE  1    2    2    3    1
PROFIT    100  60   40   20   20
Find the maximum deadline value: looking at the jobs, the maximum deadline is 3, so dmax = 3.
As dmax = 3, we have THREE time slots to keep track of. Set the status of each slot to EMPTY:
time slot  1    2    3
Considering the jobs in profit order: j2 (deadline 1) goes into slot 1; j1 (deadline 2) into slot 2;
j4 is rejected because slots 1 and 2 are full; j3 (deadline 3) goes into slot 3; j5 is rejected.
time slot  1    2    3
Job        j2   j1   j3
Profit     100  60   20
Total profit = 100 + 60 + 20 = 180.
Pseudo Code:
for i = 1 to n do
Set k = min(dmax, DEADLINE(i)) //where DEADLINE(i) denotes deadline of ith job
while k >= 1 do
if timeslot[k] is EMPTY then
timeslot[k] = job(i)
break
endif
Set k = k - 1
endwhile
endfor
Algorithm:
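The formal algorithm at this point appears as an image in the original notes and is not reproduced here. As a stand-in, the following C sketch implements the pseudocode above directly, assuming the jobs are already sorted by decreasing profit (as in the example):
#include <stdio.h>

int main(void)
{
    /* Jobs sorted by decreasing profit: j2, j1, j4, j3, j5 */
    int deadline[] = {1, 2, 2, 3, 1};
    int profit[]   = {100, 60, 40, 20, 20};
    int n = 5, dmax = 3;
    int slot[4] = {0};                 /* slot[1..dmax]; 0 means EMPTY   */
    int total = 0;

    for (int i = 0; i < n; i++) {
        int k = deadline[i] < dmax ? deadline[i] : dmax;
        while (k >= 1) {               /* latest free slot on or before the deadline */
            if (slot[k] == 0) {
                slot[k] = i + 1;       /* schedule this job in slot k    */
                total += profit[i];
                break;
            }
            k--;
        }
    }
    printf("maximum profit = %d\n", total);   /* 180, as in the example */
    return 0;
}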
Example: Assume that we have a knapsack with max weight capacity, W = 16.
Our objective is to fill the knapsack with items such that the benefit (value or profit) is
maximum.
Consider the following items and their associated weight and value
ITEM WEIGHT VALUE
i1 6 6
i2 10 2
i3 3 1
i4 5 8
i5 1 3
i6 3 5
Steps:
1. Calculate the value per unit weight for each item (call this the value density).
2. Sort the items by value density in descending order.
3. Take as much as possible of each item, in that order, until the knapsack is full
(a C sketch follows the worked example below).
Compute density = (value/weight):
ITEM  WEIGHT  VALUE  DENSITY
i1    6       6      1.000
i2    10      2      0.200
i3    3       1      0.333
i4    5       8      1.600
i5    1       3      3.000
i6    3       5      1.667
After sorting the items by density in descending order:
ITEM  WEIGHT  VALUE  DENSITY
i5    1       3      3.000
i6 3 5 1.667
i4 5 8 1.600
i1 6 6 1.000
i3 3 1 0.333
i2 10 2 0.200
Now we pick items in order of decreasing density, so that our benefit is maximum and the total
weight of the selected items is at most W = 16.
ITEM  WEIGHT  VALUE  CUM. WEIGHT  CUM. VALUE
i5    1       3      1.000        3.000
i6    3       5      4.000        8.000
i4    5       8      9.000        16.000
i1    6       6      15.000       22.000
Items i5, i6, i4 and i1 are taken complete (full), with no fraction. The remaining capacity is
16 - 15 = 1, so of the next item i3 (weight 3, value 1) we take the fraction 1/3, adding
1 * (1/3) ≈ 0.333 to the value.
So, total weight in the knapsack = 16 and total value inside it = 22.333.
Algorithm:
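The formal algorithm appears as an image in the original notes. A C sketch of the greedy fractional knapsack, assuming the items are already sorted by decreasing density as computed above:
#include <stdio.h>

int main(void)
{
    /* Items sorted by density: i5, i6, i4, i1, i3, i2 */
    double weight[] = {1, 3, 5, 6, 3, 10};
    double value[]  = {3, 5, 8, 6, 1, 2};
    int n = 6;
    double W = 16.0, total = 0.0;

    for (int i = 0; i < n && W > 0; i++) {
        if (weight[i] <= W) {          /* take the whole item  */
            total += value[i];
            W -= weight[i];
        } else {                       /* take only a fraction */
            total += value[i] * (W / weight[i]);
            W = 0;
        }
    }
    printf("maximum value = %f\n", total);   /* 22.333333 */
    return 0;
}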
We found three spanning trees of one complete graph. A complete undirected graph
can have a maximum of n^(n-2) spanning trees, where n is the number of nodes. In the
above example, 3^(3-2) = 3 spanning trees are possible.
General Properties of Spanning Tree
A connected graph G can have more than one spanning tree.
All possible spanning trees of graph G, have the same number of edges and
vertices.
The spanning tree does not have any cycle (loops).
Removing one edge from the spanning tree will make the graph disconnected,
i.e. the spanning tree is minimally connected.
Adding one edge to the spanning tree will create a circuit or loop, i.e. the
spanning tree is maximally acyclic.
Mathematical Properties of Spanning Tree
Spanning tree has n-1 edges, where n is the number of nodes (vertices).
From a complete graph, by removing maximum e - n + 1 edges, we can
construct a spanning tree.
A complete graph can have a maximum of n^(n-2) spanning trees.
Thus, we can conclude that every spanning tree is a subgraph of the connected graph G, and
disconnected graphs do not have spanning trees.
Application of Spanning Tree
Spanning tree is basically used to find a minimum path to connect all nodes in a
graph. Common applications of spanning trees are
Civil Network Planning
Computer Network Routing Protocol
Cluster Analysis
Let us understand this through a small example. Consider a city network as a huge
graph, with a plan to deploy telephone lines in such a way that we can connect all the city
nodes with the minimum number of lines. This is where the spanning tree comes into the picture.
Minimum Spanning Tree (MST)
In a weighted graph, a minimum spanning tree is a spanning tree that has minimum
weight than all other spanning trees of the same graph. In real-world situations, this weight
can be measured as distance, congestion, traffic load or any arbitrary value denoted to the
edges.
Minimum Spanning-Tree Algorithm
We shall learn about the two most important (greedy) spanning tree algorithms:
1. Kruskal's Algorithm
2. Prim's Algorithm
i. Kruskal's Algorithm
Kruskal's algorithm finds the minimum cost spanning tree using the greedy
approach. This algorithm treats the graph as a forest and every node in it as an individual
tree. A tree connects to another if and only if it has the least cost among all available
options and does not violate the MST properties.
In case of parallel edges, keep the one which has the least cost associated and remove all
others.
The least cost is 2 and edges involved are B,D and D,T. We add them. Adding them
does not violate spanning tree properties, so we continue to our next edge selection.
Next cost is 3, and the associated edges are A,C and C,D. We add them as well.
The next cost in the table is 4, and we observe that adding it will create a circuit in the graph.
We ignore it. In the process we shall ignore/avoid all edges that create a circuit.
We observe that edges with cost 5 and 6 also create circuits. We ignore them and move on.
Now we are left with only one node to be added. Between the two least cost edges available
7 and 8, we shall add the edge with cost 7.
By adding edge S,A we have included all the nodes of the graph and we now have minimum
cost spanning tree.
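As an illustration, a compact C sketch of the procedure just traced, using a simple union-find (disjoint-set) structure to detect circuits. The hard-coded edge list is an assumption encoding the small graph G from the spanning-tree example earlier (A=0, B=1, C=2, D=3), not the missing figure:
#include <stdio.h>

int parent[100];

int find(int x)                       /* root of x's tree */
{
    while (parent[x] != x) x = parent[x];
    return x;
}

int main(void)
{
    /* Edges of graph G sorted by cost: (B,D,1), (A,B,2), (C,D,2), (A,C,3) */
    int u[] = {1, 0, 2, 0}, v[] = {3, 1, 3, 2}, w[] = {1, 2, 2, 3};
    int n = 4, e = 4, cost = 0;

    for (int i = 0; i < n; i++) parent[i] = i;   /* every node its own tree */

    for (int i = 0; i < e; i++) {
        int ru = find(u[i]), rv = find(v[i]);
        if (ru != rv) {               /* least-cost edge joining two trees      */
            parent[ru] = rv;          /* union: adding it creates no circuit    */
            cost += w[i];
            printf("edge %d-%d (cost %d)\n", u[i], v[i], w[i]);
        }                             /* otherwise the edge would form a circuit */
    }
    printf("MST cost = %d\n", cost);  /* 5, matching spanning tree 4 above */
    return 0;
}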
ii. Prim's Algorithm
Prim's algorithm also finds the minimum cost spanning tree using the greedy approach, but it
grows a single tree from a starting node. Remove all loops and parallel edges from the given
graph; in case of parallel edges, keep the one which has the least cost associated and remove
all others.
Now, the tree S-7-A is treated as one node and we check for all edges going out from
it. We select the one which has the lowest cost and include it in the tree.
After this step, the tree S-7-A-3-C is formed. Now we again treat it as a node and
check all the edges again, choosing only the least cost edge. In this case, C-3-D is the new
edge, which is cheaper than the other edge costs 8, 6, 4, etc.
After adding node D to the spanning tree, we now have two edges going out of it
having the same cost, i.e. D-2-T and D-2-B. Thus, we can add either one. But the next step
will again yield edge 2 as the least cost. Hence, we are showing a spanning tree with both
edges included.
We may find that the output spanning tree of the same graph using the two different
algorithms is the same.
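A C sketch of Prim's algorithm on an adjacency matrix (O(n^2)); the matrix below again encodes the small graph G from the spanning-tree example (A=0, B=1, C=2, D=3; 0 means no edge), as an assumption:
#include <stdio.h>

#define INF 1000000

int main(void)
{
    int cost[4][4] = {{0,2,3,0},{2,0,0,1},{3,0,0,2},{0,1,2,0}};
    int n = 4, inTree[4] = {1,0,0,0}, total = 0;   /* start from vertex A */

    for (int added = 1; added < n; added++) {
        int best = INF, bu = -1, bv = -1;
        /* pick the cheapest edge leaving the tree built so far */
        for (int u = 0; u < n; u++)
            if (inTree[u])
                for (int v = 0; v < n; v++)
                    if (!inTree[v] && cost[u][v] && cost[u][v] < best) {
                        best = cost[u][v]; bu = u; bv = v;
                    }
        inTree[bv] = 1;
        total += best;
        printf("edge %d-%d (cost %d)\n", bu, bv, best);
    }
    printf("MST cost = %d\n", total);   /* 5, the same tree Kruskal found */
    return 0;
}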
Single Source Shortest Path (Dijkstra's Algorithm):
For a given source node in the graph, the algorithm finds the shortest path between
that node and every other node. It can also be used for finding the shortest path from a single
node to a single destination node, by stopping the algorithm once the shortest path to the
destination node has been determined.
Algorithm Steps:
Set all vertex distances = infinity except for the source vertex; set the source
distance = 0.
Push the source vertex into a min-priority queue in the form (distance, vertex); the
comparison in the min-priority queue is according to the vertices' distances.
Pop the vertex with the minimum distance from the priority queue (at first the popped
vertex is the source).
Update the distances of the vertices connected to the popped vertex: if
current vertex distance + edge weight < next vertex distance, then push the vertex
with the new distance into the priority queue.
If the popped vertex has been visited before, just continue without using it.
Repeat until the priority queue is empty.
Example:
Algorithm:
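The worked example and formal algorithm are figures in the original notes. As a stand-in, here is a simple O(n^2) C sketch of Dijkstra's algorithm on an adjacency matrix; a min-priority queue would replace the linear scan in the version described above, and the 5-vertex matrix is purely illustrative:
#include <stdio.h>

#define INF 1000000
#define N 5

int main(void)
{
    /* graph[u][v] = edge weight, 0 = no edge; illustrative data */
    int graph[N][N] = {
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0}};
    int dist[N], visited[N] = {0};

    for (int i = 0; i < N; i++) dist[i] = INF;
    dist[0] = 0;                          /* source vertex = 0 */

    for (int iter = 0; iter < N; iter++) {
        int u = -1;                       /* unvisited vertex of minimum distance */
        for (int v = 0; v < N; v++)
            if (!visited[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (dist[u] == INF) break;        /* remaining vertices unreachable */
        visited[u] = 1;
        for (int v = 0; v < N; v++)       /* relax the edges out of u */
            if (graph[u][v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    for (int v = 0; v < N; v++) printf("dist[%d] = %d\n", v, dist[v]);
    return 0;
}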
UNIT-IV DYNAMIC PROGRAMMING
Example 1:- To illustrate the different costs incurred by different parenthesizations of a matrix product, consider
the problem of finding the product of three matrices, A1 * A2 * A3. Suppose that
the dimensions of the matrices are 10 × 100, 100 × 5 and 5 × 50, respectively. If we multiply according to
the parenthesization:
((A1 A2) A3) = 10 * 100 * 5 + 10 * 5 * 50 = 7500
(A1 (A2 A3)) = 10 * 100 * 50 +100 * 5 * 50 = 75,000
Thus, computing the product according to the first parenthesization is 10 times faster.
Definition:- The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, ..., An
of n matrices, where for i = 1, 2, ..., n matrix Ai has dimension P(i-1) × P(i), fully parenthesize the product
A1 A2 ... An in a way that minimizes the number of scalar multiplications.
Note:- In the matrix-chain multiplication problem, we are not actually multiplying matrices. Our goal is only
to determine an order for multiplying matrices that has the lowest cost.
Solving the matrix-chain multiplication problem by dynamic programming
Step 1: The structure of an optimal parenthesization. Our first step in the dynamic-programming paradigm is
to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal
solutions to sub problems. For the matrix-chain multiplication problem, we can perform this step as follows.
Any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices A(i..k) and
A(k+1..j) and then multiply them together to produce the final product A(i..j). The cost of this parenthesization
is thus the cost of computing A(i..k), plus the cost of computing A(k+1..j), plus the cost of multiplying them
together.
Step 2: A recursive solution. Next, we define the cost of an optimal solution recursively in terms of the
optimal solutions to sub-problems. For the matrix-chain multiplication problem, we can define M[i, j]
recursively as follows. If i = j, the chain A(i..j) = Ai, so no scalar multiplications are necessary to compute
the product. Thus
M[i, j] = 0   for i = j
M[i, j] = min { M[i, k] + M[k+1, j] + P(i-1)*P(k)*P(j) : i ≤ k < j }   for i < j
Step 3: Computing the optimal costs. We perform the third step of the dynamic-programming paradigm and
compute the optimal cost by using a tabular, bottom-up approach.
Step 4: Constructing an optimal solution. In the first level we compare M12 and M23. When M12 < M23, we
parenthesize A1A2 in the product A1A2A3, i.e., (A1 A2) A3; and we parenthesize A2A3, i.e., A1 (A2 A3),
when M12 > M23. This process is repeated until the whole product is parenthesized. The top
entry in the table, i.e., M13, gives the optimum cost of the matrix chain multiplication.
Example:- Find an optimal parenthesization of a matrix-chain product whose dimensions are given in the
table below.
Solution:- Given
P0=5 , P1=4, P2=6, P3=2, P4=7
The bottom level of the table is initialized as M[i, i] = 0.
In the first level we compare M12, M23 and M34. As M23 = 48 is the minimum among the three, we parenthesize
QR in the product PQRT, i.e., P(QR)T. In the second level we compare M13 and M24. As M13 = 88 is the
minimum of the two, we parenthesize P and QR in the product, i.e., (P(QR))T. Finally we
parenthesize the whole product, i.e., ((P(QR))T). The top entry in the table, i.e., M14, gives the optimum cost of
((P(QR))T).
Verification:- The chain of matrices is P, Q, R, T, the product P x Q x R x T can be fully parenthesized in
five distinct ways:
1. (P(Q(RT))) 2. (P((QR)T)) 3. ((PQ)(RT)) 4. ((P(QR))T) 5. (((PQ) R)T)
Cost of (P(Q(RT))) = 5*4*7 + 4*6*7 + 6*2*7 = 392
Cost of (P((QR)T)) = 5*4*7 + 4*6*2 + 4*2*7 = 244
Cost of ((PQ)(RT)) = 5*4*6 + 6*2*7 + 5*6*7 = 414
Cost of ((P(QR))T) = 5*4*2 + 4*6*2 + 5*2*7 = 158
Cost of (((PQ) R)T) = 5*4*6 + 5*6*2 + 5*2*7 = 250
From the above manual method we also find that the optimal cost is 158 and the order of matrix
multiplication is ((P(QR))T).
Algorithm Matrix_Chain_Mul(P, n)
{
    // P[0..n] holds the dimensions; matrix Ai is P[i-1] x P[i].
    for i := 1 to n do
        M[i, i] := 0;
    for len := 2 to n do
    {
        for i := 1 to n - len + 1 do
        {
            j := i + len - 1;
            M[i, j] := ∞;
            for k := i to j - 1 do
            {
                q := M[i, k] + M[k+1, j] + P[i-1]*P[k]*P[j];
                if (q < M[i, j]) then
                    M[i, j] := q;
            }
        }
    }
    return M;
}
Time Complexity:- Algorithm Matrix_Chain_Mul uses the first for loop to initialize M[i, i], which takes O(n).
Each M[i, j] value is computed by the three nested for loops, which take O(n^3). Thus the overall time
complexity of Matrix_Chain_Mul is O(n^3).
Application-II: OBST
Binary Search Trees:- A binary search tree T is a binary tree; either it is empty, or each node in the tree
contains an identifier and:
1. All identifiers in the left subtree of T are less than the identifier in the root node of T.
2. All identifiers in the right subtree are greater than the identifier in the root node of T.
3. The left and right subtrees of T are also binary search trees.
Optimal Binary Search Tree problem:- Given a sequence K = {k1, k2, ..., kn } of n distinct keys in sorted
order (so that k1 < k2 < ··· < kn), and we wish to build a binary search tree from these keys such that the cost
of the binary search tree is minimum. For each key ki, we have a probability pi that a search will be for ki.
Some searches may be for values not in K, and so we also have n + 1 "dummy keys" E0, E1, E2, ..., En
representing values not in K. In particular, E0 represents all values less than k1, En represents all values
greater than kn, and for i = 1, 2, ..., n -1, the dummy key Ei represents all values between ki and ki+1. For
each dummy key Ei, we have a probability qi that a search will correspond to Ei.
The number of possible binary search trees for n keys is given by the formula
(1/(n+1)) * 2nCn
Cost of binary search tree is calculated as follows
w[i,j] = w[i,j-1]+p[j]+q[j];
k=Find(c, r, i, j);
c[i,j] = c[i,k-1]+c[k,j]+w[i, j];
r[i,j] = k;
}
Write(c[0,n], w[0,n], r[0,n]);
}
Algorithm Find(c, r, i, j)
{
min = ∞
for m = r[i, j-1] to r[i+1, j] do
if (c[i, m-1] + c[m, j] < min) then
{
min = c[i, m-1] + c[m, j];
l=m;
}
return l;
}
Time Complexity:- The computation of each c[i, j] requires finding the minimum of m quantities, so each
such c[i, j] can be computed in time O(m). The total time for all c[i, j]'s with j - i = m is O((n - m) * m) = O(nm - m^2).
Example:- Given n=4 and {a1, a2, a3, a4} = { do, if, int, while}, p{1:4}= {3,3,1,1}
and q{0:4}={2, 3, 1, 1, 1}. Construct an optimal binary search tree and find its cost.
Solution:- Initially we have W[i, i] = q[i], C[i, i] = 0 and r[i, i] = 0, which initializes the bottom row of the table.
The remaining values of the table are calculated using the formulas below:
W[i, j] = W[i, j-1] + p[j] + q[j]                                  for i < j
C[i, j] = min { C[i, k-1] + C[k, j] : i < k ≤ j } + W[i, j]        for i < j
r[i, j] = the minimizing k                                         for i < j
After calculating all the values the table is as below
Now consider the cell which contains r01. The value r01 = k = 1 specifies that the 1st element is the
root of the left subtree. Then the left subtree of T01 is T00; as i = j, it is a leaf node, and the right
subtree of T01 is T11, which is also a leaf node.
Now consider the cell which contains r24. The value r24 = k = 3 specifies that the 3rd element is the
root of the right subtree. Then the left subtree of T24, i.e., T22, is a leaf node, and the right subtree
of T24 is T34.
As the nodes T00, T11 and T22 are all leaf nodes, now consider the cell which contains r34. The value
r34 = k = 4 specifies that the 4th element is the root. Then the left subtree of T34, i.e., T33, is a leaf node,
and the right subtree of T34 is T44, which is a leaf node.
Verification:- The number of possible binary search trees for 4 keys is given as
(1/(n+1)) * 2nCn = (1/(4+1)) * 8C4 = 14
Cost(T1) = 3*1+3*2+1*3+1*4+2*1+3*2+1*3+1*4+1*4 = 35
Cost(T2) = 3*2+3*1+1*2+1*3+2*2+3*2+1*2+1*3+1*3 = 32
Similarly we will construct 14 possible binary search trees and compute the cost of each tree. But we
find tree T2 has the minimum cost of 32, so it is the optimal binary search tree.
Application-III: 0/1 knapsack problem:- The 0–1 knapsack problem is posed as follows. A thief robbing a
store finds n items; the ith item gives profit of pi dollars and weighs wi pounds, where pi and wi are integers.
He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack for some
integer W. Which items should he take? This is called the 0–1 knapsack problem because each item must
either be taken or left behind; the thief cannot take a fractional amount of an item or take an item more than
once.
Example :- Consider the knapsack instance n=3, (W1,W2,W3)=(2, 3, 4), (P1,P2,P3)=(1, 2, 5) and m=6.
Generate the Si sets containing the pair ( Pi,Wi) and thus find the optimal solution.
Solution:- Initially S0 = {(0,0)} as nothing is added to the bag.
S^0 = {(0, 0)}
S1^1 = S^0 + (P1, W1) = {(0, 0)} + (1, 2) = {(1, 2)}
S^1 = S^0 U S1^1 = {(0, 0)} U {(1, 2)} = {(0, 0), (1, 2)}
S1^2 = S^1 + (P2, W2) = {(0, 0), (1, 2)} + (2, 3) = {(2, 3), (3, 5)}
S^2 = S^1 U S1^2 = {(0, 0), (1, 2)} U {(2, 3), (3, 5)} = {(0, 0), (1, 2), (2, 3), (3, 5)}
S1^3 = S^2 + (P3, W3) = {(0, 0), (1, 2), (2, 3), (3, 5)} + (5, 4)
     = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3 = S^2 U S1^3 = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}
S^3 has the pair (3, 5) and the pair (5, 4); as 3 ≤ 5 and 5 ≥ 4, we discard the element (3, 5) from the
set S^3 by the purging rule.
Select (6, 6) from S^3, as the maximum capacity of the bag is 6.
As (6, 6) is an element of S^3 but not of S^2, X3 = 1.
Subtract (P3, W3) from (6, 6): (6, 6) - (5, 4) = (1, 2).
As (1, 2) is an element of S^2 but also an element of S^1, X2 = 0.
As the 2nd item is not added to the bag, nothing is subtracted from (1, 2).
As (1, 2) is an element of S^1 but not of S^0, X1 = 1.
Thus (X3, X2, X1) = (1, 0, 1).
The profit obtained by placing the above items in the bag is
Σ Pi*Xi = 1*1 + 2*0 + 5*1 = 6
Algorithm DKP(p, w, n, m)
{
    S^0 := {(0, 0)};
    for i := 1 to n - 1 do
    {
        S1^i := S^(i-1) + (Pi, Wi);
        S^i := MergePurge(S^(i-1), S1^i);
    }
    (PX, WX) := the last pair in S^(n-1);
    (PY, WY) := (P' + Pn, W' + Wn), where W' is the largest W in any pair of S^(n-1)
                such that W' + Wn ≤ m;
    if (PX > PY) then
        Xn := 0;
    else
        Xn := 1;
    // Trace back for X(n-1), X(n-2), ..., X1.
}
Time Complexity:- The time complexity of the 0/1 knapsack problem is O(2^(n/2)).
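For comparison, the same instance can be solved with the more common tabular dynamic-programming formulation rather than the set-merging method above; V[i][c] is the best profit using the first i items with capacity c. A C sketch:
#include <stdio.h>

int main(void)
{
    int w[] = {0, 2, 3, 4}, p[] = {0, 1, 2, 5};   /* 1-indexed items */
    int n = 3, m = 6;
    int V[4][7] = {{0}};

    for (int i = 1; i <= n; i++)
        for (int c = 0; c <= m; c++) {
            V[i][c] = V[i-1][c];                   /* leave item i   */
            if (w[i] <= c && V[i-1][c - w[i]] + p[i] > V[i][c])
                V[i][c] = V[i-1][c - w[i]] + p[i]; /* or take item i */
        }
    printf("maximum profit = %d\n", V[n][m]);      /* 6, as found above */
    return 0;
}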
Application-IV: All pairs shortest path problem:- All-pairs shortest-paths problem is to find a shortest
path from u to v for every pair of vertices u and v. Although this problem can be solved by running a single-
source algorithm once from each vertex, it can usually be solved faster using the dynamic programming
technique.
Solving All pairs shortest path problem by dynamic programming
Step 1: - Optimal substructure of a shortest path
Shortest-paths algorithms typically rely on the property that a shortest path between two vertices contains
other shortest paths within it.
Step 2:- A recursive solution
C[i, j] =
D1[i,j] =
As the number of nodes in the given graph is 3, D^3[i, j] gives the shortest distance from every vertex i to every
other vertex j.
Algorithm AllPaths(cost, D, n)
{
for i = 1 to n do
for j = 1 to n do
D[i, j] = cost[i, j];
for k= 1 to n do
for i = 1 to n do
for j = 1 to n do
D[i, j] := min(D[i, j], D[i, k] + D[k, j]);
}
Time Complexity:- The time needed by the AllPaths algorithm is especially easy to determine, because the
looping is independent of the data in the matrix D. The D[i, j] values are obtained after the innermost statement
is iterated n^3 times. So the time complexity of the all pairs shortest paths algorithm is Θ(n^3).
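A direct C transcription of the AllPaths (Floyd-Warshall) algorithm for a 3-node graph; the cost matrix below is illustrative, since the graph in the original notes is a figure that is not reproduced here:
#include <stdio.h>

#define INF 1000000
#define N 3

int main(void)
{
    int D[N][N] = {{0, 4, 11}, {6, 0, 2}, {3, INF, 0}};  /* D initialized to cost */

    for (int k = 0; k < N; k++)          /* allow vertex k as an intermediate */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) printf("%8d", D[i][j]);
        printf("\n");
    }
    return 0;
}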
Application-V: Travelling sales person problem:- Travelling sales person problem is to find the route
travelled by the salesman starting from one vertex and touching each vertex exactly once and returning back
to the starting vertex. The main objective of this problem is to minimize the travelling cost of the sales
person.
Example :- Find the shortest tour of a travelling sales person for the following graph using dynamic
programming technique.
C[i, j] =
Reliability design problem:- The reliability design problem is to design a system composed of several
devices connected in series. Let ri be the reliability of device Di; then the reliability of the entire system is
the product ∏ ri. Our problem is to use device duplication to maximize reliability under a cost constraint.
Solving Reliability design problem by dynamic programming
Step 1:- Reliability design typically relies on the property that less reliable devices are
duplicated more than the more reliable devices.
Step 2:- A recursive solution
Ui = upper bound = floor( (C + Ci - Σ(j=1..n) Cj) / Ci )
Фi(j) = 1 - (1 - ri)^j, where j = 1, 2, ..., Ui
Generate the set Si where the set contains the possible elements (r,c) that can be added to the system
Initially no devices are added to the system
S0 = {(1,0)} for i = 0
Si^j = S^(i-1) + (Фi(j), j*Ci)
S^i = the union of the Si^j, where j = 1, 2, ..., Ui
Purging rule (Dominance rule):- If Si has a pair (rj, cj) and another pair (rk, ck) with rj ≤ rk and
cj ≥ ck, then the pair (rj, cj) is discarded from the set Si.
This process is repeated after every generation of Si where i = 1,2….n
Step 3:- Generating the sets S1, S2, S3…… Sn using the above formulae.
Step 4:- After generating S^n, select the element (rk, ck) with maximum reliability such that ck ≤ C
(the cost constraint).
If (rk, ck) ∈ S^n and (rk, ck) ∈ Sn^j, then Dn = j.
Then find the element (rj, cj) with cj = ck - j*Cn; if (rj, cj) ∈ S^(n-1) and (rj, cj) ∈ S(n-1)^j,
then D(n-1) = j.
This process is repeated until all Di, i = 1, 2, ..., n, are found.
Example :- Design a three stage system with device types D1, D2, D3 . The costs are $30, $15 and $20
respectively. The cost of the system is to be no more than $105. The reliability of each device type is 0.9, 0.8,
0.5 respectively.
Solution:- Given C=$105, C1=$30, C2=$15, C3=$20, r1= 0.9, r2= 0.8, r3= 0.5
First compute Ui where i=1, 2,…n
Ui = floor( (C + Ci - (C1 + C2 + C3)) / Ci )
U1 = (105 + 30 - 65)/30 = 2.33, so U1 = 2
U2 = (105 + 15 - 65)/15 = 3.66, so U2 = 3
U3 = (105 + 20 - 65)/20 = 3
Hence (U1, U2, U3) = (2, 3, 3).
Initially no devices are added to the system
S0 = {(1,0)}
As U1 = 2, we have to calculate S1^j for j = 1, 2.
Ф1(1) = 1 - (1 - r1)^1 = 1 - (1 - 0.9)^1 = 0.9
S1^1 = S^0 + (Ф1(1), 1*C1) = {(1, 0)} + (0.9, 30) = {(0.9, 30)}
Ф1(2) = 1 - (1 - r1)^2 = 1 - (1 - 0.9)^2 = 0.99
S1^2 = S^0 + (Ф1(2), 2*C1) = {(1, 0)} + (0.99, 60) = {(0.99, 60)}
Thus S^1 = S1^1 U S1^2 = {(0.9, 30)} U {(0.99, 60)} = {(0.9, 30), (0.99, 60)}
As U2 = 3, we have to calculate S2j for j = 1, 2, …, U2.
Ф2(1) = 1 - (1 - r2)^1 = 1 - (1 - 0.8)^1 = 0.8
S21 = {(0.9, 30), (0.99, 60)} + (0.8, 15) = {(0.72, 45), (0.792, 75)}
Ф2(2) = 1 - (1 - r2)^2 = 1 - (1 - 0.8)^2 = 0.96
S22 = {(0.9, 30), (0.99, 60)} + (0.96, 2*15) = {(0.864, 60), (0.9504, 90)}
Ф2(3) = 1 - (1 - r2)^3 = 1 - (1 - 0.8)^3 = 0.992
S23 = {(0.9, 30), (0.99, 60)} + (0.992, 3*15) = {(0.8928, 75), (0.98208, 105)}
Thus S2 = S21 U S22 U S23
= {(0.72, 45), (0.792, 75)} U {(0.864, 60), (0.9504, 90)} U {(0.8928, 75), (0.98208, 105)}
= {(0.72, 45), (0.792, 75), (0.864, 60), (0.9504, 90), (0.8928, 75), (0.98208, 105)}
Applying the purging rule, (0.792, 75) is removed from S2, since (0.864, 60) dominates it.
Thus
S2 = {(0.72, 45), (0.864, 60), (0.9504, 90), (0.8928, 75), (0.98208, 105)}
The pair (0.9504, 90) is also removed from S2, because its cost plus the cost of even a single copy of D3
(90 + 20 = 110) exceeds $105, so it cannot lead to a feasible system. (The pair (0.98208, 105) fails the same
test; it is discarded while generating S3 below.)
Thus
S2 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)}
As U3 = 3, we have to calculate S3j for j = 1, 2, …, U3.
Ф3(1) = 1 - (1 - r3)^1 = 1 - (1 - 0.5)^1 = 0.5
S31 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.5, 20)
= {(0.36, 65), (0.432, 80), (0.4464, 95)} [the remaining pair is not included, as its cost exceeds 105]
Ф3(2) = 1 - (1 - r3)^2 = 1 - (1 - 0.5)^2 = 0.75
S32 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.75, 40)
= {(0.54, 85), (0.648, 100)} [pairs whose cost exceeds 105 are not included]
Ф3(3) = 1 - (1 - r3)^3 = 1 - (1 - 0.5)^3 = 0.875
S33 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.875, 60)
= {(0.63, 105)} [pairs whose cost exceeds 105 are not included]
Thus S3 = S31 U S32 U S33
= {(0.36, 65), (0.432, 80), (0.4464, 95)} U {(0.54, 85), (0.648, 100)} U {(0.63, 105)}
= {(0.36, 65), (0.432, 80), (0.4464, 95), (0.54, 85), (0.648, 100), (0.63, 105)}
Applying the purging rule, (0.4464, 95) is removed from S3, since (0.54, 85) dominates it.
Thus
S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100), (0.63, 105)}
Applying the purging rule again, (0.63, 105) is removed from S3, since (0.648, 100) dominates it.
Thus S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}
After generating S3, select the element with the largest reliability whose cost does not exceed $105,
i.e. (0.648, 100).
Since (0.648, 100) ∈ S3 and (0.648, 100) ∈ S32, the duplication of D3 = 2.
Now 100 - 2*20 = 60, so the cost available for the first two stages is 60. Select the element with cost 60 in S2,
i.e. (0.864, 60).
Since (0.864, 60) ∈ S2 and (0.864, 60) ∈ S22, the duplication of D2 = 2.
Now 60 - 2*15 = 30, so the cost available for the first stage is 30. Select the element with cost 30 in S1,
i.e. (0.9, 30).
Since (0.9, 30) ∈ S1 and (0.9, 30) ∈ S11, the duplication of D1 = 1.
Thus (D1, D2, D3) = (1, 2, 2), giving system reliability 0.648 at a cost of $100.
UNIT-V BACKTRACKING
Introduction:
With the backtracking technique we can solve problems efficiently, when compared with
other methods like the greedy method and dynamic programming. The solution is based on finding one
or more vectors that maximize, minimize, or satisfy a criterion function P(x1, …, xn). If a partial solution
at any point seems not promising, we ignore it. All possible solutions require a set of constraints divided
into two categories:
1. Explicit Constraints: Explicit constraints are rules that restrict each xi to take on values only
from a given set. Ex: xi = 0 or 1.
2. Implicit Constraints: Implicit constraints are rules that determine which of the tuples in the
solution space of an instance I satisfy the criterion function. For example, an implicit constraint of the
n-queens problem is that no two queens can be on the same diagonal.
Backtracking is a modified depth-first search of a tree. Backtracking is a procedure whereby, after
determining that a node can lead to nothing but dead ends, we go back (backtrack) to the node's parent
and proceed with the search on the next child. The state space tree exists implicitly in the algorithm,
because it is not actually constructed.
Terminology used in this method:
1. Solution Space: All tuples that satisfy the explicit constraints define a possible solution space
for a particular instance T of the problem.
Example: [Figure: a small tree with root A and children B, C.]
2. Problem State: Each node in the tree organization of the solution space defines a problem state.
3. Solution States: These are the problem states S for which the path from the root to S defines a
tuple in the solution space.
Example: [Figure (Solution State): a tree with root A, children B and C; D, E are the children of B, and
F, G are the children of C. The square nodes D, F and G indicate solutions.]
For this solution space there exist 3 solution states, represented as the tuples (A, B, D), (A, C, F)
and (A, C, G).
4. State Space Tree: The set of paths from the root node to all other nodes. The state space tree is the tree
organization of the solution space.
Example: the state space tree of the 4-queens problem, shown below.
[Figure: the state space tree of the 4-queens problem. The root is node 1; the tree branches on
x1 = 1, …, 4 at the first level, then on x2, x3 and x4 at the lower levels; the complete tree has
65 nodes, numbered 1 to 65.]
5. Answer States: These are the solution states S for which the path from the root to S defines a tuple
that satisfies the implicit constraints, i.e. an answer to the problem.
Example: [Figure: a tree with root A, children B and C, and D a child of C.] Here C, D are answer
states; (A, C) and (A, C, D) are solution states.
6. Live Node: A node which has been generated, and whose children have not yet all been generated,
is a live node.
Example 1: [a single node 1] Node 1 is a live node, since its children have not been generated.
Example 2: [node 1 with children 2 and 3] Here node 1 is no longer a live node, but nodes 2 and 3
are live nodes.
Example 3: [node 1 with children 2 and 3; node 2 with children 4 and 5]
Here 4, 5 and 3 are live nodes, because the children of these nodes have not yet been generated.
7. E-Node: A live node whose children are currently being generated is called the E-node
(the node being expanded).
Example 1: [a single node 1] Node 1 is a live node and its children are currently being generated
(expanded).
Example 2: [three snapshots (a) of the tree as node 1 is expanded: node 1 alone, then node 1 with
child 2, then node 1 with children 2 and 3; node 1 is the E-node in each snapshot.]
8. Dead Node: A generated node which is not to be expanded further, or all of whose children have
been generated, is called a dead node.
In snapshot (a) above, once the expansion is finished, nodes 1, 2, 3 are dead nodes, since node 1's
children have all been generated and nodes 2 and 3 are not to be expanded.
Assume instead that node 2 generates one more node (node 4); then 1, 3 and 4 are dead nodes.
[Figure (b): node 1 with children 2 and 3; node 4 is a child of node 2.]
Fig: Dead nodes
General Method:
The basic idea of backtracking is to build up a solution vector, one component at a time, and to test
whether the vector being formed has any chance of success.
The major advantage of this method is that we can realize when a partial vector
cannot lead to an optimal solution; in such a situation that vector is ignored, together with all of its
extensions.
A backtracking algorithm determines the solution by systematically searching the solution space
(i.e., the set of all feasible solutions) for the given problem.
Backtracking is a depth-first search with some bounding function. All solutions found by
backtracking are required to satisfy a complex set of constraints. The constraints may be explicit or
implicit.
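This idea can be written as a generic schema. The Python sketch below is one possible rendering (the names backtrack, candidates, feasible and report are ours): candidates enumerates the values allowed by the explicit constraints, and feasible is the bounding function that enforces the implicit ones.

def backtrack(partial, n, candidates, feasible, report):
    # build the vector one component at a time; abandon any partial vector
    # the bounding function rejects, then try the next candidate value
    if len(partial) == n:
        report(partial)                   # a complete solution vector
        return
    for value in candidates(partial):     # explicit constraints
        partial.append(value)
        if feasible(partial):             # implicit constraints / bound
            backtrack(partial, n, candidates, feasible, report)
        partial.pop()                     # backtrack

# toy use: all 0/1 vectors of length 3 with at most one 1
backtrack([], 3,
          candidates=lambda p: (0, 1),
          feasible=lambda p: sum(p) <= 1,
          report=print)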
Applications of Backtracking
Backtracking is an algorithm design technique that can effectively solve large instances of
combinatorial problems. It follows a systematic approach for obtaining the solution to a problem. The
applications of backtracking include:
1) N-Queens Problem: This is a generalized problem. If we take n = 8, the problem is called the
8-queens problem; if we take n = 4, it is called the 4-queens problem. The classic combinatorial
problem is to place n queens on an n*n chessboard so that no two attack each other, i.e., no two queens
are on the same row, column or diagonal.
The algorithm for the n-queens problem is given below.
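The pseudocode of the notes is not reproduced in this copy; the Python sketch below gives the standard recursive formulation (the names place and nqueens are ours). place tests the column conflict and the diagonal conflict using the |j - l| = |i - k| rule derived later in this unit.

def place(x, k, i):
    # can queen k sit in column i, given queens 1..k-1 at columns x[1..k-1]?
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):   # same column or diagonal
            return False
    return True

def nqueens(k, n, x, solutions):
    # try every column for queen k; recurse for queen k+1 when it fits
    for i in range(1, n + 1):
        if place(x, k, i):
            x[k] = i
            if k == n:
                solutions.append(x[1:])   # a complete placement
            else:
                nqueens(k + 1, n, x, solutions)

sols = []
nqueens(1, 4, [0] * 5, sols)
print(sols)                               # [[2, 4, 1, 3], [3, 1, 4, 2]]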
[Board (a): the first queen is placed in row 1, column 1.]
The second queen should not be placed in the first row or the first column. It should be placed in the
second row, in the second, third or fourth column. If we place it in the second column, both queens will
be on the same diagonal, so we place it in the third column.
[Boards (b) and (c): queen 1 at row 1, column 1; queen 2 placed at row 2, column 3.]
We are unable to place queen 3 in the third row, so we go back to queen 2 and place it somewhere else.
[Boards (d) and (e): queen 2 moved to column 4; queen 3 then placed in column 2.]
Now the fourth queen should be placed in the 4th row and 3rd column, but there would be a diagonal
attack from queen 3. So we go back, remove queen 3 and place it in the next column. But that is not
possible, so we move back to queen 2 and move it to its next column, which is also not possible. So we
go back to queen 1 and move it to the next column.
[Boards (f) and (g): queen 1 moved to column 2; queen 2 placed in column 4. Boards (h) and (i):
queen 3 placed in column 1 and queen 4 in column 3, completing the placement.]
Fig: Example of Backtrack solution to the 4-queens problem
Hence the solution to the 4-queens problem is x1 = 2, x2 = 4, x3 = 1, x4 = 3; i.e., the first queen is
placed in the 2nd column, the second queen in the 4th column, the third queen in the 1st column
and the fourth queen in the 3rd column.
[Figure: the portion of the 4-queens state space tree that is actually generated. Nodes are numbered in
the order of generation, the branches are labelled with the values of x1, …, x4, B marks nodes killed by
the bounding function, and the path x1 = 2, x2 = 4, x3 = 1, x4 = 3 leads to the answer node.]
Fig: Portion of the tree that is generated during Backtracking
8-queens problem
The classic combinatorial problem is to place 8 queens on an 8*8 chessboard so that no two
attack each other, i.e., no two queens are on the same row, column or diagonal.
Now we solve the 8-queens problem using a procedure similar to that adopted for the 4-queens
problem. The algorithm for the 8-queens problem is obtained by setting n = 8 in the n-queens algorithm.
We observe that every element on a diagonal which runs from the upper left to the lower right has the
same "row - column" value, and every element on a diagonal which goes from the upper right to the
lower left has the same "row + column" value.
Suppose two queens are placed at positions (i, j) and (k, l). They are on the same diagonal only if
i - j = k - l ……(1) or
i + j = k + l ……(2)
Equation (1) implies
j - l = i - k, and equation (2) implies
j - l = k - i.
Therefore, two queens lie on the same diagonal if and only if
|j - l| = |i - k|.
But how can we determine whether more than one queen is lying on the same diagonal? To
answer this question, a technique is devised. Assume that the chessboard is divided into rows
1…8 and columns 1…8, and call the board A.
This can be diagrammatically represented as follows:
[Board A: an 8*8 grid with rows and columns numbered 1 to 8, and a queen Q at position (3, 2).]
Now assume that we have placed a queen at position (3, 2).
Its diagonal cells include (2, 1), (4, 3), (5, 4), … (traversing from upper left to lower right). If we
subtract the coordinates of these cells, say 2-1 = 1, 4-3 = 1, 5-4 = 1, we get the same value. Also, if we
traverse from upper right to lower left, say (2, 3), (1, 4), (4, 1), …, we get a common value when we add
the coordinates of these cells, i.e., 2+3 = 5, 1+4 = 5, 4+1 = 5. Hence we say that, on traversing from
upper left to lower right, if (m, n) and (a, b) are diagonal cells then m - n = a - b, and on traversing from
upper right to lower left, if (m, n) and (a, b) are diagonal cells then m + n = a + b.
The solution of the 8-queens problem can be obtained in the same way as the solution of the 4-queens
problem: x1 = 3, x2 = 6, x3 = 2, x4 = 7, x5 = 1, x6 = 4, x7 = 8, x8 = 5.
The solution can be shown as:
[Board: the solution (3, 6, 2, 7, 1, 4, 8, 5) displayed on an 8*8 grid, one queen per row.]
Time complexity: The solution space of the 8-queens problem contains 8^8 tuples. After imposing the
implicit constraint that the queens occupy distinct columns, the size of the solution space is reduced to
8! tuples.
The state space tree for the above solution is given below.
[Figure: the portion of the 8-queens state space tree generated while finding the solution
x1 = 3, x2 = 6, x3 = 2, x4 = 7, x5 = 1, x6 = 4, x7 = 8, x8 = 5; B marks the bounded nodes.]
2) Sum of Subsets Problem: Given n positive numbers and a value M, the problem is to find all
subsets of the numbers whose sums equal M.
The left child of the root node indicates that we include the first element, and the right child of the
root node indicates that we exclude the first element, and so on for the other nodes. Each node stores
the sum of the partial solution elements. If at any stage the sum equals M, then the search is successful.
At that point the search terminates, or continues if all the possible solutions need to be obtained. A
dead end in the tree occurs when either of the two following inequalities holds:
The sum of S' so far is too large (it already exceeds M).
The sum of S' so far is too small (even adding all the remaining numbers cannot reach M).
In either case we back up one step and continue the search.
ALGORITHM:
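The pseudocode is not reproduced in this copy; the Python sketch below gives the standard backtracking formulation, assuming the weights are sorted in nondecreasing order so that the two pruning tests correspond exactly to the "too large" and "too small" inequalities above. The instance in the last line is a common textbook one, used only for illustration.

def sum_of_subsets(w, M):
    # explore the include/exclude tree, printing every subset of the
    # sorted weights w whose elements sum exactly to M
    w = sorted(w)
    n = len(w)
    x = [0] * n                             # x[i] = 1 iff w[i] is included

    def solve(k, s, remaining):
        # s: sum chosen so far; remaining: sum of w[k:]
        if s == M:
            print([w[i] for i in range(n) if x[i]])
            return
        if k == n or s + remaining < M or s + w[k] > M:
            return                          # dead end: sum too small or too large
        x[k] = 1                            # left child: include w[k]
        solve(k + 1, s + w[k], remaining - w[k])
        x[k] = 0                            # right child: exclude w[k]
        solve(k + 1, s, remaining - w[k])

    solve(0, 0, sum(w))

sum_of_subsets([5, 10, 12, 13, 15, 18], 30)   # [5, 10, 15], [5, 12, 13], [12, 18]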
[Figure: the first two levels of a sum-of-subsets state space tree for weights 5 and 10. The root
branches on x1 = 1 or x1 = 0 (partial sums 5 and 0), then on x2 (partial sums 15, 5, 10, 0).]
3) Graph Coloring
Let G be a graph and m a given positive integer. The graph coloring problem is to find whether the
nodes of G can be colored in such a way that no two adjacent nodes have the same color while only
m colors are used. This is termed the m-colorability decision problem. The m-colorability
optimization problem asks for the smallest integer m for which the graph G can be colored. This
integer is referred to as the chromatic number of the graph.
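A Python sketch of the backtracking formulation of the m-colorability decision problem follows (the names m_coloring and solve are ours): vertex k may take color c only if no already-colored neighbour of k uses c.

def m_coloring(adj, m):
    # yield every assignment of colors 1..m to the vertices of the graph
    # (adjacency matrix adj) with no two adjacent vertices sharing a color
    n = len(adj)
    x = [0] * n                             # x[k] = color of vertex k, 0 = uncolored

    def solve(k):
        if k == n:
            yield list(x)                   # all n vertices legally colored
            return
        for c in range(1, m + 1):
            if all(not (adj[k][j] and x[j] == c) for j in range(n)):
                x[k] = c
                yield from solve(k + 1)
                x[k] = 0                    # backtrack

    yield from solve(0)

cycle4 = [[0, 1, 0, 1],                     # a 4-cycle is 2-colorable
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
print(next(m_coloring(cycle4, 2)))          # [1, 2, 1, 2]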
4) Hamiltonian Cycle
Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path
along n edges of G that visits every vertex once and returns to its starting position. In other words, if a
Hamiltonian cycle begins at some vertex v1 ∈ G and the vertices of G are visited in the order
v1, v2, …, vn+1, then the edges (vi, vi+1) are in E, 1 <= i <= n, and the vi are distinct except for v1
and vn+1, which are equal.
Given a graph G = (V, E), we have to find a Hamiltonian cycle using the backtracking approach.
We start our search from an arbitrary vertex, say 'x'. This vertex 'x' becomes the root of our implicit
tree. The next adjacent vertex is selected on the basis of alphabetical (or numerical) order. If at any
stage an arbitrary vertex, say 'y', makes a cycle with any vertex other than the root vertex 'x', then we
say that a dead end is reached. In this case we backtrack one step, and the search begins again by
selecting another vertex. It should be noted that, after backtracking, the element must be removed from
the partial solution. The search using backtracking is successful if a Hamiltonian cycle is obtained.
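A Python sketch of this procedure is given below (the names are ours). The adjacency matrix in the driver is reconstructed from the adjacency relations used in the walkthrough that follows, and the search reproduces the cycle 1-2-3-5-4-1, meeting the same dead ends along the way.

def hamiltonian_cycle(adj, start=0):
    # backtracking search for one Hamiltonian cycle in an undirected graph
    # given as an adjacency matrix; returns the cycle, or None if none exists
    n = len(adj)
    path = [start]
    visited = [False] * n
    visited[start] = True

    def solve():
        if len(path) == n:
            return bool(adj[path[-1]][start])   # is there an edge back to the start?
        for v in range(n):                      # try vertices in numerical order
            if adj[path[-1]][v] and not visited[v]:
                path.append(v)
                visited[v] = True
                if solve():
                    return True
                path.pop()                      # dead end: backtrack
                visited[v] = False
        return False

    return path + [start] if solve() else None

adj = [[0, 1, 1, 1, 0],                         # vertices 1..5 are indices 0..4
       [1, 0, 1, 0, 1],
       [1, 1, 0, 1, 1],
       [1, 0, 1, 0, 1],
       [0, 1, 1, 1, 0]]
print(hamiltonian_cycle(adj))                   # [0, 1, 2, 4, 3, 0], i.e. 1-2-3-5-4-1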
Example: Consider the following graph G = (V, E); we have to find a Hamiltonian circuit using the
backtracking method.
[Figure, Graph (G): vertices 1 to 5; vertex 1 is adjacent to 2, 3, 4; vertex 2 to 1, 3, 5; vertex 3 to
1, 2, 4, 5; vertex 4 to 1, 3, 5; vertex 5 to 2, 3, 4.]
Solution: Initially we start our search with vertex '1'; vertex '1' becomes the root of our
implicit tree.
[Tree (a): root vertex 1.]
Next we choose vertex ‘2’ adjacent to ‘1’, as it comes first in numerical order (2, 3, 4).
[Tree (b): root 1 with children 2, 3, 4; vertex 2 is selected.]
Next vertex ‘3’ is selected which is adjacent to ‘2’ and which comes first in numerical order
(3,5).
[Tree (c): node 2 expanded with children 3, 5; vertex 3 is selected, giving the path 1-2-3.]
Next we select vertex ‘4’ adjacent to ‘3’ which comes first in numerical order (4, 5).
[Tree (d): node 3 expanded with children 4, 5; vertex 4 is selected, giving the path 1-2-3-4.]
Next vertex '5' is selected. (If we chose vertex '1' instead, the cycle would close without visiting
vertex '5', so we would not get a Hamiltonian cycle.)
[Tree (e): node 4 expanded with child 5; the path 1-2-3-4-5 ends in a dead end.]
The vertices adjacent to '5' are 2, 3, 4, but they have already been visited. Thus we reach a dead end,
so we backtrack one step and remove the vertex '5' from our partial solution.
The vertices adjacent to '4' are 5, 3, 1, of which vertex '5' has already been checked, and we are left
with vertex '1'; but choosing vertex '1' does not give a Hamiltonian cycle, since vertex '5' would never
be visited. So we again backtrack one step.
Hence we select the vertex '5' adjacent to '3'.
[Tree (f): vertex 5 selected under node 3, giving the path 1-2-3-5; the earlier branch through 4 is
marked as a dead end.]
The vertices adjacent to '5' are (2, 3, 4), so vertex '4' is selected.
[Tree (g): node 5 expanded; vertex 4 is selected, giving the path 1-2-3-5-4.]
The vertices adjacent to '4' are (1, 3, 5), so vertex '1' is selected. Hence we get the Hamiltonian
cycle, as every vertex other than the start vertex '1' is visited exactly once: 1-2-3-5-4-1.
[Tree (h): vertex 1 selected under node 4, closing the cycle; the path 1-2-3-5-4-1 is the solution.]
The final implicit tree for the Hamiltonian circuit is shown below. The number above each
node indicates the order in which these nodes are visited.
[Figure (i): the final implicit tree. The numbers 0 to 7 beside the nodes give the order in which the
nodes are visited, B marks the dead ends, and the path 1-2-3-5-4-1 leads to the solution.]
Fig: Construction of Hamiltonian Cycle using Backtracking