Digital Notes: DAA
Objectives:
● To analyze performance of algorithms.
● To choose the appropriate data structure and algorithm design method for a specified
application.
● To understand how the choice of data structures and algorithm design methods
impacts the performance of programs.
● To solve problems using algorithm design methods such as the greedy method, divide
and conquer, dynamic programming, backtracking and branch and bound.
● Prerequisites (Subjects): Data structures, Mathematical foundations of
computer science.
UNIT I:
Introduction: Algorithm, Pseudocode for expressing algorithms, Performance Analysis-
Space complexity, Time complexity, Asymptotic Notation- Big oh notation, Omega notation,
Theta notation and Little oh notation, Probabilistic analysis, Amortized analysis.
Divide and conquer: General method, applications-Binary search, Quick sort, Merge sort,
Strassen’s matrix multiplication.
UNIT II:
Searching and Traversal Techniques: Efficient non - recursive binary tree traversal
algorithm, Disjoint set operations, union and find algorithms, Spanning trees, Graph
traversals - Breadth first search and Depth first search, AND / OR graphs, game trees,
Connected Components, Bi - connected components. Disjoint Sets- disjoint set operations,
union and find algorithms, spanning trees, connected components and biconnected
components.
UNIT III:
Greedy method: General method, applications - Job sequencing with deadlines, 0/1
knapsack problem, Minimum cost spanning trees, Single source shortest path problem.
Dynamic Programming: General method, applications-Matrix chain multiplication,
Optimal binary search trees, 0/1 knapsack problem, All pairs shortest path problem,
Travelling sales person problem, Reliability design.
UNIT IV:
Backtracking: General method, applications-n-queen problem, sum of subsets problem,
graph coloring, Hamiltonian cycles.
Branch and Bound: General method, applications - Travelling sales person problem,0/1
knapsack problem- LC Branch and Bound solution, FIFO Branch and Bound solution.
UNIT V:
NP-Hard and NP-Complete problems: Basic concepts, non deterministic algorithms, NP -
Hard and NPComplete classes, Cook’s theorem.
TEXT BOOKS:
1. Fundamentals of Computer Algorithms, Ellis Horowitz, Sartaj Sahni and
S. Rajasekaran, Galgotia Publications Pvt. Ltd.
2. Foundations of Algorithms, 4th edition, R. Neapolitan and K. Naimipour, Jones
and Bartlett Learning.
3. Design and Analysis of Algorithms, P. H. Dave, H. B. Dave, Pearson Education,
2008.
Outcomes:
● Be able to analyze algorithms and improve the efficiency of algorithms.
● Apply different design methods, such as divide and conquer and the greedy
method, to develop algorithms for realistic problems. Ability to understand and
estimate the performance of an algorithm.
UNIT I:
Introduction: Algorithm, Pseudocode for expressing algorithms, Performance Analysis-
Space complexity, Time complexity, Asymptotic Notation- Big oh notation, Omega notation,
Theta notation and Little oh notation, Probabilistic analysis, Amortized analysis.
Divide and conquer: General method, applications-Binary search, Quick sort, Merge sort,
Strassen’s matrix multiplication.
INTRODUCTION TO ALGORITHM
History of Algorithm
• The word algorithm comes from the name of a Persian author, Abu Ja’far Mohammed
ibn Musa al Khowarizmi (c. 825 A.D.), who wrote a textbook on mathematics.
• He is credited with providing the step-by-step rules for adding, subtracting,
multiplying, and dividing ordinary decimal numbers.
• When written in Latin, the name became Algorismus, from which "algorithm" is but a
small step.
• This word has taken on a special significance in computer science, where “algorithm”
has come to refer to a method that can be used by a computer for the solution of a
problem
• Between 400 and 300 B.C., the great Greek mathematician Euclid invented an algorithm
• Finding the greatest common divisor (gcd) of two positive integers.
• The gcd of X and Y is the largest integer that exactly divides both X and Y .
• Eg.,the gcd of 80 and 32 is 16.
• The Euclidean algorithm, as it is called, is considered to be the first non-trivial
algorithm ever devised.
What is an Algorithm?
For example,
''a set of steps to accomplish or complete a task that is described precisely enough that
a computer can run it''.
Described precisely: it is very difficult for a machine to know, for example, how much water
or milk is to be added in a tea-making procedure, so every step must be stated unambiguously.
These algorithms run on computers or computational devices. For example, GPS in our
smartphones and Google Hangouts.
GPS uses a shortest path algorithm. Online shopping uses cryptography, which uses the RSA
algorithm.
• Algorithm Definition 1: An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task.
• Algorithm Definition 2: In addition, every algorithm must satisfy the following criteria:
input, output, definiteness, finiteness and effectiveness.
• Algorithms that are definite and effective are also called computational procedures.
• A program is the expression of an algorithm in a programming language.
Keeping illegal inputs separate is the responsibility of the algorithmic problem, while
treating special classes of unusual or undesirable inputs is the responsibility of the algorithm
itself.
• 4 distinct areas of study of algorithms:
• How to devise algorithms. 🡪 Techniques such as Divide and Conquer, Branch and Bound,
Dynamic Programming.
• How to validate algorithms. 🡪 Check that the algorithm computes the correct answer for
all possible legal inputs.
• How to analyze algorithms. 🡪 Determine the time and storage the algorithm requires.
• How to test a program. 🡪 Debugging and performance measurement.
PSEUDOCODE:
PERFORMANCE ANALYSIS:
• What are the Criteria for judging algorithms that have a more direct relationship to
performance?
• computing time and storage requirements.
Space Complexity:
🡪 The space needed by each of these algorithms is seen to be the sum of the
following components:
1. A fixed part that is independent of the characteristics (e.g. number, size) of the inputs and
outputs. This part typically includes the instruction space (i.e. space for the code), space for
simple variables and fixed-size component variables (also called aggregates), space for
constants, and so on.
2. A variable part that consists of the space needed by component variables whose size is
dependent on the particular problem instance being solved, the space needed by referenced
variables (to the extent that it depends on instance characteristics), and the recursion stack
space.
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics),
where c is a constant.
Example 2:
Algorithm Sum(a, n)
{
 s := 0.0;
 for i := 1 to n do
  s := s + a[i];
 return s;
}
● The problem instances for this algorithm are characterized by n, the
number of elements to be summed. The space needed by n is one word,
since it is of type integer.
● The space needed by a is the space needed by variables of type
array of floating point numbers. This is at least n words, since a must be
large enough to hold the n elements to be summed.
● So, we obtain Ssum(n) >= n + 3
• [n for a[ ], one word each for n, i and s]
Time Complexity:
• The time T(P) taken by a program P is the sum of the compile time and
the run time (execution time).
• The compile time does not depend on the instance characteristics. Also, we
may assume that a compiled program will be run several times without recompilation.
This run time is denoted by tp(instance characteristics).
We introduce a variable, count, into the program, with initial value 0. Statements to
increment count by the appropriate amount are introduced into the program.
This is done so that each time a statement in the original program is executed,
count is incremented by the step count of that statement.
Algorithm:
Algorithm Sum(a, n)
{
 s := 0.0;
 count := count + 1; // count is global; for the assignment
 for i := 1 to n do
 {
  count := count + 1; // for the for statement
  s := s + a[i];
  count := count + 1; // for the assignment
 }
 count := count + 1; // for the last execution of the for statement
 count := count + 1; // for the return
 return s;
}
🡪First determine the number of steps per execution (s/e) of each statement and the
total number of times (i.e., frequency) each statement is executed.
🡪By combining these two quantities, the total contribution of all statements, i.e. the
step count for the entire algorithm, is obtained.

Statement                 s/e   Frequency   Total
1. Algorithm Sum(a,n)      0       -          0
2. {                       0       -          0
3.  s := 0.0;              1       1          1
4.  for i := 1 to n do     1      n+1        n+1
5.   s := s + a[i];        1       n          n
6.  return s;              1       1          1
7. }                       0       1          0
Total                                        2n+3
How to analyse an Algorithm?
Let us form an algorithm for Insertion sort (which sorts a sequence of numbers). The pseudo
code for the algorithm is given below.
1. for j = 2 to n
2.  key = A[j]
3.  // Insert A[j] into the sorted sequence A[1 .. j-1]
4.  i = j - 1
5.  while i > 0 and A[i] > key
6.   A[i+1] = A[i]
7.   i = i - 1
8.  A[i+1] = key
Identify each line of the pseudo code with symbols such as C1, C2, ...
Let Ci be the cost of the ith line. Since comment lines will not incur any cost, C3 = 0.
Line   Cost    No. of times executed
1      C1      n
2      C2      n-1
3      C3=0    n-1
4      C4      n-1
5      C5      Σ(j=2 to n) tj
6      C6      Σ(j=2 to n) (tj - 1)
7      C7      Σ(j=2 to n) (tj - 1)
8      C8      n-1

T(n) = C1·n + C2(n-1) + 0·(n-1) + C4(n-1) + C5·Σtj + C6·Σ(tj-1) + C7·Σ(tj-1) + C8(n-1),
where tj is the number of times the while-loop test on line 5 runs for that value of j.
Best case:
It occurs when the array is already sorted. Then tj = 1 for all j, the Σ(tj - 1) terms vanish, and
T(n) = C1·n + (C2 + C4 + C5 + C8)(n-1)
🡺 a linear function of n.
Worst case:
It occurs when the array is in reverse sorted order. Then tj = j, so Σtj = n(n+1)/2 - 1 and
Σ(tj - 1) = n(n-1)/2, and
T(n) = C1·n + C2(n-1) + C4(n-1) + C5(n(n+1)/2 - 1) + C6(n(n-1)/2) + C7(n(n-1)/2) + C8(n-1),
a quadratic function of n.
· The worst-case running time gives a guaranteed upper bound on the running
time for any input.
· For some algorithms, the worst case occurs often. For example, when
searching, the worst case often occurs when the item being searched for is not
present, and searches for absent items may be frequent.
· Why not analyze the average case? Because it’s often about as bad as the worst case.
Order of growth:
It is described by the highest degree term of the formula for running time. (Drop lower-order
terms. Ignore the constant coefficient in the leading term.)
Example: We found out that for insertion sort the worst-case running time is of the form
an² + bn + c.
Drop the lower-order terms; what remains is an². Ignore the constant coefficient; the result is
n². But we cannot say that the worst-case running time T(n) equals n². Rather, it grows like
n²; it doesn't equal n². We say that the running time is Θ(n²) to capture the notion that the
order of growth is n².
We usually consider one algorithm to be more efficient than another if its worst-case
running time has a smaller order of growth.
Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size ‘n’ of the input data. Mostly,
the storage space required by an algorithm is simply a multiple of the data size ‘n’.
The function f(n), which gives the running time of an algorithm, depends not only on the size
n of the input data but also on the particular data. The complexity function f(n) for certain
cases is:
1. Best Case : The minimum possible value of f(n) is called the best case.
2. Average Case : The expected value of f(n) over all possible inputs of size n.
3. Worst Case : The maximum value of f(n) for any possible input.
ASYMPTOTIC NOTATION
The following notations are commonly used in performance analysis to
characterize the complexity of an algorithm:
1. Big–OH (O) ,
2. Big–OMEGA (Ω),
3. Big–THETA (Θ) and
4. Little–OH (o)
Our approach is based on the asymptotic complexity measure. This means that we don’t try to
count the exact number of steps of a program, but how that number grows with the size of the
input to the program. That gives us a measure that will work for different operating systems,
compilers and CPUs. The asymptotic complexity is written using big-O notation.
The notations correspond roughly to comparisons between functions:
O ≈ ≤
Ω ≈ ≥
Θ ≈ =
o ≈ <
ω ≈ >
Big 'oh': the function f(n) = O(g(n)) iff there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n, n >= n0.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
f(n) >= c*g(n) for all n, n >= n0.
Theta: the function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n, n >= n0.
Big-O Notation
This notation gives the tight upper bound of the given function. Generally we represent it as
f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n). For
example, if f(n) = n⁴ + 100n² + 10n + 50 is the given algorithm's running time, then n⁴ is
g(n). That means g(n) gives the maximum rate of growth for f(n) at larger values of n.
O-notation is defined as O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= f(n) <= c*g(n) for all n >= n0}. g(n) is an asymptotic upper bound for f(n). Our
objective is to give a rate of growth g(n) which is greater than or equal to the given
algorithm's rate of growth f(n).
In general, we do not consider lower values of n. That means the rate of growth at lower
values of n is not important. In the figure, n0 is the point from which we consider the
rates of growth for a given algorithm; below n0 the rates of growth may be different.
Note: Analyze the algorithms at larger values of n only. What this means is, below n0 we do
not care about rates of growth.
Omega — Ω notation
Similar to the above discussion, this notation gives the tight lower bound of the given
algorithm and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the
tight lower bound of f(n) is g(n).
For example, if f(n) = 100n² + 10n + 50, then f(n) is Ω(n²).
The Ω notation can be defined as Ω(g(n)) = {f(n): there exist positive constants c
and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}. g(n) is an asymptotic lower
bound for f(n); Ω(g(n)) is the set of functions that grow at least as fast as g(n).
Theta — Θ notation
This notation decides whether the upper and lower bounds of a given function are the same or
not. The average running time of an algorithm is always between the lower bound and the
upper bound. If the upper bound (O) and lower bound (Ω) give the same result then the Θ
notation will also have the same rate of growth. As an example, let us assume that f(n) =
10n + n is the expression. Then its tight upper bound g(n) is O(n) and its tight lower bound
is Ω(n). In this case, the rates of growth in the best case and worst case are the same; as a
result, the average case will also be the same.
Note: For a given function (algorithm), if the rates of growth (bounds) for O and Ω are not
the same, then the rate of growth for the Θ case may not be the same either.
Now consider the definition of Θ notation. It is defined as Θ(g(n)) = {f(n): there exist
positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}.
g(n) is an asymptotic tight bound for f(n). Θ(g(n)) is the set of functions with the same
order of growth as g(n).
Important Notes
For analysis (best case, worst case and average) we try to give upper bound (O) and lower
bound (Ω) and average running time (Θ). From the above examples, it should also be clear
that, for a given function (algorithm) getting upper bound (O) and lower bound (Ω) and
average running time (Θ) may not be possible always.
For example, if we are discussing the best case of an algorithm, then we try to give upper
bound (O) and lower bound (Ω) and average running time (Θ).
In the remaining chapters we generally concentrate on the upper bound (O), because knowing
the lower bound (Ω) of an algorithm is of little practical importance, and we use Θ notation
if the upper bound (O) and lower bound (Ω) are the same.
Little Oh Notation
The little oh is denoted as o. It is defined as: let f(n) and g(n) be non-negative functions;
then f(n) = o(g(n)) iff lim(n→∞) f(n)/g(n) = 0, i.e. f of n is little oh of g of n
(g grows strictly faster than f).
PROBABILISTIC ANALYSIS
In order to perform a probabilistic analysis, we must use knowledge of, or make assumptions
about, the distribution of the inputs. Then we analyze our algorithm, computing an average-
case running time, where we take the average over the distribution of the possible inputs.
Probability theory has the goal of characterizing the outcomes of natural or conceptual
“experiments.” Examples of such experiments include tossing a coin ten times, rolling a die
three times, playing a lottery, gambling, picking a ball from an urn containing white and red
balls, and so on
Each possible outcome of an experiment is called a sample point and the set of all possible
outcomes is known as the sample space S. In this text we assume that S is finite (such a
sample space is called a discrete sample space). An event E is a subset of the sample space S.
If the sample space consists of n sample points, then there are 2ⁿ possible events.
Theorem 1.5
1. Prob.[Ē] = 1 - Prob.[E], where Ē is the complement of E.
2. Prob.[E1 ∪ E2] = Prob.[E1] + Prob.[E2] - Prob.[E1 ∩ E2]
<= Prob.[E1] + Prob.[E2]
The expected value of a discrete random variable X is E[X] = Σx x · Prob.[X = x].
Consider a game in which you flip two fair coins. You earn $3 for each head but lose $2 for
each tail. The expected value of the random variable X representing your earnings is
E[X] = 6·(1/4) + 1·(1/2) + (-4)·(1/4)
= 1
(two heads pay $6 with probability 1/4, one head and one tail pay $1 with probability 1/2,
and two tails lose $4 with probability 1/4).
Any one of the first i candidates is equally likely to be the best-qualified so far. Candidate i
has a probability of 1/i of being better qualified than candidates 1 through i-1 and thus a
probability of 1/i of being hired. With Xi the indicator variable for "candidate i is hired",
E[Xi] = 1/i
So, by linearity of expectation,
E[X] = E[Σ(i=1 to n) Xi] = Σ(i=1 to n) E[Xi] = Σ(i=1 to n) 1/i = ln n + O(1)
AMORTIZED ANALYSIS
In an amortized analysis, we average the cost of a sequence of data-structure operations over
all the operations performed. Three common techniques are used:
1. Aggregate analysis – determines an upper bound T(n) on the total cost of a
sequence of n operations; the amortized cost per operation is then T(n)/n.
2. Accounting method – When there is more than one type of operation, each type of
operation may have a different amortized cost. The accounting method overcharges
some operations early in the sequence, storing the overcharge as “prepaid credit”
on specific objects in the data structure. Later in the sequence, the credit pays for
operations that are charged less than they actually cost.
3. Potential method - The potential method maintains the credit as the “potential
energy” of the data structure as a whole instead of associating the credit with
individual objects within the data structure. The potential method, which is like the
accounting method in that we determine the amortized cost of each operation and
may overcharge operations early on to compensate for undercharges later
General Method
Divide and conquer splits a problem into smaller subproblems, solves the subproblems
(recursively), and combines their solutions to solve the original problem.
If the subproblems are large enough then divide and conquer is reapplied.
The generated subproblems are usually of the same type as the original problem.
The computing time of divide and conquer is described by the recurrence
T(n) = g(n) for small n; T(n) = aT(n/b) + f(n) otherwise,
where a, b 🡪 constants.
This is called the general divide-and-conquer recurrence.
Advantages of DAndC:
The time spent on executing the problem using DAndC is smaller than for other methods.
This technique is ideally suited for parallel computation.
This approach provides efficient algorithms in computer science.
The following theorem can be used to determine the running time of divide and conquer
algorithms. For a given program or algorithm, first we try to find the recurrence relation for
the problem. If the recurrence is of below form then we directly give the answer without
fully solving it.
If the recurrence is of the form T(n) = aT(n/b) + Θ(n^k log^p n), where a >= 1, b > 1, k >= 0
and p is a real number, then we can directly give the answer as:
1) If a > b^k, then T(n) = Θ(n^(log_b a))
2) If a = b^k
a. If p > -1, then T(n) = Θ(n^k log^(p+1) n)
b. If p = -1, then T(n) = Θ(n^k log log n)
c. If p < -1, then T(n) = Θ(n^k)
3) If a < b^k
a. If p >= 0, then T(n) = Θ(n^k log^p n)
b. If p < 0, then T(n) = O(n^k)
Time Complexity:
Data structure: Array
For a successful search: best case θ(1), average and worst case θ(log n).
For an unsuccessful search: θ(log n) for all (best, average and worst) cases.
Merge Sort:
The merge sort splits the list to be sorted into two equal halves, and places them in separate
arrays. This sorting method is an example of the DIVIDE-AND-CONQUER paradigm i.e. it
breaks the data into two halves and then sorts the two half data sets recursively, and finally
merges them to obtain the complete sorted list. The merge sort is a comparison sort and has an
algorithmic complexity of O (n log n). Elementary implementations of the merge sort make use
of two arrays - one for each half of the data set. The following image depicts the complete
procedure of merge sort.
Advantages of Merge Sort:
1. Marginally faster than heap sort for larger sets.
2. Merge sort always does fewer comparisons than quick sort: its worst case performs
about 39% fewer comparisons than quick sort's average case.
3. Merge sort is often the best choice for sorting a linked list, because the slow random-
access performance of a linked list makes some other algorithms (such as quick sort)
perform poorly and others (such as heap sort) impractical.
while(j<=high){
temp[k]=a[j];
j++;
k++;
}
for(k=low;k<=high;k++)
a[k]=temp[k];
}
void display(int a[10]){
int i;
printf("\n \n the sorted array is \n");
for(i=0;i<n;i++)
printf("%d \t",a[i]);}
Algorithm for Merge sort:
Algorithm MergeSort(low, high)
{
 if (low < high) then // dividing the problem into sub-problems
 {
  mid := (low + high)/2; // "mid" is where the set is split
  MergeSort(low, mid);
  MergeSort(mid+1, high); // solve the sub-problems
  Merge(low, mid, high); // combine the solutions
 }
}
Algorithm Merge(low, mid, high)
{
 k := low; i := low; j := mid + 1;
 while (i <= mid and j <= high) do
 {
  if (a[i] <= a[j]) then
  {
   temp[k] := a[i]; i := i + 1; k := k + 1;
  }
  else
  {
   temp[k] := a[j]; j := j + 1; k := k + 1;
  }
 }
 while (i <= mid) do
 {
  temp[k] := a[i]; i := i + 1; k := k + 1;
 }
 while (j <= high) do
 {
  temp[k] := a[j]; j := j + 1; k := k + 1;
 }
 for k := low to high do
  a[k] := temp[k];
}
Tree of calls of Merge sort
Consider an example (from the text book):
A[1:10] = {310, 285, 179, 652, 351, 423, 861, 254, 450, 520}
The time for the merging operation is proportional to n, so the computing time for merge sort
is described by the recurrence relation
T(n) = a for n = 1, and T(n) = 2T(n/2) + cn for n > 1.
Here c, a 🡪 constants.
If n is a power of 2, n = 2^k:
T(n) = 2T(n/2) + cn
= 2[2T(n/4) + cn/2] + cn
= 2^2 T(n/4) + 2cn
= 2^3 T(n/8) + 3cn
...
= 2^k T(1) + kcn
= an + cn log2 n
Representing this in asymptotic O notation,
T(n) = O(n log n)
Quick Sort
Quick Sort is an algorithm based on the DIVIDE-AND-CONQUER paradigm that selects a pivot
element and reorders the given list in such a way that all elements smaller to it are on one side
and those bigger than it are on the other. Then the sub lists are recursively sorted until the list gets
completely sorted. The time complexity of this algorithm is O (n log n).
⮚ Auxiliary space used in the average case for implementing recursive function calls
is O(log n), and hence it proves to be a bit space-costly, especially when it comes to
large data sets.
⮚ Its worst case has a time complexity of O(n²), which can prove very costly for large
data sets, compared with competitive sorting algorithms.
Quick sort program
#include<stdio.h>
#include<conio.h>
int n,j,i;
void main(){
int i,low,high,z,y;
int a[10],kk;
void quick(int a[10],int low,int high);
int n;
clrscr();
printf("\n \t\t quick sort \n");
printf("\n enter the length of the list:");
scanf("%d",&n);
printf("\n enter the list elements");
for(i=0;i<n;i++)
scanf("%d",&a[i]);
low=0;
high=n-1;
quick(a,low,high);
printf("\n sorted array is:");
for(i=0;i<n;i++)
printf(" %d",a[i]);
getch();
}
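The quick() routine called by main above is not shown in this excerpt; the following is a minimal sketch consistent with the call signature. It is our reconstruction using the common Lomuto partitioning scheme, not necessarily the textbook's partitioning:

```c
/* Lomuto-style partition: places the pivot (last element) in its
   final position and returns that position. */
static int partition(int a[], int low, int high) {
    int pivot = a[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (a[j] <= pivot) {
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;
    return i + 1;
}

/* Recursively sort a[low..high]. */
void quick(int a[], int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quick(a, low, p - 1);   /* elements smaller than the pivot */
        quick(a, p + 1, high);  /* elements larger than the pivot */
    }
}
```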
Time Complexity

Name       Best case    Average case  Worst case   Space complexity
Bubble     O(n)         -             O(n²)        O(n)
Insertion  O(n)         O(n²)         O(n²)        O(n)
Selection  O(n²)        O(n²)         O(n²)        O(n)
Quick      O(n log n)   O(n log n)    O(n²)        O(n + log n)
Merge      O(n log n)   O(n log n)    O(n log n)   O(2n)
Heap       O(n log n)   O(n log n)    O(n log n)   O(n)
Strassen's Matrix Multiplication:
The divide and conquer strategy suggests another way to compute the product of two n×n
matrices.
For Simplicity assume n is a power of 2 that is n=2k
Here k🡪 any nonnegative integer.
If n is not power of two then enough rows and columns of zeros can be added to both A and
B, so that resulting dimensions are a power of two.
Let A and B be two n×n Matrices. Imagine that A & B are each partitioned into four square
sub matrices. Each sub matrix having dimensions n/2×n/2.
The product of AB can be computed by using previous formula.
If AB is the product of 2×2 block matrices, then
( C11 C12 )   ( A11 A12 ) ( B11 B12 )
( C21 C22 ) = ( A21 A22 ) ( B21 B22 )
C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22
Volker Strassen discovered a way to compute the Cij above using only 7 multiplications
and 18 additions or subtractions.
For this, first compute the 7 n/2×n/2 matrices P, Q, R, S, T, U and V:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)
Then
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
UNIT II:
Searching and Traversal Techniques: Efficient non - recursive binary tree traversal
algorithm, Disjoint set operations, union and find algorithms, Spanning trees, Graph
traversals - Breadth first search and Depth first search, AND / OR graphs, game trees,
Connected Components, Bi - connected components. Disjoint Sets- disjoint set operations,
union and find algorithms, spanning trees, connected
components and biconnected components.
So we go on traversing left children, and as we visit each node we push it onto a stack. We
need to visit a parent after its child, and since we encounter the parent first when starting
from the root, this is a case for LIFO order (hence the stack). Once we reach a NULL node, we
pop the node at the top of the stack (the last node we visited) and print it.
Then we check whether that node has a right child. If yes, we move to the right child and
again start traversing its left children, pushing them onto the stack. Once we have traversed
all nodes, the stack will be empty.
Non recursive postorder traversal algorithm
Postorder visits the left subtree, then the right subtree, and last the parent node.
1.1 Create an empty stack.
2.1 Do the following while root is not NULL:
 a) Push root's right child and then root to the stack.
 b) Set root as root's left child.
2.2 Pop an item from the stack and set it as root.
 a) If the popped item has a right child and the right child
 is at the top of the stack, then remove the right child from
 the stack, push the root back and set root as root's right
 child.
 b) Else print root's data and set root as NULL.
Repeat steps 2.1 and 2.2 while the stack is not empty.
Disjoint Sets: If Si and Sj, i ≠ j, are two sets, then there is no element that is in both Si and Sj.
For example, n = 10 elements can be partitioned into three disjoint sets, e.g.
S1 = {1, 7, 8, 9}, S2 = {2, 5, 10}, S3 = {3, 4, 6}.
Find(i): finds the set containing element i.
Disjoint set Union: means the combination of the elements of two disjoint sets. From the
above example, S1 U S2 = {1,7,8,9,5,2,10}.
For the tree representation of S1 U S2, simply make one of the trees a subtree
of the other.
Find(10) 🡪 S2
These operations can be performed easily if, with each set name, we keep a pointer to the
root of the tree representing that set.
For presenting the union and find algorithms, we ignore the set names and identify sets just
by the roots of the trees representing them.
For example, if we determine that element i is in a tree with root j, and j has a pointer to
entry k in the set name table, then the set name is just name[k].
Union(i, j) 🡪 means the union of the two trees whose roots are i and j.
The trees can be represented by a parent array p, where p[root] = -1:
i    1   2   3   4   5   6   7   8   9   10
p   -1   5  -1   3  -1   3   1   1   1   5
Example: Find(6) starts at 6 and then moves to 6's parent, 3. Since p[3] is negative, we have
reached the root.
Algorithm for Union(i, j):
Algorithm SimpleUnion(i, j)
{
 p[i]:=j; // accomplishes the union
}
Algorithm for Find(i):
Algorithm SimpleFind(i)
{
 while (p[i] >= 0) do i:=p[i];
 return i;
}
For n elements the above algorithms can degenerate: the sequence Union(1,2), Union(2,3),
Union(3,4), ..., Union(n-1,n) produces a degenerate (chain-like) tree, and the time taken by
Find for an element at level i of a tree is O(i).
To improve the performance of our union and find algorithms by avoiding the creation of
degenerate trees. For this we use a weighting rule for union(i, j)
For implementing the weighting rule, we need to know how many nodes there are
in every tree.
For this we maintain a count field in the root of every tree:
i 🡪 root node
count[i] 🡪 number of nodes in the tree.
The time required for this WeightedUnion algorithm is O(1). The height of the resulting
trees is bounded by the following lemma.
Lemma: Let T be a tree with m nodes created as a result of a sequence of unions, each
performed using WeightedUnion. The height of T is no greater than
⌊log2 m⌋ + 1.
Collapsing rule: If ‘j’ is a node on the path from ‘i’ to its root and p[i]≠root[i], then set
p[j] to root[i].
Algorithm for Collapsing find.
Algorithm CollapsingFind(i)
//Find the root of the tree containing element i.
//collapsing rule to collapse all nodes form i to the root.
{
r:=i;
while(p[r]>0) do r := p[r]; //Find the root.
While(i ≠ r) do // Collapse nodes from i to root r.
{
s:=p[i];
p[i]:=r;
i:=s;
}
return r;
}
Collapsing find algorithm is used to perform find operation on the tree created by
WeightedUnion.
Spanning Tree:-
Let G=(V,E) be an undirected connected graph. A subgraph t=(V,E1) of G is a spanning tree
of G iff t is a tree.
It can be used to obtain an independent set of circuit equations for an electric network.
Any connected graph with n vertices must have at least n-1 edges, and all connected graphs
with n-1 edges are trees. If the nodes of G represent cities and the edges represent possible
communication links connecting two cities, then the minimum number of links needed to
connect the n cities is n-1.
There are two basic algorithms for finding minimum-cost spanning trees, and both are greedy
algorithms
🡪Prim’s Algorithm
🡪Kruskal’s Algorithm
Prim’s Algorithm: Start with any one node in the spanning tree, and repeatedly add the
cheapest edge, and the node it leads to, for which the node is not already in the spanning tree.
Kruskal’s Algorithm: Start with no nodes or edges in the spanning tree, and repeatedly add
the cheapest edge that does not create a cycle.
Connected Component:
A connected component of a graph can be obtained by using BFST (breadth first search and
traversal) or DFST (depth first search and traversal); the traversal also yields a spanning
tree of the component.
BFST (Breadth first search and traversal):
In BFS we start at a vertex v and mark it as reached (visited).
A vertex is said to have been explored when all vertices adjacent from it have been visited.
Algorithm DFS(v)
// A DFS of G begins at vertex v.
// Initially the array visited[] is set to zero.
// This algorithm visits all vertices reachable from v.
// The graph G and the array visited[] are global.
{
Visited[v]:=1;
For each vertex w adjacent from v do
{
If (visited[w]=0) then DFS(w); // w is unexplored
}
}
With the graph G(n, e) stored as adjacency lists, the worst-case time and space complexities are
T(n, e)=θ(n+e)
S(n, e)=θ(n)
If a communication station i that is an articulation point fails, then we lose
communication between the other stations (see graph G1).
If the graph is biconnected (i.e. it has no articulation point), then even if any one station i
fails, we can still communicate between every two stations other than station i
(see graph Gb).
There is an efficient algorithm to test whether a connected graph is biconnected. In the case of
graphs that are not biconnected, this algorithm will identify all the articulation points.
Once it has been determined that a connected graph G is not biconnected, it may be desirable
(suitable) to determine a set of edges whose inclusion makes the graph biconnected.
UNIT III:
Greedy method: General method, applications - Job sequencing with deadlines, 0/1
knapsack problem, Minimum cost spanning trees, Single source shortest path problem.
Dynamic Programming: General method, applications-Matrix chain multiplication,
Optimal binary search trees, 0/1 knapsack problem, All pairs shortest path problem,
Travelling sales person problem, Reliability design.
Greedy Method:
The greedy method is perhaps the most straightforward design
technique, used to determine a feasible solution that may or may not be optimal.
Feasible solution:- Most problems have n inputs and its solution contains a subset of inputs
that satisfies a given constraint(condition). Any subset that satisfies the constraint is called
feasible solution.
Optimal solution: To find a feasible solution that either maximizes or minimizes a given
objective function. A feasible solution that does this is called optimal solution.
The greedy method suggests that an algorithm works in stages, considering one input at a
time. At each stage, a decision is made regarding whether a particular input is in an optimal
solution.
Greedy algorithms neither postpone nor revise the decisions (ie., no back tracking).
Example: Kruskal’s minimal spanning tree. Select an edge from a sorted list, check, decide,
and never visit it again.
Application of Greedy Method:
Job sequencing with deadlines
Knapsack problem
The knapsack problem or rucksack (bag) problem is a problem in combinatorial optimization:
given a set of items, each with a weight and a value, determine the number of each item to
include in a collection so that the total weight is less than or equal to a given limit and the
total value is as large as possible.
Greedy algorithm:- Keep taking the most valuable items until the maximum weight is
reached, choosing items by the value per unit of size, vi = value_i / size_i.
Dynamic programming:- Solve each subproblem once and store the solutions in
an array.
0/1 knapsack problem:
Let there be n items, 1 through n, where item i has a value vi and a weight wi. The maximum weight that we can carry in the bag is W. It is common to assume that all values and weights are nonnegative. To simplify the representation, we also assume that the items are listed in increasing order of weight.
Maximize Σ (i = 1..n) vi xi subject to Σ (i = 1..n) wi xi ≤ W, xi ∈ {0, 1},
i.e., maximize the sum of the values of the items in the knapsack so that the sum of their weights does not exceed the knapsack's capacity.
Greedy algorithm for knapsack
Algorithm GreedyKnapsack (m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively
// of the n objects, ordered such that p[i]/w[i] >= p[i+1]/w[i+1];
// m is the size of the knapsack.
Ex: Consider 3 objects whose profits and weights are:
(P1, P2, P3) = (25, 24, 15)
(W1, W2, W3) = (18, 15, 10)
n = 3 (number of objects), m = 20 (bag capacity).
Determine the optimum strategy for placing the objects into the knapsack. The problem can be solved by the greedy approach, where the inputs are arranged according to the selection process (greedy strategy) and the problem is solved in stages. Some feasible solutions, including the various greedy strategies, are:

(x1, x2, x3)       Σ wi xi                                  Σ pi xi
(1/2, 1/3, 1/4)    1/2 x 18 + 1/3 x 15 + 1/4 x 10 = 16.5    1/2 x 25 + 1/3 x 24 + 1/4 x 15 = 24.25
(1, 2/15, 0)       18 + 2 = 20                              25 + 3.2 = 28.2    (largest profit first)
(0, 2/3, 1)        10 + 10 = 20                             16 + 15 = 31       (smallest weight first)
(0, 1, 1/2)        15 + 5 = 20                              24 + 7.5 = 31.5    (largest pi/wi first)

Selecting objects in decreasing order of the profit/weight ratio gives the optimal value 31.5.
Analysis: If we do not count the time required for sorting the inputs, then each of the three greedy strategies runs in O(n) time.
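The ratio-based greedy strategy can be sketched in Python; the function name is illustrative, and the data is the 3-object example above:

```python
def greedy_knapsack(profits, weights, capacity):
    """Fractional knapsack: take items in decreasing profit/weight order."""
    # Sort item indices by profit density, best ratio first.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:          # item fits entirely
            total += profits[i]
            remaining -= weights[i]
        else:                                # take the largest feasible fraction
            total += profits[i] * remaining / weights[i]
            break
    return total

# The 3-object example: profits (25, 24, 15), weights (18, 15, 10), m = 20.
print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))   # 31.5
```

Note that this fractional version is exactly the problem the greedy method solves optimally; for the 0/1 variant the same strategy can fail, which is why dynamic programming is used there.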
Job sequencing with deadlines
There is a set of n jobs. Each job i has an integer deadline di ≥ 0 and a profit Pi > 0; the profit Pi is earned iff the job is completed by its deadline.
To complete a job, one has to process the job on a machine for one unit of time. Only one machine is available for processing jobs.
A feasible solution for this problem is a subset J of jobs such that each job in the subset can be completed by its deadline.
The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., Σ (i∈J) Pi.
An optimal solution is a feasible solution with maximum value.
The problem involves identifying a subset of jobs which can be completed by their deadlines. Therefore the problem suits the subset methodology and can be solved by the greedy method.
Ex: Obtain the optimal sequence for the following jobs:
                      j1    j2    j3    j4
(P1, P2, P3, P4) = (100,  10,  15,  27)
(d1, d2, d3, d4) = (  2,   1,   2,   1)
Among the feasible solutions, the subset {1, 4} is optimal. In this solution only jobs 1 and 4 are processed, and the value is 127. These jobs must be processed in the order j4 followed by j1: the processing of job 4 begins at time 0 and ends at time 1, and the processing of job 1 begins at time 1 and ends at time 2. Therefore both jobs are completed within their deadlines. The optimization measure for determining the next job to be selected into the solution is the profit: the next job to include is the one that increases ΣPi the most, subject to the constraint that the resulting J is a feasible solution. Therefore the greedy strategy is to consider the jobs in decreasing order of profit.
The greedy algorithm is used to obtain an optimal solution. We must formulate an optimization measure to determine how the next job is chosen.
Note: the size of the subset J must be less than or equal to the maximum deadline in the given list.
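The greedy strategy can be sketched in Python, scheduling each job in the latest free unit-time slot on or before its deadline; the deadlines (2, 1, 2, 1) are the values that go with this standard example:

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing: consider jobs in decreasing profit order and
    schedule each in the latest free slot on or before its deadline."""
    jobs = sorted(range(len(profits)), key=lambda i: profits[i], reverse=True)
    slots = [None] * max(deadlines)          # one unit-time slot per deadline
    total = 0
    for i in jobs:
        # search backwards from the job's deadline for a free slot
        for s in range(deadlines[i] - 1, -1, -1):
            if slots[s] is None:
                slots[s] = i
                total += profits[i]
                break
    return total, [j for j in slots if j is not None]

# Profits (100, 10, 15, 27) with deadlines (2, 1, 2, 1): jobs j4 and j1
# (0-based indices 3 and 0) are scheduled, for a total profit of 127.
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
```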
Single source shortest path problem
Graphs can be used to represent the highway structure of a state or country, with vertices representing cities and edges representing sections of highway.
The edges are assigned weights, which may be either the distance between the two cities connected by the edge or the average time to drive along that section of highway.
For example, if a motorist wishes to drive from city A to city B, then we must answer the following questions:
o Is there a path from A to B?
o If there is more than one path from A to B, which is the shortest?
The length of a path is defined to be the sum of the weights of the edges on that path.
Given a directed graph G(V, E) with weighted edges w(u, v), we have to find a shortest path from a source vertex s ∈ V to every other vertex v ∈ V - {s}.
To find single-source shortest paths in a directed graph G(V, E), there are two different algorithms:
⮚ Bellman-Ford algorithm
⮚ Dijkstra's algorithm
Bellman-Ford algorithm: allows negative-weight edges in the input graph. This algorithm either finds a shortest path from the source vertex s ∈ V to every other vertex v ∈ V, or detects a negative-weight cycle in G, in which case no solution exists. If no negative-weight cycle is reachable from the source, shortest paths exist to every vertex.
Dijkstra's algorithm: allows only positive-weight edges in the input graph, and finds a shortest path from the source vertex s ∈ V to every other vertex v ∈ V.
In the example graph (figure omitted), if node 1 is the source vertex, then the shortest path from 1 to 2 is 1, 4, 5, 2; its length is 10 + 15 + 20 = 45.
As an optimization measure we can use the sum of the lengths of all paths so far
generated.
If we have already constructed ‘i’ shortest paths, then using this optimization
measure, the next path to be constructed should be the next shortest minimum length
path.
The greedy way to generate the shortest paths from Vo to the remaining vertices is to
generate these paths in non-decreasing order of path length.
For this, first a shortest path to the nearest vertex is generated; then a shortest path to the second-nearest vertex is generated, and so on.
Algorithm for finding Shortest Path
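The pseudocode itself is not reproduced in these notes; the following is a minimal Dijkstra sketch in Python using a binary heap. The adjacency-dict encoding and the example edges are illustrative assumptions, chosen so that the 1 → 4 → 5 → 2 path of length 45 quoted above appears:

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} with non-negative weights w.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Assumed edges: the direct edge 1 -> 2 costs 50, but the path
# 1 -> 4 -> 5 -> 2 costs 10 + 15 + 20 = 45, so 45 is returned.
g = {1: [(4, 10), (2, 50)], 4: [(5, 15)], 5: [(2, 20)], 2: []}
print(dijkstra(g, 1)[2])   # 45
```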
SPANNING TREE: A subgraph 't' of a graph 'G' is called a spanning tree if
(i) it includes all the vertices of 'G', and
(ii) it is a tree.
Minimum cost spanning tree: For a given graph 'G' there can be more than one spanning tree. If weights are assigned to the edges of 'G', then the spanning tree which has the minimum total edge cost is called a minimal spanning tree.
The greedy method suggests that a minimum cost spanning tree can be obtained by constructing the tree edge by edge. The next edge to be included in the tree is the edge that results in a minimum increase in the sum of the costs of the edges included so far.
There are two basic algorithms for finding minimum-cost spanning trees, and both are greedy
algorithms
🡪Prim’s Algorithm
🡪Kruskal’s Algorithm
Prim’s Algorithm: Start with any one node in the spanning tree, and repeatedly add the
cheapest edge, and the node it leads to, for which the node is not already in the spanning tree.
PRIM'S ALGORITHM:
i) Select an edge with minimum cost and include it in the spanning tree.
ii) Among all the edges adjacent to the tree built so far, select the one with minimum cost.
iii) Repeat step (ii) until 'n' vertices and (n-1) edges have been included; the subgraph obtained does not contain any cycles.
Notes: At every stage a decision is made about a minimum-cost edge to be included in the spanning tree, chosen from the edges adjacent to the tree built so far; i.e., at every stage the subgraph obtained is a tree.
Prim's minimum spanning tree algorithm
Algorithm Prim (E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost adjacency
// matrix of an n-vertex graph, such that cost[i, j] is either a
// positive real number or ∞ if no edge (i, j) exists. A minimum
// spanning tree is computed and stored as a set of edges in the
// array t[1:n-1, 1:2]. (t[i, 1], t[i, 2]) is an edge in the
// minimum-cost spanning tree. The final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1, 1] := k; t[1, 2] := l;
    for i := 1 to n do // initialize near
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n - 1 do
    { // find n - 2 additional edges for t
        Let j be an index such that near[j] ≠ 0 and
            cost[j, near[j]] is minimum;
        t[i, 1] := j; t[i, 2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do // update near()
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}
The algorithm takes four arguments: E, the set of edges; cost, the n x n adjacency matrix with cost(i, j) a positive number if an edge exists between i and j and ∞ otherwise; n, the number of vertices; and t, an (n-1) x 2 matrix which holds the edges of the spanning tree.
E = { (1,2), (1,6), (2,3), (3,4), (4,5), (4,7), (5,6), (5,7), (2,7) }
n = {1,2,3,4,5,6,7)
i) The algorithm starts with a tree that includes only the minimum-cost edge of G. Then edges are added to this tree one by one.
ii) The next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j) is minimum among all edges adjacent to the tree.
iii) With each vertex 'j' not yet included in the tree, we associate a value near(j): a vertex in the tree such that cost(j, near(j)) is minimum among all choices for near(j).
iv) We define near(j) := 0 for all vertices 'j' that are already in the tree.
v) The next edge to include is defined by the vertex 'j' such that near(j) ≠ 0 and cost(j, near(j)) is minimum.
Analysis:
The time required by Prim's algorithm is directly proportional to the square of the number of vertices: if a graph 'G' has 'n' vertices, the time required by Prim's algorithm is O(n²).
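The near() bookkeeping translates almost directly into Python. A sketch over a cost-matrix representation (float('inf') standing for a missing edge); for simplicity it starts from vertex 0 rather than the globally cheapest edge, which yields a minimum spanning tree of the same cost:

```python
def prim(cost):
    """Minimum spanning tree via Prim's method, O(n^2) as in the analysis
    above. cost: n x n symmetric matrix with float('inf') for missing edges."""
    n = len(cost)
    in_tree = [False] * n
    in_tree[0] = True            # start from vertex 0 (any vertex works)
    near = [0] * n               # nearest tree vertex known for each j
    mincost, edges = 0, []
    for _ in range(n - 1):
        # pick the non-tree vertex j whose connecting edge is cheapest
        j = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: cost[v][near[v]])
        mincost += cost[j][near[j]]
        edges.append((near[j], j))
        in_tree[j] = True
        for k in range(n):       # update near() for the remaining vertices
            if not in_tree[k] and cost[k][j] < cost[k][near[k]]:
                near[k] = j
    return mincost, edges

INF = float('inf')
# Tiny illustrative graph (assumed data): MST cost is 2 + 1 = 3.
c = [[INF, 2, 3],
     [2, INF, 1],
     [3, 1, INF]]
print(prim(c))   # (3, [(0, 1), (1, 2)])
```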
Kruskal’s Algorithm: Start with no nodes or edges in the spanning tree, and repeatedly
add the cheapest edge that does not create a cycle.
In Kruskal's algorithm for determining the spanning tree, we arrange the edges in increasing order of cost.
i) The edges are considered one by one in that order, deleted from the graph, and included in the spanning tree when possible.
ii) At intermediate stages the subgraph need not be a tree; in fact, it is a forest.
iii) At the end, if we have included 'n' vertices and n-1 edges without forming cycles, then we get a single connected component without any cycles, i.e., a tree with minimum cost.
At every stage, as we include an edge in the spanning tree, we get disconnected trees represented by various sets. While including an edge in the spanning tree we need to check that it does not form a cycle: inclusion of an edge (i, j) forms a cycle iff i and j are in the same set. Otherwise the edge can be included in the spanning tree.
Kruskal minimum spanning tree algorithm
Algorithm Kruskal (E, cost, n, t)
// E is the set of edges in G; G has n vertices. cost[u, v] is the
// cost of edge (u, v). t is the set of edges in the minimum-cost
// spanning tree. The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := -1; // each vertex is in a different set
    i := 0; mincost := 0.0;
    while ((i < n - 1) and (heap not empty)) do
    {
        Delete a minimum-cost edge (u, v) from the heap
            and reheapify using Adjust;
        j := Find(u); k := Find(v);
        if (j ≠ k) then
        {
            i := i + 1;
            t[i, 1] := u; t[i, 2] := v;
            mincost := mincost + cost[u, v];
            Union(j, k);
        }
    }
    if (i ≠ n - 1) then write ("No spanning tree");
    else return mincost;
}
Consider the example graph (figure omitted). Using Kruskal's method, the edges of this graph are considered for inclusion in the minimum cost spanning tree in the order (1, 2), (3, 6), (4, 6), (2, 6), (1, 4), (3, 5), (2, 5), (1, 5), (2, 3), and (5, 6). This corresponds to the cost sequence 10, 15, 20, 25, 30, 35, 40, 45, 50, 55. The first four edges are included in T. The next edge to be considered is (1, 4). This edge connects two vertices already connected in T and so it is rejected. Next, the edge (3, 5) is selected, and that completes the spanning tree.
Analysis: If the number of edges in the graph is |E|, then the time for Kruskal's algorithm is O(|E| log |E|).
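The set bookkeeping above can be sketched with a simple union-find; the edges below follow the example's cost sequence, with the vertices renumbered 0-based as an illustrative assumption:

```python
def kruskal(n, edge_list):
    """edge_list: [(w, u, v), ...] with vertices 0..n-1. Sorts edges by
    weight and uses union-find to reject cycle-forming edges."""
    parent = list(range(n))

    def find(x):                  # set representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mincost, tree = 0, []
    for w, u, v in sorted(edge_list):
        ru, rv = find(u), find(v)
        if ru != rv:              # edge joins two different trees
            parent[ru] = rv       # union
            mincost += w
            tree.append((u, v))
    if len(tree) != n - 1:
        return None               # no spanning tree
    return mincost, tree

# The example's edges, 0-based: (1,2)=10 becomes (0,1), (3,6)=15
# becomes (2,5), and so on. (1,4) is rejected, exactly as in the text.
edges = [(10, 0, 1), (15, 2, 5), (20, 3, 5), (25, 1, 5), (30, 0, 3),
         (35, 2, 4), (40, 1, 4), (45, 0, 4), (50, 1, 2), (55, 4, 5)]
print(kruskal(6, edges)[0])   # 10 + 15 + 20 + 25 + 35 = 105
```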
Dynamic Programming
When optimal decision sequences contain optimal decision subsequences, we can establish
recurrence equations, called dynamic-programming recurrence equations, that enable us to
solve the problem in an efficient way.
Dynamic programming is based on the principle of optimality (also coined by Bellman). The
principle of optimality states that no matter whatever the initial state and initial decision are,
the remaining decision sequence must constitute an optimal decision sequence with regard to
the state resulting from the first decision. The principle implies that an optimal decision
sequence is comprised of optimal decision subsequences. Since the principle of optimality
may not hold for some formulations of some problems, it is necessary to verify that it does
hold for the problem being solved. Dynamic programming cannot be applied when this
principle does not hold.
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. In addition, if <u, v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.
Let the vertex ‘s’ is the source, and ‘t’ the sink. Let c (i, j) be the cost of edge <i, j>. The cost
of a path from ‘s’ to ‘t’ is the sum of the costs of the edges on the path. The multistage graph
problem is to find a minimum cost path from ‘s’ to ‘t’. Each set Vi defines a stage in the
graph. Because of the constraints on E, every path from ‘s’ to ‘t’ starts in stage 1, goes to
stage 2, then to stage 3, then to stage 4, and so on, and eventually terminates in stage k.
ALGORITHM:
Algorithm Fgraph (G, k, n, p)
// The input is a k-stage graph G = (V, E) with n vertices indexed
// in order of stages. E is a set of edges and c[i, j] is the cost
// of <i, j>. p[1:k] is a minimum-cost path.
{
    cost[n] := 0.0;
    for j := n - 1 to 1 step -1 do
    { // compute cost[j]
        Let r be a vertex such that <j, r> is an edge of G and
            c[j, r] + cost[r] is minimum;
        cost[j] := c[j, r] + cost[r];
        d[j] := r;
    }
    // Find a minimum-cost path.
    p[1] := 1; p[k] := n;
    for j := 2 to k - 1 do p[j] := d[p[j - 1]];
}
The multistage graph problem can also be solved using the backward approach. Let bp(i, j) be a minimum cost path from vertex s to vertex j in Vi, and let Bcost(i, j) be the cost of bp(i, j). From the backward approach we obtain:
Bcost (i, j) = min { Bcost (i-1, l) + c (l, j) },  l ∈ Vi-1, <l, j> ∈ E
Complexity Analysis:
The complexity analysis of the algorithm is fairly straightforward: if G has |E| edges, then the time for the first for loop is Θ(|V| + |E|).
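The forward recurrence of Fgraph can be sketched in Python; the dict-of-edge-costs encoding and the 4-vertex, 3-stage example graph are illustrative assumptions, not data from the notes:

```python
def fgraph(n, c):
    """Forward approach for the multistage graph problem.
    c: {(j, r): cost} over vertices 1..n indexed in stage order,
    with vertex 1 = s and vertex n = t. Returns (min cost, path)."""
    INF = float('inf')
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)
    cost[n] = 0                          # cost[t] = 0
    for j in range(n - 1, 0, -1):        # compute cost[j] right to left
        for (u, r), w in c.items():
            if u == j and w + cost[r] < cost[j]:
                cost[j] = w + cost[r]
                d[j] = r                 # record the decision at j
    path, v = [1], 1
    while v != n:                        # follow the recorded decisions
        v = d[v]
        path.append(v)
    return cost[1], path

# Hypothetical graph: s = 1, t = 4; the cheaper route is 1 -> 3 -> 4.
c = {(1, 2): 1, (1, 3): 2, (2, 4): 5, (3, 4): 3}
print(fgraph(4, c))   # (5, [1, 3, 4])
```

Scanning the whole edge dict for every j keeps the sketch short; grouping edges by their tail vertex first would match the Θ(|V| + |E|) bound stated above.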
EXAMPLE 1:
Find the minimum cost path from s to t in the multistage graph of five stages shown below.
Do this first using forward approach and then using backward approach.
FORWARD APPROACH:
We use the following equation to find the minimum cost path from s to t:
cost (i, j) = min { c (j, l) + cost (i+1, l) },  l ∈ Vi+1, <j, l> ∈ E
[Figure: the five-stage graph (12 vertices, s = 1, t = 12) is omitted here.]
cost (1, 1) = min {c (1, 2) + cost (2, 2), c (1, 3) + cost (2, 3), c (1, 4) + cost (2, 4), c (1, 5) + cost (2, 5)}
= min {9 + cost (2, 2), 7 + cost (2, 3), 3 + cost (2, 4), 2 + cost (2, 5)}
cost (2, 2) = min {c (2, 6) + cost (3, 6), c (2, 7) + cost (3, 7), c (2, 8) + cost (3, 8)}
= min {4 + cost (3, 6), 2 + cost (3, 7), 1 + cost (3, 8)}
cost (3, 6) = min {c (6, 9) + cost (4, 9), c (6, 10) + cost (4, 10)}
= min {6 + cost (4, 9), 5 + cost (4, 10)}
cost (4, 9) = min {c (9, 12) + cost (5, 12)} = min {4 + 0} = 4
cost (4, 10) = min {c (10, 12) + cost (5, 12)} = min {2 + 0} = 2
cost (4, 11) = min {c (11, 12) + cost (5, 12)} = min {5 + 0} = 5
Therefore, cost (3, 6) = min {6 + 4, 5 + 2} = min {10, 7} = 7
cost (3, 7) = min {c (7, 9) + cost (4, 9), c (7, 10) + cost (4, 10)}
= min {4 + cost (4, 9), 3 + cost (4, 10)} = min {4 + 4, 3 + 2} = min {8, 5} = 5
cost (3, 8) = min {c (8, 10) + cost (4, 10), c (8, 11) + cost (4, 11)}
= min {5 + cost (4, 10), 6 + cost (4, 11)} = min {5 + 2, 6 + 5} = min {7, 11} = 7
Therefore, cost (2, 2) = min {4 + 7, 2 + 5, 1 + 7} = min {11, 7, 8} = 7
cost (2, 3) = min {c (3, 6) + cost (3, 6), c (3, 7) + cost (3, 7)}
= min {2 + cost (3, 6), 7 + cost (3, 7)} = min {2 + 7, 7 + 5} = min {9, 12} = 9
cost (2, 4) = min {c (4, 8) + cost (3, 8)} = min {11 + 7} = 18
cost (2, 5) = min {c (5, 7) + cost (3, 7), c (5, 8) + cost (3, 8)} = min {11 + 5, 8 + 7} = min {16, 15} = 15
Therefore, cost (1, 1) = min {9 + 7, 7 + 9, 3 + 18, 2 + 15} = min {16, 16, 21, 17} = 16
The minimum cost path is 1 → 2 → 7 → 10 → 12 (or 1 → 3 → 6 → 10 → 12), with cost 16.
BACKWARD APPROACH:
Bcost (i, j) = min { Bcost (i-1, l) + c (l, j) },  l ∈ Vi-1, <l, j> ∈ E
Bcost (5, 12) = min {Bcost (4, 9) + c (9, 12), Bcost (4, 10) + c (10, 12),
Bcost (4, 11) + c (11, 12)}
= min {Bcost (4, 9) + 4, Bcost (4, 10) + 2, Bcost (4, 11) + 5}
Bcost (4, 9) = min {Bcost (3, 6) + c (6, 9), Bcost (3, 7) + c (7, 9)}
= min {Bcost (3, 6) + 6, Bcost (3, 7) + 4}
Bcost (3, 6) = min {Bcost (2, 2) + c (2, 6), Bcost (2, 3) + c (3, 6)}
= min {Bcost (2, 2) + 4, Bcost (2, 3) + 2}
Bcost (2, 2) = min {Bcost (1, 1) + c (1, 2)} = min {0 + 9} = 9
Bcost (2, 3) = min {Bcost (1, 1) + c (1, 3)} = min {0 + 7} = 7
Therefore, Bcost (3, 6) = min {9 + 4, 7 + 2} = min {13, 9} = 9
Bcost (3, 7) = min {Bcost (2, 2) + c (2, 7), Bcost (2, 3) + c (3, 7), Bcost (2, 5) + c (5,
7)}
Bcost (2, 5) = min {Bcost (1, 1) + c (1, 5)} = 2
Bcost (3, 7) = min {9 + 2, 7 + 7, 2 + 11} = min {11, 14, 13} = 11
Bcost (4, 9) = min {9 + 6, 11 + 4} = min {15, 15} = 15
Bcost (4, 10) = min {Bcost (3, 6) + c (6, 10), Bcost (3, 7) + c (7, 10),
Bcost (3, 8) + c (8, 10)}
Bcost (3, 8) = min {Bcost (2, 2) + c (2, 8), Bcost (2, 4) + c (4, 8),
Bcost (2, 5) + c (5, 8)}
Bcost (2, 4) = min {Bcost (1, 1) + c (1, 4)} = 3
Bcost (3, 8) = min {9 + 1, 3 + 11, 2 + 8} = min {10, 14, 10} = 10
Bcost (4, 10) = min {9 + 5, 11 + 3, 10 + 5} = min {14, 14, 15} = 14
Bcost (4, 11) = min {Bcost (3, 8) + c (8, 11)} = min {Bcost (3, 8) + 6} = min {10 + 6} =
16
Bcost (5, 12) = min {15 + 4, 14 + 2, 16 + 5} = min {19, 16, 21} = 16.
EXAMPLE 2:
Find the minimum cost path from s to t in the multistage graph of five stages shown below.
Do this first using forward approach and then using backward approach.
[Figure: the five-stage graph for Example 2 (9 vertices, s = 1, t = 9) is omitted here.]
SOLUTION:
FORWARD APPROACH:
cost (1, 1) = min {c (1, 2) + cost (2, 2), c (1, 3) + cost (2, 3)}
= min {5 + cost (2, 2), 2 + cost (2, 3)}
cost (2, 2) = min {c (2, 4) + cost (3, 4), c (2, 6) + cost (3, 6)}
= min {3+ cost (3, 4), 3 + cost (3, 6)}
cost (3, 4) = min {c (4, 7) + cost (4, 7), c (4, 8) + cost (4, 8)}
= min {1 + cost (4, 7), 4 + cost (4, 8)} = min {1 + 7, 4 + 3} = min {8, 7} = 7
cost (4, 7) = min {c (7, 9) + cost (5, 9)} = min {7 + 0} = 7
cost (4, 8) = min {c (8, 9) + cost (5, 9)} = min {3 + 0} = 3
cost (3, 5) = min {c (5, 7) + cost (4, 7), c (5, 8) + cost (4, 8)} = min {6 + 7, 2 + 3} = 5
cost (3, 6) = min {c (6, 7) + cost (4, 7), c (6, 8) + cost (4, 8)} = min {6 + 7, 2 + 3} = 5
Therefore, cost (2, 2) = min {3 + cost (3, 4), 3 + cost (3, 6)} = min {3 + 7, 3 + 5} = 8
cost (2, 3) = min {c (3, 4) + cost (3, 4), c (3, 5) + cost (3, 5), c (3, 6) + cost (3, 6)}
= min {6 + 7, 5 + 5, 8 + 5} = min {13, 10, 13} = 10
cost (1, 1) = min {5 + 8, 2 + 10} = min {13, 12} = 12
The minimum cost path has cost 12.
BACKWARD APPROACH:
Bcost (i, j) = min { Bcost (i-1, l) + c (l, j) },  l ∈ Vi-1, <l, j> ∈ E
Bcost (5, 9) = min {Bcost (4, 7) + c (7, 9), Bcost (4, 8) + c (8, 9)}
= min {Bcost (4, 7) + 7, Bcost (4, 8) + 3}
Bcost (4, 7) = min {Bcost (3, 4) + c (4, 7), Bcost (3, 5) + c (5, 7),
Bcost (3, 6) + c (6, 7)}
= min {Bcost (3, 4) + 1, Bcost (3, 5) + 6, Bcost (3, 6) + 6}
Bcost (3, 4) = min {Bcost (2, 2) + c (2, 4), Bcost (2, 3) + c (3, 4)}
= min {Bcost (2, 2) + 3, Bcost (2, 3) + 6}
Bcost (2, 2) = min {Bcost (1, 1) + c (1, 2)} = min {0 + 5} = 5
Bcost (2, 3) = min {Bcost (1, 1) + c (1, 3)} = min {0 + 2} = 2
Therefore, Bcost (3, 4) = min {5 + 3, 2 + 6} = 8
Bcost (3, 5) = min {Bcost (2, 3) + c (3, 5)} = 2 + 5 = 7
Bcost (3, 6) = min {Bcost (2, 2) + c (2, 6), Bcost (2, 3) + c (3, 6)} = min {5 + 3, 2 + 8} = 8
Bcost (4, 8) = min {Bcost (3, 4) + c (4, 8), Bcost (3, 5) + c (5, 8), Bcost (3, 6) + c (6, 8)}
= min {8 + 4, 7 + 2, 8 + 2} = min {12, 9, 10} = 9
Bcost (4, 7) = min {Bcost (3, 4) + 1, Bcost (3, 5) + 6, Bcost (3, 6) + 6} = min {9, 13, 14} = 9
Bcost (5, 9) = min {Bcost (4, 7) + 7, Bcost (4, 8) + 3} = min {16, 12} = 12
In the all pairs shortest path problem, we are to find a shortest path between every pair
of vertices in a directed graph G. That is, for every pair of vertices (i, j), we are to find a
shortest path from i to j as well as one from j to i. These two paths are the same when G
is undirected.
When no edge has a negative length, the all-pairs shortest path problem may be solved by
using Dijkstra’s greedy single source algorithm n times, once with each of the n vertices as
the source vertex.
The all pairs shortest path problem is to determine a matrix A such that A (i, j) is the
length of a shortest path from i to j. The matrix A can be obtained by solving n
single-source
problems using the algorithm shortest Paths. Since each application of this procedure
requires O (n2) time, the matrix A can be obtained in O (n3) time.
The dynamic programming solution, called Floyd’s algorithm, runs in O (n3) time. Floyd’s
algorithm works even when the graph has negative length edges (provided there are no
negative length cycles).
The shortest i to j path in G, i ≠ j originates at vertex i and goes through some intermediate
vertices (possibly none) and terminates at vertex j. If k is an intermediate vertex on this
shortest path, then the subpaths from i to k and from k to j must be shortest paths from i to
k and k to j, respectively. Otherwise, the i to j path is not of minimum length. So, the
principle of optimality holds. Let Ak (i, j) represent the length of a shortest path from i to j
going through no vertex of index greater than k, we obtain:
Ak (i, j) = min { Ak-1 (i, j), Ak-1 (i, k) + Ak-1 (k, j) },  1 ≤ k ≤ n, with A0 (i, j) = c (i, j).
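The recurrence gives Floyd's algorithm directly; a minimal sketch, updating a copy of the cost matrix in place:

```python
def floyd(cost):
    """All-pairs shortest paths. cost: n x n matrix, float('inf') for a
    missing edge. Returns A with A[i][j] = length of a shortest i-j path."""
    n = len(cost)
    A = [row[:] for row in cost]          # A^0 is the cost matrix
    for k in range(n):                    # allow intermediate vertex k
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

# The 3-vertex example below (∞ marks the missing edge 3 -> 2):
INF = float('inf')
A0 = [[0, 4, 11],
      [6, 0, 2],
      [3, INF, 0]]
print(floyd(A0))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```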
Example 1:
Given a weighted digraph G = (V, E) with weight. Determine the length of the shortest
path between all pairs of vertices in G. Here we assume that there are no cycles with
zero or negative cost.
[Figure: weighted digraph with 3 vertices; figure omitted.]
Cost adjacency matrix A0:
  0    4   11
  6    0    2
  3    ∞    0
General formula: Ak (i, j) = min { Ak-1 (i, k) + Ak-1 (k, j), c (i, j) },  1 ≤ k ≤ n
Step 1: Solving the equation for k = 1:
A1 (1, 1) = min {(A0 (1, 1) + A0 (1, 1)), c (1, 1)} = min {0 + 0, 0} = 0
A1 (1, 2) = min {(A0 (1, 1) + A0 (1, 2)), c (1, 2)} = min {(0 + 4), 4} = 4
A1 (1, 3) = min {(A0 (1, 1) + A0 (1, 3)), c (1, 3)} = min {(0 + 11), 11} = 11
A1 (2, 1) = min {(A0 (2, 1) + A0 (1, 1)), c (2, 1)} = min {(6 + 0), 6} = 6
A1 (2, 2) = min {(A0 (2, 1) + A0 (1, 2)), c (2, 2)} = min {(6 + 4), 0} = 0
A1 (2, 3) = min {(A0 (2, 1) + A0 (1, 3)), c (2, 3)} = min {(6 + 11), 2} = 2
A1 (3, 1) = min {(A0 (3, 1) + A0 (1, 1)), c (3, 1)} = min {(3 + 0), 3} = 3
A1 (3, 2) = min {(A0 (3, 1) + A0 (1, 2)), c (3, 2)} = min {(3 + 4), ∞} = 7
A1 (3, 3) = min {(A0 (3, 1) + A0 (1, 3)), c (3, 3)} = min {(3 + 11), 0} = 0
A(1) =
  0   4   11
  6   0    2
  3   7    0

Step 2: Solving the equation for k = 2:
A2 (1, 1) = min {(A1 (1, 2) + A1 (2, 1)), c (1, 1)} = min {(4 + 6), 0} = 0
A2 (1, 2) = min {(A1 (1, 2) + A1 (2, 2)), c (1, 2)} = min {(4 + 0), 4} = 4
A2 (1, 3) = min {(A1 (1, 2) + A1 (2, 3)), c (1, 3)} = min {(4 + 2), 11} = 6
A2 (2, 1) = min {(A1 (2, 2) + A1 (2, 1)), c (2, 1)} = min {(0 + 6), 6} = 6
A2 (2, 2) = min {(A1 (2, 2) + A1 (2, 2)), c (2, 2)} = min {(0 + 0), 0} = 0
A2 (2, 3) = min {(A1 (2, 2) + A1 (2, 3)), c (2, 3)} = min {(0 + 2), 2} = 2
A2 (3, 1) = min {(A1 (3, 2) + A1 (2, 1)), c (3, 1)} = min {(7 + 6), 3} = 3
A2 (3, 2) = min {(A1 (3, 2) + A1 (2, 2)), c (3, 2)} = min {(7 + 0), ∞} = 7
A2 (3, 3) = min {(A1 (3, 2) + A1 (2, 3)), c (3, 3)} = min {(7 + 2), 0} = 0

A(2) =
  0   4   6
  6   0   2
  3   7   0
Step 3: Solving the equation for k = 3:
A3 (1, 1) = min {A2 (1, 3) + A2 (3, 1), c (1, 1)} = min {(6 + 3), 0} = 0
A3 (1, 2) = min {A2 (1, 3) + A2 (3, 2), c (1, 2)} = min {(6 + 7), 4} = 4
A3 (1, 3) = min {A2 (1, 3) + A2 (3, 3), c (1, 3)} = min {(6 + 0), 6} = 6
A3 (2, 1) = min {A2 (2, 3) + A2 (3, 1), c (2, 1)} = min {(2 + 3), 6} = 5
A3 (2, 2) = min {A2 (2, 3) + A2 (3, 2), c (2, 2)} = min {(2 + 7), 0} = 0
A3 (2, 3) = min {A2 (2, 3) + A2 (3, 3), c (2, 3)} = min {(2 + 0), 2} = 2
A3 (3, 1) = min {A2 (3, 3) + A2 (3, 1), c (3, 1)} = min {(0 + 3), 3} = 3
A3 (3, 2) = min {A2 (3, 3) + A2 (3, 2), c (3, 2)} = min {(0 + 7), 7} = 7
A3 (3, 3) = min {A2 (3, 3) + A2 (3, 3), c (3, 3)} = min {(0 + 0), 0} = 0

A(3) =
  0   4   6
  5   0   2
  3   7   0
TRAVELLING SALESPERSON PROBLEM
Let G = (V, E) be a directed graph with edge costs cij. The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost. The tour is to be a simple path that starts and ends at vertex 1.
Let g (i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g (1, V - {1}) is the length of an optimal salesperson tour. From the principle of optimality it follows that:
g (1, V - {1}) = min { c1k + g (k, V - {1, k}) },  2 ≤ k ≤ n        -- (1)
Generalizing equation (1), we obtain (for i ∉ S):
g (i, S) = min { cij + g (j, S - {j}) },  j ∈ S                      -- (2)
Equation (2) can be solved for g (1, V - {1}) if we know g (k, V - {1, k}) for all choices of k.
Complexity Analysis:
For each value of |S| there are n - 1 choices for i. The number of distinct sets S of size k not including 1 and i is C(n-2, k). Hence, the total number of g (i, S)'s to be computed before computing g (1, V - {1}) is:
Σ (k = 0 .. n-2) (n - 1) C(n-2, k)
By the binomial theorem, Σ (k = 0 .. n-2) C(n-2, k) = 2^(n-2). Therefore,
Σ (k = 0 .. n-2) (n - 1) C(n-2, k) = (n - 1) 2^(n-2)
This is Θ(n 2^(n-2)), so there are an exponential number of g (i, S)'s to calculate. Calculating one g (i, S) requires finding the minimum of at most n quantities. Therefore, the entire algorithm is Θ(n² 2^(n-2)). This is better than enumerating all n! different tours to find the best one; we have traded one exponential growth for a much smaller exponential growth.
The most serious drawback of this dynamic programming solution is the space needed, which is O(n 2^n). This is too large even for modest values of n.
Example 1:
For the following graph find minimum cost tour for the traveling
salesperson problem:
The cost adjacency matrix:
  0   10   15   20
  5    0    9   10
  6   13    0   12
  8    8    9    0

g (2, ∅) = c21 = 5
g (3, ∅) = c31 = 6
g (4, ∅) = c41 = 8

Next, g (i, S) for |S| = 1:
g (2, {3}) = c23 + g (3, ∅) = 9 + 6 = 15
g (2, {4}) = c24 + g (4, ∅) = 10 + 8 = 18
g (3, {2}) = c32 + g (2, ∅) = 13 + 5 = 18
g (3, {4}) = c34 + g (4, ∅) = 12 + 8 = 20
g (4, {2}) = c42 + g (2, ∅) = 8 + 5 = 13
g (4, {3}) = c43 + g (3, ∅) = 9 + 6 = 15

Next, g (i, S) for |S| = 2:
g (2, {3, 4}) = min {c23 + g (3, {4}), c24 + g (4, {3})} = min {9 + 20, 10 + 15} = min {29, 25} = 25
g (3, {2, 4}) = min {c32 + g (2, {4}), c34 + g (4, {2})} = min {13 + 18, 12 + 13} = min {31, 25} = 25
g (4, {2, 3}) = min {c42 + g (2, {3}), c43 + g (3, {2})} = min {8 + 15, 9 + 18} = min {23, 27} = 23

Finally,
g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}
= min {10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35
The minimum cost tour has length 35.
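Equations (1) and (2) translate directly into a tabulation over subsets (the Held-Karp scheme). A sketch, where vertex 1 of the notes becomes index 0:

```python
from itertools import combinations

def tsp(c):
    """Dynamic programming for TSP over cost matrix c (n x n).
    Tours start and end at vertex 0. Returns the minimum tour cost."""
    n = len(c)
    # g[(i, S)] = length of a shortest path from i through all of S, then to 0
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                # equation (2): g(i, S) = min over j in S of cij + g(j, S - {j})
                g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    # equation (1): close the tour from vertex 0
    full = frozenset(range(1, n))
    return min(c[0][k] + g[(k, full - {k})] for k in full)

# The 4-city example above:
c = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(c))   # 35
```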
OPTIMAL BINARY SEARCH TREES
Let P (i) be the probability with which we search for ai, and let Q (i) be the probability of an unsuccessful search. Every internal node represents a point where a successful search may terminate; every external node represents a point where an unsuccessful search may terminate.
The expected cost contribution from the internal node for ai is P(i) * level(ai).
An unsuccessful search terminates at an external node Ei; the cost contribution for this node is Q(i) * (level(Ei) - 1).
The expected cost of the binary search tree is:
Σ (i = 1..n) P(i) * level(ai)  +  Σ (i = 0..n) Q(i) * (level(Ei) - 1)
Given a fixed set of identifiers, we wish to create a binary search tree organization. We
may expect different binary search trees for the same identifier set to have different
performance characteristics.
The computation of each of these c(i, j)’s requires us to find the minimum of m quantities.
Hence, each such c(i, j) can be computed in time O(m). The total time for all c(i, j)’s with j
– i = m is therefore O(nm – m2).
The total time to evaluate all the c(i, j)’s and r(i, j)’s is therefore:
~ (nm - m2 ) = O (n3
)1<m<n
Example 1: The possible binary search trees for the identifier set (a1, a2, a3) = (do, if, stop) are as follows. Given the equal probabilities P(i) = Q(i) = 1/7 for all i, we have:
[Figure: the possible binary search trees, Tree 1 through Tree 4, are omitted here. Tree 2 is the balanced tree with 'if' at the root and 'do' and 'stop' as its children; the others are skewed trees.]
Cost (tree 1) = (1/7 x 1 + 1/7 x 2 + 1/7 x 3) + (1/7 x 1 + 1/7 x 2 + 1/7 x 3 + 1/7 x 3)
= (1 + 2 + 3)/7 + (1 + 2 + 3 + 3)/7 = 6/7 + 9/7 = 15/7
Cost (tree 2) = (1/7 x 1 + 1/7 x 2 + 1/7 x 2) + (1/7 x 2 + 1/7 x 2 + 1/7 x 2 + 1/7 x 2)
= (1 + 2 + 2)/7 + (2 + 2 + 2 + 2)/7 = 5/7 + 8/7 = 13/7
Cost (tree 3) = (1 + 2 + 3)/7 + (1 + 2 + 3 + 3)/7 = 15/7
Cost (tree 4) = (1 + 2 + 3)/7 + (1 + 2 + 3 + 3)/7 = 15/7
Tree 2, the balanced tree, has the minimum expected cost.
A Huffman coding tree, which is built by a greedy algorithm, keeps its data only at the leaves and need not preserve the property that all nodes to the left of the root have smaller keys. Construction of an optimal binary search tree is harder, because the data is not constrained to appear only at the leaves, and because the tree must satisfy the binary search tree property: all nodes to the left of the root must have smaller keys.
A dynamic programming solution to the problem of obtaining an optimal binary search tree can be viewed as constructing the tree by a sequence of decisions that satisfy the principle of optimality. A possible approach is to decide which of the ai's should be assigned to the root node of the tree T. If we choose ak, then it is clear that the internal nodes for a1, a2, ..., ak-1 as well as the external nodes for the classes E0, E1, ..., Ek-1 will lie in the left subtree L of the root. The remaining nodes will be in the right subtree R. The structure of an optimal binary search tree is:
Cost (L) = Σ (i = 1 .. k-1) P(i) * level(ai) + Σ (i = 0 .. k-1) Q(i) * (level(Ei) - 1)
Cost (R) = Σ (i = k+1 .. n) P(i) * level(ai) + Σ (i = k .. n) Q(i) * (level(Ei) - 1)
The C (i, j) can be computed as:
C (i, j) = min { C (i, k-1) + C (k, j) + P (k) + w (i, k-1) + w (k, j) },  i < k ≤ j     -- (1)
Equation (1) may be solved for C (0, n) by first computing all C (i, j) such that j - i = 1, then all C (i, j) such that j - i = 2, then all C (i, j) with j - i = 3, and so on.
C (i, j) is the cost of the optimal binary search tree Tij. During the computation we record the root R (i, j) of each tree Tij; an optimal binary search tree may then be constructed from these R (i, j). R (i, j) is the value of k that minimizes equation (1).
We solve the problem by first finding W (i, i+1), C (i, i+1) and R (i, i+1) for 0 ≤ i < 4, then W (i, i+2), C (i, i+2) and R (i, i+2) for 0 ≤ i < 3, and repeating until W (0, n), C (0, n) and R (0, n) are obtained.
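The row-by-row evaluation of W, C and R can be sketched as follows (the plain O(n³) triple loop, without Knuth's speedup; indices as in the notes, with W(i,i) = Q(i) and C(i,i) = R(i,i) = 0):

```python
def obst(p, q):
    """Optimal BST tables. p[1..n] are success weights (p[0] unused),
    q[0..n] failure weights; integer weights work the same as probabilities.
    Returns (W, C, R), where C[0][n] is the optimal cost."""
    n = len(p) - 1
    W = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]                    # empty tree: failure weight only
    for m in range(1, n + 1):             # solve for j - i = m, row by row
        for i in range(n - m + 1):
            j = i + m
            W[i][j] = W[i][j - 1] + p[j] + q[j]
            # try every k, i < k <= j, as the root of T_ij
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: C[i][k - 1] + C[k][j])
            C[i][j] = W[i][j] + C[i][best_k - 1] + C[best_k][j]
            R[i][j] = best_k
    return W, C, R

# Example 1 below: P(1:4) = (3, 3, 1, 1), Q(0:4) = (2, 3, 1, 1, 1).
W, C, R = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(C[0][4], R[0][4])   # 32 2
```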
Example 1:
Let n = 4, and (a1, a2, a3, a4) = (do, if, need, while) Let P (1: 4) = (3, 3, 1, 1) and Q (0:
4) = (2, 3, 1, 1, 1)
Solution:
Table for recording W (i, j), C (i, j) and R (i, j); each entry is the triple W, C, R. Row m holds the values with j - i = m; the column gives i.

            i = 0       i = 1       i = 2      i = 3      i = 4
j - i = 0   2, 0, 0     3, 0, 0     1, 0, 0    1, 0, 0    1, 0, 0
j - i = 1   8, 8, 1     7, 7, 2     3, 3, 3    3, 3, 4
j - i = 2   12, 19, 1   9, 12, 2    5, 8, 3
j - i = 3   14, 25, 2   11, 19, 2
j - i = 4   16, 32, 2

This computation is carried out row-wise from row 0 to row 4. Initially, W (i, i) = Q (i), C (i, i) = 0 and R (i, i) = 0 for 0 ≤ i ≤ 4.
Solving for C (0, n):
First, computing all C (i, j) such that j - i = 1; j = i + 1 and as 0 ≤ i < 4, i = 0, 1, 2 and 3; i < k ≤ j. Start with i = 0; so j = 1; the only possible value is k = 1.
W (0, 1) = P (1) + Q (1) + W (0, 0) = 3 + 3 + 2 = 8
C (0, 1) = W (0, 1) + min {C (0, 0) + C (1, 1)} = 8 + 0 = 8
R (0, 1) = 1
(The remaining j - i = 1 entries, shown in the table, are computed the same way.)
Second, computing all C (i, j) such that j - i = 2; j = i + 2 and as 0 ≤ i < 3, i = 0, 1, 2; i < k ≤ j. Start with i = 0; so j = 2; the possible values for k are 1 and 2.
W (0, 2) = P (2) + Q (2) + W (0, 1) = 3 + 1 + 8 = 12
C (0, 2) = W (0, 2) + min {(C (0, 0) + C (1, 2)), (C (0, 1) + C (2, 2))} = 12 + min {(0 + 7), (8 + 0)} = 19
R (0, 2) = 1
Next, with i = 1; so j = 3; the possible values for k are 2 and 3.
W (1, 3) = P (3) + Q (3) + W (1, 2) = 1 + 1 + 7 = 9
C (1, 3) = W (1, 3) + min {[C (1, 1) + C (2, 3)], [C (1, 2) + C (3, 3)]}
= 9 + min {(0 + 3), (7 + 0)} = 9 + 3 = 12
R (1, 3) = 2
Next, with i = 2; so j = 4; the possible values for k are 3 and 4.
W (2, 4) = P (4) + Q (4) + W (2, 3) = 1 + 1 + 3 = 5
C (2, 4) = W (2, 4) + min {[C (2, 2) + C (3, 4)], [C (2, 3) + C (4, 4)]} = 5 + min {(0 + 3), (3 + 0)} = 5 + 3 = 8
R (2, 4) = 3
Third, computing all C (i, j) such that j - i = 3; j = i + 3 and as 0 ≤ i < 2, i = 0, 1; i < k ≤ j. Start with i = 0; so j = 3; the possible values for k are 1, 2 and 3.
W (0, 3) = P (3) + Q (3) + W (0, 2) = 1 + 1 + 12 = 14
C (0, 3) = W (0, 3) + min {[C (0, 0) + C (1, 3)], [C (0, 1) + C (2, 3)], [C (0, 2) + C (3, 3)]}
= 14 + min {(0 + 12), (8 + 3), (19 + 0)} = 14 + 11 = 25
R (0, 3) = 2
Fourth, computing all C (i, j) such that j - i = 4; j = i + 4 and as 0 ≤ i < 1, i = 0; i < k ≤ j. Start with i = 0; so j = 4; the possible values for k are 1, 2, 3 and 4.
W (0, 4) = P (4) + Q (4) + W (0, 3) = 1 + 1 + 14 = 16
C (0, 4) = W (0, 4) + min {[C (0, 0) + C (1, 4)], [C (0, 1) + C (2, 4)], [C (0, 2) + C (3, 4)], [C (0, 3) + C (4, 4)]}
= 16 + min {(0 + 19), (8 + 8), (19 + 3), (25 + 0)} = 16 + 16 = 32
R (0, 4) = 2
From the table we see that C (0, 4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4), and the root of the tree T04 is a2.
Hence the left subtree is T01 and the right subtree is T24. The root of T01 is a1 and the root of T24 is a3.
The left and right subtrees of T01 are T00 and T11 respectively; the left and right subtrees of T24 are T22 and T34, and the root of T34 is a4.
Example 2:
Consider four elements a1, a2, a3 and a4 with Q0 = 1/8, Q1 = 3/16, Q2 = Q3 = Q4 = 1/16
and p1 = 1/4, p2 = 1/8, p3 = p4 =1/16. Construct an optimal binary search tree. Solving for
C (0, n):
(Working with the probabilities scaled by 16 gives the integer weights P = (4, 2, 1, 1) and Q = (2, 3, 1, 1, 1), which are used below.)
First, computing all C (i, j) such that j - i = 1; j = i + 1 and as 0 ≤ i < 4, i = 0, 1, 2 and 3; i < k ≤ j. Start with i = 0; so j = 1; the only possible value is k = 1.
W (0, 1) = P (1) + Q (1) + W (0, 0) = 4 + 3 + 2 = 9
C (0, 1) = W (0, 1) + min {C (0, 0) + C (1, 1)} = 9 + [(0 + 0)] = 9
R (0, 1) = 1 (the value of k that achieves the minimum in the above equation).
Next with i = 1; so j = 2; the only possible value is k = 2.
W (1, 2) = P (2) + Q (2) + W (1, 1) = 2 + 1 + 3 = 6
C (1, 2) = W (1, 2) + min {C (1, 1) + C (2, 2)} = 6 + [(0 + 0)] = 6
R (1, 2) = 2
Next with i = 2; so j = 3; the only possible value is k = 3.
W (2, 3) = P (3) + Q (3) + W (2, 2) = 1 + 1 + 1 = 3
C (2, 3) = W (2, 3) + min {C (2, 2) + C (3, 3)} = 3 + [(0 + 0)] = 3
R (2, 3) = 3
Next with i = 3; so j = 4; as i < k ≤ j, so the possible value for k = 4
W (3, 4) = P (4) + Q (4) + W (3, 3) = 1 + 1 + 1 =3
C (3, 4) = W (3, 4) + min {[C (3, 3) + C (4, 4)]} =3 + [(0 + 0)] = 3
r (3, 4) = 4
Second, computing all C (i, j) such that j - i = 2; j = i + 2 and as 0 ≤ i ≤ 2; i = 0, 1 and 2; i < k ≤ j.
Start with i = 0; so j = 2; as i < k ≤ j, so the possible values for k = 1 and 2.
W (0, 2) = P (2) + Q (2) + W (0, 1) = 2 + 1 + 9 = 12
C (0, 2) = W (0, 2) + min {[C (0, 0) + C (1, 2)], [C (0, 1) + C (2, 2)]} = 12 + min {(0 + 6), (9 + 0)} = 12 + 6 = 18
r (0, 2) = 1
Next with i = 1; so j = 3; as i < k ≤ j, so the possible values for k = 2 and 3.
W (1, 3) = P (3) + Q (3) + W (1, 2) = 1 + 1 + 6 = 8
C (1, 3) = W (1, 3) + min {[C (1, 1) + C (2, 3)], [C (1, 2) + C (3, 3)]} = 8 + min {(0 + 3), (6 + 0)} = 8 + 3 = 11
r (1, 3) = 2
Next with i = 2; so j = 4; as i < k ≤ j, so the possible values for k = 3 and 4.
W (2, 4) = P (4) + Q (4) + W (2, 3) = 1 + 1 + 3 = 5
C (2, 4) = W (2, 4) + min {[C (2, 2) + C (3, 4)], [C (2, 3) + C (4, 4)]} = 5 + min {(0 + 3), (3 + 0)} = 5 + 3 = 8
r (2, 4) = 3
Third, Computing all C (i, j) such that j - i = 3; j = i + 3 and as 0 ≤ i ≤ 1; i = 0, 1; i < k ≤ j.
Start with i = 0; so j = 3; as i < k ≤ j, so the possible values for k = 1, 2 and 3.
W (0, 3) = P (3) + Q (3) + W (0, 2) = 1 + 1 + 12 = 14
C (0, 3) = W (0, 3) + min {[C (0, 0) + C (1, 3)], [C (0, 1) + C (2, 3)], [C (0, 2) + C (3, 3)]} = 14 + min {(0 + 11), (9 + 3), (18 + 0)} = 14 + 11 = 25
r (0, 3) = 1
Next with i = 1; so j = 4; as i < k ≤ j, so the possible values for k = 2, 3 and 4.
W (1, 4) = P (4) + Q (4) + W (1, 3) = 1 + 1 + 8 = 10
C (1, 4) = W (1, 4) + min {[C (1, 1) + C (2, 4)], [C (1, 2) + C (3, 4)], [C (1, 3) + C (4, 4)]} = 10 + min {(0 + 8), (6 + 3), (11 + 0)} = 10 + 8 = 18
r (1, 4) = 2
Fourth, Computing all C (i, j) such that j - i = 4; j = i + 4 and i = 0; i < k ≤ j.
Start with i = 0; so j = 4; as i < k ≤ j, so the possible values for k = 1, 2, 3 and 4.
W (0, 4) = P (4) + Q (4) + W (0, 3) = 1 + 1 + 14 = 16
C (0, 4) = W (0, 4) + min {[C (0, 0) + C (1, 4)], [C (0, 1) + C (2, 4)], [C (0, 2) + C (3, 4)], [C (0, 3) + C (4, 4)]} = 16 + min {(0 + 18), (9 + 8), (18 + 3), (25 + 0)} = 16 + 17 = 33
r (0, 4) = 2
From the table we see that C (0, 4) = 33 is the minimum cost of a binary search tree for
(a1, a2, a3, a4)
The root of the tree 'T04' is 'a2'.
Hence the left sub tree is 'T01' and the right sub tree is 'T24'. The root of 'T01' is 'a1' and the root of 'T24' is 'a3'.
The left and right sub trees of 'T01' are 'T00' and 'T11' respectively.
The left and right sub trees of 'T24' are 'T22' and 'T34' respectively, and the root of 'T34' is 'a4'.
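The computations in the two examples above follow the recurrence C (i, j) = W (i, j) + min over i < k ≤ j of {C (i, k-1) + C (k, j)}. A minimal Python sketch of this O(n³) DP (the function name and the use of 0-based lists indexed exactly like P (1 : n) and Q (0 : n) are my own):

```python
# Optimal binary search tree DP. p[1..n] are success weights, q[0..n] failure
# weights (the integer-scaled probabilities used in the worked examples above);
# p[0] is unused. Returns (minimum cost C(0, n), root index r(0, n)).
def obst(p, q, n):
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]              # weight of the empty tree T_ii; C(i, i) = 0
    for d in range(1, n + 1):       # d = j - i
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # try every root k with i < k <= j; min breaks ties toward smaller k
            best = min(range(i + 1, j + 1),
                       key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best - 1] + c[best][j]
            r[i][j] = best
    return c[0][n], r[0][n]
```

Example 1 (p = (3, 3, 1, 1), q = (2, 3, 1, 1, 1)) gives cost 32 with a2 as root, and Example 2 (p = (4, 2, 1, 1), q = (2, 3, 1, 1, 1)) gives cost 33, again rooted at a2, matching the tables above.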
0/1 – KNAPSACK
We are given n objects and a knapsack. Each object i has a positive weight wi and a positive profit pi. The knapsack can carry a weight not exceeding m. Fill the knapsack so that the total profit of the objects in the knapsack is maximized.
A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, . . . , xn. A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it. Let us assume that decisions on the xi are made in the order xn, xn-1, . . . , x1. Following a decision on xn, we may be in one of two possible states: the capacity remaining is m (xn = 0), or the capacity remaining is m - wn and a profit of pn has accrued (xn = 1). It is clear that the remaining decisions xn-1, . . . , x1 must be optimal with respect to the problem state resulting from the decision on xn. Otherwise, xn, . . . , x1 will not be optimal. Hence, the principle of optimality holds:
fn (m) = max {fn-1 (m), fn-1 (m - wn) + pn} -- (1)
Equation (1) generalizes to fi (y) = max {fi-1 (y), fi-1 (y - wi) + pi} -- (2). Equation (2) can be solved for fn (m) by beginning with the knowledge f0 (y) = 0 for all y ≥ 0 and fi (y) = -∞ for y < 0. Then f1, f2, . . . , fn can be successively computed.
When the wi's are integer, we need to compute fi (y) only for integer y, 0 ≤ y ≤ m. Since fi (y) = -∞ for y < 0, these function values need not be computed explicitly. Since each fi can be computed from fi-1 in Θ (m) time, it takes Θ (m n) time to compute fn. When the wi's are real numbers, fi (y) is needed for real numbers y such that 0 ≤ y ≤ m, so fi cannot be explicitly computed for all y in this range. Even when the wi's are integer, the explicit Θ (m n) computation of fn may not be the most efficient computation. So, we explore an alternative method for both cases.
The function fi (y) is an ascending step function; i.e., there are a finite number of y's, 0 = y1 < y2 < . . . < yk, such that fi (y1) < fi (y2) < . . . < fi (yk); fi (y) = -∞ for y < y1; fi (y) = fi (yk) for y ≥ yk; and fi (y) = fi (yj) for yj ≤ y < yj+1. So, we need to compute only fi (yj), 1 ≤ j ≤ k. We use the ordered set Si = {(fi (yj), yj) | 1 ≤ j ≤ k} to represent fi (y). Each member of Si is a pair (P, W), where P = fi (yj) and W = yj. Notice that S0 = {(0, 0)}. We can compute Si+1 from Si by first computing:
Si^1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}
Now, Si+1 can be computed by merging the pairs in Si and Si^1 together. Note that if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with the property that Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) can be discarded because of equation (2). Discarding or purging rules such as this one are also known as dominance rules. Dominated tuples get purged. In the above, (Pk, Wk) dominates (Pj, Wj).
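The Si merging-and-purging scheme above can be sketched in a few lines of Python (the function name and the sort-then-scan purge are my own choices; the pair sets themselves are exactly the Si described in the text):

```python
# Ordered-set 0/1 knapsack: each s is a list of (P, W) pairs sorted by weight,
# the shifted list plays the role of S^1_i, and dominated pairs are purged
# while merging, as the dominance rule above prescribes.
def knapsack_sets(profits, weights, capacity):
    s = [(0, 0)]                                   # S0 = {(0, 0)}
    for p, w in zip(profits, weights):
        shifted = [(P + p, W + w) for (P, W) in s if W + w <= capacity]
        merged = sorted(s + shifted, key=lambda t: (t[1], -t[0]))
        purged = []
        for P, W in merged:
            # keep (P, W) only if it beats every lighter-or-equal kept pair
            if not purged or P > purged[-1][0]:
                purged.append((P, W))
        s = purged
    return max(P for P, W in s)                    # best profit fn(m)
```

For example, with profits (1, 2, 5), weights (2, 3, 4) and capacity 6, the pair (3, 5) is purged because (5, 4) dominates it, and the best achievable profit is 6 (objects 1 and 3).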
Reliability Design
The problem is to design a system that is composed of several devices connected in series. Let ri be the reliability of device Di (that is, ri is the probability that device i will function properly); then the reliability of the entire system is ∏ ri. Even if the individual devices are very reliable (the ri's are very close to one), the reliability of the system may not be very good. For example, if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then ∏ ri = 0.904. Hence, it is desirable to duplicate devices. Multiple copies of the same device type are connected in parallel.
If stage i contains mi copies of device Di, then the probability that all mi have a malfunction is (1 - ri)^mi. Hence the reliability of stage i becomes 1 - (1 - ri)^mi.
The reliability of stage i is given by a function φi (mi).
Our problem is to use device duplication to maximize reliability. This maximization is to be carried out under a cost constraint. Let ci be the cost of each unit of device i and let c be the maximum allowable cost of the system being designed.
We wish to solve:
maximize ∏ (1 ≤ i ≤ n) φi (mi)
subject to Σ (1 ≤ i ≤ n) ci mi ≤ c
mi ≥ 1 and integer, 1 ≤ i ≤ n
Assume each ci > 0; then each mi must be in the range 1 ≤ mi ≤ ui, where
ui = ⌊(c + ci - Σ (1 ≤ j ≤ n) cj) / ci⌋
The upper bound ui follows from the observation that mj ≥ 1 for all j. An optimal solution m1, m2, . . . , mn is the result of a sequence of decisions, one decision for each mi.
Let fi (x) represent the maximum value of ∏ (1 ≤ j ≤ i) φj (mj) subject to the constraints Σ (1 ≤ j ≤ i) cj mj ≤ x and 1 ≤ mj ≤ uj, 1 ≤ j ≤ i.
The last decision made requires one to choose mn from {1, 2, 3, . . . , un}. Once a value of mn has been chosen, the remaining decisions must be such as to use the remaining funds c - cn mn in an optimal way.
The principle of optimality holds, so
fn (c) = max over 1 ≤ mn ≤ un of {φn (mn) fn-1 (c - cn mn)}
and, in general, for any fi (x), i ≥ 1,
fi (x) = max over 1 ≤ mi ≤ ui of {φi (mi) fi-1 (x - ci mi)}
Clearly, f0 (x) = 1 for all x, 0 ≤ x ≤ c, and fi (x) = -∞ for all x < 0.
There is at most one tuple for each different x that results from a sequence of decisions on m1, m2, . . . , mn. The dominance rule: (f1, x1) dominates (f2, x2) if f1 ≥ f2 and x1 ≤ x2. Hence, dominated tuples can be discarded from Si.
Example 1:
Design a three stage system with device types D1, D2 and D3. The costs are $30, $15 and
$20 respectively. The Cost of the system is to be no more than $105. The reliability of each
device is 0.9, 0.8 and 0.5 respectively.
Solution:
ui = ⌊(c + ci - (c1 + c2 + c3)) / ci⌋
We assume that if stage i has mi devices of type i in parallel, then φi (mi) = 1 - (1 - ri)^mi. Since we can assume each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where ui is given by the formula above.
Using the above equation compute u1, u2 and u3:
u1 = ⌊(105 + 30 - (30 + 15 + 20)) / 30⌋ = ⌊70 / 30⌋ = 2
u2 = ⌊(105 + 15 - (30 + 15 + 20)) / 15⌋ = ⌊55 / 15⌋ = 3
u3 = ⌊(105 + 20 - (30 + 15 + 20)) / 20⌋ = ⌊60 / 20⌋ = 3
S1 depends on u1; as u1 = 2, S1 = {S1_1, S1_2}. S2 depends on u2; as u2 = 3, S2 = {S2_1, S2_2, S2_3}. S3 depends on u3; as u3 = 3, S3 = {S3_1, S3_2, S3_3}.
f1 (x) = max {φ1 (1) f0 (x - 30), φ1 (2) f0 (x - 60)}, corresponding to m1 = 1 and m1 = 2 devices. Compute φ1 (1) and φ1 (2) using the formula φi (mi) = 1 - (1 - ri)^mi:
φ1 (1) = 1 - (1 - 0.9)^1 = 0.9
φ1 (2) = 1 - (1 - 0.9)^2 = 0.99
S1_1 = {(0.9, 30)}, S1_2 = {(0.99, 30 + 30)} = {(0.99, 60)}
Therefore, S1 = {(0.9, 30), (0.99, 60)}
Next find S2, i.e., f2 (x).
If Si contains two pairs (f1, x1) and (f2, x2) with the property that f1 ≥ f2 and x1 ≤ x2, then (f1, x1) dominates (f2, x2); hence, by the dominance rule, (f2, x2) can be discarded. Discarding or purging rules such as the one above are known as dominance rules. Dominating tuples will be present in Si, and dominated tuples have to be discarded from Si.
φ2 (1) = 1 - (1 - 0.8)^1 = 0.8, φ2 (2) = 1 - (0.2)^2 = 0.96, φ2 (3) = 1 - (0.2)^3 = 0.992
S2_1 = {(0.72, 45), (0.792, 75)}, S2_2 = {(0.864, 60), (0.9504, 90)}, S2_3 = {(0.8928, 75)}
After purging dominated tuples and tuples that leave insufficient funds for stage 3, S2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}.
φ3 (1) = 0.5, φ3 (2) = 0.75, φ3 (3) = 0.875
S3_1 = {(0.36, 65), (0.432, 80), (0.4464, 95)}
S3_2 = {(0.54, 85), (0.648, 100)}
S3_3 = {(0.63, 105), (0.756, 120), (0.7812, 135)}
If the cost exceeds 105, remove those tuples; hence S3_3 = {(0.63, 105)}.
The best design has a reliability of 0.648 and a cost of 100. Tracing back for the solution through the Si's we can determine that m3 = 2, m2 = 2 and m1 = 1.
Other Solution:
Since we can assume each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where:
ui = ⌊(c + ci - Σ (1 ≤ j ≤ n) cj) / ci⌋
Using the above equation compute u1, u2 and u3:
u1 = ⌊(105 + 30 - 65) / 30⌋ = ⌊70 / 30⌋ = 2
u2 = ⌊(105 + 15 - 65) / 15⌋ = ⌊55 / 15⌋ = 3
u3 = ⌊(105 + 20 - 65) / 20⌋ = ⌊60 / 20⌋ = 3
f3 (105) = max over 1 ≤ m3 ≤ u3 of {φ3 (m3) f2 (105 - 20 m3)}
= max {φ3 (1) f2 (85), φ3 (2) f2 (65), φ3 (3) f2 (45)}
= max {0.5 f2 (85), 0.75 f2 (65), 0.875 f2 (45)}
f2 (85) = max over 1 ≤ m2 ≤ u2 of {φ2 (m2) f1 (85 - 15 m2)} = max {0.8 f1 (70), 0.96 f1 (55), 0.992 f1 (40)}
f1 (70) = max {φ1 (1) f0 (40), φ1 (2) f0 (10)} = max {0.9, 0.99} = 0.99
f1 (55) = max {φ1 (1) f0 (25), φ1 (2) f0 (-5)} = max {0.9, -∞} = 0.9, and similarly f1 (40) = 0.9
so f2 (85) = max {0.8 × 0.99, 0.96 × 0.9, 0.992 × 0.9} = 0.8928
f2 (65) = max {0.8 f1 (50), 0.96 f1 (35), 0.992 f1 (20)}
f1 (50) = max {φ1 (1) f0 (20), φ1 (2) f0 (-10)} = max {0.9, -∞} = 0.9
f1 (35) = max {φ1 (1) f0 (5), φ1 (2) f0 (-25)} = max {0.9, -∞} = 0.9
f1 (20) = max {φ1 (1) f0 (-10), φ1 (2) f0 (-40)} = max {-∞, -∞} = -∞
so f2 (65) = max {0.8 × 0.9, 0.96 × 0.9, -∞} = 0.864
f2 (45) = max {0.8 f1 (30), 0.96 f1 (15), 0.992 f1 (0)}, with f1 (30) = 0.9 and f1 (15) = f1 (0) = -∞, so f2 (45) = 0.8 × 0.9 = 0.72
Therefore f3 (105) = max {0.5 × 0.8928, 0.75 × 0.864, 0.875 × 0.72} = max {0.4464, 0.648, 0.63} = 0.648, attained with m3 = 2, m2 = 2 and m1 = 1.
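The tuple-set computation for this example can be checked with a short DP over (reliability, cost) pairs using the dominance rule; a sketch (the function name and list representation are my own):

```python
# Reliability design DP: s is the current set S_i of (reliability f, cost x)
# tuples; dominated tuples and tuples that leave insufficient funds for the
# remaining stages are purged, as in the worked example above.
def reliability_design(r, c, budget):
    n = len(r)
    total = sum(c)
    u = [(budget + c[i] - total) // c[i] for i in range(n)]   # upper bounds ui
    s = [(1.0, 0)]                        # S0: f0(x) = 1 at cost 0
    for i in range(n):
        candidates = []
        for m in range(1, u[i] + 1):
            phi = 1 - (1 - r[i]) ** m     # stage reliability with m copies
            for f, x in s:
                cost = x + c[i] * m
                # keep enough budget for at least one copy of each later stage
                if cost + sum(c[i + 1:]) <= budget:
                    candidates.append((f * phi, cost))
        candidates.sort(key=lambda t: (t[1], -t[0]))
        s = []
        for f, x in candidates:
            if not s or f > s[-1][0]:     # purge dominated tuples
                s.append((f, x))
    return max(s)                         # best (reliability, cost)
```

With r = (0.9, 0.8, 0.5), c = (30, 15, 20) and budget 105, this reproduces the best design above: reliability 0.648 at cost 100.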
UNIT IV:
Backtracking: General method, applications-n-queen problem, sum of subsets problem,
graph coloring, Hamiltonian cycles.
Branch and Bound: General method, applications - Travelling sales person problem,0/1
knapsack problem- LC Branch and Bound solution, FIFO Branch and Bound solution.
Backtracking is a methodical (logical) way of trying out various sequences of decisions until you find one that "works".
Example 1: a maze (a tour puzzle).
mi 🡪 size of set Si
Then m = m1 m2 . . . mn is the number of n-tuples that are possible candidates for satisfying the function P.
The brute force approach would be to form all these n-tuples, evaluate (judge) each one with P and save those which yield the optimum.
By using the backtrack algorithm we can yield the same answer with far fewer than m trials.
Many of the problems we solve using backtracking requires that all the solutions satisfy
a complex set of constraints.
For any problem these constraints can be divided into two categories:
⮚ Explicit constraints.
⮚ Implicit constraints.
Explicit constraints: Explicit constraints are rules that restrict each xi to take on values only
from a given set.
Example: xi ≥ 0 or Si = {all non-negative real numbers}
xi = 0 or 1 or Si = {0, 1}
li ≤ xi ≤ ui or Si = {a : li ≤ a ≤ ui}
The explicit constraints depend on the particular instance I of the problem being solved.
All tuples that satisfy the explicit constraints define a possible solution space for I.
Implicit Constraints:
The implicit constraints are rules that determine which of the tuples in the solution space of I
satisfy the criterion function. Thus implicit constraints describe the way in which the Xi
must relate to each other.
Applications of Backtracking:
⮚ N Queens Problem
⮚ Sum of subsets problem
⮚ Graph coloring
⮚ Hamiltonian cycles.
N-Queens Problem:
It is a classic combinatorial problem. The eight queens puzzle is the problem of placing eight queens on an 8×8 chessboard so that no two queens attack each other; that is, so that no two of them are on the same row, column, or diagonal.
The 8-queens puzzle is an example of the more general n-queens problem of placing n queens on an n×n chessboard.
Let xi denote the column of the queen placed in row i; then Si 🡪 {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8.
Therefore the solution space consists of 8^8 8-tuples.
The implicit constraints for this problem are that no two xi's can be the same (no two queens can be in the same column) and no two queens can be on the same diagonal.
By these two constraints the size of the solution space reduces from 8^8 tuples to 8! tuples.
For example, (x1, . . . , x8) = (4, 6, 8, 2, 7, 1, 3, 5) is one such solution.
In the same way, when n queens are to be placed on an n×n chessboard, the solution space consists of all n! permutations of the n-tuple (1, 2, . . . , n).
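The row-by-row placement with the column and diagonal checks can be sketched as a short backtracking routine (function and variable names are my own; x[i] is the column of the queen in row i, as in the tuple formulation above):

```python
# Backtracking for n-queens; returns all solutions as 1-based column tuples.
def n_queens(n):
    solutions = []
    x = [0] * n
    def place(k, col):
        # queen k may take 'col' if no earlier queen shares it or a diagonal
        return all(x[i] != col and abs(x[i] - col) != k - i for i in range(k))
    def solve(k):
        if k == n:
            solutions.append(tuple(c + 1 for c in x))   # 1-based, as in the notes
            return
        for col in range(n):
            if place(k, col):
                x[k] = col
                solve(k + 1)
    solve(0)
    return solutions
```

For n = 4 there are 2 solutions, for n = 8 there are 92, and (4, 6, 8, 2, 7, 1, 3, 5) from the text is among them.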
Sum of Subsets Problem:
Given positive numbers wi, 1 ≤ i ≤ n, and m, find all subsets of the wi whose sums are m.
For example, if n = 4, (w1, w2, w3, w4) = (11, 13, 24, 7) and m = 31, then the desired subsets are (11, 13, 7) and (24, 7).
The two solutions are described by the index vectors (1, 2, 4) and (3, 4).
In general all solutions are k-tuples (x1, x2, . . . , xk), 1 ≤ k ≤ n, and different solutions may have different sized tuples.
The bounding test used is: x1, x2, . . . , xk cannot lead to an answer node unless s + r - w[k] ≥ m and s + w[k+1] ≤ m, where s is the sum of the elements already chosen and r is the sum of the remaining elements.
Algorithm SumOfSub (s, k, r)
// s = w[1]·x[1] + . . . + w[k-1]·x[k-1]; r = w[k] + . . . + w[n]
{
x[k] := 1;
if (s + w[k] = m) then write (x[1 : k]); // subset found
else if (s + w[k] + w[k+1] ≤ m) then
SumOfSub (s + w[k], k + 1, r - w[k]);
if ((s + r - w[k] ≥ m) and (s + w[k+1] ≤ m)) then
{
x[k] := 0;
SumOfSub (s, k + 1, r - w[k]);
}
}
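A runnable Python version of the SumOfSub algorithm above (the wrapper, boundary guards for the last element, and solution collection are my own additions; the weights must be in nondecreasing order for the bounding tests, so the wrapper sorts them):

```python
# Backtracking sum of subsets: s = sum chosen so far, r = sum of w[k..n-1].
def sum_of_subsets(w, m):
    w = sorted(w)
    solutions = []
    x = [0] * len(w)
    def sum_of_sub(s, k, r):
        x[k] = 1
        if s + w[k] == m:
            solutions.append([w[i] for i in range(k + 1) if x[i] == 1])
        elif k + 1 < len(w) and s + w[k] + w[k + 1] <= m:
            sum_of_sub(s + w[k], k + 1, r - w[k])
        # try x[k] = 0 only if the remaining elements can still reach m
        if s + r - w[k] >= m and k + 1 < len(w) and s + w[k + 1] <= m:
            x[k] = 0
            sum_of_sub(s, k + 1, r - w[k])
    sum_of_sub(0, 0, sum(w))
    return solutions
```

With w = (11, 13, 24, 7) and m = 31 this finds exactly the two subsets from the example, {7, 11, 13} and {7, 24}.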
Graph Coloring:
Let G be an undirected graph and m a given positive integer. The graph coloring problem is to assign colors to the vertices of G such that no two adjacent vertices are assigned the same color, yet only m colors are used.
The optimization version calls for coloring a graph using the minimum number of colors.
The decision version, known as k-coloring, asks whether a graph is colorable using at most k colors.
Note that if d is the degree of the given graph, then it can be colored with d + 1 colors.
The m-colorability optimization problem asks for the smallest integer m for which the graph G can be colored. This integer is referred to as the "chromatic number" of the graph.
Example: (map of regions; figure omitted)
The above map requires 4 colors.
⮚ For many years it was known that 5 colors suffice to color any map, but no map requiring more than 4 colors had ever been found.
⮚ After more than a hundred years, this problem was solved by a group of mathematicians with the help of a computer. They showed that 4 colors are sufficient.
Suppose we represent a graph by its adjacency matrix G[1 : n, 1 : n], where G[i, j] = 1 if (i, j) is an edge of G and G[i, j] = 0 otherwise.
Ex: m = 3 🡪 colors (the example graph and its adjacency matrix figure are omitted in the source).
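The backtracking m-coloring scheme over an adjacency matrix can be sketched as follows (function names are my own; the graph is a 0-indexed list-of-lists version of G[1 : n, 1 : n]):

```python
# Backtracking m-coloring: x[k] is the color (1..m) of vertex k, 0 = uncolored.
# Returns every proper coloring of the graph with at most m colors.
def m_coloring(graph, m):
    n = len(graph)
    x = [0] * n
    solutions = []
    def solve(k):
        if k == n:
            solutions.append(list(x))
            return
        for color in range(1, m + 1):
            # a color is usable if no adjacent, already-colored vertex has it
            if all(not (graph[k][j] and x[j] == color) for j in range(n)):
                x[k] = color
                solve(k + 1)
                x[k] = 0
    solve(0)
    return solutions
```

For instance, a 4-cycle has 18 proper 3-colorings, while a triangle has none with 2 colors and 6 with 3.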
Hamiltonian Cycles:
⮚ Def: Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting position.
⮚ It is also called a Hamiltonian circuit.
⮚ A Hamiltonian circuit is a graph cycle (i.e., a closed loop) through a graph that visits each node exactly once.
⮚ A graph possessing a Hamiltonian cycle is said to be a Hamiltonian graph.
Example: (graph g1; figure omitted)
The above graph contains the Hamiltonian cycle: 1, 2, 8, 7, 6, 5, 4, 3, 1.
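A backtracking search for Hamiltonian cycles can be sketched as below (the example graph g1 from the notes did not survive extraction, so the usage examples assume small standard graphs instead; the adjacency-set representation is my own choice):

```python
# Backtracking for Hamiltonian cycles: extend a partial path from vertex 1 and
# record every path of length n that has an edge back to the start.
def hamiltonian_cycles(adj):
    n = len(adj)
    cycles = []
    def extend(path):
        if len(path) == n:
            if path[0] in adj[path[-1]]:        # edge closing the cycle
                cycles.append(path + [path[0]])
            return
        for v in sorted(adj[path[-1]]):
            if v not in path:                   # visit each vertex once
                extend(path + [v])
    extend([1])
    return cycles
```

On the complete graph K4 this finds 6 cycles starting at vertex 1 (each tour in both directions); on a 4-cycle it finds exactly the 2 orientations of the cycle.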
✔ BFS 🡪 a BFS-like state space search will be called FIFO (First In First Out) search, as the list of live nodes is a first-in-first-out list (or queue).
✔ D-search (DFS) 🡪 a DFS-like state space search will be called LIFO (Last In First Out) search, as the list of live nodes is a last-in-first-out list (or stack).
In backtracking, bounding functions are used to help avoid the generation of sub-trees that do not contain an answer node.
We will use 3 types of search strategies in branch and bound:
1) FIFO (First In First Out) search
2) LIFO (Last In First Out) search
3) LC (Least Cost) search
FIFO B&B:
FIFO Branch & Bound is a BFS.
In this, children of E-Node (or Live nodes) are inserted in a queue.
Implementation of list of live nodes as a queue
✔ Delete() 🡪 removes the head of the queue
Assume that node '12' is an answer node. In FIFO search, first we take the E-node to be node '1'.
LIFO B&B:
LIFO Branch & Bound is a D-search (or DFS).
In this, children of the E-node (live nodes) are inserted in a stack.
Implementation of the list of live nodes as a stack:
✔ Delete() 🡪 removes the top of the stack
The search for an answer node can often be speeded up by using an "intelligent" ranking function. It is also called an approximate cost function Ĉ.
The expanded node (E-node) is the live node with the best Ĉ value.
Branching: a set of solutions, which is represented by a node, can be partitioned into mutually exclusive (disjoint) sets. Each subset in the partition is represented by a child of the original node.
Lower bounding: An algorithm is available for calculating a lower bound on the cost of any
solution in a given subset.
Example:
8-puzzle
Cost function: Ĉ = g(x) +h(x)
where h(x) = the number of misplaced tiles
and g(x) = the number of moves so far
Assumption: move one tile in any direction cost 1.
Note: In case of tie, choose the leftmost node.
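The cost function Ĉ = g(x) + h(x) above can drive a least-cost search directly. A sketch using a priority queue (a heap, rather than the leftmost tie-breaking rule in the notes; the goal layout and function names are my own assumptions, with 0 denoting the blank):

```python
# LC-search for the 8-puzzle with c_hat = g(x) + h(x):
# h = number of misplaced tiles, g = number of moves made so far.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state):
    return sum(1 for s, g in zip(state, GOAL) if s != g and s != 0)

def neighbours(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]        # slide a tile into the blank
            yield tuple(s)

def lc_search(start):
    heap = [(misplaced(start), 0, start)]  # (c_hat, g, state)
    seen = set()
    while heap:
        c_hat, g, state = heapq.heappop(heap)
        if state == GOAL:
            return g                       # number of moves used
        if state in seen:
            continue
        seen.add(state)
        for nxt in neighbours(state):
            if nxt not in seen:
                heapq.heappush(heap, (g + 1 + misplaced(nxt), g + 1, nxt))
    return -1
```

Since the misplaced-tiles heuristic never overestimates, the first time the goal is popped, g is the minimum number of moves.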
Travelling Salesman Problem:
Def: find a tour of minimum cost starting from a node S, going through all other nodes only once, and returning to the starting point S.
The time complexity of TSP for the dynamic programming algorithm is O(n^2 2^n).
For B&B algorithms for this problem, the worst case complexity will not be any better than O(n^2 2^n), but good bounding functions will enable these B&B algorithms to solve some problem instances in much less time than required by the dynamic programming algorithm.
Let G = (V, E) be a directed graph defining an instance of TSP.
Let Cij🡪 cost of edge <i, j>
State space tree for the travelling salesperson problem with n=4 and i0=i4=1
The above diagram shows tree organization of a complete graph with |V|=4.
Each leaf node ‘L’ is a solution node and represents the tour defined by the path from the root
to L.
Cost C(·) 🡪 the solution node with least C(·) corresponds to a shortest tour in G.
C(A) = { length of the tour defined by the path from the root to A, if A is a leaf;
cost of a minimum-cost leaf in the sub-tree rooted at A, if A is not a leaf }
A simple lower bound Ĉ(r) ≤ C(r) is the length of the path defined at node r.
From the previous example, the path defined at node 6 is i0, i1, i2 = 1, 2, 4 and it consists of the edges <1, 2> and <2, 4>.
A better Ĉ(r) can be obtained by using the reduced cost matrix corresponding to G.
A row (column) is said to be reduced iff it contains at least one zero & remaining entries are
non negative.
A matrix is reduced iff every row & column is reduced.
Reduced Matrix: To get the lower bound of the path starting at node 1
Row #1: reduce by 10
Row #2: reduce by 2
Row #3: reduce by 2
(the remaining rows and columns are reduced similarly)
The reduced cost is: RCL = 25
So the cost of node 1 is: Cost (1) = 25
The reduced matrix is: (matrix omitted in the source)
Choose to go to vertex 5: Node 5
- Remember that the cost matrix is the one that was reduced at starting vertex 1
- Cost of edge <1,5> is: A(1,5) = 1
- Set row #1 = inf since we are starting from node 1
- Set column # 5 = inf since we are choosing edge <1,5>
- Set A(5,1) = inf
- The resulting cost matrix is:
Reduce column # 1: by 11
Choose to go to vertex 5: Node 10 ( path is 1->4->2->5 )
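The reduction step used repeatedly above can be sketched as follows. The 5-city cost matrix in the usage example is an assumption (the matrix itself did not survive extraction), chosen because it reproduces the row reductions 10, 2, 2, the total RCL = 25, and A(1, 5) = 1 after reduction, exactly as in the walkthrough:

```python
# Cost-matrix reduction for the TSP lower bound: subtract each row's minimum,
# then each column's minimum; the total subtracted is the bound. INF marks
# forbidden entries such as the diagonal.
INF = float('inf')

def reduce_matrix(a):
    n = len(a)
    total = 0
    for i in range(n):                          # row reduction
        m = min(a[i])
        if 0 < m < INF:
            total += m
            a[i] = [x - m for x in a[i]]        # INF - m stays INF
    for j in range(n):                          # column reduction
        m = min(a[i][j] for i in range(n))
        if 0 < m < INF:
            total += m
            for i in range(n):
                a[i][j] -= m
    return total
```

After branching, the same routine is reapplied to the matrix with the chosen row, column and reverse edge set to INF.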
xi = 0 or 1, 1 ≤ i ≤ n
Define two functions ĉ(x) and u(x) such that for every node x,
ĉ(x) ≤ c(x) ≤ u(x)
Computing ĉ(·) and u(·).
Basic concepts:
NP 🡪 Nondeterministic Polynomial time
The best algorithms for the solutions of problems have computing times that cluster into two groups:
Group 1: problems with solution times bounded by a polynomial of a small degree.
Group 2: problems with solution times not bounded by a polynomial (simply non-polynomial).
No one has been able to develop a polynomial time algorithm for any problem in the 2nd group (i.e., group 2).
So it is important to keep looking for polynomial time algorithms, because non-polynomial computing times grow so quickly that even moderate-size problems cannot be solved in a reasonable amount of time.
Theory of NP-Completeness:
The theory of NP-completeness shows that many of the problems with no known polynomial time algorithms are computationally related. These problems fall into two classes:
1. NP-Hard
2. NP-Complete
DESIGN AND ANALYSIS OF ALGORITHMS (UNIT-VIII)
NP-Hard: if an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems can be solved in polynomial time.
All NP-Complete problems are NP-Hard, but some NP-Hard problems are not known to be NP-Complete.
Nondeterministic Algorithms:
Algorithms with the property that the result of every operation is uniquely defined are termed
as deterministic algorithms. Such algorithms agree with the way programs are executed on a
computer.
Algorithms which contain operations whose outcomes are not uniquely defined but are limited to a specified set of possibilities are called nondeterministic algorithms.
The machine executing such operations is allowed to choose any one of these outcomes subject to a termination condition to be defined later.
Example: a nondeterministic algorithm for the knapsack decision problem (is there a 0/1 assignment with total weight at most m and total profit at least r?):
Algorithm DKP (p, w, n, m, r, x)
{
W := 0; P := 0;
for i := 1 to n do
{
x[i] := Choice (0, 1);
W := W + x[i] * w[i]; P := P + x[i] * p[i];
}
if ((W > m) or (P < r)) then Failure();
else Success();
}
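One way to see what such a nondeterministic algorithm computes is to simulate every possible outcome of Choice deterministically. A brute-force sketch of the knapsack decision question (exponential in n, as expected; the function name is my own):

```python
# Simulating the nondeterministic knapsack decision algorithm: Choice(0, 1)
# for each x[i] is replaced by enumerating all 2^n outcomes. Returning True
# corresponds to some computation path reaching Success().
from itertools import product

def dkp(p, w, m, r):
    n = len(p)
    for x in product((0, 1), repeat=n):     # all outcomes of the n Choices
        W = sum(xi * wi for xi, wi in zip(x, w))
        P = sum(xi * pi for xi, pi in zip(x, p))
        if W <= m and P >= r:
            return True                     # Success()
    return False                            # Failure() on every path
```

For profits (1, 2, 5) and weights (2, 3, 4) with m = 6, profit r = 6 is achievable but r = 7 is not, matching the maximum profit of 6 for this instance.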
P is the set of all decision problems solvable by deterministic algorithms in polynomial time.
NP is the set of all decision problems solvable by nondeterministic algorithms in polynomial time.
The most famous unsolved problem in computer science is whether P = NP or P ≠ NP.
In considering this problem, S. Cook formulated the following question: is there any single problem in NP such that, if we showed it to be in P, then that would imply that P = NP?
Notion of Reducibility:
Let L1 and L2 be problems. Problem L1 reduces to L2 (written L1 α L2) iff there is a way to solve L1 by a deterministic polynomial time algorithm using a deterministic algorithm that solves L2 in polynomial time.
This implies that, if we have a polynomial time algorithm for L2, then we can solve L1 in polynomial time.
> If the length of I is n and the time complexity of A is p(n) for some polynomial p(), then the length of Q is O(p^3(n) log n) = O(p^4(n)). The time needed to construct Q is also O(p^3(n) log n).
> A deterministic algorithm Z determines the outcome of A on any input I: algorithm Z computes Q and then uses a deterministic algorithm for the satisfiability problem to determine whether Q is satisfiable.
> If O(q(m)) is the time needed to determine whether a formula of length m is satisfiable, then the complexity of Z is O(p^3(n) log n + q(p^3(n) log n)).
> If satisfiability is in P, then q(m) is a polynomial function of m and the complexity of Z becomes O(r(n)) for some polynomial r().
> Hence, if satisfiability is in P, then for every nondeterministic algorithm A in NP we can obtain a deterministic algorithm Z in P.
This shows that if satisfiability is in P, then P = NP.