DAA - Unit 4 (PG)

The document discusses dynamic programming as a method for solving complex problems by breaking them into simpler subproblems and storing their solutions to avoid redundant calculations. It covers key concepts such as optimal substructure, approaches (top-down and bottom-up), and specific applications like multistage graphs and the Fibonacci sequence. The document also contrasts dynamic programming with the divide and conquer method, highlighting its efficiency in solving optimization problems.


DR. R. K. SHANMUGAM COLLEGE OF ARTS AND SCIENCE
INDILI, KALLAKURICHI 606 213

DEPARTMENT OF COMPUTER SCIENCE
CLASS: I M. Sc., COMPUTER SCIENCE
SEMESTER: 1

ANALYSIS & DESIGN OF ALGORITHMS
(23PCSCC11)
UNIT – 4
DYNAMIC PROGRAMMING
 General Method
 Multistage Graphs
 All Pair Shortest Path
 Optimal Binary Search Trees
 0/1 Knapsacks
 Traveling Salesman Problem
 Flow Shop Scheduling
ANALYSIS & DESIGN OF ALGORITHMS (23PCSCC11) UNIT – 4
I M. Sc., Computer Science Semester: 1
DYNAMIC PROGRAMMING
Dynamic programming is a technique that breaks a problem into sub-problems
and saves their results for future use so that we do not need to compute a result
again. The property that optimal solutions to the subproblems combine into an
optimal solution of the overall problem is known as the optimal substructure
property. The main use of dynamic programming is to solve optimization problems,
i.e., problems where we are trying to find the minimum or the maximum solution.
Dynamic programming is guaranteed to find an optimal solution of a problem if one
exists.
The definition of dynamic programming says that it is a technique for solving
a complex problem by first breaking it into a collection of simpler subproblems, solving
each subproblem just once, and then storing their solutions to avoid repetitive
computations.
Example: the Fibonacci series
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
The numbers in the above series are not randomly chosen. Mathematically, we
could write each of the terms using the below formula:
F(n) = F(n-1) + F(n-2),
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we
follow the above relationship. For example, F(2) is the sum of F(0) and F(1), which is
equal to 1.
How can we calculate F(20)?
The F(20) term is calculated by applying the same recurrence repeatedly. The
below figure shows how F(20) is calculated.

DR. R. K. SHANMUGAM COLLEGE OF ARTS AND SCIENCE, KALLAKURICHI



As we can observe in the above figure, F(20) is calculated as the sum of
F(19) and F(18). In the dynamic programming approach, we try to divide the
problem into similar subproblems; here F(20) is divided into the similar
subproblems F(19) and F(18). If we recall the definition of dynamic programming,
it says that a similar subproblem should not be computed more than once. Still, in
the above expansion subproblems are calculated repeatedly: F(18) is calculated two
times, and similarly F(17) is also calculated twice. The technique is useful precisely
because it solves such overlapping subproblems, but we must be careful to store the
result of each subproblem the first time it is computed; recomputing results we have
already obtained leads to a wastage of resources.

Working of Dynamic Programming Method:
The following are the steps that dynamic programming follows:
 It breaks down the complex problem into simpler subproblems.
 It finds the optimal solution to these sub-problems.
 It stores the results of the subproblems. The process of storing the results of
subproblems is known as memoization.
 It reuses them so that the same sub-problem is not calculated more than once.
 Finally, it calculates the result of the complex problem.
Approaches of dynamic programming
There are two approaches to dynamic programming:
 Top-down approach
 Bottom-up approach
Top-down approach
The top-down approach follows the memoization technique, while the bottom-up
approach follows the tabulation method. Here memoization is equal to the sum of
recursion and caching: recursion means the function calling itself, while caching
means storing the intermediate results.
Bottom-Up approach
The bottom-up approach is another technique that can be used to
implement dynamic programming. It uses the tabulation technique to implement
the dynamic programming approach. It solves the same kinds of problems but
removes the recursion. With no recursion, there is no risk of stack overflow and
no overhead from recursive function calls. In this tabulation technique, we solve the
problems and store the results in a matrix.
There are two different ways to store the values so that the values of
a problem can be reused. Here, we will discuss two patterns of solving dynamic
programming problems:

 Tabulation Method: Bottom Up
 Memoization Method: Top Down
Tabulation Method – Bottom Up Dynamic Programming
As the name itself suggests, we start from the bottom and accumulate answers
up to the top. Let's discuss it in terms of state transitions. Let's describe a state for our DP
problem to be dp[x], with dp[0] as the base state and dp[n] as our destination state. So, we
need to find the value of the destination state, i.e., dp[n]. If we start our transition from our
base state dp[0] and follow our state transition relation to reach our destination
state dp[n], we call it the Bottom-Up approach, as it is quite clear that we started our
transition from the bottom base state and reached the topmost desired state.
// Tabulated version to find factorial of n.
int dp[MAXN];
// base case
dp[0] = 1;
for (int i = 1; i <= n; i++)
{
    dp[i] = dp[i-1] * i;
}
The above code clearly follows the bottom-up approach: it starts its transition from
the bottom-most base case dp[0] and reaches its destination state dp[n]. Here, we
may notice that the dp table is populated sequentially and we directly access the
calculated states from the table itself; hence, we call it the tabulation method.
Memoization Method – Top Down Dynamic Programming
Once again, let's describe it in terms of state transitions. If we need to find the
value of some state, say dp[n], and instead of starting from the base state dp[0]
we ask for the answers of the states that can reach the destination state dp[n] following
the state transition relation, then it is the top-down fashion of DP. Here, we start our
journey from the topmost destination state and compute its answer by taking into
account the values of the states that can reach the destination state, till we reach the
bottom-most base state. Once again, let's write the code for the factorial problem in
the top-down fashion.
// Memoized version to find factorial of x.
// dp[] is assumed to be initialized to -1 before the first call.
int dp[MAXN];
// return x!
int solve(int x)
{
    if (x == 0)
        return 1;
    if (dp[x] != -1)
        return dp[x];
    return (dp[x] = x * solve(x-1));
}

Worked example for Dynamic Programming


Program for Fibonacci numbers
The Fibonacci numbers are the numbers in the following integer sequence.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ……..

In mathematical terms, the sequence Fn of Fibonacci numbers is defined by
the recurrence relation
Fn = Fn-1 + Fn-2 with seed values F0 = 0 and F1 = 1.
Find the value of F8?
F0 = 0
F1 = 1
F2 = F2-1 + F2-2 = F1 + F0 = 1 + 0 = 1
F3 = F3-1 + F3-2 = F2 + F1 = 1 + 1 = 2
F4 = F4-1 + F4-2 = F3 + F2 = 2 + 1 = 3
F5 = F5-1 + F5-2 = F4 + F3 = 3 + 2 = 5
F6 = F6-1 + F6-2 = F5 + F4 = 5 + 3 = 8
F7 = F7-1 + F7-2 = F6 + F5 = 8 + 5 = 13
F8 = F8-1 + F8-2 = F7 + F6 = 13 + 8 = 21
Divide & Conquer Vs Dynamic Programming

Divide & Conquer Method:
1. It involves three steps at each level of recursion: divide the problem into a
number of subproblems; conquer the subproblems by solving them recursively;
combine the solutions to the subproblems into the solution for the original problem.
2. It is recursive.
3. It does more work on subproblems and hence has more time consumption.
4. In this, subproblems are independent of each other.

Dynamic Programming Method:
1. It involves a sequence of four steps: characterize the structure of optimal
solutions; recursively define the values of optimal solutions; compute the value of
optimal solutions; construct an optimal solution from computed information.
2. It is non-recursive.
3. It solves each subproblem only once and then stores the result in a table.
4. In this, subproblems are interdependent.

MULTISTAGE GRAPHS
A multistage graph is a directed, weighted graph in which the nodes can be
divided into a set of stages such that all edges go from one stage to the next stage only
(in other words, there is no edge between vertices of the same stage, and no edge from
a vertex of the current stage back to a previous stage).
The vertices of a multistage graph are divided into n disjoint subsets
S = { S1, S2, S3, …, Sn }, where S1 contains the source and Sn contains the sink
(destination). The cardinality of S1 and Sn is 1, i.e., |S1| = |Sn| = 1. Given a
multistage graph, a source and a destination, we need to find the shortest
path from source to destination. By convention, we consider the source to be at stage 1
and the destination at the last stage.

Now there are various strategies we can apply :-


 The Brute force method of finding all possible paths between Source and
Destination and then finding the minimum. That’s the WORST possible
strategy.

 Dijkstra’s Algorithm of Single Source shortest paths. This method will find
shortest paths from source to all other nodes which is not required in this case.
So it will take a lot of time and it doesn’t even use the SPECIAL feature that
this MULTI-STAGE graph has.
 Simple Greedy Method – at each node, choose the shortest outgoing edge. If
we apply this approach to the example graph given above we get the solution
1 + 4 + 18 = 23. But a quick look at the graph shows that paths much shorter
than 23 are available. So the greedy method fails!
 The best option is Dynamic Programming. So we need to find the optimal sub-
structure, recursive equations and overlapping sub-problems.
Optimal Substructure and Recursive Equation:
We define the notation M(x, y) as the minimum cost to T (the target node) from stage
x, node y.
The shortest distance from stage 1, node 0 to the destination, i.e., node 7, is M(1, 0).
// From 0, we can go to 1 or 2 or 3 to reach 7.
M(1, 0) = min(1 + M(2, 1),
2 + M(2, 2),
5 + M(2, 3))
This means that our problem 0 —> 7 is now sub-divided into 3 sub-problems.
So if we have a total of n stages and target T, then the stopping condition is:
M(n-1, i) = cost of the edge i —> T, with M(n, T) = 0
Dynamic Programming solution for multistage graph:
Let path(i, j) be some specification of the minimal path from vertex j in stage i to vertex
t; C(i, j) is the cost of this path; c(j, t) is the weight of the edge from j to t.
C(i, j) = min { c(j, l) + C(i+1, l) }, over all l in Vi+1 with (j, l) in E
To write a simple algorithm, assign numbers to the vertices so that those in stage Vi
have lower numbers than those in stage Vi+1.

int[] MStageForward(Graph G)
{
    // returns vector of vertices to follow through the graph
    // let c[i][j] be the cost matrix of G
    int n = G.n;  // number of nodes
    int k = G.k;  // number of stages
    float[] C = new float[n];
    int[] D = new int[n];
    int[] P = new int[k];
    for (i = 1 to n) C[i] = 0.0;
    for (j = n-1 to 1 by -1)
    {
        r = vertex such that (j, r) in G.E and c(j, r) + C(r) is minimum;
        C[j] = c(j, r) + C(r);
        D[j] = r;
    }
    P[1] = 1; P[k] = n;
    for (j = 2 to k-1)
        P[j] = D[P[j-1]];
    return P;
}

Try it yourself:-
Solve the following multistage graph using dynamic approach

ALL PAIR SHORTEST PATH
It aims to figure out the shortest path from each vertex v to every other vertex u.
Storing all the paths explicitly can be very memory-expensive indeed, as we need one
spanning tree for each vertex. This is often impractical regarding memory
consumption, so this is generally treated as the all-pairs shortest distance
problem, which aims to find just the distance from each node to every other node. We
usually want the output in tabular form: the entry in u's row and v's column should
be the weight of the shortest path from u to v.
The all-pairs shortest path algorithm, also known as the Floyd-Warshall
algorithm, is used to solve the all-pairs shortest path problem for a given weighted graph.
As a result, this algorithm generates a matrix which represents the
minimum distance from any node to all other nodes in the graph.
Floyd – Warshall Algorithm
Let the vertices of G be V = {1, 2, …, n} and consider a subset {1, 2, …, k} of
vertices for some k. For any pair of vertices i, j ∈ V, consider all paths from i to j
whose intermediate vertices are all drawn from {1, 2, …, k}, and let p be a minimum-
weight path amongst them. The Floyd-Warshall algorithm exploits a link
between path p and shortest paths from i to j with all intermediate vertices in the set
{1, 2, …, k-1}. The link depends on whether or not k is an intermediate vertex of
path p.
If k is not an intermediate vertex of path p, then all intermediate vertices of
path p are in the set {1, 2........k-1}. Thus, the shortest path from vertex i to vertex j
with all intermediate vertices in the set {1, 2.......k-1} is also the shortest path i to j
with all intermediate vertices in the set {1, 2.......k}.
If k is an intermediate vertex of path p, then we break p down into i → k → j.
Let dij(k) be the weight of the shortest path from vertex i to vertex j with all
intermediate vertices in the set {1, 2.......k}.
A recursive definition is given by
dij(k) = wij                                          if k = 0
dij(k) = min( dij(k-1), dik(k-1) + dkj(k-1) )         if k ≥ 1


Algorithm – Floyd Warshall
FLOYD-WARSHALL (W)
1  n ← rows [W]
2  D(0) ← W
3  for k ← 1 to n
4      do for i ← 1 to n
5          do for j ← 1 to n
6              do dij(k) ← min (dij(k-1), dik(k-1) + dkj(k-1))
7  return D(n)
The strategy adopted by the Floyd-Warshall algorithm is Dynamic
Programming. The running time of the Floyd-Warshall algorithm is determined by
the triply nested for loops of lines 3-6. Each execution of line 6 takes O(1) time. The
algorithm thus runs in time θ(n³).
Example: Apply Floyd-Warshall algorithm for constructing the shortest path. Show
that matrices D(k) and π(k) computed by the Floyd-Warshall algorithm for the graph.


OPTIMAL BINARY SEARCH TREES


In a binary search tree, the nodes in the left subtree have lesser values than the
root node and the nodes in the right subtree have greater values than the root node.
We know the key value of each node in the tree, and we also know the
frequency of each node in terms of searching, i.e., how often it is searched. The
frequencies and key values determine the overall cost of searching a node. The cost of
searching is a very important factor in various applications, and the overall cost of
searching should be as small as possible. The time required to search a node in an
arbitrary BST is more than in a balanced binary search tree, as a balanced binary
search tree contains fewer levels than an arbitrary BST. There is one way to reduce
the cost of a binary search tree, known as an optimal binary search tree.
Example:
If the keys are 10, 20, 30, 40, 50, 60, 70


In the above tree, all the nodes in the left subtree are smaller than the value of
the root node, and all the nodes in the right subtree are larger than the value of the
root node. For a balanced tree, the maximum time required to search a node is
proportional to the height of the tree, which is log n. Now we will see how many
binary search trees can be made from a given number of keys. For example: 10, 20, 30
are the keys, and the following are the binary search trees that can be made out from
these keys.

The formula for calculating the number of trees with n keys is the Catalan number:
number of trees = (2n)! / ((n + 1)! n!)

When we use the above formula for n = 3, we find that a total of 5 trees
can be created. The cost required for searching an element depends on the
comparisons to be made to search an element.
Dynamic Approach for Optimal Binary Search Tree
Consider the below table, which contains the keys and frequencies.
Keys:        10  20  30  40
Frequencies:  4   2   6   3


First, we will calculate the values where j-i is equal to zero.


When i=0, j=0, then j-i = 0
When i = 1, j=1, then j-i = 0
When i = 2, j=2, then j-i = 0
When i = 3, j=3, then j-i = 0
When i = 4, j=4, then j-i = 0
Therefore, c[0, 0] = 0, c[1 , 1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0
Now we will calculate the values where j-i equal to 1.
When j=1, i=0 then j-i = 1
When j=2, i=1 then j-i = 1

When j=3, i=2 then j-i = 1
When j=4, i=3 then j-i = 1
Now to calculate the cost, we consider only the jth key (a single-key tree), so the
cost is simply that key's frequency.
The cost of c[0,1] is 4 (the key is 10, and the frequency of key 10 is 4).
The cost of c[1,2] is 2 (the key is 20, and the frequency of key 20 is 2).
The cost of c[2,3] is 6 (the key is 30, and the frequency of key 30 is 6).
The cost of c[3,4] is 3 (the key is 40, and the frequency of key 40 is 3).

Now we will calculate the values where j-i = 2


When j=2, i=0 then j-i = 2
When j=3, i=1 then j-i = 2
When j=4, i=2 then j-i = 2
In this case, we will consider two keys.
When i=0 and j=2, then keys 10 and 20. There are two possible trees that can be
made out from these two keys shown below:

In the first binary tree, cost would be: 4*1 + 2*2 = 8
In the second binary tree, cost would be: 4*2 + 2*1 = 10
The minimum cost is 8; therefore, c[0,2] = 8

When i=1 and j=3, then keys 20 and 30. There are two possible trees that can be
made out from these two keys shown below:
In the first binary tree, cost would be: 1*2 + 2*6 = 14
In the second binary tree, cost would be: 1*6 + 2*2 = 10
The minimum cost is 10; therefore, c[1,3] = 10
When i=2 and j=4, we will consider the keys at 3 and 4, i.e., 30 and 40. There are
two possible trees that can be made out from these two keys shown as below:
In the first binary tree, cost would be: 1*6 + 2*3 = 12
In the second binary tree, cost would be: 1*3 + 2*6 = 15
The minimum cost is 12, therefore, c[2,4] = 12

Now we will calculate the values when j-i = 3
When j=3, i=0 then j-i = 3
When j=4, i=1 then j-i = 3
When i=0, j=3 then we will consider three keys, i.e., 10, 20, and 30.
The following are the trees that can be made if 10 is considered as a root node.

In the above tree, 10 is the root node, 20 is the right child of node 10, and 30 is the
right child of node 20.
Cost would be: 1*4 + 2*2 + 3*6 = 26

In the above tree, 10 is the root node, 30 is the right child of node 10, and 20 is the
left child of node 30.

Cost would be: 1*4 + 2*6 + 3*2 = 22
The following tree can be created if 20 is considered as the root node.

In the above tree, 20 is the root node, 30 is the right child of node 20, and 10 is the
left child of node 20.
Cost would be: 1*2 + 2*4 + 2*6 = 22
The following are the trees that can be created if 30 is considered as the root node.

In the above tree, 30 is the root node, 20 is the left child of node 30, and 10 is the left
child of node 20.
Cost would be: 1*6 + 2*2 + 3*4 = 22


In the above tree, 30 is the root node, 10 is the left child of node 30 and 20 is the right
child of node 10.
Cost would be: 1*6 + 2*4 + 3*2 = 20
Therefore, the minimum cost is 20, which is obtained with the 3rd key (30) as the root. So, c[0,3] is equal to 20.
When i=1 and j=4 then we will consider the keys 20, 30, 40
c[1,4] = min{ c[1,1] + c[2,4], c[1,2] + c[3,4], c[1,3] + c[4,4] } + 11
= min{0+12, 2+3, 10+0}+ 11
= min{12, 5, 10} + 11
The minimum value is 5; therefore, c[1,4] = 5+11 = 16


Now we will calculate the values when j-i = 4


When j=4 and i=0 then j-i = 4
In this case, we will consider four keys, i.e., 10, 20, 30 and 40. The frequencies of 10,
20, 30 and 40 are 4, 2, 6 and 3 respectively.
w[0, 4] = 4 + 2 + 6 + 3 = 15
If we consider 10 as the root node then
C[0, 4] = min {c[0,0] + c[1,4]}+ w[0,4]
= min {0 + 16} + 15= 31
If we consider 20 as the root node then
C[0,4] = min{c[0,1] + c[2,4]} + w[0,4]
= min{4 + 12} + 15
= 16 + 15 = 31
If we consider 30 as the root node then,
C[0,4] = min{c[0,2] + c[3,4]} +w[0,4]
= min {8 + 3} + 15
= 26

If we consider 40 as the root node then,
C[0,4] = min{c[0,3] + c[4,4]} + w[0,4]
= min{20 + 0} + 15
= 35

The minimum cost is 26, obtained with 30 as the root; the optimal binary search tree can be created as:

The general formula for calculating the minimum cost is:

C[i, j] = min { C[i, k-1] + C[k, j] } + w(i, j), where the minimum is taken over all
roots k with i < k ≤ j

0/1 KNAPSACKS
In the 0/1 knapsack problem, each item is either placed in the knapsack
completely or not at all. For example, suppose we have two items having weights 2 kg
and 3 kg, respectively. If we pick the 3 kg item, we cannot take just 1 kg of it
(the item is not divisible); we have to pick the item completely. This is the
0/1 knapsack problem, in which we either pick an item completely or leave it
entirely. The 0/1 knapsack problem is solved by dynamic programming.

The fractional knapsack problem, by contrast, allows dividing an item. For example,
if we have an item of 3 kg, we can pick 2 kg of it and leave the remaining 1 kg. The
fractional knapsack problem is solved by the greedy approach.

To consider all subsets of items, there can be two cases for every item:
(1) the item is included in the optimal subset,
(2) the item is not included in the optimal subset.
Therefore, the maximum value that can be obtained from n items is the maximum of
the following two values:
1) The maximum value obtained from n-1 items and weight W (excluding the nth item).
2) The value of the nth item plus the maximum value obtained from n-1 items and
weight W minus the weight of the nth item (including the nth item).
If the weight of the nth item is greater than W, then the nth item cannot be included
and case 1 is the only possibility.
Example: Consider the problem having weights and profits are:
Weights: {3, 4, 6, 5}; Profits: {2, 3, 1, 4} ; The weight of the knapsack is 8 kg & the
number of items is 4
First, we create a matrix shown as below:


0 1 2 3 4 5 6 7 8
0
1
2
3
4

In the above matrix, the columns represent the knapsack capacities 0 through 8,
and the rows represent the items with their profits and weights. We have not taken
the capacity 8 directly; instead the problem is divided into sub-problems with
capacities 0, 1, 2, 3, 4, 5, 6, 7, 8. The solutions of the sub-problems are saved in the
cells, and the answer to the full problem is stored in the final cell. First, we write the
weights in ascending order and the profits according to their weights, shown as below:
wi = {3, 4, 5, 6}
pi = {2, 3, 4, 1}
The first row and the first column would be 0, since with no items (i = 0) or zero capacity (w = 0) the profit is 0.
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0
2 0
3 0
4 0

When i=1, W=1


w1 = 3; Since we have only one item in the set having weight 3, but the capacity of
the knapsack is 1. We cannot fill the item of 3kg in the knapsack of capacity 1 kg so
add 0 at M[1][1] shown as below:

0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0
2 0
3 0
4 0

When i = 1, W = 2
w1 = 3; Since we have only one item in the set having weight 3, but the capacity of
the knapsack is 2. We cannot fill the item of 3kg in the knapsack of capacity 2 kg so
add 0 at M[1][2] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0
2 0
3 0
4 0
When i=1, W=3 w1 = 3; Since we have only one item in the set having weight equal
to 3, and weight of the knapsack is also 3; therefore, we can fill the knapsack with an
item of weight equal to 3. We put profit corresponding to the weight 3, i.e., 2 at
M[1][3] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2
2 0
3 0
4 0

When i=1, W = 4
W1 = 3; Since we have only one item in the set having weight equal to 3, and weight
of the knapsack is 4; therefore, we can fill the knapsack with an item of weight equal
to 3. We put profit corresponding to the weight 3, i.e., 2 at M[1][4] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2
2 0
3 0
4 0

When i=1, W = 5
W1 = 3; Since we have only one item in the set having weight equal to 3, and weight
of the knapsack is 5; therefore, we can fill the knapsack with an item of weight equal
to 3. We put profit corresponding to the weight 3, i.e., 2 at M[1][5] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2
2 0
3 0
4 0

When i =1, W=6


W1 = 3; Since we have only one item in the set having weight equal to 3, and weight
of the knapsack is 6; therefore, we can fill the knapsack with an item of weight equal
to 3. We put profit corresponding to the weight 3, i.e., 2 at M[1][6] shown as below:

0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2
2 0
3 0
4 0

When i=1, W = 7
W1 = 3; Since we have only one item in the set having weight equal to 3, and weight
of the knapsack is 7; therefore, we can fill the knapsack with an item of weight equal
to 3. We put profit corresponding to the weight 3, i.e., 2 at M[1][7] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2 2
2 0
3 0
4 0
When i =1, W =8
W1 = 3; Since we have only one item in the set having weight equal to 3, and weight
of the knapsack is 8; therefore, we can fill the knapsack with an item of weight equal
to 3. We put profit corresponding to the weight 3, i.e., 2 at M[1][8] shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2 2 2
2 0
3 0
4 0

Now the value of 'i' gets incremented, and becomes 2.
When i = 2, W = 1
The weight of the second item is 4, i.e., w2 = 4. Since the capacity of the knapsack is
only 1, we cannot put the item of weight 4 in it, so we carry the value from the row
above and add 0 at M[2][1]. The same happens for W = 2 and W = 3 (at W = 3 the
value 2 from row 1 is carried down), shown as below:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2 2 2
2 0 0 0 2
3 0
4 0

Repeating the same process for all the iterations, the final table will be:
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2 2 2
2 0 0 0 2 3 3 3 5 5
3 0 0 0 2 3 4 4 5 6
4 0 0 0 2 3 4 4 5 6

As we can observe in the above table, 6 is the maximum profit among all the
entries. The pointer points to the last row and the last column, M[4][8] = 6. Now we
compare this value with the previous row; since row i = 3 contains the same value 6,
item 4 (weight 6) is not selected and the pointer shifts upwards to M[3][8].
Again, we compare the value 6 with the row above, i.e., i = 2. Since M[2][8] = 5
is different, item 3 is selected, and the weight corresponding to this row is 5:
x = { -, -, 1, 0 }
The profit corresponding to weight 5 is 4. Therefore, the remaining profit is
(6 - 4) = 2 and the remaining capacity is (8 - 5) = 3, so we move to M[2][3] = 2. Since
the row above (i = 1) contains the same value 2 at M[1][3], item 2 (weight 4) is not
selected and the pointer shifts upwards.
Again we compare the value 2 with the row above, i.e., i = 0. Since M[0][3] = 0
does not contain the value 2, row i = 1 is selected, and the weight corresponding
to it is 3:
X = {1, 0, 1, 0}
The profit corresponding to weight 3 is 2. Therefore, the remaining profit is 0 and the
traceback stops. In this problem, two weights are selected, i.e., 3 and 5, to maximize
the profit at 2 + 4 = 6.
Algorithm for 0/1 Knapsack
The algorithm takes the following inputs
 The maximum weight W
 The number of items n
 The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

// Algorithm for 0/1 Knapsack
Dynamic-0-1-Knapsack (v, w, n, W)
    for w = 0 to W do
        c[0, w] = 0
    for i = 1 to n do
        c[i, 0] = 0
        for w = 1 to W do
            if wi ≤ w then
                if vi + c[i-1, w-wi] > c[i-1, w] then
                    c[i, w] = vi + c[i-1, w-wi]
                else
                    c[i, w] = c[i-1, w]
            else
                c[i, w] = c[i-1, w]

The set of items to take can be deduced from the table, starting at c[n, w] and
tracing backwards where the optimal values came from.
If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue
tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue
tracing with c[i-1, w-wi].
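The table construction and the traceback above can be sketched in Python. The item
data below is assumed to match the worked example (weights 3, 4, 5, 6 with profits
2, 3, 3, 4 and capacity W = 8); the profits of items 3 and 4 are an assumption chosen
to be consistent with the final table.

```python
def knapsack_01(v, w, W):
    """0/1 knapsack: returns (maximum profit, solution vector x)."""
    n = len(v)
    # c[i][j] = best profit using the first i items with capacity j
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] <= j and v[i - 1] + c[i - 1][j - w[i - 1]] > c[i - 1][j]:
                c[i][j] = v[i - 1] + c[i - 1][j - w[i - 1]]
            else:
                c[i][j] = c[i - 1][j]
    # Trace back from c[n][W] to recover which items were taken
    x, j = [0] * n, W
    for i in range(n, 0, -1):
        if c[i][j] != c[i - 1][j]:    # item i is part of the solution
            x[i - 1] = 1
            j -= w[i - 1]
    return c[n][W], x

# Assumed example data: profits 2, 3, 3, 4; weights 3, 4, 5, 6; capacity 8
profit, x = knapsack_01([2, 3, 3, 4], [3, 4, 5, 6], 8)
print(profit, x)  # 5 [1, 1, 0, 0] -> items of weight 3 and 4 are selected
```

The traceback mirrors the pointer-shifting argument above: whenever c[i][j] equals
c[i-1][j] the pointer only moves up; otherwise item i is taken and the capacity is
reduced by its weight.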

TRAVELLING SALESMAN PROBLEM
The Traveling Salesman Problem is a classic optimization problem. A
salesman wants to travel through a number of cities and must return to the city he
started from; he may start from any city he likes. During his journey, we need to
minimize the total distance traveled by him.
Problem Statement:
"We will be given a graph represented in the form of an adjacency matrix, say distance,
where distance[i][j] denotes the distance between cities i and j. If distance[i][j] == 0, the
given cities are not connected, i.e., no direct edge exists between them. We can start from any
node, and we need to return to the original city through the shortest route covering all the
cities."
So, basically, we need to complete a cycle in this question, and such a cycle is
known as the Hamiltonian cycle. A Hamiltonian cycle is a set of edges such that
every node is visited only once and we come back to the original position. So in
short, in this problem, we need to return a minimum weight Hamiltonian cycle. We
just have to minimize the total cost of the cycle.
Let's first try to get a Hamiltonian cycle with minimum weight from the graph
below. Given the following graph and the distance between cities he has traveled,
let's find out the shortest path in which he can travel all the cities.

The Hamiltonian cycle with min weight can be:

Example for Travelling Salesman Problem:

As we have an option to start from any position, we start from A.
Making a recursion tree and using bit masking:
As there are 4 cities, the bit mask at the beginning is 0000, with the bits
denoting the cities D, C, B, and A respectively. A bit is 0 if the corresponding city
has not been visited and 1 if it has already been visited. So if the bit mask is 1111,
all bits are set, which implies all cities have been visited; this value equals
(1 << n) - 1.

We try to find the possible options that exist. This can be done by iterating over the
adjacency matrix of A or the other way of doing so can be iterating over other cities
and checking if they can be a possible option to travel from A. As we begin, we visit
city A, and the bit mask value is now changed to 0001.

Each of these recursive branches denotes one path.


Now, as we move forward from A to B, we add the distance in a variable, say, ans.
And we mark city B as visited.

We add the distance to the answer. Now we only visit the cities from B that
aren't visited yet, so we have two options available: C and D. We then move to C
and then to D, marking them visited and adding the cost to ans. At this point we
have explored each unvisited city, so we backtrack, marking each city as unvisited
again, and return to the original city if a direct edge to it exists. As there is a
direct edge from D to A and from C to A, these nodes return the cost required to go
back to the original source.

This way we explore each available option from the starting node. We also
maintain a minimum cost variable that maintains the minimum cost spent to visit
each city and return to the original city.
The recursion tree would look like this:


Traveling Salesman Algorithm


Here is the algorithm for Travelling Salesman Problem:
1. Define the mask as (1<<n)-1.
2. Create a function, say, tsp() having mask and city as arguments. As the mask
denotes a set of cities visited so far, we iterate over the mask and get to know which
city isn't visited.
3. The base case is when all the cities have been visited, i.e., the mask has all of its
bits set; we then return the distance between the current city and the original city.
4. Then we try to visit the unvisited cities: we iterate over the n cities and check
whether each city's bit in the mask is 1 or 0.

4.1. If a city is not visited, we make a recursive call, adding the distance from
the current city to that unvisited city plus the remaining distance obtained from the
recursive call.
4.2. In the recursive call, we pass the mask with the newly visited city's bit set,
along with that city.

The time complexity of this recursive solution comes out to be O((n - 1)!).
The space complexity of this code comes out to be O(n) for the recursion stack.
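The recursive bitmask procedure described above can be sketched in Python. The
4-city distance matrix below is a made-up example (not the one from the figures),
and tsp() follows steps 1-4 without memoization:

```python
import sys

# Hypothetical 4-city distance matrix; symmetric, 0 on the diagonal
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
n = len(dist)
VISITED_ALL = (1 << n) - 1  # mask value when every city's bit is set

def tsp(mask, city):
    # Base case: all cities visited, return to the original city (city 0)
    if mask == VISITED_ALL:
        return dist[city][0]
    best = sys.maxsize
    for nxt in range(n):
        if not (mask >> nxt) & 1:           # bit is 0: city nxt not visited yet
            cost = dist[city][nxt] + tsp(mask | (1 << nxt), nxt)
            best = min(best, cost)
    return best

# Start at city 0 with only its own bit set in the mask
print(tsp(1, 0))  # 80 for this matrix (cycle 0 -> 1 -> 3 -> 2 -> 0)
```

Memoizing tsp() on the pair (mask, city) would bring the running time down to
O(n² · 2ⁿ), which is the usual dynamic-programming variant of this algorithm.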

FLOW SHOP SCHEDULING


In a flow shop, m different machines must process n jobs. Each job consists of
exactly m operations, and the ith operation of a job must be executed on the ith
machine. Operations within one job must be performed in the specified order.
The first operation gets executed on the first machine, then the second
operation on the second machine, and so on. Jobs can be executed in any order. The
problem is to determine the optimal such arrangement, i.e., the one with the shortest
possible makespan (total schedule length).
With two machines, the problem can be solved in O(n log n) time using
Johnson's algorithm. For more than two machines, the problem is NP-hard. The
goal is to minimize the makespan, i.e., the completion time of the last job.
The flow shop contains n jobs, simultaneously available at time zero, to be
processed by two machines arranged in series with unlimited storage between them.
The processing times of all jobs are known with certainty.
It is required to schedule the n jobs on the machines so as to minimize the
makespan. Johnson's rule for scheduling jobs in a two-machine flow shop is: in an
optimal schedule, job i precedes job j if min{pi1, pj2} < min{pj1, pi2}, where pi1 is
the processing time of job i on machine 1 and pi2 is the processing time of job i on
machine 2. Similarly, pj1 and pj2 are the processing times of job j on machine 1 and
machine 2 respectively.

Algorithm for Flow Shop Scheduling (Johnson’s algorithm):
Algorithm JOHNSON_FLOWSHOP(T, Q)
// T is the array of job times; column i gives each job's time on machine Mi
// Q is the output sequence of jobs
    Q = Φ
    for k = 1 to n do
        t = minimum remaining machine time, scanning both columns of T
        if t occurs in column 1 then
            Add the corresponding job to the first empty slot of Q
        else
            Add the corresponding job to the last empty slot of Q
        Remove that job from further consideration
    end
    return Q
Example: Each of five jobs needs to go through machines M1 and M2. Find the
optimum sequence of jobs using Johnson's rule.

Job  M1  M2
A     4   2
B     5   6
C     9   8
D     7   1
E     3  11

Solution:
The smallest processing time is 1, for job D on machine M2, so schedule this job
last and skip it from further consideration.
x x x x D

The next smallest time is 2, for job A on machine M2, so schedule the job in the last
available empty slot and skip it from further consideration.
x x x A D
The next smallest time is 3, for job E on machine M1, so schedule the job in the
earliest available empty slot and skip it from further consideration.
E x x A D
The next smallest time is 5, for job B on machine M1, so schedule the job in the
earliest available empty slot and skip it from further consideration.
E B x A D
The only job left is C.
E B C A D
So the final schedule is {E, B, C, A, D}.
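Johnson's rule as applied in this example can be sketched in Python (the job times
are taken from the table above):

```python
def johnson(jobs):
    """jobs: dict of job name -> (time on M1, time on M2).
    Returns the job sequence produced by Johnson's rule."""
    remaining = dict(jobs)
    n = len(jobs)
    schedule = [None] * n
    front, back = 0, n - 1
    while remaining:
        # Pick the job with the globally smallest time on either machine
        name, (t1, t2) = min(remaining.items(), key=lambda kv: min(kv[1]))
        if t1 <= t2:
            schedule[front] = name  # smallest time on M1: earliest empty slot
            front += 1
        else:
            schedule[back] = name   # smallest time on M2: latest empty slot
            back -= 1
        del remaining[name]         # skip this job from further consideration
    return schedule

jobs = {"A": (4, 2), "B": (5, 6), "C": (9, 8), "D": (7, 1), "E": (3, 11)}
print(johnson(jobs))  # ['E', 'B', 'C', 'A', 'D']
```

Each iteration places one job and removes it from consideration, exactly as in the
step-by-step trace above.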
