
UNIT-IV DYNAMIC PROGRAMMING

Dynamic Programming:- Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share sub-subproblems. A dynamic-programming algorithm solves every sub-subproblem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time the sub-subproblem is encountered. Dynamic programming is typically applied to optimization problems. The development of a dynamic-programming algorithm can be broken into a sequence of four steps.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
The dynamic programming technique was developed by Bellman and is based on the principle of optimality. Dynamic programming uses optimal substructure in a bottom-up fashion: we first find optimal solutions to subproblems and, having solved the subproblems, we use them to find an optimal solution to the original problem.
Application-I: Matrix chain multiplication:- Matrix chain multiplication is an example of dynamic
programming. We are given a sequence (chain) A1, A2, ..., An of n matrices to be multiplied, and we wish to
compute the product A1 x A2 x A3 x….x An. We can evaluate the expression using the standard algorithm for
multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how
the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix
or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication is associative, so all parenthesizations yield the same product. For example, if the chain of matrices is A1, A2, A3, A4, the product A1 x A2 x A3 x A4 can be fully parenthesized in five distinct ways:
(A1 (A2 (A3 A4))) ,
(A1 ((A2 A3) A4)) ,
((A1 A2) (A3 A4)) ,
((A1 (A2 A3)) A4) ,
(((A1 A2) A3) A4).
The way we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the
product. Consider first the cost of multiplying two matrices. We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is an m × n matrix and B is a p × q matrix (so compatibility requires n = p), the resulting matrix C is an m × q matrix. The standard algorithm is given below.
Algorithm Matrix_Mul(A, B)
// A is an m × n matrix, B is a p × q matrix
{
    if (n ≠ p) then Error "incompatible dimensions"
    else
        for i ← 1 to m do
            for j ← 1 to q do
            {
                C[i, j] ← 0
                for k ← 1 to n do
                    C[i, j] ← C[i, j] + A[i, k] * B[k, j]   // one scalar multiplication per iteration
            }
    return C
}
The time to compute C is proportional to the number of scalar multiplications, which is mnq (equivalently mpq, since n = p).
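As a quick check of this count, here is a small runnable Python version of the standard algorithm (the function name matrix_multiply and the explicit multiplication counter are my own additions, not part of the notes); for a 10 × 100 matrix times a 100 × 5 matrix it reports 10 * 100 * 5 = 5000 scalar multiplications.

def matrix_multiply(A, B):
    """Standard matrix multiplication; also returns the number of scalar multiplications."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    if n != p:
        raise ValueError("incompatible dimensions")
    C = [[0] * q for _ in range(m)]
    count = 0
    for i in range(m):
        for j in range(q):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                count += 1                    # one scalar multiplication per inner iteration
    return C, count

A = [[1] * 100 for _ in range(10)]            # a 10 x 100 matrix
B = [[1] * 5 for _ in range(100)]             # a 100 x 5 matrix
_, count = matrix_multiply(A, B)
print(count)                                  # 5000 = 10 * 100 * 5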


Example 1:- To illustrate the different costs incurred by different parenthesizations of a matrix product, consider the product of three matrices A1 * A2 * A3. Suppose that the dimensions of the matrices are 10 × 100, 100 × 5, and 5 × 50, respectively. If we multiply according to the parenthesization,
((A1 A2) A3) = 10 * 100 * 5 + 10 * 5 * 50 = 7500
(A1 (A2 A3)) = 10 * 100 * 50 +100 * 5 * 50 = 75,000
Thus, computing the product according to the first parenthesization is 10 times faster.
Definition:- The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, ..., An of n matrices, where for i = 1, 2, ..., n, matrix Ai has dimension Pi-1 × Pi, fully parenthesize the product A1 A2 ... An in a way that minimizes the number of scalar multiplications.
Note:- In the matrix-chain multiplication problem, we are not actually multiplying matrices. Our goal is only
to determine an order for multiplying matrices that has the lowest cost.
Solving the matrix-chain multiplication problem by dynamic programming
Step 1: The structure of an optimal parenthesization. Our first step in the dynamic-programming paradigm is
to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal
solutions to sub problems. For the matrix-chain multiplication problem, we can perform this step as follows.
Any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices Ai,k and Ak+1,j
and then multiply them together to produce the final product Ai,j. The cost of this parenthesization is thus the
cost of computing the matrix Ai,k, plus the cost of computing Ak+1,j, plus the cost of multiplying them
together.
Step 2: A recursive solution. Next, we define the cost of an optimal solution recursively in terms of the optimal solutions to subproblems. For the matrix-chain multiplication problem, we can define M[i, j], the minimum cost of computing Ai ... Aj, recursively as follows. If i = j, the chain consists of the single matrix Ai, so no scalar multiplications are necessary. Thus,
M[i, j] = 0                                                       for i = j
M[i, j] = min { M[i, k] + M[k+1, j] + Pi-1 Pk Pj : i ≤ k < j }    for i < j
Step 3: Computing the optimal costs. We perform the third step of the dynamic-programming paradigm and
compute the optimal cost by using a tabular, bottom-up approach.

Step 4: Constructing an optimal solution. At the first level we compare M12 and M23. When M12 < M23, we parenthesize A1A2 in the product A1A2A3, i.e. (A1 A2) A3; when M12 > M23 we parenthesize A2A3, i.e. A1 (A2 A3). This process is repeated until the whole product is parenthesized. The top entry of the table, M13, gives the optimum cost of the matrix chain multiplication.
Example:- Find an optimal parenthesization of the matrix-chain product P x Q x R x T, where P is 5 × 4, Q is 4 × 6, R is 6 × 2 and T is 2 × 7.


Solution:- Given
P0=5 , P1=4, P2=6, P3=2, P4=7
The bottom level of the table is initialized as
M[i, j] = 0 where i = j
To compute M[i, j] when i < j,
M[i, j] = min { M[i, k] + M[k+1, j] + Pi-1 Pk Pj : i ≤ k < j }
Thus M12 = min{ M11+M22+P0P1P2} = 0 + 0 + 5 * 4 * 6 = 120
M23 = min{ M22+M33+P1P2P3} = 0 + 0 + 4 * 6 * 2 = 48
M34 = min{ M33+M44+P2P3P4} = 0 + 0 + 6 * 2 * 7 = 84
M13 = min{ M11+M23+P0P1P3 , M12+M33+P0P2P3 }
= min{0 + 48 + 5 * 4 * 2 , 120+ 0+ 5 * 6 * 2} = min{ 88, 180} = 88
M24 = min{ M22+M34+P1P2P4 , M23+M44+ P1P3P4 }
= min{0 + 84 + 4 * 6 * 7 , 48+ 0+ 4 * 2 * 7} = min{ 252, 104} = 104
M14 = min{ M11+M24+P0P1P4 , M12+M34+P0P2P4 , M13+M44+P0P3P4 }
= min{0 + 104 + 5 * 4 * 7 , 120+ 84+ 5 * 6 * 7 , 88 + 0 + 5 * 2 * 7}
= min{ 244,414, 158} = 158

At the first level we compare M12, M23 and M34. As M23 = 48 is the minimum of the three, we parenthesize QR in the product PQRT, i.e. P(QR)T. At the second level we compare M13 and M24. As M13 = 88 is the minimum of the two, we parenthesize P and QR in the product PQRT, i.e. (P(QR))T. Finally we parenthesize the whole product, i.e. ((P(QR))T). The top entry in the table, M14, gives the optimum cost of ((P(QR))T).
Verification:- The chain of matrices is P, Q, R, T, the product P x Q x R x T can be fully parenthesized in
five distinct ways:
1. (P(Q(RT))) 2. (P((QR)T)) 3. ((PQ)(RT)) 4. ((P(QR))T) 5. (((PQ) R)T)
Cost of (P(Q(RT))) = 5*4*7 + 4*6*7 + 6*2*7 = 392
Cost of (P((QR)T)) = 5*4*7 + 4*6*2 + 4*2*7 = 244
Cost of ((PQ)(RT)) = 5*4*6 + 6*2*7 + 5*6*7 = 414
Cost of ((P(QR))T) = 5*4*2 + 4*6*2 + 5*2*7 = 158
Cost of (((PQ) R)T) = 5*4*6 + 5*6*2 + 5*2*7 = 250


The manual verification also gives an optimal cost of 158 with the multiplication order ((P(QR))T).

Algorithm Matrix_Chain_Mul(P)
// P[0..n] is the dimension sequence; matrix Ai is P[i-1] × P[i]
{
    for i = 1 to n do
        M[i, i] = 0
    for len = 2 to n do                // len is the chain length
    {
        for i = 1 to n - len + 1 do
        {
            j ← i + len - 1
            M[i, j] ← ∞
            for k = i to j - 1 do      // try every split point k
            {
                q = M[i, k] + M[k + 1, j] + P[i-1] * P[k] * P[j]
                if q < M[i, j] then
                    M[i, j] ← q
            }
        }
    }
    return M
}
Time Complexity:- Algorithm Matrix_Chain_Mul uses the first for loop to initialize M[i, i], which takes O(n) time. Each M[i, j] value is computed inside three nested for loops, which take O(n3) time. Thus the overall time complexity of Matrix_Chain_Mul is O(n3).
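The same bottom-up computation can be written as a short Python sketch (the names matrix_chain_order and parenthesize are my own, not part of the notes). For the dimension sequence 5, 4, 6, 2, 7 used in the example above it reproduces the optimal cost 158 and the parenthesization ((P(QR))T).

import sys

def matrix_chain_order(P):
    """Bottom-up matrix-chain DP. P is the dimension sequence, so matrix Ai is P[i-1] x P[i]."""
    n = len(P) - 1                                   # number of matrices
    M = [[0] * (n + 1) for _ in range(n + 1)]        # M[i][j] = min cost of Ai..Aj
    s = [[0] * (n + 1) for _ in range(n + 1)]        # s[i][j] = best split point k
    for length in range(2, n + 1):                   # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = sys.maxsize
            for k in range(i, j):                    # try every split Ai..Ak | Ak+1..Aj
                q = M[i][k] + M[k + 1][j] + P[i - 1] * P[k] * P[j]
                if q < M[i][j]:
                    M[i][j], s[i][j] = q, k
    return M, s

def parenthesize(s, i, j, names):
    """Recover the optimal parenthesization from the split table s."""
    if i == j:
        return names[i - 1]
    k = s[i][j]
    return "(" + parenthesize(s, i, k, names) + parenthesize(s, k + 1, j, names) + ")"

M, s = matrix_chain_order([5, 4, 6, 2, 7])
print(M[1][4])                                       # 158
print(parenthesize(s, 1, 4, ["P", "Q", "R", "T"]))   # ((P(QR))T)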
Application-II: OBST
Binary Search Trees:- A binary search tree T is a binary tree; either it is empty or each node in the tree contains an identifier such that
1. All identifiers in the left subtree of T are less than the identifier in the root node of T.
2. All identifiers in the right subtree are greater than the identifier in the root node of T.
3. The left and right subtrees of T are also binary search trees.
Optimal Binary Search Tree problem:- Given a sequence K = {k1, k2, ..., kn} of n distinct keys in sorted order (so that k1 < k2 < ··· < kn), we wish to build a binary search tree from these keys such that the cost of the binary search tree is minimum. For each key ki, we have a probability pi that a search will be for ki.
Some searches may be for values not in K, and so we also have n + 1 "dummy keys" E0, E1, E2, ..., En
representing values not in K. In particular, E0 represents all values less than k1, En represents all values
greater than kn, and for i = 1, 2, ..., n -1, the dummy key Ei represents all values between ki and ki+1. For
each dummy key Ei, we have a probability qi that a search will correspond to Ei.
The number of possible binary search trees for n keys is given by the formula
(1/(n+1)) · 2nCn
The cost of a binary search tree is calculated as
Cost(T) = Σ (i = 1 to n) p(i) * level(ai) + Σ (i = 0 to n) q(i) * (level(Ei) - 1)
where the level of the root is 1, the ai are the keys and the Ei are the dummy (external) nodes.


Solving the Optimal Binary Search Tree problem by dynamic programming


Step 1: The structure of an optimal binary search tree
Construct an optimal solution to the problem from optimal solutions to subproblems. Given keys ki,
..., kj, one of these keys, say kr (i ≤ r ≤ j), will be the root of an optimal subtree containing these keys. The
left subtree of the root kr will contain the keys ki, ..., kr-1 (and dummy keys Ei-1, ..., Er-1), and the right
subtree will contain the keys kr+1, ..., kj (and dummy keys Er, ..., Ej).
Step 2: A recursive solution
Next, we define the cost of an optimal solution recursively in terms of the optimal solutions to subproblems. For the optimal binary search tree problem, we can define W[i, j], C[i, j], and r[i, j] recursively as follows.
W[i, j] = qi,  C[i, j] = 0,  r[i, j] = 0                        for i = j
W[i, j] = W[i, j-1] + pj + qj                                   for i < j
C[i, j] = min { C[i, k-1] + C[k, j] : i < k ≤ j } + W[i, j]     for i < j
r[i, j] = the value of k that attains the minimum above         for i < j
Step 3: Computing the expected search cost of an optimal binary search tree
We perform the third step of the dynamic-programming paradigm and compute the optimal cost by using a tabular, bottom-up approach.

Step 4: Constructing an optimal solution.


Now consider the top cell of the table, i.e. the 0th row and nth column. The entry r0n in this cell specifies the root: if r0n = k, then the kth key is the root. If Tij is a tree whose root is the kth key, then the tree is divided into a left subtree Ti,k-1 and a right subtree Tk,j. The process is repeated until a tree Tij with i = j is reached; such a tree is a leaf node.
Algorithm OBST(p, q, n)
// p[1..n] are the key probabilities, q[0..n] are the dummy-key probabilities
{
    for i = 0 to n-1 do
    {
        // trees with zero keys and with one key
        w[i, i] = q[i]; c[i, i] = 0; r[i, i] = 0;
        w[i, i+1] = q[i] + q[i+1] + p[i+1];
        c[i, i+1] = q[i] + q[i+1] + p[i+1];
        r[i, i+1] = i + 1;
    }
    w[n, n] = q[n]; c[n, n] = 0; r[n, n] = 0;
    for m = 2 to n do                   // m = number of keys in the subtree
        for i = 0 to n - m do
        {
            j = i + m;
            w[i, j] = w[i, j-1] + p[j] + q[j];
            k = Find(c, r, i, j);       // best root in the range r[i, j-1] .. r[i+1, j]
            c[i, j] = c[i, k-1] + c[k, j] + w[i, j];
            r[i, j] = k;
        }
    Write(c[0, n], w[0, n], r[0, n]);
}
Algorithm Find(c, r, i, j)
// returns the root k, with r[i, j-1] ≤ k ≤ r[i+1, j], that minimizes c[i, k-1] + c[k, j]
{
    min = ∞;
    for m = r[i, j-1] to r[i+1, j] do
        if (c[i, m-1] + c[m, j] < min) then
        {
            min = c[i, m-1] + c[m, j];
            l = m;
        }
    return l;
}
Time Complexity:- The computation of each c[i, j] with j - i = m requires finding the minimum of at most m quantities, so each such c[i, j] can be computed in O(m) time. There are about n - m such entries at level m, so level m takes O((n - m) * m) = O(nm - m2) time, and summing over m = 2, ..., n gives O(n3) overall. (Restricting the search in Find to the range r[i, j-1] ≤ k ≤ r[i+1, j] reduces the total time to O(n2).)

Example:- Given n=4 and {a1, a2, a3, a4} = { do, if, int, while}, p{1:4}= {3,3,1,1}
and q{0:4}={2, 3, 1, 1, 1}. Construct an optimal binary search tree and find its cost.
Solution:- Initially we have W[i, i] = qi, C[i, i] = 0, r[i, i] = 0, which initializes the bottom row of the table. The remaining values of the table are calculated using the formulas below.
W[i, j] = W[i, j-1] + p[j] + q[j]                               for i < j
C[i, j] = min { C[i, k-1] + C[k, j] : i < k ≤ j } + W[i, j]     for i < j
r[i, j] = the minimizing k                                      for i < j
After calculating all the values, the table is filled as computed below.
To compute Wij, Cij, rij when i = j, we use Wii = qi, Cii = 0, rii = 0.

Thus , W00 =q0=2, C00=0, r00=0
W11=q1=3, C11=0, r11=0
W22=q2=1, C22=0, r22=0
W33=q3=1, C33=0, r33=0
W44=q4=1 ,C44=0, r44=0
When i<j ,
W01={W[0,0]+p1+q1}=(2+3+3)=8
C01=min{c[0,0]+c[1,1]}+w[0,1]=(0+0+8)=8
R01=k=1
W12={W[1,1]+p2+q2}=(3+3+1)=7
C12=min{C[1,1]+C[2,2]}+w[1,2]=(0+0+7)=7
R12=k=2
W23={W[2,2]+p3+q3}=(1+1+1)=3
C23=min{C[2,2]+C[3,3]}+w[2,3]=(0+0+3)=3
R23=k=3
W34={W[3,3]+p4+q4}=(1+1+1)=3
C34=min{C[3,3]+C[4,4]}+w[3,4]=(0+0+3)=3
R34=k=4
W02={W[0,1]+p2+q2}=(8+3+1)=12
C02=min{(C[0,0]+C[1,2]),(C[0,1]+C[2,2])}+w[0,2]=7+12=19
R02=k=1
W13={W[1,2]+p3+q3}=9
C13=min{(C[1,1]+C[2,3]),(C[1,2]+C[3,3])}+w[1,3]=3+9=12
R13=k=2
W24={W[2,3]+p4+q4}=5
C24=min{(C[2,2]+C[3,4]),(C[2,3]+C[4,4])}+w[2,4]=3+5=8
R24=k=3
Repeat the same procedure for the upper levels:
W03 = W02 + p3 + q3 = 12 + 1 + 1 = 14
C03 = min{(C[0,0]+C[1,3]), (C[0,1]+C[2,3]), (C[0,2]+C[3,3])} + w[0,3] = min{12, 11, 19} + 14 = 25
R03 = k = 2
W14 = W13 + p4 + q4 = 9 + 1 + 1 = 11
C14 = min{(C[1,1]+C[2,4]), (C[1,2]+C[3,4]), (C[1,3]+C[4,4])} + w[1,4] = min{8, 10, 12} + 11 = 19
R14 = k = 2
W04 = W03 + p4 + q4 = 14 + 1 + 1 = 16
C04 = min{(C[0,0]+C[1,4]), (C[0,1]+C[2,4]), (C[0,2]+C[3,4]), (C[0,3]+C[4,4])} + w[0,4] = min{19, 16, 22, 25} + 16 = 32
R04 = k = 2
Now consider the last cell, i.e. the 0th row and 4th column, i.e. T04. The r04 in this cell specifies the root: r04 = k = 2, so the 2nd element will be the root. The left subtree is Ti,k-1, i.e. T01, and the right subtree is Tk,j, i.e. T24.

Now consider the cell which contains r01. The r01 specifies the root of the left subtree: r01 = k = 1, so the 1st element will be the root of the left subtree. Its left subtree is T0,k-1, i.e. T00; as i = j this is a leaf node, and the right subtree of T01 is T11, which is also a leaf node.


Now consider the cell which contains r24. The r24 specifies the root of the right subtree: r24 = k = 3, so the 3rd element will be the root of the right subtree. The left subtree of T24, i.e. T22, is a leaf node. The right subtree of T24 is T34, whose root is given by r34.

As the nodes T00, T11, T22 are leaf nodes, now consider the cell which contains r34. The r34 specifies the root: r34 = k = 4, so the 4th element will be the root of this subtree. The left subtree of T34, i.e. T33, is a leaf node, and the right subtree of T34 is T44, which is also a leaf node.

Representing the above structure with the key indices and then with the given identifiers, the optimal tree has a2 (if) as the root, a1 (do) as its left child, a3 (int) as its right child, and a4 (while) as the right child of a3.

Verification:- The number of possible binary search trees for 4 keys is
(1/(n+1)) · 2nCn = (1/(4+1)) · 8C4 = 70/5 = 14

Cost(T1) = 3*1 + 3*2 + 1*3 + 1*4 + 2*1 + 3*2 + 1*3 + 1*4 + 1*4 = 35

Cost(T2) = 3*2 + 3*1 + 1*2 + 1*3 + 2*2 + 3*2 + 1*2 + 1*3 + 1*3 = 32
Similarly we can construct all 14 possible binary search trees and compute the cost of each. Tree T2 has the minimum cost of 32, so it is the optimal binary search tree.
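The whole tabulation can be sketched in Python as below (the function names obst and build_tree and the nested-tuple representation of the tree are my own; for simplicity this sketch tries every root k in i < k ≤ j instead of restricting the search as Find does). For p = (3, 3, 1, 1) and q = (2, 3, 1, 1, 1) it reproduces the optimal cost 32 and the tree rooted at 'if'.

def obst(p, q):
    """Bottom-up OBST tables; p[1..n] are key weights, q[0..n] are dummy-key weights.
    Returns the weight, cost and root tables (w, c, r), indexed as w[i][j] for 0 <= i <= j <= n."""
    n = len(p) - 1                       # p[0] is unused so that p[1..n] match the keys
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):               # trees with no keys: only the dummy key Ei
        w[i][i] = q[i]
    for m in range(1, n + 1):            # m = number of keys in the subtree
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # choose the root k (i < k <= j) minimising c[i][k-1] + c[k][j]
            best_k = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = c[i][best_k - 1] + c[best_k][j] + w[i][j]
            r[i][j] = best_k
    return w, c, r

def build_tree(r, i, j, keys):
    """Recover the optimal tree from the root table r as nested tuples (key, left, right)."""
    if i == j:
        return None                      # leaf position (dummy key)
    k = r[i][j]
    return (keys[k - 1], build_tree(r, i, k - 1, keys), build_tree(r, k, j, keys))

p = [0, 3, 3, 1, 1]                      # p[1..4] for do, if, int, while
q = [2, 3, 1, 1, 1]                      # q[0..4] for the dummy keys
w, c, r = obst(p, q)
print(c[0][4], r[0][4])                  # 32 2  -> optimal cost 32, root = 2nd key 'if'
print(build_tree(r, 0, 4, ["do", "if", "int", "while"]))
# ('if', ('do', None, None), ('int', None, ('while', None, None)))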


Application-III: 0/1 knapsack problem:- The 0–1 knapsack problem is posed as follows. A thief robbing a
store finds n items; the ith item gives a profit of pi dollars and weighs wi pounds, where pi and wi are integers.
He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack for some
integer W. Which items should he take? This is called the 0–1 knapsack problem because each item must
either be taken or left behind; the thief cannot take a fractional amount of an item or take an item more than
once.

Solving the 0/1 Knapsack problem by dynamic programming


Step 1:- Generate the sets Si, where each set contains the pairs (p, w) of total profit and total weight that can be obtained from the first i items.
Step 2:- A recursive solution
Initially the bag contains no items:
S0 = {(0, 0)}
S1i-1 = Si-1 + (Pi, Wi)   -> here + means adding (Pi, Wi) to every pair of Si-1
Si = Si-1 + S1i-1          -> here + means merging the two sets
Purging rule (Dominance rule):- If Si has a pair (Pj, Wj) and another pair (Pk, Wk) with Pj ≤ Pk but Wj ≥ Wk, then the pair (Pj, Wj) is discarded from the set Si.
This process is repeated after every generation of Si, where i = 1, 2, ..., n.
Step 3:- Generate the sets S1, S2, S3, ..., Sn using the above formulae.
Step 4:- After generating Sn, select the pair (Pk, Wk) with the largest profit whose weight Wk does not exceed the capacity m of the bag.
If (Pk, Wk) ∈ Sn and (Pk, Wk) ∉ Sn-1 then Xn = 1, otherwise Xn = 0.
When Xn = 1, find the pair (Pj, Wj) such that Pj = Pk - Pn and Wj = Wk - Wn.
Check: if (Pj, Wj) ∈ Sn-1 and (Pj, Wj) ∉ Sn-2 then Xn-1 = 1, otherwise Xn-1 = 0.
This process is repeated until all Xi, i = 1, 2, ..., n, are found.

Example :- Consider the knapsack instance n=3, (W1,W2,W3)=(2, 3, 4), (P1,P2,P3)=(1, 2, 5) and m=6.
Generate the Si sets containing the pair ( Pi,Wi) and thus find the optimal solution.
Solution:- Initially S0 = {(0,0)} as nothing is added to the bag.
S10 = S0 + (P1, W1) = {(0,0)} + (1, 2) = {(1, 2)}
S1 = S0 + S10 = {(0,0)} + {(1, 2)} = {(0,0), (1, 2)}
S11= S1 + (P2, W2) = {(0,0), (1, 2)}+ ( 2, 3) = {( 2, 3) , (3, 5) }
S2 = S1 + S11 = {(0,0), (1, 2)}+ {( 2, 3) , (3, 5) } = {(0,0), (1, 2), ( 2, 3) , (3, 5) }
S12 = S2 + (P3, W3) = {(0,0), (1, 2), ( 2, 3) , (3, 5) }+ ( 5, 4)
= { (5,4), (6,6), (7,7), (8,9) }
S3 = S2 + S12 = {(0,0), (1, 2), (2, 3), (3, 5)} + {(5,4), (6,6), (7,7), (8,9)}
= {(0,0), (1, 2), (2, 3), (3, 5), (5,4), (6,6), (7,7), (8,9)}
S3 has the pair (3, 5) and the pair (5, 4); as 3 ≤ 5 and 5 > 4, we discard the element (3, 5) from S3 by the purging rule.
Select (6, 6) from S3, as the maximum capacity of the bag is 6.
As (6, 6) is an element of S3 but not of S2, X3 = 1.
Subtract (P3, W3) from (6, 6): (6, 6) - (5, 4) = (1, 2).
As (1, 2) is an element of S2 and also an element of S1, X2 = 0.
As the 2nd item is not added to the bag, nothing is subtracted from the pair (1, 2).
As (1, 2) is an element of S1 but not of S0, X1 = 1.
Thus (X3, X2, X1) = (1, 0, 1).
The profit obtained by placing the above items in the bag is
Σ Pi Xi = 1*1 + 2*0 + 5*1 = 6


Algorithm DKP(p, w, n, m)
{
    S0 = {(0, 0)};
    for i = 1 to n - 1 do
    {
        S1i-1 = Si-1 + (Pi, Wi);          // add (Pi, Wi) to every pair of Si-1
        Si = MergePurge(Si-1, S1i-1);     // merge the two sets and apply the purging rule
    }
    (PX, WX) = last pair in Sn-1;
    (PY, WY) = (P' + Pn, W' + Wn), where (P', W') is the pair in Sn-1 with the largest W' such that W' + Wn ≤ m;
    // Trace back for Xn, Xn-1, ..., X1
    if (PX > PY) then
        Xn = 0;                           // item n is not taken
    else
        Xn = 1;                           // item n is taken
    Trace back for Xn-1, Xn-2, ..., X1;
}
Time Complexity:- In the worst case no pairs are purged, so |Si| can be as large as 2i, and the total time to generate all the sets is O(2n).
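A minimal Python sketch of the set-based method is given below (the function name knapsack_sets is my own; unlike the hand trace above, this sketch drops pairs whose weight already exceeds the capacity m as soon as they are generated, which does not change the answer). For the instance p = (1, 2, 5), w = (2, 3, 4), m = 6 it returns the pair (6, 6) and the solution vector (x1, x2, x3) = (1, 0, 1).

def knapsack_sets(p, w, m):
    """Set-of-pairs 0/1 knapsack: p and w are the profits and weights of items 1..n,
    m is the capacity. Returns the best (profit, weight) pair and the 0/1 vector x."""
    n = len(p)
    S = [[(0, 0)]]                                   # S[0] = {(0,0)}: nothing in the bag yet
    for i in range(n):
        S1 = [(pp + p[i], ww + w[i]) for (pp, ww) in S[i] if ww + w[i] <= m]
        merged = sorted(set(S[i] + S1), key=lambda t: (t[1], -t[0]))
        purged = []                                  # purging rule: drop (Pj, Wj) if some kept
        for pair in merged:                          # pair has >= profit and <= weight
            if not purged or pair[0] > purged[-1][0]:
                purged.append(pair)
        S.append(purged)
    best = max(S[n], key=lambda t: t[0])             # highest profit with weight <= m
    x = [0] * n
    pk, wk = best
    for i in range(n, 0, -1):                        # trace back through S[n-1], ..., S[0]
        if (pk, wk) in S[i - 1]:
            x[i - 1] = 0                             # item i was not needed to reach this pair
        else:
            x[i - 1] = 1
            pk, wk = pk - p[i - 1], wk - w[i - 1]
    return best, x

print(knapsack_sets([1, 2, 5], [2, 3, 4], 6))        # ((6, 6), [1, 0, 1])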
Application-IV: All pairs shortest path problem:- All-pairs shortest-paths problem is to find a shortest
path from u to v for every pair of vertices u and v. Although this problem can be solved by running a single-
source algorithm once from each vertex, it can usually be solved faster using the dynamic programming
technique.
Solving All pairs shortest path problem by dynamic programming
Step 1: - Optimal substructure of a shortest path
Shortest-paths algorithms typically rely on the property that a shortest path between two vertices contains
other shortest paths within it.
Step 2:- A recursive solution
Dk[i, j] = min { Dk-1[i, j], Dk-1[i, k] + Dk-1[k, j] },  k = 1, 2, ..., n
D0[i, j] = C[i, j]
where C[i, j] is the cost matrix of the given graph.


Step 3:- Compute the distance matrices Dk for k = 1, 2, ..., n.
Step 4:- Finally, the matrix Dn gives the shortest distance from every vertex i to every other vertex j.
Example:- Find the shortest paths between all pairs of nodes of the graph whose cost matrix is given below.


Solution:- The cost matrix of the given graph is

C[i, j] =  | 0   4   11 |
           | 6   0    2 |
           | 3   ∞    0 |

We know D0[i, j] = C[i, j], so D0 is the same matrix.

Now we have to calculate D1[i,j]


D1[1,1] = min { D0[1,1], D0[1,1]+ D0[1,1]} = min{0, 0+0} = 0
D1[1,2] = min { D0[1,2], D0[1,1]+ D0[1,2]} = min{4, 0+4} = 4
D1[1,3] = min { D0[1,3], D0[1,1]+ D0[1,3]} = min{11, 0+11} = 11
D1[2,1] = min { D0[2,1], D0[2,1]+ D0[1,1]} = min{6, 6+0} = 6
D1[2,2] = min { D0[2,2], D0[2,1]+ D0[1,2]} = min{0, 6+4} = 0
D1[2,3] = min { D0[2,3], D0[2,1]+ D0[1,3]} = min{2, 6+11} = 2
D1[3,1] = min { D0[3,1], D0[3,1]+ D0[1,1]} = min{3, 3+0} = 3
D1[3,2] = min { D0[3,2], D0[3,1]+ D0[1,2]} = min{∞, 3+4} = 7
D1[3,3] = min { D0[3,3], D0[3,1]+ D0[1,3]} = min{0, 3+11} = 0
Thus

D1[i, j] =  | 0   4   11 |
            | 6   0    2 |
            | 3   7    0 |

Similarly, using the same procedure, we get

D2[i, j] =  | 0   4   6 |
            | 6   0   2 |
            | 3   7   0 |

D3[i, j] =  | 0   4   6 |
            | 5   0   2 |
            | 3   7   0 |

As the number of nodes in the given graph is 3, D3[i, j] gives the shortest distance from every vertex i to every other vertex j.
Algorithm AllPaths(cost, D, n)
// cost[1:n, 1:n] is the cost matrix of a graph with n vertices;
// on termination D[i, j] is the length of a shortest path from i to j.
{
    for i = 1 to n do
        for j = 1 to n do
            D[i, j] = cost[i, j];        // copy the cost matrix into D (= D0)
    for k = 1 to n do
        for i = 1 to n do
            for j = 1 to n do
                D[i, j] = min { D[i, j], D[i, k] + D[k, j] };
}

Time Complexity:- The time needed by the AllPaths algorithm is especially easy to determine because the looping is independent of the data in the matrix D. The innermost statement D[i, j] = min{D[i, j], D[i, k] + D[k, j]} is executed n3 times, so the time complexity of the all-pairs shortest-paths algorithm is Θ(n3).
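A direct Python transcription of AllPaths (the function name all_pairs_shortest_paths is my own) reproduces the matrices computed above for the 3-vertex example.

INF = float('inf')

def all_pairs_shortest_paths(cost):
    """AllPaths: D is updated in place for k = 1..n, exactly as in the algorithm above."""
    n = len(cost)
    D = [row[:] for row in cost]                 # D0 = cost matrix
    for k in range(n):                           # allow vertex k+1 as an intermediate vertex
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

C = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
for row in all_pairs_shortest_paths(C):
    print(row)
# [0, 4, 6]
# [5, 0, 2]
# [3, 7, 0]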

Application-V: Travelling sales person problem:- The travelling salesperson problem is to find a route that starts at one vertex, visits every other vertex exactly once, and returns to the starting vertex. The objective is to minimize the total travelling cost of the salesperson.

Solving Travelling sales person problem by dynamic programming


Step 1:- The travelling salesperson problem relies on an optimal-substructure property: every portion of an optimal tour that starts at some vertex and passes through a set of remaining vertices must itself be a shortest such path.
Step 2:- A recursive solution
g(i, S) = min { C[i, j] + g(j, S - {j}) : j ∈ S }
g(i, Ф) = C[i, 1]
where C[i, j] is the cost matrix of the given graph and g(i, S) is the length of a shortest path starting at vertex i, going through every vertex in S and ending at vertex 1.

Step 3:- Compute g(i, S) for sets S of increasing size until all vertices other than the starting vertex have been considered.
Step 4:- Finally calculate g(1, S), where the set S contains all vertices other than the starting vertex; this gives the optimal cost of the tour.

Example:- Find the shortest tour of a travelling salesperson for the graph whose cost matrix is given below, using the dynamic programming technique.

Solution:- The cost matrix of the given graph is

C[i, j] =  |  0   10   15   20 |
           |  5    0    9   10 |
           |  6   13    0   12 |
           |  8    8    9    0 |

Initially Set S=Ф and g(i, S) = C[i, 1]


Thus g(1, Ф) = C[1, 1] = 0
g(2, Ф) = C[2, 1] = 5
g(3, Ф) = C[3, 1] = 6
g(4, Ф) = C[4, 1] = 8
Now compute g(i, S) where the set S contains a single element. As the starting vertex is vertex 1, the second vertex of the tour can be 2, 3 or 4, so we calculate the cost of going from each of these vertices through the one remaining vertex of S and back to vertex 1.
g(2, {3}) = min { C[2, j] + g(j, S - {j}) : j ∈ S } = C[2, 3] + g(3, Ф) = 9 + 6 = 15
g(2, {4}) = min { C[2, j] + g(j, S - {j}) : j ∈ S } = C[2, 4] + g(4, Ф) = 10 + 8 = 18
Similarly
g(3, {2}) = min { C[3,2] + g(2, S-{2})}= min { C[3,2] + g(2, Ф) }
= 13+ 5 = 18
g(3, {4}) = min { C[3,4] + g(4, S-{4})}= min { C[3,4] + g(4, Ф) }
= 12 + 8 = 20
g(4, {2}) = min { C[4,2] + g(2, S-{2})}= min { C[4,2] + g(2, Ф) }
= 8+ 5 = 13
g(4, {3}) = min { C[4,3] + g(3, S-{3})}= min { C[4,3] + g(3, Ф) }
= 9 + 6 = 15
Now compute g(i, S) where the set S contains two elements.
g(2, {3, 4}) = min { C[2, 3] + g(3, {4}), C[2, 4] + g(4, {3}) }
             = min { 9 + 20, 10 + 15 } = 25
g(3, {2, 4}) = min { C[3, 2] + g(2, {4}), C[3, 4] + g(4, {2}) }
             = min { 13 + 18, 12 + 13 } = 25
g(4, {2, 3}) = min { C[4, 2] + g(2, {3}), C[4, 3] + g(3, {2}) }
             = min { 8 + 15, 9 + 18 } = 23
Finally
g(1, {2, 3, 4}) = min { C[1, 2] + g(2, {3, 4}), C[1, 3] + g(3, {2, 4}), C[1, 4] + g(4, {2, 3}) }
                = min { 10 + 25, 15 + 25, 20 + 23 } = 35
The optimal cost to tour through all the vertices is 35.
From g(1, {2, 3, 4}) the minimum cost is obtained when j = 2, so from vertex 1 we go to vertex 2, i.e. 1 → 2.
From g(2, {3, 4}) the minimum cost is obtained when j = 4, so from vertex 2 we go to vertex 4, i.e. 1 → 2 → 4.
The only remaining vertex is 3, so from vertex 4 we go to vertex 3, i.e. 1 → 2 → 4 → 3.
Finally we return to the starting vertex 1, giving the tour 1 → 2 → 4 → 3 → 1.
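The same g(i, S) values can be computed by a short memoized Python sketch (the top-down memoization via lru_cache is an implementation choice of mine; the notes compute the same values level by level). For the cost matrix above it returns the optimal cost 35 and the tour 1 → 2 → 4 → 3 → 1.

from functools import lru_cache

C = [[0, 10, 15, 20],        # cost matrix of the example (vertices numbered 1..4)
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
n = len(C)

@lru_cache(maxsize=None)
def g(i, S):
    """Length of a shortest path from vertex i through every vertex in frozenset S, ending at 1."""
    if not S:
        return C[i - 1][0]                       # g(i, empty) = C[i, 1]
    return min(C[i - 1][j - 1] + g(j, S - {j}) for j in S)

full = frozenset(range(2, n + 1))                # S = {2, 3, 4}
print(g(1, full))                                # 35

# Recover the tour by repeatedly choosing the j that attains the minimum.
tour, i, S = [1], 1, full
while S:
    j = min(S, key=lambda j: C[i - 1][j - 1] + g(j, S - {j}))
    tour.append(j)
    i, S = j, S - {j}
print(tour + [1])                                # [1, 2, 4, 3, 1]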

Reliability design problem:- The reliability design problem is to design a system composed of several devices connected in series. Let ri be the reliability of device Di; then the reliability of the entire system is ∏ ri. Our problem is to use device duplication to maximize reliability under a cost constraint.
Solving Reliability design problem by dynamic programming

blog: anilkumarprathipati.wordpress.com 14
UNIT-IV DYNAMIC PROGRAMMING

Step 1:- The reliability design problem typically relies on the property that less reliable devices are duplicated more than the more reliable ones.
Step 2:- A recursive solution
Ui = upper bound on the number of copies of device Di = floor( (C + Ci - (C1 + C2 + ... + Cn)) / Ci )
Фi(j) = 1 - (1 - ri)^j,  j = 1, 2, ..., Ui   (reliability of stage i when device Di is duplicated j times)
Generate the sets Si, where each set contains the pairs (r, c) of system reliability and cost that can be achieved using the first i stages.
Initially no devices are added to the system:
S0 = {(1, 0)}
Sij = Si-1 + (Фi(j), j*Ci)   -> here + means multiplying the reliability component by Фi(j) and adding j*Ci to the cost component of every pair in Si-1
Si = union of the Sij for j = 1, 2, ..., Ui
Purging rule (Dominance rule):- If Si has a pair (rj, cj) and another pair (rk, ck) with rj ≤ rk but cj ≥ ck, then the pair (rj, cj) is discarded from the set Si.
This process is repeated after every generation of Si, where i = 1, 2, ..., n.
Step 3:- Generating the sets S1, S2, S3…… Sn using the above formulae.
Step 4:- After generating Sn, select the pair (rk, ck) with the highest reliability whose cost ck does not exceed the cost constraint C.
If (rk, ck) ∈ Sn and (rk, ck) ∈ Snj, then device Dn is duplicated mn = j times.
Find the pair (rj, cj) such that rj = rk / Фn(mn) and cj = ck - mn*Cn.
Check: if (rj, cj) ∈ Sn-1 and (rj, cj) ∈ Sn-1,j, then device Dn-1 is duplicated j times.
This process is repeated until the number of copies of every device Di, i = 1, 2, ..., n, is found.
Example :- Design a three stage system with device types D1, D2, D3 . The costs are $30, $15 and $20
respectively. The cost of the system is to be no more than $105. The reliability of each device type is 0.9, 0.8,
0.5 respectively.
Solution:- Given C=$105, C1=$30, C2=$15, C3=$20, r1= 0.9, r2= 0.8, r3= 0.5
First compute Ui for i = 1, 2, ..., n:
Ui = floor( (C + Ci - (C1 + C2 + C3)) / Ci )
U1 = (105 + 30 - 65)/30 = 2.33, so U1 = 2
U2 = (105 + 15 - 65)/15 = 3.66, so U2 = 3
U3 = (105 + 20 - 65)/20 = 3
Hence (U1, U2 , U3) = (2, 3, 3)
Initially no devices are added to the system
S0 = {(1,0)}
As U1 = 2 we have to calculate S1j for j = 1, 2, ..., U1.
Ф1(1) = 1 - (1 - r1)^1 = 1 - (1 - 0.9)^1 = 0.9
S11 = S0 + (Ф1(1), 1*C1) = {(1, 0)} + (0.9, 30) = {(0.9, 30)}
Ф1(2) = 1 - (1 - r1)^2 = 1 - (1 - 0.9)^2 = 0.99
S12 = S0 + (Ф1(2), 2*C1) = {(1, 0)} + (0.99, 2*30) = {(0.99, 60)}
Si = union of the Sij for j = 1, 2, ..., Ui
Thus S1 = S11 U S12 = {(0.9, 30)} U {(0.99, 60)} = {(0.9, 30), (0.99, 60)}
As U2 =3 we have to calculate Sij where i=2 and j=1,2… U2
Ф2(1) = 1 - (1 - r2)^1 = 1 - (1 - 0.8)^1 = 0.8
S21 = {(0.9, 30), (0.99, 60)} + (0.8, 15) = {(0.72, 45), (0.792, 75)}
Ф2(2) = 1 - (1 - r2)^2 = 1 - (1 - 0.8)^2 = 0.96
S22 = {(0.9, 30), (0.99, 60)} + (0.96, 2*15) = {(0.864, 60), (0.9504, 90)}
Ф2(3) = 1 - (1 - r2)^3 = 1 - (1 - 0.8)^3 = 0.992
S23 = {(0.9, 30), (0.99, 60)} + (0.992, 3*15) = {(0.8928, 75), (0.98208, 105)}
Thus S2 = S21 U S22 U S23
= {(0.72, 45), (0.792, 75)} U {(0.864, 60), (0.9504, 90)} U {(0.8928, 75), (0.98208, 105)}
= {(0.72, 45), (0.792, 75), (0.864, 60), (0.9504, 90), (0.8928, 75), (0.98208, 105)}
Applying the purging rule, (0.792, 75) is removed from S2, since (0.864, 60) gives higher reliability at lower cost. Thus
S2 = {(0.72, 45), (0.864, 60), (0.9504, 90), (0.8928, 75), (0.98208, 105)}
The pair (0.9504, 90) is also removed from S2: after spending 90, only 105 - 90 = 15 remains, which is less than the cost of device D3 ($20), so this pair cannot lead to a feasible solution. (The pair (0.98208, 105) could be removed for the same reason; in any case it is dropped in the next step because adding even one copy of D3 would exceed the cost constraint.) Thus
S2 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)}
As U3 = 3 we have to calculate S3j for j = 1, 2, ..., U3.
Ф3(1) = 1 - (1 - r3)^1 = 1 - (1 - 0.5)^1 = 0.5
S31 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.5, 20)
    = {(0.36, 65), (0.432, 80), (0.4464, 95)}   [the remaining pairs are not included, as their cost exceeds 105]
Ф3(2) = 1 - (1 - r3)^2 = 1 - (1 - 0.5)^2 = 0.75
S32 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.75, 40)
    = {(0.54, 85), (0.648, 100)}
Ф3(3) = 1 - (1 - r3)^3 = 1 - (1 - 0.5)^3 = 0.875
S33 = {(0.72, 45), (0.864, 60), (0.8928, 75), (0.98208, 105)} + (0.875, 60)
    = {(0.63, 105)}
Thus S3 = S31 U S32 U S33
= {(0.36, 65), (0.432, 80), (0.4464, 95)} U {(0.54, 85), (0.648, 100)} U {(0.63, 105)}
= {(0.36, 65), (0.432, 80), (0.4464, 95), (0.54, 85), (0.648, 100), (0.63, 105)}
Applying the purging rule, (0.4464, 95) is removed from S3 because (0.54, 85) dominates it. Thus
S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100), (0.63, 105)}
Applying the purging rule again, (0.63, 105) is removed from S3 because (0.648, 100) dominates it.
Thus S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}
After generating S3, select the pair with the highest reliability whose cost is within $105, i.e. (0.648, 100).
Since (0.648, 100) ∈ S3 and (0.648, 100) ∈ S32, device D3 is duplicated 2 times.
Now 100 - 2*20 = 60 (and 0.648 / 0.75 = 0.864), so select the pair with cost 60 in S2, i.e. (0.864, 60).
Since (0.864, 60) ∈ S2 and (0.864, 60) ∈ S22, device D2 is duplicated 2 times.
Now 60 - 2*15 = 30 (and 0.864 / 0.96 = 0.9), so select the pair with cost 30 in S1, i.e. (0.9, 30).
Since (0.9, 30) ∈ S1 and (0.9, 30) ∈ S11, device D1 is duplicated 1 time.
Thus the numbers of copies are (m1, m2, m3) = (1, 2, 2), giving system reliability 0.9 × 0.96 × 0.75 = 0.648 at a cost of $100.
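A small Python sketch of the set-generation method is given below (the function name reliability_design and the choice to carry the copy counts inside each pair, instead of tracing back through the Si sets, are my own; the leftover-budget pruning used in the hand calculation is also omitted, which does not affect the answer). For the example it returns reliability 0.648 with copies (1, 2, 2).

def reliability_design(r, c, budget):
    """Set-of-pairs reliability design: r and c give the reliability and cost of each device
    type and the total cost must not exceed budget. Returns (best reliability, copies)."""
    n = len(r)
    total = sum(c)
    u = [(budget + c[i] - total) // c[i] for i in range(n)]   # upper bound on copies of Di
    S = [[(1.0, 0, [])]]                                      # pairs (reliability, cost, copies so far)
    for i in range(n):
        nxt = []
        for j in range(1, u[i] + 1):
            phi = 1 - (1 - r[i]) ** j                         # reliability of stage i with j copies
            for (rel, cost, m) in S[i]:
                if cost + j * c[i] <= budget:
                    nxt.append((rel * phi, cost + j * c[i], m + [j]))
        nxt.sort(key=lambda t: (t[1], -t[0]))                 # purge dominated pairs
        purged = []
        for pair in nxt:
            if not purged or pair[0] > purged[-1][0]:
                purged.append(pair)
        S.append(purged)
    best = max(S[n], key=lambda t: t[0])
    return best[0], best[2]

print(reliability_design([0.9, 0.8, 0.5], [30, 15, 20], 105))   # reliability ~ 0.648, copies [1, 2, 2]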

Divide & Conquer vs Dynamic Programming

1. Divide & Conquer: the paradigm involves three steps at each level of the recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively (or directly, if they are small enough); and combine their solutions into the solution of the original problem.
   Dynamic Programming: the development of an algorithm can be broken into four steps: characterize the structure of an optimal solution; recursively define the value of an optimal solution; compute the value of an optimal solution in a bottom-up fashion; and construct an optimal solution from computed information.
2. Divide & Conquer: algorithms call themselves recursively one or more times to deal with closely related subproblems.
   Dynamic Programming: is typically implemented bottom-up (iteratively) rather than recursively.
3. Divide & Conquer: may solve the same subproblem repeatedly and hence does more work and takes more time.
   Dynamic Programming: solves each subproblem only once and stores its answer in a table.
4. Divide & Conquer: the subproblems are independent of each other.
   Dynamic Programming: the subproblems are not independent (they overlap).
5. Divide & Conquer: examples are Merge Sort and Binary Search.
   Dynamic Programming: an example is Matrix chain multiplication.
