DESIGN AND ANALYSIS OF ALGORITHMS
UNIT-V: DYNAMIC PROGRAMMING

Dynamic Programming: General method, Applications - Matrix chain multiplication, Optimal binary search trees, 0/1 knapsack problem, All pairs shortest path problem, Travelling salesperson problem, Reliability design.

INTRODUCTION
The drawback of the greedy method is that only one decision is made at a time. This can be overcome in dynamic programming, where a whole sequence of decisions is considered. Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions. Dynamic programming is applicable when the sub-problems are not independent, that is, when sub-problems share sub-sub-problems. A dynamic programming algorithm solves every sub-sub-problem just once and saves its answer in a table, thereby avoiding the work of re-computing the answer every time the sub-problem is encountered.

Definition: Dynamic programming is a technique in which the solution is obtained from a sequence of decisions.

General Method
The fundamental dynamic programming model may be written as

    F_n(R) = max { P_n(R_n) + F_{n-1}(R - R_n) },  the maximum taken over 0 <= R_n <= R,  n = 2, 3, 4, ...   ... (1)

Once F_1(R) is known, equation (1) provides a relation for evaluating F_2(R), F_3(R), ... . This recursive process ultimately leads to the value of F_{n-1}(R) and finally F_n(R), at which point the process stops.

A dynamic programming problem can be divided into a number of stages, where an optimal decision must be made at each stage. The decision made at each stage must take into account its effect not only on the next stage but on all subsequent stages. Dynamic programming provides a systematic procedure whereby, starting with the last stage of the problem and working backwards, one makes an optimal decision for each stage of the problem. The information for the last stage is the information derived from the previous stage.

Dynamic programming design involves 4 major steps:
1) Characterize the structure of an optimal solution.
2) Recursively define the value of an optimal solution.
3) Compute the value of an optimal solution in a bottom-up fashion.
4) Construct an optimal solution from the computed information.
(A small illustrative sketch of these four steps is given after the list of general characteristics below.)

The dynamic programming technique was developed by Bellman based upon his principle of optimality. This principle states that "An optimal policy has the property that, whatever the initial decisions are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

General Characteristics of Dynamic Programming
1) The problem can be divided into stages, with a policy decision required at each stage.
2) Each stage has a number of states associated with it.
3) Given the current stage, an optimal policy for the remaining stages is independent of the policy adopted in previous stages.
4) The solution procedure begins by finding the optimal policy for each state of the last stage.
5) A recursive relation is available which identifies the optimal policy for each stage with n stages remaining, given the optimal policy for each stage with (n-1) stages remaining.
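To make the four design steps concrete, here is a minimal bottom-up sketch in Python for the resource-allocation model of equation (1). The function name, the profit table and the specific numbers are invented purely for illustration; they are not part of the original notes.

    # Bottom-up DP for F_j(R) = max over r of { P_j(r) + F_{j-1}(R - r) }.
    # profit[j][r] = (hypothetical) net profit of giving r units to activity j.

    def allocate(profit, total_units):
        n = len(profit)                      # number of activities (stages)
        # F[j][R] = best profit using the first j activities with R units available
        F = [[0] * (total_units + 1) for _ in range(n + 1)]
        choice = [[0] * (total_units + 1) for _ in range(n + 1)]
        for j in range(1, n + 1):            # step 3: compute values bottom-up
            for R in range(total_units + 1):
                for r in range(R + 1):       # try every allocation r to activity j
                    value = profit[j - 1][r] + F[j - 1][R - r]
                    if value > F[j][R]:
                        F[j][R] = value
                        choice[j][R] = r
        # step 4: construct an optimal solution from the stored decisions
        alloc, R = [], total_units
        for j in range(n, 0, -1):
            alloc.append(choice[j][R])
            R -= choice[j][R]
        return F[n][total_units], list(reversed(alloc))

    # Hypothetical data: 3 activities, profit[j][r] for r = 0..4 units
    profit = [[0, 2, 4, 5, 5],
              [0, 1, 3, 6, 7],
              [0, 3, 4, 4, 4]]
    print(allocate(profit, 4))   # best total profit and the per-activity allocation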
APPLICATIONS OF DYNAMIC PROGRAMMING
1) Matrix chain multiplication
2) Optimal binary search trees
3) 0/1 knapsack problem
4) Multistage graphs
5) Travelling salesperson problem
6) Reliability design

1) MATRIX CHAIN MULTIPLICATION
Input: n matrices A1, A2, ..., An, where Ai has dimensions P(i-1) x P(i).
Goal: to compute the matrix product A1 A2 ... An.
Problem: in what order should A1 A2 ... An be multiplied so that the product is obtained with the minimum number of scalar multiplications?

Let A and B be two matrices of dimensions p x q and q x r, and let C = AB. Then C is of dimension p x r; each entry of C takes q scalar multiplications and q-1 scalar additions, so computing C takes p x q x r scalar multiplications in all.

Consider an example of the best way of multiplying 3 matrices. Let
    A1 be of dimensions 5 x 4
    A2 be of dimensions 4 x 6
    A3 be of dimensions 6 x 2
(A1 A2) A3 takes (5x4x6) + (5x6x2) = 120 + 60 = 180 multiplications.
A1 (A2 A3) takes (4x6x2) + (5x4x2) = 48 + 40 = 88 multiplications.
Thus A1 (A2 A3) is much cheaper to compute than (A1 A2) A3, although both lead to the same final answer. Hence the optimal cost is 88.

To solve this problem using the dynamic programming method, we perform the following steps.

Step 1: Let M(i,j) denote the cost of multiplying Ai ... Aj, where the cost is measured in the number of scalar multiplications. Here M(i,i) = 0 for all i, and M(1,n) is the required solution.

Step 2: The sequence of decisions can be built using the principle of optimality. Let T be the tree corresponding to the optimal way of multiplying Ai ... Aj. T has a left sub-tree L and a right sub-tree R; L corresponds to multiplying Ai ... Ak and R to multiplying A(k+1) ... Aj, for some integer k with i <= k <= j-1. These optimal sub-chains must themselves be multiplied optimally, which proves that matrix chain multiplication follows the principle of optimality.

Step 3: We apply the following formula for computing each value:

    M(i,j) = min over i <= k < j of { M(i,k) + M(k+1,j) + P(i-1) P(k) P(j) },   j > i

Computation of M(i,j)
Consider four matrices with dimension vector (P0, P1, P2, P3, P4) = (5, 4, 6, 2, 7), i.e. A1 is 5x4, A2 is 4x6, A3 is 6x2 and A4 is 2x7. The computations are given below.

For i=1, j=2, k=1:
    M(1,2) = M(1,1) + M(2,2) + P0 P1 P2 = 0 + 0 + 5x4x6 = 120
For i=2, j=3, k=2:
    M(2,3) = M(2,2) + M(3,3) + P1 P2 P3 = 0 + 0 + 4x6x2 = 48
For i=3, j=4, k=3:
    M(3,4) = M(3,3) + M(4,4) + P2 P3 P4 = 0 + 0 + 6x2x7 = 84

For i=1, j=3, k=1 or 2:
    k=1: M(1,3) = M(1,1) + M(2,3) + P0 P1 P3 = 0 + 48 + 5x4x2 = 48 + 40 = 88
    k=2: M(1,3) = M(1,2) + M(3,3) + P0 P2 P3 = 120 + 0 + 5x6x2 = 120 + 60 = 180
As we get the minimum value of M(1,3) when k=1, we set k=1 and M(1,3) = 88.

For i=2, j=4, k=2 or 3:
    k=2: M(2,4) = M(2,2) + M(3,4) + P1 P2 P4 = 0 + 84 + 4x6x7 = 84 + 168 = 252
    k=3: M(2,4) = M(2,3) + M(4,4) + P1 P3 P4 = 48 + 0 + 4x2x7 = 48 + 56 = 104
As we get the minimum value of M(2,4) when k=3, we set k=3 and M(2,4) = 104.

For i=1, j=4, k=1 or 2 or 3:
    k=1: M(1,4) = M(1,1) + M(2,4) + P0 P1 P4 = 0 + 104 + 5x4x7 = 104 + 140 = 244
    k=2: M(1,4) = M(1,2) + M(3,4) + P0 P2 P4 = 120 + 84 + 5x6x7 = 120 + 84 + 210 = 414
    k=3: M(1,4) = M(1,3) + M(4,4) + P0 P3 P4 = 88 + 0 + 5x2x7 = 88 + 70 = 158
As we get the minimum value of M(1,4) when k=3, we set k=3 and M(1,4) = 158.

Thus the optimum cost is 158. To obtain the optimum sequence for this cost we trace back the recorded k values: k=3 for M(1,4) and then k=1 for M(1,3). The optimal sequence is (A1 x (A2 x A3)) x A4. A short code sketch of this recurrence is given after the example.
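The recurrence above translates directly into code. The following Python sketch is not part of the original notes (the function names and the printed parenthesisation are my own); for the dimension vector (5, 4, 6, 2, 7) it reproduces the worked example, reporting a cost of 158 and the order (A1 x (A2 x A3)) x A4.

    # Bottom-up matrix chain ordering: M[i][j] = min cost of multiplying A_i..A_j,
    # where A_i has dimensions P[i-1] x P[i] (matrices 1-based, dimension list 0-based).

    def matrix_chain_order(P):
        n = len(P) - 1                            # number of matrices
        M = [[0] * (n + 1) for _ in range(n + 1)]
        split = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):            # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                M[i][j] = float("inf")
                for k in range(i, j):             # try every split point
                    cost = M[i][k] + M[k + 1][j] + P[i - 1] * P[k] * P[j]
                    if cost < M[i][j]:
                        M[i][j] = cost
                        split[i][j] = k
        return M, split

    def parenthesize(split, i, j):
        if i == j:
            return "A%d" % i
        k = split[i][j]
        return "(" + parenthesize(split, i, k) + " x " + parenthesize(split, k + 1, j) + ")"

    # Dimensions from the worked example: A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7
    M, split = matrix_chain_order([5, 4, 6, 2, 7])
    print(M[1][4], parenthesize(split, 1, 4))     # expected: 158 ((A1 x (A2 x A3)) x A4)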
As a check, the cost of the optimal order (A1 (A2 A3)) A4 is (4x6x2) + (5x4x2) + (5x2x7) = 48 + 40 + 70 = 158, which agrees with M(1,4). Hence the optimal sequence is (A1 (A2 A3)) A4.

Algorithm for Matrix Chain Multiplication

    Algorithm matrix_chain_multiplication(array P[1..n+1])
    // Ai has dimensions P[i] x P[i+1]
    {
        for i := 1 to n do
            M[i, i] := 0;                                   // initialisation
        for len := 2 to n do
            for i := 1 to (n - len + 1) do
            {
                j := i + len - 1;
                M[i, j] := infinity;
                for k := i to j - 1 do                      // check all splits
                {
                    q := M[i, k] + M[k+1, j] + P[i] * P[k+1] * P[j+1];
                    if (q < M[i, j]) then M[i, j] := q;
                }
            }
        return M[1, n];
    }

2) OPTIMAL BINARY SEARCH TREES
A binary search tree supports the following recursive search:

    Algorithm search(t, x)
    {
        if (t = 0) then return 0;
        else if (x = t->data) then return t;
        else if (x < t->data) then return search(t->left, x);
        else return search(t->right, x);
    }

Consider the possible binary search trees for the identifier set (a1, a2, a3) = (do, if, stop). Hence n = 3, and the number of possible binary search trees is

    (1/(n+1)) x C(2n, n) = (1x2x3x4x5x6) / ((1x2x3) x (1x2x3) x 4) = 20/4 = 5.

The sequences of keywords used to construct these trees include, for example, (if, do, stop), (do, if, stop), (stop, do, if) and (do, stop, if); the five resulting trees are labelled (a) to (e) in the figure (not reproduced here).

The cost of a tree T is

    cost(T) = sum over 1<=i<=n of p(i) x level(a_i)  +  sum over 0<=i<=n of q(i) x (level(E_i) - 1)

With p(i) = q(i) = 1/7 for every i, evaluating this formula for the five trees gives the costs 15/7, 13/7, 15/7, 15/7 and 15/7. The cost of tree (b) is minimum, hence it is the optimal binary search tree for these probabilities.

A binary search tree node has a maximum of two children, i.e. 0, 1 or 2 children.
    p(i) - the probability of searching for the internal node a_i. A successful search terminates at an internal node, denoted by a circle.
    q(i) - the probability that the search terminates at the external node E_i. An unsuccessful search terminates at an external node, denoted by a square.

    cost(T) = sum p(i) x level(a_i) + sum q(i) x (level(E_i) - 1)

Example 2: Let n = 4 and (a1, a2, a3, a4) = (do, if, read, while) be the internal nodes, with
    p(1:4) = (3, 3, 1, 1)
    q(0:4) = (2, 3, 1, 1, 1)
and external nodes E0, E1, E2, E3, E4. As there are 4 internal nodes, there are (1/5) x C(8,4) = 14 possible binary search trees, so instead of enumerating them we compute the optimal tree using the following quantities.

Let T(i,j) denote the optimal tree built over the identifiers a(i+1), ..., a(j) together with the external nodes E(i), ..., E(j); for example T(0,1) contains E0, a1, E1, and T(0,4) contains all four identifiers. Then

    w(i,j) = p(j) + q(j) + w(i, j-1)                                   -- weight of T(i,j)
    c(i,j) = min over i < k <= j of { c(i, k-1) + c(k, j) } + w(i,j)   -- cost of T(i,j)
    r(i,j) = the value of k at which this minimum is attained          -- root of T(i,j)

The principle of optimality is applied: the left and right sub-trees of an optimal tree must themselves be optimal.
The row j - i = 0 is initialised as w(i,i) = q(i), r(i,i) = 0, c(i,i) = 0. This means
    w(0,0) = q(0) = 2,  w(1,1) = q(1) = 3,  w(2,2) = q(2) = 1,  w(3,3) = q(3) = 1,  w(4,4) = q(4) = 1.

The remaining w values are calculated using w(i,j) = p(j) + q(j) + w(i, j-1):
    w(0,1) = p(1) + q(1) + w(0,0) = 3 + 3 + 2 = 8
    w(1,2) = p(2) + q(2) + w(1,1) = 3 + 1 + 3 = 7
    w(2,3) = p(3) + q(3) + w(2,2) = 1 + 1 + 1 = 3
    w(3,4) = p(4) + q(4) + w(3,3) = 1 + 1 + 1 = 3
    w(0,2) = p(2) + q(2) + w(0,1) = 3 + 1 + 8 = 12
    w(1,3) = p(3) + q(3) + w(1,2) = 1 + 1 + 7 = 9
    w(2,4) = p(4) + q(4) + w(2,3) = 1 + 1 + 3 = 5
    w(0,3) = p(3) + q(3) + w(0,2) = 1 + 1 + 12 = 14
    w(1,4) = p(4) + q(4) + w(1,3) = 1 + 1 + 9 = 11
    w(0,4) = p(4) + q(4) + w(0,3) = 1 + 1 + 14 = 16

The c and r values follow from c(i,j) = min over i < k <= j of { c(i,k-1) + c(k,j) } + w(i,j):
    c(0,1) = { c(0,0) + c(1,1) } + w(0,1) = 0 + 0 + 8 = 8,  attained when k = 1, so r(0,1) = 1
    c(1,2) = { c(1,1) + c(2,2) } + w(1,2) = 0 + 0 + 7 = 7,  attained when k = 2, so r(1,2) = 2
    c(2,3) = { c(2,2) + c(3,3) } + w(2,3) = 0 + 0 + 3 = 3,  attained when k = 3, so r(2,3) = 3
    c(3,4) = { c(3,3) + c(4,4) } + w(3,4) = 0 + 0 + 3 = 3,  attained when k = 4, so r(3,4) = 4

For k = 1 or 2:
    c(0,2) = min{ c(0,0) + c(1,2), c(0,1) + c(2,2) } + w(0,2) = min{ 0+7, 8+0 } + 12 = min{ 19, 20 } = 19,  r(0,2) = 1

For k = 2 or 3:
    c(1,3) = min{ c(1,1) + c(2,3), c(1,2) + c(3,3) } + w(1,3) = min{ 0+3, 7+0 } + 9 = 3 + 9 = 12,  r(1,3) = 2

For k = 3 or 4:
    c(2,4) = min{ c(2,2) + c(3,4), c(2,3) + c(4,4) } + w(2,4) = min{ 0+3, 3+0 } + 5 = 3 + 5 = 8,  r(2,4) = 3
The capacity remaining is m-w, and a profit of p; has earned, Ik is clear that the remaining decisions x, X, must be optimal wart, the problem state resulting from the decision of x, (i=1). Hence the principle of optimality holds ‘Suppose that a store contains different types of omaments, which ate made up of gold, Let nl, n2, n3 be ornaments, cost and weight of these omaments are cl, ¢2, c3 dollars wi, w2, w3 pounds respectively. Now a thief wants to rob the ornaments such that he should get maximum profit. In this the thief can’t place fraction of ornament in the bag, ie. either he can place omament completely in the bag or he can’t place omament So Xiz Oor I Xi-=0 means we can not place omament in the bag, Xi= 1 means we can place ornament completely in the bag This problem contains either 0 or 1, hence the problem is called 0/1 Knapsack problem Example : Consider the knapsack instance n=3, (wl, w2, w3) = @, 3, 4), (pl, p2, p3)= (1, 2, 5) and m=6 (pwn) = CL, 2) (p22) = (2, 3) (pws) = 6,4) ={0.0} as + (po) sh=s°+(p,m) addition =((0.0)}+((1.2)} 1 =€02)) sia sts! merging s°and s? results s' 5° = ((0,0))+£(1,2)} = {(0,0) (1,2)} sf =s'4(p,.0%) ((0,0),(1,2)}+{2,3} =(23).G5)} Bases 5° = ((0,0),(1,2)}+{(2,3),3,5)} (0,0), (1, 2), (2,3). (3,5)}; DrDSK —IINCSE--DAA —-UNIT-V_— Dynamic Programming Page 11 §} =s°+(py.w,) addition .(1,2),(2,3),3,5))+(5.4) {6,4),(6,6).(7.7),(8,9)) Ps) - -merging s° = {(0,0),(1,2),(2,3),,5))+((5,4),(6,6),(7,7).(8,9)} {€0,0),(1,2),(2,3),8,5),5,4), (6,6),(7,D,8.9)}5 Using purge rule(dominance rule) in the set S° on ordered pairs (3,5) (5,4) i.e. 3<5 and 5>4. so we can eliminate (3,5) from S* ‘As knapsack maximum capacity is 6 we can eliminate (7,7),(8,9) from $ Now S°= {(0,0),(1,2),(2,3),(5,4),(6,6)}. After applying purge rule we will check the following condition inorder to find solution If (p,,w,)e 5" and (p,,w,)é s"" then x, =1 otherwise x, =0 (6,6)e5° and (6,6)€8" so x,=1 (6,6)-G,4)=(1,2) (Des? U2¢es'— False x, (es! C2)¢s° True x, Maximum profitis Jo p,x,= pay + Pyx + py = Dd +2x045x1 = 145 = Optimal solution is (x1,x2x3) = (1,0,1) Max Profitis Yp.x, = 6 Dr.DSK IINCSE--DAA —-UNIT-V_— Dynamic Programming Page 12 4) MULTISTAGE GRAPH: A multi stage Graph G=(V,E) is a directed graph in which the vertices are partitioned into k>=2 disjoint sets, V, where 1<=i<=k. If is an edge in then u€vi, vEviar where I ‘The sets vj and vk are such that Iv = Ivyl= I. Let s and T be 2 vertices in vj and ve respectively then the vertex s is called the source and Tis called the Sink” or “Destination”. Let c(i.) be the cost of the edge . The cost of a path from S to T is the sum of the costs of edges on the path, The multistage graph problem is to find minimum cost path from $ to T, Each set Vi defines a stage in the graph. Because of the constraints on E, every path from s to t starts in stage 1, goes to stage2, then stage 3, then to stage 4 and so on. And eventually terminates in stage k. figure shows a five stage graph. A minimum cost s to t path is indicated by broken edges. Consider a resource allocation problem in which n units of resources are to be allocated to r projects. If j O<=j<=n, units of the resource are allocated to project I, then the resulting net profit is N(ij). The problem is to allocate the resource to the r projects is such a way as to maximize total net profit, This problem can be formulated as r+1 stage graph problem, A dynamic programming formulation for a k-stage graph problem is obtained by first noticing that every $ to T path is the result of k-2 decisions. 
The i decision involves in determining which vertex vj. where 1<=i<=k-2 is to be on the path, Let p(ij) be a minimum. cost path from vertex ‘j’ in vj to vertex “T’. Let cost(ij) be the cost of this path, This multistage graph problem can be solved using 2 approaches. 1) Forward approach 2) Backward approach Forward Approach: cost(ij) = min { ¢(j,) + cost(i+1,1)} TEV 1 €E STAGES Vi V2 Vs Vs Five-Stage Graph First compute cosi[k—2, jl¥je v,., and then compute cosi{k-3, f]Vje vy, and so on, and finally compute cost[1,S] ‘There are 5 stages in the graph V1 V2 V3 V4 V5 Stage 4 contains 3 vertices (9,10,11) cost of each vertex at stage 4 to the destination cost(4,9) ¢ 4 cost from vertex 9 to vertex 12 cost(4,10) 2 stage 4 cost from vertex 10 to vertex 12 cost(4,11) stage 4 cost from vertex II to vertex 12 DrDSK INCSE--DAA —-UNIT-V_— Dynamic Programming Page 13, Stage 1: costlk-2, jIViE vy. cost{3, {je v5 ={6.7.8} cost(3,6) means that 3 refers to stage. At stage 3 cost for vertex 6 to destination 7 £(6,10) + cost(4,10) cost(3,7) =miny (7.9) + cost(4,9) ‘{cer0) = costs 10) = 34255, 542 +5 cost(3,6) =min {eee + cost(4,9) cost(3,8) = min /c(8,10) + cost(4,10) Ye(8,11) + cost(4,11) cost(2,2) “| 4+ saa 1 2+ cost(3,7) 1 + cost(3,8) min(9+cost(2,2)) 16 Treost(2,3) 34e0st(2.4) 2+c0st(2,5) Note that in the calculation of cost(2,2), we have reused the values of cost (3,6), cost (3.7) and cost(3,8) and so avoided their recomputation. A minimum cost s to t path has a cost of 16. This path can be determined easily if we record the decision made at each stage (vertex). Backward approacl DrDSK —IINCSE--DAA —-UNIT-V_— Dynamic Programming Page 14 5)TRAVELLING SALES PERSON The tour of graph G is a directed simple cycle that includes every vertex in V. The cost of the tour is the sum of cost of the edges on the tour. The travelling sales- person problem is to find a tour of minimum cost. Applications: Suppose we have to route a postal van to pick up mail from mailbox located at ‘n’ different sites. An (n+1) vertex graph can be used to represent the situation, One vertex represents the post office from which the postal van starts and to which it must return. Edge is assigned a cost equal to the distance from site ‘T to site ‘j’. Route taken by the postal van is a tour and we are interested in finding tour of minimum length. Every tour consist of an edge <1, k> for some k € V-(1} and a path from vertex k to vertex 1, which goes through each vertex in V-{1,k} exactly once. It is easy to see that if the tour is optimal, then the path from k to I must be a shortest k to 1 path going through all vertices in V-(1, k}. Hence, the principle of optimality holds. Let g(iS) be the length of a shortest path starting at vertex ‘i’ going through all vertices in ‘S’ & terminating at vertex 1. The function g(J,V-(1}) is the length of an optimal salesperson tour. From the principle of optimality sLV (1) =min(C, + 9(k.V -(Lk))) Generalizing the above equation we obtain (for i# S) eG.S)=min(C, +eG.S-GD} 23 4 10, 2 20) = 9 49) ab oln, =e 0 Find g(1.(2,3,4)) VU1.2,3.4} Solution Stage 1) compute g(i,S) where IS|= 4, with no intermediate nodes. 
4) MULTISTAGE GRAPH
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k >= 2 disjoint sets V_i, 1 <= i <= k. If <u, v> is an edge of E, then u ∈ V_i and v ∈ V_{i+1} for some i. The sets V_1 and V_k are such that |V_1| = |V_k| = 1. Let s and t be the two vertices in V_1 and V_k respectively; s is called the source and t the sink (or destination). Let c(i,j) be the cost of edge <i,j>. The cost of a path from s to t is the sum of the costs of the edges on the path. The multistage graph problem is to find a minimum cost path from s to t. Each set V_i defines a stage in the graph. Because of the constraints on E, every path from s to t starts in stage 1, goes to stage 2, then stage 3, then stage 4, and so on, and eventually terminates in stage k. The figure shows a five-stage graph; a minimum cost s to t path is indicated by broken edges.

Consider a resource allocation problem in which n units of a resource are to be allocated to r projects. If j units of the resource, 0 <= j <= n, are allocated to project i, the resulting net profit is N(i,j). The problem of allocating the resource to the r projects so as to maximize the total net profit can be formulated as an (r+1)-stage graph problem.

A dynamic programming formulation for a k-stage graph problem is obtained by first noticing that every s to t path is the result of k-2 decisions. The i-th decision involves determining which vertex in V_{i+1}, 1 <= i <= k-2, is to be on the path. Let p(i,j) be a minimum cost path from vertex j in V_i to the vertex t, and let cost(i,j) be the cost of this path. The multistage graph problem can be solved using two approaches: 1) the forward approach and 2) the backward approach.

Forward approach:
    cost(i,j) = min over l in V_{i+1}, <j,l> in E of { c(j,l) + cost(i+1, l) }

First compute cost(k-2, j) for all j in V_{k-2}, then cost(k-3, j) for all j in V_{k-3}, and so on, and finally compute cost(1, s).

Example (five-stage graph with stages V1 ... V5, vertices 1 to 12, s = 1, t = 12; edge costs as shown in the figure):

Stage 4 contains the three vertices 9, 10 and 11. The cost of each stage-4 vertex to the destination is
    cost(4,9)  = c(9,12)  = 4      (cost from vertex 9 to vertex 12)
    cost(4,10) = c(10,12) = 2      (cost from vertex 10 to vertex 12)
    cost(4,11) = c(11,12) = 5      (cost from vertex 11 to vertex 12)

Stage 3 (cost(3,j) for j in {6, 7, 8}); cost(3,6) means the cost from vertex 6, at stage 3, to the destination:
    cost(3,6) = min{ c(6,9) + cost(4,9),  c(6,10) + cost(4,10) } = 7
    cost(3,7) = min{ c(7,9) + cost(4,9),  c(7,10) + cost(4,10) } = 5
    cost(3,8) = min{ c(8,10) + cost(4,10), c(8,11) + cost(4,11) } = 7

Stage 2 (cost(2,j) for j in {2, 3, 4, 5}):
    cost(2,2) = min{ 4 + cost(3,6), 2 + cost(3,7), 1 + cost(3,8) } = 7
    cost(2,3), cost(2,4) and cost(2,5) are computed in the same way.

Stage 1:
    cost(1,1) = min{ 9 + cost(2,2), 7 + cost(2,3), 3 + cost(2,4), 2 + cost(2,5) } = 16

Note that in the calculation of cost(2,2) we have reused the values of cost(3,6), cost(3,7) and cost(3,8) and so avoided their recomputation. A minimum cost s to t path has a cost of 16; the path itself can be determined easily if we record, at each stage (vertex), the decision made.

Backward approach: here bcost(i,j), the minimum cost of a path from s to a vertex j in V_i, is computed in increasing order of i using bcost(i,j) = min{ bcost(i-1, l) + c(l, j) }. A sketch of the forward approach is given below.
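A small Python sketch of the forward approach follows. The edge weights are not listed explicitly in the text (they come from the five-stage graph figure), so the dictionary used here is an assumption chosen to be consistent with the values derived above (cost(4,9) = 4, cost(4,10) = 2 and a minimum cost of 16); the function name is also my own.

    # Forward-approach multistage graph DP: cost(j) = min over edges (j, l) of c(j, l) + cost(l).

    def multistage_forward(n, edges, source=1, sink=None):
        sink = sink or n
        cost = {sink: 0}
        choice = {}
        for j in range(n - 1, 0, -1):                  # vertices in decreasing order
            best, best_l = None, None
            for (u, v), c in edges.items():
                if u == j and v in cost:
                    if best is None or c + cost[v] < best:
                        best, best_l = c + cost[v], v
            if best is not None:
                cost[j], choice[j] = best, best_l
        path, v = [source], source
        while v != sink:                               # recover the recorded decisions
            v = choice[v]
            path.append(v)
        return cost[source], path

    # Assumed edge costs of the five-stage, 12-vertex instance (from the figure)
    edges = {(1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
             (2, 6): 4, (2, 7): 2, (2, 8): 1, (3, 6): 2, (3, 7): 7,
             (4, 8): 11, (5, 7): 11, (5, 8): 8,
             (6, 9): 6, (6, 10): 5, (7, 9): 4, (7, 10): 3, (8, 10): 5, (8, 11): 6,
             (9, 12): 4, (10, 12): 2, (11, 12): 5}
    print(multistage_forward(12, edges))               # expected: (16, [1, 2, 7, 10, 12])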
5) TRAVELLING SALESPERSON PROBLEM
A tour of a graph G is a directed simple cycle that includes every vertex in V. The cost of the tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost.

Application: Suppose we have to route a postal van to pick up mail from mailboxes located at n different sites. An (n+1)-vertex graph can be used to represent the situation: one vertex represents the post office from which the postal van starts and to which it must return, and edge <i,j> is assigned a cost equal to the distance from site i to site j. The route taken by the postal van is a tour, and we are interested in finding a tour of minimum length.

Every tour consists of an edge <1, k> for some k ∈ V - {1} and a path from vertex k to vertex 1 which goes through each vertex in V - {1, k} exactly once. It is easy to see that if the tour is optimal, then the path from k to 1 must be a shortest k to 1 path going through all vertices in V - {1, k}. Hence the principle of optimality holds.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V - {1}) is the length of an optimal salesperson tour. From the principle of optimality,

    g(1, V - {1}) = min over 2 <= k <= n of { c(1,k) + g(k, V - {1, k}) }

Generalising this equation we obtain (for i ∉ S)

    g(i, S) = min over j ∈ S of { c(i,j) + g(j, S - {j}) }

Example 1: Find g(1, {2, 3, 4}) for the 4-vertex graph with cost matrix

        | 0  10 15 20 |
    C = | 5   0  9 10 |
        | 6  13  0 12 |
        | 8   8  9  0 |

Solution.
Stage 1) Compute g(i, S) for |S| = 0 (no intermediate vertices), i ∈ V - {1}:
    g(2, φ) = c21 = 5,  g(3, φ) = c31 = 6,  g(4, φ) = c41 = 8

Stage 2) Compute g(i, S) for |S| = 1 (one intermediate vertex), i ∈ V - {1}, i ≠ 1, 1 ∉ S, i ∉ S:
    g(2, {3}) = c23 + g(3, φ) = 9 + 6 = 15
    g(2, {4}) = c24 + g(4, φ) = 10 + 8 = 18
    g(3, {2}) = c32 + g(2, φ) = 13 + 5 = 18
    g(3, {4}) = c34 + g(4, φ) = 12 + 8 = 20
    g(4, {2}) = c42 + g(2, φ) = 8 + 5 = 13
    g(4, {3}) = c43 + g(3, φ) = 9 + 6 = 15

Stage 3) Compute g(i, S) for |S| = 2 (two intermediate vertices):
    g(2, {3,4}) = min{ c23 + g(3,{4}), c24 + g(4,{3}) } = min{ 9+20, 10+15 } = min{ 29, 25 } = 25
    g(3, {2,4}) = min{ c32 + g(2,{4}), c34 + g(4,{2}) } = min{ 13+18, 12+13 } = min{ 31, 25 } = 25
    g(4, {2,3}) = min{ c42 + g(2,{3}), c43 + g(3,{2}) } = min{ 8+15, 9+18 } = min{ 23, 27 } = 23

Stage 4) Compute g(1, S) for |S| = 3 (three intermediate vertices):
    g(1, {2,3,4}) = min{ c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3}) }
                  = min{ 10+25, 15+25, 20+23 } = min{ 35, 40, 43 } = 35

The optimal cost of the tour is 35. Tracing back: g(1,{2,3,4}) attains its minimum with c12 + g(2,{3,4}), so the tour starts at 1 and goes to 2; g(2,{3,4}) attains its minimum with c24 + g(4,{3}), so it goes from 2 to 4; g(4,{3}) = c43 + g(3, φ), so from 4 to 3; and finally from 3 back to 1. The sequence of the tour is 1 - 2 - 4 - 3 - 1.

Example 2: Travelling salesperson on five vertices: compute g(1, {2, 3, 4, 5}). (The edge costs are as given in the original figure, which is not reproduced here.)

Stage 1) |S| = 0, no intermediate vertices:
    g(2, φ) = c21 = 1,  g(3, φ) = c31 = 6,  g(4, φ) = c41 = 1,  g(5, φ) = c51 = 3

Stage 2) |S| = 1, one intermediate vertex; each value is g(i, {j}) = c_ij + g(j, φ), for example
    g(3, {2}) = c32 + g(2, φ) = 2 + 1 = 3
    g(3, {4}) = c34 + g(4, φ) = 2 + 1 = 3
    g(3, {5}) = c35 + g(5, φ) = 1 + 3 = 4
and the remaining g(i, {j}) values are obtained in the same way.

Stage 3) |S| = 2, two intermediate vertices; applying g(i, S) = min over j in S of { c_ij + g(j, S - {j}) } gives
    g(2,{3,4}) = 6,  g(2,{3,5}) = 7,  g(2,{4,5}) = 7
    g(3,{2,4}) = 4,  g(3,{2,5}) = 3,  g(3,{4,5}) = 3
    g(4,{2,3}) = 5,  g(4,{2,5}) = 4,  g(4,{3,5}) = 6
    g(5,{2,3}) = 5,  g(5,{2,4}) = 3,  g(5,{3,4}) = 5

Stage 4) |S| = 3, three intermediate vertices:
    g(2, {3,4,5}) = min{ c23 + g(3,{4,5}), c24 + g(4,{3,5}), c25 + g(5,{3,4}) } = min{ 3+3, 2+6, 5+5 } = 6
    g(3, {2,4,5}) = min{ c32 + g(2,{4,5}), c34 + g(4,{2,5}), c35 + g(5,{2,4}) } = min{ 2+7, 2+4, 1+3 } = 4
    g(4, {2,3,5}) = min{ c42 + g(2,{3,5}), c43 + g(3,{2,5}), c45 + g(5,{2,3}) } = min{ 1+7, 2+3, 2+5 } = 5
    g(5, {2,3,4}) = min{ c52 + g(2,{3,4}), c53 + g(3,{2,4}), c54 + g(4,{2,3}) } = min{ 1+6, 2+4, 1+5 } = 6

Stage 5) |S| = 4, four intermediate vertices:
    g(1, {2,3,4,5}) = min{ c12 + g(2,{3,4,5}), c13 + g(3,{2,4,5}), c14 + g(4,{2,3,5}), c15 + g(5,{2,3,4}) }
                    = min{ 2+6, 1+4, 2+5, 1+6 } = min{ 8, 5, 7, 7 } = 5

The optimal cost of the tour is 5. Tracing back as in Example 1: the minimum of g(1,{2,3,4,5}) is attained through vertex 3, so the tour starts at 1 and goes to 3; then from 3 to 5, from 5 to 2, from 2 to 4 (g(2,{4}) = c24 + g(4, φ)), and finally from 4 back to 1. The optimal tour is 1 - 3 - 5 - 2 - 4 - 1. A bitmask implementation of this recurrence is sketched below.
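The recurrence g(i, S) is usually implemented with the subsets of {2, ..., n} encoded as bitmasks. The sketch below does this for the cost matrix of Example 1; the function name and the encoding are assumptions of this sketch, not part of the notes.

    # Dynamic-programming TSP: g(i, S) = shortest path from i through all of S back to 1.

    def tsp(cost):
        n = len(cost)
        FULL = 1 << (n - 1)                        # subsets of {2, ..., n} as bits 0..n-2
        g = [[None] * n for _ in range(FULL)]
        for i in range(1, n):
            g[0][i] = cost[i][0]                   # g(i, empty set) = c_i1
        for S in range(1, FULL):
            for i in range(1, n):
                if S & (1 << (i - 1)):
                    continue                       # i must not be in S
                best = None
                for j in range(1, n):
                    if S & (1 << (j - 1)):         # try every j in S as the next city
                        cand = cost[i][j] + g[S & ~(1 << (j - 1))][j]
                        if best is None or cand < best:
                            best = cand
                g[S][i] = best
        # optimal tour: leave city 1, visit all of {2..n}, return to 1
        return min(cost[0][i] + g[(FULL - 1) & ~(1 << (i - 1))][i] for i in range(1, n))

    # Cost matrix from Example 1 (cities 1..4)
    C = [[0, 10, 15, 20],
         [5,  0,  9, 10],
         [6, 13,  0, 12],
         [8,  8,  9,  0]]
    print(tsp(C))                                  # expected: 35  (tour 1-2-4-3-1)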
6) RELIABILITY DESIGN
In this section we look at an example of how to use dynamic programming to solve a problem with a multiplicative optimization function. The problem is to design a system that is composed of several devices connected in series. Let r_i be the reliability of device D_i. Then the reliability of the entire system is the product of the r_i. Even if the individual devices are very reliable, the reliability of the whole system may not be very good; for example, if n = 10 and each r_i = 0.99, the product is about 0.904. Hence it is desirable to duplicate devices: multiple copies of the same device type are connected in parallel through the use of switching circuits. The switching circuits determine which devices in any given group are functioning properly and make use of one such device at each stage.

If stage i contains m_i copies of device D_i, then the probability that all m_i copies malfunction is (1 - r_i)^m_i. Hence the reliability of stage i becomes 1 - (1 - r_i)^m_i. Thus if r_i = 0.80 and m_i = 2, the stage reliability becomes 0.96.

(Figures: devices D_1, ..., D_n connected in series; multiple copies of each device connected in parallel within each stage.)

Our problem is to use device duplication to maximize reliability, and the maximization is to be carried out under a cost constraint. Let c_i be the cost of one copy of device D_i and c the maximum allowable cost of the system being designed. We wish to solve the following maximization problem:

    maximize   product over 1<=i<=n of phi_i(m_i)              (reliability function)
    subject to sum over 1<=i<=n of c_i m_i <= c,  m_i >= 1 and integer, 1 <= i <= n

where phi_i(m_i) = 1 - (1 - r_i)^m_i is the reliability of stage i. Each m_i must be in the range 1 <= m_i <= u_i, where the upper bound u_i (the maximum number of copies that can be kept in one stage) is

    u_i = floor( (c + c_i - sum over 1<=j<=n of c_j) / c_i )

Let f_i(x) be the maximum reliability obtainable from stages 1 through i using a total cost of at most x. Then f_i(x) = max over 1 <= m_i <= u_i of { phi_i(m_i) f_{i-1}(x - c_i m_i) }, and clearly f_0(x) = 1 for all x, 0 <= x <= c. The problem may be solved using an approach similar to that used for the knapsack problem.

Example: We are to design a three-stage system with device types D1, D2 and D3. The costs are $30, $15 and $20 respectively, the cost of the system is to be no more than $105, and the reliabilities of the device types are 0.9, 0.8 and 0.5 respectively. We assume that if stage i has m_i devices of type i in parallel, then phi_i(m_i) = 1 - (1 - r_i)^m_i.

In the notation used earlier, c1 = 30, c2 = 15, c3 = 20, c = 105, r1 = 0.9, r2 = 0.8, r3 = 0.5.

Upper bounds:
    u1 = floor( (105 + 30 - (30 + 15 + 20)) / 30 ) = floor( 70/30 ) = floor(2.333) = 2
    u2 = floor( (105 + 15 - (30 + 15 + 20)) / 15 ) = floor( 55/15 ) = floor(3.666) = 3
    u3 = floor( (105 + 20 - (30 + 15 + 20)) / 20 ) = floor( 60/20 ) = 3

Each pair in the sets below is written (r, x), where r is the reliability and x is the cost. S^i_j represents the tuples obtainable from S^{i-1} by choosing m_i = j, and S^i is obtained by merging the S^i_j and purging dominated pairs. We begin with

    S^0 = {(1, 0)}        (initial assumption: as there are no devices yet, the reliability is 1 and the cost is 0).

Proceeding stage by stage in the same manner gives S^1 = {(0.9, 30), (0.99, 60)} and, after purging, S^2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}. S^3 is then obtained by merging the sets S^3_1, S^3_2 and S^3_3:

    S^3 = {(0.36, 65), (0.432, 80), (0.4464, 95), (0.54, 85), (0.648, 100), (0.63, 105)}

Now apply the purge (dominance) rule. Comparing the successive tuples (0.4464, 95) and (0.54, 85), the second dominates the first because it has more reliability at less cost, so (0.4464, 95) is eliminated. Comparing the successive tuples (0.648, 100) and (0.63, 105), the first dominates the second, so (0.63, 105) is eliminated. Hence

    S^3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}

The best design has a reliability of 0.648 and a cost of 100.

Tracing back:
    The pair (0.648, 100) is in S^3_2, i.e. i = 3 and j = 2, so there are two devices at stage 3: m3 = 2, at a cost of 40.
    The pair (0.648, 100) was obtained from (0.864, 60), which is in S^2_2, so m2 = 2: two devices at stage 2, at a cost of 15 x 2 = 30.
    The pair (0.864, 60) was obtained from (0.9, 30), which is in S^1_1, so m1 = 1: one device at stage 1, at a cost of 30.
So m1 = 1, m2 = 2, m3 = 2.

An alternate method for finding the solution to the above problem: the tuple with the maximum reliability in S^3 represents the value of the solution, so the reliability of the system is 0.648 at a cost of 100. To get the values of m1, m2 and m3 we trace back. The pair (0.648, 100) ∈ S^3_2, so i = 3, j = 2, m3 = 2; as there are 2 copies of device D3 at stage 3, subtract their cost from the total cost: 100 - 2 x 20 = 60. Search for a pair with cost 60 in S^2_1, S^2_2 or S^2_3: (0.864, 60) ∈ S^2_2, so m2 = 2; subtract the cost of 2 copies of device type D2: 60 - 2 x 15 = 30. Now search for a pair with cost 30 in S^1_1 or S^1_2: (0.9, 30) ∈ S^1_1, so m1 = 1. Hence m1 = 1, m2 = 2, m3 = 2. A small sketch of this set-merging computation is given below.
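The set-merging computation above can be sketched in Python as follows. The function name and the feasibility test used while generating candidates are my own choices, but the upper-bound formula, the stage reliability 1 - (1 - r_i)^m and the dominance rule are the ones described above.

    # Each S^i is a list of (reliability, cost) pairs; using m copies at stage i
    # multiplies the system reliability by 1 - (1 - r_i)^m and adds m*c_i to the cost.

    def reliability_design(c, r, budget):
        n = len(c)
        # upper bound on the number of copies at stage i (same formula as above)
        u = [(budget + c[i] - sum(c)) // c[i] for i in range(n)]
        S = [(1.0, 0)]                                    # S^0 = {(1, 0)}
        for i in range(n):
            candidates = []
            for rel, cost in S:
                for m in range(1, u[i] + 1):
                    new_cost = cost + m * c[i]
                    # must still be able to afford one copy of every remaining stage
                    if new_cost + sum(c[i + 1:]) <= budget:
                        candidates.append((rel * (1 - (1 - r[i]) ** m), new_cost))
            # dominance (purging) rule: sort by cost, keep strictly increasing reliability
            candidates.sort(key=lambda t: (t[1], -t[0]))
            S, best = [], -1.0
            for rel, cost in candidates:
                if rel > best:
                    S.append((rel, cost))
                    best = rel
        return max(S)                                     # pair with the best reliability

    # Instance from the example: costs (30, 15, 20), budget 105, reliabilities (0.9, 0.8, 0.5)
    print(reliability_design([30, 15, 20], [0.9, 0.8, 0.5], 105))   # about (0.648, 100)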
7) ALL PAIRS SHORTEST PATH PROBLEM
Let G = (V, E) be a directed graph consisting of n vertices, each edge being associated with a weight. The problem of finding the shortest path between all pairs of vertices in a graph is called the all pairs shortest path problem. This problem can be solved using the dynamic programming technique. The all pairs shortest path problem is to determine a matrix A such that A(i,j) is the length of a shortest path from vertex i to j. Assume that this path contains no cycles.

If k is an intermediate vertex on this path, then the sub-paths from i to k and from k to j must themselves be shortest paths from i to k and from k to j respectively; otherwise the path from i to j would not be a shortest path. If k is the intermediate vertex with the highest index, then the path from i to k is a shortest path going through no vertex with index greater than k-1; similarly, the path from k to j is a shortest path going through no vertex with index greater than k-1. The shortest paths can therefore be computed using the following recurrence:

    A^0(i,j) = W(i,j)
    A^k(i,j) = min{ A^{k-1}(i,j), A^{k-1}(i,k) + A^{k-1}(k,j) },   k >= 1

Example: Compute all pairs shortest paths for the following graph with cost adjacency matrix

                            | 0  4  11 |
    A^0(i,j) = W(i,j) =     | 6  0   2 |
                            | 3  oo  0 |

Step 1 (k = 1, i.e. going from i to j through intermediate vertex 1):
    A^1(1,1) = 0,  A^1(1,2) = min{4, 0+4} = 4,     A^1(1,3) = min{11, 0+11} = 11
    A^1(2,1) = min{6, 6+0} = 6,  A^1(2,2) = 0,     A^1(2,3) = min{2, 6+11} = 2
    A^1(3,1) = min{3, 3+0} = 3,  A^1(3,2) = min{oo, 3+4} = 7,  A^1(3,3) = 0

                | 0  4  11 |
    A^1(i,j) =  | 6  0   2 |
                | 3  7   0 |

Step 2 (k = 2, going through intermediate vertex 2):
    A^2(1,1) = 0,  A^2(1,2) = 4,  A^2(1,3) = min{11, 4+2} = 6
    A^2(2,1) = 6,  A^2(2,2) = 0,  A^2(2,3) = 2
    A^2(3,1) = 3,  A^2(3,2) = 7,  A^2(3,3) = 0

                | 0  4  6 |
    A^2(i,j) =  | 6  0  2 |
                | 3  7  0 |

Step 3 (k = 3, going through intermediate vertex 3):
    A^3(1,1) = 0,  A^3(1,2) = min{4, 6+7} = 4,  A^3(1,3) = 6
    A^3(2,1) = min{6, 2+3} = 5,  A^3(2,2) = 0,  A^3(2,3) = 2
    A^3(3,1) = 3,  A^3(3,2) = 7,  A^3(3,3) = 0

                | 0  4  6 |
    A^3(i,j) =  | 5  0  2 |
                | 3  7  0 |

This matrix gives the all pairs shortest path solution.

    Algorithm all_pairs_shortest_path(W, A, n)
    // W is the weighted adjacency matrix, n is the number of vertices,
    // A[i,j] is the cost of a shortest path from vertex i to j.
    {
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := W[i, j];
        for k := 1 to n do
            for i := 1 to n do
                for j := 1 to n do
                    A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
    }

The time complexity of this method is O(n^3). A Python sketch of this algorithm is given below.
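The algorithm above is easy to express in Python. The in-place variant below (names are my own, not from the notes) reproduces the example's final matrix A^3.

    # All-pairs shortest paths: the same triple loop as the pseudocode above;
    # INF marks a missing edge.

    INF = float("inf")

    def all_pairs_shortest_path(W):
        n = len(W)
        A = [row[:] for row in W]                 # A^0 = W
        for k in range(n):                        # allow vertex k as an intermediate
            for i in range(n):
                for j in range(n):
                    if A[i][k] + A[k][j] < A[i][j]:
                        A[i][j] = A[i][k] + A[k][j]
        return A

    # Cost adjacency matrix of the example graph (vertices 1, 2, 3)
    W = [[0,   4, 11],
         [6,   0,  2],
         [3, INF,  0]]
    for row in all_pairs_shortest_path(W):
        print(row)          # expected rows: [0, 4, 6], [5, 0, 2], [3, 7, 0]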
Important Questions
1) Devise an algorithm to find the optimal order of multiplying n matrices using the dynamic programming technique.
2) Write an algorithm for matrix chain multiplication.
3) Write pseudo code of the dynamic programming algorithm for solving the optimal binary search tree problem.
4) Using the OBST algorithm, compute w(i,j), r(i,j) and c(i,j), 0 <= i <= j <= 4, for the identifier set (a1, a2, a3, a4) = (cout, float, int, while) with p1 = 1/20, p2 = 1/5, p3 = 1/10, p4 = 1/20 and q0 = 1/5, q1 = 1/10, q2 = 1/5, q3 = 1/20, q4 = 1/20. Using r(i,j), construct the optimal binary search tree.
5) Write an algorithm for the all pairs shortest path problem.
6) Discuss the dynamic programming solution for the travelling salesperson problem.
7) Discuss the dynamic programming solution for the reliability design problem.
8) Design a system with 3 components D1, D2, D3 costing Rs.30, Rs.15 and Rs.30 respectively. The cost of the system is to be no more than Rs.105. The reliability of each device type is 0.9, 0.8 and 0.5 respectively.
9) Discuss the 0/1 knapsack problem. Consider (p1, p2, p3) = (1, 2, 5), (w1, w2, w3) = (2, 3, 4) and m = 6.
