BCA Semester IV Design & Analysis of Algorithms Module 4
Dynamic programming, like the greedy method, is a powerful algorithm design technique that can
be used when the solution to a problem may be viewed as the result of a sequence of
decisions.
• Dynamic Programming algorithm solves each sub-problem just once and then saves
its answer in a table, thereby avoiding the work of re-computing the answer every
time.
• Two main properties of a problem suggest that the given problem can be solved using
Dynamic Programming.
• These properties are overlapping sub-problems and optimal substructure.
• Overlapping Sub-Problems
• Similar to the Divide-and-Conquer approach, Dynamic Programming also combines
solutions to sub-problems.
• The computed solutions are stored in a table, so that they do not have to be re-computed.
Hence, this technique is useful where overlapping sub-problems exist.
• For example, Binary Search has no overlapping sub-problems, whereas a recursive
program for Fibonacci numbers has many overlapping sub-problems.
Optimal Sub-Structure
• A given problem has Optimal Substructure Property, if the optimal solution of the
given problem can be obtained using optimal solutions of its sub-problems.
• For example, the Shortest Path problem has the following optimal substructure
property −
• If a node x lies in the shortest path from a source node u to destination node v, then
the shortest path from u to v is the combination of the shortest path from u to x, and
the shortest path from x to v.
• The principle of optimality states that no matter what the initial state and initial
decision are, the remaining decisions must constitute an optimal decision sequence
with regard to the state resulting from the first decision.
Steps of Dynamic Programming Approach
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
1. MULTISTAGE GRAPHS
1. The multistage graph problem is to find a minimum-cost path from a source to a sink.
2. A multistage graph is a directed graph whose vertices are divided into a number of
stages, and whose edges connect vertices in consecutive stages only.
3. In a multistage graph, the vertex whose in-degree is 0 is known as the source, and the
vertex whose out-degree is 0 is known as the sink (destination) vertex.
4. A multistage graph G = (V, E) is a directed graph whose vertices are partitioned
into k (where k > 1) disjoint subsets S = {s1, s2, …, sk} such that if edge (u, v) is
in E, then u ∈ si and v ∈ si+1 for some i, and |s1| = |sk| = 1.
5. The vertex s ∈ s1 is called the source and the vertex t ∈ sk is called the sink.
6. G is usually assumed to be a weighted graph, where the cost of an edge (i, j) is
represented by c(i, j). The cost of a path from source s to sink t is the sum of the
costs of the edges on that path.
7. The multistage graph problem is to find a path of minimum cost from source s to
sink t.
Example: Consider the following example to understand the concept of a multistage graph.
According to the forward approach, we calculate cost(i, j) using the recurrence

cost(i, j) = min {c(j, l) + cost(i + 1, l)}, where l ∈ Vi+1 and <j, l> ∈ E

In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three options to
choose the minimum cost at this step.
Two nodes are selected as j because at stage k - 3 = 2 there are two nodes, 2 and 3. So the
values are i = 2 and j = 2, 3.
Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9),
c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10
Step-3: Cost(k - 4, j)
Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9)} = 12
1. Algorithm Fgraph(G, k, n, p)
2. // The input is a k-stage graph G = (V, E) with n vertices indexed in order of stages.
   // E is a set of edges and c[i, j] is the cost of edge (i, j). p[1 : k] is a minimum-cost path.
3. {
4.     cost[n] := 0.0;
5.     for j := n - 1 to 1 step -1 do
6.     { // compute cost[j]
7.         let r be a vertex such that (j, r) is an edge of G and c[j, r] + cost[r] is minimum;
8.         cost[j] := c[j, r] + cost[r];
9.         d[j] := r;
10.    }
11.    // Find a minimum-cost path.
12.    p[1] := 1; p[k] := n;
13.    for j := 2 to k - 1 do p[j] := d[p[j - 1]];
14. }
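The forward approach above can be sketched in Python; a minimal sketch for a graph given as a dictionary of edge costs (the 4-vertex example graph and all names are illustrative, not from the notes):

```python
def multistage_forward(n, edges):
    """Forward approach: cost[j] = min over edges (j, r) of c(j, r) + cost[r].

    n     -- number of vertices, numbered 1..n in stage order; 1 is the source, n the sink
    edges -- dict mapping (u, v) -> cost of edge u -> v
    Returns (minimum cost from 1 to n, one minimum-cost path).
    """
    INF = float("inf")
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)              # d[j] = successor of j on a cheapest j -> n path
    cost[n] = 0
    for j in range(n - 1, 0, -1):  # vertices in reverse topological order
        for (u, v), c in edges.items():
            if u == j and c + cost[v] < cost[j]:
                cost[j] = c + cost[v]
                d[j] = v
    path = [1]
    while path[-1] != n:           # follow successor links to the sink
        path.append(d[path[-1]])
    return cost[1], path

# A small 3-stage example: 1 -> {2, 3} -> 4
example = {(1, 2): 2, (1, 3): 5, (2, 4): 4, (3, 4): 1}
print(multistage_forward(4, example))   # -> (6, [1, 2, 4])
```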
The multistage graph problem can also be solved using the backward approach. Let bp(i, j) be a
minimum-cost path from vertex s to a vertex j in Vi, and let Bcost(i, j) be the cost of bp(i, j).
From the backward approach we obtain:

Bcost(i, j) = min {Bcost(i - 1, l) + c(l, j)}, where l ∈ Vi-1 and <l, j> ∈ E
1. Algorithm Bgraph(G, k, n, p)
2. // Same function as Fgraph.
3. {
4.     Bcost[1] := 0.0;
5.     for j := 2 to n do
6.     { // Compute Bcost[j].
7.         let r be such that (r, j) is an edge of G and Bcost[r] + c[r, j] is minimum;
8.         Bcost[j] := Bcost[r] + c[r, j];
9.         d[j] := r;
10.    }
11.    // Find a minimum-cost path.
12.    p[1] := 1; p[k] := n;
13.    for j := k - 1 to 2 step -1 do p[j] := d[p[j + 1]];
14. }
BACKWARD APPROACH:
We use the following equation to compute the cost of a minimum-cost path from s to each vertex j:

Bcost(i, j) = min {Bcost(i - 1, l) + c(l, j)}, where l ∈ Vi-1 and <l, j> ∈ E
Bcost(2, 2) = min {Bcost(1, 1) + c(1, 2)} = min {0 + 9} = 9
Bcost(2, 3) = min {Bcost(1, 1) + c(1, 3)} = min {0 + 7} = 7
Bcost(3, 6) = min {Bcost(2, 2) + c(2, 6), Bcost(2, 3) + c(3, 6)} = min {9 + 4, 7 + 2} = 9
Bcost(3, 7) = min {Bcost(2, 2) + c(2, 7), Bcost(2, 3) + c(3, 7), Bcost(2, 5) + c(5, 7)}
            = min {9 + 2, 7 + 7, 2 + 11} = min {11, 14, 13} = 11
Bcost(3, 8) = min {Bcost(2, 2) + c(2, 8), Bcost(2, 4) + c(4, 8), Bcost(2, 5) + c(5, 8)}
            = min {9 + 1, 3 + 11, 2 + 8} = min {10, 14, 10} = 10
Bcost(4, 9) = min {Bcost(3, 6) + c(6, 9), Bcost(3, 7) + c(7, 9)} = min {9 + 6, 11 + 4} = min {15, 15} = 15
Bcost(4, 10) = min {Bcost(3, 6) + c(6, 10), Bcost(3, 7) + c(7, 10), Bcost(3, 8) + c(8, 10)}
             = min {9 + 5, 11 + 3, 10 + 5} = min {14, 14, 15} = 14
Bcost(4, 11) = min {Bcost(3, 8) + c(8, 11)} = min {10 + 6} = 16
Bcost(5, 12) = min {Bcost(4, 9) + c(9, 12), Bcost(4, 10) + c(10, 12), Bcost(4, 11) + c(11, 12)}
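Bgraph can be sketched the same way; a minimal Python sketch of the backward approach (the small graph below is illustrative, not the 12-vertex example):

```python
def multistage_backward(n, edges):
    """Backward approach: Bcost[j] = min over edges (r, j) of Bcost[r] + c(r, j).

    Vertices are numbered 1..n in stage order; 1 is the source, n is the sink.
    edges maps (u, v) -> cost. Returns (min cost from 1 to n, a cheapest path).
    """
    INF = float("inf")
    Bcost = [INF] * (n + 1)
    d = [0] * (n + 1)          # d[j] = predecessor of j on a cheapest 1 -> j path
    Bcost[1] = 0
    for j in range(2, n + 1):
        for (u, v), c in edges.items():
            if v == j and Bcost[u] + c < Bcost[j]:
                Bcost[j] = Bcost[u] + c
                d[j] = u
    path = [n]
    while path[-1] != 1:       # follow predecessor links back to the source
        path.append(d[path[-1]])
    return Bcost[n], path[::-1]

example = {(1, 2): 2, (1, 3): 5, (2, 4): 4, (3, 4): 1}
print(multistage_backward(4, example))   # -> (6, [1, 2, 4])
```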
2. ALL PAIRS SHORTEST PATHS
• The all-pairs shortest path problem is to determine a matrix A such that A(i, j) is the
length of a shortest path from i to j.
• The shortest i-to-j path in G, i ≠ j, originates at vertex i, goes through some
intermediate vertices (possibly none), and terminates at vertex j.
• If k is an intermediate vertex on this shortest path, then the subpaths from i to k and
from k to j must be shortest paths from i to k and k to j, respectively.
Let Ak(i, j) represent the length of a shortest path from i to j going through no vertex of index
greater than k. With A0(i, j) = c(i, j), we obtain:

Ak(i, j) = min {Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)}, 1 ≤ k ≤ n
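The recurrence above is implemented by the Floyd-Warshall algorithm; a minimal Python sketch (the 3-vertex cost matrix below is illustrative):

```python
def all_pairs_shortest_paths(cost):
    """Floyd-Warshall: for each intermediate vertex k, update
    A[i][j] = min(A[i][j], A[i][k] + A[k][j]), realizing the Ak recurrence."""
    n = len(cost)
    A = [row[:] for row in cost]       # A^0 = c
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

INF = float("inf")
c = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
print(all_pairs_shortest_paths(c))   # -> [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```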
3. SINGLE SOURCE SHORTEST PATHS
The edges have assigned weights, which may be either the distance between the two cities
connected by the edge or the average time to drive along that section of highway.
For example, if a motorist wishes to drive from city A to city B, then we must answer
questions such as: is there a path from A to B, and if there is more than one, which path is
the shortest? The length of a path is defined to be the sum of the weights of the edges on
that path.
Given a directed graph G(V, E) with edge weights w(u, v), we have to find a shortest path
from a source vertex s ∈ V to every other vertex v ∈ V - {s}.
To find single-source shortest paths (SSSP) in a directed graph G(V, E), there are two
different algorithms:
Bellman-Ford Algorithm
Dijkstra's algorithm
Bellman-Ford Algorithm:- allows negative-weight edges in the input graph. This algorithm
either finds a shortest path from the source vertex s ∈ V to every other vertex v ∈ V, or
detects a negative-weight cycle in G, in which case there is no solution. If no negative-weight
cycle is reachable from the source vertex s ∈ V, shortest paths exist to every other vertex v ∈ V.
Dijkstra's algorithm:- allows only non-negative edge weights in the input graph and finds a
shortest path from the source vertex s ∈ V to every other vertex v ∈ V.
Consider the above directed graph: if node 1 is the source vertex, then the shortest path from 1 to
2 is 1, 4, 5, 2. Its length is 10 + 15 + 20 = 45.
To formulate a greedy-based algorithm to generate the shortest paths, we must conceive of
a multistage solution to the problem and also of an optimization measure.
As an optimization measure we can use the sum of the lengths of all paths generated so far.
If we have already constructed i shortest paths, then using this optimization measure, the
next path to be constructed should be the next shortest (minimum-length) path.
The greedy way to generate the shortest paths from v0 to the remaining vertices is to
generate these paths in non-decreasing order of path length.
For this, first a shortest path to the nearest vertex is generated; then a shortest path to the
2nd nearest vertex is generated, and so on.
Algorithm for finding Shortest Path
1. Algorithm ShortestPath(v, cost, dist, n)
2. // dist[j], 1 ≤ j ≤ n, is set to the length of the shortest path from vertex v to
   // vertex j in a graph G with n vertices; dist[v] is set to zero.
3. {
4.     for i := 1 to n do
5.     { // initialize s
6.         s[i] := false; dist[i] := cost[v, i];
7.     }
8.     s[v] := true; dist[v] := 0.0; // put v in s
9.     for num := 2 to n do
10.    { // determine n - 1 paths from v
11.        choose u from among those vertices not in s such that dist[u] is minimum;
12.        s[u] := true; // put u in s
13.        for (each w adjacent to u with s[w] = false) do // update distances
14.            if (dist[w] > dist[u] + cost[u, w]) then dist[w] := dist[u] + cost[u, w];
15.    }
16. }
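The pseudocode above can be sketched in Python; a minimal O(n²) sketch of Dijkstra's algorithm on a cost matrix (the matrix values below are illustrative):

```python
def shortest_path(v, cost, n):
    """Dijkstra's algorithm on an n x n cost matrix (vertices 0-indexed here).
    cost[i][j] is float('inf') where there is no edge i -> j.
    Returns dist, where dist[j] is the length of a shortest path from v to j."""
    s = [False] * n
    dist = [cost[v][i] for i in range(n)]
    s[v] = True
    dist[v] = 0
    for _ in range(n - 1):                   # determine n - 1 paths from v
        # choose u not in s such that dist[u] is minimum
        u = min((i for i in range(n) if not s[i]), key=lambda i: dist[i])
        s[u] = True
        for w in range(n):                   # relax edges leaving u
            if not s[w] and dist[u] + cost[u][w] < dist[w]:
                dist[w] = dist[u] + cost[u][w]
    return dist

INF = float("inf")
c = [[0, 10, INF, 5],
     [INF, 0, 1, 2],
     [INF, INF, 0, INF],
     [INF, 3, 9, 0]]
print(shortest_path(0, c, 4))   # -> [0, 8, 9, 5]
```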
Example
Using vertex 5 as the source (setting its distance to 0), we initialize all the other
distances to ∞.
Iteration 1: Edges (u5, u2) and (u5, u4) relax, updating the distances to 2 and 4.
Iteration 2: Edges (u2, u1), (u4, u2) and (u4, u3) relax, updating the distances to 1, 2,
and 4 respectively. Note that edge (u4, u2) finds a shorter path to vertex 2 by going
through vertex 4.
Iteration 3: Edge (u2, u1) relaxes again (since a shorter path to vertex 2 was found in the
previous iteration), updating the distance to 1.
Iteration 4: No edges relax
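The relaxation iterations above follow the Bellman-Ford scheme; a minimal Python sketch (the edge list below is illustrative, not the graph from the figure):

```python
def bellman_ford(n, edges, src):
    """Bellman-Ford: relax every edge n - 1 times; if a further pass still
    improves some distance, a negative-weight cycle is reachable from src.

    edges -- list of (u, v, w) triples; vertices are 0..n-1.
    Returns dist, or None if a negative-weight cycle is detected."""
    dist = [float("inf")] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # detection pass
        if dist[u] + w < dist[v]:
            return None                    # negative-weight cycle
    return dist

e = [(0, 1, 6), (0, 2, 7), (1, 3, 5), (2, 1, -3), (2, 3, 4)]
print(bellman_ford(4, e, 0))   # -> [0, 4, 7, 9]
```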
Complexity

Algorithm                  Time complexity
Bellman-Ford algorithm     O(V * E)
Dijkstra's algorithm       O(V^2) with an array; O(E log V) with a binary heap
4. TRAVELLING SALESPERSON PROBLEM (TSP)
Travelling Salesman Problem (TSP): Given a set of cities and the distance between every pair
of cities, the problem is to find the shortest possible route that visits every city exactly once
and returns to the starting point.
• Let us consider a graph G = (V, E), where V is a set of cities and E is a set of
weighted edges. An edge e(u, v) represents that vertices u and v are connected.
Distance between vertex u and v is d(u, v), which should be non-negative.
• Suppose we have started at city 1 and after visiting some cities now we are in city j.
Hence, this is a partial tour. We certainly need to know j, since this will determine
which cities are most convenient to visit next. We also need to know all the cities
visited so far, so that we don't repeat any of them. Hence, this is an appropriate sub-
problem.
• For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the
length of the shortest path visiting each node in S exactly once, starting at 1 and ending
at j.
• When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
• Now let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and
end at j, so we should select the next-to-last city i in such a way that

C(S, j) = min {C(S - {j}, i) + d(i, j)}, where i ∈ S and i ≠ j
Algorithm: Traveling-Salesman-Problem
1. C({1}, 1) = 0
2. for s = 2 to n do
3.     for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1 do
4.         C(S, 1) = ∞
5.         for all j ∈ S and j ≠ 1 do
6.             C(S, j) = min {C(S - {j}, i) + d(i, j) for i ∈ S and i ≠ j}
7. return min over j of C({1, 2, 3, ..., n}, j) + d(j, 1)
Analysis
There are at most 2^n * n sub-problems, and each one takes linear time to solve.
Therefore, the total running time is O(2^n * n^2).
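The algorithm can be sketched in Python using frozensets for the subsets S (the Held-Karp formulation); it is run here on the 4-city distance matrix used in the example that follows:

```python
from itertools import combinations

def tsp(d):
    """Held-Karp: C[(S, j)] = length of the shortest path that starts at city 0,
    visits every city in the frozenset S exactly once, and ends at j in S."""
    n = len(d)
    C = {}
    for j in range(1, n):
        C[(frozenset([j]), j)] = d[0][j]        # base case: path 0 -> j
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, i)] + d[i][j] for i in S if i != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + d[j][0] for j in full)   # close the tour at city 0

d = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(d))   # -> 35
```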
Example

Distance matrix d:
      1    2    3    4
1     0   10   15   20
2     5    0    9   10
3     6   13    0   12
4     8    8    9    0

S = Φ
Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8

|S| = 1
Cost(i, S, 1) = min {d[i, j] + Cost(j, S - {j}, 1)}
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13
|S| = 2
Cost(2, {3, 4}, 1) = min {d[2, 3] + Cost(3, {4}, 1), d[2, 4] + Cost(4, {3}, 1)}
                   = min {9 + 20, 10 + 15} = min {29, 25} = 25
Cost(3, {2, 4}, 1) = min {d[3, 2] + Cost(2, {4}, 1), d[3, 4] + Cost(4, {2}, 1)}
                   = min {13 + 18, 12 + 13} = min {31, 25} = 25
Cost(4, {2, 3}, 1) = min {d[4, 2] + Cost(2, {3}, 1), d[4, 3] + Cost(3, {2}, 1)}
                   = min {8 + 15, 9 + 18} = min {23, 27} = 23

|S| = 3
Cost(1, {2, 3, 4}, 1) = min {
    d[1, 2] + Cost(2, {3, 4}, 1) = 10 + 25 = 35,
    d[1, 3] + Cost(3, {2, 4}, 1) = 15 + 25 = 40,
    d[1, 4] + Cost(4, {2, 3}, 1) = 20 + 23 = 43
} = 35
Start from Cost(1, {2, 3, 4}, 1); the minimum value is obtained for d[1, 2]. When |S| = 3,
select the path from 1 to 2 (cost 10), then go backwards. When |S| = 2, the minimum value is
obtained for d[4, 2]; select the path from 2 to 4 (cost 10), then go backwards.
When |S| = 1, the minimum value is again obtained for d[4, 2], but vertices 2 and 4 are
already selected. Therefore, we select d[4, 3] (the two candidate values are 15, for d[2, 3]
and for d[4, 3], but the last node of our path is 4). Select the path from 4 to 3 (cost 9),
then go to the S = Φ step, where the minimum value is obtained for d[3, 1] (cost 6). Hence
the optimal tour is 1 - 2 - 4 - 3 - 1 with total cost 35.
5. 0/1 KNAPSACK PROBLEM
Given weights and values of n items, put these items in a knapsack of capacity W to get the
maximum total value in the knapsack.
• In other words, given two integer arrays val[0..n-1] and wt[0..n-1] which represent
values and weights associated with n items respectively.
• Also given an integer W which represents knapsack capacity,
• find out the maximum value subset of val[] such that sum of the weights of this
subset is smaller than or equal to W.
• You cannot break an item: either pick the complete item or don't pick it at all (the 0-1
property).
Weights : 2 3 3 4 6
Values  : 1 2 5 9 4
From the above input, the capacity of the knapsack is 15 kg and there are 5 items to choose
from. Please note that there are no items with zero weight. A very simple idea is to evaluate
each item and find out whether it should be included in the knapsack or not in order to
maximize the value.
Approach
Have a look at the table below (one row per item, one column per capacity) and then I will
explain the algorithm.
• If the items at our disposal are 2(1), 3(2), 3(5) and 4(9), written as weight(value),
then the maximum value we can get by picking items from this list would be 10.
• One more time, cell (4, 10) has the value 16. This means that the maximum value we
can get from items 2(1), 3(2), 3(5), 4(9) and 6(4) is 16 within the knapsack's weight
limit of 10.
Analysis
The running time of the algorithm is proportional to M * W because there are two nested
loops: the outer one runs M times (over the items) and the inner one runs W times (over the
capacities). Hence the running time is O(M * W).
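The tabulation described above can be sketched in Python, using the weights and values from this section (variable names illustrative):

```python
def knapsack_01(wt, val, W):
    """0/1 knapsack: f[i][g] = best value using the first i items with capacity g.
    Each item is either taken whole or left out (the 0-1 property)."""
    n = len(wt)
    f = [[0] * (W + 1) for _ in range(n + 1)]   # base cases f[0][g] = f[i][0] = 0
    for i in range(1, n + 1):
        for g in range(W + 1):
            f[i][g] = f[i - 1][g]               # item i not taken
            if wt[i - 1] <= g:                  # item i taken, if it fits
                f[i][g] = max(f[i][g], f[i - 1][g - wt[i - 1]] + val[i - 1])
    return f[n][W]

wt = [2, 3, 3, 4, 6]
val = [1, 2, 5, 9, 4]
print(knapsack_01(wt, val, 15))   # -> 19
```

With capacity 15, the best choice is the items of weights 2, 3(5), 4 and 6, for a total weight of 15 and value 1 + 5 + 9 + 4 = 19.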
The table f has base cases f(0, g) = 0 and f(k, 0) = 0.
Recursion formula:
f(k, g) = f(k - 1, g)                                  if wk > g
f(k, g) = max {f(k - 1, g), f(k - 1, g - wk) + vk}     otherwise
Solution tabulated:
f = f(n, M) = f(4, 8) = 29, with g = 8.
The solution is x = (1, 0, 1, 1), i.e. items 1, 3 and 4 are selected; the value of
the knapsack is 29.
Algorithm