CC 1
Dynamic Programming
Dynamic Programming is also used in optimization problems. Like the divide-and-conquer method,
Dynamic Programming solves problems by combining the solutions of subproblems.
Moreover, a Dynamic Programming algorithm solves each subproblem just once and then saves
its answer in a table, thereby avoiding the work of re-computing the answer every time. Two
main properties of a problem suggest that it can be solved using Dynamic
Programming: overlapping subproblems and optimal substructure.
Overlapping Sub-Problems
A problem has overlapping subproblems if solving it requires solving the same subproblems
repeatedly. For example, Binary Search does not have overlapping subproblems, whereas the
recursive program for Fibonacci numbers has many overlapping subproblems.
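The Fibonacci case can be sketched in Python: the naive recursion recomputes the same subproblems many times, while memoization (storing each answer in a table) computes each one only once.

```python
from functools import lru_cache

def fib_naive(n):
    # Naive recursion: fib_naive(5) recomputes fib_naive(3), fib_naive(2), ...
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)      # the cache acts as the Dynamic Programming table
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))   # 832040, using only 31 distinct subproblem calls
```

With memoization the running time drops from exponential to linear in n.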
Optimal Sub-Structure
A given problem has Optimal Substructure Property, if the optimal solution of the given
problem can be obtained using optimal solutions of its sub-problems.
For example, the Shortest Path problem has the following optimal substructure property −
If a node x lies in the shortest path from a source node u to destination node v, then the shortest
path from u to v is the combination of the shortest path from u to x, and the shortest path
from x to v.
The standard shortest path algorithms, like Floyd-Warshall (all pairs) and Bellman-Ford (single
source), are typical examples of Dynamic Programming.
A Divide & Conquer algorithm partitions the problem into disjoint subproblems, solves the
subproblems recursively, and then combines their solutions to solve the original problem.
Divide & Conquer typically proceeds top-down, while Dynamic Programming typically proceeds bottom-up.
Dynamic Programming is used when the subproblems are not independent, e.g. when they share
the same sub-subproblems. In this case, divide and conquer may do more work than necessary,
because it solves the same subproblem multiple times. Dynamic Programming solves each
subproblem just once and stores the result in a table so that it can be retrieved whenever it is
needed again.
Dynamic Programming is a bottom-up approach: we solve all possible small problems and
then combine their solutions to obtain solutions for bigger problems.
Dynamic Programming is a paradigm of algorithm design in which an optimization problem is
solved by combining subproblem solutions and appealing to the "principle of
optimality".
Divide & Conquer involves three steps at each level of recursion:
1. Divide the problem into a number of subproblems.
2. Conquer the subproblems by solving them recursively.
3. Combine the solutions of the subproblems into the solution for the original problem.
Dynamic Programming involves a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Divide & Conquer repeats work on shared subproblems and hence consumes more time;
Dynamic Programming solves each subproblem only once and then stores the result in a table.
In Divide & Conquer the subproblems are independent of each other; in Dynamic Programming
the subproblems are interdependent.
Examples: Merge Sort and Binary Search (Divide & Conquer); Matrix Chain Multiplication
(Dynamic Programming).
Example: Longest Common Subsequence (LCS)
Given a sequence X = (x1, x2, ..., xm), we define the ith prefix of X, for i = 0, 1, 2, ..., m, as
Xi = (x1, x2, ..., xi). For example, if X = (A, B, C, B, C, A, B, C) then X4 = (A, B, C, B).
Step 1: Optimal Substructure of an LCS: Let X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) be
two sequences and let Z = (z1, z2, ..., zk) be any LCS of X and Y.
Step 2: Recursive Solution: LCS has the overlapping-subproblems property because, to find an
LCS of X and Y, we may need to find the LCS of Xm-1 and Yn-1. If xm ≠ yn, then we must solve
two subproblems: finding an LCS of Xm-1 and Y, and finding an LCS of X and Yn-1. Whichever
of these LCSs is longer is an LCS of X and Y. But each of these subproblems contains the
subproblem of finding the LCS of Xm-1 and Yn-1.
Let c[i, j] be the length of an LCS of the prefixes Xi and Yj. If either i = 0 or j = 0, one of the
sequences has length 0, so the LCS has length 0. The optimal substructure of the LCS problem
gives the recurrence:
c[i, j] = 0,                          if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1,            if i, j > 0 and xi = yj
c[i, j] = max(c[i, j-1], c[i-1, j]),  if i, j > 0 and xi ≠ yj
Step 3: Computing the length of an LCS: take two sequences X = (x1, x2, ..., xm) and Y =
(y1, y2, ..., yn) as inputs. The procedure stores the c[i, j] values in a table c[0..m, 0..n]. A table
b[1..m, 1..n] is also maintained, which helps us construct an optimal solution. c[m, n]
contains the length of an LCS of X and Y.
LCS-LENGTH (X, Y)
 m ← length[X]
 n ← length[Y]
 for i ← 1 to m
   do c[i, 0] ← 0
 for j ← 0 to n
   do c[0, j] ← 0
 for i ← 1 to m
   do for j ← 1 to n
     do if xi = yj
       then c[i, j] ← c[i-1, j-1] + 1
            b[i, j] ← "↖"
       else if c[i-1, j] ≥ c[i, j-1]
         then c[i, j] ← c[i-1, j]
              b[i, j] ← "↑"
         else c[i, j] ← c[i, j-1]
              b[i, j] ← "←"
 return c and b
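The table computation can be written directly in Python; this sketch computes only the length table c (omitting the direction table b), using the standard example sequences "ABCBDAB" and "BDCABA":

```python
def lcs_length(X, Y):
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j]
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1          # extend the match diagonally
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (one LCS is "BCBA")
```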
0-1 Knapsack Problem: given items with profits and weights and a knapsack of capacity W,
choose a subset of items such that
1. the total weight is ≤ the capacity W, and
2. the total profit (Value) is maximum.
Example
Let us consider that the capacity of the knapsack is W = 25 and the items are as shown in the
following table.
Item A B C D
Profit 24 18 18 10
Weight 24 10 10 7
Without considering the profit per unit weight (pi/wi), if we apply a greedy approach (picking
the highest-profit item first) to solve this problem, first item A will be selected, as it contributes
the maximum profit among all the elements.
After selecting item A, no more items can be selected. Hence, for this given set of items the
total profit is 24, whereas the optimal solution can be achieved by selecting items B and C,
where the total profit is 18 + 18 = 36. The dynamic-programming approach finds this optimal
solution.
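A minimal dynamic-programming sketch for this instance (items A–D from the table above, capacity 25) confirms the optimal profit of 36:

```python
def knapsack(profits, weights, W):
    # dp[w] = best profit achievable with capacity w (0-1 knapsack)
    dp = [0] * (W + 1)
    for p, wt in zip(profits, weights):
        # iterate capacities in reverse so each item is used at most once
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + p)
    return dp[W]

# Items A, B, C, D from the table above
print(knapsack([24, 18, 18, 10], [24, 10, 10, 7], 25))  # 36 (items B and C)
```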
Matrix Chain Multiplication
In general:
If A = [aij] is a p x q matrix
and B = [bij] is a q x r matrix,
then C = A.B = [cij] is a p x r matrix, where
cij = Σ (k = 1 to q) aik . bkj
Given matrices {A1, A2, A3, ..., An}, we have to perform the matrix multiplication
A1 x A2 x A3 x ... x An,
which can be accomplished by a series of pairwise matrix multiplications.
It can be observed that the total number of entries in matrix C is pr, as the matrix is of
dimension p x r. Each entry takes O(q) time to compute; thus the total time to compute all
entries of C = A.B is proportional to the product of the dimensions, pqr.
It is also noticed that we can save a number of operations by reordering the parentheses.
Example:
Let us have 3 matrices A1, A2, A3 of order (10 x 100), (100 x 5) and (5 x 50) respectively.
There are two ways to parenthesize the product:
1. A1 (A2 A3): first multiply A2 and A3 (100 x 5 x 50 = 25,000 scalar multiplications), then
multiply A1 with the 100 x 50 result (10 x 100 x 50 = 50,000), for a total of 75,000 scalar
multiplications.
2. (A1 A2) A3: first multiply A1 and A2 (10 x 100 x 5 = 5,000 scalar multiplications), then
multiply the 10 x 5 result with A3 (10 x 5 x 50 = 2,500), for a total of only 7,500 scalar
multiplications.
To find the best possible way to calculate the product, we could simply parenthesize the
expression in every possible fashion and count each time how many scalar multiplications are
required.
The Matrix Chain Multiplication Problem can be stated as: "find the optimal parenthesization
of a chain of matrices to be multiplied such that the number of scalar multiplications is
minimized".
There is a very large number of ways of parenthesizing these matrices. If there are n matrices,
there are (n - 1) places where the outermost pair of parentheses can split the chain:
(A1) (A2 A3 A4 ... An)
........................
or (A1 A2 A3 ... An-1) (An)
It can be observed that after splitting at the kth matrix, we are left with two parenthesized
sequences of matrices: one consisting of k matrices and another consisting of n - k matrices.
If there are L ways of parenthesizing the left sublist and R ways of parenthesizing the right
sublist, then the total for that split is L.R, so the number of parenthesizations c(n) satisfies
c(n) = Σ (k = 1 to n-1) c(k) . c(n-k),  with c(1) = 1,
which grows exponentially: c(n) = Ω(2^n).
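Since exhaustive parenthesization is exponential, the minimum cost is instead computed by dynamic programming over chain lengths. A sketch, applied to the three-matrix example above:

```python
def matrix_chain_cost(dims):
    # dims[i-1] x dims[i] is the dimension of matrix A_i (1-indexed, n matrices)
    n = len(dims) - 1
    # m[i][j] = minimum scalar multiplications to compute A_i ... A_j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between A_i..A_k and A_(k+1)..A_j
            m[i][j] = min(m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

# A1 (10 x 100), A2 (100 x 5), A3 (5 x 50) from the example above
print(matrix_chain_cost([10, 100, 5, 50]))  # 7500, achieved by (A1 A2) A3
```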
Travelling Salesman Problem
Problem Statement
A traveler needs to visit all the cities on a list, where the distances between all the cities are
known and each city must be visited just once. What is the shortest possible route such that he
visits each city exactly once and returns to the origin city?
Solution
The travelling salesman problem is one of the most notorious computational problems. We can
use a brute-force approach to evaluate every possible tour and select the best one. For n
vertices in a graph, there are (n - 1)! possibilities.
Using a dynamic-programming approach instead of brute force, the solution can be obtained in
less time, though there is still no polynomial-time algorithm.
Let us consider a graph G = (V, E), where V is a set of cities and E is a set of weighted edges.
An edge e(u, v) represents that vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.
Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this
is a partial tour. We certainly need to know j, since this will determine which cities are most
convenient to visit next. We also need to know all the cities visited so far, so that we don't
repeat any of them. Hence, this is an appropriate sub-problem.
For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of
the shortest path visiting each node in S exactly once, starting at 1 and ending at j.
When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j.
We should select the second-to-last city i in such a way that the total length is minimized:
C(S, j) = min { C(S − {j}, i) + d(i, j) }, taken over all i ∈ S, i ≠ j.
Example
In the following example, we will illustrate the steps to solve the travelling salesman problem
for the distance matrix d[i, j] below (rows give the source city i, columns the destination city j):
     1    2    3    4
1    0   10   15   20
2    5    0    9   10
3    6   13    0   12
4    8    8    9    0
|S| = 0 (S = Φ):
Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8
|S| = 1, using Cost(i, S, 1) = min { d[i, j] + Cost(j, S − {j}, 1) } over j ∈ S:
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13
|S| = 2:
Cost(2, {3,4}, 1) = min { d[2,3] + Cost(3, {4}, 1) = 9 + 20 = 29,
                          d[2,4] + Cost(4, {3}, 1) = 10 + 15 = 25 } = 25
Cost(3, {2,4}, 1) = min { d[3,2] + Cost(2, {4}, 1) = 13 + 18 = 31,
                          d[3,4] + Cost(4, {2}, 1) = 12 + 13 = 25 } = 25
Cost(4, {2,3}, 1) = min { d[4,2] + Cost(2, {3}, 1) = 8 + 15 = 23,
                          d[4,3] + Cost(3, {2}, 1) = 9 + 18 = 27 } = 23
|S| = 3:
Cost(1, {2,3,4}, 1) = min { d[1,2] + Cost(2, {3,4}, 1) = 10 + 25 = 35,
                            d[1,3] + Cost(3, {2,4}, 1) = 15 + 25 = 40,
                            d[1,4] + Cost(4, {2,3}, 1) = 20 + 23 = 43 } = 35
Starting from Cost(1, {2, 3, 4}, 1), we get the minimum value for d[1, 2]. When |S| = 3, select
the path from 1 to 2 (cost 10), then go backwards. When |S| = 2, we get the minimum value for
d[2, 4]; select the path from 2 to 4 (cost 10), then go backwards. When |S| = 1, we get the
minimum value for d[4, 3]; select the path from 4 to 3 (cost 9). Finally, at the S = Φ step, we
take d[3, 1] (cost 6). Hence the minimum-cost tour is 1 → 2 → 4 → 3 → 1 with total cost
10 + 10 + 9 + 6 = 35.
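The table-filling above is the Held-Karp dynamic-programming algorithm; a compact sketch (0-indexed, with the tour anchored at city 1 of the example, here index 0) reproduces the answer for this distance matrix:

```python
from itertools import combinations

def held_karp(d):
    # d[i][j] = distance from city i to city j (0-indexed); tour starts/ends at city 0
    n = len(d)
    # C[(S, j)] = length of the shortest path starting at 0, visiting every city
    # in frozenset S exactly once, and ending at j (j in S)
    C = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                C[(fs, j)] = min(C[(fs - {j}, i)] + d[i][j] for i in S if i != j)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(C[(full, j)] + d[j][0] for j in range(1, n))

d = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(held_karp(d))  # 35, matching the tour 1 -> 2 -> 4 -> 3 -> 1
```

This runs in O(2^n · n^2) time, far better than the (n - 1)! tours of brute force.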
Shortest Path Problems
In data structures / ADA:
• The shortest path problem is the problem of finding the shortest path(s) between vertices of a
given graph.
• The shortest path between two vertices is a path that has the least cost compared to all
other existing paths.
• Shortest path algorithms are a family of algorithms used for solving the shortest path
problem.
Applications of shortest path algorithms include:
• Google Maps
• Road Networks
• Logistics Research
Single-Pair Shortest Path Problem
• It is a shortest path problem where the shortest path between a given pair of vertices is
computed.
• The A* Search Algorithm is a famous algorithm used for solving the single-pair shortest path
problem.
Single-Source Shortest Path Problem
• It is a shortest path problem where the shortest path from a given source vertex to all
other remaining vertices is computed.
• Dijkstra’s Algorithm and the Bellman-Ford Algorithm are the famous algorithms used for
solving the single-source shortest path problem.
Single-Destination Shortest Path Problem
• It is a shortest path problem where the shortest path from all the vertices to a single
destination vertex is computed.
• By reversing the direction of each edge in the graph, this problem reduces to the single-
source shortest path problem.
All-Pairs Shortest Path Problem
• It is a shortest path problem where the shortest path between every pair of vertices is
computed.
• The Floyd-Warshall Algorithm and Johnson’s Algorithm are the famous algorithms used for
solving the all-pairs shortest path problem.
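The Floyd-Warshall algorithm mentioned above can be sketched in a few lines of Python; it relaxes every pair of vertices through each possible intermediate vertex k (the graph below is an assumed example, not one from these notes):

```python
INF = float('inf')

def floyd_warshall(d):
    # d[i][j] = weight of edge i -> j (INF if the edge is absent, 0 on the diagonal);
    # returns the matrix of all-pairs shortest distances
    n = len(d)
    dist = [row[:] for row in d]
    for k in range(n):                # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

g = [[0,   3, INF, 7],
     [8,   0,   2, INF],
     [5, INF,   0, 1],
     [2, INF, INF, 0]]
print(floyd_warshall(g)[0][2])  # 5, via the path 0 -> 1 -> 2
```

The three nested loops give the well-known O(V^3) running time.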
Dijkstra’s Algorithm
• It computes the shortest path from one particular source node to all other remaining nodes
of the graph.
In this figure, the source vertex is S, so we now calculate the shortest path from
S to each of the vertices a, b, c, d and e.
After calculating the direct path from the source vertex to each vertex, we will try to find a
shorter indirect path from the source vertex.
• No indirect path from S to a is available, so the final path remains the same.
• The indirect path from S to b, "S to a and a to b", has cost 3; it is the only indirect path
available from S to b. We then compare the direct and indirect paths from S to b and select
the minimum as the final path. The selected path from S to b has cost 3, which is shorter
than the direct path of cost 5.
• Only one indirect path is available from S to c, "S to a and a to c", with cost 3. We then
compare the direct and indirect paths from S to c and select the minimum as the final path.
The selected path from S to c has cost 3, which is shorter than the direct path of cost ∞.
• In total, 4 indirect paths from S to d are available in the graph, so we select the shortest,
which is "S to a and a to d" with cost 2.
• In total, 4 indirect paths from S to e are available in the graph, so we select the shortest
among them.
• The final result represents the shortest path from source vertex S to all other remaining
vertices.
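The compare-and-relax process described above is exactly what Dijkstra's algorithm does with a priority queue. A sketch follows; the figure's edge weights are not reproduced in these notes, so the graph below uses assumed weights chosen to match the costs quoted above (S to b via a = 3, S to d via a = 2):

```python
import heapq

def dijkstra(graph, source):
    # graph: adjacency dict {u: [(v, weight), ...]} with non-negative weights
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:
            continue                      # skip stale queue entries
        for v, w in graph[u]:
            if du + w < dist[v]:          # relax edge u -> v
                dist[v] = du + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Assumed weights for illustration only
g = {'S': [('a', 1), ('b', 5)],
     'a': [('b', 2), ('c', 2), ('d', 1)],
     'b': [('d', 2)],
     'c': [('e', 3)],
     'd': [('e', 2)],
     'e': []}
print(dijkstra(g, 'S'))
```

On this graph the algorithm prefers the indirect routes through a, yielding distance 3 to b, 3 to c and 2 to d, as in the worked description.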
BACKTRACKING
Backtracking is used to solve problems in which a sequence of objects is chosen from a
specified set so that the sequence satisfies some criterion.
The solution is based on finding one or more vectors that maximize, minimize, or satisfy
a criterion function P(x1, ..., xn).
Form a solution and check at every step whether it has any chance of success. If the solution
at any point seems unpromising, ignore it.
All solutions require a set of constraints divided into two categories: explicit and implicit
constraints.
Explicit constraints:
Explicit constraints are rules that restrict each xi to take on values only from a given set.
Explicit constraints depend on the particular instance I of the problem being solved.
All tuples that satisfy the explicit constraints define a possible solution space for I.
Implicit constraints:
Implicit constraints are rules that determine which of the tuples in the solution space of I
satisfy the criterion function.
Thus, implicit constraints describe the way in which the xi's must relate to each other.
Example:
For the 4-queens problem: let us consider N = 4. The 4-Queens Problem is to place four queens
on a 4 x 4 chessboard so that no two "attack", that is, no two of them are on the same row,
column, or diagonal (see Figure 1). All solutions to the 4-queens problem can be represented as
4-tuples (x1, ..., x4), where xi is the column of the ith row where the ith queen is placed.
In this problem the explicit and implicit constraints are:
Explicit constraints, using the 4-tuple formulation, are xi ∈ S = {1, 2, 3, 4}.
Implicit constraints for this problem are that no two xi's can be the same (i.e., all
queens must be in different columns) and no two queens can be on the same diagonal.
Figure 1 illustrates a solution to the 4-Queens Problem: none of the 4 queens can
capture each other.
Backtracking is a form of recursion. The usual scenario is that we are faced with a number of
options, and we must choose one of them. After we make a choice we get a new set of
options; just what set of options we get depends on which choice we made. This procedure is
repeated over and over until we reach a final state. If we made a good sequence of
choices, the final state is a goal state; if we didn't, it isn't.
We start at the root of a tree (see figure 2); the tree probably has some good leaves and some bad
leaves, though it may be that the leaves are all good or all bad. We want to get to a good leaf. At
each node, beginning with the root, we choose one of its children to move to, and we keep this
up until we get to a leaf. Suppose we get to a bad leaf. We can backtrack to continue the search
for a good leaf by revoking our most recent choice, and trying out the next option in that set of
options. If we run out of options, revoke the choice that got us here, and try another choice at that
node. If we end up at the root with no options left, there are no good leaves to be found.
Example:
Figure 2 shows a tree that is an abstract model of the possible sequences of choices we could
make. There is also a data structure called a tree, but usually we don't have a data structure to
tell us what choices we have. (If we do have an actual tree data structure, backtracking on it is
called depth-first tree searching.)
Problems:
4 x 4 Queen Problem
We are given a 4 x 4 board and 4 queens. We have to design an algorithm to place those 4
queens on the 4 x 4 board so that none of the queens clash (row-, column- or diagonal-wise).
N = 4 is the smallest non-trivial board size for which a solution exists. To see this, consider
boards of increasing size:
If the board is 1 x 1, then we can place the 1 queen at grid position (0, 0).
If the board is 2 x 2, then we cannot place 2 queens; they will definitely clash, and no solution
exists.
For a 3 x 3 board, it is also not possible to get a clash-free arrangement of queens.
For 4 x 4, we can place four queens at (0, 1), (1, 3), (2, 0), (3, 2).
So, when the size of the grid is 4, return this possible arrangement.
Solution Approach:
Step 1: Place the 1st queen in the first row, first column.
Step 2: Place the 2nd queen in the second row, third column.
Step 3: It is clear from step 2 that we are not able to place all 4 queens on the 4 x 4 board with
this configuration; only 3 queens can be placed at a time.
Step 4: We have a failure at step 3, as there is no possibility to place the next queen in the third
row (figure a), and similarly it is not possible to place the 4th queen in the fourth row: there
simply cannot be a solution with this configuration. The solver has to backtrack to step 1 and
change the position of the 1st queen.
Step 5: To reach a complete solution, the solver continues the search, backtracking and trying
to place each queen in the next feasible row and column. After repeating this process enough
times, we get the final result.
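The row-by-row search with backtracking described in these steps can be sketched in Python; trying columns in order and revoking the most recent choice on a dead end yields exactly the solution (0, 1), (1, 3), (2, 0), (3, 2) quoted above:

```python
def solve_n_queens(n):
    # cols[r] = column of the queen placed in row r; search row by row
    def safe(cols, r, c):
        for pr, pc in enumerate(cols):
            if pc == c or abs(pr - r) == abs(pc - c):
                return False          # same column or same diagonal
        return True

    def place(cols):
        r = len(cols)
        if r == n:
            return cols               # all n queens placed: goal state
        for c in range(n):
            if safe(cols, r, c):
                result = place(cols + [c])
                if result:
                    return result     # first solution found
        return None                   # dead end: backtrack to the previous row

    return place([])

print(solve_n_queens(4))  # [1, 3, 0, 2] -> queens at (0,1), (1,3), (2,0), (3,2)
```

The first choice tried, column 0 of row 0, leads to the dead end of steps 2–4; the solver then backtracks, moves the first queen to column 1, and completes the board.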