
UNIT-04

Dynamic Programming

Dynamic Programming is an algorithm design technique used to solve optimization problems. It is essentially an optimization over plain recursion: wherever we encounter a recursive solution with repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization reduces the time complexity from exponential to polynomial.

Principle of Optimality: The principle of optimality, developed by Richard Bellman, is the basic principle of dynamic programming. It states that in an optimal sequence of decisions, each subsequence must also be optimal.

Memoization: This is the top-down approach (we start solving the given problem by breaking it down into subproblems). Memoization can be used in Dynamic Programming as well, but we generally use the iterative method (tabulation method), which is the bottom-up approach, in Dynamic Programming.

Let's try to understand the Dynamic Programming approach with a suitable example.

Find the Fibonacci term using plain recursion (recursive program).

Fibonacci series : 0  1  1  2  3  5 . . .
Term index (n)   : 0  1  2  3  4  5 . . .
For example, F3 = 2, F1 = 1, F4 = 3, etc.

int fib(int n)
{
    if (n <= 1)                   // base cases: fib(0) = 0, fib(1) = 1
        return n;
    return fib(n-2) + fib(n-1);   // two recursive calls per invocation
}

T(n) = 1                          if n <= 1   (constant work in the base case)
T(n) = T(n-2) + T(n-1) + 1        if n > 1

Time complexity (upper bound):

T(n) <= 2T(n-1) + 1    [since T(n-2) <= T(n-1)]

Using the master method for decreasing functions, we get the time complexity O(2^n), which is exponential.
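As a quick cross-check (an addition, not part of the original notes), repeatedly substituting the recurrence into itself shows where the 2^n factor comes from:

\begin{aligned}
T(n) &\le 2T(n-1) + 1 \\
     &\le 2\bigl(2T(n-2) + 1\bigr) + 1 = 4T(n-2) + 3 \\
     &\le 8T(n-3) + 7 \\
     &\;\;\vdots \\
     &\le 2^{k}\,T(n-k) + \bigl(2^{k} - 1\bigr).
\end{aligned}

Taking k = n - 1, so that T(n-k) = T(1) is a constant, gives T(n) = O(2^{n}).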

Now, try to observe the repeated recursive calls for the same argument (input value) using a recursive tracing tree; a sketch of the tree for fib(5) is shown below.
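The original figure (fig. 1) is not reproduced here; the following text sketch of the recursion tree for fib(5) conveys the same information. Each node makes two calls: fib(n-2) on the left and fib(n-1) on the right.

fib(5)
├── fib(3)
│   ├── fib(1)
│   └── fib(2)
│       ├── fib(0)
│       └── fib(1)
└── fib(4)
    ├── fib(2)
    │   ├── fib(0)
    │   └── fib(1)
    └── fib(3)
        ├── fib(1)
        └── fib(2)
            ├── fib(0)
            └── fib(1)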
Count of repeated recursive calls in the tree above:
fib(3) – 2 times, fib(2) – 3 times, fib(1) – 5 times, and fib(0) – 3 times.
We get repeated recursive calls for the same input, and this is what gives the approach its exponential running time. This is where the Dynamic Programming approach comes into the picture: it reduces the time complexity drastically by avoiding repeated computation of the same recursive call.
Find the Fibonacci term using memoization (Dynamic Programming approach).

#include <stdio.h>

int F[20];                      // global array used to cache results

int fib(int n)                  // function definition
{
    if (n <= 1)
        return n;
    if (F[n] != -1)             // result already computed: reuse it
        return F[n];

    F[n] = fib(n-2) + fib(n-1); // recursive call, result stored (memoized)
    return F[n];
}

int main(void)
{
    int i, result = 0;

    for (i = 0; i < 20; i++)    // mark all entries as "not computed yet"
        F[i] = -1;

    result = fib(5);
    printf("%d", result);
    return 0;
}
From the above example, we can observe the following points:
If we use the memoization method to solve the same problem, we do not have to repeat computations for the same recursive calls. For fib(5), only 6 distinct recursive calls are actually computed (fib(5), fib(4), fib(3), fib(2), fib(1) and fib(0)).
Generalizing to fib(n), the number of computed recursive calls is n+1, so the time complexity is O(n), i.e. linear.

Note – We generally do not use the memoization method in Dynamic Programming, as the recursion consumes extra (stack) space.
Note – Memoization follows the top-down approach.

Iterative Method (Tabulation Method) for the Same Problem [bottom-up approach]
int F[20];                        // table of Fibonacci values

int fib(int n)
{
    if (n <= 1)
    {
        return n;
    }
    F[0] = 0;                     // base cases filled first
    F[1] = 1;
    for (int i = 2; i <= n; i++)  // build the table bottom-up
    {
        F[i] = F[i-2] + F[i-1];
    }
    return F[n];
}
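As a side note (an addition, not from the original notes): since each term depends only on the previous two, the full table can be replaced by two variables, keeping the O(n) time of tabulation while using only O(1) extra space. A minimal sketch:

#include <stdio.h>

/* Space-optimized bottom-up Fibonacci: keeps only the last two terms. */
int fib_iter(int n)
{
    if (n <= 1)
        return n;
    int prev2 = 0, prev1 = 1;     /* F[i-2] and F[i-1] */
    int cur = 0;
    for (int i = 2; i <= n; i++)
    {
        cur = prev2 + prev1;      /* F[i] = F[i-2] + F[i-1] */
        prev2 = prev1;
        prev1 = cur;
    }
    return cur;
}

int main(void)
{
    printf("%d\n", fib_iter(5));  /* prints 5 */
    return 0;
}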

1. 0/1 Knapsack Problem

The knapsack problem deals with putting items into a knapsack based on the value/profit of the items. The aim is to maximize the total value inside the bag without exceeding its capacity. In 0/1 Knapsack, each item is either put into the knapsack or discarded; unlike the fractional knapsack, there is no concept of putting only part of an item into the knapsack.

Q. Find an optimal solution to the 0/1 Knapsack instance with n = 4 and knapsack capacity m = 8, where the profits and weights are p = {1, 2, 5, 6} and w = {2, 3, 4, 5}.
Note – If the weights are not given in increasing order, arrange them in increasing order and rearrange the profits accordingly.

The matrix (mat[5][9]) will contain 9 columns (capacities 0 to 8, since m = 8 is given) and 5 rows (objects 0 to 4, since n = 4 is given; row 0 represents "no object").

Pi = profit of object i
Wi = weight of object i
i  = object index

Formula to fill out the cells:
mat[i][w] = max( mat[i-1][w], mat[i-1][w - Wi] + Pi )    if w >= Wi
mat[i][w] = mat[i-1][w]                                  if w <  Wi

Short-cut to fill the table


1. Fill the first row and the first column with zero.

2. For the first object, check its weight (W1), which is 2. For capacity w = 2, place the profit of this object in the cell with capacity 2 (mat[1][2] = 1). So far we have only one object to consider, so the first object (i = 1), with 2 units of weight, can also be put in a knapsack of capacity (w) 3, 4, 5, 6, 7 or 8 units. Therefore, fill mat[1][3], mat[1][4], mat[1][5], mat[1][6], mat[1][7] and mat[1][8] with 1.
3. For the cells to the left of mat[1][2] (where the capacity is smaller than the object's weight), take the maximum of the cell to the left and the cell above the current cell. For example, for mat[1][1] we pick max(mat[1][0], mat[0][1]), which is 0. Therefore, place zero in mat[1][1].

4. For the second object, the weight is 3 units. Now we can consider two objects (1 and 2) together. The second object, with 3 units of weight, can be placed in the cell [2][3] with 3 units of capacity (value 2, its profit). Both objects together have 5 units of weight (profit 1 + 2 = 3), which fits in the cells [2][5], [2][6], [2][7] and [2][8] with at least 5 units of capacity. For the cell [2][2], pick max(mat[2][1], mat[1][2]), which is 1, and follow the same rule for the cell [2][4].

5. For the third object, the weight is 4 units. Now we can consider three objects (1, 2 and 3) together. The weight of the third object is 4 units, so we place its profit (5) in the cell [3][4] with 4 units of capacity. Objects 2 and 3 together have 7 units of weight and 7 units of profit (5 + 2), so they go in the cell [3][7] with 7 units of capacity. Objects 1 and 3 together have 6 units of weight (profit 1 + 5 = 6), so they go in the cell [3][6] with 6 units of capacity. To fill out the remaining cells, follow the steps above.

Maximum profit = 8 (the value in the last cell of the matrix, mat[4][8]).

Selection of objects: (x1, x2, x3, x4) = (0, 1, 0, 1)
Only objects 2 and 4 are placed in the knapsack to gain the maximum profit.

Knapsack table mat[i][w] (rows: objects i with profit Pi and weight Wi; columns: capacity w):

Pi  Wi  i\w |  0  1  2  3  4  5  6  7  8
------------+----------------------------
 -   -   0  |  0  0  0  0  0  0  0  0  0
 1   2   1  |  0  0  1  1  1  1  1  1  1
 2   3   2  |  0  0  1  2  2  3  3  3  3
 5   4   3  |  0  0  1  2  5  5  6  7  7
 6   5   4  |  0  0  1  2  5  6  6  7  8
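The table above can be reproduced with a short program. The following C sketch (an illustrative addition, not from the original notes; the names mat, p, w and x are chosen here) builds the same table with the recurrence given earlier and then traces back which objects were selected:

#include <stdio.h>

#define N 4      /* number of objects */
#define M 8      /* knapsack capacity */

int max(int a, int b) { return a > b ? a : b; }

int main(void)
{
    int p[N + 1] = {0, 1, 2, 5, 6};   /* profits, 1-indexed */
    int w[N + 1] = {0, 2, 3, 4, 5};   /* weights, 1-indexed */
    int mat[N + 1][M + 1];
    int x[N + 1] = {0};               /* selection vector x1..x4 */

    /* Build the table: first row/column are zero, then apply the recurrence. */
    for (int i = 0; i <= N; i++)
        for (int cap = 0; cap <= M; cap++)
        {
            if (i == 0 || cap == 0)
                mat[i][cap] = 0;
            else if (w[i] <= cap)
                mat[i][cap] = max(mat[i-1][cap], mat[i-1][cap - w[i]] + p[i]);
            else
                mat[i][cap] = mat[i-1][cap];
        }

    printf("Maximum profit = %d\n", mat[N][M]);   /* prints 8 */

    /* Trace back: if the value changed from the row above, object i was taken. */
    int cap = M;
    for (int i = N; i >= 1; i--)
        if (mat[i][cap] != mat[i-1][cap])
        {
            x[i] = 1;
            cap -= w[i];
        }

    for (int i = 1; i <= N; i++)
        printf("x%d = %d\n", i, x[i]);            /* prints 0 1 0 1 */

    return 0;
}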

2. Single Source Shortest Path using Bellman-Ford Algorithm (Dynamic Programming)

Kindly refer to the unit-03 notes.

3. All Pairs Shortest Path (Floyd-Warshall Algorithm)

Apply the Floyd-Warshall algorithm to the graph below:


(The example graph from the original notes is not reproduced here.)
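Since the original graph figure is unavailable, here is a minimal sketch of the Floyd-Warshall dynamic programming algorithm in C, run on an assumed 4-vertex example graph (the adjacency matrix below is illustrative, not the one from the original figure):

#include <stdio.h>

#define V   4
#define INF 99999   /* stands for "no edge" */

int main(void)
{
    /* Illustrative weighted digraph; NOT the graph from the original figure. */
    int dist[V][V] = {
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  }
    };

    /* Floyd-Warshall: allow vertex k as an intermediate vertex, one at a time.
       dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) */
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];

    /* Print the all-pairs shortest path matrix. */
    for (int i = 0; i < V; i++)
    {
        for (int j = 0; j < V; j++)
            printf("%7d", dist[i][j]);
        printf("\n");
    }
    return 0;
}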
Hamiltonian Cycle (using backtracking)

A Hamiltonian Cycle is a cycle in a graph that visits every vertex exactly once and returns to the starting vertex.
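A minimal backtracking sketch (an illustrative addition, not from the original notes) that checks whether an undirected graph, given as an adjacency matrix, contains a Hamiltonian cycle; the 5-vertex graph used in main() is an assumed example:

#include <stdio.h>

#define V 5

/* Can vertex v be placed at position pos of the partial cycle? */
int is_safe(int graph[V][V], int path[V], int pos, int v)
{
    if (!graph[path[pos - 1]][v])          /* must be adjacent to previous vertex */
        return 0;
    for (int i = 0; i < pos; i++)          /* must not already be on the path */
        if (path[i] == v)
            return 0;
    return 1;
}

/* Backtracking: try every vertex at position pos and recurse. */
int ham_cycle_util(int graph[V][V], int path[V], int pos)
{
    if (pos == V)                          /* all vertices placed: close the cycle */
        return graph[path[V - 1]][path[0]];

    for (int v = 1; v < V; v++)
        if (is_safe(graph, path, pos, v))
        {
            path[pos] = v;
            if (ham_cycle_util(graph, path, pos + 1))
                return 1;
            path[pos] = -1;                /* backtrack */
        }
    return 0;
}

int main(void)
{
    /* Assumed example graph (adjacency matrix of an undirected graph). */
    int graph[V][V] = {
        {0, 1, 0, 1, 0},
        {1, 0, 1, 1, 1},
        {0, 1, 0, 0, 1},
        {1, 1, 0, 0, 1},
        {0, 1, 1, 1, 0}
    };
    int path[V];
    for (int i = 0; i < V; i++) path[i] = -1;
    path[0] = 0;                           /* fix the starting vertex */

    if (ham_cycle_util(graph, path, 1))
    {
        printf("Hamiltonian cycle: ");
        for (int i = 0; i < V; i++) printf("%d ", path[i]);
        printf("%d\n", path[0]);
    }
    else
        printf("No Hamiltonian cycle exists.\n");
    return 0;
}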
Graph Coloring Problem (using backtracking)

The goal is to assign colors to the vertices of a graph in such a way that no two adjacent vertices (connected by an edge) have the same color, while using the minimum number of colors.
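A compact backtracking sketch (again an illustrative addition, not from the original notes) for the decision version of the problem: can an assumed 4-vertex example graph be colored with m colors so that no two adjacent vertices share a color?

#include <stdio.h>

#define V 4

/* Is it valid to give color c to vertex v, given the colors assigned so far? */
int ok(int graph[V][V], int color[V], int v, int c)
{
    for (int u = 0; u < V; u++)
        if (graph[v][u] && color[u] == c)
            return 0;
    return 1;
}

/* Backtracking: try every color 1..m for vertex v, then recurse on v+1. */
int color_util(int graph[V][V], int color[V], int m, int v)
{
    if (v == V)                 /* all vertices colored successfully */
        return 1;
    for (int c = 1; c <= m; c++)
        if (ok(graph, color, v, c))
        {
            color[v] = c;
            if (color_util(graph, color, m, v + 1))
                return 1;
            color[v] = 0;       /* backtrack */
        }
    return 0;
}

int main(void)
{
    /* Assumed example: a 4-cycle plus one diagonal (needs 3 colors). */
    int graph[V][V] = {
        {0, 1, 1, 1},
        {1, 0, 1, 0},
        {1, 1, 0, 1},
        {1, 0, 1, 0}
    };
    int color[V] = {0};
    int m = 3;                  /* number of colors to try */

    if (color_util(graph, color, m, 0))
        for (int v = 0; v < V; v++)
            printf("vertex %d -> color %d\n", v, color[v]);
    else
        printf("Cannot color the graph with %d colors.\n", m);
    return 0;
}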
Travelling Salesman Problem using Least Cost Branch and Bound

In this problem, a salesman must visit n cities. The salesman wishes to make a tour, visiting each city exactly once and finishing at the city he starts from. The goal is to find a tour of minimum cost. We assume that every two cities are connected, so we can model the cities as a complete graph on n vertices, where each vertex represents a city.
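The least cost branch and bound procedure itself is not worked out in these notes. As a point of comparison that fits this unit's theme, the following is a hedged sketch of the Held-Karp dynamic programming formulation of TSP (a different technique from LCBB) on an assumed 4-city cost matrix; dp[mask][j] is the cheapest way to start at city 0, visit exactly the cities in mask, and end at city j:

#include <stdio.h>

#define N   4
#define INF 1000000

int main(void)
{
    /* Assumed symmetric cost matrix for 4 cities (illustrative only). */
    int cost[N][N] = {
        { 0, 10, 15, 20},
        {10,  0, 35, 25},
        {15, 35,  0, 30},
        {20, 25, 30,  0}
    };

    /* dp[mask][j]: minimum cost of a path that starts at city 0, visits
       exactly the cities in 'mask' (mask always contains city 0), and ends at j. */
    int dp[1 << N][N];
    for (int mask = 0; mask < (1 << N); mask++)
        for (int j = 0; j < N; j++)
            dp[mask][j] = INF;
    dp[1][0] = 0;                                   /* only city 0 visited */

    for (int mask = 1; mask < (1 << N); mask++)
        for (int j = 0; j < N; j++)
        {
            if (!(mask & (1 << j)) || dp[mask][j] == INF)
                continue;
            for (int k = 0; k < N; k++)             /* extend the path to city k */
            {
                if (mask & (1 << k))
                    continue;
                int next = mask | (1 << k);
                if (dp[mask][j] + cost[j][k] < dp[next][k])
                    dp[next][k] = dp[mask][j] + cost[j][k];
            }
        }

    /* Close the tour by returning to city 0 from every possible last city. */
    int best = INF;
    for (int j = 1; j < N; j++)
        if (dp[(1 << N) - 1][j] + cost[j][0] < best)
            best = dp[(1 << N) - 1][j] + cost[j][0];

    printf("Minimum tour cost = %d\n", best);       /* 80 for this matrix */
    return 0;
}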
