
Dynamic Programming-1

Applied Algorithms
Fall 2003

Dynamic Programming 1
Dynamic Prog & Div and Conq
 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions of sub-problems.

 The divide-and-conquer method partitions the problem into independent sub-problems,

 solves the sub-problems recursively, and

 then combines their solutions to solve the original problem.
Dynamic Prog & Div and Conq
 Dynamic programming is applicable when the sub-problems are NOT independent, that is, when sub-problems share sub-sub-problems.

 It is a method of making a set of choices to arrive at an optimal solution.

 If sub-problems are not independent, plain divide and conquer keeps dividing and re-solving the same sub-problems.

 In the worst case, we may end up with an exponential-time algorithm.
Dynamic Prog & Div and Conq
 Frequently there are only a polynomial number of distinct sub-problems, but they do get repeated.
 A dynamic programming algorithm solves every sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time the sub-problem is encountered.
 So we end up having a polynomial-time algorithm.

 Which is better, dynamic programming or divide and conquer?

Optimization problems
 Dynamic programming is typically applied to optimization problems.
 In optimization problems there can be many possible solutions.
 Each solution has a value, and the task is to find a solution with the optimal (maximum or minimum) value. There can be several such optimal solutions.

4 steps of Dynamic Programming Algorithm

1. Characterize the structure of an optimal solution.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution bottom-up.

4. Construct an optimal solution from computed information.

Often only the value of the optimal solution is required, so step 4 is not necessary.
Recursive Definition: Fibonacci Numbers

The Fibonacci numbers are a series of numbers as follows:

fib(n) = 1,                      n <= 2
fib(n) = fib(n-1) + fib(n-2),    n > 2

fib(1) = 1
fib(2) = 1
fib(3) = fib(2) + fib(1) = 1 + 1 = 2
fib(4) = fib(3) + fib(2) = 2 + 1 = 3
fib(5) = fib(4) + fib(3) = 3 + 2 = 5
...
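The definition above maps directly onto a recursive function; a minimal Python sketch:

```python
def fib(n):
    """Fibonacci numbers, straight from the recursive definition."""
    if n <= 2:          # base case: fib(1) = fib(2) = 1
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```

Each call spawns two further calls, which is exactly what makes the naive version expensive, as the next slide discusses.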

Recursive Algorithm: Fibonacci Numbers

 The recursive algorithm takes exponential time.

 The number of distinct sub-problems is only polynomial (O(n)), but they get repeated.
 The sub-problems are not INDEPENDENT.
 The sub-problems share sub-sub-problems.
 We can solve the problem using dynamic programming.
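As a sketch of that idea, the repeated sub-problems can be cached (top-down memoization), so each fib(n) is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Fibonacci with memoization: each sub-problem is solved once, then cached."""
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025 -- far beyond the reach of the naive recursion
```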

Benefits of Dynamic Programming

 Run an O(n)-time loop, keep temporary variables to store the solutions of the last two sub-problems, and reuse them rather than recalculating them.

 So by using dynamic programming we can solve in polynomial time a problem which otherwise was solved in exponential time.
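The loop described above can be sketched as follows, with two temporary variables holding the last two sub-problem solutions:

```python
def fib_bottom_up(n):
    """Iterative Fibonacci: O(n) time, O(1) extra space."""
    prev, curr = 1, 1          # fib(1) and fib(2)
    for _ in range(n - 2):     # advance the pair up to fib(n)
        prev, curr = curr, prev + curr
    return curr

print(fib_bottom_up(10))  # 55
```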

Matrix Chain Multiplication

Our second example of dynamic programming is an algorithm that solves the problem of Matrix Chain Multiplication.

Matrix Chain Multiplication
 We are given a sequence (chain) <A1, A2, A3, ..., An> of n matrices to be multiplied together, and we wish to compute the product A1A2A3…An.

 A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses.

 As matrix multiplication is associative, all parenthesizations yield the same product/result.
Matrix Chain Multiplication revisited

We can multiply two matrices A and B only if the number of columns of A is equal to the number of rows of B.

If A is a p x q matrix and B is a q x r matrix, the resulting matrix C is a p x r matrix.

So the time to compute C is dominated by the number of scalar multiplications, i.e., pqr.
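A minimal sketch of the standard triple-loop multiplication, which performs exactly p*q*r scalar multiplications:

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B (lists of lists)."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "columns of A must equal rows of B"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):   # q scalar multiplications per entry of C
                C[i][j] += A[i][k] * B[k][j]
    return C                     # p * q * r scalar multiplications in total

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```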

Matrix Chain Multiplication revisited

Thus computing the product according to the 1st parenthesization is 10 times faster.

This clearly demonstrates the benefit of determining the optimal order before commencing the product calculations.
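The concrete matrices from the slide are not reproduced above, so as an illustrative assumption take A1 = 10 x 100, A2 = 100 x 5, A3 = 5 x 50; counting scalar multiplications with the pqr rule shows exactly such a tenfold gap:

```python
# Assumed dimensions (not from the slide): A1 is 10x100, A2 is 100x5, A3 is 5x50.
p0, p1, p2, p3 = 10, 100, 5, 50

cost_first  = p0 * p1 * p2 + p0 * p2 * p3  # (A1 A2) A3: 5000 + 2500   = 7500
cost_second = p1 * p2 * p3 + p0 * p1 * p3  # A1 (A2 A3): 25000 + 50000 = 75000

print(cost_second // cost_first)  # 10
```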

Counting the number of parenthesization.

 For a single matrix, we have only one parenthesization.

 The number of alternative parenthesizations for a sequence of n matrices is denoted by P(n).

 We can split a sequence of n matrices between the kth and (k+1)st matrices for any k = 1, 2, …, n-1, and then parenthesize the two resulting subsequences independently.
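The split rule above gives the recurrence P(1) = 1 and P(n) = sum over k = 1 .. n-1 of P(k) * P(n-k), which grows as the Catalan numbers; a small bottom-up sketch:

```python
def num_parenthesizations(n):
    """Count the full parenthesizations of an n-matrix chain, bottom-up."""
    P = [0] * (n + 1)
    P[1] = 1                     # a single matrix has one parenthesization
    for m in range(2, n + 1):    # split between the k-th and (k+1)-st matrices
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]

print([num_parenthesizations(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```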

Computing the optimal costs

The idea of dynamic programming is, rather than computing m (the minimum number of scalar multiplications) recursively, to compute it bottom-up.

A recursive computation takes exponential time; a bottom-up computation takes O(n^3) time.

Example

 Let n = 6

Matrix   Dimension
A1       30 x 35
A2       35 x 15
A3       15 x 5
A4       5 x 10
A5       10 x 20
A6       20 x 25
Order            Size      Scalar products
A2 x A3          35 x 5    2,625
A1 (A2 x A3)     30 x 5    5,250 + 2,625 = 7,875
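Putting the pieces together, here is a bottom-up sketch of the full O(n^3) computation for this chain, where m[i][j] is the minimum number of scalar multiplications needed for the product Ai ... Aj:

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain DP; matrix A_i has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j], 1-based indices
    for length in range(2, n + 1):              # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between the k-th and (k+1)-st matrices:
            # m[i][j] = min over k of m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j].
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

dims = [30, 35, 15, 5, 10, 20, 25]   # A1..A6 from the table above
print(matrix_chain_order(dims))      # 15125
```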
