[CSE102 / Lecture 3]

Dynamic programming

Noam Zeilberger

Bachelor 1, Ecole Polytechnique

3 March 2025

1 / 44
What is dynamic programming?

The name of a strategy for efficiently solving certain kinds of problems which have
both recursive structure and overlapping subproblems.

Originally introduced in the context of mathematical control theory, and often applied
to solving optimization problems, but has wider applications.

Today: several examples illustrating both top-down and bottom-up DP.

2 / 44
Example #1: minimizing transactions

3 / 44
Problem statement

Given a list C of coin denominations and an amount of money n, find the minimum
number of coins needed to pay n. (We’ll assume 1 ∈ C to avoid impossible payments.)

For example, you can pay 0.79€ using only five coins:

79 = 50 + 20 + 5 + 2 + 2

4 / 44
Greedy algorithm?

Idea: just take the largest c ∈ C such that c ≤ n, and repeat with n − c.

Well, this approach works for eurocents (C = {1, 2, 5, 10, 20, 50, 100, 200}), but will
not always yield an optimal payment for arbitrary C .

For example, taking¹ C = {1, 2, 5, 10, 20, 50, 88, 100, 200}, an optimal payment of
n = 264 requires only three coins, but the greedy payment uses five.

¹ See https://en.wikipedia.org/wiki/Fijian_dollar.
5 / 44
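The greedy strategy just described can be sketched in a few lines (a sketch for illustration; the name pay_greedy is made up here, it is not from the lecture code):

```python
def pay_greedy(C, n):
    '''Greedy strategy: repeatedly take the largest coin c <= n.'''
    payment = []
    while n > 0:
        c = max(c for c in C if c <= n)  # largest usable coin (relies on 1 being in C)
        payment.append(c)
        n -= c
    return payment

# With the Fijian-style denominations, greedy pays 264 with five coins...
print(pay_greedy([1, 2, 5, 10, 20, 50, 88, 100, 200], 264))  # [200, 50, 10, 2, 2]
# ...even though 88 + 88 + 88 = 264 needs only three.
```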
Brute force solution

1. Enumerate all ways of paying n with (any number of copies of) coins from C .
2. Return the minimum length of a payment.

6 / 44
Brute force solution²

def payments(C, n):
    '''Generate all possible ways to pay n using coins from list C.'''
    if n == 0:
        yield []                        # only one way to pay nothing
        return
    for c in C:                         # otherwise, try all possible coins c
        if c > n: continue              # (unless they're worth more than n)
        for p in payments(C, n-c):      # and find a way to pay n-c recursively
            yield [c] + p               # and yield the combined payment

def mincoins_brute(C, n):
    mincoins = n
    for p in payments(C, n):
        mincoins = min(mincoins, len(p))
    return mincoins
    # ...or equivalently:
    # return min(len(p) for p in payments(C, n))

² Good for making payments up to about 0.30€
7 / 44
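To see the blow-up concretely, one can run the generator on a tiny instance (payments repeated here so the snippet is self-contained):

```python
def payments(C, n):
    '''Generate all possible ways to pay n using coins from list C.'''
    if n == 0:
        yield []
        return
    for c in C:
        if c > n:
            continue
        for p in payments(C, n - c):
            yield [c] + p

# Each *ordering* of the same coins is generated separately, so the
# number of payments grows exponentially with n.
print(list(payments([1, 2], 3)))  # [[1, 1, 1], [1, 2], [2, 1]]
```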
Step one of dynamic programming: finding a recurrence

Let MC(n) be the minimum number of coins needed to pay n units, drawing coins
from C. Observe that MC(n) obeys the following recurrence:

MC(0) = 0        MC(n) = min { 1 + MC(n − c) : c ∈ C, c ≤ n }

This holds because it takes zero coins to pay nothing, and the best way to pay n units
is to choose a coin c ∈ C making MC(n − c) minimal, and use c together with an
optimal payment of n − c.

8 / 44
The recurrence translated to a recursive function

def mincoins_rec(C, n):
    if n == 0: return 0
    else: return min(1 + mincoins_rec(C, n-c) for c in C if c <= n)

This is still incredibly inefficient. Why?


...We compute the same values over and over! (aka “overlapping subproblems”)
Easier to see this if we add some debugging info...
def mincoins_rec_traced(C, n, from_n):
    print(f'...computing MC({n}) needed for MC({from_n})')
    if n == 0: return 0
    else: return min(1 + mincoins_rec_traced(C, n-c, n) for c in C if c <= n)

9 / 44
>>> mincoins_rec_traced([1,2,4], 5, 5)
...computing MC(5) needed for MC(5)
...computing MC(4) needed for MC(5)
...computing MC(3) needed for MC(4)
...computing MC(2) needed for MC(3)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(1) needed for MC(3)
...computing MC(0) needed for MC(1)
...computing MC(2) needed for MC(4)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(0) needed for MC(4)
...computing MC(3) needed for MC(5)
...computing MC(2) needed for MC(3)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(1) needed for MC(3)
...computing MC(0) needed for MC(1)
...computing MC(1) needed for MC(5)
...computing MC(0) needed for MC(1)
2

10 / 44
>>> mincoins_rec_traced([1,2,4], 6, 6)
...computing MC(6) needed for MC(6)
...computing MC(5) needed for MC(6)
...computing MC(4) needed for MC(5)
...computing MC(3) needed for MC(4)
...computing MC(2) needed for MC(3)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(1) needed for MC(3)
...computing MC(0) needed for MC(1)
...computing MC(2) needed for MC(4)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(0) needed for MC(4)
...computing MC(3) needed for MC(5)
...computing MC(2) needed for MC(3)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(1) needed for MC(3)
...computing MC(0) needed for MC(1)
...computing MC(1) needed for MC(5)
...computing MC(0) needed for MC(1)
...computing MC(4) needed for MC(6)
...computing MC(3) needed for MC(4)
...computing MC(2) needed for MC(3)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(1) needed for MC(3)
...computing MC(0) needed for MC(1)
...computing MC(2) needed for MC(4)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
...computing MC(0) needed for MC(4)
...computing MC(2) needed for MC(6)
...computing MC(1) needed for MC(2)
...computing MC(0) needed for MC(1)
...computing MC(0) needed for MC(2)
2

11 / 44
Memoization

Of course, we know that the value of mincoins_rec(C, n) doesn’t change for fixed
C and given n, but Python does not have a general mechanism for noticing that. Still,
we can program such memoization by hand.

Recipe:
1. Keep a “memo table” with all values of the function you’ve already computed.
2. Each time the function is called, check whether the output for the given input
value is already in the memo table, and if so return it.
3. Otherwise, compute the output, store it in the memo table, and return the output.

12 / 44
Mincoins with recursion and memoization, aka top-down dynamic programming

def mincoins_td(C, n, memo=None):
    if memo is None: memo = {}
    if n in memo: return memo[n]
    if n == 0: out = 0
    else: out = min(1 + mincoins_td(C, n-c, memo) for c in C if c <= n)
    memo[n] = out
    return out

The combination of recursion + memoization is often referred to as top-down DP.

“Top-down” because computation starts up at the root of the recursion tree and works
down towards the leaves, while using the memo table to avoid recomputing subtrees.
(For some reason computer scientists like drawing trees upside-down.)

13 / 44
Bottom-up dynamic programming

In “bottom-up” DP we start at the leaves and work up to the root.

In other words, starting with the smallest subproblems, we progressively consider larger
and larger cases until we reach the goal.

Concretely, this is often carried out using iteration while progressively filling in a table.

14 / 44
Mincoins using bottom-up dynamic programming

def mincoins_bu(C, n):
    table = [0]
    for i in range(1, n+1):
        table.append(min(1 + table[i-c] for c in C if i-c >= 0))
    return table[n]

15 / 44
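As a quick sanity check (assuming mincoins_bu as above), the bottom-up version recovers the three-coin Fijian payment that defeated the greedy algorithm:

```python
def mincoins_bu(C, n):
    table = [0]
    for i in range(1, n + 1):
        table.append(min(1 + table[i - c] for c in C if i - c >= 0))
    return table[n]

print(mincoins_bu([1, 2, 5, 10, 20, 50, 88, 100, 200], 264))  # 3 (= 88 + 88 + 88)
print(mincoins_bu([1, 2, 4], 5))                              # 2 (= 4 + 1)
```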
Animation of the bottom-up algorithm (added post-lecture)

def mincoins_bu(C, n):
    table = [0]
    for i in range(1, n+1):
        table.append(min(1 + table[i-c] for c in C if c<=i))
    return table[n]

Example with C = {1, 5, 8}, n = 11 (abbreviating t for table). The table is filled
in left to right; its final state:

i                             0   1    2    3    4    5      6      7      8        9        10       11
[t[i-c] for c in C if c<=i]   []  [0]  [1]  [2]  [3]  [4,0]  [1,1]  [2,2]  [3,3,0]  [1,4,1]  [2,1,2]  [2,2,3]
t[i]                          0   1    2    3    4    1      2      3      1        2        2        3

16 / 44
Obtaining a more useful solution

Knowing that we can pay (say) 17.29 FJD using 12 coins is interesting, but not so
useful if we don’t know how to make such an optimal payment.

17 / 44
Computing the best payment with top-down DP

def bestpayment_td(C, n, memo=None):
    if memo is None: memo = {}
    if n in memo: return memo[n]
    if n == 0: out = []
    else: out = min(([c] + bestpayment_td(C, n-c, memo) for c in C if c <= n),
                    key=len)
    memo[n] = out
    return out

Idea: memo table stores optimal payments.


(This has a large space overhead.)

18 / 44
Computing the best payment with bottom-up DP

def bestpayment_bu(C, n):
    table, best_coins = [0], [1]
    for i in range(1, n+1):
        min_coins, best_coin = i, 1
        for c in C:
            if c <= i and 1 + table[i-c] < min_coins:
                min_coins = 1 + table[i-c]
                best_coin = c
        table.append(min_coins)
        best_coins.append(best_coin)
    p = []
    while n > 0:
        p.append(best_coins[n])
        n -= best_coins[n]
    return p
Idea: maintain additional table for storing the best coin to use at each value.

19 / 44
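For instance, on the example used in the animation (C = {1, 5, 8}, n = 11), the reconstructed payment has three coins, matching t[11] = 3 (assuming bestpayment_bu as above):

```python
def bestpayment_bu(C, n):
    table, best_coins = [0], [1]
    for i in range(1, n + 1):
        min_coins, best_coin = i, 1
        for c in C:
            if c <= i and 1 + table[i - c] < min_coins:
                min_coins = 1 + table[i - c]
                best_coin = c
        table.append(min_coins)
        best_coins.append(best_coin)
    p = []
    while n > 0:                 # walk back through the best coins
        p.append(best_coins[n])
        n -= best_coins[n]
    return p

payment = bestpayment_bu([1, 5, 8], 11)
print(payment, sum(payment))  # [1, 5, 5] 11
```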
Advantages and disadvantages of top-down vs bottom-up DP, in general

Top-down DP:
+ relatively easy to convert a recursive function to use memoization
− can have large memory overhead
− computational overhead of testing whether a value is in the memo table
(to keep this overhead small, input values should be hashable, or at least ordered)

Bottom-up DP:
− requires more “programming” of the computation in advance
+ usually more efficient than top-down DP
− occasionally wasteful if values of subproblems are ultimately unneeded
(one example: parsing words by a grammar using the CYK algorithm...)

20 / 44
Another perspective on optimal payments

A best payment is equivalent to a shortest path from n to 0 in the following graph:


▶ nodes 0, . . . , n
▶ an edge n → (n − c) for every c ∈ C.

For example, taking n = 13, C = {1, 2, 5, 7, 9}:

(figure: nodes 0, 1, ..., 13 in a row, with an edge n → n − c for each c ∈ C)

Key point: we do not need to try all paths to find a shortest path.
(We will return to the shortest path problem later in the course.)

21 / 44
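The key point can be made concrete with a breadth-first search, which explores nodes in order of distance from n and so never enumerates all paths (a sketch for illustration; shortest paths proper are covered later in the course):

```python
from collections import deque

def mincoins_bfs(C, n):
    '''Length of a shortest path from n down to 0 along edges m -> m - c.'''
    dist = {n: 0}
    queue = deque([n])
    while queue:
        m = queue.popleft()
        if m == 0:
            return dist[m]
        for c in C:
            if c <= m and m - c not in dist:
                dist[m - c] = dist[m] + 1  # first visit = shortest distance
                queue.append(m - c)

print(mincoins_bfs([1, 2, 5, 7, 9], 13))  # 3
```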
Interlude

22 / 44
Origin of the name “dynamic programming”

from Richard Bellman, Eye of the Hurricane: An Autobiography (1984):

I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage
decision processes. An interesting question is, “Where did the name, dynamic programming,
come from?” The 1950s were not good years for mathematical research. We had a very
interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he
actually had a pathological fear and hatred of the word “research”. I’m not using the term
lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get
violent if people used the term research in his presence. You can imagine how he felt, then,
about the term mathematical. [...] In the first place I was interested in planning, in decision
making, in thinking. But planning, is not a good word for various reasons. I decided therefore
to use the word “programming”. I wanted to get across the idea that this was dynamic, this
was multistage, this was time-varying. I thought, let’s kill two birds with one stone. Let’s
take a word that has an absolutely precise meaning, namely dynamic, in the classical physical
sense. It also has a very interesting property as an adjective, and that is it’s impossible to use
the word dynamic in a pejorative sense. Try thinking of some combination that will possibly
give it a pejorative meaning. It’s impossible.

23 / 44
Example #2: Fibonacci numbers revisited

24 / 44
(figure omitted; credit: user Romain on Wikipedia)

25 / 44
The Fibonacci recurrence naively translated into Python

def fib_rec(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return fib_rec(n-1) + fib_rec(n-2)

As we observed in Lecture 1, evaluating fib_rec(n) is very slow – much, much slower
than the iterative version fib_it(n). Why?

26 / 44
Recursion tree of F4

F4
├── F2
│   ├── F0 = 0
│   └── F1 = 1
└── F3
    ├── F1 = 1
    └── F2
        ├── F0 = 0
        └── F1 = 1

27 / 44
Recursion tree of F5

F5
├── F3
│   ├── F1 = 1
│   └── F2
│       ├── F0 = 0
│       └── F1 = 1
└── F4
    ├── F2
    │   ├── F0 = 0
    │   └── F1 = 1
    └── F3
        ├── F1 = 1
        └── F2
            ├── F0 = 0
            └── F1 = 1

28 / 44
Recursion tree of F6

F6
├── F4
│   ├── F2
│   │   ├── F0 = 0
│   │   └── F1 = 1
│   └── F3
│       ├── F1 = 1
│       └── F2
│           ├── F0 = 0
│           └── F1 = 1
└── F5
    ├── F3
    │   ├── F1 = 1
    │   └── F2
    │       ├── F0 = 0
    │       └── F1 = 1
    └── F4
        ├── F2
        │   ├── F0 = 0
        │   └── F1 = 1
        └── F3
            ├── F1 = 1
            └── F2
                ├── F0 = 0
                └── F1 = 1

The tree grows exponentially!
...but it contains many redundancies...

29 / 44
Overlapping subproblems

The structure of the recursion leads to the same numbers being computed over and
over, and so we can compute them efficiently using either top-down or bottom-up DP.

31 / 44
Fibonacci with top-down DP

def fib_td(n, memo=None):
    if memo is None: memo = {}
    if n in memo: return memo[n]

    if n == 0: out = 0
    elif n == 1: out = 1
    else: out = fib_td(n-1, memo) + fib_td(n-2, memo)

    memo[n] = out
    return out

32 / 44
Fibonacci with top-down DP (illustration)

(figure: the recursion tree of F5 pruned by memoization; after the first computation
of each Fk, later occurrences become memo-table lookups)

  x    add x to the memo table

  !x   lookup x from the memo table

33 / 44
Fibonacci with bottom-up DP

def fib_bu(n):
    table = [0, 1]
    for i in range(2, n+1):
        table.append(table[i-1] + table[i-2])
    return table[n]

Notice that we really only need to look up the last two values (table[i-1] and
table[i-2]) to compute the next value. This leads to a space optimization...

34 / 44
Optimized bottom-up Fibonacci

Replacing the table in fib_bu by just its last two entries:

def fib_bu_opt(n):
    a, b = 0, 1
    for _ in range(2, n+1):
        a, b = b, a+b
    return a if n == 0 else b

This version is essentially the same as fib_it!

35 / 44
Comparison

A helper function for timing functions:


import time

def timed(f, x):
    '''Runs f(x) and returns a pair of the resulting value and the elapsed time.'''
    t0 = time.time()
    v = f(x)
    t1 = time.time()
    return (v, t1-t0)

36 / 44
Comparison

In[1]: timed(fib_rec, 20)

Out[1]: (6765, 0.0031309127807617188)

In[2]: timed(fib_rec, 34)

Out[2]: (5702887, 1.1502575874328613)

In[3]: timed(fib_rec, 35)

Out[3]: (9227465, 1.9626975059509277)

37 / 44
Comparison
In[4]: timed(fib_td, 35)

Out[4]: (9227465, 2.3126602172851562e-05)

In[5]: timed(fib_bu, 35)

Out[5]: (9227465, 1.621246337890625e-05)

In[6]: timed(fib_td, 10000)

Out[6]: (336447648764317832666216120051075433103021484606800639065647...,
0.009203672409057617)

In[7]: timed(fib_bu, 10000)

Out[7]: (336447648764317832666216120051075433103021484606800639065647...,
0.003816843032836914)

In[8]: timed(fib_bu_opt, 10000)

Out[8]: (336447648764317832666216120051075433103021484606800639065647...,
0.002521514892578125)
38 / 44
Example #3: minimum-cost edit sequence

39 / 44
Problem statement

Given two strings, find an optimal sequence of operations transforming one into the other.
The cost |δ| of an optimal edit sequence δ : A → B is called the "edit distance" of A from B.

We assume the following cost model:³

|copy| = 0    |insert| = 1    |delete| = 1    |replace| = 1

³ Edit distance under this cost model is called the Levenshtein distance between strings.
40 / 44
A recursive formula

def edist(A, B):
    if not A:
        return len(B)
    elif not B:
        return len(A)
    elif A[0] == B[0]:
        return edist(A[1:], B[1:])
    else:
        return 1 + min(edist(A[1:], B[1:]), edist(A, B[1:]), edist(A[1:], B))

Exercise: make this efficient using either top-down or bottom-up DP!

41 / 44
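One possible bottom-up answer to the exercise (a sketch, not the lecture's own solution): fill in a table of distances between all pairs of prefixes, exactly as in the hand computation that follows.

```python
def edist_bu(A, B):
    # table[i][j] = edit distance between A[:i] and B[:j]
    table = [[0] * (len(B) + 1) for _ in range(len(A) + 1)]
    for i in range(len(A) + 1):
        table[i][0] = i                  # delete all of A[:i]
    for j in range(len(B) + 1):
        table[0][j] = j                  # insert all of B[:j]
    for i in range(1, len(A) + 1):
        for j in range(1, len(B) + 1):
            if A[i - 1] == B[j - 1]:     # copy is free
                table[i][j] = table[i - 1][j - 1]
            else:                        # replace, insert, or delete
                table[i][j] = 1 + min(table[i - 1][j - 1],
                                      table[i][j - 1],
                                      table[i - 1][j])
    return table[len(A)][len(B)]

print(edist_bu('kitten', 'sitting'))  # 3
```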
Computing the edit distance from A to B, by hand

        K   I   T   T   E   N
    0   1   2   3   4   5   6
S   1
I   2
T   3
T   4
I   5
N   6
G   7

Each corner of the grid represents a pair of prefixes of A and B. Our goal is to
label each corner (i, j) by the edit distance between A[: i] and B[: j]. Yellow
cells (highlighted in the original slides) represent where the words match.

We start with the indicated labels, and we repeatedly apply the following rule:
label a white cell by 1 + the minimum of the three labels to its North, West, and
North-West; and label a yellow cell by simply copying the label to its NW.

42 / 44
d(kitten, sitting) = 3

        K   I   T   T   E   N
    0   1   2   3   4   5   6
S   1   1   2   3   4   5   6    (S/K)
I   2   2   1   2   3   4   5    (copy)
T   3   3   2   1   2   3   4    (copy)
T   4   4   3   2   1   2   3    (copy)
I   5   5   4   3   2   2   3    (I/E)
N   6   6   5   4   3   3   2    (copy)
G   7   7   6   5   4   4   3    (ins G)

43 / 44
Summary of dynamic programming

Technique for solving problems with recursive structure and overlapping subproblems.
Comes in “top-down” and “bottom-up” versions.

Top-down DP:
▶ starts at root of recursion tree and works “down” towards the leaves
▶ implemented as a recursive function with memoization for reusing computations
▶ recipe: 1. figure out recursive structure, 2. write recursive fn, 3. add memoization

Bottom-up DP:
▶ starts at leaves of recursion tree and works “up” towards the root
▶ implemented using iteration rather than (function) recursion
▶ requires figuring out recursive structure and suitable plan of computation

44 / 44
