
Dynamic Programming

Part I

Md. Bakhtiar Hasan

Lecturer
Department of Computer Science and Engineering
Islamic University of Technology
Motivation

Algorithm design technique

Optimization problems
• Minimize
• Maximize
Exhaustive search
• Requires exponential time
• Done cleverly → Polynomial time
Subproblems + “Reuse”
History

Problem At Hand

Fibonacci Series
F1 = 1
F2 = 1
Fn = Fn−1 + Fn−2

Goal: Compute the nth Fibonacci Number

Naive Recursive Approach

fib(n):
    if n <= 2: f = 1
    else: f = fib(n-1) + fib(n-2)
    return f

Translates the formula into code → Correct algorithm
Complexity?
T(n) = T(n−1) + T(n−2) + Θ(1) → ϕ^n, where ϕ = (1 + √5)/2
T(n) ≥ 2 × T(n−2)
T(n) ≥ 2 × 2 × T(n−4)
...
(How many times can I subtract 2 from n? → n/2 times)
T(n) ≥ 2^(n/2)
T(n) = Ω(2^(n/2)) → Exponential!

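To see the blow-up concretely, here is a small Python sketch (the call-counting wrapper is our own illustration, not part of the slides) that counts how many recursive calls the naive fib makes:

def naive_fib(n, counter):
    # counter[0] tracks the total number of calls made so far
    counter[0] += 1
    if n <= 2:
        return 1
    return naive_fib(n - 1, counter) + naive_fib(n - 2, counter)

for n in (10, 20, 30):
    counter = [0]
    naive_fib(n, counter)
    print(n, counter[0])   # 109, 13529, 1664079 calls — growing roughly like ϕ^n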


Naive Recursive Approach

fib(n):
    if n <= 2: f = 1
    else: f = fib(n-1) + fib(n-2)
    return f

[Figure: recursion tree of the calls made by fib(n), rooted at Fn]


Naive Recursive Approach

Observation
• The problem can be divided into subproblems
• The subproblems overlap
• Calculate once and store
• No cost of recalculation



Memoized Approach

memo = {}
fib(n):
    if n in memo: return memo[n]
    if n <= 2: f = 1
    else: f = fib(n-1) + fib(n-2)
    memo[n] = f
    return f

fib(k) only recurses the first time it’s called, ∀k
Memoized calls cost Θ(1) → Free
# of non-memoized calls → n
Cost of each non-memoized call → Θ(1)
Running Time: # of non-memoized calls × Cost of each call → Θ(n)

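The pseudocode above is almost Python already; a directly runnable version (a minimal sketch, using a module-level dict exactly as on the slide) looks like this:

memo = {}

def fib(n):
    if n in memo:                      # memoized call: Θ(1), no recursion
        return memo[n]
    if n <= 2:
        f = 1
    else:
        f = fib(n - 1) + fib(n - 2)    # each subproblem is computed only once
    memo[n] = f
    return f

print(fib(50))   # 12586269025, using Θ(n) non-memoized calls

In idiomatic Python, the standard-library decorator functools.lru_cache provides the same memoization without writing the dictionary by hand.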


General Form

Dynamic Programming: Recursion + Memoization


Start from top
Divide the problem into subproblems
• Subproblems help solve the original problem
Solve the subproblems
Memoize (Remember)
Reuse the solutions
Running Time: # of subproblems × Time/subproblem
Don’t count memoized recursions

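The recipe above can be packaged once and reused across problems; the following Python sketch of a generic memoization wrapper (the helper name memoize is ours, not from the slides) follows the same recursion-plus-memoization pattern:

import functools

def memoize(solve):
    cache = {}                            # one entry per distinct subproblem

    @functools.wraps(solve)
    def wrapper(*subproblem):
        if subproblem not in cache:       # only the first call actually recurses
            cache[subproblem] = solve(*subproblem)
        return cache[subproblem]          # later calls cost Θ(1)

    return wrapper

@memoize
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)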


Bottom-up Approach

F1 F2 F3 · · · Fn−2 Fn−1 Fn   (the subproblems, in evaluation order)

fib = {}
for k in range(1, n + 1):
    if k <= 2: f = 1
    else: f = fib[k-1] + fib[k-2]
    fib[k] = f
return fib[n]

Observation
• Exactly the same computation as the memoized version
• Topological sort of subproblem dependency DAG
• Can save space → Only the last two values are required (see the sketch below)
• Obvious running time → One for loop, constant work inside

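Since each step needs only the previous two values, the table can be dropped entirely; a minimal Python sketch of that space saving (the function name fib_bottom_up is ours):

def fib_bottom_up(n):
    prev, curr = 1, 1                # F1, F2
    for _ in range(3, n + 1):        # same loop as above, but Θ(1) extra space
        prev, curr = curr, prev + curr
    return curr

print(fib_bottom_up(10))   # 55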


Problem At Hand

Single source shortest paths

Compute δ(s, v) ∀v
Idea:
Find a naive recursive approach
Memoize
Tool:
Guessing
• Don’t know the answer? → Guess
• Try all guesses
• Take the best one


Solution Idea

[Figure: a path from s to v whose last edge is (u, v)]

Which incoming edge is included in the shortest path to v?

• Don’t know → Guess (u, v)
• Then, recursively compute δ(s, u)
• If correct: δ(s, v) = δ(s, u) + cost[u][v]
• If not? → Try others and take the minimum
• δ(s, v) = min_{(u,v)∈E} (δ(s, u) + cost[u][v])
• Exponential Complexity
• Memoize! (see the sketch below)
What should we memoize?
• Check if δ(s, u) is in memo[u]
  ◦ Yes? → Return it
  ◦ No? → Recurse and calculate
Base case? → δ(s, s) = 0

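On a DAG this guess-and-memoize recurrence translates directly into code; a Python sketch under the assumption that the graph is given as a map of incoming edges with costs (the names shortest_path and incoming are ours):

import math

def shortest_path(incoming, s, v, memo=None):
    # delta(s, v), where incoming[v] = {u: cost[u][v] for each edge (u, v)}
    if memo is None:
        memo = {s: 0}                                  # base case: delta(s, s) = 0
    if v in memo:
        return memo[v]
    best = math.inf
    for u, cost in incoming.get(v, {}).items():        # guess the last edge (u, v)
        best = min(best, shortest_path(incoming, s, u, memo) + cost)
    memo[v] = best
    return best

incoming = {"a": {"s": 1}, "b": {"s": 4, "a": 2}, "t": {"b": 1}}
print(shortest_path(incoming, "s", "t"))   # 4, via s → a → b → t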

Complexity

# of subproblems × Time/subproblem
Subproblems → δ(s, v)
# of subproblems → V
Time for subproblem δ(s, v)
• indegree(v) → Depends on v
• Cannot take direct product here → Sum over all subproblems
• Calculate min each time → Θ(1)
• Total: indegree(v) + 1
Total Time: Σ_{v∈V} (indegree(v) + 1) = Σ_{v∈V} indegree(v) + Σ_{v∈V} 1 = Θ(V + E), since the indegrees sum to E
But that’s TopSort + One Pass Relaxation! → Doing the same thing here.


Cyclic Graphs

What if the graph contains a cycle? → A positive one

[Figure: a cycle through s, v, a, b — computing δ(s, v) asks for δ(s, a) and δ(s, b), which eventually ask for δ(s, v) again]

What’s wrong with positive cycles?
• Can we find it in the memo table?
  ◦ Haven’t finished calculating δ(s, v) yet
What do we do now?
• Make the graph acyclic


Cyclic Graphs

[Figure: the graph on s, a, t unrolled into layers s0 a0 t0, s1 a1 t1, s2 a2 t2, ordered along a time axis]

A simple path takes at most |V| − 1 edges

Make k = |V| − 1 copies of the graph
Edge from the current layer to the next layer
δ_k(s, v) = weight of the shortest path from s to v using ≤ k edges
General Form: δ_k(s, v) = min_{(u,v)∈E} (δ_{k−1}(s, u) + cost[u][v])
Goal: δ_{|V|−1}(s, v)

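A bottom-up Python sketch of this recurrence, assuming the graph is given as a list of (u, v, cost) edges (the names shortest_paths, delta and new_delta are ours). As the next slide points out, this is exactly the Bellman-Ford relaxation pattern:

import math

def shortest_paths(vertices, edges, s):
    delta = {v: math.inf for v in vertices}            # delta_0: only s is reachable
    delta[s] = 0                                       # base case: delta_0(s, s) = 0
    for _ in range(len(vertices) - 1):                 # k = 1 .. |V| − 1
        new_delta = dict(delta)                        # delta_k built from delta_{k−1}
        for u, v, cost in edges:
            new_delta[v] = min(new_delta[v], delta[u] + cost)
        delta = new_delta
    return delta                                       # delta_{|V|−1}(s, v) for every v

edges = [("s", "a", 1), ("a", "t", 2), ("t", "a", 3), ("s", "t", 5)]
print(shortest_paths({"s", "a", "t"}, edges, "s"))     # s: 0, a: 1, t: 3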


Complexity

# of subproblems × Time/subproblem
Subproblems → δ_k(s, v)
# of subproblems
• Value of k → [0, V − 1]
• For each k → v ∈ V
• Total: V²
For each k, time for subproblem δ_k(s, v)
• indegree(v) → Depends on v
• Cannot take direct product here → Sum over all subproblems
• Calculate min each time → Θ(1)
• Total: indegree(v) + 1
Total time for each k → Θ(V + E)
How many ks? → [0, |V| − 1] → |V|
• Total: Θ(V · E)
But that’s Bellman-Ford! → Yes, exactly!

Reading

CLRS 15.1, 15.3


Dasgupta 6.1, 6.2
Jeff Erickson 3.1, 3.6
