7-Dynamic Programming-17-01-2024

This document discusses dynamic programming and how it differs from divide-and-conquer. It explains that dynamic programming is useful when subproblems are only slightly smaller than the original problem, leading to an exponential number of nodes in the recursion tree. It also describes the characteristics of problems that can be solved with dynamic programming, including overlapping subproblems and optimal substructure. Examples of dynamic programming include the knapsack problem, shortest paths, longest common subsequence, and traveling salesman problem.


BCSE204L Design and Analysis of Algorithms
Module-2
Topic-1: Dynamic Programming
Divide-and-conquer vs Dynamic programming
• Why did recursion work so well with divide-and-conquer?
• A problem is expressed in terms of subproblems that are substantially smaller,
say half the size.
• Because of this sharp drop in problem size, the full recursion tree has only
logarithmic depth and a polynomial number of nodes.
• When does dynamic programming enter the picture?
• A problem is reduced to subproblems that are only slightly smaller, say A[j]
relies on A[j-1].
• Because of this, the recursion tree generally has polynomial depth and an
exponential number of nodes.
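To make the contrast concrete, here is a minimal illustrative sketch (not from the slides; the function names are our own). The first recursion halves its subproblems, so its tree has logarithmic depth; the second shrinks its subproblem by only one or two, which is what makes the tree exponentially large.

```python
def dc_sum(a, lo, hi):
    """Divide-and-conquer: each call splits the range in half,
    so the recursion tree has only logarithmic depth."""
    if hi - lo <= 1:
        return a[lo] if hi > lo else 0
    mid = (lo + hi) // 2
    return dc_sum(a, lo, mid) + dc_sum(a, mid, hi)

def fib_naive(n):
    """Subproblems shrink by only 1 (and 2): the recursion tree
    has depth n and an exponential number of nodes."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Running `fib_naive(40)` is already painfully slow, while `dc_sum` on a list of 40 elements makes only a few dozen calls; dynamic programming exists to tame the first shape.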
Need for dynamic programming
Simple optimization using memoization:
A problem can be solved using Dynamic Programming if it has these characteristics:
1. Subproblems
2. Overlapping subproblems
3. Optimal substructure
Optimal Substructure in DP
• Key principle in DP
• A problem is said to satisfy the Principle of Optimality
if the sub-solutions of an optimal solution of the
problem are themselves optimal solutions for their
subproblems.
• E.g., the shortest-path problem satisfies the Principle
of Optimality.
• This is because if a, x1, x2, ..., xn, b is a shortest path
from node a to node b in a graph, then the portion from
xi to xj on that path is a shortest path from xi to xj.
DP vs Greedy
• Both are used to solve optimization problems.
• Greedy follows a predefined procedure; D.P. tries out all
possible solutions and finds the optimal one.
• D.P. uses a recursive formula.
• D.P. follows the principle of optimality, i.e., it solves a problem by
taking a sequence of decisions to reach an optimal solution (at every step).
• Greedy makes a one-time decision at each step and never revisits it.
• In DP, optimal solutions to subproblems are retained so as to
avoid recomputing their values.
• Memoization and tabulation
Example: In how many ways can you choose r
items from n items? Implement a program to
compute this.
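One way to implement this (a sketch; the function name is our own) is to tabulate the DP recurrence C(n, r) = C(n-1, r-1) + C(n-1, r), keeping only a single row of Pascal's triangle:

```python
def n_choose_r(n, r):
    """Count the ways to choose r items from n using the recurrence
    C(n, r) = C(n-1, r-1) + C(n-1, r), tabulated bottom-up."""
    if r < 0 or r > n:
        return 0
    # dp[j] holds C(i, j) for the current row i of Pascal's triangle.
    dp = [0] * (r + 1)
    dp[0] = 1  # base case: C(i, 0) = 1 for every i
    for i in range(1, n + 1):
        # Traverse right-to-left so dp[j-1] still holds row i-1's value.
        for j in range(min(i, r), 0, -1):
            dp[j] += dp[j - 1]
    return dp[r]
```

Each of the n*r table entries is computed once, so the overlapping subproblems of the naive recursion are never re-solved.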
Memoization and Tabulation
Memoization (Top Down):
1. Memory is not filled in a sequential manner.
2. Starts from the top-most destination state and computes its answer,
until we reach the bottom-most base state.
Tabulation (Bottom Up):
1. Starts its transition from the bottom-most base case dp[0] and
reaches its destination state dp[n].
2. The DP table is populated sequentially, and we directly access the
calculated states from the table.
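The two styles can be sketched on Fibonacci numbers (an illustrative example, not one prescribed by the slides):

```python
from functools import lru_cache

# Top-down memoization: recurse from n, cache each state the
# first time it is computed; the cache fills in no fixed order.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up tabulation: fill dp[0..n] sequentially from the base
# cases, reading already-computed states directly from the table.
def fib_tab(n):
    dp = [0] * (n + 1)
    if n >= 1:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

Both run in linear time; the top-down version only touches the states the recursion actually needs, while the bottom-up version trades that for simple iterative control flow.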
Dynamic programming
• Dynamic programming solves problems by combining the solutions to
subproblems
• Dynamic programming applies when the subproblems overlap - that
is, when subproblems share subsubproblems
• A divide-and-conquer algorithm does more work than necessary,
repeatedly solving the common subsubproblems.
• A dynamic-programming algorithm solves each subsubproblem just
once and then saves its answer in a table, thereby avoiding the work
of recomputing the answer every time it revisits that subsubproblem.
DP Examples
• 0/1 knapsack
• Floyd-Warshall algorithm (All pair shortest path)
• LCS (Longest Common Subsequence)
• TSP (Travelling Salesman Problem)
• Matrix chain multiplication
• Assembly Line Scheduling
• ...and many more
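As a taste of the first example in the list, here is a sketched bottom-up 0/1 knapsack (a standard one-dimensional tabulation; the names are our own):

```python
def knapsack_01(values, weights, capacity):
    """0/1 knapsack by tabulation: dp[w] is the best total value
    achievable with total weight <= w using the items seen so far."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Traverse right-to-left so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

The right-to-left inner loop is the crux: scanning left-to-right would let an item be reused, which solves the unbounded knapsack instead of 0/1.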
