M4 - Chapter 8

Chapter 8 discusses Dynamic Programming as an optimization technique for recursive solutions, particularly highlighting its ability to reduce time complexity from exponential to polynomial by storing results of subproblems. It covers the Knapsack Problem and introduces memory functions that combine top-down and bottom-up approaches for efficiency. Additionally, it explains Warshall’s and Floyd’s algorithms for transitive closure and all-pairs shortest paths in graphs, both having a time efficiency of Θ(n^3).


Chapter 8

Dynamic Programming
• Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that makes repeated calls for the same inputs, we can optimize it with Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to recompute them when they are needed later. This simple optimization can reduce time complexity from exponential to polynomial.
• For example, a simple recursive solution for the Fibonacci numbers has exponential time complexity; if we optimize it by storing the solutions of subproblems, the time complexity drops to linear.
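The Fibonacci speedup described above can be sketched as follows (a minimal illustration, not taken from the slides):

```python
def fib(n, memo=None):
    """Memoized Fibonacci: each subproblem is solved once, giving O(n) time
    instead of the exponential time of the plain recursive version."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025, infeasible for the unmemoized recursion
```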

https://www.youtube.com/watch?v=HC4kUZoIiv0

1. The Knapsack Problem and Memory Functions


Knapsack Problem using Dynamic Programming

• The time efficiency and space efficiency of this algorithm are both in Θ(nW).
• The time needed to find the composition of an optimal solution is in O(n).
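The bottom-up knapsack table described above, where F[i][j] is the value of the best subset of the first i items that fits in capacity j, can be sketched in Python (the item data in the usage line is an illustrative assumption, not taken from the slides):

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack: fills an (n+1) x (W+1) table in Theta(nW) time.

    F[i][j] = max value achievable with the first i items and capacity j.
    """
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if weights[i - 1] <= j:
                # Either skip item i, or take it and use the remaining capacity.
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
            else:
                F[i][j] = F[i - 1][j]
    return F[n][W]

# Illustrative instance: 4 items, capacity 5.
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))
```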
https://www.youtube.com/watch?v=T8w-7Brnvr4

Memory functions

• An unsatisfying aspect of the bottom-up approach is that solutions to some of the smaller subproblems are often not necessary for solving the given problem. Since this drawback is not present in the top-down approach, it is natural to try to combine the strengths of the top-down and bottom-up approaches.
• The goal is a method that solves only the subproblems that are necessary, and solves each of them only once. Such a method exists; it is based on using memory functions.
• This method solves a given problem in the top-down manner but, in addition, maintains a table of the kind that would have been used by a bottom-up dynamic programming algorithm. Initially, all the table’s entries are initialized with a special “null” symbol to indicate that they have not yet been calculated. Thereafter, whenever a new value needs to be calculated, the method checks the corresponding entry in the table first: if the entry is not “null,” it is simply retrieved from the table; otherwise, it is computed by the recursive call, whose result is then recorded in the table.

• This algorithm may be less space-efficient than a bottom-up algorithm, but its time efficiency class is the same as that of the bottom-up algorithm.
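Following the description above, a memory-function version of the knapsack solver might look like this (a sketch; `None` plays the role of the “null” symbol, and the item data is illustrative):

```python
def mf_knapsack(weights, values, W):
    """Top-down 0/1 knapsack with a memory function.

    Table entries start as None ("null"); each entry that is actually needed
    is computed by a recursive call exactly once and then recorded.
    """
    n = len(weights)
    F = [[None] * (W + 1) for _ in range(n + 1)]

    def solve(i, j):
        if i == 0 or j == 0:
            return 0
        if F[i][j] is None:  # not yet calculated: recurse and record
            if weights[i - 1] > j:
                F[i][j] = solve(i - 1, j)
            else:
                F[i][j] = max(solve(i - 1, j),
                              values[i - 1] + solve(i - 1, j - weights[i - 1]))
        return F[i][j]  # retrieved from the table on repeat visits

    return solve(n, W)

print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))
```

Unlike the bottom-up version, only the table entries reachable from F[n][W] are ever filled in.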
https://www.youtube.com/watch?v=Ar71s3mrl3c&t=616s

2. Warshall’s and Floyd’s Algorithms

1. Warshall’s algorithm

DEFINITION The transitive closure (reachability matrix) of a directed graph with n vertices is the n × n boolean matrix T = {tij} in which the element in the ith row and jth column is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.

• We can generate the transitive closure of a digraph with the help of DFS or BFS. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from it, and hence the columns that contain 1’s in the ith row of the transitive closure. Thus, doing such a traversal with every vertex as a starting point yields the transitive closure in its entirety.
• Since this method traverses the same digraph several times, we should hope that a better algorithm can be found. It is called Warshall’s algorithm after Stephen Warshall, who discovered it. It is convenient to assume that the digraph’s vertices, and hence the rows and columns of the adjacency matrix, are numbered from 1 to n. Warshall’s algorithm constructs the transitive closure through a series of n × n boolean matrices R(0), ..., R(k), ..., R(n), where R(k)[i, j] = 1 if and only if there is a directed path from vertex i to vertex j with every intermediate vertex numbered no higher than k. Each element is computed by the rule
R(k)[i, j] = R(k-1)[i, j] or ( R(k-1)[i, k] and R(k-1)[k, j] ).
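The series of boolean matrices above can be computed in place, which gives the usual triple-loop sketch of Warshall’s algorithm (the 4-vertex digraph in the usage line is illustrative):

```python
def warshall(A):
    """Warshall's transitive closure of a digraph given by its boolean
    adjacency matrix A (0/1 entries). After iteration k, R[i][j] == 1 iff
    there is a path from i to j whose intermediate vertices are all <= k."""
    n = len(A)
    R = [row[:] for row in A]  # R(0) is the adjacency matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Path i -> j either avoids k, or goes i -> k -> j.
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Digraph with edges a->b, b->d, d->a, d->c (vertices a, b, c, d).
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(A))
```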

EXAMPLE:

time efficiency is Θ(n^3)

https://www.youtube.com/watch?v=oNI0rf2P9gE

2. Floyd’s (Floyd-Warshall) Algorithm


• Algorithm for the All-Pairs Shortest-Paths Problem
• Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths problem asks to find the distances, i.e., the lengths of the shortest paths, from each vertex to all the other vertices.
• It is convenient to record the lengths of the shortest paths in an n × n matrix D called the distance matrix: the element dij in the ith row and jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex.
• The algorithm is applicable to both undirected and directed weighted graphs, provided that they do not contain a cycle of negative length. (The distance between any two vertices in such a cycle can be made arbitrarily small by repeating the cycle enough times.)
• Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices D(0), ..., D(k), ..., D(n), where D(k)[i, j] is the length of the shortest path from i to j with every intermediate vertex numbered no higher than k, computed by
D(k)[i, j] = min( D(k-1)[i, j], D(k-1)[i, k] + D(k-1)[k, j] ).
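The recurrence above can likewise be computed in place with three nested loops; infinity marks the absence of an edge. A Python sketch, with an illustrative 4-vertex digraph (not necessarily the slides’ example):

```python
INF = float('inf')  # stands for "no edge"

def floyd(W):
    """Floyd's all-pairs shortest paths for weight matrix W (INF where no
    edge exists, 0 on the diagonal). Returns the distance matrix; Theta(n^3).
    Assumes no negative-length cycles."""
    n = len(W)
    D = [row[:] for row in W]  # D(0) is the weight matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Allow vertex k as an intermediate stop if it helps.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Digraph on vertices a, b, c, d with edges
# a->c (3), b->a (2), c->b (7), c->d (1), d->a (6).
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1  ],
     [6,   INF, INF, 0  ]]
print(floyd(W))
```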

EXAMPLE:

time efficiency is Θ(n^3)

Tutorial:
1. Apply Warshall’s algorithm to find the transitive closure of the digraph defined by
the following adjacency matrix and graph:

a.
b.

2. Apply Floyd’s Algorithm to the given graph for finding the All-Pairs Shortest-Paths

a. b. c.

By:
Dr. Geeta S Hukkeri