DESIGN AND ANALYSIS OF ALGORITHMS
Unit 3
Dynamic Programming – Definition
Dynamic Programming is an optimization technique for solving complex problems by breaking
them down into smaller, overlapping subproblems. The solutions to these subproblems are stored
to avoid redundant computations, improving efficiency.
Key Properties:
1. Optimal Substructure: An optimal solution to the problem can be constructed from
optimal solutions of its subproblems.
2. Overlapping Subproblems: The problem can be divided into subproblems that are reused
multiple times.
Approaches:
• Top-Down (Memoization): A recursive approach that stores results.
• Bottom-Up (Tabulation): An iterative approach that solves subproblems first.
Example Problems:
• Fibonacci Series
• Knapsack Problem
• Shortest Path (e.g., Floyd-Warshall)
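As an illustration of the two approaches, here is a minimal Python sketch applied to the Fibonacci series:

```python
from functools import lru_cache

# Top-Down (Memoization): recursion plus a cache of already-solved subproblems.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-Up (Tabulation): solve the smallest subproblems first, iteratively.
def fib_tab(n):
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55
```

Both versions avoid the exponential blow-up of naive recursion because each subproblem fib(i) is computed only once.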
Multistage Graph – Definition
A Multistage Graph is a directed, weighted graph where vertices are divided into stages such
that edges only connect vertices of one stage to the next. It is used to model problems where
decisions are made in sequential stages.
Key Features:
1. Staged Structure: The graph is partitioned into k disjoint stages (e.g., Stage 1, Stage 2,
..., Stage k).
2. Directional Edges: Edges only go from one stage to the next (no backward edges).
3. Single Source & Single Sink: Typically has one start node (Stage 1) and one end node
(Stage k).
Applications:
• Project scheduling
• Resource allocation
• Shortest path problems in staged networks
Introduction to shortest path problem
The Shortest Path Problem is a fundamental problem in graph theory and network
optimization. The goal is to find the minimum-cost path between two nodes (or vertices) in
a weighted graph. The "cost" can represent distance, time, expense, or any other measurable
quantity.
Types of Shortest Path Problems
1. Single-Source Shortest Path (SSSP)
o Finds the shortest paths from a single source node to all other nodes in the
graph.
o Algorithms:
▪ Dijkstra’s Algorithm (non-negative weights)
▪ Bellman-Ford Algorithm (handles negative weights)
2. Single-Destination Shortest Path
o Reverse of SSSP: finds the shortest paths from all nodes to a single destination.
o Solved by reversing the graph and applying SSSP.
3. All-Pairs Shortest Path (APSP)
o Computes the shortest paths between every pair of nodes.
o Algorithms:
▪ Floyd-Warshall Algorithm (DP-based, handles negative weights)
▪ Johnson’s Algorithm (for sparse graphs)
4. Shortest Path in Unweighted Graphs
o All edges have the same weight (or no weight).
o Solved efficiently using Breadth-First Search (BFS).
Applications of Shortest Path Problems
• Navigation Systems (Google Maps, GPS)
• Network Routing (Internet data packets)
• Flight/Train Scheduling
• Robotics Path Planning
• Game AI (Finding optimal paths in games)
Multistage Graph Shortest Path
A Multistage Graph is a directed, weighted graph where nodes are divided into sequential
stages, and edges only connect nodes from one stage to the next. These graphs are used to model
problems where decisions occur in multiple stages, such as project scheduling, resource
allocation, and network routing. Finding the shortest path in a multistage graph
involves determining the minimum cost path from a source node to a destination node across all
stages. This is typically done using dynamic programming, either with a forward or backward
approach.
Key Characteristics of Multistage Graphs
1. Staged Structure
o The graph is divided into k disjoint stages (e.g., Stage 1, Stage 2, ..., Stage k).
o Each node belongs to exactly one stage.
2. Directional Edges
o Edges only go forward from one stage to the next (no backward edges).
o Example: A node in Stage 2 can only connect to nodes in Stage 3, not Stage 1.
3. Single Source & Single Sink
o Source (Stage 1): Starting node (e.g., project start).
o Sink (Stage k): Target node (e.g., project completion).
4. Edge Weights
o Each edge has a cost/weight (e.g., time, distance, expense).
Problem Statement
Given a directed weighted graph divided into k stages, find the shortest path from the first stage
(source) to the last stage (sink).
Dynamic Programming Approach
1. Define subproblems: Let cost[i] represent the minimum cost from vertex i to the sink.
2. Base case: cost[sink] = 0
3. Recurrence relation: For each vertex i, cost[i] = min(weight[i][j] + cost[j]) for all
adjacent vertices j
4. Order of computation: Process vertices from sink to source or from source to sink
Approach 1: Forward Pass (From Stage 1 to Stage k)
Forward Approach: Moves from source to sink, updating costs based on incoming edges.
1. Initialize:
o Cost of source node, Cost[S] = 0.
o Costs of other nodes = ∞.
2. For each stage, update the minimum cost to reach each node:
o Cost[v] = min(Cost[u] + weight(u, v)) for all u in the previous stage.
3. Final Cost: Cost[T] gives the shortest path cost.
Approach 2: Backward Pass (From Stage k to Stage 1)
Backward Approach: Moves from sink to source, updating costs based on outgoing edges.
1. Initialize:
o Cost of sink node, Cost[T] = 0.
o Costs of other nodes = ∞.
2. For each stage in reverse, update the minimum cost:
o Cost[u] = min(Cost[v] + weight(u, v)) for all v in the next stage.
3. Final Cost: Cost[S] gives the shortest path cost.
Advantages:
Efficient Optimization: Problems with sequential decisions can be solved using Dynamic
Programming (DP).
Reduces Complexity: Breaks large problems into smaller subproblems.
Applications:
o Project management (critical path analysis).
o Manufacturing process optimization.
o Communication network design.
Example of a Multistage Graph
Consider a 5-stage graph representing a project with different phases:
Stage 1: {S}
Stage 2: {A, B}
Stage 3: {C, D, E}
Stage 4: {F, G}
Stage 5: {T}
Edge costs (as used in the solution below): S→A = 1, S→B = 2, A→C = 3, A→D = 2, B→D = 1, B→E = 4, C→F = 2, D→F = 3, D→G = 2, E→G = 1, F→T = 4, G→T = 3.
Goal: Find the shortest path from S to T.
Solution to the Example Problem (Forward DP)
1. Stage 1: Cost[S] = 0.
2. Stage 2:
o Cost[A] = Cost[S] + 1 = 1
o Cost[B] = Cost[S] + 2 = 2
3. Stage 3:
o Cost[C] = Cost[A] + 3 = 4
o Cost[D] = min(Cost[A] + 2, Cost[B] + 1) = min(3, 3) = 3
o Cost[E] = Cost[B] + 4 = 6
4. Stage 4:
o Cost[F] = min(Cost[C] + 2, Cost[D] + 3) = min(6, 6) = 6
o Cost[G] = min(Cost[D] + 2, Cost[E] + 1) = min(5, 7) = 5
5. Stage 5:
o Cost[T] = min(Cost[F] + 4, Cost[G] + 3) = min(10, 8) = 8
Shortest Path: S → B → D → G → T (Total Cost = 8)
• Time Complexity: O(V + E), where V is the number of vertices and E is the number of
edges
• Space Complexity: O(V) for storing costs and paths
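The forward pass can be sketched in Python. The stages and edge costs below are those of the worked example above (taken from the solution steps):

```python
# Forward-pass DP on a multistage graph:
# Cost[v] = min over incoming edges (u, v) of Cost[u] + weight(u, v),
# processing the stages from source to sink.
stages = [['S'], ['A', 'B'], ['C', 'D', 'E'], ['F', 'G'], ['T']]
edges = {
    ('S', 'A'): 1, ('S', 'B'): 2,
    ('A', 'C'): 3, ('A', 'D'): 2, ('B', 'D'): 1, ('B', 'E'): 4,
    ('C', 'F'): 2, ('D', 'F'): 3, ('D', 'G'): 2, ('E', 'G'): 1,
    ('F', 'T'): 4, ('G', 'T'): 3,
}

def multistage_forward(stages, edges):
    cost = {stages[0][0]: 0}              # Cost[S] = 0
    for stage in stages[1:]:              # stages left to right
        for v in stage:
            # all predecessors u lie in earlier stages, so cost[u] is known
            cost[v] = min(cost[u] + w
                          for (u, x), w in edges.items()
                          if x == v and u in cost)
    return cost

cost = multistage_forward(stages, edges)
print(cost['T'])  # 8, the shortest S -> T cost
```

Note that two equal-cost optimal paths exist here (via A or via B to D); the DP returns the optimal cost 8 either way.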
All pairs shortest path problem
The All-Pairs Shortest Path (APSP) problem aims to find the shortest paths between every pair
of vertices in a weighted graph. One of the most efficient dynamic programming approaches to
solve this problem is the Floyd-Warshall algorithm.
Floyd-Warshall Algorithm (Dynamic Programming Approach)
Floyd-Warshall is a dynamic programming method for solving the all-pairs shortest path problem.
It iteratively updates a distance matrix, considering all possible intermediate nodes to find the
shortest paths.
Problem Statement
Given a directed graph G = (V, E) with edge weights, find the shortest path from every vertex
to every other vertex. The graph may contain negative edge weights, but it does not contain
any negative-weight cycles. The final output is a matrix representing the minimum distance from
any node to all other nodes in the graph.
Let dist[i][j] be the shortest distance from vertex i to vertex j.
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
This checks whether the path from i to j via k is shorter than the current known distance.
Algorithm Steps
1. Initialize the distance matrix:
o If i == j, then dist[i][j] = 0
o If there is an edge from i to j, then dist[i][j] = weight(i, j)
o Otherwise, dist[i][j] = ∞
2. Iterate through all vertices k as intermediate nodes.
3. For every pair (i, j), update dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
Create a distance matrix dist where dist[i][j] represents the distance between node i and
node j. Initialize dist[i][i] to 0, and dist[i][j] to the weight of the direct edge between i and j if it
exists, otherwise infinity.
For each node k (from 1 to the number of nodes):
• For each pair of nodes i and j:
• If the distance from i to j via k ( dist[i][k] + dist[k][j] ) is shorter than the current
distance dist[i][j], update dist[i][j] to the shorter distance.
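These steps (the triple loop over k, i, j) can be sketched in Python; the weight matrix below is the 4-node example that follows, with rows/columns 1..4 mapped to indices 0..3:

```python
# Floyd-Warshall: try every vertex k as an intermediate node and apply
# dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]).
INF = float('inf')

def floyd_warshall(weights):
    n = len(weights)
    dist = [row[:] for row in weights]    # copy the initial weight matrix
    for k in range(n):                    # intermediate vertex
        for i in range(n):                # source vertex
            for j in range(n):            # destination vertex
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Initial weight matrix of the 4-node example
W = [
    [0,   3,   INF, 7],
    [INF, 0,   2,   INF],
    [INF, INF, 0,   1],
    [6,   INF, INF, 0],
]
for row in floyd_warshall(W):
    print(row)
```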
Solving an all-pairs shortest path problem using dynamic programming algorithm
Consider a graph with 4 nodes and the following initial distances:
Initial Distance Matrix (Weight Matrix)
D(0) 1 2 3 4
1 0 3 ∞ 7
2 ∞ 0 2 ∞
3 ∞ ∞ 0 1
4 6 ∞ ∞ 0
Step 1: k=1(Using vertex 1 as intermediate)
Example:
• D[2][4] = min (D[2][4], D[2][1] + D[1][4]) = min (∞, ∞+7) = ∞ (No change).
• D[4][2] = min (D[4][2], D[4][1] + D[1][2]) = min (∞, 6+3) = 9.
D(1) 1 2 3 4
1 0 3 ∞ 7
2 ∞ 0 2 ∞
3 ∞ ∞ 0 1
4 6 9 ∞ 0
Step 2: k=2 (Using vertex 2 as intermediate)
• D[1][3] = min(D[1][3], D[1][2] + D[2][3]) = min (∞, 3+2) = 5
• D[4][3] = min(D[4][3], D[4][2] + D[2][3]) = min (∞, 9+2) = 11
D(2) 1 2 3 4
1 0 3 5 7
2 ∞ 0 2 ∞
3 ∞ ∞ 0 1
4 6 9 11 0
Step 3: k=3 (Using vertex 3 as intermediate)
• D[1][4] = min(D[1][4], D[1][3] + D[3][4]) = min (7,5+1) = 6
• D[2][4] = min(D[2][4], D[2][3] + D[3][4]) = min (∞,2+1) = 3
D(3) 1 2 3 4
1 0 3 5 6
2 ∞ 0 2 3
3 ∞ ∞ 0 1
4 6 9 11 0
Step 4: k=4 (Using vertex 4 as intermediate)
• D[2][1] = min (D[2][1], D[2][4] + D[4][1]) = min (∞, 3+6) = 9
• D[3][1] = min (D[3][1], D[3][4] + D[4][1]) = min (∞, 1+6) = 7
• D[3][2] = min (D[3][2], D[3][4] + D[4][2]) = min (∞, 1+9) = 10
Final Distance Matrix
D(4) 1 2 3 4
1 0 3 5 6
2 9 0 2 3
3 7 10 0 1
4 6 9 11 0
Time and Space Complexity
• Time Complexity: O(V³)
• Space Complexity: O(V²) for the distance matrix
Single-source shortest path problem
The single-source shortest path problem is to find the shortest paths from a given source vertex to
all other vertices in a weighted graph. The Bellman-Ford algorithm is a dynamic programming
solution for the SSSP problem. It works even with negative edge weights (but no negative weight
cycles).
Problem Statement
Given a graph G = (V, E) with edge weights (possibly negative), and a source vertex s, find the
shortest path from s to every other vertex.
Dynamic programming method for the single-source shortest path problem
The single-source shortest path (SSSP) problem can be solved using dynamic programming
approaches like the Bellman-Ford algorithm, which handles graphs with negative edge weights
(but no negative cycles).
Bellman-Ford Algorithm (Dynamic Programming Approach)
The Bellman-Ford algorithm uses dynamic programming by progressively relaxing edges and
building solutions to subproblems of increasing path length. It relies on the optimal
substructure property: the shortest path to a vertex using at most i edges can be computed from
the shortest paths to its neighbors using at most i−1 edges.
Subproblems Definition
Define D(i, v) as the length of the shortest path from the source s to vertex v using at most i edges.
The goal is to find D(n−1, v) for all vertices v, where n is the number of vertices in the graph.
Recurrence relation:
D(i, v) = min( D(i−1, v), min over all edges (u, v) of [ D(i−1, u) + weight(u, v) ] )
This means the shortest path to v with at most i edges is either:
• The shortest path to v with at most i−1 edges, or
• The shortest path to some neighbor u with at most i−1 edges plus the edge weight
from u to v.
This algorithm relaxes all edges repeatedly to find the shortest paths from a single source vertex
to all other vertices.
Negative Cycle Detection
After the n−1 iterations, one more iteration is performed. If any distance can still be relaxed, it
indicates a negative-weight cycle reachable from the source.
Time Complexity
The algorithm runs in O(nm) time, where n is the number of vertices and m is the number of
edges.
Example
Consider a graph with vertices and weighted edges (including some negative weights but no
negative cycles). The goal is to find the shortest paths from source S to all other vertices.
Input: V = 5, Edges = [[0, 1, 5], [1, 2, 1], [1, 3, 2], [2, 4, 1], [4, 3, -1]], src = 0
Output: [0, 5, 6, 6, 7]
Explanation:
For 0 to 1 minimum distance will be 5. By following path 0 → 1
For 0 to 2 minimum distance will be 6. By following path 0 → 1 → 2
For 0 to 3 minimum distance will be 6. By following path 0 → 1 → 2 → 4 → 3
For 0 to 4 minimum distance will be 7. By following path 0 → 1 → 2 → 4
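A Python sketch of Bellman-Ford, including the extra pass for negative-cycle detection, run on the example input:

```python
# Bellman-Ford: relax every edge n-1 times. After that, one more successful
# relaxation would indicate a negative-weight cycle reachable from the source.
def bellman_ford(n, edges, src):
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                 # shortest paths using at most i edges
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # negative-cycle detection pass
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    return dist

edges = [[0, 1, 5], [1, 2, 1], [1, 3, 2], [2, 4, 1], [4, 3, -1]]
print(bellman_ford(5, edges, 0))  # [0, 5, 6, 6, 7]
```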
Dijkstra's Algorithm for Single-Source Shortest Path
Dijkstra's algorithm is a greedy algorithm that solves the single-source shortest path problem for
a graph with non-negative edge weights.
Key Features:
1. Non-Negative Weights: Dijkstra's algorithm only works with non-negative edge weights
2. Greedy Approach: Always selects the vertex with the minimum distance
3. Priority Queue: Uses a min-heap for efficient extraction of the minimum distance vertex
4. Time Complexity: O((V+E) log V) with priority queue implementation
5. Space Complexity: O(V) for distance and visited arrays
Given:
• Graph G = (V, E)
• A source vertex s
Steps:
1. Set distance[s] = 0 (distance to source is 0)
2. Set distance[v] = ∞ for all other vertices v ≠ s
3. Mark all vertices as unvisited
4. While there are unvisited vertices:
a. Pick the unvisited vertex u with the smallest distance
b. Mark u as visited
c. For each unvisited neighbor v of u:
- If distance[u] + weight(u, v) < distance[v], then
distance[v] = distance[u] + weight(u, v)
Example: Consider a graph with source A and edges A–B = 10, A–C = 5, C–D = 2, D–B = 1 (the weights used in the trace below).
Node Initial Distance from A
A 0
B ∞
C ∞
D ∞
• Start from A: distances updated → C=5, B=10
• Visit C (closest): update D = 5 + 2 = 7
• Visit D: update B = min(10, 7+1) = 8
• Visit B
Final distances:
• A=0
• B=8
• C=5
• D=7
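A Python sketch of Dijkstra's algorithm using a min-heap; the adjacency list encodes the example's edges (A–B = 10, A–C = 5, C–D = 2, D–B = 1, treated as undirected, as inferred from the trace above):

```python
import heapq

# Dijkstra with a priority queue (min-heap) over (distance, vertex) pairs.
graph = {
    'A': [('B', 10), ('C', 5)],
    'B': [('A', 10), ('D', 1)],
    'C': [('A', 5), ('D', 2)],
    'D': [('C', 2), ('B', 1)],
}

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)    # unvisited vertex with smallest distance
        if u in visited:
            continue                  # stale heap entry, already finalized
        visited.add(u)
        for v, w in graph[u]:         # relax edges out of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 8, 'C': 5, 'D': 7}
```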
Dynamic programming method for String editing problem
The String Editing Problem (also called Edit Distance or Levenshtein Distance) is a classic
dynamic programming problem. The goal is to find the minimum number of operations
(insertions, deletions, substitutions) needed to convert one string into another.
Operations:
1. Insert a character
2. Delete a character
3. Replace a character
Problem Statement:
Given two strings A (length m) and B (length n), find the minimum number of operations
required to convert string A into string B.
Dynamic Programming Approach:
The standard dynamic programming approach uses a 2D table (dp) where dp[i][j] represents the
minimum number of operations required to convert the first i characters of string A to the first j
characters of string B.
Algorithm:
If A[i-1] == B[j-1]:
dp[i][j] = dp[i-1][j-1] // No operation needed
Else:
dp[i][j] = 1 + min(
dp[i-1][j], // Delete from A
dp[i][j-1], // Insert into A
dp[i-1][j-1] // Replace in A
)
Base Cases Initialization:
• dp[0][j] = j → Convert empty A to the first j characters of B (j insertions)
• dp[i][0] = i → Convert the first i characters of A to empty B (i deletions)
Algorithm Steps
• Initialize dp[0][j] and dp[i][0].
• Fill the table using the recurrence.
• The answer is in dp[m][n].
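These steps can be sketched in Python:

```python
# Edit distance with an (m+1) x (n+1) DP table. dp[i][j] is the minimum
# number of operations to convert the first i characters of A into the
# first j characters of B.
def edit_distance(A, B):
    m, n = len(A), len(B)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j                      # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]           # characters match
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete from A
                                   dp[i][j - 1],      # insert into A
                                   dp[i - 1][j - 1])  # replace in A
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```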
Time and Space Complexity:
Both time and space complexity are O(mn), where m and n are the lengths of the two strings.
Introduction to 0/1 Knapsack Problem
Given a set of items, each with a weight and a value, and a knapsack with a maximum weight
capacity, determine the combination of items that maximizes the total value within the capacity
constraint.
The task is to place the items into the bag so that the total profit associated with them is maximized.
Note: The constraint here is that we can either put an item completely into the bag or not put it at
all.
Dynamic programming algorithm for solving 0/1 Knapsack Problem
Problem Statement:
Given:
• n items, each with:
o weight w[i]
o value v[i]
• A knapsack with maximum capacity M
Goal:
Select a subset of items so that:
• Total weight ≤ M
• Total value is maximized
• Each item can be chosen only once (i.e., 0 or 1 times)
The "0/1" aspect means you can either take an entire item or leave it, you cannot take fractions of
items.
Objective: Maximize the total value of the items in the knapsack without exceeding its capacity.
Dynamic Programming:
A common approach to solving this problem is to use dynamic programming, which breaks down
the problem into smaller, overlapping subproblems and stores their solutions to avoid redundant
computations.
Dynamic Programming Table:
Create a table dp of size (n+1) x (M+1), where n is the number of items and M is the knapsack's
capacity.
• Rows - items allowed to use (0 up to n).
• Columns - the knapsack capacity from 0 up to M.
dp[i][w] will store the maximum value that can be achieved using the first i items with a knapsack
capacity of w.
Initialization:
The first row and column of the table are initialized to 0. This represents the case with no items or
no capacity.
Row i = 0 (no items)
Column w = 0 (zero capacity)
Iteration:
Iterate through the table, filling in each cell dp[i][w] based on the following logic:
If the current item's weight (wt[i-1]) is greater than the current capacity (w): The maximum value
is the same as the maximum value achievable with the previous i-1 items and the same capacity
w. So, dp[i][w] = dp[i-1][w].
If the current item's weight is less than or equal to the current capacity, the maximum value is the
greater of the two options:
• Not including the current item: dp[i-1][w].
• Including the current item: val[i-1] (the value of the current item) + dp[i-1][w-wt[i-1]] (the
maximum value achievable with the remaining capacity and previous items).
Result:
The maximum value that can be achieved with all n items and capacity M is stored in dp[n][M].
Example:1
Let's consider items with weights {2, 3, 4, 5} and values (Profit) {1, 2, 5, 6}, and the knapsack
capacity is 8.
Capacity columns: 0,1,2,3,4,5,6,7,8.
Pi  Wi  i    0  1  2  3  4  5  6  7  8
–   –   0    0  0  0  0  0  0  0  0  0
1   2   1    0  0  1  1  1  1  1  1  1
2   3   2    0  0  1  2  2  3  3  3  3
5   4   3    0  0  1  2  5  5  6  7  7
6   5   4    0  0  1  2  5  6  6  7  8
V[i, w] = max{V[i-1, w], V[i-1, w-w[i]] + P[i]}
V[4,5] = max{V[3,5], V[3,5-5]+6} = max{5, 6} = 6
V[4,6] = max{V[3,6], V[3,6-5]+6} = max{6, 6} = 6
V[4,7] = max{V[3,7], V[3,7-5]+6} = max{7, 7} = 7
V[4,8] = max{V[3,8], V[3,8-5]+6} = max{7, 8} = 8
• The optimal value is at dp[n][M] = dp[4][8] = 8.
• Solution: [x1, x2, x3, x4] = [0, 1, 0, 1]
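A Python sketch of the table-filling and traceback, run on this example's data:

```python
# 0/1 knapsack: dp[i][w] = maximum value using the first i items
# within capacity w.
def knapsack(weights, values, M):
    n = len(weights)
    dp = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(M + 1):
            dp[i][w] = dp[i - 1][w]                  # option 1: skip item i
            if weights[i - 1] <= w:                  # option 2: take item i
                dp[i][w] = max(dp[i][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
    # Traceback: item i was taken exactly when dp[i][w] differs from dp[i-1][w]
    take, w = [0] * n, M
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            take[i - 1] = 1
            w -= weights[i - 1]
    return dp[n][M], take

print(knapsack([2, 3, 4, 5], [1, 2, 5, 6], 8))  # (8, [0, 1, 0, 1])
```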
Introduction to Graphs - Traversals
Graph:
• A graph is a data structure used to represent connections (relationships) between
objects.
• It consists of:
o Vertices (nodes): The objects.
o Edges (links): Connections between pairs of vertices.
Types of Graphs
• Directed graph (digraph): Edges have direction (A → B).
• Undirected graph: Edges do not have direction (A — B).
• Weighted graph: Edges have weights (distances, costs).
• Unweighted graph: Edges have no weights.
Graph Traversal
Graph traversal is the process of systematically visiting all the vertices (and often edges) of a
graph exactly once, following some specific order.
Why Do We Traverse a Graph?
• To find whether a vertex is reachable from another.
• To discover connected components.
• To find shortest paths (in unweighted graphs).
• To detect cycles.
• To perform operations like topological sorting, spanning tree generation, etc.
Applications of graph traversal
• To visit all vertices or search for a specific value.
• Used in:
o Finding paths
o Checking connectivity
o Solving puzzles (mazes)
o Web crawling
Graph Traversal Methods
Depth-First Search (DFS)
• Explore as deep as possible along each branch before backtracking.
• Implemented using:
o Stack (LIFO: Last In, First Out)
Process
• Start from a source node.
• Visit it and mark as visited.
• For each unvisited neighbor:
• Recursively perform DFS (or use a stack).
Applications:
• Topological sorting
• Solving puzzles
• Connected components
Time Complexity
• O(V + E)
Breadth-First Search (BFS)
• Explore the graph level by level, visiting all immediate neighbors of a node before going
deeper.
• Implemented using:
o Queue (FIFO: First In, First Out)
Process
1. Start from a source node.
2. Visit it and mark it as visited.
3. Put it into a queue.
4. While the queue is not empty:
o Remove the front node from the queue.
o Visit all its unvisited neighbors and add them to the queue.
Applications:
• Shortest path in unweighted graphs
• Social networking (finding people within "k" connections)
Time Complexity
• O(V + E) for a graph with V vertices and E edges.
DFS and BFS Traversal Example:
Consider the following graph, where A connects to B and C, B connects to D and E, and C connects to F:
      A
     / \
    B   C
   / \   \
  D   E   F
Breadth-First Search (BFS) (using a Queue)
Start from A
Step Queue Visited Action
1 A — Add A to queue
2 B, C A Visit A, enqueue neighbors B, C
3 C, D, E A, B Visit B, enqueue D, E
4 D, E, F A, B, C Visit C, enqueue F
5 E, F A, B, C, D Visit D (no new neighbors)
6 F A, B, C, D, E Visit E (no new neighbors)
7 — A, B, C, D, E, F Visit F (no new neighbors)
Final BFS order
A→B→C→D→E→F
Depth-First Search (DFS) (using a Stack or recursion)
Start from A
Step Stack Visited Action
1 A — Push A to stack
2 C, B A Pop A, push B & C
3 C, E, D A, B Pop B, push D & E
4 C, E A, B, D Pop D (no new neighbors)
5 C A, B, D, E Pop E (no new neighbors)
6 F A, B, D, E, C Pop C, push F
7 — A, B, D, E, C, F Pop F (no new neighbors)
Final DFS order
A→B→D→E→C→F
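Both traversals can be sketched in Python on the same example graph (A connects to B and C, B to D and E, C to F):

```python
from collections import deque

# Adjacency list of the example graph
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])                # FIFO queue
    while queue:
        u = queue.popleft()               # visit level by level
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

def dfs(graph, u, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(u)
    order.append(u)                       # go deep before backtracking
    for v in graph[u]:
        if v not in visited:
            dfs(graph, v, visited, order)
    return order

print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'E', 'C', 'F']
```

The recursive DFS here visits neighbors in adjacency-list order, which reproduces the traversal order worked out in the tables above.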