daa-unit-3
Dynamic Programming (DP) is a powerful technique used in computer science and mathematics to solve
problems by breaking them down into smaller subproblems, solving each subproblem only once, and
storing the results to avoid solving the same problem again.
DP applies when the problem shows optimal substructure: the solution of a bigger problem depends on the solutions of its smaller subparts.
Real-Life Analogy
Imagine you're climbing stairs. Each time, you can either take 1 step or 2 steps. You want to know: how many
different ways can you reach the top?
If you solve this problem by trying every possibility, it will take a long time.
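The staircase question above can be answered with a small DP table instead of trying every possibility; a minimal sketch, assuming n steps and moves of 1 or 2:

```python
def count_ways(n):
    # ways[i] = number of distinct ways to reach step i
    if n <= 1:
        return 1
    ways = [0] * (n + 1)
    ways[0], ways[1] = 1, 1
    for i in range(2, n + 1):
        # reach step i from step i-1 (one step) or step i-2 (two steps)
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]

print(count_ways(4))  # 5 ways: 1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2
```

Each answer is computed once and stored, so the whole table is filled in linear time.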
Key Concepts in DP
1. Overlapping Subproblems
The same smaller subproblems come up again and again while solving the main problem.
2. Optimal Substructure
You can build the solution to the main problem using solutions to smaller problems.
Examples of DP Problems
1. Fibonacci Numbers
Example: dp[0] = 0; dp[1] = 1; dp[n] = dp[n-1] + dp[n-2]
Steps In DP
1. Break the Problem into Smaller Pieces
First, we break down the big problem into smaller, simpler subproblems. This makes it easier to solve
each small part.
2. Solve the Small Problems
We then find the best or optimal solution to each of these smaller problems.
3. Save the Answers
After solving each small problem, we store the answers. This is called memoization. It helps to keep
track of solutions, so we don’t need to redo the same work.
4. Reuse the Stored Answers
Whenever we encounter the same subproblem again, we simply reuse the answer that we stored earlier.
This saves time because we don’t calculate it again.
5. Put Everything Together
Finally, we combine the answers to all the small problems to find the solution to the original, bigger
problem.
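The five steps above map directly onto a memoized Fibonacci function; a minimal sketch:

```python
def fib(n, memo=None):
    # Step 1: break fib(n) into the smaller pieces fib(n-1) and fib(n-2)
    if memo is None:
        memo = {}
    if n in memo:              # Step 4: reuse a stored answer
        return memo[n]
    if n <= 1:                 # Step 2: solve the smallest problems directly
        return n
    # Step 5: combine the sub-answers
    result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result           # Step 3: save the answer (memoization)
    return result

print(fib(10))  # 55
```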
What is a Multistage Graph?
A Multistage Graph is a directed graph in which nodes are divided into stages (groups). You move from one
stage to the next by following edges, and you want to find the shortest path (or longest path) from the first
node (source) in the first stage to the last node (destination) in the final stage.
It is often solved using Dynamic Programming to avoid recalculating the same paths multiple times.
Example

Consider a multistage graph with source S, intermediate vertices A to F, and destination T, with edge costs:
S→A = 1, S→B = 2, S→C = 5
A→D = 4, A→E = 11, B→D = 9, B→E = 5, B→F = 16, C→F = 2
D→T = 18, E→T = 13, F→T = 2

Backward approach (working from T towards S):
d(D, T) = 18, d(E, T) = 13, d(F, T) = 2
d(A, T) = min{4 + d(D, T), 11 + d(E, T)} = min{4+18, 11+13} = 22
d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)} = min{27, 18, 18} = 18
d(C, T) = 2 + d(F, T) = 4
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)} = min{23, 20, 9} = 9

Forward approach (working from S towards T):
d(S, A) = 1, d(S, B) = 2, d(S, C) = 5
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)} = min{1+4, 2+9} = 5
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)} = min{1+11, 2+5} = 7
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)} = min{2+16, 5+2} = 7
d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)} = min{23, 20, 9} = 9

Both approaches give the same result: the shortest path is (S, C, F, T) with cost 5 + 2 + 2 = 9.
# Example Graph
# Each index represents a node, and inside we have (target, cost)
# A simple shortest_path implementation (assuming nodes are numbered in stage order):
def shortest_path(graph, stages, source, destination):
    n = len(graph)
    dist = [float('inf')] * n
    dist[destination] = 0
    # work backwards from the destination stage towards the source
    for u in range(n - 1, -1, -1):
        for v, cost in graph[u]:
            dist[u] = min(dist[u], cost + dist[v])
    return dist[source]

graph = [
    [(1, 2), (2, 1)],  # Node 0 -> Node 1 (cost 2), Node 2 (cost 1)
    [(3, 3)],          # Node 1 -> Node 3 (cost 3)
    [(3, 1)],          # Node 2 -> Node 3 (cost 1)
    []                 # Node 3 is the destination
]
source = 0
destination = 3
stages = 4
print(shortest_path(graph, stages, source, destination))  # 2, via 0 -> 2 -> 3
Transitive Closure of a graph tells us which vertices are reachable from which other vertices.
In simple words, it helps us find all possible paths, not just direct ones.
Easy Example
A→B
B→C
Now, even though there is no direct edge from A to C, you can still go from A → B → C.
So, A can reach C.
This is what transitive closure helps to find — all such indirect paths.
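Finding all such indirect paths can be sketched with Warshall's algorithm; the 3×3 adjacency matrix below is a made-up encoding of the A → B → C example:

```python
def transitive_closure(adj):
    # adj[i][j] = 1 if there is a direct edge from i to j
    n = len(adj)
    reach = [row[:] for row in adj]
    for i in range(n):
        reach[i][i] = 1  # every vertex reaches itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j if it already does, or via the intermediate k
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# A=0, B=1, C=2 with edges A->B and B->C
closure = transitive_closure([[0, 1, 0],
                              [0, 0, 1],
                              [0, 0, 0]])
print(closure[0][2])  # 1: A can reach C indirectly
```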
The Binomial Coefficient, written as C(n, k) or "n choose k", tells you:
How many ways you can choose k items from n total items.
It’s used in combinations, probability, Pascal’s Triangle, and more.
Formula:
C(n, k) = n! / (k! (n−k)!)
C(n, 0) = 1
C(n, n) = 1
The DP table is built using:
o dp[i][0] = 1
o dp[i][i] = 1
o dp[i][j] = dp[i-1][j-1] + dp[i-1][j] (Pascal's rule)

i\j   0   1   2
0     1
1     1   1
2     1   2   1
3     1   3   3
4     1   4   6
5     1   5   10
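The table can be filled with a small DP routine; a minimal sketch:

```python
def binomial(n, k):
    # dp[i][j] holds C(i, j), built row by row as in Pascal's Triangle
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                dp[i][j] = 1  # base cases C(i,0) = C(i,i) = 1
            else:
                dp[i][j] = dp[i - 1][j - 1] + dp[i - 1][j]
    return dp[n][k]

print(binomial(5, 2))  # 10
```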
The Floyd-Warshall algorithm finds the shortest paths between every pair of vertices. This means it tells us the shortest distance from every node to every other node, even if the path goes through other nodes in between.
Algorithm
Step 1 − Construct an adjacency matrix A with all the costs of edges present in the graph. If there is no path
between two vertices, mark the value as ∞.
Step 2 − Derive another adjacency matrix A1 from A keeping the first row and first column of the original
adjacency matrix intact in A1. And for the remaining values, say A1[i,j], if A[i,j]>A[i,k]+A[k,j] then
replace A1[i,j] with A[i,k]+A[k,j]. Otherwise, do not change the values. Here, in this step, k = 1 (first vertex
acting as pivot).
Step 3 − Repeat Step 2 for all the vertices in the graph by changing the k value for every pivot vertex until the
final matrix is achieved.
Step 4 − The final adjacency matrix obtained is the final solution with all the shortest paths
Pseudocode
# n is the number of vertices in the graph
# graph[i][j] holds the weight of the edge from node i to node j
# If there is no direct edge between i and j, use a large number (like float('inf'))

def floyd_warshall(graph):
    n = len(graph)  # number of nodes
    # Step 1: Initialize distance matrix
    dist = [[graph[i][j] for j in range(n)] for i in range(n)]
    # Step 2: Loop for each possible intermediate node
    for k in range(n):
        # Step 3: Loop for each pair (i, j)
        for i in range(n):
            for j in range(n):
                # Step 4: Check if path through k is shorter
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Step 5: Final result in dist matrix
    return dist
Example
Consider the following directed weighted graph G = {V, E}. Find the shortest paths between all the vertices of
the graphs using the Floyd-Warshall algorithm.
Analysis
From the pseudocode above, the Floyd-Warshall algorithm operates using three for loops to find the shortest
distance between all pairs of vertices within a graph. Therefore, the time complexity of the Floyd-Warshall
algorithm is O(n³), where n is the number of vertices in the graph. The space complexity of the algorithm
is O(n²).
Advantages
Simple to implement, finds the shortest paths between all pairs of vertices in a single run, and works even when some edge weights are negative (as long as there is no negative cycle).
Limitations
Slow for very large graphs because of its cubic time complexity.
As we know, in a binary search tree the nodes in the left subtree have smaller values than the root node, and the nodes in the right subtree have larger values than the root node.

We know the key value of each node in the tree, and we also know the frequency of each node, i.e., how often it is searched. The frequency and key value together determine the overall cost of searching a node. The cost of searching is a very important factor in various applications, so the overall cost of searching should be as low as possible. Searching a node takes longer in an arbitrary BST than in a balanced binary search tree, because a balanced tree contains fewer levels. One way to reduce the cost of a binary search tree is to construct an optimal binary search tree.

Let's understand through an example. Suppose the keys are 10, 20, 30, 40, 50, 60, 70. In the tree built from these keys, all the nodes in the left subtree are smaller than the value of the root node, and all the nodes in the right subtree are larger than the value of the root node. The maximum time required to search a node equals the height of the tree, which is about log n when the tree is balanced.
The above trees have different costs. The tree with the lowest cost would be considered the optimal binary search tree; the tree with cost 17 is the lowest, so it would be considered as the optimal binary search tree.

Dynamic Approach

Consider the below table, which contains the keys and frequencies.

When j − i = 1 (for example, j = 4 and i = 3), we calculate the cost considering only the jth key:
o The cost of c[0,1] is 4 (the key is 10, and the cost corresponding to key 10 is 4).
o The cost of c[1,2] is 2 (the key is 20, and the cost corresponding to key 20 is 2).
o The cost of c[2,3] is 6 (the key is 30, and the cost corresponding to key 30 is 6).

When i = 2 and j = 4, we consider the keys at positions 3 and 4, i.e., 30 and 40. There are two possible trees that can be made out of these two keys. In the first binary tree (30 as the root, 40 as its right child), the cost would be: 1*6 + 2*3 = 12.

For the keys 20 and 30 there are also two possible trees: in one, 20 is the root node and 30 is its right child; in the other, 30 is the root and 20 is its left child. The minimum cost is 10; therefore, c[1,3] = 10.

For the skewed tree in which 10 is the root node, 20 is the right child of node 10, and 30 is the right child of node 20, the cost would be: 1*4 + 2*2 + 3*6 = 26.

For c[1,4], the minimum value over the possible roots is 5; therefore, c[1,4] = 5 + 11 = 16 (11 is the sum of the frequencies of keys 20, 30, and 40).

Finally, for c[0,4] each key is tried as the root (15 is the sum of all frequencies):
o root 20: min{4 + 12} + 15 = 16 + 15 = 31
o root 30: min{8 + 3} + 15 = 26
o root 40: min{20 + 0} + 15 = 35
The minimum is 26, so c[0,4] = 26.
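The cost computation can be sketched as follows, assuming the frequencies {4, 2, 6, 3} used in the example:

```python
def optimal_bst_cost(freq):
    # c[i][j] = minimum search cost of an optimal BST built
    # from keys i+1 .. j (0 <= i <= j <= n)
    n = len(freq)
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):          # size of the key range
        for i in range(n - length + 1):
            j = i + length
            total = sum(freq[i:j])          # every key in the range adds one level
            # try each key k in (i, j] as the root of this subtree
            c[i][j] = min(c[i][k - 1] + c[k][j] for k in range(i + 1, j + 1)) + total
    return c[0][n]

print(optimal_bst_cost([4, 2, 6, 3]))  # 26
```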
Where Is It Used?
Search engines
Dictionary lookups
0/1 Knapsack Problem (Dynamic Programming)

In the 0/1 knapsack problem, we must fill the knapsack in such a way that the total value produces a maximum profit. Each item is either taken whole or left behind (hence 0/1).

Consider the instance with knapsack capacity W = 8 kg and items written with weights in ascending order and profits according to their weights:

wi = {3, 4, 5, 6}
pi = {2, 3, 4, 1}

First, we create a matrix M with rows i = 0 to 4 (items) and columns W = 0 to 8 (capacities). Here we have not taken the weight 8 directly; the problem is divided into sub-problems with capacities 0, 1, 2, 3, 4, 5, 6, 7, 8. The solution of each sub-problem is saved in a cell, and the answer to the whole problem is stored in the final cell. Set the first row and first column to 0.

Each remaining cell is filled with the rule:

M[i][j] = M[i-1][j]                              if wi > j
M[i][j] = max(M[i-1][j], pi + M[i-1][j - wi])    otherwise

A few representative cells:
o When i = 1 and W = 1 or 2: w1 = 3 is larger than the capacity, so the item cannot be filled in the knapsack and M[1][1] = M[1][2] = 0.
o When i = 1 and W = 3 to 8: the item of weight 3 fits, so we put its profit 2 in those cells: M[1][W] = 2.
o When i = 2 and W = 7: the items of weight 3 and 4 both fit together, so M[2][7] = 2 + 3 = 5.
o When i = 3 and W = 5: the item of weight 5 (profit 4) beats M[2][5] = 3, so M[3][5] = 4.
o When i = 3 and W = 8: taking the item of weight 5 leaves capacity 3 for the item of weight 3, so M[3][8] = 4 + 2 = 6.
o When i = 4 and W = 1 to 5: the new item of weight 6 does not fit, so row 4 copies row 3 there; for W = 6 to 8, its profit of 1 never beats the row-3 value.

The completed matrix:

      0  1  2  3  4  5  6  7  8
  0   0  0  0  0  0  0  0  0  0
  1   0  0  0  2  2  2  2  2  2
  2   0  0  0  2  3  3  3  5  5
  3   0  0  0  2  3  4  4  5  6
  4   0  0  0  2  3  4  4  5  6

The answer is M[4][8] = 6. To find which items were selected, start at M[4][8] and shift the pointer upwards whenever the value equals the cell in the row above (the item was not taken):
o M[4][8] = M[3][8], so item 4 is not selected.
o M[3][8] ≠ M[2][8], so item 3 (weight 5) is selected; the remaining capacity becomes 8 − 5 = 3 and the remaining profit becomes 6 − 4 = 2.
o M[2][3] = M[1][3], so item 2 is not selected.
o M[1][3] ≠ M[0][3], so item 1 (weight 3) is selected.

Hence X = {1, 0, 1, 0}: the items of weight 3 and 5 are selected, giving the maximum profit 2 + 4 = 6.
Time Complexity:
O(n × W)
Where n is the number of items and W is the maximum capacity of the knapsack.
This is because we use a dynamic programming table with n+1 rows and W+1 columns.
Space Complexity:
O(n × W), for the (n + 1) × (W + 1) dynamic programming table.
Advantages:
o Efficient: Avoids solving the same subproblems repeatedly by storing previous results.
o Guaranteed optimality: Always finds the best possible solution within the given constraints.
o Easy to implement: Once the concept is understood, it's straightforward to implement in code.
o Useful for constrained problems: Works well when items must be selected under strict size or cost limits.

Applications:
o Project scheduling: Selecting a set of projects to complete within a time or cost constraint.
o Cargo loading: Packing a truck with items to maximize value without exceeding weight.
o Computer games: Deciding which items a character can carry in a game based on capacity.
o Cryptography (historical): Was once used in cryptographic algorithms like Merkle–Hellman knapsack.
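The table-filling rule can be sketched as follows, using the weights, profits, and capacity from the example:

```python
def knapsack(weights, profits, capacity):
    # M[i][j] = best profit using the first i items with capacity j
    n = len(weights)
    M = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for j in range(capacity + 1):
            if w > j:
                M[i][j] = M[i - 1][j]              # item cannot fit
            else:
                M[i][j] = max(M[i - 1][j],         # skip the item
                              p + M[i - 1][j - w]) # take the item
    return M[n][capacity]

print(knapsack([3, 4, 5, 6], [2, 3, 4, 1], 8))  # 6
```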
Bellman-Ford Algorithm

Step 1: Start
Set the distance of the source to 0 and the distance to all other vertices as ∞ (we don't know them yet).

Step 2: Relax Edges
Repeat this process V−1 times (V = number of nodes):
o For each edge in the graph:

Relaxing means:
1. If (d(u) + c(u, v) < d(v))
2. d(v) = d(u) + c(u, v)

As we can observe in the above graph, some of the weights are negative. The above graph contains 6 vertices, so we will relax all the edges 5 times. The loop will iterate 5 times to get the correct answer. If the loop is iterated more than 5 times, the answer will still be the same, i.e., there would be no change in the distances between the vertices.

To find the shortest paths in the above graph, the first step is to note down all the edges:

(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)

Since the graph has six vertices, there will be five iterations.

First iteration

Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula: d(u) = 0, d(v) = ∞, c(u, v) = 6. Since (0 + 6) is less than ∞, update d(v) = 0 + 6 = 6. Therefore, the distance of vertex B is 6.

Consider the edge (A, C): d(u) = 0, d(v) = ∞, c(u, v) = 4. Since (0 + 4) is less than ∞, update d(v) = 0 + 4 = 4. Therefore, the distance of vertex C is 4.

Consider the edge (A, D): d(u) = 0, d(v) = ∞, c(u, v) = 5. Since (0 + 5) is less than ∞, update d(v) = 0 + 5 = 5. Therefore, the distance of vertex D is 5.

Consider the edge (B, E): d(u) = 6, d(v) = ∞, c(u, v) = −1. Since (6 − 1) is less than ∞, update d(v) = 6 − 1 = 5. Therefore, the distance of vertex E is 5.

Consider the edge (C, E): d(u) = 4, d(v) = 5, c(u, v) = 3. Since (4 + 3) is greater than 5, there is no update. The value at vertex E stays 5.

Consider the edge (D, C): d(u) = 5, d(v) = 4, c(u, v) = −2. Since (5 − 2) is less than 4, update d(v) = 5 − 2 = 3. Therefore, the distance of vertex C is 3.

Consider the edge (D, F): d(u) = 5, d(v) = ∞, c(u, v) = −1. Update d(v) = 5 − 1 = 4. Therefore, the distance of vertex F is 4.

Consider the edge (E, F): d(u) = 5, d(v) = 4, c(u, v) = 3. Since (5 + 3) is greater than 4, there is no update.

Consider the edge (C, B): d(u) = 3, d(v) = 6, c(u, v) = −2. Since (3 − 2) is less than 6, update d(v) = 3 − 2 = 1. Therefore, the distance of vertex B is 1.

Now the first iteration is completed. We move to the second iteration.

Second iteration

In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is greater than 1, there is no update in vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3, there is no update in vertex C.
The next edge is (A, D). Since (0 + 5) equals 5, there is no update in vertex D.
The next edge is (B, E). Since (1 − 1) equals 0, which is less than 5, update d(E) = 0.
The next edge is (C, E). Since (3 + 3) equals 6, which is greater than 0, there is no update in vertex E.
The next edge is (D, C). Since (5 − 2) equals 3, there is no update in vertex C.
The next edge is (D, F). Since (5 − 1) equals 4, there is no update in vertex F.
The next edge is (E, F). Since (0 + 3) equals 3, which is less than 4, update d(F) = 3.
The next edge is (C, B). Since (3 − 2) equals 1, there is no update in vertex B.

In the remaining iterations we perform the same steps as in the previous iterations, and we observe that there is no further update in the distances of the vertices.

The following are the distances of the vertices:
A: 0
B: 1
C: 3
D: 5
E: 0
F: 3
o The Bellman-Ford algorithm does not produce a correct answer if the sum of the edges of a cycle is negative. Let's understand this property through an example. Consider the below graph.

Consider the edge (1, 2). Denote vertex '1' as 'u' and vertex '2' as 'v'. Now use the relaxing formula: d(u) = 0, d(v) = ∞, c(u, v) = 4. Since (0 + 4) is less than ∞, update d(v) = 0 + 4 = 4. Therefore, the distance of vertex 2 is 4.

Consider the edge (3, 2). Denote vertex '3' as 'u' and vertex '2' as 'v', with c(u, v) = 7.

Second iteration

Now, again we check all the edges. The first edge is (1, 3). Since (0 + 5) equals 5, which is greater than −4, there is no update in vertex 3.
The next edge is (1, 2). Since (0 + 4) equals 4, there is no update in vertex 2.
The next edge is (3, 2). Since (−4 + 7) equals 3, which is less than 4, update:
d(v) = d(u) + c(u, v) = −4 + 7 = 3

Because the cycle in this graph has a negative total weight, the distances keep decreasing in every further iteration, and the algorithm never settles on a correct answer.
Time Complexity
O(V * E)
This is because we relax all edges V - 1 times, and each relaxation takes O(1) time.
Advantages
Works even when some edge weights are negative, unlike Dijkstra's algorithm.
Limitations
Does extra work if you only need the shortest path between two fixed points, since it computes distances from the source to all vertices.
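The relaxation procedure can be sketched as follows; the edge list encodes the six-vertex example from the walkthrough:

```python
def bellman_ford(vertices, edges, source):
    # edges: list of (u, v, cost); relax every edge V-1 times
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, c in edges:
            if dist[u] + c < dist[v]:   # the relaxing condition
                dist[v] = dist[u] + c
    return dist

edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
         ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1), ('E', 'F', 3),
         ('C', 'B', -2)]
print(bellman_ford('ABCDEF', edges, 'A'))
# {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}
```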
A salesperson needs to visit a number of cities exactly once and then return to the starting city. The goal is to
find the shortest possible route that visits each city once and comes back to where they started.
It models real-world problems like delivery routes, circuit design, and route planning.
3. For every possible combination of visited cities, calculate the minimum cost to reach a city.
4. Try to go from the current city to a new unvisited city, and update the cost if the new path is cheaper.
5. Repeat this process for all combinations of visited cities and destination cities.
6. After all cities are visited, add the cost to return back to the starting city.
7. The final answer is the minimum total cost to complete the full round trip.
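The steps above correspond to the Held-Karp dynamic-programming formulation; a minimal sketch on a made-up 4-city cost matrix:

```python
from itertools import combinations

def tsp(cost):
    # dp[(S, j)] = min cost to start at city 0, visit exactly the set S, end at j
    n = len(cost)
    dp = {(1 << j, j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)   # the set S without the final city j
                dp[(S, j)] = min(dp[(prev, k)] + cost[k][j]
                                 for k in subset if k != j)
    full = sum(1 << j for j in range(1, n))
    # close the tour by returning to the starting city 0
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp(cost))  # 80, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```

The bitmask S records which cities have been visited, so each "combination of visited cities" from the steps above is solved exactly once.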
We try a solution one step at a time; if a partial solution cannot lead to a correct answer, we undo the last step (backtrack) and try another option.
This helps us explore all possible ways of placing queens and stop when we find a correct solution.
Step-by-step process:
1. Start with the first row.
2. Try to place a queen in each column of that row, one at a time.
3. For each column, check if the queen is safe (not attacked by any previous queen).
4. If the position is safe, place the queen and move on to the next row.
5. Repeat the same process for every row.
6. If you reach a row where no position is safe, go back to the previous row and move the queen to a new column.
7. Keep repeating this process until all 4 queens are placed safely.
Visual Example for 4-Queens
In the solution shown, no two queens attack each other, so it is a correct solution.
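The process can be sketched as a recursive backtracking function; the board size n is a parameter, and n = 4 reproduces the 4-queen example:

```python
def solve_n_queens(n):
    cols = []  # cols[r] = column of the queen placed in row r

    def safe(row, col):
        # attacked if same column or same diagonal as an earlier queen
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:                 # all queens placed safely
            return True
        for col in range(n):
            if safe(row, col):
                cols.append(col)     # try this column
                if place(row + 1):
                    return True
                cols.pop()           # backtrack and try the next column
        return False

    return cols if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]: one valid placement
```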
In the sum of subsets problem, there is a given set with some non-negative integer elements. And another sum
value is also provided, our task is to find all possible subsets of the given set whose sum is the same as the
given sum value.
The subset sum problem is to find subsets of elements that are selected from a given set whose sum adds up to a given number.

For example, with A = {5, 7, 10, 12, 15, 18, 20} and target sum 35, the subsets adding up to 35 are:
{15, 20}, {18, 7, 10}, {5, 10, 20}, and {18, 12, 5}
If the current sum equals the target, it’s a valid solution — save it!
If the sum becomes greater than target, stop going forward on this path.
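Both pruning rules can be sketched as a backtracking search, using the example set and target from above:

```python
def subset_sums(nums, target):
    solutions = []

    def backtrack(start, current, total):
        if total == target:           # valid solution: save it
            solutions.append(current[:])
            return
        if total > target:            # overshoot: stop going forward on this path
            return
        for i in range(start, len(nums)):
            current.append(nums[i])
            backtrack(i + 1, current, total + nums[i])
            current.pop()             # backtrack
    backtrack(0, [], 0)
    return solutions

print(subset_sums([5, 7, 10, 12, 15, 18, 20], 35))
# [[5, 10, 20], [5, 12, 18], [7, 10, 18], [15, 20]]
```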
Coloring maps (no two neighboring countries should have the same color)
Scheduling exams (no two connected subjects can have exams at the same time)
Problem Statement
You are given:
Backtracking Approach
Steps:
1. Start with the first vertex.
2. Try assigning each of the m colors to it, one at a time.
   o Check if it is safe to assign this color (i.e., none of its adjacent vertices have the same color).
3. If it is safe, move to the next vertex; if no color works, backtrack to the previous vertex and try a different color.
4. Continue until either every vertex is colored (a solution is found):
   o Or all options are tried and no valid coloring is found (no solution).
Applications :
1. Map Coloring
2. Timetable Scheduling
3. Register Allocation (Compilers)
6. Job Scheduling
Time Complexity:
Worst-case: O(m^V)
Where:
o V = number of vertices
o m = number of colors
o This is because each vertex can be assigned any of the m colors.
Space Complexity:
o O(V), for the color array and the recursion stack.
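The approach can be sketched on a small made-up graph (adjacency matrix, m colors):

```python
def graph_coloring(adj, m):
    n = len(adj)
    color = [0] * n  # 0 means uncolored; colors are 1..m

    def safe(v, c):
        # no adjacent vertex may already have color c
        return all(not (adj[v][u] and color[u] == c) for u in range(n))

    def assign(v):
        if v == n:                # every vertex colored
            return True
        for c in range(1, m + 1):
            if safe(v, c):
                color[v] = c
                if assign(v + 1):
                    return True
                color[v] = 0      # backtrack and try the next color
        return False

    return color if assign(0) else None

# a triangle plus one extra vertex; it needs 3 colors
adj = [[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 0],
       [0, 1, 0, 0]]
print(graph_coloring(adj, 3))  # [1, 2, 3, 1]
```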
Hamiltonian Cycle
In an undirected graph, a Hamiltonian path is a path that visits each vertex exactly once, and a Hamiltonian cycle (or circuit) is a Hamiltonian path in which there is also an edge from the last vertex back to the first vertex.
In this problem, we will try to determine whether a graph contains a Hamiltonian cycle or not. And when a
Hamiltonian cycle is present, also print the cycle.
Algorithm
isValid(v, k)
Begin
   if there is no edge between v and the (k−1)th vertex of the path, then
      return false
   if vertex v is already in the path, then
      return false
   return true
End
Time Complexity : O(N!), where N is the number of vertices.
Auxiliary Space : O(N), for the path array and the recursion stack.
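A backtracking sketch that reports a Hamiltonian cycle when one exists; the adjacency matrix is a made-up example:

```python
def hamiltonian_cycle(adj):
    n = len(adj)
    path = [0]  # fix vertex 0 as the starting point

    def is_valid(v):
        # v must be adjacent to the last vertex and not already used
        return adj[path[-1]][v] and v not in path

    def extend():
        if len(path) == n:
            return bool(adj[path[-1]][path[0]])  # edge back to the start?
        for v in range(1, n):
            if is_valid(v):
                path.append(v)
                if extend():
                    return True
                path.pop()  # backtrack
        return False

    return path + [0] if extend() else None

adj = [[0, 1, 0, 1, 0],
       [1, 0, 1, 1, 1],
       [0, 1, 0, 0, 1],
       [1, 1, 0, 0, 1],
       [0, 1, 1, 1, 0]]
print(hamiltonian_cycle(adj))  # [0, 1, 2, 4, 3, 0]
```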
1. Branch:
o This means splitting the problem into smaller subproblems (like making choices at each level).
2. Bound:
o This means calculating the best possible result (upper or lower bound) that can be achieved
from a given subproblem.
o If this bound is worse than the current best, we discard that branch.
3. Estimate the bound of each branch:
o This helps us know the best result that branch could give.
4. Compare bounds:
o If a branch’s bound is worse than the current best solution → prune (cut off) that branch.
Pruning: Skipping a node because it can’t give a better result than what we already have.
In basic search methods like Breadth-First Search (BFS) and Depth-First Search (DFS), the algorithm follows a
fixed rule for choosing the next step. This means it does not check whether one step might be better or faster
than the other. These methods are called “blind” because they do not give any preference to options that might
lead to the correct answer more quickly. As a result, BFS and DFS may explore many useless or longer paths,
which makes the search slow.
To make the search faster and smarter, we use an approach called Branch and Bound. In this approach, we use
a cost function to decide which option (or node) to explore next. This cost function helps us pick the option
that seems the most promising or least expensive. So, instead of blindly following the next available option, we
always pick the one with the smallest estimated cost. This increases our chances of finding a good or optimal
solution more quickly.
1. Row-Based Cost
For each remaining worker, pick the job with the lowest cost for that worker.
2. Column-Based Cost
For each remaining job, pick the worker with the lowest cost for that job.
The first approach is followed. Let's take the below example and try to calculate the promising cost when Job 2 is assigned to worker A. Finally, job 1 gets assigned to worker C as it has the minimum cost among unassigned jobs, and job 4 gets assigned to worker D as it is the only job left. The total cost becomes 2 + 3 + 5 + 4 = 14.
The Traveling Salesperson Problem (TSP) is a famous problem where a person (like a salesperson) must visit
a number of cities exactly once and then return to the starting city. The goal is to find the shortest possible
path that visits all cities once and ends back at the beginning. It is a hard problem because as the number of
cities increases, the number of possible paths grows very fast.
How It Works
We start from the first city and try visiting other cities one by one. At each step, we calculate the minimum
possible cost (a lower bound) of completing the rest of the tour from that point. If the cost is more than the
current best, we stop exploring that path.
We use a priority queue (like a waiting list) to keep track of which path to explore next. The path with the
lowest estimated cost is explored first. This way, we are always trying the most promising path before moving
on to others.
To make this efficient, we often reduce the cost matrix (which shows the distances between cities). For
example, we subtract the smallest value from each row and column to lower the overall cost estimate.
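The row and column reduction can be sketched as follows; the 4-city matrix is a made-up example, with None marking the forbidden stay-in-place moves:

```python
def reduce_matrix(cost):
    # Subtract each row's minimum, then each column's minimum.
    # The total amount subtracted is a lower bound on any complete tour.
    INF = float('inf')
    m = [[INF if c is None else c for c in row] for row in cost]
    n = len(m)
    total = 0
    for i in range(n):                            # row reduction
        low = min(m[i])
        if 0 < low < INF:
            total += low
            m[i] = [c - low for c in m[i]]
    for j in range(n):                            # column reduction
        low = min(m[i][j] for i in range(n))
        if 0 < low < INF:
            total += low
            for i in range(n):
                m[i][j] -= low
    return m, total

cost = [[None, 10, 15, 20],
        [10, None, 35, 25],
        [15, 35, None, 30],
        [20, 25, 30, None]]
reduced, bound = reduce_matrix(cost)
print(bound)  # 70: an initial lower bound on the tour cost
```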
YouTube Video Link: Learn from this video: https://www.youtube.com/watch?v=LMjDeWIkih4
You have n items; each item i has:
o A weight (w_i).
o A value (v_i).
Efficient: Prunes branches that cannot lead to optimal solutions, reducing computation.
Limitations
Complexity: Still exponential in worst cases (O(2^n) nodes possible), though pruning reduces this.
Upper Bound Quality: A weak bound may lead to less pruning and more computation.
In complexity theory, problems are divided into classes known as Complexity Classes. A Complexity Class is a set of problems with related complexity. With the help of complexity theory, we try to cover the following.
P Class
The P in the P class stands for Polynomial Time. It is the collection of decision problems(problems with a
“yes” or “no” answer) that can be solved by a deterministic machine (our computers) in polynomial time.
Features:
P is often a class of computational problems that are solvable and tractable. Tractable means that the
problems can be solved in theory as well as in practice. But the problems that can be solved in theory
but not in practice are known as intractable.
Most of the coding problems that we solve fall in this category, for example Merge Sort.
NP Class
The NP in NP class stands for Non-deterministic Polynomial Time. It is the collection of decision problems
that can be solved by a non-deterministic machine (note that our computers are deterministic) in polynomial
time.
Features:
The solutions of the NP class might be hard to find since they are being solved by a non-deterministic
machine but the solutions are easy to verify.
Example:
Let us consider an example to better understand the NP class. Suppose there is a company having a total
of 1000 employees having unique employee IDs. Assume that there are 200 rooms available for them. A
selection of 200 employees must be paired together, but the CEO of the company has the data of some
employees who can’t work in the same room due to personal reasons.
This is an example of an NP problem. Since it is easy to check if the given choice of 200 employees proposed
by a coworker is satisfactory or not i.e. no pair taken from the coworker list appears on the list given by the
CEO. But generating such a list from scratch seems to be so hard as to be completely impractical.
It indicates that if someone can provide us with the solution to the problem, we can find the correct and
incorrect pair in polynomial time. Thus for the NP class problem, the answer is possible, which can be
calculated in polynomial time.
This class contains many problems that one would like to be able to solve effectively, for example Graph Coloring.
Co-NP Class
Co-NP stands for the complement of NP Class. It means if the answer to a problem in Co-NP is No, then there
is proof that can be checked in polynomial time.
Features:
To place a problem in NP or Co-NP, there is no need to verify all the answers at once in polynomial time; it is enough to verify one particular answer ("yes" for NP, "no" for Co-NP) in polynomial time.
NP-Hard Class
An NP-hard problem is at least as hard as the hardest problem in NP: it is a class of problems such that every problem in NP reduces to it.
Features:
o Not all NP-hard problems are in NP.
o It takes a long time to check them. This means that if a solution for an NP-hard problem is given, then it takes a long time to check whether it is right or not.
o A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time reduction from L to A.

NP-Complete Class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the hard problems in NP.
Features:
o NP-complete problems are special, as any problem in the NP class can be transformed or reduced into an NP-complete problem in polynomial time.
o If one could solve an NP-complete problem in polynomial time, then one could also solve any NP problem in polynomial time.
o Some example problems include the Travelling Salesperson Problem, Subset Sum, and Graph Coloring.