
What is Dynamic Programming?

Dynamic Programming (DP) is a powerful technique used in computer science and mathematics to solve
problems by breaking them down into smaller subproblems, solving each subproblem only once, and
storing the results to avoid solving the same problem again.

When do we use Dynamic Programming?

You can use DP when:

1. The problem can be divided into smaller subproblems.

2. Subproblems repeat – meaning the same subproblem is solved multiple times.

3. The problem shows optimal substructure – the solution of a bigger problem depends on the solution of
its smaller subparts.

Real-Life Analogy

Imagine you're climbing stairs. Each time, you can either take 1 step or 2 steps. You want to know: how many
different ways can you reach the top?

If you solve this problem by trying every possibility, it will take a long time.

But with dynamic programming, you can:


 Start from the bottom and solve for a small number of steps (like 1 or 2).

 Store those results.

 Use them to build up the solution for higher steps.
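The staircase analogy can be written directly as a short bottom-up program. This sketch is ours (the function name count_ways is our own); ways[i] holds the number of distinct ways to reach step i:

```python
def count_ways(n):
    # Number of ways to climb n stairs taking 1 or 2 steps at a time
    if n <= 2:
        return n
    ways = [0] * (n + 1)
    ways[1], ways[2] = 1, 2              # base cases: 1 way to step 1, 2 ways to step 2
    for i in range(3, n + 1):
        # You arrive at step i either from i-1 (one step) or from i-2 (two steps)
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]
```

For example, count_ways(5) returns 8: each result is built from the two stored results before it, so nothing is recomputed.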

Key Concepts in DP

1. Overlapping Subproblems

You solve the same subproblem multiple times.

 Example: Calculating Fibonacci numbers.


2. Optimal Substructure

You can build the solution to the main problem using solutions to smaller problems.

 Example: Shortest path in a graph.

Types of Dynamic Programming

1. Top-Down (Memoization)

 Use recursion to solve problems.

 Store the answers of already solved subproblems (usually using an array or hash map).

 Avoid recomputing the same thing.

Example (dp is assumed to be an array initialized to -1):

int fib(int n) {
    if (n <= 1) return n;
    if (dp[n] != -1) return dp[n];
    return dp[n] = fib(n-1) + fib(n-2);
}

2. Bottom-Up (Tabulation)

 Start from the smallest problem.

 Use a loop to build up the solution.

 No recursion needed.

Example:

int fib(int n) {
    int dp[n+1];
    dp[0] = 0; dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i-1] + dp[i-2];
    return dp[n];
}

Examples of DP Problems

1. Fibonacci Numbers

2. 0/1 Knapsack Problem

3. Longest Common Subsequence


4. Edit Distance

5. Coin Change Problem

6. Matrix Chain Multiplication

Advantages of Dynamic Programming

 Avoids recomputing the same subproblems.

 Greatly improves performance compared to brute force.


 Useful for solving complex problems efficiently.

Dynamic Programming vs. Other Approaches


Dynamic Programming vs Recursion (Without Memoization)

Feature  | Dynamic Programming                              | Simple Recursion
Speed    | Much faster, avoids repeated calculations        | Slow when subproblems repeat
Memory   | Uses extra memory (for memoization or table)     | Uses call stack memory
Use Case | Problems with overlapping subproblems            | Simple problems without repetition
Example  | Fibonacci with memoization                       | Fibonacci with pure recursion

Dynamic Programming vs Greedy Algorithms

Feature  | Dynamic Programming                                  | Greedy Algorithms
Strategy | Looks at all possible solutions to find the best one | Makes locally optimal choices step by step
Accuracy | Always gives the optimal solution                    | May not always be correct
Speed    | Usually slower due to checking more options          | Very fast and simple
Use Case | 0/1 Knapsack, Longest Common Subsequence             | Activity Selection, Huffman Coding
Example  | Coin Change (min coins)                              | Making change with greedy coins (e.g. 1, 5, 10)
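The coin-change row can be made concrete. With the coin set {1, 3, 4} (our own example, chosen because greedy fails on it), greedy picks 4 + 1 + 1 for amount 6 (three coins), while DP finds 3 + 3 (two coins):

```python
def min_coins_dp(coins, amount):
    # dp[a] = minimum number of coins needed to make amount a
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount]

def min_coins_greedy(coins, amount):
    # Repeatedly take the largest coin that still fits
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count
```

min_coins_dp([1, 3, 4], 6) returns 2, while min_coins_greedy([1, 3, 4], 6) returns 3, illustrating why DP is needed when the greedy choice is not safe.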

Dynamic Programming vs Divide and Conquer

Feature                 | Dynamic Programming                    | Divide and Conquer
Overlapping Subproblems | Yes                                    | No (usually)
Storage                 | Stores results to avoid rework         | Doesn't store; solves every call independently
Examples                | Fibonacci, Matrix Chain Multiplication | Merge Sort, Quick Sort, Binary Search
Optimal Substructure    | Required                               | Also required

Dynamic Programming vs Brute Force

Feature         | Dynamic Programming          | Brute Force
Efficiency      | Efficient                    | Very inefficient
Time Complexity | Polynomial or better         | Exponential (most of the time)
Use Case        | When brute force is too slow | Very small input sizes
Example         | Knapsack using DP            | Knapsack using all possible combinations

Steps In DP
1. Break the Problem into Smaller Pieces
First, we break down the big problem into smaller, simpler subproblems. This makes it easier to solve
each small part.
2. Solve the Small Problems
We then find the best or optimal solution to each of these smaller problems.
3. Save the Answers
After solving each small problem, we store the answers. This is called memoization. It helps to keep
track of solutions, so we don’t need to redo the same work.
4. Reuse the Stored Answers
Whenever we encounter the same subproblem again, we simply reuse the answer that we stored earlier.
This saves time because we don’t calculate it again.
5. Put Everything Together
Finally, we combine the answers to all the small problems to find the solution to the original, bigger
problem.
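The five steps above can be sketched with a memoized Fibonacci (a minimal illustration in Python, not from the text):

```python
def fib(n, memo=None):
    # memo maps n -> fib(n), so each subproblem is solved only once
    if memo is None:
        memo = {}
    if n <= 1:                      # Steps 1-2: the smallest subproblems are solved directly
        return n
    if n in memo:                   # Step 4: reuse a stored answer
        return memo[n]
    # Steps 3 and 5: solve the subproblems, save the result, and combine
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the memo dictionary the same calls repeat exponentially many times; with it, each value of n is computed once.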
What is a Multistage Graph?
A Multistage Graph is a directed graph in which nodes are divided into stages (groups). You move from one
stage to the next by following edges, and you want to find the shortest path (or longest path) from the first
node (source) in the first stage to the last node (destination) in the final stage.

It is often solved using Dynamic Programming to avoid recalculating the same paths multiple times.

Shortest Paths in a Multistage Graph

(In the example figure, the stages are: source S; then A, B, C; then D, E, F; then destination T. The edge costs are: S→A = 1, S→B = 2, S→C = 5; A→D = 4, A→E = 11; B→D = 9, B→E = 5, B→F = 16; C→F = 2; D→T = 18, E→T = 13, F→T = 2.)

The greedy method cannot be applied to this case: it would choose (S, A, D, T), with cost 1 + 4 + 18 = 23. The real shortest path is (S, C, F, T), with cost 5 + 2 + 2 = 9.

Backward approach (computing d(v, T) from the last stage toward S):

d(A, T) = min{ 4 + d(D, T), 11 + d(E, T) } = min{ 4 + 18, 11 + 13 } = 22
d(B, T) = min{ 9 + d(D, T), 5 + d(E, T), 16 + d(F, T) } = min{ 9 + 18, 5 + 13, 16 + 2 } = 18
d(C, T) = min{ 2 + d(F, T) } = 2 + 2 = 4
d(S, T) = min{ 1 + d(A, T), 2 + d(B, T), 5 + d(C, T) } = min{ 1 + 22, 2 + 18, 5 + 4 } = 9

Forward approach (computing d(S, v) from the first stage toward T):

d(S, A) = 1, d(S, B) = 2, d(S, C) = 5
d(S, D) = min{ d(S, A) + d(A, D), d(S, B) + d(B, D) } = min{ 1 + 4, 2 + 9 } = 5
d(S, E) = min{ d(S, A) + d(A, E), d(S, B) + d(B, E) } = min{ 1 + 11, 2 + 5 } = 7
d(S, F) = min{ d(S, B) + d(B, F), d(S, C) + d(C, F) } = min{ 2 + 16, 5 + 2 } = 7
d(S, T) = min{ d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T) } = min{ 5 + 18, 7 + 13, 7 + 2 } = 9
# Multistage Graph using Dynamic Programming
# Shortest path from source to destination

def shortest_path(graph, stages, source, destination):
    # (the stages argument is kept for the problem statement but is not used here;
    #  nodes are assumed to be numbered in stage order)
    n = len(graph)
    dp = [float('inf')] * n   # Store minimum cost from each node to destination
    dp[destination] = 0       # Cost to reach destination from destination is 0
    path = [-1] * n           # To store the next node on the best path

    # Start from the second-last node and go backward
    for i in range(n - 2, -1, -1):
        for (v, cost) in graph[i]:
            if dp[i] > cost + dp[v]:
                dp[i] = cost + dp[v]
                path[i] = v

    # Print the shortest path
    print("Minimum cost from source to destination:", dp[source])
    print("Path:", end=" ")
    node = source
    while node != -1:
        print(node, end=" ")
        node = path[node]
    print()

# Example Graph
# Each index represents a node, and inside we have (target, cost)
graph = [
    [(1, 2), (2, 1)],  # Node 0 -> Node 1 (cost 2), Node 2 (cost 1)
    [(3, 3)],          # Node 1 -> Node 3 (cost 3)
    [(3, 1)],          # Node 2 -> Node 3 (cost 1)
    []                 # Node 3 is the destination
]
source = 0
destination = 3
stages = 4
shortest_path(graph, stages, source, destination)

What is Transitive Closure?

Transitive Closure of a graph tells us which vertices are reachable from which other vertices.
In simple words, it helps us find all possible paths, not just direct ones.

Easy Example

Imagine a graph where:

 A→B
 B→C
Now, even though there is no direct edge from A to C, you can still go from A → B → C.
So, A can reach C.

This is what transitive closure helps to find — all such indirect paths.

The transitive closure of the 4-vertex graph from the original figure (not shown here) is the matrix below, where row i, column j is 1 when vertex j is reachable from vertex i:

1 1 1 1
1 1 1 1
1 1 1 1
0 0 0 1
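Such a reachability matrix can be computed with Warshall's algorithm: j is reachable from i if it already is, or if some intermediate vertex k is reachable from i and reaches j. A minimal sketch (the 3-vertex graph is our own example, matching A → B, B → C above):

```python
def transitive_closure(adj):
    # adj[i][j] = 1 if there is a direct edge i -> j
    n = len(adj)
    reach = [row[:] for row in adj]
    for i in range(n):
        reach[i][i] = 1                      # every vertex reaches itself
    for k in range(n):                       # try each vertex as an intermediate stop
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1          # i reaches j through k
    return reach

# A -> B, B -> C (vertices 0, 1, 2): the closure also records A -> C
closure = transitive_closure([[0, 1, 0],
                              [0, 0, 1],
                              [0, 0, 0]])
```

closure[0][2] is 1 even though there is no direct edge, which is exactly the indirect path A → B → C.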

What is a Binomial Coefficient?

The Binomial Coefficient, written as C(n, k) or "n choose k", tells you:

How many ways you can choose k items from n total items.
It’s used in combinations, probability, Pascal’s Triangle, and more.

Formula:

C(n, k) = n! / (k! × (n − k)!)

Where ! means factorial (e.g., 4! = 4 × 3 × 2 × 1)

Key Idea of DP Approach

The Binomial Coefficient has a recursive property:

C(n, k) = C(n − 1, k − 1) + C(n − 1, k)


with base cases:

 C(n, 0) = 1

 C(n, n) = 1

Steps to Solve using Dynamic Programming

Step 1: Create a DP Table

 Use a 2D array dp[n+1][k+1] to store values.


 dp[i][j] will store the value of C(i, j)
Step 2: Fill the Table

 Use the recursive formula:


dp[i][j] = dp[i-1][j-1] + dp[i-1][j]

 Fill the base cases:

o dp[i][0] = 1

o dp[i][i] = 1

Step 3: Return dp[n][k]

That’s your final answer!


Let's calculate C(5, 2):

i\j  0  1  2

0    1
1    1  1
2    1  2  1
3    1  3  3
4    1  4  6
5    1  5  10

So C(5, 2) = dp[5][2] = 10.
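The three steps above can be implemented directly (a short sketch in Python; the function name binomial is ours):

```python
def binomial(n, k):
    # dp[i][j] stores C(i, j); entries with j > i stay 0
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                dp[i][j] = 1                          # base cases C(i, 0) = C(i, i) = 1
            else:
                dp[i][j] = dp[i - 1][j - 1] + dp[i - 1][j]   # Pascal's rule
    return dp[n][k]
```

binomial(5, 2) returns 10, matching the table above.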

What is the Floyd-Warshall Algorithm?


The Floyd-Warshall Algorithm is a famous algorithm used to find the shortest paths between all pairs of
nodes in a weighted graph.

This means it tells us the shortest distance from every node to every other node, even if the path goes
through other nodes in between.

Algorithm

Step 1 − Construct an adjacency matrix A with all the costs of edges present in the graph. If there is no path
between two vertices, mark the value as ∞.
Step 2 − Derive another adjacency matrix A1 from A keeping the first row and first column of the original
adjacency matrix intact in A1. And for the remaining values, say A1[i,j], if A[i,j]>A[i,k]+A[k,j] then
replace A1[i,j] with A[i,k]+A[k,j]. Otherwise, do not change the values. Here, in this step, k = 1 (first vertex
acting as pivot).

Step 3 − Repeat Step 2 for all the vertices in the graph by changing the k value for every pivot vertex until the
final matrix is achieved.

Step 4 − The final adjacency matrix obtained is the final solution, containing all the shortest paths.
Pseudocode

# n is the number of vertices in the graph
# graph[i][j] holds the weight of the edge from node i to node j
# If there is no direct edge between i and j, use a large number (like float('inf'))

def floyd_warshall(graph):
    n = len(graph)  # number of nodes
    # Step 1: Initialize distance matrix
    dist = [[graph[i][j] for j in range(n)] for i in range(n)]
    # Step 2: Loop for each possible intermediate node
    for k in range(n):
        # Step 3: Loop for each pair (i, j)
        for i in range(n):
            for j in range(n):
                # Step 4: Check if path through k is shorter
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Step 5: Final result in dist matrix
    return dist

Example
Consider the following directed weighted graph G = {V, E}. Find the shortest paths between all the vertices of
the graphs using the Floyd-Warshall algorithm.
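Since the original figure is not reproduced here, a small self-contained run can stand in for it. The 3-node graph below is our own example, with float('inf') marking a missing edge:

```python
def floyd_warshall(graph):
    # graph[i][j]: weight of edge i -> j; float('inf') if absent; 0 on the diagonal
    n = len(graph)
    dist = [row[:] for row in graph]
    for k in range(n):                  # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
graph = [[0,   3, INF],
         [INF, 0, 1],
         [2, INF, 0]]
dist = floyd_warshall(graph)
# dist[0][2] = 4 (via 0 -> 1 -> 2), dist[1][0] = 3 (via 1 -> 2 -> 0)
```

Every pair now has a finite shortest distance even though several direct edges were missing.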
Analysis

From the pseudocode above, the Floyd-Warshall algorithm operates using three for loops to find the shortest
distance between all pairs of vertices within a graph. Therefore, the time complexity of the Floyd-Warshall
algorithm is O(n3), where n is the number of vertices in the graph. The space complexity of the algorithm
is O(n2).

Advantages

 Simple and easy to implement.

 Works for both directed and undirected graphs.

 Can handle negative edge weights (but not negative cycles).

 Finds shortest paths between all pairs of nodes.

Limitations
 Slow for very large graphs because of its cubic time complexity.

 Cannot handle graphs with negative weight cycles.

Optimal Binary Search Tree

As we know, in a binary search tree the nodes in the left subtree have smaller values than the root node, and the nodes in the right subtree have greater values than the root node.

We know the key value of each node in the tree, and we also know the frequency of each node in terms of searching, i.e., how often that node is searched. The frequency and key value together determine the overall cost of searching a node. The cost of searching is a very important factor in various applications, and the overall cost of searching should be as small as possible. The time required to search a node in an unbalanced BST is greater than in a balanced binary search tree, because a balanced tree contains fewer levels. A tree that minimizes this search cost is known as an optimal binary search tree.

Let's understand this through an example.

If the keys are 10, 20, 30, 40, 50, 60, 70, then in the BST built from them all the nodes in the left subtree of the root are smaller than the root's value, and all the nodes in the right subtree are larger. When the tree is balanced, the maximum time required to search a node is proportional to the height of the tree, which is about log n.

Now we will see how many binary search trees can be made from a given number of keys.

For example: 10, 20, 30 are the keys, and several different binary search trees can be made out of them. The formula for calculating the number of trees on n keys is the Catalan number:

(2n)! / ((n + 1)! × n!)

When we use this formula for n = 3, we find that a total of 5 trees can be created.

The cost required for searching an element depends on the number of comparisons made to find it. Each of the 5 trees has a different average number of comparisons. The number of comparisons is least in the balanced binary search tree, because its height is least.

So far we considered only the height of the tree. To find the optimal binary search tree, we also take into account the frequency of searching each key. Let's assume the frequencies associated with the keys 10, 20, 30 are 3, 2, 5. With these frequencies the 5 trees have different total costs; the tree with the lowest cost, 17, is the optimal binary search tree.

Dynamic Approach

Consider the keys and frequencies in the table below:

key:        10  20  30  40
frequency:   4   2   6   3

Let c[i, j] be the minimum cost of a BST built from the keys numbered i+1 through j, and let w[i, j] be the sum of their frequencies.

First, we calculate the values where j − i is equal to zero:

When i = 0, j = 0; i = 1, j = 1; i = 2, j = 2; i = 3, j = 3; and i = 4, j = 4, then j − i = 0. Therefore, c[0,0] = 0, c[1,1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0.

Now we calculate the values where j − i equals 1. These ranges contain one key, so the cost is just that key's frequency:

The cost c[0,1] is 4 (the key is 10, and the frequency of key 10 is 4).
The cost c[1,2] is 2 (the key is 20, and the frequency of key 20 is 2).
The cost c[2,3] is 6 (the key is 30, and the frequency of key 30 is 6).
The cost c[3,4] is 3 (the key is 40, and the frequency of key 40 is 3).

Now we calculate the values where j − i = 2. In these cases, we consider two keys.

When i = 0 and j = 2: keys 10 and 20. Two trees can be made from these keys:
In the first tree (10 as root, 20 as its right child), the cost is 4×1 + 2×2 = 8.
In the second tree (20 as root, 10 as its left child), the cost is 4×2 + 2×1 = 10.
The minimum cost is 8; therefore, c[0,2] = 8.

When i = 1 and j = 3: keys 20 and 30:
In the first tree (20 as root, 30 as its right child), the cost is 2×1 + 6×2 = 14.
In the second tree (30 as root, 20 as its left child), the cost is 6×1 + 2×2 = 10.
The minimum cost is 10; therefore, c[1,3] = 10.

When i = 2 and j = 4: keys 30 and 40:
In the first tree (30 as root, 40 as its right child), the cost is 6×1 + 3×2 = 12.
In the second tree (40 as root, 30 as its left child), the cost is 3×1 + 6×2 = 15.
The minimum cost is 12; therefore, c[2,4] = 12.

Now we calculate the values where j − i = 3.

When i = 0 and j = 3, we consider three keys: 10, 20, and 30 (frequencies 4, 2, 6).

If 10 is the root: with 20 as its right child and 30 as the right child of 20, the cost is 4×1 + 2×2 + 6×3 = 26; with 30 as its right child and 20 as the left child of 30, the cost is 4×1 + 6×2 + 2×3 = 22.

If 20 is the root, with 10 as its left child and 30 as its right child, the cost is 2×1 + 4×2 + 6×2 = 22.

If 30 is the root: with 20 as its left child and 10 as the left child of 20, the cost is 6×1 + 2×2 + 4×3 = 22; with 10 as its left child and 20 as the right child of 10, the cost is 6×1 + 4×2 + 2×3 = 20.

The minimum cost is 20, obtained when 30 is the root. So c[0,3] = 20.

When i = 1 and j = 4, we consider the keys 20, 30, 40, with w[1,4] = 2 + 6 + 3 = 11:

c[1,4] = min{ c[1,1] + c[2,4], c[1,2] + c[3,4], c[1,3] + c[4,4] } + 11
       = min{ 0 + 12, 2 + 3, 10 + 0 } + 11
       = min{ 12, 5, 10 } + 11

The minimum value is 5; therefore, c[1,4] = 5 + 11 = 16.

Now we calculate the values where j − i = 4.

When j = 4 and i = 0, we consider all four keys, 10, 20, 30 and 40, whose frequencies are 4, 2, 6 and 3 respectively:

w[0,4] = 4 + 2 + 6 + 3 = 15

If we consider 10 as the root node, then C[0,4] = c[0,0] + c[1,4] + w[0,4] = 0 + 16 + 15 = 31.
If we consider 20 as the root node, then C[0,4] = c[0,1] + c[2,4] + w[0,4] = 4 + 12 + 15 = 31.
If we consider 30 as the root node, then C[0,4] = c[0,2] + c[3,4] + w[0,4] = 8 + 3 + 15 = 26.
If we consider 40 as the root node, then C[0,4] = c[0,3] + c[4,4] + w[0,4] = 20 + 0 + 15 = 35.

In the above cases, we have observed that 26 is the minimum cost; therefore, c[0,4] is equal to 26, and the optimal binary search tree has 30 as its root.

The general formula for calculating the minimum cost is:

C[i,j] = min{ c[i, k−1] + c[k, j] } + w(i, j), taken over all roots k with i < k ≤ j
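The recurrence can be checked with a short program (a sketch; the function name optimal_bst_cost is ours, and freq holds the frequencies in key order):

```python
def optimal_bst_cost(freq):
    # c[i][j]: minimum cost of a BST over keys i+1..j (0 when the range is empty)
    n = len(freq)
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):              # size of the key range
        for i in range(n - length + 1):
            j = i + length
            w = sum(freq[i:j])                  # total frequency of keys i+1..j
            # Try every key k in the range as the root
            c[i][j] = min(c[i][k - 1] + c[k][j] for k in range(i + 1, j + 1)) + w
    return c[0][n]
```

optimal_bst_cost([4, 2, 6, 3]) returns 26, and optimal_bst_cost([3, 2, 5]) returns 17, matching both examples worked above.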

Where Is It Used?

 Search engines

 Dictionary lookups

 Code compilers (for symbol tables)


 Auto-complete features

0/1 Knapsack Problem (Dynamic Programming)

Here the knapsack is like a container or a bag. Suppose we are given some items, each of which has a weight and a profit. We have to put items into the knapsack in such a way that the total value produces the maximum profit. For example, if the weight of the container is 20 kg, we have to select the items so that the sum of their weights is smaller than or equal to the weight of the container, and the profit is maximum.

There are two types of knapsack problems:

o 0/1 knapsack problem

o Fractional knapsack problem

We will discuss both problems one by one. First, we will learn about the 0/1 knapsack problem.

What is the 0/1 knapsack problem?

In the 0/1 knapsack problem, an item is either put into the knapsack completely or not at all. For example, suppose we have two items with weights 2 kg and 3 kg. If we pick the 2 kg item, we have to pick it completely; we cannot take only 1 kg of it, because items are not divisible. This is the 0/1 knapsack problem: either we pick an item completely or we do not pick it at all. The 0/1 knapsack problem is solved by dynamic programming.

What is the fractional knapsack problem?

In the fractional knapsack problem, we can divide an item. For example, if we have an item of 3 kg, we can pick 2 kg of it and leave the remaining 1 kg. The fractional knapsack problem is solved by the greedy approach.

How 0/1 Knapsack Works (Step-by-Step):

Step 1: Start

You are given:

 n items

 Each item has a weight and a value

 A knapsack (bag) with maximum weight capacity W

Goal: Pick items to maximize total value, but don't exceed the weight limit.

Step 2: Create a Table

Make a 2D table dp[n+1][W+1]:

 Rows = items (0 to n)

 Columns = weight limits (0 to W)

Each cell dp[i][w] means: "What's the max value we can get with the first i items and total weight limit w?"

Step 3: Initialize the Table

Set the first row and first column to 0:

 dp[0][w] = 0 for all weights (no items = 0 value)

 dp[i][0] = 0 for all items (0 weight capacity = 0 value)

Step 4: Fill the Table

Go through all items and weights. In simple words: try both choices, take the item or leave it, and choose the one that gives more value.

Step 5: Final Answer

Look at the last cell: dp[n][W] = maximum value you can carry.

Example of the 0/1 knapsack problem

Consider the problem having weights and profits:

Weights: {3, 4, 6, 5}
Profits: {2, 3, 1, 4}

The weight of the knapsack is 8 kg. The number of items is 4.

One way to solve the problem is to enumerate combinations, where 1 denotes that an item is picked completely and 0 denotes that it is not picked, for example:

xi = {1, 0, 0, 1}
   = {0, 0, 0, 1}
   = {0, 1, 0, 1}

Since there are 4 items, the number of possible combinations is 2^4 = 16. Once all the combinations are made, we select the combination that provides the maximum profit.

Another approach to solve the problem is dynamic programming. In the dynamic programming approach, the complicated problem is divided into subproblems; we find the solution of each subproblem, and the solutions of the subproblems are used to find the solution of the complex problem.

How can this problem be solved using the dynamic programming approach?

First, we write the weights in ascending order and the profits according to their weights:

wi = {3, 4, 5, 6}
pi = {2, 3, 4, 1}

We create a matrix M in which the rows represent the items (i = 0 to 4) and the columns represent the knapsack capacities (W = 0 to 8). We have not taken the weight 8 directly; the problem is divided into sub-capacities 0, 1, 2, 3, 4, 5, 6, 7, 8. The solutions of the subproblems are saved in the cells, and the answer to the whole problem is stored in the final cell. The first row and the first column are 0, as with no items (or with capacity 0) no profit is possible:

     0  1  2  3  4  5  6  7  8
0    0  0  0  0  0  0  0  0  0
1    0
2    0
3    0
4    0

Each remaining cell is filled with the rule

M[i][W] = max(M[i-1][W], pi + M[i-1][W - wi]),

where the second option is considered only when wi ≤ W (the item fits).

Row i = 1 (w1 = 3, p1 = 2):

When W = 1 and W = 2, the only available item has weight 3 and does not fit, so M[1][1] = M[1][2] = 0. For every W from 3 to 8 the item fits, so we put its profit 2 in those cells:

1    0  0  0  2  2  2  2  2  2

Row i = 2 (w2 = 4, p2 = 3):

When W = 1 or 2, neither item fits, so M[2][1] = M[2][2] = 0.
When W = 3, only the item of weight 3 fits, so M[2][3] = 2.
When W = 4, 5, or 6, the item of weight 4 fits, and its profit 3 is more than the profit of the item of weight 3, so M[2][4] = M[2][5] = M[2][6] = 3.
When W = 7 or 8, both items fit together (3 + 4 = 7), with total profit 2 + 3 = 5, so M[2][7] = M[2][8] = 5.

2    0  0  0  2  3  3  3  5  5

Row i = 3 (w3 = 5, p3 = 4):

When W = 1 or 2, nothing fits, so M[3][1] = M[3][2] = 0.
When W = 3 and W = 4, the new item of weight 5 does not fit, so the values are copied from the row above: M[3][3] = 2, M[3][4] = 3.
When W = 5, the item of weight 5 now fits, and its profit 4 is more than M[2][5] = 3, so M[3][5] = 4.
When W = 6, M[3][6] = max(M[2][6], 4 + M[2][1]) = max(3, 4) = 4.
When W = 7, M[3][7] = max(M[2][7], 4 + M[2][2]) = max(5, 4) = 5.
When W = 8, M[3][8] = max(M[2][8], 4 + M[2][3]) = max(5, 4 + 2) = 6.

3    0  0  0  2  3  4  4  5  6

Now the value of 'i' gets incremented and becomes 4.

Row i = 4 (w4 = 6, p4 = 1):

When W = 1 or 2, the weight of every item is more than the capacity of the knapsack, so we cannot add any item: M[4][1] = M[4][2] = 0.
When W = 3, the item of weight 3 can be put in the knapsack, so M[4][3] = 2.
When W = 4, M[4][4] = 3, and when W = 5, M[4][5] = 4, exactly as in the row above, since the new item of weight 6 does not fit yet.
0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 8

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 2 2 2 2 2 2 1 0 0 0 2 2 2 2 2 2

2 0 0 0 2 3 3 3 5 5 2 0 0 0 2 3 3 3 5 5

3 0 0 0 1 3 3 3 5 5 3 0 0 0 1 3 3 3 5 5

4 0 0 0 2 4 0 0 0 2 3 3

When i = 4, W = 4 When i = 4, W = 6

The weight corresponding to the value 4 is 6, i.e., The weight corresponding to the value 4 is 6, i.e.,
w4 = 6. Since we have four items in the set of w4 = 6. Since we have four items in the set of
weights 3, 4, 5, and 6 respectively, and the weight weights 3, 4, 5, and 6 respectively, and the weight
of the knapsack is 4. The item with a weight 4 can of the knapsack is 6. In this case, we can put the
be put in the knapsack and the profit corresponding items in the knapsack either of weight 3, 4, 5 or 6
to the weight 4 is 3, so we will add 3 at M[4][4] but the profit, i.e., 4 corresponding to the weight 6
shown as below: is highest among all the items; therefore, we add 4
at M[4][6] shown as below:
0 1 2 3 4 5 6 7 8
0 1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
1 0 0 0 2 2 2 2 2 2
1 0 0 0 2 2 2 2 2 2
2 0 0 0 2 3 3 3 5 5
2 0 0 0 2 3 3 3 5 5
3 0 0 0 1 3 3 3 5 5
3 0 0 0 1 3 3 3 5 5
4 0 0 0 2 3
4 0 0 0 2 3 3 4
When i = 4, W = 5
When i = 4, W = 7
The weight corresponding to the value 4 is 6, i.e.,
w4 = 6. Since we have four items in the set of The weight corresponding to the value 4 is 6, i.e.,
weights 3, 4, 5, and 6 respectively, and the weight w4 = 6. Since we have four items in the set of
of the knapsack is 5. The item with a weight 4 can weights 3, 4, 5, and 6 respectively, and the weight
be put in the knapsack and the profit corresponding of the knapsack is 7. Here, if we add two items of
weights 3 and 4 then it will produce the maximum
profit, i.e., (2 + 3) equals to 5, so we add 5 at pointer will be shifted upwards as shown in the
M[4][7] shown as below: below table:

0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 8

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 2 2 2 2 2 2 1 0 0 0 2 2 2 2 2 2

2 0 0 0 2 3 3 3 5 5 2 0 0 0 2 3 3 3 5 5

3 0 0 0 2 3 3 3 5 5 3 0 0 0 2 3 3 3 5 5

4 0 0 0 2 3 3 4 5 4 0 0 0 2 3 3 4 5 5

When i = 4, W = 8 Again, we will compare the value 5 from the above


row, i.e., i = 2. Since the above row contains the
The weight corresponding to the value 4 is 6, i.e.,
value 5 so the pointer will again be shifted upwards
w4 = 6. Since we have four items in the set of
as shown in the below table:
weights 3, 4, 5, and 6 respectively, and the weight
of the knapsack is 8. Here, if we add two items of
0 1 2 3 4 5 6 7 8
weights 3 and 4 then it will produce the maximum
profit, i.e., (2 + 3) equals to 5, so we add 5 at
M[4][8] shown as below: 0 0 0 0 0 0 0 0 0 0

0 1 2 3 4 5 6 7 8
1 0 0 0 2 2 2 2 2 2

0 0 0 0 0 0 0 0 0 0
2 0 0 0 2 3 3 3 5 5

1 0 0 0 2 2 2 2 2 2
3 0 0 0 2 3 3 3 5 5

2 0 0 0 2 3 3 3 5 5
4 0 0 0 2 3 3 4 5 5

3 0 0 0 2 3 3 3 5 5 Again, we will compare the value 5 from the above


row, i.e., i = 1. Since the above row does not
contain the same value so we will consider the row
4 0 0 0 2 3 3 4 5 5
i=1, and the weight corresponding to the row is 4.
Therefore, we have selected the weight 4 and we
As we can observe in the above table that 5 is the
have rejected the weights 5 and 6 shown below:
maximum profit among all the entries. The pointer
points to the last row and the last column having 5 x = { 1, 0, 0}
value. Now we will compare 5 value with the
The profit corresponding to the weight is 3.
previous row; if the previous row, i.e., i = 3 contains
Therefore, the remaining profit is (5 - 3) equals to 2.
the same value 5 then the pointer will shift upwards.
Now we will compare this value 2 with the row i =
Since the previous row contains the value 5 so the
2. Since the row (i = 1) contains the value 2;
4 0 0 0 2 3 3 4 5 5
therefore, the pointer shifted upwards shown below:

Again we compare the value 2 with a above row,


0 1 2 3 4 5 6 7 8
i.e., i = 1. Since the row i =0 does not contain the
value 2, so row i = 1 will be selected and the weight
0 0 0 0 0 0 0 0 0 0 corresponding to the i = 1 is 3 shown below:

X = {1, 1, 0, 0}
1 0 0 0 2 2 2 2 2 2
The profit corresponding to the weight is 2.
Therefore, the remaining profit is 0. We compare 0
2 0 0 0 2 3 3 3 5 5 value with the above row. Since the above row
contains a 0 value but the profit corresponding to
this row is 0. In this problem, two weights are
3 0 0 0 2 3 3 3 5 5
selected, i.e., 3 and 4 to maximize the profit.

Time and Space Complexity

Time Complexity:

 O(n × W)

 Where n is the number of items and W is the maximum capacity of the knapsack.

 This is because we use a dynamic programming table with n+1 rows and W+1 columns.

Space Complexity:

 Usually O(n × W) if a 2D table is used.


 Can be optimized to O(W) if only the previous row is needed.
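The table-filling and backtracking procedure above can be sketched in Python. This is a minimal illustration using the example's weights 3, 4, 5, 6, profits 2, 3, 3, 4 and capacity W = 8 (the function name `knapsack_01` is ours, not from the notes):

```python
def knapsack_01(weights, profits, capacity):
    """Fill the DP table M[i][w] = best profit using the first i items
    with knapsack capacity w, then backtrack to recover the items."""
    n = len(weights)
    M = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            M[i][w] = M[i - 1][w]                     # item i left out
            if weights[i - 1] <= w:                   # item i fits
                M[i][w] = max(M[i][w],
                              profits[i - 1] + M[i - 1][w - weights[i - 1]])
    # Backtrack: if M[i][w] differs from M[i-1][w], item i was taken.
    x, w = [0] * n, capacity
    for i in range(n, 0, -1):
        if M[i][w] != M[i - 1][w]:
            x[i - 1] = 1
            w -= weights[i - 1]
    return M[n][capacity], x

# The example from the text: weights 3, 4, 5, 6 with profits 2, 3, 3, 4, W = 8.
best, x = knapsack_01([3, 4, 5, 6], [2, 3, 3, 4], 8)
print(best, x)   # 5 [1, 1, 0, 0]
```

The result matches the walkthrough: maximum profit 5, with the items of weights 3 and 4 selected.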

Advantages of Using Dynamic Programming for 0/1 Knapsack

 Efficient: Avoids solving the same subproblems repeatedly by storing previous results.

 Guaranteed optimality: Always finds the best possible solution within the given constraints.

 Easy to implement: Once the concept is understood, it's straightforward to implement in code.

 Useful for constrained problems: Works well when items must be selected under strict size or cost limits.

Applications of 0/1 Knapsack Problem

 Investment selection: Choosing which investments to make under a limited budget.

 Project scheduling: Selecting a set of projects to complete within a time or cost constraint.

 Cargo loading: Packing a truck with items to maximize value without exceeding weight.

 Resource allocation: Distributing limited resources to get the highest benefit.

 Computer games: Deciding which items a character can carry in a game based on capacity.

 Cryptography (historical): Was once used in cryptographic algorithms like Merkle–Hellman knapsack.

Bellman Ford Algorithm

The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the shortest distance from a single vertex to all the other vertices of a weighted graph. There are various other algorithms used to find the shortest path, like Dijkstra's algorithm. If the weighted graph contains negative weight values, Dijkstra's algorithm cannot guarantee a correct answer. In contrast, the Bellman-Ford algorithm gives the correct answer even if the weighted graph contains negative weight values, as long as there is no negative weight cycle.

Rule of this algorithm

1. We will go on relaxing all the edges (n - 1) times where,
2. n = number of vertices

How it Works (Step-by-Step):

Step 1: Start

 Choose a source node (like starting from city A).

 Set distance of source to 0.

 Set distance to all others as ∞ (we don't know them yet).

Step 2: Relax Edges

 Repeat this process V-1 times (V = number of nodes):

o For each edge in the graph:

 If the new path is shorter, update the distance.

 (Like if you found a shortcut, you update your plan.)

Step 3: Check for Negative Cycle

 Go through all the edges one more time.

 If any distance changes again, it means there's a negative cycle.

 A negative cycle is a loop that makes your distance go lower and lower forever, which is not good!

Consider the below graph:

As we can observe in the above graph, some of the weights are negative. The above graph contains 6 vertices, so we will go on relaxing the edges 5 times. The loop will iterate 5 times to get the correct answer. If the loop is iterated more than 5 times, the answer will still be the same, i.e., there will be no further change in the distances of the vertices.

Relaxing means:

1. If (d(u) + c(u , v) < d(v))
2. d(v) = d(u) + c(u , v)

To find the shortest paths in the above graph, the first step is to note down all the edges:

(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)

Let's consider the source vertex as 'A'; therefore, the distance value at vertex A is 0 and the distance value at all the other vertices is infinity.

Since the graph has six vertices, there will be five iterations.

First iteration

Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula: d(u) = 0, d(v) = ∞, c(u , v) = 6. Since (0 + 6) is less than ∞, we update d(v) = d(u) + c(u , v) = 0 + 6 = 6. Therefore, the distance of vertex B is 6.

Consider the edge (A, C): d(u) = 0, d(v) = ∞, c(u , v) = 4. Since (0 + 4) is less than ∞, we update d(C) = 4.

Consider the edge (A, D): d(u) = 0, d(v) = ∞, c(u , v) = 5. Since (0 + 5) is less than ∞, we update d(D) = 5.

Consider the edge (B, E): d(u) = 6, d(v) = ∞, c(u , v) = -1. Since (6 - 1) is less than ∞, we update d(E) = 5.

Consider the edge (C, E): d(u) = 4, d(v) = 5, c(u , v) = 3. Since (4 + 3) is greater than 5, there is no update. The value at vertex E stays 5.

Consider the edge (D, C): d(u) = 5, d(v) = 4, c(u , v) = -2. Since (5 - 2) is less than 4, we update d(C) = 3.

Consider the edge (D, F): d(u) = 5, d(v) = ∞, c(u , v) = -1. Since (5 - 1) is less than ∞, we update d(F) = 4.

Consider the edge (E, F): d(u) = 5, d(v) = 4, c(u , v) = 3. Since (5 + 3) is greater than 4, there is no update on the distance value of vertex F.

Consider the edge (C, B): d(u) = 3, d(v) = 6, c(u , v) = -2. Since (3 - 2) is less than 6, we update d(B) = 1.

Now the first iteration is completed. We move to the second iteration.

Second iteration:

In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is greater than 1, there is no update at vertex B.

The next edge is (A, C). Since (0 + 4) is greater than 3, there is no update at vertex C.

The next edge is (A, D). Since (0 + 5) equals 5, there is no update at vertex D.

The next edge is (B, E). Since (1 - 1) equals 0, which is less than 5, we update: d(E) = d(B) + c(B , E) = 1 - 1 = 0.

The next edge is (C, E). Since (3 + 3) equals 6, which is greater than the current distance of E, there is no update at vertex E.

The next edge is (D, C). Since (5 - 2) equals 3, there is no update at vertex C.

The next edge is (D, F). Since (5 - 1) equals 4, there is no update at vertex F.

The next edge is (E, F). Since (0 + 3) equals 3, which is less than 4, we update: d(F) = d(E) + c(E , F) = 0 + 3 = 3.

The next edge is (C, B). Since (3 - 2) equals 1, there is no update at vertex B.

Third iteration

We perform the same steps as we did in the previous iterations and observe that none of the distances change any further. The final distances of the vertices are:

A: 0
B: 1
C: 3
D: 5
E: 0
F: 3
Time Complexity

The time complexity of the Bellman-Ford algorithm is O(|V| × |E|), because all |E| edges are relaxed (|V| - 1) times.

Pseudocode:

1. function bellmanFord(G, S)
2. for each vertex V in G
3. distance[V] <- infinite
4. previous[V] <- NULL
5. distance[S] <- 0
6.
7. repeat (|V| - 1) times
8. for each edge (U,V) in G
9. tempDistance <- distance[U] + edge_weight(U, V)
10. if tempDistance < distance[V]
11. distance[V] <- tempDistance
12. previous[V] <- U
13.
14. for each edge (U,V) in G
15. if distance[U] + edge_weight(U, V) < distance[V]
16. Error: Negative Cycle Exists
17.
18. return distance[], previous[]

Drawbacks of Bellman ford algorithm

o The Bellman-Ford algorithm does not produce a correct answer if the sum of the edges of a cycle is negative. Let's understand this property through an example. Consider the below graph.

o In the above graph, we consider vertex 1 as the source vertex and give it the value 0. We give the infinity value to all the other vertices.

Edges can be written as:

(1, 3), (1, 2), (2, 4), (3, 2), (4, 3)

First iteration

Consider the edge (1, 3). Denote vertex '1' as 'u' and vertex '3' as 'v': d(u) = 0, d(v) = ∞, c(u , v) = 5. Since (0 + 5) is less than ∞, we update d(3) = 5.

Consider the edge (1, 2): d(u) = 0, d(v) = ∞, c(u , v) = 4. Since (0 + 4) is less than ∞, we update d(2) = 4.

Consider the edge (3, 2): d(u) = 5, d(v) = 4, c(u , v) = 7. Since (5 + 7) is greater than 4, there is no update at vertex 2.

Consider the edge (2, 4): d(u) = 4, d(v) = ∞, c(u , v) = 7. Since (4 + 7) equals 11, which is less than ∞, we update d(4) = 11.

Consider the edge (4, 3): d(u) = 11, d(v) = 5, c(u , v) = -15. Since (11 - 15) equals -4, which is less than 5, we update d(3) = -4.

Second iteration

Now, again we check all the edges. The first edge is (1, 3). Since (0 + 5) equals 5, which is greater than -4, there is no update at vertex 3.

The next edge is (1, 2). Since (0 + 4) equals 4, there is no update at vertex 2.

The next edge is (3, 2). Since (-4 + 7) equals 3, which is less than 4, we update: d(2) = d(3) + c(3, 2) = -4 + 7 = 3.

The next edge is (2, 4). Since (3 + 7) equals 10, which is less than 11, we update: d(4) = 3 + 7 = 10.

The next edge is (4, 3). Since (10 - 15) equals -5, which is less than -4, we update: d(3) = 10 - 15 = -5.

Third iteration

Again we check all the edges. The edges (1, 3) and (1, 2) cause no update. The next edge is (3, 2). Since (-5 + 7) equals 2, which is less than 3, we update: d(2) = -5 + 7 = 2. The next edge is (2, 4). Since (2 + 7) equals 9, which is less than 10, we update: d(4) = 2 + 7 = 9. The next edge is (4, 3). Since (9 - 15) equals -6, which is less than -5, we update: d(3) = 9 - 15 = -6.

Since the graph contains 4 vertices, according to the Bellman-Ford algorithm there are only 3 iterations. If we try to perform a 4th iteration, the distances of the vertices from the source should not change; if any distance varies, the graph contains a negative weight cycle and the algorithm cannot provide a correct answer.

4th iteration

The first edge is (1, 3). Since (0 + 5) equals 5, which is greater than -6, there is no change at vertex 3. The next edge is (1, 2). Since (0 + 4) is greater than 2, there is no update. The next edge is (3, 2). Since (-6 + 7) equals 1, which is less than 2, we update: d(2) = -6 + 7 = 1.

In this case, a vertex value is still being updated in the 4th iteration. So we conclude that the Bellman-Ford algorithm does not work when the graph contains a negative weight cycle; it can only detect that such a cycle exists.

Time Complexity

 O(V * E)

 V = number of vertices (nodes)

 E = number of edges (connections)

This is because we relax all edges V - 1 times, and each relaxation takes O(1) time.
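The pseudocode above can be turned into a short runnable sketch. The edge list below is the first worked example's graph with source A (the helper name `bellman_ford` is our own):

```python
INF = float('inf')

def bellman_ford(vertices, edges, source):
    """Relax every edge (|V| - 1) times; if any edge can still be relaxed
    afterwards, the graph contains a negative weight cycle."""
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:           # relaxation step
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:               # extra pass: negative cycle check
        if dist[u] + w < dist[v]:
            raise ValueError("negative weight cycle exists")
    return dist

# The edge list of the worked example above, with source vertex A:
edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
         ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1), ('E', 'F', 3),
         ('C', 'B', -2)]
print(bellman_ford('ABCDEF', edges, 'A'))
# {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}
```

The printed distances agree with the iteration-by-iteration walkthrough.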

Advantages

 Works even with negative weights


 Detects negative cycles

 Simpler to implement than Dijkstra in some cases


Disadvantages

 Slower than Dijkstra's algorithm for large graphs

 Less suitable when you only need the shortest path between two fixed points, since it always computes distances from the source to every vertex

What is the Travelling Salesperson Problem?


The Travelling Salesperson Problem (TSP) is a classic problem in computer science and optimization.

The problem is:

A salesperson needs to visit a number of cities exactly once and then return to the starting city. The goal is to
find the shortest possible route that visits each city once and comes back to where they started.

Why is TSP Important?

 It models real-world problems like delivery routes, circuit design, and route planning.

 It’s used in logistics, robotics, and transportation.

Basic Idea Behind the Algorithm


We want to try all possible orders of visiting cities, and pick the one that has the least total distance. But
since there are so many combinations, this becomes very slow when the number of cities increases.
So we use Dynamic Programming to solve it faster.

Steps of TSP using Dynamic Programming:

1. Start from the first city.

2. Use a bitmask to keep track of which cities have been visited.

3. For every possible combination of visited cities, calculate the minimum cost to reach a city.

4. Try to go from the current city to a new unvisited city, and update the cost if the new path is cheaper.

5. Repeat this process for all combinations of visited cities and destination cities.

6. After all cities are visited, add the cost to return back to the starting city.

7. The final answer is the minimum total cost to complete the full round trip.
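The bitmask procedure in the steps above is the Held-Karp dynamic program. Here is a minimal sketch; the 4-city distance matrix at the end is an assumed illustration, not taken from these notes:

```python
def tsp(dist):
    """Held-Karp bitmask DP: dp[mask][j] is the cheapest cost of starting
    at city 0, visiting exactly the cities in 'mask', and ending at j."""
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                        # only city 0 visited, standing at 0
    for mask in range(1 << n):
        if not mask & 1:                # every partial tour includes city 0
            continue
        for j in range(n):
            if dp[mask][j] == INF:
                continue
            for k in range(n):          # extend the tour to an unvisited k
                if mask & (1 << k):
                    continue
                nxt = mask | (1 << k)
                cost = dp[mask][j] + dist[j][k]
                if cost < dp[nxt][k]:
                    dp[nxt][k] = cost
    full = (1 << n) - 1                 # all cities visited: close the tour
    return min(dp[full][j] + dist[j][0] for j in range(1, n))

# An assumed 4-city distance matrix (illustration only):
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp(d))   # 80, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```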

Time and Space Complexity

 Time Complexity: O(n² × 2ⁿ)

 Space Complexity: O(n × 2ⁿ)

This is faster than checking all n! (factorial) permutations.


Limitations

 Still slow for large number of cities (like 20 or more)

 Better suited for small and medium-sized problems

N-Queen Problem – Introduction

The N-Queen problem is a famous backtracking problem in computer science.

The challenge is:

 Place N queens on an N×N chessboard

 No two queens should attack each other

Queens can attack horizontally, vertically, and diagonally.

So, you have to place the queens in such a way that:


 No two queens are in the same row

 No two queens are in the same column

 No two queens are in the same diagonal


Why Backtracking?

Backtracking is a method where:

 We try a solution

 If it doesn’t work, we go back (backtrack) and try a different option

This helps us explore all possible ways of placing queens and stop when we find a correct solution.

Example: 4-Queen Problem

We have a 4×4 chessboard. We need to place 4 queens.

Step-by-step process:

1. Start from the first row.

2. Try placing a queen in each column, one by one.

3. For each column, check if the queen is safe (not attacked by any previous queen).

4. If safe, place the queen and move to the next row.


5. Repeat the process.

6. If you reach a row where no position is safe, go back to the previous row and move the queen to a new
column.
7. Keep repeating this process until all 4 queens are placed safely.

Visual Example for 4-Queen

One valid arrangement on the 4×4 board places the queens at (row, column) positions (1, 2), (2, 4), (3, 1), (4, 3). This is a correct solution because no two queens attack each other.

How to Check if a Queen Placement is Safe?

For any new queen, we must check:

 Same column: Is there any queen above in the same column?

 Left diagonal: Is there any queen on the top-left diagonal?

 Right diagonal: Is there any queen on the top-right diagonal?

(Rows never clash, because we place exactly one queen per row.)
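The row-by-row placement with these safety checks can be sketched as follows (a minimal illustration; note that the column and the two diagonal checks collapse into one absolute-difference test):

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row; cols[r] is the column of
    the queen in row r. Returns all solutions as column lists."""
    solutions, cols = [], []

    def is_safe(row, col):
        for r, c in enumerate(cols):
            # Same column, or same diagonal (equal row/column offsets).
            if c == col or abs(row - r) == abs(col - c):
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(cols[:])   # all n queens placed safely
            return
        for col in range(n):
            if is_safe(row, col):
                cols.append(col)        # try this column ...
                place(row + 1)
                cols.pop()              # ... and backtrack

    place(0)
    return solutions

print(solve_n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]
```

For the 4-queen problem there are exactly two solutions, which are mirror images of each other.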
Sum of Subset Problem Using Backtracking

In the sum of subsets problem, we are given a set of non-negative integer elements and a target sum value. The task is to find all subsets of the given set whose elements add up to the given sum value.

Ex: let A be a set

A = {5, 7, 10, 12, 15, 18, 20}

and given sum m = 35

so the subsets that add up to 35 are:

{15, 20}, {18, 7, 10}, {5, 10, 20}, and {18, 12, 5}

How Subset Sum Works with Backtracking (Step-by-Step)

Step 1: Start with the first number.

Decide: Do you include it in your subset or skip it?

Step 2: Move to the next number.

Again, try both:

 Include it, and add to the current sum.


 Skip it, and keep the current sum the same.

Step 3: Check your sum.

 If the current sum equals the target, it’s a valid solution — save it!
 If the sum becomes greater than target, stop going forward on this path.

 If not yet at the target, keep moving to the next number.

Step 4: Continue this process recursively.

Keep exploring all paths until all options are checked.
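The include/skip exploration with pruning can be sketched as below, using the example set A and m = 35 (the pruning step relies on all elements being non-negative):

```python
def subset_sums(nums, target):
    """Backtracking over include/skip decisions; prune a branch as soon
    as the running sum exceeds the target."""
    results = []

    def explore(i, current, total):
        if total == target:
            results.append(current[:])           # valid subset found
            return
        if i == len(nums) or total > target:
            return                               # dead end: prune this path
        current.append(nums[i])                  # branch 1: include nums[i]
        explore(i + 1, current, total + nums[i])
        current.pop()
        explore(i + 1, current, total)           # branch 2: skip nums[i]

    explore(0, [], 0)
    return results

# The example from the text: A = {5, 7, 10, 12, 15, 18, 20}, m = 35
print(subset_sums([5, 7, 10, 12, 15, 18, 20], 35))
# [[5, 10, 20], [5, 12, 18], [7, 10, 18], [15, 20]]
```

These are the same four subsets listed in the example, just in depth-first search order.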

What is Graph Coloring?


Graph coloring is a way of coloring the vertices (nodes) of a graph so that no two connected vertices have the
same color.
The goal is to color all the vertices using at most M different colors, where M is given.

This is called the M-Coloring Problem.

 The objective is to minimize the number of colors while coloring a graph.
 The smallest number of colors required to color a graph G is called the chromatic number of that graph.
 The graph coloring problem is NP-Complete.
 No efficient (polynomial-time) algorithm is known for it; in practice we mainly use Greedy and Backtracking algorithms.

Why is this problem important?


This type of problem appears in real life, such as:

 Coloring maps (no two neighboring countries should have the same color)

 Scheduling exams (no two connected subjects can have exams at the same time)

 Register allocation in computers

 Solving puzzles like Sudoku

Problem Statement
You are given:

 A graph with N vertices

 A number M (the maximum number of colors allowed)

You have to assign one color to each vertex such that:

 No two adjacent (connected) vertices have the same color

 You can use only colors from 1 to M

Backtracking Approach

We use backtracking to try different color combinations for the vertices.

Steps:
1. Start with the first vertex.

2. Try all colors (from 1 to M) for the current vertex.

3. For each color:

o Check if it is safe to assign this color (i.e., none of its adjacent vertices have the same color).

o If it is safe, assign the color and move to the next vertex.

4. If you reach a vertex where no color is safe:


o Go back to the previous vertex (this is called backtracking)

o Try a different color for the previous vertex.

5. Repeat the process until:

o All vertices are colored successfully (solution found)

o Or all options are tried and no valid coloring is found (no solution)
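The steps above can be sketched as a small backtracking routine. The adjacency matrix at the end is an assumed example (a 4-cycle plus one diagonal), not taken from these notes:

```python
def m_coloring(adj, m):
    """Backtracking m-coloring: colors[v] in 1..m, 0 means uncolored.
    Returns one valid assignment, or None if no coloring exists."""
    n = len(adj)
    colors = [0] * n

    def is_safe(v, c):
        # Color c is safe if no neighbor of v already has color c.
        return all(not adj[v][u] or colors[u] != c for u in range(n))

    def color(v):
        if v == n:
            return True                 # every vertex colored
        for c in range(1, m + 1):
            if is_safe(v, c):
                colors[v] = c           # try color c ...
                if color(v + 1):
                    return True
                colors[v] = 0           # ... and backtrack
        return False                    # no color works for vertex v

    return colors if color(0) else None

# Assumed example graph, colorable with 3 colors but not with 2
# (vertices 1, 2, 3 form a triangle):
adj = [[0, 1, 0, 1],
       [1, 0, 1, 1],
       [0, 1, 0, 1],
       [1, 1, 1, 0]]
print(m_coloring(adj, 3))   # [1, 2, 1, 3]
```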

Applications :

1. Map Coloring

2. Timetable Scheduling
3. Register Allocation (Compilers)

4. Frequency Assignment (Mobile Networks)


5. Sudoku Solver

6. Job Scheduling

Time Complexity:

 Worst-case: O(m^V)
Where:

o V = number of vertices

o m = number of colors
o This is because each vertex can be assigned any of the m colors.

Space Complexity:

 O(V) for storing color assignments to each vertex.

Hamiltonian Cycle

In an undirected graph, a Hamiltonian path is a path that visits each vertex exactly once, and a Hamiltonian cycle (or circuit) is a Hamiltonian path with an additional edge from the last vertex back to the first vertex.

In this problem, we will try to determine whether a graph contains a Hamiltonian cycle or not. And when a
Hamiltonian cycle is present, also print the cycle.

Output: The algorithm finds a Hamiltonian cycle of the given graph. For this case it is (0, 1, 2, 4, 3, 0). The graph may contain other Hamiltonian cycles as well. If a graph has no Hamiltonian cycle, the algorithm should return false.

• Algorithm

• isValid(v, k)

• Input − Vertex v and position k.

• Output − Checks whether placing v in the position k is valid or not.

• Begin

• if there is no edge between node(k-1) to v, then

• return false

• if v is already taken, then

• return false

• return true; //otherwise it is valid

• End
Time Complexity: O(N!), where N is the number of vertices.
Auxiliary Space: O(N), for the array holding the current path (plus the recursion stack).
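The isValid check and the backtracking search can be combined into a minimal sketch. The adjacency matrix below is an assumed graph that contains the cycle (0, 1, 2, 4, 3, 0) mentioned above:

```python
def hamiltonian_cycle(adj):
    """Backtracking: extend a path one vertex at a time; vertex v is valid
    at position k if it is adjacent to path[k-1] and not already used."""
    n = len(adj)
    path = [0]                          # fix vertex 0 as the starting point

    def is_valid(v, k):
        if not adj[path[k - 1]][v]:     # needs an edge from previous vertex
            return False
        return v not in path            # must not be taken already

    def extend(k):
        if k == n:                          # all vertices placed:
            return adj[path[-1]][path[0]]   # need an edge back to the start
        for v in range(1, n):
            if is_valid(v, k):
                path.append(v)
                if extend(k + 1):
                    return True
                path.pop()              # backtrack
        return False

    return path + [0] if extend(1) else None

# Assumed 5-vertex graph with edges 0-1, 0-3, 1-2, 2-3, 2-4, 3-4:
adj = [[0, 1, 0, 1, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 1],
       [1, 0, 1, 0, 1],
       [0, 0, 1, 1, 0]]
print(hamiltonian_cycle(adj))   # [0, 1, 2, 4, 3, 0]
```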

What is Branch and Bound?

Branch and Bound is a problem-solving technique used to solve optimization problems.


It is used when we want to find the best solution (minimum or maximum) among many possible solutions,
without checking all of them directly.

Why is it Called Branch and Bound?

1. Branch:

o This means splitting the problem into smaller subproblems (like making choices at each level).

o It's like building a tree of decisions.

2. Bound:

o This means calculating the best possible result (upper or lower bound) that can be achieved
from a given subproblem.

o If this bound is worse than the current best, we discard that branch.

How Branch and Bound Works (Simple Steps)

1. Start with the main problem.

o This is the root of the decision tree.

2. Create branches for every possible decision.

o For example, include or exclude an item, go to this city or that one.

3. Calculate the bound for each branch.

o This helps us know the best result that branch could give.
4. Compare bounds:

o If a branch’s bound is worse than the current best solution → prune (cut off) that branch.

o Otherwise, continue branching.

5. Repeat until you find the best possible answer.

Key Terms to Know

 Node: A subproblem or decision point in the solution tree.


 Bound: An estimate of the best answer we can get from this node.

 Pruning: Skipping a node because it can’t give a better result than what we already have.
Advantages

 Much faster than brute force.

 Smart pruning avoids unnecessary calculations.

 Guarantees the optimal answer (unlike greedy algorithms).

Disadvantages

 May still explore many nodes if bounds are not tight.

 Requires good bound functions to be efficient.

 Can use a lot of memory in big problems.

Job Assignment Problem Using branch and bound


The Job Assignment Problem is about assigning N workers to N jobs in such a way that each worker gets
exactly one job and each job is done by only one worker, with the total cost being minimum.

We are given a cost matrix where:


 Rows represent workers.

 Columns represent jobs.

 Each cell (i, j) shows the cost of assigning worker i to job j.

Finding the Optimal Solution Using Branch and Bound

In basic search methods like Breadth-First Search (BFS) and Depth-First Search (DFS), the algorithm follows a
fixed rule for choosing the next step. This means it does not check whether one step might be better or faster
than the other. These methods are called “blind” because they do not give any preference to options that might
lead to the correct answer more quickly. As a result, BFS and DFS may explore many useless or longer paths,
which makes the search slow.
To make the search faster and smarter, we use an approach called Branch and Bound. In this approach, we use
a cost function to decide which option (or node) to explore next. This cost function helps us pick the option
that seems the most promising or least expensive. So, instead of blindly following the next available option, we
always pick the one with the smallest estimated cost. This increases our chances of finding a good or optimal
solution more quickly.

There are two approaches to calculate the cost function:

1. Row-Based Cost

 For every worker, look at all the unassigned jobs.

 Pick the job with the lowest cost for that worker.
 Do this for all remaining workers.

 Add all those minimum values — that’s your cost estimate.


2. Column-Based Cost

 For every job, look at all the unassigned workers.

 Pick the worker with the lowest cost for that job.

 Do this for all remaining jobs.

 Add all those minimum values — that’s another cost estimate.

The first approach (row-based cost) is followed here.

Let's take the below example and try to calculate the promising cost when Job 2 is assigned to worker A.

Since Job 2 is assigned to worker A (marked in green), the cost becomes 2, and Job 2 and worker A become unavailable (marked in red).

Now we assign Job 3 to worker B, as it has the minimum cost among the unassigned jobs. The cost becomes 2 + 3 = 5, and Job 3 and worker B also become unavailable.

Finally, Job 1 gets assigned to worker C, as it has the minimum cost among the unassigned jobs, and Job 4 gets assigned to worker D, as it is the only job left. The total cost becomes 2 + 3 + 5 + 4 = 14.

The below diagram shows the complete search space, with the optimal solution path in green.
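The row-based promising-cost calculation can be sketched as follows. The cost matrix here is assumed (chosen to be consistent with the walkthrough's figures 2, 3, 5 and 4); only a few of its cells are actually shown in the notes:

```python
# Assumed cost matrix: rows are workers A-D, columns are jobs 1-4.
cost = [[9, 2, 7, 8],     # worker A
        [6, 4, 3, 7],     # worker B
        [5, 8, 1, 8],     # worker C
        [7, 6, 9, 4]]     # worker D

def promising_cost(assigned):
    """Row-based estimate: cost of the jobs already assigned, plus, for
    each remaining worker taken in order, the cheapest still-unassigned
    job (greedily, as in the walkthrough above)."""
    taken = set(assigned.values())
    total = sum(cost[w][j] for w, j in assigned.items())
    for w in range(len(cost)):
        if w in assigned:
            continue
        j = min((j for j in range(len(cost)) if j not in taken),
                key=lambda j: cost[w][j])
        total += cost[w][j]
        taken.add(j)
    return total

# Node where Job 2 (index 1) is assigned to worker A (index 0):
print(promising_cost({0: 1}))   # 2 + 3 + 5 + 4 = 14
```

A node whose promising cost already exceeds the best complete assignment found so far can be pruned from the search tree.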
Traveling Salesperson Problem Using B & B

The Traveling Salesperson Problem (TSP) is a famous problem where a person (like a salesperson) must visit
a number of cities exactly once and then return to the starting city. The goal is to find the shortest possible
path that visits all cities once and ends back at the beginning. It is a hard problem because as the number of
cities increases, the number of possible paths grows very fast.

How It Works
We start from the first city and try visiting other cities one by one. At each step, we calculate the minimum
possible cost (a lower bound) of completing the rest of the tour from that point. If the cost is more than the
current best, we stop exploring that path.

We use a priority queue (like a waiting list) to keep track of which path to explore next. The path with the
lowest estimated cost is explored first. This way, we are always trying the most promising path before moving
on to others.

To make this efficient, we often reduce the cost matrix (which shows the distances between cities). For
example, we subtract the smallest value from each row and column to lower the overall cost estimate.
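The reduction step described above might be sketched as follows; the 4-city distance matrix is made up for illustration, with INF marking the forbidden city-to-itself entries:

```python
# Row-and-column reduction of a TSP cost matrix, as used to compute a
# lower bound in branch and bound. Distances below are hypothetical.

INF = float('inf')

def reduce_matrix(m):
    """Subtract each row's minimum, then each column's minimum.
    Returns the reduced matrix and the total subtracted, which is a
    lower bound on the cost of any complete tour."""
    n = len(m)
    m = [row[:] for row in m]          # work on a copy
    total = 0
    for i in range(n):                 # row reduction
        r = min(m[i])
        if r not in (0, INF):
            total += r
            m[i] = [x - r if x != INF else INF for x in m[i]]
    for j in range(n):                 # column reduction
        c = min(m[i][j] for i in range(n))
        if c not in (0, INF):
            total += c
            for i in range(n):
                if m[i][j] != INF:
                    m[i][j] -= c
    return m, total

dist = [[INF, 10, 15, 20],
        [10, INF, 35, 25],
        [15, 35, INF, 30],
        [20, 25, 30, INF]]

reduced, bound = reduce_matrix(dist)
print(bound)  # -> 70, a lower bound on any tour through these 4 cities
```

After reduction every row and column contains at least one zero, and the amount subtracted (here 10 + 10 + 15 + 20 from rows, plus 5 + 10 from columns) is the root node's lower bound.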
YouTube Video Link: Learn from this video = https://www.youtube.com/watch?v=LMjDeWIkih4

0/1 Knapsack Problem Using Branch and Bound

You have:

 A knapsack with a maximum weight capacity (W).

 A set of n items, each with:

o A weight (w_i).

o A value (v_i).

 Goal: Choose a subset of items to include in the knapsack such that:


o The total weight does not exceed W.

o The total value is maximized.

 Constraint: Each item is either included or excluded (no partial selection).
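A minimal best-first branch-and-bound solver for this problem might look like the sketch below. The item data and capacity are made up, and the upper bound at each node comes from the fractional-knapsack (LP) relaxation, a common choice but not the only one:

```python
# Best-first branch and bound for 0/1 knapsack (illustrative sketch).
import heapq

def knapsack_bb(weights, values, W):
    n = len(weights)
    # Sort items by value/weight ratio so the fractional bound is greedy.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]

    def bound(level, cur_w, cur_v):
        """Upper bound: greedily fill remaining capacity, allowing a
        fraction of the first item that does not fit."""
        b, cap = cur_v, W - cur_w
        for i in range(level, n):
            if w[i] <= cap:
                cap -= w[i]
                b += v[i]
            else:
                b += v[i] * cap / w[i]
                break
        return b

    best = 0
    # heapq is a min-heap, so bounds are negated to pop the largest first.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, weight, value)
    while heap:
        neg_b, level, cur_w, cur_v = heapq.heappop(heap)
        if -neg_b <= best or level == n:
            continue                       # prune: cannot beat current best
        # Branch 1: include item `level` (only if it fits).
        if cur_w + w[level] <= W:
            new_v = cur_v + v[level]
            best = max(best, new_v)
            heapq.heappush(heap, (-bound(level + 1, cur_w + w[level], new_v),
                                  level + 1, cur_w + w[level], new_v))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, cur_w, cur_v),
                              level + 1, cur_w, cur_v))
    return best

print(knapsack_bb([2, 3, 4, 5], [3, 4, 5, 6], 5))  # -> 7 (items of weight 2 and 3)
```

Because the fractional bound is a valid upper bound on every subtree, pruning never discards the optimal solution; it only skips subtrees that provably cannot beat the best value found so far.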


Video Link For This Topic : https://www.youtube.com/watch?v=CwM-Mv0Bm4Y

Advantages of Branch and Bound for 0/1 Knapsack

 Efficient: Prunes branches that cannot lead to optimal solutions, reducing computation.

 Optimal: Guarantees the best solution (unlike greedy methods).

 Flexible: Can be adapted to variations of the problem.

Limitations

 Complexity: Still exponential in worst cases (O(2^n) nodes possible), though pruning reduces this.

 Memory: Requires storing nodes in a queue, which can grow large.

 Upper Bound Quality: A weak bound may lead to less pruning and more computation.

P, NP, CoNP, NP hard and NP complete | Complexity Classes

In complexity theory, problems are divided into classes known as Complexity Classes. A Complexity Class is a
set of problems with related complexity. With the help of complexity theory, we try to cover the following:

 Problems that cannot be solved by computers.

 Problems that can be efficiently solved (solved in Polynomial time) by computers.


 Problems for which no efficient solution (only exponential time algorithms) exist.

Types of Complexity Classes

This article discusses the following complexity classes:

P Class

The P in the P class stands for Polynomial Time. It is the collection of decision problems (problems with a
“yes” or “no” answer) that can be solved by a deterministic machine (our computers) in polynomial time.

Features:

 The solution to P problems is easy to find.

 P is often a class of computational problems that are solvable and tractable. Tractable means that the
problems can be solved in theory as well as in practice. But the problems that can be solved in theory
but not in practice are known as intractable.

Most of the coding problems that we solve fall into this category, like the ones below:

1. Calculating the greatest common divisor.

2. Finding a maximum matching.

3. Merge Sort
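For instance, Euclid's algorithm for the greatest common divisor runs in time polynomial in the size (number of digits) of its input, which is what places the problem in P; a minimal sketch:

```python
# Euclid's algorithm: a classic polynomial-time (class P) procedure.
def gcd(a, b):
    # The remainder shrinks rapidly (the pair at least halves every two
    # steps), so the number of iterations grows only with the number of
    # digits of the input, not with its value.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```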
NP Class

The NP in NP class stands for Non-deterministic Polynomial Time. It is the collection of decision problems
that can be solved by a non-deterministic machine (note that our computers are deterministic) in polynomial
time.
Features:

 Solutions to NP-class problems might be hard to find, since they are solved by a non-deterministic
machine, but they are easy to verify.

 Problems of NP can be verified by a deterministic machine in polynomial time.

Example:

Let us consider an example to better understand the NP class. Suppose there is a company having a total
of 1000 employees having unique employee IDs. Assume that there are 200 rooms available for them. A
selection of 200 employees must be paired together, but the CEO of the company has the data of some
employees who can’t work in the same room due to personal reasons.
This is an example of an NP problem: it is easy to check whether a given selection of 200 employees proposed
by a coworker is satisfactory, i.e. that no pair from the coworker's list appears on the list given by the
CEO. But generating such a selection from scratch seems so hard as to be completely impractical.

This shows that if someone provides us with a proposed solution, we can check every pair for correctness in
polynomial time. Thus, for an NP-class problem, a candidate answer can be verified in polynomial time.
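The "easy to verify" half of the example can be made concrete. The sketch below (with made-up employee IDs and conflict pairs) checks a proposed selection against the CEO's conflict list in polynomial time, even though finding a valid selection from scratch may not be tractable:

```python
# Polynomial-time verifier for the room-assignment example.
# Employee IDs and the conflict list below are illustrative.

def verify_selection(selection, conflicts):
    """Return True iff no conflicting pair appears in the selection."""
    chosen = set(selection)
    # One pass over the conflict list: polynomial in the input size.
    return not any(a in chosen and b in chosen for a, b in conflicts)

conflicts = [(1, 2), (3, 4)]                   # pairs that cannot share a room
print(verify_selection([1, 3, 5], conflicts))  # -> True: no conflict
print(verify_selection([1, 2, 5], conflicts))  # -> False: 1 and 2 clash
```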

This class contains many problems that one would like to be able to solve effectively:

1. Boolean Satisfiability Problem (SAT).

2. Hamiltonian Path Problem.

3. Graph coloring.

Co-NP Class

Co-NP stands for the complement of the NP class. It means that if the answer to a problem in Co-NP is “no”,
then there is a proof that can be checked in polynomial time.

Features:

 If a problem X is in NP, then its complement X’ is in Co-NP.

 For a problem to be in NP or Co-NP, there is no need to verify all answers in polynomial time; it is
enough to verify one particular answer (“yes” for NP, “no” for Co-NP) in polynomial time.

Some example problems for Co-NP are:

1. Checking whether a number is prime.

2. Integer Factorization.
NP-hard class

An NP-hard problem is at least as hard as the hardest problem in NP; it is the class of problems such that
every problem in NP reduces to them.

Features:

 Not all NP-hard problems are in NP.

 It can take a long time to check them. This means that even if a solution for an NP-hard problem is
given, it may take a long time to check whether it is right or not.

 A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time reduction from
L to A.

Some examples of NP-hard problems are:

1. Halting problem.

2. Quantified Boolean formulas.

3. No Hamiltonian cycle.

NP-complete class

A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the hardest problems
in NP.

Features:

 NP-complete problems are special in that any problem in the NP class can be transformed or reduced
into them in polynomial time.

 If one could solve an NP-complete problem in polynomial time, then one could also solve any NP
problem in polynomial time.

Some example problems include:

1. Hamiltonian Cycle.

2. Satisfiability.

3. Vertex cover.

Complexity Class Characteristic feature

P Easily solvable in polynomial time.

NP “Yes” answers can be checked in polynomial time.

Co-NP “No” answers can be checked in polynomial time.

NP-hard Not all NP-hard problems are in NP, and they can take a long time to check.

NP-complete A problem that is both in NP and NP-hard is NP-complete.
