DAA endsem notes

The document covers key concepts in dynamic programming, including the principle of optimality, overlapping subproblems, and optimal substructure, with applications such as the 0/1 Knapsack problem and the Coin Change problem. It also discusses the Bellman-Ford algorithm for finding shortest paths in graphs, including its ability to handle negative edge weights and detect negative cycles. Additionally, it touches on the Travelling Salesperson Problem (TSP) and explains why dynamic programming is impractical for larger instances of TSP, suggesting alternative approaches instead.


UNIT 3 DP

Principle of optimality
The optimal solution to an optimization problem can be constructed from
optimal solutions to its subproblems.

Equivalently: the sub-solutions contained in an optimal solution are themselves
optimal.

Overlapping subproblems
Optimal substructure

0/1 Knapsack

1. Problem Definition
 Given n items with:
o Weight: w_i

o Value: v_i

 Knapsack with a weight capacity W.


 Goal: Maximize the total value without exceeding W.
 Constraint: Items are either included (1) or excluded (0).

2. Dynamic Programming Approach


Recurrence Relation
Let F(i, j) be the maximum value obtainable from the first i items with
remaining capacity j:
 F(i, j) = F(i-1, j) if w_i > j (item i does not fit)
 F(i, j) = max(F(i-1, j), v_i + F(i-1, j - w_i)) otherwise

Base Cases
 F(0, j) = 0: No items → zero value.
 F(i, 0) = 0: Knapsack with zero capacity → zero value.

3. Implementation Approaches
Bottom-Up (Tabulation)
 Table Size: (n+1)×(W+1)
 Steps:
1. Build table row by row using the recurrence relation.
2. Final solution: F(n,W) (bottom-right cell).
 Time Complexity: O(nW)
 Space Complexity: O(nW)
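The tabulation steps above can be sketched as follows (a minimal C++ sketch; the item weights, values, and capacity in the example below are illustrative, not from the notes):

```cpp
#include <algorithm>
#include <vector>

// Bottom-up 0/1 knapsack: F[i][j] = best value using the first i items
// with capacity j. Items are 1-indexed in the table, 0-indexed in w and v.
int knapsack(const std::vector<int>& w, const std::vector<int>& v, int W) {
    int n = (int)w.size();
    std::vector<std::vector<int>> F(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int j = 0; j <= W; ++j) {
            if (w[i - 1] > j)                 // item i does not fit
                F[i][j] = F[i - 1][j];
            else                              // exclude vs. include item i
                F[i][j] = std::max(F[i - 1][j],
                                   v[i - 1] + F[i - 1][j - w[i - 1]]);
        }
    }
    return F[n][W];                           // bottom-right cell
}
```

For example, knapsack({2, 1, 3, 2}, {12, 10, 20, 15}, 5) fills a 5×6 table row by row and returns 37.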

Top-Down (Memoization)
 Use recursion with memoization:
o Before solving F(i, j), check if it has already been computed.

o If not, solve using the recurrence relation and store the result.

Recursive Function (table[][] initialized to -1):
int MFKnapsack(int i, int j) {
    if (i == 0 || j == 0) return 0;             // base cases: no items or no capacity
    if (table[i][j] != -1) return table[i][j];  // already computed

    if (w[i] > j)                               // item i does not fit
        table[i][j] = MFKnapsack(i-1, j);
    else                                        // exclude vs. include item i
        table[i][j] = max(MFKnapsack(i-1, j),
                          v[i] + MFKnapsack(i-1, j - w[i]));

    return table[i][j];
}

Coin Change Problem


Problem Definition
Given:
 A target amount n.
 A set of coin denominations D = {d1, d2, ..., dm}, where d1 = 1 to ensure
a solution always exists.
Goal:
1. The minimum number of coins needed to make the exact amount n.
2. The specific coin denominations used to achieve this minimum.
Constraints:
 Unlimited supply of each coin denomination.
 Only coins from the given set D can be used.

Dynamic Programming Solution


The Dynamic Programming (DP) approach builds the solution systematically
by solving smaller subproblems.
Key Idea
The principle of optimal substructure ensures that the minimum number of
coins for an amount i can be determined by:
 Considering all coins dj where dj <= i.
 Using the optimal solution for i - dj (previously computed).
 Adding 1 for including coin dj.

Recurrence Relation
Let F(i) represent the minimum number of coins needed to make the amount i.
1. Base Case:
F(0) = 0 (zero amount requires zero coins).
2. Recursive Case:
F(i) = min{F(i - dj)} + 1 for all dj <= i, where i > 0.
This checks all denominations dj that are less than or equal to i and adds 1 for
including coin dj in the solution.

Bottom-Up DP Algorithm
We iteratively compute F(1) through F(n) using the recurrence relation:
Steps:
1. Initialize a DP table F of size n+1:
o F[0] = 0

o F[i] = infinity for all i > 0.

2. For each amount i from 1 to n:


o For each coin dj in D:
 If dj <= i, update:
F[i] = min(F[i], F[i - dj] + 1).
3. Output:
o F[n] gives the minimum number of coins needed for amount n.

Time and Space Complexity


 Time Complexity: O(n*m), where n is the target amount and m is the
number of denominations.
 Space Complexity: O(n) (to store the DP table).

Bellman-Ford Algorithm
The Bellman-Ford algorithm computes the shortest paths from a single source
vertex to all other vertices in a weighted graph, even if there are negative
edge weights. If the graph contains a negative cycle reachable from the source,
shortest paths are not well-defined, but the algorithm can detect this case.

Key Steps of the Algorithm


1. Initialization
 The algorithm begins by initializing the distances:
o The distance to the source vertex is set to 0.

o The distances to all other vertices are set to infinity (or a very
large value).
2. Iteration
 The core of the algorithm involves iterating through the edges of the
graph multiple times.
 For each edge (u, v) with weight w(u, v):
o The algorithm checks whether the distance to vertex v can be
shortened:
dist[u] + w(u, v) < dist[v]
o If this condition is true, the distance to vertex v is updated:
dist[v] = dist[u] + w(u, v)
 This relaxation step over all edges is repeated V - 1 times (the number
of vertices minus one).

3. Negative Cycle Detection


 After the iterations, the algorithm checks for the presence of a negative
cycle:
o If any distance value changes during the last iteration, it signifies
the presence of a negative cycle.
o In this case, the shortest paths are not well-defined.
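The three steps above can be combined into a minimal C++ sketch (the Edge struct and the graph in the usage example are illustrative, not from the notes):

```cpp
#include <limits>
#include <vector>

struct Edge { int u, v, w; };   // directed edge u -> v with weight w

// Bellman-Ford from source s over V vertices; returns the distance array,
// or an empty vector if a negative cycle is reachable from s.
std::vector<long long> bellmanFord(int V, const std::vector<Edge>& edges, int s) {
    const long long INF = std::numeric_limits<long long>::max() / 2;
    std::vector<long long> dist(V, INF);
    dist[s] = 0;                                    // step 1: initialization
    for (int pass = 0; pass < V - 1; ++pass) {      // step 2: V-1 iterations
        bool changed = false;
        for (const Edge& e : edges)
            if (dist[e.u] != INF && dist[e.u] + e.w < dist[e.v]) {
                dist[e.v] = dist[e.u] + e.w;        // relax edge (u, v)
                changed = true;
            }
        if (!changed) break;                        // early termination
    }
    for (const Edge& e : edges)                     // step 3: one extra pass
        if (dist[e.u] != INF && dist[e.u] + e.w < dist[e.v])
            return {};                              // negative cycle detected
    return dist;
}
```

For example, with edges {0→1 (4), 0→2 (5), 1→2 (-2)} and source 0, the distance to vertex 2 is 2 via the negative edge.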

Efficiency and Time Complexity


 Time Complexity: The algorithm depends on the graph representation:
o Adjacency List:

 Each iteration takes O(E) time.


 Total complexity: O(VE).
o Adjacency Matrix:

 Each iteration takes O(V²) time.


 Total complexity: O(V³).
 Optimizations exist to improve performance in practice:
1. Early Termination:

 If no distance values change during an iteration, the


algorithm can stop early.
2. Queue-Based Optimization:

 Use a queue to process only the vertices whose distances


have changed.
 This reduces unnecessary computations.
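The queue-based optimization described above can be sketched as follows (a minimal C++ sketch, assuming the graph has no negative cycle; adjacency lists hold (vertex, weight) pairs):

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Queue-based Bellman-Ford: only vertices whose distance changed are
// re-examined, which skips many useless relaxation attempts in practice.
std::vector<long long> bellmanFordQueue(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    int V = (int)adj.size();
    const long long INF = std::numeric_limits<long long>::max() / 2;
    std::vector<long long> dist(V, INF);
    std::vector<bool> inQueue(V, false);
    std::queue<int> q;
    dist[s] = 0;
    q.push(s);
    inQueue[s] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        inQueue[u] = false;
        for (auto [v, w] : adj[u])          // relax edges leaving u
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (!inQueue[v]) {          // enqueue v at most once
                    q.push(v);
                    inQueue[v] = true;
                }
            }
    }
    return dist;
}
```

Note that this variant loops forever on a negative cycle unless a per-vertex relaxation counter is added, so the assumption above matters.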
Correctness and Applications
 The algorithm’s correctness is proven in various studies.
 It can also be extended to:
o Detect negative cycles.

o Retrieve the shortest paths along with their lengths.

Summary
The Bellman-Ford algorithm is a reliable method for single-source shortest path
problems in graphs with negative edge weights. While it is slower than Dijkstra's
algorithm for graphs without negative weights, its ability to detect negative
cycles makes it more versatile.

Feature: Handling of Negative Edge Weights
 Bellman-Ford: Can handle negative edge weights, which makes it suitable for
 graphs where some edges may represent a cost or loss. It can also detect
 negative cycles.
 Dijkstra's: Cannot handle negative edge weights and may produce incorrect
 results or fail in the presence of such edges.

Feature: Core Principle
 Bellman-Ford: Iterative improvement. Starts with an initial estimate of
 distances and iteratively refines them by checking all edges repeatedly.
 Dijkstra's: Greedy approach. Builds a shortest-path tree by always choosing
 the next closest vertex to the source.

Feature: Time Complexity
 Bellman-Ford: O(VE), where V is the number of vertices and E is the number
 of edges.
 Dijkstra's: O(E log V) using a min-priority queue, though this can vary
 depending on the implementation of the priority queue.

Feature: Best Use Case
 Bellman-Ford: Suitable for sparse graphs (where E is much smaller than V²)
 and when negative edge weights may be present.
 Dijkstra's: Typically more efficient for dense graphs and graphs with
 non-negative edge weights.

Feature: Iteration Strategy
 Bellman-Ford: Iterates through all edges multiple times (up to V-1 times).
 Dijkstra's: Expands a subtree by selecting the closest fringe vertex and
 updates the neighbors of that vertex.

Feature: Data Structures
 Bellman-Ford: Can use adjacency lists for O(VE) complexity or adjacency
 matrices for O(V³) complexity.
 Dijkstra's: Typically uses a min-priority queue (e.g., a min-heap) to
 efficiently select the next closest vertex, along with adjacency lists or
 adjacency matrices.

Feature: Optimisations
 Bellman-Ford: Can terminate early if no distances change in an iteration,
 and can use a queue to process only vertices whose distances have changed.
 Dijkstra's: Can be optimized by how the min-priority queue is implemented
 (e.g., binary heap vs. Fibonacci heap).

Feature: Early Termination
 Bellman-Ford: Can terminate early if a pass over the edges does not change
 any of the distances.
 Dijkstra's: Terminates when all vertices reachable from the source have
 been added to the shortest-path tree.

Feature: Negative Cycle Detection
 Bellman-Ford: Can detect negative cycles: if distance values are still
 changing after V-1 iterations, a negative cycle is present.
 Dijkstra's: No built-in way to detect negative cycles; the presence of
 negative weights makes the correctness of the algorithm not guaranteed.


Multistage graph
Here the graph consists of different stages, with edges only between
consecutive stages.
F(2, 3) → the minimum cost to reach the final node from vertex 3 in stage 2.
TSP
The Travelling Salesperson Problem (TSP) is a classic optimisation problem in
computer science. The goal is to find the shortest possible route that
visits each city exactly once and returns to the origin city. It's easily
visualised as a weighted graph where cities are vertices and distances between
cities are edge weights. The problem is to find the shortest Hamiltonian circuit (a
cycle that visits every vertex exactly once) in this graph. The TSP has numerous
real-world applications, including route planning, logistics, and even more
modern applications like circuit board fabrication and genetic engineering.
Why dynamic programming is unsuitable for solving TSP:
While dynamic programming offers efficient solutions for certain problems by
breaking them into overlapping subproblems and storing their solutions, it's
highly impractical for the TSP, especially for larger instances. Here's why:
 Exponential Complexity: A naive dynamic programming approach to
the TSP would involve exploring all possible permutations of the cities. The
number of such permutations is n!, where n is the number of cities. This
factorial growth makes the computational cost explode exponentially as n
increases, rendering the approach infeasible for anything beyond a very
small number of cities. Even an improved dynamic programming
approach, which reduces complexity to O(n² 2ⁿ), remains impractical for
realistically sized problems.
 Space Requirements: The dynamic programming solution requires
storing intermediate results for all possible subsets of cities and partial
tours. The space complexity for this approach is O(n * 2ⁿ), which is also
exponential and becomes unmanageable for even moderately sized TSP
instances. This immense storage requirement outweighs any potential
speedup the dynamic programming approach might offer.
 Alternative Approaches: Because of its inherent complexity, the TSP is
typically tackled using approximation algorithms or heuristics. These
methods might not find the absolute best solution, but they provide
reasonably good solutions in a reasonable amount of time for large
problem instances. Branch and bound is another technique which can
sometimes effectively handle larger instances than a pure dynamic
programming approach.
In short, while dynamic programming is a powerful tool, its exponential time and
space complexity makes it unsuitable for efficiently solving the TSP except for
extremely small instances. Approximation algorithms and heuristics are far
more practical for handling realistic-sized TSP problems.
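For very small instances, the improved O(n² 2ⁿ) dynamic program mentioned above (the Held-Karp recurrence) can still be written down; a minimal C++ sketch follows (the 4-city distance matrix in the example is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Held-Karp DP for TSP: dp[S][j] = length of the shortest path that starts
// at city 0, visits exactly the cities in bitmask S, and ends at city j.
// Time O(n^2 * 2^n), space O(n * 2^n) -- feasible only for very small n.
int heldKarp(const std::vector<std::vector<int>>& dist) {
    int n = (int)dist.size();
    const int INF = 1 << 28;
    std::vector<std::vector<int>> dp(1 << n, std::vector<int>(n, INF));
    dp[1][0] = 0;                                    // start at city 0 alone
    for (int S = 1; S < (1 << n); ++S) {
        if (!(S & 1)) continue;                      // every tour contains city 0
        for (int j = 0; j < n; ++j) {
            if (!((S >> j) & 1) || dp[S][j] == INF) continue;
            for (int k = 0; k < n; ++k)
                if (!((S >> k) & 1))                 // extend the path to city k
                    dp[S | (1 << k)][k] = std::min(dp[S | (1 << k)][k],
                                                   dp[S][j] + dist[j][k]);
        }
    }
    int best = INF;                                  // close the tour back to 0
    for (int j = 1; j < n; ++j)
        best = std::min(best, dp[(1 << n) - 1][j] + dist[j][0]);
    return best;
}
```

Already at n = 20 the dp table holds about 20 million entries, which illustrates the exponential space cost discussed above.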
unit 4
Backtracking is not an optimization method; it is a method to find all
possible solutions.
M-colouring decision problem → can we colour the graph using the given m
colours?
M-colouring optimization problem → what is the minimum number of colours
needed to colour the graph (the chromatic number problem)?
Hamiltonian cycle → visit all vertices exactly once and return to the start.
The graph may be directed or undirected.
The graph should be connected.
Duplicate cycles are not allowed.
Not possible in cases: