The document discusses greedy algorithms, highlighting their simplicity and use in optimization problems, particularly the activity-selection problem. It explains the greedy choice property and optimal substructure, demonstrating how these properties allow for the development of efficient algorithms like those for the fractional knapsack and job sequencing problems. Additionally, it introduces Huffman coding and minimum spanning trees, detailing their applications and algorithms.

Uploaded by

kaushik07oct2004
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
2 views108 pages

Daa 2

The document discusses greedy algorithms, highlighting their simplicity and use in optimization problems, particularly the activity-selection problem. It explains the greedy choice property and optimal substructure, demonstrating how these properties allow for the development of efficient algorithms like those for the fractional knapsack and job sequencing problems. Additionally, it introduces Huffman coding and minimum spanning trees, detailing their applications and algorithms.

Uploaded by

kaushik07oct2004
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 108

Greedy Algorithms

• Similar to dynamic programming, but a simpler approach


• Also used for optimization problems

• Idea: When we have a choice to make, make the one that looks best right now
• Make a locally optimal choice in hope of getting a globally optimal solution

• Greedy algorithms don’t always yield an optimal solution
Greedy Algorithms
• A greedy algorithm always makes the choice that looks best at the
moment
• Everyday examples:
• Walking to school
• Playing a bridge hand
• The hope: a locally optimal choice will lead to a globally optimal solution
• For some problems, it works
• Dynamic programming can be overkill; greedy algorithms tend to be
easier to code
Activity-Selection Problem
• Problem: get your money’s worth out of a carnival
• Buy a wristband that lets you onto any ride
• Lots of rides, each starting and ending at different times
• Your goal: ride as many rides as possible
• An alternative goal, which we don’t solve here: maximize time spent on rides

• This is an activity selection problem


Activity-Selection Problem
• Input: a set S ={1, 2, …, n} of n activities
• si = Start time of activity i,
• fi = Finish time of activity i
• Activity i takes place in [si, fi )

• Aim: Find max-size subset A of mutually compatible activities


• Max number of activities, not max time spent in activities
• Activities i and j are compatible if intervals [si, fi ) and [sj, fj ) do not overlap,
i.e., either si ≥ fj or sj ≥ fi
Activity-Selection Problem - Example
Activity Selection: Optimal Substructure
Theorem: Let k be the activity with the earliest finish time in an optimal
solution A ⊆ S; then A − {k} is an optimal solution to the sub-problem Sk´ =
{i∈S: si ≥ fk }
• In words: once the earliest-finishing activity k is selected, the problem reduces to finding an
optimal solution for activity selection over the activities in S compatible with k

Proof (by contradiction):
• Suppose A − {k} is not optimal for Sk´; then there is a solution B´ to Sk´ with
|B´| > |A − {k}| = |A| − 1
• But then B = B´ ∪ {k} is compatible and
|B| = |B´| + 1 > |A|
• This contradicts the optimality of A. Q.E.D.
Activity Selection: Repeated Subproblems
• Consider a recursive algorithm that tries all possible
compatible subsets to find a maximal set, and notice
repeated subproblems:

                      S
                   1 ∈ A?
               yes /     \ no

            S´              S − {1}
          2 ∈ A?            2 ∈ A?
        yes /  \ no       yes /  \ no

       S´´   S´− {2}     S´´   S − {1,2}

Greedy Choice Property
• Repeated sub-problems and optimal substructure properties hold in
activity selection problem
• Greedy choice property:
a sequence of locally optimal (greedy) choices
⇒ an optimal solution
• Some problems (such as activity selection) have the greedy choice property, so we can use a greedy
algorithm for them.
• We could use a dynamic programming (or memoized) approach for them too, but it would be
cumbersome.
• If a problem does not have the greedy choice property, a greedy algorithm is not guaranteed to work
for it, but we can still use a dynamic programming approach.
• Assume (without loss of generality) f1 ≤ f2 ≤ … ≤ fn
– If not, sort the activities by finish time in nondecreasing order
Greedy Choice Property in Activity Selection
Activity Selection: A Greedy Algorithm
• So the actual algorithm is simple:
• Sort the activities by finish time
• Schedule the first activity
• Then schedule the next activity in
sorted list which starts after
previous activity finishes
• Repeat until no more activities
• The intuition is even simpler:
• Among the rides still available,
always pick the one that ends earliest
Activity Selection Problem: An Example
S={[1, 4 ), [5, 7 ), [2, 8 ), [3, 11 ), [8, 15 ), [13, 18 )}
⇒ A = {1, 2, 5}
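As a concrete sketch of this procedure, here is a minimal Python version run on the example above (the function and variable names are illustrative, not from the slides):

    def select_activities(activities):
        # activities: list of (start, finish) half-open intervals [s, f)
        order = sorted(range(len(activities)), key=lambda i: activities[i][1])
        chosen, last_finish = [], float("-inf")
        for i in order:                        # scan in order of finish time
            start, finish = activities[i]
            if start >= last_finish:           # compatible with the last chosen activity
                chosen.append(i + 1)           # report 1-based activity numbers, as above
                last_finish = finish
        return chosen

    S = [(1, 4), (5, 7), (2, 8), (3, 11), (8, 15), (13, 18)]
    print(select_activities(S))                # [1, 2, 5]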
Greedy vs Dynamic Programming
• Optimal substructure property exploited by both Greedy and DP strategies
• Greedy Choice Property: A sequence of locally optimal choices ⇒ an
optimal solution
– We make the choice that seems best at the moment
– Then solve the subproblem arising after the choice is made
• DP: We also make a choice/decision at each step, but the choice may
depend on the optimal solutions to subproblems
• Greedy: The choice may depend on the choices made so far, but it cannot
depend on any future choices or on the solutions to subproblems
• DP is a bottom-up strategy
• Greedy is a top-down strategy
• each greedy choice in the sequence iteratively reduces the problem to a similar but
smaller one
Proof of Correctness of Greedy Algorithms
• Examine a globally optimal solution
• Show that this solution can be modified so that
1) A greedy choice is made as the first step
2) This choice reduces the problem to a similar but smaller problem

• Apply induction to show that a greedy choice can be used at every step
• Showing (2) reduces the proof of correctness to proving that the
problem exhibits optimal substructure property
Elements of Greedy Strategy
• How can you judge whether a greedy algorithm will solve a particular optimization
problem?
• Two key ingredients
– Greedy choice property
– Optimal substructure property
• Greedy Choice Property: A globally optimal solution can be arrived at by making locally
optimal (greedy) choices
• In DP, we make a choice at each step, but the choice may depend on the solutions to
subproblems
• In Greedy Algorithms, we make the choice that seems best at that moment then solve the
subproblems arising after the choice is made
– The choice may depend on choices so far, but it cannot depend on any
future choice or on the solutions to subproblems
• DP solves the problem bottom-up
• Greedy usually progresses in a top-down fashion by making one greedy choice after
another reducing each given problem instance to a smaller one
Key Ingredients: Greedy Choice Property
• We must prove that a greedy choice at each step yields a globally optimal
solution
• The proof examines a globally optimal solution
• Shows that the solution can be modified so that a greedy choice made as the
first step reduces the problem to a similar but smaller subproblem
• Then induction is applied to show that a greedy choice can be used at each
step
• Hence, this induction proof reduces the proof of correctness to
demonstrating that an optimal solution must exhibit optimal substructure
property
Key Ingredients: Optimal Substructure
• A problem exhibits optimal substructure if an optimal solution to the
problem contains within it optimal solutions to subproblems

Example: Activity selection problem S


If an optimal solution A to S begins with activity 1, then the set of
activities
A´ = A-{1}
is an optimal solution to the activity selection problem
S´ = {i∈S: si ≥ f1 }
Key Ingredients: Optimal Substructure
• Optimal substructure property is exploited by both greedy and
dynamic programming strategies

• Hence one may
– Try to generate a dynamic programming solution to a problem when a greedy
strategy suffices
– Or, may mistakenly think that a greedy solution works when in fact a DP
solution is required

Example: knapsack problems. The fractional knapsack has the greedy choice property,
while the 0/1 knapsack does not and requires dynamic programming (both are treated in
later sections)
Questions
• 1. What is the greedy algorithm design paradigm, and how does it differ from other approaches like dynamic
programming?

• 2. Explain the key properties (e.g., greedy-choice property and optimal substructure) that a problem must
satisfy to be solved using a greedy algorithm.

• 3. Describe the process of solving the Activity Selection problem using a greedy algorithm. What is its time
complexity?

• 4. Provide an example of a problem where a greedy approach fails to produce the optimal solution. Why
does the greedy approach not work in this case?

• 5. Solve the Fractional Knapsack problem using the greedy algorithm and explain why the greedy approach
works optimally for this problem.

Introduction
• The Fractional Knapsack problem involves selecting items with given
weights and values to maximize total value within a weight limit. The
greedy algorithm solves this efficiently by considering the value-to-
weight ratio.
Problem Statement
• Given:
• - A set of items, each with a weight and value.
• - A maximum weight capacity of the knapsack.

• Objective:
• Maximize the total value of items in the knapsack while staying within the
weight limit.

• Constraints:
• - Items can be divided into fractions.
Greedy Approach
• 1. Calculate the value-to-weight ratio for each item.
• 2. Sort items in descending order of their value-to-weight ratio.
• 3. Add items to the knapsack starting from the highest ratio until the
capacity is filled.
• 4. If an item's weight exceeds the remaining capacity, take a fraction
of it.
Steps of the Algorithm
• 1. Input: Array of items with weights and values, knapsack capacity.
• 2. Calculate value-to-weight ratio for all items.
• 3. Sort items by ratio in descending order.
• 4. Initialize total value = 0.
• 5. For each item:
• - If the item's weight fits, add it to the knapsack.
• - Else, take the fraction that fits and stop.
• 6. Output the total value.
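A minimal Python sketch of these steps, assuming items are given as (weight, value) pairs (the data is taken from the walkthrough that follows; names are illustrative):

    def fractional_knapsack(items, capacity):
        # items: list of (weight, value) pairs; fractions of an item are allowed
        items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)  # steps 1-2
        total = 0.0
        for weight, value in items:            # step 3: highest ratio first
            if capacity <= 0:
                break
            take = min(weight, capacity)       # step 4: whole item, or the fraction that fits
            total += value * (take / weight)
            capacity -= take
        return total

    items = [(10, 60), (20, 100), (30, 120)]   # the walkthrough's items
    print(fractional_knapsack(items, 50))      # 240.0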
Example Walkthrough
• Items:
• - Item 1: Weight = 10, Value = 60
• - Item 2: Weight = 20, Value = 100
• - Item 3: Weight = 30, Value = 120

• Knapsack Capacity: 50
• Solution:
• 1. Ratios: Item 1 = 6, Item 2 = 5, Item 3 = 4.
• 2. Add Item 1 and Item 2 fully, take 2/3 of Item 3.
• 3. Total Value = 60 + 100 + 80 = 240.
Applications
• 1. Resource allocation problems.
• 2. Optimizing transportation costs.
• 3. Financial portfolio optimization.
• 4. Scheduling tasks with resource constraints.
Questions
• 1. What is the key difference between Fractional and 0/1 Knapsack
problems?
• 2. Why does the greedy approach work optimally for the Fractional
Knapsack problem?
• 3. How is the value-to-weight ratio used in solving the Fractional
Knapsack problem?
• 4. Explain the steps of the algorithm with an example.
• 5. What are some real-world applications of the Fractional Knapsack
problem?
Introduction
• The Job Sequencing Problem involves scheduling jobs with deadlines
and profits. The objective is to maximize total profit while ensuring
that jobs are completed within their deadlines. The greedy algorithm
solves this efficiently.
Problem Statement
• Given:
• - A set of jobs, each with a deadline and profit.
• - A timeline for scheduling jobs.

• Objective:
• Maximize the total profit by scheduling jobs within their deadlines.

• Constraints:
• - Each job takes exactly one unit of time.
• - No job can be scheduled after its deadline.
Greedy Approach
• 1. Sort jobs in descending order of their profits.
• 2. Initialize an array to keep track of free time slots.
• 3. For each job:
• - If a free slot exists at or before its deadline, schedule the job in the latest such slot.
• - Mark the slot as occupied.
• 4. Calculate the total profit of the scheduled jobs.
Steps of the Algorithm
• 1. Input: Array of jobs with deadlines and profits.
• 2. Sort jobs by profit in descending order.
• 3. Initialize a result array and a boolean array to track free slots.
• 4. For each job:
• - Find the latest free slot at or before its deadline.
• - If one exists, assign the job to that slot and update the result array.
• 5. Output the scheduled jobs and the total profit.
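A Python sketch of these steps using a simple O(n²) slot scan (job data taken from the walkthrough that follows; names are illustrative):

    def job_sequencing(jobs):
        # jobs: list of (deadline, profit); every job takes one unit of time
        jobs = sorted(jobs, key=lambda j: j[1], reverse=True)  # by profit, descending
        max_deadline = max(d for d, _ in jobs)
        slot_free = [True] * (max_deadline + 1)   # slot t covers the time unit ending at t
        total = 0
        for deadline, profit in jobs:
            for t in range(deadline, 0, -1):      # latest free slot at or before the deadline
                if slot_free[t]:
                    slot_free[t] = False
                    total += profit
                    break                         # jobs with no free slot are dropped
        return total

    jobs = [(4, 20), (1, 10), (1, 40), (1, 30)]   # the walkthrough's jobs
    print(job_sequencing(jobs))                   # 60: Job 3 in slot 1, Job 1 in slot 4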
Example Walkthrough
• Jobs:
• - Job 1: Deadline = 4, Profit = 20
• - Job 2: Deadline = 1, Profit = 10
• - Job 3: Deadline = 1, Profit = 40
• - Job 4: Deadline = 1, Profit = 30

• Solution:
• 1. Sort by profit: Job 3, Job 4, Job 1, Job 2.
• 2. Assign Job 3 to slot 1. Jobs 4 and 2 (both deadline 1) are rejected because
slot 1 is already taken; assign Job 1 to slot 4 (its latest free slot).
• 3. Total Profit = 40 + 20 = 60.
Applications
• 1. Scheduling tasks with deadlines and rewards.
• 2. Resource allocation in project management.
• 3. Timetable scheduling for schools or events.
• 4. Real-time job scheduling in operating systems.
Questions
• 1. What is the key difference between the Job Sequencing Problem
and the Knapsack Problem?
• 2. Why does the greedy approach work well for the Job Sequencing
Problem?
• 3. Explain the importance of sorting jobs by profit in descending
order.
• 4. Solve an example of the Job Sequencing Problem with at least five
jobs.
• 5. Discuss real-world scenarios where the Job Sequencing Problem
can be applied.
Introduction to Huffman Coding
• Huffman Coding is a lossless data compression algorithm.
• Developed by David A. Huffman in 1952.
• Used in compression formats like JPEG, MP3, and ZIP.
How Huffman Coding Works
• 1. Frequency Analysis: Determine the frequency of each character.
• 2. Binary Tree Construction: Build a tree with characters as leaf nodes.
• 3. Prefix Code Assignment: Assign shorter codes to more frequent
characters.
• 4. Data Compression: Replace characters with their binary codes.
The Huffman Coding Algorithm
• 1. Create a priority queue of all characters with their frequencies.
• 2. Extract two nodes with the smallest frequency.
• 3. Create a new internal node with these two nodes as children.
• 4. Insert the new node back into the priority queue.
• 5. Repeat until the queue has only one node (the root of the tree).
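A compact Python sketch of this loop, using heapq as the priority queue; the frequencies are those of the example below, and the tie-breaking counter is an implementation detail of this sketch:

    import heapq

    def huffman_codes(freqs):
        # freqs: dict mapping symbol -> frequency
        heap = [(f, i, sym) for i, (sym, f) in enumerate(sorted(freqs.items()))]
        heapq.heapify(heap)                       # step 1: priority queue of characters
        counter = len(heap)                       # tiebreaker so trees are never compared
        while len(heap) > 1:                      # steps 2-5: merge until one node remains
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, counter, (left, right)))
            counter += 1
        codes = {}
        def assign(node, code):
            if isinstance(node, tuple):           # internal node: 0 to the left, 1 to the right
                assign(node[0], code + "0")
                assign(node[1], code + "1")
            else:                                 # leaf: record the symbol's prefix code
                codes[node] = code or "0"
        assign(heap[0][2], "")
        return codes

    # The example below: yields F=0, C=100, D=101, A=1100, B=1101, E=111
    print(huffman_codes({"A": 5, "B": 9, "C": 12, "D": 13, "E": 16, "F": 45}))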
Example Walkthrough
• Given: Characters A(5), B(9), C(12), D(13), E(16), F(45)
• Step 1: Build a Huffman Tree.
• Step 2: Assign binary codes: A=1100, B=1101, etc.
• Step 3: Compress data using these codes.
Advantages and Limitations
• Advantages:
• - Efficient for text compression.
• - No loss of information (lossless).

• Limitations:
• - Requires building the tree for each dataset.
• - Not optimal for all types of data.
Applications of Huffman Coding
• 1. Compression of text files (e.g., ZIP, GZIP).
• 2. Multimedia compression (e.g., JPEG, MP3).
• 3. Network data transmission.
• 4. Efficient encoding in embedded systems.
Questions and Practice Problems
• 1. Explain how Huffman Coding ensures no two codes are prefixes of
each other.
• 2. Create a Huffman Tree for the characters: A(3), B(6), C(8), D(10).
• 3. Compare Huffman Coding with Run-Length Encoding.
• 4. What are the limitations of Huffman Coding for image
compression?
Introduction to Minimum Spanning Tree
(MST)
• 1. A Minimum Spanning Tree (MST) is a subset of the edges of a
connected, weighted graph.
• 2. It connects all vertices without any cycles and minimizes the total
edge weight.
• 3. Common algorithms for finding MST:
• - Prim's Algorithm
• - Kruskal's Algorithm
Prim's Algorithm
• 1. Starts with a single vertex and grows the MST one edge at a time.
• 2. At each step, it adds the smallest edge that connects a vertex in the
MST to a vertex outside it.
• 3. Suitable for dense graphs.
Steps of Prim's Algorithm
• 1. Initialize a set for MST vertices and an empty MST edge list.
• 2. Choose an arbitrary starting vertex.
• 3. Repeat until all vertices are included:
• - Select the smallest edge connecting an MST vertex to a non-MST
vertex.
• - Add the edge to the MST.
• 4. Output the MST and its total weight.
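A Python sketch of these steps using a lazy-deletion priority queue; the adjacency-list format and the three-vertex graph are assumptions of this sketch:

    import heapq

    def prim_mst(graph, start):
        # graph: dict vertex -> list of (weight, neighbor); assumed undirected
        in_mst = {start}
        frontier = [(w, start, v) for w, v in graph[start]]
        heapq.heapify(frontier)
        mst, total = [], 0
        while frontier and len(in_mst) < len(graph):
            w, u, v = heapq.heappop(frontier)     # smallest edge leaving the tree
            if v in in_mst:
                continue                          # stale entry: endpoint already added
            in_mst.add(v)
            mst.append((u, v, w))
            total += w
            for w2, x in graph[v]:                # new candidate edges from v
                if x not in in_mst:
                    heapq.heappush(frontier, (w2, v, x))
        return mst, total

    graph = {"A": [(3, "B"), (8, "C")],
             "B": [(3, "A"), (2, "C")],
             "C": [(8, "A"), (2, "B")]}
    print(prim_mst(graph, "A"))                   # ([('A', 'B', 3), ('B', 'C', 2)], 5)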
Kruskal's Algorithm
• 1. Sort all edges in non-decreasing order of weight.
• 2. Use a union-find data structure to detect cycles.
• 3. Add edges to the MST, ensuring no cycles are formed.
• 4. Suitable for sparse graphs.
Steps of Kruskal's Algorithm
• 1. Sort all edges by weight.
• 2. Initialize an empty MST edge list.
• 3. For each edge in sorted order:
• - If it does not form a cycle, add it to the MST.
• - Use union-find to check for cycles.
• 4. Output the MST and its total weight.
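A Python sketch of these steps with a minimal union-find (path halving); the integer-vertex edge-list format is an assumption of this sketch:

    def kruskal_mst(num_vertices, edges):
        # edges: list of (weight, u, v); vertices numbered 0 .. num_vertices-1
        parent = list(range(num_vertices))
        def find(x):                              # union-find root with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        mst, total = [], 0
        for w, u, v in sorted(edges):             # step 1: non-decreasing weight
            ru, rv = find(u), find(v)
            if ru != rv:                          # different components, so no cycle
                parent[ru] = rv                   # union the two components
                mst.append((u, v, w))
                total += w
        return mst, total

    edges = [(3, 0, 1), (8, 0, 2), (2, 1, 2)]
    print(kruskal_mst(3, edges))                  # ([(1, 2, 2), (0, 1, 3)], 5)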
Example Walkthrough
• Graph: Vertices A, B, C, D, and edges with weights.
• 1. Apply Prim's Algorithm starting from vertex A.
• 2. Apply Kruskal's Algorithm by sorting edges and using union-find.
• 3. Compare the resulting MSTs and total weights.
Applications
• 1. Network design (e.g., computer, telephone, or electrical networks).
• 2. Transportation and logistics optimization.
• 3. Circuit design in electronics.
• 4. Clustering algorithms in machine learning.
Questions
• 1. What is the key difference between Prim's and Kruskal's
algorithms?
• 2. Explain the role of the union-find data structure in Kruskal's
algorithm.
• 3. Solve an example graph using both Prim's and Kruskal's algorithms.
• 4. How do the algorithms handle graphs with equal edge weights?
• 5. Discuss the time complexity of Prim's and Kruskal's algorithms.
Introduction to Activity Selection Problem
• Definition of Activity Selection Problem
• Importance in Scheduling and Resource Management
Problem Statement
• Given n activities with start and finish times, select the maximum
number of activities that can be performed by a single person or
machine, assuming that a person can only work on one activity at a
time.
The Greedy Approach
• Sort activities based on their finish times
• Select the first activity and check for non-overlapping activities
• Iteratively add compatible activities to the solution set
Steps of the Algorithm
• 1. Sort the activities based on their finish times
• 2. Select the first activity from the sorted list
• 3. For each subsequent activity, check if it starts after the previous
selected activity ends
• 4. If yes, select the activity and repeat the process
Example with Explanation
• Given activities:
• Activity A: Start 1, End 3
• Activity B: Start 2, End 5
• Activity C: Start 4, End 6
• Optimal Solution: Select Activity A and Activity C
Applications
• 1. Job Scheduling
• 2. Event Management
• 3. Resource Allocation Problems
• 4. Project Planning
Questions and Practice Problems
• 1. What is the time complexity of the greedy algorithm for the Activity
Selection Problem?
• 2. Why is sorting by finish time crucial for the algorithm?
• 3. Solve for given activities with start and finish times: A(1,4), B(3,5),
C(0,6), D(5,7), E(8,9)
• 4. How does this algorithm differ from dynamic programming
approaches to scheduling problems?
Introduction to Dynamic Programming
• Definition and Importance of Dynamic Programming
• Real-World Use Cases
Key Characteristics
• Overlapping Subproblems
• Optimal Substructure
Steps to Solve Using DP
• 1. Define the Problem in Terms of Subproblems
• 2. Identify Base Cases
• 3. Recursive Relation
• 4. Memoization or Tabulation
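As a small illustration of step 4 on the Fibonacci practice problem below, here is the same recurrence solved both ways (a sketch, not from the slides):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib_memo(n):                # top-down: recursion plus a cache (memoization)
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    def fib_tab(n):                 # bottom-up: fill a table from the base cases (tabulation)
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_memo(10), fib_tab(10))   # 55 55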
Questions and Practice Problems
• 1. What is the difference between memoization and tabulation?
• 2. Provide examples of overlapping subproblems in popular DP
problems.
• 3. Solve a Fibonacci sequence using DP.
Introduction to 0/1 Knapsack Problem
• The 0/1 Knapsack Problem is a combinatorial optimization problem.
• Given weights and values of items, determine the maximum value
that can be achieved with a limited weight capacity.
• Items cannot be divided; either take an item or leave it.
Problem Statement
• Input:
• - n items with weights (w1, w2, ..., wn) and values (v1, v2, ..., vn).
• - A knapsack with a weight capacity W.

• Output:
• - The maximum value that can be obtained while staying within the
weight limit.
Greedy Approach and Its Limitations
• Greedy Approach:
• - Select items based on the highest value-to-weight ratio.
• - Works well for Fractional Knapsack Problem.

• Limitations:
• - Does not guarantee an optimal solution for the 0/1 Knapsack Problem.
• - Example: Item A (value=60, weight=10), Item B (value=100, weight=20),
Item C (value=120, weight=30), capacity 50 (the fractional walkthrough's data):
ratio-greedy takes A and B for value 160, but the optimal 0/1 choice is B and C
for value 220.
Dynamic Programming Solution
• 1. Define dp[i][w] as the maximum value for the first i items and
weight w.
• 2. Recurrence Relation:
• - If item i fits (wt[i] ≤ w): dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] +
val[i])
• - Otherwise: dp[i][w] = dp[i-1][w]
• 3. Base Case: dp[0][w] = 0 for all w.
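A Python sketch of this table-filling recurrence, checked against the walkthrough that follows (names are illustrative):

    def knapsack_01(weights, values, W):
        n = len(weights)
        # dp[i][w] = maximum value using the first i items with capacity w
        dp = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(W + 1):
                dp[i][w] = dp[i - 1][w]                     # exclude item i
                if weights[i - 1] <= w:                     # include item i if it fits
                    dp[i][w] = max(dp[i][w],
                                   dp[i - 1][w - weights[i - 1]] + values[i - 1])
        return dp[n][W]

    # Items A-D from the walkthrough below; the optimal value is 9 (Items B and C)
    print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))       # 9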
Example Walkthrough
• Items:
• 1. Item A: Weight=1, Value=1
• 2. Item B: Weight=3, Value=4
• 3. Item C: Weight=4, Value=5
• 4. Item D: Weight=5, Value=7

• Knapsack Capacity: W=7


• Solution: filling the DP table gives an optimal value of 9, achieved by taking
Items B and C (total weight 7).
Variations of the Knapsack Problem
• 1. Fractional Knapsack Problem
• 2. Unbounded Knapsack Problem
• 3. Multiple Knapsack Problem
• 4. 0/1 Knapsack with Constraints (e.g., group constraints)
Applications
• 1. Resource Allocation Problems
• 2. Project Selection
• 3. Investment Decisions
• 4. Loading Problems in Transportation and Logistics
Introduction to LCS Problem
• The LCS Problem is a classic dynamic programming problem.
• It involves finding the longest subsequence common to two
sequences.
• Applications in text comparison, bioinformatics, and data analysis.
Problem Definition
• Input:
• - Two sequences X[1..m] and Y[1..n].

• Output:
• - The length of the longest subsequence common to both sequences.
Key Properties of LCS
• 1. Subsequence: A sequence derived by deleting some or no elements
of a sequence.
• 2. Commonality: Elements must appear in the same order in both
sequences.
• 3. Optimal Substructure: The problem can be broken into smaller
subproblems.
Dynamic Programming Approach
• 1. Define dp[i][j] as the length of LCS of X[1..i] and Y[1..j].
• 2. Recurrence Relation:
• - If X[i] == Y[j]: dp[i][j] = dp[i-1][j-1] + 1
• - Else: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
• 3. Base Case: dp[0][j] = 0, dp[i][0] = 0.
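A Python sketch of this recurrence plus the traceback, using the strings from the example below:

    def lcs(X, Y):
        m, n = len(X), len(Y)
        # dp[i][j] = length of the LCS of X[:i] and Y[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        out, i, j = [], m, n                     # traceback to recover one LCS
        while i and j:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1])
                i, j = i - 1, j - 1
            elif dp[i - 1][j] >= dp[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    print(lcs("ABCBDAB", "BDCAB"))               # BCAB (length 4)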
Example Walkthrough
• Given Strings:
• - X = "ABCBDAB"
• - Y = "BDCAB"

• Steps:
• 1. Build a DP table to compute LCS length.
• 2. Traceback to reconstruct the LCS: "BCAB".
Applications of LCS
• 1. Text Comparison Tools (e.g., diff utilities).
• 2. Bioinformatics: DNA/RNA sequence alignment.
• 3. Data Synchronization.
• 4. Plagiarism Detection.
Questions and Practice Problems
• 1. Explain the difference between LCS and Longest Common
Substring.
• 2. Solve for X="AXYT", Y="AYZX".
• 3. Derive the time complexity of the DP solution.
• 4. How can space optimization be applied to the LCS problem?
Decrease and Conquer
Approach: Topological Sort [CO3]
An Efficient Algorithm for Directed Acyclic Graphs (DAGs)
Table of Contents
• 1. Introduction
• 2. Decrease and Conquer Approach
• 3. Topological Sort Definition
• 4. Algorithm Steps
• 5. Example Walkthrough
• 6. Applications
• 7. Questions
• 8. References and Web Resources
Introduction
• Topological Sort is a linear ordering of vertices in a Directed Acyclic
Graph (DAG) such that for every directed edge UV, vertex U comes
before vertex V. It is commonly used in scheduling problems and
dependency resolution.
Decrease and Conquer Approach
• 1. Decrease and Conquer is a problem-solving strategy.
• 2. It involves reducing the problem size by solving smaller instances.
• 3. Topological Sort applies this approach by removing vertices with no
dependencies.
Topological Sort Definition
• 1. Applicable only for Directed Acyclic Graphs (DAGs).
• 2. Ensures that for every directed edge UV, U appears before V in the
ordering.
• 3. Common Algorithms:
• - Kahn's Algorithm (BFS-based)
• - Depth-First Search (DFS-based)
Algorithm Steps
• 1. Identify all vertices with no incoming edges (indegree = 0).
• 2. Add these vertices to the ordering.
• 3. Remove these vertices and their outgoing edges from the graph.
• 4. Repeat until all vertices are processed.
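A Python sketch of these steps (Kahn's Algorithm), including the cycle check raised in the questions below; the adjacency-dict graph format is an assumption of this sketch:

    from collections import deque

    def topological_sort(graph):
        # graph: dict vertex -> list of successors; must be a DAG
        indegree = {v: 0 for v in graph}
        for v in graph:
            for w in graph[v]:
                indegree[w] += 1
        queue = deque(v for v in graph if indegree[v] == 0)  # no dependencies
        order = []
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:                   # "remove" v and its outgoing edges
                indegree[w] -= 1
                if indegree[w] == 0:
                    queue.append(w)
        if len(order) != len(graph):             # leftover vertices imply a cycle
            raise ValueError("not a DAG: graph contains a cycle")
        return order

    dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(topological_sort(dag))                 # ['A', 'B', 'C', 'D']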
Example Walkthrough
• 1. Input: A DAG with vertices and edges.
• 2. Step-by-step execution of Kahn's or DFS-based algorithm.
• 3. Output: A valid topological ordering of vertices.
Applications
• 1. Task scheduling with dependencies.
• 2. Course prerequisite ordering.
• 3. Compilation and build systems.
• 4. Detecting cycles in a graph.
Questions
• 1. What is the primary requirement for applying Topological Sort?
• 2. Explain the steps of Kahn's Algorithm for Topological Sort.
• 3. How can Topological Sort detect cycles in a graph?
• 4. Compare Kahn's Algorithm and DFS-based Topological Sort in terms
of complexity.
• 5. Provide a real-world application where Topological Sort is useful.
Introduction to Bellman-Ford Algorithm
• The Bellman-Ford Algorithm finds the shortest paths from a single
source vertex to all other vertices in a weighted graph.
• It works with graphs that have negative weight edges.
Algorithm Steps
• 1. Initialize distances from the source to all vertices as infinity, except
the source (0).
• 2. Relax all edges |V|-1 times.
• 3. Check for negative-weight cycles by iterating through all edges.
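A Python sketch of these three steps; the edge list is adapted from the walkthrough that follows, with A=0, B=1, C=2:

    def bellman_ford(num_vertices, edges, source):
        # edges: list of (u, v, weight) for directed edges u -> v
        dist = [float("inf")] * num_vertices
        dist[source] = 0                          # step 1
        for _ in range(num_vertices - 1):         # step 2: relax all edges |V|-1 times
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        for u, v, w in edges:                     # step 3: one more pass finds negative cycles
            if dist[u] + w < dist[v]:
                raise ValueError("graph contains a negative-weight cycle")
        return dist

    edges = [(0, 1, 4), (0, 2, 5), (1, 2, -6)]    # A=0, B=1, C=2
    print(bellman_ford(3, edges, 0))              # [0, 4, -2]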
Example Walkthrough
• Graph: A->B (4), A->C (5), B->C (-6), etc.
• Step-by-step relaxation and distance updates.
Advantages and Limitations
• Advantages:
• - Works with negative weight edges.
• - Provides useful detection of negative weight cycles.

• Limitations:
• - Slower than Dijkstra's Algorithm on graphs with non-negative weights.
• - Its O(V·E) running time is impractical for very large, dense graphs.
Questions and Practice Problems
• 1. Explain the difference between Bellman-Ford and Dijkstra's
Algorithm.
• 2. Detect if a given graph has a negative weight cycle using Bellman-
Ford.
• 3. Solve a shortest path problem using Bellman-Ford for a given
graph.
Introduction to Floyd-Warshall Algorithm
• The Floyd-Warshall Algorithm finds shortest paths between all pairs
of vertices in a weighted graph.
• It works on both directed and undirected graphs.
Algorithm Steps
• 1. Initialize a distance matrix with weights of edges and infinity for no
direct edge.
• 2. Iterate through all vertices as intermediate points.
• 3. Update distances using the formula: dist[i][j] = min(dist[i][j],
dist[i][k] + dist[k][j]).
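A Python sketch of this triple loop, run on a matrix built from the walkthrough's edges; representing missing edges as float('inf') is a convention of this sketch:

    def floyd_warshall(dist):
        # dist: n x n matrix with edge weights, 0 on the diagonal,
        # and float('inf') where there is no direct edge; updated in place
        n = len(dist)
        for k in range(n):                        # step 2: k as an intermediate vertex
            for i in range(n):
                for j in range(n):                # step 3: the update formula
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist

    INF = float("inf")
    d = [[0, 3, 8],                               # A->B (3), A->C (8)
         [INF, 0, 2],                             # B->C (2)
         [INF, INF, 0]]
    print(floyd_warshall(d))                      # row A becomes [0, 3, 5]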
Example Walkthrough
• Graph: A->B (3), A->C (8), B->C (2), etc.
• Step-by-step distance matrix updates for each intermediate vertex.
Advantages and Limitations
• Advantages:
• - Simple implementation using matrix operations.
• - Detects negative weight cycles.

• Limitations:
• - High time complexity: O(V^3).
• - Not suitable for sparse graphs with a large number of vertices.
Questions and Practice Problems
• 1. How does the Floyd-Warshall Algorithm detect negative weight
cycles?
• 2. Implement the algorithm for a given distance matrix.
• 3. Compare Floyd-Warshall with Dijkstra's Algorithm for all-pairs
shortest path.
Introduction to Optimal BST
• An Optimal Binary Search Tree minimizes the search cost for given
frequencies of keys.
Dynamic Programming Approach
• 1. Define cost[i][j] as the minimum search cost for keys i to j.
• 2. Recurrence Relation (minimizing over the choice of root k, i ≤ k ≤ j):
• cost[i][j] = min over k of (cost[i][k-1] + cost[k+1][j]) + sum(freq[i..j])
• 3. Base Case: cost[i][i] = freq[i].
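A Python sketch of this recurrence (it computes only the cost, not the tree itself), checked against the example below:

    def optimal_bst_cost(freq):
        # freq[i] = access frequency of the i-th key (keys assumed sorted)
        n = len(freq)
        cost = [[0] * n for _ in range(n)]
        for i in range(n):
            cost[i][i] = freq[i]                  # base case: a single key
        for length in range(2, n + 1):            # larger and larger key ranges
            for i in range(n - length + 1):
                j = i + length - 1
                total = sum(freq[i:j + 1])        # every key drops one level below the root
                cost[i][j] = total + min(
                    (cost[i][k - 1] if k > i else 0) +
                    (cost[k + 1][j] if k < j else 0)
                    for k in range(i, j + 1))     # try each key k as the root
        return cost[0][n - 1]

    # Keys {10, 20, 30} with frequencies {34, 8, 50} (the example below): cost 142
    print(optimal_bst_cost([34, 8, 50]))          # 142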
Example Walkthrough
• Given keys: {10, 20, 30}, Frequencies: {34, 8, 50}, calculate cost table
step-by-step.
Questions and Practice Problems
• 1. Explain the significance of frequency in constructing an Optimal
BST.
• 2. Solve for given keys and frequencies.
Introduction to Matrix Chain Multiplication
• Determine the most efficient way to multiply a sequence of matrices.
Dynamic Programming Approach
• 1. Define dp[i][j] as the minimum cost for multiplying matrices i to j.
• 2. Recurrence Relation (minimizing over the split point k, i ≤ k < j):
• dp[i][j] = min over k of (dp[i][k] + dp[k+1][j] + p[i-1]*p[k]*p[j])
• 3. Base Case: dp[i][i] = 0.
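A Python sketch of this recurrence, run on the walkthrough's dimensions:

    def matrix_chain_order(p):
        # p: dimension list; matrix i is p[i-1] x p[i], for i = 1..n
        n = len(p) - 1
        dp = [[0] * (n + 1) for _ in range(n + 1)]  # dp[i][i] = 0 (base case)
        for length in range(2, n + 1):              # chains of 2, 3, ... matrices
            for i in range(1, n - length + 2):
                j = i + length - 1
                dp[i][j] = min(dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                               for k in range(i, j))  # try each split point k
        return dp[1][n]

    # Dimensions from the walkthrough: 10x20, 20x30, 30x40
    print(matrix_chain_order([10, 20, 30, 40]))     # 18000, via (M1 x M2) x M3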
Example Walkthrough
• Matrix Dimensions: {10, 20, 30, 40}, calculate dp table step-by-step.
Questions and Practice Problems
• 1. Explain the role of parentheses placement in Matrix Chain
Multiplication.
• 2. Solve for a given set of dimensions.
Introduction to Coin Change Problem
• Determine the number of ways to make change for a given amount
using a set of denominations.
Dynamic Programming Approach
• 1. Define dp[i][j] as the number of ways to make amount j using the
first i coins.
• 2. Recurrence Relation (the second term applies only when j ≥ coin[i]):
• dp[i][j] = dp[i-1][j] + dp[i][j - coin[i]]
• 3. Base Case: dp[i][0] = 1.
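A Python sketch using the space-optimized 1-D form of this table (see question 2 below); the 2-D recurrence above collapses to a single row:

    def count_ways(coins, amount):
        # dp[j] = number of ways to make amount j; the 2-D table's rows collapse
        # into one because row i only reads row i-1 at the same or smaller j
        dp = [0] * (amount + 1)
        dp[0] = 1                                 # base case: one way to make 0
        for coin in coins:                        # outer loop over coins counts
            for j in range(coin, amount + 1):     # combinations, not permutations
                dp[j] += dp[j - coin]
        return dp[amount]

    print(count_ways([1, 2, 5], 5))   # 4: {5}, {2,2,1}, {2,1,1,1}, {1,1,1,1,1}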
Example Walkthrough
• Coins: {1, 2, 5}, Amount: 5, calculate dp table step-by-step.
Questions and Practice Problems
• 1. Solve for given coins and target amounts.
• 2. Discuss the space-optimized version of the solution.
Introduction to TSP
• Find the shortest possible route that visits each city once and returns
to the origin city.
Dynamic Programming Approach
• 1. Define dp[mask][i] as the minimum cost to visit cities in 'mask'
ending at city i.
• 2. Recurrence Relation (minimizing over the previous city j in mask, j ≠ i):
• dp[mask][i] = min over j of (dp[mask ^ (1<<i)][j] + cost[j][i])
• 3. Base Case: dp[1<<i][i] = cost[0][i].
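A Python sketch of this bitmask DP in the equivalent forward (push) form; the 4-city cost matrix is a classic illustration, not from the slides:

    def tsp(cost):
        # cost: n x n matrix; the tour starts and ends at city 0
        n = len(cost)
        INF = float("inf")
        # dp[mask][i] = min cost to leave city 0, visit every city in mask, end at i
        dp = [[INF] * n for _ in range(1 << n)]
        for i in range(1, n):
            dp[1 << i][i] = cost[0][i]            # base case: 0 -> i directly
        for mask in range(1 << n):
            for i in range(1, n):
                if dp[mask][i] == INF or not (mask >> i) & 1:
                    continue
                for j in range(1, n):             # extend the partial tour to city j
                    if (mask >> j) & 1:
                        continue
                    new_mask = mask | (1 << j)
                    dp[new_mask][j] = min(dp[new_mask][j], dp[mask][i] + cost[i][j])
        full = (1 << n) - 2                       # every city except the origin
        return min(dp[full][i] + cost[i][0] for i in range(1, n))

    cost = [[0, 10, 15, 20],
            [10, 0, 35, 25],
            [15, 35, 0, 30],
            [20, 25, 30, 0]]
    print(tsp(cost))                              # 80, via the tour 0 -> 1 -> 3 -> 2 -> 0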
Example Walkthrough
• Cities: {A, B, C, D}, calculate dp table for each subset of cities.
Questions and Practice Problems
• 1. Explain the significance of bitmasking in TSP.
• 2. Solve a given instance of TSP.
