DAA_MOD_3

The document outlines various algorithms and dynamic programming techniques for solving graph-related problems, including detecting negative-weight cycles, minimizing coin usage, computing reflexive and transitive closures, and finding the longest path in a directed acyclic graph. It also discusses the Traveling Salesman Problem and methods for partitioning integers and cities to minimize distances. Each problem is accompanied by a detailed algorithm, pseudocode, and time complexity analysis.


Module 3

4. Modify the all-pairs shortest path algorithm to detect negative-weight cycles in a weighted graph.

5. For the money change problem with n coins of values v₁ = 1, v₂, ..., vₙ, and a target
amount X:

a. Design a dynamic programming algorithm to minimize the number of coins used.

b. Analyze the time and space complexities of the algorithm.

7. For a directed graph G = (V, E) with adjacency matrix M, devise a dynamic programming
algorithm to compute the reflexive and transitive closure M*, where M*(u, v) = 1 if u = v or
there is a path from u to v, and 0 otherwise. (Hint: Adapt the Floyd-Warshall algorithm.)

8. For a directed graph G = (V, E), compute the n × n distance matrix D, where D(u, v) is the
shortest path length (in number of edges) from u to v, or 0 if no path exists. (Hint: Modify the
Floyd-Warshall algorithm.)

9. For a directed acyclic graph (DAG) with vertices u and v (u with zero indegree, v with zero
outdegree):

a. Develop a dynamic programming algorithm to find the longest path from u to v .

b. Determine the time complexity of the algorithm.

10. Given n cities and their intercity distances, design an algorithm to find the minimum tour length
for a cycle visiting each city exactly once (except the starting city). Analyze its time complexity and
determine if it can be solved in O(n^2 2^n) .

4. Detecting Negative-Weight Cycles in All-Pairs Shortest Path

Problem:
Modify an all-pairs shortest path algorithm (e.g., Floyd-Warshall) to detect if the graph contains a
negative-weight cycle.

Solution (Floyd-Warshall Modification):

1. Initialize the distance matrix D where:

o D[i][j] = 0 if i == j,

o D[i][j] = weight(i, j) if edge (i, j) exists,

o D[i][j] = ∞ otherwise.

2. Run Floyd-Warshall to compute shortest paths:

o For each intermediate vertex k, update D[i][j] = min(D[i][j], D[i][k] + D[k][j]).

3. Check for Negative Cycles:

o After the algorithm terminates, if any D[i][i] < 0, then vertex i lies on a negative-weight cycle.

Time Complexity:

• O(V³) (same as standard Floyd-Warshall).
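
A minimal Python sketch of this modification, assuming an n × n weight matrix W with float('inf') for missing edges (the function name is illustrative), could be:

def floyd_warshall_negative_cycle(W):
    n = len(W)
    D = [row[:] for row in W]
    for i in range(n):
        D[i][i] = min(D[i][i], 0)          # distance from a vertex to itself starts at 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    # A negative diagonal entry means vertex i lies on a negative-weight cycle.
    has_negative_cycle = any(D[i][i] < 0 for i in range(n))
    return D, has_negative_cycle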

5. Coin Change Problem (Minimum Coins)

(a) Dynamic Programming Algorithm

Problem:
Given coins v₁=1, v₂, ..., vₙ and amount X, find the minimum number of coins to make X.

DP Approach:

1. Define dp[x] = min coins needed to make amount x.

2. Base Case: dp[0] = 0.

3. Transition:
For each x from 1 to X,
dp[x] = min(dp[x], dp[x - vᵢ] + 1) for all coins vᵢ ≤ x.

Pseudocode:

def min_coins(coins, X):
    dp = [float('inf')] * (X + 1)
    dp[0] = 0
    for x in range(1, X + 1):
        for coin in coins:
            if coin <= x:
                dp[x] = min(dp[x], dp[x - coin] + 1)
    return dp[X] if dp[X] != float('inf') else -1

(b) Time & Space Complexity

• Time: O(n × X) (n = number of coins, X = target amount).

• Space: O(X) (can be optimized further if needed).

7. Reflexive and Transitive Closure (Modified Floyd-Warshall)

Problem:
Given adjacency matrix M of a directed graph, compute M* where M*[u][v] = 1 if u = v or a path
exists from u to v.

Solution (Floyd-Warshall Adaptation):

1. Initialize M* as M with diagonal M*[i][i] = 1 (reflexive).


2. For each intermediate vertex k, update:

o M*[i][j] = M*[i][j] OR (M*[i][k] AND M*[k][j]).

Pseudocode:

def transitive_closure(M):
    n = len(M)
    closure = [row[:] for row in M]
    for i in range(n):
        closure[i][i] = 1                  # reflexive: M*[i][i] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                closure[i][j] = closure[i][j] or (closure[i][k] and closure[k][j])
    return closure

Time Complexity:

• O(V³) (same as Floyd-Warshall).

8. Shortest Path Distance Matrix (Modified Floyd-Warshall)

Problem:
Compute an n × n matrix D where D[u][v] = shortest path length (edge count) from u to v, or 0 if no
path exists.

Solution:

1. Initialize D where:

o D[i][j] = 1 if edge (i, j) exists,

o D[i][j] = 0 if i == j,

o D[i][j] = ∞ otherwise.

2. Run Floyd-Warshall, updating D[i][j] = min(D[i][j], D[i][k] + D[k][j]).

3. Post-process: Replace ∞ with 0.

Time Complexity:

• O(V³).
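
A Python sketch of this computation, assuming a 0/1 adjacency matrix M (the function name is illustrative), might be:

def edge_count_distances(M):
    n = len(M)
    INF = float('inf')
    D = [[0 if i == j else (1 if M[i][j] else INF) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    # Post-process: report unreachable pairs as 0, as required.
    return [[0 if D[i][j] == INF else D[i][j] for j in range(n)] for i in range(n)]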

9. Longest Path in a DAG (Dynamic Programming)

(a) DP Algorithm
Problem:
Find the longest path from u to v in a DAG where u and v have zero indegree and outdegree,
respectively.

Solution:

1. Topological Sort the DAG.

2. Initialize dp where dp[i] = longest path to i.

3. Process vertices in topological order:

o For each neighbor j of i, update dp[j] = max(dp[j], dp[i] + weight(i, j)).

Pseudocode:


def longest_path_dag(graph, u, v):
    # graph[node]: list of (neighbor, weight) pairs; vertices are numbered 0 .. len(graph) - 1.
    # topological_sort is assumed to return the vertices in topological order
    # (a possible sketch is given after this block).
    top_order = topological_sort(graph)
    dp = [float('-inf')] * len(graph)      # dp[i] = longest path length from u to i
    dp[u] = 0
    for node in top_order:
        for neighbor, weight in graph[node]:
            if dp[neighbor] < dp[node] + weight:
                dp[neighbor] = dp[node] + weight
    return dp[v]
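
The topological_sort helper used above is not specified in the text; one possible DFS-based sketch, assuming the same adjacency-list format, is:

def topological_sort(graph):
    # graph[node]: list of (neighbor, weight) pairs.
    n = len(graph)
    visited = [False] * n
    order = []

    def dfs(node):
        visited[node] = True
        for neighbor, _ in graph[node]:
            if not visited[neighbor]:
                dfs(neighbor)
        order.append(node)              # postorder: node is appended after its successors

    for node in range(n):
        if not visited[node]:
            dfs(node)
    return order[::-1]                  # reverse postorder = a topological order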

(b) Time Complexity

• O(V + E) (topological sort + DP traversal).

10. Traveling Salesman Problem (TSP) with DP

Problem:
Find the minimum tour length visiting all n cities exactly once (cycle).

DP Approach (Held-Karp Algorithm):

1. State: dp[mask][u] = minimum cost to visit exactly the cities in mask, ending at city u.

2. Base Case: dp[1][0] = 0 (fix city 0 as the starting city; the cost of a cycle does not depend on where it starts).

3. Transition:
For each subset mask containing u and each city v not in mask, update:
o dp[mask | (1 << v)][v] = min(dp[mask | (1 << v)][v], dp[mask][u] + dist[u][v]).

4. Answer: min over u ≠ 0 of dp[full mask][u] + dist[u][0], which closes the cycle back to city 0.

Time Complexity:

• O(n² 2ⁿ) (feasible for small n, e.g., n ≤ 20).

Pseudocode:


def tsp(dist):
    # dist: n x n matrix of intercity distances; city 0 is fixed as the start of the tour.
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                           # only city 0 visited, currently at city 0
    for mask in range(1 << n):
        for u in range(n):
            if mask & (1 << u) and dp[mask][u] < INF:
                for v in range(n):
                    if not (mask & (1 << v)):
                        new_mask = mask | (1 << v)
                        dp[new_mask][v] = min(dp[new_mask][v], dp[mask][u] + dist[u][v])
    full = (1 << n) - 1
    return min(dp[full][u] + dist[u][0] for u in range(1, n))   # close the cycle (assumes n >= 2)

Summary:

• TSP is NP-Hard, but DP reduces brute-force O(n!) to O(n² 2ⁿ).

• Space: O(n 2ⁿ) (can be optimized with bitmask tricks).

Final Summary Table

Problem Algorithm Time Complexity

4. Negative Cycle Detection Floyd-Warshall O(V³)

5. Coin Change (Min Coins) DP (1D) O(n × X)


Problem Algorithm Time Complexity

7. Reflexive-Transitive Closure Modified Floyd-Warshall O(V³)

8. Shortest Path Distance Matrix Modified Floyd-Warshall O(V³)

9. Longest Path in DAG DP + Topological Sort O(V + E)

10. TSP (Minimum Tour) DP (Held-Karp) O(n² 2ⁿ)

Practical Applications

1. A motorist seeks the shortest route from Kolkata to Mumbai using a roadmap with distances
between adjacent intersections. Describe how to find the shortest route.

2. Given n integers in the range 0 … K, partition them into two subsets to minimize the absolute
difference |S1 − S2|, where S1 and S2 are the sums of the subsets.

3. For an ordered sequence of n cities with pairwise distances, partition the cities into two
subsequences (not necessarily contiguous) such that person A visits the first subsequence and
person B visits the second, both in order, starting at their respective first cities. Minimize the total
distance traveled by A and B.

1. Shortest Route from Kolkata to Mumbai

Problem:
Find the shortest path between two cities (Kolkata and Mumbai) on a roadmap where distances
between adjacent intersections are known.

Solution Approach (Dijkstra's Algorithm):

• Model the roadmap as a weighted graph:

o Vertices (V): Represent intersections or cities.

o Edges (E): Represent roads between intersections, weighted by distance.

• Algorithm Steps:

1. Initialize distances:

▪ Set distance to Kolkata (source) as 0.

▪ Set distance to all other intersections as ∞.

2. Priority Queue (Min-Heap):


▪ Start with Kolkata, then explore neighboring intersections.

3. Relaxation Step:

▪ For each neighbor, update the distance if a shorter path is found.

4. Termination:

▪ Once Mumbai (destination) is dequeued, return the shortest path.

Example:
Suppose the roadmap has intersections:

• Kolkata → A (5 km), Kolkata → B (10 km)

• A → C (3 km), B → C (1 km), C → Mumbai (7 km)

• Shortest Path: Kolkata → A → C → Mumbai (Total = 5 + 3 + 7 = 15 km).

Time Complexity:

• O((V + E) log V) (with a priority queue).
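
A minimal Python sketch of this approach, assuming the roadmap is an adjacency-list dictionary graph[u] = [(v, distance), ...] (the function and intersection names are illustrative, taken from the example above), could be:

import heapq

def shortest_route(graph, source, target):
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]                     # min-heap of (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d                       # destination dequeued: its distance is final
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w            # relaxation step
                heapq.heappush(pq, (dist[v], v))
    return float('inf')                    # destination unreachable

# roads = {'Kolkata': [('A', 5), ('B', 10)], 'A': [('C', 3)],
#          'B': [('C', 1)], 'C': [('Mumbai', 7)], 'Mumbai': []}
# shortest_route(roads, 'Kolkata', 'Mumbai')  -> 15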

2. Partition Integers into Two Subsets with Minimum Absolute Difference

Problem:
Given n integers in [0, K], partition them into two subsets S1 and S2 such that |sum(S1) - sum(S2)| is
minimized.

Solution Approach (Dynamic Programming - Subset Sum Problem):

• Key Insight:

o The optimal partition will have sums as close as possible to total_sum / 2.

• DP Approach:

1. Compute the total sum S = sum(arr).

2. Find the largest subset sum ≤ S/2 using DP.

3. The minimum difference is S - 2 * subset_sum.

Algorithm Steps:

1. DP Table Definition:

o dp[i][j] = True if a subset of the first i elements sums to j.

2. Base Case:

o dp[0][0] = True (empty subset sums to 0).

3. Fill DP Table:

o For each number, update possible sums.

4. Find Closest to S/2:

o The largest j ≤ S/2 where dp[n][j] = True.


Example:

• Input: [1, 6, 11, 5]

• Total Sum = 23 → Target = 11

• Best Subset: [1, 6, 5] (Sum = 12)

• Minimum Difference = |12 - 11| = 1

Time Complexity:

• O(n × S), where S ≤ n·K is the total sum of the inputs (pseudo-polynomial); O(n²K) in the worst case.
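
A compact Python sketch of this DP, using a space-optimized 1-D table of reachable sums in place of the 2-D table described above (the function name is illustrative), might look like:

def min_partition_difference(arr):
    total = sum(arr)
    half = total // 2
    reachable = [False] * (half + 1)       # reachable[j]: some subset of arr sums to exactly j
    reachable[0] = True
    for num in arr:
        for j in range(half, num - 1, -1): # iterate downward so each number is used at most once
            if reachable[j - num]:
                reachable[j] = True
    best = max(j for j in range(half + 1) if reachable[j])   # largest subset sum <= total / 2
    return total - 2 * best

# min_partition_difference([1, 6, 11, 5])  -> 1, matching the example above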

3. Partitioning Ordered Cities Between Two Travelers for Minimum Total Distance

Problem Restatement

Given an ordered sequence of n cities with pairwise distances, partition them into two
subsequences (A and B) such that:

1. Person A visits cities in subsequence A in order.

2. Person B visits cities in subsequence B in order.

3. Both start at their first assigned city.

4. Goal: Minimize the sum of distances traveled by A and B.

Key Observations

1. Order Matters: Since cities must be visited in their original sequence, we cannot rearrange
them.

2. Partitioning Choices: At each step, we decide whether the next city goes to A or B.

3. Optimal Substructure: The best assignment for the first k cities depends on the last cities
assigned to A and B.

Dynamic Programming Approach

We define a DP table where:

• State: dp[i][j] = minimum total distance when:

o The last city assigned to A is i.

o The last city assigned to B is j.

• Transition: For the next city k = max(i, j) + 1:

o If assigned to A, update dp[k][j] = min(dp[k][j], dp[i][j] + dist(i, k)).

o If assigned to B, update dp[i][k] = min(dp[i][k], dp[i][j] + dist(j, k)).

• Initialization: dp[0][0] = 0 (both start at a dummy city before the first).

• Final Answer: The minimum value among all dp[i][n] and dp[n][j].

Algorithm Steps
1. Initialize dp as a 2D array filled with ∞, except dp[0][0] = 0.

2. Iterate over all possible (i, j) where i ≤ n and j ≤ n:

o Let k = max(i, j) + 1 (next city to assign).

o If k > n, skip (all cities assigned).

o Option 1: Assign city k to A.

▪ Update dp[k][j] = min(dp[k][j], dp[i][j] + dist(i, k)).

o Option 2: Assign city k to B.

▪ Update dp[i][k] = min(dp[i][k], dp[i][j] + dist(j, k)).

3. Result: The minimum among dp[i][n] and dp[n][j] for all i, j.

Example

Cities: [A, B, C]
Distances: A→B = 2, B→C = 3
Possible Partitions:

1. A = [A, B, C], B = []

o Distance: A→B→C = 2 + 3 = 5 (B travels 0) → Total = 5.

2. A = [A, C], B = [B]

o A: A→C directly (this requires the distance A→C; assume it is 5 in this example).

o B: B (distance = 0) → Total = 5 + 0 = 5.

3. A = [A], B = [B, C]

o A: A (distance = 0).

o B: B→C = 3 → Total = 0 + 3 = 3 (optimal).

DP Table Snapshots:

• dp[0][0] = 0 (start; no cities assigned).

• Assign city A to traveler A: dp[1][0] = 0 (A starts at city A, no travel yet).

• From (1, 0), assign city B to traveler B: dp[1][2] = 0 (B starts at city B).

• From (1, 2), assign city C to traveler B: dp[1][3] = 0 + dist(B, C) = 3.

• From (1, 2), assign city C to traveler A: dp[3][2] = 0 + dist(A, C) = 5.

• Optimal: dp[1][3] = 3 (A = [A], B = [B, C]).

Time & Space Complexity

• Time: O(n²) (filling an n × n DP table).

• Space: O(n²) (can be optimized to O(n) with careful updates).

Pseudocode

def min_total_distance(cities, dist):
    # cities: the ordered list of n cities; dist: (n + 1) x (n + 1) matrix of distances
    # between cities, indexed from 1 (index 0 is the dummy "not started" state).
    n = len(cities)
    dp = [[float('inf')] * (n + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(n + 1):
        for j in range(n + 1):
            if dp[i][j] == float('inf'):
                continue                   # unreachable state
            k = max(i, j) + 1              # next city to assign
            if k > n:
                continue                   # all cities assigned
            # Option 1: assign city k to traveler A
            if i != 0:
                dp[k][j] = min(dp[k][j], dp[i][j] + dist[i][k])
            else:
                dp[k][j] = min(dp[k][j], dp[i][j])   # A starts at city k (no travel yet)
            # Option 2: assign city k to traveler B
            if j != 0:
                dp[i][k] = min(dp[i][k], dp[i][j] + dist[j][k])
            else:
                dp[i][k] = min(dp[i][k], dp[i][j])   # B starts at city k (no travel yet)
    # All cities are assigned exactly when one traveler's last city is n.
    return min(min(dp[i][n] for i in range(n + 1)), min(dp[n][j] for j in range(n + 1)))

Key Takeaways

• The problem is solved by DP with state tracking of the last cities assigned to A and B.

• Order preservation restricts transitions, making the DP approach efficient.

• Initialization and transitions must handle cases where a traveler hasn’t started yet
(i=0 or j=0).
Dijkstra’s Algorithm and Variants

1. Provide an example of a directed graph with negative-weight edges where Dijkstra’s algorithm fails
to produce correct shortest paths.

3. Explain Dijkstra’s algorithm with an example and answer:

a. Does it work for graphs with negative weights?

b. Is it applicable to both directed and undirected graphs?

5. For a weighted directed graph where edges leaving the source s may have negative weights, but
other edges are non-negative and no negative-weight cycles exist, prove that Dijkstra’s algorithm
correctly finds shortest paths from s.

6. Evaluate the efficiency of Dijkstra’s algorithm using two different graph representations.

7. Prove that the time complexity of Dijkstra’s algorithm is

8. Find the shortest paths from vertex 1 to all other vertices in a given graph using Dijkstra’s
algorithm.

9. Assign edge weights to a graph such that Dijkstra’s algorithm fails to find the correct shortest path
from s to t .

11. Prove that if all edge weights in a graph are positive, any subset of edges connecting all vertices
with minimum total weight is a tree. Provide an example showing this does not hold for non-positive
weights.

12. Write a greedy algorithm to compute the shortest path.

13. Develop a simplified version of Dijkstra’s algorithm that computes only the distances from a given
vertex to all others, using a weight matrix representation.

14. Describe Dijkstra’s algorithm for finding shortest paths with a suitable example.

Solutions :

1. Provide an example of a directed graph with negative-weight edges where Dijkstra’s algorithm
fails to produce correct shortest paths.

Example: Let the graph have 3 vertices: A, B, C

Edges:

• A → B (weight = 2)

• A → C (weight = 3)

• C → B (weight = –2)

Expected shortest path from A to B: A → C → B = 3 + (–2) = 1

Dijkstra's Output: B has the smallest tentative distance (2 < 3), so it is settled first and never re-examined; when C is processed later, the improvement via C → B is ignored, and Dijkstra reports 2.
Failure Reason: Dijkstra assumes that once a node is settled with the smallest cost, it cannot get better, a condition violated with negative weights.

3. Explain Dijkstra’s algorithm with an example.

Steps:

1. Set distance to source = 0; all others = ∞.

2. Use min-priority queue to select unvisited vertex with smallest distance.

3. For each neighbor, update distance if a shorter path is found.

Example: Graph:

• A → B (2), A → C (5), B → C (1)

Start: A
Distances: A=0, B=∞, C=∞
Step 1: Visit A → B=2, C=5
Step 2: Visit B → C=2+1=3 (better than 5)
Result: A→B=2, A→C=3

(a) Does Dijkstra’s algorithm work for graphs with negative weights?
No. Dijkstra’s algorithm assumes that once a node is visited, its shortest path is final, which fails with
negative weights.

(b) Is it applicable to both directed and undirected graphs?


Yes. As long as edge weights are non-negative, Dijkstra works for both graph types.

5. For a graph where only edges leaving the source may be negative (others non-negative), prove
Dijkstra still works.

Proof Sketch:

• Any path from s contains at most one potentially negative edge: its first edge, which leaves s. All remaining edges are non-negative.

• In the standard correctness argument, when a vertex u is extracted, let y be the first unsettled vertex on a true shortest s-to-u path; its predecessor is already settled and has been relaxed, so d[y] = δ(s, y).

• The portion of that path from y to u contains no edge leaving s and hence has non-negative weight, so δ(s, y) ≤ δ(s, u). Combined with d[u] ≤ d[y] (u was extracted first) and δ(s, u) ≤ d[u], this gives d[u] = δ(s, u).

• Hence every settled vertex receives its correct shortest distance, exactly as in the all-non-negative case.


6. Evaluate the efficiency of Dijkstra’s algorithm using two graph representations.

Representation Time Complexity

Adjacency Matrix O(V²)

Adjacency List + Min-Heap (Priority Queue) O((V + E) log V)

• Matrix is simple but inefficient for sparse graphs.

• List + heap is better for large/sparse graphs.

7. Prove that the time complexity of Dijkstra's algorithm is:

• O((V + E) log V) using a binary heap with an adjacency list: each of the V extract-min operations and each of the at most E decrease-key operations costs O(log V).

• O(V²) using an adjacency matrix without a heap: selecting the minimum unvisited vertex is an O(V) scan repeated V times, and relaxing all edges takes O(V²) in total.

• O(V log V + E) using a Fibonacci heap (theoretical): extract-min costs O(log V) amortized (V times) and decrease-key costs O(1) amortized (at most E times).

8. Find shortest paths from vertex 1 to all others using Dijkstra’s algorithm.

The answer depends on the specific graph provided; apply the step-by-step procedure illustrated in questions 3 and 14 to that graph, starting from vertex 1.

9. Assign edge weights so that Dijkstra fails to find the correct shortest path from s to t.

Use:

• s → a with weight = 2

• s → b with weight = 3

• b → a with weight = –2

• a → t with weight = 2

Correct path: s → b → a → t = 3 – 2 + 2 = 3
Dijkstra's path: a is settled first with distance 2 and its edge a → t is relaxed, giving t = 4; the improvement of a via b (3 – 2 = 1) is found only after a is settled, so t is never updated, and Dijkstra reports s → a → t = 2 + 2 = 4.

Fails because the negative edge b → a improves the distance to a only after a has already been settled.

11. Prove that if all edge weights are positive, any subset of edges connecting all vertices with
minimum total weight is a tree.

Proof Sketch:

• Suppose a minimum-total-weight subset of edges connects all n vertices but contains a cycle.

• Removing any one edge of that cycle keeps the subgraph connected and, because all weights are positive, strictly decreases the total weight, contradicting minimality.

• Hence the subset is connected and acyclic, i.e., a spanning tree (in fact, a Minimum Spanning Tree, MST).

Counterexample (non-positive weights):
If some edge has weight 0 or negative, adding it to a spanning tree creates a cycle without increasing (or while decreasing) the total weight, so a minimum-weight connecting subset need not be a tree.

12. Write a greedy algorithm to compute the shortest path.

Greedy Algorithm (similar to Dijkstra): repeatedly select the unvisited vertex with the smallest tentative distance, finalize it, and relax all of its outgoing edges; repeat until every vertex is finalized. A weight-matrix sketch of exactly this greedy strategy is given under question 13 below.

13. Develop a simplified Dijkstra version using weight matrix.
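
One possible O(V²) Python sketch, assuming W is an n × n weight matrix with float('inf') marking absent edges (the function name is illustrative):

def dijkstra_matrix(W, source):
    n = len(W)
    dist = [float('inf')] * n
    dist[source] = 0
    visited = [False] * n
    for _ in range(n):
        # Pick the unvisited vertex with the smallest tentative distance (O(V) scan).
        u = min((v for v in range(n) if not visited[v]), key=lambda v: dist[v])
        if dist[u] == float('inf'):
            break                          # remaining vertices are unreachable
        visited[u] = True
        for v in range(n):                 # relax all edges leaving u
            if not visited[v] and dist[u] + W[u][v] < dist[v]:
                dist[v] = dist[u] + W[u][v]
    return dist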

14. Describe Dijkstra’s algorithm for finding shortest paths with a suitable example.
Steps of Dijkstra’s Algorithm:

1. Initialize:

o Set the distance to the source node as 0 and to all other nodes as ∞.

o Mark all nodes as unvisited. Set the source as the current node.

2. Visit the current node:

o For the current node, consider all unvisited neighbors.

o Calculate their tentative distances through the current node.

o If the calculated distance of a node is less than the known distance, update it.

3. Mark the current node as visited:

o A visited node will not be checked again.

4. Select the unvisited node with the smallest tentative distance as the new current node,
and repeat from step 2.

5. Repeat until all nodes are visited or the smallest tentative distance among the unvisited
nodes is ∞.

Example:

Consider a graph with 5 vertices (A, B, C, D, E): A is joined to B, C, and D, and each of B, C, and D is joined to E.

Edge weights:

• A–B: 4

• A–C: 1

• A–D: 2

• B–E: 5

• C–E: 3

• D–E: 3
Goal: Find the shortest path from A to all other nodes.

Step-by-Step Execution:

Node Tentative Distance Previous Node

A 0 —

B ∞ —

C ∞ —

D ∞ —

E ∞ —

1. Start at A:

• A → B = 4 → update B = 4

• A → C = 1 → update C = 1

• A → D = 2 → update D = 2

Updated distances:

• B: 4, C: 1, D: 2, E: ∞

2. Next node with smallest distance: C

• C → E = 3 → A to C to E = 1 + 3 = 4 → update E = 4

3. Next: D

• D → E = 3 → A to D to E = 2 + 3 = 5 (not better than existing E=4)

4. Next: B

• B → E = 5 → A to B to E = 4 + 5 = 9 (not better)

Final Shortest Distances from A:

• A: 0

• B: 4

• C: 1

• D: 2

• E: 4

Shortest Paths:

• A to B: A → B
• A to C: A → C

• A to D: A → D

• A to E: A → C → E

Time Complexity:

• O(V²) for simple implementations using arrays.

• O((V + E) log V) with a priority queue (using a min-heap).

Disjoint Set Data Structures

2. Starting with eight singleton sets {1}, {2}, ..., {8}, each represented by a single-node tree, apply
the union-find algorithm to compute the tree representation after the operations:

a. union(1, 2)

b. union(3, 4)

c. union(5, 6)

d. union(7, 8)

e. union(1, 3)

f. union(5, 7)

g. find(1)

h. find(5)

i. find(7).

4. Prove that for a node x in a tree resulting from union-find operations with union by rank and path
compression, the rank of x is an upper bound on its height.

5. Show that if all unions occur before finds in a union-find sequence with union by rank and path
compression, the time complexity is optimized.

7. Prove that the weighting rule guarantees a tree height of O(log n).

8. For a tree resulting from union-find with union by rank and path compression, prove that the ranks
of nodes on the path from a leaf to the root form a strictly increasing sequence.

9. For merging two sets S1 and S2 represented by linked lists L1 and L2 :

a. Improve the representation to achieve O(1) time for find operations.

b. Compute the worst-case cost of n-1 unions.


10. If the set with more elements provides the name for the merged set, show that the total cost
of n − 1 unions is O(n log n).

11. a. Define the union-find algorithm.

b. Explain union-find problems.

c. Describe the fast disjoint set union algorithm.

d. Compute the connected components of a graph with vertices {a, b, c, d, e, f, g, h, i, j, k} and
edges processed in the order: (d, i), (f, k), (g, i), (b, g), (a, h), (i, j), (d, k), (b, j), (d, f), (g, j), (a, e), (i, d),
using disjoint set operations.

2. Apply union-find to sets {1}, {2}, ..., {8} with these operations:

Operations:
a. union(1, 2)
b. union(3, 4)
c. union(5, 6)
d. union(7, 8)
e. union(1, 3)
f. union(5, 7)
g. find(1)
h. find(5)
i. find(7)

Initial sets:
{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}

After each union:

• a. {1,2}

• b. {3,4}

• c. {5,6}

• d. {7,8}

• e. {1,2,3,4}

• f. {5,6,7,8}

Now we have two disjoint sets:

• {1,2,3,4}

• {5,6,7,8}

find(1): → root is 1 or representative of {1,2,3,4}


find(5): → root is 5 or representative of {5,6,7,8}
find(7): → root is 5 (after union(5,7))

4. Prove that the rank of a node x is an upper bound on its height (with union by rank and path
compression).

• Without path compression, union by rank keeps rank(x) equal to the height of the tree rooted at x: attaching a lower-rank root under a higher-rank root leaves the taller root's height unchanged, and merging two equal-rank roots increases the new root's rank and height by exactly one.

• Path compression can only shorten paths, so it never increases any node's height, while ranks are never decreased.

• Hence height(x) ≤ rank(x) holds at all times.

5. Show that if all unions occur before finds, time complexity is optimized.

• After the last union, the roots of the trees never change again.

• The first find that passes through a node redirects it (by path compression) to point directly at its root; since roots are now fixed, that pointer never needs to change again.

• Hence each node is redirected at most once over all finds, so a sequence of m operations costs O(m + n) total time, i.e., amortized O(1) per operation, even better than the general O(α(n)) bound.

7. Prove that the weighting rule guarantees a tree height of O(log n).

• With the weighting rule, the root of the tree with fewer nodes is always attached under the root of the larger tree.

• A node's depth increases (by one) only when its tree is the smaller one in a union, and in that case the size of the tree containing it at least doubles.

• Since the size can double at most log₂ n times, every node has depth at most log₂ n, ensuring logarithmic height.

8. Prove that ranks of nodes on the path from a leaf to the root are strictly increasing.

• Union by rank maintains the invariant rank(x) < rank(parent(x)) for every non-root node x: a lower-rank root is attached below a higher-rank root, and when two equal-rank roots are merged the new parent's rank is incremented.

• Path compression only replaces a node's parent with one of its ancestors, whose rank is at least as large, so the invariant is preserved.

• Therefore, moving from any leaf toward the root, the ranks form a strictly increasing sequence.

9. For linked list representation of sets:

(a) Improve to O(1) find:

• Maintain a pointer from each node to the representative (head of list).

• Find becomes O(1), but union becomes more expensive.

(b) Compute worst-case cost of n−1 unions:

• If the smaller list is always appended to the larger (updating the representative pointer of every moved element):

o An element changes lists only when its list at least doubles in size, so each element is moved at most log₂ n times.

o Summed over all n elements, the worst-case total cost of the n − 1 unions is O(n log n).

10. If the set with more elements gives its name to the merged set in each union, prove the total cost of n − 1 unions is O(n log n).

• This is the weighted-union rule: only the elements of the smaller set are renamed.

• Each time an element is renamed, the size of the set containing it at least doubles, so any single element is renamed at most log₂ n times.

• Summing over all n elements, the total work for the n − 1 unions is O(n log n).

11.

(a) Define union-find algorithm:

• A data structure to manage a partition of elements into disjoint sets, supporting:

o find(x): returns the representative of the set containing x

o union(x, y): merges the sets containing x and y

(b) Explain union-find problems:

• Track connected components in graphs.

• Detect cycles.

• Used in Kruskal’s MST, image segmentation, etc.

(c) Describe fast disjoint set union algorithm:

• Combines:

o Union by rank/size

o Path compression

• Achieves nearly constant amortized time per operation: O(α(n))
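
A minimal Python sketch of this fast structure (union by rank with path compression; the class and method names are illustrative) could be:

class DisjointSet:
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.rank = {x: 0 for x in elements}

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])   # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                                        # already in the same set
        if self.rank[rx] < self.rank[ry]:                 # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

# For part (d) below, one would create DisjointSet("abcdefghijk") and call union(u, v)
# for each edge in the given order; find() then yields the components listed in (d).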

(d) Compute connected components from the given edges:

Vertices: {a, b, c, d, e, f, g, h, i, j, k}
Edges:
(d, i), (f, k), (g, i), (b, g), (a, h), (i, j),
(d, k), (b, j), (d, f), (g, j), (a, e), (i, d)

Apply union-find:

Let’s step through unions:

1. union(d, i)

2. union(f, k)

3. union(g, i) → connects g, d, i

4. union(b, g) → adds b

5. union(a, h)

6. union(i, j)

7. union(d, k) → f, k added to d, i, g

8. union(b, j) – redundant (b and j are already in the same set via g and i)

9. union(d, f) – redundant
10. union(g, j) – redundant

11. union(a, e)

12. union(i, d) – redundant

Final connected components:

• {a, e, h}

• {b, d, f, g, i, j, k}

• {c} (not involved)

Network Flow Problems

6. Prove that a flow in a network with infinite-capacity edges (from multiple sources/sinks
conversion) has finite value if the original edges have finite capacity.

7. Convert a flow network with multiple sources (producing p_i units) and sinks (consuming qj units)
into a single-source, single-sink network to find a flow satisfying the constraints.

9. Show that the Ford-Fulkerson algorithm remains correct if the residual network disallows edges
into the source s .

10. For two flows f and f′, determine if the augmented flow f + f′ satisfies flow conservation and
capacity constraints.

11. Show how to compute a maximum flow using at most |E| augmenting paths.

12. Determine the edge connectivity of an undirected graph by running maximum-flow algorithms on
at most |V| flow networks, each with O(V) vertices and O(E) edges.

13. For a flow network with an edge (v, s) having flow f(v, s) = 1, prove there exists another
flow f′ with f′(v, s) = 0 and equal value. Provide an O(E)-time algorithm to compute f′.

14. Modify the capacities of a flow network G to create a new network G' where any minimum cut
in G' has the smallest number of edges among minimum cuts in G .

6. Prove that a flow in a network with infinite-capacity edges (from multiple sources/sinks
conversion) has finite value if original edges have finite capacity.

Proof Sketch:

• When converting a network with multiple sources/sinks to a single-source/sink network:

o Add edges from a super source to original sources and from sinks to a super sink.

o Assign infinite capacity to these new edges.


• Although the new edges have infinite capacity, any flow entering an original source through its new edge must leave that source through original edges, whose capacities are finite.

• The value of the flow equals the total flow on the new source edges, which is therefore bounded by the total capacity of the original edges leaving the sources.

• So, the overall maximum flow is finite, since it is determined by the original finite-capacity edges.

9. Show that Ford-Fulkerson remains correct if the residual network disallows edges into the source s.

Yes, it remains correct.

• Residual network edges are used to adjust flows.

• Disallowing back edges into source doesn’t affect correctness, because:

o Source never receives flow (only sends).

o Only edges out of source are used in augmenting paths.

10. For two flows f and f′, determine if the augmented flow f + f′ satisfies conservation and capacity.

• Flow conservation holds: at every internal vertex the inflow and outflow of f + f′ are the sums of the corresponding quantities for f and f′, each of which balances.

• Capacity constraints may be violated: (f + f′)(u, v) = f(u, v) + f′(u, v) can exceed c(u, v), so f + f′ is not necessarily a feasible flow.

12. Determine the edge connectivity of an undirected graph using at most |V| flow networks.

Steps:

1. The edge connectivity is the minimum, over pairs of vertices (u, v), of the maximum flow from u to v, treating each undirected edge as a pair of directed edges of capacity 1.

2. Optimization: it suffices to fix a single vertex u and compute the max-flow from u to each of the other |V| − 1 vertices, because every edge cut separates u from at least one other vertex.

3. This uses at most |V| − 1 max-flow computations, each on a flow network with O(V) vertices and O(E) edges.

13. Given edge (v,s) with flow 1, prove there exists another flow f′ with f′(v,s)=0 and same total
flow.

Proof Idea:

• Decompose the flow f into flow along s-to-t paths and flow around cycles. No s-to-t path can use the edge (v, s), since such a path never re-enters the source, so the unit on (v, s) must belong to a cycle through s.

• Cancelling one unit of flow around that cycle removes the flow on (v, s) without changing the net flow out of s, so the value of the flow is unchanged.

Algorithm (O(E)):

1. Search from s along edges carrying positive flow (e.g., by DFS) until v is reached; this path together with the edge (v, s) forms a cycle. The search takes O(E) time.

2. Decrease the flow by one unit on (v, s) and on every edge of that path (for an integer-valued flow, each such edge carries at least one unit).

3. Conservation still holds at every vertex and the flow value at the sink is unchanged; the result is the required flow f′.

14. Modify network G into G′ such that any minimum cut in G′ has the fewest edges among all
minimum cuts.

Approach:

• For each edge (u, v) in G, set:

o New capacity c′(u, v) = C · c(u, v) + 1,

where c(u, v) is the original capacity and C is a large constant greater than the total number of edges, e.g., C = |E| + 1 (a small worked example follows below).

• This transformation:

o Prioritizes cut capacity first, and minimizes edge count when capacities tie.

Result: All minimum cuts in G′ correspond to cuts in G with:

• Minimum capacity, and

• Minimum number of edges among such cuts.
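
Worked example (hypothetical numbers, with C = |E| + 1): a cut of capacity 10 with 3 edges gets new capacity 10C + 3, a cut of capacity 10 with 2 edges gets 10C + 2, and a cut of capacity 11 with a single edge gets 11C + 1. Since the edge-count term is always smaller than C, any difference in original capacity dominates (11C + 1 > 10C + 3), while among cuts of equal original capacity the one with fewer edges is strictly cheaper (10C + 2 < 10C + 3).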

Graph Theory Basics

1. Define a graph.

2. List common types and categories of graphs.

3. Differentiate between a tree and a graph.

4. Explain how to find the minimum number of edges to keep a graph connected.

5. Compare adjacency lists and adjacency matrices for graph representation.

6. Define an incidence matrix and its use cases.

7. Describe the BFS traversal method.

8. Describe the DFS traversal method.

9. Highlight key differences between BFS and DFS.

10. Draw a complete graph with four nodes and all its spanning trees.

11. Explain why Kruskal’s algorithm always produces an optimal spanning tree.

12. Determine the running time of an unweighted shortest path algorithm using the path with the
fewest edges.

1. Define a graph.
A graph G=(V,E) is a mathematical structure consisting of:
• V: A set of vertices (or nodes).

• E: A set of edges, each connecting a pair of vertices.

o In undirected graphs, edges are unordered pairs {u,v}.

o In directed graphs (digraphs), edges are ordered pairs (u,v).

2. List common types and categories of graphs.

• Simple Graph: No loops or multiple edges.

• Multigraph: Multiple edges between same vertices allowed.

• Directed Graph (Digraph): Edges have directions.

• Undirected Graph: Edges are bidirectional.

• Weighted Graph: Edges have weights.

• Unweighted Graph: No weights on edges.

• Connected Graph: Path exists between every pair of vertices.

• Complete Graph: Every vertex is connected to every other.

• Cyclic/ Acyclic Graphs: With/without cycles.

• Tree: Acyclic, connected graph.

3. Differentiate between a tree and a graph.

Feature Tree Graph

Cycles No cycles (acyclic) May contain cycles

Connectivity Always connected May be disconnected

Edges n − 1 edges for n vertices Varies

Unique Paths Exactly one path between any two nodes Not necessarily

4. Explain how to find the minimum number of edges to keep a graph connected.

• For an undirected graph with n vertices, at least n − 1 edges are needed to keep it connected.

• If disconnected, find the number of connected components (c) and add c − 1 edges.

5. Compare adjacency lists and adjacency matrices.


Feature Adjacency List Adjacency Matrix

Space Complexity O(V + E) O(V²)

Edge Lookup Slower, O(degree) Fast, O(1)

Sparse Graphs Efficient Wastes space

Dense Graphs Less efficient Efficient

6. Define an incidence matrix and its use cases.

• An incidence matrix is a V × E matrix where:

o Rows represent vertices.

o Columns represent edges.

o Entry is:

▪ 1 if the vertex is incident to (an endpoint of) the edge.

▪ In a directed graph, the two endpoints are distinguished by sign (e.g., +1 at the tail and −1 at the head).

• Use cases: Analyzing flows, graph algorithms, circuits.

7. Describe the BFS traversal method.

• Breadth-First Search (BFS) explores graph level-by-level.

• Uses a queue and marks visited nodes.

• Useful for:

o Shortest path in unweighted graphs.

o Connectivity checks.

• Time Complexity: O(V+E)
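
A short Python sketch of BFS, assuming the graph is an adjacency-list dictionary (an illustrative helper, not from the text), might be:

from collections import deque

def bfs(graph, source):
    # Returns the number of edges on a shortest path from source to every reachable vertex.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:              # v not visited yet
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist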

8. Describe the DFS traversal method.

• Depth-First Search (DFS) explores as deep as possible before backtracking.

• Uses stack (recursion or explicit).

• Used for:

o Topological sort.

o Cycle detection.

o Strongly connected components.


• Time Complexity: O(V+E)
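
A corresponding recursive DFS sketch, under the same adjacency-list assumption, could be:

def dfs(graph, source, visited=None):
    # Returns the set of vertices reachable from source, exploring depth-first.
    if visited is None:
        visited = set()
    visited.add(source)
    for v in graph[source]:
        if v not in visited:
            dfs(graph, v, visited)
    return visited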

9. Highlight key differences between BFS and DFS.

Feature BFS DFS

Structure Queue Stack or Recursion

Use Case Shortest Path (Unweighted) Topological Sort, SCC

Search Order Level by level Depth-wise

Time O(V+E) O(V+E)

10. Draw a complete graph with four nodes and all its spanning trees.

• A complete graph K4 has:

o 4 vertices, 6 edges.

• Number of spanning trees in Kn (Cayley's formula): n^(n−2); for K4: 4² = 16 spanning trees.

• Each spanning tree will have exactly 3 edges connecting all 4 nodes without cycles.

11. Explain why Kruskal’s algorithm always produces an optimal spanning tree.

• Kruskal's algorithm repeatedly selects the minimum-weight edge that does not form a cycle with the edges already chosen.

• Based on the greedy approach and the cut property:

o A minimum-weight edge crossing any cut of the graph belongs to some minimum spanning tree, so every greedy choice is safe.

• Correctness can be proven formally by an exchange argument (or via matroid theory).

12. Determine the running time of an unweighted shortest path algorithm using the path with the
fewest edges.

• Use BFS for unweighted shortest paths.

• Time Complexity:

o Adjacency list: O(V+E)

o Adjacency matrix: O(V²)

Graph Analysis

6. Prove that the sum of vertex degrees in an undirected graph equals twice the number of edges.

7. Prove or disprove: A finite directed graph with each vertex having out-degree at least one contains
a directed cycle.

8. a. Show that a connected undirected graph with n vertices has at least n-1 edges, and those
with exactly n-1 edges are trees.

b. Find the minimum number of edges in a strongly connected digraph with n vertices and
describe its structure.

9. Prove the equivalence of the following for an undirected graph G with n vertices:

a. G is a tree.

b. G is connected but becomes disconnected if any edge is removed.

c. There is exactly one simple path between any two distinct vertices.

d. G has no cycles and has n-1 edges.

2. Prove that Prim’s algorithm generates a minimum-cost spanning tree.

3. Show that adding an edge to a spanning tree creates a unique cycle.

5. For a weighted connected graph G :

a. Prove that the heaviest edge in any cycle does not belong to a minimum-cost spanning tree.

b. Show that any F -heavy edge in a forest F (subgraph of G ) does not belong to a minimum-cost
spanning tree.

6. Show that the number of spanning trees in an n -vertex complete graph can exceed 2^{n-1} - 2 .

6. Sum of Vertex Degrees = Twice the Number of Edges (Handshaking Lemma)


Proof:

• In an undirected graph, each edge contributes to the degree of two vertices (one at each
end).

• Suppose the graph has m edges.

• Then, the total sum of degrees is 2m, since each edge is counted twice (once for each vertex
it connects).

7. Directed Graph with Out-Degree ≥ 1 Contains a Directed Cycle

Proof:

• Start at any vertex v₀. Since every vertex has out-degree ≥ 1, we can repeatedly follow an outgoing edge, producing a walk v₀, v₁, v₂, ….

• Since the graph is finite, some vertex must eventually be revisited; the portion of the walk between its two occurrences is a directed cycle.

• Conclusion: Every finite directed graph with out-degree ≥ 1 has a directed cycle.

8. (a) Connected Undirected Graph with n Vertices

Proof:

• Minimum edges: A connected graph on n vertices must have at least n − 1 edges; with fewer edges the graph has more than one connected component.

• Trees: If it is connected with exactly n − 1 edges, it cannot contain a cycle: removing one edge of a cycle would leave a connected graph with only n − 2 edges, contradicting the bound above. A connected acyclic graph is, by definition, a tree.

(b) Minimum Edges in a Strongly Connected Digraph

• Minimum edges: n (forms a directed cycle).

• Structure: A Hamiltonian cycle where each vertex points to the next.

9. Equivalence of Tree Definitions

Proof:
We show (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a).

• (a) ⇒ (b): A tree is connected and acyclic; removing any edge disconnects it.

• (b) ⇒ (c): Connectedness ensures a path; uniqueness follows from acyclicity.

• (c) ⇒ (d): Unique paths imply there are no cycles (a cycle would give two distinct paths between its vertices), and a connected acyclic graph on n vertices has exactly n − 1 edges.

• (d) ⇒ (a): An acyclic graph with n − 1 edges must be connected (a forest with k components has n − k edges, forcing k = 1), so it is a tree.


2. Prim’s Algorithm Generates an MST

Proof:

• Prim’s algorithm grows the MST by always adding the lightest edge connecting the tree to a
new vertex.

• By the cut property, this edge must be in the MST.

• Since it never forms a cycle, the final result is a spanning tree of minimal cost.

3. Adding an Edge to a Spanning Tree Creates a Unique Cycle

Proof:

• A spanning tree T has exactly n − 1 edges and no cycles.

• Adding an edge e between two vertices u and v introduces a cycle, because T already contains a unique path between u and v; that path together with e forms the cycle.

• The cycle is unique, because a second cycle would imply two distinct paths between u and v in T, contradicting the tree property.

5. (a) Heaviest Edge in a Cycle is Not in MST

Proof:

• Suppose edge e is the heaviest edge in some cycle C (assume distinct weights for simplicity).

• If e belonged to a minimum spanning tree T, removing e would split T into two components; some other edge of C also crosses this cut and has smaller weight.

• Swapping that edge in for e would give a cheaper spanning tree, a contradiction. Hence e cannot be in the MST.

(b) F-Heavy Edge Not in MST

• An edge e = (u, v) is F-heavy if its weight exceeds that of every edge on the path between u and v in the forest F; equivalently, e is the heaviest edge on the unique cycle formed by adding e to F.

• That cycle consists of e and edges of F, all of which are edges of G, so by part (a) e cannot belong to a minimum-cost spanning tree of G.
