
Aad End Sem

The document provides an overview of algorithms and their applications in product design, including definitions, characteristics, and examples of algorithms like Binary Search, Machine Learning, and Dynamic Programming. It discusses techniques such as Divide and Conquer, Greedy Algorithms, and their comparisons, along with specific problems like the 0/1 Knapsack and Activity Selection. Additionally, it covers graph algorithms like Prim's and Kruskal's for Minimum Spanning Trees and Dijkstra's for shortest paths.

SECTION 1: Algorithm Basics + Applications in Product Design

🔹 Q1: What is an algorithm? Explain its characteristics and advantages.


(4 marks)

Answer:​
An algorithm is a finite set of well-defined instructions used to solve a particular problem or
perform a computation. It takes some input, processes it, and produces an output.
Algorithms are the foundation of computer programs.

Characteristics of a good algorithm:

1.​ Input: Clearly defined zero or more inputs.​

2.​ Output: At least one output (result).​

3.​ Finiteness: Must terminate after a finite number of steps.​

4.​ Definiteness: Every step should be precisely defined and unambiguous.​

5.​ Effectiveness: All operations must be basic enough to be done manually if required.​

6.​ Generality: Should work for a set of inputs, not just one case.

Advantages of algorithms:

●​ Clarity: Helps in understanding and structuring logic before coding.​

●​ Efficiency: Leads to optimized use of resources (time, memory).​

●​ Reusability: Can be reused for similar problems.​

●​ Debugging & Maintenance: Easier to test and update compared to raw code.

Example:​
Binary Search Algorithm is an efficient algorithm used to search for a value in a sorted
array.
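As a minimal sketch, Binary Search can be written in Python as follows (the function name is illustrative):

```python
def binary_search(arr, target):
    # Repeatedly halve the search range of a sorted list;
    # return the index of target, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target lies in the right half
        else:
            hi = mid - 1   # target lies in the left half
    return -1
```

Each comparison discards half of the remaining range, giving O(log n) time.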

🔹 Q2: How are algorithms used in product design?


(4 marks)
Answer:​
Algorithms play a critical role in product design, especially in digital and smart products.
They enable intelligent behavior, automation, and data-driven decision-making.

Applications of algorithms in product design:

1. Personalization Algorithms: Used in apps like Spotify or Netflix to suggest personalized content based on user data.

2. Recommendation Engines: E-commerce platforms use collaborative filtering algorithms to recommend products.

3.​ Design Automation: Generative design software like Autodesk Fusion uses
algorithms to create and evaluate design options.​

4.​ User Flow Optimization: Algorithms analyze user behavior to streamline UI/UX
flows.

Real-life example:​
In a fitness app, algorithms track user activity and suggest workouts using data like age,
weight, and goal.

🔹 Q3: Explain how Machine Learning is used in Product Design. Give examples.

(4 marks)

Answer:​
Machine Learning (ML) in product design is used to build intelligent and adaptive
systems that learn from user data and improve the user experience automatically.

Applications of ML in product design:

1.​ User Behavior Prediction: ML models analyze how users interact with a product to
suggest features or improvements.​

2.​ Image and Voice Recognition: Products like Face ID or Alexa use ML for biometric
access or voice commands.​

3.​ Sentiment Analysis: Analyzing user feedback or reviews to guide future design
decisions.​

4.​ Augmented Design Tools: ML-powered tools help designers generate layouts, color
palettes, or UX flows automatically.
Example:​
Instagram uses ML to reorder your feed based on your interaction patterns — likes, time
spent on posts, and comments.

🔹 Q4: What is a recurrence relation? Explain with an example.


(4 marks)

Answer:​
A recurrence relation is a mathematical expression that defines each term of a sequence
based on one or more previous terms.

General form:​
T(n) = aT(n/b) + f(n)​
Where:

●​ T(n) is the total time,​

●​ a is the number of subproblems,​

●​ n/b is the size of each subproblem,​

●​ f(n) is the cost of dividing/merging.

Types of recurrence relations:

1.​ Linear Recurrence: Involves only the previous one or two terms.​
E.g., Fibonacci: F(n) = F(n-1) + F(n-2)​

2.​ Divide and Conquer: Used in algorithms like Merge Sort.​


E.g., T(n) = 2T(n/2) + O(n)

Example:​
In Merge Sort, the recurrence relation is:​
T(n) = 2T(n/2) + O(n)​
Using the Master Theorem, this gives O(n log n) complexity.
SECTION 2: Divide and Conquer & Greedy Technique

🔹 Q5: Explain the Divide and Conquer technique with a real-life and
algorithmic example.

(4 marks)

Answer:​
Divide and Conquer is an algorithmic paradigm that solves a problem by recursively
dividing it into smaller sub-problems, solving each sub-problem independently, and then
combining the results.

Steps:

1.​ Divide: Break the problem into smaller sub-problems.​

2.​ Conquer: Solve each sub-problem recursively.​

3.​ Combine: Merge the solutions of sub-problems to form the solution of the original
problem.

Real-life example:​
Sorting books by category: First divide by genre, then by author within each genre, then sort
titles alphabetically within author groups. Finally, combine them into a complete organized
library.

Algorithmic example – Merge Sort:

●​ Divide: Split array into two halves.​

●​ Conquer: Recursively sort each half.​

●​ Combine: Merge the sorted halves.​

Time Complexity: O(n log n)
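The three Merge Sort steps above can be sketched in Python (a minimal illustrative version, not tuned for performance):

```python
def merge_sort(arr):
    # Divide: split in half; Conquer: sort each half recursively;
    # Combine: merge the two sorted halves.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    # One side is exhausted; append the remainder of the other.
    return merged + left[i:] + right[j:]
```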

🔹 Q6: What is a Greedy Algorithm? Explain with a suitable example.


(4 marks)

Answer:​
A Greedy Algorithm builds a solution step-by-step by choosing the locally optimal option at
each stage, assuming this will lead to the global optimum.
Key idea:

●​ Never looks back or reconsiders past choices.​

●​ Does not guarantee the best solution for all problems but works well for some.

Example – Coin Change Problem:​


Goal: Make a given amount using the least number of coins.

●​ Denominations: ₹10, ₹5, ₹2, ₹1​

●​ To make ₹18: Pick ₹10 → ₹5 → ₹2 → ₹1 = 4 coins​

Note: This works only if coin denominations are canonical (standard). Otherwise, Greedy
may not be optimal.
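A short Python sketch of the greedy coin-change idea above (the function name and default denominations are illustrative):

```python
def greedy_coin_change(amount, denominations=(10, 5, 2, 1)):
    # Greedy choice: repeatedly take the largest coin that still fits.
    # Correct only for canonical coin systems such as ₹10/₹5/₹2/₹1.
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins
```

For ₹18 this picks ₹10, ₹5, ₹2, ₹1 — the 4-coin answer from the example.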

Other Examples:

●​ Kruskal’s Algorithm for Minimum Spanning Tree​

●​ Fractional Knapsack Problem​

●​ Huffman Encoding (used in compression)

🔹 Q7: Compare Divide and Conquer vs Greedy Approach.


(4 marks)

Answer:

| Aspect | Divide and Conquer | Greedy Technique |
|---|---|---|
| Strategy | Divide problem recursively | Take best local option at each step |
| Optimality | Often gives optimal solution | Not always optimal |
| Examples | Merge Sort, Quick Sort | Coin Change, Huffman Coding |
| Backtracking | Solves all subproblems | Does not backtrack |
| Complexity | Often higher (e.g. O(n log n)) | Usually faster (e.g. O(n log n) with sorting) |

Summary:​
Divide and Conquer is powerful for solving large problems with recursive structure, while
Greedy works best when local choices lead to a global solution.
SECTION 3: Dynamic Programming, Knapsack,
Fibonacci

🔹 Q8: What is Dynamic Programming (DP)? Explain its characteristics with an example.

(4 marks)

Answer:​
Dynamic Programming (DP) is a technique for solving problems by breaking them into
overlapping subproblems, solving each subproblem just once, and storing its result for future
reference (memoization or tabulation).

Key Characteristics:

1.​ Optimal Substructure: Solution of the problem can be built from optimal solutions of
subproblems.​

2.​ Overlapping Subproblems: The same subproblems are solved multiple times.

Example – Fibonacci Sequence:

●​ Naive recursion solves same subproblems repeatedly (Exponential time).​

●​ DP stores results in array or table:​


fib(n) = fib(n-1) + fib(n-2)​
Stores fib(0), fib(1), ... fib(n-1) to compute fib(n) in O(n) time.

Real-life analogy:​
Climbing stairs: To reach the 5th step, you can come from the 4th or 3rd step. If you already
know ways to reach 4 and 3, you can build the result for 5 quickly.

🔹 Q9: Explain the 0/1 Knapsack Problem using Dynamic Programming.


(4 marks)

Answer:​
In the 0/1 Knapsack problem, we are given:

●​ A set of n items, each with a weight w[i] and value v[i]​

●​ A knapsack of capacity W​
●​ Objective: Maximize the value without exceeding the weight, and you either pick an
item or leave it (0/1 choice).​

DP Approach:

●​ Define a DP table dp[i][j] = max value using first i items and capacity j​

Recurrence relation:

dp[i][j] = dp[i-1][j] if w[i] > j

dp[i][j] = max(dp[i-1][j], dp[i-1][j - w[i]] + v[i]) if w[i] ≤ j

Time Complexity: O(n × W)

Example:​
Items: (value, weight) → (60, 10), (100, 20), (120, 30)​
Capacity = 50​
Max value = 220 by picking (100, 20) and (120, 30)
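The recurrence above can be turned into a small bottom-up Python table (function and variable names are illustrative):

```python
def knapsack_01(values, weights, W):
    # dp[i][j]: best value using the first i items with capacity j.
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]            # option 1: skip item i
            if weights[i - 1] <= j:            # option 2: take item i
                dp[i][j] = max(dp[i][j],
                               dp[i - 1][j - weights[i - 1]] + values[i - 1])
    return dp[n][W]
```

With the items (60, 10), (100, 20), (120, 30) and W = 50 it returns 220, matching the example.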

🔹 Q10: What is the difference between 0/1 Knapsack and Fractional Knapsack?

(4 marks)

Answer:

| Feature | 0/1 Knapsack | Fractional Knapsack |
|---|---|---|
| Selection | Either take the whole item or not | Can take a fraction of the item |
| Approach | Dynamic Programming | Greedy Algorithm |
| Optimality with Greedy | Not always optimal with greedy | Always optimal with greedy |
| Time Complexity | O(nW) | O(n log n) (after sorting by value/weight) |

Example:

●​ 0/1: Can’t take half of an item. Choose optimal combination.​

●​ Fractional: If 1 item gives high value, take as much of it as capacity allows.​
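As a sketch, the fractional (greedy) version sorts items by value density and takes as much of the densest item as fits (names are illustrative):

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs.
    # Greedy choice: highest value-per-weight first; take fractions freely.
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1],
                                reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)          # whole item, or the remainder
        total += value * (take / weight)
        capacity -= take
    return total
```

For items (60, 10), (100, 20), (120, 30) and capacity 50, it takes the first two whole and 20/30 of the third, for a total of 240.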

🔹 Q11: Explain Fibonacci number calculation using Memoization.


(4 marks)

Answer:​
Memoization is a top-down dynamic programming technique where results of subproblems
are stored in a dictionary or array to avoid repeated calculations.

Fibonacci using Memoization (Python):

def fib(n, memo={}):
    # memo persists across calls because Python evaluates the default
    # argument only once — a common (if slightly tricky) memoization idiom.
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

Without memoization: Time complexity = O(2^n)​


With memoization: Time complexity = O(n)

Benefits:

●​ Speeds up recursive algorithms​

●​ Saves repeated computation (at the cost of extra memory for the cache)​

●​ Especially useful in problems like Fibonacci, Knapsack, Matrix Chain


Multiplication
SECTION 4: Graph Algorithms (MSTs & Shortest Paths)

🔹 Q12: Explain Prim’s Algorithm for Minimum Spanning Tree (MST).


(4 marks)

Answer:​
Prim’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) of
a connected, weighted graph. It builds the MST by growing one vertex at a time, always
choosing the edge with the minimum weight that connects a vertex inside the MST to a
vertex outside.

Steps:

1.​ Start with an arbitrary node.​

2.​ Initialize a visited set and a priority queue (min-heap).​

3.​ At each step, pick the smallest-weight edge that connects to an unvisited node.​

4.​ Add that node to the MST and repeat until all vertices are included.

Time Complexity:

●​ O(V²) for adjacency matrix​

●​ O(E log V) using priority queue and adjacency list (with Min Heap)

Example:​
For graph with edges:

A - B (2), A - C (3), B - C (1), B - D (4), C - D (5)

MST: A–B–C–D with cost 2 + 1 + 4 = 7
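The steps above can be sketched in Python with a min-heap (the adjacency-list format and function name are illustrative):

```python
import heapq

def prim_mst(graph, start):
    # graph: {node: [(weight, neighbor), ...]}; returns total MST weight.
    visited = {start}
    heap = list(graph[start])        # edges leaving the start vertex
    heapq.heapify(heap)
    cost = 0
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)   # cheapest edge crossing the cut
        if v in visited:
            continue                 # would form a cycle; skip
        visited.add(v)
        cost += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return cost
```

On the example graph (A-B 2, A-C 3, B-C 1, B-D 4, C-D 5) it returns 7.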

🔹 Q13: Explain Kruskal’s Algorithm with an example.


(4 marks)

Answer:​
Kruskal’s Algorithm is a greedy algorithm to find the Minimum Spanning Tree (MST) by
always choosing the smallest weight edge that doesn’t form a cycle.
Steps:

1.​ Sort all edges by weight.​

2.​ Initialize MST as empty.​

3.​ For each edge, use Disjoint Set (Union-Find) to check if adding it forms a cycle.​

4.​ If not, include it in the MST.​

5.​ Repeat until MST contains (V-1) edges.​

Time Complexity:

●​ O(E log E) due to sorting edges​

●​ Efficient with Union-Find using path compression​

Example:​
Edges:

A - B (1), B - C (3), A - C (2), C - D (4)

Sorted: (A-B), (A-C), (B-C), (C-D)​


MST: A–B, A–C, C–D with cost = 1 + 2 + 4 = 7
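The steps above can be sketched in Python with a simple Union-Find (edge format and names are illustrative):

```python
def kruskal_mst(vertices, edges):
    # edges: list of (weight, u, v) tuples.
    parent = {v: v for v in vertices}

    def find(x):
        # Find the component root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost, mst = 0, []
    for w, u, v in sorted(edges):        # smallest weight first
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            cost += w
            mst.append((u, v, w))
    return cost, mst
```

On the example edges (A-B 1, B-C 3, A-C 2, C-D 4) it selects A-B, A-C, C-D for a cost of 7.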

🔹 Q14: Compare Prim’s and Kruskal’s Algorithm.


(4 marks)

Answer:

| Feature | Prim’s Algorithm | Kruskal’s Algorithm |
|---|---|---|
| Technique | Grows MST from a node | Adds edges by increasing weight |
| Data Structures | Priority Queue (Min Heap) | Disjoint Set (Union-Find) |
| Graph Type Preferred | Dense graphs (many edges) | Sparse graphs (fewer edges) |
| Cycle Handling | Implicitly avoids cycles | Explicitly checks for cycles |
| Edge Sorting | Not required | Required |

Conclusion:​
Both give MST, but use different approaches. Prim is vertex-based; Kruskal is edge-based.

🔹 Q15: Explain Dijkstra’s Algorithm with an example.


(4 marks)

Answer:​
Dijkstra’s Algorithm finds the shortest path from a single source to all other vertices in a
weighted graph with non-negative weights.

Steps:

1.​ Initialize distances of all vertices to ∞, except source (0).​

2.​ Use a priority queue (min-heap) to extract vertex with the minimum distance.​

3.​ For each neighbor of current vertex, update the distance if a shorter path is found.​

4.​ Repeat until all nodes are visited.​

Time Complexity:

●​ O(V²) with adjacency matrix​

●​ O(E log V) using priority queue (heap)​

Example:​
Graph:

A - B (2), A - C (4), B - C (1), B - D (7), C - D (3)


From A:

●​ A → B = 2​

●​ A → C = 3 (via B)​

●​ A → D = 6 (via C)​
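The steps above can be sketched in Python using a min-heap (adjacency-list format and function name are illustrative):

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}; non-negative weights only.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # stale queue entry; skip
        for v, w in graph[u]:
            if d + w < dist[v]:       # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

On the example graph (A-B 2, A-C 4, B-C 1, B-D 7, C-D 3) it yields B = 2, C = 3, D = 6 from A.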

🔹 Q16: What is the Floyd-Warshall Algorithm? Explain with an example.


(4 marks)

Answer:​
Floyd-Warshall Algorithm is a Dynamic Programming algorithm used to find shortest
paths between all pairs of vertices in a weighted graph (positive or negative weights, but
no negative cycles).

Steps:

1.​ Initialize a 2D matrix dist[i][j] with direct distances.​

2.​ For each intermediate vertex k, update:​


dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
3.​ After all iterations, dist[i][j] holds the shortest distance between i and j.​

Time Complexity: O(V³)

Example:​
Initial matrix:

A B C

A 0 3 ∞

B ∞ 0 1

C ∞ ∞ 0

After running Floyd-Warshall: A to C = 3 + 1 = 4
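The triple loop described above is short enough to write out directly (a minimal sketch; the matrix format is illustrative):

```python
def floyd_warshall(dist):
    # dist: n×n matrix of direct distances (float('inf') where no edge).
    # Returns a new matrix of all-pairs shortest distances.
    n = len(dist)
    d = [row[:] for row in dist]      # work on a copy
    for k in range(n):                # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Running it on the 3×3 matrix above gives A to C = 4, via B.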


SECTION 5: Greedy Techniques & Backtracking

🔹 Q17: Explain the Activity Selection Problem using the Greedy method.

(4 marks)

Answer:​
The Activity Selection Problem involves selecting the maximum number of
non-overlapping activities that a single person or machine can perform, given their start and
finish times. The Greedy Approach solves this by always choosing the activity that finishes
the earliest.

Steps:

1.​ Sort the activities by their finish times.​

2.​ Select the first activity.​

3.​ For the rest, pick only those whose start time is ≥ finish time of last selected activity.​

4.​ Repeat till all are considered.​

Time Complexity:

●​ O(n log n) for sorting, O(n) for selection​

Example:​
Activities (start, finish):​
A(1, 3), B(2, 5), C(4, 6), D(6, 8), E(5, 9), F(8, 9)​
Sorted by finish time → A, B, C, D, E, F​
Selected: A, C, D, F

Max activities: 4
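The greedy steps above can be sketched in Python (function name is illustrative):

```python
def activity_selection(activities):
    # activities: list of (start, finish) pairs.
    # Greedy choice: always keep the activity that finishes earliest.
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with the last pick
            selected.append((start, finish))
            last_finish = finish
    return selected
```

On the example activities it selects A(1, 3), C(4, 6), D(6, 8), F(8, 9) — four in total.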

🔹 Q18: Why is the Greedy method used in the Activity Selection problem?

(4 marks)

Answer:​
The Greedy method is used in the Activity Selection problem because:
1.​ Local Optimal → Global Optimal:​
Choosing the activity that ends earliest ensures more time is left for future activities.​

2.​ Efficient Decision-Making:​


At each step, the best immediate choice (earliest finish time) leads to an optimal final
solution.​

3.​ Simplicity & Efficiency:​


The problem structure fits the greedy paradigm and can be solved in O(n log n) time.​

4.​ No Overlaps:​
Selecting earliest finish time avoids overlaps and maximizes count of selected
activities.​

Conclusion:​
Greedy works here because the problem has optimal substructure and greedy choice
property.
SECTION 6: Backtracking

🔹 Q19: What is Backtracking? Give one example.


(4 marks)

Answer:​
Backtracking is a problem-solving technique that builds a solution step by step and
abandons a path as soon as it determines that it will not lead to a valid solution. It’s a
depth-first search over possible solution spaces.

Used when:

●​ All possible combinations need to be checked.​

●​ Solutions have constraints (like Sudoku, N-Queens).

Key Steps:

1.​ Choose → Explore → Un-choose (Backtrack)​

2.​ If current path violates constraints, discard and backtrack

Example – N-Queens Problem:​


Place N queens on an N×N board such that no two attack each other. Use backtracking to
try placing a queen row by row and backtrack if any conflicts arise.

🔹 Q20: Solve the 4-Queens problem using Backtracking.


(4 marks)

Answer:​
The 4-Queens Problem requires placing 4 queens on a 4×4 board so that no two queens
threaten each other.

Steps:

1.​ Place queen in first row, first column.​

2.​ Move to next row, place queen in a column not attacked by previous queens.​

3.​ If a safe column isn’t found, backtrack to the previous row.​

4.​ Repeat until all 4 queens are placed safely.​


Constraints Checked:

●​ No other queen in same column​

●​ No queen in left and right diagonals​

One Solution:

[. Q . .]

[. . . Q]

[Q . . .]

[. . Q .]

Q denotes a queen; . is empty cell.

Time Complexity: O(N!) for N-Queens
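The choose → explore → un-choose pattern above can be sketched in Python; the solver returns one solution as a list of column indices, one per row (names are illustrative):

```python
def solve_n_queens(n):
    # cols[r] = column of the queen placed in row r.
    cols = []

    def safe(col):
        row = len(cols)
        for r, c in enumerate(cols):
            # Same column, or same diagonal, as an earlier queen?
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if safe(col):
                cols.append(col)        # choose
                if place(row + 1):      # explore
                    return True
                cols.pop()              # un-choose (backtrack)
        return False

    return cols if place(0) else None
```

For n = 4 it finds [1, 3, 0, 2], which is exactly the board shown above.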

🔹 Q21: What is the Subset Sum Problem? Solve using Backtracking.


(4 marks)

Answer:​
The Subset Sum Problem asks whether a subset of a given set of integers adds up to a
given target sum.

Using Backtracking:

●​ Try including or excluding each number.​

●​ If at any point sum = target → valid subset.​

●​ If sum exceeds target → backtrack.

Example:​
Set = [3, 34, 4, 12, 5, 2], Target = 9​
Subset = [4, 5] is a valid solution.

Steps (tracing one path):

1.​ Include 3 → sum = 3 → try next.​

2.​ Include 4 → sum = 7 → include 2 → sum = 9 ✓ (subset [3, 4, 2], another valid answer).​

3.​ If sum > 9, or all elements are used → backtrack.

Time Complexity: O(2^n)
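The include/exclude decisions above can be sketched as a recursive backtracker (function name is illustrative; it returns one valid subset, or None):

```python
def subset_sum(nums, target):
    def backtrack(i, remaining, chosen):
        if remaining == 0:
            return chosen[:]                 # found a valid subset
        if i == len(nums) or remaining < 0:
            return None                      # dead end: backtrack
        chosen.append(nums[i])               # branch 1: include nums[i]
        found = backtrack(i + 1, remaining - nums[i], chosen)
        if found:
            return found
        chosen.pop()                         # undo the choice
        return backtrack(i + 1, remaining, chosen)  # branch 2: exclude
    return backtrack(0, target, [])
```

For the set [3, 34, 4, 12, 5, 2] and target 9 it finds a subset summing to 9 (e.g. [3, 4, 2]; [4, 5] is another valid answer).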


SECTION 7: Dynamic Programming (Continued)

🔹 Q22: What is Dynamic Programming? How is it different from recursion and divide & conquer?

(4 marks)

Answer:​
Dynamic Programming (DP) is a technique used to solve optimization problems by
breaking them down into overlapping subproblems and solving each subproblem only
once, storing the result for reuse.

Key Characteristics of DP:

1.​ Overlapping Subproblems – Subproblems repeat.​

2.​ Optimal Substructure – Optimal solution can be formed from optimal sub-solutions.​

3.​ Memoization (Top-Down) or Tabulation (Bottom-Up) is used.​

Difference from Recursion & Divide and Conquer:

| Technique | Handles Overlap? | Reuses Results? | Example Problem |
|---|---|---|---|
| Recursion | ❌ | ❌ | Fibonacci (naive) |
| Divide & Conquer | ❌ | ❌ | Merge Sort |
| Dynamic Programming | ✅ | ✅ | 0/1 Knapsack, LCS |

🔹 Q23: Explain the 0/1 Knapsack Problem using Dynamic
Programming.

(4 marks)

Answer:​
In the 0/1 Knapsack problem, you are given weights and profits of items, and a knapsack
with a max weight capacity W. The goal is to maximize profit without exceeding the weight,
and each item can be taken or not (0/1).

DP Approach:

●​ Let n be number of items.​

●​ wt[] = weights, val[] = values​

●​ dp[i][w] = max value using first i items and weight w

Recurrence Relation:

dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w - wt[i-1]])   if wt[i-1] ≤ w
dp[i][w] = dp[i-1][w]                                          otherwise

Initialization:​
dp[0][w] = 0 for all w (0 items)​
dp[i][0] = 0 for all i (0 weight)

Time & Space Complexity:

●​ Time: O(nW)​

●​ Space: O(nW)​

🔹 Q24: Solve a 0/1 Knapsack Problem with n = 3, W = 50


Items:

●​ Weights = [10, 20, 30]​


●​ Profits = [60, 100, 120]​

Answer:​
Create a 2D table dp[4][51]. Fill based on recurrence:

Final Max Profit = 220​


Take item 2 and 3 (20kg + 30kg = 50kg, 100 + 120 = 220)

🔹 Q25: What is Matrix Chain Multiplication in DP?


(4 marks)

Answer:​
Matrix Chain Multiplication finds the most efficient way to multiply a chain of matrices by
determining the optimal parenthesization that minimizes the number of scalar
multiplications.

Key Idea:​
Matrix multiplication is associative, so order matters for cost but not result.

Given matrix dimensions:​


A1: p0×p1, A2: p1×p2, ..., An: pn-1×pn

DP Approach:

Let m[i][j] = minimum cost to multiply matrices from i to j

m[i][j] = min over i ≤ k < j of ( m[i][k] + m[k+1][j] + p[i-1] × p[k] × p[j] )

Initialization:​
m[i][i] = 0 (one matrix, no multiplication)

Time & Space Complexity:

●​ Time: O(n³)​

●​ Space: O(n²)​
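The recurrence above can be sketched as a bottom-up table over increasing chain lengths (a minimal illustrative version; `p` is the dimensions list, so matrix Ai has shape p[i-1] × p[i]):

```python
def matrix_chain_order(p):
    # m[i][j]: min scalar multiplications to compute Ai..Aj (1-indexed).
    n = len(p) - 1                       # number of matrices
    INF = float('inf')
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):       # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):        # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]
```

For dimensions [1, 2, 3, 4] (three matrices) the best parenthesization ((A1A2)A3) costs 6 + 12 = 18 scalar multiplications.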

🔹 Q26: What is Longest Common Subsequence (LCS)? Explain with DP.


(4 marks)
Answer:​
LCS is the longest subsequence (not necessarily contiguous) that appears in the same
order in both strings.

Example:​
X = "ABCDGH", Y = "AEDFHR" → LCS = "ADH", Length = 3

DP Table Construction:

Let dp[i][j] = LCS length of X[0…i-1] and Y[0…j-1]

Recurrence:

if X[i-1] == Y[j-1]:

dp[i][j] = 1 + dp[i-1][j-1]

else:

dp[i][j] = max(dp[i-1][j], dp[i][j-1])

Initialization:​
dp[0][j] = 0, dp[i][0] = 0 (if one string is empty, LCS is 0)

Time & Space Complexity:

●​ Time: O(m×n)​

●​ Space: O(m×n)​
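The recurrence above translates directly into a bottom-up table (function name is illustrative):

```python
def lcs_length(X, Y):
    # dp[i][j]: LCS length of the prefixes X[:i] and Y[:j].
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]   # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

On X = "ABCDGH" and Y = "AEDFHR" it returns 3, the length of "ADH".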
SECTION 8: Graph Algorithms (Continued)

🔹 Q27: What is Breadth-First Search (BFS)? Explain with an example.


(4 marks)

Answer:​
Breadth-First Search (BFS) is a graph traversal algorithm that explores vertices level by
level, using a queue. It starts at a source node and visits all neighbors before moving to the
next level.

How it works:

1.​ Start from source vertex.​

2.​ Use a queue to keep track of the order.​

3.​ Visit all neighbors (adjacent vertices).​

4.​ Mark visited to avoid revisits.

Example (Graph):

    A
   / \
  B   C
 /     \
D       E

BFS Traversal (from A):​


A→B→C→D→E

Applications:

●​ Finding shortest path in unweighted graphs​

●​ Web crawlers, social network analysis


Time Complexity:

●​ O(V + E), where V = vertices, E = edges​
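The queue-based procedure above can be sketched in Python (adjacency-list format and function name are illustrative):

```python
from collections import deque

def bfs(graph, source):
    # graph: {node: [neighbors, ...]}; returns nodes in visit order.
    visited = {source}
    order = []
    queue = deque([source])
    while queue:
        u = queue.popleft()              # take the oldest frontier node
        order.append(u)
        for v in graph[u]:
            if v not in visited:         # mark before enqueueing
                visited.add(v)
                queue.append(v)
    return order
```

For the tree above (A's children B and C; D under B; E under C) the order is A, B, C, D, E.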

🔹 Q28: What is Depth-First Search (DFS)? Explain with an example.


(4 marks)

Answer:​
Depth-First Search (DFS) is a traversal method that explores as deep as possible along
each branch before backtracking. It uses a stack (or recursion).

How it works:

1.​ Start at a source node.​

2.​ Visit an unvisited neighbor.​

3.​ Continue deeper until no unvisited neighbors.​

4.​ Backtrack and repeat.

Example (Graph):

    A
   / \
  B   C
 /     \
D       E

DFS Traversal (from A):​


A→B→D→C→E

Applications:

●​ Topological sort​

●​ Cycle detection​

●​ Solving puzzles (mazes, etc.)​


Time Complexity:

●​ O(V + E)​
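A minimal recursive sketch of DFS in Python (adjacency-list format and names are illustrative):

```python
def dfs(graph, source, visited=None, order=None):
    # graph: {node: [neighbors, ...]}; returns nodes in visit order.
    # The recursion stack plays the role of an explicit stack.
    if visited is None:
        visited, order = set(), []
    visited.add(source)
    order.append(source)
    for v in graph[source]:
        if v not in visited:
            dfs(graph, v, visited, order)   # go as deep as possible first
    return order
```

For the tree above (A's children B and C; D under B; E under C) the order is A, B, D, C, E.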

🔹 Q29: What is Prim’s Algorithm? How does it find a Minimum Spanning Tree (MST)?

(4 marks)

Answer:​
Prim’s Algorithm finds the Minimum Spanning Tree (MST) of a connected, weighted
graph, using a greedy approach.

Steps:

1.​ Start from any node.​

2.​ Add the minimum weight edge that connects a visited node to an unvisited node.​

3.​ Repeat until all nodes are included.

Data Structures Used:

●​ Min Heap / Priority Queue​

●​ Visited array or MST Set

Example:

Graph edges:

●​ A-B (2), A-C (3), B-C (1), B-D (4)​

MST from A:​


A → B (2), B → C (1), B → D (4)

Time Complexity:

●​ With Adjacency List + Min Heap: O(E log V)


🔹 Q30: What is Kruskal’s Algorithm? How is it different from Prim’s?
(4 marks)

Answer:​
Kruskal’s Algorithm builds a Minimum Spanning Tree (MST) by always adding the
smallest edge that doesn’t form a cycle, using a greedy approach.

Steps:

1.​ Sort all edges by weight.​

2.​ Use Disjoint Set (Union-Find) to check if adding an edge forms a cycle.​

3.​ Add the edge if it connects different components.

Difference from Prim’s:

| Kruskal’s | Prim’s |
|---|---|
| Works on edges | Works on vertices |
| Uses Union-Find (DSU) | Uses Min Heap / Priority Queue |
| Good for sparse graphs | Good for dense graphs |

Time Complexity:

●​ O(E log E)​

🔹 Q31: What is Dijkstra’s Algorithm? How does it work?


(4 marks)

Answer:​
Dijkstra’s Algorithm finds the shortest path from a source vertex to all other vertices in a
weighted graph with non-negative weights.
Steps:

1.​ Set distance of source to 0, others to ∞.​

2.​ Use Priority Queue to pick the node with minimum distance.​

3.​ Relax edges:​


If dist[u] + weight(u,v) < dist[v], then update dist[v].

Example:​
Graph:

●​ A → B (1), A → C (4), B → C (2)​

Shortest Path from A:​


A → B (1), A → B → C (1+2=3)

Time Complexity:

●​ With Min Heap: O((V + E) log V)

Applications:

●​ GPS systems​

●​ Network routing​

●​ Game AI movement planning
