Course Code : MCS-208
Course Title : Data Structures and Algorithms
Assignment Number : BCA_NEW(III)-208/Assignment/2025-26
Maximum Marks : 100
Weightage : 25%
Last Date of Submission : 31st October, 2025 (for July session)
: 30th April, 2026 (for January session)
There are four questions in this assignment, which together carry 80 marks. Each question carries 20 marks. The remaining 20 marks are for the viva voce. All algorithms should be written close to the C programming language. You may use illustrations and diagrams to enhance the explanations, if necessary. Please go through the guidelines regarding assignments given in the Programme Guide for the format of presentation.
Attempt all the questions
Question 1: For each of the Singly Linked List, Circularly Singly Linked List, Doubly Linked
List, Circularly Doubly Linked List, write one application that is exclusively suitable for that list.
For example, X may be an application for whose implementation, only Circularly Singly Linked
List is suitable and others are not suitable. Justify your answer. (20 Marks)
1. Singly Linked List (SLL)
Exclusive Application: Polynomial Arithmetic (Addition of Polynomials)
Justification:
• In polynomial addition, each term can be stored as a node containing coefficient and
exponent.
• Terms are added in sequence by comparing exponents, which naturally aligns with forward-
only traversal.
• SLL is space-efficient and straightforward to implement for such linear, one-pass operations (a C sketch is given at the end of this subsection).
• Doubly or circular lists add unnecessary complexity and memory overhead for such use-
cases.
➡ Why Others Are Not Suitable?
• Circular List adds complexity with tail-pointing-head logic — unnecessary.
• Doubly Linked List uses extra memory for backward links which are not used.
• Circular Doubly List is overkill — extra logic and space are not needed.
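To make the polynomial-addition case concrete, here is a minimal C sketch (the names Term, appendTerm, and addPoly are illustrative, not prescribed by the question); it assumes both input polynomials are stored in decreasing order of exponent:

```c
#include <stdlib.h>

/* One term of a polynomial: coefficient, exponent, and a forward link only. */
typedef struct Term {
    int coeff;
    int expo;
    struct Term *next;
} Term;

/* Append a new term after the current tail of the result list. */
static Term *appendTerm(Term *tail, int coeff, int expo) {
    Term *t = malloc(sizeof(Term));
    t->coeff = coeff;
    t->expo  = expo;
    t->next  = NULL;
    tail->next = t;
    return t;
}

/* Add two polynomials given in decreasing order of exponent.
   A single forward pass over both lists is all that is required. */
Term *addPoly(Term *p, Term *q) {
    Term head = {0, 0, NULL};          /* dummy head of the result list */
    Term *tail = &head;
    while (p != NULL && q != NULL) {
        if (p->expo > q->expo)      { tail = appendTerm(tail, p->coeff, p->expo); p = p->next; }
        else if (p->expo < q->expo) { tail = appendTerm(tail, q->coeff, q->expo); q = q->next; }
        else {                        /* equal exponents: add the coefficients  */
            if (p->coeff + q->coeff != 0)
                tail = appendTerm(tail, p->coeff + q->coeff, p->expo);
            p = p->next;
            q = q->next;
        }
    }
    for (; p != NULL; p = p->next) tail = appendTerm(tail, p->coeff, p->expo);
    for (; q != NULL; q = q->next) tail = appendTerm(tail, q->coeff, q->expo);
    return head.next;
}
```

Because the merge only ever moves forward, the single next pointer of an SLL is sufficient; no backward or circular links are ever consulted.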
2. Circular Singly Linked List (CSLL)
Exclusive Application: Round-Robin Scheduling (e.g., CPU Scheduling)
Justification:
• In round-robin, processes are handled in a circular manner.
• CSLL allows moving from one process to the next, and upon reaching the last process, the
pointer directly goes back to the first.
• No need to handle NULL pointers like in SLL.
• Efficient for repeated cycling through data where no start or end is needed (see the C sketch at the end of this subsection).
➡ Why Others Are Not Suitable?
• Singly Linked List: No circular behavior — extra logic needed to reset.
• Doubly Linked List: No benefit in backward movement in scheduling.
• Circular Doubly List: Adds unnecessary memory usage and complexity.
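A minimal C sketch of the round-robin idea (the names Process and runOneSlice are illustrative):

```c
#include <stdio.h>

/* A process in the ready queue; the last node's next points back to the first. */
typedef struct Process {
    int pid;
    struct Process *next;
} Process;

/* One scheduling step: give the current process its time slice, then move on.
   Following next after the last process lands on the first one automatically,
   so there is no NULL check and no "reset to head" logic. */
Process *runOneSlice(Process *current) {
    printf("Running process %d\n", current->pid);
    return current->next;
}
```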
3. Doubly Linked List (DLL)
Exclusive Application: Web Browser’s Back and Forward History Navigation
Justification:
• User can move back and forth in the history.
• DLL allows easy traversal in both directions — prev for back, next for forward.
• Each page is a node, and we can dynamically insert or delete based on the user’s navigation (see the C sketch at the end of this subsection).
➡ Why Others Are Not Suitable?
• SLL: No backward movement supported.
• Circular Singly List: Still no backward movement; not intuitive for history.
• Circular Doubly List: Circularity is not needed — web history has a start and end point.
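A minimal C sketch of back/forward navigation (the names Page, goBack, goForward, and visit are illustrative; freeing the discarded forward chain is omitted for brevity):

```c
#include <stdlib.h>
#include <string.h>

/* One visited page; prev and next give back and forward navigation. */
typedef struct Page {
    char url[256];
    struct Page *prev;
    struct Page *next;
} Page;

/* Move back one page if possible, otherwise stay where we are. */
Page *goBack(Page *current)    { return current->prev ? current->prev : current; }

/* Move forward one page if possible, otherwise stay where we are. */
Page *goForward(Page *current) { return current->next ? current->next : current; }

/* Visiting a new page appends it after the current one and becomes the new
   current page; in a full implementation the old forward chain would be freed. */
Page *visit(Page *current, const char *url) {
    Page *p = malloc(sizeof(Page));
    strncpy(p->url, url, sizeof(p->url) - 1);
    p->url[sizeof(p->url) - 1] = '\0';
    p->prev = current;
    p->next = NULL;
    if (current != NULL)
        current->next = p;
    return p;
}
```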
4. Circular Doubly Linked List (CDLL)
Exclusive Application: Music Playlist Navigation (Repeat Mode / Shuffle in Both Directions)
Justification:
• CDLL allows:
o Moving to the next song (next)
o Going to the previous song (prev)
o Restarting from beginning when end is reached (circularity)
• Perfect for players where the user can loop songs and jump forward/backward easily (see the C sketch at the end of this subsection).
➡ Why Others Are Not Suitable?
• SLL: Can’t go back to previous song.
• CSLL: Can’t go backward, only forward looping.
• DLL: Can’t loop — reaching end requires extra logic to reset.
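A minimal C sketch (the names Song, nextSong, and prevSong are illustrative) showing that repeat-mode navigation needs no end-of-list handling at all:

```c
/* A song in a circular doubly linked playlist: both next and prev wrap around. */
typedef struct Song {
    int id;
    struct Song *next;
    struct Song *prev;
} Song;

/* Skipping past the last song returns to the first, and stepping back from the
   first song returns to the last, purely by following the existing links. */
Song *nextSong(Song *current) { return current->next; }
Song *prevSong(Song *current) { return current->prev; }
```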
Summary Table

| Linked List Type | Exclusive Application | Key Feature Used | Why Others Are Not Suitable |
|---|---|---|---|
| Singly Linked List (SLL) | Polynomial Arithmetic | Simple forward traversal | Others are overkill |
| Circular Singly LL (CSLL) | Round-Robin Scheduling | Natural looping through nodes | Others are inefficient |
| Doubly Linked List (DLL) | Web Browser History Navigation | Bidirectional traversal | Others do not support moving back |
| Circular Doubly LL (CDLL) | Music Playlist in Repeat Mode | Circular and bidirectional traversal | Others lack both features |
Question 2: We can test whether a node ‘m’ is a proper ancestor of a node ‘n’ by testing whether ‘m’ precedes ‘n’ in X-order but follows ‘n’ in Y-order, where X and Y are chosen from {pre, post, in}. Determine all those pairs X and Y for which this statement holds. (20 Marks)
Introduction to Tree Traversals:
Binary tree traversal orders:
1. Preorder (pre):
o Visit current node → Left subtree → Right subtree
o Order: Root → Left → Right
2. Inorder (in):
o Visit left subtree → current node → right subtree
o Order: Left → Root → Right
3. Postorder (post):
o Visit left subtree → right subtree → current node
o Order: Left → Right → Root
Each traversal gives a linear sequence of the nodes, but the relative position of nodes
(ancestor vs. descendant) varies depending on the traversal.
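The three orders correspond directly to where the visit step is placed in a recursive routine; a minimal C sketch:

```c
#include <stdio.h>

typedef struct BTNode {
    char key;
    struct BTNode *left, *right;
} BTNode;

/* The root is visited before, between, or after its subtrees respectively. */
void preorder(BTNode *t)  { if (t) { printf("%c ", t->key); preorder(t->left);  preorder(t->right);  } }
void inorder(BTNode *t)   { if (t) { inorder(t->left);   printf("%c ", t->key); inorder(t->right);   } }
void postorder(BTNode *t) { if (t) { postorder(t->left); postorder(t->right);  printf("%c ", t->key); } }
```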
Concept of Proper Ancestor:
• A node m is a proper ancestor of node n if:
o There is a path from m to n moving downward.
o m ≠ n (i.e., not the same node).
So, for a traversal-based test:
We want m to appear before n in one traversal (X),
but after n in another traversal (Y).
Let’s Analyze All 9 Possible Pairs of (X, Y)
We now examine all 9 combinations from the set {pre, in, post}:
| X | Y | Valid? | Explanation |
|---|---|---|---|
| pre | in | No | The inorder position of an ancestor relative to a descendant is not fixed; it depends on the shape of the tree, so the condition may or may not hold. |
| pre | post | Yes | In preorder the ancestor m appears before the descendant n, and in postorder the ancestor appears after the descendant. This holds for every proper ancestor and only for proper ancestors. |
| pre | pre | No | X and Y are the same traversal, so the two orders are identical and the condition can never hold. |
| in | pre | No | Inorder is ambiguous; the relative positions of ancestor and descendant cannot be guaranteed. |
| in | in | No | Same traversal, so there is no reversal of order. |
| in | post | No | Again, inorder cannot guarantee whether the ancestor comes before or after the descendant. |
| post | pre | No | Read literally, "m precedes n in postorder but follows n in preorder" characterizes n as a proper ancestor of m; it is the previous pair with the roles of m and n reversed, not an independent test for m being an ancestor of n. |
| post | in | No | Inorder is ambiguous; not reliable for determining ancestry. |
| post | post | No | Same order, no inversion, so the condition fails. |
Only Valid Pair:

| X | Y | Why Valid? |
|---|---|---|
| pre | post | m comes before n in preorder and after n in postorder exactly when m is a proper ancestor of n. (Listing the traversals the other way round, post then pre, describes the same relationship read in reverse: it tests whether n is a proper ancestor of m, not whether m is an ancestor of n.) |
Example for Illustration
Consider the binary tree with root A, whose left child B has children D and E, and whose right child C has children F and G. Its traversals are:
• Preorder: A B D E C F G
• Inorder: D B E A F C G
• Postorder: D E B F G C A
Let’s test for ancestor B and descendant E:
• Preorder: B appears before E
• Postorder: B appears after E
→ Valid for (pre, post)
Now try (pre, in):
• Preorder: B before E
• Inorder: B before E
→ B is a proper ancestor of E, yet B does not follow E in inorder, so the (pre, in) test already fails for this tree. Hence (pre, in) is invalid.
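The valid pair (pre, post) also gives a simple programmatic test: number every node with its preorder and postorder position in a single depth-first pass and compare the numbers. A C sketch (the field and function names are illustrative):

```c
/* Binary tree node carrying its position in preorder and in postorder. */
typedef struct Node {
    char key;
    struct Node *left, *right;
    int preNum;    /* position of this node in preorder  */
    int postNum;   /* position of this node in postorder */
} Node;

static int preCounter = 0, postCounter = 0;

/* One DFS assigns both numbers: preorder before the subtrees, postorder after. */
void number(Node *t) {
    if (t == NULL) return;
    t->preNum = ++preCounter;
    number(t->left);
    number(t->right);
    t->postNum = ++postCounter;
}

/* m is a proper ancestor of n exactly when m precedes n in preorder
   and follows n in postorder. */
int isProperAncestor(const Node *m, const Node *n) {
    return m->preNum < n->preNum && m->postNum > n->postNum;
}
```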
Question 3: Explain Left-Leaning Red-Black Trees. What are their advantages and
disadvantages? (20 Marks)
Introduction to LLRB Trees
Left-Leaning Red-Black (LLRB) Trees are a simplified form of Red-Black Trees introduced by
Robert Sedgewick. They are a type of self-balancing binary search tree (BST) used to maintain
sorted data efficiently. The key idea behind LLRB trees is to represent 2-3 trees (a kind of B-tree)
using binary search trees with colored links (red and black). This approach maintains balance
during insertions and deletions, ensuring logarithmic time complexity for search, insert, and delete
operations.
Basic Concepts
1. 2-3 Tree Analogy
• A 2-3 tree is a balanced search tree where each internal node can have either 2 or 3
children.
• LLRB simulates 2-3 trees using binary trees by encoding 3-nodes using left-leaning red
links.
2. Node Color
• Red Link: Joins a node to its parent so that the two nodes together represent a single 3-node of the corresponding 2-3 tree.
• Black Link: An ordinary BST link, corresponding to an edge of the 2-3 tree.
3. LLRB Properties
1. Red links lean left.
2. No node has two red links connected to it (no two red links in a row).
3. Every path from the root to a null reference has the same number of black links (black-
height property).
Operations in LLRB Trees
1. Insertion
• Insert like a standard BST.
• Use rotations and color flips to fix violations (a C sketch follows this list):
o Left rotation if right child is red.
o Right rotation if left child and its left child are both red.
o Color flip if both children are red.
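A minimal C sketch of these fix-ups, following the standard LLRB insertion recipe (the helper names rotateLeft, rotateRight, and flipColors are the conventional ones; error handling is omitted):

```c
#include <stdlib.h>

#define RED   1
#define BLACK 0

typedef struct LLRBNode {
    int key;
    int color;                         /* colour of the link from the parent */
    struct LLRBNode *left, *right;
} LLRBNode;

static int isRed(LLRBNode *h) { return h != NULL && h->color == RED; }

/* Turn a right-leaning red link into a left-leaning one. */
static LLRBNode *rotateLeft(LLRBNode *h) {
    LLRBNode *x = h->right;
    h->right = x->left;
    x->left  = h;
    x->color = h->color;
    h->color = RED;
    return x;
}

/* Lean a red-red left pair to the right so that the colour flip can split it. */
static LLRBNode *rotateRight(LLRBNode *h) {
    LLRBNode *x = h->left;
    h->left  = x->right;
    x->right = h;
    x->color = h->color;
    h->color = RED;
    return x;
}

/* Split a temporary 4-node by flipping the colours of the node and its children. */
static void flipColors(LLRBNode *h) {
    h->color        = RED;
    h->left->color  = BLACK;
    h->right->color = BLACK;
}

/* Ordinary recursive BST insertion followed by the three fix-ups listed above. */
LLRBNode *insert(LLRBNode *h, int key) {
    if (h == NULL) {
        LLRBNode *n = malloc(sizeof(LLRBNode));
        n->key = key;
        n->color = RED;
        n->left = n->right = NULL;
        return n;
    }
    if (key < h->key)      h->left  = insert(h->left, key);
    else if (key > h->key) h->right = insert(h->right, key);

    if (isRed(h->right) && !isRed(h->left))      h = rotateLeft(h);
    if (isRed(h->left)  && isRed(h->left->left)) h = rotateRight(h);
    if (isRed(h->left)  && isRed(h->right))      flipColors(h);
    return h;
}
```

After the top-level call returns, the caller recolours the root black.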
2. Deletion
• More complex than insertion.
• Before removing a node, ensure the path to it has a red link to maintain balance.
• Rebalancing is done using rotations and color flips during the traversal.
Example Illustration
In a valid LLRB tree, all red links lean left and every path from the root down to a null link contains the same number of black links, which keeps the tree balanced.
Advantages of LLRB Trees
1. Simplicity:
o Easier to implement than classic red-black trees.
o Uses fewer cases during balancing (only left-leaning red links allowed).
2. Performance:
o Efficient for dynamic set operations (search, insert, delete) in O(log n) time.
o Matches AVL and traditional RB trees in performance with less code complexity.
3. Memory Efficient:
o Requires only one extra bit (color) per node.
4. Balanced Tree Structure:
o Ensures short height, good for maintaining fast access.
5. Direct Simulation of 2-3 Trees:
o The 1:1 mapping with 2-3 trees simplifies understanding of balancing logic.
Disadvantages of LLRB Trees
1. Deletion is Complex:
o Compared to insertion, deletion requires more careful handling with several cases
(like moveRedLeft, moveRedRight).
2. Not Widely Used in Standard Libraries:
o Most standard implementations (like Java TreeMap) use classical red-black trees, so
LLRB is less known.
3. Slower than AVL Trees for Searching:
o AVL trees provide better balance, which can slightly improve search times (at the
cost of more rotations).
4. Harder Debugging:
o Although easier than full red-black trees, color flips and rotations still make debugging
non-trivial.
Applications
• Symbol tables in compilers.
• Databases and file systems.
• Priority queues and associative arrays.
• Real-time systems requiring logarithmic time bounds.
Conclusion
Left-Leaning Red-Black Trees are a practical and simplified alternative to traditional Red-Black
Trees. By enforcing left-leaning red links and simulating 2-3 trees, LLRB trees provide a balance
between performance and simplicity. While insertions are straightforward and efficient, deletions
remain somewhat complex. Nevertheless, they are suitable for many applications requiring balanced
search trees with guaranteed performance.
Question 4: Write a short note on the recent developments in the area of finding minimum cost
spanning trees. (20 Marks)
1. Introduction
The Minimum Spanning Tree (MST) problem—finding a subset of edges of least total weight that
connects all vertices—remains a central topic in algorithmic graph theory. Beyond the classical
algorithms of Kruskal and Prim, recent research focuses on scaling to massive, dynamic, and
specialized graph models while improving worst-case guarantees.
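For reference, the classical baseline these developments are measured against is Kruskal's algorithm combined with a union-find structure, which runs in $O(m\log m)$ time; a minimal C sketch (the array size and identifiers are illustrative):

```c
#include <stdlib.h>

#define MAXV 100000                     /* illustrative upper bound on vertices */

typedef struct { int u, v, w; } Edge;

static int parent[MAXV];                /* union-find parent array */

/* Find with path halving. */
static int find(int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

/* Compare edges by weight for qsort. */
static int cmpEdge(const void *a, const void *b) {
    const Edge *ea = a, *eb = b;
    return (ea->w > eb->w) - (ea->w < eb->w);
}

/* Total weight of an MST of a connected graph with n vertices and m edges. */
long long kruskal(Edge *e, int m, int n) {
    long long total = 0;
    int taken = 0;
    for (int i = 0; i < n; i++) parent[i] = i;
    qsort(e, m, sizeof(Edge), cmpEdge);
    for (int i = 0; i < m && taken < n - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                 /* the edge joins two components: keep it */
            parent[ru] = rv;
            total += e[i].w;
            taken++;
        }
    }
    return total;
}
```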
2. Advances in Static MST Algorithms
• Near-Linear Deterministic Bounds
o Pettie–Ramachandran Framework: Achieves an $O(m\,\alpha(m,n))$ time bound (where $\alpha$ is the inverse Ackermann function), matching the best randomized algorithms with a deterministic method.
o $O(m\log\log n)$ Algorithms: New techniques exploit improved priority-queue structures to push static MST time down to $O(m\log\log n)$, narrowing the gap to linear time.
• Planar and Minor-Closed Graphs
• Planar and Minor-Closed Graphs
o Linear Time for Planar MST: Leveraging planar separators, the MST of a planar graph can now be computed in truly $O(n)$ time.
3. Dynamic MST Maintenance
• Fully Dynamic Algorithms
o Holm, de Lichtenberg, and Thorup (2001) set the stage with polylogarithmic update times for edge insertions/deletions.
o Recent Practical Frameworks:
▪ Surveys such as Innovations in Dynamic MST Algorithms cover GPU-accelerated data structures, memory-bound optimizations, and representation learning for maintaining MSTs under continuous updates.
▪ Newer approaches use top-tree and link-cut-tree variants to achieve sub-logarithmic expected update times in practice.
• Incremental & Temporal MST
o AM-Tree Data Structure: Designed for temporal graphs where edges appear/disappear over time, offering $O(\log n)$ amortized update and query bounds, outperforming classic link-cut trees by 7–13× in benchmarks.
4. Parallel and Distributed MST
• Massively Parallel Computation (MPC) Models
o Algorithms now work in $O(1)$ rounds with $\tilde O(m)$ total work by combining
graph sparsification with clever contraction techniques.
o Graph-Contraction Frameworks permit near-optimal scaling on large clusters,
essential for web-scale networks.
5. Streaming and Approximate MST
• Semi-Streaming Model
o With $O(n\log n)$ space, semi-streaming algorithms can maintain a $(1+\varepsilon)$-approximate MST using $O(\log\frac{1}{\varepsilon})$ passes over the edge stream.
• High-Dimensional Data
o Randomized projection techniques yield approximate MSTs for metric data in
subquadratic time.
6. Specialized Graph Models
• Infinite and Countable Graphs
o Recent theoretical work extends MST existence and computation results to countably infinite graphs under suitable finiteness constraints.
• Multiobjective MST (MO-MST)
o New dynamic-programming algorithms handle Pareto-optimal trade-offs among multiple weight functions.
7. Conclusion
Continued progress in MST research blends deep theory with practical systems. From
deterministically nearly-linear static algorithms to highly efficient dynamic and parallel frameworks,
modern MST techniques address the challenges of massive, evolving, and specialized networks—
ensuring that the MST problem remains both theoretically rich and practically vital.