Data Structure 2023 Answer Paper

Model answer paper for Sen exam


Q1)

a) Explain Linked lists in detail


Ans:-A linked list is a linear data structure in which elements are not stored at contiguous memory locations; instead, they are linked using pointers. A linked list forms a series
of connected nodes, where each node stores the data and the address of the next node.
Node Structure: A node in a linked list typically consists of two components:
Data: It holds the actual value or data associated with the node.
Next Pointer or Reference : It stores the memory address (reference) of the next node
in the sequence.
Head and Tail: The linked list is accessed through the head node, which points to the
first node in the list. The last node in the list points to NULL or nullptr, indicating
the end of the list. This node is known as the tail node.
Types of linked lists:
There are mainly three types of linked lists:
Singly linked list
Doubly linked list
Circular linked list
1. Singly linked list:
In a singly linked list, each node contains a reference to the next node in the
sequence. Traversing a singly linked list is done in a forward direction.
2. Doubly linked list:
In a doubly linked list, each node contains references to both the next and previous
nodes. This allows for traversal in both forward and backward directions, but it
requires additional memory for the backward reference.
3. Circular linked list:
In a circular linked list, the last node points back to the head node, creating a
circular structure. It can be either singly or doubly linked.
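
As an illustration, the following is a minimal Python sketch of a singly linked list; the class names Node and LinkedList and the helper methods are chosen for this example, not taken from the question paper.

class Node:
    def __init__(self, data):
        self.data = data      # value stored in the node
        self.next = None      # reference to the next node (None marks the tail)

class LinkedList:
    def __init__(self):
        self.head = None      # entry point of the list

    def insert_at_end(self, data):
        new_node = Node(data)
        if self.head is None:              # empty list: new node becomes the head
            self.head = new_node
            return
        current = self.head
        while current.next is not None:    # walk to the tail
            current = current.next
        current.next = new_node            # link the new node after the tail

    def traverse(self):
        current = self.head
        while current is not None:         # forward traversal of a singly linked list
            print(current.data)
            current = current.next

# Usage: build 10 -> 20 -> 30 and print it
lst = LinkedList()
for value in (10, 20, 30):
    lst.insert_at_end(value)
lst.traverse()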

(b) List down the applications of stack.


Ans:- 1. Function calls: Stacks are used to keep track of the return addresses of
function calls, allowing the program to return to the correct location after a
function has finished executing.
2. Recursion: Stacks are used to store the local variables and return addresses of
recursive function calls, allowing the program to keep track of the current state
of the recursion.
3. Expression evaluation: Stacks are used to evaluate expressions in postfix
notation (Reverse Polish Notation).
4. Syntax parsing: Stacks are used to check the validity of syntax in programming
languages and other formal languages.
5. Memory management: Stacks are used to allocate and manage memory in some
operating systems and programming languages.
6. Expression conversion: a) infix to postfix, b) infix to prefix, c) postfix to infix,
d) prefix to infix.
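
To make application 3 concrete, here is a small Python sketch that evaluates a postfix expression using a list as a stack; the function name and the space-separated token format are assumptions made for this example.

def evaluate_postfix(expression):
    """Evaluate a space-separated postfix expression, e.g. '5 3 2 * +'."""
    stack = []
    for token in expression.split():
        if token.lstrip('-').isdigit():     # operand: push it
            stack.append(int(token))
        else:                               # operator: pop two operands
            right = stack.pop()
            left = stack.pop()
            if token == '+':
                stack.append(left + right)
            elif token == '-':
                stack.append(left - right)
            elif token == '*':
                stack.append(left * right)
            elif token == '/':
                stack.append(left // right)  # integer division for simplicity
    return stack.pop()                       # the final result is the only item left

print(evaluate_postfix('5 3 2 * +'))   # prints 11, i.e. 5 + 3 * 2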

(c) Explain winding and unwinding phase of recursion.


Ans:-In recursion, the winding and unwinding phases refer to the process of function
calls and their subsequent returns as the recursive function progresses and then
resolves. Understanding these phases is crucial for analyzing how recursive functions
work.
1. Winding Phase (Going deeper into recursion)
1:-The winding phase occurs when the recursive function keeps calling itself with a new argument.
2:-During this phase, each recursive call adds a new frame (which contains parameters, local variables, return addresses, etc.) to the call stack.
3:-The winding phase continues until a base case (or termination condition) is reached.
2. Unwinding Phase (Returning from recursion)
1:-The unwinding phase begins when the base case is reached, and the function starts returning values to its previous calls.
2:-Each return pops the corresponding function frame off the stack and hands over the return value to the previous function call.
3:-The unwinding phase continues until all recursive calls are resolved, and control returns to the initial function call.
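
A small factorial example in Python (a sketch, not part of the original answer) shows the two phases: the calls made before the base case form the winding phase, and the multiplications performed while returning form the unwinding phase.

def factorial(n):
    # Winding phase: each call below pushes a new frame onto the call stack
    if n == 0:                      # base case reached: winding stops here
        return 1
    # Unwinding phase: the multiplication happens only after the inner call returns
    return n * factorial(n - 1)

# factorial(3) winds as factorial(3) -> factorial(2) -> factorial(1) -> factorial(0),
# then unwinds as 1 -> 1*1 -> 2*1 -> 3*2 = 6
print(factorial(3))   # prints 6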

(d) Briefly explain memory fragmentation.


Ans:-Memory fragmentation in data structures occurs when memory is allocated and
deallocated in a way that leaves small, non-contiguous blocks of free memory
scattered throughout. There are two types of fragmentation:
1:-External fragmentation: This happens when there is enough total free memory to
satisfy a memory allocation request, but the available memory is split into small non-
contiguous blocks, making it impossible to allocate a large continuous block of
memory.
2:-Internal fragmentation: This occurs when memory is allocated in fixed-size blocks
(e.g., pages or chunks), and the actual memory requested is smaller than the
allocated block, leaving unused space within the allocated block.

Both types of fragmentation can lead to inefficient memory usage, slowing down
programs or causing them to run out of usable memory. This is a concern when
analyzing data structures that dynamically allocate memory, such as linked lists,
trees, or hash tables. Proper memory management strategies are needed to minimize
fragmentation and optimize performance.

Q2)
(a) Design an algorithm to implement circular queue using an array.
Ans:-
1:-Initialize the Circular Queue:-
Create an array of fixed size.
Set front and rear to -1 (representing an empty queue).
Set capacity to the size of the array.

2:-Enqueue Operation: Add an element to the queue.
Check if the queue is full: if (rear + 1) % capacity == front, the queue is full, so report overflow and stop.
If the queue is empty (i.e., front == -1), set front = 0 and rear = 0.
Else, update rear = (rear + 1) % capacity.
Insert the element at queue[rear].

3:-Dequeue Operation: Remove an element from the queue.
Check if the queue is empty (front == -1); if so, report underflow and stop.
Retrieve the element at queue[front].
If there is only one element left (i.e., front == rear), reset front = -1 and rear = -1.
Otherwise, update front = (front + 1) % capacity.
Return the retrieved element.

4:-Peek Operation: Retrieve the front element without dequeuing.
Check if the queue is empty (front == -1); if so, there is nothing to peek.
Return the element at queue[front].

5:-IsEmpty Operation: Check if the queue is empty (front == -1).


6:-IsFull Operation: Check if the queue is full ((rear + 1) % capacity == front).
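
The steps above can be collected into a small Python class; this is a sketch, and the class and method names are illustrative.

class CircularQueue:
    def __init__(self, capacity):
        self.queue = [None] * capacity   # fixed-size array
        self.capacity = capacity
        self.front = -1                  # -1 means the queue is empty
        self.rear = -1

    def is_empty(self):
        return self.front == -1

    def is_full(self):
        return (self.rear + 1) % self.capacity == self.front

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue is full")
        if self.is_empty():              # first element: front moves to 0
            self.front = 0
        self.rear = (self.rear + 1) % self.capacity
        self.queue[self.rear] = item

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        item = self.queue[self.front]
        if self.front == self.rear:      # last element removed: reset to empty
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % self.capacity
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        return self.queue[self.front]

# Usage
q = CircularQueue(3)
q.enqueue(1); q.enqueue(2); q.enqueue(3)
print(q.dequeue())   # 1
q.enqueue(4)         # wraps around into the freed slot
print(q.peek())      # 2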

(b) Explain quick sort with an example by giving its algorithm and comment on its
complexity.
Ans:-Quick Sort is a highly efficient sorting algorithm that uses a divide-and-conquer
strategy to sort an array or list. It works by selecting a 'pivot' element from the array
and partitioning the other elements into two sub-arrays according to whether they
are less than or greater than the pivot. The sub-arrays are then sorted recursively.
Algorithm:-
Step 1: Initialize the Quick Sort
Input: An array A of n elements, with indices ranging from 0 to n-1.
Call the Function:
Start the Quick Sort by calling quicksort(A, 0, n-1).
Step 2: Define the Quick Sort Function

Base Condition:

Check if low < high. If not, return (the sub-array is already sorted).
Partitioning:
Call the partition(A, low, high) function to partition the array.
Store the returned index as pivot_index.
Recursive Calls:
Recursively call quicksort(A, low, pivot_index - 1) for the left sub-array.
Recursively call quicksort(A, pivot_index + 1, high) for the right sub-array.

Step 3: Define the Partition Function


Choose a Pivot:
Select the last element of the sub-array as the pivot: pivot = A[high].
Initialize an index i to low - 1.
Rearranging Elements:
For each element A[j] from low to high - 1:
If A[j] < pivot:
Increment i (move the boundary for smaller elements).
Swap A[i] with A[j] (move the smaller element to the left).
Final Placement of Pivot:
After the loop, swap A[i + 1] with A[high] to place the pivot in its correct position.
Return the Pivot Index:
Return i + 1, which is the new index of the pivot in the sorted array.
Example:-Input Array: [10, 7, 8, 9, 1, 5]
Step 1: Choose pivot 5. After partitioning: [1, 5, 8, 9, 10, 7]
Step 2: Recursively sort [1] (left) and [8, 9, 10, 7] (right).
Step 3: Choose pivot 7 for the right sub-array, leading to [7, 9, 10, 8].
Step 4: Continue this process until fully sorted.
Final Sorted Array: [1, 5, 7, 8, 9, 10]

Complexity Analysis:-

Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n^2) (occurs with poor pivot choices)
Space Complexity: O(log n) (for the recursion stack)
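
For reference, here is a compact Python sketch of the algorithm described above (Lomuto partitioning with the last element as pivot; the function names are illustrative).

def quicksort(A, low, high):
    if low < high:
        pivot_index = partition(A, low, high)   # place one pivot correctly
        quicksort(A, low, pivot_index - 1)      # sort the left sub-array
        quicksort(A, pivot_index + 1, high)     # sort the right sub-array

def partition(A, low, high):
    pivot = A[high]                 # last element as pivot
    i = low - 1                     # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]             # move the smaller element left
    A[i + 1], A[high] = A[high], A[i + 1]       # put the pivot in its final position
    return i + 1

data = [10, 7, 8, 9, 1, 5]
quicksort(data, 0, len(data) - 1)
print(data)   # [1, 5, 7, 8, 9, 10]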

Q3)
(a) Write an algorithm to convert an infix expression to a postfix expression.
Ans:-Infix to Postfix Conversion: Simple Steps
Objective: Convert an infix expression (like A + B * C) to postfix notation (like A B C
* +).
Steps to Follow

1:-Initialize:
Create an empty stack for operators.
Create an empty list for the output (postfix expression).

2:-Define Operator Precedence:

+ and - have lower precedence (1).


* and / have higher precedence (2).

3:-Process Each Token:


Read the infix expression from left to right:
If the token is an operand (like A, B, C):
Add it to the output list.
If the token is a left parenthesis (:
Push it onto the stack.
If the token is a right parenthesis ):
Pop from the stack to the output list until you find a left parenthesis. Discard the left
parenthesis.
If the token is an operator (like +, -, *, /):
While there is an operator at the top of the stack with greater or equal precedence, pop it from the stack to the output list.
Then push the current operator onto the stack.

4:-Pop Remaining Operators:After reading the entire expression, pop all remaining
operators from the stack to the output list.

5:-Final Output:-The output list now contains the postfix expression.

Example Walkthrough
Infix Expression: A + B * C

Initialize:
output = []
stack = []

Process Tokens:
Read A: Add to output → output = [A]
Read +: Push to stack → stack = [+]
Read B: Add to output → output = [A, B]
Read *: Push to stack (higher precedence than +) → stack = [+, *]
Read C: Add to output → output = [A, B, C]

Pop Remaining Operators:


Pop *: output = [A, B, C, *]
Pop +: output = [A, B, C, *, +]

Final Output:The postfix expression is: A B C * +
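
The same procedure can be written as a short Python sketch; it assumes single-character operands, the four operators +, -, *, / and parentheses, and the function name is illustrative.

def infix_to_postfix(expression):
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    output = []
    stack = []
    for token in expression.replace(' ', ''):
        if token.isalnum():                      # operand: straight to the output
            output.append(token)
        elif token == '(':
            stack.append(token)
        elif token == ')':
            while stack and stack[-1] != '(':    # pop until the matching '('
                output.append(stack.pop())
            stack.pop()                          # discard the '(' itself
        else:                                    # operator
            while (stack and stack[-1] != '(' and
                   precedence[stack[-1]] >= precedence[token]):
                output.append(stack.pop())
            stack.append(token)
    while stack:                                 # pop any remaining operators
        output.append(stack.pop())
    return ' '.join(output)

print(infix_to_postfix('A + B * C'))      # A B C * +
print(infix_to_postfix('(A + B) * C'))    # A B + C *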


(b) Explain various collision resolution techniques in hashing.
Ans:-There are two types of collision resolution techniques.
Separate chaining (open hashing)
Open addressing (closed hashing)

Separate chaining: This method involves making a linked list out of the slot where
the collision happened, then adding the new key to the list. Separate chaining is the
term used to describe how this connected list of slots resembles a chain. It is more
frequently utilized when we are unsure of the number of keys to add or remove.
Time complexity:
Its worst-case complexity for searching is O(n).
Its worst-case complexity for deletion is O(n).

Advantages of separate chaining


It is easy to implement.
The hash table never fills up, since new elements can always be added to a chain.
It is less sensitive to the choice of hash function.

Disadvantages of separate chaining


The cache performance of chaining is poor, because the chained nodes are scattered in memory.
Memory wastage can be significant.
It requires extra space for the link pointers.

Open addressing: Open addressing is a collision-resolution technique in which no key is stored outside the hash table; every key occupies a slot of the table itself. As a result, the size of the table must be greater than or equal to the total number of keys. It is also known as closed hashing.

The following techniques are used in open addressing:


Linear probing
Quadratic probing
Double hashing

Linear probing: When a collision occurs, we probe the next slot linearly and keep probing until an empty slot is found.
The worst-case time to search for an element with linear probing is O(n), where n is the table size. Linear probing gives the best cache performance, but clustering is a concern. This method's key benefit is that it is simple to compute.

Disadvantages of linear probing:


The main problem is clustering.
It takes too much time to find an empty slot.

Quadratic probing: When a collision occurs, we probe the slot at offset i^2 in the i-th iteration, continuing until an empty slot is found. Compared to linear probing, quadratic probing has worse cache performance, but clustering is less of a concern.

Double hashing: Here a second hash function is used, and in the i-th iteration we probe the slot at offset i * hash2(x). Computing two hash functions takes more time. Although there is no clustering problem, the cache performance of double hashing is relatively poor.
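
As a rough illustration of the two families of techniques, the Python sketch below inserts keys with separate chaining and with linear probing; the table size of 7 and the division-method hash function are arbitrary choices made for this example.

TABLE_SIZE = 7

def h(key):
    return key % TABLE_SIZE            # simple division-method hash function

# Separate chaining: each slot holds a list (chain) of keys
chained = [[] for _ in range(TABLE_SIZE)]
def chain_insert(key):
    chained[h(key)].append(key)        # collisions simply extend the chain

# Open addressing with linear probing: all keys live inside the table itself
open_table = [None] * TABLE_SIZE
def probe_insert(key):
    index = h(key)
    for step in range(TABLE_SIZE):             # at most TABLE_SIZE probes
        slot = (index + step) % TABLE_SIZE     # linear probe: try the next slot
        if open_table[slot] is None:
            open_table[slot] = key
            return slot
    raise OverflowError("hash table is full")

for k in (10, 17, 24):                 # all three keys hash to 10 % 7 == 3
    chain_insert(k)
    probe_insert(k)
print(chained)      # slot 3 holds the chain [10, 17, 24]
print(open_table)   # 10 in slot 3, 17 in slot 4, 24 in slot 5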
Q4)

(a) Define Minimum Spanning Tree. Construct the minimum spanning tree of the graph shown in figure 1 using Kruskal's and Prim's algorithms, and find out the cost with all intermediate steps.

Ans:-A Minimum Spanning Tree (MST) of a connected, weighted, undirected graph is a spanning tree (a subset of the edges that connects all the vertices without forming a cycle) whose total edge weight is as small as possible.
Kruskal's Algorithm:
Kruskal’s algorithm builds the MST by sorting all edges by their weights and adding
the smallest edges one by one, ensuring no cycles are formed. The steps are as follows:
Steps:

Sort edges by increasing order of weight:


(A-B) = 1, (D-E) = 2, (E-F) = 2, (A-C) = 4, (D-F) = 4, (C-F) = 5, (B-E) = 6, (F-G) = 6, (E-G) = 7.

Pick the smallest edge and add it to the MST if it doesn't form a cycle.

Pick (A-B) with weight 1 → No cycle, add to MST.
Pick (D-E) with weight 2 → No cycle, add to MST.
Pick (E-F) with weight 2 → No cycle, add to MST.
Pick (A-C) with weight 4 → No cycle, add to MST.
Reject (D-F) with weight 4 → It would form the cycle D-E-F-D.
Pick (C-F) with weight 5 → No cycle, add to MST (this joins {A, B, C} with {D, E, F}).
Reject (B-E) with weight 6 → It would form a cycle.
Pick (F-G) with weight 6 → No cycle, add to MST.

All 7 vertices are now connected by 6 edges, so we stop. The total cost is:
1 + 2 + 2 + 4 + 5 + 6 = 20

Prim’s Algorithm:
Prim's algorithm starts with a single vertex and grows the MST by adding the
smallest edge that connects a vertex inside the MST to a vertex outside the MST.
Steps:
1:-Start with vertex A and add the smallest edge connected to A.
Add edge (A-B) with weight 1 → MST: (A-B), total cost: 1.
2:-From the vertices in the MST (A, B), choose the smallest edge connecting to a vertex outside the MST.
Add edge (A-C) with weight 4 → MST: (A-B), (A-C), total cost: 5.
3:-From the vertices in the MST (A, B, C), add the smallest edge leaving the MST.
Add edge (C-F) with weight 5 → MST: (A-B), (A-C), (C-F), total cost: 10.
4:-Continue adding the smallest edge that leaves the MST until all vertices are connected.
Add edge (E-F) with weight 2 → total cost: 12.
Add edge (D-E) with weight 2 → total cost: 14.
Add edge (F-G) with weight 6 → total cost: 20.
Minimum Spanning Tree (MST) using Prim's:
The MST consists of the edges (A-B), (A-C), (C-F), (E-F), (D-E), (F-G) with a total cost of 20, the same as with Kruskal's algorithm.
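
A Python sketch of Kruskal's algorithm (variable names are illustrative) using a simple union-find structure to detect cycles; the edge list is the one transcribed above from figure 1.

edges = [            # (weight, u, v) pairs as listed from figure 1
    (1, 'A', 'B'), (2, 'D', 'E'), (2, 'E', 'F'), (4, 'A', 'C'), (4, 'D', 'F'),
    (5, 'C', 'F'), (6, 'B', 'E'), (6, 'F', 'G'), (7, 'E', 'G'),
]

parent = {v: v for v in 'ABCDEFG'}     # union-find: each vertex starts in its own set

def find(v):
    while parent[v] != v:              # follow parents up to the set representative
        v = parent[v]
    return v

def union(u, v):
    parent[find(u)] = find(v)          # merge the two sets

mst, total = [], 0
for weight, u, v in sorted(edges):     # consider edges in increasing order of weight
    if find(u) != find(v):             # different sets: adding the edge forms no cycle
        union(u, v)
        mst.append((u, v, weight))
        total += weight

print(mst)     # the MST edges
print(total)   # total cost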

(c) Define AVL Tree. Step by step, construct an AVL tree for the following data: 23, 12, 25, 01, 45, 63, 27, 29, 90, 78, 5, 6, 10.
Q5)

(a) Write down the algorithm for addition of two polynomials


Ans:-
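A minimal sketch in Python, assuming each polynomial is represented as a list of (coefficient, exponent) pairs sorted by decreasing exponent; a linked-list representation would follow the same merge logic.

def add_polynomials(p, q):
    """Add two polynomials given as [(coeff, exp), ...] sorted by decreasing exp."""
    result = []
    i = j = 0
    while i < len(p) and j < len(q):
        if p[i][1] == q[j][1]:                 # same exponent: add the coefficients
            coeff = p[i][0] + q[j][0]
            if coeff != 0:
                result.append((coeff, p[i][1]))
            i += 1
            j += 1
        elif p[i][1] > q[j][1]:                # copy the term with the larger exponent
            result.append(p[i])
            i += 1
        else:
            result.append(q[j])
            j += 1
    result.extend(p[i:])                       # append whatever terms remain
    result.extend(q[j:])
    return result

# (3x^2 + 2x + 1) + (5x^3 + 4x + 6) = 5x^3 + 3x^2 + 6x + 7
print(add_polynomials([(3, 2), (2, 1), (1, 0)], [(5, 3), (4, 1), (6, 0)]))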
(b) Define Binary Search Tree. Give the algorithms for various tree traversals.
Ans:-A Binary Search Tree (BST) is a type of binary tree that has the following
properties:
Node Structure: Each node in a BST contains a key, a left child, and a right child.
Ordering Property:The key of any node in the left subtree is less than the key of the
parent node.
The key of any node in the right subtree is greater than the key of the parent node.
No Duplicates: BSTs typically do not allow duplicate keys, ensuring each key is
unique.
This structure allows for efficient searching, insertion, and deletion operations, with
an average time complexity of O(log n) for balanced trees.

Algorithms for Tree Traversals


Tree traversals are methods for visiting all the nodes in a tree data structure. There
are three main types of depth-first traversals for a BST:

In-Order Traversal (Left, Root, Right):

This traversal visits nodes in ascending order:-


Algorithm:-
InOrder(node):
    if node is not null:
        InOrder(node.left)
        visit(node)      // Process the current node (e.g., print the key)
        InOrder(node.right)
Pre-Order Traversal (Root, Left, Right):
This traversal is useful for creating a copy of the tree or expressing the tree structure.
Algorithm:
PreOrder(node):
    if node is not null:
        visit(node)      // Process the current node (e.g., print the key)
        PreOrder(node.left)
        PreOrder(node.right)
Post-Order Traversal (Left, Right, Root):
This traversal is useful for deleting the tree or calculating the height of the tree.
Algorithm:
PostOrder(node):
    if node is not null:
        PostOrder(node.left)
        PostOrder(node.right)
        visit(node)      // Process the current node (e.g., print the key)
Traversal Algorithms:
In-Order: Visits nodes in ascending order.
Pre-Order: Visits the root before its subtrees.
Post-Order: Visits the root after its subtrees.
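
For completeness, here is a compact Python sketch of a BST with insertion and the three traversals described above; the class and function names are illustrative.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:                  # empty spot found: place the new node here
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)   # duplicate keys are ignored
    return root

def in_order(node):                   # Left, Root, Right -> ascending order
    if node:
        in_order(node.left)
        print(node.key, end=' ')
        in_order(node.right)

def pre_order(node):                  # Root, Left, Right
    if node:
        print(node.key, end=' ')
        pre_order(node.left)
        pre_order(node.right)

def post_order(node):                 # Left, Right, Root
    if node:
        post_order(node.left)
        post_order(node.right)
        print(node.key, end=' ')

root = None
for key in (50, 30, 70, 20, 40, 60, 80):
    root = insert(root, key)
in_order(root)    # 20 30 40 50 60 70 80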

Q6) Solve any Four: (20)


a) Threaded Binary Tree
b) Depth First Search
c) Game Tree
d) Selection Sort
e) B+-tree
Ans:-a) Threaded Binary Tree
Definition:
A Threaded Binary Tree is a type of binary tree in which the null pointers are
replaced with pointers to the in-order predecessor and successor nodes. This allows
for an efficient traversal of the tree without using a stack or recursion.
Properties:
Threading: In a threaded binary tree, a node's null left or right pointer is replaced by a thread that points to its in-order predecessor or successor.
In-Order Traversal: This allows an efficient in-order traversal in O(n) time
without the need for recursion or an explicit stack.

B) Depth First Search
Definition:
Depth First Search (DFS) is a traversal technique for graphs and trees that starts at a source vertex and explores as far as possible along each branch before backtracking. It can be implemented recursively or with an explicit stack.
Properties:
Visited Marking: Each vertex is marked as visited so that it is processed only once.
Time Complexity: O(V + E) for a graph with V vertices and E edges stored as adjacency lists.
Applications: Cycle detection, topological sorting, and finding connected components.
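A minimal recursive DFS sketch in Python, assuming the graph is given as an adjacency-list dictionary; the sample graph is only an example.

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=' ')                   # process the vertex
    for neighbour in graph.get(start, []):  # go as deep as possible before backtracking
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
dfs(graph, 'A')    # prints: A B D C E
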
C) Game Tree
Definition:
A Game Tree is a conceptual representation of all possible moves in a game. Each
node represents a state of the game, and edges represent possible moves from one
state to another.
Properties:
Nodes: Each node represents a game state.
Edges: Each edge represents a player's move.
Leaves: Terminal nodes represent outcomes (win/loss/draw).

D) Selection Sort
Definition:
Selection Sort is a simple comparison-based sorting algorithm. In each pass it selects the smallest element from the unsorted part of the array and swaps it with the first unsorted element, so the sorted part grows by one position per pass.
Properties:
Time Complexity: O(n^2) comparisons in the best, average, and worst cases.
Space Complexity: O(1); it sorts in place.
Swaps: At most n - 1 swaps, which helps when writes are expensive.
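A minimal Python sketch of selection sort; the function name and sample data are illustrative.

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):          # find the smallest element in arr[i:]
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]   # move it into place
    return arr

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]
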
E) B+-Tree
Definition:
A B+-Tree is a self-balancing tree data structure that maintains sorted data and
allows for efficient insertion, deletion, and search operations. It is an extension of B-
trees where all values are stored at the leaf nodes.
Properties:
Balanced: All leaf nodes are at the same level.
Multi-way: Each node can have multiple children.
Order: Typically defined by a parameter m (the maximum number of children per node).
Leaf Nodes: Contain the actual data records (or pointers to them), and the leaves are linked together to support range queries.
