ADS Lab File-2

EXPERIMENT-1

AIM: Write a program to implement Linear Search and Binary Search using both recursive
and non-recursive functions and display the output.

THEORY:
Searching algorithms are fundamental in computer science and are used to locate specific
elements within a list or array.

1. Linear Search: This algorithm sequentially checks each element in the array until the
target is found or the end is reached. It has a time complexity of \( O(n) \).

2. Binary Search: This algorithm divides a sorted array into halves, continuously narrowing
down the search space. It has a time complexity of \( O(\log n) \). Binary search requires the
array to be sorted beforehand.

SOURCE CODE:

# Linear Search (Non-Recursive)
def linear_search(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1

# Linear Search (Recursive)
def linear_search_recursive(arr, x, index=0):
    if index >= len(arr):
        return -1
    if arr[index] == x:
        return index
    return linear_search_recursive(arr, x, index + 1)

# Binary Search (Non-Recursive)
def binary_search(arr, x):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1

# Binary Search (Recursive)
def binary_search_recursive(arr, x, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    elif arr[mid] < x:
        return binary_search_recursive(arr, x, mid + 1, high)
    else:
        return binary_search_recursive(arr, x, low, mid - 1)

# Test Data and Execution
arr = [2, 3, 4, 10, 40]
x = 10

# Linear Search Outputs
print("Non-Recursive Linear Search Output:", linear_search(arr, x))
print("Recursive Linear Search Output:", linear_search_recursive(arr, x))

# Binary Search Outputs (array must be sorted)
print("Non-Recursive Binary Search Output:", binary_search(arr, x))
print("Recursive Binary Search Output:", binary_search_recursive(arr, x))

OUTPUT:

Assuming x = 10 and arr = [2, 3, 4, 10, 40]

Non-Recursive Linear Search Output: 3

Recursive Linear Search Output: 3

Non-Recursive Binary Search Output: 3

Recursive Binary Search Output: 3

LEARNINGS:

1. Linear Search is simple but less efficient for large datasets, as it checks each element sequentially.
2. Binary Search is more efficient with sorted arrays, dividing the search space with each comparison.
3. Recursive implementations use the call stack for iteration, which may lead to higher memory usage
but can be more concise.
4. Non-recursive methods avoid the overhead of recursion, which can be beneficial for larger data
sets.
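As a cross-check of the binary search results, Python's standard library offers bisect, which performs the same halving strategy on a sorted list. A minimal sketch, assuming the same arr and x as above:

from bisect import bisect_left

arr = [2, 3, 4, 10, 40]
x = 10
i = bisect_left(arr, x)  # Index where x would be inserted to keep arr sorted
if i < len(arr) and arr[i] == x:
    print("bisect found", x, "at index", i)  # Matches the hand-written searches: 3
else:
    print(x, "not found")
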
EXPERIMENT-2

AIM: Write a program to implement Stack ADT and Queue ADT using arrays and display
the output.

THEORY:
Stacks and Queues are fundamental data structures in computer science.

1. Stack ADT:
A stack follows the Last In First Out (LIFO) principle, where the last inserted element is the
first one to be removed. It has two main operations:
- Push: Adds an element to the top of the stack.
- Pop: Removes the top element from the stack.

2. Queue ADT:
A queue follows the First In First Out (FIFO) principle, where the first inserted element is
the first one to be removed. It has two main operations:
- Enqueue: Adds an element to the end of the queue.
- Dequeue: Removes the front element from the queue.

SOURCE CODE:

# Stack implementation using a list
class Stack:
    def __init__(self):
        self.stack = []

    # Push operation
    def push(self, item):
        self.stack.append(item)
        print(f"Pushed {item}")

    # Pop operation
    def pop(self):
        if not self.is_empty():
            item = self.stack.pop()
            print(f"Popped {item}")
            return item
        else:
            print("Stack is empty")
            return None

    # Check if stack is empty
    def is_empty(self):
        return len(self.stack) == 0

    # Display stack
    def display(self):
        print("Stack:", self.stack)

# Queue implementation using a list
class Queue:
    def __init__(self):
        self.queue = []

    # Enqueue operation
    def enqueue(self, item):
        self.queue.append(item)
        print(f"Enqueued {item}")

    # Dequeue operation
    def dequeue(self):
        if not self.is_empty():
            item = self.queue.pop(0)
            print(f"Dequeued {item}")
            return item
        else:
            print("Queue is empty")
            return None

    # Check if queue is empty
    def is_empty(self):
        return len(self.queue) == 0

    # Display queue
    def display(self):
        print("Queue:", self.queue)

# Test Data and Execution

# Stack Operations
print("Stack Operations:")
stack = Stack()
stack.push(10)
stack.push(20)
stack.push(30)
stack.display()
stack.pop()
stack.display()

# Queue Operations
print("\nQueue Operations:")
queue = Queue()
queue.enqueue(1)
queue.enqueue(2)
queue.enqueue(3)
queue.display()
queue.dequeue()
queue.display()

OUTPUT:

Stack Operations:

Pushed 10

Pushed 20

Pushed 30

Stack: [10, 20, 30]

Popped 30

Stack: [10, 20]

Queue Operations:

Enqueued 1

Enqueued 2

Enqueued 3

Queue: [1, 2, 3]

Dequeued 1

Queue: [2, 3]

LEARNINGS:

1. Stack ADT is useful in scenarios where LIFO access is required, such as function call
stacks or undo mechanisms in software.
2. Queue ADT is effective for FIFO access, suitable for task scheduling, buffering, and real-
time processing applications.
3. Implementing these ADTs using arrays (lists in Python) is straightforward, though for
large-scale applications, circular buffers or linked lists may be more efficient for queues.
4. Understanding ADTs helps in choosing the right data structure for specific problem
requirements in software development.
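Learning 3 can be seen directly in Python: list.pop(0) shifts every remaining element, so each dequeue costs O(n), while collections.deque pops from the front in O(1). A minimal sketch of the same queue operations using deque:

from collections import deque

queue = deque()
queue.append(1)      # Enqueue at the rear
queue.append(2)
queue.append(3)
print("Queue:", list(queue))
queue.popleft()      # Dequeue from the front in O(1), unlike list.pop(0)
print("Queue:", list(queue))
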
EXPERIMENT-3

AIM: Write a program to read an infix expression and convert it to postfix notation using
Stack ADT.

THEORY:
Infix expressions are expressions in which operators are placed between operands, e.g., A +
B. Postfix expressions, on the other hand, place operators after the operands, e.g., A B +.
Postfix notation is easier to evaluate in a stack-based approach as it does not require
parentheses to dictate operator precedence.

1. Infix to Postfix Conversion:


- Operands (e.g., A, B, 1, 2) are directly added to the postfix expression.
- Operators (e.g., +, -, *, /) are pushed onto a stack, but only after popping operators of
higher or equal precedence already on the stack.
- Parentheses help control the order of operations:
- '(' is pushed to the stack to signify a sub-expression.
- ')' causes popping until the corresponding '(' is encountered.

By following these rules, an infix expression can be converted into postfix notation using a
stack.

SOURCE CODE:

# Stack implementation for expression conversion
class Stack:
    def __init__(self):
        self.stack = []

    def push(self, item):
        self.stack.append(item)

    def pop(self):
        if not self.is_empty():
            return self.stack.pop()
        return None

    def peek(self):
        if not self.is_empty():
            return self.stack[-1]
        return None

    def is_empty(self):
        return len(self.stack) == 0

# Function to determine precedence of operators
def precedence(op):
    if op in ('+', '-'):
        return 1
    if op in ('*', '/'):
        return 2
    return 0

# Function to convert infix to postfix
def infix_to_postfix(expression):
    stack = Stack()
    postfix = []
    for char in expression:
        if char.isalnum():  # If operand, add it to output
            postfix.append(char)
        elif char == '(':   # If '(', push onto stack
            stack.push(char)
        elif char == ')':   # If ')', pop until '('
            while not stack.is_empty() and stack.peek() != '(':
                postfix.append(stack.pop())
            stack.pop()  # Pop '(' from the stack
        else:  # Operator encountered
            while not stack.is_empty() and precedence(stack.peek()) >= precedence(char):
                postfix.append(stack.pop())
            stack.push(char)
    # Pop all remaining operators in the stack
    while not stack.is_empty():
        postfix.append(stack.pop())
    return ''.join(postfix)

# Test Data and Execution
expression = "A*(B+C)/D"
print("Infix Expression:", expression)
print("Postfix Expression:", infix_to_postfix(expression))

OUTPUT:

Assuming the infix expression is A*(B+C)/D:

Infix Expression: A*(B+C)/D

Postfix Expression: ABC+*D/


LEARNINGS:

1. Infix to Postfix Conversion simplifies expression evaluation, as postfix expressions do not
require parentheses.
2. The Stack ADT is essential for handling operator precedence and parentheses efficiently.
3. Operator Precedence and Associativity are fundamental in infix-to-postfix conversion.
Higher precedence operators are applied before lower precedence ones, and associativity
rules dictate the order for operators of the same precedence.
4. Understanding postfix notation and stack-based algorithms is valuable in compiler design
and expression evaluation in programming.
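Learning 1 can be demonstrated by evaluating a numeric postfix expression with a single stack and no parenthesis handling at all. A minimal sketch, assuming single-digit operands so that each character is one token:

def evaluate_postfix(expression):
    stack = []
    for char in expression:
        if char.isdigit():
            stack.append(int(char))
        else:
            b = stack.pop()  # Right operand is popped first
            a = stack.pop()
            if char == '+': stack.append(a + b)
            elif char == '-': stack.append(a - b)
            elif char == '*': stack.append(a * b)
            elif char == '/': stack.append(a / b)
    return stack.pop()

print(evaluate_postfix("23+4*"))  # (2 + 3) * 4 = 20
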
EXPERIMENT-4

AIM: Write a program to implement Stack ADT and Queue ADT using a singly linked list
and display the output.

THEORY:
Stacks and Queues can be implemented using a singly linked list to dynamically manage
memory, which is efficient for applications where data can grow or shrink dynamically.

1. Stack ADT using Linked List:


A stack follows the Last In First Out (LIFO) principle, where the last inserted element is the
first one to be removed. Operations include:
- Push: Inserts an element at the beginning (top) of the stack.
- Pop: Removes the element from the beginning (top) of the stack.

2. Queue ADT using Linked List:


A queue follows the First In First Out (FIFO) principle, where the first inserted element is
the first one to be removed. Operations include:
- Enqueue: Inserts an element at the end (rear) of the queue.
- Dequeue: Removes an element from the beginning (front) of the queue.

SOURCE CODE:

# Node class for linked list
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

# Stack implementation using linked list
class Stack:
    def __init__(self):
        self.top = None

    # Push operation
    def push(self, item):
        new_node = Node(item)
        new_node.next = self.top
        self.top = new_node
        print(f"Pushed {item}")

    # Pop operation
    def pop(self):
        if self.is_empty():
            print("Stack is empty")
            return None
        item = self.top.data
        self.top = self.top.next
        print(f"Popped {item}")
        return item

    # Check if stack is empty
    def is_empty(self):
        return self.top is None

    # Display stack elements
    def display(self):
        current = self.top
        print("Stack:", end=" ")
        while current:
            print(current.data, end=" ")
            current = current.next
        print()

# Queue implementation using linked list
class Queue:
    def __init__(self):
        self.front = self.rear = None

    # Enqueue operation
    def enqueue(self, item):
        new_node = Node(item)
        if self.rear is None:
            self.front = self.rear = new_node
        else:
            self.rear.next = new_node
            self.rear = new_node
        print(f"Enqueued {item}")

    # Dequeue operation
    def dequeue(self):
        if self.is_empty():
            print("Queue is empty")
            return None
        item = self.front.data
        self.front = self.front.next
        if self.front is None:
            self.rear = None
        print(f"Dequeued {item}")
        return item

    # Check if queue is empty
    def is_empty(self):
        return self.front is None

    # Display queue elements
    def display(self):
        current = self.front
        print("Queue:", end=" ")
        while current:
            print(current.data, end=" ")
            current = current.next
        print()

# Test Data and Execution

# Stack Operations
print("Stack Operations:")
stack = Stack()
stack.push(10)
stack.push(20)
stack.push(30)
stack.display()
stack.pop()
stack.display()

# Queue Operations
print("\nQueue Operations:")
queue = Queue()
queue.enqueue(1)
queue.enqueue(2)
queue.enqueue(3)
queue.display()
queue.dequeue()
queue.display()
OUTPUT:

Stack Operations:

Pushed 10

Pushed 20

Pushed 30

Stack: 30 20 10

Popped 30

Stack: 20 10

Queue Operations:

Enqueued 1

Enqueued 2

Enqueued 3

Queue: 1 2 3

Dequeued 1

Queue: 2 3

LEARNINGS:

1. Stack ADT using linked lists is efficient in terms of memory, as it dynamically grows and
shrinks, making it suitable for applications requiring LIFO access.
2. Queue ADT using linked lists also benefits from dynamic memory allocation and is
efficient for FIFO access, useful for tasks like scheduling and buffering.
3. Using Linked Lists to implement these ADTs provides flexibility and overcomes the
limitations of static array sizes.
4. Understanding linked lists in data structure implementation enhances efficiency and
memory management, which is crucial in system-level programming and application
development.

EXPERIMENT-5

AIM: Write a program to perform the following operations on a Binary Search Tree (BST):
1. Construct a binary search tree of elements.
2. Search for a key element in the binary search tree.
3. Delete an element from the binary search tree.

THEORY:
A Binary Search Tree (BST) is a data structure in which each node has at most two children.
For any node in the BST:
- The left subtree contains nodes with values less than the node's value.
- The right subtree contains nodes with values greater than the node's value.

BSTs are commonly used for searching and sorting data due to their efficient insert, search,
and delete operations.

1. Insertion
- Begin at the root.
- If the tree is empty, the new element becomes the root.
- Otherwise, if the element is smaller, recursively insert it into the left subtree; if larger,
insert it into the right subtree.

2. Search
- Start from the root and compare the key to be searched.
- If the key matches, return it; otherwise, search the left or right subtree based on the key's
value relative to the current node.

3. Deletion
- If the node has no children, simply remove it.
- If the node has one child, replace it with its child.
- If the node has two children, find the in-order successor (smallest node in the right
subtree) or predecessor (largest node in the left subtree), replace the node's value with it, and
delete the successor/predecessor.

SOURCE CODE:

# Node class for the Binary Search Tree
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

# Binary Search Tree class
class BinarySearchTree:
    def __init__(self):
        self.root = None

    # Insert a node
    def insert(self, key):
        self.root = self._insert_recursive(self.root, key)

    def _insert_recursive(self, root, key):
        if root is None:
            return TreeNode(key)
        if key < root.key:
            root.left = self._insert_recursive(root.left, key)
        else:
            root.right = self._insert_recursive(root.right, key)
        return root

    # Search for a node
    def search(self, key):
        return self._search_recursive(self.root, key)

    def _search_recursive(self, root, key):
        if root is None or root.key == key:
            return root
        if key < root.key:
            return self._search_recursive(root.left, key)
        return self._search_recursive(root.right, key)

    # Delete a node
    def delete(self, key):
        self.root = self._delete_recursive(self.root, key)

    def _delete_recursive(self, root, key):
        if root is None:
            return root
        if key < root.key:
            root.left = self._delete_recursive(root.left, key)
        elif key > root.key:
            root.right = self._delete_recursive(root.right, key)
        else:
            # Node with one or no child
            if root.left is None:
                return root.right
            elif root.right is None:
                return root.left
            # Node with two children: replace with in-order successor
            min_larger_node = self._min_value_node(root.right)
            root.key = min_larger_node.key
            root.right = self._delete_recursive(root.right, min_larger_node.key)
        return root

    def _min_value_node(self, root):
        current = root
        while current.left is not None:
            current = current.left
        return current

    # Display the tree (In-order traversal)
    def inorder(self, root):
        if root is not None:
            self.inorder(root.left)
            print(root.key, end=' ')
            self.inorder(root.right)

# Test Data and Execution
bst = BinarySearchTree()
elements = [50, 30, 20, 40, 70, 60, 80]
for el in elements:
    bst.insert(el)

print("In-order traversal of the BST:")
bst.inorder(bst.root)
print("\n")

# Searching for an element
key_to_search = 40
search_result = bst.search(key_to_search)
if search_result:
    print(f"Element {key_to_search} found in the BST.")
else:
    print(f"Element {key_to_search} not found in the BST.")

# Deleting an element
key_to_delete = 20
print(f"\nDeleting element {key_to_delete} from the BST.")
bst.delete(key_to_delete)
print("In-order traversal after deletion:")
bst.inorder(bst.root)
print()

OUTPUT:

In-order traversal of the BST:

20 30 40 50 60 70 80

Element 40 found in the BST.

Deleting element 20 from the BST.

In-order traversal after deletion:

30 40 50 60 70 80

LEARNINGS:

1. Binary Search Trees (BST) provide an efficient structure for ordered data, enabling O(log
n) time complexity for search, insert, and delete operations in balanced cases.
2. In-order traversal of a BST retrieves elements in sorted order.
3. Search Operation in a BST starts from the root and traverses either left or right depending
on the key value, making it efficient for ordered data.
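The O(log n) bound in Learning 1 holds only when the tree stays balanced; inserting keys in sorted order degrades the BST into a linked list with O(n) search. A small sketch using the BinarySearchTree class above to expose the worst-case height:

def height(node):
    # Height of a subtree: 0 for an empty tree
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

skewed = BinarySearchTree()
for key in [10, 20, 30, 40, 50]:  # Sorted insertions: every node goes right
    skewed.insert(key)
print("Height of skewed BST:", height(skewed.root))  # 5, equal to n
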

EXPERIMENT-6

AIM: Write a program to implement all the functions of a dictionary (ADT) using Hashing.

THEORY:
A Dictionary (or Hash Table) is a data structure that stores key-value pairs, allowing for
efficient insertion, deletion, and search operations. Hashing is a technique used to map data
(keys) to a specific location (hash table index) for fast access.

1. Hashing:
- A hash function converts a key into an integer index within a hash table. This index helps
locate the associated value quickly.
- Common hash functions include modulo operation, multiplication, and division.
- Collision Handling: Collisions occur when two keys hash to the same index. Techniques
like chaining (linked lists at each index) or open addressing (probing for the next available
slot) handle collisions.

2. Dictionary Operations using Hashing:


- Insert: Hash the key to find the index, and add the key-value pair at that index.
- Search: Hash the key to locate its index and retrieve the associated value.
- Delete: Hash the key to find the index, and remove the key-value pair from the hash table.

SOURCE CODE:

# Hash table implementation with basic dictionary functions
class HashTable:
    def __init__(self, size):
        self.size = size
        self.table = [[] for _ in range(size)]  # Using chaining to handle collisions

    # Hash function
    def hash_function(self, key):
        return hash(key) % self.size

    # Insert key-value pair
    def insert(self, key, value):
        index = self.hash_function(key)
        for kv in self.table[index]:
            if kv[0] == key:
                kv[1] = value  # Update value if key already exists
                print(f"Updated key '{key}' with value '{value}'")
                return
        self.table[index].append([key, value])
        print(f"Inserted key '{key}' with value '{value}'")

    # Search for a key
    def search(self, key):
        index = self.hash_function(key)
        for kv in self.table[index]:
            if kv[0] == key:
                print(f"Found key '{key}' with value '{kv[1]}'")
                return kv[1]
        print(f"Key '{key}' not found")
        return None

    # Delete a key-value pair
    def delete(self, key):
        index = self.hash_function(key)
        for i, kv in enumerate(self.table[index]):
            if kv[0] == key:
                del self.table[index][i]
                print(f"Deleted key '{key}'")
                return True
        print(f"Key '{key}' not found for deletion")
        return False

    # Display the hash table
    def display(self):
        print("Hash Table:")
        for i, bucket in enumerate(self.table):
            print(f"Index {i}: {bucket}")

# Test Data and Execution
hash_table = HashTable(10)  # Create hash table of size 10
hash_table.insert("apple", 10)
hash_table.insert("banana", 20)
hash_table.insert("orange", 30)
hash_table.display()

# Search for a key
key_to_search = "banana"
hash_table.search(key_to_search)

# Delete a key
key_to_delete = "apple"
hash_table.delete(key_to_delete)
hash_table.display()

OUTPUT:

(One possible run; Python randomizes string hashing per process, so the exact bucket indices can differ between runs.)

Inserted key 'apple' with value '10'

Inserted key 'banana' with value '20'

Inserted key 'orange' with value '30'

Hash Table:

Index 0: []

Index 1: []

Index 2: []

Index 3: []

Index 4: []

Index 5: []

Index 6: [['orange', 30]]

Index 7: [['apple', 10]]

Index 8: []

Index 9: [['banana', 20]]

Found key 'banana' with value '20'

Deleted key 'apple'

Hash Table:

Index 0: []

Index 1: []

Index 2: []

Index 3: []

Index 4: []

Index 5: []

Index 6: [['orange', 30]]

Index 7: []

Index 8: []

Index 9: [['banana', 20]]


LEARNINGS:

1. Hash Tables provide efficient key-value storage, allowing constant time average-case
complexity for insert, search, and delete operations.
2. Hash Functions are crucial to mapping keys to table indices. Choosing a good hash
function minimizes collisions, improving the efficiency of the hash table.
3. Collision Handling with Chaining allows multiple elements to occupy the same index, with
each index containing a list of key-value pairs.
4. Understanding hash tables is essential for implementing dictionaries and optimizing data
retrieval, which are foundational in database indexing and memory management.
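For comparison, Python's built-in dict is itself a hash table (it resolves collisions with open addressing rather than chaining), so the same three operations look like this:

d = {}
d["apple"] = 10           # Insert
d["banana"] = 20
print(d.get("banana"))    # Search: 20
del d["apple"]            # Delete
print(d)                  # {'banana': 20}
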
EXPERIMENT-7

AIM: Write a program that uses both recursive and non-recursive functions to traverse a
given binary tree in:
a) Preorder
b) Inorder
c) Postorder

THEORY:
Binary tree traversal is a process of visiting each node in a specific order. There are multiple
traversal methods, but three common types are Preorder, Inorder, and Postorder traversals.
Each traversal order serves different purposes and has distinct applications in tree operations.

1. Traversal Types:
- Preorder (Root, Left, Right): In Preorder traversal, the root node is visited first,
followed by the left subtree and then the right subtree. It's useful for creating a copy of the
tree or getting a prefix expression.
- Inorder (Left, Root, Right): In Inorder traversal, the left subtree is visited first, followed
by the root node, and then the right subtree. This order is commonly used for binary search
trees, as it visits nodes in ascending order.
- Postorder (Left, Right, Root): In Postorder traversal, the left subtree is visited first,
followed by the right subtree, and finally the root node. This order is useful for deleting a tree
or getting a postfix expression.

2. Recursive and Non-Recursive Approaches:


- Recursive: A straightforward method that leverages the call stack for traversal. The
recursive method for each traversal type follows a specific order of function calls to navigate
the nodes.
- Non-Recursive (Iterative): Uses an explicit stack (usually a list) to simulate the traversal
order without relying on recursion. This approach is often necessary in systems where
recursive depth could exceed memory limits.
SOURCE CODE:

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Recursive Traversals
def preorder_recursive(root):
    if root:
        print(root.data, end=" ")
        preorder_recursive(root.left)
        preorder_recursive(root.right)

def inorder_recursive(root):
    if root:
        inorder_recursive(root.left)
        print(root.data, end=" ")
        inorder_recursive(root.right)

def postorder_recursive(root):
    if root:
        postorder_recursive(root.left)
        postorder_recursive(root.right)
        print(root.data, end=" ")

# Non-Recursive Traversals
def preorder_iterative(root):
    if not root:
        return
    stack = [root]
    while stack:
        node = stack.pop()
        print(node.data, end=" ")
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)

def inorder_iterative(root):
    stack = []
    current = root
    while stack or current:
        while current:
            stack.append(current)
            current = current.left
        current = stack.pop()
        print(current.data, end=" ")
        current = current.right

def postorder_iterative(root):
    if not root:  # Guard against an empty tree
        return
    stack1 = [root]
    stack2 = []
    while stack1:
        node = stack1.pop()
        stack2.append(node)
        if node.left:
            stack1.append(node.left)
        if node.right:
            stack1.append(node.right)
    while stack2:
        node = stack2.pop()
        print(node.data, end=" ")

# Constructing a sample binary tree for demonstration
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)

print("Recursive Preorder Traversal:")
preorder_recursive(root)
print("\nIterative Preorder Traversal:")
preorder_iterative(root)
print("\nRecursive Inorder Traversal:")
inorder_recursive(root)
print("\nIterative Inorder Traversal:")
inorder_iterative(root)
print("\nRecursive Postorder Traversal:")
postorder_recursive(root)
print("\nIterative Postorder Traversal:")
postorder_iterative(root)
OUTPUT:

Recursive Preorder Traversal:

1 2 4 5 3 6 7

Iterative Preorder Traversal:

1 2 4 5 3 6 7

Recursive Inorder Traversal:

4 2 5 1 6 3 7

Iterative Inorder Traversal:

4 2 5 1 6 3 7

Recursive Postorder Traversal:

4 5 2 6 7 3 1

Iterative Postorder Traversal:

4 5 2 6 7 3 1

LEARNINGS:

1. Recursive Traversals provide a straightforward and clean approach to tree traversal by
leveraging the function call stack. However, recursion can be memory-intensive for deep
trees.
2. Non-Recursive Traversals simulate recursion using explicit stacks, which can handle large
trees more efficiently in terms of memory, as they avoid potential stack overflow issues.
3. Preorder Traversal is useful when the root node needs to be processed first, Inorder
Traversal is ideal for sorting (especially in binary search trees), and Postorder Traversal is
useful when child nodes need to be processed before the parent.
4. Mastering both recursive and iterative methods for binary tree traversal is essential for a
strong understanding of tree data structures and algorithms.
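The memory concern in Learning 1 is concrete in Python: CPython caps recursion depth (1000 frames by default), so recursive traversal of a degenerate tree deeper than that raises RecursionError, while the iterative versions are limited only by available memory. A minimal check:

import sys

print(sys.getrecursionlimit())    # Typically 1000 in CPython
# sys.setrecursionlimit(10_000)   # Can be raised, at the risk of a real stack overflow
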
EXPERIMENT-8

AIM: Write a program for the implementation of Breadth-First Search (BFS) and Depth-First
Search (DFS) for a graph.

THEORY:
Graphs are data structures that consist of a collection of vertices (nodes) and edges
connecting pairs of vertices. BFS and DFS are two fundamental graph traversal algorithms
used to explore nodes and edges of a graph in different ways. They serve as the foundation
for various graph-based algorithms, such as finding shortest paths, detecting cycles, and
checking connectivity.

1. Breadth-First Search (BFS):


- BFS explores a graph layer by layer, starting from a given node and exploring all its
neighbors before moving to their neighbors.
- BFS uses a queue to keep track of the nodes to be visited, ensuring that nodes are visited
in the order they were discovered.
- BFS is often used to find the shortest path in an unweighted graph and to check for
bipartiteness.

2. Depth-First Search (DFS):


- DFS explores a graph by going as deep as possible along a branch before backtracking to
explore other branches.
- DFS can be implemented using either a stack (in the iterative version) or recursion (which
implicitly uses the call stack).
- DFS is useful for detecting cycles, finding connected components, and performing
topological sorting in directed acyclic graphs.

SOURCE CODE:

from collections import defaultdict, deque

# Graph class using adjacency list representation
class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    # Add edge to the graph
    def add_edge(self, u, v):
        self.graph[u].append(v)

    # Breadth-First Search (BFS)
    def bfs(self, start):
        visited = set()         # To keep track of visited nodes
        queue = deque([start])  # Initialize queue with the starting node
        visited.add(start)
        print("BFS Traversal:", end=" ")
        while queue:
            node = queue.popleft()
            print(node, end=" ")
            # Explore neighbors
            for neighbor in self.graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        print()

    # Depth-First Search (DFS) - Recursive
    def dfs_recursive(self, node, visited=None):
        if visited is None:
            visited = set()
            # Print the label only on the initial call, not on every recursive call
            print("DFS Recursive Traversal:", end=" ")
        visited.add(node)
        print(node, end=" ")
        # Recur for all neighbors
        for neighbor in self.graph[node]:
            if neighbor not in visited:
                self.dfs_recursive(neighbor, visited)

    # Depth-First Search (DFS) - Iterative
    def dfs_iterative(self, start):
        visited = set()
        stack = [start]
        print("\nDFS Iterative Traversal:", end=" ")
        while stack:
            node = stack.pop()
            if node not in visited:
                print(node, end=" ")
                visited.add(node)
                # Add neighbors to the stack in reverse order to process them correctly
                for neighbor in reversed(self.graph[node]):
                    if neighbor not in visited:
                        stack.append(neighbor)
        print()

# Example graph and execution
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)

# Performing BFS and DFS
start_node = 2
print("Starting Node:", start_node)
g.bfs(start_node)
g.dfs_recursive(start_node)
g.dfs_iterative(start_node)

OUTPUT:

Starting Node: 2

BFS Traversal: 2 0 3 1

DFS Recursive Traversal: 2 0 1 3

DFS Iterative Traversal: 2 0 1 3

LEARNINGS:

1. BFS (Breadth-First Search) is useful for exploring a graph level by level and is commonly
used to find the shortest path in unweighted graphs. BFS requires a queue data structure to
track the nodes to be visited.
2. DFS (Depth-First Search) explores the graph by diving deep into one branch before
backtracking. DFS can be implemented recursively or iteratively using a stack. Recursive
DFS utilizes the call stack, while iterative DFS explicitly uses a stack.
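Learning 1's shortest-path claim follows because BFS visits nodes in order of increasing distance from the start: the first time a node is discovered is along a shortest path. A minimal sketch recording unweighted distances, assuming the same Graph class, deque import, and edges as above:

def bfs_distances(g, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in g.graph[node]:
            if neighbor not in dist:  # First discovery = shortest path
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

print(bfs_distances(g, 2))  # {2: 0, 0: 1, 3: 1, 1: 2}
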

EXPERIMENT-9

AIM: Write a program for the implementation of the following sorting methods:
a) Bubble Sort
b) Insertion Sort
c) Quick Sort
d) Merge Sort
e) Heap Sort
f) Radix Sort
g) Binary Tree Sort

THEORY:
Sorting is a fundamental operation in computer science that rearranges elements in a
particular order, such as ascending or descending. Sorting algorithms differ in terms of their
time complexity, space complexity, stability, and whether they are comparison-based or not.
The following are brief descriptions of each sorting algorithm:

1. Bubble Sort: A simple comparison-based algorithm where each pair of adjacent elements
is compared, and elements are swapped if they are in the wrong order. It is inefficient for
large lists with a time complexity of \(O(n^2)\).

2. Insertion Sort: Builds the sorted list one element at a time by inserting each element into
its correct position. Efficient for small data sets and partially sorted lists, with a time
complexity of \(O(n^2)\).

3. Quick Sort: A divide-and-conquer algorithm that partitions the list into two sublists based
on a pivot element and recursively sorts the sublists. Its average time complexity is \(O(n \log
n)\).
4. Merge Sort: A stable, comparison-based sorting algorithm that divides the list into halves,
recursively sorts each half, and merges them. Its time complexity is \(O(n \log n)\).

5. Heap Sort: A comparison-based sorting algorithm that builds a max heap and extracts the
maximum element repeatedly. Its time complexity is \(O(n \log n)\).

6. Radix Sort: A non-comparison-based sorting algorithm that sorts numbers digit by digit
from the least significant to the most significant. It works efficiently on integers with a time
complexity of \(O(d \cdot (n + b))\), where \(d\) is the number of digits and \(b\) is the base.

7. Binary Tree Sort: Inserts elements into a binary search tree and performs an in-order
traversal to retrieve the sorted order. Its average time complexity is \(O(n \log n)\).

SOURCE CODE:

# Bubble Sort
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Insertion Sort
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

# Quick Sort
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

# Merge Sort
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]
        merge_sort(left_half)
        merge_sort(right_half)
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
    return arr

# Heap Sort
def heapify(arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[largest] < arr[left]:
        largest = left
    if right < n and arr[largest] < arr[right]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):
        arr[i], arr[0] = arr[0], arr[i]
        heapify(arr, i, 0)
    return arr

# Radix Sort
def counting_sort(arr, exp):
    n = len(arr)
    output = [0] * n
    count = [0] * 10
    for i in arr:
        index = i // exp
        count[index % 10] += 1
    for i in range(1, 10):
        count[i] += count[i - 1]
    i = n - 1
    while i >= 0:
        index = arr[i] // exp
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1
    for i in range(n):
        arr[i] = output[i]

def radix_sort(arr):
    max_num = max(arr)
    exp = 1
    while max_num // exp > 0:
        counting_sort(arr, exp)
        exp *= 10
    return arr

# Binary Tree Sort
class TreeNode:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    if root is None:
        return TreeNode(key)
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder_traversal(root, sorted_list):
    if root:
        inorder_traversal(root.left, sorted_list)
        sorted_list.append(root.val)
        inorder_traversal(root.right, sorted_list)

def binary_tree_sort(arr):
    if not arr:
        return []
    root = TreeNode(arr[0])
    for i in range(1, len(arr)):
        insert(root, arr[i])
    sorted_list = []
    inorder_traversal(root, sorted_list)
    return sorted_list

# Sample Input and Execution
arr = [64, 34, 25, 12, 22, 11, 90]
print("Bubble Sort:", bubble_sort(arr.copy()))
print("Insertion Sort:", insertion_sort(arr.copy()))
print("Quick Sort:", quick_sort(arr.copy()))
print("Merge Sort:", merge_sort(arr.copy()))
print("Heap Sort:", heap_sort(arr.copy()))
print("Radix Sort:", radix_sort(arr.copy()))
print("Binary Tree Sort:", binary_tree_sort(arr.copy()))

OUTPUT:

Bubble Sort: [11, 12, 22, 25, 34, 64, 90]

Insertion Sort: [11, 12, 22, 25, 34, 64, 90]

Quick Sort: [11, 12, 22, 25, 34, 64, 90]

Merge Sort: [11, 12, 22, 25, 34, 64, 90]

Heap Sort: [11, 12, 22, 25, 34, 64, 90]

Radix Sort: [11, 12, 22, 25, 34, 64, 90]

Binary Tree Sort: [11, 12, 22, 25, 34, 64, 90]

LEARNINGS:
1. Bubble Sort is straightforward but inefficient for large lists.
2. Insertion Sort works well on small or partially sorted data.
3. Quick Sort is efficient on average but requires careful handling of pivot selection.
4. Merge Sort is stable and efficient but requires extra space.
5. Heap Sort provides \(O(n \log n)\) time without additional memory.
6. Radix Sort is ideal for integer sorting without comparisons.
7. Binary Tree Sort offers an alternative sorting method based on tree traversal.
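The gap between the \(O(n^2)\) and \(O(n \log n)\) methods is easy to observe empirically. A small sketch timing bubble_sort (as defined above) against Python's built-in sorted(), which uses Timsort:

import random
import time

data = [random.randint(0, 10_000) for _ in range(2_000)]

start = time.perf_counter()
bubble_sort(data.copy())  # O(n^2)
print("Bubble Sort: %.4f s" % (time.perf_counter() - start))

start = time.perf_counter()
sorted(data)              # Timsort, O(n log n)
print("Built-in sorted(): %.4f s" % (time.perf_counter() - start))
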

EXPERIMENT-10

AIM: Write a program to perform the following operations on a B-tree:


a) Insert an element into a B-tree.
b) Delete an element from a B-tree.
c) Search for a key element in a B-tree.

THEORY:
A B-tree is a self-balancing search tree commonly used in databases and filesystems. It
maintains sorted data and allows searches, insertions, and deletions in logarithmic time. A B-
tree of order \(m\) is defined as follows:

1. Each node can have at most \(m\) children.


2. Each internal node has at least \(\lceil m/2 \rceil\) children.
3. Each node contains \(n\) keys, where \( \lceil m/2 \rceil - 1 \leq n \leq m - 1 \).
4. All leaves are at the same level, which makes B-trees suitable for hierarchical data storage.
5. Insertion and deletion operations ensure that the tree remains balanced by splitting and
merging nodes as needed.

SOURCE CODE:

class BTreeNode:
    def __init__(self, t, leaf=False):
        self.t = t          # Minimum degree (defines the range for number of keys)
        self.leaf = leaf    # True if leaf node
        self.keys = []      # List of keys
        self.children = []  # List of child pointers

    # Function to traverse all nodes in a subtree rooted with this node
    def traverse(self):
        for i in range(len(self.keys)):
            if not self.leaf:
                self.children[i].traverse()
            print(self.keys[i], end=" ")
        if not self.leaf:
            self.children[len(self.keys)].traverse()

    # Function to search a key in the subtree rooted with this node
    def search(self, key):
        i = 0
        while i < len(self.keys) and key > self.keys[i]:
            i += 1
        if i < len(self.keys) and self.keys[i] == key:
            return self
        if self.leaf:
            return None
        return self.children[i].search(key)

    # Function to insert a new key into this (non-full) node
    def insert_non_full(self, key):
        i = len(self.keys) - 1
        if self.leaf:
            self.keys.append(0)
            while i >= 0 and key < self.keys[i]:
                self.keys[i + 1] = self.keys[i]
                i -= 1
            self.keys[i + 1] = key
        else:
            while i >= 0 and key < self.keys[i]:
                i -= 1
            i += 1
            if len(self.children[i].keys) == 2 * self.t - 1:
                self.split_child(i)
                if key > self.keys[i]:
                    i += 1
            self.children[i].insert_non_full(key)

    # Split the full child of this node
    def split_child(self, i):
        y = self.children[i]
        z = BTreeNode(y.t, y.leaf)
        self.children.insert(i + 1, z)
        self.keys.insert(i, y.keys[self.t - 1])
        z.keys = y.keys[self.t:(2 * self.t - 1)]
        y.keys = y.keys[0:(self.t - 1)]
        if not y.leaf:
            z.children = y.children[self.t:(2 * self.t)]
            y.children = y.children[0:self.t]

    # Function to delete a key
    def delete(self, key):
        i = 0
        while i < len(self.keys) and key > self.keys[i]:
            i += 1
        if i < len(self.keys) and self.keys[i] == key:
            if self.leaf:
                self.keys.pop(i)
            else:
                self.delete_internal_node(key, i)
        elif self.leaf:
            print(f"Key {key} not found in the tree.")
        else:
            flag = (i == len(self.keys))
            if len(self.children[i].keys) < self.t:
                self.fill(i)
            if flag and i > len(self.keys):
                self.children[i - 1].delete(key)
            else:
                self.children[i].delete(key)

    # Delete a key that sits in an internal node: replace it with its
    # in-order predecessor or successor if an adjacent child can spare a
    # key; otherwise merge the two children around it and recurse
    def delete_internal_node(self, key, i):
        if len(self.children[i].keys) >= self.t:
            pred = self.get_pred(i)
            self.keys[i] = pred
            self.children[i].delete(pred)
        elif len(self.children[i + 1].keys) >= self.t:
            succ = self.get_succ(i)
            self.keys[i] = succ
            self.children[i + 1].delete(succ)
        else:
            self.merge(i)
            self.children[i].delete(key)

    # Rightmost key in the subtree left of keys[i] (in-order predecessor)
    def get_pred(self, i):
        current = self.children[i]
        while not current.leaf:
            current = current.children[-1]
        return current.keys[-1]

    # Leftmost key in the subtree right of keys[i] (in-order successor)
    def get_succ(self, i):
        current = self.children[i + 1]
        while not current.leaf:
            current = current.children[0]
        return current.keys[0]

    # Ensure children[i] has at least t keys by borrowing from a sibling
    # or merging with one
    def fill(self, i):
        if i != 0 and len(self.children[i - 1].keys) >= self.t:
            self.borrow_from_prev(i)
        elif i != len(self.keys) and len(self.children[i + 1].keys) >= self.t:
            self.borrow_from_next(i)
        elif i != len(self.keys):
            self.merge(i)
        else:
            self.merge(i - 1)

    # Move a key from the left sibling through the parent into children[i]
    def borrow_from_prev(self, i):
        child, sibling = self.children[i], self.children[i - 1]
        child.keys.insert(0, self.keys[i - 1])
        self.keys[i - 1] = sibling.keys.pop()
        if not child.leaf:
            child.children.insert(0, sibling.children.pop())

    # Move a key from the right sibling through the parent into children[i]
    def borrow_from_next(self, i):
        child, sibling = self.children[i], self.children[i + 1]
        child.keys.append(self.keys[i])
        self.keys[i] = sibling.keys.pop(0)
        if not child.leaf:
            child.children.append(sibling.children.pop(0))

    # Merge children[i], keys[i], and children[i + 1] into one node
    def merge(self, i):
        child, sibling = self.children[i], self.children[i + 1]
        child.keys.append(self.keys.pop(i))
        child.keys.extend(sibling.keys)
        if not child.leaf:
            child.children.extend(sibling.children)
        self.children.pop(i + 1)

class BTree:
    def __init__(self, t):
        self.root = BTreeNode(t, True)
        self.t = t  # Minimum degree

    def traverse(self):
        if self.root:
            self.root.traverse()

    def search(self, key):
        return None if not self.root else self.root.search(key)

    def insert(self, key):
        if len(self.root.keys) == 2 * self.t - 1:
            new_root = BTreeNode(self.t, False)
            new_root.children.insert(0, self.root)
            new_root.split_child(0)
            i = 0
            if new_root.keys[0] < key:
                i += 1
            new_root.children[i].insert_non_full(key)
            self.root = new_root
        else:
            self.root.insert_non_full(key)

    def delete(self, key):
        if not self.root:
            print("The tree is empty.")
            return
        self.root.delete(key)
        if len(self.root.keys) == 0:
            if not self.root.leaf:
                self.root = self.root.children[0]
            else:
                self.root = None

# Test Data and Execution
b_tree = BTree(3)  # Creating a B-tree of minimum degree 3
elements = [10, 20, 5, 6, 12, 30, 7, 17]
print("Inserting elements:", elements)
for el in elements:
    b_tree.insert(el)

print("B-tree traversal after insertion:")
b_tree.traverse()

search_key = 6
print(f"\nSearching for key {search_key}:")
result = b_tree.search(search_key)
if result:
    print(f"Key {search_key} found.")
else:
    print(f"Key {search_key} not found.")

delete_key = 10
print(f"\nDeleting key {delete_key} from the B-tree:")
b_tree.delete(delete_key)
print("B-tree traversal after deletion:")
b_tree.traverse()
print()

OUTPUT:

Inserting elements: [10, 20, 5, 6, 12, 30, 7, 17]

B-tree traversal after insertion:

5 6 7 10 12 17 20 30

Searching for key 6:


Key 6 found.

Deleting key 10 from the B-tree:

B-tree traversal after deletion:

5 6 7 12 17 20 30

LEARNINGS:
1. Insertion in B-trees ensures that the tree remains balanced by splitting nodes when
necessary.
2. Searching in B-trees provides an efficient way to locate elements due to the ordered
structure.
3. Deletion from B-trees requires careful handling to ensure balance and to avoid underflow
conditions in nodes.
4. B-trees are efficient for database indexing and filesystems due to their structure and
balanced nature, allowing fast access to large datasets.
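A quick worked example of Learning 4's "fast access" claim: with minimum degree \(t\), every node except the root carries at least \(t - 1\) keys, which bounds the height by \(\log_t((n + 1) / 2)\) (the standard bound). A small check for the sample tree above:

import math

t, n = 3, 8  # Minimum degree and key count of the sample tree
print(math.floor(math.log((n + 1) / 2, t)))  # Height bound: 1 level below the root
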
EXPERIMENT-11

AIM: Write a program to implement the following operations on a Min-Max Heap:


1. Insert an element.
2. Delete an element.
3. Search for a key element.

THEORY:
A Min-Max Heap is a double-ended priority queue structure that supports efficient operations
on both minimum and maximum values. It alternates levels between min and max to enable
efficient operations in O(log n) time for insertions, deletions, and searches.

Min-Max Heap properties:


- The root (level 0) is on a min level and holds the minimum element.
- Levels alternate between min and max: a node on a min level is less than or equal to all of
its descendants, while a node on a max level is greater than or equal to all of its
descendants, so the maximum element sits on the first max level.

OPERATIONS:
1. Insertion: Adds an element while maintaining the Min-Max Heap properties by adjusting
the position of elements.
2. Deletion: Removes the specified element, then reheapifies to restore the heap properties.
3. Search: Traverses the heap to find the specified element.

SOURCE CODE
class MinMaxHeap:
    def __init__(self):
        self.heap = []

    # True if index lies on a min level (even depth in the implicit tree)
    def _is_min_level(self, index):
        return ((index + 1).bit_length() - 1) % 2 == 0

    def insert(self, element):
        self.heap.append(element)
        self._bubble_up(len(self.heap) - 1)

    def delete(self, element):
        try:
            index = self.heap.index(element)
            self.heap[index] = self.heap[-1]
            self.heap.pop()
            if index < len(self.heap):
                self._bubble_down(index)
                self._bubble_up(index)
        except ValueError:
            print("Element not found in the heap.")

    def search(self, element):
        return element in self.heap

    def _bubble_up(self, index):
        # Simplified bubble-up: restore the parent-child ordering between
        # alternating min and max levels (a full min-max heap would also
        # compare with grandparents; this sketch keeps only the parent rule)
        while index > 0:
            parent = (index - 1) // 2
            if (self._is_min_level(index) and self.heap[index] > self.heap[parent]) or \
               (not self._is_min_level(index) and self.heap[index] < self.heap[parent]):
                self.heap[index], self.heap[parent] = self.heap[parent], self.heap[index]
                index = parent
            else:
                break  # Ordering already holds; stop to avoid an infinite loop

    def _bubble_down(self, index):
        # Simplified bubble-down using the same alternating-level parent rule
        while 2 * index + 1 < len(self.heap):
            child = 2 * index + 1
            if child + 1 < len(self.heap) and \
               ((self._is_min_level(index) and self.heap[child + 1] < self.heap[child]) or
                (not self._is_min_level(index) and self.heap[child + 1] > self.heap[child])):
                child += 1
            if (self._is_min_level(index) and self.heap[index] > self.heap[child]) or \
               (not self._is_min_level(index) and self.heap[index] < self.heap[child]):
                self.heap[index], self.heap[child] = self.heap[child], self.heap[index]
                index = child
            else:
                break  # Ordering already holds; stop to avoid an infinite loop

# Example usage
min_max_heap = MinMaxHeap()

# Insertion
min_max_heap.insert(10)
min_max_heap.insert(5)
min_max_heap.insert(20)
print("Heap after insertion:", min_max_heap.heap)

# Deletion
min_max_heap.delete(5)
print("Heap after deletion of 5:", min_max_heap.heap)

# Search
print("Is 20 in the heap?", min_max_heap.search(20))
print("Is 5 in the heap?", min_max_heap.search(5))

OUTPUT:

Heap after insertion: [5, 10, 20]


Heap after deletion of 5: [10, 20]
Is 20 in the heap? True
Is 5 in the heap? False

LEARNINGS

1. Min-Max Heaps provide an efficient way to manage and retrieve both minimum and
maximum values.
2. The Min-Max Heap is useful for applications requiring frequent access to both the
minimum and maximum values.
3. Implementing Min-Max Heap operations requires careful handling of elements to maintain
the structure’s alternating min-max levels.
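Python's standard heapq module provides only a min-heap, so a common practical alternative to a min-max heap is to keep two heaps, negating values on the max side (full double-ended behavior would additionally need lazy deletion to keep the two heaps in sync). A minimal sketch:

import heapq

values = [10, 5, 20]
min_heap = values[:]
max_heap = [-v for v in values]  # Negate so the smallest entry is the largest value
heapq.heapify(min_heap)
heapq.heapify(max_heap)
print("Minimum:", min_heap[0])    # 5
print("Maximum:", -max_heap[0])   # 20
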
EXPERIMENT-12

AIM: Write a program to implement the following operations on an AVL Tree:


1. Insert an element.
2. Delete an element.

THEORY:
An AVL (Adelson-Velsky and Landis) Tree is a self-balancing binary search tree. In an AVL
Tree, the heights of the two child subtrees of any node differ by no more than one. If this
property is violated after performing an operation (insert or delete), a rebalancing operation is
performed. The AVL Tree ensures that the tree remains balanced, thus ensuring O(log n) time
complexity for search, insert, and delete operations.

OPERATIONS:
1. Insertion: Inserts an element while maintaining the AVL Tree properties. After insertion,
the tree is balanced using rotations if necessary.
2. Deletion: Removes the specified element, then rebalances the tree if necessary to maintain
the AVL property.

SOURCE CODE

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

class AVLTree:
    def get_height(self, node):
        if not node:
            return 0
        return node.height

    def get_balance(self, node):
        if not node:
            return 0
        return self.get_height(node.left) - self.get_height(node.right)

    def left_rotate(self, z):
        y = z.right
        T2 = y.left
        y.left = z
        z.right = T2
        z.height = 1 + max(self.get_height(z.left), self.get_height(z.right))
        y.height = 1 + max(self.get_height(y.left), self.get_height(y.right))
        return y

    def right_rotate(self, z):
        y = z.left
        T3 = y.right
        y.right = z
        z.left = T3
        z.height = 1 + max(self.get_height(z.left), self.get_height(z.right))
        y.height = 1 + max(self.get_height(y.left), self.get_height(y.right))
        return y

    def insert(self, root, key):
        if not root:
            return Node(key)
        elif key < root.key:
            root.left = self.insert(root.left, key)
        else:
            root.right = self.insert(root.right, key)

        root.height = 1 + max(self.get_height(root.left), self.get_height(root.right))
        balance = self.get_balance(root)

        # Left Left Case
        if balance > 1 and key < root.left.key:
            return self.right_rotate(root)
        # Right Right Case
        if balance < -1 and key > root.right.key:
            return self.left_rotate(root)
        # Left Right Case
        if balance > 1 and key > root.left.key:
            root.left = self.left_rotate(root.left)
            return self.right_rotate(root)
        # Right Left Case
        if balance < -1 and key < root.right.key:
            root.right = self.right_rotate(root.right)
            return self.left_rotate(root)

        return root

    def min_value_node(self, node):
        if node is None or node.left is None:
            return node
        return self.min_value_node(node.left)

    def delete(self, root, key):
        if not root:
            return root
        elif key < root.key:
            root.left = self.delete(root.left, key)
        elif key > root.key:
            root.right = self.delete(root.right, key)
        else:
            if root.left is None:
                temp = root.right
                root = None
                return temp
            elif root.right is None:
                temp = root.left
                root = None
                return temp
            temp = self.min_value_node(root.right)
            root.key = temp.key
            root.right = self.delete(root.right, temp.key)

        if root is None:
            return root

        root.height = 1 + max(self.get_height(root.left), self.get_height(root.right))
        balance = self.get_balance(root)

        # Left Left Case
        if balance > 1 and self.get_balance(root.left) >= 0:
            return self.right_rotate(root)
        # Left Right Case
        if balance > 1 and self.get_balance(root.left) < 0:
            root.left = self.left_rotate(root.left)
            return self.right_rotate(root)
        # Right Right Case
        if balance < -1 and self.get_balance(root.right) <= 0:
            return self.left_rotate(root)
        # Right Left Case
        if balance < -1 and self.get_balance(root.right) > 0:
            root.right = self.right_rotate(root.right)
            return self.left_rotate(root)

        return root

    def pre_order(self, root):
        if root is not None:
            print("{0} ".format(root.key), end="")
            self.pre_order(root.left)
            self.pre_order(root.right)

# Example usage
avl_tree = AVLTree()
root = None

# Insertion
root = avl_tree.insert(root, 10)
root = avl_tree.insert(root, 20)
root = avl_tree.insert(root, 30)
root = avl_tree.insert(root, 40)
root = avl_tree.insert(root, 50)
print("Preorder traversal after insertion: ", end="")
avl_tree.pre_order(root)
print()

# Deletion
root = avl_tree.delete(root, 30)
print("Preorder traversal after deletion of 30: ", end="")
avl_tree.pre_order(root)
print()

OUTPUT:

Preorder traversal after insertion: 20 10 40 30 50

Preorder traversal after deletion of 30: 20 10 40 50

LEARNINGS

1. AVL Trees maintain balance by performing rotations after insertions and deletions,
ensuring O(log n) time complexity for these operations.
2. The balance factor helps in determining the type of rotation needed to maintain the AVL
property.
3. Implementing AVL Tree operations requires careful handling of node heights and
rebalancing conditions.
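A useful sanity check after implementing the rotations is to verify the AVL invariant over the whole tree. A small sketch, assuming the Node fields defined above:

def is_avl(node):
    # Returns (balanced, height) for the subtree rooted at node
    if node is None:
        return True, 0
    left_ok, lh = is_avl(node.left)
    right_ok, rh = is_avl(node.right)
    balanced = left_ok and right_ok and abs(lh - rh) <= 1
    return balanced, 1 + max(lh, rh)

print("Tree is AVL-balanced:", is_avl(root)[0])  # Expected: True
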
