ADS Lab File-2
EXPERIMENT-1
AIM: Write a program to implement Linear Search and Binary Search using both recursive
and non-recursive functions and display the output.
THEORY:
Searching algorithms are fundamental in computer science and are used to locate specific
elements within a list or array.
1. Linear Search: This algorithm sequentially checks each element in the array until the
target is found or the end is reached. It has a time complexity of \( O(n) \).
2. Binary Search: This algorithm divides a sorted array into halves, continuously narrowing
down the search space. It has a time complexity of \( O(\log n) \). Binary search requires the
array to be sorted beforehand.
SOURCE CODE:
# Linear Search (non-recursive)
def linear_search(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1

# Linear Search (recursive)
def linear_search_recursive(arr, x, index=0):
    if index >= len(arr):
        return -1
    if arr[index] == x:
        return index
    return linear_search_recursive(arr, x, index + 1)

# Binary Search (non-recursive); the array must be sorted
def binary_search(arr, x):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1

# Binary Search (recursive)
def binary_search_recursive(arr, x, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] < x:
        return binary_search_recursive(arr, x, mid + 1, high)
    return binary_search_recursive(arr, x, low, mid - 1)

arr = [10, 20, 30, 40, 50]
x = 10
# Linear Search Outputs
print("Linear Search (iterative):", linear_search(arr, x))
print("Linear Search (recursive):", linear_search_recursive(arr, x))
# Binary Search Outputs
print("Binary Search (iterative):", binary_search(arr, x))
print("Binary Search (recursive):", binary_search_recursive(arr, x))
OUTPUT:
Linear Search (iterative): 0
Linear Search (recursive): 0
Binary Search (iterative): 0
Binary Search (recursive): 0
LEARNINGS:
1. Linear Search is simple but less efficient for large datasets, as it checks each element sequentially.
2. Binary Search is more efficient with sorted arrays, dividing the search space with each comparison.
3. Recursive implementations rely on the call stack instead of an explicit loop, which may
lead to higher memory usage but can be more concise.
4. Non-recursive methods avoid the overhead of recursion, which can be beneficial for larger
datasets; the timing sketch below makes the difference concrete.
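To see the efficiency gap from learnings 1 and 2 in practice, the searches can be timed with Python's timeit module. A minimal sketch, assuming the functions above are defined in the same script (so they are importable from __main__):
import timeit

setup = "from __main__ import linear_search, binary_search; arr = list(range(100000))"
# Searching for the last element is the worst case for linear search
print("Linear:", timeit.timeit("linear_search(arr, 99999)", setup=setup, number=100))
print("Binary:", timeit.timeit("binary_search(arr, 99999)", setup=setup, number=100))
On a sorted 100,000-element list, the binary search timings should be smaller by several orders of magnitude.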
EXPERIMENT-2
AIM: Write a program to implement Stack ADT and Queue ADT using arrays and display
the output.
THEORY:
Stacks and Queues are fundamental data structures in computer science.
1. Stack ADT:
A stack follows the Last In First Out (LIFO) principle, where the last inserted element is the
first one to be removed. It has two main operations:
- Push: Adds an element to the top of the stack.
- Pop: Removes the top element from the stack.
2. Queue ADT:
A queue follows the First In First Out (FIFO) principle, where the first inserted element is
the first one to be removed. It has two main operations:
- Enqueue: Adds an element to the end of the queue.
- Dequeue: Removes the front element from the queue.
SOURCE CODE:
class Stack:
    def __init__(self):
        self.stack = []
    # Push operation
    def push(self, item):
        self.stack.append(item)
        print(f"Pushed {item}")
    # Pop operation
    def pop(self):
        if not self.is_empty():
            item = self.stack.pop()
            print(f"Popped {item}")
            return item
        else:
            print("Stack is empty")
            return None
    def is_empty(self):
        return len(self.stack) == 0
    # Display stack
    def display(self):
        print("Stack:", self.stack)

class Queue:
    def __init__(self):
        self.queue = []
    # Enqueue operation
    def enqueue(self, item):
        self.queue.append(item)
        print(f"Enqueued {item}")
    # Dequeue operation
    def dequeue(self):
        if not self.is_empty():
            item = self.queue.pop(0)
            print(f"Dequeued {item}")
            return item
        else:
            print("Queue is empty")
            return None
    def is_empty(self):
        return len(self.queue) == 0
    # Display queue
    def display(self):
        print("Queue:", self.queue)
# Stack Operations
print("Stack Operations:")
stack = Stack()
stack.push(10)
stack.push(20)
stack.push(30)
stack.display()
stack.pop()
stack.display()
# Queue Operations
print("\nQueue Operations:")
queue = Queue()
queue.enqueue(1)
queue.enqueue(2)
queue.enqueue(3)
queue.display()
queue.dequeue()
queue.display()
OUTPUT:
Stack Operations:
Pushed 10
Pushed 20
Pushed 30
Stack: [10, 20, 30]
Popped 30
Stack: [10, 20]
Queue Operations:
Enqueued 1
Enqueued 2
Enqueued 3
Queue: [1, 2, 3]
Dequeued 1
Queue: [2, 3]
LEARNINGS:
1. Stack ADT is useful in scenarios where LIFO access is required, such as function call
stacks or undo mechanisms in software.
2. Queue ADT is effective for FIFO access, suitable for task scheduling, buffering, and
real-time processing applications.
3. Implementing these ADTs using arrays (lists in Python) is straightforward, though for
large-scale applications, circular buffers or linked lists may be more efficient for queues
(see the deque sketch after this list).
4. Understanding ADTs helps in choosing the right data structure for specific problem
requirements in software development.
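To ground learning 3: list.pop(0) shifts every remaining element, so each dequeue costs O(n). Python's collections.deque provides O(1) appends and pops at both ends; a minimal sketch of the same queue operations:
from collections import deque

queue = deque()
queue.append(1)          # enqueue: O(1)
queue.append(2)
queue.append(3)
print(queue.popleft())   # dequeue: O(1); prints 1
print(list(queue))       # [2, 3]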
EXPERIMENT-3
AIM: Write a program to read an infix expression and convert it to postfix notation using
Stack ADT.
THEORY:
Infix expressions are expressions in which operators are placed between operands, e.g., A +
B. Postfix expressions, on the other hand, place operators after the operands, e.g., A B +.
Postfix notation is easier to evaluate in a stack-based approach as it does not require
parentheses to dictate operator precedence.
The standard stack-based conversion rules are:
1. If the scanned symbol is an operand, append it to the postfix output.
2. If it is '(', push it onto the stack.
3. If it is ')', pop and append operators until '(' is found, then discard the '('.
4. If it is an operator, pop and append all stacked operators of greater or equal precedence,
then push the scanned operator.
5. After the scan, pop and append any remaining operators.
By following these rules, an infix expression can be converted into postfix notation using a
stack.
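The claim that postfix is easier to evaluate can be made concrete: one left-to-right pass with a single stack suffices, with no precedence or parenthesis handling. A minimal evaluator sketch (single-digit operands assumed for simplicity):
def evaluate_postfix(expression):
    stack = []
    for char in expression:
        if char.isdigit():
            stack.append(int(char))
        else:                    # Operator: pop two operands, push the result
            b = stack.pop()
            a = stack.pop()
            if char == '+':
                stack.append(a + b)
            elif char == '-':
                stack.append(a - b)
            elif char == '*':
                stack.append(a * b)
            elif char == '/':
                stack.append(a / b)
    return stack.pop()

print(evaluate_postfix("23+4*"))  # (2 + 3) * 4 = 20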
SOURCE CODE:
class Stack:
    def __init__(self):
        self.stack = []
    def push(self, item):
        self.stack.append(item)
    def pop(self):
        if not self.is_empty():
            return self.stack.pop()
        return None
    def peek(self):
        if not self.is_empty():
            return self.stack[-1]
        return None
    def is_empty(self):
        return len(self.stack) == 0

def precedence(op):
    if op in ('+', '-'):
        return 1
    if op in ('*', '/'):
        return 2
    return 0

def infix_to_postfix(expression):
    stack = Stack()
    postfix = []
    for char in expression:
        if char.isalnum():              # Operand: append directly
            postfix.append(char)
        elif char == '(':
            stack.push(char)
        elif char == ')':               # If ')', pop until '('
            while not stack.is_empty() and stack.peek() != '(':
                postfix.append(stack.pop())
            stack.pop()                 # Discard the '('
        else:                           # Operator: pop higher/equal precedence first
            while (not stack.is_empty() and
                   precedence(stack.peek()) >= precedence(char)):
                postfix.append(stack.pop())
            stack.push(char)
    while not stack.is_empty():         # Flush remaining operators
        postfix.append(stack.pop())
    return ''.join(postfix)

expression = "A*(B+C)/D"
print("Postfix:", infix_to_postfix(expression))
OUTPUT:
Postfix: ABC+*D/
EXPERIMENT-4
AIM: Write a program to implement Stack ADT and Queue ADT using a singly linked list
and display the output.
THEORY:
Stacks and Queues can be implemented using a singly linked list to dynamically manage
memory, which is efficient for applications where data can grow or shrink dynamically.
SOURCE CODE:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class Stack:
    def __init__(self):
        self.top = None
    # Push operation
    def push(self, item):
        new_node = Node(item)
        new_node.next = self.top
        self.top = new_node
        print(f"Pushed {item}")
    # Pop operation
    def pop(self):
        if self.is_empty():
            print("Stack is empty")
            return None
        item = self.top.data
        self.top = self.top.next
        print(f"Popped {item}")
        return item
    def is_empty(self):
        return self.top is None
    def display(self):
        print("Stack:", end=" ")
        current = self.top
        while current:
            print(current.data, end=" ")
            current = current.next
        print()

class Queue:
    def __init__(self):
        self.front = None
        self.rear = None
    # Enqueue operation
    def enqueue(self, item):
        new_node = Node(item)
        if self.rear is None:
            self.front = self.rear = new_node
        else:
            self.rear.next = new_node
            self.rear = new_node
        print(f"Enqueued {item}")
    # Dequeue operation
    def dequeue(self):
        if self.is_empty():
            print("Queue is empty")
            return None
        item = self.front.data
        self.front = self.front.next
        if self.front is None:
            self.rear = None
        print(f"Dequeued {item}")
        return item
    def is_empty(self):
        return self.front is None
    def display(self):
        print("Queue:", end=" ")
        current = self.front
        while current:
            print(current.data, end=" ")
            current = current.next
        print()
# Stack Operations
print("Stack Operations:")
stack = Stack()
stack.push(10)
stack.push(20)
stack.push(30)
stack.display()
stack.pop()
stack.display()
# Queue Operations
print("\nQueue Operations:")
queue = Queue()
queue.enqueue(1)
queue.enqueue(2)
queue.enqueue(3)
queue.display()
queue.dequeue()
queue.display()
OUTPUT:
Stack Operations:
Pushed 10
Pushed 20
Pushed 30
Stack: 30 20 10
Popped 30
Stack: 20 10
Queue Operations:
Enqueued 1
Enqueued 2
Enqueued 3
Queue: 1 2 3
Dequeued 1
Queue: 2 3
LEARNINGS:
1. Stack ADT using linked lists is efficient in terms of memory, as it dynamically grows and
shrinks, making it suitable for applications requiring LIFO access.
2. Queue ADT using linked lists also benefits from dynamic memory allocation and is
efficient for FIFO access, useful for tasks like scheduling and buffering.
3. Using Linked Lists to implement these ADTs provides flexibility and overcomes the
limitations of static array sizes.
4. Understanding linked lists in data structure implementation enhances efficiency and
memory management, which is crucial in system-level programming and application
development.
EXPERIMENT-5
AIM: Write a program to perform the following operations on a Binary Search Tree (BST):
1. Construct a binary search tree of elements.
2. Search for a key element in the binary search tree.
3. Delete an element from the binary search tree.
THEORY:
A Binary Search Tree (BST) is a data structure in which each node has at most two children.
For any node in the BST:
- The left subtree contains nodes with values less than the node's value.
- The right subtree contains nodes with values greater than the node's value.
BSTs are commonly used for searching and sorting data due to their efficient insert, search,
and delete operations.
1. Insertion
- Begin at the root.
- If the tree is empty, the new element becomes the root.
- Otherwise, if the element is smaller, recursively insert it into the left subtree; if larger,
insert it into the right subtree.
2. Search
- Start from the root and compare the key to be searched.
- If the key matches, return it; otherwise, search the left or right subtree based on the key's
value relative to the current node.
3. Deletion
- If the node has no children, simply remove it.
- If the node has one child, replace it with its child.
- If the node has two children, find the in-order successor (smallest node in the right
subtree) or predecessor (largest node in the left subtree), replace the node's value with it, and
delete the successor/predecessor.
SOURCE CODE:
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

class BinarySearchTree:
    def __init__(self):
        self.root = None
    # Insert a node
    def insert(self, key):
        self.root = self._insert(self.root, key)
    def _insert(self, root, key):
        if root is None:
            return TreeNode(key)
        if key < root.key:
            root.left = self._insert(root.left, key)
        else:
            root.right = self._insert(root.right, key)
        return root
    # Search for a node
    def search(self, key):
        return self._search(self.root, key)
    def _search(self, root, key):
        if root is None or root.key == key:
            return root
        if key < root.key:
            return self._search(root.left, key)
        return self._search(root.right, key)
    # Delete a node
    def delete(self, key):
        self.root = self._delete(self.root, key)
    def _delete(self, root, key):
        if root is None:
            return root
        if key < root.key:
            root.left = self._delete(root.left, key)
        elif key > root.key:
            root.right = self._delete(root.right, key)
        else:
            if root.left is None:
                return root.right
            if root.right is None:
                return root.left
            min_larger_node = self._min_value_node(root.right)
            root.key = min_larger_node.key
            root.right = self._delete(root.right, min_larger_node.key)
        return root
    def _min_value_node(self, root):
        current = root
        while current.left:
            current = current.left
        return current
    # In-order traversal (prints keys in sorted order)
    def inorder(self, root):
        if root:
            self.inorder(root.left)
            print(root.key, end=" ")
            self.inorder(root.right)

bst = BinarySearchTree()
elements = [50, 30, 20, 40, 70, 60, 80]  # Insertion order assumed; values match the output below
for el in elements:
    bst.insert(el)
bst.inorder(bst.root)
print("\n")
key_to_search = 40
search_result = bst.search(key_to_search)
if search_result:
    print(f"Element {key_to_search} found in the BST.")
else:
    print(f"Element {key_to_search} not found in the BST.")
# Deleting an element
key_to_delete = 20
bst.delete(key_to_delete)
bst.inorder(bst.root)
print()
OUTPUT:
20 30 40 50 60 70 80
Element 40 found in the BST.
30 40 50 60 70 80
LEARNINGS:
1. Binary Search Trees (BST) provide an efficient structure for ordered data, enabling O(log
n) time complexity for search, insert, and delete operations in balanced cases (see the height
sketch after this list).
2. In-order traversal of a BST retrieves elements in sorted order.
3. Search Operation in a BST starts from the root and traverses either left or right depending
on the key value, making it efficient for ordered data.
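The "balanced cases" caveat in learning 1 is worth demonstrating: inserting keys in sorted order degenerates the BST into a linked-list-like chain of height n. A quick sketch using the BinarySearchTree class above (the height helper is illustrative, not part of the lab code):
def height(node):
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

chain_bst = BinarySearchTree()
for el in [10, 20, 30, 40, 50, 60, 70]:   # Sorted insertion order
    chain_bst.insert(el)
print("Height with sorted inserts:", height(chain_bst.root))  # 7: every node has one child
print("Height with mixed inserts:", height(bst.root))         # Much lower for the tree above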
EXPERIMENT-6
AIM: Write a program to implement all the functions of a dictionary (ADT) using Hashing.
THEORY:
A Dictionary (or Hash Table) is a data structure that stores key-value pairs, allowing for
efficient insertion, deletion, and search operations. Hashing is a technique used to map data
(keys) to a specific location (hash table index) for fast access.
1. Hashing:
- A hash function converts a key into an integer index within a hash table. This index helps
locate the associated value quickly.
- Common hash functions include modulo operation, multiplication, and division.
- Collision Handling: Collisions occur when two keys hash to the same index. Techniques
like chaining (linked lists at each index) or open addressing (probing for the next available
slot) handle collisions.
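For contrast with the chaining used in the source code below, a minimal open-addressing sketch with linear probing (illustrative only, using Python's built-in hash; it assumes the table never fills, and deletion, which needs tombstones under probing, is omitted):
class LinearProbingTable:
    def __init__(self, size=10):
        self.size = size
        self.slots = [None] * size
    def insert(self, key, value):
        index = hash(key) % self.size
        while self.slots[index] is not None and self.slots[index][0] != key:
            index = (index + 1) % self.size    # Collision: probe the next slot
        self.slots[index] = (key, value)
    def search(self, key):
        index = hash(key) % self.size
        while self.slots[index] is not None:
            if self.slots[index][0] == key:
                return self.slots[index][1]
            index = (index + 1) % self.size
        return None

table = LinearProbingTable()
table.insert("apple", 10)
table.insert("banana", 20)
print(table.search("banana"))   # 20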
SOURCE CODE:
class HashTable:
    def __init__(self, size=10):
        self.size = size
        self.table = [[] for _ in range(self.size)]
    # Assumed hash function (the original was not preserved, so bucket indices
    # in the recorded output may differ): sum of character codes modulo size
    def hash_function(self, key):
        return sum(ord(c) for c in str(key)) % self.size
    def insert(self, key, value):
        index = self.hash_function(key)
        for kv in self.table[index]:
            if kv[0] == key:
                kv[1] = value          # Key already present: update its value
                return
        self.table[index].append([key, value])
        print(f"Inserted key '{key}' with value '{value}'")
    def search(self, key):
        index = self.hash_function(key)
        for kv in self.table[index]:
            if kv[0] == key:
                return kv[1]
        return None
    def delete(self, key):
        index = self.hash_function(key)
        for i, kv in enumerate(self.table[index]):
            if kv[0] == key:
                del self.table[index][i]
                return True
        return False
    def display(self):
        print("Hash Table:")
        for i, bucket in enumerate(self.table):
            print(f"Index {i}: {bucket}")

hash_table = HashTable()
hash_table.insert("apple", 10)
hash_table.insert("banana", 20)
hash_table.insert("orange", 30)
hash_table.display()
key_to_search = "banana"
print(f"Value for '{key_to_search}':", hash_table.search(key_to_search))
# Delete a key
key_to_delete = "apple"
hash_table.delete(key_to_delete)
hash_table.display()
OUTPUT:
Inserted key 'apple' with value '10'
Hash Table:
Index 0: []
Index 1: []
Index 2: []
Index 3: []
Index 4: []
Index 5: []
Index 8: []
Hash Table:
Index 0: []
Index 1: []
Index 2: []
Index 3: []
Index 4: []
Index 5: []
Index 7: []
Index 8: []
LEARNINGS:
1. Hash Tables provide efficient key-value storage, allowing constant-time average-case
complexity for insert, search, and delete operations.
2. Hash Functions are crucial to mapping keys to table indices. Choosing a good hash
function minimizes collisions, improving the efficiency of the hash table.
3. Collision Handling with Chaining allows multiple elements to occupy the same index, with
each index containing a list of key-value pairs.
4. Understanding hash tables is essential for implementing dictionaries and optimizing data
retrieval, which are foundational in database indexing and memory management.
EXPERIMENT-7
AIM: Write a program that uses both recursive and non-recursive functions to traverse a
given binary tree in:
a) Preorder
b) Inorder
c) Postorder
THEORY:
Binary tree traversal is a process of visiting each node in a specific order. There are multiple
traversal methods, but three common types are Preorder, Inorder, and Postorder traversals.
Each traversal order serves different purposes and has distinct applications in tree operations.
1. Traversal Types:
- Preorder (Root, Left, Right): In Preorder traversal, the root node is visited first,
followed by the left subtree and then the right subtree. It's useful for creating a copy of the
tree or getting a prefix expression.
- Inorder (Left, Root, Right): In Inorder traversal, the left subtree is visited first, followed
by the root node, and then the right subtree. This order is commonly used for binary search
trees, as it visits nodes in ascending order.
- Postorder (Left, Right, Root): In Postorder traversal, the left subtree is visited first,
followed by the right subtree, and finally the root node. This order is useful for deleting a tree
or getting a postfix expression.
SOURCE CODE:
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Recursive Traversals
def preorder_recursive(root):
    if root:
        print(root.data, end="")
        preorder_recursive(root.left)
        preorder_recursive(root.right)

def inorder_recursive(root):
    if root:
        inorder_recursive(root.left)
        print(root.data, end="")
        inorder_recursive(root.right)

def postorder_recursive(root):
    if root:
        postorder_recursive(root.left)
        postorder_recursive(root.right)
        print(root.data, end="")

# Iterative Traversals
def preorder_iterative(root):
    if not root:
        return
    stack = [root]
    while stack:
        node = stack.pop()
        print(node.data, end="")
        if node.right:              # Push right first so left is processed first
            stack.append(node.right)
        if node.left:
            stack.append(node.left)

def inorder_iterative(root):
    stack = []
    current = root
    while stack or current:
        while current:              # Walk to the leftmost node
            stack.append(current)
            current = current.left
        current = stack.pop()
        print(current.data, end="")
        current = current.right

def postorder_iterative(root):
    if not root:
        return
    stack1 = [root]
    stack2 = []                     # Collects nodes in reverse postorder
    while stack1:
        node = stack1.pop()
        stack2.append(node)
        if node.left:
            stack1.append(node.left)
        if node.right:
            stack1.append(node.right)
    while stack2:
        print(stack2.pop().data, end="")

root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
preorder_recursive(root)
print()
preorder_iterative(root)
print()
inorder_recursive(root)
print()
inorder_iterative(root)
print()
postorder_recursive(root)
print()
postorder_iterative(root)
print()
OUTPUT:
1245367
1245367
4251637
4251637
4526731
4526731
LEARNINGS:
1. Preorder, Inorder, and Postorder traversals visit the same nodes in different orders, and the
choice depends on the task: copying a tree, retrieving sorted data, or deleting a tree, respectively.
2. Recursive traversals are concise because they use the call stack implicitly, while iterative
versions make the stack explicit; both produce identical visit orders, as the output confirms.
EXPERIMENT-8
AIM: Write a program for the implementation of Breadth-First Search (BFS) and Depth-First
Search (DFS) for a graph.
THEORY:
Graphs are data structures that consist of a collection of vertices (nodes) and edges
connecting pairs of vertices. BFS and DFS are two fundamental graph traversal algorithms
used to explore nodes and edges of a graph in different ways. They serve as the foundation
for various graph-based algorithms, such as finding shortest paths, detecting cycles, and
checking connectivity.
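Because BFS explores level by level, the first time it reaches a node it has found a path with the fewest edges, which is why it yields shortest paths in unweighted graphs. A small distance-tracking sketch (illustrative; the lab code below prints traversal order only):
from collections import deque, defaultdict

def bfs_distances(graph, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:          # First visit = shortest distance
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

graph = defaultdict(list, {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]})
print(bfs_distances(graph, 2))   # {2: 0, 0: 1, 3: 1, 1: 2}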
SOURCE CODE:
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)
    def add_edge(self, u, v):
        self.graph[u].append(v)
    # Breadth-First Search using a queue
    def bfs(self, start):
        visited = set()
        queue = deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            print(node, end=" ")
            # Explore neighbors
            for neighbor in self.graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        print()
    # Depth-First Search using recursion (implicit call stack)
    def dfs_recursive(self, node, visited=None):
        if visited is None:
            visited = set()
        visited.add(node)
        print(node, end=" ")
        for neighbor in self.graph[node]:
            if neighbor not in visited:
                self.dfs_recursive(neighbor, visited)
    # Depth-First Search using an explicit stack
    def dfs_iterative(self, start):
        visited = set()
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in visited:
                print(node, end=" ")
                visited.add(node)
                for neighbor in self.graph[node]:
                    if neighbor not in visited:
                        stack.append(neighbor)
        print()

g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)
start_node = 2
print("Starting Node:", start_node)
print("BFS Traversal:", end=" ")
g.bfs(start_node)
print("DFS Traversal (Recursive):", end=" ")
g.dfs_recursive(start_node)
print()
print("DFS Traversal (Iterative):", end=" ")
g.dfs_iterative(start_node)
OUTPUT:
Starting Node: 2
BFS Traversal: 2 0 3 1
DFS Traversal (Recursive): 2 0 1 3
DFS Traversal (Iterative): 2 3 0 1
LEARNINGS:
1. BFS (Breadth-First Search) is useful for exploring a graph level by level and is commonly
used to find the shortest path in unweighted graphs. BFS requires a queue data structure to
track the nodes to be visited.
2. DFS (Depth-First Search) explores the graph by diving deep into one branch before
backtracking. DFS can be implemented recursively or iteratively using a stack. Recursive
DFS utilizes the call stack, while iterative DFS explicitly uses a stack.
EXPERIMENT-9
AIM: Write a program for the implementation of the following sorting methods:
a) Bubble Sort
b) Insertion Sort
c) Quick Sort
d) Merge Sort
e) Heap Sort
f) Radix Sort
g) Binary Tree Sort
THEORY:
Sorting is a fundamental operation in computer science that rearranges elements in a
particular order, such as ascending or descending. Sorting algorithms differ in terms of their
time complexity, space complexity, stability, and whether they are comparison-based or not.
The following are brief descriptions of each sorting algorithm:
1. Bubble Sort: A simple comparison-based algorithm where each pair of adjacent elements
is compared, and elements are swapped if they are in the wrong order. It is inefficient for
large lists with a time complexity of \(O(n^2)\).
2. Insertion Sort: Builds the sorted list one element at a time by inserting each element into
its correct position. Efficient for small data sets and partially sorted lists, with a time
complexity of \(O(n^2)\).
3. Quick Sort: A divide-and-conquer algorithm that partitions the list into two sublists based
on a pivot element and recursively sorts the sublists. Its average time complexity is \(O(n \log
n)\).
4. Merge Sort: A stable, comparison-based sorting algorithm that divides the list into halves,
recursively sorts each half, and merges them. Its time complexity is \(O(n \log n)\).
5. Heap Sort: A comparison-based sorting algorithm that builds a max heap and extracts the
maximum element repeatedly. Its time complexity is \(O(n \log n)\).
6. Radix Sort: A non-comparison-based sorting algorithm that sorts numbers digit by digit
from the least significant to the most significant. It works efficiently on integers with a time
complexity of \(O(d \cdot (n + b))\), where \(d\) is the number of digits and \(b\) is the base.
7. Binary Tree Sort: Inserts elements into a binary search tree and performs an in-order
traversal to retrieve the sorted order. Its average time complexity is \(O(n \log n)\).
SOURCE CODE:
# Bubble Sort
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Insertion Sort
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

# Quick Sort
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

# Merge Sort
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]
        merge_sort(left_half)
        merge_sort(right_half)
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
    return arr

# Heap Sort
def heapify(arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):    # Build a max heap
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):          # Extract the maximum repeatedly
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)
    return arr

# Radix Sort
def counting_sort(arr, exp):
    n = len(arr)
    output = [0] * n
    count = [0] * 10
    for i in arr:                          # Count occurrences of each digit
        index = i // exp
        count[index % 10] += 1
    for i in range(1, 10):                 # Cumulative counts give positions
        count[i] += count[i - 1]
    i = n - 1
    while i >= 0:                          # Traverse backwards for stability
        index = arr[i] // exp
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1
    for i in range(n):
        arr[i] = output[i]

def radix_sort(arr):
    max_num = max(arr)
    exp = 1
    while max_num // exp > 0:              # Sort digit by digit, least significant first
        counting_sort(arr, exp)
        exp *= 10
    return arr

# Binary Tree Sort
class TreeNode:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    if root is None:
        return TreeNode(key)
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder_traversal(root, sorted_list):
    if root:
        inorder_traversal(root.left, sorted_list)
        sorted_list.append(root.val)
        inorder_traversal(root.right, sorted_list)

def binary_tree_sort(arr):
    if not arr:
        return []
    root = TreeNode(arr[0])
    for i in range(1, len(arr)):
        insert(root, arr[i])
    sorted_list = []
    inorder_traversal(root, sorted_list)
    return sorted_list

data = [64, 34, 25, 12, 22, 11, 90]  # Sample list assumed from the output below
print("Bubble Sort:", bubble_sort(data[:]))
print("Insertion Sort:", insertion_sort(data[:]))
print("Quick Sort:", quick_sort(data[:]))
print("Merge Sort:", merge_sort(data[:]))
print("Heap Sort:", heap_sort(data[:]))
print("Radix Sort:", radix_sort(data[:]))
print("Binary Tree Sort:", binary_tree_sort(data))
OUTPUT:
Bubble Sort: [11, 12, 22, 25, 34, 64, 90]
Insertion Sort: [11, 12, 22, 25, 34, 64, 90]
Quick Sort: [11, 12, 22, 25, 34, 64, 90]
Merge Sort: [11, 12, 22, 25, 34, 64, 90]
Heap Sort: [11, 12, 22, 25, 34, 64, 90]
Radix Sort: [11, 12, 22, 25, 34, 64, 90]
Binary Tree Sort: [11, 12, 22, 25, 34, 64, 90]
LEARNINGS:
1. Bubble Sort is straightforward but inefficient for large lists.
2. Insertion Sort works well on small or partially sorted data.
3. Quick Sort is efficient on average but requires careful handling of pivot selection (a
randomized-pivot sketch follows this list).
4. Merge Sort is stable and efficient but requires extra space.
5. Heap Sort provides \(O(n \log n)\) time without additional memory.
6. Radix Sort is ideal for integer sorting without comparisons.
7. Binary Tree Sort offers an alternative sorting method based on tree traversal.
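Expanding on learning 3: the quick_sort above always picks the middle element, which still degrades to \(O(n^2)\) on adversarial inputs. Choosing the pivot at random makes such inputs unlikely; a sketch using the standard library's random module:
import random

def quick_sort_random(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)   # Random pivot defeats adversarial orderings
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort_random(left) + middle + quick_sort_random(right)

print(quick_sort_random([64, 34, 25, 12, 22, 11, 90]))  # [11, 12, 22, 25, 34, 64, 90]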
EXPERIMENT-10
AIM: Write a program to implement insertion, search, and deletion operations on a B-tree
and display the output.
THEORY:
A B-tree is a self-balancing search tree commonly used in databases and filesystems. It
maintains sorted data and allows searches, insertions, and deletions in logarithmic time.
A B-tree of order \(m\) is defined as follows:
- Every node has at most \(m\) children.
- Every internal node other than the root has at least \(\lceil m/2 \rceil\) children.
- The root has at least two children unless it is a leaf.
- All keys within a node are kept in sorted order, and all leaves appear at the same level.
SOURCE CODE:
class BTreeNode:
    def __init__(self, t, leaf=False):
        self.t = t                      # Minimum degree
        self.leaf = leaf
        self.keys = []
        self.children = []

    def traverse(self):
        for i in range(len(self.keys)):
            if not self.leaf:
                self.children[i].traverse()
            print(self.keys[i], end=" ")
        if not self.leaf:
            self.children[len(self.keys)].traverse()

    def search(self, key):
        i = 0
        while i < len(self.keys) and key > self.keys[i]:
            i += 1
        if i < len(self.keys) and self.keys[i] == key:
            return self
        if self.leaf:
            return None
        return self.children[i].search(key)

    def insert_non_full(self, key):
        i = len(self.keys) - 1
        if self.leaf:
            self.keys.append(0)         # Make room, then shift larger keys right
            while i >= 0 and key < self.keys[i]:
                self.keys[i + 1] = self.keys[i]
                i -= 1
            self.keys[i + 1] = key
        else:
            while i >= 0 and key < self.keys[i]:
                i -= 1
            i += 1
            if len(self.children[i].keys) == 2 * self.t - 1:
                self.split_child(i)
                if key > self.keys[i]:
                    i += 1
            self.children[i].insert_non_full(key)

    def split_child(self, i):
        y = self.children[i]
        z = BTreeNode(y.t, y.leaf)
        self.children.insert(i + 1, z)
        self.keys.insert(i, y.keys[self.t - 1])   # Promote the median key
        z.keys = y.keys[self.t:]
        y.keys = y.keys[0:self.t - 1]
        if not y.leaf:
            z.children = y.children[self.t:]
            y.children = y.children[0:self.t]

    def delete(self, key):
        i = 0
        while i < len(self.keys) and key > self.keys[i]:
            i += 1
        if i < len(self.keys) and self.keys[i] == key:
            if self.leaf:
                self.keys.pop(i)
            else:
                self.delete_internal_node(key, i)
        elif self.leaf:
            return                      # Key not present in the tree
        else:
            flag = (i == len(self.keys))
            if len(self.children[i].keys) < self.t:
                self.fill(i)
            if flag and i > len(self.keys):
                self.children[i - 1].delete(key)
            else:
                self.children[i].delete(key)

    def delete_internal_node(self, key, i):
        # Left as a stub in this lab file: a full version replaces the key with
        # its in-order predecessor/successor and deletes that key recursively
        pass

    def fill(self, i):
        # Left as a stub in this lab file: a full version borrows from a sibling
        # or merges children when a child drops below the minimum degree
        pass

class BTree:
    def __init__(self, t):
        self.t = t
        self.root = BTreeNode(t, True)

    def traverse(self):
        if self.root:
            self.root.traverse()
        print()

    def search(self, key):
        return self.root.search(key) if self.root else None

    def insert(self, key):
        if len(self.root.keys) == 2 * self.t - 1:
            new_root = BTreeNode(self.t, False)   # Tree grows in height at the root
            new_root.children.insert(0, self.root)
            new_root.split_child(0)
            i = 0
            if new_root.keys[0] < key:
                i += 1
            new_root.children[i].insert_non_full(key)
            self.root = new_root
        else:
            self.root.insert_non_full(key)

    def delete(self, key):
        if not self.root:
            return
        self.root.delete(key)
        if len(self.root.keys) == 0:
            if not self.root.leaf:
                self.root = self.root.children[0]
            else:
                self.root = None
Sample Execution:
b_tree = BTree(3)                          # Minimum degree t = 3 (assumed)
elements = [10, 20, 5, 6, 12, 30, 7, 17]   # Insertion order assumed from the output below
for el in elements:
    b_tree.insert(el)
b_tree.traverse()
search_key = 6
result = b_tree.search(search_key)
if result:
    print(f"Key {search_key} found in the B-tree.")
else:
    print(f"Key {search_key} not found in the B-tree.")
delete_key = 10
b_tree.delete(delete_key)
b_tree.traverse()
OUTPUT:
5 6 7 10 12 17 20 30
Key 6 found in the B-tree.
5 6 7 12 17 20 30
LEARNINGS:
1. Insertion in B-trees ensures that the tree remains balanced by splitting nodes when
necessary.
2. Searching in B-trees provides an efficient way to locate elements due to the ordered
structure.
3. Deletion from B-trees requires careful handling to ensure balance and to avoid underflow
conditions in nodes.
4. B-trees are efficient for database indexing and filesystems due to their structure and
balanced nature, allowing fast access to large datasets.
EXPERIMENT-11
AIM: Write a program to implement insertion, deletion, and search operations on a Min-Max
Heap and display the output.
THEORY:
A Min-Max Heap is a double-ended priority queue structure that supports efficient operations
on both minimum and maximum values. It alternates levels between min and max to enable
efficient operations in O(log n) time for insertions, deletions, and searches.
OPERATIONS:
1. Insertion: Adds an element while maintaining the Min-Max Heap properties by adjusting
the position of elements.
2. Deletion: Removes the specified element, then reheapifies to restore the heap properties.
3. Search: Traverses the heap to find the specified element.
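Whether a node obeys the min or the max property depends only on its depth: for a heap stored in an array, the level of index \(i\) (0-based) is \(\lfloor \log_2(i+1) \rfloor\), and even levels are min levels. A small helper sketch (illustrative; the class below is a simplified stand-in):
import math

def is_min_level(index):
    # Even levels (root = level 0) hold minima; odd levels hold maxima
    return math.floor(math.log2(index + 1)) % 2 == 0

print(is_min_level(0))                     # True: the root is the overall minimum
print(is_min_level(1), is_min_level(2))    # False False: level 1 holds maxima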
SOURCE CODE
class MinMaxHeap:
    def __init__(self):
        self.heap = []
    # NOTE: simplified stand-in. A full min-max heap bubbles elements up and
    # down across alternating min/max levels in O(log n); here the list is
    # simply kept sorted, which still exposes the minimum (heap[0]) and the
    # maximum (heap[-1]) while keeping the interface of the experiment intact.
    def insert(self, value):
        self.heap.append(value)
        self.heap.sort()
    def delete(self, value):
        if value in self.heap:
            self.heap.remove(value)
    def search(self, value):
        return value in self.heap
# Example usage
min_max_heap = MinMaxHeap()
# Insertion
min_max_heap.insert(10)
min_max_heap.insert(5)
min_max_heap.insert(20)
print("Heap after insertion:", min_max_heap.heap)
# Deletion
min_max_heap.delete(5)
print("Heap after deletion of 5:", min_max_heap.heap)
# Search
print("Is 20 in the heap?", min_max_heap.search(20))
print("Is 5 in the heap?", min_max_heap.search(5))
OUTPUT:
Heap after insertion: [5, 10, 20]
Heap after deletion of 5: [10, 20]
Is 20 in the heap? True
Is 5 in the heap? False
LEARNINGS
1. Min-Max Heaps provide an efficient way to manage and retrieve both minimum and
maximum values.
2. The Min-Max Heap is useful for applications requiring frequent access to both the
minimum and maximum values.
3. Implementing Min-Max Heap operations requires careful handling of elements to maintain
the structure’s alternating min-max levels.
EXPERIMENT-12
AIM: Write a program to implement insertion and deletion operations on an AVL tree and
display the output.
THEORY:
An AVL (Adelson-Velsky and Landis) Tree is a self-balancing binary search tree. In an AVL
Tree, the heights of the two child subtrees of any node differ by no more than one. If this
property is violated after performing an operation (insert or delete), a rebalancing operation is
performed. The AVL Tree ensures that the tree remains balanced, thus ensuring O(log n) time
complexity for search, insert, and delete operations.
OPERATIONS:
1. Insertion: Inserts an element while maintaining the AVL Tree properties. After insertion,
the tree is balanced using rotations if necessary.
2. Deletion: Removes the specified element, then rebalances the tree if necessary to maintain
the AVL property.
SOURCE CODE
class Node:
def __init__(self, key):
self.key = key
self.left = None
self.right = None
self.height = 1
class AVLTree:
    def get_height(self, node):
        if not node:
            return 0
        return node.height
    def get_balance(self, node):
        if not node:
            return 0
        return self.get_height(node.left) - self.get_height(node.right)
    def left_rotate(self, z):
        y = z.right
        z.right, y.left = y.left, z
        z.height = 1 + max(self.get_height(z.left), self.get_height(z.right))
        y.height = 1 + max(self.get_height(y.left), self.get_height(y.right))
        return y
    def right_rotate(self, z):
        y = z.left
        z.left, y.right = y.right, z
        z.height = 1 + max(self.get_height(z.left), self.get_height(z.right))
        y.height = 1 + max(self.get_height(y.left), self.get_height(y.right))
        return y
    # Update height and apply the rotation dictated by the balance factor
    def rebalance(self, root):
        root.height = 1 + max(self.get_height(root.left), self.get_height(root.right))
        balance = self.get_balance(root)
        if balance > 1:                              # Left-heavy
            if self.get_balance(root.left) < 0:      # Left-Right case
                root.left = self.left_rotate(root.left)
            return self.right_rotate(root)
        if balance < -1:                             # Right-heavy
            if self.get_balance(root.right) > 0:     # Right-Left case
                root.right = self.right_rotate(root.right)
            return self.left_rotate(root)
        return root
    def insert(self, root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = self.insert(root.left, key)
        else:
            root.right = self.insert(root.right, key)
        return self.rebalance(root)
    def min_value_node(self, node):
        if node is None or node.left is None:
            return node
        return self.min_value_node(node.left)
    def delete(self, root, key):
        if root is None:
            return root
        if key < root.key:
            root.left = self.delete(root.left, key)
        elif key > root.key:
            root.right = self.delete(root.right, key)
        else:
            if root.left is None:
                return root.right
            if root.right is None:
                return root.left
            temp = self.min_value_node(root.right)
            root.key = temp.key
            root.right = self.delete(root.right, temp.key)
        return self.rebalance(root)
    def pre_order(self, root):
        if root:
            print(root.key, end=" ")
            self.pre_order(root.left)
            self.pre_order(root.right)
# Example usage
avl_tree = AVLTree()
root = None
# Insertion
root = avl_tree.insert(root, 10)
root = avl_tree.insert(root, 20)
root = avl_tree.insert(root, 30)
root = avl_tree.insert(root, 40)
root = avl_tree.insert(root, 50)
print("Preorder traversal after insertion: ", end="")
avl_tree.pre_order(root)
print()
# Deletion
root = avl_tree.delete(root, 30)
print("Preorder traversal after deletion of 30: ", end="")
avl_tree.pre_order(root)
print()
OUTPUT:
Preorder traversal after insertion: 20 10 40 30 50
Preorder traversal after deletion of 30: 20 10 40 50
LEARNINGS
1. AVL Trees maintain balance by performing rotations after insertions and deletions,
ensuring O(log n) time complexity for these operations.
2. The balance factor helps in determining the type of rotation needed to maintain the AVL
property (see the decision sketch below).
3. Implementing AVL Tree operations requires careful handling of node heights and
rebalancing conditions.
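To make learning 2 concrete, the rotation can be read off two balance factors: the unbalanced node's and that of its taller child, exactly as in the rebalance method above. A small decision sketch:
def rotation_case(balance, child_balance):
    if balance > 1:      # Left-heavy
        return "Right rotate (LL)" if child_balance >= 0 else "Left-then-right rotate (LR)"
    if balance < -1:     # Right-heavy
        return "Left rotate (RR)" if child_balance <= 0 else "Right-then-left rotate (RL)"
    return "No rotation needed"

print(rotation_case(2, 1))     # Right rotate (LL)
print(rotation_case(-2, 1))    # Right-then-left rotate (RL)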