DATA STRUCTURES
L E C T U R E NOTES
(R20)
II YEAR – I SEM
J.B.INSTITUTE OF ENGINEERING & TECHNOLOGY
UGC AUTONOMOUS
Bhaskar Nagar, Moinabad (M), RR Dist, Telangana-500075
B.Tech. CSE, II Year - I Semester: L 3, T 0, P 0, D 0, C 3
DATA STRUCTURES
(Common to CSE, IT & ECM)
Course objectives:
At the end of the course, students will :
1. Define the basic data structures like linked list .
2. Understand the fundamentals and applications of linked list, stacks and queues.
3. Classify different types of tree data structures
4. Understand the concepts of graph data structures.
5. Know the fundamentals of basic searching, sorting and pattern matching algorithms.
Course outcomes:
At the end of the course, students will be able to:
1. Demonstrate operations like searching, insertion, deletion, traversing mechanism
using linked list.
2. Use linear and non-linear data structures like stacks, queues etc.
3. Implement different types of tree data structures.
4. Implement the concepts of graph data structures.
5. Apply the basic searching, sorting and pattern matching Techniques.
UNIT - I:
Basic concepts - Algorithm Specification, Data Abstraction , Performance analysis - time
complexity and space complexity, Asymptotic Notation - Big O, Omega and Theta
notations, Introduction to Linear and Non Linear data structures.
Linear list – singly linked list implementation, insertion, deletion and searching operations
on linear list, circular linked list implementation, Doubly linked list implementation,
insertion, deletion and searching operations. Applications of linked lists.
UNIT - II:
Stacks-Operations, array and linked representations of stacks, stack applications-infix to
postfix conversion, postfix expression evaluation, recursion implementation.
Queues-operations, array and linked representations. Circular Queue operations, Dequeue,
applications of queue.
UNIT - III:
Trees – Terminology, Representation of Trees, Binary tree ADT, Properties of Binary
Trees, Binary Tree Representations-array and linked representations, Binary Tree traversals,
Threaded binary trees, Binary Heap-Properties,Max and Min Heap, Operations-Insertion
and Deletion.
Search Trees-Binary Search tree, Tree traversals, AVL tree – operations, B-tree –
operations, B+ trees, Red Black tree.
UNIT - IV:
Graphs-Terminology, sequential and linked representation, graph traversals : Depth First
Search & Breadth First Search implementation. Spanning trees, Prim's and Kruskal's
methods.
Searching and Sorting - Linear Search, Binary Search, Insertion Sort, Selection Sort,
Merge Sort, Quick sort, Heap Sort.
UNIT - V:
Hashing-Hash table, Hash table representations, hash functions, collision resolution
techniques-separate chaining, open addressing-linear probing, quadratic probing, double
hashing, rehashing, extendible hashing.
Pattern matching: Introduction, brute force, the Boyer-Moore algorithm, Knuth-Morris-
Pratt algorithm.
Textbooks:
1. Data Structures Using C, Reema Thareja, Oxford University Press, 2011.
2. Introduction to Algorithms, TH Cormen, PHI
References:
1. Data Structures & Algorithm Analysis in C++, Mark Allen Weiss, Pearson
Education.
2. Design methods and analysis of Algorithms, SK Basu, PHI.
3. Fundamentals of Computer Algorithms, 2nd Edition, Ellis Horowitz, Sartaj Sahni,
Sanguthevar Rajasekaran, Universities Press.
Prepared by D.HIMAGIRI
Data Structures
----------------------------------------------------------------------------------------------------------
UNIT-1:
Basic concepts - Algorithm Specification, Data Abstraction , Performance analysis - time
complexity and space complexity, Asymptotic Notation - Big O, Omega and Theta notations,
Introduction to Linear and Non Linear data structures.
Linear list – Singly linked list implementation, insertion, deletion and searching operations on
linear list, Circular linked list implementation, Doubly linked list implementation, insertion,
deletion and searching operations. Applications of linked lists.
---------------------------------------------------------------------------------------------------------------------
Data Structure:
Data Structure is a way to store and organize data in a computer so that it can be used
efficiently.
A data structure is seen as a logical concept that addresses two fundamental concerns:
i) How the data will be stored, and
ii) What operations will be performed on it.
There are two types of data structures-Linear and Non-Linear Data Structures.
Examples: arrays, stacks, queues, trees, etc.
Algorithm Specification
Algorithm:
An Algorithm is a finite set of instructions that, if followed, accomplishes a particular
task.
Characteristics of an Algorithm:
i. INPUT: Zero or more quantities are externally supplied.
ii. OUTPUT: At least one quantity is produced.
iii. DEFINITENESS: Each instruction is clear and unambiguous.
iv. FINITENESS: If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.
v. EFFECTIVENESS: Every instruction must be basic enough that it can, in principle, be carried out by a person using only pencil and paper.
Algorithm Specification....
An algorithm can be described in several ways: in natural language (intended for human reading rather than machine reading), as a graphic representation called a flowchart (this method works well when the algorithm is small and simple), or in pseudocode that resembles a programming language, where there is no need to follow all the syntactic rules of that language.
Data Abstraction:
Abstraction means displaying only essential information and hiding the details. Data
abstraction refers to providing only essential information about the data to the outside world,
hiding the background details or implementation.
Performance Analysis
Performance analysis of an algorithm depends upon two factors i.e. amount of memory used
and amount of compute time consumed on any CPU.
i) Space Complexity: The space complexity of an algorithm is the amount of memory it needs to run to completion. It includes a fixed part (space for the code, simple variables and constants) and a variable part (space whose size depends on the problem instance).
Performance Analysis....
ii) Time Complexity: The time complexity of an algorithm is the amount of computer time it needs to run to completion. It is estimated by counting the number of elementary steps executed.
Example 1:
1. Algorithm Sum(a,n) No. of elementary steps
2.{
3. s=0.0; -------------------------------- 1
4. for i=1 to n do ------------------------- n+1
5. s=s+a[i]; ------------------------------ n
6. return s; ------------------------------- 1
7. }
Total---------------------------- 2n+3
Performance Analysis....
Example 2:
1. Algorithm Add(A,B,n) No. of elementary steps
2.{
3. for(i=0;i< n;i++) ------------------- n+1
4. {
5. for(j=0;j< n;j++)------------------- n*(n+1)
6. {
7. C[i,j]=A[i,j]+B[i,j] ------------ n*n
8. }
9. }
10. }
Total ------------------------- (n+1) + n*(n+1) + n*n = 2n² + 2n + 1
Time Complexity = O(n²)
Space Complexity: fixed components are i, j, n (1 word each) = 3 words;
variable components are A (n*n), B (n*n) and C (n*n) = 3n² words.
Total = 3n² + 3
Space Complexity = O(n²)
Asymptotic Notations
Asymptotic Notations...
Big-O Notation (O): It describes the worst-case scenario; it represents the upper bound on the running time of an algorithm.
If f(n) ≤ c·g(n) for all n ≥ n0, where c > 0 and n0 ≥ 1, then f(n) = O(g(n)).
Asymptotic Notations...
Big-Omega Notation (Ω)
It describes the best-case scenario; it represents the lower bound on the running time of an algorithm.
If f(n) ≥ c·g(n) for all n ≥ n0, where c > 0 and n0 ≥ 1, then f(n) = Ω(g(n)).
Asymptotic Notations...
Big-Theta Notation (Θ)
It represents both an upper and a lower (tight) bound on the running time of an algorithm.
If c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0, where c1, c2 > 0 and n0 ≥ 1, then f(n) = Θ(g(n)).
Linear & Non-linear Data Structure
Linear & Non-linear Data Structure....
Difference between Linear and Non-Linear Data Structures:
In a linear data structure the elements are arranged in a sequence, one after another, and can be traversed in a single run (examples: arrays, stacks, queues, linked lists). In a non-linear data structure the elements are arranged hierarchically or as a network, so a single pass cannot visit them all in sequence (examples: trees, graphs).
Single Linked List
In a single linked list every node contains two fields: a data field and a link field, which is a pointer to the next node (i.e. the address of the next node).
Single Linked List....
The node in a single linked list is declared as
struct node
{
int data;
struct node *next;
};
The last node address field in the single linked list contains NULL.
Operations performed on a single linked list are:
I. Insertion
II. Deletion
III. Searching
IV. Traversing
Single Linked List....
Insertion:
Case 1: The new node is inserted at the beginning
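The figure for this case is not reproduced in the notes; the following is a minimal C sketch, assuming the list is accessed through a head pointer and using the struct node declared above (the function name insertAtBeginning is illustrative):

#include <stdlib.h>

/* Insert a new node with the given value before the current first node;
   returns the new head of the list. Uses the struct node (data, next) above. */
struct node *insertAtBeginning(struct node *head, int value)
{
    struct node *newNode = (struct node *)malloc(sizeof(struct node));
    newNode->data = value;
    newNode->next = head;   /* new node points to the old first node */
    return newNode;         /* new node becomes the first node (head) */
}

The other insertion cases differ only in which existing pointer is redirected to the new node and where the new node's next field is attached.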
Single Linked List....
Case 2: The new node is inserted at the end
Single Linked List....
Case 3: The new node is inserted after a given node
Single Linked List....
Case 4: The new node is inserted before a given node
Single Linked List....
Deletion : Case 1: The first node is deleted
Single Linked List....
Case 3: The node after a given node is deleted
Single Linked List....
Searching:
Single Linked List....
Traversing:
Traversing a linked list means accessing the nodes of the list in order to perform some
processing on them.
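A hedged C sketch of the searching and traversing operations described above, reusing the struct node declaration given earlier (the function names are illustrative):

#include <stdio.h>

/* Traverse the list from the first node and process (here, print) each node. */
void traverseList(struct node *head)
{
    struct node *ptr = head;
    while (ptr != NULL)
    {
        printf("%d ", ptr->data);
        ptr = ptr->next;          /* move to the next node */
    }
}

/* Search for a value; returns a pointer to the node holding it, or NULL. */
struct node *searchList(struct node *head, int value)
{
    struct node *ptr = head;
    while (ptr != NULL)
    {
        if (ptr->data == value)
            return ptr;           /* value found */
        ptr = ptr->next;
    }
    return NULL;                  /* reached the end: value not present */
}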
Doubly Linked List
A doubly linked list or a two-way linked list is a more complex type of linked list which
contains a pointer to the next as well as the previous node in the sequence.
It is a collection of nodes in which each node contains three fields: a pointer to the previous node, the data, and a pointer to the next node.
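The notes do not reproduce the node declaration for a doubly linked list; a minimal C sketch, assuming the field names prev and next, is:

#include <stdlib.h>

struct dnode
{
    struct dnode *prev;   /* pointer to the previous node */
    int data;
    struct dnode *next;   /* pointer to the next node */
};

/* Illustrative insertion at the beginning: the new node becomes the head. */
struct dnode *insertFront(struct dnode *head, int value)
{
    struct dnode *n = (struct dnode *)malloc(sizeof(struct dnode));
    n->data = value;
    n->prev = NULL;
    n->next = head;
    if (head != NULL)
        head->prev = n;   /* old first node now points back to the new node */
    return n;
}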
Operations:
I. Insertion
II. Deletion
III. Searching
IV. Traversing
Insertion:
Case 1: The new node is inserted at the beginning.
Doubly Linked List....
Case 2: The new node is inserted at the end.
Doubly Linked List....
Case 3: The new node is inserted after a given node.
Doubly Linked List....
Case 4: The new node is inserted before a given node.
Doubly Linked List....
Deletion:
Case 1: The first node is deleted.
Doubly Linked List....
Deletion:
Case 2: The last node is deleted.
Doubly Linked List....
Deletion:
Case 3: The node after a given node is deleted.
Doubly Linked List....
Deletion:
Case 4: The node before a given node is deleted.
Circular Linked List
Circular Linked List Types:
I. Circular Single Linked List
II. Circular Doubly Linked List
Circular Single Linked List:
In a circular single linked list, the last node contains a pointer to the first node of the list, i.e. the address field of the last node contains the address of the first node.
Operations:
I. Insertion
II. Deletion
III. Searching
IV. Traversing
Circular Single Linked List.....
Insertion:
Case 1: The new node is inserted at the beginning of the circular linked list.
Circular Single Linked List.....
Insertion:
Case 2: The new node is inserted at the end of the circular linked list.
Circular Single Linked List.....
Deletion:
Case 1: The first node is deleted.
Circular Single Linked List.....
Deletion:
Case 2: The last node is deleted.
Circular Doubly Linked List
Circular Doubly Linked List:
A circular doubly linked list does not contain NULL in any of its nodes.
The last node NEXT field of the list contains the address of the first node of the list.
The first node PREV field of the list contain address of the last node of the list.
Operations:
I. Insertion
II. Deletion
III. Searching
IV. Traversing
Implementation is more complex than other linked lists.
Applications of Linked Lists
Polynomial Representation
************
UNIT - II
----------------------------------------------------------------------------------------------------------
Stacks-Operations, array and linked representations of stacks, stack applications-infix
to postfix conversion, postfix expression evaluation, recursion implementation.
Queues-operations, array and linked representations. Circular Queue operations,
Dequeue, applications of queue.
----------------------------------------------------------------------------------------------------------
Stack :
A stack is a linear data structure, which works on the principle of LIFO/FILO.
Stack is closed at one end and opened at other end.
The elements can be added and removed from the stack only at the top(opened end).
Basic Operations performed on stack are:
PUSH-Insertion
POP-Deletion
When an element is pushed into the stack, the TOP value is incremented by 1, i.e. TOP++.
When an element is popped from the stack, the TOP value is decremented by 1, i.e. TOP--.
Here, the maximum size (MAX) of the stack is 10, i.e. it can hold up to 10 elements.
Basic Operations performed on stack implemented using arrays are:
PUSH
POP
PEEK
Stack Implementation using Array…
PUSH:
The push operation is used to insert an element into the stack. The new element is added at
the topmost position of the stack.
While inserting the value, we must first check if TOP = MAX - 1, because if that is the case, the stack is full and no more insertions can be done. This condition is called stack overflow.
If the above condition fails, TOP is incremented by 1 and the new element is stored at that position.
Stack Implementation using Array…
POP:
The POP operation is used to delete an element from top of the stack.
Before deleting an element we have to check whether TOP = -1; if that is the case, the POP operation is not possible. This condition is called stack underflow.
Otherwise, the top element is removed from the stack and TOP is decremented by 1.
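A minimal C sketch of the PUSH and POP operations just described, assuming a global array stack[MAX] and a TOP variable initialized to -1 (names follow the notes; the exact code is illustrative):

#include <stdio.h>
#define MAX 10

int stack[MAX];
int top = -1;                  /* -1 means the stack is empty */

void push(int value)
{
    if (top == MAX - 1)        /* stack overflow check */
        printf("Stack Overflow\n");
    else
        stack[++top] = value;  /* increment TOP, then store the element */
}

int pop(void)
{
    if (top == -1)             /* stack underflow check */
    {
        printf("Stack Underflow\n");
        return -1;             /* illustrative error value */
    }
    return stack[top--];       /* return the top element, then decrement TOP */
}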
Stack Implementation using Array…
PEEK:
PEEK is an operation that returns the value of the topmost element of the stack without
deleting it from the stack.
If we perform the PEEK operation on the stack, it first checks whether the stack is empty; if the stack is not empty it returns the topmost element (in the example above, 5), otherwise it reports that the stack is empty.
Stack Implementation using Linked List
All insertions and deletions are done at the node pointed by TOP.
A linked stack supports all the three stack operations, that is, push, pop, and peek.
Stack Implementation using Linked List….
PUSH:
The push operation is used to insert an element into the stack. The new element is
added at the topmost position of the stack.
(Figure: pushing 5 and then 6 onto a linked stack; each push creates a new node and TOP points to the most recently pushed node.)
Stack Implementation using Linked List….
POP
The pop operation is used to delete the topmost element from a stack.
Before deleting the element, we must first check
if TOP=NULL (stack empty).
If an attempt is made to delete a value from a stack
that is already empty, an Underflow message is printed.
If TOP != NULL, the topmost element is deleted from the stack; in the example above, element 1 is deleted.
Stack Implementation using Linked List….
PEEK
It is used to print the top element of stack.
If TOP=NULL returns stack is empty, else prints the top element in the linked stack.
If Peek operation is performed on the above stack ,it will return element 1.
Step 1:if(top==NULL)
return “stack is empty”;
Step 2: else
return top ->data;
Applications of Stack
1. Reversing a list:
The elements of the list (e.g. 10, 30, 15, 20) are pushed onto the stack one by one and then popped off; because the stack is LIFO, they come off in the opposite order.
Reversed list: 20 15 30 10
Applications of Stack…
2. Parentheses checker :
Stacks can be used to check the validity of parentheses in any algebraic expression.
An algebraic expression is valid if for every open bracket there is a corresponding
closing bracket. Example: { A + ( B – C ) }
Here we scan the expression from left to right: when an opening bracket is read, it is pushed onto the stack; when a closing bracket is read, the corresponding opening bracket is popped from the stack.
Finally, if the stack is empty, the parentheses are well balanced; otherwise they are not.
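A hedged C sketch of the checker just described, handling only the bracket characters ( ) { } [ ] (the function name and details are illustrative):

#include <stdio.h>

int isBalanced(const char *expr)
{
    char stack[100];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++)
    {
        char c = expr[i];
        if (c == '(' || c == '{' || c == '[')
            stack[++top] = c;                 /* push every opening bracket */
        else if (c == ')' || c == '}' || c == ']')
        {
            if (top == -1)
                return 0;                     /* closing bracket with no match */
            char open = stack[top--];         /* pop the matching opening bracket */
            if ((c == ')' && open != '(') ||
                (c == '}' && open != '{') ||
                (c == ']' && open != '['))
                return 0;
        }
    }
    return top == -1;                         /* balanced only if stack is empty */
}

int main(void)
{
    printf("%d\n", isBalanced("{A+(B-C)}"));  /* prints 1: well balanced */
    return 0;
}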
Conversion of an infix expression into a postfix expression...
Example: Infix Expression: A – (B / C + (D % E * F) / G)* H
Step 1: A – (B / C + (D % E * F) / G)* H )
Steps 2-5:
Postfix Expression :A B C / D E F * % G / + H * -
Applications of Stack…
3. Evaluation of a postfix expression
The computer first converts the infix expression into the equivalent postfix notation
and then evaluates the postfix expression.
Both these tasks—converting the infix notation into postfix notation and evaluating
the postfix expression—make extensive use of stacks as the primary tool.
Example:The infix expression 9 – ((3 * 4) + 8) / 4 can be written as 9 3 4 * 8 + 4 / – in
postfix notation.
Algorithm for Evaluation of a postfix expression :
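The algorithm itself is not reproduced in these notes; the sketch below, assuming single-digit operands and the operators + - * /, shows the standard stack-based evaluation. For the postfix string "934*8+4/-" (from the example above) it prints 4.

#include <stdio.h>
#include <ctype.h>

double evaluatePostfix(const char *expr)
{
    double stack[100];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++)
    {
        char c = expr[i];
        if (isdigit((unsigned char)c))
            stack[++top] = c - '0';          /* operand: push its value */
        else if (c == '+' || c == '-' || c == '*' || c == '/')
        {
            double b = stack[top--];         /* right operand */
            double a = stack[top--];         /* left operand  */
            if (c == '+') stack[++top] = a + b;
            if (c == '-') stack[++top] = a - b;
            if (c == '*') stack[++top] = a * b;
            if (c == '/') stack[++top] = a / b;
        }
    }
    return stack[top];                       /* final result is left on the stack */
}

int main(void)
{
    printf("%.0f\n", evaluatePostfix("934*8+4/-"));  /* prints 4 */
    return 0;
}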
Applications of Stack…
4. Conversion of an infix expression into a prefix expression
Algorithm:
Example:
Infix expression: (A – B / C) * (A / K – L)
Step 1: Reverse the infix string.
while reversing the string we
must interchange left and right
parentheses.
(L – K / A) * (C / B – A)
Step 2: Obtain the corresponding postfix
expression of the infix expression
obtained as a result of Step 1.
(L – K / A) * (C / B – A)
= (L-(K/A))*(C/B-A) = (L-(K/A))*((C/B)-A) = ( (L-( K / A ) )*( ( C/B )-A ) )
Postfix Expression is : L K A / – C B / A – *
Step 3: Reverse the postfix expression to get the prefix expression
Therefore, the prefix expression is: * – A / B C – / A K L
Applications of Stack…
5. Evaluation of a prefix expression
Applications of Stack…
6. Recursion
The process in which a function calls itself directly or indirectly is called recursion and
the corresponding function is called as recursive function.
Since a recursive function repeatedly calls itself, it makes use of the system stack to
temporarily store the return address and local variables of the calling function.
Every recursive solution has two major cases. They are
1. Base case: In which the problem is simple enough to be solved directly without
making any further calls to the same function.
2. Recursive case: In which first the problem at hand is divided into simpler sub-parts.
Second the function calls itself but with sub-parts of the problem obtained in the first
step. Third, the result is obtained by combining the solutions of simpler sub-parts.
For the factorial function,
1. Base case is when n = 1, because if n = 1,
the result will be 1 as 1! = 1.
2. Recursive case of the factorial function will
call itself but with a smaller value of n, this
case can be given as Fact(n) = n × Fact (n–1).
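A minimal C version of the factorial function described above; each pending call stays on the system stack until its sub-result returns.

#include <stdio.h>

/* Recursive factorial: base case n <= 1, recursive case n * fact(n - 1). */
long fact(int n)
{
    if (n <= 1)
        return 1;              /* base case */
    return n * fact(n - 1);    /* recursive case */
}

int main(void)
{
    printf("%ld\n", fact(5));  /* prints 120 */
    return 0;
}

For example, fact(5) expands to 5 * fact(4), and so on down to the base case fact(1) = 1; the pending multiplications are then completed as the calls return.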
Queue
Queue:
A Queue is a linear data structure that works on the principle of FIFO.
The elements can be inserted at the rear end and deleted from the front end.
A queue maintains two variables, FRONT and REAR; their initial values are -1.
Basic operations performed on a queue are:
Insertion - inserts an element at the rear end of the queue.
Deletion - deletes an element from the front end of the queue.
Queue…
Implementation of queue:
Array Implementation of queue
Linked List Implementation of queue
Array Implementation of queue:
Queues can be represented using linear arrays.
Queue Operations:
Insertion:
This operation is used to insert an element into the queue at the rear end. Before inserting an element we have to check whether the queue is full (REAR = MAX - 1). If the queue is already full, an overflow message is displayed and no more insertions can be done; otherwise REAR is incremented by 1 and the element is stored at the new REAR position.
Array Implementation of Queue …
Deletion:
This operation is used to delete an element from front end of the queue.
While deleting an element, we have to check for the underflow condition. An underflow occurs if FRONT = -1 or FRONT > REAR, i.e. we cannot delete an element from an empty queue.
When an element is deleted, Front is incremented by 1.
Rear value does not change while deleting an element from queue.
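As a hedged illustration of the insert and delete logic described above, here is a minimal C sketch assuming a global array queue[MAX] with FRONT and REAR initialized to -1 (names follow the notes):

#include <stdio.h>
#define MAX 10

int queue[MAX];
int front = -1, rear = -1;

void insert(int value)
{
    if (rear == MAX - 1)              /* overflow: no space at the rear */
        printf("Queue Overflow\n");
    else
    {
        if (front == -1)
            front = 0;                /* first insertion into an empty queue */
        queue[++rear] = value;
    }
}

int deleteElement(void)
{
    if (front == -1 || front > rear)  /* underflow: queue is empty */
    {
        printf("Queue Underflow\n");
        return -1;                    /* illustrative error value */
    }
    return queue[front++];            /* FRONT moves forward, REAR is unchanged */
}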
Example:
Linked List Implementation of Queue
Insertion:
The new element is added as the last element of the queue.
Initially FRONT=NULL and REAR = NULL.
if FRONT=NULL, then the queue is empty.
There is no overflow condition in linked list implementation of queue.
Example:
Linked List Implementation of Queue
Deletion:
The delete operation is used to delete the element that is first inserted in a queue, i.e.,
the element whose address is stored in FRONT.
Before deleting the value, we must first check if FRONT=NULL because if this is the
case, then the queue is empty and no more deletions can be done.
If an attempt is made to delete a value from a queue that is already empty, an underflow message is printed.
Example:
Types of Queue
Circular Queue:
In linear queues, insertions can be done only at one end called the REAR and deletions
are always done from the other end called the FRONT.
Suppose we delete the first two elements, i.e. 54 and 9, from the above queue; we get
After deletion, the space cannot be reused for inserting the new elements even though
there is space available, the overflow condition still exists because the condition REAR
= MAX – 1 still holds true. This is a major drawback of a linear queue.
To resolve this problem, we have two solutions.
First, shift the elements to the left so that the vacant space can be occupied and
utilized efficiently. But this can be very time-consuming, especially when the
queue is quite large.
Second, to use a circular queue.
Circular Queue…
In the circular queue, the first index comes right after the last index.
The circular queue will be full only when
FRONT = 0 and REAR = Max – 1.
Operations performed on circular queue are
Insertion
Deletion
Insertion:
For insertion, we now have to check for the following three conditions:
1. If front=0 and Rear=MAX-1, then circular queue is full.
2. If REAR != MAX – 1 , then REAR will be incremented and the value will be
inserted.
Circular Queue…
3. If FRONT != 0 and REAR = MAX – 1, then it means that the queue is not full. So, set
REAR = 0 and insert the new element there.
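A minimal C sketch of the insertion logic following the three conditions above; the array name cqueue and the empty-queue initialization FRONT = REAR = -1 are assumptions consistent with the earlier queue discussion.

#include <stdio.h>
#define MAX 10

int cqueue[MAX];
int front = -1, rear = -1;

void cqInsert(int value)
{
    if (front == 0 && rear == MAX - 1)
        printf("Circular Queue Overflow\n");   /* condition 1: queue is full */
    else if (front == -1)
    {
        front = rear = 0;                      /* first element into an empty queue */
        cqueue[rear] = value;
    }
    else if (rear != MAX - 1)
        cqueue[++rear] = value;                /* condition 2: simply advance REAR */
    else
    {
        rear = 0;                              /* condition 3: wrap REAR around to 0 */
        cqueue[rear] = value;
    }
}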
Circular Queue…
Deletion:
To delete an element, we check for following three conditions:
1. If FRONT = –1, then there are no elements in the queue. This condition is called the underflow condition.
2. If FRONT = REAR, then after deleting the element the queue becomes empty, so set FRONT = REAR = –1.
3. If the queue is not empty and FRONT = MAX – 1, then after deleting the element at the front, FRONT is set to 0.
Dequeue
Dequeue:
A Dequeue is a list in which the elements can be inserted or deleted at either end.
It is also known as a head-tail linked list because elements can be added to or removed
from either the front (head) or the back (tail) end.
Types of Dequeue:
1. Input Restricted Dequeue : In this dequeue insertions can be done only at
one end but deletions can be done from both the ends.
2. Output Restricted Dequeue : In this dequeue deletions can be done only at
one end but insertions can be done on both the ends.
Queue Applications
Applications of Queue are:
1. Queues are widely used as waiting lists for a single shared resource like printer, disk,
CPU.
2. Queues are used to transfer data asynchronously (data not necessarily received at same
rate as sent) between two processes.
3. Queues are used as buffers on MP3 players and portable CD players, iPod playlist.
4. Queues are used in Playlist for jukebox to add songs to the end, play from the front of
the list.
5. Queues are used in operating system for handling interrupts. When programming a real-
time system that can be interrupted, for example, by a mouse click, it is necessary to
process the interrupts immediately, before proceeding with the current job. If the
interrupts have to be handled in the order of arrival, then a FIFO queue is the appropriate
data structure.
*******
UNIT - III
----------------------------------------------------------------------------------------------------------
Trees – Terminology, Representation of Trees, Binary tree ADT, Properties of Binary Trees,
Binary Tree Representations-array and linked representations, Binary Tree traversals, Threaded
binary trees, Binary Heap-Properties,Max and Min Heap, Operations-Insertion and Deletion.
Search Trees-Binary Search tree, Tree traversals, AVL tree – operations, B-tree – operations, B+
trees, Red Black tree.
----------------------------------------------------------------------------------------------------------
Tree:
A tree is a non-linear, hierarchical data structure that consists of nodes connected by edges.
Why Tree Data Structure?
Other data structures such as arrays, linked lists, stacks, and queues are linear data structures that store data sequentially. In a linear data structure, the time taken by many operations grows with the size of the data, which is often unacceptable in today's computational world.
Tree data structures, being non-linear, allow quicker and easier access to the data.
Tree Terminology
Node-A node is an entity that contains a key or value and pointers/link to its child nodes.
Edge-It is the link between any two nodes.
Root -The node at the top/start of the tree is called root. There is only one root per tree.
Parent - Any node except the root node has one edge upward to a node called parent.
Child - The node below a given node connected by its edge downward is called its child node.
Leaf - The node which does not have any child node is called the leaf node.
Subtree -Subtree represents the descendants of a node.
Levels -Level of a node represents the generation of a node. If the root node is at level 0, then
its next child node is at level 1, its grandchild is at level 2, and so on.
Siblings: if two or more nodes have the same parent in a tree, they are called siblings.
Tree Terminology...
Height of a Node - The height of a node is the number of edges from the node to the deepest leaf (i.e. the longest path from the node down to a leaf node).
Depth of a Node - The depth of a node is the number of edges from the root to the node.
Height of a Tree - The height of a tree is the height of its root node, which equals the depth of its deepest node (the maximum height/depth over all nodes). Hence the height of a tree is equal to the depth of the tree.
Types of Trees
Different types of tree are:
1. Binary Tree
I. Strict Binary Tree
II. Complete Binary Tree
III. Perfect Binary Tree
IV. Extended Binary Tree
2. Expression Tree
3. Threaded Binary Tree
4. Binary Heap
5. Search Trees
I. Binary Search Tree(BST)
II. AVL Tree
III. Red Black Tree
IV. B-Tree
V. B+-Tree
Binary Tree
A binary tree is a tree which is either empty or in which each node has at most two child nodes, i.e. each node has 0, 1 or 2 children.
Binary Tree Representations:
1. Array Representation
2. Linked List Representation
Array Representation of Binary Tree
Binary tree is stored in a single array.
Numbers are assigned to the nodes level by level, from the leftmost to the rightmost node; the root node is always assigned number 1.
For a node at position K, the left child is placed at position 2*K and the right child at position 2*K + 1.
The size of the array depends on the depth d of the tree, i.e. up to 2^(d+1) - 1 positions are needed (root at depth 0).
Example: for the tree with root A, children B and C, grandchildren D, E, F, G, and D's children H and I, the array is
Position: 1 2 3 4 5 6 7 8 9
Element:  A B C D E F G H I
Note: if the root is placed at position 0, then
Left child  = [2 * K + 1]
Right child = [2 * K + 2]
Binary Tree Representations...
Linked List Representation of Binary Tree:
Each node contains three fields: left child address, data, and right child address.
If a node does not have a left/right child, then the corresponding address field contains NULL.
Node of a binary tree is declared as:
struct node
{
    struct node *left;
    int data;
    struct node *right;
};
E.g. a tree with root A, whose children are B and C, and whose grandchildren are D, E, F and G.
Types of Binary Tree
1. Strict Binary Tree:
Each node in the tree is either a leaf or has exactly two children.
Types of Binary Tree...
2.Complete Binary Tree:
A complete binary tree is a binary tree in which all the levels are completely filled except
possibly the lowest one, which is filled from the left.
Types of Binary Tree...
4.Extended Binary Tree:
An extended binary tree is a type of binary tree in which all the null subtrees of the original tree are replaced with special nodes called external nodes, whereas the other nodes are called internal nodes.
Properties of an extended binary tree:
1. The nodes from the original tree are internal nodes and the special nodes are external nodes.
2. All external nodes are leaf nodes and the internal nodes are non-leaf nodes.
3. Every internal node has exactly two children and every external node is a leaf.
Example:
Binary Tree Traversals
Binary Tree Traversal
Traversal is a process to visit all the nodes of a tree and print their values . Because, all
nodes are connected via edges (links) we always start from the root node.
Different types of tree traversals are:
1. In-order Traversal
2. Pre-order Traversal
3. Post-order Traversal
1. In-order Traversal (LDR)
1. First, visit all the nodes in the left sub tree.
2. Then the root node.
3. Visit all the nodes in the right sub tree.
In-order: D → B → E → A → F → C → G
Binary Tree Traversals
2. Pre-order Traversal (DLR)
1. Visit the root node.
2. Then visit all the nodes in the left sub tree.
3. Visit all the nodes in the right sub tree.
Pre -order: A → B → D → E → C → F → G
3.Postorder traversal
1. Visit all the nodes in the left subtree.
2. Visit all the nodes in the right subtree.
3. Visit the root node.
Post -order: D → E → B → F → G → C → A
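A hedged C sketch of the three recursive traversals, reusing the struct node declaration (left, data, right) shown earlier:

#include <stdio.h>

void inorder(struct node *root)        /* LDR */
{
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

void preorder(struct node *root)       /* DLR */
{
    if (root == NULL) return;
    printf("%d ", root->data);
    preorder(root->left);
    preorder(root->right);
}

void postorder(struct node *root)      /* LRD */
{
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->data);
}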
Expression Tree
Expression tree is a binary tree in which each internal node corresponds to operator and each
leaf node corresponds to operand.
Example: Expression tree for 3 + ((5+9)*2)
Steps for construction of Expression tree for a given
expression:
1. Convert the given expression to post fix expression.
2. Scan the postfix expression from left to right one character
at a time.
i. If character is operand push that into stack.
ii. If character is operator pop two values from stack make them its child and push
current node again.
3. At the end only element of stack will be root of expression tree.
Problem: Expression is (a+b)*c
1. The postfix is: a b + c *
2. The first symbol ‘a’ is operand, it is pushed onto the stack
Expression Tree...
Next symbol ‘b’ is an operand , insert ‘b’ onto stack
Next, '+' symbol, so topmost two operands are popped , a new tree is formed and push a
pointer to it onto the stack.
Next, 'c' is read, we create one node tree and push a pointer to it onto the stack.
Expression Tree...
Finally, the last symbol is read ' * ', we pop topmost two tree pointers and form a new tree
with ' * ' as root, and a pointer to the final tree is pushed on the stack.
Pop the only element from the stack , we get an expression tree
Threaded Binary Tree
Threaded Binary Tree:
Binary tree is said to be threaded by making all right child pointers that would normally be
null point to the inorder successor of the node (if it exists), and all left child pointers that
would normally be null point to the inorder predecessor of the node .
Why do we need Threaded Binary Tree?
1. Binary trees have a lot of wasted space, the leaf nodes each have 2 null pointers. We can
use these pointers to help us in inorder traversals.
2. Threaded binary tree makes the tree traversal faster since we do not need stack or
recursion for traversal.
Example:
Threaded Binary Tree...
Types of threaded binary tree
1. Right threaded binary tree
The right NULL pointer of the node
is replaced by a thread to the inorder
successor of that node is called
as right threaded binary tree.
2. Left threaded binary tree
The left NULL pointer of a node is replaced by a thread to the inorder predecessor of that node; this is called a left threaded binary tree.
3. Fully threaded binary tree
Both left and right NULL pointers can
be used to point to Inorder predecessor and
successor of that node respectively,
is called a fully threaded binary tree.
Threaded Binary Tree...
Construct threaded binary tree for following binary tree
1. Write the inorder traversal for the given binary tree
Inorder is D B H E A F C G
2. Represent the binary tree using linked list representation
3. Link the left and right NULL pointers to inorder Successor or predecessor respectively
as shown below
Binary Heap
A binary tree is said to be binary heap if it follows the following two properties:
1. Heap order property
Every node is less than or equal to its children (Min Heap) or every node is greater than
or equal to its children(Max Heap).
2. Structure property
The tree is completely filled except possibly the bottom level, which is filled from left
to right.
Binary Heap...
Operations:
1. Insertion:
Steps:
1. First increase the heap size by 1, so that it can store the new element.
2. Insert the new element at the end of the Heap.
3. This newly inserted element may distort the properties of Heap for its parents. So, in
order to keep the properties of Heap, heapify this newly inserted element following a
bottom-up approach.
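A minimal C sketch of inserting into a max heap stored in a 1-indexed array (as in the array representation of a binary tree discussed earlier); the names heap and heapSize are illustrative:

int heap[100];
int heapSize = 0;                /* heap occupies positions 1 .. heapSize */

void heapInsert(int value)
{
    heap[++heapSize] = value;            /* steps 1-2: place the new element at the end */
    int i = heapSize;
    while (i > 1 && heap[i / 2] < heap[i])   /* step 3: bubble up while the parent is smaller */
    {
        int tmp = heap[i / 2];
        heap[i / 2] = heap[i];
        heap[i] = tmp;
        i = i / 2;                       /* move up to the parent position */
    }
}

For a min heap the comparison is simply reversed.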
Binary Heap...
2. Deletion:
1. Delete the root element.
2. Replace the root by the last element of the heap.
3. Delete the last element from the heap.
4. The element now placed at the root position may not satisfy the heap property, so heapify (sift down) the node at the root.
Binary Search Tree(BST)
Binary Search Tree:
A binary search tree, also known as an ordered binary tree, in which all the nodes in the left
sub-tree have a value less than that of the root node. Correspondingly, all the nodes in the
right sub-tree have a value either equal to or greater than the root node. The same rule is
applicable to every sub-tree in the tree.
Example :
Operations :
1. Insert: insert a node in the BST.
2. Search: Searches for a node in the BST.
3. Delete: deletes a node from the BST.
Binary Search Tree(BST)...
Insert:
In order to insert an element, first locate its proper position. Start searching from the root node; if the data is less than the key value, search for an empty location in the left sub tree and insert the data there. Otherwise, search for an empty location in the right sub tree and insert the data there.
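A hedged recursive C sketch of BST insertion as described above, using the struct node (left, data, right) declared earlier; duplicates are sent to the right subtree here, matching the "equal to or greater than" rule.

#include <stdlib.h>

struct node *bstInsert(struct node *root, int value)
{
    if (root == NULL)                         /* empty spot found: create the node here */
    {
        struct node *n = (struct node *)malloc(sizeof(struct node));
        n->data = value;
        n->left = n->right = NULL;
        return n;
    }
    if (value < root->data)
        root->left = bstInsert(root->left, value);    /* go into the left subtree */
    else
        root->right = bstInsert(root->right, value);  /* go into the right subtree */
    return root;
}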
Binary Search Tree(BST)...
Search:
In order to search for an element in a BST, start from the root node. If the data is less than the key value, search for the element in the left sub tree; otherwise, search in the right sub tree. Follow the same procedure at each node.
Example: Search for element 12
Binary Search Tree(BST)...
Delete:
Deleting a node from Binary search tree includes following three cases:
Case 1: Deleting a Leaf node (A node with no children)
Case 2: Deleting a node with one child
Case 3: Deleting a node with two children
Binary Search Tree(BST)...
Case 2: Deleting a node with one child
In the second case, the node to be deleted has a single child node. In such a case follow the steps below:
1. Replace that node with its child node.
2. Remove the child node from its original position.
Binary Search Tree(BST)...
Case 3: Deleting a node with two children
In the third case, the node to be deleted has two children. In such a case follow the steps below:
1. Replace the node’s value with its in-order predecessor (largest value in the left sub-tree)
or in-order successor (smallest value in the right sub-tree).
2. The in-order predecessor or the successor can then be deleted using case1 or case2.
AVL Tree
AVL Tree:
It is a self-balancing binary search tree invented by G.M. Adelson-Velsky and E.M.
Landis in 1962.
The structure of an AVL tree is the same as that of a binary search tree, in addition to this
every node in this tree is associated with balance factor.
The balance factor of a node is calculated by subtracting the height of its right sub-tree
from the height of its left sub-tree.
Balance factor = Height (left sub-tree) – Height (right sub-tree)
A binary search tree in which every node has a balance factor of –1, 0, or 1 is said to be
height balanced. A node with any other balance factor is considered to be unbalanced and
requires rebalancing of the tree.
(Figures: an example AVL tree and a tree that is not an AVL tree.)
AVL Tree...
The Unbalanced node in the AVL tree is called critical node.
In order to balance the unbalanced tree/node we have to make rotations .Four types of rotations
are:
1. LL rotation: The new node is inserted in the left sub-tree of the left sub-tree of the
critical node.
AVL Tree...
LL Rotation Example:
2.RR Rotation: The new node is inserted in the right sub-tree of the right sub-tree of the
critical node.
AVL Tree...
RR Rotation Example
AVL Tree...
3.LR Rotation :The new node is inserted in the right sub-tree of the left sub-tree of the critical
node.
AVL Tree...
LR Rotation Example
AVL Tree...
4.RL Rotation: The new node is inserted in the left sub-tree of the right sub-tree of the
critical node.
AVL Tree...
RL Rotation Example
AVL Tree...
Operations:
1. Insertion
2. Deletion
3. Searching ( Search operation in AVL is same as BST)
1. Insertion:
Insertion in an AVL tree is done in the same way as it is done in a binary search tree.
In the AVL tree, the new node is always inserted as the leaf node.
After insertion the balance factor of each node has to be updated. If any unbalance node
is found then rotation has to be performed to balance the tree.
Example:
Construct an AVL tree for the elements: H, I, J, B, A, E, C, F, D, G, K, L
Insert H Insert I
AVL Tree...
Insert J
Insert B
AVL Tree...
Insert A
Insert E
AVL Tree...
Insert C Insert F
Insert D
AVL Tree...
Insert G
Insert K
AVL Tree...
2. Deletion:
Deleting a node from an AVL tree is similar to that in a binary search tree. Deletion may
disturb the balance factor of an AVL tree and therefore the tree needs to be rebalanced in
order to maintain the AVLness. For this purpose, we need to perform rotations. The two types
of rotations are L rotation and R rotation.(L rotations are mirror images of R Rotations)
R rotations are
1. R0 rotation (Node B has balance factor 0 )
AVL Tree...
R0 Rotation Example
AVL Tree...
R0 Rotation Example
AVL Tree...
2. R1 Rotation (Node B has balance factor 1)
Example
AVL Tree...
3. R-1 Rotation (Node B has balance factor -1)
Example:
Red Black Tree
Red Black Tree:
It is a self balanced Binary Search Tree in which every node is colored either RED or
BLACK .
In Red Black Tree, the color of a node is decided based on the properties of Red-Black Tree.
Every Red Black Tree has the following properties.
Properties of Red Black Tree
1. Red - Black Tree must be a Binary Search Tree.
2. The ROOT node must be colored BLACK.
3. The children of Red colored node must be colored BLACK. (There should not be two
consecutive RED nodes).
4. In all the paths of the tree, there should be same number of BLACK colored nodes.
5. Every new node must be inserted with RED color.
6. Every leaf (i.e. NULL node) must be colored BLACK.
Red Black Tree...
Operations:
1.Insertion:
Insertion in red black tree is same as BST.
The color of inserted node is Red.
In the Red Black tree above, g is the grandparent of node x, p is the parent of x and u is the uncle of x, where x is the newly inserted node.
In Red-Black tree, we use two tools to do balancing.
1) Recoloring
2) Rotation
If uncle is red, we do recoloring.
If uncle is black, we do rotations and/or recoloring.
Color of a NULL node is considered as BLACK.
Red Black Tree...
Insertion Algorithm:
Let x be the newly inserted node; newly inserted nodes are colored RED.
Case 1: If x is the root, change the color of x to BLACK.
Case 2: If x's parent is not BLACK and x is not the root: if x's uncle is RED, recolor the parent and uncle BLACK and the grandparent RED, then repeat the procedure for the grandparent.
Red Black Tree...
Case 3: If x’s uncle is BLACK or no uncle, The possible four configurations are
1. Left Left Case (p is left child of g and x is left child of p).
i. Right Rotate Grand parent g
ii. Swap colors of g and p
Red Black Tree...
2. Right Right Case (p is right child of g and x is right child of p).
i. Left Rotate Grand parent g
ii. Swap colors of g and p
Red Black Tree...
3. Left Right Case (p is left child of g and x is right child of p).
i. Left Rotate p
ii. Apply Left-Left Case
Red Black Tree...
4. Right Left Case (p is right child of g and x is left child of p).
i. Right Rotate p
ii. Apply Right-Right Case
Red Black Tree...
Problem: Insert the following elements into Red black tree
2, 1, 4, 5, 9, 3, 6, 7
Red Black Tree...
Deletion:
Important points to be remembered during deletion:
1. Red Black tree follows BST Deletion
2. In this, only leaf nodes are actually deleted; nodes with children are replaced.
3. In case node to be replaced has two children then replace it with closest predecessor
from left sub tree.
4. In case node to be replaced has one child then replace it with its child.
5. In case node to be replaced has no child then replace it with NULL node.
6. If black node replaces black node then it will become double black.
7. If red node replaces black node then it will become single black.
Deletion Cases:
Case 1:
i) If the deleted node D is RED, delete it directly and do nothing.
Red Black Tree...
ii) If the deleted node D is not RED, then a double black (DB) exists.
Case 2: If DB is the root node, then remove DB directly and make it single black.
Case 3:
3.1 if DB’s sibling is red
i. Rotate P in DB direction
ii. Swap colors of P and S
iii. Reapply cases if needed.
Red Black Tree...
3.2 If DB's sibling S is black and both of its children are black (SL and ST are black):
i. Remove DB.
ii. Add black to P: if P is RED it becomes single black; if P is already BLACK it becomes double black.
iii. Make S RED.
iv. If DB still exists, follow the other cases.
Red Black Tree...
3.3 If DB's sibling is black, the inline child SL is black and ST is RED:
i. Swap the colors of S and ST.
ii. Rotate S in the opposite direction to DB.
iii. Follow case 3.4.
Red Black Tree...
3.4 If DB's sibling is black and the inline child SL is RED:
i. Swap the colors of P and S.
ii. Rotate P in the same direction as DB.
iii. Remove DB.
iv. Change the color of SL to BLACK.
Red Black Tree...
Deletion Example: Delete 50,20,100,90,40,60,10,30,70,80 from the following RB Tree
Delete 50
Case3.1
Red Black Tree...
Deletion Example: Delete 50,20,100,90 from the following RB Tree
Case3.2
Delete 20
Case3.2
Red Black Tree...
Deletion Example: Delete 50,20,100,90 from the following RB Tree
Case3.2 Case 2
Delete 100
Case 1
Red Black Tree...
Deletion Example: Delete 50,20,100,90 from the following RB Tree
Delete 90
Red Black Tree...
Deletion Example: Delete 50,20,100,90 from the following RB Tree
Case 3.4 - swap the colors of S and P, then rotate P in the same direction as DB.
B Tree
B-Tree is a M-way search tree that can be widely used for disk access.
One of the main reason of using B tree is its capability to store large number of keys in a
single node by keeping the height of the tree relatively small.
Properties of B-Tree:
1. A B-tree of order M has at most M children per node.
2. Every node in a B-tree except the root and the leaf nodes has at least ⌈M/2⌉ children.
3. All leaves are at the same level.
4. A node may contain a maximum of M - 1 keys and a minimum of ⌈M/2⌉ - 1 keys.
5. All keys of a node are sorted in increasing order. The child between two keys k1 and k2 contains all keys in the range k1 to k2.
6. The root has at least two children if it is not a leaf node.
B Tree...
Operations :
Searching: Searching in B-tree is similar to BST
Insertion:
In a B-Tree, a new element must be added only at the leaf node. That means, the new key
value is always attached to the leaf node only. The insertion operation is performed as
follows...
Step 1 - Check whether tree is Empty.
Step 2 - If tree is Empty, then create a new node with new key value and insert it into the
tree as a root node.
Step 3 - If tree is Not Empty, then find the suitable leaf node to which the new key value is
added using Binary Search Tree logic.
Step 4 - If that leaf node has empty position, add the new key value to that leaf node in
ascending order of key value within the node.
Step 5 - If that leaf node is already full, split that leaf node by sending middle value to its
parent node. Repeat the same until the sending value is fixed into a node.
Step 6 - If the splitting is performed at root node then the middle value becomes new root
node for the tree and the height of the tree is increased by one.
B Tree...
Insert the following elements into a B-tree of order 3
1,2,3,4,5,6,7,8,9,10
Max no. of keys =M-1= 3-1=2
Min no. of keys = ⌈M/2⌉ -1 =3/2-1=2-1=1
Min no. of children = ⌈M/2⌉ =3/2=2
Insert 1
Insert 2
Insert 3
B Tree...
Insert 4
Insert 5
Insert 6
B- Tree...
Insert 7
Insert 8
B Tree...
Insert 9
Insert 10
B Tree...
Finally, B-tree is
B Tree...
Deletion:
Deleting an element on a B-tree consists of three main events: searching the node where
the key to be deleted exists, deleting the key and balancing the tree if required.
While deleting a key, a condition called underflow may occur. Underflow occurs when a node contains less than the minimum number of keys it should hold.
There are three main cases for deletion operation in a B tree.
Case I: The key to be deleted lies in a leaf node. The following cases arise.
i) The deletion of the key does not violate the property of the minimum number of
keys.
B Tree...
ii) The deletion of the key violates the property of the minimum number of keys.
In this case, we borrow a key from its immediate neighbouring sibling node in the order of
left to right. First, visit the immediate left sibling. If the left sibling node has more than a
minimum number of keys, then borrow a key from this node. Else, check to borrow from the
immediate right sibling node
B Tree...
iii) If both the immediate sibling nodes already have a minimum number of keys:
then merge the node with either the left sibling node or the right sibling node. This merging is
done through the parent node.
B Tree...
Case II :If the key to be deleted lies in the internal node, the following cases occur.
i) The internal node, which is deleted, is replaced by an Inorder predecessor if the left child
has more than the minimum number of keys.
B Tree...
Case II :If the key to be deleted lies in the internal node, the following cases occur.
ii) If either child has exactly a minimum number of keys then, merge the left and the right
children. After merging if the parent node has less than the minimum number of keys
then, look for the siblings as in Case I.
B Tree...
Case III :In this case, the height of the tree shrinks. If the target key lies in an internal node, and
the deletion of the key leads to a fewer number of keys in the node (i.e. less than the minimum
required), then look for the inorder predecessor and the inorder successor. If both the children
contain a minimum number of keys then, borrowing cannot take place. This leads to Case II(ii)
i.e. merging the children.
Again, look for the sibling to borrow a key. But, if the sibling also has only a minimum number
of keys then, merge the node with the sibling along with the parent. Arrange the children
accordingly (increasing order).
B+ Tree...
B+ Tree is an extension of B Tree which allows efficient insertion, deletion and search
operations.
In B Tree, Keys and records both can be stored in the internal as well as leaf nodes. Whereas, in
B+ tree, records (data) can only be stored on the leaf nodes while internal nodes can only store
the key values. The internal nodes of B+ tree are often called index nodes.
The leaf nodes of a B+ tree are linked together in the form of a singly linked list to make search queries more efficient.
B+ Tree...
Operations:
Insertion:
Case I
If the leaf is not full, insert the key into the leaf node in increasing order.
Case II
1. If the leaf is full, insert the key into the leaf node in increasing order and balance
the tree in the following way.
2. Break the node at m/2th position.
3. Add m/2th key to the parent node as well.
4. If the parent node is already full, follow steps 2 to 3.
Example:
Construct B+ Tree of order 3 for the elements 5,15, 25, 35, 45.
Max no. of keys =M-1= 3-1=2
Min no. of keys = ⌈M/2⌉ -1 =3/2-1=2-1=1
Min no. of children = ⌈M/2⌉ =3/2=2
B+ Tree...
Insert 5
Insert 15
Insert 25
B+ Tree...
Insert 35
Insert 45
B+ Tree...
Deletion: (B+ Tree of order 3)
Case I:The key to be deleted is present only at the leaf node not in the indexes (or
internal nodes).
There are two cases for it:
i) There is more than the minimum number of keys in the node. Simply delete the key.
Delete 40:
B+ Tree...
Deletion: (B+ Tree of order 3)
Case I:The key to be deleted is present only at the leaf node not in the indexes (or
internal nodes).
ii) There is exactly the minimum number of keys in the node. Delete the key and borrow a key from the immediate sibling; the minimum key of the right sibling is moved up through the parent (index) node.
Delete 5:
B+ Tree...
Deletion: (B+ Tree of order 3)
Case 2:The key to be deleted is present in the internal nodes and leaf node. Then we have to
remove them from the internal nodes and leaf node.
There are the following cases for this situation:
i) If there is more than the minimum number of keys in the node, simply delete the key from
the leaf node and delete the key from the internal node as well.Fill the empty space in the
internal node with the Inorder successor.
Delete 45:
B+ Tree...
iii) This case is similar to Case 2(i) but here, empty space is generated above the immediate
parent node.After deleting the key, merge the empty space with its sibling.
Fill the empty space in the grandparent node with the inorder successor.
Delete 25:
B+ Tree...
Case 3: In this case, the height of the tree shrinks. It is a little complicated. Deleting 55 from the tree below leads to this condition; it can be understood from the illustrations below.
Delete 55:
Comparison between B-Tree and B+ Tree
UNIT - IV
----------------------------------------------------------------------------------------------------------
Graphs-Terminology, sequential and linked representation, graph traversals : Depth
First Search & Breadth First Search implementation. Spanning trees, Prim’s and
Kruskal’s method.
Searching and Sorting - Linear Search, Binary Search, Insertion Sort, Selection Sort,
Merge Sort, Quick sort, Heap Sort.
----------------------------------------------------------------------------------------------------------
Graph:
A graph G is defined as an ordered set (V, E), where V(G) represents the set of vertices and
E(G) represents the edges that connect these vertices.
The figure shows a graph with
V(G) = {A, B, C, D and E}
and E(G) = {(A, B), (B, C), (A, D), (B, D), (D, E), (C, E)}.
A graph can be directed or undirected.
In an undirected graph, edges do not have any direction
associated with them. That is, if an edge is drawn between nodes A and B,
then the nodes can be traversed from A to B
as well as from B to A.
In a directed graph, edges form an ordered pair. If there is an edge from A to B, then there is a path from A to B but not from B to A.
Graph Terminology
Degree of a node: Degree of a node u, deg(u), is the total number of edges
containing the node u. If deg(u) = 0, it means that u does not belong to any edge and
such a node is known as an isolated node.
Regular graph :It is a graph where
each vertex has the same number of
neighbours. That is, every node has
the same degree. A regular graph with vertices
of degree k is called a k–regular graph or a
regular graph of degree k.
Path: A path P, written as P = (v0, v1, v2, ..., vn), of length n from a node u to a node v is defined as a sequence of (n+1) nodes.
Closed path: A path P is known as a closed path if it has the same end-points, that is, if v0 = vn.
Simple path: A path P is known as a simple path if all the nodes in the path are distinct, with the exception that v0 may be equal to vn. If v0 = vn, then the path is called a closed simple path.
Cycle: A path in which the first and the last vertices are the same. A simple cycle has no repeated edges or vertices (except the first and last vertices).
Graph Terminology…
Connected graph: A graph is said to be connected if for any two vertices (u, v) in V there is a
path from u to v. That is to say that there are no isolated nodes in a connected graph. A connected
graph that does not have any cycle is called a tree. Therefore, a tree is treated as a special graph.
Graph Terminology…
Loop: An edge that has identical end-points is called a loop.
That is, e = (u, u). Example: e=(2,2)
Size of a graph : The size of a graph is the total number of
edges in it.
Multiple edges: Distinct edges which connect the same
end-points are called multiple edges. That is,
e = (u, v) and e' = (u, v) are known as
multiple edges of G.
Multi-graph: A graph with multiple edges and/or loops
is called a multi-graph.
Labelled graph or weighted graph:
A graph is said to be labelled if every
edge in the graph is assigned some data.
In a weighted graph, the edges of the
graph are assigned some weight or length.
Two types of weighted graphs are:
i) Undirected Weighted Graph
ii) Directed Weighted Graph
Graph Terminology…
Out-degree of a node :The out-degree of a node u, written as outdeg(u), is the number of
edges that originate at u.
In-degree of a node: The in-degree of a node u, written as indeg(u), is the number of edges
that terminate at u.
Degree of a node :The degree of a node, written as deg(u), is equal to the sum of in-degree and
out-degree of that node. Therefore, deg(u) = indeg(u) + outdeg (u).
Directed Graphs :
A directed graph G, also known as a digraph, is a graph in which every edge has a direction
assigned to it. An edge of a directed graph is given as an ordered pair (u, v) of nodes in G. For an
edge (u, v),
The edge begins at u and terminates at v.
u is known as the origin or initial point of e.
v is known as the destination or terminal point of e.
u is the predecessor of v.
v is the successor of u.
Nodes u and v are adjacent to each other.
Graph Representations
Graphs are represented in two ways:
i) Adjacency Matrix
ii) Adjacency List
Adjacency Matrix Representation:
Adjacency matrix representation is also called sequential representation.
In an adjacency matrix, the rows and columns represent the vertices of the graph.
For a graph having n vertices, the adjacency matrix has dimension n x n.
An entry Mij in the adjacency matrix of an undirected graph G is 1 if there exists an edge
between Vi and Vj, and 0 otherwise.
Adjacency Matrix Representations...
In directed graph, an entry Mij will be 1 only when there is an edge directed from Vi to Vj
otherwise 0.
If the graph is weighted, then the non-zero entries of the adjacency matrix are replaced by the
weights of the respective edges.
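For illustration, here is a small C sketch of how an adjacency matrix for a weighted, undirected graph can be stored and filled (MAX, adj and addEdge are illustrative names, not from the notes):

#include <stdio.h>

#define MAX 10                 /* illustrative upper bound on the number of vertices */

int adj[MAX][MAX];             /* adj[i][j] = weight of edge (i, j); 0 means no edge */

/* store an undirected, weighted edge in the matrix */
void addEdge(int u, int v, int w)
{
    adj[u][v] = w;
    adj[v][u] = w;             /* drop this line for a directed graph */
}

int main(void)
{
    int n = 4, i, j;
    addEdge(0, 1, 2);
    addEdge(1, 2, 5);
    addEdge(0, 3, 7);
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
    return 0;
}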
Adjacency List Representation
An adjacency list is another way in which a graph can be represented in the computer’s memory.
An adjacency list is maintained for each node of the graph; each list node stores an adjacent
vertex value and a pointer to the next adjacent node. The pointer field of the last node of a
list stores NULL.
For a directed graph, the sum of the lengths of all adjacency lists is equal to the number of
edges in G.
For an undirected graph, the sum of the lengths of all adjacency lists is equal to twice the
number of edges in G because an edge (u, v) means an edge from node u to v as well as an edge
from v to u.
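A minimal C sketch of the adjacency list idea described above (struct node, adjList and addEdge are illustrative names):

#include <stdio.h>
#include <stdlib.h>

#define V 5                       /* illustrative number of vertices */

struct node {                     /* one entry of an adjacency list */
    int vertex;
    struct node *next;
};

struct node *adjList[V];          /* adjList[u] heads u's list; NULL if the list is empty */

/* insert v at the front of u's adjacency list */
void addEdge(int u, int v)
{
    struct node *n = malloc(sizeof(struct node));
    n->vertex = v;
    n->next = adjList[u];
    adjList[u] = n;
    /* for an undirected graph, also insert u into v's list */
}

int main(void)
{
    int u;
    struct node *p;
    addEdge(0, 1); addEdge(0, 3); addEdge(1, 2);
    for (u = 0; u < V; u++) {
        printf("%d:", u);
        for (p = adjList[u]; p != NULL; p = p->next)
            printf(" -> %d", p->vertex);
        printf("\n");
    }
    return 0;
}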
Graph Traversals
There are two standard methods of graph traversal:
1. Breadth-first search
2. Depth-first search
Value of STATUS and its significance: STATUS = 1 (Ready) means the node has not yet been processed (the initial state), STATUS = 2 (Waiting) means the node is on the queue or stack waiting to be processed, and STATUS = 3 (Processed) means the node has been processed.
Breadth-First Search :
Breadth first search is a graph traversal algorithm that starts traversing the graph from a start
(root) node and first explores all of its neighbouring nodes. Then, it selects the nearest
unexplored node and explores all of its unexplored neighbours. The algorithm repeats this
process for each nearest node until it finds the goal.
BFS uses the Queue data structure that will hold the nodes that are waiting for further
processing and a variable STATUS to represent the current state of the node.
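A hedged C sketch of BFS following this description, using an array-based queue, an adjacency matrix and the STATUS convention above (function and array names are illustrative, as is the graph built in main):

#include <stdio.h>

#define MAX 10
#define READY 1
#define WAITING 2
#define PROCESSED 3

int adj[MAX][MAX];                 /* adjacency matrix of the graph */
int status[MAX];                   /* STATUS of each node */

void bfs(int n, int start)
{
    int queue[MAX], front = 0, rear = -1, u, v;

    for (u = 0; u < n; u++)
        status[u] = READY;

    queue[++rear] = start;         /* enqueue the start node */
    status[start] = WAITING;

    while (front <= rear) {
        u = queue[front++];        /* dequeue */
        printf("%d ", u);
        status[u] = PROCESSED;
        for (v = 0; v < n; v++)    /* enqueue all READY neighbours of u */
            if (adj[u][v] && status[v] == READY) {
                queue[++rear] = v;
                status[v] = WAITING;
            }
    }
}

int main(void)
{
    adj[0][1] = adj[1][0] = 1;     /* small illustrative graph */
    adj[0][2] = adj[2][0] = 1;
    adj[1][3] = adj[3][1] = 1;
    bfs(4, 0);
    printf("\n");
    return 0;
}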
Breadth-First Search…
Example:
Breadth-First Search…
During the execution of the algorithm, we use two arrays:
QUEUE and ORIG. While QUEUE is used to hold the nodes that have to be processed, ORIG
is used to keep track of the origin of each edge.
Initially, FRONT = REAR = –1. The algorithm for this is as follows:
(a) Add A to QUEUE and add NULL to ORIG.
(c) Dequeue a node by setting FRONT = FRONT + 1 and enqueue the neighbours of B. Also, add
B as the ORIG of its neighbours.
Breadth-First Search…
(d) Dequeue a node by setting FRONT = FRONT + 1 and enqueue the neighbours of C. Also, add
C as the ORIG of its neighbours. Note that C has two neighbours B and G. Since B has already
been added to the queue and it is not in the Ready state, we will not add B and only add G.
(f) Dequeue a node by setting FRONT = FRONT + 1 and enqueue the neighbours of E. Also, add
E as the ORIG of its neighbours. Note that E has two neighbours C and F. Since C has already
been added to the queue and it is not in the Ready state, we will not add C and add only F.
Breadth-First Search…
Since F has already been added to the queue, we will only add H and I. As I is our final
destination, we stop the execution of this algorithm as soon as it is encountered and added to the
QUEUE. Now, backtrack from I using ORIG to find the minimum path P. Thus, we have
P as A -> C -> G -> I.
Depth-first Search
The depth-first search algorithm progresses by expanding the starting node of G and then
going deeper and deeper until the goal node is found, or until a node that has no children is
encountered.
When a dead-end is reached, the algorithm backtracks, returning to the most recent node
that has not been completely explored.
Depth first search uses stack data structure.
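A hedged C sketch of DFS on an adjacency matrix using an explicit stack, mirroring the stack-based trace that follows (names and the sample graph are illustrative):

#include <stdio.h>

#define MAX 10
#define READY 1
#define WAITING 2
#define PROCESSED 3

int adj[MAX][MAX];
int status[MAX];

void dfs(int n, int start)
{
    int stack[MAX], top = -1, u, v;

    for (u = 0; u < n; u++)
        status[u] = READY;

    stack[++top] = start;               /* push the start node */
    status[start] = WAITING;

    while (top >= 0) {
        u = stack[top--];               /* pop and print */
        printf("%d ", u);
        status[u] = PROCESSED;
        for (v = 0; v < n; v++)         /* push all READY neighbours of u */
            if (adj[u][v] && status[v] == READY) {
                stack[++top] = v;
                status[v] = WAITING;
            }
    }
}

int main(void)
{
    adj[0][1] = adj[1][0] = 1;          /* small illustrative graph */
    adj[0][2] = adj[2][0] = 1;
    adj[2][3] = adj[3][2] = 1;
    dfs(4, 0);
    printf("\n");
    return 0;
}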
Depth-first Search…
(a) Push H onto the stack.
STACK: H
(b) Pop and print the top element of
the STACK, that is, H. Push all the
neighbours of H onto the stack that
are in the ready state. The STACK now
becomes
PRINT: H STACK: E, I
(c) Pop and print the top element of the STACK, that is, I. Push all the neighbours of I onto the
stack that are in the ready state. The STACK now becomes
PRINT: I STACK: E, F
(d) Pop and print the top element of the STACK, that is, F. Push all the neighbours of F onto the
stack that are in the ready state. (Note F has two neighbours, C and H. But only C will be added,
as H is not in the ready state.) The STACK now becomes
PRINT: F STACK: E, C
(e) Pop and print the top element of the STACK, that is, C. Push all the neighbours of C onto the
stack that are in the ready state. The STACK now becomes
PRINT: C STACK: E, B, G
Depth-first Search...
(f) Pop and print the top element of the STACK, that is, G.
Push all the neighbours of G onto the stack that are in the
ready state. Since there are no neighbours of G that are in
the ready state, no push operation is performed.
The STACK now becomes
PRINT: G STACK: E, B
(g)Pop and print the top element of the STACK, that is, B. Push all the neighbours of B onto the
stack that are in the ready state. Since there are no neighbours of B that are in the ready state,
no push operation is performed. The STACK now becomes
PRINT: B STACK: E
(h) Pop and print the top element of the STACK, that is, E. Push all the neighbours of E onto the
stack that are in the ready state. Since there are no neighbours of E that are in the ready state,
no push operation is performed. The STACK now becomes empty.
PRINT: E STACK:
Since the STACK is now empty, the depth-first search of G starting at node H is complete and
the nodes which were printed are:
H, I, F, C, G, B, E
These are the nodes which are reachable from the node H.
Applications of BFS &DFS
Applications of Breadth-First Search :
Finding the shortest path between two nodes, u and v, of an unweighted graph.
Finding the shortest path between two nodes, u and v, of a weighted graph (plain BFS works only for unweighted graphs; weighted graphs need a priority-queue based variant of the traversal).
Minimum Spanning Tree
A spanning tree is a subgraph of a graph G which covers all the vertices with the minimum
possible number of edges.
It does not have cycles and it cannot be disconnected.
The total number of spanning trees that can be created from a complete graph with n vertices
is n^(n-2).
A minimum spanning tree (MST) is defined as a spanning tree whose weight is less than or
equal to the weight of every other spanning tree.
In other words, a minimum spanning tree is a spanning tree that has weights associated with its
edges, and the total weight of the tree (the sum of the weights of its edges) is at a minimum.
Minimum Spanning Tree...
Consider an unweighted graph G given below. From G, we can draw many distinct spanning
trees; eight of them are given here. For an unweighted graph, every spanning tree is a minimum
spanning tree.
Now consider a weighted graph G. From G, we can draw three distinct spanning trees, but only
a single minimum spanning tree can be obtained, namely the one that has the minimum weight
(cost) among all the spanning trees. Of the spanning trees shown, the highlighted one is the
minimum spanning tree, as it has the lowest cost associated with it.
Minimum Spanning Trees...
Applications of MST:
MSTs are widely used for designing networks. For instance, people separated by varying
distances wish to be connected together through a telephone network. A minimum spanning
tree is used to determine the least costly paths with no cycles in this network, thereby
providing a connection that has the minimum cost involved.
MSTs are used to find airline routes. While the vertices in the graph denote cities, edges
represent the routes between these cities. The greater the distance between two cities, the
higher the fare charged. Therefore, MSTs are used to optimize airline routes by finding the
least costly path with no cycles.
MSTs are also used to find the cheapest way to connect terminals, such as cities, electronic
components or computers via roads, airlines, railways, wires or telephone lines.
MSTs are applied in routing algorithms for finding the most efficient path.
Kruskal’s Algorithm
Kruskal’s algorithm is used to find the minimum spanning tree of a connected weighted graph.
In this algorithm, we use a priority queue in which an edge with minimum weight takes
priority over every other edge in the graph.
The algorithm aims to find a subset of the edges that forms a tree including every vertex, such
that the total weight of all the edges in the tree is minimized.
When the Kruskal’s algorithm terminates, the forest has only one component and forms a
minimum spanning tree of the graph.
Kruskal’s Algorithm:
Step 1: Create a forest in such a way that each vertex of the graph is a separate tree.
Step 2: Create a priority queue Q that contains all the edges of the graph.
Step 3: Repeat Steps 4 and 5 while Q is NOT EMPTY
Step 4: Remove an edge from Q
Step 5: IF the edge obtained in Step 4 connects two different trees, then Add it to the forest
(for combining two trees into one tree).
ELSE
Discard the edge
Step 6: END
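A compact C sketch along the lines of the steps above: the edge array is sorted by weight (standing in for the priority queue) and a simple union-find parent array is used to test whether an edge joins two different trees. The example graph in main is illustrative, not the one in the figures:

#include <stdio.h>
#include <stdlib.h>

struct edge { int u, v, w; };

int parent[100];                          /* union-find parent array */

int find(int x)                           /* find the tree (root) a vertex belongs to */
{
    while (parent[x] != x)
        x = parent[x];
    return x;
}

int cmp(const void *a, const void *b)     /* order edges by increasing weight */
{
    const struct edge *x = a, *y = b;
    return x->w - y->w;
}

void kruskal(struct edge e[], int nedges, int nvert)
{
    int i, cost = 0;
    for (i = 0; i < nvert; i++)
        parent[i] = i;                    /* each vertex starts as its own tree */
    qsort(e, nedges, sizeof(struct edge), cmp);
    for (i = 0; i < nedges; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                   /* edge joins two different trees: add it */
            parent[ru] = rv;
            cost += e[i].w;
            printf("edge (%d,%d) weight %d\n", e[i].u, e[i].v, e[i].w);
        }                                 /* otherwise it would form a cycle: discard */
    }
    printf("total cost = %d\n", cost);
}

int main(void)
{
    struct edge e[] = { {0,1,4}, {1,2,2}, {0,2,5}, {2,3,3}, {1,3,7} };
    kruskal(e, 5, 4);
    return 0;
}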
Kruskal’s Algorithm...
Example : Apply Kruskal’s algorithm on the graph
Kruskal’s Algorithm...
Add BC to the MST: Add CD to MST:
The next step is to add AE, but we can't add that as it will cause a cycle.
The next edge to be added is AC, but it can't be added as it will cause a cycle.
The next edge to be added is AD, but it can't be added as it will cause a cycle.
Prim’s Algorithm
Prim’s algorithm is a greedy algorithm that is used to form a minimum spanning tree for a
connected weighted undirected graph.
This algorithm maintains three sets of vertices which can be given as below:
1. Tree vertices :Vertices that are a part of the minimum spanning tree T.
2. Fringe vertices : Vertices that are currently not a part of T, but are adjacent to some tree
vertex.
3. Unseen vertices : Vertices that are neither tree vertices nor fringe vertices fall under this
category.
The steps involved in the Prim’s algorithm are:
Prim’s Algorithm...
Example : Construct a minimum spanning tree of the graph given using Prim’s algorithm.
Step 1 : Choose a starting vertex B.
Step 2: Add the vertices that are adjacent to B. The edges connecting these vertices are shown
by dotted lines.
Step 3: Choose the edge with the minimum weight among all, i.e. BD, and add it to the MST.
Add the adjacent vertices of D, i.e. C and E.
Step 4: Choose the edge with the minimum weight among all. In this case, the edges DE and
CD are such edges. Add them to the MST and explore the adjacent vertices of C, i.e. E and A.
Step 5: Choose the edge with the minimum weight, i.e. CA. We can't choose CE as it would
cause a cycle in the graph.
cost(MST) = 4 + 2 + 1 + 3 = 10 units.
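A minimal C sketch of Prim's algorithm on an adjacency matrix: starting from a tree vertex, it repeatedly picks the cheapest edge from a tree vertex to a fringe vertex. The sample graph and the resulting cost are illustrative, not the graph of the figures above:

#include <stdio.h>

#define V 5
#define INF 9999

/* adjacency matrix; 0 means no edge (illustrative graph) */
int g[V][V] = {
    {0, 4, 3, 0, 0},
    {4, 0, 2, 4, 0},
    {3, 2, 0, 1, 0},
    {0, 4, 1, 0, 5},
    {0, 0, 0, 5, 0}
};

void prim(int start)
{
    int inTree[V] = {0};          /* marks the tree vertices */
    int cost = 0, edges, u, v;

    inTree[start] = 1;
    for (edges = 0; edges < V - 1; edges++) {
        int min = INF, a = -1, b = -1;
        /* among all edges (tree vertex u, fringe vertex v) pick the cheapest */
        for (u = 0; u < V; u++)
            if (inTree[u])
                for (v = 0; v < V; v++)
                    if (!inTree[v] && g[u][v] && g[u][v] < min) {
                        min = g[u][v]; a = u; b = v;
                    }
        if (b == -1) break;       /* graph is not connected */
        inTree[b] = 1;
        cost += min;
        printf("edge (%d,%d) weight %d\n", a, b, min);
    }
    printf("cost(MST) = %d\n", cost);
}

int main(void) { prim(0); return 0; }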
Searching
Searching means to find whether a particular value is present in an array or not.
If the value is present in the array, then searching is said to be successful and the searching
process gives the location of that value in the array.
If the value is not present in the array, the searching process displays an appropriate message,
and in this case searching is said to be unsuccessful.
There are two popular methods for searching the array elements:
i) Linear Search
ii) Binary Search.
Linear Search:
Linear search, also called sequential search, is a very simple method used for searching an
array for a particular value.
It works by comparing the value to be searched with every element of the array one by one in a
sequence until a match is found.
Linear search is mostly used to search an unordered list of elements.
For example :
int A[] = {10, 8, 2, 7, 3, 4, 9, 1, 6, 5};
The value to be searched is VAL = 7, then searching means to find whether the value ‘7’ is
present in the array or not. If yes, then it returns the position of its occurrence.
Here, POS = 3 (index starting from 0).
Linear Search...
Linear Search Algorithm:
The best case of linear search is when VAL is equal to the first element of the array. In this
case, only one comparison will be made.
The worst case will happen when either VAL is not present in the array or it is equal to the last
element of the array. In both the cases, n comparisons will have to be made.
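A simple C implementation of the linear search just described (it returns the index POS, or -1 when the search is unsuccessful):

#include <stdio.h>

/* return the index of val in a[0..n-1], or -1 if it is not present */
int linearSearch(int a[], int n, int val)
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] == val)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {10, 8, 2, 7, 3, 4, 9, 1, 6, 5};
    printf("POS = %d\n", linearSearch(a, 10, 7));   /* prints POS = 3 */
    return 0;
}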
Binary Search
Binary search is a searching algorithm that works
efficiently with a sorted list.
In this algorithm, BEG and END are the beginning and ending positions of the segment of the
list in which we are searching for the element. MID is calculated as MID = (BEG + END)/2.
If VAL is not equal to A[MID], then the values of BEG, END, and MID will be changed
depending on whether VAL is smaller or greater than A[MID].
(a) If VAL < A[MID], then VAL will be present
in the left segment of the array. So, the value of
END will be changed as END = MID – 1.
(b) If VAL > A[MID], then VAL will be present
in the right segment of the array. So, the
value of BEG will be changed as BEG = MID + 1.
The algorithm will terminate when A[MID] = VAL. When the algorithm ends, we will set
POS = MID. POS is the position at which the value is present in the array.
Finally, if VAL is not present in the array, then eventually, END will be less than BEG. When
this happens, the algorithm will terminate and the search will be unsuccessful.
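An iterative C version of the binary search described above (the sorted array in main is illustrative):

#include <stdio.h>

/* binary search on a sorted array; returns the index or -1 */
int binarySearch(int a[], int n, int val)
{
    int beg = 0, end = n - 1, mid;
    while (beg <= end) {
        mid = (beg + end) / 2;
        if (a[mid] == val)
            return mid;
        else if (val < a[mid])
            end = mid - 1;       /* search the left segment */
        else
            beg = mid + 1;       /* search the right segment */
    }
    return -1;                   /* END has become less than BEG: unsuccessful */
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11, 13};
    printf("%d\n", binarySearch(a, 7, 9));   /* prints 4 */
    return 0;
}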
Sorting
Sorting means arranging the elements of an array so that they are placed in some relevant order
which may be either ascending or descending.
For example, if we have an array that is declared and initialized as
int A[] = {21, 34, 11, 9, 1, 0, 22};
Then the sorted array (ascending order) can be given as:
A[] = {0, 1, 9, 11, 21, 22, 34};
There are two types of sorting:
1. Internal sorting : which deals with sorting the data stored in the computer’s memory
2. External sorting : which deals with sorting the data stored in files. External sorting is
applied when there is voluminous data that cannot be stored in the memory.
Bubble Sort:
Bubble sort is a very simple method that sorts the array elements by repeatedly
moving the largest element to the highest index position of the array segment.
Here consecutive adjacent pairs of elements in the array are compared with each
other.
If the element at the lower index is greater than the element at the higher index, the
two elements are interchanged so that the smaller element is placed before the bigger
one.
This process is repeated, pass after pass, until the entire list is sorted.
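A short C implementation of bubble sort as described, run on the example list used in the passes worked out below:

#include <stdio.h>

void bubbleSort(int a[], int n)
{
    int i, j, t;
    for (i = 0; i < n - 1; i++)             /* one pass per position fixed at the end */
        for (j = 0; j < n - 1 - i; j++)     /* compare consecutive adjacent pairs */
            if (a[j] > a[j + 1]) {          /* interchange if out of order */
                t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            }
}

int main(void)
{
    int a[] = {30, 52, 29, 87, 63, 27, 19, 54}, i;
    bubbleSort(a, 8);
    for (i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}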
Bubble Sort ...
Ex: A[] = {30, 52, 29, 87, 63, 27, 19, 54}
Pass1:
(a)Compare 30 and 52. Since 30 < 52,
no swapping is done.
(b)Compare 52 and 29. Since 52 > 29,
swapping is done.
30, 29, 52, 87, 63, 27, 19, 54
(c)Compare 52 and 87. Since 52 < 87,
no swapping is done.
(d)Compare 87 and 63. Since 87 > 63,
swapping is done.
30, 29, 52, 63, 87, 27, 19, 54
(e)Compare 87 and 27. Since 87 > 27,
swapping is done.
30, 29, 52, 63, 27, 87, 19, 54
(f)Compare 87 and 19. Since 87 > 19, swapping is done.
30, 29, 52, 63, 27, 19, 87, 54
(g)Compare 87 and 54. Since 87 > 54, swapping is done.
30, 29, 52, 63, 27, 19, 54, 87
After the end of the first pass, the largest element is placed at the highest index of the array.
Bubble Sort...
Pass 2:
(a)Compare 30 and 29. Since 30 > 29, swapping is done.
29, 30, 52, 63, 27, 19, 54, 87
(b)Compare 30 and 52. Since 30 < 52, no swapping is done.
(c)Compare 52 and 63. Since 52 < 63, no swapping is done.
(d)Compare 63 and 27. Since 63 > 27, swapping is done.
29, 30, 52, 27, 63, 19, 54, 87
(e) Compare 63 and 19. Since 63 > 19, swapping is done.
29, 30, 52, 27, 19, 63, 54, 87
(f) Compare 63 and 54. Since 63 > 54, swapping is done.
29, 30, 52, 27, 19, 54, 63, 87
After the end of the second pass, the second largest element is placed at the second highest
index of the array.
Pass 3:
(a)Compare 29 and 30. Since 29 < 30, no swapping is done.
(b)Compare 30 and 52. Since 30 < 52, no swapping is done.
(c)Compare 52 and 27. Since 52 > 27, swapping is done.
29, 30, 27, 52, 19, 54, 63, 87
(d)Compare 52 and 19. Since 52 > 19, swapping is done.
29, 30, 27, 19, 52, 54, 63, 87
Bubble Sort ...
(e) Compare 52 and 54. Since 52 < 54, no swapping is done.
Observe that after the end of the third pass, the third largest element is placed at the third
highest index of the array. All the other elements are still unsorted.
Pass 4:
(a)Compare 29 and 30. Since 29 < 30, no swapping is done.
(b)Compare 30 and 27. Since 30 > 27, swapping is done.
29, 27, 30, 19, 52, 54, 63, 87
(c)Compare 30 and 19. Since 30 > 19, swapping is done.
29, 27, 19, 30, 52, 54, 63, 87
(d)Compare 30 and 52. Since 30 < 52, no swapping is done.
After the end of the fourth pass, the fourth largest element is placed
at the fourth highest index of the array.
Pass 5:
(a)Compare 29 and 27. Since 29 > 27, swapping is done.
27, 29, 19, 30, 52, 54, 63, 87
(b)Compare 29 and 19. Since 29 > 19, swapping is done. 27, 19, 29, 30, 52,
54, 63, 87
(c)Compare 29 and 30. Since 29 < 30, no swapping is done
After the end of the fifth pass, the fifth largest element is placed at the fifth highest index of the
array.
Bubble Sort ...
Pass 6:
(a)Compare 27 and 19. Since 27 > 19, swapping is done.
19, 27, 29, 30, 52, 54, 63, 87
(b)Compare 27 and 29. Since 27 < 29, no swapping is done.
After the end of the sixth pass, the sixth largest element is placed at the sixth highest index of
the array.
Pass 7:
(a) Compare 19 and 27. Since 19 < 27, no swapping is done.
Observe that the entire list is sorted now.
19, 27, 29, 30, 52, 54, 63, 87
Insertion Sort
Insertion sort works as follows:
The array of values to be sorted is divided into two sets. One that stores sorted values and
another that contains unsorted values.
The sorting algorithm proceeds as long as there are elements in the unsorted set.
Suppose there are n elements in the array. Initially, the element with index 0
(assuming LB = 0) is in the sorted set. Rest of the elements are in the unsorted set.
The first element of the unsorted partition has array index 1 (if LB = 0).
During each iteration of the algorithm, the first element in the unsorted set is picked up and
inserted into the correct position in the sorted set.
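A short C implementation of insertion sort as described, run on the example list given below:

#include <stdio.h>

void insertionSort(int a[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++) {          /* a[0..i-1] is the sorted set */
        key = a[i];                    /* first element of the unsorted set */
        j = i - 1;
        while (j >= 0 && a[j] > key) { /* shift larger elements one place right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                /* insert key at its correct position */
    }
}

int main(void)
{
    int a[] = {50, 10, 30, 80, 70, 20, 60, 40}, i;
    insertionSort(a, 8);
    for (i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}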
Insertion sort...
Example : Sort the elements using Insertion sort
50,10,30,80,70,20,60,40
Selection sort
Selection sort works as follows:
First find the smallest value in the array and place it in the first position. Then, find the second
smallest value in the array and place it in the second position. Repeat this procedure until the
entire array is sorted.
In Pass 1, find the position POS of the smallest value in the array and then swap ARR[POS]
and ARR[0]. Thus, ARR[0] is sorted.
In Pass 2, find the position POS of the smallest value among the remaining N–1 elements
(ARR[1] to ARR[N–1]). Swap ARR[POS] with ARR[1]. Now, ARR[0] and ARR[1] are sorted.
In Pass N–1, find the position POS of the smaller of the elements ARR[N–2] and ARR[N–1].
Swap ARR[POS] and ARR[N–2] so that ARR[0], ARR[1], ..., ARR[N–1] is sorted.
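A short C implementation of selection sort as described, run on the example list used in the table below:

#include <stdio.h>

void selectionSort(int arr[], int n)
{
    int i, j, pos, t;
    for (i = 0; i < n - 1; i++) {
        pos = i;                        /* position of the smallest value so far */
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[pos])
                pos = j;
        t = arr[i]; arr[i] = arr[pos]; arr[pos] = t;   /* place it at index i */
    }
}

int main(void)
{
    int a[] = {39, 9, 81, 45, 90, 27, 72, 18}, i;
    selectionSort(a, 8);
    for (i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}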
Selection Sort…
Example : Perform selection sort on the following elements 39,9,81,45,90,27,72,18
PASS POS ARR[0] ARR[1] ARR[2] ARR[3] ARR[4] ARR[5] ARR[6] ARR[7]
1 1 9 39 81 45 90 27 72 18
2 7 9 18 81 45 90 27 72 39
3 5 9 18 27 45 90 81 72 39
4 7 9 18 27 39 90 81 72 45
5 7 9 18 27 39 45 81 72 90
6 6 9 18 27 39 45 72 81 90
7 6 9 18 27 39 45 72 81 90
Merge Sort
Merge sort is a sorting algorithm that uses the divide, conquer, and combine algorithmic
paradigm.
Divide means partitioning the n-element array to be sorted into two sub-arrays of n/2 elements.
If A is an array containing zero or one element, then it is already sorted. However, if there are
more elements in the array, divide A into two sub-arrays, A1 and A2, each containing about
half of the elements of A.
Conquer means sorting the two sub-arrays recursively using merge sort.
Combine means merging the two sorted sub-arrays of size n/2 to produce the sorted array of n
elements.
Example: 39,9,81,45,90,27,72,18
Divide and Conquer the array
39 9 81 45 90 27 72 18
39 9 81 45 90 27 72 18
39 9 81 45 90 27 72 18
39 9 81 45 90 27 72 18
Merge Sort…
Combine the elements to form the sorted array:
39 9 81 45 90 27 72 18
9 39 45 81 27 90 18 72
9 39 45 81 18 27 72 90
9 18 27 39 45 72 81 90
The merge sort algorithm recursively divides the list into smaller lists, the merge algorithm
conquers the list to sort the elements in individual lists. Finally, the smaller lists are merged to
form one list.
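A compact C implementation of merge sort following the divide/conquer/combine description above (the temporary array bound of 100 is an illustrative choice):

#include <stdio.h>

/* combine: merge the two sorted halves a[l..m] and a[m+1..r] */
void merge(int a[], int l, int m, int r)
{
    int tmp[100], i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= m) tmp[k++] = a[i++];
    while (j <= r) tmp[k++] = a[j++];
    for (k = 0, i = l; i <= r; i++, k++)
        a[i] = tmp[k];
}

/* divide the array and conquer (sort) the two halves recursively */
void mergeSort(int a[], int l, int r)
{
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(a, l, m);
        mergeSort(a, m + 1, r);
        merge(a, l, m, r);
    }
}

int main(void)
{
    int a[] = {39, 9, 81, 45, 90, 27, 72, 18}, i;
    mergeSort(a, 0, 7);
    for (i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}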
Quick Sort
Technique
Quick sort works as follows:
1.Set the index of the first element in the array to loc and left variables. Also, set the index of the
last element of the array to the right variable.
That is, loc = 0, left = 0, and right = n–1 (where n is the number of elements in the array)
2.Start from the element pointed by right and scan the array from right to left, comparing each
element on the way with the element pointed by the variable loc.
That is, a[loc] should be less than a[right].
(a)If that is the case, then simply continue comparing until right becomes equal to loc. Once
right = loc, it means the pivot has been placed in its correct position.
(b)However, if at any point, we have a[loc] > a[right], then interchange the two values and
jump to Step 3.
(c)Set loc = right
3.Start from the element pointed by left and scan the array from left to right, comparing each
element on the way with the element pointed by loc.
That is, a[loc] should be greater than a[left].
(a)If that is the case, then simply continue comparing until left becomes equal to loc. Once
left = loc, it means the pivot has been placed in its correct position.
(b)However, if at any point, we have a[loc] < a[left], then interchange the two values and jump
to Step 2.
(c)Set loc = left.
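A hedged C sketch of quick sort using the loc/left/right partitioning scheme described in the steps above (the sample array in main is illustrative):

#include <stdio.h>

/* place the pivot (initially a[left]) at its final position, following the
   loc/left/right scanning scheme described above, and return loc */
int partition(int a[], int left, int right)
{
    int loc = left, t, done = 0;
    while (!done) {
        /* scan from right to left: a[loc] should stay <= a[right] */
        while (a[loc] <= a[right] && loc != right)
            right--;
        if (loc == right)
            done = 1;
        else {                       /* a[loc] > a[right]: interchange */
            t = a[loc]; a[loc] = a[right]; a[right] = t;
            loc = right;
        }
        if (!done) {
            /* scan from left to right: a[loc] should stay >= a[left] */
            while (a[loc] >= a[left] && loc != left)
                left++;
            if (loc == left)
                done = 1;
            else {                   /* a[loc] < a[left]: interchange */
                t = a[loc]; a[loc] = a[left]; a[left] = t;
                loc = left;
            }
        }
    }
    return loc;                      /* pivot is now at its correct position */
}

void quickSort(int a[], int beg, int end)
{
    if (beg < end) {
        int loc = partition(a, beg, end);
        quickSort(a, beg, loc - 1);
        quickSort(a, loc + 1, end);
    }
}

int main(void)
{
    int a[] = {27, 10, 36, 18, 25, 45}, i;
    quickSort(a, 0, 5);
    for (i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}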
UNIT - V
----------------------------------------------------------------------------------------------------------
Hashing-Hash table, Hash table representations, hash functions, collision resolution techniques-
separate chaining, open addressing-linear probing, quadratic probing, double hashing, Re
hashing, Extendible hashing,
Pattern matching : Introduction, Brute force, the Boyer –Moore algorithm, Knuth-Morris-Pratt
algorithm.
----------------------------------------------------------------------------------------------------------
Hashing: Hashing is a technique to convert a range of key values into a range of indexes of an
array. Hashing is implemented using a hash function and a hash table.
Hash Table: It is a data structure which contains an index and the associated data. Access of
data becomes very fast if we know the index of the desired data.
Hash Function: It is a function which is used to map a key to a hash value. It is represented as
h(x).
Collision: If the same index or hash value is produced by the hash function for multiple keys,
then a conflict arises. This situation is called a collision.
Hashing…
Types of hash functions:
Division method
It is the simplest method of hashing an integer x. This method divides x by m and uses the
remainder as the hash value. In this case, the hash function can be given as
h(x) = x mod m.
It requires only a single division operation, so this method works very fast.
Example:
Calculate the hash values of keys 1234 and 5642, where m = 97.
h(1234) = 1234 % 97 = 70, h(5642) = 5642 % 97 = 16
Multiplication method
The steps involved in the multiplication method are as follows:
Step 1: choose a constant a such that 0 < a < 1.
Step 2: multiply the key k by a.
Step 3: extract the fractional part of ka.
Step 4: multiply the result of step 3 by the size of hash table (m).
Hence, the hash function can be given as:
h(k) = floor( m * (k*a mod 1) ), where (k*a mod 1) gives the fractional part of k*a and m is the
total number of indices in the hash table.
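Small C helpers illustrating the division and multiplication methods (the table size M = 1000 and constant A = 0.618033 follow the example on the next slide; function names are illustrative):

#include <stdio.h>

#define M 1000                 /* illustrative table size */
#define A 0.618033             /* constant 0 < a < 1, as in the example below */

/* division method: h(x) = x mod m */
int h_division(int x, int m)
{
    return x % m;
}

/* multiplication method: h(k) = floor(m * (k*a mod 1)) */
int h_multiplication(int k)
{
    double frac = k * A - (long)(k * A);   /* fractional part of k*a */
    return (int)(M * frac);
}

int main(void)
{
    printf("%d\n", h_division(1234, 97));     /* prints 70 */
    printf("%d\n", h_multiplication(12345));  /* prints 617 */
    return 0;
}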
Hash Function Types…
Example:
Given a hash table of size 1000, map the key 12345 to an appropriate location in the hash
table.
we will use a = 0.618033, m = 1000, and k = 12345
h(12345) = 1000 (12345 * 0.618033 mod 1)
= 1000 (7629.617385 mod 1)
= 1000 (0.617385)
= 617.385
= 617
Hash Function Types…
Example (folding method):
Given a hash table of 100 locations, calculate the hash value using the folding method (break
the key into parts, add the parts, and use the sum, reduced to the table size, as the hash value)
for keys 5678, 321, and 34567.
Since there are 100 memory locations to address, we break the key into parts where each part
(except possibly the last) contains two digits. The hash values can be obtained as shown below:
5678: 56 + 78 = 134, hash value = 34 (the carry into the hundreds place is ignored, since only 100 locations are available)
321: 32 + 1 = 33, hash value = 33
34567: 34 + 56 + 7 = 97, hash value = 97
Collision Resolution Techniques
The collision resolution techniques are :
1. Separate chaining
2. Open addressing
Separate chaining:
In this technique, if a hash function produces the same index for multiple elements, these
elements are stored in the same index by using a linked list.
If no element is hashed to a particular index, then that index contains NULL.
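A minimal C sketch of separate chaining with h(k) = k mod 9, using the keys from the example below (the struct and function names are illustrative):

#include <stdio.h>
#include <stdlib.h>

#define M 9                      /* number of memory locations, as in the example */

struct node {
    int key;
    struct node *next;
};

struct node *table[M];           /* each slot heads a linked list (NULL if empty) */

int hash(int k) { return k % M; }   /* h(k) = k mod m */

/* insert a key at the front of its chain */
void insert(int k)
{
    int idx = hash(k);
    struct node *n = malloc(sizeof(struct node));
    n->key = k;
    n->next = table[idx];
    table[idx] = n;
}

int main(void)
{
    int keys[] = {7, 24, 18, 52, 36, 54, 11, 23}, i;
    struct node *p;
    for (i = 0; i < 8; i++)
        insert(keys[i]);
    for (i = 0; i < M; i++) {
        printf("%d:", i);
        for (p = table[i]; p != NULL; p = p->next)
            printf(" %d", p->key);
        printf("\n");
    }
    return 0;
}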
Collision Resolution Techniques…
Insert the keys 7, 24, 18, 52, 36, 54, 11, and 23 in a chained hash table of 9 memory locations.
Use hash function h(k) = k mod m.
In this case, m=9.
Insert 7
Insert 24 Insert 18
Collision Resolution Techniques…
Insert the keys 7, 24, 18, 52, 36, 54, 11, and 23 in a chained hash table of 9 memory locations.
Use hash function h(k) = k mod m.
Insert 52 Insert 36
Insert 54 Insert 11
Collision Resolution Techniques…
Insert the keys 7, 24, 18, 52, 36, 54, 11, and 23 in a chained hash table of 9 memory locations.
Use hash function h(k) = k mod m.
Insert 23
Collision Resolution Techniques…
Open Addressing:
In this technique, the hash table contains two types of values: sentinel values (e.g., –1) and
data values. The presence of a sentinel value indicates that the location contains no data
value at present but can be used to hold a value.
When a key is mapped to a particular memory location, then the value it holds is checked.
If it contains a sentinel value, then the location is free and the data value can be stored in
it.
If the location already has some data value stored in it, then other slots are examined
systematically in the forward direction to find a free slot. If no free location is found, then we
have an overflow condition.
The process of examining memory locations in the hash table is called probing.
Open addressing technique can be implemented using:
1. Linear probing
2. Quadratic probing
3. Double hashing
4. Rehashing.
Collision Resolution Techniques…
Linear probing:
The simplest approach to resolve a collision is linear probing. In this technique, if a value is
already stored at a location generated by h(k), then the following hash function is used to resolve
the collision:
H(k, i) = [h'(k) + i] mod m
where m is the size of the hash table, h'(k) = (k mod m), and i is the probe number that varies
from 0 to m–1.
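A short C sketch of insertion with linear probing for the example that follows (EMPTY is the sentinel value for a free slot; the names are illustrative):

#include <stdio.h>

#define M 10
#define EMPTY -1                     /* sentinel value for a free slot */

int table[M];

/* insert k using linear probing: H(k,i) = (k mod M + i) mod M */
int insertLinear(int k)
{
    int i, idx;
    for (i = 0; i < M; i++) {
        idx = (k % M + i) % M;
        if (table[idx] == EMPTY) {
            table[idx] = k;
            return idx;
        }
    }
    return -1;                       /* overflow: no free slot found */
}

int main(void)
{
    int keys[] = {72, 27, 36, 24, 63, 81, 92}, i;
    for (i = 0; i < M; i++) table[i] = EMPTY;
    for (i = 0; i < 7; i++)
        printf("%d -> slot %d\n", keys[i], insertLinear(keys[i]));
    return 0;
}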
Example: Consider a hash table of size 10. Using linear probing, insert the keys 72, 27, 36, 24,
63, 81, 92 into the table.
Let h'(k) = k mod m, m = 10. Initially, the hash table can be given as:
Linear Probing…
Advantages:
Easy to compute.
Disadvantages:
Primary Clustering: any key that hashes into a cluster of occupied slots will require several
attempts to resolve the collision, and it will then add to the cluster, making it grow further.
Quadratic probing
Quadratic probing:
In this technique, if a value is already stored at a location generated by h(k), then the following
hash function is used to resolve the collision:
H(k, i) = [h'(k) + c1*i + c2*i^2] mod m
where,
m is the size of the hash table,
h'(k) = (k mod m),
i is the probe number that varies from 0 to m–1, and
c1 and c2 are constants.
Quadratic probing eliminates the primary clustering phenomenon of linear probing because
instead of doing a linear search, it does a quadratic search.
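A short C sketch of insertion with quadratic probing, using c1 = 1 and c2 = 3 as in the example below (the names are illustrative):

#include <stdio.h>

#define M 10
#define EMPTY -1
#define C1 1                        /* constants c1 and c2 from the formula */
#define C2 3

int table[M];

/* insert k using quadratic probing: H(k,i) = (k mod M + c1*i + c2*i*i) mod M */
int insertQuadratic(int k)
{
    int i, idx;
    for (i = 0; i < M; i++) {
        idx = (k % M + C1 * i + C2 * i * i) % M;
        if (table[idx] == EMPTY) {
            table[idx] = k;
            return idx;
        }
    }
    return -1;                      /* no free probe position found */
}

int main(void)
{
    int keys[] = {72, 27, 36, 24, 63, 81, 101}, i;
    for (i = 0; i < M; i++) table[i] = EMPTY;
    for (i = 0; i < 7; i++)
        printf("%d -> slot %d\n", keys[i], insertQuadratic(keys[i]));
    return 0;
}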
Example: Consider a hash table of size 10. Using quadratic probing, insert the keys 72, 27, 36,
24, 63, 81, and 101 into the table. Take c1 = 1 and c2 = 3.
Quadratic probing…
Advantage:
Eliminates Primary Clustering problem.
Disadvantage:
Secondary Clustering: Elements that hash to the same position will probe the same alternative
cells.
Double Hashing:
In double hashing, we use two hash functions rather than a single function. The hash function
in the case of double hashing can be given as:
H(k, i) = [h1(k) + ih2(k)] mod m
where m is the size of the hash table, h1(k) and h2(k) are two hash functions given as
h1(k) = k mod m and h2(k) = k mod m', i is the probe number that varies from 0 to m–1, and
m' is chosen to be less than m. We can choose m' = m–1 or m–2.
Advantage:
Double hashing minimizes repeated collisions and the effects of clustering. That is, double
hashing is free from problems associated with primary clustering as well as secondary
clustering.
Disadvantage:
Implementation is difficult as it uses two hash functions.
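A short C sketch of insertion with double hashing, using h1(k) = k mod 10 and h2(k) = k mod 8 as in the example below. (Note that h2 can evaluate to 0 for some keys; in practice m' is chosen so that h2 is never 0.)

#include <stdio.h>

#define M 10
#define EMPTY -1

int table[M];

int h1(int k) { return k % M; }
int h2(int k) { return k % 8; }     /* m' = 8, chosen less than m as in the example */

/* insert k using double hashing: H(k,i) = (h1(k) + i*h2(k)) mod M */
int insertDouble(int k)
{
    int i, idx;
    for (i = 0; i < M; i++) {
        idx = (h1(k) + i * h2(k)) % M;
        if (table[idx] == EMPTY) {
            table[idx] = k;
            return idx;
        }
    }
    return -1;                      /* no free slot reached by this probe sequence */
}

int main(void)
{
    int keys[] = {72, 27, 36, 24, 63, 81, 92}, i;
    for (i = 0; i < M; i++) table[i] = EMPTY;
    for (i = 0; i < 7; i++)
        printf("%d -> slot %d\n", keys[i], insertDouble(keys[i]));
    return 0;
}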
Double Hashing…
Example : consider a hash table of size = 10. Using double hashing, insert the keys 72, 27, 36,
24, 63, 81, 92 into the table. Take h1 = (k mod 10) and h2 = (k mod 8).
Rehashing
Rehashing :
When the hash table becomes nearly full, the number of collisions increases, thereby degrading
the performance of insertion and search operations. In such cases, a better option is to create a
new hash table with double the size of the original hash table.
All the entries in the original hash table will then have to be moved to the new hash table. This
is done by taking each entry, computing its new hash value, and then inserting it in the new
hash table. Though rehashing seems to be a simple process, it is quite expensive and must
therefore not be done frequently.
Example : Consider the hash table of size 5 given below. The hash function used is h(x) = x % 5.
Rehash the entries into a new hash table.
Extendible Hashing
Extendible Hashing :
It is a dynamic hashing method in which directories and buckets are used to hash data.
It is a highly flexible method in which the hash function itself also changes dynamically.
Terminology used in Extendible hashing:
Directories: The directories store addresses of
the buckets in pointers. An id is assigned to each
directory which may change each time when
Directory Expansion takes place.
Buckets: The buckets are used to hash the actual
data.
Global Depth: It is associated with the Directories.
They denote the number of bits which are used by
the hash function to categorize the keys.
Global Depth = Number of bits in directory id.
Local Depth: It is the same as the Global Depth, except that Local Depth is associated with the
buckets and not the directories. The local depth, in combination with the global depth, is used
to decide the action to be performed when an overflow occurs. Local Depth is always less than
or equal to the Global Depth.
Extendible Hashing...
Bucket Splitting: When the number of elements in a bucket exceeds a particular size, then the
bucket is split into two parts.
Directory Expansion: Directory expansion takes place when a bucket overflows. Directory
expansion is performed when the local depth of the overflowing bucket is equal to the global
depth.
Basic Working of Extendible Hashing:
Data -> hashed -> directory (points to buckets) -> bucket (data is stored/hashed here).
Extendible Hashing...
Tackling Over Flow Condition during Data Insertion:
Many times, while inserting data in the buckets, it might happen that the Bucket overflows. In
such cases, we need to follow an appropriate procedure to avoid mishandling of data.
First, Check if the local depth is less than or equal to the global depth. Then choose one of the
cases below.
Case1: If the local depth of the overflowing Bucket is equal to the global depth, then
Directory Expansion, as well as Bucket Split, needs to be performed. Then increment the
global depth and the local depth value by 1. And, assign appropriate pointers.
Directory expansion will double the number of directories present in the hash structure.
Case2: In case the local depth is less than the global depth, then only Bucket Split takes
place. Then increment only the local depth value by 1. And, assign appropriate pointers.
Key : Binary Representation
16 : 10000
4 : 00100
6 : 00110
22 : 10110
24 : 11000
10 : 01010
31 : 11111
7 : 00111
9 : 01001
20 : 10100
26 : 11010
Insert 16:
The binary format of 16 is 10000 and the global depth is 1.
The hash function returns 1 LSB of 10000, which is 0.
Hence, 16 is mapped to the directory with id = 0.
Extendible Hashing...
Insert 4:
The binary format of 4 is 00100 and the global depth is 1.
The hash function returns 1 LSB of 00100, which is 0.
Hence, 4 is mapped to the directory with id = 0.
Insert 6:
Hash(6) = 00110; its 1 LSB is 0, so 6 is also mapped to the directory with id = 0.
Extendible Hashing...
Insert 22
Hash(22)=10110, The bucket pointed by directory 0 is already full. Hence, Over Flow occurs.
Since Local Depth = Global Depth, the bucket splits and directory expansion takes place. Also,
rehashing of the numbers present in the overflowing bucket takes place after the split. Since the
global depth is incremented by 1, the global depth is now 2. Hence, 16, 4, 6, 22 are now rehashed
w.r.t. their 2 LSBs [16 (10000), 4 (00100), 6 (00110), 22 (10110)].
Extendible Hashing...
Inserting 24 and 10: 24(11000) and 10 (1010) can be hashed based on directories with id 00
and 10
Inserting 31,7,9: All of these elements[ 31(11111), 7(111), 9(1001) ] have either 01 or 11 in
their LSBs. Hence, they are mapped on the bucket pointed out by 01 and 11.
Extendible Hashing...
Inserting 20: Insertion of data element
20 (10100) will again cause the overflow
problem.
Extendible Hashing...
Inserting 26: Global depth is 3. Hence, 3 LSBs of 26(11010) are considered. Therefore 26 best
fits in the bucket pointed out by directory 010.
Extendible Hashing...
The bucket overflows, and since the local depth of the bucket is less than the global depth
(2 < 3), the directories are not doubled; only the bucket is split and its elements are rehashed.
Finally, the output of hashing the given list of numbers is obtained.
Pattern Matching
Pattern Matching:
The pattern matching algorithms are also referred to as string searching algorithms.
These algorithms are useful in the case of searching a pattern within the text or searching a sub
string within a String.
Brute Force Algorithm...
Example:
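Since the brute-force example itself appears only as a figure in the original slides, here is a hedged C sketch of the naive (brute-force) matcher, which tries the pattern at every shift of the text and compares character by character; the text and pattern are taken from the question bank at the end of these notes:

#include <stdio.h>
#include <string.h>

/* brute-force matcher: try the pattern at every shift of the text */
void bruteForce(const char *text, const char *pat)
{
    int n = strlen(text), m = strlen(pat), s, j;
    for (s = 0; s <= n - m; s++) {
        for (j = 0; j < m && text[s + j] == pat[j]; j++)
            ;
        if (j == m)                        /* all m characters matched */
            printf("pattern found at index %d\n", s);
    }
}

int main(void)
{
    bruteForce("abacaabaccabacabaabb", "abacab");   /* prints index 10 */
    return 0;
}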
KMP Algorithm
KMP Algorithm is one of the most popular pattern matching algorithms. KMP stands for
Knuth Morris Pratt.
The KMP algorithm was conceived by Donald Knuth and Vaughan Pratt, and independently by
James H. Morris, in 1970. In 1977, the three jointly published the KMP Algorithm.
KMP algorithm is used to find a "Pattern" in a "Text". This algorithm compares character
by character from left to right. But whenever a mismatch occurs, it uses a pre-processed table
called the "Prefix Table" to skip character comparisons while matching.
Prefix table is also known as LPS Table. Here LPS stands for "Longest proper Prefix
which is also Suffix".
Steps for Creating LPS Table (Prefix Table)
Step 1 : Define a one dimensional array with the size equal to the length of the Pattern.
(LPS[size])
Step 2 : Define variables i & j. Set i = 0, j = 1 and LPS[0] = 0.
Step 3 : Compare the characters at Pattern[i] and Pattern[j].
Step 4 : If both are matched then set LPS[j] = i+1 and increment both i & j values by one.
Go to Step 3.
Step 5 : If both are not matched then check the value of variable 'i'. If it is '0' then set LPS[j]
= 0 and increment 'j' value by one; if it is not '0' then set i = LPS[i-1]. Go to Step 3.
Step 6 : Repeat the above steps until all the values of LPS[] are filled.
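A C sketch of the LPS (prefix) table construction following the steps above, tried on the pattern "abaa" from the question bank:

#include <stdio.h>
#include <string.h>

/* build the LPS (prefix) table for the given pattern */
void buildLPS(const char *pat, int lps[])
{
    int m = strlen(pat), i = 0, j = 1;
    lps[0] = 0;
    while (j < m) {
        if (pat[i] == pat[j]) {       /* characters match */
            lps[j] = i + 1;
            i++; j++;
        } else if (i != 0) {          /* fall back within the pattern */
            i = lps[i - 1];
        } else {                      /* i == 0: no proper prefix ends here */
            lps[j] = 0;
            j++;
        }
    }
}

int main(void)
{
    const char *pat = "abaa";         /* pattern taken from the question bank */
    int lps[4], k;
    buildLPS(pat, lps);
    for (k = 0; k < 4; k++) printf("%d ", lps[k]);   /* prints 0 0 1 1 */
    printf("\n");
    return 0;
}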
KMP Algorithm...
How to use LPS Table:
We use the LPS table to decide how many characters are to be skipped for comparison when a
mismatch has occurred.
When a mismatch occurs, check the LPS value of the previous character of the mismatched
character in the pattern. If it is '0' then start comparing the first character of the pattern with the
next character to the mismatched character in the text.
If it is not '0' then start comparing the character which is at an index value equal to the LPS
value of the previous character to the mismatched character in pattern with the mismatched
character in the Text.
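A hedged C sketch of the complete KMP search that consults the LPS table on a mismatch, as just described (the text and pattern are taken from the question bank):

#include <stdio.h>
#include <string.h>

void buildLPS(const char *pat, int m, int lps[])
{
    int i = 0, j = 1;
    lps[0] = 0;
    while (j < m) {
        if (pat[i] == pat[j]) {
            lps[j] = i + 1;
            i++; j++;
        } else if (i != 0) {
            i = lps[i - 1];
        } else {
            lps[j] = 0;
            j++;
        }
    }
}

/* KMP search: on a mismatch, consult the LPS table instead of re-scanning the text */
void kmpSearch(const char *text, const char *pat)
{
    int n = strlen(text), m = strlen(pat);
    int lps[100], i = 0, j = 0;            /* i indexes the text, j the pattern */
    buildLPS(pat, m, lps);
    while (i < n) {
        if (text[i] == pat[j]) {
            i++; j++;
            if (j == m) {                  /* full match found */
                printf("pattern found at index %d\n", i - j);
                j = lps[j - 1];
            }
        } else if (j != 0) {
            j = lps[j - 1];                /* skip comparisons using the LPS table */
        } else {
            i++;
        }
    }
}

int main(void)
{
    kmpSearch("abbbaababaab", "abaa");     /* prints index 7 */
    return 0;
}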
Boyer Moore Algorithm
It was developed by Robert S. Boyer and J Strother Moore in 1977.
The Boyer-Moore algorithm is considered the most efficient string-matching algorithm.
It works the fastest when the Text is moderately sized and the pattern is relatively long.
Boyer Moore is a combination of following two approaches:
1) Bad Character Heuristic
2) Good Suffix Heuristic
Both of the above heuristics can also be used independently to search a pattern in a text.
It preprocesses the pattern and creates different arrays for both heuristics. At every step, it
slides the pattern by the maximum of the slides suggested by the two heuristics, so it uses the
best of the two heuristics at every step.
Boyer Moore algorithm starts matching from the last character of the pattern.
Bad Character Heuristic:
The character of the text which doesn’t match with the current character of the pattern is called
the Bad Character.
Upon a mismatch, we shift the pattern until:
1) The mismatch becomes a match, or
2) The pattern P moves past the mismatched character.
Boyer Moore Algorithm
Case 1 : The mismatch becomes a match. We look up the position of the last occurrence of the
mismatching character in the pattern; if the mismatching character exists in the pattern, we
shift the pattern so that it gets aligned to the mismatching character in the text T.
Case 2 : The pattern moves past the mismatched character. We look up the position of the last
occurrence of the mismatching character in the pattern; if the character does not exist in the
pattern, we shift the pattern past the mismatching character.
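A hedged C sketch of Boyer-Moore using only the bad character heuristic described above (the good suffix heuristic is omitted; the sample text in main is illustrative):

#include <stdio.h>
#include <string.h>

#define ALPHABET 256

/* record the position of the last occurrence of every character in the pattern */
void badCharTable(const char *pat, int m, int last[])
{
    int i;
    for (i = 0; i < ALPHABET; i++)
        last[i] = -1;
    for (i = 0; i < m; i++)
        last[(unsigned char)pat[i]] = i;
}

/* Boyer-Moore search using only the bad character heuristic */
void boyerMoore(const char *text, const char *pat)
{
    int n = strlen(text), m = strlen(pat);
    int last[ALPHABET], s = 0, j;
    badCharTable(pat, m, last);
    while (s <= n - m) {
        j = m - 1;                              /* compare from the last character */
        while (j >= 0 && pat[j] == text[s + j])
            j--;
        if (j < 0) {
            printf("pattern found at index %d\n", s);
            s += 1;                             /* simple shift after a full match */
        } else {
            int shift = j - last[(unsigned char)text[s + j]];
            s += (shift > 0) ? shift : 1;       /* align with, or move past, the bad character */
        }
    }
}

int main(void)
{
    boyerMoore("ABAAABCDABC", "ABC");           /* prints indices 4 and 8 */
    return 0;
}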
DATA STRUCTURES UNIT-1 QUESTION BANK
Short Questions (S. No, Question, Blooms Taxonomy Level)
1. Define the term algorithm and state the criteria the algorithm should satisfy? (Remember)
Long Questions
Short Questions (S. No, Question, Blooms Taxonomy Level)
1. Define Stack? (Remember)
14. Convert the infix expression (a+b)-(c*d) into post fix form? (Apply)
15. List how Stacks and Queues are represented in data structure? (Understand)
Long Questions
Short Questions (S. No, Question, Blooms Taxonomy Level)
1. Define Tree? (Remember)
2. Define the terms degree, siblings, depth/height, level? (Remember)
3. Define path in a tree. (Remember)
4. Define Binary Tree? (Remember)
5. Define full binary tree? (Remember)
6. Define complete binary tree? (Remember)
7. State the properties of a Binary Tree? (Remember)
8. Discuss how to represent Binary Tree? (Remember)
9. List the different tree traversals? (Remember)
10. Discuss threaded binary tree? (Remember)
11. Define heap? (Remember)
12. Differentiate Max-heap and Min-heap? (Understand)
13. List the properties of Red black tree (Remember)
14. Define the balanced factor in AVL Tree. (Remember)
15. Differentiate B-tree and B+ Tree (Understand)
Long Questions
Write inorder, preorder, post order traversal of the following tree (Apply)
Short Questions (S. No, Question, Blooms Taxonomy Level)
1. Define graph? (Remember)
2. Discuss representation of graph with examples? (Understand)
3. List the different graph traversals? (Remember)
4. Differentiate BFS and DFS? (Understand)
5. What is a spanning tree? (Remember)
6. Define Minimum Spanning Tree. (Remember)
7. What are the disadvantages of linear search? (Remember)
8. List the algorithms used for finding minimum spanning trees. (Remember)
9. What is adjacency matrix? (Remember)
10. Write the algorithm for bubble sort. (Understand)
11. Define Directed graph. (Remember)
12. Write the algorithm for Insertion sort. (Understand)
Long Questions
1. Write a program to implement the Quick Sort. (Understand)
2. Write a program to implement the Quick Sort. (Understand)
3. Construct the minimum spanning tree using Kruskals algorithm for the following graph (Apply)
Illustrate DFS and BFS traversals of following graph (Apply)
Short Questions (S. No, Question, Blooms Taxonomy Level)
1. Define Hashing? (Remember)
Long Questions
1. Use quadratic probing to fill the Hash table of size 11. Data elements are 23,0,52,61,78,33,100,8,90,10,14 (Apply)
2. Apply KMP algorithm on pattern "abaa" and text "abbbaababaab" (Apply)
3. Analyze input (371, 323, 173, 199, 344, 679, 989) and hash function h(x)=x mod 10, Show the result using Separate Chaining and linear probing (Apply)
4. Explain Boyers Moore pattern matching algorithm with suitable example. (Understand)
5. Analyze input (371, 323, 173, 199, 344, 679, 989) and hash function h(x)=x mod 10, Show the result using quadratic probing. (Apply)
6. Explain Collision Resolution techniques. (Understand)
7. Apply quadratic hashing to fill the hash table of size 11, elements 20,5,10,22,33,40,50,30,51,31 (Apply)
8. Apply Brute force algorithm on pattern "abacab" and text "abacaabaccabacabaabb" (Apply)
9. Show each step of the hash table entries for the given data set using Double hashing probing 12,45,67,88,27,78,20,62,36,55 (size=10), h1(x)=x mod 10 and h2(x)=x mod 9 (Apply)