Data Structure 3
1. Time Complexity
Time complexity measures the amount of time an algorithm
takes to complete as a function of the size of the input data
(n). It focuses on how the number of operations grows with
input size.
o Common notations: O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n)
linearithmic, and O(n²) quadratic.
2. Space Complexity
Space complexity measures the amount of memory an
algorithm requires for its complete execution, including the input
data and the function call stack.
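For a quick illustration (a minimal sketch, not part of the original notes), the function below runs in O(n) time because its loop executes n times, and uses O(1) extra space because it allocates only one accumulator regardless of n:

#include <iostream>
using namespace std;

// Sums the integers 1..n: O(n) time, O(1) extra space.
long long sumFirstN(int n) {
    long long total = 0;             // single accumulator, independent of n
    for (int i = 1; i <= n; i++)     // loop body executes n times
        total += i;
    return total;
}

int main() {
    cout << sumFirstN(100) << endl;  // prints 5050
    return 0;
}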
#include <iostream>
using namespace std;

#define SIZE 5   // maximum capacity of the stack

class Stack {
private:
    int arr[SIZE];
    int top;
public:
    Stack() { top = -1; }   // stack starts empty

    // Push operation
    void push(int value) {
        if (top == SIZE - 1)
            cout << "Stack overflow, cannot push " << value << endl;
        else
            arr[++top] = value;
    }

    // Pop operation
    void pop() {
        if (top == -1)
            cout << "Stack underflow, nothing to pop" << endl;
        else
            cout << arr[top--] << " popped from stack" << endl;
    }

    // Peek operation
    int peek() {
        if (top == -1)
            return -1;       // sentinel value for an empty stack
        else
            return arr[top];
    }

    bool isEmpty() {
        return top == -1;
    }
};

int main() {
    Stack s;
    s.push(10);
    s.push(20);
    s.push(30);
    s.push(40);
    s.push(50);
    s.pop();
    s.pop();
    s.pop();
    s.pop();
    return 0;
}
Example 2:
#include <iostream>
using namespace std;

class Stack {
private:
    int A[10];       // fixed-size storage
    int size;
    int current;     // index of the top element
public:
    // Constructor
    Stack() {
        size = 10;
        current = -1;
    }
    int pop()        { return A[current--]; }
    void push(int x) { A[++current] = x; }
    int top()        { return A[current]; }
    int isEmpty()    { return (current == -1); }
    int isFull()     { return (current == size - 1); }
};

int main() {
    Stack s;
    if (!s.isFull())
        s.push(5);
    else
        cout << "Stack is full" << endl;
    if (!s.isEmpty())
        cout << "Top element: " << s.top() << endl;
    else
        cout << "Stack is empty" << endl;
    return 0;
}
#include <iostream>
using namespace std;

// Node structure for the linked list
struct Node {
    int data;
    Node* next;
    Node(int value) : data(value), next(nullptr) {}
};

class Stack {
private:
    Node* top;   // pointer to the top node
public:
    // Constructor initializes the stack as empty
    Stack() : top(nullptr) {}

    // Push operation
    void push(int value) {
        Node* newNode = new Node(value); // Create a new node
        newNode->next = top;             // Link the new node to the current top
        top = newNode;                   // Update top to the new node
        cout << value << " pushed onto stack" << endl;
    }

    // Pop operation
    void pop() {
        if (top == nullptr) {
            cout << "Stack underflow, nothing to pop" << endl;
        } else {
            Node* temp = top;   // Store the top node
            top = top->next;    // Update top to the next node
            cout << temp->data << " popped from stack" << endl;
            delete temp;        // Free memory of the popped node
        }
    }

    // Peek operation
    int peek() {
        if (top == nullptr) {
            return -1;          // sentinel value for an empty stack
        } else {
            return top->data;   // Return the data of the top node
        }
    }
};

int main() {
    Stack s;
    s.push(10);
    s.push(20);
    s.push(30);
    s.push(40);
    s.push(50);
    s.push(60); // This item will still be added (no size limit in a linked list)
    cout << "Top element is: " << s.peek() << endl;
    s.pop();
    s.pop();
    s.pop();
    s.pop();
    return 0;
}
#include <iostream>
using namespace std;

#define SIZE 5   // maximum capacity of the queue

class Queue {
private:
    int arr[SIZE];
    int front, rear, count;
public:
    // Constructor initializes an empty queue
    Queue() {
        front = 0;
        rear = -1;
        count = 0;
    }

    // Enqueue operation
    void enqueue(int value) {
        if (count == SIZE)
            cout << "Queue is full, cannot enqueue " << value << endl;
        else {
            rear = rear + 1;
            arr[rear] = value;
            count++;
        }
    }

    // Dequeue operation
    void dequeue() {
        if (count == 0)
            cout << "Queue is empty, nothing to dequeue" << endl;
        else {
            cout << arr[front] << " dequeued (dropped) from the queue" << endl;
            front = front + 1;
            count--;
        }
    }

    // Peek operation
    int peek() {
        if (count == 0)
            return -1;   // sentinel value for an empty queue
        else
            return arr[front];
    }

    bool isEmpty() { return count == 0; }
    bool isFull()  { return count == SIZE; }
};

int main() {
    Queue q;
    q.enqueue(10);
    q.enqueue(20);
    q.enqueue(30);
    q.enqueue(40);
    q.enqueue(50);
    q.dequeue();
    q.dequeue();
    q.dequeue();
    q.dequeue();
    q.dequeue();
    cout << "Front element is: " << q.peek() << endl; // -1: queue is now empty
    return 0;
}
#include <iostream>
using namespace std;

// Node structure
struct Node {
    int data;
    Node* next;
    Node(int value) : data(value), next(nullptr) {}
};

// Queue class
class Queue {
private:
    Node* front;   // points to the first node
    Node* rear;    // points to the last node
public:
    Queue() : front(nullptr), rear(nullptr) {}

    // Enqueue operation
    void enqueue(int value) {
        Node* newNode = new Node(value);
        if (isEmpty()) {
            front = rear = newNode; // Both front and rear point to the new node
        } else {
            rear->next = newNode;   // Link the new node after the current rear
            rear = newNode;         // Update rear to the new node
        }
        cout << value << " enqueued" << endl;
    }

    // Dequeue operation
    void dequeue() {
        if (isEmpty()) {
            cout << "Queue is empty, nothing to dequeue" << endl;
        } else {
            Node* temp = front;
            front = front->next;
            if (front == nullptr)   // queue became empty
                rear = nullptr;
            cout << temp->data << " dequeued" << endl;
            delete temp;            // Free memory of the removed node
        }
    }

    // Peek operation
    int peek() {
        if (isEmpty()) {
            return -1;   // sentinel value for an empty queue
        }
        return front->data;
    }

    bool isEmpty() { return front == nullptr; }
};

int main() {
    Queue q;
    q.enqueue(10);
    q.enqueue(20);
    q.enqueue(30);
    q.enqueue(40);
    q.enqueue(50);
    q.dequeue();
    q.dequeue();
    q.dequeue();
    q.dequeue();
    q.dequeue();
    return 0;
}
Sorting Algorithm
A sorting algorithm is a method or procedure used to arrange the elements of a list (or array) in a
particular order—typically ascending or descending. Sorting is a fundamental task in computer science
and is used in many applications such as searching, data processing, and optimization.
Below is a detailed list of popular sorting algorithms, organized by their approach, along with their main
characteristics, time and space complexities, and typical use cases.
Types of Sorting Algorithms in Data Structure
These are the main types of sorting in data structures:
1. Comparison-based: Selection sort, Insertion sort, Bubble sort, Merge sort, Quick sort, Heap
sort, Shell sort
2. Non-comparison-based: Radix sort, Bucket sort (these distribute elements by digit or value
range rather than comparing pairs of elements)
Selection Sort
1. find the smallest element
2. put it in the first position
3. find the next smallest element
4. put it in the second position …
− And so on, until you get to the end of the list
This technique is so simple that you might have found it yourself already. You search the whole array and
find the smallest number. The smallest number is put on the first position of the array while the previous
element in this position is moved somewhere else. Find the second smallest number and then put that
number in the second position in the array, again the previous number in that position is shifted
somewhere else. We keep on performing this activity again and again and eventually we get the array
sorted. This technique is called selection sort because we select elements for their sorted positions.
Let’s understand this algorithm by considering an example with the help of
figures. Consider an array that has four numbers i.e. 19, 5, 7 and 12 respectively.
Now we want to sort this array in ascending order. To sort the array, the selection
sort algorithm will be applied to it.
We find that 5 is the smallest number and bring it to the first position i.e. index 0.
Later, number 19 is put at the position that was occupied by number 5.
Thus, in a sense, we swap these numbers. Now 5, the smallest number in the array,
has come to its final position i.e. index 0.
Now we look at the remaining elements 19, 7, 12 and find the smallest number
among them. Here 7 is the smallest so we change its position with 19 to bring it
to its position. Thus 5 and 7 have come to their final positions. Now there are two
elements left behind i.e. 12 and 19. We search for the smallest among these and
find that 12 is the smallest. So, we swap it with 19 to bring it to index 2 i.e. its
final position. Now there is one last element remaining and obviously it is at its
final position as there is no element to compare with it. The array is now in sorted
form with ascending numbers.
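A minimal C++ sketch of selection sort (function and variable names are illustrative), run on the 19, 5, 7, 12 example above:

#include <iostream>
#include <utility>
using namespace std;

// Selection sort: repeatedly select the smallest remaining
// element and swap it into its final position.
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;                 // assume current position holds the minimum
        for (int j = i + 1; j < n; j++)   // scan the unsorted part
            if (arr[j] < arr[minIndex])
                minIndex = j;
        swap(arr[i], arr[minIndex]);      // place the minimum at index i
    }
}

int main() {
    int arr[] = {19, 5, 7, 12};
    selectionSort(arr, 4);
    for (int x : arr) cout << x << " ";   // prints: 5 7 12 19
    cout << endl;
    return 0;
}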
Insertion Sort
Start by considering the first two elements of the array data. If found out of order, swap them
Consider the third element;
Insert it into the proper position among the first three elements.
Consider the fourth element; insert it into the proper position among the first four elements.
Let’s understand this algorithm by considering an example with the help of
figures. The array consists of the elements 19, 12, 5 and 7. We take the first two
numbers i.e. 19 and 12. As we see 12 is less than 19, so we swap their positions.
Thus 12 comes to index 0 and 19 goes to index 1. Now we pick the third number
i.e. 5. We have to find the position of this number by comparing it with the two
already sorted numbers. These numbers are 12 and 19. We see that 5 is smaller than
these two. So, it should come before these two numbers. Thus, the proper position
of 5 is index 0. To insert it at index 0, we shift the numbers 12 and 19 before inserting
5 at index 0. Thus 5 has come to its position. Now we pick the number 7 and find
its position between 5 and 12. To insert 7 after 5 and before 12, we have to shift
the numbers 12 and 19 to the right. After this shifting, we put number 7 at its
position. Now the whole array has been sorted so the process stops here.
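A minimal C++ sketch of insertion sort on the 19, 12, 5 and 7 example above (names are illustrative):

#include <iostream>
using namespace std;

// Insertion sort: grow a sorted prefix by inserting each new
// element into its proper position, shifting larger elements right.
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];                 // element to insert
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {  // shift larger elements right
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                 // insert at its proper position
    }
}

int main() {
    int arr[] = {19, 12, 5, 7};
    insertionSort(arr, 4);
    for (int x : arr) cout << x << " ";   // prints: 5 7 12 19
    cout << endl;
    return 0;
}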
Bubble Sort
The third sorting algorithm is bubble sort. The basic idea of this algorithm is that we bring the smaller
elements upward in the array step by step and, as a result, the larger elements go downward. If we
think of the array as a vertical one, the smaller elements rise toward the top and the larger elements
sink toward the bottom, which resembles a bubbling phenomenon. Due to this bubbling nature, it is
called bubble sort: the lighter bubbles (smaller numbers) rise to the top. This describes sorting in
ascending order; we can reverse the comparison for descending order.
The steps in the bubble sort can be described as below
Exchange neighboring items until the largest item reaches the end of the array
Repeat the above step for the rest of the array
In this sort algorithm, we do pair-wise
swapping. We first take the elements
and swap the smaller with the larger number.
Then we do the swap between the next pair. By
repeating this process, the larger numbers
will be going to the end of the array and the
smaller elements come to the start of the
array. First of all, we compare the first pair
i.e. 19 and 5. As 5 is less than 19, we swap
these elements. Now 5 is at its place and we
take the next pair. This pair is 19, 12 and
not 12, 7. In this pair 12 is less than 19, so
we swap 12 and 19. After this, the next pair
is 19, 7. Here 7 is less than 19 so we swap
it. Now 7 is at its place as compared to 19,
but it is not at its final position.
The element 19 is at its final position. Now we repeat the pair-wise swapping on the array from
index 0 to 2, as the value at index 3 is at its position. So we compare 5 and 12. As 5 is less than 12, it
is at its place (that is, before 12) and we need not swap them. Now we take the next pair, that is 12
and 7. In this pair, 7 is less than 12 so we swap these elements. Now 7 is at its position with respect to
the pair 12 and 7. Thus we have sorted the array up to index 2, as 12 is now at its final position. The
element 19 is already at its final position. Note that here in the bubble sort, we are not using additional
storage (an extra array). Rather, we are rearranging the elements in the same array. Thus, bubble sort is
an in-place algorithm. Now as indexes 2 and 3 have their final values, we do the swap process up to index 1.
Here, the first pair is 5 and 7 and, in this pair, we need no swapping as 5 is less than 7 and is at its
position (i.e. before 7). Thus 7 is also at its final position and the array is sorted.
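A minimal C++ sketch of bubble sort on the 19, 5, 12, 7 example above (names are illustrative):

#include <iostream>
#include <utility>
using namespace std;

// Bubble sort: repeatedly swap adjacent out-of-order pairs so the
// largest remaining element "sinks" to the end on each pass.
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++)           // after pass i, the last i+1 slots are final
        for (int j = 0; j < n - 1 - i; j++)
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]);     // exchange neighboring items
}

int main() {
    int arr[] = {19, 5, 12, 7};
    bubbleSort(arr, 4);
    for (int x : arr) cout << x << " ";       // prints: 5 7 12 19
    cout << endl;
    return 0;
}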
Merge Sort
Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It works by recursively
dividing the input array into smaller subarrays and sorting those subarrays then merging them back
together to obtain the sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two halves, sort
each half, and then merge the sorted halves back together. This process is repeated until the entire
array is sorted.
Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows the divide-and-
conquer approach to sort a given array of elements.
Here’s a step-by-step explanation of how merge sort works:
1. Divide: Divide the list or array recursively into two halves until it can no more be divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process
continues until all elements from both subarrays have been merged.
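A minimal C++ sketch of merge sort following these three steps (the helper names are illustrative):

#include <iostream>
using namespace std;

// Merge two sorted halves arr[left..mid] and arr[mid+1..right].
void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1, n2 = right - mid;
    int* L = new int[n1];
    int* R = new int[n2];
    for (int i = 0; i < n1; i++) L[i] = arr[left + i];
    for (int j = 0; j < n2; j++) R[j] = arr[mid + 1 + j];
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2)              // take the smaller front element
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];     // copy any leftovers
    while (j < n2) arr[k++] = R[j++];
    delete[] L;
    delete[] R;
}

// Recursively divide, then merge the sorted halves.
void mergeSort(int arr[], int left, int right) {
    if (left >= right) return;            // base case: one element
    int mid = left + (right - left) / 2;
    mergeSort(arr, left, mid);
    mergeSort(arr, mid + 1, right);
    merge(arr, left, mid, right);
}

int main() {
    int arr[] = {19, 12, 5, 7};
    mergeSort(arr, 0, 3);
    for (int x : arr) cout << x << " ";   // prints: 5 7 12 19
    cout << endl;
    return 0;
}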
QuickSort
Quick sort is a sorting algorithm based on the divide-and-conquer principle: it picks an element
as a pivot and partitions the given array around that pivot, placing the pivot in its correct
position in the sorted array. This breaks the problem down into smaller sub-problems.
The main steps are:
1. Choose a Pivot: Select an element from the array as the pivot. The choice of pivot can
vary (e.g., first element, last element, random element, or median).
2. Partition the Array: Rearrange the array around the pivot. After partitioning, all elements
smaller than the pivot will be on its left, and all elements greater than the pivot will be on
its right. The pivot is then in its correct position, and we obtain the index of the pivot.
3. Recursively Call: Recursively apply the same process to the two partitioned sub-arrays
(left and right of the pivot).
4. Base Case: The recursion stops when there is only one element left in the sub-array, as a
single element is already sorted.
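A minimal C++ sketch of quick sort using the last element as the pivot (one of the pivot choices mentioned above; names are illustrative):

#include <iostream>
#include <utility>
using namespace std;

// Partition around the last element as pivot; return the pivot's final index.
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                      // boundary of the "smaller than pivot" region
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap(arr[++i], arr[j]);
    swap(arr[i + 1], arr[high]);          // place the pivot in its correct position
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {                     // base case: sub-array of size <= 1
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);       // recurse on the left part
        quickSort(arr, p + 1, high);      // recurse on the right part
    }
}

int main() {
    int arr[] = {19, 12, 5, 7};
    quickSort(arr, 0, 3);
    for (int x : arr) cout << x << " ";   // prints: 5 7 12 19
    cout << endl;
    return 0;
}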
Heap Sort
First convert the array into a max heap using heapify; note that this happens in place.
The array elements are rearranged to follow the heap properties. Then, one by one, delete
the root node of the max heap, replace it with the last node, and heapify. Repeat this
process while the size of the heap is greater than 1.
Rearrange array elements so that they form a Max Heap.
Repeat the following steps until the heap contains only one element:
o Swap the root element of the heap (which is the largest element in current heap)
with the last element of the heap.
o Remove the last element of the heap (which is now in the correct position). We
mainly reduce heap size and do not remove element from the actual array.
o Heapify the remaining elements of the heap.
Finally, we get the sorted array.
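A minimal C++ sketch of heap sort following these steps (names are illustrative):

#include <iostream>
#include <utility>
using namespace std;

// Sift the element at index i down so the subtree rooted at i
// satisfies the max-heap property (heap occupies arr[0..n-1]).
void heapify(int arr[], int n, int i) {
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);   // continue sifting down
    }
}

void heapSort(int arr[], int n) {
    // Build the max heap in place.
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // Repeatedly swap the root (largest) to the end and shrink the heap.
    for (int i = n - 1; i > 0; i--) {
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {19, 12, 5, 7};
    heapSort(arr, 4);
    for (int x : arr) cout << x << " ";   // prints: 5 7 12 19
    cout << endl;
    return 0;
}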
Radix Sort
Radix Sort is a linear sorting algorithm that sorts elements by processing them digit by digit. It is an
efficient sorting algorithm for integers or strings with fixed-size keys. Rather than comparing elements
directly, Radix Sort distributes the elements into buckets based on each digit’s value. By repeatedly
sorting the elements by their significant digits, from the least significant to the most significant, Radix
Sort achieves the final sorted order.
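A minimal C++ sketch of radix sort for non-negative integers, using a stable counting sort per digit (names and the sample array are illustrative):

#include <iostream>
using namespace std;

// Stable counting sort of arr[0..n-1] by the digit at place value 'exp'.
void countingSortByDigit(int arr[], int n, int exp) {
    int output[100];                        // assumes n <= 100 for this sketch
    int count[10] = {0};
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];           // prefix sums give end positions
    for (int i = n - 1; i >= 0; i--)        // traverse backwards for stability
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

// Radix sort: sort by each digit, least significant first.
void radixSort(int arr[], int n) {
    int maxVal = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > maxVal) maxVal = arr[i];
    for (int exp = 1; maxVal / exp > 0; exp *= 10)
        countingSortByDigit(arr, n, exp);
}

int main() {
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    radixSort(arr, 8);
    for (int x : arr) cout << x << " ";     // prints: 2 24 45 66 75 90 170 802
    cout << endl;
    return 0;
}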
Bucket sort
Bucket sort is a sorting technique that involves dividing elements into various groups, or buckets. These
buckets are formed by uniformly distributing the elements. Once the elements are divided into buckets,
they can be sorted using any other sorting algorithm. Finally, the sorted elements are gathered together in
an ordered fashion.
Bucket sort, also known as bin sort, is a sorting algorithm that divides an array's elements into several
buckets. The buckets are then sorted one at a time, either using a different sorting algorithm or by
recursively applying the bucket sorting algorithm.
The bucket sort method is as follows:
Create an array of "buckets" that are initially empty
Scatter: Go through the original array, placing each object in its appropriate bucket
Each non-empty bucket should be sorted
Gather: Return all elements to the original array after visiting the buckets in order
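A minimal C++ sketch of bucket sort for values in [0, 1) (the value range, names, and sample data are illustrative assumptions):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Bucket sort: scatter into buckets, sort each bucket, gather in order.
void bucketSort(vector<double>& arr) {
    int n = arr.size();
    vector<vector<double>> buckets(n);      // n initially empty buckets
    for (double x : arr)
        buckets[(int)(x * n)].push_back(x); // scatter: map value to its bucket
    for (auto& b : buckets)
        sort(b.begin(), b.end());           // sort each non-empty bucket
    int k = 0;
    for (auto& b : buckets)                 // gather: visit buckets in order
        for (double x : b)
            arr[k++] = x;
}

int main() {
    vector<double> arr = {0.42, 0.32, 0.23, 0.52, 0.25, 0.47};
    bucketSort(arr);
    for (double x : arr) cout << x << " ";  // prints the values in ascending order
    cout << endl;
    return 0;
}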
Components of Hashing
There are majorly three components of hashing:
1. Key: A key can be anything, a string or an integer, that is fed as input to the hash
function, the technique that determines an index or location for storage of an item in a
data structure.
2. Hash Function: Receives the input key and returns the index of an element in an array
called a hash table. The index is known as the hash index. A common choice is K mod N
(where N is the table size), e.g. K mod 10.
3. Hash Table: Hash table is typically an array of lists. It stores values corresponding to the
keys. Hash stores the data in an associative manner in an array where each data value
has its own unique index.
We follow (K mod Table Size), i.e. K mod N, where N is the size of the hash table.
How does Hashing work?
Suppose we have a set of strings {“ab”, “cd”, “efg”} and we would like to store it in a table.
Step 1: We know that hash functions (which are some mathematical formula) are used to
calculate the hash value which acts as the index of the data structure where the value will
be stored.
Step 2: So, let’s assign
o “a” = 1,
o “b”=2, .. etc, to all alphabetical characters.
Step 3: Therefore, the numerical value by summation of all characters of the string:
“ab” = 1 + 2 = 3,
“cd” = 3 + 4 = 7 ,
“efg” = 5 + 6 + 7 = 18
Step 4: Now, assume that we have a table of size 7 to store these strings. The hash
function that is used here is the sum of the characters in key mod Table size . We can
compute the location of the string in the array by taking the sum(string) mod 7 .
Step 5: So we will then store
o “ab” in 3 mod 7 = 3,
o “cd” in 7 mod 7 = 0, and
o “efg” in 18 mod 7 = 4.
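The hash function from this worked example can be written directly in C++; a minimal sketch, assuming keys of lowercase letters only:

#include <iostream>
#include <string>
using namespace std;

const int TABLE_SIZE = 7;

// Hash function from the example: sum the character values
// ('a' = 1, 'b' = 2, ...) and take the result mod the table size.
int hashFunction(const string& key) {
    int sum = 0;
    for (char c : key)
        sum += c - 'a' + 1;
    return sum % TABLE_SIZE;
}

int main() {
    cout << "ab  -> " << hashFunction("ab")  << endl; // 3 mod 7 = 3
    cout << "cd  -> " << hashFunction("cd")  << endl; // 7 mod 7 = 0
    cout << "efg -> " << hashFunction("efg") << endl; // 18 mod 7 = 4
    return 0;
}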
What is indexing?
Indexing improves database performance by minimizing the number of disc visits required to fulfill a query.
It is a data structure technique used to locate and quickly access data in databases. Several database
fields are used to generate indexes. The main key or candidate key of the table is duplicated in the first
column, which is the Search key. To speed up data retrieval, the values are also kept in sorted order. It
should be highlighted that sorting the data is not required. The second column is the Data Reference or
Pointer which contains a set of pointers holding the address of the disk block where that particular key
value can be found.
A graph G can be represented as an ordered pair (V, E), where V is the set of vertices and
E is the set of edges.
G = (V, E)
For example, a graph with 6 vertices and 6 edges can be summarized as G = (6, 6).
Components:
1. Vertices: The vertices, sometimes also referred to as nodes, are the fundamental units of the graph.
2. Edges: Edges are lines or arcs that connect any two nodes in the graph.
“A collection of vertices and edges is called a graph”
Types of Graphs in Data Structure and Algorithms:
1. Null Graph: A graph is known as a null graph if there are no edges in the graph.
2. Trivial Graph: A graph having only a single vertex (node); it is also the smallest graph
possible.
3. Undirected Graph: A graph in which edges do not have any direction; that is, the nodes are
unordered pairs in the definition of every edge.
4. Directed Graph: A graph in which edges have directions; that is, the nodes are ordered
pairs in the definition of every edge.
5. Connected Graph: A graph in which, from any one node, we can visit any other node in the
graph.
6. Disconnected Graph: A graph in which at least one node is not reachable from another node.
7. Complete Graph: A graph in which there is an edge from each node to every other node.
8. Cycle Graph: A graph that is a cycle in itself; each vertex has degree 2.
9. Cyclic Graph: A graph containing at least one cycle is known as a cyclic graph.
10. Directed Acyclic Graph: A Directed Acyclic Graph, often abbreviated as DAG, is a
fundamental concept in graph theory. DAGs are used to show how things
are related or depend on each other in a clear and organized way.
“A Directed Acyclic Graph (DAG) is a directed graph that does not contain any cycles.”
Directed Acyclic Graph has two important features:
Directed Edges: In Directed Acyclic Graph, each edge has a direction,
meaning it goes from one vertex (node) to another. This direction signifies
a one-way relationship or dependency between nodes.
Acyclic: The term "acyclic" indicates that there are no cycles or closed loops
within the graph. In other words, you cannot traverse a sequence of directed
edges and return to the same node while following the edge directions. The
formation of cycles is prohibited in a DAG; hence this characteristic is essential.
Example: A, B, E, C, D, F is one valid topological ordering of a sample DAG containing the
edge E → F. (There can be multiple valid solutions.)
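A minimal sketch of topological ordering using Kahn's algorithm; the DAG below is hypothetical (its edges, including E → F, are assumptions chosen for illustration):

#include <iostream>
#include <vector>
#include <queue>
using namespace std;

// Kahn's algorithm: repeatedly output a vertex with in-degree 0
// and remove its outgoing edges. Works only on DAGs.
void topologicalSort(int n, vector<vector<int>>& adj) {
    vector<int> inDegree(n, 0);
    for (int u = 0; u < n; u++)
        for (int v : adj[u])
            inDegree[v]++;
    queue<int> q;
    for (int u = 0; u < n; u++)
        if (inDegree[u] == 0) q.push(u);   // start with all "no prerequisite" vertices
    while (!q.empty()) {
        int u = q.front(); q.pop();
        cout << char('A' + u) << " ";
        for (int v : adj[u])
            if (--inDegree[v] == 0) q.push(v);
    }
    cout << endl;
}

int main() {
    // Hypothetical DAG on vertices A..F (edges assumed for illustration).
    int n = 6;
    vector<vector<int>> adj(n);
    adj[0] = {2};      // A -> C
    adj[1] = {2, 3};   // B -> C, B -> D
    adj[2] = {3};      // C -> D
    adj[4] = {5};      // E -> F
    topologicalSort(n, adj);   // prints one valid topological order: A B E C F D
    return 0;
}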
Shortest Path
In graph theory, the shortest path problem is about finding the shortest possible
route between two nodes in a graph. The path length is usually measured by the
sum of edge weights (or the number of edges in an unweighted graph).
Common Shortest Path Algorithms
1. Dijkstra’s Algorithm (For graphs with non-negative weights)
Uses a priority queue to find the shortest path from a single
source to all nodes
2. Bellman-Ford Algorithm (For graphs with negative weights)
Finds the shortest path from a single source and can handle
graphs with negative weights
Dijkstra’s Algorithm
Algorithm Steps:
1. Initialize:
Set the distance of the source node to 0 and all other nodes
to infinity.
Use a priority queue (min-heap) to always explore the node
with the shortest known distance.
2. Process Nodes:
Repeatedly extract the node with the smallest known distance from the priority
queue; for each of its neighbors, relax the edge, i.e. update the neighbor's
distance if the path through the current node is shorter. Repeat until all
nodes have been processed.
Example (graph figure omitted): running Dijkstra's algorithm from source vertex 0
on a 9-vertex weighted graph yields the shortest distances
Output: 0 4 12 19 21 11 9 8 14
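A minimal C++ sketch of Dijkstra's algorithm with a min-heap priority queue; the small 5-vertex example graph is an assumption for illustration:

#include <iostream>
#include <vector>
#include <queue>
#include <utility>
#include <climits>
using namespace std;

// Dijkstra's algorithm with a min-heap (non-negative edge weights only).
// adj[u] holds pairs (v, w): an edge u -> v of weight w.
vector<int> dijkstra(int n, vector<vector<pair<int,int>>>& adj, int src) {
    vector<int> dist(n, INT_MAX);
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    dist[src] = 0;
    pq.push({0, src});                       // (distance, vertex)
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;           // stale entry, skip
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {     // relax the edge u -> v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

int main() {
    // Small undirected example graph (edges assumed for illustration).
    int n = 5;
    vector<vector<pair<int,int>>> adj(n);
    auto addEdge = [&](int u, int v, int w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    addEdge(0, 1, 4); addEdge(0, 2, 1); addEdge(2, 1, 2);
    addEdge(1, 3, 5); addEdge(2, 3, 8); addEdge(3, 4, 3);
    vector<int> dist = dijkstra(n, adj, 0);
    for (int d : dist) cout << d << " ";     // prints: 0 3 1 8 11
    cout << endl;
    return 0;
}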
Floyd-Warshall Algorithm:
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices
in a weighted graph, unlike Dijkstra and Bellman-Ford, which are single-source
shortest path algorithms. It works for both directed and undirected weighted graphs.
Note: It does not work for the graphs with negative cycles (where the sum of the
edges in a cycle is negative).
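A minimal C++ sketch of Floyd-Warshall; the 4-vertex example graph is an assumption for illustration:

#include <iostream>
#include <vector>
using namespace std;

const int INF = 1e9;   // "no edge" marker; large but safe to add twice in an int

// Floyd-Warshall: all-pairs shortest paths via dynamic programming.
// After processing vertex k, dist[i][j] is the shortest i -> j path
// using only intermediate vertices from {0, ..., k}.
void floydWarshall(vector<vector<int>>& dist) {
    int n = dist.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main() {
    // Hypothetical 4-vertex directed graph (weights assumed for illustration).
    vector<vector<int>> dist = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    floydWarshall(dist);
    for (auto& row : dist) {               // print the all-pairs distance matrix
        for (int d : row)
            if (d >= INF) cout << "INF ";
            else cout << d << " ";
        cout << endl;
    }
    return 0;
}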
Representation of graph
The process of representing a graph in memory with the help of its vertices and edges is
called graph representation.
There are two common techniques to represent a graph:
1. Adjacency Matrix
2. Adjacency List
Adjacency Matrix
An adjacency matrix is a two-dimensional array that stores the edges between
two vertices as boolean values. The rows and columns of the matrix represent
the vertices of the graph.
Explanation: At the top we have a graph with some vertices and edges, and at the bottom we
have the adjacency matrix representation of that graph. The rows and columns represent the
vertices of the graph. If we look at the vertex with the value of 1, we can see that it has
edges to vertices 2 and 4, so we put a 1 in row 1 for columns 2 and 4. This is an undirected
graph, so it goes the other way as well: we also put a 1 in row 2 and in row 4 for column 1.
If this were a directed graph, we would only put a 1 in row 1 for columns 2 and 4.
Implementation of Adjacency Matrix:
#include <iostream>
using namespace std;

class Graph {
private:
    int adjMatrix[10][10]; // 10x10 adjacency matrix
    int numVertices = 4;   // Fixed number of vertices (can be changed)
public:
    // Constructor initializes all entries in the matrix to 0
    Graph() {
        for (int i = 0; i < numVertices; i++)
            for (int j = 0; j < numVertices; j++)
                adjMatrix[i][j] = 0;
    }

    // Add an undirected edge: set both symmetric entries to 1
    void addEdge(int u, int v) {
        adjMatrix[u][v] = 1;
        adjMatrix[v][u] = 1;
    }

    // Print the matrix row by row
    void printMatrix() {
        for (int i = 0; i < numVertices; i++) {
            for (int j = 0; j < numVertices; j++)
                cout << adjMatrix[i][j] << " ";
            cout << endl;
        }
    }
};

int main() {
    Graph g; // Create a Graph object
    // Add edges between the vertices
    g.addEdge(0, 1); // Connect vertex 0 and 1
    g.addEdge(0, 2); // Connect vertex 0 and 2
    g.addEdge(1, 2); // Connect vertex 1 and 2
    g.addEdge(2, 3); // Connect vertex 2 and 3
    g.printMatrix();
    return 0;
}
Adjacency List
An adjacency list is a collection of linked lists or arrays that lists all of the other vertices that
are connected. Let's look at an example that uses linked lists. Every vertex has a linked list
of all the vertices that it is connected to:
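A minimal sketch, assuming a fixed set of four vertices and hand-rolled linked lists of neighbors (the addEdge and printList helpers are illustrative):

#include <iostream>
using namespace std;

// Node in a vertex's linked list of neighbors
struct Node {
    int vertex;
    Node* next;
    Node(int v) : vertex(v), next(nullptr) {}
};

class Graph {
private:
    static const int NUM_VERTICES = 4;
    Node* adjList[NUM_VERTICES];   // one linked list head per vertex
public:
    Graph() {
        for (int i = 0; i < NUM_VERTICES; i++)
            adjList[i] = nullptr;
    }

    // Add an undirected edge by prepending to both vertices' lists
    void addEdge(int u, int v) {
        Node* a = new Node(v); a->next = adjList[u]; adjList[u] = a;
        Node* b = new Node(u); b->next = adjList[v]; adjList[v] = b;
    }

    // Print each vertex's list of neighbors
    void printList() {
        for (int i = 0; i < NUM_VERTICES; i++) {
            cout << i << ":";
            for (Node* p = adjList[i]; p != nullptr; p = p->next)
                cout << " " << p->vertex;
            cout << endl;
        }
    }
};

int main() {
    Graph g; // Create a Graph object
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 3);
    g.printList();
    return 0;
}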
Which one is better?
The answer is: it depends. It depends on the type of graph and the type of operations you
want to perform on the graph. If you have a graph with a lot of edges, then the adjacency
matrix might be better. If you have a graph with a lot of vertices, then the adjacency list
might be better. If you want to be able to quickly tell if there is an edge between two vertices,
then the adjacency matrix might be better. If you want to be able to quickly iterate over all
the edges of a vertex, then the adjacency list might be better. I think overall, in my own
opinion, the adjacency list is easier and more commonly used. But it really depends on the
situation.
Memory Management and Garbage Collection
Memory management and garbage collection are essential for efficient program
operation. Memory management is the process of allocating and deallocating memory
space during program execution, while garbage collection is a specific type of memory
management that automatically reclaims memory that is no longer reachable by the
program.
Memory Management:
Allocation:
When a program needs to store data, it allocates memory to hold that data, often from a
large pool of available memory known as the heap.
Deallocation:
Once the program no longer needs the data, it should deallocate the memory, returning it to
the heap so it can be used by other parts of the program.
Dynamic Allocation:
In languages like C and C++, memory allocation and deallocation are often handled manually
by the programmer, using malloc and free in C or the new and delete operators in C++.
Static Allocation:
Some data structures, like arrays declared with a fixed size, have their memory allocated at
compile time (static allocation), as contrasted with heap allocation in the sketch below.
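A minimal C++ sketch contrasting the two (names are illustrative; C++ itself performs no garbage collection, so the heap block must be freed manually):

#include <iostream>
using namespace std;

int main() {
    int staticArr[5] = {1, 2, 3, 4, 5};  // static allocation: size fixed at compile time

    int n = 5;
    int* heapArr = new int[n];           // dynamic allocation from the heap
    for (int i = 0; i < n; i++)
        heapArr[i] = i * 10;

    cout << staticArr[0] << " " << heapArr[4] << endl; // prints: 1 40

    delete[] heapArr;                    // manual deallocation: memory returns to the heap
    heapArr = nullptr;                   // avoid a dangling pointer
    return 0;
}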
Garbage Collection:
Automatic Memory Reclamation:
Garbage collection is an automatic memory management technique that identifies and
reclaims memory that is no longer in use by the program, preventing memory leaks.
Reachable vs. Unreachable:
The garbage collector determines which memory is reachable by the program (i.e., still
pointed to by variables or other objects) and which is not.
Unreachable Memory as Garbage:
Memory that is no longer reachable is considered "garbage" and can be safely reclaimed by
the garbage collector.
Languages with Garbage Collection:
Languages like Java and C# have built-in garbage collection, relieving programmers from
the burden of manual memory management.
Benefits:
Garbage collection simplifies programming, reduces the risk of memory leaks, and improves
program reliability.
Garbage Collection in Data Structures:
Linked Lists and Trees:
In linked lists and trees, each node is allocated memory dynamically. Garbage collection
ensures that memory occupied by nodes that are no longer part of the data structure is
reclaimed.
Arrays:
Garbage collection can also play a role in managing memory for arrays, especially dynamic
arrays or those where the size can change during runtime.
Efficiency:
Efficient garbage collection algorithms can minimize the overhead associated with
reclaiming memory, ensuring that program performance is not negatively impacted.