Unit 1
DATA STRUCTURES
(GR24A1017)
LECTURE MATERIAL
B. Tech I Year
(CSE,ECE,EEE,IT,CE,ME,DS,AIML)
Preface
The main objective of the course entitled “Data Structures” is to make first-year B.Tech
students familiar with data structure concepts and programming fundamentals in a
systematic manner. This material is written according to the GRIET GR-24 (Autonomous)
syllabus and has been prepared to meet the requirements of the Data Structures course.
Prepared by:
1. S T G Y.Sandhya
2. D. SugunaKumari
3. R.S.Shalini
4. G.Vandhana
5. N.Sashi Prabha
6. V.Vijay Kumar
7. M.Mounica
8. K.Shoban Babu
9. M.Sunitha
Course Code: GR24A1017 L/T/P/C:2/0/0/2
I Year II Semester
Course Outcomes:
1. Implement various sorting techniques and analyze the computational
complexity of algorithms.
2. Analyze the basics of data structures and their types, and translate into programs the
operations on stacks and queues and their applications.
3. Develop algorithms for various operations on linked lists and convert them to programs.
4. Interpret operations on the non-linear data structures binary tree and BST.
5. Summarize the operations on graphs and apply graph traversals techniques and
outline hashing techniques.
UNIT I
Algorithms and Complexities: Analysis of algorithms, Basic concept of order of
complexity, Asymptotic Notations: Big Oh notation, Omega notation, Theta notation, little
oh notation and little omega notation.
Sorting: Bubble sort, Insertion Sort, Selection Sort, Quick Sort, Merge Sort, Radix Sort,
Counting sort.
UNIT II
Stacks: Introduction to Data Structures and types, Stack – Operations: pop, push, display,
peek, Representation and implementation of stack operations using arrays, stack
applications, recursion, infix to postfix transformation, evaluating postfix expressions.
Queues: Queue – Operations: enqueue, dequeue, display, representation and implementation
of queue operations using array, applications of queues, circular queues - representation and
implementation.
UNIT III
LIST: Introduction, dynamic memory allocation, self-referential structures, single linked
list, advantages and disadvantages of single linked list, single linked list vs arrays,
representation of a linked list in memory, operations-insertion, deletion, display, search.
Types and applications: Circular linked list, double linked list, implementation of stack,
queue using linked list.
UNIT IV
Trees: Basic tree concepts, Binary trees: properties, types, representation of binary trees
using arrays and linked lists, traversals of binary tree.
Binary Search Tree –Representation and implementation of operations, Binary Search Tree
Traversals (recursive), creation of binary tree and BST from given traversals.
UNIT V
Graphs: Definition, basic terminology, representation of graphs, graph traversal techniques –
Breadth First Traversal, Depth First Traversal.
Hashing - Introduction to hashing, hash function and types, hash table, implementation,
collision resolution techniques–separate chaining, linear probing, quadratic probing, double
hashing (only examples – no implementation).
Teaching methodologies:
TEXT BOOKS
1. Data Structures, 2/e, Richard F. Gilberg, Forouzan, Cengage
2. Data Structures and Algorithms, 2008, G.A.V. Pai, TMH
REFERENCE BOOKS
1. Data Structures with C, Seymour Lipschutz, TMH
2. Classic Data Structures, 2/e, Debasis Samanta, PHI, 2009
3. Fundamentals of Data Structures in C, 2/e, Horowitz, Sahni, Anderson-Freed, University Press
UNIT – I
Algorithms and Complexities: Analysis of algorithms, Basic concept of order of complexity,
Asymptotic Notations: Big Oh Notation, Omega Notation, Theta Notation, little oh Notation and
little omega Notation.
Sorting: Bubble Sort, Insertion Sort, Selection Sort, Quick Sort, and Merge Sort, Radix Sort, and
Counting sort.
Analysis of an Algorithm:
An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may be, an
algorithm terminates after executing a finite number of instructions.
In addition, every algorithm must satisfy the following criteria:
1) Input: There are zero or more quantities, which are externally supplied
2) Output: At least one quantity is produced
3) Definiteness: Each instruction must be clear and unambiguous
4) Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps
5) Effectiveness: Every instruction must be sufficiently basic that it can in principle be carried out by a
person using only pencil and paper. It is not enough that each operation be definite, but it must also
be feasible.
Recursive Algorithms:
An algorithm is said to be recursive if the same algorithm is invoked in the body. An algorithm that
calls itself is DIRECT Recursive. An algorithm A is said to be INDIRECT Recursive if it calls another
algorithm which in turn calls A.
Example:
Algorithm fact(n)
{
if(n==1) then
return 1;
else
return n*fact(n-1);
}
Pseudo-code: It is a combination of natural language and programming language, making it easier for a
non-programmer to understand the general workings of a program.
Example1:
Algorithm for Maximum elements in array
Algorithm max(A,n)
{
x:=A[0];
for i := 1 to n-1 do
{
if x<A[i] then
{
x := A[i];
}
}
write “max value” x;
return x;
}
Example2:
Algorithm evenodd(range)
{
for num := 0 to range do
if num % 2 = 0 then
print num is even
else
print num is odd
endif
endfor
}
Performance Analysis:
Performance of an algorithm is a process of making evaluative judgment about algorithms. That means when we
have multiple algorithms to solve a problem, we need to select a suitable algorithm to solve that problem. We
compare algorithms with each other which are solving the same problem, to select the best algorithm. To
compare algorithms, we use a set of parameters or set of elements like memory required by that algorithm, the
execution speed of that algorithm, easy to understand, easy to implement, etc.,
When we want to analyze an algorithm, we consider only the space and time required by that algorithm and we
ignore all the remaining elements. Based on this information, performance analysis of an algorithm can also be
defined as follows.
Performance analysis of an algorithm is the process of calculating space and time required by that
algorithm.
1) Space required to complete the task of that algorithm (Space Complexity).
2) Time required to complete the task of that algorithm (Time Complexity)
Space complexity:
When we design an algorithm to solve a problem, it needs some computer memory to complete its
execution. For any algorithm, memory is required for the following purposes...
1) To store program instructions.
2) To store constant values.
3) To store variable values.
4) And for few other things like function calls, jumping statements etc.
Example 1:
Algo add(a, b, c)
{
return (a + b + c);
}
One word each for a, b and c: the total space complexity is 3, i.e., constant.

Example 2:
Algo Add(X, n)
{
Sum := 0;
for i := 1 to n do
Sum := Sum + X[i];
return Sum;
}
n words for the array X plus one each for Sum, i and n: the total space complexity is n + 3.

Example 3:
Algo xyz(x, y, z)
{
d := (x + y + y*z) / (x + y);
}
One word each for x, y, z and d: the total space complexity is 4, i.e., constant.
Time Complexity:
Every algorithm requires some amount of computer time to execute its instructions and perform the task.
This computer time required is called time complexity. The time complexity of an algorithm is the total amount
of time required by the algorithm to complete its execution. The execution time depends on many factors such as
1) System load
2) No. of other programs running
3) Instruction set
4) Speed of hardware
The time complexity of an algorithm can be defined as follows:
Suppose M is an algorithm, and suppose n is the size of the input data. Clearly the complexity f(n) of M
increases as n increases. We usually compare the rate of increase of f(n) with some standard functions. The most
common computing times are
O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n)
Example:

Program Segment A:
x = x + 2;
Total frequency count: 1, so the time complexity is O(1).

Program Segment B:
for x = 1 to n do
x = x + 2;
end;
The statement x = x + 2 executes n times; total frequency count: n, so the time complexity is O(n).

Program Segment C:
for j = 1 to n do
for k = 1 to n do
x = x + 2;
end;
end;
The statement x = x + 2 executes n * n times; total frequency count: n^2, so the time complexity is O(n^2).
Time Complexity:

Complexity   Notation   Description
Constant     O(1)       Constant number of operations, not depending on the input data size.
Quadratic    O(n^2)     Number of operations proportional to the square of the size of the input data.
Cubic        O(n^3)     Number of operations proportional to the cube of the size of the input data.
ASYMPTOTIC NOTATIONS:
When it comes to analyzing the complexity of any algorithm in terms of time and
space, we can never provide an exact number to define the time required and the space required
by the algorithm, instead we express it using some standard notations, also known as
Asymptotic Notations.
Majorly, we use THREE types of Asymptotic Notations and those are as follows...
Big - Oh (O)
Big - Omega (Ω)
Big - Theta (Θ)
Along with these, there are TWO more notations
Little – Omega (ω)
Little – Oh (o)
Big - Oh Notation (O):
It provides a possibly asymptotically tight upper bound for f(n); it does not give the best-case
complexity but can give the worst-case complexity. Let f be a non-negative function. We say that
f(n) is Big-O of g(n), written as f(n) = O(g(n)), if and only if (iff) there are positive constants
c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0. If f(n) = O(g(n)), we say that g(n) is an
upper bound on f(n).
Example: Prove that 4n^2 = O(n^3).
Solution: We need c and n0 such that 0 <= 4n^2 <= c*n^3 for all n >= n0. Dividing by n^3, this
requires 4/n <= c. Now, to determine the value of c, we see that 4/n is maximum when n = 1; therefore c = 4.
To determine the value of n0, the inequality 0 <= 1 <= n0 gives n0 = 1.
Therefore, 0 <= 4n^2 <= 4n^3, for all n >= n0 = 1.
Example 3: To find an upper bound of f(n), we have to find c and n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0. Prove that 3n+2 = O(n) using the inequality 3n+2 <= 4n.
Solution: By definition we have 0 <= f(n) <= c*g(n).
Substituting 3n+2 as f(n) and 4n as c*g(n), we now find a proper n0 such that f(n) <= c*g(n):

n    3n+2    4n
1     5       4
2     8       8
3    11      12
4    14      16
5    17      20

From the table, 3n+2 <= 4n holds for all n >= 2. So c = 4, g(n) = n and n0 = 2. Hence proved.
Example 4: To find an upper bound of f(n), we have to find c and n0 such that 0 <= f(n) <= c*g(n)
for all n >= n0. Show that 10n^2+4n+2 = O(n^2) using the inequality 10n^2+4n+2 <= 11n^2.
Solution: By definition we have 0 <= f(n) <= c*g(n).
Substituting 10n^2+4n+2 as f(n) and 11n^2 as c*g(n), we now find a proper n0 such that f(n) <= c*g(n):

n    10n^2+4n+2    11n^2
1        16           11
2        50           44
3       104           99
4       178          176
5       272          275

From the table, 10n^2+4n+2 <= 11n^2 holds for all n >= 5. So c = 11, g(n) = n^2 and n0 = 5. Hence
proved.
Big - Omega Notation (Ω):
It provides a possibly asymptotically tight lower bound for f(n); it does not give the worst-case complexity but
can give the best-case complexity. f(n) is said to be Big-Omega of g(n), written as f(n) = Ω(g(n)), if there are
positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0. If f(n) = Ω(g(n)), we say that g(n)
is a lower bound on f(n).
Example 1: Find the lower bound of the running time of the linear function f(n) = 6n + 3.
Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all
n >= n0.
0 <= c*g(n) <= f(n)
0 <= 6n <= 6n + 3, true for all n >= 1
0 <= 5n <= 6n + 3, true for all n >= 1
Both inequalities are true, and there exist infinitely many such inequalities. So f(n) = Ω(n), for example
with c = 6, g(n) = n and n0 = 1.
Example 2: Find the lower bound of the running time of the quadratic function f(n) = 3n^2 + 2n + 4.
Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all
n >= n0.
0 <= c*g(n) <= 3n^2 + 2n + 4
0 <= 3n^2 <= 3n^2 + 2n + 4, true for all n >= 1
0 <= n^2 <= 3n^2 + 2n + 4, true for all n >= 1
Both inequalities are true, and there exist infinitely many such inequalities. So f(n) = Ω(n^2), for example
with c = 3, g(n) = n^2 and n0 = 1.
Example 3: Find the lower bound of the running time of the cubic function f(n) = 2n^3 + 4n + 5.
Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all
n >= n0.
0 <= c*g(n) <= 2n^3 + 4n + 5
0 <= 2n^3 <= 2n^3 + 4n + 5, true for all n >= 1
0 <= n^3 <= 2n^3 + 4n + 5, true for all n >= 1
Both inequalities are true, and there exist infinitely many such inequalities. So f(n) = Ω(n^3), for example
with c = 2, g(n) = n^3 and n0 = 1.
Big - Theta Notation (Θ):
It provides an asymptotically tight bound for f(n): Θ(g(n)) = {f(n): there exist positive constants c1, c2 and
n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}.
Example 1: Show that 3n + 3 = Θ(n).
Solution: In the given problem f(n) = 3n + 3 and g(n) = n. To prove f(n) = Θ(g(n)), we have to find c1, c2
and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Upper bound: f(n) = 3n + 3 <= 3n + 3n = 6n for all n >= 1, so c2 = 6 (write f(n) in terms of g(n) so that
the inequality is mathematically true).
Lower bound: f(n) = 3n + 3 >= 3n for all n >= 1, so c1 = 3.
That is, we are able to find c1 = 3, c2 = 6 and n0 = 1 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all
n >= n0. So f(n) = Θ(g(n)), i.e., 3n + 3 = Θ(n) for all n >= 1.
Example 2: Show that 10n^2 + 4n + 2 = Θ(n^2).
Solution: In the given problem f(n) = 10n^2 + 4n + 2 and g(n) = n^2. To prove f(n) = Θ(g(n)), we have to
find c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Upper bound: f(n) = 10n^2 + 4n + 2 <= 10n^2 + 4n^2 + 2n^2 = 16n^2 for all n >= 1, so c2 = 16.
Lower bound: f(n) = 10n^2 + 4n + 2 >= 10n^2 for all n >= 1, so c1 = 10.
That is, we are able to find c1 = 10, c2 = 16 and n0 = 1 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all
n >= n0. So f(n) = Θ(g(n)), i.e., 10n^2 + 4n + 2 = Θ(n^2) for all n >= 1.
Little - Oh Notation (o):
Little o notation is used to describe an upper bound that cannot be tight; in other words, a loose upper bound
on f(n).
Def: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is
o(g(n)) if for any real positive constant c there exists a constant n0 > 0 such that 0 <= f(n) < c*g(n) for all
n >= n0. Equivalently, f(n) = o(g(n)) if lim f(n)/g(n) = 0 as n → ∞.
Example 1: To show 2n + 4 = o(n^2), compute lim (2n+4)/n^2 as n → ∞. The result is 0, which satisfies the
definition above, so 2n + 4 = o(n^2).
Example 2: To show 2n = o(n^2), compute lim 2n/n^2 = lim 2/n = 0 as n → ∞, so 2n = o(n^2).
Little - Omega Notation (ω):
We use ω notation to denote a lower bound that is not asymptotically tight. f(n) ∈ ω(g(n)) if and only if
g(n) ∈ o(f(n)). In terms of limits, if f(n) ∈ ω(g(n)) then lim f(n)/g(n) = ∞ as n → ∞.
A little omega (ω) running time can be proven by applying the limit formula given below:
If lim f(n)/g(n) = ∞ as n → ∞, then f(n) is ω(g(n)).
For example, take f(n) = 4n + 6 and g(n) = 1: lim (4n+6)/1 = ∞ as n → ∞. Also, for any c we can find an n0
satisfying the inequality 0 <= c*g(n) < f(n), i.e., 0 <= c*1 < 4n + 6. Hence proved.
Example 1: To show 2n^2 + 4n + 6 = ω(n), we verify that f(n) ∈ ω(g(n)) using the definition:
ω(g(n)) = {f(n): for any positive constant c > 0 there exists a constant n0 > 0 such that
0 <= c*g(n) < f(n) for all n >= n0}.
Solution: In the given problem
f(n) = 2n^2 + 4n + 6
g(n) = n
We must show 0 <= c*n < 2n^2 + 4n + 6 for all n >= n0, for every c > 0. Since 2n^2 > c*n whenever
n > c/2, choosing n0 = max(1, c) works for any c.
Hence f(n) = ω(g(n)), i.e., f(n) = ω(n).
Example 2: To show 2n^3 + 3n^2 + 1 = ω(n), we verify in the same way: for any positive constant c > 0,
c*n < 2n^3 <= 2n^3 + 3n^2 + 1 holds for all n with 2n^2 > c, so a suitable n0 always exists.
Hence 2n^3 + 3n^2 + 1 = ω(n).
SORTING
Sorting is the process of arranging the elements in a list in ascending or descending order, with numerical data or
alphabetically with character data.
For example, suppose we want to obtain the telephone number of a person. If the telephone directory is not
arranged in alphabetical order, one has to search from the very first page till the last page. If the directory
is sorted, we can easily search for the telephone number.
1) Internal Sorts: Sorting that takes place entirely in main memory is called internal sorting. There is a
limitation for internal sorts, i.e., they can only process relatively small lists due to memory constraints.
Ex: Selection sort, Insertion sort, Bubble sort, Quick sort
2) External Sorts: Sorting a large amount of data requires external or secondary memory. This process uses
external memory, such as an HDD, to store the data that does not fit into main memory, so primary memory
holds only the data currently being sorted. All external sorts are based on the process of merging: different
parts of the data are sorted separately and merged together.
Bubble Sort:
This is the simplest sorting technique when compared with all the other sorting techniques. It is also called
exchange sort. In this sorting technique the adjacent elements are compared and interchanged if necessary.
Process:
Compare the first and second elements; if the first element is greater than the second, interchange them.
Then compare the second and third elements; if the second is greater than the third, interchange them.
Continue to the end of the list; after each pass the largest remaining element moves to its final position.
Program:
#include <stdio.h>
void bubblesort(int a[], int n);
void main()
{
    int i, n, a[20];
    printf("Enter the number of elements in the Array: ");
    scanf("%d", &n);
    printf("\nEnter the elements:\n\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    bubblesort(a, n);
    printf("\nThe sorted elements are:\n");
    for (i = 0; i < n; i++)
        printf("%3d", a[i]);
}
void bubblesort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - i - 1; j++)
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}
Selection sort:
Selection sort is also known as push-down sorting. As the name suggests the first
element of the list is selected. It is compared with all the other elements of the list to
identify the smallest element. The smallest element is now interchanged with the first
location element. Again the second element is compared with the remaining elements to
find the smaller one and is interchanged with the second location element in the list. This
process repeats until all the elements in the list are sorted.
Process:
1) From the list select the smallest element and interchange with the first location (0 th location)
element. Now the first element is sorted.
2) From the elements in positions 2 to n, select the next smallest element and interchange with the
second location (1st location) element. Now the second element is sorted.
3) Repeat the above steps for n-1 times. The entire list gets sorted within (n-1)th pass.
Program:
#include <stdio.h>
void selectionsort(int[],int);
int main()
{
int a[10],i,n;
printf("enter the size of array:\n");
scanf("%d",&n);
printf("enter the elements of array:\n");
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("\nThe unsorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
selectionsort(a,n);
printf("\nThe sorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
}
void selectionsort(int a[],int n)
{
int min,temp,i,j;
for(i=0;i<n-1;i++)
{
min=i;
for(j=i+1;j<n;j++)
{
if(a[min]>a[j])
min=j;
}
if(min!=i)
{
temp=a[min];
a[min]=a[i];
a[i]=temp;
}
}
}
Insertion Sort:
Both the selection and bubble sorts exchange elements. But insertion sort does not
exchange elements. In insertion sort the element is inserted at an appropriate place
similar to card insertion. To sort the cards in our hand we extract a card, shift the
remaining cards and then insert the extracted card in the correct place. This process is
repeated until all the cards are in the correct sequence. Here the list is divided into two
parts sorted and unsorted sub-lists. In each pass, the first element of unsorted sub list
is picked up and moved into the sorted sub list by inserting it in suitable position.
Suppose we have ‘n’ elements, we need n-1 passes to sort the elements.
Process:
1) Select the second element in the list and compare it with the first element. If the first element is greater
than the second element then the second element is inserted at first Location by shifting the first element
to the second position. Otherwise proceed with the next step.
2) Select the third element in the list and compare it with the sorted two elements and insert at the
appropriate position.
3) Select the fourth element and compare it with previous three sorted elements and insert it in proper
position among the elements which are compared.
4) Repeat the above steps for n-1 times. The entire list gets sorted within (n-1)th pass.
Algorithm for insertion sort: insertionsort(A, N)
Step 1: Repeat steps 2 to 5 for i = 1 to N-1
Step 2: Set temp = A[i]
Step 3: Set j = i - 1
Step 4: Repeat while j >= 0 and temp < A[j]
Set A[j+1] = A[j]
Set j = j - 1
[End of while loop]
Step 5: Set A[j+1] = temp
[End of loop]
Step 6: Exit
Ex: - A list of unsorted elements is: 78 23 45 8 32 36.
The results of insertion sort after each pass are as follows:
Pass 1: 23 78 45 8 32 36
Pass 2: 23 45 78 8 32 36
Pass 3: 8 23 45 78 32 36
Pass 4: 8 23 32 45 78 36
Pass 5: 8 23 32 36 45 78
Program:
#include <stdio.h>
void insertionsort(int[],int);
int main()
{
int a[10],i,n;
printf ("enter the size of array:\n");
scanf("%d",&n);
printf("enter the elements of array:\n");
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("\nThe unsorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
insertionsort(a,n);
printf ("\nThe sorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
}
void insertionsort(int a[],int n)
{
int i,j,index;
for(i=1;i<n;i++)
{
index=a[i];
j=i;
while ((j>0)&&(a[j-1]>index))
{
a[j]=a[j-1];
j--;
}
a[j]=index;
}
}
Quick sort:
This sorting is also called as partition exchange sorting. This method is based on divide
and conquer technique. i.e., the entire list is divided into various partitions and sorting is
applied again and again on the partitions.
In this method, the list is divided into two parts, based on an element called pivot
element. Usually, the first element is considered to be the pivot element. Now move
the pivot element into its correct position in the list.
The elements to the left of the pivot are less than the pivot and the elements to the
right of the pivot are greater than the pivot. The procedure of choosing a pivot and
partitioning the list is applied recursively until we get sub-lists consisting of only one
element. The process is reapplied to each of these partitions.
Process:
1. Take the first element as pivot element and take two pointers left and right to move the positions
2. Repeatedly increment the pointer left by one position until list[left] > pivot.
3. Repeatedly decrement the pointer right by one position until list[right]<=pivot
4. If left < right then interchange list[left] and list[right]
5. The process is repeated until the condition in step 4 fails (i.e., right<=left).
6. At this point list[right] is interchanged with pivot.[so the final position is sorted]
Algorithm for Quick Sort:
Quicksort(A, LEFT, RIGHT)
Step 1: If LEFT < RIGHT, set PIVOT = A[LEFT], L = LEFT, R = RIGHT; otherwise Exit.
Step 2: Repeat while L < R
Repeat while A[L] <= PIVOT and L <= RIGHT: increment L
Repeat while A[R] > PIVOT: decrement R
If L < R, interchange A[L] and A[R]
[End of loop]
Step 3: Interchange A[LEFT] and A[R] [the pivot is now in its sorted position]
Step 4: Call Quicksort(A, LEFT, R-1) and Quicksort(A, R+1, RIGHT)
Step 5: Exit
Note: Quick sort is a complex method of sorting, so it is a little harder to implement than the other sorting methods.
Ex: - A list of unsorted elements is: 54 26 93 17 77 31 44 55 20
Program:
#include <stdio.h>
void quicksort(int [], int, int);
void main()
{
    int n, i, a[10];
    printf("\nHow many elements do you want to sort? ");
    scanf("%d", &n);
    printf("\nEnter elements for the array: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    quicksort(a, 0, n - 1);
    printf("\nAfter sorting the elements are: ");
    for (i = 0; i < n; i++)
        printf("%3d ", a[i]);
}
void quicksort(int a[], int left, int right)
{
    int pivot, t, temp, l, r;
    if (left < right)
    {
        pivot = a[left];
        l = left;
        r = right;
        while (l < r)
        {
            while (l <= right && a[l] <= pivot)
                l++;
            while (a[r] > pivot)
                r--;
            if (l < r)
            {
                t = a[l];
                a[l] = a[r];
                a[r] = t;
            }
        }
        temp = a[left];
        a[left] = a[r];
        a[r] = temp;
        quicksort(a, left, r - 1);
        quicksort(a, r + 1, right);
    }
}
Merge sort:
The basic concept of merge sort is to divide the list into two smaller sub-lists of
approximately equal size. Recursively repeat this procedure till only one element is left in
the sub-list. After this, various sorted sub-lists are merged to form sorted parent list. This
process goes on recursively till the original sorted list arrived.
The merge sort splits data list to be sorted into two equal halves, and places them in separate arrays. This sorting
method uses divide and conquer paradigm. It separates the list into two halves and then sorts the two data sets
recursively. Finally these are merged to obtain the complete sorted list.
To be more specific, the merge sort breaks an array down into smaller and smaller pieces until the individual
pieces are just one item in size. Since a single item is always considered to be sorted, the contiguous items can be
merged .The merge sort algorithm therefore breaks down the array into smaller chunks on the way down the
recursion tree. On the way back up, it merges these smaller pieces of the array into larger pieces.
This is a special case of recursion in which given problem is divided into two or more sub-problems of exactly
same type and solution to problem is expressed in terms of solution to sub-problem. i.e. Dividing the list of
elements into two approximately equal parts recursively and find solution independently then merged together
into single list. Quick and merge sorts are based on Divide and Conquer concept.
Two types of sorting algorithms which are based on this divide and conquer algorithm:
1. Quick sort
Quick sort also uses few comparisons (somewhat more than the other two). Like heap sort it can sort "in place"
by moving data in an array.
2. Merge sort
Merge sort is good for data that's too big to have in memory at once, because its pattern of storage access is very
regular. It also uses even fewer comparisons than heap sort, and is especially suited for data stored as linked lists.
Process:
Consider the initial list and divide the list into two sub-lists. Again these sub-lists are divided into many number
of sub-lists until each and every sub-list contains single element.
1. Combine these sub-lists into sorted order.
2. Finally we will get the list of elements in sorted order.
Algorithm for Merge Sort:
Merge (A, LOW, MID, HIGH)
Step 1: Initialize i = LOW, j = MID + 1, k = LOW
Step 2: Repeat while (i <= MID) and (j <= HIGH)
If A[i] < A[j]
Set temp[k++] = A[i++]
Else
Set temp[k++] = A[j++]
[End of If]
[End of loop]
Step 3: Repeat while j <= HIGH
Set temp[k++] = A[j++]
[End of loop]
Repeat while i <= MID
Set temp[k++] = A[i++]
[End of loop]
Step 4:[copy the contents of temp back to A]
Repeat for k=LOW to HIGH
Set A[k] =temp[k]
[End of loop]
Step 5: Exit
Mergesort(A,LOW,HIGH)
Step 1: if LOW<HIGH
Set MID= (LOW+HIGH)/2
Call Merge sort(A,LOW,MID)
Call Mergesort(A,MID+1,HIGH)
Call Merge(A,LOW,MID,HIGH)
[End of If]
Step 2: Exit
Program:
#include<stdio.h>
void mergesort(int a[],int,int);
void merge(int a[],int,int,int);
main()
{
int a[10],i,n;
printf("enter n");
scanf("%d",&n);
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("the unsorted list is:\n\n");
for(i=0;i<n;i++)
printf("%5d",a[i]);
mergesort(a,0,n-1);
printf("The final sorted list is:\n\n\n");
for(i=0;i<n;i++)
printf("%5d",a[i]);
}
void mergesort(int a[],int low,int high)
{
int mid;
if(low<high)
{
mid=(low+high)/2;
mergesort(a,low,mid);
mergesort(a,mid+1,high);
merge(a,low,mid,high);
}
}
void merge(int a[],int low,int mid,int high)
{
int b[10],h,i,j,k;
i=low;
j=mid+1;
k=low;
while(i<=mid&&j<=high)
{
if(a[i]<a[j])
b[k++]=a[i++];
else b[k++]=a[j++];
}
while(i<=mid)
b[k++]=a[i++];
while(j<=high)
b[k++]=a[j++];
for(i=low;i<=high;i++)
a[i]=b[i];
}
Program (merging two sorted arrays into a third array):
#include <stdio.h>
void mergesort(int a[], int b[], int c[], int n1, int n2);
void main()
{
    int a[10], b[10], c[20], i, n1, n2;
    printf("enter the size of two arrays a, b: ");
    scanf("%d %d", &n1, &n2);
    printf("enter the elements into a (in sorted order): ");
    for (i = 0; i < n1; i++)
        scanf("%d", &a[i]);
    printf("enter the elements into b (in sorted order): ");
    for (i = 0; i < n2; i++)
        scanf("%d", &b[i]);
    mergesort(a, b, c, n1, n2);
    printf("merged sorted list is: ");
    for (i = 0; i < n1 + n2; i++)
        printf("%3d", c[i]);
}
void mergesort(int a[],int b[],int c[],int n1,int n2)
{
int i=0,j=0,k=0;
while(i<n1&&j<n2)
{
if (a[i]<b[j])
c [k++]=a[i++];
else
c [k++]=b[j++];
}
while(i<n1)
c[k++]=a[i++];
while (j<n2)
c[k++]=b[j++];
}
RADIX SORT:
Radix sort is a linear sorting algorithm that is used for integers. In radix sort, digit-by-digit sorting is
performed, starting from the least significant digit and moving to the most significant digit. The process of
radix sort works similarly to sorting students' names in alphabetical order. In this case, there are 26 radixes
due to the 26 letters of the English alphabet. In the first pass, the names of students are grouped according
to the ascending order of the first letter of their names. After that, in the second pass, their names are
grouped according to the ascending order of the second letter of their names. And the process continues until
we find the sorted list.
Suppose the largest element in a given array is 736, which has 3 digits. So the loop will run up to three
times (i.e., to the hundreds place); that means three passes are required to sort the array.
Now, first sort the elements on the basis of unit place digits (i.e., x = 0). Here, we are using the counting
sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.
Pass 2: In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at the 10's place).
Pass 3: In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at the 100's place).
Radix sort is a non-comparative sorting algorithm. It has linear time complexity, which can be better than
that of the comparison-based algorithms, whose complexity is O(n log n).
#include <stdio.h>
int getMax(int a[], int n)
{
int max = a[0];
for(int i = 1; i<n; i++)
{
if(a[i] > max)
max = a[i];
}
return max;
}
Counting Sort:
Counting sort is a sorting technique that is based on the keys between specific ranges. This sorting
technique doesn't perform sorting by comparing elements. It performs sorting by counting objects having
distinct key values like hashing. After that, it performs some arithmetic operations to calculate each
object's index position in the output sequence. Counting sort is not used as a general-purpose sorting
algorithm.
Counting sort is effective when the range of keys is not significantly greater than the number of objects to be
sorted. With an index offset it can also be adapted to sort negative input values.
Algorithm
1) countingSort(array, n)
2) max = find maximum element in the given array
3) create count array with size maximum + 1
4) Initialize count array with all 0's
5) for i = 0 to n-1
6) find the count of every unique element and
7) store that count at the ith position in the count array
8) for j = 1 to max
9) find the cumulative sum and store it in the count array
10) for i = n-1 down to 0
11) restore the array elements
12) decrease the count of every restored element by 1
13) end countingSort
1. Find the maximum element from the given array. Let max be the maximum element.
2. Now, initialize array of length max + 1 having all 0 elements. This array will be used to store the count
of the elements in the given array.
3. Now, we have to store the count of each array element at its corresponding index in the count array.
For example, suppose the element '4' appears two times; then 2 is stored at index 4 of the count array.
If an element is not present in the array, 0 is stored, i.e., if the element '3' is not present in the
array, 0 will be stored at index 3.
Now, store the cumulative sum of count array elements. It will help to place the elements at the correct
index of the sorted array.
Time Complexity:
Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of counting sort is O(n + k).
Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of counting sort is O(n + k).
Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of counting sort is O(n + k).
In all above cases, the time complexity of counting sort is same. This is because the algorithm goes
through n+k times, regardless of how the elements are placed in the array.
Counting sort is better than the comparison-based sorting techniques because there is no comparison between
elements in counting sort. But, when the integers are very large the counting sort is bad because arrays of that
size have to be created.
Space Complexity: The space complexity of counting sort is O(max). The larger the range of elements, the
larger the space complexity.
Program:
#include<stdio.h>
int getMax(int a[], int n)
{
int max = a[0];
for(int i = 1; i<n; i++)
{
if(a[i] > max)
max = a[i];
}
return max;
}
void countSort(int a[], int n)
{
int output[n+1];
int max = getMax(a, n);
int count[max+1];
for (int i = 0; i <= max; ++i)
{
count[i] = 0;}
for (int i = 0; i < n; i++)
count[a[i]]++;
for(int i = 1; i<=max; i++)
count[i] += count[i-1];
for (int i = n - 1; i >= 0; i--)
{
output[count[a[i]] - 1] = a[i];
count[a[i]]--;
}
for(int i = 0; i<n; i++)
a[i] = output[i];
}
void printArr(int a[], int n)
{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);
}
int main()
{
int a[] = { 11, 30, 24, 7, 31, 16 };
int n = sizeof(a)/sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArr(a, n);
countSort(a, n);
printf("\nAfter sorting array elements are - \n");
printArr(a, n);
return 0;
}