
GOKARAJU RANGARAJU INSTITUTE

OF ENGINEERING AND TECHNOLOGY


(Autonomous)

DATA STRUCTURES
(GR24A1017)
LECTURE MATERIAL
B. Tech I Year
(CSE,ECE,EEE,IT,CE,ME,DS,AIML)
Preface

The main objective of the course entitled “Data Structures” is to make first-year B.Tech students familiar with computer programming concepts and language fundamentals in a systematic manner. This material is written according to the GRIET GR-24 (Autonomous) syllabus and has been prepared to meet the requirements of the Data Structures course.

This work is contributed and verified by the faculty of CPDS (CSE/ECE/CSD/CSM):

1. S T G Y.Sandhya
2. D. SugunaKumari
3. R.S.Shalini
4. G.Vandhana
5. N.Sashi Prabha
6. V.Vijay Kumar
7. M.Mounica
8. K.Shoban Babu
9. M.Sunitha
Course Code: GR24A1017 L/T/P/C:2/0/0/2
I Year II Semester

Course Outcomes:
1. Implement various sorting techniques and analyze the computational complexity of algorithms.
2. Analyze the basics of data structures and their types, and translate operations on stacks and queues and their applications into programs.
3. Develop algorithms for various operations on linked lists and convert them into programs.
4. Interpret operations on the non-linear data structures binary tree and BST.
5. Summarize operations on graphs, apply graph traversal techniques, and outline hashing techniques.

UNIT I
Algorithms and Complexities: Analysis of algorithms, Basic concept of order of
complexity, Asymptotic Notations: Big Oh notation, Omega notation, Theta notation, little
oh notation and little omega notation.
Sorting: Bubble sort, Insertion Sort, Selection Sort, Quick Sort, Merge Sort, Radix Sort,
Counting sort.
UNIT II
Stacks: Introduction to Data Structures and types, Stack – Operations: pop, push, display,
peek, Representation and implementation of stack operations using arrays, stack
applications, recursion, infix to postfix transformation, evaluating postfix expressions.
Queues: Queue – Operations: enqueue, dequeue, display, representation and implementation
of queue operations using array, applications of queues, circular queues - representation and
implementation.
UNIT III
LIST: Introduction, dynamic memory allocation, self-referential structures, single linked
list, advantages and disadvantages of single linked list, single linked list vs arrays,
representation of a linked list in memory, operations-insertion, deletion, display, search.
Types and applications: Circular linked list, double linked list, implementation of stack,
queue using linked list.

UNIT IV
Trees: Basic tree concepts, Binary trees: properties, types, representation of binary trees
using arrays and linked lists, traversals of binary tree.
Binary Search Tree –Representation and implementation of operations, Binary Search Tree
Traversals (recursive), creation of binary tree and BST from given traversals.

UNIT V
Graphs: Definition, basic terminology, representation of graphs, graph traversal techniques –
Breadth First Traversal, Depth First Traversal.
Hashing - Introduction to hashing, hash function and types, hash table, implementation,
collision resolution techniques–separate chaining, linear probing, quadratic probing, double
hashing (only examples – no implementation).

Teaching methodologies:

1. Power Point Presentations


2. Tutorial Sheets
3. Assignments

TEXT BOOKS
1. Data Structures, 2/e, Richard F. Gilberg, Behrouz A. Forouzan, Cengage.
2. Data Structures and Algorithms, G. A. V. Pai, TMH, 2008.

REFERENCE BOOKS
1. Data Structures with C, Seymour Lipschutz, TMH.
2. Classic Data Structures, 2/e, Debasis Samanta, PHI, 2009.
3. Fundamentals of Data Structures in C, 2/e, Horowitz, Sahni, Anderson-Freed, University Press.
UNIT – I
Algorithms and Complexities: Analysis of algorithms, Basic concept of order of complexity,
Asymptotic Notations: Big Oh Notation, Omega Notation, Theta Notation, little oh Notation and
little omega Notation.
Sorting: Bubble Sort, Insertion Sort, Selection Sort, Quick Sort, and Merge Sort, Radix Sort, and
Counting sort.

Analysis of an Algorithm:
An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may be, an
algorithm terminates after executing a finite number of instructions.
In addition, every algorithm must satisfy the following criteria:
1) Input: There are zero or more quantities, which are externally supplied
2) Output: At least one quantity is produced
3) Definiteness: Each instruction must be clear and unambiguous
4) Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps
5) Effectiveness: Every instruction must be sufficiently basic that it can in principle be carried out by a
person using only pencil and paper. It is not enough that each operation be definite, but it must also
be feasible.

Recursive Algorithms:
An algorithm is said to be recursive if the same algorithm is invoked in the body. An algorithm that
calls itself is DIRECT Recursive. An algorithm A is said to be INDIRECT Recursive if it calls another
algorithm which in turn calls A.

Example:
Algorithm fact(n)
{
if(n==1) then
return 1;
else
return n*fact(n-1);
}
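Both kinds of recursion can be seen in a short runnable C sketch. The factorial function below is the direct-recursive version of the algorithm above; the is_even/is_odd pair is a hypothetical illustration of indirect recursion, not taken from the text.

#include <stdio.h>

/* Direct recursion: fact() calls itself. */
long fact(int n)
{
    if (n <= 1)
        return 1;
    return n * fact(n - 1);
}

/* Indirect recursion: is_even() calls is_odd(), which calls is_even() back. */
int is_odd(int n);

int is_even(int n)
{
    if (n == 0)
        return 1;
    return is_odd(n - 1);
}

int is_odd(int n)
{
    if (n == 0)
        return 0;
    return is_even(n - 1);
}

int main(void)
{
    printf("fact(5) = %ld\n", fact(5));        /* prints 120 */
    printf("is_even(10) = %d\n", is_even(10)); /* prints 1 */
    return 0;
}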

Pseudo-code: It is a combination of natural language and programming language constructs. It makes it easier for a non-programmer to understand the general workings of the program.

Example1:
Algorithm to find the maximum element in an array
Algorithm max(A, n)
{
x := A[0];
for i := 1 to n-1 do
{
if x < A[i] then
{
x := A[i];
}
}
write "max value" x;
return x;
}
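The max pseudocode translates directly into C; a minimal runnable sketch:

#include <stdio.h>

/* Scan the array once, keeping the largest element seen so far. */
int max(int a[], int n)
{
    int x = a[0];
    for (int i = 1; i < n; i++)
        if (x < a[i])
            x = a[i];
    return x;
}

int main(void)
{
    int a[] = {12, 45, 7, 89, 23};
    printf("max value %d\n", max(a, 5)); /* prints 89 */
    return 0;
}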
Example2:
Algorithm evenodd(range)
{
for num := 0 to range do
if num % 2 = 0 then
print num is even
else
print num is odd
endif
endfor
}

Performance Analysis:
Performance analysis of an algorithm is the process of making an evaluative judgment about algorithms. When we have multiple algorithms that solve the same problem, we need to select the most suitable one. To compare algorithms, we use a set of parameters such as the memory required by the algorithm, its execution speed, ease of understanding and ease of implementation.

Generally, the performance of an algorithm depends on the following elements...


1) Whether the algorithm provides an exact solution for the problem
2) Whether it is easy to understand
3) Whether it is easy to implement
4) How much space (memory) it requires to solve the problem
5) How much time it takes to solve the problem, etc.

When we want to analyze an algorithm, we consider only the space and time required by that algorithm and we
ignore all the remaining elements. Based on this information, performance analysis of an algorithm can also be
defined as follows.

Performance analysis of an algorithm is the process of calculating space and time required by that
algorithm.
1) Space required to complete the task of that algorithm (Space Complexity).
2) Time required to complete the task of that algorithm (Time Complexity)

Space complexity:

When we design an algorithm to solve a problem, it needs some computer memory to complete its
execution. For any algorithm, memory is required for the following purposes...
1) To store program instructions.
2) To store constant values.
3) To store variable values.
4) And for a few other things like function calls, jumping statements, etc.

Space complexity of an algorithm can be defined as follows...


The total amount of computer memory required by an algorithm to complete its execution is called the space complexity of that algorithm.

Example1:
Algo add(a, b, c)
{
return (a + b + c);   // total space required: 3 (a, b, c)
}
Example2:
Algo Add(X, n)
{
Sum := 0;
for i := 1 to n do
Sum := Sum + X[i];   // total space required: n + 3 (array X of size n, plus n, i, Sum)
return Sum;
}
Example3:
Algo xyz(x, y, z)
{
d := (x + y + y*z) / (x + y);   // total space required: 4 (x, y, z, d)
}
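To make the counting concrete, here is a small C sketch (a hypothetical illustration): the iterative sum uses a fixed number of variables, so its extra space is O(1), while the recursive version uses one stack frame per element, so its extra space is O(n).

#include <stdio.h>

/* O(1) extra space: only sum and i, regardless of n. */
int sum_iter(int x[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += x[i];
    return sum;
}

/* O(n) extra space: one stack frame per recursive call. */
int sum_rec(int x[], int n)
{
    if (n == 0)
        return 0;
    return x[n - 1] + sum_rec(x, n - 1);
}

int main(void)
{
    int x[] = {1, 2, 3, 4, 5};
    printf("%d %d\n", sum_iter(x, 5), sum_rec(x, 5)); /* prints 15 15 */
    return 0;
}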

Time Complexity:
Every algorithm requires some amount of computer time to execute its instructions and perform its task; this required computer time is called the time complexity. The time complexity of an algorithm is the total amount of time required by the algorithm to complete its execution. The execution time depends on many factors, such as
1) System load
2) No. of other programs running
3) Instruction set
4) Speed of hardware
The time complexity of an algorithm can be defined as follows:

T(n) ≈ Cop × C(n)

Where T(n) -> running time of the algorithm,
Cop -> time taken by the basic operation to execute,
C(n) -> number of times the basic operation needs to be executed.

Suppose M is an algorithm, and suppose n is the size of the input data. Clearly the complexity f(n) of M increases as n increases. It is usually the rate of increase of f(n) that we examine, by comparing f(n) with some standard functions. The most common computing times are
O(1), O(log₂ n), O(n), O(n log₂ n), O(n²), O(n³), O(2ⁿ)
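The differences between these growth rates become visible if we tabulate the functions for a few input sizes. A small illustrative C program (a sketch; compile with -lm for the math library):

#include <stdio.h>
#include <math.h>

/* Tabulate the common computing times for a few input sizes. */
int main(void)
{
    int sizes[] = {1, 2, 4, 8, 16, 32};
    printf("%4s %8s %10s %8s %10s %12s\n",
           "n", "log2 n", "n log2 n", "n^2", "n^3", "2^n");
    for (int i = 0; i < 6; i++) {
        double n = sizes[i];
        printf("%4.0f %8.2f %10.1f %8.0f %10.0f %12.0f\n",
               n, log2(n), n * log2(n), n * n, n * n * n, pow(2, n));
    }
    return 0;
}

Already at n = 32, the exponential column dwarfs every other column, which is why O(2ⁿ) algorithms are considered impractical for all but tiny inputs.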

Example:

Program Segment A:
x = x + 2;

Program Segment B:
for k = 1 to n do
x = x + 2;
end;

Program Segment C:
for j = 1 to n do
for k = 1 to n do
x = x + 2;
end;
end;

Total Frequency Count of Program Segment A

Program Statements        Frequency Count
x = x + 2;                1
Total Frequency Count     1

Total Frequency Count of Program Segment B

Program Statements        Frequency Count
for k = 1 to n do         (n+1)
x = x + 2;                n
end;                      n
Total Frequency Count     3n+1

Total Frequency Count of Program Segment C

Program Statements        Frequency Count
for j = 1 to n do         (n+1)
for k = 1 to n do         n(n+1)
x = x + 2;                n²
end;                      n²
end;                      n
Total Frequency Count     3n²+3n+1

The total frequency counts of the program segments A, B and C, given by 1, (3n+1) and (3n²+3n+1) respectively, are expressed as O(1), O(n) and O(n²). These are referred to as the time complexities of the program segments, since they are indicative of their running times. In a similar manner, the space complexity of a program can also be expressed in terms of mathematical notation; it is nothing but the amount of memory the program requires for its execution.
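These frequency counts can also be checked empirically. The C sketch below counts only executions of the basic statement x = x + 2, so it reports 1, n and n²; the totals 3n+1 and 3n²+3n+1 above additionally count the loop-control statements.

#include <stdio.h>

/* Count how many times the basic statement of segments A, B and C executes. */
int main(void)
{
    int n = 10;
    long count;

    /* Segment A: the statement runs exactly once. */
    count = 0;
    count++;
    printf("A: %ld (expected 1)\n", count);

    /* Segment B: the statement runs n times. */
    count = 0;
    for (int k = 1; k <= n; k++)
        count++;
    printf("B: %ld (expected %d)\n", count, n);

    /* Segment C: the statement runs n*n times. */
    count = 0;
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            count++;
    printf("C: %ld (expected %d)\n", count, n * n);
    return 0;
}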

Time Complexity:

Complexity     Notation              Description
Constant       O(1)                  Constant number of operations, not depending on the input data size.
Logarithmic    O(log n)              Number of operations proportional to log(n), where n is the size of the input data.
Linear         O(n)                  Number of operations proportional to the input data size.
Quadratic      O(n²)                 Number of operations proportional to the square of the size of the input data.
Cubic          O(n³)                 Number of operations proportional to the cube of the size of the input data.
Exponential    O(2ⁿ), O(kⁿ), O(n!)   Exponential number of operations, fast growing.

ASYMPTOTIC NOTATIONS:

When it comes to analyzing the complexity of any algorithm in terms of time and space, we can never provide an exact number to define the time required and the space required by the algorithm; instead, we express them using some standard notations, known as asymptotic notations.

Mainly, we use THREE types of asymptotic notations, and those are as follows:
Big - Oh (O)
Big - Omega (Ω)
Big - Theta (Θ)
Along with these, there are TWO more notations:
Little - Omega (ω)
Little - Oh (o)

Big-Oh Notation (O):

It provides a possibly asymptotically tight upper bound for f(n); it does not give the best-case complexity but can give the worst-case complexity. Let f be a non-negative function. Then we define the most common asymptotic bounds as follows.
We say that f(n) is Big-O of g(n), written as f(n) = O(g(n)), if and only if (iff) there are positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0. If f(n) = O(g(n)), we say that g(n) is an upper bound on f(n).

Example1: Prove 4n^2 = O(n^3)
Solution: By definition we have, 0 <= f(n) <= c*g(n)
Substituting 4n^2 as f(n) and n^3 as g(n), we get 0 <= 4n^2 <= c*n^3
Divide by n^3:

⇨ 0/n^3 <= 4n^2/n^3 <= c*n^3/n^3

⇨ 0 <= 4/n <= c

Now to determine the value of c, we see that 4/n is maximum when n = 1; therefore c = 4.
To determine the value of n0, note that 4/n <= 4 holds for all n >= 1, so n0 = 1.
Therefore, 0 <= 4n^2 <= 4n^3 for all n >= n0 = 1, and hence 4n^2 = O(n^3).

Example2: Prove 10n^3 + 20n != O(n^2)

Solution: By definition we would need 0 <= f(n) <= c*g(n)
Substituting 10n^3 + 20n as f(n) and n^2 as g(n), we would need 0 <= 10n^3 + 20n <= c*n^2
Divide by n^2:

⇨ 0/n^2 <= 10n^3/n^2 + 20n/n^2 <= c*n^2/n^2

⇨ 0 <= 10n + 20/n <= c

But 10n + 20/n grows without bound as n increases, so no constant c can satisfy this inequality for all n >= n0.
Hence, 10n^3 + 20n != O(n^2)


Example3: To find an upper bound of f(n), we have to find c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0. Prove that 3n+2 = O(n) using the statement 3n+2 <= 4n.
Solution: By definition we have, 0 <= f(n) <= c*g(n)
Substituting 3n+2 as f(n) and 4n as c*g(n), we get

⇨ 0 <= 3n+2 <= 4n

Now, find out the proper n0 such that f(n) <= c*g(n):

n    3n+2    4n
1    5       4
2    8       8
3    11      12
4    14      16
5    17      20

From the above table, 3n+2 <= 4n holds from n = 2 onwards. So c = 4, g(n) = n and n0 = 2. Hence proved.

Example4: To find an upper bound of f(n), we have to find c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0. Show that 10n^2+4n+2 = O(n^2) using the statement 10n^2+4n+2 <= 11n^2.
Solution: By definition we have, 0 <= f(n) <= c*g(n)
Substituting 10n^2+4n+2 as f(n) and 11n^2 as c*g(n), we need to prove

⇨ 0 <= 10n^2+4n+2 <= 11n^2

Now, find out the proper n0 such that f(n) <= c*g(n):

n    10n^2+4n+2    11n^2
1    16            11
2    50            44
3    104           99
4    178           176
5    272           275

From the above table, 10n^2+4n+2 <= 11n^2 holds from n = 5 onwards. So c = 11, g(n) = n^2 and n0 = 5. Hence proved.

Example5: Show n³ != O(n²).

Solution: Let's assume to the contrary that
n³ = O(n²)
Then there must exist constants c and n0 such that
n³ <= c*n² for all n >= n0.
Dividing by n², we get:
n <= c for all n >= n0.
But this is not possible; we can never choose a constant c large enough that n will never exceed it, since n can grow without bound. Thus, the original assumption that n³ = O(n²) must be wrong, so n³ != O(n²).
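Such proofs can be sanity-checked numerically: if the ratio f(n)/g(n) stays bounded as n grows, a suitable constant c exists; if it grows without bound, no constant works. A heuristic C illustration (a sketch, not a proof):

#include <stdio.h>

/* Print f(n)/g(n) for growing n for the two cases above:
   4n^2 vs n^3 (ratio shrinks, so O holds) and
   n^3 vs n^2 (ratio grows without bound, so O fails). */
int main(void)
{
    for (long n = 1; n <= 100000; n *= 10) {
        double cube = (double)n * n * n;
        double square = (double)n * n;
        printf("n=%7ld   4n^2/n^3 = %8.5f   n^3/n^2 = %8.0f\n",
               n, 4.0 * square / cube, cube / square);
    }
    return 0;
}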

Big-Omega Notation (Ω):

It provides a possibly asymptotically tight lower bound for f(n); it does not give the worst-case complexity but can give the best-case complexity. f(n) is said to be Big-Omega of g(n), written as f(n) = Ω(g(n)), if there are positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0. If f(n) = Ω(g(n)), we say that g(n) is a lower bound on f(n).

Example1: Find the lower bound of the running time of the linear function f(n) = 6n + 3.

Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0.

0 <= c*g(n) <= f(n)
0 <= 6n <= 6n + 3, true for all n >= 1
0 <= 5n <= 6n + 3, true for all n >= 1

Both of the above inequalities are true, and infinitely many such inequalities exist. So,

f(n) = Ω(g(n)) = Ω(n) for c = 6, n0 = 1
f(n) = Ω(g(n)) = Ω(n) for c = 5, n0 = 1
and so on.

Example2: Find the lower bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0.

0 <= c*g(n) <= f(n)
0 <= c*g(n) <= 3n² + 2n + 4
0 <= 3n² <= 3n² + 2n + 4, true for all n >= 1
0 <= n² <= 3n² + 2n + 4, true for all n >= 1

Both of the above inequalities are true, and infinitely many such inequalities exist.

So, f(n) = Ω(g(n)) = Ω(n²) for c = 3, n0 = 1
f(n) = Ω(g(n)) = Ω(n²) for c = 1, n0 = 1
and so on.

Example3: Find the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
Solution: To find the lower bound of f(n), we have to find c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0.

0 <= c*g(n) <= f(n)
0 <= c*g(n) <= 2n³ + 4n + 5
0 <= 2n³ <= 2n³ + 4n + 5, true for all n >= 1
0 <= n³ <= 2n³ + 4n + 5, true for all n >= 1

Both of the above inequalities are true, and infinitely many such inequalities exist.

So, f(n) = Ω(g(n)) = Ω(n³) for c = 2, n0 = 1
f(n) = Ω(g(n)) = Ω(n³) for c = 1, n0 = 1
and so on.

Big-Theta Notation (Θ):


We say that f(n) is Big-Theta of g(n), written as f(n) = Θ(g(n)), if there are positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0. Equivalently, f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). If f(n) = Θ(g(n)), we say that g(n) is a tight bound on f(n).

Example 1:
To show that 3n+3 = Θ(n), i.e. 3n+3 ∈ Θ(n), we verify whether f(n) ∈ Θ(g(n)) with the help of the definition, i.e. Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}.

Solution:
In the given problem f(n) = 3n+3 and g(n) = n. To prove f(n) ∈ Θ(g(n)) we have to find c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.

⇨ To verify f(n) <= c2*g(n):

We can write f(n) = 3n+3 <= 3n+3n (writing f(n) in terms of g(n) so that the inequality holds mathematically)

⇨ f(n) <= 6n for all n >= 1

⇨ c2 = 6 for all n >= 1, i.e. n0 = 1

⇨ To verify 0 <= c1*g(n) <= f(n):

⇨ We can write f(n) = 3n+3 >= 3n (again writing f(n) in terms of g(n) so that the inequality holds)

⇨ c1 = 3 for all n, n0 = 1

⇨ 3n <= 3n+3 <= 6n for all n >= n0, n0 = 1

i.e. we are able to find c1 = 3, c2 = 6, n0 = 1 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. So f(n) = Θ(g(n)) for all n >= 1.

Example 2:
To show that 10n² + 4n + 2 = Θ(n²), i.e. 10n² + 4n + 2 ∈ Θ(n²), we verify whether f(n) ∈ Θ(g(n)) with the help of the same definition.

Solution:
In the given problem f(n) = 10n² + 4n + 2 and g(n) = n². To prove f(n) ∈ Θ(g(n)) we have to find c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.

⇨ To verify f(n) <= c2*g(n): we can write f(n) = 10n² + 4n + 2 <= 10n² + 4n² + 2n² (writing f(n) in terms of g(n) so that the inequality holds mathematically)

⇨ f(n) <= 16n², so c2 = 16 for all n >= 1

⇨ To verify 0 <= c1*g(n) <= f(n): we can write f(n) = 10n² + 4n + 2 >= 10n² (again writing f(n) in terms of g(n) so that the inequality holds)

⇨ c1 = 10 for all n, n0 = 1

⇨ 10n² <= 10n² + 4n + 2 <= 16n² for all n >= n0, n0 = 1

i.e. we are able to find c1 = 10, c2 = 16, n0 = 1 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. So f(n) = Θ(g(n)) for all n >= 1.

Little oh Notation (o):

Little-o notation is used to describe an upper bound that is not asymptotically tight; in other words, a loose upper bound on f(n).
Def: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) if for every real positive constant c > 0 there exists an integer constant n0 > 0 such that 0 <= f(n) < c*g(n) for all n >= n0. Equivalently, f(n) = o(g(n)) if lim(n→∞) f(n)/g(n) = 0.

Example on little-o asymptotic notation:

If f(n) = n² and g(n) = n³, check whether f(n) = o(g(n)) or not: lim(n→∞) n²/n³ = lim(n→∞) 1/n = 0. The result is 0, and it satisfies the definition mentioned above, so we can say that f(n) = o(g(n)).

Example 1:
To show 2n+4 = o(n²), we verify the definition: o(g(n)) = {f(n): for every positive constant c > 0 there exists a constant n0 > 0 such that 0 <= f(n) < c*g(n) for all n >= n0}.

Solution: In the given problem f(n) = 2n+4, g(n) = n².

We must show 0 <= f(n) < c*g(n) for all n >= n0.
For every c > 0 we can choose an n0 so that 2n+4 < c*n² for all n >= n0; for example, for c = 1, n0 = 4 works, since 2n+4 < n² for all n >= 4.
Hence, f(n) = o(g(n))

Example 2:
To show 2n = o(n²), we verify the definition: o(g(n)) = {f(n): for every positive constant c > 0 there exists a constant n0 > 0 such that 0 <= f(n) < c*g(n) for all n >= n0}.

Solution: In the given problem f(n) = 2n, g(n) = n².

We must show 0 <= f(n) < c*g(n) for all n >= n0.
For every c > 0 we can choose an n0 so that 2n < c*n² for all n >= n0; for example, for c = 1, n0 = 3 works, since 2n < n² for all n >= 3.
Hence, f(n) = o(g(n))
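The limit form of the definition can be illustrated numerically. Assuming Example 1's functions f(n) = 2n + 4 and g(n) = n², the C sketch below shows the ratio shrinking toward 0:

#include <stdio.h>

/* f(n) = o(g(n)) iff f(n)/g(n) -> 0 as n -> infinity. */
int main(void)
{
    for (long n = 1; n <= 1000000; n *= 10) {
        double ratio = (2.0 * n + 4.0) / ((double)n * n);
        printf("n = %8ld   (2n+4)/n^2 = %.8f\n", n, ratio);
    }
    return 0;
}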

Little omega Notation (ω):

ω notation denotes a lower bound that is not asymptotically tight, and f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)). In terms of limits, if f(n) ∈ ω(g(n)) then lim(n→∞) f(n)/g(n) = ∞.

Example: Prove that 4n + 6 ∈ ω(1).

The little-omega (ω) bound can be proven by applying the limit formula given below.

If lim(n→∞) f(n)/g(n) = ∞ then f(n) is ω(g(n)).

Here we have the functions f(n) = 4n+6 and g(n) = 1, and lim(n→∞) (4n+6)/1 = ∞.

Also, for any c we can find an n0 satisfying the inequality

0 <= c*g(n) < f(n), i.e. 0 <= c*1 < 4n+6. Hence proved.

Example 1: To show 2n² + 4n + 6 = ω(n), we verify the definition: ω(g(n)) = {f(n): for every positive constant c > 0 there exists a constant n0 > 0 such that 0 <= c*g(n) < f(n) for all n >= n0}.
Solution: In the given problem
f(n) = 2n² + 4n + 6
g(n) = n
To show 0 <= c*g(n) < f(n) for all n >= n0:
for any c > 0, choosing n0 = max(1, c/2) suffices, since then c*n <= 2n² < 2n² + 4n + 6 for all n >= n0.
Hence, f(n) = ω(g(n)), i.e. f(n) = ω(n)

Example 2: To show 2n³ + 3n² + 1 = ω(n), we verify the definition: ω(g(n)) = {f(n): for every positive constant c > 0 there exists a constant n0 > 0 such that 0 <= c*g(n) < f(n) for all n >= n0}.

Solution: In the given problem
f(n) = 2n³ + 3n² + 1
g(n) = n
To show 0 <= c*g(n) < f(n) for all n >= n0:
for any c > 0, choosing n0 = max(1, √(c/2)) suffices, since then c*n <= 2n³ < 2n³ + 3n² + 1 for all n >= n0.
Hence, f(n) = ω(g(n)), i.e. f(n) = ω(n)

SORTING

Sorting is the process of arranging the elements of a list in ascending or descending order: numerically for numerical data, or alphabetically for character data.

For example, suppose we want to obtain the telephone number of a person. If the telephone directory is not arranged in alphabetical order, one has to search from the very first page to the last page. If the directory is sorted, we can easily search for the telephone number.

Sorting can be classified into two ways:

1) Internal Sorts: - Sorting that takes place entirely in main memory is called internal sorting. There is a limitation for internal sorts, i.e., they can only process relatively small lists due to memory constraints.

Ex:-Selection sort algorithm, Insertion sort algorithm, Bubble Sort Algorithm, Quick sort algorithm

2) External Sorts: - Sorting a large amount of data requires external or secondary memory. This process uses external memory, such as an HDD, to store the data that does not fit into the main memory, so primary memory holds only the data currently being sorted. All external sorts are based on the process of merging: different parts of the data are sorted separately and merged together.

Ex: - Merge Sort


Some of the sorting techniques are
1. Exchange (bubble sort, quick sort)
2. Insertion (insertion sort)
3. Selection (selection sort, heap sort)
4. Distribution (radix sort)
5. Merging (merge sort)

Bubble Sort:

This is the simplest sorting technique compared with all the other sorting techniques. It is also called exchange sort. In this technique, adjacent elements are compared and interchanged if necessary.
Process:
1. Compare the first and second elements. If the first element is greater than the second, interchange them. Then compare the second and third elements, interchanging them if the second is greater than the third.
2. The process is repeated till the last element is reached. When the last element is reached, one pass is complete. At the end of the first pass, the largest element is bubbled out, i.e., it occupies the last position.
3. The same steps are repeated for the elements in positions 1 to n-1, because the nth element is already sorted.
4. Repeat the above steps for n-1 passes. At the end of the last pass, the entire list is sorted.
Algorithm for bubble sort:
Bubble sort (A, N)

Step 1: repeat steps 2 and 3 for i=0 to n-2

Step 2: repeat for j=0 to n-i-2
Step 3: If A[j]>A[j+1]
Swap A[j] and A[j+1] [End of inner loop]
[End of outer loop]
Step 4: Exit
Example for Bubble sort: 75 18 90 5 25
Sorted list (after n-1 passes) : 5 18 25 75 90
Note: Bubble sort technique has n-1 passes where n is the number of elements.

Program:

#include <stdio.h>
void bubblesort(int a[], int);
int main()
{
int i, n, a[20];
printf("Enter the number of elements in the Array: ");
scanf("%d", &n);
printf("\nEnter the elements:\n\n");
for (i = 0; i < n; i++)
{
printf("Array[%d] = ", i);
scanf("%d", &a[i]);
}
printf("\nBefore sorting, the unsorted list is:\n\n");
for (i = 0; i < n; i++)
printf("%4d", a[i]);
bubblesort(a, n);
printf("\nThe Sorted Array is:\n\n");
for (i = 0; i < n; i++)
printf("%4d", a[i]);
return 0;
}
void bubblesort(int a[], int n)
{
int i, j, temp;
for (i = 0; i < n - 1; i++)
{
for (j = 0; j < n - i - 1; j++)
{
if (a[j] > a[j + 1])   /* adjacent elements out of order: swap */
{
temp = a[j];
a[j] = a[j + 1];
a[j + 1] = temp;
}
}
}
}

Advantages of Bubble sort:


1. It is relatively easy to write and understand.
2. Performance is good for nearly sorted lists.
3. Works well for small lists of elements.
Disadvantages of Bubble sort:
1. It is a relatively slow algorithm.
2. It is not used for sorting lists of larger size.
3. It is an inefficient algorithm, because the number of iterations increases with the number of elements to be sorted.
Time Complexity of Bubble Sort:
The complexity of a sorting algorithm depends upon the number of comparisons that are made. The total number of comparisons in bubble sort is:
n(n - 1)/2 ≈ (n² - n)/2 = O(n²)

1) Best case: O (n)


2) Average case: O (n2)
3) Worst case : O (n2)
Space Complexity of Bubble Sort: O(1)

Selection sort:
Selection sort is also known as push-down sorting. As the name suggests, the first element of the list is selected and compared with all the other elements of the list to identify the smallest element, which is then interchanged with the element in the first location. Next, the second element is compared with the remaining elements to find the next smallest one, which is interchanged with the element in the second location. This process repeats until all the elements in the list are sorted.

Process:

1) From the list, select the smallest element and interchange it with the element in the first location (0th location). Now the first element is sorted.
2) From the elements in positions 2 to n, select the next smallest element and interchange it with the element in the second location (1st location). Now the second element is sorted.
3) Repeat the above steps n-1 times. The entire list gets sorted within the (n-1)th pass.

Algorithm for Selection Sort:


Selection sort (A, N)
Step 1: repeat steps 2 to 4 for i=0 to N-2
Step 2: set min=i
Step 3: repeat for j=i+1 to N-1
if A[j] < A[min] set min=j
[End of inner loop]
Step 4: swap A[i] and A[min]
[End of outer loop]
Step 5: Exit

Ex: - A list of unsorted elements are: 23 78 45 8 32 56


A list of sorted elements now: 8 23 32 45 56 78

Program:
#include <stdio.h>
void selectionsort(int[],int);
int main()
{
int a[10],i,n;
printf("enter thTe size of array:\n");
scanf("%d",&n);
printf("enter the elements of array:\n");
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("\nThe unsorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
selectionsort(a,n);
printf("\nThe sorted elements are:\n");
for(i=0;i<n;i++)
printf("%3d",a[i]);
}
void selectionsort(int a[],int n)
{
int min,temp,i,j;
for(i=0;i<n-1;i++)
{
min=i;
for(j=i+1;j<n;j++)
{
if(a[min]>a[j])
min=j;
}
if(min!=i)
{
temp=a[min];
a[min]=a[i];
a[i]=temp;
}
}
}

Time Complexity of Selection Sort:

1. Best case: O(n²)
2. Average case: O(n²)
3. Worst case: O(n²)
Space Complexity of Selection Sort: O(1)
Advantages of Selection Sort:
1. It is simple and easy to implement.
2. It can be used for small sets of data.
Disadvantages of Selection Sort:
1. Not stable in maintaining the order of equal elements
2. Inefficient for large datasets compared to other algorithms

Insertion Sort:
Both the selection and bubble sorts exchange elements, but insertion sort does not exchange elements. In insertion sort, each element is inserted at an appropriate place, similar to inserting a card into a hand of cards: to sort the cards in our hand, we extract a card, shift the remaining cards, and then insert the extracted card in the correct place. This process is repeated until all the cards are in the correct sequence. Here the list is divided into two parts, the sorted and unsorted sub-lists. In each pass, the first element of the unsorted sub-list is picked up and moved into the sorted sub-list by inserting it in a suitable position. If we have n elements, we need n-1 passes to sort them.
Process:
1) Select the second element in the list and compare it with the first element. If the first element is greater
than the second element then the second element is inserted at first Location by shifting the first element
to the second position. Otherwise proceed with the next step.
2) Select the third element in the list and compare it with the sorted two elements and insert at the
appropriate position.
3) Select the fourth element and compare it with previous three sorted elements and insert it in proper
position among the elements which are compared.
4) Repeat the above steps for n-1 times. The entire list gets sorted within (n-1)th pass.
Algorithm for insertion sort: insertion sort (A, N)
Step 1: repeat steps 2 to 5 for i=1 to N-1
Step 2: set temp=A[i]
Step 3: set j=i-1
Step 4: repeat while j>=0 and temp<A[j]
Set A[j+1]=A[j]
Set j=j-1 [End of while loop]
Step 5: Set A[j+1]=temp
[End of loop]
Step 6: Exit
Ex: - A list of unsorted elements: 78 23 45 8 32 36.

A list of sorted elements (after n-1 passes): 8 23 32 36 45 78

Program:

#include <stdio.h>
void insertionsort(int[], int);
int main()
{
int a[10], i, n;
printf("enter the size of array:\n");
scanf("%d", &n);
printf("enter the elements of array:\n");
for (i = 0; i < n; i++)
scanf("%d", &a[i]);
printf("\nThe unsorted elements are:\n");
for (i = 0; i < n; i++)
printf("%3d", a[i]);
insertionsort(a, n);
printf("\nThe sorted elements are:\n");
for (i = 0; i < n; i++)
printf("%3d", a[i]);
return 0;
}
void insertionsort(int a[], int n)
{
int i, j, index;
for (i = 1; i < n; i++)
{
index = a[i];   /* element to insert into the sorted part */
j = i;
while ((j > 0) && (a[j - 1] > index))
{
a[j] = a[j - 1];   /* shift larger elements one place right */
j--;
}
a[j] = index;
}
}

Time Complexity of Insertion Sort:


1. Best case: O(n)
2. Average case: O(n2)
3. Worst Case: O(n2)
Space Complexity of Insertion Sort: O(1)

Advantages of Insertion Sort:

1. It is easy to implement and efficient to use on small sets of data.


2. It can be efficiently implemented on data sets that are already substantially sorted.
3. It performs better than algorithms like selection sort and bubble sort.
4. It requires less memory space (O(1) of additional memory space).
Disadvantages of Insertion Sort :

1. It is less efficient on lists containing a large number of elements.

Quick sort:

This sorting method is also called partition exchange sorting. It is based on the divide and conquer technique, i.e., the entire list is divided into various partitions and sorting is applied again and again on the partitions.
In this method, the list is divided into two parts, based on an element called pivot
element. Usually, the first element is considered to be the pivot element. Now move
the pivot element into its correct position in the list.
The elements to the left of the pivot are less than the pivot and the elements to the
right of the pivot are greater than the pivot. The procedure of choosing a pivot and
partitioning the list is applied recursively until we get sub-lists consisting of only one
element. The process is reapplied to each of these partitions.

Process:
1. Take the first element as the pivot element and take two pointers, left and right, to move through the positions.
2. Repeatedly increment the pointer left by one position until list[left] > pivot.
3. Repeatedly decrement the pointer right by one position until list[right] <= pivot.
4. If left < right, then interchange list[left] and list[right].
5. Steps 2 to 4 are repeated until the condition in step 4 fails (i.e., right <= left).
6. At this point, list[right] is interchanged with the pivot. [So the pivot's final position is sorted.]
Algorithm for Quick Sort:
Quick sort (A, LEFT, RIGHT)

Step 1: If LEFT < RIGHT go to step 2 to 7 Otherwise go to step 8


Step 2: Set pivot=A[LEFT]
Set l=LEFT
Set r=RIGHT
Step 3: Repeat steps 4 to 6 while l<r
Step 4: Repeat while A[l]<=pivot and l<RIGHT
Set l=l+1
[End of while]
Step 5: Repeat while A[r]>pivot
Set r=r-1
[End of while]
Step 6: If l<r
Swap A[l] and A[r]
[End of If]
Step 7: If l>=r
Swap A[r] and A[LEFT]
Call quicksort (A, LEFT, r-1)
Call quicksort (A, r+1, RIGHT)
Step 8: Exit
Time Complexity of Quick Sort:

1. Best case: O (n log n)


2. Average case: O (n log n)
3. Worst Case: O (n2)

Space Complexity of Quick Sort: O(n)

Advantages of Quick Sort:


1. It is one of the fastest sorting methods.
2. Its efficiency is also relatively good.
3. It requires a relatively small amount of memory.

Disadvantages of Quick Sort:

1. It is a complex method of sorting, so it is a little harder to implement than the other sorting methods.
Ex: - A list of unsorted elements are: 54 26 93 17 77 31 44 55 20
Program:

#include <stdio.h>
void quicksort(int[], int, int);
int main()
{
int n, i, a[10];
printf("\nHow many elements do you want to sort? ");
scanf("%d", &n);
printf("\nEnter elements for the array: ");
for (i = 0; i < n; i++)
scanf("%d", &a[i]);
quicksort(a, 0, n - 1);
printf("\nAfter sorting, the elements are: ");
for (i = 0; i < n; i++)
printf("%3d ", a[i]);
return 0;
}
void quicksort(int a[], int left, int right)
{
int pivot, t, temp, l, r;
if (left < right)
{
pivot = a[left];   /* first element taken as pivot */
l = left;
r = right;
while (l < r)
{
while (a[l] <= pivot && l < right)   /* move l right past elements <= pivot */
l++;
while (a[r] > pivot)   /* move r left past elements > pivot */
r--;
if (l < r)
{
t = a[l];
a[l] = a[r];
a[r] = t;
}
}
temp = a[left];   /* place the pivot in its final position */
a[left] = a[r];
a[r] = temp;
quicksort(a, left, r - 1);
quicksort(a, r + 1, right);
}
}

Merge sort:

The basic concept of merge sort is to divide the list into two smaller sub-lists of approximately equal size. This procedure is repeated recursively until only one element is left in each sub-list. After this, the various sorted sub-lists are merged to form sorted parent lists, and the process goes on recursively until the original sorted list is obtained.
The merge sort splits the data list to be sorted into two equal halves and places them in separate arrays. This sorting method uses the divide and conquer paradigm. It separates the list into two halves and then sorts the two data sets recursively. Finally, these are merged to obtain the complete sorted list.
To be more specific, merge sort breaks an array down into smaller and smaller pieces until the individual pieces are just one item in size. Since a single item is always considered to be sorted, contiguous items can be merged. The merge sort algorithm therefore breaks the array down into smaller chunks on the way down the recursion tree; on the way back up, it merges these smaller pieces of the array into larger pieces.

Divide and Conquer: -

This is a special case of recursion in which a given problem is divided into two or more sub-problems of exactly the same type, and the solution to the problem is expressed in terms of solutions to the sub-problems; i.e., the list of elements is divided into two approximately equal parts recursively, solutions are found independently, and the parts are then merged together into a single list. Quick and merge sorts are based on the divide and conquer concept.

The divide and conquer strategy solves a problem by:


● Breaking into sub problems that are themselves smaller instances of the same type of problem.

● Recursively solving these sub problems.

● Appropriately combining their answers (a sketch of this pattern follows this list).
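A minimal C sketch of the pattern (a hypothetical example: finding the maximum of an array by divide and conquer; it is not one of the sorts in the syllabus):

#include <stdio.h>

/* Divide a[lo..hi] in half, solve each half recursively,
   and combine the answers with a single comparison. */
int dc_max(int a[], int lo, int hi)
{
    if (lo == hi)               /* base case: one element */
        return a[lo];
    int mid = (lo + hi) / 2;
    int left = dc_max(a, lo, mid);        /* conquer left half  */
    int right = dc_max(a, mid + 1, hi);   /* conquer right half */
    return (left > right) ? left : right; /* combine */
}

int main(void)
{
    int a[] = {39, 9, 81, 45, 90, 27, 72, 18};
    printf("max = %d\n", dc_max(a, 0, 7)); /* prints 90 */
    return 0;
}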

Two types of sorting algorithms which are based on this divide and conquer algorithm:

1. Quick sort

Quick sort also uses few comparisons (somewhat more than the other two). Like heap sort it can sort "in place"
by moving data in an array.

2. Merge sort

Merge sort is good for data that's too big to have in memory at once, because its pattern of storage access is very
regular. It also uses even fewer comparisons than heap sort, and is especially suited for data stored as linked lists.
Process:
1. Consider the initial list and divide it into two sub-lists. These sub-lists are divided again into many sub-lists until each sub-list contains a single element.
2. Combine these sub-lists into sorted order.
3. Finally, we get the list of elements in sorted order.

Algorithm for Merge Sort:
Merge (A, LOW, MID, HIGH)
Step 1: Initialize i=LOW, j=MID+1, k=LOW
Step 2: repeat while (i<=MID) and (j<=HIGH)
If A[i]<A[j]
Set temp[k++]=A[i++]
Else
Set temp[k++]=A[j++]
[End of If]
[End of loop]
Step 3: Repeat while i<=MID
Set temp[k++]=A[i++]
[End of loop]
Repeat while j<=HIGH
Set temp[k++]=A[j++]
[End of loop]
Step 4: [Copy the contents of temp back to A]
Repeat for k=LOW to HIGH
Set A[k]=temp[k]
[End of loop]
Step 5: Exit
Mergesort(A,LOW,HIGH)
Step 1: if LOW<HIGH
Set MID= (LOW+HIGH)/2
Call Mergesort(A,LOW,MID)
Call Mergesort(A,MID+1,HIGH)
Call Merge(A,LOW,MID,HIGH)
[End of If]
Step 2: Exit

Time Complexity of Merge Sort:


1. Best case: O (n log n)
2. Average Case: O (n log n)
3. Worst Case: O (n log n)

Space Complexity of Merge Sort: O(n)

Advantages of Merge Sort:


1. It is quicker for larger lists because, unlike insertion and bubble sort, it doesn't go through the whole list several times.
2. It has a consistent running time: O(n log n) in the best, average and worst cases.

Disadvantages of Merge Sort:


1. Slower compared to the other sort algorithms for smaller tasks.
2. Goes through the whole process even if the list is sorted
3. Uses more memory space to store the sub elements of the initial split list.

Ex:- A list of unsorted elements are: 39 9 81 45 90 27 72 18


Sorted elements are: 9 18 27 39 45 72 81 90

Program:

#include<stdio.h>
void mergesort(int a[], int, int);
void merge(int a[], int, int, int);
int main()
{
int a[10],i,n;
printf("enter n");
scanf("%d",&n);
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("the unsorted list is:\n\n");
for(i=0;i<n;i++)
printf("%5d",a[i]);
mergesort(a,0,n-1);
printf("The final sorted list is:\n\n\n");
for(i=0;i<n;i++)
printf("%5d",a[i]);
}
void mergesort(int a[],int low,int high)
{
int mid;
if(low<high)
{
mid=(low+high)/2;
mergesort(a,low,mid);
mergesort(a,mid+1,high);
merge(a,low,mid,high);
}
}
void merge(int a[],int low,int mid,int high)
{
int b[10],h,i,j,k;
i=low;
j=mid+1;
k=low;
while(i<=mid&&j<=high)
{
if(a[i]<a[j])
b[k++]=a[i++];
else b[k++]=a[j++];
}
while(i<=mid)
b[k++]=a[i++];
while(j<=high)
b[k++]=a[j++];
for(i=low;i<=high;i++)
a[i]=b[i];
}

Merge two sorted arrays into a third array


/*program to merge two sorted arrays into a third array*/

#include<stdio.h>
void mergesort(int a[],int b[],int c[],int n1,int n2);
int main()
{
int a[10], b[10], c[10], i, n1, n2;
printf("enter the size of two arrays a, b: ");
scanf("%d %d", &n1, &n2);
printf("enter the elements into a: ");
for (i = 0; i < n1; i++)
scanf("%d", &a[i]);
printf("enter the elements into b: ");
for (i = 0; i < n2; i++)
scanf("%d", &b[i]);
mergesort(a, b, c, n1, n2);
printf("sorted list is: ");
for (i = 0; i < n1 + n2; i++)
printf("%3d", c[i]);
return 0;
}
void mergesort(int a[],int b[],int c[],int n1,int n2)
{
int i=0,j=0,k=0;
while(i<n1&&j<n2)
{
if (a[i]<b[j])
c [k++]=a[i++];
else
c [k++]=b[j++];
}
while(i<n1)
c[k++]=a[i++];
while (j<n2)
c[k++]=b[j++];
}

RADIX SORT:

Radix sort is a linear sorting algorithm used for integers. In radix sort, digit-by-digit sorting is performed, starting from the least significant digit and moving to the most significant digit. The process of radix sort works similarly to sorting students' names alphabetically. In that case, there are 26 radixes, due to the 26 letters of the English alphabet: in the first pass, the names are grouped according to the ascending order of the first letter of each name; in the second pass, according to the second letter; and the process continues until we find the sorted list.

Algorithm for Radix sort:


Radix Sort(arr)
max = largest element in the given array
d = number of digits in the largest element (or, max)
Now, create 10 buckets, one for each digit 0 - 9
for i = 0 to d-1
sort the array elements using counting sort (or any stable sort) according to the digits at the ith place

First, we have to find the largest element (say max) in the given array. Suppose 'x' is the number of digits in max. 'x' is calculated because we need to go through the significant places of all elements.
After that, go through the significant places one by one, using any stable sorting algorithm to sort the digits at each significant place.
Now let's see the working of radix sort in detail by using an example. To understand it more clearly, let's take an unsorted array and try to sort it using radix sort. Suppose the largest element in the given array is 736, which has 3 digits. So, the loop will run up to three times (i.e., to the hundreds place); that means three passes are required to sort the array.

Pass 1: First, the elements are sorted on the basis of the digits at the unit (1's) place (i.e., x = 0), using the counting sort algorithm.

Pass 2: The list is then sorted on the basis of the next significant digits, i.e., the digits at the 10's place.

Pass 3: Finally, the list is sorted on the basis of the digits at the 100's place.

After the third pass, the array is sorted in ascending order.

Radix sort complexity


Now, let's see the time complexity of Radix sort in best case, average case, and worst case. We will also
see the space complexity of Radix sort.
Time Complexity

Case Time Complexity

Best Case Ω(n+k)

Average Case θ(nk)

Worst Case O(nk)

Radix sort is a non-comparative sorting algorithm. It has linear time complexity, which is better than comparison-based algorithms with complexity O(n log n).

Space Complexity: The space complexity of Radix sort is O(n + k).


Program:

#include <stdio.h>
int getMax(int a[], int n)
{
int max = a[0];
for(int i = 1; i<n; i++)
{
if(a[i] > max)
max = a[i];
}
return max;
}

void countingSort(int a[], int n, int place)


{
int output[n + 1];
int count[10] = {0};
for (int i = 0; i < n; i++)
count[(a[i] / place) % 10]++;
for (int i = 1; i < 10; i++)
count[i] += count[i - 1];
for (int i = n - 1; i >= 0; i--)
{
output[count[(a[i] / place) % 10] - 1] = a[i];
count[(a[i] / place) % 10]--;
}

for (int i = 0; i < n; i++)


a[i] = output[i];
}

void radixsort(int a[], int n)


{
int max = getMax(a, n);
for (int place = 1; max / place > 0; place *= 10)
countingSort(a, n, place);
}
void printArray(int a[], int n)
{
for (int i = 0; i < n; ++i)
{
printf("%d ", a[i]);
}
printf("\n");
}
int main() {
int a[] = {181, 289, 390, 121, 145, 736, 514, 888, 122};
int n = sizeof(a) / sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArray(a,n);
radixsort(a, n);
printf("After applying Radix sort, the array elements are - \n");
printArray(a, n);
}

Counting Sort:
Counting sort is a sorting technique based on keys lying within a specific range. It does not sort by comparing elements; instead, it sorts by counting the number of objects having each distinct key value, somewhat like hashing, and then performs some arithmetic operations to calculate each object's index position in the output sequence. Counting sort is not used as a general-purpose sorting algorithm.
Counting sort is effective when the range of keys is not greater than the number of objects to be sorted. With a suitable offset, it can also be adapted to sort negative input values.

Algorithm
1) countingSort(array, n)
2) max = find maximum element in the given array
3) create count array with size maximum + 1
4) Initialize count array with all 0's
5) for i = 0 to n
6) find the count of every unique element and
7) store that count at ith position in the count array
8) for j = 1 to max
9) Now, find the cumulative sum and store it in count array
10) for i = n to 1
11) Restore the array elements
12) Decrease the count of every restored element by 1
13) end countingSort

Working of counting sort Algorithm


Now, let's see the working of the counting sort algorithm. It is easier to understand counting sort via an example, so consider an unsorted array.

1. Find the maximum element of the given array. Let max be the maximum element.
2. Initialize a count array of length max + 1 with all elements 0. This array will be used to store the count of each element in the given array.
3. Store the count of each array element at its corresponding index in the count array. For example, if the element '4' appears two times, the count of element 4 is 2, so 2 is stored at the 4th position of the count array. If an element is not present in the array, place 0; e.g., if the element '3' is not present, 0 is stored at the 3rd position.
4. Store the cumulative sum of the count array elements. The cumulative counts help to place the elements at the correct index of the sorted array.
5. Place each element of the original array at its correct position using the cumulative counts, and after placing an element, decrease its count by one. For example, before placing element 2 its count was 2; after placing it at its correct position, the new count for element 2 is 1.

After all elements have been placed, the array is completely sorted.

Time Complexity:

Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of counting sort is O(n + k).

Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of counting sort is O(n + k).

Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of counting sort is O(n + k).

In all of the above cases, the time complexity of counting sort is the same. This is because the algorithm performs n + k steps regardless of how the elements are placed in the array.

Counting sort is faster than comparison-based sorting techniques because no comparisons between elements take place in counting sort. But when the integers are very large, counting sort becomes impractical, because a count array of that size has to be created.

Space Complexity: The space complexity of counting sort is O(max). The larger the range of elements, the
larger the space complexity.

Program:
#include<stdio.h>
int getMax(int a[], int n)
{
int max = a[0];
for(int i = 1; i<n; i++)
{
if(a[i] > max)
max = a[i];
}
return max;
}
void countSort(int a[], int n)
{
int output[n+1];
int max = getMax(a, n);
int count[max+1];
for (int i = 0; i <= max; ++i)
{
count[i] = 0;}
for (int i = 0; i < n; i++)
count[a[i]]++;
for(int i = 1; i<=max; i++)
count[i] += count[i-1];
for (int i = n - 1; i >= 0; i--)
{
output[count[a[i]] - 1] = a[i];
count[a[i]]--;
}
for(int i = 0; i<n; i++)
a[i] = output[i];
}

void printArr(int a[], int n)

{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);.
}
int main()
{
int a[] = { 11, 30, 24, 7, 31, 16 };
int n = sizeof(a)/sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArr(a, n);
countSort(a, n);
printf("\nAfter sorting array elements are - \n");
printArr(a, n);
return 0;
}

Time Complexity of all Sorting Algorithms (summarizing the complexities stated above):

Algorithm        Best Case     Average Case   Worst Case    Space
Bubble Sort      O(n)          O(n²)          O(n²)         O(1)
Selection Sort   O(n²)         O(n²)          O(n²)         O(1)
Insertion Sort   O(n)          O(n²)          O(n²)         O(1)
Quick Sort       O(n log n)    O(n log n)     O(n²)         O(n)
Merge Sort       O(n log n)    O(n log n)     O(n log n)    O(n)
Radix Sort       Ω(n+k)        θ(nk)          O(nk)         O(n+k)
Counting Sort    O(n+k)        O(n+k)         O(n+k)        O(max)
