
ALGORITHMS AND COMPLEXITY ANALYSIS (CSC 3311)

COURSE OUTLINE
 Introduction to Algorithms
 Introduction to Asymptotic Notations
 Analyzing Space Complexity
 Time Complexity Analysis
 Time Analysis of Recursive Programs
 Master's Theorem
 Insertion Sort Algorithms
 Bubble Sort Algorithms
 Selection Sort Algorithms
 Divide and Conquer Algorithms
 Binary Search Algorithms
 Merge sort Algorithms
 Quick sort Algorithms
 Greedy Algorithms
 Minimum Spanning Tree
 Kruskal Algorithms
 Prim’s Algorithms

 Introduction to Algorithms
An algorithm is a finite collection of well-defined computational steps, each of which is clear and
unambiguous, capable of being performed with finite effort, and terminating in a finite amount of
time. Basically, an algorithm can be defined as a finite set of steps to solve a problem. These
steps are instructions built from fundamental operators such as +, *, /, %, =, etc. The
word algorithm comes from the name of the Persian author Abu Ja'far Mohammed Ibn Musa
Al-Khwarizmi, who wrote a textbook entitled "Algoritmi de numero Indorum"; the term
"Algoritmi" in the title of the book led to the term algorithm.

 Characteristics of Algorithms

An algorithm must have the following properties:

1. Correctness: It should produce the output according to the requirement of the algorithm
2. Finiteness: Algorithm must complete after a finite number of instructions have been
executed.
3. An Absence of Ambiguity: Each step must be defined, having only one interpretation.
4. Definition of Sequence: Each step must have a unique defined preceding and succeeding
step. The first step and the last step must be noted.
5. Input/output: Number and classification of needed inputs and results must be stated.
6. Feasibility: It must be feasible to execute each instruction.
7. Flexibility: It should be possible to make changes to the algorithm without excessive effort.
8. Efficiency: Efficiency is measured in terms of the time and space required to implement
the algorithm; a good algorithm uses as little running time and memory space as possible
within the limits of acceptable development time.
9. Independence: An algorithm should specify the inputs, the outputs, and how the outputs
are derived from the inputs, without reference to any particular programming language.
In this sense, an algorithm is independent of any language.

 Advantages of Algorithms

 Effective Communication: Since an algorithm is written in a natural language like English, it is
easy to understand the step-by-step outline of a solution to a particular problem.

 Easy Debugging: A well-designed algorithm makes it easy to detect and debug the logical
errors inside the program.

 Easy and Efficient Coding: An algorithm is a blueprint of a program and therefore helps in
developing the program.

 Independent of Programming Language: Since an algorithm is language-independent, it can
easily be coded in any high-level language.

 Analysis of Algorithm
The analysis is a process of estimating the efficiency of an algorithm, that is, trying to know how
good or how bad an algorithm could be. There are two types of algorithm analysis:

 A PRIORI ANALYSIS

A priori analysis is done before executing the program. It provides estimated, uniform values,
and it is independent of the machine on which the algorithm will run.

 POSTERIORI ANALYSIS

Posteriori analysis is done after executing the program. It provides exact, non-uniform values,
and it depends on the machine used. We can consider algorithms to be procedural solutions
to problems. These solutions are not answers but specific instructions for getting answers.
The steps for designing and analyzing an algorithm are shown in the diagram below.

Figure 1. Analysis of Algorithms

There are two main parameters based on which we can analyze the algorithm:

Space Complexity: The space complexity can be understood as the amount of space required by an
algorithm to run to completion.

Time Complexity: Time complexity is a function of input size n that refers to the amount of time
needed by an algorithm to run to completion.

 Asymptotic Notations

To compare the growth rates of two algorithms with respect to time and space, we use what is
called asymptotic notation. Asymptotic notations are used to express the fastest and slowest
possible running times of an algorithm; these correspond to the 'best case' and 'worst case'
scenarios, with the 'average case' in between. In asymptotic notation, we derive the complexity
in terms of the size of the input (for example, in terms of n).

 Why is Asymptotic Notation Important?

1. It gives a simple characterization of an algorithm's efficiency.

2. It allows the performance of different algorithms to be compared.

Three notations are used to calculate the running time complexity of an algorithm.

1. Big Oh Notation (O)

Big-O is the formal method of expressing the upper bound of an algorithm's running time. It
measures the longest amount of time an algorithm can take and is therefore also called the worst
case: it bounds how much the running time can grow as the input size increases. The worst-case
efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the longest among all possible inputs of that
size. The way to determine the worst-case efficiency of an algorithm is, in principle, quite
straightforward: analyze the algorithm to see what kind of inputs yield the largest value of the
basic operation's count.

The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist
positive constants c and n₀ such that

f(n) ≤ c·g(n) for all n ≥ n₀

Hence, the function g(n) is an upper bound for the function f(n), since beyond n₀ it grows at least
as fast as f(n). Upper bound means that in a given algorithm, the running time cannot exceed the
calculated complexity. For example, f(n) = 3n + 2 is O(n): taking c = 4 and n₀ = 2 gives
3n + 2 ≤ 4n for all n ≥ 2.

2. Big Omega (Ω) Notation (Best case)

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which
is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs
of that size.
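The formal definition parallels Big-O: f(n) = Ω(g(n)) [read as "f of n is big-omega of g of n"]
if and only if there exist positive constants c and n₀ such that

f(n) ≥ c·g(n) for all n ≥ n₀

Hence, g(n) is a lower bound for f(n): for sufficiently large inputs, the running time cannot be
less than c·g(n).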

3. Theta Notation (Average case)
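Theta notation bounds a function from both above and below: f(n) = Θ(g(n)) if and only if there
exist positive constants c₁, c₂ and n₀ such that

c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀

That is, f(n) grows at the same rate as g(n). The average-case efficiency of an algorithm is its
efficiency averaged over all possible inputs of size n.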

 TIME COMPLEXITY ANALYSIS
There are two (2) types of algorithms
1. Iterative Algorithm
2. Recursive Algorithm
Iterative Algorithms: an iterative algorithm repeats steps or instructions over and over again;
this is often called a loop.
Recursive Algorithms: if an algorithm contains a function that calls itself, it is known as a
recursive function.
If an algorithm is neither iterative nor recursive, then its time complexity is constant.
Let us look at some examples of iterative programs, which will give us some insight into how to
analyze time complexity.

 Method
 Simple for loop
 Nested for loop
 If-else
Simple for loop
Example 1:
Find the time complexity of the following
sum = 0;
for (i = 1; i <= n; i++)
{
    sum = sum + i;
}

The loop body executes n times, so the time complexity is O(n).

Example 2:
Find the time complexity of the following:
sum = 0;
for (i=1; i<=n; i=i*2)
{
sum = sum + i;
}
Since i doubles on every iteration, the loop executes about log₂ n times, so the time complexity is O(log₂ n).

Example 3:
Find the time complexity of the following:
sum = 0;
for (i=n; i > 0; i = i/2)
{
sum = sum + i;
}

Since i is halved on every iteration, the loop executes about log₂ n times, so the time complexity is O(log₂ n).

Nested for loop
Example 1:
Find the time complexity of the following algorithm;
sum = 0;
for (i = 1; i <=n; i++)
{
for (j = 1; j<= n; j++)
{
sum = sum + j;
}
}

The inner loop runs n times for each of the n iterations of the outer loop, so the time complexity is O(n²).

Example 2:
Find the time complexity of the following algorithm;
sum = 0;
for (i = 1; i <= n; i++)
{
for (j = 1; j<= n; j = j * 3)
{
sum = sum + j;
}
}

The inner loop runs about log₃ n times for each of the n iterations of the outer loop, so the time complexity is O(n log₃ n).

Example 3:
Find the time complexity of the following algorithm;
sum = 0;
for (i = 1; i <= n; i = i * 2)
{
for (j = 1; j<= n; j = j + 2)
{
sum = sum + j;
}
}
The outer loop runs about log₂ n times and the inner loop about n/2 times per outer iteration, so the time complexity is O(n log₂ n).

 Recursive Algorithm
Recursion is generally expressed in terms of recurrences. In other words, when an algorithm calls
itself, we can often describe its running time by a recurrence equation, which describes the
overall running time on a problem of size n in terms of the running time on smaller inputs.

Note:

Time complexity of a recursive algorithm = number of function calls

Space complexity of a recursive algorithm = depth of the recursion tree, i.e. the maximum number
of activation records on the stack.

Example 1:

Find the time complexity of the recursive Fibonacci function.

int fib(int n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}

Each call with n > 1 makes two further calls, so the number of calls roughly doubles at every
level of the recursion tree. The time complexity is O(2ⁿ).

We have seen what a recursive program is: a function that calls itself is called a recursive
function.

Example:

Assume that

int A(int n)
{
    if (n > 1)
        return A(n - 1);
    return 1;   /* base case: the recursion stops when n reaches 1 */
}
To get the complexity of this recursive algorithm, we need to generate its recurrence equation.
The recurrence equation of the above algorithm is:
T(n) = 1 + T(n-1)
There are many ways of solving a recurrence equation. Let us start with the back substitution
method.

Back Substitution method.

T(n) = 1 + T(n-1) .................. (1)

T(n-1) = 1 + T(n-2) ................ (2)

T(n-2) = 1 + T(n-3) ................ (3)

Substitute equation (2) into equation (1):

T(n) = 1 + 1 + T(n-2)
     = 2 + T(n-2)
     = 3 + T(n-3)
     ...
     = k + T(n-k)
So, where do we stop? Looking at the algorithm, T(n) = 1 + T(n-1) is valid if and only if n > 1.
The algorithm means that n starts at a number greater than 1 and gradually decreases; at some
point it reaches 1, and there we stop. This condition is called the base condition.
So, in k + T(n-k), we set n-k = 1, which gives k = n-1.
By substituting, we have
T(n) = (n-1) + T(n-(n-1))
     = (n-1) + T(1)
     = n - 1 + 1
     = n
So T(n) = n
The time complexity of the recursive algorithm is O(n).
Example 2:
Solve the following recursive equation using back substitution
T(n) = n + T(n-1)
Solution
T(n) = n + T(n-1) .................. (1)
T(n-1) = (n-1) + T(n-2) ............ (2)
T(n-2) = (n-2) + T(n-3) ............ (3)

Substitute equation (2) into equation (1):
T(n) = n + (n-1) + T(n-2)
T(n) = n + (n-1) + (n-2) + T(n-3)
     = n + (n-1) + (n-2) + ... + (n-k) + T(n-(k+1)) ........ (4)

The base condition is T(1), so we set n-(k+1) = 1, which gives k = n-2.
Substituting into equation (4), we have
T(n) = n + (n-1) + (n-2) + ... + (n-(n-2)) + T(1)
     = n + (n-1) + (n-2) + ... + 2 + 1

This is the sum of the first n natural numbers, which is n(n+1)/2.
Therefore, the time complexity of the recursive equation is O(n²).

 MASTER'S THEOREM
Master's Theorem is used to solve recurrence equations. Various conditions must be met before
the theorem can be applied, so it cannot be used to solve every recurrence problem.
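The standard statement is as follows. For a recurrence of the form T(n) = a·T(n/b) + f(n),
with a ≥ 1 and b > 1, let c = log_b(a). Then:

1. If f(n) = O(n^(c-ε)) for some ε > 0, then T(n) = Θ(n^c).
2. If f(n) = Θ(n^c), then T(n) = Θ(n^c · log n).
3. If f(n) = Ω(n^(c+ε)) for some ε > 0, and a·f(n/b) ≤ k·f(n) for some constant k < 1 and all
sufficiently large n, then T(n) = Θ(f(n)).

For example, the merge sort recurrence T(n) = 2T(n/2) + n has a = 2, b = 2, c = log₂ 2 = 1, and
f(n) = n = Θ(n¹), so case 2 gives T(n) = Θ(n log n).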

 INSERTION SORT ANALYSIS AND ALGORITHMS


Insertion sort can be implemented both recursively and iteratively. It is a very simple method to
sort numbers in increasing or decreasing order. It has various advantages:
1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (does not change the relative order of elements with equal keys)
4. It is in-place (only requires a constant amount O (1) of extra memory space).
5. It is an online algorithm, in that it can sort a list as it receives it.

Algorithm
INSERTION SORT (A)
1. for k ← 2 to length [A]
2.     do key ← A[k]
3.        i ← k - 1
4.        while i > 0 and A[i] > key
5.            do A[i+1] ← A[i]
6.               i ← i - 1
7.        A[i+1] ← key
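A direct C rendering of this pseudocode, as a minimal sketch (0-based array indexing, so the
outer loop starts at k = 1; the function name is ours):

void insertion_sort(int A[], int n)
{
    for (int k = 1; k < n; k++)        /* A[0..k-1] is already sorted */
    {
        int key = A[k];                /* element to insert */
        int i = k - 1;
        while (i >= 0 && A[i] > key)   /* shift larger elements right */
        {
            A[i + 1] = A[i];
            i = i - 1;
        }
        A[i + 1] = key;                /* place key in its final slot */
    }
}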
Analysis
Input: n elements are given.
Output: the number of comparisons required to sort the array.
Logic: If we have n elements in insertion sort, then n-1 passes (passes 2 through n) are required
to produce a sorted array. In the worst case:
In pass 2: 1 comparison is required
In pass 3: 2 comparisons are required
...
In pass n: n-1 comparisons are required

Total comparisons: T(n) = 1 + 2 + 3 + ... + (n-1)
= n(n-1)/2
= O(n²)
Therefore, the complexity is of order n².

Example
Illustrate the operation of INSERTION SORT on the array A = (4, 15, 7, 18, 16).
Solution
A [] =
4 15 7 18 16
For k = 2 to 5:
k = 2: key = A[2] = 15
i = 2-1 = 1
While i > 0 and A[1] > 15: A[1] = 4, so the condition is false and nothing changes.

k = 3: key = A[3] = 7
i = 3-1 = 2
While i > 0 and A[2] > key: A[2] = 15 > 7, so the condition is true.
So A[2+1] ← A[2], i.e. A[3] ← A[2]:
4 15 15 18 16
and i = 2-1 = 1.
While i > 0 and A[1] > key: A[1] = 4, so the condition is false.
Then A[1+1] ← key, i.e. A[2] ← 7:
4 7 15 18 16

k = 4: key = A[4] = 18, i = 3
While 3 > 0 and A[3] > 18: A[3] = 15, so the condition is false; no change.

k = 5: key = A[5] = 16, i = 4
While 4 > 0 and A[4] > 16: A[4] = 18, so the condition is true.
So A[5] ← A[4]:
4 7 15 18 18
and i = 4-1 = 3.
While 3 > 0 and A[3] > 16: A[3] = 15, so the condition is false.
So A[3+1] ← key, i.e. A[4] ← 16.
The sorted array is:
4 7 15 16 18

 BUBBLE SORT
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two items at a time and swapping them if
they are in the wrong order. The pass through the list is repeated until no swaps are needed,
which means the list is sorted.
It is the easiest method among all sorting algorithms.

BUBBLE SORT (A)

1. for i ← 1 to length [A]
2.     for k ← length [A] down to i+1
3.         if A[k] < A[k-1]
4.             exchange (A[k], A[k-1])
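A minimal C sketch of the same procedure (0-based indexing; the inner loop sweeps from the end
of the array toward position i, bubbling the smallest remaining element to the front; the
function name is ours):

void bubble_sort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++)        /* after pass i, A[0..i] is final */
        for (int k = n - 1; k > i; k--)    /* sweep from the back */
            if (A[k] < A[k - 1])           /* out of order: swap */
            {
                int tmp = A[k];
                A[k] = A[k - 1];
                A[k - 1] = tmp;
            }
}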

Analysis

Input: n elements are given.

Output: the number of comparisons required to sort the array.

Logic: If we have n elements in bubble sort, then n-1 passes are required to produce a sorted array.

In pass 1: n-1 comparisons are required

In pass 2: n-2 comparisons are required

In pass 3: n-3 comparisons are required

In pass n-1: 1 comparison is required

Total comparisons: T(n) = (n-1) + (n-2) + ... + 1
= n(n-1)/2
= O(n²)

Therefore, the complexity is of order n².

Example
Unsorted List: A = {7, 2, 1, 4, 3, 9, 6}
i.e., A [] =
7 2 1 4 3 9 6

Here length [A] = 7

For each i, j runs from 7 down to i+1.
i=1, j=7: A[7] = 6 and A[6] = 9, so A[7] < A[6].
Exchange (A[7], A[6]). Then A [] =
7 2 1 4 3 6 9
Now, i=1, j=6: A[6] = 6 and A[5] = 3, so A[6] > A[5]; no exchange.
Now, i=1, j=5: A[5] = 3 and A[4] = 4, so A[5] < A[4].
So, exchange (A[5], A[4]). And A [] =
7 2 1 3 4 6 9
Now, i=1, j=4: A[4] = 3 and A[3] = 1, so A[4] > A[3]; no exchange.
Now, i=1, j=3: A[3] = 1 and A[2] = 2, so A[3] < A[2].
So, exchange (A[3], A[2]). Then A [] =
7 1 2 3 4 6 9
Now, i=1, j=2: A[2] = 1 and A[1] = 7, so A[2] < A[1].
So, exchange (A[2], A[1]). Then A [] =
1 7 2 3 4 6 9
Now, i=2, j=7: A[7] = 9 and A[6] = 6, so A[7] > A[6]; no exchange.
Similarly, i=2, j=6, 5, 4: no change.
Then i=2, j=3: A[3] = 2 and A[2] = 7, so A[3] < A[2].
So, exchange (A[3], A[2]). And A [] =
1 2 7 3 4 6 9
Now, i=3, j=7, 6, 5: no change.
Then i=3, j=4: A[4] = 3 and A[3] = 7, so A[4] < A[3].
So, exchange (A[4], A[3]). Then A [] =
1 2 3 7 4 6 9
Now, i=4, j=7, 6: no change.
Now, i=4, j=5: A[5] = 4 and A[4] = 7, so A[5] < A[4].
So, exchange (A[5], A[4]). And A [] =
1 2 3 4 7 6 9
Now, i=5, j=7: no change.
Now, i=5, j=6: A[6] = 6 and A[5] = 7, so A[6] < A[5].
So, exchange (A[6], A[5]). And A [] =
1 2 3 4 6 7 9
The remaining passes make no further exchanges, and
1 2 3 4 6 7 9
is the sorted array.

 SELECTION SORT

The selection sort improves on the bubble sort by making only a single swap for each pass through
the list. To do this, a selection sort searches for the smallest remaining value as it makes a pass
and, after finishing the pass, places it in its proper position. As with a bubble sort, after the
first pass, the smallest item is in the right place; after the second pass, the next smallest is in
place. This procedure continues and requires n-1 passes to sort n items, since the final item
must be in place after the (n-1)st pass.

Algorithm

SELECTION SORT (A)

1. k ← length [A]
2. for j ← 1 to k-1
3.     smallest ← j
4.     for i ← j + 1 to k
5.         if A[i] < A[smallest]
6.             then smallest ← i
7.     exchange (A[j], A[smallest])
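In C (0-based indexing), a minimal sketch of the same procedure (the function name is ours):

void selection_sort(int A[], int n)
{
    for (int j = 0; j < n - 1; j++)
    {
        int smallest = j;                  /* index of smallest so far */
        for (int i = j + 1; i < n; i++)
            if (A[i] < A[smallest])
                smallest = i;
        int tmp = A[j];                    /* one swap per pass */
        A[j] = A[smallest];
        A[smallest] = tmp;
    }
}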

Analysis
1. Input: n elements are given.
2. Output: the number of comparisons required to sort the array.
3. Logic: If we have n elements in selection sort, then n-1 passes
are required to find a sorted array.
In pass 1: n-1 comparisons are required
In pass 2: n-2 comparisons are required
In pass 3: n-3 comparisons are required

In pass n-1: 1 comparison is required


Total comparisons: T(n) = (n-1) + (n-2) + (n-3) + ... + 1
= n(n-1)/2
= O(n²)

Therefore, the complexity is of order n².

Example
Sort the following array using selection sort: A [] = (7, 4, 3, 6, 5).
A [] =
7 4 3 6 5
1st iteration:
smallest = 7
4 < 7, smallest = 4
3 < 4, smallest = 3
6 > 3, smallest = 3
5 > 3, smallest = 3
Swap 7 and 3:
3 4 7 6 5
2nd iteration:
smallest = 4
4 < 7, smallest = 4
4 < 6, smallest = 4
4 < 5, smallest = 4
No swap:
3 4 7 6 5
3rd iteration:
smallest = 7
6 < 7, smallest = 6
5 < 6, smallest = 5
Swap 7 and 5:
3 4 5 6 7
4th iteration:
smallest = 6
6 < 7, smallest = 6
No swap:
3 4 5 6 7
Finally, the sorted list is:
3 4 5 6 7

DIVIDE AND CONQUER

Divide and Conquer is an algorithmic pattern. The design is to take a problem with a large input,
break the input into smaller pieces, solve the problem on each of the small pieces, and then merge
the piecewise solutions into a global solution. This mechanism of solving a problem is called the
Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps:

1. Divide the original problem into a set of subproblems.

2. Conquer: Solve every subproblem individually, recursively.

3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Examples

The following computer algorithms are based on the Divide & Conquer approach:

1. Maximum and Minimum Problem

2. Binary Search

3. Sorting (merge sort, quick sort)

4. Towers of Hanoi.

Fundamentals of the Divide & Conquer Strategy

There are two fundamentals of the Divide & Conquer strategy:

1. Relational Formula

2. Stopping Condition

1. Relational Formula: This is the formula (recurrence) that we generate from the given technique.
After generating the formula, we apply the D&C strategy: we break the problem recursively and
solve the broken subproblems.

2. Stopping Condition: When we break a problem using the Divide & Conquer strategy, we need to
know how long to keep applying it. The condition at which we stop the recursion steps of D&C is
called the stopping condition.

Max - Min Problem

Problem
Analyze the algorithm to find the maximum and minimum element from an
array.

Algorithm
MaxMin Element (a[])
max ← a[1]
min ← a[1]
for i = 2 to n do
    if a[i] > max then
        max ← a[i]
    if a[i] < min then
        min ← a[i]
return (max, min)
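The routine above is a straightforward linear scan. Since this section is about Divide & Conquer,
here is a sketch of the divide-and-conquer version in C (the struct and function names are ours):
split the array in half, solve each half recursively, and combine the results.

typedef struct { int max, min; } MaxMin;

/* Divide & Conquer: find the max and min of a[lo..hi] */
MaxMin max_min(const int a[], int lo, int hi)
{
    MaxMin r;
    if (lo == hi)                           /* one element: both max and min */
    {
        r.max = r.min = a[lo];
        return r;
    }
    int mid = lo + (hi - lo) / 2;           /* divide */
    MaxMin left  = max_min(a, lo, mid);     /* conquer left half */
    MaxMin right = max_min(a, mid + 1, hi); /* conquer right half */
    r.max = left.max > right.max ? left.max : right.max;   /* combine */
    r.min = left.min < right.min ? left.min : right.min;
    return r;
}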

 BINARY SEARCH

Binary search is a fast search algorithm. This search algorithm works on the principle of divide and
conquer. Binary search looks for a particular item by comparing the middle most item of the
collection. If a match occurs, then the index of item is returned. If the middle item is greater than the
item, then the item is searched in the sub-array to the left of the middle item. Otherwise, the item is
searched for in the sub-array to the right of the middle item. This process continues on the sub-array
as well until the size of the subarray reduces to zero.
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with an example. Assume a sorted array of ten elements, indexed 0 to 9,
in which the value 27 is stored at index 4 and the value 31 at index 5; we need to search for the
location of the value 31.

First, we shall determine half of the array by using this formula −


mid = low + (high - low) / 2
Here it is, 0 + (9 - 0 ) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. The value at
location 4 is 27, which is not a match. As the target value 31 is greater than 27 and we have a
sorted array, the target value must be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.

low = mid + 1 mid = low + (high - low) / 2

Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match, rather it is more than what we are looking for. So, the
value must be in the lower part from this location.
Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5. Binary search halves the searchable
items at each step and thus reduces the number of comparisons to O(log₂ n).
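A minimal iterative C sketch of the procedure described above (the function name is ours):

/* Return the index of target in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;  /* midpoint; avoids overflow */
        if (a[mid] == target)
            return mid;                    /* match */
        else if (a[mid] < target)
            low = mid + 1;                 /* search the upper half */
        else
            high = mid - 1;                /* search the lower half */
    }
    return -1;                             /* not found */
}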

 MERGE SORT
It closely follows the Divide & Conquer paradigm.
Conceptually, it works as follows:
1. Divide: Divide the unsorted list into two sublists of about half the size.
2. Conquer: Sort each of the two sublists recursively until we have list sizes of length 1, in which
case the list items are returned.
3. Combine: Join the two sorted Sub lists back into one sorted list.
The Main purpose is to sort the unsorted list in non-decreasing order.

Algorithm: MERGE-SORT (A, p, r)
1. if p < r
2. then q ← (p + r)/2
3. MERGE-SORT (A, p, q)
4. MERGE-SORT (A, q+1, r)
5. MERGE (A, p, q, r)
The following figure illustrates the dividing (splitting) procedure.

Functions: MERGE (A, p, q, r)
1. n1 ← q - p + 1
2. n2 ← r - q
3. create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
4. for i ← 1 to n1
5.     do L[i] ← A[p + i - 1]
6. for j ← 1 to n2
7.     do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13.     do if L[i] ≤ R[j]
14.         then A[k] ← L[i]
15.              i ← i + 1
16.         else A[k] ← R[j]
17.              j ← j + 1

In this method, we split the given list into two halves, recursively apply merge sort to each
half, and finally combine the resulting sorted lists into one.
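A compact C sketch of the whole procedure (using a caller-supplied temporary buffer instead of
the ∞ sentinels of the pseudocode; the function names are ours):

void merge(int A[], int tmp[], int p, int q, int r)
{
    int i = p, j = q + 1, k = p;
    while (i <= q && j <= r)                 /* take the smaller head element */
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= q) tmp[k++] = A[i++];        /* copy any leftovers */
    while (j <= r) tmp[k++] = A[j++];
    for (k = p; k <= r; k++) A[k] = tmp[k];  /* copy back into A */
}

void merge_sort(int A[], int tmp[], int p, int r)
{
    if (p < r)
    {
        int q = (p + r) / 2;
        merge_sort(A, tmp, p, q);            /* sort left half */
        merge_sort(A, tmp, q + 1, r);        /* sort right half */
        merge(A, tmp, p, q, r);              /* combine */
    }
}

To sort an n-element array A, call merge_sort(A, tmp, 0, n-1) with a scratch array tmp of the
same size.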

Analysis of Merge Sort


Let T(n) be the total time taken by merge sort.
Sorting the two halves takes at most 2T(n/2) time.
When we merge the sorted lists, we make at most n-1 comparisons, because the last remaining
element just needs to be copied into the combined list with no comparison.
So, the relational formula becomes

T(n) = 2T(n/2) + (n - 1)

which solves (for example by the Master's Theorem, case 2) to T(n) = O(n log₂ n).
 Quick Sort
It is an algorithm of Divide & Conquer type.
Divide: Rearrange the elements to split the array into two subarrays with a pivot element in
between, such that each element in the left subarray is less than or equal to the pivot and each
element in the right subarray is greater than the pivot.
Conquer: Recursively sort the two subarrays.
Combine: Combine the already-sorted subarrays; since they are sorted in place around the pivot,
the whole array is sorted.
Algorithm
QUICKSORT (array A, int m, int n)
1 if (n > m)
2 then
3 i ← a random index from [m,n]
4 swap A [i] with A[m]
5 o ← PARTITION (A, m, n)
6 QUICKSORT (A, m, o - 1)
7 QUICKSORT (A, o + 1, n)
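The PARTITION routine is not given in the notes; below is a common Lomuto-style sketch in C (an
assumed implementation that uses the last element as the pivot, so the random pivot is swapped
to the end rather than to the front as in line 4 of the pseudocode):

#include <stdlib.h>   /* rand() */

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: place the pivot A[n] into its final slot, return its index. */
int partition(int A[], int m, int n)
{
    int pivot = A[n];
    int i = m - 1;
    for (int k = m; k < n; k++)
        if (A[k] <= pivot)                 /* grow the "<= pivot" region */
            swap(&A[++i], &A[k]);
    swap(&A[i + 1], &A[n]);                /* move pivot between the regions */
    return i + 1;
}

void quicksort(int A[], int m, int n)
{
    if (m < n)
    {
        int r = m + rand() % (n - m + 1);  /* random pivot index */
        swap(&A[r], &A[n]);                /* move pivot to the end */
        int o = partition(A, m, n);
        quicksort(A, m, o - 1);            /* sort left of pivot */
        quicksort(A, o + 1, n);            /* sort right of pivot */
    }
}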
Example of Quick Sort
44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.
Compare 44 with the elements on its right; if a right-side element is smaller than 44, swap
them. As 22 is smaller than 44, swap them:
22 33 11 55 77 90 40 60 99 44 88
Now compare 44 with the elements on its left; a left-side element must be greater than 44 to be
swapped. As 55 is greater than 44, swap them:
22 33 11 44 77 90 40 60 99 55 88
Recursively repeat steps 1 and 2 until we get two lists, one to the left of the pivot element 44
and one to the right. Scanning from the right again, 40 is smaller than 44, so swap them:
22 33 11 40 77 90 44 60 99 55 88
Scanning from the left, 77 is greater than 44, so swap with 77:
22 33 11 40 44 90 77 60 99 55 88
Now every element on the right side of 44 is greater than it, and every element on the left side
is smaller.
Now we get two sublists, {22, 33, 11, 40} and {90, 77, 60, 99, 55, 88}, and these sublists are
sorted under the same process as above. The two sorted sublists are then placed side by side
around the pivot.

Figure: Merging the sorted sublists around the pivots gives the final sorted list: 11 22 33 40 44 55 60 77 88 90 99.