Algorithms and Complexity Analysis Lecture Notes
COURSE OUTLINE
Introduction to Algorithms
Introduction to Asymptotic Notations
Analyzing Space Complexity
Time Complexity Analysis
Time Analysis of Recursive Programs
Masters Theorem
Insertion Sort Algorithms
Bubble Sort Algorithms
Selection Sort Algorithms
Divide and Conquer Algorithms
Binary Search Algorithms
Merge Sort Algorithms
Quick Sort Algorithms
Greedy Algorithms
Minimum Spanning Tree
Kruskal Algorithms
Prim’s Algorithms
Introduction to Algorithms
An algorithm is a finite collection of well-defined computational steps, each of which is clear and
unambiguous, capable of being performed with finite effort, and terminating in a finite amount of
time. Basically, an algorithm can be defined as a finite set of steps to solve a problem. These
steps are instructions built from fundamental operators such as +, *, /, %, =, etc. The
word algorithm comes from the name of the author Abu Jafar Mohammed Ibn Musa Al
Khowarizmi, who wrote a textbook entitled "Algorithmi de numero indorum". The term
"Algorithmi" in the title of the book led to the term algorithm.
Characteristics of Algorithms
1. Correctness: It should produce the correct output for the given input, according to the requirements of the problem.
2. Finiteness: Algorithm must complete after a finite number of instructions have been
executed.
3. An Absence of Ambiguity: Each step must be defined, having only one interpretation.
4. Definition of Sequence: Each step must have a unique defined preceding and succeeding
step. The first step and the last step must be noted.
5. Input/output: Number and classification of needed inputs and results must be stated.
6. Feasibility: It must be feasible to execute each instruction.
7. Flexibility: It should also be possible to make changes to the algorithm without much
effort.
8. Efficiency: Efficiency is always measured in terms of the time and space required to
implement the algorithm, so the algorithm should use as little running time and memory
space as possible within the limits of acceptable development time.
9. Independent: An algorithm should focus on what the inputs and outputs are and how to
derive the output, without depending on the language it will be implemented in. Therefore,
we can say that an algorithm is language-independent.
Advantages of Algorithms
Easy Debugging: A well-designed algorithm makes it easy to detect and fix the logical
errors that occur inside the program.
Easy and Efficient Coding: An algorithm acts as a blueprint of a program and thus helps in
developing the program easily and efficiently.
Analysis of Algorithm
Analysis is the process of estimating the efficiency of an algorithm, that is, trying to determine how
good or how bad an algorithm could be. There are two types of algorithm analysis:
A PRIORI ANALYSIS
It is done before executing the program. It provides estimated, uniform values and is
independent of the machine and the language used.
POSTERIORI ANALYSIS
Posteriori analysis is done after executing the program. It provides exact, non-uniform
values and depends on the machine and the language used. We can consider algorithms to be
procedural solutions to problems. These solutions are not answers but specific instructions for
getting answers.
The steps for designing and analyzing an algorithm are shown in the diagram below.
Figure 1. Analysis of Algorithms
There are two main parameters based on which we can analyze the algorithm:
Space Complexity: The space complexity can be understood as the amount of space required by an
algorithm to run to completion.
Time Complexity: Time complexity is a function of input size n that refers to the amount of time
needed by an algorithm to run to completion.
Asymptotic Notations
To compare two algorithms rate of growth with respect to time and space, we need what we call
the asymptotic notation. Asymptotic notations are used to write fastest and slowest possible
running time for an algorithm. These are also referred to as 'best case’, ‘average case’ and 'worst
case' scenarios respectively. "In asymptotic notations, we derive the complexity concerning the
size of the input. (Example interms of n)".
Three notations are used to express the running time complexity of an algorithm.
1. Big-Oh Notation (Worst case)
Big-Oh is the formal method of expressing the upper bound of an algorithm's running time. It is
a measure of the longest amount of time the algorithm can take, and is therefore associated with the
worst case. As the input size of the algorithm increases, the upper bound describes how quickly the
running time can grow. The worst-case efficiency of an algorithm is its efficiency for the worst-case
input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest
among all possible inputs of that size. The way to determine the worst-case efficiency of an
algorithm is, in principle, quite straightforward: analyze the algorithm to see what kind of inputs
yield the largest value of the basic operation's count.
The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist a
positive constant c and a positive integer n0 such that
f(n) ≤ c · g(n) for all n ≥ n0.
Hence, the function g(n) is an upper bound for the function f(n), as g(n) grows at least as fast as
f(n). Upper bound means that, for the given algorithm, the running time cannot exceed the
complexity calculated.
2. Omega Notation (Best case)
Omega notation expresses the lower bound of an algorithm's running time. The best-case
efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.
The function f(n) = Ω(g(n)) if and only if there exist a positive constant c and a positive
integer n0 such that
f(n) ≥ c · g(n) for all n ≥ n0.
3. Theta Notation (Average case)
Theta notation bounds a function from both above and below, giving a tight bound that is
associated with the average case. The function f(n) = Θ(g(n)) if and only if there exist positive
constants c1, c2 and a positive integer n0 such that
c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0.
TIME COMPLEXITY ANALYSIS
There are two (2) types of algorithms
1. Iterative Algorithm
2. Recursive Algorithm
Iterative Algorithm: this means repeating steps or instructions over and over again; such a
repetition is often called a loop.
Recursive Algorithm: if an algorithm contains a function that calls itself, it is known as a
recursive algorithm.
If the algorithm is neither iterative nor recursive, then its time complexity is constant, O(1).
Let’s see some examples of iterative programs which will give us some insight on how to
analyze the time complexity.
Method
1. Simple for loop
2. Nested for loop
3. If-else
Simple for loop
Example 1:
Find the time complexity of the following:
sum = 0;
for (i = 1; i <= n; i++)
{
    sum = sum + i;
}
The loop body executes n times, so the time complexity is O(n).
Example 2:
Find the time complexity of the following:
sum = 0;
for (i = 1; i <= n; i = i * 2)
{
    sum = sum + i;
}
Since i doubles on every iteration, the loop executes about log2 n times. The time complexity is O(log2 n).
Example 3:
Find the time complexity of the following:
sum = 0;
for (i = n; i > 0; i = i / 2)
{
    sum = sum + i;
}
Since i is halved on every iteration, the loop executes about log2 n times, so the time complexity is O(log2 n).
Nested for loop
Example 1:
Find the time complexity of the following algorithm:
sum = 0;
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= n; j++)
    {
        sum = sum + j;
    }
}
The inner loop runs n times for each of the n iterations of the outer loop, so the time complexity is O(n²).
Example 2:
Find the time complexity of the following algorithm:
sum = 0;
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= n; j = j * 3)
    {
        sum = sum + j;
    }
}
The inner loop runs about log3 n times for each of the n iterations of the outer loop, so the time complexity is O(n log3 n) = O(n log n).
Example 3:
Find the time complexity of the following algorithm:
sum = 0;
for (i = 1; i <= n; i = i * 2)
{
    for (j = 1; j <= n; j = j + 2)
    {
        sum = sum + j;
    }
}
The outer loop runs about log2 n times and the inner loop about n/2 times, so the time complexity is O(n log2 n).
Recursive Algorithm
Recursion is generally expressed in terms of recurrences. In other words, when an algorithm calls
itself, we can often describe its running time by a recurrence equation, which describes the
overall running time of a problem of size n in terms of the running time on smaller inputs.
Example 1:
A recursive function to compute the nth Fibonacci number:
fib (n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    else
        return fib(n-1) + fib(n-2);
}
A function that calls itself in this way is called a recursive function.
Example:
Assume that
A (n)
{
if (n > 1)
return A(n-1)
}
To get the complexity of this recursive algorithm, we need to generate the recursive equation.
The recursive equation of the above algorithm is:
T(n) = 1 + T(n-1)
There are many ways of solving a recursive equation. Let's start with the back-substitution
method.
T(n) = 1 + T(n-1) ................................ (1)
T(n-1) = 1 + T(n-2) ................................ (2)
T(n-2) = 1 + T(n-3) ................................ (3)
Substituting (2) into (1) gives T(n) = 2 + T(n-2), and substituting (3) as well gives
T(n) = 3 + T(n-3). Continuing for k substitutions:
T(n) = k + T(n-k)
So, where are we going to stop? Looking at the algorithm, T(n) = 1 + T(n-1) is valid if
and only if n > 1. So n starts at a number greater than 1 and gradually decreases, and at some
point it reaches 1. Whenever it reaches 1, we stop. This condition is called the base condition.
So, in k + T(n-k), we set n - k = 1, which gives k = n - 1.
By substituting, we have:
T(n) = (n-1) + T(n-(n-1))
     = (n-1) + T(1)
     = (n-1) + 1
     = n
So T(n) = n.
The time complexity of the recursive algorithm is O(n).
Example 2:
Solve the following recursive equation using back substitution
T(n) = n + T(n-1)
Solution
T(n) = n + T(n-1) ................................ (1)
T(n-1) = (n-1) + T(n-2) ................................ (2)
T(n-2) = (n-2) + T(n-3) ................................ (3)
Substitute equation (2) into equation (1):
T(n) = n + (n-1) + T(n-2)
Substitute equation (3) as well:
T(n) = n + (n-1) + (n-2) + T(n-3)
Continuing for k substitutions:
T(n) = n + (n-1) + (n-2) + ... + (n-k) + T(n-(k+1)) ................................ (4)
The base condition is T(1), so we set n-(k+1) = 1, which gives k = n - 2.
Substituting k = n - 2 into equation (4), we have:
T(n) = n + (n-1) + (n-2) + ... + 2 + T(1)
     = n + (n-1) + (n-2) + ... + 2 + 1
This is the sum of the first n natural numbers, which is n(n+1)/2.
Therefore, the time complexity of the recursive equation is O(n²).
MASTERS THEOREM
The Master Theorem is used to solve recurrence relations of the form
T(n) = a T(n/b) + f(n), where a ≥ 1 and b > 1,
by comparing f(n) with n^(log_b a):
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some
constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
These conditions must be met before the Master Theorem can be used to solve a
problem. Hence, it cannot be used to solve all recursive problems.
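To illustrate, consider the merge sort recurrence T(n) = 2T(n/2) + n. Here a = 2, b = 2 and
f(n) = n, and n^(log_b a) = n^(log_2 2) = n, so case 2 applies and T(n) = Θ(n log n). Similarly,
the binary search recurrence T(n) = T(n/2) + 1 has a = 1, b = 2 and f(n) = 1 = n^(log_2 1),
so case 2 again applies and T(n) = Θ(log n).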
INSERTION SORT
Algorithm
INSERTION SORT (A)
1. for k ← 2 to length[A]
2.   do key ← A[k]
3.      i ← k - 1
4.      while i > 0 and A[i] > key
5.         do A[i+1] ← A[i]
6.            i ← i - 1
7.      A[i+1] ← key
Analysis
Input: n elements are given.
Output: the number of comparisons required to sort the elements.
Logic: If we have n elements in insertion sort, then n-1 passes are required to find a sorted array.
In pass 1: no comparison is required
In pass 2: 1 comparison is required
In pass 3: 2 comparisons are required
...
In pass n: n-1 comparisons are required
Total comparisons = 1 + 2 + ... + (n-1) = n(n-1)/2, so the worst-case complexity is O(n²).
Example
Illustrate the operation of INSERTION SORT on the array A = (4, 15, 7, 18, 16).
Solution
A[] =
4 15 7 18 16
For k = 2 to 5:
k = 2, key = A[2] = 15, i = 1
While i > 0 and A[1] > 15: condition false (4 < 15), so no change.
k = 3, key = A[3] = 7, i = 2
While i > 0 and A[2] > 7: condition true (15 > 7), so A[3] ← A[2] and i = 1, i.e.
4 15 15 18 16
A[1] > 7 is false (4 < 7), so A[2] ← key:
4 7 15 18 16
k = 4, key = A[4] = 18, i = 3
While i > 0 and A[3] > 18: condition false (15 < 18), so no change.
k = 5, key = A[5] = 16, i = 4
While i > 0 and A[4] > 16: condition true (18 > 16), so A[5] ← A[4] and i = 3, i.e.
4 7 15 18 18
A[3] > 16 is false (15 < 16), so A[4] ← key:
4 7 15 16 18
The array is now sorted.
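For reference, here is a minimal C sketch of the same procedure (the function name is
illustrative, and the array is 0-indexed, so the outer loop starts at the second element):

/* Insertion sort: grows a sorted prefix one element at a time. */
void insertion_sort(int A[], int n)
{
    for (int k = 1; k < n; k++) {       /* A[0..k-1] is already sorted */
        int key = A[k];                 /* element to insert           */
        int i = k - 1;
        while (i >= 0 && A[i] > key) {  /* shift larger elements right */
            A[i + 1] = A[i];
            i = i - 1;
        }
        A[i + 1] = key;                 /* place key in its position   */
    }
}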
BUBBLE SORT
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two adjacent items at a time and swapping them if
they are in the wrong order. The pass through the list is repeated until no swaps are needed,
which means the list is sorted.
This is the easiest method among all sorting algorithms.
Analysis
Logic: If we have n elements in bubble sort, then n-1 passes are required. Pass 1 requires n-1
comparisons, pass 2 requires n-2, and so on, for a total of
(n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n²)
Therefore, complexity is of order n².
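A minimal C sketch of this idea follows (names are illustrative). It is written as the variant
traced in the example below, where each pass scans from the right and bubbles the smallest
remaining element to the front:

/* Bubble sort: pass i moves the smallest remaining element
   into position i by swapping adjacent out-of-order pairs. */
void bubble_sort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++) {      /* n-1 passes          */
        for (int j = n - 1; j > i; j--) {  /* scan from the right */
            if (A[j] < A[j - 1]) {         /* wrong order: swap   */
                int tmp = A[j];
                A[j] = A[j - 1];
                A[j - 1] = tmp;
            }
        }
    }
}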
Example
Unsorted List: A = {7, 2, 1, 4, 3, 9, 6}
i.e., A[] =
7 2 1 4 3 9 6
Pass i = 1: scan j from the right, swapping adjacent elements that are out of order.
6 < 9, so exchange (A[7], A[6]); then 3 < 4, so exchange (A[5], A[4]):
A[] =
7 2 1 3 4 6 9
Then 1 < 2, so exchange (A[3], A[2]):
A[] =
7 1 2 3 4 6 9
Then 1 < 7, so exchange (A[2], A[1]):
A[] =
1 7 2 3 4 6 9
Pass i = 2: j = 7, 6, 5, 4: no change. Then j = 3: A[3] = 2, A[2] = 7 and A[3] < A[2],
so exchange (A[3], A[2]):
A[] =
1 2 7 3 4 6 9
Pass i = 3: j = 7, 6, 5: no change. Then j = 4: A[4] = 3, A[3] = 7 and A[4] < A[3],
so exchange (A[4], A[3]):
A[] =
1 2 3 7 4 6 9
Pass i = 4: j = 7, 6: no change. Then j = 5: A[5] = 4, A[4] = 7 and A[5] < A[4],
so exchange (A[5], A[4]):
A[] =
1 2 3 4 7 6 9
Pass i = 5: j = 7: no change. Then j = 6: A[6] = 6, A[5] = 7 and A[6] < A[5],
so exchange (A[6], A[5]):
A[] =
1 2 3 4 6 7 9
is the sorted array.
SELECTION SORT
The selection sort improves on the bubble sort by making only a single swap for each pass through
the list. In order to do this, a selection sort searches for the smallest value as it makes a pass
and, after finishing the pass, places it in the proper location. As with a bubble sort,
after the first pass, the smallest item is in the right place. After the second pass, the next
smallest is in place. This procedure continues and requires n-1 passes to sort n items, since the
final item must be in place after the (n-1)st pass.
Algorithm
SELECTION SORT (A)
1. k ← length[A]
2. for j ← 1 to k-1
3.   smallest ← j
4.   for i ← j + 1 to k
5.     if A[i] < A[smallest]
6.       then smallest ← i
7.   exchange A[j] ↔ A[smallest]
Analysis
1. Input: n elements are given.
2. Output: the number of comparisons required to sort the elements.
3. Logic: If we have n elements in selection sort, then n-1 passes
are required to find a sorted array.
In pass 1: n-1 comparisons are required
In pass 2: n-2 comparisons are required
In pass 3: n-3 comparisons are required
Total comparisons = (n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n²)
Therefore, the complexity is of order n².
Example
Sort the following array using selection sort: A [] = (7, 4, 3, 6, 5).
A [] =
7 4 3 6 5
1st iteration:
Smallest = 7
4 < 7, smallest = 4
3 < 4, smallest = 3
6 > 3, smallest = 3
5 > 3, smallest = 3
Swap 7 and 3
3 4 7 6 5
2nd iteration:
Smallest = 4
4 < 7, smallest = 4
4 < 6, smallest = 4
4 < 5, smallest = 4
No Swap
3 4 7 6 5
3rd iteration:
Smallest = 7
6 < 7, smallest = 6
5 < 6, smallest = 5
Swap 7 and 5
3 4 5 6 7
4th iteration:
Smallest = 6
6< 7, smallest = 6
No Swap
3 4 5 6 7
Finally, the sorted list is:
3 4 5 6 7
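A minimal C sketch of the algorithm above (0-indexed, names illustrative):

/* Selection sort: each pass finds the smallest element in the
   unsorted suffix and makes a single swap to put it in place. */
void selection_sort(int A[], int n)
{
    for (int j = 0; j < n - 1; j++) {
        int smallest = j;                  /* index of smallest so far */
        for (int i = j + 1; i < n; i++)
            if (A[i] < A[smallest])
                smallest = i;
        int tmp = A[j];                    /* one swap per pass */
        A[j] = A[smallest];
        A[smallest] = tmp;
    }
}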
DIVIDE AND CONQUER
Divide and Conquer is an algorithmic pattern. In this pattern, the design is to take a
problem on a large input, break the input into smaller pieces, solve the problem on each of the
small pieces, and then merge the piecewise solutions into a global solution. This mechanism of
solving the problem is called the Divide & Conquer strategy.
A Divide and Conquer algorithm attacks a problem using the following three steps:
1. Divide: break the given problem into subproblems of the same kind.
2. Conquer: recursively solve these subproblems.
3. Combine: appropriately combine the answers to the subproblems.
Examples
The following computer algorithms are based on the Divide & Conquer approach:
1. Maximum and Minimum Problem
2. Binary Search
3. Sorting (merge sort, quick sort)
4. Towers of Hanoi.
There are two fundamentals of the Divide & Conquer strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: It is the formula that we generate from the given technique. After
generating the formula, we apply the D&C strategy, i.e. we break the problem recursively and solve
the broken subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we
need to know how long to keep applying it. The condition at which we stop our recursive
D&C steps is called the stopping condition.
Problem
Analyze the algorithm to find the maximum and minimum element from an
array.
Algorithm
MaxMin (a[1..n])
max ← a[1]
min ← a[1]
for i ← 2 to n do
  if a[i] > max then
    max ← a[i]
  if a[i] < min then
    min ← a[i]
return (max, min)
This makes one pass over the array and uses at most 2(n-1) comparisons, so it runs in O(n) time.
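A C sketch of this scan might look as follows. Since a C function returns a single value, the
two results are returned through pointer parameters (names illustrative):

/* Find both the maximum and minimum of a[0..n-1] in one scan,
   using at most 2(n-1) comparisons. */
void max_min(const int a[], int n, int *max, int *min)
{
    *max = a[0];
    *min = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] > *max)
            *max = a[i];
        if (a[i] < *min)
            *min = a[i];
    }
}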
BINARY SEARCH
Binary search is a fast search algorithm that works on the principle of divide and
conquer. It looks for a particular item by comparing the middle-most item of the
collection with the target. If a match occurs, then the index of the item is returned. If the middle
item is greater than the target, then the target is searched for in the sub-array to the left of the
middle item. Otherwise, it is searched for in the sub-array to the right of the middle item. This
process continues on the sub-array until the size of the sub-array reduces to zero.
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with an example. Suppose our sorted array is the following (locations 0
to 9), and we need to search for the location of the value 31:
10 14 19 26 27 31 33 35 42 44
First, low = 0 and high = 9, so mid = (0 + 9) / 2 = 4. The value stored at location 4 is 27, which
is not a match and is less than 31, so the value must lie in the upper part; low becomes 5.
Our new mid is (5 + 9) / 2 = 7. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is 35, which is not a match; rather, it is more than what we are
looking for. So, the value must be in the lower part from this location; high becomes 6.
Hence, we calculate the mid again. This time it is (5 + 6) / 2 = 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
We conclude that the target value 31 is stored at location 5. Binary search halves the searchable
items at each step and thus reduces the number of comparisons required to O(log2 n).
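A minimal iterative C sketch of this process (0-indexed, names illustrative); on the example
array above, binary_search(A, 10, 31) returns location 5:

/* Binary search on a sorted array: returns the location of
   target, or -1 if the sub-array size reduces to zero. */
int binary_search(const int A[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* middle-most item       */
        if (A[mid] == target)
            return mid;                    /* match found            */
        else if (A[mid] < target)
            low = mid + 1;                 /* search right sub-array */
        else
            high = mid - 1;                /* search left sub-array  */
    }
    return -1;
}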
MERGE SORT
Merge sort closely follows the Divide & Conquer paradigm.
Conceptually, it works as follows:
1. Divide: Divide the unsorted list into two sublists of about half the size.
2. Conquer: Sort each of the two sublists recursively until we have lists of length 1, in which
case the list items are returned.
3. Combine: Join the two sorted sublists back into one sorted list.
The main purpose is to sort the unsorted list in non-decreasing order.
Algorithm: MERGE-SORT (A, p, r)
1. if p < r
2.   then q ← (p + r)/2
3.     MERGE-SORT (A, p, q)
4.     MERGE-SORT (A, q+1, r)
5.     MERGE (A, p, q, r)
The following figure illustrates the dividing (splitting) procedure.
Function: MERGE (A, p, q, r)
1. n1 ← q - p + 1
2. n2 ← r - q
3. create arrays L[1..n1+1] and R[1..n2+1]
4. for i ← 1 to n1
5.   do L[i] ← A[p + i - 1]
6. for j ← 1 to n2
7.   do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13.   do if L[i] ≤ R[j]
14.     then A[k] ← L[i]
15.       i ← i + 1
16.     else A[k] ← R[j]
17.       j ← j + 1
In this method, we split the given list into two halves, recursively apply merge sort to each
half, and obtain sorted sublists which we finally combine into one sorted list.
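A compact C sketch of the whole procedure follows. Unlike the pseudocode, it merges into a
temporary buffer and copies leftovers explicitly instead of using ∞ sentinels (names
illustrative):

#include <string.h>    /* memcpy */

/* Merge the sorted ranges A[p..q] and A[q+1..r]. */
static void merge(int A[], int p, int q, int r)
{
    int tmp[r - p + 1];                 /* C99 variable-length array */
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)            /* take the smaller head */
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= q) tmp[k++] = A[i++];   /* copy any leftovers    */
    while (j <= r) tmp[k++] = A[j++];
    memcpy(&A[p], tmp, (r - p + 1) * sizeof(int));
}

/* Sort A[p..r]: divide, conquer each half, combine. */
void merge_sort(int A[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}

Calling merge_sort(A, 0, n - 1) sorts the whole array.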
Quick Sort
It is an algorithm of Divide & Conquer type.
Divide: Rearrange the elements and split the array into two subarrays and an element in between,
such that each element in the left subarray is less than or equal to the middle element and each
element in the right subarray is greater than the middle element.
Conquer: Recursively sort the two subarrays.
Combine: Combine the already sorted subarrays.
Algorithm
QUICKSORT (array A, int m, int n)
1 if (n > m)
2 then
3 i ← a random index from [m,n]
4 swap A [i] with A[m]
5 o ← PARTITION (A, m, n)
6 QUICKSORT (A, m, o - 1)
7 QUICKSORT (A, o + 1, n)
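The PARTITION routine called in step 5 is not spelled out in these notes. One common choice is
the Lomuto scheme, sketched below in C (0-indexed, with A[m] taken as the pivot after the random
swap in step 4; the name and indexing are illustrative):

/* Lomuto partition: puts the pivot A[m] into its final position o
   and returns o. Elements less than the pivot end up to its left,
   the rest to its right. */
int partition(int A[], int m, int n)
{
    int pivot = A[m];
    int o = m;                        /* boundary of the "< pivot" region */
    for (int i = m + 1; i <= n; i++) {
        if (A[i] < pivot) {
            o = o + 1;
            int tmp = A[o]; A[o] = A[i]; A[i] = tmp;
        }
    }
    int tmp = A[m]; A[m] = A[o]; A[o] = tmp;   /* pivot into place */
    return o;
}

With this partition in place, QUICKSORT (A, 0, n-1) sorts an n-element array.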
Example of Quick Sort
44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.
Comparing 44 to the right-side elements: if a right-side element is smaller than 44, then swap.
As 22 is smaller than 44, swap them:
22 33 11 55 77 90 40 60 99 44 88
Now comparing 44 to the left-side elements: if a left-side element is greater than 44, then swap.
As 55 is greater than 44, swap them:
22 33 11 44 77 90 40 60 99 55 88
Recursively repeating steps 1 and 2: scanning from the right, 40 is smaller than 44, so swap them:
22 33 11 40 77 90 44 60 99 55 88
Scanning from the left, 77 is greater than 44, so swap with 77:
22 33 11 40 44 90 77 60 99 55 88
Now, every element on the right side of 44 is greater than it, and every element on the left side
is smaller than it, so 44 is in its final position. We now get two sublists:
22 33 11 40 and 90 77 60 99 55 88
These sublists are sorted by the same process as above, and the sorted sublists are then placed
side by side with the pivot between them.
Merging the sorted sublists with the pivot gives the final sorted list:
11 22 33 40 44 55 60 77 88 90 99