KCS503: Design and Analysis of Algorithm
Unit-01
Instructor: Md. Shahid
Introduction: Algorithms, analyzing algorithms,
complexity of algorithms, growth of functions,
performance measurements, sorting and order statistics:
Shell sort, Quick sort, Merge sort, Heap sort, comparison
of sorting algorithms, sorting in linear time.
Algorithm— It is a finite sequence of well-defined steps to
solve a computational problem.
Program— A program, on the other hand, is a concrete
implementation of an algorithm using a
programming language.
Properties of Algorithm
➢ It should terminate after a finite time.
➢ It should produce at least one output.
➢ It should take zero or more inputs externally.
➢ It should be deterministic (unambiguous).
➢ It should be language independent.
Example to differentiate between algorithm and program
Add()
{
1. input two numbers a and b
2. sum a and b and store the result in c
3. return c
}
The above example is an algorithm, as it satisfies the properties listed above.
while(1)
{
    printf("MIET");
}
The above example is not an algorithm, as it will never
terminate, though it is a valid program.
The main algorithm design techniques
1. Divide and conquer technique
2. Greedy technique
3. Dynamic programming
4. Branch and bound
5. Randomized
6. Backtracking
Note— The most basic approach to designing algorithms is the
brute force technique, where one attempts all possible solutions
to a problem and opts for the successful one. All computational
problems can be solved through the brute force method, though
often not achieving noteworthy efficiency in terms of space and
time complexity.
For example, searching for an element in an array by examining each
element in turn (linear search) is a brute-force approach, even when the
array is sorted and a faster method exists.
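A minimal sketch of this brute-force linear search in C (the function name and test values are illustrative):

#include <stdio.h>

/* Scan every position until the key is found: the brute-force approach. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;      /* found: return the index */
    return -1;             /* not found */
}

int main(void)
{
    int a[] = {2, 5, 7, 9, 12};
    printf("%d\n", linear_search(a, 5, 9));   /* prints 3 */
    return 0;
}

In the worst case every element is examined, so the running time is O(n), even though the sorted order would allow an O(log n) binary search.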
Steps required to design an algorithm
1.Problem Definition: Clearly understand the problem you need to solve.
Define the input, output, constraints, and objectives of the algorithm.
2.Design Algorithm: Choose an appropriate algorithmic technique based
on the nature of the problem, such as greedy, divide and conquer,
dynamic programming, etc.
3.Draw Flowchart: Create a visual representation of your algorithm using
a flowchart. The flowchart helps to visualize the logical flow of steps.
4.Validation: Mentally or manually walk through your algorithm with
various inputs to verify its correctness. Ensure it produces the expected
results.
5.Analyze the Algorithm: Evaluate the efficiency of the algorithm in
terms of time complexity (how long it takes to run) and space complexity
(how much memory it uses).
6.Implementation (Coding): Translate your algorithm into actual code
using a programming language of your choice. Write clean, well-
organized code that follows best practices.
Q. Define 'algorithm,' discuss its main properties, and outline the steps
for designing it.
Analysis of Algorithms
The efficiency of an algorithm can be analyzed at two different stages:
before implementation (A Priori Analysis) and after implementation (A
Posteriori Analysis).
A Priori Analysis— This is a theoretical analysis of an algorithm's
efficiency before it's actually implemented. The analysis assumes that
certain factors, such as processor speed and memory, remain constant
and do not affect the algorithm's performance. It involves evaluating an
algorithm based on its mathematical characteristics, such as time
complexity (Big O notation) and space complexity. It provides insights into
how an algorithm will perform under ideal conditions and helps in
comparing different algorithms in terms of their theoretical efficiency.
A Posteriori Analysis— This is an empirical analysis that occurs after an
algorithm has been implemented in a programming language and
executed on an actual computer. The algorithm is tested and executed on
a target machine, and actual statistics like running time and space
required are collected. A Posteriori Analysis provides a more realistic view
of how an algorithm performs in a real-world setting, considering
hardware characteristics, compiler optimizations, and other factors. It
helps validate the theoretical analysis and may reveal unexpected
performance issues or bottlenecks.
Note—Writing a computer program that handles a small data set is
entirely different from writing a program that takes a large amount of
input data. A program written to handle a large number of inputs
must be algorithmically efficient in order to produce the result in
reasonable time and space.
Asymptotic Analysis of Algorithms
Asymptotic analysis of algorithms is a method used to analyze the
efficiency and performance of algorithms as the input size grows towards
infinity. It focuses on understanding how an algorithm's performance
scales with larger inputs and provides a way to express the upper (worst-
case) and lower bounds (best-case) on its execution time or space
complexity. The primary goal of asymptotic analysis is to identify the
algorithm's growth rate, which helps in making comparisons between
different algorithms and determining their suitability for various
problem sizes.
Asymptotic Notations:
1. O [Big-oh] (upper bound)
2. Ω [Big-omega] (lower bound)
3. Θ [Big-theta] (tight bound)
4. o [little-oh] (upper bound that is not asymptotically tight)
5. ω [little-omega] (lower bound that is not asymptotically tight)
1. Big-oh Notation (O) : It is used to describe the upper bound or worst-
case performance of an algorithm in terms of its time complexity or
space complexity. It provides an estimate of the maximum amount
of time or space an algorithm can require for its execution as the
input size increases.
Note: Most of the time, we are interested in finding only the worst-
case scenario of an algorithm (worst case time complexity). Big O
notation allows for a high-level understanding of how an algorithm's
efficiency scales without getting into specific constants or lower-
order terms.
We say that
f(n) = O(g(n)) if and only if
f(n) ≤ c·g(n) for some constant c > 0 and all n ≥ n₀ ≥ 0.
Question: Find an upper bound for the function f(n) = 3n + 2.
Solution:
Steps:
1. By the definition of the upper bound, f(n) ≤ c·g(n).
2. f(n) = 3n + 2 (given)
3. We need to find c and g(n).
4. If we choose c = 5 and g(n) = n, then 3n + 2 ≤ 5·n for all n ≥ 1.
5. So for c = 5 and n₀ = 1 (starting value of "n"), f(n) ≤ c·g(n).
6. Therefore, f(n) = O(n).
Note: Other functions that grow faster than f(n), such as n², n³, n·log n,
etc., also serve as upper bounds for the function f(n). However, we
typically state only the tightest upper bound among all possible
candidates, as the looser upper bounds are not as useful.
1. Given f(n) = 2n² + 3n + 2 and g(n) = n², prove that f(n) = O(g(n)).
Solution:
Steps:
1. We have to show f(n) ≤ c·g(n), where c and n₀ are some positive
constants, for all n ≥ n₀.
2. Now, find values of c and n₀ such that inequality (1) holds:
2n² + 3n + 2 ≤ c·n² ………(1)
If we put c = 7 (note: any constant large enough to make the inequality hold will do), then
2n² + 3n + 2 ≤ 7n²
Now, put n = 1, which is n₀ (starting value for the input n):
7 ≤ 7 [True, and the inequality continues to hold for every n ≥ 1]
Hence, with c = 7 and n₀ = 1, f(n) = O(g(n)) for all n ≥ n₀.
2. Given f(n) = 5n² + 6n + 4 and g(n) = n², prove that f(n) is O(n²).
Solution:
f(n) will be O(n²) if and only if the following condition holds:
f(n) ≤ c·g(n), where c is some constant and n ≥ n₀ ≥ 0
5n² + 6n + 4 ≤ c·n²
If we put c = 15 and n₀ = 1, then we get
15 ≤ 15 (which is true, and the inequality holds for all n ≥ 1)
It means f(n) is O(n²), where c = 15 and n₀ = 1.
Note: We have to find c and n₀ (the starting value of the input n) to solve
such a question.
3. Solve the function f(n) = 2ⁿ + 6n² + 3n and find the big-oh (O)
notation.
Steps:
1. Identify the fastest-growing term of f(n); that term gives the big-oh bound.
2. Prove it using the definition f(n) ≤ c·g(n).
Solution:
The big-oh (upper bound) of f(n) = 2ⁿ + 6n² + 3n will be 2ⁿ iff
f(n) ≤ c·2ⁿ for some constant c > 0 and all n ≥ n₀ ≥ 0
2ⁿ + 6n² + 3n ≤ c·2ⁿ
If we put c = 11 and n₀ = 1, then we get
11 ≤ 22 (It is true.)
It means the big-oh of f(n) is O(2ⁿ), with c = 11 and n₀ = 1.
2. Big-omega Notation (Ω): It is used to describe the lower bound or best-
case performance of an algorithm in terms of its time complexity or
space complexity. It provides an estimate of the minimum amount of
time or space an algorithm can require for its execution as the input size
increases.
We say that
f(n) = Ω(g(n)) if and only if
f(n) ≥ c·g(n) for some constant c > 0 and all n ≥ n₀ ≥ 0.
For example :
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Ω(g(n)).
Solution
1. We have to show that f(n) ≥ c·g(n), where c and n₀ are some
positive constants, for all n ≥ n₀.
2. 3n + 2 ≥ c·n
3. Put c = 1:
4. 3n + 2 ≥ n
5. Put n = 1:
6. 5 ≥ 1 [True]
7. Hence, with c = 1 and n₀ = 1, f(n) = Ω(g(n)).
2. Solve the function f(n) = 3ⁿ + 5n² + 8n and find the big-omega
(lower bound) notation.
Solution:
Steps:
1. Identify the slowest-growing term of f(n). That term gives the
lower bound (best case) for the function f(n).
2. Use the definition to find c and n₀ to prove the claim.
f(n) = Ω(n) iff f(n) ≥ c·n, where c is some constant and n ≥ n₀ ≥ 0
3ⁿ + 5n² + 8n ≥ c·n
If we put c = 16 and n₀ = 1, then we get
16 ≥ 16 (holds good, and the inequality holds for all n ≥ 1)
It means the lower bound (Ω) for the given function f(n) = 3ⁿ + 5n² + 8n is
n.
3. Big-theta Notation (Θ): It combines the characteristics of both big-oh
and big-omega notations, as it represents both a lower and an upper
bound on an algorithm.
We say that
f(n) = Θ(g(n)) if and only if
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for some constants c₁, c₂ > 0 and all n ≥ n₀ ≥ 0.
For example
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Θ(g(n)).
Solution
1. We have to show that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for some constants
c₁, c₂ > 0 and all n ≥ n₀ ≥ 0.
2. c₁·g(n) ≤ f(n) ≤ c₂·g(n)
3. c₁·n ≤ 3n + 2 ≤ c₂·n
4. Put c₁ = 1, c₂ = 4 and n = 2: then 2 ≤ 8 ≤ 8 [True]
5. Hence, f(n) = Θ(g(n)), where c₁ = 1, c₂ = 4 and n₀ = 2.
2. Solve the function f(n) = 27n² + 16 and find the tight bound (Θ)
for it.
Solution:
1. When the upper bound (big-oh) and the lower bound (big-omega) of
f(n) are witnessed by the same g(n), we can call g(n) a theta (tight)
bound for f(n).
2. Use the definition c₁·g(n) ≤ f(n) ≤ c₂·g(n).
Let's check whether n² is a tight bound.
27n² + 16 ≤ c₂·n² (for the upper bound)
If we put c₂ = 43 and n₀ = 1, then we get
43 ≤ 43 (holds good; in fact 27n² + 16 ≤ 43n² for every n ≥ 1)
Now check the lower bound:
27n² + 16 ≥ c₁·n²
Here c₁ = 27 works for every n ≥ 1, since 27n² + 16 ≥ 27n² always holds.
Since 27n² ≤ 27n² + 16 ≤ 43n² for all n ≥ 1, both bounds are witnessed by
n², so n² is the tight bound for the function f(n) = 27n² + 16.
4. Little-oh [o]: We use o-notation to denote an upper bound that is
not asymptotically tight, whereas big-oh (the asymptotic upper bound) may
or may not be asymptotically tight.
We say that
f(n) = o(g(n)) if and only if
0 ≤ f(n) < c·g(n) for every constant c > 0 and all n ≥ n₀ > 0
or, equivalently,
lim (n→∞) f(n)/g(n) = 0
For example
1. Given f(n) = 2n and g(n) = n², prove that f(n) = o(g(n)).
Solution
lim (n→∞) 2n/n² = lim (n→∞) 2/n = 0
Hence, f(n) = o(g(n)).
5. Little-omega [ω]: We use ω-notation to denote a lower bound that is
not asymptotically tight.
We say that
f(n) = ω(g(n)) if and only if
0 ≤ c·g(n) < f(n) for every constant c > 0 and all n ≥ n₀ > 0
or, equivalently,
lim (n→∞) f(n)/g(n) = ∞
For example
1. Given f(n) = n²/2 and g(n) = n, prove that f(n) = ω(g(n)).
Solution
lim (n→∞) (n²/2)/n = lim (n→∞) n/2 = ∞
Hence, f(n) = ω(g(n)).
Question. For the factorial function f(n)= n!, we can define both upper
and lower bounds, but we cannot establish a tight Θ (theta) bound.
Justify your answer.
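A sketch of the justification: since n! = n·(n-1)···2·1, replacing every factor by n gives n! ≤ nⁿ, so n! = O(nⁿ); and since the n-1 factors from n down to 2 are each at least 2, n! ≥ 2ⁿ⁻¹ = (1/2)·2ⁿ, so n! = Ω(2ⁿ). However, neither bound is tight: n!/nⁿ → 0, so n! ≠ Θ(nⁿ), while n!/2ⁿ → ∞, so n! ≠ Θ(2ⁿ). The upper and lower bounds never meet on a common simple g(n), which is why we can state O and Ω bounds for n! but no tight Θ bound of this form.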
Question. Why should we do asymptotic analysis of algorithms?
It is crucial for several reasons:
Efficiency Comparison: It allows us to compare and evaluate different
algorithms based on their efficiency. By analyzing how an algorithm's
performance scales with input size, we can select the most suitable
algorithm for a given problem.
Algorithm Design: Asymptotic analysis guides the design of new
algorithms. It helps in making informed design choices to optimize
algorithms for various use cases.
Resource Management: Understanding the resource requirements of
algorithms as input size grows helps allocate computational resources
efficiently, preventing bottlenecks.
Scalability: It provides insights into how algorithms will perform as data
sizes increase, ensuring that systems can handle larger inputs efficiently.
Question. Order the following functions by their asymptotic growth, and
justify your answer: f1 = 2ⁿ, f2 = n^(3/2), f3 = n·log n, f4 = n^(log n).
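Assuming the functions are as written above (f1 = 2ⁿ, f2 = n^(3/2), f3 = n·log n, f4 = n^(log n)), a sketch of the answer: the ascending order of growth is
n·log n ≺ n^(3/2) ≺ n^(log n) ≺ 2ⁿ.
Justification: n^(3/2)/(n·log n) = √n/log n → ∞, so f3 grows more slowly than f2; n^(log n) = 2^(log²n) outgrows every fixed power of n because its exponent log n is unbounded; and 2ⁿ outgrows n^(log n) because n grows faster than log²n.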
Complexity of Algorithms
1. Time complexity
2. Space complexity
Algorithms can be broadly categorized into two main groups based on
their structure and approach:
1. Iterative algorithms ( having loop(s) )
2. Recursive algorithms (having recursion)
Note: In the ‘a priori’ analysis of algorithms, the RAM (Random Access
Machine) model is used for analyzing algorithms without running them
on a physical machine.
The RAM model has the following properties:
• A simple operation ( +, /, *, -, =, &&, ||, if ) takes one time step.
• Loops are composed of simple/primitive/basic operations.
• Memory is unlimited, and each memory access takes one time step.
Note: In the last step of the ‘a priori’ analysis of algorithms, asymptotic
notation is commonly used to characterize the time complexity of an
algorithm. Asymptotic notation provides a concise way to describe how
the performance of an algorithm scales as the input size becomes very
large. It allows us to focus on the most significant factors affecting the
algorithm's efficiency and ignore constant factors and lower-order
terms.
Q. What are the key characteristics of the RAM model?
Time complexity (running time)
The running time of the algorithm is the sum of running times for each
statement executed; a statement that takes Ci steps to execute and
executes “n” times will contribute (Ci * n) to the total running time.
Note: “Ci (i=1,2,3 …, n)” indicates constant unit of time.
A(n)
{                                Cost    Times
    int i;                       C1      1
    for (i = 1; i <= n; i++)     C2      n+1
        printf("MIET");          C3      n
}
T(n) = C1·1 + C2·(n+1) + C3·n
After dropping the constant factors and lower-order terms, we get the time
complexity in terms of n:
O(n); linear time
Q. With a suitable example, define the term "running time" of an algorithm.
Time Complexity of Iterative Algorithms
Note: If a program has neither loops nor recursion, then it
takes O(1), i.e., constant running time.
A()
{
    printf("MIET");
    printf("MIET");
    printf("MIET");
}
Pattern-01 One loop, and the increment/decrement is by 1
A(n)
{
    for (i = 1; i <= n; i++)    → the test runs n+1 times
        printf("MIET");         → the body runs n times
}
The loop condition is evaluated (n+1) times and the body executes n times,
so T(n) = O(n).
Pattern-02 One loop, and the increment/decrement is not by 1
In this case, we need to count the number of iterations carefully.
A(n)
{
    int i;
    for (i = 1; i < n; i = i * 2)
        printf("MIET");
}
As the loop variable is not incremented by one, we have to count carefully
how many times "MIET" is executed.
i takes the values 1, 2, 4, ..., n
that is, 2⁰, 2¹, 2², ..., 2ᵏ
After k iterations, "i" reaches "n":
2ᵏ = n
Convert this into logarithmic form [if a^b = c, we can write log_a c = b]:
k = log₂n
Hence the time complexity is O(log n).
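A quick empirical check of this pattern in C (a minimal sketch; n = 1024 is an arbitrary test value):

#include <stdio.h>

int main(void)
{
    int n = 1024, count = 0;
    for (int i = 1; i < n; i = i * 2)
        count++;               /* one "MIET" per iteration */
    printf("%d\n", count);     /* prints 10 = log2(1024) */
    return 0;
}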
Pattern-03 When there is no dependency between nested loops (just multiply
all the "n"s)
A(n)
{
    int i, j;
    for (i = 1 to n)
        for (j = 1 to n)
            printf("MIET");    // it will be printed n² times
}
Time complexity is O(n²).
A(n)
{
    int i, j, k;
    for (i = 1 to n)
        for (j = 1 to n)
            for (k = 1 to n)
                printf("MIET");    // it will be printed n³ times
}
Time complexity is O(n³).
Pattern-04 When there is a dependency between the loop and the
statements in the body of the loop
A(n)
{
1.  int i = 1, j = 1;
2.  while (j <= n)
    {
3.      i++;
4.      j = j + i;
5.      printf("MIET");    // we need to know how many times this is printed
    }
}
Solution:
We have to find the number of times "MIET" is printed in order to know
the time complexity of the above program. There is a dependency between
line 2 (the while condition) and line 4 (the value of "j", which in turn
depends on "i").
i = 1, 2, 3, 4, ..., k
j = 1, 3, 6, 10, ..., k(k+1)/2    [the sum of the first k natural numbers]
The loop stops when j exceeds n, i.e., when k(k+1)/2 > n, so k(k+1)/2 ≈ n.
k² ≈ n    [we keep only the dominant term]
k ≈ √n, so the time complexity is O(√n).
Pattern-05 When there is a dependency between loops (having more
than one loop)
Note: We have to unroll the loops in order to find the number of times
a particular statement gets executed.
A(n)
{
    int i, j, k;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                printf("MIET");
            }
        }
    }
}
There is a dependency between the second loop and the first; therefore,
we unroll the loops to count the number of times "MIET" is printed.
i = 1: the middle loop runs once,    contributing 1·100 prints
i = 2: the middle loop runs twice,   contributing 2·100 prints
i = 3: the middle loop runs 3 times, contributing 3·100 prints
...
i = n: the middle loop runs n times, contributing n·100 prints
Total = 1·100 + 2·100 + 3·100 + ... + n·100
      = 100(1 + 2 + 3 + ... + n)
      = 100·n(n+1)/2 = 50n² + 50n
Time complexity = O(n²)    [we remove constant factors and lower-order terms]
Q. Write a function to compute xⁿ in logarithmic time complexity using
an iterative approach.
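One possible iterative solution in C (a minimal sketch; identifiers are illustrative). The loop squares the base and halves the exponent in each step, so it runs O(log n) times:

long long power(long long x, unsigned int n)
{
    long long result = 1;
    while (n > 0) {
        if (n & 1)          /* the lowest bit of n is set */
            result *= x;    /* multiply the result by the current base */
        x *= x;             /* square the base */
        n >>= 1;            /* halve the exponent */
    }
    return result;
}

For example, power(2, 10) returns 1024 after only four squarings, instead of the roughly n multiplications a naive loop would need.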
Time complexity of recursive programs
To find out the time complexity of recursive programs, we have to write
a recurrence relation for the given program and then solve it using one
of the following methods:
1. Iteration or Back substitution method
2. Recursion-tree method
3. Master method
4. Forward substitution method (Substitution Method)
5. Changing variable method
Let’s learn how to write a recurrence relation for the given program
having a recursive call/function.
Note : Each recursive algorithm or program must have a stopping
condition (also known as an anchor condition or base condition) to halt
the recursive calls.
A(n)
{
    if (n > 0)    // stopping condition for the recursive calls
    {
        printf("MIET");
        A(n-1);    // the function calling itself (recursive call)
    }
}
We assume that T(n) is the total time taken to solve A(n), where n is the
input. This T(n) is split up among all the statements inside the function;
i.e., the time taken by all the instructions inside the function adds up
to T(n).
Note: "if" and "print" take a constant amount of time (one time step each)
as per the RAM model, and we can use either 1 or c to denote it. When the
if-condition becomes false, evaluating it still takes a constant amount of
time (one time step).
The recurrence relation for the above program is given below:
T(n) = T(n-1) + 1    when n > 0
     = 1             when n = 0 (stopping condition)
#Mixed (iterative + recursive)
A(n)
{
    if (n > 0)                      ..... 1
    {
        for (i = 1; i <= n; i++)    ..... n+1
        {
            printf("MIET");         ..... n
        }
        A(n-1);                     ..... T(n-1)
    }
}
T(n) = T(n-1) + n    when n > 0
     = 1             when n = 0
#Factorial of a number
fact(n)
{
    if (n <= 1)
        return 1;
    else
        return n * fact(n-1);    // here "*" takes a constant time step
}
Note: The multiplication and the other non-recursive instructions take a
constant amount of time c; only the recursive call fact(n-1) contributes
the T(n-1) term to the recurrence.
T(n) = T(n-1) + c    when n > 1
     = 1             when n ≤ 1
#Fibonacci number Fn
fib(n)
{
    if (n == 0 || n == 1)
        return n;
    else
        return fib(n-1) + fib(n-2);
}
T(n) = T(n-1) + T(n-2) + 1    when n > 1
     = 1 (one unit of time)   when n ≤ 1
1. Iteration method (backward substitution) for solving recurrences
The Iteration Method, also known as Backward Substitution, is a
technique used to solve recurrence relations and determine the time
complexity of algorithms. This method involves iteratively substituting a
recurrence relation into itself, moving backward towards the initial
conditions or base cases, until a pattern or closed-form solution
emerges.
T(n) = T(n-1) + 1    when n > 0
     = 1             when n = 0
Note— When solving a recurrence relation to determine the time
complexity, our goal is to keep expanding the leading T(…) term, which
represents a sub-problem, until the base condition can be used to
eliminate it.
T(n) = T(n-1) + 1 ………………….(1)
T(n-1)= T(n-2) + 1
T(n-2) = T(n-3) +1
Back substitute the value of T(n-1) into (1)
T(n) = [ T(n-2) + 1] + 1
T(n) = T(n-2) + 2 …………………(2)
Now, substitute the value of T(n-2) into (2)
T(n) = [T(n-3)+1]+2
T(n) = T(n-3)+3 …………………………(3)
.
.
.
= T(n-k)+k [ Assume n-k = 0 so, n= k ]
= T(n-n)+n
= T(0) + n [ T(0) = 1 is given ]
= 1+ n
T(n) = O(n)
T(n) = T(n-1) + n    when n > 0
     = 1             when n = 0
T(n) = T(n-1) + n …………………………(1)
T(n-1)= T(n-2)+ n-1
T(n-2)= T(n-3) + n-2
Substituting the value of T(n-1) into (1)
T(n)= [T(n-2)+(n-1)]+n
T(n)= T(n-2) + (n-1) + n …………………….(2)
Substituting the value of T(n-2) into (2)
T(n) = [ T(n-3) +(n-2) ] + (n-1)+n
T(n) = T(n-3) + (n-2) + (n-1) +n ………………….(3)
T(n) = T(n-k)+(n-(k-1))+ (n-(k-2))+. . . +(n-1) + n …………(4)
Assume that n-k = 0
Therefore n = k
In place of k, substitute “n” in the equation (4)
T(n) = T(n-n) + (n-(n-1)) + (n-(n-2)) + ... + (n-1) + n
T(n) = T(0) + 1 + 2 + ... + (n-1) + n
T(n) = 1 + n(n+1)/2
     = O(n²)
Solve the recurrence using back substitution method :
T(n) = 2T(n/2) +n [previous year question]
Base condition is not given in the question; therefore, we assume that
when n=1 , it takes 1 unit of time.
T(n) = 2T(n/2) + n ……………………………… (1)
T(n/2)= 2T(n/4) + n/2
T(n/4)= 2T(n/8) + n/4
Substituting the value of T(n/2) into (1), we get
T(n) = 2(2T(n/4) + n/2) + n
T(n) = 2²·T(n/4) + 2n ……………………(2)
Substituting the value of T(n/4) into (2), we get
T(n) = 2²(2T(n/8) + n/4) + 2n
     = 2³·T(n/8) + n + 2n
     = 2³·T(n/2³) + 3n ……………………(3)
.
= 2ᵏ·T(n/2ᵏ) + k·n ……………………(4)
Assume that n/2ᵏ = 1; then
2ᵏ = n
k = log₂n
T(n) = 2^(log₂n)·T(n/n) + n·log₂n
T(n) = n·T(1) + n·log n    [since 2^(log₂n) = n]
     = n + n·log n = O(n·log n)
2. Recursion-Tree Method for Solving Recurrences
Type- 01 (Reducing function)
Steps:
1. Make T(n) the root node.
2. Draw the tree for two to three levels to calculate the cost and
height.
3. If the cost at each level is the same, multiply it by the height of
the tree to determine the time complexity.
4. If the cost at each level is not the same, try to identify a sequence.
The sequence is typically in an arithmetic progression (A.P.) or
geometric progression (G.P.).
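As a worked example of these steps, take T(n) = T(n-1) + n with T(0) = 1. The root costs n, its only child costs n-1, the next level costs n-2, and so on down to the base case, so the tree is a chain of height n. The level costs form an A.P.:
n + (n-1) + (n-2) + ... + 1 = n(n+1)/2
Hence T(n) = O(n²).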
Type-2 ( Dividing function- when there is more than one sub-problem,
and the size of each sub-problem is the same.)
Steps:
1. Make the last term the root node.
2. Draw the tree for two to three levels to calculate the cost and
height.
3. If the cost at each level is the same, multiply it by the height of
the tree to determine the time complexity.
4. If the cost at each level is not the same, try to identify a sequence.
The sequence is typically in an arithmetic progression (A.P.) or
geometric progression (G.P.).
Note: If the size of sub-problem is only one, follow Type-1 approach
only.
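As a worked example of these steps, take T(n) = 2T(n/2) + n. The root costs n; level 1 has two sub-problems of size n/2 costing n/2 each (total n); level 2 has four sub-problems costing n/4 each (total n). Every level therefore costs n, and the problem size reaches 1 after log₂n halvings, so the height is log₂n. The total cost is n·log₂n plus the cost of the leaves, giving T(n) = O(n·log n).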
Q Explain Binary Search Algorithm. Also solve its recurrence relation.
It is a searching algorithm used on a sorted array; it works by repeatedly
dividing the search interval in half. The idea of binary search is to use
the information that the array is sorted to reduce the time complexity to
O(log n).
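A minimal iterative sketch in C (identifiers are illustrative):

int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* midpoint, written to avoid overflow */
        if (a[mid] == key)
            return mid;                     /* found */
        else if (a[mid] < key)
            low = mid + 1;                  /* discard the left half */
        else
            high = mid - 1;                 /* discard the right half */
    }
    return -1;                              /* not present */
}

Each step does constant work and halves the interval, so the recurrence is
T(n) = T(n/2) + 1, with T(1) = 1.
Back substitution gives T(n) = T(n/2ᵏ) + k; setting n/2ᵏ = 1 gives k = log₂n, so T(n) = O(log n).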
3. Master Theorem to solve recurrences
Note—Effective for the university exam
Question. State the Master Theorem and find the time complexity for the
recurrence relation T(n) = 9T(n/3) + n.
Solution— Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and
let T(n) be defined on the nonnegative integers by the recurrence
T(n) = a·T(n/b) + f(n)
where we interpret n/b to mean either the floor or the ceiling of n/b.
Then T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n)
for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Given: a = 9, b = 3 and f(n) = n.
Now we need to calculate n^(log_b a), as it is the common term in all 3 cases of
the Master Theorem.
n^(log_b a) = n^(log₃9) = n² (polynomially bigger than f(n), which is n)
Case 1 applies; therefore, T(n) = Θ(n²).
Question. State the Master Theorem and find the time complexity for the
recurrence relation T(n) = T(2n/3) + 1.
Solution— Given: a = 1, b = 3/2 and f(n) = 1.
Now we need to calculate n^(log_b a), as it is the common term in all 3 cases of
the Master Theorem.
n^(log_b a) = n^(log_{3/2} 1) = n⁰ = 1    [since log 1 = 0]
As n^(log_b a) is equal to f(n), we can apply the second case (tight bound):
T(n) = Θ(n^(log_b a) · log n)
T(n) = Θ(log n)
Question. State the Master Theorem and find the time complexity for the
recurrence relation T(n) = 3T(n/4) + n·log n.
Solution— Given: a = 3, b = 4 and f(n) = n·log n.
Now we need to calculate n^(log_b a), as it is the common term in all 3 cases of
the Master Theorem.
n^(log_b a) = n^(log₄3) ≈ n^0.793
Since n·log n = Ω(n^(log₄3 + ε)), for instance with ε ≈ 0.2,
Case 3 applies if we can also show the regularity condition
a·f(n/b) ≤ c·f(n):
3·(n/4)·log(n/4) ≤ (3/4)·n·log n, which holds for c = 3/4.
By Case 3, T(n) = Θ(n·log n).
Note—Effective for the competitive exam
4. Substitution method for solving recurrences [ Forward
Substitution method]
Q. Solve by substitution method (forward substitution method):
a. T(n) = n·T(n-1) if n > 1; T(1) = 1 …… [A]
Solution:
Step 1: Guess the solution.
T(n) = O(nⁿ) ……. [1]    [You can easily obtain this guess using the iteration method.]
Step 2: Now we have to prove that our guess is correct using
mathematical induction.
T(n) ≤ c·nⁿ from equation [1]
Now, put n = 1 in equation [1]:
T(1) ≤ c·1
1 ≤ c·1    [true for c ≥ 1, n₀ = 1]
Assume the bound holds for 1, 2, 3, ..., k:
T(k) ≤ c·kᵏ    [1 ≤ k < n]
Moving forward from 1 towards n, at some point k = n-1:
T(n-1) ≤ c·(n-1)^(n-1) …………………… [2]
Now, put the value of T(n-1) into equation [A]:
T(n) ≤ n · c·(n-1)^(n-1)
     ≤ c · n · n^(n-1)    [since (n-1)^(n-1) ≤ n^(n-1)]
     = c·nⁿ
Hence, T(n) = O(nⁿ). Proved.
Sorting Algorithms and their Analysis
Shell-Sort Algorithm
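A minimal C sketch of Shell sort, using Shell's original gap sequence n/2, n/4, ..., 1 (the gap choice is an assumption for this sketch; other sequences give better bounds):

void shell_sort(int a[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {    /* shrink the gap each pass */
        /* gapped insertion sort: elements gap apart form a sub-list */
        for (int i = gap; i < n; i++) {
            int temp = a[i];
            int j;
            for (j = i; j >= gap && a[j - gap] > temp; j -= gap)
                a[j] = a[j - gap];                /* shift larger items right */
            a[j] = temp;
        }
    }
}

The final pass (gap = 1) is an ordinary insertion sort on an almost-sorted array. With these gaps the worst case is O(n²); the sort is in-place but not stable.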
Quick Sort Algorithm
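A minimal C sketch of quicksort with Lomuto partitioning, taking the last element as the pivot (the pivot choice is one common convention, assumed here):

void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

int partition(int a[], int low, int high)
{
    int pivot = a[high];                  /* last element as the pivot */
    int i = low - 1;                      /* boundary of the "<= pivot" region */
    for (int j = low; j < high; j++)
        if (a[j] <= pivot)
            swap(&a[++i], &a[j]);         /* grow the "<= pivot" region */
    swap(&a[i + 1], &a[high]);            /* put the pivot in its final place */
    return i + 1;
}

void quick_sort(int a[], int low, int high)
{
    if (low < high) {
        int p = partition(a, low, high);
        quick_sort(a, low, p - 1);        /* sort the left part */
        quick_sort(a, p + 1, high);       /* sort the right part */
    }
}

Best/average case: T(n) = 2T(n/2) + n = O(n·log n); worst case (e.g., already-sorted input with this pivot choice): T(n) = T(n-1) + n = O(n²).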
Q. Apply merge sort on the array {9, 6, 5, 0, 8, 5} and also
write down its time complexity.
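One possible trace, splitting at the midpoint each time:
{9, 6, 5, 0, 8, 5}
→ split into {9, 6, 5} and {0, 8, 5}
→ {9, 6, 5} splits into {9, 6} and {5}; {9, 6} merges to {6, 9}; merging {6, 9} with {5} gives {5, 6, 9}
→ {0, 8, 5} splits into {0, 8} and {5}; {0, 8} merges to {0, 8}; merging {0, 8} with {5} gives {0, 5, 8}
→ the final merge of {5, 6, 9} and {0, 5, 8} gives {0, 5, 5, 6, 8, 9}
Time complexity: T(n) = 2T(n/2) + n = O(n·log n) in the best, average, and worst cases.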
Heap Sort Algorithm
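A minimal C sketch of heap sort (identifiers are illustrative): build a max-heap, then repeatedly move the root (the maximum) to the end of the array and restore the heap property.

void max_heapify(int a[], int n, int i)
{
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;   /* children of node i */
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        max_heapify(a, n, largest);                  /* fix the affected subtree */
    }
}

void heap_sort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)     /* build the max-heap: O(n) */
        max_heapify(a, n, i);
    for (int i = n - 1; i > 0; i--) {
        int t = a[0]; a[0] = a[i]; a[i] = t; /* move the max to the end */
        max_heapify(a, i, 0);                /* restore the heap: O(log n) */
    }
}

Total: O(n) + (n-1)·O(log n) = O(n·log n) in every case; heap sort is in-place but not stable.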
Q What do you understand by a stable sort? Name two stable sort
algorithms.
A sorting algorithm is said to be stable if two objects having equal keys appear in
the same order in sorted output as they appear in the input data set. For
example, Insertion and Counting sorts.
Q. Define in-place sorting algorithm.
It is a type of sorting algorithm that rearranges the elements of an array or list
without requiring additional memory space proportional to the size of the input.
Q Describe the difference between the average-case and the worst-case analysis
of algorithm, and give an example of an algorithm whose average-case running
time is different from worst case analysis.
Q. Compare sorting algorithms in a tabular form.
Sorting in linear time O(n)
[Non-comparison-based sorting algorithms]
Non-comparison-based sorting algorithms do not rely on pairwise comparisons
between elements to determine their relative order. Instead, they exploit
properties of the input data, such as the range of values, to achieve efficient
sorting. These algorithms are often used when the range of possible input values is
known and limited, making them more efficient than comparison-based algorithms
in certain scenarios. Here are some common non-comparison-based sorting
algorithms:
1. Counting sort
2. Radix sort
3. Bucket sort
Counting-Sort Algorithm
Counting Sort assumes that each of the "n" input elements is an integer in the range
from 0 to "k," where "k" is an integer representing the range of values.
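A minimal C sketch following this scheme (CLRS-style), assuming all keys lie in 0..k:

void counting_sort(const int A[], int B[], int n, int k)
{
    int C[k + 1];                                  /* C99 variable-length array */
    for (int i = 0; i <= k; i++) C[i] = 0;
    for (int j = 0; j < n; j++) C[A[j]]++;         /* C[i] = number of keys equal to i */
    for (int i = 1; i <= k; i++) C[i] += C[i - 1]; /* C[i] = number of keys <= i */
    for (int j = n - 1; j >= 0; j--) {             /* right-to-left scan keeps it stable */
        B[C[A[j]] - 1] = A[j];                     /* place A[j] in its final slot */
        C[A[j]]--;
    }
}

Time O(n + k) and space O(n + k), which is linear when k = O(n).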
Q. Write down the Counting-Sort algorithm and illustrate the
operation of Counting Sort on the array A = {6, 4, 8, 4, 5, 1}.
Solution:
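With A = {6, 4, 8, 4, 5, 1}, n = 6 and k = 8, a sketch of the standard steps:
1. Count each key (indices 0..8): C = [0, 1, 0, 0, 2, 1, 1, 0, 1].
2. Take prefix sums, so that C[i] = number of keys ≤ i: C = [0, 1, 1, 1, 3, 4, 5, 5, 6].
3. Scan A from right to left, placing each A[j] at output position C[A[j]] and then decrementing C[A[j]]:
   1 → B[1], 5 → B[4], 4 → B[3], 8 → B[6], 4 → B[2], 6 → B[5] (1-based positions).
Output: B = {1, 4, 4, 5, 6, 8}. The right-to-left scan keeps equal keys (the two 4s) in their original relative order, which is why counting sort is stable.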
Radix-Sort Algorithm
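Radix sort sorts d-digit keys by running a stable sort (usually counting sort) once per digit, from the least significant digit to the most significant. A minimal C sketch for non-negative integers (identifiers are illustrative):

void counting_sort_by_digit(int a[], int n, int exp)
{
    int out[n], C[10] = {0};                      /* one counter per decimal digit */
    for (int j = 0; j < n; j++) C[(a[j] / exp) % 10]++;
    for (int i = 1; i < 10; i++) C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--)              /* right-to-left: stable */
        out[--C[(a[j] / exp) % 10]] = a[j];
    for (int j = 0; j < n; j++) a[j] = out[j];
}

void radix_sort(int a[], int n, int max)          /* max = largest key in a[] */
{
    for (int exp = 1; max / exp > 0; exp *= 10)   /* one pass per decimal digit */
        counting_sort_by_digit(a, n, exp);
}

Time O(d·(n + 10)) = O(d·n) for d-digit keys; the stability of the per-digit sort is what makes the overall result correct.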
Bucket-Sort Algorithm
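Bucket sort assumes the n inputs are drawn uniformly from a known range (classically [0, 1)). It scatters the elements into n buckets, sorts each bucket (typically with insertion sort), and concatenates the buckets. A minimal C sketch under that assumption:

void bucket_sort(float a[], int n)
{
    float b[n][n];                        /* bucket i holds keys in [i/n, (i+1)/n) */
    int cnt[n];
    for (int i = 0; i < n; i++) cnt[i] = 0;
    for (int j = 0; j < n; j++) {         /* scatter into buckets */
        int i = (int)(n * a[j]);
        b[i][cnt[i]++] = a[j];
    }
    int k = 0;
    for (int i = 0; i < n; i++) {         /* insertion-sort each bucket, then concatenate */
        for (int p = 1; p < cnt[i]; p++) {
            float key = b[i][p];
            int q = p - 1;
            while (q >= 0 && b[i][q] > key) { b[i][q + 1] = b[i][q]; q--; }
            b[i][q + 1] = key;
        }
        for (int p = 0; p < cnt[i]; p++) a[k++] = b[i][p];
    }
}

Expected time is O(n) for uniformly distributed input; the worst case is O(n²) if all keys land in a single bucket. (The n-by-n bucket array is a simplification for this sketch; linked-list buckets are the usual choice.)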