Complexity Analysis and Big O Notation
for (i=0; i<N; i++)
{
    for (j=0; j<N; j++)
    {
        for (k=0; k<N; k++)
            Statements;
    }
}
Time complexity will be cubic, O(n³).
The running time of the three nested loops is proportional to the cube of N.
If N triples, the running time increases by a factor of 27 (N*N*N).
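The triple loop above can be sketched in runnable form (Python used here for brevity; `cubic_ops` is an illustrative name) by counting how many times the innermost statement executes:

```python
def cubic_ops(N):
    """Count executions of the innermost statement for
    three nested loops, each iterating N times."""
    count = 0
    for i in range(N):
        for j in range(N):
            for k in range(N):
                count += 1  # stands in for "Statements;"
    return count

# Tripling N multiplies the work by 27 (= 3*3*3).
print(cubic_ops(2))   # 8
print(cubic_ops(6))   # 216
```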
Polynomial Time: O(nᵏ)
An algorithm is said to run in polynomial time, or O(nᵏ), if the number of steps
required to complete the algorithm for a given input is O(nᵏ) for some non-negative
integer k, where n is the size of the input.
Polynomial-time algorithms are said to be fast.
Selection sort takes polynomial time.
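Selection sort makes roughly n²/2 comparisons, which is why it runs in O(n²), polynomial time. A minimal Python sketch:

```python
def selection_sort(a):
    """Selection sort: O(n^2) comparisons, a polynomial-time algorithm."""
    a = list(a)
    n = len(a)
    for i in range(n):
        # Find the index of the smallest remaining element.
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        # Swap it into position i.
        a[i], a[smallest] = a[smallest], a[i]
    return a

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```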
Example: the naive recursive Fibonacci
F(4) = F(3) + F(2)
     = (F(2) + F(1)) + (F(1) + F(0))
and each F(2) in turn expands to F(1) + F(0).
This recursion tree keeps growing exponentially as we increase N.
Time complexity will be O(2ⁿ).
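The recursion tree above comes from the naive recursive Fibonacci. A Python sketch (the call counter is added here just to make the exponential growth visible):

```python
def fib(n, calls=None):
    """Naive recursive Fibonacci: O(2^n) time.
    Each call spawns two more, so the call tree roughly doubles with n."""
    if calls is not None:
        calls[0] += 1
    if n < 2:
        return n
    return fib(n - 1, calls) + fib(n - 2, calls)

# The number of calls nearly doubles with each increment of n.
for n in (10, 11, 12):
    counter = [0]
    fib(n, counter)
    print(n, counter[0])
```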
Order of growth:
The order of growth describes how the performance of an algorithm
increases or decreases as the input size increases.
O(1) remains constant as the input size increases or decreases.
O(n²) increases quadratically as the input size increases.
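A quick numeric illustration of these two growth orders (the step-count functions are assumptions, chosen only for comparison):

```python
def constant_steps(n):
    return 1          # O(1): unaffected by the input size

def quadratic_steps(n):
    return n * n      # O(n^2): grows with the square of the input size

# As n grows, O(1) stays flat while O(n^2) explodes.
for n in (10, 100, 1000):
    print(n, constant_steps(n), quadratic_steps(n))
```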
Asymptotic analysis
In mathematics, this kind of analysis is known as asymptotics.
It is a method of describing limiting behavior, i.e., an upper bound, a lower
bound, or an average.
It refers to computing the execution time (or running time) of an operation
in mathematical units of computation.
For example:
The running time of one operation may be computed as f(n), while for
another operation it is computed as g(n²).
This means that the running time of the first operation increases
linearly with n, while the running time of the second operation increases
quadratically as n increases.
The running times of both operations will be nearly the same if n is
significantly small.
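This point can be checked numerically, taking f(n) = n and g(n) = n² as the two running times (an illustrative choice):

```python
f = lambda n: n        # linear running time
g = lambda n: n * n    # quadratic running time

# For small n the two are close; for large n they diverge sharply.
for n in (1, 2, 1000):
    print(n, f(n), g(n))
```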
Three types of analysis
1. Worst-Case
2. Best-Case
3. Average-Case
Worst-Case
The maximum execution time required by an algorithm to complete a task is
called the worst case.
It gives the maximum number of steps or operations the algorithm can require.
For example:
In the linear search problem, the worst case happens when the element being
searched for is not present in the list (i.e., the array).
Therefore, the algorithm will compare all the elements of the array one by one.
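A minimal linear search sketch in Python that also counts comparisons, making the worst case visible:

```python
def linear_search(arr, target):
    """Return (index of target or -1, number of comparisons made)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
# Worst case: the target is absent, so all n elements are compared.
print(linear_search(arr, 42))  # (-1, 5)
```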
Best-Case
The minimum execution time required by an algorithm to complete a task is
called the best case.
It gives the minimum number of steps or operations the algorithm can require.
For example:
In the linear search problem, the best case happens when the element being
searched for is present at the first location.
The best case is of little practical use because it does not guarantee an
upper bound on the time the algorithm may need to complete a task.
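A self-contained sketch of the best case for linear search (counting only comparisons):

```python
def linear_search_count(arr, target):
    """Count the comparisons linear search makes before stopping."""
    count = 0
    for value in arr:
        count += 1
        if value == target:
            break
    return count

# Best case: the target sits at index 0, so only one comparison is made.
print(linear_search_count([7, 3, 9, 1, 5], 7))  # 1
```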
Average-Case
The average execution time required by an algorithm to complete a task is
called the average case.
It gives the expected number of steps or operations required by the algorithm:
take all possible inputs and calculate the computing time for each,
then sum all the calculated values and divide the sum by the total
number of inputs.
For example:
In the linear search problem, we sum the costs of all cases and divide by
(n + 1): the target can be at any of the n positions, or absent.
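The averaging over all n + 1 cases can be sketched directly (`comparisons` and `average_case` are illustrative names):

```python
def comparisons(n, position):
    """Comparisons linear search makes when the target is at
    `position` (0-based), or absent when position is None."""
    return n if position is None else position + 1

def average_case(n):
    # n "found" cases plus one "not found" case, averaged over n + 1.
    cases = [comparisons(n, p) for p in range(n)]
    cases.append(comparisons(n, None))
    return sum(cases) / (n + 1)

print(average_case(5))
```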
Asymptotic Notations
Asymptotic notations are ways to represent the running time of algorithms
O-notation
Used to express the upper bound of an algorithm's running time
For a given g(n), it is denoted O(g(n)), pronounced "big O of g of n"
f(n) = O(g(n)) if there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀
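For example, f(n) = 2n + 3 is O(n): one valid choice of witnesses (picked here for illustration) is c = 5 and n₀ = 1. A numeric spot-check:

```python
f = lambda n: 2 * n + 3
g = lambda n: n

c, n0 = 5, 1  # chosen witnesses: 2n + 3 <= 5n whenever n >= 1
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 1000))
print("f(n) = O(g(n)) holds over the sampled range")
```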
Ω-notation
Used to express the lower bound of an algorithm's running time
Used to measure the best-case time complexity of an algorithm
For a given g(n), it is denoted Ω(g(n)), pronounced "big omega of g of n"
f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀
Θ-notation
Used to express both the lower bound and the upper bound of an
algorithm's running time
For a given g(n), it is denoted Θ(g(n)), pronounced "big theta of g of n"
f(n) = Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀
The value of f(n) always lies between c₁·g(n) and c₂·g(n) for large
values of n (n ≥ n₀)
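For example, f(n) = 3n² + 2n is Θ(n²); the witnesses c₁ = 3, c₂ = 5, n₀ = 1 (an illustrative choice) satisfy the two-sided bound:

```python
f = lambda n: 3 * n * n + 2 * n
g = lambda n: n * n

c1, c2, n0 = 3, 5, 1  # chosen witnesses for the sandwich bound
assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
print("f(n) = Theta(g(n)) holds over the sampled range")
```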
Small o-notation
Small o-notation is the same as big O-notation except that big O is an
inclusive upper bound while small o is a strict upper bound
For a given g(n), it is denoted o(g(n)), pronounced "small o of g of n"
f(n) = o(g(n)) if for every positive constant c there exists an
n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀
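For example, f(n) = n is o(n²): for every positive c, however small, f(n) < c·g(n) once n exceeds 1/c. A numeric spot-check over a few sampled values of c:

```python
f = lambda n: n
g = lambda n: n * n

# Strict bound: for EVERY c > 0 there is an n0 beyond which f(n) < c*g(n).
for c in (1.0, 0.1, 0.01):
    n0 = int(1 / c) + 1        # n > 1/c  implies  n < c * n^2
    assert all(f(n) < c * g(n) for n in range(n0, n0 + 1000))
print("f(n) = o(g(n)): the strict bound holds for every sampled c")
```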
ω-notation
ω-notation is the same as Ω-notation except that Ω is an inclusive
lower bound while ω is a strict lower bound
For a given g(n), it is denoted ω(g(n)), pronounced "small omega of g of n"
f(n) = ω(g(n)) if for every positive constant c there exists an
n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀
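Symmetrically, f(n) = n² is ω(n): for every positive c, c·g(n) < f(n) once n exceeds c. A numeric spot-check:

```python
f = lambda n: n * n
g = lambda n: n

# Strict lower bound: for EVERY c > 0, c*g(n) < f(n) once n is large enough.
for c in (1, 10, 100):
    n0 = c + 1                 # n > c  implies  c*n < n^2
    assert all(0 <= c * g(n) < f(n) for n in range(n0, n0 + 1000))
print("f(n) = omega(g(n)): the strict lower bound holds for every sampled c")
```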