
Data Structure

COMPLEXITY ANALYSIS AND BIG O NOTATION


1. Algorithm Analysis

• It is the process of analyzing the problem-solving capability of an
algorithm in terms of the running time (or execution time) and memory
space it requires.
• Time Complexity
The running time of an algorithm, stated as a function relating the input
length to the number of steps performed, is known as time complexity.
• Space Complexity
The size of memory required by an algorithm is known as space complexity.
• For Example
We know that a set of numbers can be sorted using different algorithms. The
number of comparisons performed by one algorithm may differ from that of
another for the same input. Hence the time complexity of those algorithms may
differ. At the same time, we need to calculate the memory space required by
each algorithm.
• Stages
A priori analysis:- This is the theoretical analysis of an algorithm. Efficiency is
measured by assuming that all other factors (such as processor speed) are
constant and have no effect on the result.
A posteriori analysis:- This is the empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language, and the actual
running time and space required are measured.
2. Algorithm Complexity

• A measurement of the amount of running time and storage space an
algorithm requires for an input of a given size (n).
• An algorithm is efficient and fast if it takes less time to execute and
consumes less memory space.
• Time factor: measured by counting key operations, such as the comparisons in a
sorting algorithm.
• Space factor: measured by the maximum memory space required.
• The complexity of an algorithm is therefore measured in terms of space
complexity and time complexity.
3. Space Complexity

• The total amount of memory space required by the algorithm to
complete its execution.
• The storage space required varies with the size of the problem to be solved.
• A fixed amount of memory space is needed for the program code, simple
variables and constants used in the program.
• A variable amount of memory space is required by variables whose size
depends on the problem being solved. This space decreases or increases
when the program uses dynamic memory allocation (see the sketch below).
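As an illustration, here is a minimal C sketch (the function sum_of_squares and
its buffer are hypothetical, not from the original text) separating the fixed and
variable parts of an algorithm's space requirement:

#include <stdlib.h>

/* Fixed part: the program code and the scalar variables n, i, sum.
   Variable part: the n-element buffer allocated below, whose size
   grows with the problem -> O(n) auxiliary space overall. */
int sum_of_squares(int n)
{
    int *buf = malloc((size_t)n * sizeof *buf);   /* grows with n */
    int sum = 0;
    for (int i = 0; i < n; i++) {
        buf[i] = i * i;
        sum += buf[i];
    }
    free(buf);
    return sum;
}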
4. Time Complexity

• Represents the amount of time required by the algorithm to
complete its execution.
• Estimated by counting the number of elementary steps or
operations performed by the algorithm.
• The running time may vary with different types of input data.
• It is expressed as a numerical function T(n), where T(n) can be measured as the
number of steps, provided each step consumes constant time.
• For example
The addition of two n-bit integers takes n steps. Consequently, the total
computational time is T(n) = c*n, where c is the time taken by the
addition of two bits.
• On different computers the addition of two bits might take different times, say c1
and c2.
• Then T(n) = c1*n and T(n) = c2*n respectively; this shows that different machines
result in different slopes,
• but the time T(n) grows linearly as the input size increases.
• Time complexity is most commonly expressed using big O notation:
• O(n), O(n log n), O(n^a), O(2^n), where n is the input size in units of bits.
5. Constant Time
• An algorithm is said to run in constant time (also written as O(1) time)
• if it requires the same amount of time regardless of the input size.
• It means that the running time is bounded by a value that does not depend on
the size of the input.
• For example, accessing a single element in an array takes constant time, as only
one operation has to be performed to locate it.
Suppose there is a single statement like this:
statement;
The time complexity of this statement will be constant.
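For instance, a minimal C sketch (the function get_element is hypothetical) of
constant-time array access:

#include <stdio.h>

/* Accessing a single array element is O(1): the element's address
   is computed directly from the base pointer and the index, so the
   cost does not depend on the array's length. */
int get_element(const int arr[], int i)
{
    return arr[i];   /* one operation, regardless of input size */
}

int main(void)
{
    int data[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", get_element(data, 3));   /* prints 16 */
    return 0;
}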
6: Linear Time
• An algorithm is said to run in linear time, or O(n) time,
• if its execution time is directly proportional to the input size: the running time
grows linearly as the input size increases.
• For example, accessing every element of an array requires running time
proportional to the length of the array.
• Assume a single loop:
for (i = 0; i < N; i++)
{
    statements;   /* executed N times */
}
• The time complexity will be linear. The running time of the loop is directly
proportional to N: when N doubles, so does the running time.
7: Logarithmic Time
• An algorithm is said to run in logarithmic time, or O(log n),
• if its execution time is proportional to the logarithm (i.e. log(n)) of the input size.
• By default, the base of the log is 2.
• Every time the size of the input doubles,
• the algorithm performs only one more step.
• For example, the binary search algorithm takes logarithmic time:
while (low <= high)
{
    mid = (low + high) / 2;      /* halve the search range each pass */
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;                   /* target found at list[mid] */
}
8: Log-Linear Time: O(n log n)
• An algorithm is said to run in log-linear time when it performs a logarithmic-time
operation once per item, for example by calling a logarithmic-time algorithm
inside a linear loop (see the sketch after this list).
• It is also known as quasi-linear time.
• The merge sort algorithm takes log-linear time.
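A minimal sketch of this pattern in C, assuming a hypothetical sorted table and
reusing the binary search from the previous section: the loop runs n times and
each call costs O(log n), giving O(n log n) overall.

/* Hypothetical example: look up n query keys in a sorted table.
   Each binary_search call is O(log n); the loop runs n times,
   so the whole routine is O(n log n). */
int binary_search(const int list[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (target < list[mid])      high = mid - 1;
        else if (target > list[mid]) low = mid + 1;
        else                         return mid;    /* found */
    }
    return -1;                                      /* not found */
}

void lookup_all(const int table[], int n, const int queries[])
{
    for (int i = 0; i < n; i++)              /* n iterations ...      */
        binary_search(table, n, queries[i]); /* ... of O(log n) each  */
}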
9: Quadratic Time: O(n^2)
• An algorithm is said to run in quadratic time, or O(n^2), if its execution time is
proportional to the square of the input size.
• The bubble sort, selection sort and insertion sort algorithms take quadratic time.
Example
for (i = 0; i < N; i++)
{
    for (j = 0; j < N; j++)
    {
        statements;   /* executed N*N times */
    }
}
• The time complexity will be quadratic.
• The running time of the two nested loops is proportional to the square of N.
• When N doubles, the running time increases fourfold (N*N).
10: Cubic Time: O(n^3)
• An algorithm is said to run in cubic time, or O(n^3), if its execution time is
proportional to the cube of the input size.
• If the input doubles, the number of steps is multiplied by 8.
Example
for (i = 0; i < N; i++)
{
    for (j = 0; j < N; j++)
    {
        for (k = 0; k < N; k++)   /* innermost loop needs its own counter */
            statements;           /* executed N*N*N times */
    }
}
• The time complexity will be cubic.
• The running time of the three nested loops is proportional to the cube of N.
• When N doubles, the running time increases eightfold (N*N*N).
11: Polynomial Time: O(n^k)
• An algorithm is said to run in polynomial time, or O(n^k), if the number of steps
required to complete the algorithm for a given input is O(n^k) for some non-
negative integer k, where n is the size of the input.
• Polynomial-time algorithms are said to be fast.
• Selection sort takes polynomial time.
Example
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        /* some processing on the list */
The processing part is executed N*N times.
Such code has O(n^2) complexity, which is polynomial (here k = 2).
12: Exponential Time: O(2^n)
• An algorithm is said to run in exponential time, or O(2^n), if its running time is
proportional to 2^n.
• The amount of calculation it performs doubles each time the input grows by one.
• Some recursive functions, such as the naive Fibonacci function below, run in
exponential time:
int F(int n)
{
    if (n == 0)
        return 0;
    else if (n == 1)
        return 1;
    else
        return F(n - 1) + F(n - 2);   /* two recursive calls per level */
}
12: Exponential Time: O(2^n) (continued)
Expanding the recursion for F(4):
F(4)
= F(3) + F(2)
= (F(2) + F(1)) + (F(1) + F(0))
= ((F(1) + F(0)) + F(1)) + (F(1) + F(0))
• This call tree keeps growing exponentially as we increase n.
• The time complexity will be O(2^n).
Order of growth:
The order of growth describes how the
performance of the algorithm is going to
increase or decrease as the input increases.
O(1) remains constant on increasing or
decreasing the input size.
O(n^2) increases rapidly as the input size increases.
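For illustration, here are the exact step counts for a few common growth rates at
two input sizes; note how little doubling n affects the top rows and how violently
it affects the last one:

Growth rate    n = 8    n = 16
O(1)           1        1
O(log n)       3        4
O(n)           8        16
O(n log n)     24       64
O(n^2)         64       256
O(2^n)         256      65,536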
Asymptotic Analysis
• In mathematics, this kind of analysis is also known as asymptotics.
• It is a method of describing limiting behavior, i.e. an upper bound, a lower
bound or an average.
• It refers to computing the execution time (or running time) of any
operation in mathematical units of computation.
• For example
The running time of one operation may be computed as f(n) = n, while
for another operation it is computed as g(n) = n^2.
• This means that the running time of the first operation increases
linearly with the increase in n,
• while the running time of the second operation increases
quadratically as n increases.
• The running times of both operations will be nearly the same if n is
significantly small.
Three types of analysis
1. Worst-Case
2. Best-Case
3. Average-Case
Worst-Case
• The maximum execution time required by an algorithm to
complete the task is called the worst case.
• It is found by calculating the maximum number of steps or operations
required by the algorithm.
• For Example
In the linear search problem, the worst case happens when the
element to be searched is not present in the list (i.e. the array).
Therefore,
the algorithm will compare all the elements of the array one by
one (see the sketch below).
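A minimal linear search in C (a hypothetical sketch, not code from the original
text); in the worst case the target is absent and the loop performs all n
comparisons:

/* Linear search: returns the index of target, or -1 if absent.
   Worst case:  target not in the array -> n comparisons, O(n).
   Best case:   target at index 0       -> 1 comparison,  O(1). */
int linear_search(const int list[], int n, int target)
{
    for (int i = 0; i < n; i++)
        if (list[i] == target)
            return i;
    return -1;
}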
Best-Case
• The minimum execution time required by an algorithm to
complete the task is called the best case.
• It is found by calculating the minimum number of steps or operations
required by the algorithm.
• For Example
In the linear search problem, the best case happens when the
element to be searched is present at the first location.
The best case is of little practical use, because it does not guarantee
a maximum time within which the algorithm will complete a task.
Average-Case
• The average execution time required by an algorithm to
complete the task is called the average case.
• It is found by calculating the expected number of steps or operations
required by the algorithm:
• take all possible inputs and calculate the computing time for each
of them,
• then sum all the calculated values and divide the sum by the total
number of inputs.
• For Example
In the linear search problem, we sum the costs of all the cases and divide by
(n + 1), as worked out below.
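A quick worked derivation for linear search, assuming the target is equally
likely to be at any of the n positions or absent (n + 1 equally likely cases):
finding it at position i costs i comparisons, and the absent case costs n, so

average cost = (1 + 2 + ... + n + n) / (n + 1)
             = (n(n + 1)/2 + n) / (n + 1)
             ≈ n/2 for large n,

which is still O(n), but with roughly half the constant factor of the worst case.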
Asymptotic Notations
• Asymptotic notations are one of the ways to represent the
running time of algorithms.
• Sometimes we are interested in the worst-case running time only.
• Sometimes we wish to cover all inputs, not just the worst case.
• Asymptotic notations are well suited to characterizing running
time no matter what the input is.
• The following are the commonly used asymptotic notations for
representing the running time complexity of an algorithm:
1. Big O-notation
2. Ω-notation
3. Θ-notation
4. Small o-notation
5. ω-notation
Big O-Notation
• A mathematical notation.
• It describes the limiting behavior of a function when the
argument tends towards a particular value or infinity.
• It is used to express an upper bound on an algorithm's running time,
• and to measure the worst-case time complexity of an algorithm.
• It describes how the running time of an algorithm grows relative to
the input as the input gets larger.
• Different functions with the same growth rate may be
represented using the same O-notation.
• The letter O is used because the growth rate of a function is also
referred to as the order of the function.
Big O-Notation (definition)

f(n) = O(g(n)) means:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
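A quick worked check of this definition: for f(n) = 3n + 2 we may choose
g(n) = n, c = 4 and n0 = 2, since 3n + 2 ≤ 4n whenever n ≥ 2; hence
3n + 2 = O(n).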
Ω-notation
• Used to express a lower bound on an algorithm's running time.
• Used to measure the best-case time complexity of an algorithm.
• For a given g(n) it is denoted Ω(g(n)), pronounced "big
omega of g of n".
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.
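Continuing the same worked example: 3n + 2 ≥ 3n for all n ≥ 1, so with c = 3
and n0 = 1 we get 3n + 2 = Ω(n).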
Θ-notation
• Used to express both a lower bound and an upper bound on an
algorithm's running time.
• For a given g(n) it is denoted Θ(g(n)), pronounced "big theta
of g of n".
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
            0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.

• The value of f(n) always lies between c1·g(n) and c2·g(n) for large
values of n (n ≥ n0).
Θ-notation (graphical view): f(n) = Θ(g(n)), i.e. f(n) is sandwiched between
c1·g(n) and c2·g(n) beyond n0.
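Combining the two previous checks: 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so with
c1 = 3, c2 = 4 and n0 = 2 we get 3n + 2 = Θ(n).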
Small o-notation
• Small o-notation is the same as big O-notation, except that big O is an
inclusive upper bound while small o-notation is a strict upper
bound.
• For a given g(n) it is denoted o(g(n)), pronounced "small o of
g of n".

o(g(n)) = { f(n) : for any constant c > 0, there exists a constant
            n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
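For instance, 2n = o(n^2): for any constant c > 0 we have 2n < c·n^2 as soon
as n > 2/c. By contrast, 2n is O(n) but not o(n), since 2n < c·n fails for
every c ≤ 2.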
ω-notation
• ω-notation is the same as Ω-notation, except that Ω-notation is an
inclusive lower bound while ω-notation is a strict lower
bound.
• For a given g(n) it is denoted ω(g(n)), pronounced "small
omega of g of n".

ω(g(n)) = { f(n) : for any constant c > 0, there exists a constant
            n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
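Symmetrically, n^2/2 = ω(n): for any constant c > 0 we have c·n < n^2/2 as
soon as n > 2c. By contrast, n^2/2 is Ω(n^2) but not ω(n^2), since
c·n^2 < n^2/2 fails for every c ≥ 1/2.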
