
Performance analysis of an algorithm depends upon two factors: the amount of memory used and the amount of compute time consumed on any CPU. Formally, these are expressed as complexities in terms of:
 Space Complexity.
 Time Complexity.

Space Complexity of an algorithm is the amount of memory it needs to run to completion, i.e. from the start of execution to its termination. The space needed by any algorithm is the sum of the following components:
1. Fixed Component: This is independent of the characteristics of the inputs and outputs. This part includes the instruction space, space for simple variables, fixed-size component variables, and constants.
2. Variable Component: This consists of the space needed by component variables whose size depends on the particular problem instances (inputs/outputs) being solved, and the space needed by referenced variables; the recursion stack space is one of the most prominent components. This also includes data structure components such as linked lists, heaps, trees, graphs, etc.

Therefore the total space requirement of any algorithm 'A' can be expressed as

Space(A) = Fixed Component(A) + Variable Component(A)

Of the fixed and variable components, the variable part must be determined accurately, so that the actual space requirement can be identified for an algorithm 'A'. To identify the space complexity of any algorithm, the following steps can be followed:
1. Determine the variables which are instantiated with some default values.
2. Determine which instance characteristics should be used to measure the space requirement; this will be problem specific.
3. Generally the choices are limited to quantities related to the number and magnitudes of the inputs to and outputs from the algorithm.
4. Sometimes more complex measures of the interrelationships among the data items can be used.

Example: Space Complexity

Algorithm Sum(number, size)   // produces the sum of all numbers in the 'number' list
{
    result = 0.0;
    for count = 1 to size do   // repeats 1, 2, 3, ..., size times
        result = result + number[count];
    return result;
}
In the above example, when calculating the space complexity we look at both the fixed and variable components.

The fixed component consists of the 'result', 'count' and 'size' variables, therefore the total fixed space required is three (3) words.

The variable component is characterized by the value stored in the 'size' variable (suppose the value stored in 'size' is 'n'), because this decides the length of the 'number' list and also drives the for loop. Therefore, if the space used by 'size' is one word, the total space required by the 'number' variable is 'n' words.

Therefore the space complexity can be written as Space(Sum) = 3 + n.
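For comparison, here is a minimal Python sketch of the same procedure (the function name algorithm_sum is my own; the logic mirrors the pseudocode above). The comments mark which storage is fixed and which varies with the instance:

def algorithm_sum(number, size):
    # Fixed component: 'result', 'count' and 'size' -- 3 words,
    # independent of the input.
    # Variable component: the 'number' list itself -- n words when size == n.
    result = 0.0
    for count in range(size):              # count runs 0, 1, ..., size-1
        result = result + number[count]
    return result

print(algorithm_sum([2.0, 4.0, 6.0], 3))   # 12.0

Counting as above gives the same Space(Sum) = 3 + n.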

Time Complexity of an algorithm (basically, once converted to a program) is the amount of computer time it needs to run to completion. The time taken by a program is the sum of its compile time and its run/execution time. The compile time is independent of the instance (problem-specific) characteristics. The following factors affect the time complexity:
1. Characteristics of the compiler used to compile the program.
2. The computer machine on which the program is executed and physically clocked.
3. Multiuser execution system.
4. Number of program steps.

Therefore the time complexity again consists of two components, fixed (factor 1 only) and variable/instance (factors 2, 3 & 4), so for any algorithm 'A' it is given as:

Time(A) = Fixed Time(A) + Instance Time(A)

Here the number of steps is the most prominent instance characteristic, and the number of steps assigned to any program statement depends on the kind of statement, for example:
 comments count as zero steps,
 an assignment statement which does not involve any calls to other algorithms is counted as one step,
 for iterative statements we consider the step count only for the control part of the statement, etc.

Therefore, to calculate the total number of program steps we use the following procedure: we build a table in which we list the total number of steps contributed by each statement. This is usually arrived at by first determining the number of steps per execution of the statement and the frequency with which each statement is executed. This procedure is explained using an example.

Example: Time Complexity
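
The step table for Algorithm Sum is built as follows (steps per execution, s/e, times frequency gives the total per statement):

Statement                               s/e   Frequency    Total steps
Algorithm Sum(number, size)              0        -             0
{                                        0        -             0
  result = 0.0;                          1        1             1
  for count = 1 to size do               1     size + 1      size + 1
    result = result + number[count];     1     size          size
  return result;                         1        1             1
}                                        0        -             0
Total                                                        2·size + 3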


In the above example, if you analyze the frequency of "for count = 1 to size do" carefully, it is 'size + 1'; this is because the statement is executed one extra time, due to the condition check that fails and terminates the loop. Once the total steps are calculated, they represent the instance characteristics in the time complexity of the algorithm. Also, the compile time of an algorithm is constant every time we compile the same set of instructions, so we can treat this time as a constant 'C'. Therefore the time complexity can be expressed as: Time(Sum) = C + (2·size + 3)
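
To sanity-check the count, this small Python helper (hypothetical, not part of the original algorithm) just tallies the entries of the step table:

def sum_step_count(size):
    steps = 0
    steps += 1          # result = 0.0                 : 1 step, executed once
    steps += size + 1   # for-loop control             : size + 1 condition checks
    steps += size       # result = result + number[..] : size executions
    steps += 1          # return result                : 1 step, executed once
    return steps        # always equals 2*size + 3

assert sum_step_count(10) == 2 * 10 + 3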

In this way both the space complexity and the time complexity can be calculated. Together the two complexities comprise the performance analysis of an algorithm and should not be used independently. Both complexities also help in defining the parameters on whose basis we optimize algorithms.

Order of Growth: Whenever we work with algorithms, the most critical factor considered for them is their efficiency, and for this we are interested in analyzing how the running time increases as the input size increases. When two algorithms are compared with respect to their behavior for large input sizes, the measure is known as the order of growth. The order of growth can be estimated by taking into account the dominant term of the running-time expression: the dominant term overtakes the values of the other terms when the input size is large.

For example:
1. Let us consider T(n) = an + b where a > 0. If the input size 'n' is multiplied by 'k', then the dominant term in T(n), i.e. 'an', is also multiplied by 'k'. This produces T(kn) = k(an) + b, and we can say that T(n) has a linear order of growth.
2. Let us consider T(n) = an² + bn + c where a > 0. If the input size 'n' is multiplied by 'k', this produces T(kn) = (k²a)n² + (kb)n + c; thus the dominant term is multiplied by k², and we can say that T(n) has a quadratic order of growth. And so on for cubic, logarithmic, etc. (a numeric sketch follows below).
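
A small numeric sketch makes this scaling visible; the coefficients a, b, c below are arbitrary values chosen for illustration:

# Arbitrary illustrative coefficients (a > 0).
a, b, c = 3, 5, 7

def t_linear(n):            # T(n) = a*n + b
    return a * n + b

def t_quadratic(n):         # T(n) = a*n**2 + b*n + c
    return a * n ** 2 + b * n + c

n, k = 1_000_000, 10
print(t_linear(k * n) / t_linear(n))         # ~10   -> linear growth
print(t_quadratic(k * n) / t_quadratic(n))   # ~100  -> quadratic growth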

Thus, for efficiency with respect to large problems, the analysis is made for large values of the input size, and the dominant terms play an important role. The analysis conducted for large values of the input size is called asymptotic analysis.

In asymptotic analysis, an algorithm 'A1' is considered better than an algorithm 'A2' if the order of growth of the running time of 'A1' is lower than that of 'A2'. Therefore the asymptotic efficiency of algorithms is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.

Asymptotic Notations: For the efficiency analysis of every algorithm, we have three basic cases:
 Best Case
 Average Case
 Worst Case

And there are five notations used to represent them:

Asymptotic Notations
Execution time of an algorithm depends on the instruction set, processor
speed, disk I/O speed, etc. Hence, we estimate the efficiency of an algorithm
asymptotically.

The time function of an algorithm is represented by T(n), where n is the input size.

Different types of asymptotic notations are used to represent the complexity of an algorithm. The following asymptotic notations are used to calculate the running-time complexity of an algorithm.

 O − Big Oh Notation
 Ω − Big Omega Notation
 θ − Big Theta Notation
 o − Little Oh Notation
 ω − Little Omega Notation

Big Oh, O: Asymptotic Upper Bound

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It is the most commonly used notation. It measures the worst-case time complexity, or the longest amount of time an algorithm can possibly take to complete.

A function f(n) can be represented as the order of g(n), that is O(g(n)), if there exist a positive integer n₀ and a positive constant c such that

f(n) ⩽ c·g(n) for all n > n₀

Hence, the function g(n) is an upper bound for the function f(n), as c·g(n) grows at least as fast as f(n).
Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³,

f(n) ⩽ 5·g(n) for all values of n ⩾ 11

Hence, the complexity of f(n) can be represented as O(g(n)), i.e. O(n³).
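
The claimed bound can be checked numerically; this Python sketch simply tests f(n) ⩽ c·g(n) over a range of values (n₀ = 11 is where the inequality starts to hold for these coefficients):

f = lambda n: 4 * n ** 3 + 10 * n ** 2 + 5 * n + 1
g = lambda n: n ** 3
c, n0 = 5, 11

# f(n) <= c*g(n) holds for every tested n >= n0 ...
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
# ... and fails just below the threshold.
assert f(n0 - 1) > c * g(n0 - 1)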

Big Omega, Ω: Asymptotic Lower Bound

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, or the minimum amount of time an algorithm can possibly take to complete.

We say that f(n) = Ω(g(n)) when there exists a positive constant c such that f(n) ⩾ c·g(n) for all sufficiently large values of n. Here n is a positive integer. It means the function g is a lower bound for the function f; after a certain value of n, f never goes below c·g.
Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³, f(n) ⩾ 4·g(n) for all values of n > 0.

Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n³).

Theta, θ: Asymptotic Tight Bound

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. Some may confuse theta notation with average-case time complexity; while big-theta notation can often describe the average case quite accurately, other notations could be used as well.

We say that f(n) = θ(g(n)) when there exist constants c1 and c2 such that c1·g(n) ⩽ f(n) ⩽ c2·g(n) for all sufficiently large values of n. Here n is a positive integer.

This means the function g is a tight bound for the function f.


Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³, 4·g(n) ⩽ f(n) ⩽ 5·g(n) for all large values of n.

Hence, the complexity of f(n) can be represented as θ(g(n)), i.e. θ(n³).
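
The two-sided bound can be verified the same way as the Big Oh bound; with c1 = 4 and c2 = 5 the sandwich holds from the same threshold as in the Big Oh example:

f = lambda n: 4 * n ** 3 + 10 * n ** 2 + 5 * n + 1
g = lambda n: n ** 3
c1, c2, n0 = 4, 5, 11

# c1*g(n) <= f(n) <= c2*g(n) for every tested n >= n0.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))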

Little Oh, o

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not.

We use o-notation to denote an upper bound that is not asymptotically tight.

We formally define o(g(n)) (little-oh of g of n) as the set of functions f(n) = o(g(n)) such that for any positive constant c > 0 there exists a value n₀ > 0 such that 0 ⩽ f(n) < c·g(n) for all n > n₀.

Intuitively, in o-notation the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

lim n→∞ (f(n)/g(n)) = 0

Example

Let us consider the same function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n⁴,

lim n→∞ ((4n³ + 10n² + 5n + 1)/n⁴) = 0

Hence, the complexity of f(n) can be represented as o(g(n)), i.e. o(n⁴).
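
The limit can be approximated numerically; the ratio f(n)/g(n) shrinks toward zero as n grows, which is exactly what o-notation demands:

f = lambda n: 4 * n ** 3 + 10 * n ** 2 + 5 * n + 1
g = lambda n: n ** 4

for n in (10, 1_000, 100_000):
    print(n, f(n) / g(n))
# prints ratios ~0.505, ~0.00401, ~0.00004 -> f(n)/g(n) -> 0, so f(n) = o(n^4)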
