Complexity

Analysis of Algorithms: Efficiency

The Analysis Framework
There are two kinds of efficiency: time efficiency and space efficiency.

• Time efficiency, also called time complexity, indicates how fast the algorithm in question runs.

• Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output.

• Note: Research experience has shown that for most problems, we can achieve much more spectacular progress in speed than in space.
Asymptotic Notations

Big-Oh (O)

f(n) = O(g(n)) iff there exist positive constants c and n0
such that f(n) ≤ c·g(n) for all n ≥ n0.

g(n) is an asymptotic upper bound for f(n).
Theorem [Big-Oh Ratio Theorem / Limit Theory]

Let f(n) and g(n) be such that lim n→∞ f(n)/g(n) exists.
Then f(n) = O(g(n)) iff lim n→∞ f(n)/g(n) = c for some finite constant c.

Check whether 10n^2 - 5n + 6 = O(n^2):

lim n→∞ (10n^2 - 5n + 6)/n^2 = lim n→∞ [10 - (5/n) + (6/n^2)] = 10,
a finite constant. Hence true.
Big-Oh (O) [Solve the following]
Show that p(n) = 17n^2·2^n is asymptotically bigger than q(n) = 100n·2^n + 33n.

f(n) = O(g(n)) iff f(n) ≤ c·g(n); then g(n) is asymptotically bigger than f(n).

Using Limit Theory:
lim n→∞ q(n)/p(n) = lim n→∞ (100n·2^n + 33n)/(17n^2·2^n)
                  = lim n→∞ [100/(17n) + 33/(17n·2^n)] = 0

Using the Big-Oh Ratio Theorem:
q(n) = O(p(n)), so p(n) is asymptotically bigger than q(n).
Omega Notation (Ω)

f(n) = Ω(g(n)) iff there exist positive constants c and n0
such that f(n) ≥ c·g(n) for all n ≥ n0.

g(n) is an asymptotic lower bound for f(n).
Examples

• 3n + 5 ≥ 3n for all n ≥ 1; therefore, 3n + 5 = Ω(n)

What about 3n - 5?
• 3n - 5 ≥ 3n - n = 2n for all n ≥ 5; therefore, 3n - 5 = Ω(n)
Theta Notation (θ)

f(n) = θ(g(n)) iff there exist positive constants c1, c2 and n0
such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

i.e., f(n) = θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
g(n) is an asymptotic lower and upper bound for f(n).
Problem: n^3 + n^2 log n = θ(?). Show using the formal definition of
asymptotic notation. Do not use the Big-Oh Ratio Theorem / Limit Theory.
Ans:

n^3 + n^2 log n ≤ n^3 + n^3 = 2n^3 for all n ≥ 1 (since log n ≤ n)
Therefore n^3 + n^2 log n = O(n^3)

Also, n^3 + n^2 log n ≥ n^3 for all n ≥ 1
Therefore, n^3 + n^2 log n = Ω(n^3)

Hence n^3 + n^2 log n = θ(n^3)


Basic operation

• Identify the most important operation of the algorithm, called the basic operation. It is usually the most time-consuming operation, located in the algorithm's innermost loop.
Running time of a Program

T(n) ≈ c_op · C(n)
Where
• T(n) is the running time of the program
• c_op is the execution time of the algorithm's basic operation on a particular computer
• C(n) is the number of times this operation needs to be executed for this algorithm.
Time complexity

• Operation Count Method
• Step Count Method
Operation Count Method

• Identify one or more key operations in the algorithm/program and count how many times each key operation is performed in the algorithm.
• Examples
  • Matrix multiplication: key operations are addition and multiplication.
  • Sorting: key operations are comparison and swapping.
  • Searching: key operation is comparison.
• The success of this method depends on our ability to identify the key operations.
Matrix Multiplication

void matrixMultiply(int **a, int **b, int **c, int m, int n, int p)
{ // Multiply the m x n matrix a and the n x p matrix b to get c.
  for (int i = 0; i < m; i++)
    for (int j = 0; j < p; j++)
    {
      int sum = 0;
      for (int k = 0; k < n; k++)
        sum += a[i][k] * b[k][j];
      c[i][j] = sum;
    }
}

• Key operations are addition and multiplication
• Additions: mnp times
• Multiplications: mnp times
• Complexity: O(mnp)
Polynomial Evaluation

double polyEval(double c[], int n, const double& x)
{ // Evaluate the degree n polynomial c[0] + c[1]x + ... + c[n]x^n
  // with coefficients c[0:n] at the point x.
  double y = 1, value = c[0];
  for (int i = 1; i <= n; i++)
  { // add in next term
    y *= x;
    value += y * c[i];
  }
  return value;
}

• Key operations are addition and multiplication
• No. of additions = n
• No. of multiplications = 2n
• Complexity = O(n)
Rank of an Element in a Sequence

• The rank of an element in a sequence is the number of smaller elements in the sequence plus the number of equal elements that appear to its left.
• Example: a = [4, 3, 9, 3, 7]
  Rank r = [2, 0, 4, 1, 3]
Rank of an Element in a Sequence

void rank(int a[], int n, int r[])
{ // Rank the n elements a[0:n-1].
  // Element ranks returned in r[0:n-1].
  for (int i = 0; i < n; i++)
    r[i] = 0; // initialize

  // compare all element pairs
  for (int i = 1; i < n; i++)
    for (int j = 0; j < i; j++)
      if (a[j] <= a[i]) r[i]++;
      else r[j]++;
}

Key Operation: Comparison
Number of times a[j] <= a[i] occurs:
i = 1: j takes the value 0              →  1 comparison
i = 2: j takes the values 0, 1          →  2 comparisons
…
i = n-1: j takes the values 0, 1, …, n-2 → n-1 comparisons

Total comparisons = 1 + 2 + … + (n-1) = n(n-1)/2 = O(n^2)
Step Count Method

Worst-Case, Best-Case, and Average-Case Efficiencies
Best case for sequential search: element found at the first position.
Worst case for sequential search: element not found / element found at the last position.
Average case for sequential search: element found at position j (i.e., a[j] = x).
• The standard assumptions are that
  • (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1)
  • (b) the probability of the first match occurring in the ith position of the list is the same for every i.

• In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is obviously i.
• In the case of an unsuccessful search, the number of comparisons will be n, with the probability of such a search being (1 - p). Therefore,

C_avg(n) = [1 + 2 + … + n]·(p/n) + n(1 - p) = p(n + 1)/2 + n(1 - p)
Space Complexity

Space complexity depends on:

1. Data Space:
  • Space needed by the constants and simple variables
  • Space needed by dynamically allocated objects

2. Environmental Stack Space: used to save information needed to resume execution of partially completed methods and functions.
  • Example: if function f() invokes function g(), then we must save at least a pointer to the instruction of f() to be executed when g() terminates.

3. Instruction Space: space needed to store the compiled version of the program instructions
  • Compiler dependent
Space Complexity Example: Sequential or Linear Search

int SequentialSearch(int a[], int n, const int& x)
{ // Search a[0], a[1], …, a[n-1] for x.
  int i;
  for (i = 0; i < n && a[i] != x; i++);
  if (i == n) return -1;
  else return i;
}

Space needed by parameters a, n and x = 4 + 4 + 4 = 12 bytes (integer type)
Space needed by variable i = 4 bytes
Space needed by constants 0 and -1 = 4 + 4 = 8 bytes
Total space needed = 24 bytes (constant); therefore, Sp(n) = 0
Complexity = O(1)
Space Complexity Example: Recursive Sequential Search

int RSequentialSearch(int a[], int n, const int& x)
{
  if (n < 1) return -1;
  if (a[n-1] == x) return n-1;
  return RSequentialSearch(a, n-1, x);
}

Ignore the constant space needed and find the environmental stack space.

Environment Stack Space:
RSequentialSearch(a, n, x)
RSequentialSearch(a, n-1, x)
…
RSequentialSearch(a, 1, x)
RSequentialSearch(a, 0, x)
Therefore, depth of recursion = n + 1

Each recursive call stores a, n, x and the return address on the stack.
Since all are integers, 4 × 4 = 16 bytes are needed per call.

Therefore, total environmental stack space = 16(n + 1) bytes = O(n).
Write a C++ function for finding the sum of array elements. Write both iterative and recursive programs and find their space complexity.

int sum(int a[], int n)
{
  int result = 0;
  for (int i = 0; i < n; i++)
    result = result + a[i];
  return result;
}
Sp(n) = 0 (constant space); complexity O(1)

int Rsum(int a[], int n)
{
  if (n > 0)
    return Rsum(a, n-1) + a[n-1];
  return 0;
}
Rsum(a, n), Rsum(a, n-1), …, Rsum(a, 1), Rsum(a, 0)
therefore, depth = n + 1
Sp(n) = 12(n + 1) bytes = O(n)
Data Space Examples

i. int *a;
   a = new int[n];
   Space needed: 4n bytes (n integers, each needing 4 bytes)

ii. int a[30];
    4 × 30 = 120 bytes

iii. int **A;
     A = new int*[m];
     for (int i = 0; i < m; i++)
       A[i] = new int[n];
     Space allotted for matrix A is 4mn bytes: O(mn) space complexity.
Asymptotic Order
Complexity of Recursive Algorithm

Step 1: Find the recurrence relation for the recursive algorithm.

Step 2: Solve the recurrence relation using the Masters Theorem / Back Substitution Method.
Recurrence relation
Back Substitution Method
Recursion Tree Method

In some situations there are two unequal partitions of the problem, and it is then difficult to find the solution of the recurrence relation using the Back Substitution Method. In such cases we can use the Recursion Tree Method.

When two equal partitions are there, we can still use the Recursion Tree Method, by writing 2T(n/2) as T(n/2) + T(n/2).
Masters Theorem

Masters Theorem (Type-A.1)

T(n) = aT(n/b) + f(n), where f(n) ∈ θ(n^d), d ≥ 0

Master Theorem: If a < b^d, T(n) ∈ θ(n^d)
                If a = b^d, T(n) ∈ θ(n^d log n)
                If a > b^d, T(n) ∈ θ(n^(log_b a))

Note: The same results hold with O instead of θ.

Examples: T(n) = 4T(n/2) + n    →  T(n) ∈ ?
          T(n) = 4T(n/2) + n^2  →  T(n) ∈ ?
          T(n) = 4T(n/2) + n^3  →  T(n) ∈ ?
Masters Theorem (Type-A.2)

Masters Theorem Type A.2 and Type B are not in the syllabus. But if you know them, you can use them to solve MCQs (if any).
Masters Theorem (Type-B)
Department of I&CT, MIT, Manipal
