DAA Study Material
BTCS 603
Design & Analysis of Algorithm
UNIT I
Index
Section 1: Introduction
1.1. Characteristics of an Algorithm
1.2. Algorithm Design
1.3. How algorithm is a technology
1.4. Algorithm Classification
1.4.1. Classification by implementation
1.4.1.1. Recursion or iteration
1.4.1.2. Logical
1.4.1.3. Serial or parallel or distributed
1.4.1.4. Deterministic or non-deterministic
1.4.1.5. Exact or approximate
1.4.2. Classification by Design Paradigm
1.4.2.1. Divide and conquer
1.4.2.2. Dynamic programming
1.4.2.3. The greedy method
1.4.2.4. Linear programming
1.4.2.5. Reduction
1.4.2.6. Search and enumeration
1.4.2.7. The probabilistic and heuristic paradigm
1.5. Recursive Algorithms
1.6. The Need for Analysis
1.7. Algorithmic efficiency
Section 2: Analyzing algorithms
2.1. Runtime Analysis of Algorithms
2.2. Random-access machine (RAM) model
2.3. How do we analyze an algorithm’s running time
2.4. Space and Time Complexity
2.4.1. Space Complexity
2.4.2. Time Complexity
2.5. Asymptotic Notations
2.5.1. Big ‘Oh’ Notation (O)
2.5.2. Omega Notation (Ω)
2.5.3. Theta Notation (Θ)
2.5.4. Little ‘Oh’ Notation (o)
2.5.5. Little Omega (ω)
Section 3: The divide-and-conquer approach
3.1. Divide And Conquer algorithm
3.2. Recurrence Relation for DAC algorithm
3.2.1. Binary Search
3.2.2. Maximum and Minimum
3.2.3. Merge Sort
3.2.4. Quick Sort
Section 4: Recurrence Methods
4.1. Substitution Method
4.2. Recursion tree Method
4.3. Master Method
Page 2
[DESIGN & ANALYSIS OF ALGORITHM]
CENTRAL UNIVERSITY OF KASHMIR Unit I
1. Introduction
An algorithm is a set of steps of operations to solve a problem performing calculation,
data processing, and automated reasoning tasks. An algorithm is an efficient method
that can be expressed within finite amount of time and space.
An algorithm is the best way to represent the solution of a particular problem in a very
simple and efficient way. If we have an algorithm for a specific problem, then we can
implement it in any programming language, meaning that the algorithm is
independent from any programming languages.
The word algorithm comes from the name of a Persian author, Abu Ja'far
Mohammed ibn Musa al Khwarizmi (c. 825 A.D.), who wrote a textbook on
mathematics. An algorithm is any well-defined computational procedure that takes
some value, or set of values, as input and produces some value, or set of values, as
output. An algorithm is thus a sequence of computational steps that transform the input
into the output.
We can also view an algorithm as a tool for solving a well-specified computational
problem. The statement of the problem specifies in general terms the desired
input/output relationship. The algorithm describes a specific computational procedure
for achieving that input/output relationship.
Example
Step 1: Fulfilling the pre-requisites: As discussed above, in order to write an
algorithm, its pre-requisites must be fulfilled.
• The problem that is to be solved by this algorithm: Add 3 numbers and print
their sum.
• The constraints of the problem that must be considered while solving the
problem: The numbers must contain only digits and no other characters.
• The input to be taken to solve the problem: The three numbers to be added.
• The output to be expected when the problem is solved: The sum of the
three numbers taken as the input.
• The solution to this problem, in the given constraints: The solution consists
of adding the 3 numbers. It can be done with the help of ‘+’ operator, or bit-wise,
or any other method.
Step 2: Designing the algorithm: Now let’s design the algorithm with the help of above
pre-requisites:
Algorithm to add 3 numbers and print their sum:
– START
– Declare 3 integer variables num1, num2 and num3.
– Take the three numbers, to be added, as inputs in variables num1, num2,
and num3 respectively.
– Declare an integer variable sum to store the resultant sum of the 3
numbers.
– Add the 3 numbers and store the result in the variable sum.
– Print the value of variable sum
– END
Step 3: Testing the algorithm by implementing it: In order to test the algorithm,
implement it in the C language, then test it for some inputs.
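For instance, a minimal C implementation might look like this (the function name sum3 is chosen here for illustration; it is not part of the original text):

```c
/* Step 2 of the algorithm as a C function: add the three numbers
   and return their sum. */
int sum3(int num1, int num2, int num3)
{
    int sum = num1 + num2 + num3;   /* add the 3 numbers, store in sum */
    return sum;                     /* the value to be printed */
}
```

A driver that reads num1, num2 and num3 with scanf and prints sum3(num1, num2, num3) with printf completes the test.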
As a concrete example, suppose computer A executes 10 billion instructions per second
while computer B executes only 10 million instructions per second, so that computer A
is 1000 times faster than computer B in raw computing power. To make the difference
even more dramatic, suppose that the world's craftiest programmer codes in machine
language for computer A, and the resulting code requires 2n^2 instructions to sort n
numbers. Suppose further that just an average programmer writes for computer B,
using a high-level language with an inefficient compiler, with the resulting code taking
50n lg n instructions. To sort 10 million numbers:
• Time taken by A = 2 × (10^7)^2 instructions / 10^10 instructions per second
  = 20,000 seconds (more than 5.5 hours)
• Time taken by B ≈ 50 × 10^7 × lg(10^7) instructions / 10^7 instructions per second
  ≈ 1163 seconds (less than 20 minutes)
works, it will be the fastest method. The most popular greedy algorithm is Kruskal's
algorithm for finding a minimal spanning tree.
1.4.2.4. Linear programming: When solving a problem using linear
programming, specific inequalities involving the inputs are found and then an attempt is
made to maximize (or minimize) some linear function of the inputs. Many problems
(such as the maximum flow for directed graphs) can be stated in a linear programming
way, and then be solved by a 'generic' algorithm such as the simplex algorithm.
1.4.2.5. Reduction: This technique involves solving a difficult problem by
transforming it into a better known problem for which we have (hopefully)
asymptotically optimal algorithms. The goal is to find a reducing algorithm whose
complexity is not dominated by the resulting reduced algorithm's. For example, one
selection algorithm for finding the median in an unsorted list involves first sorting the
list (the expensive portion) and then pulling out the middle element in the sorted list
(the cheap portion). This technique is also known as transform and conquer.
1.4.2.6. Search and enumeration: Many problems (such as playing chess) can be
modeled as problems on graphs. A graph exploration algorithm specifies rules for
moving around a graph and is useful for such problems. This category also includes
search algorithms, branch and bound enumeration and backtracking.
1.4.2.7. The probabilistic and heuristic paradigm: Algorithms belonging to this
class fit the definition of an algorithm more loosely.
Probabilistic algorithms are those that make some choices randomly (or pseudo-
randomly); for some problems, it can in fact be proven that the fastest solutions must
involve some randomness.
Genetic algorithms attempt to find solutions to problems by mimicking biological
evolutionary processes, with a cycle of random mutations yielding successive
generations of "solutions". Thus, they emulate reproduction and "survival of the fittest".
In genetic programming, this approach is extended to algorithms, by regarding the
algorithm itself as a "solution" to a problem.
Heuristic algorithms do not aim to find an optimal solution, but an approximate
solution, in situations where the time or resources for finding a perfect solution are
limited. Examples include local search and simulated annealing.
factorial(0)=> 1
factorial(3)
3 * factorial(2)
3 * 2 * factorial(1)
3 * 2 * 1 * factorial(0)
3*2*1*1
=> 6
This corresponds very closely to what actually happens on the execution stack in the
computer's memory.
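The trace above corresponds to a recursive implementation along these lines (a sketch; the section itself does not give the code):

```c
/* Recursive factorial matching the trace above:
   factorial(0) => 1, factorial(n) => n * factorial(n-1). */
long factorial(int n)
{
    if (n == 0)
        return 1;                  /* base case: factorial(0) => 1 */
    return n * factorial(n - 1);   /* recursive case, unwinds on the stack */
}
```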
Different algorithms for the same problem may differ greatly in how their running
times grow. At the same time, we need to calculate the memory space required by each
algorithm.
Analysis of algorithm is the process of analyzing the problem-solving capability of the
algorithm in terms of the time and size required (the size of memory for storage while
implementation). However, the main concern of analysis of algorithms is the required
time or performance. Generally, we perform the following types of analysis −
Worst-case − the maximum number of steps taken on any instance of size n.
Best-case − the minimum number of steps taken on any instance of size n.
Average case − the average number of steps taken over instances of size n.
Amortized − the cost of a sequence of operations applied to an input of size n,
averaged over the sequence.
To solve a problem, we need to consider time as well as space complexity, as the
program may run on a system where memory is limited but adequate time is available,
or vice versa. In this context, compare bubble sort and merge sort.
Bubble sort does not require additional memory, but merge sort requires additional
space. Though time complexity of bubble sort is higher compared to merge sort, we may
need to apply bubble sort if the program needs to run in an environment, where
memory is very limited.
2. Analyzing Algorithm
2.1. Runtime Analysis of Algorithms
In general, we measure and compare the worst-case theoretical running time
complexities of algorithms for performance analysis.
The fastest possible running time for any algorithm is O(1), commonly referred to
as Constant Running Time. In this case, the algorithm always takes the same amount of
time to execute, regardless of the input size. This is the ideal runtime for an algorithm,
but it’s rarely achievable.
In actual cases, the performance (runtime) of an algorithm depends on n, that is, the
size of the input or the number of operations required for each input item.
Algorithms can be classified as follows, from best to worst performance (running time
complexity):
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
We want to predict the resources that the algorithm requires, usually its running time.
In order to predict resource requirements, we need a computational model.
2.2. Random-access machine (RAM) model
• Instructions are executed one after another. No concurrent operations.
• It is too tedious to define each of the instructions and their associated time costs.
• Instead, we recognize that we will use instructions commonly found in real
computers:
• Arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling. Also shift
left/shift right (good for multiplying/dividing by 2^k).
• Data movement: load, store, copy.
• Control: conditional/unconditional branch, subroutine call and return.
Each of these instructions takes a constant amount of time.
Program 1.1
int add (int a, int b, int c)
{
return (a+b+c)/3;
}
Program 1.2
int Radd (int a[], int n)
{
    if (n > 0)
        return Radd(a, n-1) + a[n-1];
    else
        return 0;
}
Example 1.1
Refer Program 1.1
One word each for variables a, b, c. No instance characteristics. Hence Sp(TC) = 0.
Example 1.2
Program 1.3
Example 1.3
Refer Program 1.2
Instance characteristics depend on the value of n. Recursive stack space includes
space for formal parameters, local variables and the return address. So one word each
for a[], n, the return address and the return variable: each call needs 4 words. The
depth of recursion is governed by n, so the total recursive stack space needed is 4n.
Hence Sp(TC) = 4n.
1. Comments:
No executables, hence step count = 0
2. Declarative Statements:
Define or characterize variables and constants like (int , long, enum, …)
Statement enabling data types (class, struct, union, template)
Determine access statements ( public, private, protected, friend )
Character functions ( void, virtual )
All the above are non executables, hence step count = 0
3. Assignment statements: General form is <variable> = <expr>. Step count = step
count of <expr>, unless the size of <variable> is a function of instance characteristics,
e.g., a = b where a and b are structures. In that case, Step count = size of <variable> +
step count of <expr>.
4. Iterative Statements:
While <expr> do
Do .. While <expr>
Each execution of the loop condition contributes the step count of <expr>.
5. Switch Statements:
Switch (<expr>) {
Case cond1 : <statement1>
Case cond2 : <statement2>
.
.
Default : <statement>
}
6. If-else Statements:
If (<expr>) <statement1>;
Else <statement2>;
Step count of If and Else is the cost of <expr>.
7. Function invocation:
Every function invocation has Step count = 1, unless it has parameters passed by
value whose size depends on instance characteristics. If so, Step count is the sum of the
sizes of these values.
If function being invoked is recursive, consider the local variables also.
8. Function Statements:
Step count = 0, cost is already assigned to invoking statements.
Example 1.4
Refer Program 1.2
Introducing a counter for each executable line, we can rewrite the program as
int Radd (int a[], int n)
{
    count++; // for the if conditional
    if (n > 0)
    {
        count++; // for the return
        return Radd(a, n-1) + a[n-1];
    }
    else
    {
        count++; // for the return
        return 0;
    }
}
Case 1: n=0
tRadd = 2
Case 2: n>0
tRadd(n) = 2 + tRadd(n-1)
         = 2 + 2 + tRadd(n-2)
         = 2 * 2 + tRadd(n-2)
         .
         .
         = 2n + tRadd(0)
         = 2n + 2
Example 1.5
Program 1.4
int Madd (int a[][], int b[][], int c[][], int m, int n)
{
    for (int i=0; i<m; i++)
        for (int j=0; j<n; j++)
            c[i][j] = a[i][j] + b[i][j];
}
Introducing a counter for each executable line we can rewrite the program as
int Madd (int a[][], int b[][], int c[][], int m, int n)
{
    for (int i=0; i<m; i++)
    {
        count++; // for i
        for (int j=0; j<n; j++)
        {
            count++; // for j
            c[i][j] = a[i][j] + b[i][j];
            count++; // for assignment
        }
        count++; // for last j
    }
    count++; // for last i
}
Step count is 2mn + 2m +1.
Step count alone does not reflect the complexity of a statement; that is reflected in
steps per execution (s/e).
For example, consider two programs with complexities c1*n^2 + c2*n and c3*n
respectively. For small values of n, the comparison depends upon the values of c1, c2
and c3. But there will be an n beyond which c3*n is smaller than c1*n^2 + c2*n. This
value of n is called the break-even point. If this point is zero, c3*n is always faster (or
at least as fast).
Common asymptotic functions are given below.
Function    Name
1           Constant
log n       Logarithmic
n           Linear
n log n     n log n
n^2         Quadratic
n^3         Cubic
2^n         Exponential
n!          Factorial
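For reference, the asymptotic notations used in the examples below have the standard definitions:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ge 1 : 0 \le f(n) \le c\,g(n)\ \ \forall n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 : 0 \le c\,g(n) \le f(n)\ \ \forall n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```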
Linear Functions
Example 1.6
f(n) = 3n + 2
When n ≥ 2, 3n + 2 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 2
When n ≥ 1, 3n + 2 ≤ 3n + 2n = 5n
Hence f(n) = O(n), here c = 5 and n0 = 1
Hence we can have different (c, n0) pairs satisfying the definition for a given function.
Example 1.7
f(n) = 3n + 3
When n ≥ 3, 3n + 3 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 3
Example 1.8
f(n) = 100n + 6
When n ≥ 6, 100n + 6 ≤ 100n + n = 101n
Hence f(n) = O(n), here c = 101 and n0 = 6
Quadratic Functions
Example 1.9
f(n) = 10n^2 + 4n + 2
When n ≥ 2, 10n^2 + 4n + 2 ≤ 10n^2 + 5n
When n ≥ 5, 5n ≤ n^2, so 10n^2 + 4n + 2 ≤ 10n^2 + n^2 = 11n^2
Hence f(n) = O(n^2), here c = 11 and n0 = 5
Example 1.10
f(n) = 1000n^2 + 100n - 6
f(n) ≤ 1000n^2 + 100n for all values of n.
When n ≥ 100, 100n ≤ n^2, so f(n) ≤ 1000n^2 + n^2 = 1001n^2
Hence f(n) = O(n^2), here c = 1001 and n0 = 100
Exponential Functions
Example 1.11
f(n) = 6*2^n + n^2
When n ≥ 4, n^2 ≤ 2^n
So f(n) ≤ 6*2^n + 2^n = 7*2^n
Hence f(n) = O(2^n), here c = 7 and n0 = 4
Constant Functions
Example 1.12
f(n) = 10
f(n) = O(1), because f(n) ≤ 10*1
Example 1.13
f(n) = 3n + 2
3n + 2 > 3n for all n.
Hence f(n) = Ω(n)
Similarly we can solve all the examples specified under Big ‘Oh’.
Example 1.14
f(n) = 3n + 2
f(n) = Θ(n), because f(n) = O(n) and f(n) = Ω(n) for n ≥ 2.
Similarly we can solve all the examples specified under Big ‘Oh’.
The following are some standard algorithms that follow the Divide and Conquer approach.
1. Binary Search is a searching algorithm. In each step, the algorithm compares the
input element x with the value of the middle element in array. If the values
match, return the index of the middle. Otherwise, if x is less than the middle
element, then the algorithm recurs for left side of middle element, else recurs for
the right side of the middle element.
2. Quicksort is a sorting algorithm. The algorithm picks a pivot element,
rearranges the array elements in such a way that all elements smaller than the
picked pivot element move to left side of pivot, and all greater elements move to
right side. Finally, the algorithm recursively sorts the subarrays on left and right
of pivot element.
3. Merge Sort is also a sorting algorithm. The algorithm divides the array in two
halves, recursively sorts them and finally merges the two sorted halves.
4. Closest Pair of Points The problem is to find the closest pair of points in a set of
points in x-y plane. The problem can be solved in O(n^2) time by calculating
distances of every pair of points and comparing the distances to find the
minimum. The Divide and Conquer algorithm solves the problem in O(nLogn)
time.
5. Strassen’s Algorithm is an efficient algorithm to multiply two matrices. A
simple method to multiply two matrices needs 3 nested loops and is O(n^3).
Strassen’s algorithm multiplies two matrices in O(n^2.8074) time.
6. Cooley–Tukey Fast Fourier Transform (FFT) algorithm is the most common
algorithm for FFT. It is a divide and conquer algorithm which works in O(nlogn)
time.
7. Karatsuba algorithm for fast multiplication does multiplication of two n-
digit numbers in at most n^(log2 3) ≈ n^1.585 single-digit multiplications in
general (and exactly n^(log2 3) when n is a power of 2). It is therefore faster than
the classical algorithm, which requires n^2 single-digit products. If n = 2^10 = 1024,
in particular, the exact counts are 3^10 = 59,049 and (2^10)^2 = 1,048,576,
respectively.
Example:
To find the maximum and minimum element in a given array.
Input: { 70, 250, 50, 80, 140, 12, 14 }
Output: The minimum number in a given array is : 12
The maximum number in a given array is : 250
Approach: Finding the maximum and minimum elements of a given array is a classic
application of divide and conquer. We use the divide and conquer approach (DAC),
which has three steps: divide, conquer and combine.
For Maximum:
We find the maximum recursively. When only two elements are left in the current
range, a single comparison, if (a[index] > a[index+1]), decides the maximum.
The condition below ensures that only two elements are left:
if(index >= l-2)
{
    if(a[index] > a[index+1])
    {
        // a[index] is the maximum of the two remaining elements
    }
    else
    {
        // a[index+1] is the maximum of the two remaining elements
    }
}
The condition above handles the base case of two remaining elements. Next, a
recursive call checks the rest of the array to the right of the current index:
max = DAC_Max(a, index+1, l);
// Recursive call
In the program, the result of this recursive call is compared with the current element:
// Right element will be maximum.
if(a[index]>max)
return a[index];
// max will be maximum element in a given array.
else
return max;
}
For Minimum:
In this problem, we implement the recursive approach to find the minimum number in
a given array.
int DAC_Min(int a[], int index, int l)
//Recursive call function to find the minimum no. in a given array.
if(index >= l-2)
// base case: only two elements are left, so the minimum can be found directly
{
// here we will check the condition
if(a[index]<a[index+1])
return a[index];
else
return a[index+1];
}
Now, we will check the condition on the right side in a given array.
// Recursive call for the right side in the given array.
min = DAC_Min(a, index+1, l);
Now, we will check the condition to find the minimum on the right side.
// Right element will be minimum
if(a[index]<min)
return a[index];
// Here min will be minimum in a given array.
else
return min;
Implementation:
// Code to demonstrate Divide and
// Conquer Algorithm
#include <stdio.h>
int DAC_Max(int a[], int index, int l);
int DAC_Min(int a[], int index, int l);
// Driver Code
int main()
{
// Defining the variables
int min, max, N;
Output:
The minimum number in a given array is : 12
The maximum number in a given array is : 250
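Collecting the fragments above, the two functions the driver code relies on can be completed as follows (a sketch assuming the array has at least two elements):

```c
/* Recursively find the maximum of a[index..l-1]; assumes l - index >= 2. */
int DAC_Max(int a[], int index, int l)
{
    int max;
    if (index >= l - 2)
    {
        /* base case: only two elements left */
        if (a[index] > a[index + 1])
            return a[index];
        else
            return a[index + 1];
    }
    max = DAC_Max(a, index + 1, l);   /* recursive call for the right side */
    if (a[index] > max)
        return a[index];
    else
        return max;
}

/* Recursively find the minimum of a[index..l-1]; assumes l - index >= 2. */
int DAC_Min(int a[], int index, int l)
{
    int min;
    if (index >= l - 2)
    {
        if (a[index] < a[index + 1])
            return a[index];
        else
            return a[index + 1];
    }
    min = DAC_Min(a, index + 1, l);
    if (a[index] < min)
        return a[index];
    else
        return min;
}
```

On the example array { 70, 250, 50, 80, 140, 12, 14 } with l = 7, these return 12 and 250, matching the output shown.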
Steps
Split the input of size n into k distinct sub problems, 1 < k ≤ n
Solve the sub problems
Combine the sub problem solutions to get the solution of the whole problem. Often
the sub problems are of the same type as the main problem, so the solutions can be
expressed as recursive algorithms.
Control Abstraction
Control abstraction is a procedure whose flow of control is clear but whose primary
operations are specified by other functions whose precise meaning is left undefined.
DAndC (P)
{
    if Small(P) return S(P);
    else
    {
        divide P into smaller instances P1, P2, …, Pk, k ≥ 1;
        apply DAndC to each of these sub problems;
        return Combine(DAndC(P1), DAndC(P2), …, DAndC(Pk));
    }
}
Program
int BinSearch (int a[], int n, int x)
{
int low=1, high=n, mid;
while (low <= high)
{
mid = (low + high)/2;
if (x < a[mid])
high = mid – 1;
else if (x > a[mid])
low = mid + 1;
else
return (mid);
}
return (0);
}
Comparing the program with the Divide and Conquer control abstraction:
S(P) : index of the element if the element is present, else zero.
g(1) : Θ(1)
If P has more than one element, divide the problem into sub problems as given below:
Pick an index q in the range [i, l]
Compare x with a_q
- x = a_q : solved
- x < a_q : search x in the sub list a_i, a_i+1, …, a_q-1. Then P = (q-i, a_i, …, a_q-1, x)
- x > a_q : search x in the sub list a_q+1, a_q+2, …, a_l. Then P = (l-q, a_q+1, …, a_l, x)
The time to divide P into one new sub problem is Θ(1). q is selected as the middle of
the range, q = ⌊(i+l)/2⌋.
Simulation
Consider 10 elements 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
The number of comparisons needed to search for each element is given in the table below:
Position             1   2   3   4   5   6   7   8   9   10
Element             10  20  30  40  50  60  70  80  90  100
Comparisons needed   3   2   3   4   1   3   4   2   3    4
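These counts can be checked mechanically. The sketch below instruments the 1-indexed BinSearch routine given earlier to count comparison rounds (loop iterations); BinSearchCount is a name introduced here:

```c
/* BinSearch on a[1..n]; returns the number of three-way comparison
   rounds (loop iterations) needed to locate x, or to conclude x is absent. */
int BinSearchCount(int a[], int n, int x)
{
    int low = 1, high = n, mid, count = 0;
    while (low <= high)
    {
        mid = (low + high) / 2;
        count++;              /* one comparison round against a[mid] */
        if (x < a[mid])
            high = mid - 1;
        else if (x > a[mid])
            low = mid + 1;
        else
            break;            /* found */
    }
    return count;
}
```

For the 10-element list above (with a[0] unused as a sentinel), searching 50 takes 1 round, 10 takes 3, and 100 takes 4, as in the table.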
The search tree for the above list of elements will be as given below.
If the element is present, the search ends in a circular node (inner node); if the element
is not present, it ends up in a square node (leaf node).
Now we will consider the worst case, average case and best case complexities of the
algorithm for a successful and an unsuccessful search.
Worst Case
Find k such that 2^(k-1) ≤ n < 2^k.
Then a successful search ends in one of the k inner-node levels, so it needs at most k
comparisons. Hence the complexity is O(k), which is equal to O(log2 n).
An unsuccessful search needs either k-1 or k comparisons. Hence the complexity is
Θ(k), which is equal to Θ(log2 n).
Average Case
Let I and E represent the sum of distance of all internal nodes from root and sum of
distance of all external nodes from the root respectively. Then
E = I + 2n
Let As(n) and Au(n) represent the average case complexity of a successful and an
unsuccessful search respectively. Then
As(n) = 1 + I/n
Au(n) = E/(n+1)
As(n) = 1 + I/n
      = 1 + (E - 2n)/n
      = 1 + (Au(n)(n+1) - 2n)/n
      = (n + Au(n)(n+1) - 2n)/n
      = (n(Au(n) - 1) + Au(n))/n
      = Au(n) - 1 + Au(n)/n
As(n) = Au(n)(1 + 1/n) - 1
As(n) and Au(n) are directly related and both are proportional to log2 n.
Hence the average case complexity for a successful and an unsuccessful search is O(log2 n).
Best Case
The best case for a successful search is a single comparison, when the element is at the
middle position. Hence the complexity is Θ(1).
The best case for an unsuccessful search is O(log2 n).
             Best Case   Average Case   Worst Case
Successful   Θ(1)        O(log2 n)      O(log2 n)
Unsuccessful O(log2 n)   O(log2 n)      O(log2 n)
Given below is a straightforward method to calculate the maximum and minimum of a
list of n elements.
Program
void StraightMinMax (int a[], int n, int *max, int *min)
{
int i;
*max = a[0];
*min = a[0];
for (i=1; i<n; i++)
{
if (a[i] > *max) *max = a[i];
if (a[i] < *min) *min = a[i];
}
}
Let us calculate the time complexity of the algorithm. Here, only element comparisons
are considered, because the frequency counts of the other operations are the same as
that of element comparison. 2(n-1) comparisons are required in the worst, average and
best case.
But if we modify the code as
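The modified code itself is lost to a page break here; presumably the change is to guard the minimum test with else, so that it is skipped whenever a new maximum is found. A sketch of that version, instrumented to count comparisons (ModifiedMinMax and CountComparisons are names introduced here):

```c
/* Presumed modification of StraightMinMax: when a[i] is a new maximum,
   the minimum test is skipped (else branch). An increasing array then
   needs only n-1 comparisons; a decreasing one needs 2(n-1).
   The return value is the number of element comparisons performed. */
int ModifiedMinMax(int a[], int n, int *max, int *min)
{
    int i, comparisons = 0;
    *max = *min = a[0];
    for (i = 1; i < n; i++)
    {
        comparisons++;
        if (a[i] > *max)
            *max = a[i];
        else
        {
            comparisons++;
            if (a[i] < *min)
                *min = a[i];
        }
    }
    return comparisons;
}

/* Helper for experiments: run ModifiedMinMax, return the comparison count. */
int CountComparisons(int a[], int n)
{
    int max, min;
    return ModifiedMinMax(a, n, &max, &min);
}
```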
Then the best case complexity is (n-1), which happens when the elements are sorted
in increasing order.
The worst case complexity is 2(n-1), when the elements are sorted in decreasing order.
The average case complexity is (2(n-1) + (n-1))/2 = 3(n-1)/2.
Now let us solve the problem using Divide and Conquer method. An instance of the
problem can be represented as
P = (n, a[i],….,a[j])
where n : no of elements
a[i],….,a[j]: elements in the list
When n = 1, max = min = a[1]. When n = 2, the problem can be solved in one comparison.
Hence, Small(P) is true when n ≤ 2. If there are more elements, we divide the problem
into two sub instances,
P1 = (⌊n/2⌋, a[1], …, a[⌊n/2⌋]) and P2 = (n - ⌊n/2⌋, a[⌊n/2⌋+1], …, a[n])
To solve the problem, invoke Divide and Conquer algorithm recursively. Combine
solution for P1 and P2. Set Max(P) as larger among Max(P1) and Max(P2) and Min(P)
as smaller among Min(P1) and Min(P2)
Program
void MaxMin (int i, int j, int *max, int *min)
// finds the max and min of the global array a[i..j]
{
    int mid, max1, min1;
    if (i == j)
    {
        *max = a[i];
        *min = a[i];
    }
    else if (i == j-1)
    {
        if (a[i] < a[j])
        {
            *max = a[j];
            *min = a[i];
        }
        else
        {
            *max = a[i];
            *min = a[j];
        }
    }
    else
    {
        mid = (i+j)/2;
        MaxMin (i, mid, max, min);
        MaxMin (mid+1, j, &max1, &min1);
        if (*max < max1) *max = max1;
        if (*min > min1) *min = min1;
}
}
Example
Consider 9 elements 22, 13, -5, -8, 15, 60, 17, 31 and 47. The recursive call can be
represented using a tree as given below
If n is a power of two, n = 2^k,
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     = ...
     = 2^(k-1) T(2) + (2^k - 2)
     = 2^(k-1) + 2^k - 2        [T(2) = 1]
     = n/2 + n - 2
T(n) = 3n/2 - 2 (best, average and worst case)
Compared with the straightforward method, which uses 2(n-1) comparisons, Divide
and Conquer uses 3n/2 - 2, a saving of about 25% in comparisons. But additional stack
space is required for the recursion.
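The count 3n/2 - 2 can be confirmed by instrumenting MaxMin with a comparison counter. The sketch below uses the first 8 of the example's 9 elements so that n is a power of two and the closed form applies; as in the program above, a[] is global:

```c
int a[] = { 22, 13, -5, -8, 15, 60, 17, 31 };  /* n = 8, a power of two */
int comparisons = 0;                            /* element comparison counter */

/* MaxMin from above, instrumented to count element comparisons. */
void MaxMin(int i, int j, int *max, int *min)
{
    int mid, max1, min1;
    if (i == j)
    {
        *max = *min = a[i];                     /* one element: no comparison */
    }
    else if (i == j - 1)
    {
        comparisons++;                          /* two elements: one comparison */
        if (a[i] < a[j]) { *max = a[j]; *min = a[i]; }
        else             { *max = a[i]; *min = a[j]; }
    }
    else
    {
        mid = (i + j) / 2;
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, &max1, &min1);
        comparisons++;                          /* combine: compare maxima */
        if (*max < max1) *max = max1;
        comparisons++;                          /* combine: compare minima */
        if (*min > min1) *min = min1;
    }
}

/* Run on the 8-element array; the count should equal 3*8/2 - 2 = 10. */
int RunMaxMin(void)
{
    int max, min;
    comparisons = 0;
    MaxMin(0, 7, &max, &min);
    return comparisons;
}
```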
Different figures are obtained if we also include the index comparisons while
calculating the complexity. For this, rewrite Small(P) as
if (i >= j-1) // Small(P)
C(n) = 2C(n/2) + 3
     = 4C(n/4) + 6 + 3
     = 4C(n/4) + 3(1 + 2)
     = ...
     = 2^(k-1) C(2) + 3(2^(k-1) - 1)
     = 2^k + 3*2^(k-1) - 3        [C(2) = 2]
     = n + 3n/2 - 3
C(n) = 5n/2 - 3
With StraightMinMax the count is 3(n-1), which is greater than 5n/2 - 3. Even though
MaxMin performs fewer comparisons, it can be slower than the straightforward method
because of the stack overhead of recursion.
Binary Search
Given a sorted array arr[] of n elements, write a function to search a given element x in
arr[].
A simple approach is to do a linear search; its time complexity is O(n). Another
approach to perform the same task is Binary Search.
Example :
The idea of binary search is to use the information that the array is sorted and reduce
the time complexity to O(Log n).
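The driver code below assumes a function binarySearch(arr, low, high, x) that returns the index of x in arr[low..high], or -1 if x is absent. A standard recursive divide-and-conquer version is:

```c
/* Search x in the sorted slice arr[low..high]; return its index, or -1. */
int binarySearch(int arr[], int low, int high, int x)
{
    if (low > high)
        return -1;                    /* element not present */
    int mid = low + (high - low) / 2; /* avoids overflow of (low + high)/2 */
    if (arr[mid] == x)
        return mid;
    if (arr[mid] > x)                 /* recur on the left half */
        return binarySearch(arr, low, mid - 1, x);
    return binarySearch(arr, mid + 1, high, x); /* right half */
}
```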
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int n = sizeof(arr) / sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n - 1, x);
(result == -1) ? printf("Element is not present in array")
: printf("Element is present at index %d",
result);
return 0;
}
Output :
Element is present at index 3
Example
Position  1  2  3   4   5   6   7   8   9  10
Element   2  5  8  12  16  23  38  56  72  91
Example 2.2
Consider 10 elements 310, 285, 179, 652, 351, 423, 861, 254, 450 and 520. The
recursive call can be represented using a tree as given below
Pick an element t = a[s] and reorder the other elements so that all elements appearing
before t are less than or equal to t, and all elements after t are greater than or equal to t.
While calculating complexity we are considering element comparison alone. Assume
that n elements are distinct and the distribution is such that there is equal probability
for any element to get selected for partitioning.
Example
Illustration of partition() :
arr[] = {10, 80, 30, 90, 40, 50, 70}
Indexes: 0 1 2 3 4 5 6
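The partition steps themselves (shown as a figure in the original) can be reproduced with the standard Lomuto scheme, which takes the last element as the pivot; note that the textbook description above picks t = a[s], while the illustrated array matches the last-element variant:

```c
/* Lomuto partition: pivot = arr[high]. After the call, all elements <= pivot
   are to its left and all greater elements to its right; returns the pivot's
   final index. */
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1, j, t;            /* i marks the end of the <= pivot region */
    for (j = low; j < high; j++)
    {
        if (arr[j] <= pivot)
        {
            i++;
            t = arr[i]; arr[i] = arr[j]; arr[j] = t;  /* swap into region */
        }
    }
    t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

/* Run partition on the example array and verify the result:
   {10, 80, 30, 90, 40, 50, 70} -> {10, 30, 40, 50, 70, 90, 80}, pivot at 4. */
int demoPartitionOK(void)
{
    int arr[]      = { 10, 80, 30, 90, 40, 50, 70 };
    int expected[] = { 10, 30, 40, 50, 70, 90, 80 };
    int k, p = partition(arr, 0, 6);
    if (p != 4) return 0;
    for (k = 0; k < 7; k++)
        if (arr[k] != expected[k]) return 0;
    return 1;
}
```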
4. Recurrence Methods
4.1. Substitution Method
Steps
Guess the form of the solution
Use mathematical induction to find constants or show that they can be found and
to prove that the answer is correct
Example 1.15
Find the upper bound for the recurrence relation
T(n) = 2 T( n/2 ) + n
Example 1.16
Find the worst case complexity of Binary Search
T(n) = c + T(n/2), which solves to T(n) = O(lg n)
Example 1.17
T(n) = T(n-1) + n, which solves to T(n) = O(n^2)
For Example 1.15, T(n) = 2T(n/2) + n, the method works as follows:
• Guess: T(n) = O(nlgn)
– Induction goal: T(n) ≤ cn lgn, for some c and n ≥ n0
– Induction hypothesis: T(n/2) ≤ cn/2 lg(n/2)
• Proof of induction goal:
T(n) = 2T(n/2) + n ≤ 2c(n/2)lg(n/2) + n
     = cn lg n - cn + n ≤ cn lg n
if -cn + n ≤ 0, i.e., c ≥ 1
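As a numerical check, with base case T(1) = 1 this recurrence has the exact solution T(n) = n lg n + n for n a power of two, consistent with the O(n lg n) bound:

```c
/* Evaluate T(n) = 2T(n/2) + n exactly, with base case T(1) = 1,
   for n a power of two; the closed form is T(n) = n lg n + n. */
long T(long n)
{
    if (n == 1)
        return 1;
    return 2 * T(n / 2) + n;
}
```

For example, T(8) = 32 = 8*3 + 8 and T(1024) = 11264 = 1024*10 + 1024.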
4.2. Recursion Tree Method
Steps
Convert the recurrence into a tree.
Each node represents the cost of a single sub problem somewhere in the set of
recursive function invocations
Sum the costs within each level of the tree to obtain a set of per-level costs
Sum all the per-level costs to determine the total cost of all levels of the recursion
Example 1.18
T(n) = 3T(n/4) + Θ(n^2)
Take T(n) = 3T(n/4) + cn^2. Summing the per-level costs of the recursion tree:
T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ... + (3/16)^(log4 n - 1) cn^2 + Θ(n^(log4 3))
     = Σ (i = 0 to log4 n - 1) (3/16)^i cn^2 + Θ(n^(log4 3))
     < Σ (i = 0 to ∞) (3/16)^i cn^2 + Θ(n^(log4 3))
     = (1/(1 - 3/16)) cn^2 + Θ(n^(log4 3))
     = (16/13) cn^2 + Θ(n^(log4 3))
     = O(n^2)
The guess T(n) = O(n^2) can then be verified by substitution, T(n) ≤ d n^2:
T(n) ≤ 3T(⌊n/4⌋) + cn^2
     ≤ 3d ⌊n/4⌋^2 + cn^2
     ≤ 3d (n/4)^2 + cn^2
     = (3/16) d n^2 + cn^2
     ≤ d n^2, provided d ≥ (16/13)c
Example 1.19
Recursion tree for W(n) = 2W(n/2) + n^2: level i contributes n^2/2^i, and the leaves
contribute 2^(lg n) W(1) = O(n), so
W(n) = Σ (i = 0 to lg n - 1) n^2/2^i + 2^(lg n) W(1)
     = n^2 Σ (i = 0 to lg n - 1) (1/2)^i + O(n)
     < n^2 Σ (i = 0 to ∞) (1/2)^i + O(n)
     = (1/(1 - 1/2)) n^2 + O(n)
     = 2n^2 + O(n)
W(n) = O(n^2)

Recursion tree for W(n) = W(n/3) + W(2n/3) + n: each level costs at most n, and there
are about log3/2 n levels, so
W(n) ≤ Σ (i = 0 to log3/2 n) n = n(log3/2 n + 1) = n (lg n / lg(3/2)) + n
W(n) = O(n lg n)
Example 1.20
W(n) = W(n/3) + W(2n/3) + O(n)
• The longest path from the root to a leaf is: n → (2/3)n → (2/3)^2 n → … → 1
• The subproblem size hits 1 when 1 = (2/3)^i n, i.e., i = log3/2 n
• The cost of the problem at each level i is n
• Total cost: W(n) ≤ n (log3/2 n + 1) = O(n lg n)
Example 1.21
The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where
a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. Such a
recurrence describes the running time of an algorithm that divides a problem of size n
into a sub problems, each of size n/b.
Case 1
If f(n) = O(n^(logb a - ε)) for some constant ε > 0, then T(n) = Θ(n^(logb a)).
Case 2
If f(n) = Θ(n^(logb a)), then T(n) = Θ(n^(logb a) lg n).
Case 3
If f(n) = Ω(n^(logb a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some
constant c < 1 and all sufficiently large n (the regularity condition), then T(n) = Θ(f(n)).
Example 1.22
T(n) = 9T(n/3) + n
Here n^(log3 9) = n^2 and f(n) = n = O(n^(2-ε)), so Case 1 applies:
T(n) = Θ(n^2)
Example 1.23
T(n) = 4T(n/2) + n
Here n^(log2 4) = n^2 and f(n) = n = O(n^(2-ε)), so Case 1 applies:
T(n) = Θ(n^2)
Example 1.24
T(n) = T(2n/3) + 1
Here a = 1, b = 3/2, n^(logb a) = n^0 = 1 = Θ(f(n)), so Case 2 applies:
T(n) = Θ(lg n)
Example 1.25
T(n) = 4T(n/2) + n^2
Here f(n) = n^2 = Θ(n^(log2 4)), so Case 2 applies:
T(n) = Θ(n^2 lg n)
Example 1.26
T(n) = 3T(n/4) + n lg n
Here n^(log4 3) = O(n^0.793) and f(n) = n lg n = Ω(n^(log4 3 + ε)) with ε ≈ 0.2. The
regularity condition holds: 3(n/4)lg(n/4) ≤ (3/4) n lg n. Case 3 applies:
T(n) = Θ(n lg n)
Example 1.27
T(n) = 4T(n/2) + n^3
CASE 3: f(n) = n^3 = Ω(n^(2+ε)) for ε = 1, and the regularity condition
4(n/2)^3 = n^3/2 ≤ c·n^3 holds for c = 1/2.
T(n) = Θ(n^3).
5. Exercises
From reference 1 in references Section 6.
Exercise numbers: 4.4-1, 4.4-2, 4.4-3, 4.4-4, 4.4-5, 4.4-6, 4.4-7, 4.4-8,
4.4-9, 4.5-1, 4.5-2, 4.5-3, 4.5-4
4-1 Recurrence examples: a, b, c, d, e, f, g.
4-3 More recurrence examples: a, b, c, d, e, f, g, h, i, j.
7.2-2, 7.2-3, 7.2-4, 7.2-5
6. References
1. T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, “Introduction to
Algorithms”, 2nd Ed., PHI.
2. E. Horowitz and S. Sahni, “Computer Algorithms”, Galgotia Publications.
3. A. V. Aho, J. E. Hopcroft, J. D. Ullman, “The Design and Analysis of Computer
Algorithms”, Addison-Wesley.
4. D. E. Knuth, “The Art of Computer Programming”, 2nd Ed., Addison-Wesley.