Lecture 05
(COMP4120)
Algorithm Analysis
Course Instructor
Dr. Aftab Akram
PhD CS
Assistant Professor
Department of Information Sciences, Division of Science &
Technology
University of Education, Lahore
Comparing Algorithms
• How do you compare two algorithms?
• Run the algorithms as computer programs
• Compare the resources used by them
• This approach may not work, for the following
reasons:
• You have to write two programs but want to keep just one
• One program may simply be “better written”
• The choice of empirical test cases might unfairly favor
one algorithm
• What if neither algorithm fits within the resource budget?
Asymptotic Analysis
• Asymptotic analysis measures the efficiency of an
algorithm
• Actually an estimating technique
• Asymptotic analysis has proved useful to computer
scientists who must determine if a particular algorithm
is worth considering for implementation.
• The critical resource for a program is most often its
running time.
• Typically you will analyze the time required for an
algorithm (or the instantiation of an algorithm in the
form of a program), and the space required for a data
structure.
Factors Affecting Running Time
• The environment in which program is compiled and
run
• Speed of CPU
• Bus Speed
• Peripheral Hardware, etc.
• Competition for network resources
• The programming language and the quality of code
generated by the compiler
• The coding efficiency of the computer programmer
Estimating an Algorithm’s Performance
• Estimate an algorithm’s performance by counting the
• number of basic operations required by the algorithm to
process an input of a certain size.
• For example, when comparing sorting algorithms, the
size of the problem is typically measured by the
number of records to be sorted.
• A basic operation must have the property that its time
to complete does not depend on the particular values
of its operands.
• Adding or comparing two integer variables are
examples of basic operations in most programming
languages
• Summing the contents of an array containing n integers
is not, because the cost depends on the value of n (the
size of the array).
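The distinction can be sketched in Python (a hypothetical illustration; the function names are my own): comparing two integers costs the same no matter what their values are, while summing an array costs time proportional to its size n.

```python
def compare(a, b):
    # A basic operation: one comparison, whose cost does not
    # depend on the particular values of a and b.
    return a < b

def array_sum(arr):
    # Not a basic operation: the loop body runs len(arr) times,
    # so the cost grows with n, the size of the array.
    total = 0
    for x in arr:      # n additions
        total += x
    return total

print(compare(3, 7))            # one comparison either way -> True
print(array_sum([1, 2, 3, 4]))  # 4 additions for n = 4 -> 10
```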
Determining Running Time
• The most important factor affecting running time is
normally size of the input
• For a given input size n we often express the time to
run the algorithm as a function of n, written as T(n).
• We will always assume T(n) is a non-negative value.
• Let us call c the amount of time required to compare
two integers in function largest.
• The total time to run largest is therefore
approximately cn, because we must make n
comparisons, with each comparison costing c time.
• T(n) = cn
• This equation describes the growth rate for the running
time of the largest value sequential search algorithm.
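The function largest is not shown on the slide; a minimal Python sketch (assuming it sequentially scans an array for its largest value) makes the cn comparison count concrete:

```python
def largest(arr):
    # Sequentially scan for the largest value, counting the
    # basic comparison operations as we go.
    comparisons = 0
    best = arr[0]
    for x in arr[1:]:        # n - 1 iterations for n elements
        comparisons += 1     # one basic comparison per element
        if x > best:
            best = x
    return best, comparisons

value, comps = largest([7, 1, 9, 4, 3])
print(value, comps)  # 9 4 -> about n comparisons for n = 5
```

Since every element must be examined exactly once, the comparison count (and hence the running time) grows linearly with n, matching T(n) = cn.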
Algorithm Growth Rate
• The growth rate for an algorithm is the rate at which
the cost of the algorithm grows as the size of its input
grows.
• Linear Growth Rate: as the value of n grows, the
running time of the algorithm grows in the same
proportion
• Quadratic Growth Rate: the algorithm’s running-time
equation has a highest-order term containing a factor
of n^2
• Logarithmic Growth Rate: an algorithm having a log n
factor in its running-time equation
• Exponential Growth Rate: an algorithm having
2^n running time
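A small numerical table (a hypothetical illustration) shows how differently these rates scale as n grows:

```python
import math

# How each growth rate scales: logarithmic, quadratic, exponential.
print(f"{'n':>6} {'log2 n':>8} {'n^2':>10} {'2^n':>12}")
for n in (10, 20, 30):
    print(f"{n:>6} {math.log2(n):>8.2f} {n * n:>10} {2 ** n:>12}")
```

Even for n = 30, the exponential column (2^30 = 1,073,741,824) dwarfs the quadratic one (900), while the logarithmic column barely moves.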
Input Sizes
• Finding the factorial of n
• Just one number as input
• Largest-value sequential search algorithm discussed
in Example 3.1
• This algorithm examines every element in the array
• The array size n varies, but the algorithm still
processes every element in the array
• Even if the largest value is found at the start of the array
• So, in fact, there are no best or worst case scenarios
Input Sizes
• For some algorithms, different inputs of a given size
require different amounts of time.
• For example, consider the problem of searching an
array containing n integers to find the one with a
particular value K (assume that K appears exactly
once in the array).
• The sequential search algorithm begins at the first
position in the array and looks at each value in turn
until K is found.
• Once K is found, the algorithm stops.
Sequential Search
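The algorithm can be sketched in Python as follows (a minimal sketch; the variable names are my own):

```python
def sequential_search(arr, K):
    # Look at each value in turn, starting from the first position,
    # and stop as soon as K is found.
    for i, value in enumerate(arr):
        if value == K:
            return i        # position where K was found
    return -1               # K does not appear in the array

print(sequential_search([42, 17, 8, 99, 3], 8))   # found at index 2
print(sequential_search([42, 17, 8, 99, 3], 55))  # not found -> -1
```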
Best, worst and average cases
• There is a wide range of possible running times for
the sequential search algorithm.
• Best Case: The first integer in the array could have
value K, and so only one integer is examined.
• Worst Case: If the last position in the array contains
K, then the running time is relatively long, because
the algorithm must examine n values.
• Average Case: K is found somewhere in the middle of
the array. On average, the algorithm examines
about n/2 values.
What scenario to consider?
• To discuss the algorithm’s performance, what scenario
should be considered?
• The best case rarely happens, and is too optimistic to consider
• The advantage to analyzing the worst case is that you know
for certain that the algorithm must perform at least that
well.
• Average cases represent typical behavior of an algorithm
• Unfortunately, average-case analysis is not always possible.
• Average-case analysis first requires that we understand how
the actual inputs to the program (and their costs) are
distributed with respect to the set of all possible inputs to
the program.
• Real-time applications require worst-case analysis.
Otherwise, we resort to average-case analysis.
A Faster Computer, or a Faster
Algorithm?
• To explain this concept, we will take example of
sorting.
• Sorting is a fundamental problem related to data
structures and algorithm analysis.
• In sorting, we have to arrange the elements of a data
structure, e.g., an array or list, in ascending or
descending order
• There are many algorithms designed to solve the
sorting problem.
• However, not all of them have the same performance.
A Faster Computer, or a Faster
Algorithm?
• To explain our topic, we take two famous sorting
algorithms:
• Insertion Sort, which takes c1 n^2 time to sort n elements
• Merge Sort, which takes c2 n lg n time to sort n elements
• Both c1 and c2 are constants which do not depend on n
• If we write insertion sort’s running time as c1 n^2 and
merge sort’s running time as c2 n lg n,
• we can see merge sort is faster, since lg n is much
smaller than n.
• For example, when n = 1000, lg n is
approximately 10, and when n equals one million,
lg n is approximately only 20.
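A quick numerical check (with hypothetical constants c1 = 2 and c2 = 50, chosen so that insertion sort’s constant factor is much smaller) shows where the crossover happens:

```python
import math

# Hypothetical constants: c1 is small (tight inner loop),
# c2 is large (more bookkeeping per element).
c1, c2 = 2, 50

for n in (10, 100, 1000, 10 ** 6):
    insertion = c1 * n * n           # c1 * n^2
    merge = c2 * n * math.log2(n)    # c2 * n * lg n
    winner = "insertion" if insertion < merge else "merge"
    print(f"n={n:>8}: insertion={insertion:>16.0f}  "
          f"merge={merge:>16.0f}  -> {winner} sort wins")
```

With these constants, insertion sort wins at n = 10 and n = 100, but merge sort wins from n = 1000 onward, because lg n grows so much more slowly than n.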
A Faster Computer, or a Faster
Algorithm?
• Although insertion sort usually runs faster than merge
sort for small input sizes, once the input size becomes
large enough, merge sort’s advantage of lg n vs. n will
more than compensate for the difference in constant
factors.
• For a concrete example, let us pit a faster computer
(computer A) running insertion sort against a slower
computer (computer B) running merge sort.
• They each must sort an array of 10 million numbers.
(Although 10 million numbers might seem like a lot, if
the numbers are eight-byte integers, then the input
occupies about 80 megabytes, which fits in the memory
of even an inexpensive laptop computer many times
over.)
A Faster Computer, or a Faster
Algorithm?
• Suppose that computer A executes 10 billion instructions
per second (faster than any single sequential computer at
the time of this writing) and computer B executes only 10
million instructions per second, so that computer A is 1000
times faster than computer B in raw computing power.
• To make the difference even more dramatic, suppose that
the world’s craftiest programmer codes insertion sort in
machine language for computer A, and the resulting code
requires 2 n^2 instructions to sort n numbers.
• Suppose further that just an average programmer
implements merge sort, using a high-level language with an
inefficient compiler, with the resulting code taking
50 n lg n instructions.
A Faster Computer, or a Faster
Algorithm?
• To sort 10 million numbers, computer A takes:
2 × (10^7)^2 instructions / 10^10 instructions per second
= 20,000 seconds (more than 5.5 hours),
• while computer B takes:
50 × 10^7 × lg(10^7) instructions / 10^7 instructions per second
≈ 1163 seconds (less than 20 minutes).
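The arithmetic can be checked directly (assuming the 2 n^2 and 50 n lg n instruction counts above):

```python
import math

n = 10 ** 7          # 10 million numbers to sort
speed_A = 10 ** 10   # computer A: 10 billion instructions per second
speed_B = 10 ** 7    # computer B: 10 million instructions per second

# Insertion sort on the fast machine: 2 * n^2 instructions.
time_A = 2 * n * n / speed_A
# Merge sort on the slow machine: 50 * n * lg n instructions.
time_B = 50 * n * math.log2(n) / speed_B

print(f"Computer A: {time_A:,.0f} s  (~{time_A / 3600:.1f} hours)")
print(f"Computer B: {time_B:,.0f} s  (~{time_B / 60:.1f} minutes)")
```

Despite running on a machine 1000 times slower, merge sort finishes in about 19 minutes, while insertion sort on the fast machine needs more than 5.5 hours: a faster algorithm beats a faster computer.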