Unit I: Computational Thinking and Programming - 2
Chapter-11 Idea of Program Efficiency
"A Measure of the Average Execution Time
Necessary for an Algorithm to Complete Work
on a Set of Data"
In this tutorial we will discuss the following topics:
1. Computational Complexity
2. Time Complexity
3. Measuring Time Complexity
4. Big-O Notation
5. Calculating Complexity
6. Space Complexity
Computational complexity is a field of computer science that analyzes algorithms based on the amount of resources required to run them.
The amount of required resources varies based on the input size, so the
complexity is generally expressed as a function of n, where n is the size of
the input.
There are two aspects of algorithmic performance:
• Time
- Instructions take time.
- How fast does the algorithm perform?
- What affects its runtime?
• Space
- Data structures take space.
- What kind of data structures can be used?
- How does choice of data structure affect the runtime?
Note: Algorithms cannot be fairly compared just by running them on computers, because run time is system dependent.
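To see why raw timings are unreliable, here is a minimal sketch using Python's standard timeit module. The expression being timed is illustrative; the point is that two measurements of the same code rarely agree exactly:

```python
import timeit

# Time the same expression twice. Wall-clock time depends on the
# machine, its load, caching, etc., so the two numbers will normally
# differ even though the algorithm has not changed.
t1 = timeit.timeit("sum(range(1000))", number=1000)
t2 = timeit.timeit("sum(range(1000))", number=1000)
print(t1, t2)  # two slightly different measurements
```

This is why complexity analysis counts operations instead of measuring seconds.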
Time Complexity
Let’s start understanding what time complexity means with a short
description from Wikipedia.
In computer science, the time complexity is the computational complexity
that describes the amount of time it takes to run an algorithm. Time
complexity is commonly estimated by counting the number of elementary
operations performed by the algorithm, supposing that each elementary
operation takes a fixed amount of time to perform.
Measuring Time Complexity
Time complexity is measured by counting the number of operations an algorithm performs to handle n items.
When analyzing the time complexity of an algorithm we may find three
cases: best-case, average-case and worst-case. Let's understand what
each means.
Suppose we have the following unsorted list [1, 5, 3, 9, 2, 4, 6, 7, 8] and
we need to find the index of a value in this list using linear search.
best-case: The minimum number of steps taken on any instance of
size n. In our example, the best case would be to search for the value
1. Since this is the first value of the list, it would be found in the first
iteration.
average-case: An average number of steps taken on any instance of
size n. Maybe this is not the best example but, based on our sample,
we could say that the average-case would be when we’re searching for
some value in the “middle” of the list, for example, the value 2.
worst-case: this is the complexity of solving the problem for the worst
input of size n. In our example, the worst-case would be to search for
the value 8, which is the last element from the list.
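The three cases above can be observed directly by instrumenting a linear search to count its comparisons (the helper name and the comparison counter are illustrative additions, not part of the original text):

```python
def linear_search(values, target):
    """Return (index, comparisons) for target in values, or (-1, n)."""
    comparisons = 0
    for i, v in enumerate(values):
        comparisons += 1        # one comparison per element inspected
        if v == target:
            return i, comparisons
    return -1, comparisons

data = [1, 5, 3, 9, 2, 4, 6, 7, 8]
print(linear_search(data, 1))   # best case: (0, 1)
print(linear_search(data, 2))   # middle of the list: (4, 5)
print(linear_search(data, 8))   # worst case: (8, 9)
```

Searching for 1 needs a single comparison, while searching for 8 inspects all nine elements, matching the best- and worst-case descriptions above.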
Big-O Notation
In computer science, Big-O notation is used to classify algorithms
according to how their running time or space requirements grow as the
input size (n) grows. This notation characterizes functions according to
their growth rates: different functions with the same growth rate may be
represented using the same O notation.
These are the most common time complexities expressed using the Big-O
notation:
Name               Time Complexity   Example
Constant Time      O(1)              Push in a stack
Logarithmic Time   O(log n)          Finding an entry in a sorted array
Linear Time        O(n)              Finding an entry in an unsorted array
Quasilinear Time   O(n log n)        Sorting n items by divide and conquer
Quadratic Time     O(n²)             Shortest path between two nodes in a graph
Exponential Time   O(2ⁿ)             The Tower of Hanoi problem
Factorial Time     O(n!)             Generating all permutations of n items
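A few rows of the table can be sketched as small Python functions. These are illustrative implementations assumed for this tutorial, not taken from the original text:

```python
def constant(stack, item):
    """O(1): push onto a stack; one step regardless of stack size."""
    stack.append(item)

def logarithmic(sorted_list, target):
    """O(log n): binary search halves the remaining range each step."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

def linear(values, target):
    """O(n): an unsorted list must be scanned element by element."""
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

print(logarithmic([1, 2, 4, 6, 8], 6))  # → 3
print(linear([5, 3, 9, 2], 9))          # → 2
```

Note that the sorted array allows a logarithmic search, while the unsorted list forces a linear scan, exactly as the table indicates.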
Calculating Complexity
Follow these instructions to calculate the time complexity:
1. First take an input of some size n.
2. Count the number of comparisons (or other elementary operations) in
your algorithm (initially this will take some time, so be patient).
3. Try to fit an equation f(n), where f(n) is the time taken and n is the
size of the input.
4. Then find the asymptotic function that matches f(n).
Then you will be able to calculate the time complexity with more
confidence.
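The steps above can be tried on a simple nested loop. The counting helper below is an illustrative sketch assumed for this tutorial: counting comparisons gives f(n) = n × n, so the asymptotic complexity is O(n²):

```python
def count_pairs(values):
    """Count the comparisons made while checking every (i, j) pair."""
    comparisons = 0
    n = len(values)
    for i in range(n):
        for j in range(n):
            comparisons += 1   # one operation per pair
    return comparisons

# Step 2: count operations for a few input sizes.
# Step 3: the counts fit f(n) = n², so step 4 gives O(n²).
print(count_pairs(list(range(10))))   # → 100
print(count_pairs(list(range(20))))   # → 400
```

Doubling n from 10 to 20 quadruples the count, which is the signature of quadratic growth.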
Example:
Use big-O notation to analyze the time efficiency of the following
fragment of Python code:

    k = n
    while k > 1:
        ...
        k = k // 2

Since the loop variable is cut in half each time through the loop, the
number of times the statements inside the loop will be executed is
log₂ n. Thus, an algorithm that halves the data remaining to be
processed on each iteration of a loop will be an O(log n) algorithm.
Space Complexity
Space complexity of an algorithm represents the amount of memory
space required by the algorithm in its life cycle. The space required by an
algorithm is equal to the sum of the following two components −
• A fixed part: the space required to store certain data and variables
that are independent of the size of the problem. For example, simple
variables and constants used, program size, etc.
• A variable part: the space required by variables whose size depends
on the size of the problem. For example, dynamic memory allocation,
recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the
fixed part and SP(I) is the variable part of the algorithm, which depends on
instance characteristic I.
Following is a simple example that tries to explain the concept −
Algorithm: SUM(X, Y)
Step 1 - START
Step 2 - Z ← X + Y + 20
Step 3 - Stop
Here we have three variables X, Y and Z and one constant, 20.
Hence S(P) = 3 + 1. Now, the actual space depends on the data types of the
given variables and constants, and the total is multiplied accordingly.
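The fixed and variable parts can be contrasted with two small Python functions. The function names and the list-based example are illustrative assumptions, not from the original text:

```python
def sum_fixed(x, y):
    # Fixed part only: x, y, z and the constant 20 occupy space that
    # does not grow with the input, so the extra space is O(1).
    z = x + y + 20
    return z

def sum_list(values):
    # Variable part: the `squares` list grows with len(values),
    # so the extra space used is O(n).
    squares = [v * v for v in values]
    return sum(squares)

print(sum_fixed(5, 10))     # → 35
print(sum_list([1, 2, 3]))  # → 14
```

sum_fixed mirrors the SUM algorithm above (fixed part only), while sum_list shows a variable part whose memory use depends on the problem size n.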