Chapter Two: Complexity Analysis
Data Structures and Algorithms
Prepared by:
Ataklti Nguse
2.1. Introduction
Programmers commonly deal with problems, algorithms, and
computer programs.
What is a Program?
A computer program consists of code that is executed on a
computer to perform particular tasks.
A program is written in order to solve a problem.
A solution to a problem actually consists of two things:
A way to organize the data (Data Structures)
A sequence of steps to solve the problem (Algorithms)
2.1.1 What is Data Structure?
• Data Structure can be defined as the group of data elements which
provides an efficient way of storing and organising data in the computer so
that it can be used efficiently.
• It is a way of arranging data on a computer so that it can be accessed and
updated efficiently.
• Some examples of Data Structures are arrays, linked lists, stacks, queues,
trees, graphs, etc.
• Data Structures are widely used in almost every aspect of Computer Science,
e.g. Operating Systems, Compiler Design, Artificial Intelligence, Graphics and
many more.
2.1.2 Operations on data structure
1) Traversing: Every data structure contains a set of data elements.
Traversing the data structure means visiting each element of the data
structure in order to perform some specific operation like searching or
sorting.
2) Insertion: Insertion can be defined as the process of adding the elements to
the data structure at any location.
• If we try to insert an element into a data structure that is already full, then overflow occurs.
3) Deletion: The process of removing an element from the data structure is
called Deletion.
We can delete an element from the data structure at any random
location.
If we try to delete an element from an empty data structure then
underflow occurs.
…Continued
4) Searching: The process of finding the location of an element within
the data structure is called Searching.
5) Sorting: The process of arranging the data structure in a specific
order is known as Sorting.
6) Merging: When two lists, List A and List B, of sizes M and N
respectively and containing elements of a similar type, are joined to produce a third
list, List C, of size (M+N), this process is called merging.
* Is it possible to merge the elements of two arrays into one array? 1% mark
for the first 3 students (FIFO)
2.1.3 Data Structure Classification
2.2. Primitive and Non-primitive Data structures
• The primitive data types are the basic data types that are available in
most of the programming languages.
• The primitive data types are used to represent single values.
• Examples are integer, real, Boolean and character.
• The data types that are derived from the primitive data types are known
as non-primitive data types.
• These data types are used to store a group of values.
• Examples: arrays, structures, linked lists, stacks, queues
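A small C++ illustration (my own example, not from the slides): the primitive variables each hold a single value, while the array groups several values of the same type.

#include <iostream>

int main() {
    int    count   = 3;           // primitive: a single integer value
    double average = 2.5;         // primitive: a single real value
    char   grade   = 'A';         // primitive: a single character
    bool   passed  = true;        // primitive: a single Boolean value

    int scores[3] = {85, 90, 78}; // non-primitive: a group of values of the same type

    std::cout << count << " " << average << " " << grade << " "
              << passed << " " << scores[0] << std::endl;
    return 0;
}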
2.2.1 What is Algorithm?
• In computer programming terms, an algorithm is a set of
well-defined instructions to solve a particular problem.
• An Algorithm is a method of representing the step-by-step
procedure for solving a problem.
• Algorithms can be expressed in pseudocode, through
flowcharts or program code.
2.2.2 Characteristics of an Algorithm
2.2.3 The major categories of algorithms
• Sort: Algorithm developed for sorting the items in a certain order.
• Search: Algorithm developed for searching the items inside a data
structure.
• Delete: Algorithm developed for deleting the existing element from
the data structure.
• Insert: Algorithm developed for inserting an item inside a data
structure.
• Update: Algorithm developed for updating the existing element
inside a data structure.
2.2.4 Abstract Data Type(ADT)
ADT is a type (class) for objects whose behavior is defined by a set of
values and a set of operations.
The definition of ADT only mentions what operations are to be
performed but not how these operations will be implemented.
It does not specify how data will be organized in memory and what
algorithms will be used for implementing the operations
There are lots of formalized and standard Abstract data types such as
Linked Lists, Stacks, Queues, Trees, etc.
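For instance, a Stack ADT can be described by its operations alone; the sketch below (class and function names are my own, not from the slides) says nothing about how the elements are stored or how the operations are implemented.

// Sketch of a Stack ADT in C++: only the operations and their meaning are specified.
class IntStack {
public:
    virtual void push(int value) = 0;  // add an element on top of the stack
    virtual int  pop() = 0;            // remove and return the top element
    virtual bool isEmpty() const = 0;  // true if the stack holds no elements
    virtual ~IntStack() {}
};
// A concrete class (array-based, linked-list-based, ...) supplies the "how".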
2.3. Complexity Analysis
The field of complexity analysis is concerned with the study of the
efficiency of algorithms.
Complexity analysis is a technique to analyse & compare algorithms
For any given problem, there are a large number of different
algorithms that can be used to solve the problem.
All may produce the same result, but their efficiency may vary.
Sometimes, there is more than one way to solve a problem.
While analysing an algorithm, we mostly consider time complexity and
space complexity.
..continued
• Time complexity of an algorithm quantifies the amount of time taken by an
algorithm to run as a function of the length of the input.
• Space complexity of an algorithm quantifies the amount of space or
memory taken by an algorithm to run as a function of the length of the
input.
• The factor of time is more important than space.
These differences may not be noticeable for small amounts of data, but as
the size of the input data becomes large, differences will become
significant.
A better measure is to use the number of operations required to perform an
algorithm.
2.3.1 Algorithm Analysing Technique
Definition 1: The computational complexity of an algorithm is a
measure of the cost incurred by applying the algorithm.
Definition 2: The asymptotic complexity of an algorithm is an
approximation of the computational complexity that holds for large
amounts of input data.
Asymptotic complexity is the key to comparing algorithms.
Objectives of computational complexity analysis:
To determine the feasibility of an algorithm by estimating an
upper bound on the amount of work performed
To compare different algorithms before deciding on which one to
implement
..continued
Complexity analysis involves two distinct phases:
I. Algorithm Analysis: Analysis of the algorithm to produce a function
T (n) that describes the algorithm in terms of the operations
performed in order to measure the complexity of the algorithm.
II. Order of Magnitude Analysis: Analysis of the function T (n) to
determine the general complexity category to which it belongs.
There is no generally accepted set of rules for algorithm analysis.
However, an exact count of operations is commonly used.
2.3.2 Analysis Rules for Running time T(n):
We assume an arbitrary time unit.
Execution of one of the following operations takes time 1:
Assignment, Single Input/Output, Single Boolean Operation, Single Arithmetic Operation,
and Function Return
Running time of a selection statement (if, switch) is the time for the condition
evaluation + the maximum of the running times for the individual clauses in the
selection.
Loops: Running time for a loop is equal to the running time for the statements
inside the loop * number of iterations.
The total running time of a statement inside a group of nested loops is the running
time of the statements multiplied by the product of the sizes of all the loops.
Always assume that the loop executes the maximum number of iterations possible.
Running time of a function call is 1 for setup + the time for any parameter
calculations + the time required for the execution of the function body.
Running time calculation
examples using Analysis rule
1. int count(){
       int k=0;
       cout<<"Enter an integer";
       cin>>n;
       for (i=0;i<n;i++)
           k=k+1;
       return 0;}

   Time units to compute:
   1 for the assignment statement: int k=0
   1, 1 for the input and the output statement.
   In the for loop: 1 assignment, n+1 tests, and n increments.
   n loops of 2 units each, for an assignment and an addition.
   1 for the return statement.

   T(n) = 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)
…Continued
2. int sum (int n)
{
int partial_sum = 0;
for (int i = 1; i <= n; i++)
partial_sum = partial_sum +(i * i * i);
return partial_sum;
}
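Applying the same rules as in Example 1 (this count is my own, under the same assumptions): 1 for the assignment int partial_sum = 0; 1 assignment, n+1 tests and n increments for the loop header; 4 units per iteration for the body (1 assignment, 1 addition, 2 multiplications); and 1 for the return. Hence
\[ T(n) = 1 + (1 + (n+1) + n) + 4n + 1 = 6n + 4 = O(n). \]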
…Continued
• Calculate T(n) for the following
3. k=0;
cout<<"enter an integer";
cin>>n;
for (i=0;i<n;i++)
    k++;
…Continued
4. i=0;
while (i<n)
{
    x++;
    i++;
}
j=1;
while (j<=10)
{
    x++;
    j++;
}
…Continued
5. for(i=1;i<=n;i++)
for(j=1;j<=n; j++)
k++;
Cont. …
6. sum=0;
if(test==1)
{
for (i=1;i<=n;i++)
sum=sum+i;
}
else
{
cout<<sum;
}
Bonus (3%)
• Calculate T(n) for the following codes
1. for (i=1;i<=n;i=i*2)
cout<<i*j;
Exercise
• Calculate T(n) for the following codes
A. sum=0;
for(i=1;i<=n;i++)
for(j=1;j<=m;j++)
sum++;
B. sum=0;
for(i=1;i<=n;i++)
for(j=1;j<=i;j++)
sum++;
C. for (i=1;i<=10;i++)
for (j=1;j<=10;j++)
cout<<i*j;
…Continued
D. int total(int n){
int sum=0;
for (int i=1;i<=n;i++)
sum=sum+1;
return sum;}
F. void func(){
int x=0;int i=0;int j=1;
cout<< “Enter value”;
cin>>n;
while (i<n){
x++;
i++;}
while (j<n){
Formal Approach to Analysis
• for Loops: Formally
– In general, a for loop translates to a summation. The index and bounds of the
summation are the same as the index and bounds of the for loop.
for (int i = 1; i <= N; i++) {
sum = sum+i;
}
• Suppose we count the number of operations that are done.
• There are 2 operations per iteration of the loop, 1 for the assignment and 1 for the addition,
hence 2N in total.
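Written as a summation (my own rendering of the rule above, counting the 2 operations per iteration), the loop contributes
\[ T(N) = \sum_{i=1}^{N} 2 = 2N. \]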
• Nested Loops: Formally
– Nested for loops translate into multiple summations, one for each for loop.
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= M; j++) {
sum = sum+i+j;
}
}
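Assuming the inner body sum = sum+i+j costs 3 units per iteration (1 assignment and 2 additions, consistent with the counting above), the nested loops translate to
\[ T(N, M) = \sum_{i=1}^{N} \sum_{j=1}^{M} 3 = 3NM. \]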
Cont..
• Consecutive Statements: Formally
– Add the running times of the separate blocks of your code
for (int i = 1; i <= N; i++) {
sum = sum+i;}
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= N; j++) {
sum = sum+i+j;}}
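Adding the two blocks (my own count: 2 units per iteration of the first loop, 3 units per iteration of the inner body of the second) gives
\[ T(N) = 2N + 3N^2 = O(N^2). \]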
• Conditionals: Formally
– If (test) s1 else s2: Compute the maximum of the running time for s1 and s2.
if (test == 1) {
for (int i = 1; i <= N; i++) {
sum = sum+i;}}
else
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= N; j++) {
sum = sum+i+j;}}
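Under the same counts (a sketch, not from the slides), the conditional costs roughly the test plus the more expensive branch:
\[ T(N) \approx 1 + \max(2N,\ 3N^2) = O(N^2). \]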
2.3.3.Asymptotic Analysis
Asymptotic analysis is concerned with how the running time of an algorithm increases
with the size of the input in the limit, as the size of the input increases without bound.
From the terms that we are considering, we'll drop all the terms that grow slowly and
only keep the ones that grow fast as n becomes larger.
It is used to determine the growth rate of the complexity function.
There are five notations used to describe a running time function.
These are:-
I. Big-Oh Notation (O)
II. Big-Omega Notation (Ω)
III. Theta Notation (Θ)
IV. Little-o Notation (o)
V. Little-Omega Notation (ω)
2.3.3.1 Big-Oh Notation
Big-O is the most commonly used notation for specifying asymptotic
complexity, i.e. for estimating the rate of growth of complexity functions.
Big-O notation represents the upper bound of the running time of an
algorithm.
Definition 3: The function f(n) is O(g(n)) if there exist positive
numbers c and N such that f(n) ≤ c.g(n) for all n ≥ N.
n is the size of the data set.
g(n) is a function that is calculated using n as the parameter.
O(g(n)) means that the curve described by g(n) is an upper bound
for the resource needs of a function.
As n increases, f(n) grows no faster than g(n) or in the long
run (for large n) f grows at most as fast as g
It computes the tight upper bound of f(n).
Facts for Big-Oh problems:
1 <= n for all n >= 1
log2(n) <= n for all n >= 2
n <= n*log2(n) for all n >= 2
n <= n^2 for all n >= 1
2^n <= n! for all n >= 4
Theorem on Big-O notation
Theorem 1: k is O(1)
Theorem 2: A polynomial is O(the term containing the highest power of n).
If f(n) is a polynomial of degree d, then f(n) is O(n^d)
In general, f(n) is big-O of the dominant term of f(n).
Theorem 3: k*f(n) is O(f(n))
Constant factors may be ignored
E.g. f(n) = 7n^4 + 3n^2 + 5n + 1000 is O(n^4)
Theorem 4(Transitivity): If f(n) is O(g(n))and g(n) is O(h(n)), then f(n) is
O(h(n)).
Theorem 5: For any base b, log_b(n) is O(log n).
Logarithms of all bases grow at the same rate
Theorem 6: Each of the following functions is big-O of its successors:
k, log_b(n), n, n*log_b(n), n^2, n^3, 2^n, n!, n^n
Finding Asymptotic Complexity:
Examples
• Rules to find Big-O from a given T(n)
– Take highest order term
– Drop lower order terms and constant multiplier
T(n) = 4n^4 + 3n^3 + 2n + 4 = O(n^4)
Example
1. f(n)=10n+5 and g(n)=n. Show that f(n) is O(g(n)).
2. f(n) = 3n^2 + 4n + 1. Show that f(n) = O(n^2).
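One possible choice of constants for each (my own working, not from the slides):
\[ 10n + 5 \le 15n \ \text{for all } n \ge 1, \ \text{so } f(n) = O(n) \ \text{with } c = 15,\ N = 1, \]
\[ 3n^2 + 4n + 1 \le 3n^2 + 4n^2 + n^2 = 8n^2 \ \text{for all } n \ge 1, \ \text{so } f(n) = O(n^2) \ \text{with } c = 8,\ N = 1. \]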
Big-O expresses an upper bound on the growth rate of a function, for
sufficiently large values of n.
An upper bound is the best algorithmic solution that has been found
for a problem.
Finding Big O of given Algorithm: Exercises
1. for (i=1;i<=n;i++)
       cout<<i;
   T(n) = 1+n+1+n+n = 3n+2
   T(n) = O(n)

2. for (i=1;i<=n;i++)
       for (j=1;j<=n;j++)
           cout<<i;
   T(n) = 1+n+1+n+n(1+n+1+n+n) = 3n^2+4n+2
   T(n) = O(n^2)
Exercise
Find Big O of the following algorithm
1. for (i=1;i<=n;i++)
       sum=sum+i;
   for (i=1;i<=n;i++)
       for (j=1;j<=n;j++)
           sum++;

2. if (k==1)
   {
       for (i=1;i<=100;i++)
           for (j=1;j<=1000;j++)
               cout<<i;
   }
   else
   {
       for (i=1;i<=n;i++)
           sum++;
   }
Cont..
3. for (i=1; i<=n; i++)
   {
       if (i<10)
       {
           for (j=1; j<=n; j=j*2)
               cout<<j;
       }
       else
           cout<<i;
   }

4. for (int i=1; i<=N; ++i)
       for (int j=i; j>0; --j)
           cout<<j;
Cont..
7. for (int i=1; i<N; ++i)
       for (int j=0; j<N; j+=1)
       {
           cout<<i*j;
       }

8. int i=0, sum=0;
   while (sum < N)
       sum = sum + i++;
2.3.3.2 Ω, Θ and Little-o Notations
We have seen that big-O notation refers to the smallest upper bound on the rate of
growth of a function.
There is a similar definition for the lower bound, called big-omega (Ω) notation.
Omega notation represents the lower bound of the running time of an algorithm.
Definition 4: The function f(n) is Ω(g(n)) if there exist positive numbers c and N such
that f(n) ≥ c.g(n) for all n ≥ N.
In other words, we choose the smallest upper bound (big-O) function and the largest
lower bound (big Ω)
As n increases f(n) grows no slower than g(n) or in the long run (for large n) f grows at
least as fast as g
It computes the tight lower bound of f(n).
Describes the best case analysis
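For example (my own illustration), f(n) = 3n^2 + 4n + 1 is Ω(n^2), since
\[ 3n^2 + 4n + 1 \ge 3n^2 \ \text{for all } n \ge 1, \ \text{i.e. } c = 3,\ N = 1. \]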
Continued
For some algorithms (but not all), the lower and upper bounds on the
rate of growth will be the same.
In this case, a third notation exists for specifying asymptotic
complexity, called theta (Θ) notation.
Definition 5: The function f(n) is Θ(g(n)) if there exist positive
numbers c1, c2 and N such that c1.g(n) ≤ f(n) ≤ c2.g(n) for all n ≥ N.
This definition states that f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is
Ω(g(n)). In other words, the lower and upper bounds on the rate of
growth are the same.
For the same example, f(n) = n^2 + 5n, we can see that g(n) = n^2 satisfies
Definition 5, so the function n^2 + 5n is Θ(n^2).
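Explicit constants for this example (my own working):
\[ n^2 \le n^2 + 5n \le 6n^2 \ \text{for all } n \ge 1, \ \text{so } c_1 = 1,\ c_2 = 6,\ N = 1. \]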
Continued
Theta notation encloses the function from above and below, since it
represents both the upper and the lower bound of the running time of
an algorithm.
As n increases, f(n) grows as fast as g(n)
It computes the tight optimal bound of f(n).
Describes the average case analysis.
Little-o Notation
Big-O notation may or may not be asymptotically tight, for example:
2n^2 = O(n^2) (asymptotically tight)
2n^2 = O(n^3) (not asymptotically tight)
f(n)=o(g(n)) means for all c>0 there exists some k>0 such that
f(n)<c.g(n) for all n>=k. Informally, f(n)=o(g(n)) means f(n) becomes
insignificant relative to g(n) as n approaches infinity.
As n increases g(n) grows strictly faster than f(n).
It computes the non-tight upper bound of f(n)
Describes the worst case analysis
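For example (my own illustration), 2n = o(n^2): for any c > 0,
\[ 2n < c \cdot n^2 \ \text{whenever } n > 2/c, \]
so the definition is satisfied by any k > 2/c.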
Little-Omega Notation (ω)
Little-omega (ω) notation is to big-omega (Ω) notation as little-o
notation is to Big-Oh notation.
We use ω notation to denote a lower bound that is not
asymptotically tight.
Formal Definition: f(n) = ω(g(n)) if for all c > 0 there exists some k > 0 such
that c.g(n) < f(n) for all n >= k.
Example: 2n^2 = ω(n).
OO Notation
Definition 7: The function f (n) is OO (g (n)) if it is O (g(n)) but the
constant c is too large to be of practical significance.
2.4. Complexity Classes
A number of complexity classes of algorithms exist, and some of the
more common ones are illustrated in Figure 1 below.
Figure 1 – A comparison of various complexity classes
Measures of Times
Best, average and worst case complexity
Worst case analysis is used to find an upper bound on algorithm performance for large
problems (large n).
We must know the case that causes maximum number of operations to be executed.
Average Case Analysis
In average case analysis, we take all possible inputs and calculate the average
computing time over all of them.
In this case we calculate the region between the upper and lower bounds of the running
time of the algorithm.
In this case the number of executed operations is neither minimum nor maximum.
Best Case Analysis
In the best case analysis, we calculate lower bound on running time of an algorithm.
We must know the case that causes minimum number of operations to be executed.
It describes the behavior of algorithm under optimal conditions.
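As a concrete illustration (a minimal sketch of my own, not from the slides), consider linear search in an unsorted array: the best case finds the key at the first position, the worst case occurs when the key is absent or at the last position, and the average case assumes the key is equally likely to be anywhere.

#include <iostream>

// Linear search: returns the index of key in arr[0..n-1], or -1 if it is absent.
// Best case:    key at index 0              -> 1 comparison,          O(1)
// Worst case:   key absent or at index n-1  -> n comparisons,         O(n)
// Average case: key equally likely anywhere -> about n/2 comparisons, O(n)
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main() {
    int data[] = {7, 3, 9, 4, 1};
    std::cout << linearSearch(data, 5, 7) << std::endl;  // best case: index 0
    std::cout << linearSearch(data, 5, 8) << std::endl;  // worst case: -1
    return 0;
}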
2.5. Relational Properties of the Asymptotic
Notations
• Transitivity:
– f(n) = Θ(g(n)) and g(n) = Θ(h(n)) => f(n) = Θ(h(n))
– Same for O and Ω
• Reflexivity:
– f(n) = Θ(f(n))
– Same for O and Ω
• Symmetry:
– f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
• Transpose symmetry:
– f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
2.6. Amortized Analysis
• Amortized analysis considers both the cheap and expensive
operations performed by an algorithm.
• Amortized analysis computes the average time required to perform a
sequence of n operations on a data structure.
• Example. Priority queue.
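A common illustration (my own sketch; the priority-queue example itself is not worked out here) is the doubling dynamic array: most insertions cost 1 unit, an occasional insertion triggers a full copy, yet the amortized cost per insertion is still O(1).

#include <iostream>

// Count how many element copies a doubling dynamic array performs while n
// elements are inserted one by one (assumed scenario for illustration).
int main() {
    long long n = 1000000;              // number of insertions (assumed)
    long long capacity = 1, size = 0, copies = 0;
    for (long long k = 0; k < n; k++) {
        if (size == capacity) {         // array is full: double it and copy
            copies += size;             // every existing element is copied once
            capacity *= 2;
        }
        size++;                         // the insertion itself costs 1 unit
    }
    // Total copies < 2n, so total work < 3n  =>  amortized O(1) per insertion.
    std::cout << "insertions = " << n << ", copies = " << copies << std::endl;
    return 0;
}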
More Exercises
1. An algorithm takes 0.5 ms for input size 100. How long will it take for input size 1000 if the running
time is the following (assume low-order terms are negligible)
(a) linear
(b) O(N log N )
(c) quadratic
(d) Cubic
(e) Exponential
2. An algorithm takes 0.5 s for input size 100. How large a problem can be solved in 1
min if the running time is the following (assume low-order terms are negligible).
(a) linear
(b) O(N log N )
(c) quadratic
(d) Cubic
More Exercises
3. for(int i = 10000; i > 0; i--)
sum++;
4. int n=50;
for(int i=0 ;i<n;i++)
sum++;
5. for(int i=0 ;i<50*n;i++)
sum++;
6. for(int i=0 ;i<n*n;i++)
sum++;
7. for(int i = 1; i <=n/3; i++)
for(int j = 1; j <= n; j++)
sum++;
More Exercises
8. for(int i = 10000; i > 0; i--)
for(int i=0 ;i<n;i++)
sum++;
9. for(int i=1 ;i<n;i*=2)
for(int i=0 ;i<1000;i++)
sum++;
10. for(int i=1 ;i<n;i*=2)
for(int i=0 ;i<n;i++)
sum++;
11. for(int i = 0; i < n; i++)
sum++;
for(int j = 0; j < n; j++)
sum++;
More Exercises
12. for(int i = 1; i < n; i=i*3)
sum++;
for(int j = 0; j < n*n; j++)
sum++;
13. An algorithm ABC takes 10 sec for input sizes n=2, n=6, and n=12,
whereas an algorithm CDE takes 4 sec for input size 2, 64 sec for input size
6, and 4096 sec for input size 12.
I. What is the running complexity of algorithm ABC?
II. What is the running complexity of algorithm CDE?
14. What is the running complexity of an algorithm to display a 2-D
array (square matrix) for input size N?
15. An algorithm takes 4 seconds for an input size of 10. How large a
problem can be solved in 25 seconds if the running time is O(n^2)?
More Exercises
16. What is the running time of the following case:
A program executes statements based on the value of m; if m is odd it
displays the elements of an array of size N, otherwise it displays a
message that m is an even number.
17. Multiplying two square matrices
18. True/False
5n + 100n^3 = O(n^4)