Lecture 2 - I
Instructors:
Dr. Muhammad Yusuf
Baffa Sani
Bilal Lawal Alhassan
Faruk Yusha’u
Asymptotic Notation
Introduction
Asymptotic notation is a way of expressing the time or space complexity of an algorithm
in terms of mathematical functions. The most commonly used asymptotic notations are
Big O, Ω, and Θ.
Big O Notation (O): It represents the upper bound of an algorithm’s running time.
In simpler terms, it gives the worst-case time complexity of an algorithm.
Ω Notation (Ω): This provides the lower bound of an algorithm’s running time. It
represents the best-case time complexity.
Θ Notation (Θ): This notation provides both upper and lower bounds, giving a
tight bound on the algorithm’s running time.
These notations are useful for analyzing the efficiency of algorithms without getting
bogged down by hardware-specific details. They allow us to make general statements
about an algorithm’s performance as the input size grows.
Big O Notation
Big O notation is a mathematical notation used in computer science to describe the upper
bound or worst-case performance of an algorithm. It provides a way to express how the
running time or space requirements of an algorithm grow in relation to the size of the
input.
Key Points:
Definition: Big O notation is defined as follows:
O(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c · g(n) for all n ≥ n₀}
In simpler terms, O(g(n)) represents the set of functions that grow no faster than
g(n) for sufficiently large input sizes.
Worst-Case Time Complexity: Big O notation is often used to describe the
worst-case scenario for an algorithm. It gives an upper limit on how the running
time of the algorithm increases as the input size grows.
Simplified Notation: Big O notation focuses on the dominant term of the function, ignoring constant factors and lower-order terms. For example, O(2n² + 3n + 1) is simplified to O(n²) because the quadratic term dominates for large values of n.
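The simplification can be checked numerically. The sketch below (illustrative Python, not part of the lecture; the constants c = 3 and n₀ = 4 are one valid choice, not the only one) verifies that f(n) = 2n² + 3n + 1 stays below c · n² once n reaches n₀:

```python
# Numeric check (not a proof) that f(n) = 2n^2 + 3n + 1 is O(n^2):
# we exhibit constants c and n0 with f(n) <= c * n^2 for all n >= n0.

def f(n):
    return 2 * n**2 + 3 * n + 1

c, n0 = 3, 4  # candidate constants; many other choices also work

# Check the inequality over a large range of n.
assert all(f(n) <= c * n**2 for n in range(n0, 10_000))
```

A finite check cannot replace the algebraic argument (n² ≥ 3n + 1 for all n ≥ 4), but it makes the role of the constants c and n₀ in the definition concrete.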
Common Patterns in Big O Notation:
Constant Time (O(1)): The running time does not depend on the input size.
Logarithmic Time (O(log n)): The running time grows logarithmically as the
input size increases. Examples include binary search.
Linear Time (O(n)): The running time grows linearly with the input size. Examples include iterating through an array.
Linearithmic Time (O(n log n)): The running time grows as n log n. Examples
include efficient sorting algorithms like mergesort.
Quadratic Time (O(n²)): The running time grows quadratically with the input size. Examples include nested loops over the input.
Cubic Time (O(n³)): The running time grows cubically. Examples include algorithms with three nested loops.
Exponential Time (O(2ⁿ)): The running time doubles with each additional input
element. Examples include brute-force combinatorial problems.
Factorial Time (O(n!)): The running time grows factorially with the input size.
Examples include solving the traveling salesman problem via brute force.
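Several of the patterns above can be illustrated with small Python functions (a sketch written for this handout; the function names are illustrative):

```python
def constant_first(items):
    # O(1): a single indexed lookup, regardless of the list's size.
    return items[0]

def binary_search(sorted_items, target):
    # O(log n): each iteration halves the remaining search range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

def linear_sum(items):
    # O(n): touches every element exactly once.
    total = 0
    for x in items:
        total += x
    return total

def count_pairs(items):
    # O(n^2): nested loops over the input.
    pairs = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs += 1
    return pairs
```

Counting how the loop bodies execute as the input grows is exactly what the Big O classes above summarize: one lookup, about log₂ n halvings, n additions, and roughly n²/2 pair visits.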
Ω Notation
Ω notation (Ω) is a mathematical notation used in computer science to describe the
lower bound or best-case performance of an algorithm. It provides information about
the minimum growth rate of an algorithm’s running time with respect to the size of the
input.
Key Points:
Definition: Ω notation is defined as follows:
Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ c · g(n) ≤ f(n) for all n ≥ n₀}
In simpler terms, Ω(g(n)) represents the set of functions that grow at least as fast
as g(n) for sufficiently large input sizes.
Best-Case Time Complexity: Ω notation is often used to describe the best-case
scenario for an algorithm.
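A standard illustration (sketched here in Python; the lecture names no specific algorithm) is linear search: in the best case the target sits at index 0 and the loop does one comparison, so the best-case running time is Ω(1), while the worst case scans all n elements:

```python
def linear_search(items, target):
    # Best case: target is at index 0 -> one comparison (Omega(1)).
    # Worst case: target is absent -> n comparisons (O(n)).
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1
```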
Θ Notation
Θ notation (Θ) is another mathematical notation used in computer science to describe
both the upper and lower bounds of an algorithm’s running time. It provides a more
precise characterization of the growth rate of an algorithm by specifying a tight range in
which the running time lies.
Key Points:
Definition: Θ notation is defined as follows:
Θ(g(n)) = {f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀}
In simpler terms, Θ(g(n)) represents the set of functions that grow at the same rate
as g(n) for sufficiently large input sizes.
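A function whose running time is Θ(n) does the same amount of work on every input of size n, so both bounds hold at once. A minimal sketch (illustrative Python, not from the lecture):

```python
def maximum(items):
    # Examines every element exactly once on EVERY input of size n:
    # exactly n - 1 comparisons, no early exit. The running time is
    # therefore both O(n) and Omega(n), i.e. Theta(n).
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best
```

Contrast this with linear search, whose best case (Ω(1)) and worst case (O(n)) differ, so no single Θ bound describes its running time over all inputs of size n.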
Why Big O?
Big O notation is often preferred and widely used in algorithm analysis for several practical reasons:
Useful for Large Input Sizes: Big O notation is particularly useful when dealing
with large input sizes, where the dominant term in the complexity function becomes
the primary factor influencing performance.
Tool for Algorithm Selection: When choosing an algorithm for a specific task,
Big O notation helps developers make informed decisions about trade-offs between
time and space complexity.