
Bayero University, Kano

CSC2204: Analysis of Algorithms

Lecture 2: Asymptotic Notation

Instructors:
Dr. Muhammad Yusuf
Baffa Sani
Bilal Lawal Alhassan
Faruk Yusha’u

Department of Computer Science


Asymptotic Notation: Understanding Algorithm Complexity

Asymptotic Notation
Introduction
Asymptotic notation is a way of expressing the time or space complexity of an algorithm
in terms of mathematical functions. The most commonly used asymptotic notations are
Big O, Ω, and Θ.
Big O Notation (O): It represents the upper bound of an algorithm’s running time.
In simpler terms, it gives the worst-case time complexity of an algorithm.

- Example: If an algorithm has a time complexity of O(n²), it means the running time grows quadratically with the size of the input.

Ω Notation (Ω): This provides the lower bound of an algorithm’s running time. It
represents the best-case time complexity.

- Example: If an algorithm has a time complexity of Ω(n), it means the running time grows at least linearly with the size of the input.

Θ Notation (Θ): This notation provides both upper and lower bounds, giving a
tight bound on the algorithm’s running time.

- Example: If an algorithm has a time complexity of Θ(n), it means the running time grows linearly with the size of the input, and the upper and lower bounds match.

These notations are useful for analyzing the efficiency of algorithms without getting
bogged down by hardware-specific details. They allow us to make general statements
about an algorithm’s performance as the input size grows.
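The contrast between linear and quadratic growth can be made concrete by counting basic operations. The sketch below is an illustration of my own (not from the lecture): two small functions whose operation counts grow as n and n², respectively.

```python
# Illustrative sketch: counting basic operations to compare growth rates.

def linear_ops(n):
    """One pass over n items: the count grows linearly, Θ(n)."""
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_ops(n):
    """A nested pass over n items: the count grows quadratically, Θ(n²)."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

for n in (10, 100, 1000):
    print(n, linear_ops(n), quadratic_ops(n))
```

Doubling n doubles the linear count but quadruples the quadratic count, which is exactly the behaviour the notations summarize.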

Big O Notation
Big O notation is a mathematical notation used in computer science to describe the upper
bound or worst-case performance of an algorithm. It provides a way to express how the
running time or space requirements of an algorithm grow in relation to the size of the
input.
Key Points:

- Definition: Big O notation is defined as follows:

  O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

  In simpler terms, O(g(n)) represents the set of functions that grow no faster than g(n) for sufficiently large input sizes.
- Worst-Case Time Complexity: Big O notation is often used to describe the worst-case scenario for an algorithm. It gives an upper limit on how the running time of the algorithm increases as the input size grows.

- Simplified Notation: Big O notation focuses on the dominant term of the function, ignoring constant factors and lower-order terms. For example, O(2n² + 3n + 1) is simplified to O(n²) because the quadratic term dominates for large values of n.
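The definition can be checked numerically for this example. The snippet below is my own illustration: with the witnesses c = 3 and n0 = 4 (one valid choice among many), f(n) = 2n² + 3n + 1 satisfies f(n) ≤ c·n², confirming f(n) is O(n²).

```python
# Checking the Big O definition for f(n) = 2n² + 3n + 1 with g(n) = n².
# The constants c = 3 and n0 = 4 are one valid choice of witnesses.

def f(n):
    return 2 * n**2 + 3 * n + 1

c, n0 = 3, 4
# The inequality 0 <= f(n) <= c * n**2 must hold for every n >= n0.
assert all(0 <= f(n) <= c * n**2 for n in range(n0, 10_000))
print("f(n) <= 3·n² holds for all tested n >= 4, so f(n) is O(n²)")
```

A finite check like this is not a proof, but it mirrors the definition directly: pick constants, then verify the inequality from n0 onward.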
Common Patterns in Big O Notation:

- Constant Time (O(1)): The running time does not depend on the input size.

- Logarithmic Time (O(log n)): The running time grows logarithmically as the input size increases. Examples include binary search.

- Linear Time (O(n)): The running time grows linearly with the input size. Examples include iterating through an array.

- Linearithmic Time (O(n log n)): The running time grows as n log n. Examples include efficient sorting algorithms like mergesort.

- Quadratic Time (O(n²)): The running time grows quadratically with the input size. Examples include nested loops over the input.

- Cubic Time (O(n³)): The running time grows cubically. Examples include algorithms with three nested loops.

- Exponential Time (O(2ⁿ)): The running time doubles with each additional input element. Examples include brute-force combinatorial problems.

- Factorial Time (O(n!)): The running time grows factorially with the input size. Examples include solving the traveling salesman problem via brute force.
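Two of the patterns above, O(n) and O(log n), can be compared side by side. The sketch below (my own illustration) implements linear search and binary search, each instrumented to count comparisons: on 1024 sorted items, linear search may need up to 1024 comparisons while binary search needs about log₂(1024) = 10.

```python
# Linear search: O(n) — may examine every element in the worst case.
def linear_search(items, target):
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

# Binary search: O(log n) — requires a sorted list, halves the range each step.
def binary_search(sorted_items, target):
    comparisons = 0
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))
print(linear_search(data, 1023))  # worst case: ~n comparisons
print(binary_search(data, 1023))  # ~log2(n) comparisons
```

The gap widens quickly: at a million elements, linear search may need a million comparisons where binary search needs about twenty.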

Ω Notation
Ω notation (Ω) is a mathematical notation used in computer science to describe the
lower bound or best-case performance of an algorithm. It provides information about
the minimum growth rate of an algorithm’s running time with respect to the size of the
input.
Key Points:

- Definition: Ω notation is defined as follows:

  Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

  In simpler terms, Ω(g(n)) represents the set of functions that grow at least as fast as g(n) for sufficiently large input sizes.

- Best-Case Time Complexity: Ω notation is often used to describe the best-case scenario for an algorithm.

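A standard example of a best-case bound, sketched below as my own illustration: linear search takes only one step when the target happens to be the first element, so its best case is Ω(1), even though its worst case is O(n).

```python
# Best case vs worst case for linear search: Ω(1) best case, O(n) worst case.

def find_first(items, target):
    """Return (index, steps taken), or (-1, steps) if target is absent."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

data = [7, 3, 9, 1, 5]
print(find_first(data, 7))  # best case: found at index 0 in 1 step
print(find_first(data, 5))  # worst case here: found last, len(data) steps
```

The same algorithm thus carries two different bounds: Ω describes the floor of its behaviour, O the ceiling.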
Θ Notation
Θ notation (Θ) is another mathematical notation used in computer science to describe
both the upper and lower bounds of an algorithm’s running time. It provides a more
precise characterization of the growth rate of an algorithm by specifying a tight range in
which the running time lies.
Key Points:

- Definition: Θ notation is defined as follows:

  Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

  In simpler terms, Θ(g(n)) represents the set of functions that grow at the same rate as g(n) for sufficiently large input sizes.
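As with Big O, the two-sided Θ definition can be checked numerically for a concrete function. The example below is my own: f(n) = 2n + 3 is Θ(n), witnessed by c1 = 2, c2 = 3, and n0 = 3 (again, one valid choice among many).

```python
# Checking the Θ definition for f(n) = 2n + 3 with g(n) = n.
# Witnesses: c1 = 2, c2 = 3, n0 = 3, so that c1·n <= f(n) <= c2·n for n >= n0.

def f(n):
    return 2 * n + 3

c1, c2, n0 = 2, 3, 3
# Both inequalities must hold simultaneously for every n >= n0.
assert all(c1 * n <= f(n) <= c2 * n for n in range(n0, 10_000))
print("2n <= 2n + 3 <= 3n holds for all tested n >= 3, so f(n) is Θ(n)")
```

Because the same g(n) bounds f(n) from above and below, Θ(n) is a tighter statement than either O(n) or Ω(n) alone.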

Why Big O?
Big O notation is often preferred and widely used in algorithm analysis for several practical reasons:

- Simplicity in Analysis: Big O notation provides a simple and high-level view of an algorithm's efficiency by focusing on the upper bound or worst-case scenario.

- Abstraction from Constant Factors and Lower-Order Terms: Big O notation abstracts away constant factors and lower-order terms, providing a simplified representation that emphasizes the dominant factor affecting algorithmic complexity.

- Ease of Communication: Big O notation facilitates communication about algorithmic complexity among researchers, developers, and educators.

- Worst-Case Scenario Focus: In many practical scenarios, understanding the worst-case time complexity is crucial for designing reliable systems.

- Useful for Large Input Sizes: Big O notation is particularly useful when dealing with large input sizes, where the dominant term in the complexity function becomes the primary factor influencing performance.

- Tool for Algorithm Selection: When choosing an algorithm for a specific task, Big O notation helps developers make informed decisions about trade-offs between time and space complexity.
