INTRODUCTION
To prove that n! = O(n^n), we need to show that there exist a constant C > 0 and a non-negative integer n_0 such that for all n ≥ n_0,
n! ≤ C · n^n.
By definition,
n! = 1 · 2 · 3 ⋯ n,
n^n = n · n · n ⋯ n (n times).
Comparing factor by factor:
1 · 2 · 3 · … · n ≤ n · n · n · … · n (n times) = n^n.
This inequality holds because each factor in the product for n! is less than or equal to n. Therefore,
n! ≤ n^n.
Now, we need to express this in the form required by the Big-O notation. Specifically, we need to find a
constant C such that
n! ≤ C · n^n for all n ≥ n_0.
From the inequality n! ≤ n^n, we can take C = 1 and n_0 = 1 (or any positive integer). Thus,
n! ≤ 1 · n^n for all n ≥ 1.
This satisfies the definition of Big-O notation. Hence, we have shown that:
n! = O(n^n).
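As a quick sanity check (not a substitute for the proof), a short Python loop can confirm the inequality with the chosen constants C = 1 and n_0 = 1 for small n:

python
import math

# Sanity check of n! <= n^n with C = 1, n0 = 1 (small n only;
# the inequality itself is proved factor-by-factor above).
for n in range(1, 21):
    assert math.factorial(n) <= n ** n
print("n! <= n^n verified for n = 1..20")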
Why is the recursion tree method better than the substitution method for solving a recurrence relation?
Find the asymptotic upper bound of the following recurrence relation with the help of the recursion tree method: T(n) = T(n/4) + T(n/2) + Θ(n^2)
The recursion tree method is often preferred over the substitution method for solving recurrence
relations for several reasons:
1. Visualization: The recursion tree method provides a visual representation of the recurrence,
making it easier to understand how the function is broken down into subproblems and how these
subproblems combine to form the original problem.
2. Work Distribution: It clearly shows the distribution of work at each level of recursion, helping to
identify the dominant terms more intuitively.
3. Pattern Recognition: Recursion trees often reveal patterns that can be used to sum the work
done across all levels, leading to easier identification of the total time complexity.
4. Handling Complex Recurrences: For more complex recurrences, especially those with multiple
recursive calls and non-trivial work at each step, the recursion tree method can be more
straightforward and less error-prone compared to finding an appropriate guess for substitution.
Solving the Recurrence T(n) = T(n/4) + T(n/2) + Θ(n^2) Using the Recursion Tree Method
To find the asymptotic upper bound of the recurrence T(n) = T(n/4) + T(n/2) + Θ(n^2), we will use the recursion tree method.
1. Root Level: The root does Θ(n^2) work and spawns two subproblems, T(n/4) and T(n/2).
2. Next Level:
For T(n/4), it breaks into T(n/16) and T(n/8), with work Θ((n/4)^2) = Θ(n^2/16).
For T(n/2), it breaks into T(n/8) and T(n/4), with work Θ((n/2)^2) = Θ(n^2/4).
3. General Pattern:
At the k-th level, the tree contains subproblems of sizes between n/4^k and n/2^k; a node handling a subproblem of size m contributes Θ(m^2) work.
4. Work per Level:
Level 0: The root has work Θ(n^2).
Level 1: Two nodes with work n^2/16 and n^2/4, summing to (1/16 + 1/4) · n^2 = (5/16) · n^2.
Level 2: Four nodes with work summing to n^2/256 + n^2/64 + n^2/64 + n^2/16 = (25/256) · n^2 = (5/16)^2 · n^2.
General Level k: The level sum is (5/16)^k · n^2, so the per-level work decreases geometrically.
5. Total Work:
Since the per-level contributions form a geometric series with ratio 5/16 < 1, the total is dominated by the root, regardless of the tree's depth (which is at most log_2 n):
T(n) = Σ_{k=0}^{∞} (5/16)^k · Θ(n^2) = Θ(n^2) · 1/(1 − 5/16) = Θ(n^2).
Therefore, the asymptotic upper bound for the recurrence T(n) = T(n/4) + T(n/2) + Θ(n^2) is
T(n) = O(n^2),
and this bound is tight, since the root alone already costs Θ(n^2).
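A numeric sketch supports the geometric-series argument. Assuming the Θ(n^2) term is exactly n^2 and T(1) = 1 (both illustrative choices), the ratio T(n)/n^2 should level off near the series limit 16/11 ≈ 1.45:

python
from functools import lru_cache

# Memoized evaluation of T(n) = T(n/4) + T(n/2) + n^2 with T(1) = 1.
# The geometric-series analysis predicts T(n)/n^2 -> 1/(1 - 5/16) = 16/11.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n // 4) + T(n // 2) + n * n

for n in (2**10, 2**14, 2**18):
    print(n, T(n) / n**2)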
State the Master Theorem and find the time complexity for the recurrence relation: T(n) = 2T(n/4) + √n.
Master Theorem
The Master Theorem provides a straightforward way to determine the asymptotic behavior of
recurrences of the form:
T(n) = a·T(n/b) + f(n),
where a ≥ 1 and b > 1. The theorem considers three cases based on the relationship between f(n) and n^{log_b a}:
1. Case 1: If f(n) = O(n^c) for some c < log_b a, then
T(n) = Θ(n^{log_b a}).
2. Case 2: If f(n) = Θ(n^{log_b a}), then
T(n) = Θ(n^{log_b a} · log n).
3. Case 3: If f(n) = Ω(n^c) for some c > log_b a, and a·f(n/b) ≤ k·f(n) for some k < 1 and sufficiently large n, then
T(n) = Θ(f(n)).
Applying the theorem to T(n) = 2T(n/4) + √n:
a = 2
b = 4
f(n) = √n
Compute the critical exponent:
log_b a = log_4 2 = log 2 / log 4 = log 2 / (2 log 2) = 1/2.
So f(n) = √n = n^{1/2} and n^{log_b a} = n^{1/2}.
Since f(n) = Θ(n^{1/2}) = Θ(n^{log_b a}), we are in Case 2 of the Master Theorem.
Conclusion
By Case 2, T(n) = Θ(n^{log_b a} · log n) = Θ(√n · log n).
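A small numeric sketch makes the Case 2 conclusion concrete. Assuming f(n) is exactly √n and T(1) = 1 (illustrative choices, not part of the theorem), the ratio T(n)/(√n · log_2 n) should settle near a constant (about 1/2 under these choices):

python
import math
from functools import lru_cache

# Memoized evaluation of T(n) = 2 T(n/4) + sqrt(n) with T(1) = 1.
# Case 2 predicts T(n) = Theta(sqrt(n) log n); the ratio below tends to ~0.5.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 4) + math.sqrt(n)

for n in (4**5, 4**8, 4**11):
    print(n, T(n) / (math.sqrt(n) * math.log2(n)))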
Write the significance of different asymptotic notations (Big-O, Big-omega, Big-theta) in detail and
also differentiate.
Asymptotic notations are mathematical tools used to describe the behavior of functions as the input size
grows. These notations provide a way to classify algorithms according to their running time or space
requirements in the limit, which helps in analyzing and comparing the efficiency of algorithms.
1. Big-O Notation (O )
Significance:
Big-O notation describes an upper bound on the time complexity or space complexity of an
algorithm.
It provides a worst-case scenario for how an algorithm performs as the input size (n) grows.
It helps in determining the scalability of an algorithm.
Definition:
A function f(n) is O(g(n)) if there exist positive constants c and n_0 such that for all n ≥ n_0:
f(n) ≤ c · g(n).
Example:
If an algorithm has a running time T(n) = 3n^2 + 5n + 2, then T(n) = O(n^2).
Interpretation:
The algorithm's running time will not grow faster than c · n^2 for some constant c when n is large enough.
2. Big-Omega Notation (Ω)
Significance:
Big-Omega notation describes a lower bound on the time complexity or space complexity of an
algorithm.
It provides a best-case scenario or a minimum time an algorithm will take for large input sizes.
It is useful in proving the efficiency of an algorithm and in establishing that a certain amount of
resources is necessary for solving a problem.
Definition:
A function f(n) is Ω(g(n)) if there exist positive constants c and n_0 such that for all n ≥ n_0:
f(n) ≥ c · g(n).
Example:
If an algorithm has a running time T(n) = 3n^2 + 5n + 2, then T(n) = Ω(n^2).
Interpretation:
The algorithm's running time will not be less than c · n^2 for some constant c when n is large enough.
3. Big-Theta Notation (Θ)
Significance:
Big-Theta notation provides a tight bound on the time complexity or space complexity of an
algorithm.
It describes both the upper and lower bounds, giving an exact asymptotic behavior of an
algorithm.
It is used to describe the average-case complexity and ensures that an algorithm's running time is
well-characterized.
Definition:
A function f(n) is Θ(g(n)) if there exist positive constants c_1, c_2, and n_0 such that for all n ≥ n_0:
c_1 · g(n) ≤ f(n) ≤ c_2 · g(n).
Example:
If an algorithm has a running time T(n) = 3n^2 + 5n + 2, then T(n) = Θ(n^2).
Interpretation:
The algorithm's running time grows at the same rate as n^2, bounded both above and below by constant multiples of n^2 for large n.
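To make the Θ definition concrete, the constants c_1 = 3, c_2 = 4, and n_0 = 6 are one valid hand-picked choice (an assumption, not the only witness) for T(n) = 3n^2 + 5n + 2:

python
# Check that 3*n^2 <= T(n) <= 4*n^2 for all tested n >= 6,
# witnessing T(n) = Theta(n^2) with c1 = 3, c2 = 4, n0 = 6.
def T(n):
    return 3 * n**2 + 5 * n + 2

c1, c2, n0 = 3, 4, 6
assert all(c1 * n**2 <= T(n) <= c2 * n**2 for n in range(n0, 10_000))
print("3n^2 <= T(n) <= 4n^2 holds for all tested n >= 6")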
Differences
1. Purpose: Big-O bounds a function from above, Big-Omega bounds it from below, and Big-Theta bounds it from both sides.
2. Bounds: O gives an upper (worst-case) bound, Ω a lower (best-case) bound, and Θ a tight two-sided bound.
3. Usage: O guarantees an algorithm is no slower than a given rate, Ω shows it is no faster, and Θ characterizes its growth exactly.
4. Tightness: O and Ω bounds may be loose (e.g., n = O(n^2) is true but weak), whereas Θ holds only when the bound is tight.
Examples of Usage
Sorting Algorithms: Merge sort runs in Θ(n log n) in all cases, while quicksort is O(n^2) in the worst case but Θ(n log n) on average.
Searching Algorithms: Linear search is O(n); binary search on a sorted array is O(log n), with a best case of Ω(1) when the middle element matches.
By understanding these notations, computer scientists and engineers can better analyze and compare
the efficiency of algorithms, leading to more informed decisions in algorithm design and
implementation.
Find the time complexity for the following recurrence : T(n) = 2T(n^0.5) + log(n).
To find the time complexity of the recurrence T(n) = 2T(√n) + log(n), we can use the recursion tree method or analyze it intuitively based on the pattern of recursive calls. Here, we'll use the recursion tree method to understand how the work is distributed across the levels of recursion.
1. Root: The root represents T(n) and does log(n) work, then makes two recursive calls on subproblems of size √n.
2. Next Level: Each of the two subproblems does log(√n) = (1/2) · log(n) work, so the level total is 2 · (1/2) · log(n) = log(n).
3. General Pattern:
At level k, each subproblem has size n^{1/2^k}, and there are 2^k subproblems at this level.
The cost at level k is 2^k · log(n^{1/2^k}) = 2^k · (1/2^k) · log(n) = log(n).
4. Number of Levels: The recursion bottoms out when the subproblem size n^{1/2^k} drops to a constant (say 2), i.e. when 2^k ≈ log n, which gives about log log n levels.
Therefore, the total work done by the recursion tree can be approximated by multiplying the cost per level by the number of levels:
T(n) = Σ_{k=0}^{log log n} log(n) = log(n) · (log log n + 1).
Conclusion
T(n) = Θ(log n · log log n).
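A numeric sketch (assuming integer square roots and a base case T(n) = 1 for n ≤ 2, both illustrative choices) shows the ratio T(n)/(log n · log log n) flattening toward a constant:

python
import math
from functools import lru_cache

# Memoized evaluation of T(n) = 2 T(sqrt(n)) + log2(n) with T(n) = 1 for n <= 2.
# The analysis predicts T(n) = Theta(log n * log log n).
@lru_cache(maxsize=None)
def T(n):
    if n <= 2:
        return 1
    return 2 * T(math.isqrt(n)) + math.log2(n)

for n in (2**16, 2**64, 2**256):
    print(T(n) / (math.log2(n) * math.log2(math.log2(n))))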
Suppose we have a recurrence relation T(n) = aT(n/b) + f(n). Then show that
(a) If af(n/b) = kf(n) for some constant k < 1, then T(n) = O(f(n)).
(b) If af(n/b) = kf(n) for some constant k > 1, then T(n) = O(n^(log_b(a))).
(c) If af(n/b) = kf(n) for some constant k = 1, then T(n) = O(f(n)log(n)).
(a) If af (n/b) = kf (n) for some constant k < 1, then T (n) = O(f (n)).
Proof:
Given a·f(n/b) = k·f(n) for k < 1, unroll the recurrence as a recursion tree.
1. Level Costs: The root costs f(n), and level 1 costs a·f(n/b) = k·f(n). Applying the relation repeatedly, the cost at level i is a^i · f(n/b^i) = k^i · f(n).
2. Geometric Sum: Summing over all levels,
T(n) ≤ f(n) · Σ_{i=0}^{∞} k^i = f(n) · 1/(1 − k),
and the series converges because k < 1.
3. Conclusion: The root term dominates, so T(n) = O(f(n)).
(b) If af(n/b) = kf(n) for some constant k > 1, then T(n) = O(n^{log_b a}).
Proof:
1. The recursion tree method helps us visualize this situation. At the top level the cost is f(n), and by the relation a·f(n/b) = k·f(n), the cost at level i is a^i · f(n/b^i) = k^i · f(n).
2. The tree has about log_b n levels, so the total cost is the geometric series
T(n) = f(n) · Σ_{i=0}^{log_b n} k^i.
3. Since af(n/b) = kf(n) and k > 1, this geometric series is increasing and is dominated by its last term: the work done at the leaves.
4. The tree has a^{log_b n} = n^{log_b a} leaves, each doing constant work, so the leaf level costs Θ(n^{log_b a}). Hence
T(n) = O(n^{log_b a}).
(c) If af (n/b) = kf (n) for some constant k = 1, then T (n) = O(f (n) log n).
Proof:
1. With k = 1, the relation a·f(n/b) = f(n) applied repeatedly gives a^i · f(n/b^i) = f(n), so each level of the recursion tree has a constant work contribution of f(n).
2. The tree has Θ(log_b n) levels before reaching the base case.
3. Summing the per-level cost over all levels:
T(n) = f(n) · Θ(log_b n) = O(f(n) log n).
Summary
(a) If af(n/b) = kf(n) for k < 1, then T(n) = O(f(n)): the root dominates.
(b) If af(n/b) = kf(n) for k > 1, then T(n) = O(n^{log_b a}): the leaves dominate.
(c) If af(n/b) = kf(n) for k = 1, then T(n) = O(f(n) log n): every level contributes equally.
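The three regimes can be seen numerically. In the sketch below, the per-level cost relative to f(n) is k^i; the values of k are arbitrary illustrations, not tied to a specific algorithm:

python
# Per-level costs k^i relative to f(n), for representatives of the three regimes.
def level_costs(k, levels=20):
    return [k**i for i in range(levels)]

for k in (0.5, 1.0, 2.0):
    costs = level_costs(k)
    print(f"k = {k}: root = {costs[0]}, deepest = {costs[-1]:.3g}, "
          f"total = {sum(costs):.3g}")
# k < 1: total bounded by 1/(1-k)   -> root dominates, T(n) = O(f(n))
# k = 1: total = number of levels   -> T(n) = O(f(n) log n)
# k > 1: total ~ k^levels           -> leaves dominate, T(n) = O(n^(log_b a))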
To find the time complexity of the recurrence T (n) = T (2n/3) + 1, we can use the method of expansion
(also known as the iterative method). This involves expanding the recurrence to find a pattern and then
solving for the closed-form expression.
Step-by-Step Expansion
1. First Expansion:
T(n) = T(2n/3) + 1

2. Second Expansion:
T(2n/3) = T((2/3)^2 · n) + 1
So,
T(n) = T((2/3)^2 · n) + 2

3. Third Expansion:
T((2/3)^2 · n) = T((2/3)^3 · n) + 1
So,
T(n) = T((2/3)^3 · n) + 3

General Pattern

After k expansions:
T(n) = T((2/3)^k · n) + k

We continue this expansion until the argument of T reaches the base case. Assuming the base case is reached when n becomes 1, we need:
(2/3)^k · n = 1

Solving for k:
(2/3)^k = 1/n
k · log(2/3) = −log(n)
k = −log(n) / log(2/3) = log(n) / log(3/2)
k = c · log(n), where c = 1 / log(3/2).
Final Cost
At the base case, T(1) is a constant: T(1) = Θ(1). Hence, the total cost is:
T(n) = T(1) + k = Θ(1) + c · log(n)
Conclusion
T(n) = Θ(log n), since c = 1/log(3/2) is a constant.
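A short iterative count (a sketch that keeps n real-valued to mirror the algebra above) confirms the step count tracks log(n)/log(3/2):

python
import math

# Count how many times n shrinks by a factor of 2/3 before reaching 1,
# and compare against the predicted log(n) / log(3/2).
def steps(n):
    count = 0
    while n > 1:
        n = 2 * n / 3
        count += 1
    return count

for n in (10**3, 10**6, 10**9):
    print(n, steps(n), math.log(n) / math.log(3 / 2))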
Use the recursion tree to give an asymptotically tight solution to the recursion T(n) = T(n-a) + T(a) +
cn where a>=1 and c>0 are constant.
To find an asymptotically tight solution to the recurrence T (n) = T (n − a) + T (a) + cn using the
recursion tree method, let's break down the problem step by step.
1. First Level:
T (n) = T (n − a) + T (a) + cn
2. Second Level:
The cost at this level is c(n − a). The right branch T (a) is a constant, since a is a constant.
3. Third Level:
The cost at this level is c(n − 2a). The right branches T (a) remain constant.
General Pattern
At each level k, the leftmost node contributes cost c(n − ka), and each right branch contributes the constant cost T(a), since a is constant.
The recursion stops when the argument of T becomes zero or negative. This happens when:
n − ka ≤ 0 ⟹ k ≥ n/a
So the depth of the recursion tree is approximately n/a.
1. Leftmost Path: The leftmost path contributes:
Σ_{k=0}^{n/a} c(n − ka)
2. Right Branches: Each right branch T(a) is a constant, and there is one such branch at each of the roughly n/a levels.
Evaluating these contributions:
1. Leftmost Path:
Σ_{k=0}^{n/a} (n − ka) = n · (n/a + 1) − a · (n/a)(n/a + 1)/2
Simplifying (dropping the constant c for the moment):
= n^2/a + n − (a/2) · (n^2/a^2 + n/a)
= n^2/a + n − n^2/(2a) − n/2
= n^2/(2a) + n/2
The dominant term is n^2/(2a), so the total work is O(n^2/a).
2. Right Branches: Each level contributes a constant, summed over n/a levels, which results in an additional O(n) term.
Conclusion
Combining these contributions, the total time complexity is dominated by the quadratic term:
T(n) = O(n^2/a) = O(n^2),
since a ≥ 1 is a constant.
Thus, the asymptotically tight solution to the recurrence T(n) = T(n − a) + T(a) + cn is:
T(n) = Θ(n^2).
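As a numeric sketch, fix a = 4 and c = 1 with a base case T(n) = 1 for n ≤ a (all illustrative assumptions). Unrolling the recurrence iteratively, the ratio T(n)/n^2 should approach c/(2a) = 0.125:

python
# Iterative unrolling of T(n) = T(n - a) + T(a) + c*n with T(n) = 1 for n <= a.
a, c = 4, 1

def T(n):
    total = 0
    while n > a:
        total += 1 + c * n   # T(a) = 1 under our base case, plus the c*n term
        n -= a
    return total + 1         # final base case

for n in (1_000, 10_000, 100_000):
    print(n, T(n) / n**2)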
If T(n) = 1 when n = 1 and T(n) = 2T(n/2) + 5n when n>1, then show that T(n) = O(nlog(n)).
To show that T (n) = O(n log n) for the recurrence T (n) = 2T (n/2) + 5n with the base case T (1) = 1, we
can use the Master Theorem.
Master Theorem
The Master Theorem provides a straightforward way to determine the asymptotic behavior of
recurrences of the form:
T(n) = a·T(n/b) + f(n)
Where a ≥ 1 and b > 1, and f (n) is an asymptotically positive function. The theorem states:
1. Case 1: If f(n) = O(n^c) where c < log_b a, then T(n) = Θ(n^{log_b a}).
2. Case 2: If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} · log n).
3. Case 3: If f(n) = Ω(n^c) where c > log_b a, and if a·f(n/b) ≤ k·f(n) for some k < 1 and sufficiently large n, then T(n) = Θ(f(n)).
Applying it to our recurrence:
1. Identify the parameters: a = 2, b = 2, f(n) = 5n.
2. Calculate log_b a: log_b a = log_2 2 = 1.
3. Compare f(n) with n^{log_b a}: f(n) = 5n = Θ(n) and n^{log_b a} = n^1 = n, so f(n) = Θ(n^{log_b a}) and Case 2 applies.
Conclusion
According to Case 2 of the Master Theorem:
T(n) = Θ(n^{log_b a} · log n) = Θ(n log n)
We can verify this with a recursion tree:
1. First Level: The root T(n) does 5n work outside its recursive calls.
2. Second Level: Each T(n/2) = 2T(n/4) + 5(n/2), and there are 2 such calls (total cost is 2 · 5(n/2) = 5n).
3. Third Level: Each T(n/4) = 2T(n/8) + 5(n/4), and there are 4 such calls (total cost is 4 · 5(n/4) = 5n).
...
At each level, the total cost is 5n. The number of levels is log_2 n, since the size of the problem is halved at each step, so the total work is 5n · log_2 n.
Therefore, T (n) = O(n log n), confirming our result from the Master Theorem.
Final Conclusion
The recurrence T (n) = 2T (n/2) + 5n has a time complexity of T (n) = O(n log n).
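A direct evaluation on powers of two (using the given base case T(1) = 1) shows T(n)/(n · log_2 n) leveling off at the constant 5, consistent with the bound:

python
import math

# Exact evaluation of T(n) = 2 T(n/2) + 5n with T(1) = 1, for n a power of two.
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + 5 * n

for p in (10, 15, 20):
    n = 2**p
    print(n, T(n) / (n * math.log2(n)))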
Recursion and iteration are two fundamental approaches to solving problems and implementing
algorithms in programming. They both have their own strengths and appropriate use cases. Here’s a
detailed differentiation between the two:
Recursion
Recursion is a technique where a function calls itself directly or indirectly to solve a smaller instance of
the same problem. Each recursive call works on a smaller subproblem, and the results of these
subproblems are combined to solve the original problem.
Characteristics of Recursion:
1. Base Case: Recursion relies on a base case to terminate the recursive calls. Without a base case, the recursion never terminates and, in practice, overflows the call stack.
2. Recursive Case: Defines the function calling itself with a subset of the original problem.
3. Function Call Stack: Each recursive call adds a new layer to the function call stack, which can lead
to significant memory usage if the recursion depth is large.
4. Intuitive: Recursion can be more intuitive and easier to implement for problems that have a
natural recursive structure, such as tree traversals, factorial calculation, and Fibonacci sequence.
5. Overhead: Each function call involves overhead associated with maintaining the call stack, which
can lead to performance issues in some cases.
Example:
python
def factorial(n):
    if n <= 1:  # Base case (covers both n = 0 and n = 1)
        return 1
    else:  # Recursive case
        return n * factorial(n - 1)
Iteration
Iteration is a technique where a set of instructions is repeated in a loop until a condition is met. Iterative
solutions often involve loops like `for`, `while`, or `do-while` to repeat computations.
Characteristics of Iteration:
1. Loop Constructs: Uses loops (e.g., `for`, `while`) to repeatedly execute a block of code.
2. State Variables: Often relies on state variables to maintain and update the state across iterations.
3. Efficiency: Typically more memory efficient as it doesn't involve the overhead of function calls and
call stack management.
4. Direct Control: Provides more direct control over the execution flow, which can sometimes make
iterative solutions more efficient and easier to optimize.
5. Suitability: Often more suitable for problems with a fixed number of iterations or those that do
not naturally decompose into smaller subproblems.
Example:
python
def factorial(n):
result = 1
for i in range(1, n + 1):
result *= i
return result
Comparison

| Aspect | Recursion | Iteration |
| --- | --- | --- |
| Termination | Base case stops the recursive calls | Loop condition ends the loop |
| Function Call Stack | Uses the call stack for each call | No function call overhead |
| Memory Usage | Grows with recursion depth | Typically constant extra space |
| Typical Fit | Problems with a natural recursive structure | Fixed or counted repetitions |

Use Recursion When:
The problem has a natural recursive structure (e.g., tree traversals, recursive mathematical
formulas).
The solution involves breaking down a problem into smaller, similar subproblems.
Code clarity and simplicity are more important than performance.
In summary, both recursion and iteration have their advantages and appropriate use cases. Choosing
between them depends on the problem at hand, the importance of performance versus code clarity,
and the natural structure of the problem.
Find the time complexity of the following recurrence relations using Master's Theorem.
(i) T(n) = 2T(n/4) + n^0.51
(ii) T(n) = (√2)T(n/2) + log(n)
To find the time complexity of the given recurrence relations using the Master Theorem, we need to
match each recurrence with the form:
T(n) = a·T(n/b) + f(n)
(i) T(n) = 2T(n/4) + n^0.51

Here a = 2, b = 4, and f(n) = n^0.51.
log_b a = log_4 2 = log 2 / log 4 = log 2 / (2 log 2) = 1/2
Since n^0.51 grows faster than n^{0.5} = n^{log_b a}, we are in Case 3 of the Master Theorem:
Case 3: If f(n) = Ω(n^c) for some c > log_b a and if a·f(n/b) ≤ k·f(n) for some k < 1 and sufficiently large n, then T(n) = Θ(f(n)).
Checking the regularity condition: a·f(n/b) = 2 · (n/4)^0.51 = 2^{1 − 1.02} · n^0.51 = (1/2^{0.02}) · n^0.51. Since 1/2^{0.02} < 1, the regularity condition holds. Therefore, by Case 3 of the Master Theorem:
T(n) = Θ(n^0.51)
(ii) T(n) = (√2)·T(n/2) + log(n)

Here a = √2, b = 2, and f(n) = log(n).
log_b a = log_2(√2) = log_2(2^{1/2}) = 1/2
Since log(n) grows slower than n^{1/2} = n^{log_b a}, we are in Case 1 of the Master Theorem: f(n) = log(n) = O(n^c) for some c with 0 < c < log_b a = 1/2 (for example, c = 1/4). Therefore:
T(n) = Θ(n^{1/2}) = Θ(√n)
Summary
(i) T(n) = 2T(n/4) + n^0.51 ⟹ T(n) = Θ(n^0.51) (Case 3).
(ii) T(n) = (√2)T(n/2) + log(n) ⟹ T(n) = Θ(√n) (Case 1).
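Both results can be sketched numerically (the base cases T(1) = 1 are assumptions). Note that the ratio for (i) converges only slowly, because the regularity constant 2^(−0.02) ≈ 0.986 is very close to 1, while the ratio for (ii) settles near a constant more quickly:

python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T1(n):  # (i)  T(n) = 2 T(n/4) + n^0.51
    if n <= 1:
        return 1.0
    return 2 * T1(n // 4) + n ** 0.51

@lru_cache(maxsize=None)
def T2(n):  # (ii) T(n) = sqrt(2) T(n/2) + log2(n)
    if n <= 1:
        return 1.0
    return math.sqrt(2) * T2(n // 2) + math.log2(n)

for p in (8, 12, 16):
    n = 4**p
    print(n, T1(n) / n**0.51, T2(n) / math.sqrt(n))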
Little-Omega Notation (ω )
Definition:
The little-omega notation, denoted as ω , is used in asymptotic analysis to describe a lower bound that is
not tight. Specifically, f (n) = ω(g(n)) means that f (n) grows asymptotically faster than g(n), but not in a
tight manner. Formally, f (n) = ω(g(n)) if and only if:
lim_{n→∞} f(n)/g(n) = ∞
In other words, f (n) eventually outgrows g(n) by an unbounded factor as n approaches infinity.
Example 1: f(n) = n^2, g(n) = n

lim_{n→∞} f(n)/g(n) = lim_{n→∞} n^2/n = lim_{n→∞} n = ∞
Since the limit is ∞, we conclude that n^2 = ω(n). This indicates that n^2 grows strictly faster than n as n approaches infinity; n is only a loose lower bound for n^2, not a tight one.
Example 2: f(n) = 2^n, g(n) = n^3

lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^n / n^3

As n approaches infinity, the exponential function 2^n grows much faster than the polynomial function n^3. To confirm this, we apply L'Hôpital's rule several times, since the limit of 2^n/n^3 as n → ∞ has the indeterminate form ∞/∞:
First application:
lim_{n→∞} 2^n / n^3 = lim_{n→∞} (2^n · ln 2) / (3n^2)

Second application:
lim_{n→∞} (2^n · ln 2) / (3n^2) = lim_{n→∞} (2^n · (ln 2)^2) / (6n)

Third application:
lim_{n→∞} (2^n · (ln 2)^2) / (6n) = lim_{n→∞} (2^n · (ln 2)^3) / 6

As we can see, the numerator still grows exponentially while the denominator is now constant. Thus, the limit is ∞:
lim_{n→∞} (2^n · (ln 2)^3) / 6 = ∞
Hence 2^n = ω(n^3).
Summary
Little-omega notation (ω ) is used to describe a lower bound that is not tight. It signifies that one
function grows significantly faster than another as the input size approaches infinity. Examples
demonstrate how functions like n^2 and 2^n grow much faster than n and n^3, respectively, by evaluating their limits and confirming that the ratio of the functions approaches infinity.
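A short table of the two ratios (a numeric sketch of the limits computed above) shows both diverging as n grows:

python
# Ratios n^2 / n and 2^n / n^3 for increasing n; both grow without bound,
# consistent with n^2 = omega(n) and 2^n = omega(n^3).
for n in (10, 20, 40, 80):
    print(n, n**2 / n, 2**n / n**3)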
A recursion tree is a visual tool used to analyze the performance of recursive algorithms. It helps in
understanding how a recursive function breaks down into subproblems and how the total cost of the
function is accumulated across these subproblems. By visualizing the recursive calls, you can often
derive the time complexity of the recursive algorithm.
1. Root Node: The root of the tree represents the initial problem, typically denoted as T (n).
2. Subproblem Nodes: Each node at a given level represents a subproblem created by the recursive
call. The children of a node represent the recursive calls made by that node.
3. Levels of the Tree: The levels of the tree correspond to the depth of the recursion. The root is at
level 0, its children are at level 1, and so on.
4. Costs: Each node may have an associated cost representing the work done at that node outside of
the recursive calls. Summing these costs helps in determining the total cost of the algorithm.
Let's consider a simple example of the recurrence relation:
T (n) = 2T (n/2) + n
1. Root: The root node represents T(n) and does n work outside its recursive calls.
2. First Level: The root makes two recursive calls to T(n/2), each with a cost of n/2. The total cost at this level is 2 × (n/2) = n.
3. Second Level: Each T (n/2) makes two recursive calls to T (n/4). There are four T (n/4) calls in total,
each with a cost of n/4. The total cost at this level is 4 × (n/4) = n.
4. Subsequent Levels: This process continues until the subproblem size reaches the base case. If the
base case is T (1) = Θ(1), then at the lowest level, there will be many T (1) calls.
Visualization
text
T(n)
/ \
T(n/2) T(n/2)
/ \ / \
T(n/4) T(n/4) T(n/4) T(n/4)
... ... ... ...
1. Number of Levels: The height of the tree is log2 n because the size of the problem is divided by 2
at each level.
2. Cost per Level: Each level has a total cost of n. At level k, there are 2^k nodes, each contributing a cost of n/2^k. Therefore, the cost at level k is:
2^k × (n/2^k) = n
3. Total Cost: The total cost is the sum of the costs at all levels. Since each of the log_2 n levels has a cost of n:
T(n) = n + n + ⋯ + n = n · log_2 n
Thus, the time complexity of the recurrence T (n) = 2T (n/2) + n is O(n log n).
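The level-by-level tally can be reproduced directly. The sketch below picks n = 1024 (an arbitrary power of two) and counts the internal work at each level:

python
# Tally the work in the recursion tree of T(n) = 2T(n/2) + n for n = 1024:
# level k has 2^k nodes, each doing n/2^k work, so every level contributes n.
n = 1024
level, size, nodes, total = 0, n, 1, 0
while size > 1:
    cost = nodes * size
    print(f"level {level}: {nodes} nodes x {size} work = {cost}")
    total += cost
    level, size, nodes = level + 1, size // 2, nodes * 2
print("total internal work:", total, "= n * log2(n) =", n * 10)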
Summary
Root Node: Represents the original problem, T(n).
Subproblem Nodes: Represent the recursive calls made at each step.
Levels of the Tree: Correspond to the depth of recursion.
Costs: Capture the non-recursive work done at each node.
By summing the costs at each level, we can derive the total time complexity of the recursive algorithm.
The recursion tree method is especially useful for visualizing and solving recurrences that arise in divide-and-conquer algorithms.