Big O, Omega, Theta Explained

1. Explain Big O, Omega, and Theta with examples

Big O, Omega, and Theta are notations used in algorithm analysis to describe the
growth rate or asymptotic behavior of a function. They provide different bounds on
the efficiency of an algorithm.

Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an
algorithm. Thus, it gives the worst-case complexity of an algorithm.

[Figure: Big-O gives the upper bound of a function]

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set O(g(n))
if there exists a positive constant c such that f(n) lies between 0 and cg(n)
for sufficiently large n.

For any value of n ≥ n0, the running time of the algorithm does not exceed the
bound given by O(g(n)).

Since it gives the worst-case running time, Big-O is the most widely used
notation for analyzing an algorithm, as we are usually most interested in the
worst-case scenario.
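
For example, if f(n) = 3n + 2, then f(n) = O(n), since 3n + 2 ≤ 4n for all
n ≥ 2; the constants c = 4 and n0 = 2 satisfy the definition.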

Omega Notation (Ω-notation)

Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best-case complexity of an algorithm.

[Figure: Omega gives the lower bound of a function]

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set Ω(g(n))
if there exists a positive constant c such that f(n) lies above cg(n) for
sufficiently large n.

For any value of n ≥ n0, the minimum time required by the algorithm is given
by Ω(g(n)).
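
For example, if f(n) = 3n + 2, then f(n) = Ω(n), since 3n + 2 ≥ 3n for all
n ≥ 1; the constants c = 3 and n0 = 1 satisfy the definition.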
Theta Notation (Θ-notation)

Theta notation encloses the function from above and below. Since it represents
both the upper and the lower bound of the running time of an algorithm, it
gives an asymptotically tight bound, and it is the notation commonly used when
stating the average-case complexity of an algorithm.

[Figure: Theta bounds the function within constant factors]

For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set Θ(g(n))
if there exist positive constants c1 and c2 such that f(n) can be sandwiched
between c1g(n) and c2g(n) for sufficiently large n.

If a function f(n) lies anywhere between c1g(n) and c2g(n) for all n ≥ n0,
then g(n) is called an asymptotically tight bound for f(n).
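
For example, if f(n) = 3n + 2, then f(n) = Θ(n), since 3n ≤ 3n + 2 ≤ 4n for all
n ≥ 2; the constants c1 = 3, c2 = 4 and n0 = 2 satisfy the definition.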


2. Give the general plan for analysing a recursive algorithm, and analyse the recursive binary search algorithm

To analyze a recursive algorithm, you can follow a general plan consisting of the
following steps:

1. Define the recursive function: Write the recursive function that solves the problem. Clearly
define the input parameters and the base case(s) that terminate the recursion.

2. Determine the input size: Identify the parameter(s) that represent the size of the input.
This will help in determining how the input size affects the algorithm's performance.

3. Define the recurrence relation: Express the running time or space complexity of the
recursive function in terms of its own input size and possibly other variables. This is often
done using a recurrence relation or equation.

4. Solve the recurrence relation: Solve the recurrence relation to obtain a closed-form
expression for the running time or space complexity. This could involve using mathematical
techniques like substitution, recursion tree, or master theorem.

5. Analyze the complexity: Use the closed-form expression obtained from solving the
recurrence relation to analyze the time or space complexity of the algorithm. Identify the
best-case, worst-case, and average-case scenarios, if applicable.

Now let's analyze the Binary Search algorithm using recursion:

Binary Search Recursive Algorithm:


```c
int binarySearch(int arr[], int low, int high, int target) {
    if (low <= high) {
        int mid = low + (high - low) / 2;

        if (arr[mid] == target) {
            return mid;                                      // Base case: element found
        } else if (arr[mid] > target) {
            return binarySearch(arr, low, mid - 1, target);  // Recurse on the left half
        } else {
            return binarySearch(arr, mid + 1, high, target); // Recurse on the right half
        }
    }

    return -1; // Base case: element not found
}
```
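
For a sorted array arr of n elements, the initial call is binarySearch(arr, 0, n - 1, target); each recursive call halves the range under consideration.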

1. Recursive Function: The binarySearch function performs a binary search on a sorted array
to find a target element.
2. Input Size: The input size is the length of the range currently being searched,
which is `high - low + 1`.

3. Recurrence Relation: The recurrence relation for the binarySearch function is T(n) = T(n/2)
+ O(1), where T(n) represents the time required to search a range of size n: each call does a
constant amount of work and then recurses on half of the range.

4. Solving the Recurrence Relation: By applying the master theorem or by analyzing the
recursion tree, we find that the time complexity of binarySearch is O(log n).
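
As a quick sketch of the substitution method: writing T(n) = T(n/2) + c for some
constant c and expanding gives T(n) = T(n/4) + 2c = T(n/8) + 3c = ... = T(n/2^k) + kc.
The recursion bottoms out when n/2^k = 1, i.e. k = log2 n, so
T(n) = T(1) + c log2 n = O(log n). The master theorem (a = 1, b = 2, f(n) = O(1))
yields the same bound.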

5. Complexity Analysis: The Binary Search algorithm has a time complexity of O(log n),
indicating a logarithmic growth rate. This means that the algorithm's running time increases
slowly as the input size grows.

Note: The space complexity of the Binary Search algorithm is O(log n) as well since the
recursive calls consume stack space proportional to the number of recursive invocations.

By following this general plan, you can analyze other recursive algorithms and determine
their time and space complexities.

3. Apply Quick sort to sort “KLEITHUBBALLI”

Algorithm of Quick Sort -

Step 1 − Choose the element at the high index as the pivot.
Step 2 − Take two variables, Left and Right, pointing into the list, excluding the pivot.
Step 3 − Left points to the low index.
Step 4 − Right points to the high index.
Step 5 − While the value at Left is less than the pivot, move Left to the right.
Step 6 − While the value at Right is greater than the pivot, move Right to the left.
Step 7 − If neither step 5 nor step 6 applies, swap the values at Left and Right.
Step 8 − If Left ≥ Right, the point where they meet becomes the pivot's final
position; swap the old pivot into this position if needed.

A runnable sketch applying quicksort to "KLEITHUBBALLI" is shown below.
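
The steps above describe a two-pointer (Hoare-style) partition. As a minimal
runnable sketch rather than a faithful trace of that exact scheme, the following
C program uses the Lomuto partition, which likewise takes the high-index element
as the pivot, and sorts the characters of "KLEITHUBBALLI":

```c
#include <stdio.h>
#include <string.h>

/* Swap two characters in place. */
static void swapChar(char *a, char *b) {
    char t = *a;
    *a = *b;
    *b = t;
}

/* Lomuto partition: the element at the high index is the pivot. */
static int partition(char a[], int low, int high) {
    char pivot = a[high];
    int i = low - 1;               /* boundary of the "< pivot" region */
    for (int j = low; j < high; j++) {
        if (a[j] < pivot)
            swapChar(&a[++i], &a[j]);
    }
    swapChar(&a[i + 1], &a[high]); /* put the pivot in its final place */
    return i + 1;
}

static void quickSort(char a[], int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quickSort(a, low, p - 1);  /* sort the left part */
        quickSort(a, p + 1, high); /* sort the right part */
    }
}

int main(void) {
    char s[] = "KLEITHUBBALLI";
    quickSort(s, 0, (int)strlen(s) - 1);
    printf("%s\n", s);             /* prints ABBEHIIKLLLTU */
    return 0;
}
```

Whichever partition scheme is used, the final result is the letters in sorted
order: "ABBEHIIKLLLTU".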

4. Knapsack Problem

To solve the knapsack problem with a capacity of 5 using a greedy algorithm, we
need to select items from a given set so as to maximize the total value without
exceeding the capacity of the knapsack. The greedy approach selects items based
on their value-to-weight ratio, choosing items with higher ratios first.

Here's an example of solving the knapsack problem greedily when the capacity is 5:

Item 1: Weight = 2, Value = 4, Value-to-Weight Ratio = 4/2 = 2
Item 2: Weight = 1, Value = 3, Value-to-Weight Ratio = 3/1 = 3
Item 3: Weight = 3, Value = 5, Value-to-Weight Ratio = 5/3 ≈ 1.67
Item 4: Weight = 2, Value = 2, Value-to-Weight Ratio = 2/2 = 1
Sorting the items by value-to-weight ratio in descending order, we get:

Item 2: Weight = 1, Value = 3, Value-to-Weight Ratio = 3
Item 1: Weight = 2, Value = 4, Value-to-Weight Ratio = 2
Item 3: Weight = 3, Value = 5, Value-to-Weight Ratio ≈ 1.67
Item 4: Weight = 2, Value = 2, Value-to-Weight Ratio = 1

Starting with an empty knapsack of capacity 5, we consider the items in
descending order of their value-to-weight ratio, taking each item that still fits:

1. Select Item 2: Weight = 1, Value = 3. Remaining capacity = 5 - 1 = 4.
2. Select Item 1: Weight = 2, Value = 4. Remaining capacity = 4 - 2 = 2.
3. Skip Item 3: its weight (3) exceeds the remaining capacity (2).
4. Select Item 4: Weight = 2, Value = 2. Remaining capacity = 2 - 2 = 0.

The total value of the selected items is 3 + 4 + 2 = 9.

Therefore, the greedy approach fills the knapsack of capacity 5 with a total
value of 9. (For the 0/1 knapsack, greedy selection by ratio is a heuristic and
is not guaranteed to be optimal in general, although 9 happens to be optimal
for this instance.)
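
As a minimal sketch of this greedy procedure in C, using the weights and values
from the example above (the 0/1 variant, where items are taken whole):

```c
#include <stdio.h>

/* One knapsack item: weight, value, and value-to-weight ratio. */
typedef struct {
    int weight;
    int value;
    double ratio;
} Item;

int main(void) {
    /* Items from the example above. */
    Item items[] = { {2, 4, 0}, {1, 3, 0}, {3, 5, 0}, {2, 2, 0} };
    int n = 4, capacity = 5, totalValue = 0;

    for (int i = 0; i < n; i++)
        items[i].ratio = (double)items[i].value / items[i].weight;

    /* Sort by ratio in descending order (simple selection sort). */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (items[j].ratio > items[i].ratio) {
                Item t = items[i];
                items[i] = items[j];
                items[j] = t;
            }

    /* Greedily take each whole item that still fits. */
    for (int i = 0; i < n; i++)
        if (items[i].weight <= capacity) {
            capacity -= items[i].weight;
            totalValue += items[i].value;
        }

    printf("Total value = %d\n", totalValue); /* prints Total value = 9 */
    return 0;
}
```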

5. Explain Kruskal's algorithm to find a minimum cost spanning tree and apply it
to the given graph

Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as
input and finds the subset of that graph's edges which:

• forms a tree that includes every vertex

• has the minimum sum of weights among all the trees that can be formed from
the graph

How Kruskal's algorithm works

It falls under the class of greedy algorithms, which choose the local optimum
at each step in the hope of finding a global optimum. We start from the edges
with the lowest weight and keep adding edges until we reach our goal.

The steps for implementing Kruskal's algorithm are as follows (a runnable sketch
follows the list):

1. Sort all the edges from low weight to high.

2. Take the edge with the lowest weight and add it to the spanning tree.
If adding the edge creates a cycle, reject it.

3. Keep adding edges until all vertices are connected (a spanning tree on
n vertices contains exactly n − 1 edges).
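
The graph the question refers to is not reproduced in this document, so the
following C sketch applies Kruskal's algorithm to a small hypothetical edge
list; the cycle check in step 2 is implemented with a union-find (disjoint set)
structure:

```c
#include <stdio.h>
#include <stdlib.h>

/* One weighted edge (u, v, w). */
typedef struct { int u, v, w; } Edge;

static int parent[16];

/* Union-find "find" with path compression: returns the set representative. */
static int find(int x) {
    if (parent[x] != x)
        parent[x] = find(parent[x]);
    return parent[x];
}

/* Comparator for qsort: ascending by weight. */
static int cmpEdge(const void *a, const void *b) {
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

int main(void) {
    /* Hypothetical graph: 4 vertices, 5 weighted edges. */
    Edge edges[] = { {0, 1, 4}, {0, 2, 1}, {1, 2, 2}, {1, 3, 5}, {2, 3, 8} };
    int m = 5, n = 4, cost = 0;

    for (int i = 0; i < n; i++)
        parent[i] = i;                 /* each vertex starts in its own set */

    /* Step 1: sort all the edges from low weight to high. */
    qsort(edges, m, sizeof(Edge), cmpEdge);

    /* Steps 2-3: take the cheapest edge that does not create a cycle. */
    for (int i = 0; i < m; i++) {
        int ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru != rv) {                /* endpoints in different trees: no cycle */
            parent[ru] = rv;           /* merge the two trees */
            cost += edges[i].w;
            printf("take (%d, %d) weight %d\n", edges[i].u, edges[i].v, edges[i].w);
        }
    }
    printf("MST cost = %d\n", cost);   /* prints MST cost = 8 for this graph */
    return 0;
}
```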
