
Syllabus:

Introduction: Basic Terminologies, Data Structure, Analysis of an Algorithm, Asymptotic Notations, Time-Space Trade-off. Searching: Linear Search and Binary Search Techniques and their complexity analysis.

Data Structure
A data structure is a specialized format for organizing, processing, and storing data. It defines the
relationship between data elements and the operations that can be performed on them, providing a
means to manage and manipulate data efficiently.

Classification of Data Structure


Data structures are generally classified into primitive and non-primitive data structures.

Primitive data Structures:


Integer, Float, Character, Boolean: Basic data types provided by the programming language, used to
build more complex data structures.

Non-Primitive data Structures:


Non-primitive data structures are those created using primitive data structures. They are further classified as linear and non-linear.

Linear Data Structure


A data structure is called linear if all of its elements are arranged in a linear order, one after another. Linear data structures are further classified as static and dynamic.

Static Data Structure


A static data structure has a fixed memory size, which makes its elements easier to access. An example of this data structure is the array.

Array
An array is a collection of data items of the same type, and each data item is called an element of the array. The data type of the elements may be any valid data type such as char, int, float, or double. The elements of an array share the same variable name, but each carries a different index number known as a subscript. An array can be one-dimensional, two-dimensional, or multidimensional. For a 100-element array named "age", the individual elements are: age[0], age[1], age[2], age[3], ..., age[98], age[99].
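As a small illustrative sketch, the age array above can be declared and accessed in C as follows (the assigned values are made up for the example):

#include <stdio.h>

int main(void) {
    int age[100];               // fixed-size array of 100 integers

    age[0] = 18;                // elements are accessed by subscript
    age[99] = 65;

    printf("first = %d, last = %d\n", age[0], age[99]);
    return 0;
}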

Dynamic data structure


In a dynamic data structure, the size is not fixed: it can grow or shrink at runtime, which makes it efficient in terms of space complexity. Examples of this data structure are the linked list, queue, and stack.

Linked List
A linked list is a linear data structure used to maintain a list in memory. It can be seen as a collection of nodes stored at non-contiguous memory locations, where each node contains a pointer to its adjacent node.
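A minimal C sketch of a singly linked list node and a traversal over it (the names Node and traverse are illustrative):

#include <stdio.h>

struct Node {
    int data;                   // value stored in the node
    struct Node *next;          // pointer to the adjacent (next) node
};

// Visit every node by following the next pointers.
void traverse(struct Node *head) {
    for (struct Node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
}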
Stack
A stack is a linear list in which insertions and deletions are allowed only at one end, called the top. A stack is an abstract data type (ADT) that can be implemented in most programming languages. It follows the Last-In-First-Out (LIFO) methodology for storing data items. It is named a stack because it behaves like a real-world stack, for example a pile of plates or a deck of cards.
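A minimal fixed-size (static) stack sketch in C (MAX and the variable names are illustrative; both operations work at the top):

#define MAX 100

int stack[MAX];
int top = -1;                   // top == -1 means the stack is empty

int push(int x) {
    if (top == MAX - 1) return 0;   // overflow: stack is full
    stack[++top] = x;               // insert at the top
    return 1;
}

int pop(int *x) {
    if (top == -1) return 0;        // underflow: stack is empty
    *x = stack[top--];              // delete from the top
    return 1;
}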

Queue
A queue is a linear list in which elements can be inserted only at one end, called the rear, and deleted only at the other end, called the front. It is an abstract data type, similar to the stack. Because a queue is open at both ends, it follows the First-In-First-Out (FIFO) methodology for storing data items.
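A minimal array-backed queue sketch in C (names illustrative; insertion happens at the rear, deletion at the front):

#define MAX 100

int queue[MAX];
int front = 0, rear = 0;        // front == rear means the queue is empty

int enqueue(int x) {
    if (rear == MAX) return 0;  // queue is full
    queue[rear++] = x;          // insert at the rear
    return 1;
}

int dequeue(int *x) {
    if (front == rear) return 0;    // queue is empty
    *x = queue[front++];            // delete from the front
    return 1;
}

This simple version never reuses vacated slots; a circular queue (wrapping front and rear with the modulo operator) avoids that waste.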

Non-linear data structure


This data structure does not form a sequence, i.e. each item or element may be connected with two or more other items in a non-linear arrangement; the data elements are not arranged sequentially. It is further classified into trees and graphs.

Trees
Trees are multilevel data structures with a hierarchical relationship among their elements, known as nodes. The bottommost nodes in the hierarchy are called leaf nodes, while the topmost node is called the root node. Each node contains pointers to its adjacent nodes. The tree data structure is based on the parent-child relationship among the nodes: every node except the leaf nodes can have one or more children, and every node except the root node has exactly one parent.

Graphs
A graph can be defined as a pictorial representation of a set of elements (represented by vertices) connected by links known as edges. A graph differs from a tree in that a graph can contain cycles, which trees cannot.
Asymptotic notations used for Analysis of Algorithm.
Asymptotic notations are mathematical tools used to describe the running time or space requirements
of an algorithm in terms of the size of the input. They provide a way to analyze the efficiency and
performance of an algorithm, especially for large inputs. Here are the most commonly used
asymptotic notations:

1. Big O Notation (O)

Definition: Big O notation describes an upper bound on the time complexity of an algorithm. It
measures the worst-case scenario, providing the maximum number of operations an algorithm will
take as a function of the input size n.

Usage: It helps in understanding the worst-case performance and ensuring that the algorithm will not
perform worse than the specified bound.

Example: If an algorithm has a time complexity of O(n²), it means that in the worst case the algorithm will take at most c · n² operations for some constant c.

2. Omega Notation (Ω)

Definition: Omega notation describes a lower bound on the time complexity of an algorithm. It
measures the best-case scenario, providing the minimum number of operations an algorithm will take
as a function of the input size n.

Usage: It helps in understanding the best-case performance, but it's less commonly used alone
because the best case is often less informative.

Example: If an algorithm has a time complexity of Ω(n), it means that in the best case the algorithm will take at least c · n operations for some constant c.

3. Theta Notation (Θ)

Definition: Theta notation provides a tight bound on the time complexity of an algorithm. It means
that the algorithm's running time is both upper and lower bounded by the same function.

Usage: It gives a more precise analysis of an algorithm's performance because it specifies the exact
growth rate.

Example: If an algorithm has a time complexity of Θ(n log n), it means that the algorithm will take between c₁ · n log n and c₂ · n log n operations for some constants c₁ and c₂.

4. Little o Notation (o)

Definition: Little o notation describes an upper bound that is not tight. It means that the running time
grows strictly slower than the given function.

Usage: It is used to show that an algorithm is more efficient than a certain bound but without
specifying the exact rate of growth.

Example: If an algorithm has a time complexity of o(n²), it means that the algorithm's running time grows strictly slower than n²; it does not grow at the same rate as n².

5. Little omega Notation (ω)


Definition: Little omega notation describes a lower bound that is not tight. It means that the running
time grows strictly faster than the given function.

Usage: It is used to indicate that an algorithm's running time is more than a certain bound, without
specifying how much more.

Example: If an algorithm has a time complexity of ω(n), it means that the algorithm's running time grows strictly faster than n; it does not grow at the same rate as n.

How to decide performance of an algorithm


Deciding the performance of an algorithm involves evaluating various factors to determine how
efficiently it handles different sizes and types of inputs. The steps for evaluating an algorithm's
performance are:

1. Time Complexity

Time complexity measures the amount of time an algorithm takes to complete as a function of the
size of the input (denoted as n).

Steps:

• Analyze the algorithm to count the number of basic operations (e.g., comparisons, additions)
as a function of n.

• Use asymptotic notations (Big O, Big Omega, Theta) to describe the time complexity.

Example: If a sorting algorithm takes n² comparisons in the worst case, its time complexity is O(n²).

2. Space Complexity

Space complexity measures the amount of memory an algorithm uses as a function of the input size.

Steps:

• Determine the amount of extra memory the algorithm uses beyond the input data.

• Use asymptotic notations to describe the space complexity.

Example: If an algorithm uses an extra array of size n, its space complexity is O(n).

3. Best, Worst, and Average Case Analysis

Consider different scenarios to evaluate the algorithm's performance.

Definitions:

• Best Case: The scenario where the algorithm performs the fewest operations.

• Worst Case: The scenario where the algorithm performs the most operations.

• Average Case: The expected scenario where the algorithm's performance is averaged over all
possible inputs.

Example:

• Best Case: For linear search, the best case is O(1) if the target element is the first element.
• Worst Case: For linear search, the worst case is O(n) if the target element is the last element
or not present.

• Average Case: For linear search, the average case requires about n/2 comparisons, which simplifies to O(n).

4. Empirical Analysis (Benchmarking)

Empirical analysis involves running the algorithm with actual data and measuring its performance.

Steps:

• Implement the algorithm.

• Use various input sizes and data distributions.

• Measure the execution time and memory usage.

Tools: Profilers, timers (e.g., time in Python), and memory analyzers.

5. Problem Size and Scalability

Evaluate how the algorithm's performance changes as the input size increases.

Steps:

• Analyze the time and space complexity for different input sizes.

• Consider the algorithm's scalability and its ability to handle large inputs.

Example: An algorithm with O(n log n) time complexity scales better than one with O(n²) as n increases.

6. Theoretical Analysis

Theoretical analysis involves using mathematical techniques to prove the algorithm's time and space
complexity.

Steps:

• Use mathematical induction, recurrence relations, or other techniques to derive the complexity.

• Prove the upper and lower bounds for the algorithm's performance.

Example: Proving that the merge sort algorithm has a time complexity of O(n log n) using a recurrence relation.

7. Considerations for Special Cases

Consider special cases or constraints specific to the problem domain.

Steps:

• Identify any constraints or special cases in the problem.

• Analyze the algorithm's performance under these constraints.

Example: Inverting a matrix can have different performance implications depending on whether the
matrix is sparse or dense.
8. Comparison with Other Algorithms

Compare the algorithm with other known algorithms for the same problem.

Steps:

• Implement or study other algorithms for the same problem.

• Compare their time and space complexities, as well as empirical performance.

Example: Comparing quicksort, mergesort, and heapsort for various datasets.

Time Complexities with Asymptotic notations


The asymptotic notations are mathematical tools used to describe the running time or space
requirements of an algorithm. The commonly used asymptotic notations are Big O, Big Ω and Theta.

Big O Notation
It represents the upper bound (or worst case) of the running time or space.

An algorithm's runtime complexity is represented by the time function T(n) using "Big-O" notation. For example, the statement T(n) = O(n²) says that an algorithm has quadratic time complexity.

For an input of size 10, the algorithm might perform roughly 100 operations.

For an input of size 100, it might perform roughly 10,000 operations.

For an input of size 1,000, it might perform roughly 1,000,000 operations.

This rapid growth in operations makes quadratic algorithms less suitable for very large datasets.

For any functions f(n) and g(n), we say that f(n) = O(g(n)) when there exist constants c > 0 and k > 0 such that

f(n) ≤ c · g(n), for all n ≥ k

This means that c · g(n) is an upper bound for f(n), for all values of n ≥ k.
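A short worked example (the constants here are chosen just for illustration): take f(n) = 3n² + 5n and g(n) = n². Since 5n ≤ n² for all n ≥ 5, we have f(n) = 3n² + 5n ≤ 4n² for all n ≥ 5, so f(n) = O(n²) with c = 4 and k = 5.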
Constant Time Complexity: O(1)
An algorithm is said to run in constant time if it requires the same amount of time regardless of the
input size. Examples:

• array: accessing any element
• fixed-size stack: push and pop methods
• fixed-size queue: enqueue and dequeue methods (without array)
Linear Time: O(n)
An algorithm is said to run in linear time if its time execution is directly proportional to the input size,
i.e. time grows linearly as input size increases. Examples:

• array: linear search, traversing, find minimum
• ArrayList: contains method
• queue: contains method
Logarithmic Time: O(log n)
An algorithm is said to run in logarithmic time if its time execution is proportional to the logarithm of
the input size. Example:

• binary search
Quadratic Time: O(n²)
An algorithm is said to run in quadratic time if its time execution is proportional to the square of the
input size. Examples:

• Bubble sort,
• Selection sort,
• Insertion sort
Asymptotic notations (Big Omega Ω)
A capital (upper-case) omega, Ω, is used to express a lower bound. Omega notation is the least used of the three notations.

We say that f(n) = Ω(g(n)) when there exist constants c > 0 and k > 0 such that

f(n) ≥ c · g(n), for all n ≥ k (i.e., for all sufficiently large n)

Examples

1. A linear search through an unsorted list of elements has a best-case time complexity of Ω(1)
since the desired element could be the first item in the list.

2. The insertion sort algorithm, when sorting an already sorted list, runs in Ω(n) time, as it linearly
scans the list to ensure order.
3. A balanced binary search tree has a minimum depth of Ω(log n) since each level splits the
remaining elements in half.

Asymptotic notations (Theta θ)


The theta notation bounds a function from above and below, so it defines exact asymptotic behaviour.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading
constants.

We say that f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

It can be expressed as

0 ≤ c₁ · g(n) ≤ f(n) ≤ c₂ · g(n), for all n ≥ k

Examples

2n = Θ(n)

n² + 2n + 1 = Θ(n²)

3n³ + 6n² + 6000 = Θ(n³)

Time-Space Trade-Off
The time-space trade-off is a fundamental concept in computer science that refers to the balance between the time complexity and space complexity of an algorithm or data structure. In many cases, optimizing for time leads to increased space usage, and vice versa. Understanding this trade-off is essential for designing efficient algorithms and data structures based on the specific needs and constraints of an application.

For Example:

Uncompressed vs. compressed data: storing data uncompressed takes more space but less time to access, while storing data compressed takes less space but more time to access.

Suppose a file contains many records, each with a name, an ID, and other information. Searching through this file one record at a time (linear search) takes a long time when there are many records. Binary search can find a record much faster, but it requires the records to be sorted by ID in a separate file. This means you would need roughly twice as much storage space to keep both the original and the sorted files.
Linear Search
Linear search, also called sequential search, is a very simple method for searching an array for a particular value. It works by comparing the value to be searched with every element of the array, one by one in sequence, until a match is found. Linear search is mostly used to search an unordered list of elements (an array in which the data elements are not sorted).

Figure 1 Linear Search Algorithm

• It searches for a specific value (VAL) in an array (A) of size N.
• It starts from the first element and checks each element one by one.
• If it finds the value, it prints the position and stops.
• If it doesn't find the value after checking all elements, it prints that the value is not in the array.
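A runnable C sketch of these steps (the function name linearSearch and the sample data are illustrative):

#include <stdio.h>

// Returns the 0-based index of val in a[], or -1 if it is not found.
int linearSearch(int a[], int n, int val) {
    for (int i = 0; i < n; i++) {
        if (a[i] == val)
            return i;           // match found: report the position
    }
    return -1;                  // every element checked, no match
}

int main(void) {
    int a[] = {10, 23, 44, 11, 2, 66, 3, 9, 7, 63};
    int n = sizeof(a) / sizeof(a[0]);
    int pos = linearSearch(a, n, 66);

    if (pos >= 0)
        printf("Found at position %d\n", pos);
    else
        printf("Value is not in the array\n");
    return 0;
}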
Complexity of Linear Search Algorithm
Linear search executes in O(n) time, where n is the number of elements in the array. The best case of linear search occurs when VAL is equal to the first element of the array; in this case, only one comparison is made. Likewise, the worst case occurs when either VAL is not present in the array or it is equal to the last element of the array; in both cases, n comparisons have to be made. However, the performance of the linear search algorithm can be improved by using a sorted array.

Case Complexity
Best case O(1)
Average case O(n)
Worst case O(n)
Advantages:

• Simple and easy to implement
• Can be used for any kind of data
• Does not require the array to be sorted

Disadvantages:

• Not very efficient for large arrays
• May take a long time to find the desired element
• Not very robust to errors
Binary Search
The divide-and-conquer strategy is used in binary search: the item is compared with the middle element of the list, and the list is then divided into two halves. If a match is found, the location of the middle element is returned; otherwise, depending on the outcome of the comparison, we search one of the two halves. For binary search, a sorted array is mandatory.

Figure 2 Binary Search Algorithm

Figure 2 shows the algorithm for binary search. In this algorithm, BEG and END denote the beginning and ending positions of the segment of the array being searched. MID is calculated as (BEG + END)/2.

Initially, BEG = lower_bound and END = upper_bound. The algorithm will terminate when
A[MID] = VAL. When the algorithm ends, we will set POS = MID. POS is the position at which
the value is present in the array.

However, if VAL is not equal to A[MID], then the values of BEG, END, and MID will be changed
depending on whether VAL is smaller or greater than A[MID].

(a) If VAL < A[MID], then VAL will be present in the left segment of the array. So, the value of END

will be changed as END = MID – 1.

(b) If VAL > A[MID], then VAL will be present in the right segment of the array. So, the value of
BEG will be changed as BEG = MID + 1.

Finally, if VAL is not present in the array, then eventually END will become less than BEG. When this happens, the algorithm terminates, the search is unsuccessful, and POS = -1 indicates that VAL is not present in the array.
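A runnable C sketch of the algorithm just described (beg, end, and mid correspond to BEG, END, and MID; returning -1 plays the role of POS = -1):

#include <stdio.h>

// Iterative binary search on a sorted array.
// Returns the index of val, or -1 if val is not present.
int binarySearch(int a[], int n, int val) {
    int beg = 0, end = n - 1;

    while (beg <= end) {
        int mid = (beg + end) / 2;
        if (a[mid] == val)
            return mid;         // POS = MID: value found
        else if (val < a[mid])
            end = mid - 1;      // search the left segment
        else
            beg = mid + 1;      // search the right segment
    }
    return -1;                  // END < BEG: search unsuccessful
}

int main(void) {
    int a[] = {2, 3, 7, 9, 10, 11, 23, 44, 63, 66};
    int n = sizeof(a) / sizeof(a[0]);

    printf("position of 23 = %d\n", binarySearch(a, n, 23));
    printf("position of 5  = %d\n", binarySearch(a, n, 5));
    return 0;
}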

Advantages:

• Very efficient for large arrays
• Can find the element in the minimum number of comparisons
• Robust to errors

Disadvantages:

• Requires the array to be sorted
• Not very intuitive to understand
Complexity of Binary Search Algorithm
The complexity of the binary search algorithm can be expressed as f(n), where n is the number of elements in the array. With each comparison, the size of the segment to be searched is reduced by half. Thus, the total number of comparisons needed to locate a particular value in the array is given by f(n) = log₂ n.

The best-case complexity of binary search is O(1), which occurs when the desired element is the middle element of the array and is found on the first comparison. The worst-case complexity is O(log n), which occurs when the desired element is not in the array at all.

Case Complexity
Best case O(1)
Average case O(log n)
Worst case O(log n)

Bubble Sort
Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent
elements, and swaps them if they are in the wrong order. This process is repeated until the list is
sorted. Each pass through the list moves the largest unsorted element to its correct position, like
bubbles rising to the surface.

• It sorts an array of N elements.
• It makes multiple passes through the array.
• In each pass:
  o It compares adjacent elements.
  o If they're in the wrong order, it swaps them.
• With each pass, the largest unsorted element "bubbles up" to its correct position.
• It repeats this process N-1 times to ensure the entire array is sorted.
Bubble Sort is simple but not very efficient for large datasets. It's called "bubble" sort because smaller
elements gradually "bubble" to the top of the list with each pass.
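A minimal C sketch of these passes (the function name bubbleSort is illustrative; it can be exercised with the printArray helper shown later in the Heap Sort section):

void bubbleSort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        // After each pass, the largest unsorted element is in place,
        // so the inner loop can stop one position earlier each time.
        for (int j = 0; j < n - 1 - pass; j++) {
            if (arr[j] > arr[j + 1]) {      // adjacent pair out of order
                int temp = arr[j];          // swap them
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}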
Complexity of the Bubble Sort Algorithm
The time for sorting algorithms is measured in terms of the number of comparisons. The number f(n) of comparisons in the bubble sort can be calculated mathematically. There are n − 1 comparisons during the first pass, which places the largest element in the last position; there are n − 2 comparisons in the second pass, n − 3 in the third, and so on. Thus, the total time required is

f(n) = (n − 1) + (n − 2) + ... + 2 + 1

As Carl Friedrich Gauss observed, the sum of any arithmetic sequence can be simplified as follows:

Sum = (First term + Last term) × (Number of terms) / 2

Our equation then becomes

f(n) = ((n − 1) + 1) × (n − 1) / 2 = n(n − 1)/2 = (n² − n)/2 = n²/2 − n/2

In Big O notation, we focus on the highest power of n and ignore constants: O(n²)

Insertion Sort
INSERTION(A, N)
Step 1: Set A[0] := -∞. [Initializes sentinel element.]
Step 2: Repeat Steps 3 to 5 for K = 2, 3, ..., N:
Step 3: Set TEMP := A[K] and PTR := K - 1.
Step 4: Repeat while TEMP < A[PTR]:
(a) Set A[PTR + 1] := A[PTR]. [Moves element forward.]
(b) Set PTR := PTR - 1.
[End of inner loop.]
Step 5: Set A[PTR + 1] := TEMP. [Inserts element in proper place.]
[End of outer loop.]
Step 6: Return.

Initialization: Set A[0] to a very small value (negative infinity) to act as a sentinel.

Iteration: For each element from the second to the last (K=2 to N):

Store Element: Temporarily store the current element in TEMP and set a pointer PTR to the previous
element's position.

Shift Elements: While TEMP is smaller than the element at PTR, shift elements one position to the
right.

Insert Element: Place TEMP in its correct sorted position.

Return: The array A is now sorted.
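A C sketch of the same procedure using 0-based arrays (the A[0] = −∞ sentinel of the pseudocode is replaced here by an explicit ptr >= 0 bounds check, a common adaptation):

void insertionSort(int a[], int n) {
    for (int k = 1; k < n; k++) {
        int temp = a[k];            // element to insert (TEMP)
        int ptr = k - 1;            // PTR starts at the previous position

        // Shift larger elements one position to the right.
        // The ptr >= 0 test replaces the sentinel at A[0].
        while (ptr >= 0 && temp < a[ptr]) {
            a[ptr + 1] = a[ptr];
            ptr--;
        }
        a[ptr + 1] = temp;          // insert in its proper place
    }
}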

Complexity of the Insertion Sort Algorithm


The time for sorting algorithms is measured in terms of the number of comparisons. The number f(n) of comparisons in the insertion sort can be calculated mathematically. In the worst case (an array sorted in reverse order), inserting the K-th element requires K − 1 comparisons: 1 comparison in the first pass, 2 in the second, and so on, up to n − 1 in the last pass. Thus, the total time required is

f(n) = 1 + 2 + ... + (n − 2) + (n − 1)

As Carl Friedrich Gauss observed, the sum of any arithmetic sequence can be simplified as follows:

Sum = (First term + Last term) × (Number of terms) / 2

Our equation then becomes

f(n) = (1 + (n − 1)) × (n − 1) / 2 = n(n − 1)/2 = (n² − n)/2 = n²/2 − n/2

In Big O notation, we focus on the highest power of n and ignore constants: O(n²)

Selection Sort
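A minimal C sketch of selection sort (each pass selects the smallest remaining element and swaps it into the next position; the function name selectionSort is illustrative):

void selectionSort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;                 // index of the smallest element so far
        for (int j = i + 1; j < n; j++) {
            if (a[j] < a[min])
                min = j;
        }
        if (min != i) {              // swap it into position i
            int temp = a[i];
            a[i] = a[min];
            a[min] = temp;
        }
    }
}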

Complexity of the Selection Sort Algorithm


The time for sorting algorithms is measured in terms of the number of comparisons. The number f(n) of comparisons in the selection sort can be calculated mathematically. There are n − 1 comparisons during the first pass, which places the smallest element in the first position; there are n − 2 comparisons in the second pass, n − 3 in the third, and so on. Thus, the total time required is

f(n) = (n − 1) + (n − 2) + ... + 2 + 1

As Carl Friedrich Gauss observed, the sum of any arithmetic sequence can be simplified as follows:

Sum = (First term + Last term) × (Number of terms) / 2

Our equation then becomes

f(n) = ((n − 1) + 1) × (n − 1) / 2 = n(n − 1)/2 = (n² − n)/2 = n²/2 − n/2

In Big O notation, we focus on the highest power of n and ignore constants: O(n²)

Heap Sort
#include <stdio.h>

void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

void heapify(int arr[], int n, int i) {
    int largest = i;          // Initialize largest as root
    int left = 2 * i + 1;     // Left child index
    int right = 2 * i + 2;    // Right child index

    // If left child is larger than root
    if (left < n && arr[left] > arr[largest])
        largest = left;

    // If right child is larger than largest so far
    if (right < n && arr[right] > arr[largest])
        largest = right;

    // If largest is not root
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);  // Recursively heapify the affected sub-tree
    }
}

void heapSort(int arr[], int n) {
    // Build heap (rearrange array)
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    // One by one extract elements from heap
    for (int i = n - 1; i > 0; i--) {
        // Move current root to end
        swap(&arr[0], &arr[i]);

        // Call max heapify on the reduced heap
        heapify(arr, i, 0);
    }
}

// Utility function to print an array
void printArray(int arr[], int n) {
    for (int i = 0; i < n; ++i)
        printf("%d ", arr[i]);
    printf("\n");
}

int main() {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Original array: \n");
    printArray(arr, n);

    heapSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

Important Questions
1. Explain the concept of data structure and its types. (7M)
2. Explain Data Structure in brief. (6M)
3. Explain different Asymptotic notations used for Analysis of Algorithm. (7M)
4. How to decide performance of an algorithm? Explain big O notation in brief. (5/7M)
5. Explain Time & Space Complexity. (7M)
6. Write short note on Time space trade off. (4M)
7. Explain the concept of linear search along with the example. (4M)
8. Write an algorithm and program to implement Binary Search. (9M)
9. Write an algorithm for binary search. And also state its time complexity in each case. (7M)
10. Write an algorithm of Bubble Sort. (4M)
11. Write a function to implement heap sort. (5M)
12. Suppose an array A contains 10 elements as follows: 10, 23, 44, 11, 2, 66, 3, 9, 7, 63. Sort the above array using selection sort. (5M)
