DS Unit 1 - Brief Solution of Important Questions

1. Explain Asymptotic Notations: Big-O, Big-Theta, and Big-Omega with examples.

Session: 2022-23
Answer:

Asymptotic Notations:

• Asymptotic Notations are mathematical tools used to analyse the performance of algorithms
by understanding how their efficiency changes as the input size grows.

• These notations provide a concise way to express the behaviour of an algorithm’s time or space
complexity as the input size approaches infinity.

• Rather than comparing algorithms directly, asymptotic analysis focuses on understanding the
relative growth rates of algorithms’ complexities.

• By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insights into their efficiency.

There are mainly three asymptotic notations:


1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

1. Big-O Notation (O-notation):

Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives
the worst-case complexity of an algorithm.

• It is the most widely used notation for asymptotic analysis.

• It specifies the upper bound of a function.

• It describes the maximum time required by an algorithm, i.e. the worst-case time complexity.

• It returns the highest possible output value (big-O) for a given input.

• Big-O (worst case) is defined as the condition that allows an algorithm to complete statement
execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c
and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.

The execution time serves as an upper bound on the algorithm’s time complexity.
Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

Example:

f(n) = 2n² + n

2n² + n ≤ c·g(n)

2n² + n ≤ 3n² (the dominant term is n²) for all n ≥ 1 and c = 3

So we can write f(n) = O(n²) for all n ≥ 1 and c = 3.

2. Omega Notation (Ω-Notation):

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides
the best-case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm’s time complexity.

It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g)
if there is a constant c > 0 and a natural number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

Example:

f(n) = 2n² + n

2n² + n ≥ c·g(n)

2n² + n ≥ 2n² (greatest lower bound) for all n ≥ 0

So we can write f(n) = Ω(n²) for all n ≥ 0 and c = 2.
3. Theta Notation (Θ-Notation):

Theta notation encloses the function from above and below. Since it represents the upper and the
lower bound of the running time of an algorithm, it is used for analysing the average-case complexity
of an algorithm.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g)
if there are constants c1, c2 > 0 and a natural number n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n
≥ n0.


Mathematical Representation of Theta notation:

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for
all n ≥ n0}

The execution time serves as both a lower and upper bound on the algorithm’s time complexity.

It gives both the greatest and the least boundary for a given input value.

Example:

f(n) = 2n² + n

2n² ≤ 2n² + n ≤ 3n² for all n ≥ 0

So we can write f(n) = Θ(n²) for all n ≥ 0 with c1 = 2 and c2 = 3.

Difference Between Big O, Big Omega (Ω) and Big Theta (Θ):

1. Big O is like (<=): the rate of growth of the algorithm is less than or equal to a specific value.
Big Omega is like (>=): the rate of growth is greater than or equal to a specified value. Big Theta
is like (==): the rate of growth is equal to a specified value.

2. The upper bound of an algorithm is represented by Big O notation; the function is bounded only
from above, and the asymptotic upper bound is given by Big O notation. The algorithm’s lower
bound is represented by Omega notation; the asymptotic lower bound is given by Omega notation.
The bounding of the function from above and below is represented by Theta notation; the exact
asymptotic behaviour is captured by Theta notation.

3. Big O – Upper Bound. Big Omega (Ω) – Lower Bound. Big Theta (Θ) – Tight Bound.

4. Big O is defined as the upper bound on an algorithm: the most amount of time required (the
worst-case performance). Big Omega is defined as the lower bound: the least amount of time
required (the most efficient way possible, in other words the best case). Big Theta is defined as
the tightest bound: the best of all the worst-case times that the algorithm can take.

5. Mathematically: Big O is 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0. Big Omega is 0 ≤ c·g(n) ≤ f(n) for all
n ≥ n0. Big Theta is 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
2. Analyze the time complexity of the following code snippet: Session: 2021-22

for (int i = 1; i <= n; i++)
{
    for (int j = 1; j <= i; j++)
    {
        printf("*");
    }
}

Answer

Outer Loop (i loop):

o The outer loop runs from i = 1 to i = n.

o Therefore, it executes n iterations.

Inner Loop (j loop):

o For each iteration of the outer loop (value of i), the inner loop runs from j = 1 to j = i.

o Hence, the number of iterations of the inner loop depends on the value of i:

When i = 1, the inner loop runs 1 time.

When i = 2, the inner loop runs 2 times.

...

When i = n, the inner loop runs n times.

Total Number of Iterations:

o To determine the total number of iterations, we sum the inner loop’s iterations over all
values of i:

1 + 2 + 3 + ... + n = n(n+1)/2

Time Complexity:

o The total number of iterations is n(n+1)/2.

o Ignoring constant factors and lower-order terms, the time complexity is O(n²).

Final Answer:

The time complexity of the given code snippet is O(n²).


3. Illustrate the Time-Space Trade-off with an example. Session: 2019-20 , 2020-21

Answer

A trade-off is a situation where one thing increases and another thing decreases. It is a way to
solve a problem in:

• Either in less time and by using more space, or

• In very little space by spending a long amount of time.

The best algorithm is one that solves a problem using both less memory space and less time to
generate the output. In general, however, it is not always possible to achieve both conditions at
the same time.

Example: Linear Vs Binary Search

Linear search checks each element in the array sequentially until the target is found or the end of the
array is reached.

Characteristics:

1. Time Complexity: O(n)

o In the worst case, the algorithm examines all n elements.

2. Space Complexity: O(1)

o No extra memory is required aside from a few variables.

Binary search requires a sorted array. It repeatedly divides the array into halves and checks the
middle element to determine whether to search in the left or right half.

Characteristics:

1. Time Complexity: O(log n)

o The array size is halved at each step, leading to logarithmic growth.

2. Space Complexity:

o Iterative version: O(1)

o Recursive version: O(log n) (due to the recursion stack).

3. Preprocessing Cost: Sorting the array takes O(n log n), which adds an upfront cost if the array
is unsorted.

Time-Space Trade-off

1. Linear Search:

o Time: Slower O(n).

o Space: Minimal O(1)

o No preprocessing or extra memory is required.

2. Binary Search:

o Time: Faster, O(log n).

o Space: Requires extra space and/or time for sorting if the array isn't already sorted.

o Sorting the array can add O(n log n) preprocessing time and may require O(n)
additional space for temporary storage.
4. Derive the formula to calculate the memory location of an element in Row Major Order and
Column Major Order. Session: 2020-21

Answer

Row Major Address Calculation:

In row major order, elements are stored row by row. To reach A[I][J], skip the (I – LR) complete
rows that precede row I, each containing N elements, then move (J – LC) positions within the row.
Since each element occupies W bytes:

Address of A[I][J] = B + W * ((I – LR) * N + (J – LC))

Column Major Address Calculation:

In column major order, elements are stored column by column. Skip the (J – LC) complete columns
that precede column J, each containing M elements (where M is the number of rows), then move
(I – LR) positions within the column:

Address of A[I][J] = B + W * ((J – LC) * M + (I – LR))


5. For a 3x3 array stored in Row Major Order, calculate the address of element A[2][1], assuming
the base address is 200 and each element takes 4 bytes. Session: 2020-21, 2022-23, 2023-24

Answer

Row Major:

Address of A[I][J] = B + W * ((I – LR) * N + (J – LC))

I = Row subscript of the element whose address is to be found,

J = Column subscript of the element whose address is to be found,

B = Base address,

W = Storage size of one element stored in the array (in bytes),

LR = Lower limit of row/start row index of the matrix (if not given, assume zero),

LC = Lower limit of column/start column index of the matrix (if not given, assume zero),

N = Number of columns in the matrix.

Base address B = 200


Storage size of one element stored in the array W = 4 bytes
Row subscript of the element whose address is to be found I = 2
Column subscript of the element whose address is to be found J = 1
Lower limit of row/start row index of the matrix LR = 0
Lower limit of column/start column index of the matrix LC = 0
Number of columns in the matrix N = 3

Formula:
Address of A[I][J] = B + W * ((I – LR) * N + (J – LC))

Solution:
Address of A[2][1] = 200 + 4 * ((2 – 0) * 3 + (1 – 0))
= 200 + 4 * (6 + 1)
= 200 + 28
Address of A[2][1] = 228

Try For Column Major:

Address of A[I][J] = B + W * ((J – LC) * M + (I – LR))

I = Row subscript of the element whose address is to be found,

J = Column subscript of the element whose address is to be found,
B = Base address,
W = Storage size of one element stored in the array (in bytes),
LR = Lower limit of row/start row index of the matrix (if not given, assume zero),
LC = Lower limit of column/start column index of the matrix (if not given, assume zero),
M = Number of rows in the matrix.
6. Write the algorithm for create, insertion and deletion in a singly linked list. Session: 2018-19,
2021-22, 2022-23
Answer

Algorithm to create a singly linked list

Initialize head ← NULL

Function CREATE_LIST(data[]):

For each value in data:

new_node ← (struct Node*) malloc(sizeof(struct Node))

new_node.data ← value

new_node.next ← NULL

If head = NULL: // List is empty

head ← new_node

Else:

temp ← head

While temp.next ≠ NULL:

temp ← temp.next

End While

temp.next ← new_node

End If

End For

End Function

Algorithm for insertion in a singly linked list

1. At Beginning

Function INSERT_BEGINNING (data):

new_node ← (struct Node*) malloc(sizeof(struct Node))

new_node.data ← data

new_node.next ← head

head ← new_node

End Function

2. In the end

Function INSERT_END(data):
new_node ← (struct Node*) malloc(sizeof(struct Node))

new_node.data ← data

new_node.next ← NULL

If head = NULL: // List is empty

head ← new_node

Else:

temp ← head

While temp.next ≠ NULL:

temp ← temp.next

End While

temp.next ← new_node

End If

End Function

3. At a position

Function INSERT_AT_POSITION(data, pos):

If pos = 1:

new_node ← (struct Node*) malloc(sizeof(struct Node))

new_node.data ← data

new_node.next ← head

head ← new_node

Else:

new_node ← (struct Node*) malloc(sizeof(struct Node))

new_node.data ← data

temp ← head

count ← 1

While count < pos - 1 AND temp ≠ NULL:

temp ← temp.next

count ← count + 1

End While
If temp = NULL:

Print "Position out of bounds"

Else:

new_node.next ← temp.next

temp.next ← new_node

End If

End If

End Function

Algorithm for Deletion in a singly linked list

1. From Beginning

Function DELETE_BEGINNING():

If head = NULL:

Print "List is empty"

Else:

temp ← head

head ← head.next

Free temp

End If

End Function

2. From End

Function DELETE_END():
If head = NULL:
Print "List is empty"
Else If head.next = NULL: // Single node in the list
Free head
head ← NULL
Else:
temp ← head
While temp.next.next ≠ NULL:
temp ← temp.next
End While
Free temp.next
temp.next ← NULL
End If
End Function

3. From a given position


Function DELETE_AT_POSITION(pos):

If head = NULL:

Print "List is empty"

Else If pos = 1:

Call DELETE_BEGINNING()

Else:

temp ← head

count ← 1

While count < pos - 1 AND temp ≠ NULL:

temp ← temp.next

count ← count + 1

End While

If temp = NULL OR temp.next = NULL:

Print "Position out of bounds"

Else:

to_delete ← temp.next

temp.next ← to_delete.next

Free to_delete

End If

End If

End Function
7. Represent the polynomial 4x³ + 3x² + 5 using a linked list. Session: 2019-20, 2022-23, 2023-24

Answer

To represent the polynomial 4x³ + 3x² + 5 using a linked list, each node will store:
1. The coefficient of the term.
2. The exponent of the term.
3. A pointer to the next node (to link the terms together).
Representation
• Each term of the polynomial is a node in the linked list.
• The terms are stored in decreasing order of their exponents.

Node Structure
Each node contains:

struct Node {
    int coefficient;
    int exponent;
    struct Node* next;
};

Linked List Representation for 4x³ + 3x² + 5

1. The first node represents 4x³:
o Coefficient: 4
o Exponent: 3
o Pointer to the next node.
2. The second node represents 3x²:
o Coefficient: 3
o Exponent: 2
o Pointer to the next node.
3. The third node represents 5:
o Coefficient: 5
o Exponent: 0 (constant term)
o Pointer is NULL (end of the list).

Illustration of the Linked List


Head → [4, 3] → [3, 2] → [5, 0] → NULL
8. Define the Abstract Data Type (ADT) of a stack. Session: 2019-20

Answer

A stack is a linear data structure that follows the Last In, First Out (LIFO) principle, meaning
the last element added to the stack is the first one to be removed.
The Abstract Data Type (ADT) defines the behaviour of a stack in terms of its operations
without specifying the implementation details.

Definition of Stack ADT


1. Structure:
o A collection of elements arranged in a linear order.
o Only one end, called the top, is accessible for both insertion and deletion.
2. Operations: The following are the primary operations defined by the Stack ADT:
(a) Push(element):
o Adds an element to the top of the stack.
o Precondition: The stack is not full.
o Postcondition: The new element becomes the top of the stack.
(b) Pop():
o Removes and returns the top element from the stack.
o Precondition: The stack is not empty.
o Postcondition: The element below the removed one becomes the top of the
stack.
(c) Peek()/Top():
o Returns the top element without removing it.
o Precondition: The stack is not empty.
o Postcondition: The stack remains unchanged.
(d) IsEmpty():
o Returns true if the stack contains no elements; otherwise, returns false.
(e) IsFull() (optional):
o Returns true if the stack is full (for fixed-size implementations).
Example
Stack Operations:
1. Initial Stack: S={}
2. Push(10): S={10}
3. Push(20): S={10,20}
4. Peek(): Returns 20
5. Pop(): Returns 20, S={10}
6. IsEmpty(): Returns false.
