
Assignment 2

Ques1:- What is an Algorithm?

Answer:- An algorithm is a finite, well-defined procedure for solving a problem or performing a computation. It acts as an exact list of instructions that carries out specified actions step by step, whether implemented in hardware- or software-based routines.
Ques 2:- What is the time and space complexity of an algorithm?

Answer:-

Time Complexity (in Big-O notation), illustrated here with linear search:

Best Case: O(1). This occurs when the element being searched for is at the first index of the given list, so only one comparison is made.

Average Case: O(n). On average the element is found around the middle index of the given list, which still requires a number of comparisons proportional to n.

Worst Case: O(n). This occurs when:

the element being searched for is at the last index, or

the element is not present in the list at all.

Space Complexity:-

The space complexity of an algorithm is the total space it uses with respect to the input size. It includes both the auxiliary space and the space used by the input.
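The cases above can be made concrete with a minimal linear search in Python (our own sketch; the original gives no code):

```python
def linear_search(items, target):
    """Scan the list left to right; return the index of target, or -1."""
    for i, value in enumerate(items):
        if value == target:   # one comparison per element examined
            return i
    return -1                 # target absent: all n elements compared

# Best case: target at index 0, one comparison -> O(1)
print(linear_search([7, 3, 9], 7))   # 0
# Worst case: target absent, n comparisons -> O(n)
print(linear_search([7, 3, 9], 4))   # -1
```

The auxiliary space here is O(1): only the loop variables are used, regardless of the list's length.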

Ques3. What do you understand by the asymptotic notation of the algorithm?

Answer:-

Asymptotic notation is a set of mathematical tools used to describe how the performance of an
algorithm scales as the size of its input grows. It's not about the exact time an algorithm takes for a
specific input, but rather how the time requirement changes with increasing input size.

There are three main types of asymptotic notation used in algorithm analysis:

Big O Notation (O-notation): This represents the upper bound of an algorithm's running time. It describes the worst-case scenario for how long the algorithm might take to execute.

Big Theta Notation (Θ-notation): This captures both the upper and lower bounds of an algorithm's running time. It signifies that the running time grows at exactly a certain rate as the input size increases.

Big Omega Notation (Ω-notation): This represents the lower bound of an algorithm's running time. It describes the minimum amount of time the algorithm must take, i.e. the best-case scenario.
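The three notations above can be stated formally (standard textbook definitions, added here for reference):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 : 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 : 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```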

Ques4. Explain in detail the Big Oh, Theta and Omega notations of algorithms, and elaborate them with graphical representation and examples.

Answer:-

Θ Notation: The Theta notation bounds a function from both above and below, so it defines exact asymptotic behavior.

A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constant. For example, consider the following expression:

3n³ + 6n² + 6000 = Θ(n³)

Big O Notation: The Big O notation defines an upper bound of an algorithm, it bounds a function only
from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and
quadratic time in the worst case. We can safely say that the time complexity of Insertion sort is
O(n^2). Note that O(n^2) also covers linear time.

If we use Θ notation to represent time complexity of Insertion sort, we have to use two statements
for best and worst cases:

1. The worst-case time complexity of Insertion Sort is Θ(n^2).

2. The best case time complexity of Insertion Sort is Θ(n).

Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.

Ω notation is useful when we have a lower bound on the time complexity of an algorithm.
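Since Insertion Sort is the running example here, a short Python version (our own sketch, not code from the original) makes the two Θ bounds concrete by counting comparisons:

```python
def insertion_sort(a):
    """Sort list a in place; return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right until key's slot is found.
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

# Already sorted input: n - 1 comparisons -> Θ(n) best case
print(insertion_sort([1, 2, 3, 4]))   # 3
# Reverse-sorted input: n(n-1)/2 comparisons -> Θ(n²) worst case
print(insertion_sort([4, 3, 2, 1]))   # 6
```

Both runs are covered by the single statement O(n²), which is why Big O alone suffices as a worst-case guarantee.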

Q5. Explain the terms compile time and run time of an algorithm.

Compile Time: Compile time refers to the period during which a program is translated from high-level source code (e.g., written in languages such as C, C++, or Java) into machine code or an intermediate representation that the computer can execute. During compile time the source code is analyzed by a compiler, which checks for syntax errors, performs type checking, and generates the corresponding machine code or bytecode. The output of the compilation process typically includes executable files, object files, or bytecode files, depending on the programming language and compilation options. Syntax errors and certain kinds of semantic errors are detected at compile time; these must be fixed before the program can be successfully compiled.

Run Time: Run time, also known as execution time, refers to the period when a program or algorithm is actually running and performing its tasks on the computer's hardware. During run time the compiled code is executed by the processor(s), and the program interacts with system resources such as memory, the CPU, and I/O devices to carry out its instructions. The program's output (results, error messages, and so on) is produced at run time. Logical errors, runtime errors (e.g., division by zero, array index out of bounds), and other issues that arise during execution are detected and handled at run time.
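A small Python sketch (our own example; Python compiles source to bytecode before executing it) shows the difference: a syntax error is reported when the code is compiled, while a division by zero only surfaces when the code actually runs.

```python
# Compile-time error: compile() rejects the source before anything runs.
try:
    compile("def f(:", "<bad>", "exec")
except SyntaxError as e:
    print("caught at compile time:", e.msg)

# Run-time error: this code compiles fine but fails during execution.
code = compile("x = 1 / 0", "<crash>", "exec")
try:
    exec(code)
except ZeroDivisionError as e:
    print("caught at run time:", e)
```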

Q6. Explain the little oh and little omega notations used in the analysis of algorithms.

In algorithm analysis, in addition to the commonly used Big O, Big Omega, and Big Theta notations, there are two further notations: Little O (o) and Little Omega (ω). These provide more refined information about the behavior of a function than Big O and Big Omega do.

Little O Notation (o): Little O represents an upper bound on the growth rate of a function, but it is stricter than Big O. It indicates that a function grows strictly slower than another function, without ever matching it, as the input size approaches infinity.

Little Omega Notation (ω): Little Omega represents a lower bound on the growth rate of a function, but it is stricter than Big Omega. It indicates that a function grows strictly faster than another function, without ever matching it, as the input size approaches infinity.

Q7. Does small theta notation exist? If not, explain why.

No, small theta notation does not exist as a separate notation in the way that little o and little omega exist alongside Big O and Big Omega. The reason is rooted in the definitions and purposes of the Big O, Big Omega, and Big Theta notations: Big Theta already represents a tight bound, so there is no need for a separate "small theta" to indicate a more precise relationship between growth rates. The concept of a tight bound is already covered by Big Theta.
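The little-o and little-omega notations discussed in Q6 can be stated formally as limits (standard definitions, added here for reference):

```latex
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
f(n) = \omega(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty
```

For example, n = o(n²) and n² = ω(n), but n² is not o(n²), since the ratio tends to 1 rather than 0.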
Assignment 2

Ques1:- What is an Array? Explain it as a data structure.

Answer:- An array is a fundamental data structure in computer science and programming. It represents a collection of elements stored in contiguous memory locations, where each element is accessed by its index, i.e. its position within the array. Arrays provide a way to organize and manage data efficiently, allowing easy and fast access to individual elements based on their positions. Here are some key characteristics and concepts associated with arrays as a data structure:

Fixed Size: Arrays have a fixed size, meaning the number of elements they can hold is determined at the time of declaration. Once created, the size of an array typically cannot be changed dynamically. This fixed-size property makes array operations efficient in terms of memory management.

Homogeneous Elements: Arrays store elements of the same data type (e.g., integers, floating-point numbers, characters). This ensures that all elements in the array occupy the same amount of memory and can be accessed using the same indexing mechanism.

Indexing: Elements in an array are accessed using zero-based indexing: the first element has index 0, the second index 1, and so on. Accessing elements by index allows fast retrieval and manipulation of data within the array.

Contiguous Memory: Array elements are stored in contiguous memory locations, meaning that the memory addresses of consecutive elements are adjacent to each other. This property enables efficient memory access and traversal of array elements.

Random Access: Arrays support random access, meaning that any element can be reached directly by its index without traversing the rest of the array. This enables fast retrieval and modification of individual elements based on their positions.

Iterating Over Elements: Arrays can easily be traversed or iterated over using loops (e.g., a for loop or while loop), allowing an operation to be performed on each element of the array sequentially.
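Several of these properties can be seen in a short Python illustration (our own sketch; Python's standard array module enforces the homogeneous-element property, unlike plain lists):

```python
from array import array

a = array('i', [10, 20, 30, 40])  # typecode 'i' = signed int: homogeneous elements

print(a[0])        # zero-based indexing: prints 10
print(a[3])        # random access to the last element: prints 40
print(len(a))      # number of elements: 4
print(a.itemsize)  # every element occupies the same number of bytes

for x in a:        # iterating over the elements in order
    print(x)
```

Appending an element of the wrong type (e.g. `a.append(1.5)`) raises a TypeError, which mirrors the homogeneity constraint of a classic array.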

Ques 2:- Write an algorithm for inserting and deleting an element in a 1-D array, along with the complexity of these operations.

Answer:- (a) Inserting an Element in a 1D Array: Algorithm InsertElement(Array, size, index, element):

1. Check that the index is valid (0 <= index <= size).

2. If the array is full (size == length of Array), return an error.

3. Shift elements one position to the right, from the end of the array down to the index.

4. Place the new element at the specified index.

5. Increment the size of the array.

6. Return success.
Time Complexity:

Best Case: O(1) - When inserting at the end of the array.

Worst Case: O(n) - When inserting at the beginning of the array, requiring shifting all subsequent
elements.

(b) Deleting an Element from a 1D Array: Algorithm DeleteElement(Array, size, index):

1. Check that the index is valid (0 <= index < size).

2. Remove the element at the specified index.

3. Shift elements one position to the left, starting from index + 1, to fill the empty space.

4. Decrement the size of the array.

5. Return success.

Time Complexity: Best Case: O(1) - When deleting the last element of the array.

Worst Case: O(n) - When deleting the first element of the array, requiring shifting all subsequent
elements.
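The two algorithms above can be sketched in Python (our own rendering of the pseudocode, using a fixed-capacity list to mimic a static array):

```python
def insert_element(arr, size, index, element):
    """Insert element at index in arr[0:size]; return the new size.
    O(1) best case (index == size), O(n) worst case (index == 0)."""
    if not (0 <= index <= size):
        raise IndexError("invalid index")
    if size == len(arr):
        raise OverflowError("array is full")
    # Shift elements right, from the end down to the index.
    for i in range(size, index, -1):
        arr[i] = arr[i - 1]
    arr[index] = element
    return size + 1

def delete_element(arr, size, index):
    """Delete the element at index in arr[0:size]; return the new size.
    O(1) best case (last element), O(n) worst case (first element)."""
    if not (0 <= index < size):
        raise IndexError("invalid index")
    # Shift elements left, starting from index + 1, to fill the gap.
    for i in range(index, size - 1):
        arr[i] = arr[i + 1]
    return size - 1

arr = [10, 20, 30, None, None]          # capacity 5, logical size 3
size = insert_element(arr, 3, 1, 15)    # arr[0:size] is now [10, 15, 20, 30]
size = delete_element(arr, size, 0)     # arr[0:size] is now [15, 20, 30]
```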

Ques 3:- Write a short note on row-major and column-major ordering of a multi-dimensional array, along with examples.

Row-Major Ordering: In row-major ordering, the elements of a multi-dimensional array are stored row by row in memory. Each row of the array is stored consecutively, followed by the next row, and so on.

Example: Consider the 3x3 matrix

[1 2 3]
[4 5 6]
[7 8 9]

In row-major ordering the elements are stored in memory as: [1 2 3 4 5 6 7 8 9]

Column-Major Ordering: In column-major ordering, the elements of a multi-dimensional array are stored column by column in memory. Each column is stored consecutively, followed by the next column, and so on.

Example: Using the same 3x3 matrix, in column-major ordering the elements are stored in memory as: [1 4 7 2 5 8 3 6 9]
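The two orderings can be checked in plain Python (our own sketch, flattening a nested list both ways):

```python
matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

# Row-major: concatenate the rows in order.
row_major = [x for row in matrix for x in row]
print(row_major)   # [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Column-major: walk down each column in turn.
col_major = [matrix[r][c] for c in range(3) for r in range(3)]
print(col_major)   # [1, 4, 7, 2, 5, 8, 3, 6, 9]
```

C stores multi-dimensional arrays in row-major order, while Fortran (and, by convention, many numerical libraries) uses column-major order.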
