Unit-1
Algorithm-
An algorithm is a set of well-defined instructions designed to perform a specific task or
solve a particular problem. It takes a set of inputs and produces the desired output through
a series of computational steps. Algorithms are fundamental to computer science and are
used in various fields such as mathematics, data science, artificial intelligence, and more.
Example: Algorithm to Add Three Numbers
step1: Start
step2: Declare three integer variables num1, num2, and num3.
step3: Take the three numbers as inputs in variables num1, num2, and num3.
step4: Declare an integer variable sum to store the resultant sum.
step5: Add the three numbers and store the result in the variable sum.
step6: Print the value of the variable sum.
step7: End
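As a rough illustration (a sketch, not part of the original steps), the same algorithm might be written in Python as follows:

# A minimal sketch of the add-three-numbers algorithm above.
# Assumes the three numbers are read from standard input, one per line.
num1 = int(input("Enter num1: "))   # steps 2-3: declare and read num1
num2 = int(input("Enter num2: "))   # steps 2-3: declare and read num2
num3 = int(input("Enter num3: "))   # steps 2-3: declare and read num3
total = num1 + num2 + num3          # steps 4-5: add the numbers and store the result (the "sum" variable)
print(total)                        # step 6: print the value of the result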
Characteristics of an Algorithm-
Well-Defined Inputs: If an algorithm takes inputs, those inputs should be
well-defined. An algorithm may or may not take any input.
Well-Defined Outputs: The algorithm must clearly define what output
will be produced, and that output should be well-defined as well. It should produce
at least one output.
Inputs: An algorithm has zero or more inputs.
Outputs: An algorithm produces at least one output.
Finite: An algorithm must terminate after a finite number of steps in all cases.
Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not rely on resources or technology that
do not yet exist.
Language Independent: The algorithm must be language-independent, i.e. it
must consist of plain instructions that can be implemented in any programming language, and
the output will be the same, as expected.
Effectiveness: An algorithm must be built from very basic, simple, and feasible
operations, so that it can be traced out using just paper and pencil.
Clear and Unambiguous: The algorithm should be unambiguous. Each of its steps should
be clear in all aspects and must lead to only one meaning; the steps should be written in an
orderly manner and the logic should be clear.
Advantages of Algorithms:
• It is easy to understand.
• An algorithm is a step-wise representation of a solution to a given problem.
• In an Algorithm the problem is broken down into smaller pieces or steps hence, it is
easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
• Writing an algorithm takes a long time, so the process is time-consuming.
• Understanding complex logic through algorithms can be very difficult.
Steps required to design an algorithm
1. Understand the Problem-The first step in designing an algorithm is to fully understand
the problem you’re trying to solve. Take the time to analyze the problem statement,
identify the inputs and outputs, and clarify requirements.
2. Design the algorithm: for example, the algorithm to add three numbers:
step1: Start
step2: Declare three integer variables num1, num2, and num3.
step3: Take the three numbers as inputs in variables num1, num2, and num3.
step4: Declare an integer variable sum to store the resultant sum.
step5: Add the three numbers and store the result in the variable sum.
step6: Print the value of the variable sum.
step7: End
3. Pseudocode or Flowchart: Write out the steps using plain language or diagrams to
visualize the algorithm’s structure.
4. Analyze the algorithm (time complexity and space complexity): Analyze the
performance of your algorithm and identify any areas for optimization. Consider factors
such as time complexity and space complexity and make any necessary adjustments to
improve efficiency.
5. Implement: Once you have designed your algorithm, implement it in a
programming language of your choice, e.g. C, C++, Python, or Java.
6. Testing: Test your algorithm with various inputs to ensure it produces the correct
outputs (a small example follows below).
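As a small illustration of steps 5 and 6 (a sketch; add_three is a hypothetical function name for the add-three-numbers algorithm from above), implementation and testing in Python might look like this:

def add_three(num1, num2, num3):
    # implementation of the designed algorithm
    return num1 + num2 + num3

# test with various inputs and check that the outputs are correct
assert add_three(1, 2, 3) == 6
assert add_three(0, 0, 0) == 0
assert add_three(-1, 5, 2) == 6
print("all tests passed")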
Analysis of Algorithm
1. Priori Analysis:
"Priori" means "before". Hence Priori analysis means checking the algorithm before its
implementation. In this, the algorithm is checked when it is written in the form of
theoretical steps. Here, the efficiency of an algorithm is measured by assuming that all other
factors, for example processor speed, are constant and have no effect on the
implementation.
2. Posterior Analysis: "Posterior" means "after". Hence Posterior analysis means checking
the algorithm after its implementation. In this, the algorithm is checked by implementing it
in any programming language and executing it. It is dependent on the programming language,
the compiler, and the type of hardware used.
Algorithm Complexity
Complexity in algorithms refers to the amount of resources (such as time or memory)
required to solve a problem or perform a task.
1.Time Complexity
2.Space Complexity
Time complexity
The time complexity of an algorithm refers to the amount of time required by the algorithm
to execute and get the result. This can be for normal operations, conditional if-else
statements, loop statements, etc.
Worst case: In the worst-case analysis, we calculate the upper bound on the running time of
an algorithm. We must know the case that causes the maximum number of operations to be
executed.
Best case: In the best-case analysis, we calculate the lower bound on the running time of an
algorithm. We must know the case that causes the minimum number of operations to be
executed.
Average case: In average-case analysis, we take all possible inputs and calculate the
computing time for each of them. We sum all the calculated values and divide the sum by the
total number of inputs.
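For example (an illustration not worked out in these notes), consider a linear search, where the three cases correspond to where the key is found. A minimal Python sketch:

def linear_search(arr, key):
    # Best case: key is at index 0 (minimum number of comparisons).
    # Worst case: key is at the last index or not present at all (maximum comparisons).
    # Average case: averaged over all possible positions of the key,
    # roughly n/2 comparisons for an array of size n.
    for i, value in enumerate(arr):
        if value == key:
            return i
    return -1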
Space complexity
The space complexity of an algorithm refers to the amount of memory required by the
algorithm to store the variables and get the result.
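For instance (a small sketch, not from the notes; the function names are illustrative), two ways of summing the first n numbers take the same O(n) time but differ in space:

def sum_constant_space(n):
    # O(1) extra space: only the running total is stored.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_linear_space(n):
    # O(n) extra space: a list of n numbers is built before summing.
    values = list(range(1, n + 1))
    return sum(values)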
Growth of functions
The growth of functions describes how the execution time or space requirements of an
algorithm increase as the size of the input grows. This understanding helps in comparing
the efficiency of different algorithms and selecting the most appropriate one for a given
problem.
Asymptotic Notations in Algorithms
Asymptotic Notations are mathematical tools used to analyze the performance of
algorithms by understanding how their efficiency changes as the input size grows.
1. Big-O Notation (O-notation) (upper bound)
2. Omega Notation (Ω-notation) (lower bound)
3. Theta Notation (Θ-notation) (tight bound)
4. Little-o Notation (o-notation)
5. Little-omega Notation (ω-notation)
1.Big-O Notation (O-notation) (upper bound)- It denotes the upper bound of the runtime
of an algorithm. Big O Notation's role is to calculate the longest time an algorithm can take
for its execution, i.e., it is used for calculating the worst-case time complexity of an
algorithm.
We say that
f(n) = O(g(n))
if and only if
f(n) <= c.g(n) for some constant c > 0 and for all n >= n0 >= 0
For example:
1. Given f(n) = 2n^2 + 3n + 2 and g(n) = n^2, prove that f(n) = O(g(n)).
Solution:
Steps:
1. We have to show f(n) <= c.g(n), where c and n0 are some positive
constants, for all n >= n0.
2. Now, find values of c and n0 such that equation (1) holds true:
2n^2 + 3n + 2 <= c.n^2 .....(1)
If we put c = 7 then
2n^2 + 3n + 2 <= 7n^2
Now, put n = 1, which is n0:
7 <= 7 [True]
Hence, when c = 7 and n0 = 1, f(n) = O(g(n)) for all n >= n0.
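As a quick (non-rigorous) sanity check of the constants found above, the inequality can be verified numerically for a range of n in Python:

# check that 2n^2 + 3n + 2 <= 7*n^2 for n = n0, ..., 1000 with n0 = 1
c, n0 = 7, 1
for n in range(n0, 1001):
    assert 2*n*n + 3*n + 2 <= c * n * n
print("f(n) <= 7*g(n) holds for n = 1 .. 1000")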
2.Omega Notation (Ω-notation) (lower bound)-It represents the lower bound of the
runtime of an algorithm. It is used for calculating the best time an algorithm can take to
complete its execution, i.e., it is used for measuring the best-case time complexity of an
algorithm.
We say that
f(n) = Ω(g(n)) if and only if f(n) >= c.g(n) for some constant c > 0 and for all n >= n0 >= 0
For example:
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Ω(g(n)).
Solution
1. We have to show that f(n) >= c.g(n), where c and n0 are some
positive constants, for all n >= n0.
2. 3n + 2 >= c.n
3. When we put c = 1:
4. 3n + 2 >= n
5. Put n = 1:
6. 5 >= 1 [True]
7. Hence, when we put c = 1 and n0 = 1, f(n) = Ω(g(n)).
3. Theta Notation (Θ-notation): Theta notation encloses the function from above and
below. Since it represents both the upper and the lower bound of the running time of an
algorithm, it gives a tight bound and is commonly used for analyzing the average-case complexity of an algorithm.
We say that
f(n) = Θ(g(n)) if and only if c1.g(n) <= f(n) <= c2.g(n) for all n >= n0 >= 0 and constants c1, c2 > 0
For example:
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Θ(g(n)).
Solution
1. We have to show that c1.g(n) <= f(n) <= c2.g(n) for all n >= n0 >= 0
and constants c1, c2 > 0.
2. c1.g(n) <= f(n) <= c2.g(n)
3. c1.n <= 3n + 2 <= c2.n
4. Put c1 = 1, c2 = 4 and n = 2; then 2 <= 8 <= 8 [True]
5. Hence, f(n) = Θ(g(n)) where c1 = 1, c2 = 4 and n0 = 2.
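The constants for the Theta example can be sanity-checked the same way (a sketch, not a proof):

# check that c1*n <= 3n + 2 <= c2*n for n = n0, ..., 1000 with c1 = 1, c2 = 4, n0 = 2
c1, c2, n0 = 1, 4, 2
for n in range(n0, 1001):
    assert c1 * n <= 3*n + 2 <= c2 * n
print("c1*g(n) <= f(n) <= c2*g(n) holds for n = 2 .. 1000")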
Time complexity of recursive programs
To find the time complexity of recursive programs, we have to write a
recurrence relation for the given program and then solve it using one of
the following methods:
1. Iteration method (back substitution)
2. Recursion-tree method
3. Master method
4. Forward substitution method (Substitution Method)
5. Changing variable method
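As a brief illustration of the iteration (back substitution) method, a standard example (not taken from these notes) is the recurrence T(n) = T(n-1) + 1 with T(1) = 1:
T(n) = T(n-1) + 1
     = T(n-2) + 1 + 1
     = T(n-3) + 1 + 1 + 1
     ...
     = T(1) + (n - 1)
     = n
Hence T(n) = O(n).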
Master Method
If the recurrence relation is of the type
T(n) = a.T(n/b) + f(n)
where
f(n) > 0;
a >= 1;
b > 1
then using the following three cases we can find the complexity of the relation and directly write it in the
form of O, Ω, or Θ.
Case 1: If f(n) = O(n^(log_b a - ε))
for some ε > 0, then
T(n) = Θ(n^(log_b a))
Case 2:
If f(n) = Θ(n^(log_b a)), then
T(n) = Θ(n^(log_b a) . log n)
Case 3: If f(n) = Ω(n^(log_b a + ε))
for some ε > 0 (and a.f(n/b) <= c.f(n) for some constant c < 1 and all sufficiently large n), then
T(n) = Θ(f(n))
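For example (a standard illustration, e.g. the merge sort recurrence), consider T(n) = 2T(n/2) + n:
here a = 2, b = 2, so n^(log_b a) = n^(log_2 2) = n^1 = n;
f(n) = n = Θ(n^(log_b a)), so Case 2 applies;
therefore T(n) = Θ(n log n).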
Linear Time Sorting Algorithms (O(n))
These algorithms achieve linear time complexity under specific conditions:
• Counting Sort: Works efficiently when the range of input values is small compared
to the number of elements.
• Radix Sort: Sorts numbers digit by digit, often used with integers.
• Bucket Sort: Divides elements into buckets and sorts them individually, achieving
linear time in the best case.
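As an illustration of the first of these, a minimal Counting Sort sketch in Python (assuming non-negative integer keys with a small maximum value) might look like this:

def counting_sort(arr):
    # Assumes all elements are non-negative integers.
    if not arr:
        return []
    k = max(arr)                       # largest key value
    count = [0] * (k + 1)              # one counter per possible key value
    for value in arr:                  # count occurrences of each key
        count[value] += 1
    output = []
    for value in range(k + 1):         # rebuild the array in sorted order
        output.extend([value] * count[value])
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))   # [1, 2, 2, 3, 3, 4, 8]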
Stable and In-place sorting
Stable sorting
A stable sorting algorithm is a sorting algorithm that preserves the relative order of
elements with equal keys. In other words, if two elements with the same key appear in the
input list in a certain order, they will also appear in the same order in the output list.
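A small illustration of stability (a sketch using Python's built-in sort, which is stable; the names and marks are made up): records with equal keys keep their original relative order.

# sort (name, marks) records by marks only
records = [("asha", 50), ("ravi", 40), ("meena", 50), ("john", 40)]
records.sort(key=lambda r: r[1])       # stable: equal marks keep their input order
print(records)
# [('ravi', 40), ('john', 40), ('asha', 50), ('meena', 50)]
# "ravi" still comes before "john", and "asha" before "meena".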
Stable Sorting Algorithms -
1. Bubble Sort
2. Insertion Sort
3. Merge Sort
4. Counting Sort
5. Radix Sort
6. Tim Sort
Unstable Sorting Algorithms-
1. Selection Sort
2. Quick Sort
3. Heap Sort
4. Shell Sort
In-place sorting
An in-place sorting algorithm is a sorting algorithm that sorts the input array in place,
without using any additional memory. This means that the algorithm does not need to
create a new array to store the sorted elements.
In-place sorting algorithms are often used in applications where memory is limited, for
example when sorting lists of elements directly within a computer's main memory.
In-place Sorting Algorithms:
Bubble Sort
Insertion Sort
Selection Sort
Quick Sort
Heap Sort
Shell Sort
Not In-place Sorting Algorithms (use extra space): (Trick to learn BMCR)
Merge Sort
Counting Sort
Radix Sort
Bucket Sort
Sorting
A sorting algorithm is used to arrange the elements of an array/list in a specific order,
typically ascending or descending.
There are various sorting algorithms that can be used to complete this operation. And, we
can use any algorithm based on the requirement.
Shell Sort
Shell Sort is an in-place comparison-based sorting algorithm that is a generalization of
Insertion Sort. It first sorts elements that are far apart from each other and successively
reduces the interval between the elements to be sorted.
The interval between the elements is reduced based on the sequence used.
• Shell's original sequence: N/2, N/4, …, 1(we use this sequence).
Working of Shell Sort-
Example: [9, 8, 3, 7, 5, 6, 4, 1]
Pass1-
Step-I We are using the shell's original sequence (N/2, N/4, ...1) as intervals in our
algorithm.
{In the first loop, if the array size is N = 8 then, the elements lying at the interval of N/2 =
4 are compared and swapped if they are not in order.}
Step-II The 0th element is compared with the 4th element.
Step-III If the 0th element is greater than the 4th one, the 4th element is first stored
in a temp variable, the 0th element (the greater element) is stored in the 4th position, and the
element stored in temp is stored in the 0th position. This process goes on for all the
remaining elements.
Pass2-
Step1-we are going to decrease the interval gap. (now N/4 where N=8) so the gap is 2.
Step2- The elements at 4th and 2nd position are compared. The elements
at 2nd and 0th position are also compared. All the elements in the array lying at the current
interval are compared.
Step3- The same process goes on for remaining elements.
Pass3-
Step1- Finally, when the interval is N/8 = 8/8 =1 then the array elements lying at the
interval of 1 are sorted. The array is now completely sorted.
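A minimal Python sketch of Shell Sort using Shell's original sequence (N/2, N/4, ..., 1), following the passes described above:

def shell_sort(arr):
    n = len(arr)
    gap = n // 2                       # Shell's original sequence: N/2, N/4, ..., 1
    while gap > 0:
        # gapped insertion sort: compare and shift elements that are "gap" apart
        for i in range(gap, n):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2                      # reduce the interval for the next pass
    return arr

print(shell_sort([9, 8, 3, 7, 5, 6, 4, 1]))   # [1, 3, 4, 5, 6, 7, 8, 9]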