UNIT - 1 Introduction - Asymptotic Notations
UNIT 1: INTRODUCTION
❖ DEFINITION OF ALGORITHM:
Informal Definition:
An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. Thus an algorithm is a sequence of computational steps that transforms the input into the output.
Observe the following activities to see how an algorithm is used to produce the desired result:
The solution to a given problem is expressed in the form of an algorithm.
The algorithm is converted into a program.
The program, when executed, accepts the input and produces the desired output.
❖ CHARACTERISTICS OF AN ALGORITHM
1) Input specified
An algorithm should have 0 or more well-defined inputs.
2) Output specified
An algorithm should have 1 or more well-defined outputs, and should match the desired
output.
3) Definiteness
Algorithms must specify every step and the order in which the steps are to be taken.
Definiteness means specifying the sequence of operations for turning input into output.
Each step should be clear and unambiguous.
4) Effectiveness
For an algorithm to be effective, all those steps that are required to get to output must be
feasible with the available resources. It should not contain any unnecessary and
redundant steps which could make an algorithm ineffective.
5) Finiteness
The algorithm must stop, eventually. Stopping may mean that you get the expected
output OR you get a response that no solution is possible. Algorithms must terminate
after a finite number of steps. An algorithm should not be infinite.
6) Independent
An algorithm should have step-by-step directions that are independent of any
programming code, so that it can be implemented in any programming language.
Decision making:
(ii) Ascertaining the Capabilities of the Computational Device:
In random-access machine (RAM), instructions are executed one after another
(The central assumption is that one operation at a time). Accordingly,
algorithms designed to be executed on such machines are called sequential
algorithms.
Choice of computational devices like Processor and memory is mainly based
on space and time efficiency
Choosing between Exact and Approximate Problem Solving:
An algorithm that solves the problem exactly and produces the correct result is
called an exact algorithm.
If the problem is so complex that an exact solution cannot be obtained, then we have
to choose an approximation algorithm, i.e., one that produces an
approximate answer. E.g., extracting square roots, solving nonlinear
equations, and evaluating definite integrals.
Algorithm Design Techniques:
An algorithm design technique (or “strategy” or “paradigm”) is a general
approach to solving problems algorithmically that is applicable to a variety of
problems from different areas of computing. (used to solve different problems
such as brute force, divide and conquer, greedy method, dynamic
programming and so on).
Implementation is only possible with the help of Algorithms and Data
Structures.
3. Pseudo-code Method:
In this method, we typically describe algorithms as programs that resemble
languages like Pascal and ALGOL.
Pseudo-Code Conventions:
1) An algorithm consists of a Head and a Body.
Head: Algorithm Name(Parameter List)
-> Assignment statements
-> Using operators
ALGORITHM GCD(m,n)
// Computes the GCD of m and n
Step 1: while m is not equal to n, replace the larger of m and n by their difference
Step 2: return M or N
EXAMPLE: M=10,N=6
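The pseudo-code above can be sketched as a C function. This is a minimal sketch: the subtraction-based method is assumed from "return M or N", and the function name `gcd` is our own.

```c
/* Subtraction-based GCD, following the pseudo-code above.
   Repeatedly subtract the smaller value from the larger one
   until both become equal; that common value is the GCD.
   Assumes m and n are positive integers. */
int gcd(int m, int n)
{
    while (m != n) {
        if (m > n)
            m = m - n;
        else
            n = n - m;
    }
    return m; /* m == n here, so returning either is fine */
}
```

For the example M=10, N=6 the sequence of pairs is (10,6) → (4,6) → (4,2) → (2,2), so the result is 2.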
CONTROL STRUCTURES:
Control structures are a way to specify the flow of control in programs. A control
structure determines the direction in which a program proceeds based on certain
parameters or conditions.
There are three basic types of control structures, namely sequential, selection and
iteration.
Depending on whether a condition is true or false, the decision control structure may
skip the execution of entire block of statements.
This is a control structure that allows the execution of a block of statements multiple
times until a specified condition is met.
ANALYSIS OF ALGORITHM
Analysis of algorithms is the determination of the amount of time and space resources
required to execute them.
Analysis of an algorithm can be done by measuring its Performance and Complexity
● Performance: How much time/memory/disk/etc. is used when a program is run.
This depends on the machine, compiler, etc. as well as the code we write.
● Complexity: How do the resource requirements of a program or algorithm scale,
i.e. what happens as the size of the problem being solved by the code gets larger.
SPACE COMPLEXITY
Space complexity is the amount of memory used by the algorithm (including the input
values to the algorithm) to execute and produce the result.
But while calculating the Space Complexity of any algorithm, we usually consider only
Data Space and we neglect the Program Space and Stack Space.
EXAMPLE
int square(int a)
{
return a*a;
}
Here the variable a occupies a constant amount of memory, so the space complexity is
O(1). If an algorithm instead stores all n input values in an array, its data space grows
linearly with n and the space complexity is O(n).
TIME COMPLEXITY
Time complexity of an algorithm signifies the total time required by the program to run
to completion. It depends on factors such as:
● Speed of computer
● Choice of the programming language
● Compiler used
● Choice of algorithm
● Size of input and output
Since time efficiency depends on the size of the input n, it is expressed in terms of n.
Example 1
int sum(int a, int b)
{
return a+b;
}
In the above sample code, it requires 1 unit of time to calculate a+b and 1 unit of time to
return the value. That means it takes 2 units of time in total to complete its execution,
and this does not change with the input values of a and b. For all input values, it
requires the same amount of time, i.e. 2 units.
Example 2
int sum(int A[], int n)
{
int sum = 0, i;
for(i = 0; i < n; i++)
sum = sum + A[i];
return sum;
}
Units for Measuring Running Time: Count the number of times an algorithm’s basic
operation is executed. (Basic operation: The most important operation of the
algorithm, the operation contributing the most to the total running time.) For e.g., The
basic operation is usually the most time-consuming operation in the algorithm’s
innermost loop.
BASIC OPERATION
● An operation that contributes most towards the running time of the algorithm, OR the
statement that executes the maximum number of times in a function.
● The number of times the basic operation executes depends on the size of the input.
● Time efficiency is analyzed by determining the number of times the basic operation is
executed.
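Counting the basic operation can be sketched directly in C (an illustrative sketch; the counter and function name are our own):

```c
/* Counts how many times the basic operation (the addition inside
   the loop) executes for an input of size n. The counter is purely
   illustrative; it shows that the count equals n, i.e. the time
   efficiency of this loop is linear in n. */
int count_basic_ops(int n)
{
    int count = 0;
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum = sum + i; /* basic operation */
        count++;
    }
    return count;      /* executes exactly n times */
}
```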
KLE Society’s JT BCA College, Gadag Page 11
Design and Analysis of Algorithms
❖ ORDER OF GROWTH
Some algorithms execute faster for smaller values of n. But, as the value of n
increases, they tend to be very slow. So, the behavior of some algorithm changes with
increase in the value of n. This change in behavior of algorithm and algorithm’s
efficiency can be analyzed by considering the highest order of n. The order of growth is
normally determined for larger values of n for the following reasons.
Behavior of an algorithm changes with an increase in the value of n (size of input).
For example, 2^4 = 16 while 2^8 = 256, i.e., an exponential function grows enormously
for a small change in n. So an algorithm with linear growth is preferred over an
algorithm with exponential growth.
● n!: The running time of the algorithm is factorial. An algorithm that generates all
permutations of a set has this running time.
The following table shows how some common growth functions behave for different
problem sizes:

n       log2 n    n log2 n    n^2      2^n
10      3.3       33          100      about 10^3
100     6.6       664         10^4     about 1.3 x 10^30
1000    10        10^4        10^6     astronomically large
TYPES OF EFFICIENCIES:
1. Best case Efficiency: Defines the input for which the algorithm takes minimum time.
In the best case we calculate the lower bound of an algorithm. Example: in linear
search, the best case occurs when the search key is present at the first location of a
large data set. It is the efficiency (number of times the basic operation will be executed)
for the best-case input of size n, i.e. the algorithm runs the fastest among all possible
inputs of size n.
2. Average case Efficiency: In the average case we take all random inputs, calculate
the computation time for all of them, and then divide the total by the number of inputs.
It is the average time taken (number of times the basic operation will be executed) to
solve all the possible (random) instances of the input. NOTE: this is NOT the average of
the worst and best cases.
3. Worst Case Efficiency: Defines the input for which the algorithm takes maximum
time. In the worst case we calculate the upper bound of an algorithm. Example: in linear
search, the worst case occurs when the search key is not present at all. It is the efficiency
(number of times the basic operation will be executed) for the worst-case input of size
n, i.e. the algorithm runs the longest among all possible inputs of size n.
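The linear-search cases above can be made concrete with a comparison counter (a sketch; the counter parameter is our own addition):

```c
/* Linear search that also reports how many comparisons it made.
   Returns the index of key in A[0..n-1], or -1 if not present.
   Best case: key at A[0], 1 comparison. Worst case: key absent,
   n comparisons. */
int linear_search(const int A[], int n, int key, int *comparisons)
{
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;
        if (A[i] == key)
            return i;
    }
    return -1;
}
```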
ASYMPTOTIC NOTATIONS
Asymptotic notations are the mathematical notations used to describe the running time
of an algorithm when the input tends towards a particular value or a limiting value.
➔ BIG-OH ( O ) NOTATION
This notation provides an upper bound on the growth rate of an algorithm's running
time. A function f(n) belongs to the set O(g(n)) if there exist a positive constant C and
n0 such that f(n) <= C*g(n) for all n >= n0. For any value of n, the running time of the
algorithm does not cross the time provided by O(g(n)).
Example
Consider the following f(n) and g(n) two non negative functions…
f(n)=3n+2 g(n)=n If we want to represent f(n) as O(g(n)) then it must satisfy f(n)
<= C*g(n) for all values of C > 0 and n0>=1
f(n)<=C*g(n) ⇒3n+2<=C*n
3n+2<=C*n (let C=4)
3n+2<=4*n (let n=n0=2)
3*2+2<=4*2
8<=8 (Condition satisfied)
Then f(n)<=C*g(n) for all n>=2. Above condition is always TRUE for all values of
C = 4 and n >= 2.
➔ OMEGA ( Ω ) NOTATION
This notation provides a lower bound on the growth rate of an algorithm’s running time
or space usage. It represents the best-case scenario, i.e., the minimum amount of time or
space an algorithm may need to solve a problem. For example, if an algorithm’s running
time is Ω(n), then it means that the running time of the algorithm increases linearly with
the input size n or more.
Example
Consider the following f(n) and g(n) two non negative functions…
f(n)=3n+2 g(n)=n If we want to represent f(n) as Ω(g(n)) then it must satisfy f(n) >= C
g(n) for all values of C>0 and n0>=1
f(n)>=C*g(n) ⇒3n+2>=C*n
3n+2>=C*n (let C=1)
3n+2>=1*n (let n=n0=1)
3*1+2>=1*1
5>=1(Condition satisfied)
Then f(n)>=C*g(n) for all n>=1. Above condition is always TRUE for all values of C = 1
and n >= 1.
➔ THETA ( Θ ) NOTATION
This notation provides a tight bound: it bounds a function from both above and below,
and so represents the exact growth rate of an algorithm's running time.
Example
Consider the following f(n) and g(n) two non negative functions… f(n)=3n+2 g(n)=n If
we want to represent f(n) as Θ(g(n)) then it must satisfy C1*g(n) <= f(n) <= C2*g(n) for
all values of C1 >0,C2 >0 and n0>=1
C1*g(n)<=f(n)<=C2*g(n) ⇒C1*n<=3n+2<=C2*n
C1*n<=3n+2<=C2*n (let C1=1 and C2=4)
1*n<=3n+2<=4*n (let n=n0=1)
1*1<=3*1+2<=4*1
1<=5<=4 (Condition not satisfied)
1*n<=3n+2<=4*n (let n=n0=2)
1*2<=3*2+2<=4*2
2<=8<=8 (Condition satisfied)
Then C1*g(n)<=f(n)<=C2*g(n) for all n>=2. Above condition is always TRUE for all
values of C1 = 1, C2 = 4 and n >= 2.
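The bound derived above, C1*g(n) <= f(n) <= C2*g(n) with f(n) = 3n+2, g(n) = n, C1 = 1 and C2 = 4, can be checked numerically over a range of n (an illustrative sketch; the function name is our own):

```c
/* Checks C1*g(n) <= f(n) <= C2*g(n) for f(n) = 3n + 2, g(n) = n,
   with C1 = 1 and C2 = 4, for every n in [n0, limit].
   Returns 1 if the bound holds throughout, 0 otherwise. */
int theta_bound_holds(int n0, int limit)
{
    for (int n = n0; n <= limit; n++) {
        int f = 3 * n + 2;
        if (!(1 * n <= f && f <= 4 * n))
            return 0;
    }
    return 1;
}
```

As expected, the bound holds from n0 = 2 onward but fails at n = 1 (since 3*1+2 = 5 > 4*1).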
1. Brute Force Algorithm: This is the most basic and simplest type of algorithm. A
Brute Force Algorithm is the straightforward approach to a problem i.e., the first
approach that comes to our mind on seeing the problem. More technically it is just like
iterating every possibility available to solve that problem.
Example: If there is a lock of 4-digit PIN. The digits to be chosen from 0-9 then the
brute force will be trying all possible combinations one by one like 0001, 0002, 0003,
0004, and so on until we get the right PIN. In the worst case, it will take 10,000 tries to
find the right combination.
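The PIN example can be sketched in C (a minimal sketch; the function name and the idea of returning the attempt count are our own):

```c
/* Brute-force PIN search: try every 4-digit combination
   0000..9999 until the secret PIN is found. Returns the
   number of attempts made, which is 10,000 in the worst
   case (secret == 9999). */
int brute_force_pin(int secret)
{
    int attempts = 0;
    for (int guess = 0; guess <= 9999; guess++) {
        attempts++;
        if (guess == secret)
            return attempts;
    }
    return attempts; /* not reached for a valid 4-digit secret */
}
```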
3. Divide and Conquer Algorithm: In Divide and Conquer algorithms, the idea is to
solve the problem in two sections: the first section divides the problem into sub-problems
of the same type, and the second section solves the smaller problems
independently and then combines their results to produce the final answer to the
problem. Some common problems solved using Divide and Conquer algorithms
are Binary Search, Merge Sort, Quick Sort, Strassen’s Matrix Multiplication, etc.
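Binary search, the simplest of the divide-and-conquer examples listed, can be sketched as follows (an iterative version; the array must be sorted):

```c
/* Divide-and-conquer example: binary search on a sorted array.
   Each step halves the remaining search range, giving O(log n)
   time. Returns the index of key, or -1 if not present. */
int binary_search(const int A[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; /* divide the range */
        if (A[mid] == key)
            return mid;                   /* found */
        else if (A[mid] < key)
            low = mid + 1;                /* search right half */
        else
            high = mid - 1;               /* search left half */
    }
    return -1; /* key not present */
}
```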
4. Dynamic Programming Algorithm: This algorithm breaks a
complex problem into smaller overlapping sub-problems and stores their results for
future use. The following problems can be solved using the Dynamic Programming
algorithm: Knapsack Problem, Weighted Job Scheduling, Floyd-Warshall Algorithm, etc.
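A classic small illustration of dynamic programming (not one of the problems listed above) is computing Fibonacci numbers bottom-up, storing each overlapping sub-problem's result once and reusing it:

```c
/* Dynamic-programming sketch: Fibonacci numbers computed
   bottom-up. Each sub-problem fib(i) is solved once, stored
   in the table, and reused, instead of being recomputed.
   Assumes 0 <= n <= 46 so the result fits in an int. */
int fib(int n)
{
    int table[47];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2]; /* reuse stored results */
    return table[n];
}
```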
5. Greedy Algorithm: In the Greedy Algorithm, the solution is built part by part. The
next part is chosen on the basis that it gives an immediate benefit, and choices made
previously are never reconsidered. Some common problems that can be solved with
greedy algorithms are Dijkstra’s Shortest Path Algorithm, Prim’s Algorithm, Kruskal’s
Algorithm, etc.
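A simple illustration of the greedy idea (not one of the graph problems listed above) is making change with a canonical coin system, always taking the largest coin that fits; the denominations below are our own choice:

```c
/* Greedy sketch: count the fewest coins needed to make change
   using denominations 25, 10, 5, 1. At each step, take as many
   of the largest remaining coin as possible (an immediate-benefit
   choice that is never reconsidered). */
int greedy_coin_count(int amount)
{
    const int coins[] = {25, 10, 5, 1};
    int count = 0;
    for (int i = 0; i < 4; i++) {
        count += amount / coins[i]; /* take as many as possible */
        amount %= coins[i];         /* reduce the remaining amount */
    }
    return count;
}
```

For example, 41 cents is made as 25 + 10 + 5 + 1, i.e. 4 coins. With these denominations the greedy choice happens to be optimal; for arbitrary coin systems it need not be.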
8. Sorting Algorithm: A sorting algorithm is used to arrange data in ascending or
descending order, and more generally to organize data in an efficient and useful
manner. Bubble sort, insertion sort and selection sort are examples of sorting
algorithms.
9. Searching Algorithm: A searching algorithm is used to locate a specific key in
sorted or unsorted data. Binary search and linear search are examples of searching
algorithms.
10. Hashing Algorithm: Hashing algorithms work like searching algorithms, but they
maintain an index with a key ID, i.e. a key-value pair. In hashing, we assign a key to
specific data. A common application of hashing is in password verification.
Write an algorithm to find the Maximum and Minimum numbers in array and time
efficiency of the algorithm.
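One possible answer can be sketched as follows: scan the array once, updating the current maximum and minimum. The basic operation (the comparison) executes 2(n-1) times, so the time efficiency is O(n). The function name and parameter layout below are our own.

```c
/* Finds the maximum and minimum elements of A[0..n-1] in one
   pass. Performs 2(n-1) comparisons, so the time efficiency
   is O(n). Assumes n >= 1; results are written through the
   max and min pointers. */
void max_min(const int A[], int n, int *max, int *min)
{
    *max = A[0];
    *min = A[0];
    for (int i = 1; i < n; i++) {
        if (A[i] > *max)
            *max = A[i];
        if (A[i] < *min)
            *min = A[i];
    }
}
```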