Daa Unit 1

Uploaded by

Sonu kumar Singh


DAA UNIT 1

Advanced Data Structure and Algorithms (Jawaharlal Nehru Technological University, Anantapur)


UNIT – 1

INTRODUCTION: Notion of Algorithm – Fundamentals of algorithmic problem solving – Important problem types – Fundamentals of the analysis of algorithm efficiency – Analysis framework – Asymptotic Notations – Recurrence equations – Solving recurrence equations – Analysis of linear search – Recursive solution to the Tower of Hanoi Puzzle.

ALGORITHM
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.

An algorithm, named after the ninth-century scholar Abu Ja'far Muhammad ibn Musa al-Khwarizmi, is defined as follows:

 An algorithm is a set of rules for carrying out calculation either by hand or on a machine.
 An algorithm is a finite step-by-step procedure to achieve a required result.
 An algorithm is a sequence of computational steps that transform the input into the
output.
 An algorithm is a sequence of operations performed on data that have to be organized in
data structures.
 An algorithm is an abstraction of a program to be executed on a physical machine (model
of Computation).

Characteristics of an Algorithm:
1. Finiteness. An algorithm must always terminate after a finite number of steps.
2. Definiteness. Each step of an algorithm must be precisely defined; the actions to be
carried out must be rigorously and unambiguously specified for each case.
3. Input. An algorithm has zero or more inputs, i.e, quantities which are given to it initially
before the algorithm begins.
4. Output. An algorithm has one or more outputs i.e, quantities which have a specified
relation to the inputs.
5. Effectiveness. An algorithm is also generally expected to be effective. This means that all
of the operations to be performed in the algorithm must be sufficiently basic that they can
in principle be done exactly and in a finite length of time.

Issues or study of Algorithm:


 How to devise or design an algorithm – creating an algorithm.


 How to express an algorithm – definiteness.


 How to analyze an algorithm – time and space complexity.
 How to validate an algorithm – fitness.
 Testing the algorithm – checking for errors.

Algorithm Specification:
Algorithm can be described in three ways.
1. Natural language like English: When this way is chosen, care should be taken; we should
ensure that each & every statement is definite.
2. Graphic representation called flowchart: This method works well when the algorithm
is small & simple.
3. Pseudo-code Method: In this method, we typically describe algorithms as programs,
which resemble languages like C, C++, Java etc.

Pseudo-Code Conventions for expressing Algorithms:

1. Comments begin with // and continue until the end of line.

2. Blocks are indicated with matching braces {and}.

3. An identifier begins with a letter. The data types of variables are not explicitly declared.

4. Compound data types can be formed with records. Here is an example,

Node = Record
{
    data type-1 data-1;
    ...
    data type-n data-n;
    node *link;
}
5. Assignment of values to variables is done using the assignment statement.

<Variable>:= <expression>;
6. There are two Boolean values TRUE and FALSE.

1. Logical Operators AND, OR, NOT


2. Relational Operators <, <=,>,>=, =, !=
7. The following looping statements are employed.

For, while and repeat-until


While Loop:
While <condition> do
{
    <statement-1>
    ...
    <statement-n>
}

For Loop:
For variable := value-1 to value-2 step step do
{
    <statement-1>
    ...
    <statement-n>
}

repeat-until:
repeat
    <statement-1>
    ...
    <statement-n>
until <condition>

Case statement:
Case
{
    :<condition-1>: <statement-1>
    ...
    :<condition-n>: <statement-n>
    :else: <statement-n+1>
}

8. A conditional statement has the following forms:

If <condition> then <statement>
If <condition> then <statement-1> Else <statement-2>
9. Input and output are done using the instructions read & write.
10. There is only one type of procedure:
Algorithm, the heading takes the form,
Algorithm Name (Parameter lists)

As an example, the following algorithm finds & returns the maximum of 'n' given numbers:

Algorithm Max(A, n)
// A is an array of size n.
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then
            Result := A[i];
    return Result;
}

In this algorithm (named Max), A & n are procedure parameters; Result & i are local variables.
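As a small illustration (not part of the original text), the pseudocode above maps almost line-for-line to Python, the main difference being Python's 0-based indexing versus the 1-based indexing of the pseudocode:

```python
def max_of(a):
    """Return the maximum of the numbers in the non-empty list a."""
    result = a[0]                    # corresponds to Result := A[1]
    for i in range(1, len(a)):       # corresponds to for i := 2 to n
        if a[i] > result:
            result = a[i]
    return result
```

As in the pseudocode, `result` and `i` are local variables and the list `a` plays the role of the procedure parameter A.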

Types of Algorithms:

1. Brute Force
Brute force is a straightforward approach to solve a problem based on the problem's
statement and definitions of the concepts involved. It is considered one of the easiest
approaches to apply and is useful for solving small-size instances of a problem.
Some examples of brute force algorithms are: Selection sort, Bubble sort
2. Greedy Algorithms "take what you can get now" strategy
The solution is constructed through a sequence of steps, each expanding a partially
constructed solution obtained so far. At each step the choice must be locally optimal – this
is the central point of this technique.
Examples: Minimal spanning tree, Shortest distance in graphs
3. Divide-and-Conquer, Decrease-and-Conquer
These are methods of designing algorithms that (informally) proceed as follows:


Given an instance of the problem to be solved, split this into several smaller sub-instances
(of the same problem), independently solve each of the sub-instances and then combine
the sub-instance solutions so as to yield a solution for the original instance.
With the divide-and-conquer method the size of the problem instance is reduced by a
factor (e.g. half the input size), while with the decrease-and-conquer method the size is
reduced by a constant.
Examples of divide-and-conquer algorithms:
Binary search in a sorted array (recursion)
Mergesort algorithm, Quicksort algorithm (recursion)
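One of the examples above, binary search, can be sketched as a recursive Python function; this is an illustrative implementation (names and interface chosen here, not taken from the text). Each call halves the sub-instance, which is what gives the logarithmic running time:

```python
def binary_search(a, key, lo=0, hi=None):
    """Return the index of key in the sorted list a, or -1 if absent."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                                   # empty sub-instance: key not present
        return -1
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    elif a[mid] < key:                            # key can only be in the right half
        return binary_search(a, key, mid + 1, hi)
    else:                                         # key can only be in the left half
        return binary_search(a, key, lo, mid - 1)
```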
4. Dynamic Programming
One disadvantage of using Divide-and-Conquer is that the process of recursively solving
separate sub-instances can result in the same computations being performed
repeatedly, since identical sub-instances may arise. The idea behind dynamic
programming is to avoid this pathology by obviating the requirement to calculate the
same quantity twice. The method usually accomplishes this by maintaining a table of
sub-instance results. Dynamic Programming is a Bottom-Up Technique in which the
smallest sub-instances are explicitly solved first and the results of these are used to
construct solutions to progressively larger sub-instances.
Examples: Fibonacci numbers computed by iteration.
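The iterative Fibonacci example can be sketched as follows (an illustrative Python version, not from the text): a table of sub-instance results is filled bottom-up, so no value is ever recomputed.

```python
def fib(n):
    """Bottom-up Fibonacci: F(1) = F(2) = 1; each entry is built from smaller ones."""
    table = [0, 1, 1]                       # table[i] holds F(i); index 0 is unused
    for i in range(3, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```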

5. Transform-and-Conquer
These methods work as two-stage procedures. First, the problem is modified to be more
amenable to solution.
1. Example: AVL trees guarantee O(log n) search time
6. Backtracking and branch-and-bound: generate and test methods
The method is used for state-space search problems. State-space search problems are
problems, where the problem representation consists of:
a. initial state
b. goal state(s)
c. a set of intermediate states
d. a set of operators that transform one state into another. Each operator has
preconditions and postconditions.
7. Backtracking uses depth-first search usually without cost function. The main
algorithm is as follows:
Store the initial state in a stack
While the stack is not empty, do:
a. Read a node from the stack.
b. While there are available operators do:
i. Apply an operator to generate a child
ii. If the child is a goal state – stop
iii. If it is a new state, push the child into the stack
The utility function tells how close a given state is to the goal state and whether
a given state may be considered a goal state.
If no children can be generated from a given node, then we backtrack – read the next
node from the stack.
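The stack-based generate-and-test loop above can be sketched generically in Python. The problem instance here (growing binary strings until a goal string is found) and all names are purely illustrative assumptions, chosen only to make the loop runnable:

```python
def backtrack_search(initial, goal, operators):
    """Generic stack-based generate-and-test, following the steps above."""
    stack = [initial]
    seen = {initial}
    while stack:
        state = stack.pop()                   # read a node from the stack
        for op in operators:                  # while there are available operators
            child = op(state)                 # apply an operator to generate a child
            if child == goal:                 # if the child is a goal state - stop
                return child
            if child is not None and child not in seen:   # if it is a new state
                seen.add(child)
                stack.append(child)           # push the child into the stack
    return None                               # no children anywhere: search fails

# Illustrative operators: append '0' or '1' to a string of length < 4.
ops = [lambda s: s + '0' if len(s) < 4 else None,
       lambda s: s + '1' if len(s) < 4 else None]
```

When no operator yields a new child, the loop simply pops the next node, which is exactly the backtracking step described above.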
8. Genetic Algorithms


Genetic algorithms (GAs) are used mainly for optimization problems for which the exact
algorithms are of very low efficiency.
GAs search for good solutions to a problem from among a (large) number of possible
solutions. The current set of possible solutions is used to generate a new set of possible
solutions.
They start with an initial set of possible solutions, and at each step they do the following:
a. evaluate the current set of solutions (current generation)
b. choose the best of them to serve as “parents” for the new generation, and
construct the new generation.
Examples:
The knapsack problem solved with GAs
The traveling salesman problem

Fundamentals of Algorithmic problem solving

Fundamentals of Algorithmic problem solving are given as:


1. Understanding the problem 5. Algorithm design techniques
2. Ascertain the capabilities of 6. Methods of specifying an
the computational device algorithm
3. Exact /approximate soln. 7. Proving an algorithms
4. Decide on the appropriate correctness
data structure 8. Analysing an algorithm

1. Understanding the problem: The problem given should be understood completely. Check
if it is similar to some standard problems and if a known algorithm exists; otherwise a new

algorithm has to be devised. Creating an algorithm is an art which may never be fully
automated. An important step in the design is to specify an instance of the problem.
2. Ascertain the capabilities of the computational device: Once a problem is understood,
we need to know the capabilities of the computing device. This can be done by knowing
the type of the architecture, speed & memory availability.
3. Exact/approximate solution: Once an algorithm is devised, it is necessary to show that it
computes the answer for all possible legal inputs. The solution is stated in two forms,
exact solution or approximate solution. Examples of problems where an exact solution
cannot be obtained are: i) finding the square root of a number; ii) solutions of nonlinear
equations.
4. Decide on the appropriate data structure: Some algorithms do not demand any
ingenuity in representing their inputs. Others are in fact predicated on ingenious data
structures. A data type is a well-defined collection of data with a well-defined set of
operations on it. A data structure is an actual implementation of a particular abstract data
type. An elementary data structure is the array: arrays let you access lots of data fast
(good); you can have arrays of any other data type (good); however, you cannot make
arrays bigger if your program decides it needs more space (bad).
5. Algorithm design techniques: Creating an algorithm is an art which may never be fully
automated. By mastering these design strategies, it will become easier for you to devise
new and useful algorithms. Dynamic programming is one such technique. Some of the
techniques are especially useful in fields other than computer science, such as operations
research and electrical engineering. Some important design techniques are linear,
nonlinear and integer programming.
6. Methods of specifying an algorithm: There are mainly two options for specifying an
algorithm: use of natural language or pseudocode & Flowcharts. A Pseudo code is a
mixture of natural language & programming language like constructs. A flowchart is a
method of expressing an algorithm by a collection of connected geometric shapes.
7. Proving an algorithm's correctness: Once an algorithm is devised, it is necessary to show
that it computes the answer for all possible legal inputs. We refer to this process as algorithm
validation. The process of validation assures us that this algorithm will work correctly
independent of issues concerning the programming language it will be written in. A proof of
correctness requires that the solution be stated in two forms. One form is usually as a
program which is annotated by a set of assertions about the input and output variables of
the program; these assertions are often expressed in the predicate calculus. The second
form is called a specification, and this may also be expressed in the predicate calculus. A
proof consists of showing that these two forms are equivalent in that for every given legal
input, they describe the same output. A complete proof of program correctness requires that
each statement of the programming language be precisely defined and all basic operations be
proved correct. All these details may cause the proof to be very much longer than the
program.
8. Analyzing algorithms: As an algorithm is executed, it uses the computer's central
processing unit to perform operations and its memory (both immediate and auxiliary) to
hold the program and data. Analysis of algorithms, or performance analysis, refers to the
task of determining how much computing time and storage an algorithm requires. This is
a challenging area which sometimes requires great mathematical skill. An important
result of this study is that it allows you to make quantitative judgments about the value of
one algorithm over another.


Algorithm Design Techniques

The following is a list of several popular design approaches:

1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the
divide & conquer techniques involve three steps:

o Divide the original problem into a set of subproblems.


o Solve every subproblem individually, recursively.
o Combine the solution of the subproblems (top level) into a solution of the whole original
problem.

2. Greedy Technique: The greedy method is used to solve optimization problems. An
optimization problem is one in which we are given a set of input values which are required
either to be maximized or minimized (known as the objective), subject to some constraints or
conditions.

o A Greedy Algorithm always makes the choice (greedy criterion) that looks best at the moment,
to optimize the given objective.
o The greedy algorithm doesn't always guarantee the optimal solution; however, it generally
produces a solution that is very close in value to the optimal.

3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small problems and then combine them to obtain solutions for bigger problems.

This is particularly helpful when the number of repeating (overlapping) subproblems is
exponentially large. Dynamic Programming is frequently related to Optimization Problems.

4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem which cannot be
bounded has to be divided into at least two new restricted subproblems. Branch and Bound
algorithms are methods for global optimization in non-convex problems. Branch and Bound
algorithms can be slow: in the worst case they require effort that grows exponentially
with problem size, but in some cases we are lucky and the method converges with much less
effort.

5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed


to access a source of independent, unbiased random bits, and it is then allowed to use these
random bits to influence its computation.

6. Backtracking Algorithm: A Backtracking Algorithm tries each possibility until it finds the
right one. It is a depth-first search of the set of possible solutions. During the search, if an

alternative doesn't work, then it backtracks to the choice point, the place which presented
different alternatives, and tries the next alternative.

7. Randomized Algorithm: A randomized algorithm uses a random number at least once during
the computation to make a decision.

Example 1: In Quick Sort, using a random number to choose a pivot.

Example 2: Trying to factor a large number by choosing a random number as possible divisors.
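Example 1 above can be sketched as follows (an illustrative version, not code from the text). Choosing the pivot with `random.choice` is the single random decision; it makes the quadratic worst case unlikely on any fixed input:

```python
import random

def quicksort(a):
    """Quicksort with a randomly chosen pivot; returns a new sorted list."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                      # the random decision
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

Whatever pivots the random choices produce, the output is the same sorted list; only the running time varies.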

Important Problem Types

1. Sorting

2. Searching

3. String processing

4. Graph problems

5. Combinatorial problems

6. Geometric problems

7. Numerical problems

Sorting: A sorting algorithm is an algorithm that puts elements of a list in a certain order. The
most-used orders are numerical order and lexicographical order.
Searching : In computer science, a search algorithm, broadly speaking, is an algorithm for
finding an item with specified properties among a collection of items.
String processing: String searching algorithms are important in all sorts of applications that we
meet every day. For example, the “pbm” graphics format is based on sequences of 1s and 0s. We
could express a task like “find a wide white stripe in the image” as a string searching problem.
Graph problems: Graph algorithms are one of the oldest classes of algorithms.
There are two large classes of graphs:

• directed graphs (digraphs )

• undirected graphs

• Shortest path (min-cost path). Find the path from B to A with the minimum cost (determined
as some simple function of the edges traversed in the path) (Dijkstra’s and Floyd’s algorithms)

• Visit all nodes. Traversal. Depth- and breadth-first traversals

• Transitive closure. Determine all pairs of nodes that can reach each other (Floyd’s algorithm)


Minimum spanning tree. A spanning tree is a set of edges such that every node is reachable
from every other node, and the removal of any edge from the tree eliminates the reachability
property. A minimum spanning tree is the smallest such tree. (Prim’s and Kruskal’s algorithms)

Geometric Problems
Geometric algorithms deal with geometric objects such as points, lines, and polygons. The
ancient Greeks were very much interested in developing procedures for solving a variety of
geometric problems, including problems of constructing simple geometric shapes such as triangles.

Numerical Problems
Numerical problems, another large area of application, are problems that involve mathematical
objects of continuous nature: solving equations and systems of equations, computing definite
integrals, evaluating functions and so on. The majority of such mathematical problems can be
solved only approximately.

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation, as mentioned below −
 A priori analysis − This is theoretical analysis of an algorithm. Efficiency of algorithm is
measured by assuming that all other factors e.g. processor speed, are constant and have
no effect on implementation.
 A posteriori analysis − This is empirical analysis of an algorithm. The selected algorithm
is implemented using a programming language and then executed on a target computer
machine. In this analysis, actual statistics like running time and space required are
collected.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
Algorithm X are the two main factors which decide the efficiency of X.
 Time Factor − The time is measured by counting the number of key operations, such as
comparisons in a sorting algorithm.
 Space Factor − The space is measured by counting the maximum memory space
required by the algorithm.
The complexity of an algorithm f(n) gives the running time and / or storage space required by
the algorithm in terms of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. Space required by an algorithm is equal to the sum of the following
two components −
 A fixed part, that is, space required to store certain data and variables that are
independent of the size of the problem. For example, simple variables & constants used,
program size etc.
 A variable part is a space required by variables, whose size depends on the size of the
problem. For example dynamic memory allocation, recursion stack space etc.


 Algorithm: SUM(A, B)
 Step 1 − START
 Step 2 − C ← A + B + 10
 Step 3 − Stop
Here the algorithm uses a fixed number of simple variables (A, B, C) independent of the
problem size, so its space requirement is constant.
Time Complexity
Time Complexity of an algorithm represents the amount of time required by the algorithm to
run to completion. Time requirements can be defined as a numerical function T(n),
where T(n) can be measured as the number of steps, provided each step consumes constant
time.
The Analysis Framework
1. Measuring an Input’s Size
2. Units for Measuring Running Time
3. Orders of Growth
4. Worst-Case, Best-Case, and Average-Case Efficiencies

Measuring an Input’s Size


Let’s start with the obvious observation that almost all algorithms run longer on larger inputs.
For example, it takes longer to sort larger arrays, multiply larger matrices, and so on. Therefore,
it is logical to investigate an algorithm’s efficiency as a function of some parameter n indicating
the algorithm’s input size.
Units for Measuring Running Time
The next issue concerns units for measuring an algorithm’s running time. Of course, we can
simply use some standard unit of time measurement—a second, or millisecond, and so on—to
measure the running time of a program implementing the algorithm. There are obvious
drawbacks to such an approach, however: dependence on the speed of a particular computer,
dependence on the quality of a program implementing the algorithm and of the compiler used in
generating the machine code, and the difficulty of clocking the actual running time of the
program.
Orders of Growth
An order of growth is a set of functions whose asymptotic growth behavior is considered
equivalent. For example, 2n, 100n and n + 1 belong to the same order of growth, which is
written O(n) in Big-Oh notation and often called linear because every function in the set grows
linearly with n.


Worst-Case, Best-Case, and Average-Case Efficiencies

Asymptotic Analysis
 Asymptotic analysis of an algorithm refers to defining the mathematical bounds/framing
of its run-time performance.
 Asymptotic analysis refers to computing the running time of any operation in
mathematical units of computation. For example, the running time of one operation may
be computed as f(n) = n and that of another as g(n) = n^2. This means the first
operation's running time will increase linearly with the increase in n, while the running
time of the second operation will increase quadratically as n increases. Similarly, the
running times of both operations will be nearly the same if n is significantly small.
An algorithm falls under three types −
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.


 Worst Case − Maximum time required for program execution.

Types of Asymptotic Notations:


Following are commonly used asymptotic notations used in calculating running time
complexity of an algorithm.
 Ο Notation
 Ω Notation
 θ Notation

Big Oh Notation, Ο
Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst-case time complexity, or the longest amount of time an algorithm can
possibly take to complete.

For a function f(n):

Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that
g(n) ≤ c·f(n) for all n > n0 }

Omega Notation, Ω
Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best-case time complexity, or the minimum amount of time an algorithm can
possibly take to complete.

For a function f(n):

Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that
g(n) ≥ c·f(n) for all n > n0 }

Theta Notation, θ
θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. It is represented as follows:

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }


Common Asymptotic Notations

constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
n log n − Ο(n log n)
quadratic − Ο(n^2)
cubic − Ο(n^3)
polynomial − n^Ο(1)
exponential − 2^Ο(n)

Now we will discuss and understand the various notations used for Time Complexity.
1. Big Oh denotes "fewer than or the same as" <expression> iterations.
2. Big Omega denotes "more than or the same as" <expression> iterations.
3. Big Theta denotes "the same as" <expression> iterations.
4. Little Oh denotes "fewer than" <expression> iterations.
5. Little Omega denotes "more than" <expression> iterations.

O(expression) is the set of functions that grow slower than or at the same rate as expression.
Omega(expression) is the set of functions that grow faster than or at the same rate as
expression.
Theta(expression) consists of all the functions that lie in both O(expression) and
Omega(expression).
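These definitions can be checked numerically for particular functions. The helper below is an illustrative sketch, not a proof: it tests whether g(n) ≤ c·f(n) holds over a finite range of n beyond n0, which is the inequality the Big-Oh definition requires:

```python
def witnesses_big_oh(g, f, c, n0, n_max=10_000):
    """True if g(n) <= c*f(n) for all tested n > n0 (numeric evidence that g is in O(f))."""
    return all(g(n) <= c * f(n) for n in range(n0 + 1, n_max))

# 2n + 100 is in O(n): the constants c = 3 and n0 = 100 serve as witnesses.
# n^2 is NOT in O(n): no fixed c works, and c = 100 fails soon after n = 100.
```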

Recursive Algorithms:
 A Recursive function is a function that is defined in terms of itself.
 Similarly, an algorithm is said to be recursive if the same algorithm is invoked in the body.
 An algorithm that calls itself is Direct Recursive.


 Algorithm ‘A’ is said to be Indirect Recursive if it calls another algorithm which in turn calls
‘A’.

When to use recursion:


Recursion can be used for repetitive computations in which each action is stated in terms of
previous result. There are two conditions that must be satisfied by any recursive procedure.

1. Each time a function calls itself it should get nearer to the solution.

2. There must be a decision criterion for stopping the process.

The recursive algorithm for finding the factorial of a number is given below.
Algorithm: factorial-recursion
Input: n, the number whose factorial is to be found.
Output: f, the factorial of n.
Method: if (n = 0)
            f = 1
        else
            f = factorial(n-1) * n
        if end
algorithm ends.
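The factorial algorithm above, written in Python as an illustration; both conditions required of a recursive procedure are visible: each call gets nearer to the solution (n decreases), and n = 0 is the decision criterion for stopping.

```python
def factorial(n):
    """Recursive factorial, mirroring the pseudocode: f(0) = 1, f(n) = f(n-1) * n."""
    if n == 0:                        # decision criterion for stopping
        return 1
    return factorial(n - 1) * n       # each call gets nearer to the solution
```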

Iteration v/s Recursion:


Demerits of recursive algorithms:
 Many programming languages do not support recursion; hence, a recursive mathematical
function has to be implemented using iterative methods.
 A recursive procedure can be called from within or outside itself, and to ensure its proper
functioning it has to save, in some order, the return addresses so that a return to the proper
location results when the return to a calling statement is made.
 Recursive programs need considerably more storage and take more time.

Demerits of iterative methods:


 Mathematical functions such as factorial and Fibonacci series generation can be
implemented more easily using recursion than iteration.
 In iterative techniques, looping of statements is very much necessary.

Recursion is a top down approach to problem solving. It divides the problem into pieces or selects
out one key step, postponing the rest.

SOLVING RECURRENCES :

A recurrence relation is an equation that recursively defines a sequence where the next term is a
function of the previous terms (Expressing Fn as some combination of Fi with i<n).

Example − Fibonacci series: Fn = Fn−1 + Fn−2, Tower of Hanoi: Fn = 2Fn−1 + 1

There are mainly three ways for solving recurrences.


1) Substitution Method: We make a guess for the solution and then use mathematical
induction to prove that the guess is correct or incorrect.

2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time
taken by every level of the tree. Finally, we sum the work done at all levels. To draw the
recurrence tree, we start from the given recurrence and keep drawing till we find a pattern
among levels. The pattern is typically an arithmetic or geometric series.

3) Master Method:
Master Method is a direct way to get the solution. The master method works only for following
type of recurrences or for recurrences that can be transformed to following type.

T(n) = aT(n/b) + f(n) where a >= 1 and b > 1
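For instance, the mergesort-style recurrence T(n) = 2T(n/2) + n (a = 2, b = 2, f(n) = n) solves to Θ(n log n) by the master method. A quick numeric check of this (an illustrative sketch, assuming T(1) = 1 and n a power of two, for which the closed form is T(n) = n·log2(n) + n):

```python
def T(n):
    """Evaluate T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# e.g. T(8) = 8*log2(8) + 8 = 8*3 + 8 = 32, matching the Θ(n log n) bound.
```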

Example: Fibonacci numbers:


1, 1, 2, 3, 5, 8, 13, ….
F1 = 1, F2 = 1
Fn = Fn-1 + Fn-2, n = 3, 4, ….

Time Complexity comparison of Sorting Algorithms


Algorithm        Data Structure   Best          Average       Worst

Quicksort        Array            O(n log(n))   O(n log(n))   O(n^2)
Mergesort        Array            O(n log(n))   O(n log(n))   O(n log(n))
Heapsort         Array            O(n log(n))   O(n log(n))   O(n log(n))
Bubble Sort      Array            O(n)          O(n^2)        O(n^2)
Insertion Sort   Array            O(n)          O(n^2)        O(n^2)
Selection Sort   Array            O(n^2)        O(n^2)        O(n^2)
Bucket Sort      Array            O(n+k)        O(n+k)        O(n^2)
Radix Sort       Array            O(nk)         O(nk)         O(nk)

Space Complexity comparison of Sorting Algorithms


Algorithm        Data Structure   Worst Case Auxiliary Space Complexity

Quicksort        Array            O(n)
Mergesort        Array            O(n)
Heapsort         Array            O(1)
Bubble Sort      Array            O(1)
Insertion Sort   Array            O(1)
Selection Sort   Array            O(1)
Bucket Sort      Array            O(nk)
Radix Sort       Array            O(n+k)

Linear Search-

Linear Search is the simplest searching algorithm.


 It traverses the array sequentially to locate the required element.
 It searches for an element by comparing it with each element of the array one by one.
 So, it is also called as Sequential Search.

Linear Search Algorithm is applied when-


 No information is given about the array.
 The given array is unsorted or the elements are unordered.
 The list of data items is smaller.

Linear Search Algorithm-


Consider-
 There is a linear array ‘a’ of size ‘n’.
 Linear search algorithm is being used to search an element ‘item’ in this linear array.
 If search ends in success, it sets loc to the index of the element otherwise it sets loc to -1.
Then, Linear Search Algorithm is as follows-
Linear_Search (a, n, item, loc)

Begin
    for i = 0 to (n - 1) by 1 do
        if (a[i] = item) then
            set loc = i
            Exit
        endif
    endfor
    set loc = -1
End
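The Linear_Search pseudocode above, written in Python as an illustration (0-based indexing; a return value of -1 plays the role of loc = -1 on failure):

```python
def linear_search(a, item):
    """Traverse a sequentially; return the index of item, or -1 if it is absent."""
    for i in range(len(a)):
        if a[i] == item:      # compare item with each element one by one
            return i          # success: loc = i
    return -1                 # failure: loc = -1
```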
Time Complexity Analysis-

Linear Search time complexity analysis is done below-


Best case-
In the best possible case,
 The element being searched may be found at the first position.
 In this case, the search terminates in success with just one comparison.
 Thus in best case, linear search algorithm takes O(1) operations.

Worst Case-

In the worst possible case,


 The element being searched may be present at the last position or not present in the array
at all.
 In the former case, the search terminates in success with n comparisons.
 In the latter case, the search terminates in failure with n comparisons.
 Thus in the worst case, the linear search algorithm takes O(n) operations.

Thus, we have-

Time Complexity of Linear Search Algorithm is O(n).


Here, n is the number of elements in the linear array.

Linear Search Efficiency-


Linear Search is less efficient when compared with other algorithms like Binary Search & Hash
tables.
 The other algorithms allow significantly faster searching.

Linear Search Example-


Consider-
 We are given the following linear array.
 Element 15 has to be searched in it using Linear Search Algorithm.


Now,
 Linear Search algorithm compares element 15 with all the elements of the array one by
one.
 It continues searching until either the element 15 is found or all the elements are
searched.

What is Tower of Hanoi?

A mathematical puzzle consisting of three towers (pegs) and more than one ring (disk) is
known as the Tower of Hanoi.

The rings are of different sizes and are stacked in ascending order, i.e., the
smaller one sits over the larger one. In some of the puzzles the number of rings
may increase, but the count of towers remains the same.

What are the rules to be followed by Tower of Hanoi?

The Tower of Hanoi puzzle is solved by moving all the disks to another tower by
not violating the sequence of the arrangements.

The rules to be followed by the Tower of Hanoi are -


 Only one disk can be moved among the towers at any given time.
 Only the "top" disk can be removed.
 No large disk can sit over a small disk.


In the above 7 steps, all the disks from peg A will be transferred to C, given the conditions:

1. Only one disk will be shifted at a time.

2. A smaller disk can be placed only on a larger disk.

Let T (n) be the total time taken to move n disks from peg A to peg C

1. Moving n-1 disks from the first peg to the second peg. This can be done in T (n-
1) steps.


2. Moving the largest disk from the first peg to the third peg will require one step.

3. Recursively moving n-1 disks from the second peg to the third peg will require
again T (n-1) step.

So, the total time taken is T(n) = T(n-1) + 1 + T(n-1) = 2T(n-1) + 1, which solves to T(n) = 2^n − 1.
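The three steps above translate directly into a recursive Python solver (an illustrative sketch; the peg names and the `moves` list are choices made here, not from the text). Counting the recorded moves confirms the recurrence T(n) = 2T(n−1) + 1, i.e. 2^n − 1 moves:

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target, recording each move as a (from, to) pair."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # step 1: n-1 disks to the spare peg
    moves.append((source, target))               # step 2: largest disk, one step
    hanoi(n - 1, spare, target, source, moves)   # step 3: n-1 disks onto the largest

moves = []
hanoi(3, 'A', 'C', 'B', moves)   # for n = 3: 2**3 - 1 = 7 moves
```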
