ADA - Unit-I


18AIC206J
Analysis and Design of Algorithms
Unit-I

1. Introduction

Analysis and Design of Algorithms


• ADA helps to design algorithms for solving different types of problems in Computer
Science.
• It also helps to design and analyze the logic of how a program will work before
the actual code is developed.

Purpose of Analysis and Design of Algorithms

• This is a first course in data structures and algorithm design.

• We learn good principles of algorithm design, how to analyze algorithms, and how
to estimate their worst-case and average-case behaviour.
Analysis and Design of Algorithms-Introduction

Design algorithm
• An algorithm is a plan, a logical step-by-step process for solving a problem.
• Algorithms are normally written as a flowchart or in pseudocode. The key to any
problem-solving task is to guide your thought process.

Analysis of algorithms
• The analysis of algorithms is the process of finding the computational complexity of
algorithms, the amount of time, storage, or other resources needed to execute them.
Analysis and Design of Algorithms-Introduction

• Example 1 :
• Take 2 arrays A and B of 10^10 entries each. Now write an
algorithm to find common entries.

Algorithm:
Assume 2 arrays A and B
k=1
for i=1 to n
  for j=1 to n
    if A[i]=B[j] then C[k] = A[i]; k=k+1
Output the array C

[Figure: arrays A and B shown side by side with sample entries
A: 1234, 234, 3456, 1451, …, 98765 and B: 2347, 8754, 2154, 1451, …, 67859,
each array holding 10^10 entries.]
Analysis and Design of Algorithms-Introduction

• Example 1 :
• Take 2 arrays A and B of 10^10 entries each. Now write an
algorithm to find common entries.
Algorithm:
Assume 2 arrays A and B
k=1
for i=1 to n
  for j=1 to n
    if A[i]=B[j] then C[k] = A[i]; k=k+1
Output the array C

• Steps taken by the above algorithm: n², that is (10^10)² = 10^20.

• How many seconds do 10^20 steps take?
• At 10^10 steps per second, 10^20 steps take 10^10 seconds ≈ 317 years.
• On a modern powerful computer it still takes about 3 years.
Analysis and Design of Algorithms-Introduction

• Example 1 (continued):
• Take 2 arrays A and B of 10^10 entries each. Now write an
algorithm to find common entries.
If binary search is used here (with B sorted), the algorithm becomes:
for i=1 to n
  look for A[i] in B by doing binary search and
  if found store in C
Output the array C

Analysis:
• How many steps does it take? n log₂ n
• n = 10^10
• n log₂ n = 10^10 (log₂ 10^10) ≈ 10^10 × 33.2 ≈ 3.3 × 10^11 steps
• At 10^10 steps per second, this takes only about 33 seconds.
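The binary-search idea above can be sketched in Python; `common_entries` is an illustrative name, and B is sorted first because binary search requires sorted input (an assumption the slide leaves implicit):

```python
from bisect import bisect_left

def common_entries(a, b):
    """Find entries of a that also appear in b using binary search.

    b is sorted once (O(n log n)); each of the n lookups then costs
    O(log n), so the total is O(n log n) instead of the O(n^2)
    nested-loop comparison."""
    b_sorted = sorted(b)
    c = []
    for x in a:
        i = bisect_left(b_sorted, x)        # binary search for x in b
        if i < len(b_sorted) and b_sorted[i] == x:
            c.append(x)
    return c
```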
2. Fundamentals of Algorithms
What is an algorithm?
It is a finite set of instructions or a sequence of computational
steps that, if followed, accomplishes a task.

Characteristics of an algorithm:
• Input: 0 or more inputs
• Output: 1 or more outputs
• Definiteness: each instruction is clear and unambiguous
• Finiteness: terminates after a finite number of steps
• Effectiveness: each step is basic enough to be carried out by pen and paper
Fundamentals of Algorithms
• The skills required to effectively design and analyze algorithms are
tangled with the skills required to effectively describe algorithms.
• A complete description of any algorithm has four components:
What: A precise specification of the problem that the algorithm solves.
How: A precise description of the algorithm itself.
Why: A proof that the algorithm solves the problem it is supposed to solve.
How fast: An analysis of the running time of the algorithm.
• It is not necessary (or even advisable) to develop these four components in
this particular order.
• Problem specifications, algorithm descriptions, correctness proofs, and
time analyses usually evolve simultaneously, with the development of
each component informing the development of the others.
• Specifying the Problem
• Describing Algorithms
Fundamentals of Algorithms
Specifying the Problem
• To describe an algorithm, the problem is described that the algorithm is
supposed to solve.
• Algorithmic problems are often presented using standard English, in terms
of real-world objects.
• Algorithm designers restate these problems in terms of formal,
abstract, mathematical objects—numbers, arrays, lists, graphs, trees,
and so on.
• If the problem statement carries any hidden assumptions, state those
assumptions explicitly.
• The specification may be refined as we develop the algorithm.
• The specification should include just enough detail that someone else could
use our algorithm as a black box, without knowing how or why the
algorithm actually works.
Fundamentals of Algorithms
Specifying the Problem (Contd..)
• In particular, the type and meaning of each input parameter, and exactly how the
eventual output depends on the input parameters are described.
• On the other hand, specification should deliberately hide any details that are not
necessary to use the algorithm as a black box.
• Example: Given two non-negative integers x and y, each represented as an array of
digits, compute the product x · y, also represented as an array of digits. To someone
using these algorithms, the choice of algorithm is completely irrelevant.
Fundamentals of Algorithms
Describing Algorithms

• The clearest way to present an algorithm is using a combination of pseudocode and
structured English.
• Pseudocode uses the structure of formal programming languages and mathematics
to break algorithms into primitive steps; the primitive steps themselves can be
written using mathematical notation, pure English, or an appropriate mixture of the
two, whatever is clearest.
• Well-written pseudocode reveals the internal structure of the algorithm but
hides irrelevant implementation details, making the algorithm easier to
understand, analyze, debug, and implement.
Fundamentals of Algorithms
Analysing Algorithms:
• We have to prove that the algorithm actually does what it’s supposed to do, and that
it does so efficiently.

Correctness
• In some application settings, it is acceptable for programs to behave correctly most of
the time, on all “reasonable” inputs.
• But we require algorithms that are always correct, for all possible inputs.
• We must prove that our algorithms are correct; trusting our instincts, or trying a few
test cases, isn’t good enough.
• In particular, correctness proofs usually involve induction.

Running Time
• The most common way of ranking different algorithms for the same problem is by
how quickly they run.
• Preferably, we want the fastest possible algorithm for any particular problem.
Pseudocode Conventions
1. Comments //

2. Blocks are indicated with matching braces {and }.

3. An identifier begins with a letter. The data types of variables are not explicitly
declared.

4. Assignment  ‹variable› := ‹expression›;

5. Two Boolean values true and false,

 Logical operators and, or, and not,

 Relational operators <, ≤, ≥, and >

Pseudocode Conventions
6. The following looping statements are used: while, for
and repeat-until
Pseudocode Conventions
7. A conditional statement has the following forms:
   if ‹condition› then ‹statement›
   if ‹condition› then ‹statement1› else ‹statement2›
Pseudocode Conventions
8. Input and output are done using the instructions read and write.
9. There is only one type of procedure: Algorithm. Algorithm
contains:
 Heading
 Body

10. Break and Return
Pseudocode Conventions
Example of pseudocode:

1. Algorithm Max(A, n)
2. // A is an array of size n.
3. {
4.   Result := A[1];
5.   for i := 2 to n do
6.     if A[i] > Result then
7.       Result := A[i];
8.   return Result;
9. }

Trace for A = [10, 87, 45, 66, 99] (A[1]..A[5], n = 5):
  Result = A[1] = 10
  i = 2: A[2] = 87, 87 > 10      → Result = 87
  i = 3: A[3] = 45, 45 > 87? no  → Result = 87
  i = 4: A[4] = 66, 66 > 87? no  → Result = 87
  i = 5: A[5] = 99, 99 > 87      → Result = 99
  return 99
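The pseudocode above translates directly to Python; `max_of` is an illustrative name for this sketch:

```python
def max_of(a):
    """Python version of Algorithm Max(A, n): scan once, keeping the
    largest element seen so far."""
    result = a[0]               # Result := A[1]
    for x in a[1:]:             # for i := 2 to n do
        if x > result:          # if A[i] > Result then
            result = x          #     Result := A[i]
    return result
```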
3. Correctness of Algorithm
A proof of correctness requires that the solution be stated in two forms.
• The first form is usually a program which is annotated by a set of assertions about
the input and output variables of the program. These assertions are often
expressed in the predicate calculus.
• The second form is called a specification, and this may also be expressed in the
predicate calculus.

A proof consists of showing that these two forms are equivalent in that for
every given legal input, they describe the same output.

A complete proof of program correctness requires that each statement of the
programming language be precisely defined and all basic operations be
proved correct.

A proof of correctness is much more valuable than a thousand tests, since it
guarantees that the program will work correctly for all possible inputs.
3. Correctness of Algorithm
Methods of proving correctness
How to prove that an algorithm is correct?

Proof by:
 Counterexample (indirect proof )
 Induction (direct proof )
 Loop Invariant
Other approaches:
 proof by cases/enumeration
 proof by chain of iffs
 proof by contradiction
 proof by contrapositive

For any algorithm, we must prove that it always returns the desired output for all
legal instances of the problem. For sorting, this means even if the input is already
sorted or it contains repeated elements.
Proof by Mathematical Induction
Mathematical induction (MI) is an essential tool for proving the statement
that proves an algorithm's correctness. The general idea of MI is to prove that a
statement is true for every natural number n.

It contains 3 steps:
1. Induction Hypothesis: Define the rule we want to prove for every n; let's call
the rule f(n).

2. Induction Base: Prove the rule is valid for an initial value, or rather a
starting point - this is often proven by solving the Induction
Hypothesis f(n) for n=1 or whatever initial value is appropriate.

3. Induction Step: Prove that if f(n) is true, then f(n+1) is also true; that is,
knowing the rule holds for n lets us step forward to n+1.
Proof by Mathematical Induction
Example
If we define S(n) as the sum of the first n natural numbers (for
example S(3) = 3+2+1), prove that the following formula can be applied
to any n:

S(n) = n(n+1)/2

Let's trace our steps:

1. Induction Hypothesis: S(n) = n(n+1)/2

2. Induction Base: In this step we have to prove that S(1) = 1(1+1)/2 = 1,
i.e. the formula holds for n=1.

3. Induction Step: In this step we need to prove that if the formula applies
to S(n), it also applies to S(n+1):

S(n+1) = S(n) + (n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2

This is known as an implication (a => b), which just means that we have to
prove b is correct provided we know a is correct.

Note that S(n+1) = S(n) + (n+1) just means we are recursively calculating the
sum.

Example with literals:
S(3) = S(2) + 3 = S(1) + 2 + 3 = 1 + 2 + 3 = 6 = 3·4/2

Hence proved.
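The induction argument can be sanity-checked numerically; this sketch compares the direct sum against the closed form for many values of n (checking cases does not replace the proof):

```python
def S(n):
    """Sum of the first n natural numbers, computed directly."""
    return sum(range(1, n + 1))

def closed_form(n):
    """The closed form that the induction proof establishes."""
    return n * (n + 1) // 2

# Base case, then many instances of the induction step S(n+1) = S(n) + (n+1).
assert S(1) == closed_form(1)
for n in range(1, 1000):
    assert S(n + 1) == S(n) + (n + 1) == closed_form(n + 1)
```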
4. Time complexity analysis
• The time complexity of an algorithm is the amount of computer time it needs to run
to completion.
T(P) = compile time + execution time
T(P) ≈ tp (execution time)

• The compiled program will be run several times without recompilation, so only the run
time of the program is considered. This run time is denoted by tp (instance characteristics).

• If the compiler characteristics and the time taken for addition, subtraction, division,
multiplication etc. are known, then we can find tp as:

tp(n) = ca·ADD(n) + cs·SUB(n) + cm·MUL(n) + cd·DIV(n) + …

• n denotes the instance characteristics
• ca, cs, cm, cd denote the time taken to do the respective operations
• ADD, SUB, MUL, DIV are functions giving the number of each operation performed
on an instance with characteristics n
4. Time complexity analysis

Two methods to find the time complexity for an algorithm:


1. Count variable method
2. Table method

Step count:
 For algorithm heading → 0
 For braces → 0
 For expressions → 1
 For any looping statements → number of times the loop is repeating
4. Time complexity analysis
Table method

• The table contains s/e and frequency.


• The s/e of a statement is the amount by which the count changes
as a result of the execution of that statement.
• Frequency is defined as the total number of times each
statement is executed.
• Combining these two, the step count for the entire algorithm is
obtained.
4. Time complexity analysis

Example
Statements s/e Frequency Total Steps
Algorithm abc(a,b,c) 0 - 0
{ 0 - 0
return a+b+b*c+(a+b-c)/(a+b)+4.0; 1 1 1
} 0 - 0

Total =1
4. Time complexity analysis

Example
Statements s/e Frequency Total Steps
Sum(a,b) 0 - 0
{ 0 - 0
return a+b; 1 1 1
} 0 - 0

Total =1
4. Time complexity analysis

Example
Statements              s/e   Frequency   Total Steps
A()                     0     -           0
{                       0     -           0
for (int i=1 to n)      1     n+1         n+1
  print(“ AIML”);       1     n           n
}                       0     -           0

Total = 2n+1
4. Time complexity analysis

Example
Statements s/e Frequency Total Steps
Algorithm Sum(a, n) 0 - 0
{ 0 - 0
s:=0.0; 1 1 1
for i:=1 to n do 1 n+1 n+1
s := s + a[i]; 1 n n
return s; 1 1 1
} 0 - 0

Total =2n+3
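The table above can be mirrored by instrumenting the algorithm with a count variable (the first of the two methods); `sum_with_step_count` is an illustrative sketch that charges steps exactly as the table does:

```python
def sum_with_step_count(a):
    """Sum of array a, instrumented with the 'count variable' method.

    Each executable step from the table contributes to count:
    s := 0.0 (1 step), the for-loop control (n+1 steps), the loop
    body (n steps), and return (1 step), giving 2n + 3 in total."""
    count = 0
    s = 0.0
    count += 1                 # s := 0.0
    for x in a:
        count += 1             # loop-control step for this iteration
        s += x
        count += 1             # s := s + a[i]
    count += 1                 # final loop test that exits the loop
    count += 1                 # return s
    return s, count
```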
Insertion sort
• Session Learning Outcome - SLO – To Understand
the concept of Insertion sort, the time complexity of
an algorithm and Algorithm design paradigms.
Insertion sort
• Insertion Sort sorts an array by shifting elements one
by one and inserting each element at its right position.
• The insertion sort method sorts a list of elements by inserting
each successive element into the previously sorted sub-list.
• It performs n-1 passes.
• For pass 1 through n-1, insertion sort ensures that the elements
in positions 0 through p are in sorted order.
• In pass p, we move the p-th element to the left until its correct
place is found among the first p elements.
Insertion sort
• ALGORITHM DESIGN AND ANALYSIS
Efficiency of the algorithm’s design is totally dependent on the
understanding of the problem.
The important parameters to be considered in understanding
the problem are:
• Input
• Output
• Order of Instructions
• Check for repetition of Instructions
• Check for the decisions based on conditions
Insertion sort Procedure
• We start by making the second element of the given array, i.e. element
at index 1, the key.
• We compare the key element with the element(s) before it, i.e., element
at index 0:
• If the key element is less than the first element, we insert
the key element before the first element.
• If the key element is greater than the first element, then we insert it
after the first element.
• Then, we make the third element of the array the key and compare it
with the elements to its left, inserting it at the right position.
• And we go on repeating this, until the array is sorted.
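The procedure above can be sketched in Python; `insertion_sort` mirrors the shifting-and-inserting description:

```python
def insertion_sort(a):
    """Sort list a in place by shifting larger elements right until
    the key finds its correct position in the sorted prefix."""
    for p in range(1, len(a)):      # passes 1 .. n-1; a[p] is the key
        key = a[p]
        i = p - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]         # shift larger element one slot right
            i -= 1
        a[i + 1] = key              # insert key at its correct place
    return a
```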
Bubble sort
• Bubble sort is a simple sorting algorithm.
• This sorting algorithm is a comparison-based algorithm in
which each pair of adjacent elements is compared and the
elements are swapped if they are not in order.
• This algorithm is not suitable for large data sets, as its
average and worst case complexity are O(n²), where n is
the number of items.
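A minimal Python sketch of the adjacent-compare-and-swap idea, with the common early-exit refinement when a pass makes no swaps:

```python
def bubble_sort(a):
    """Sort list a by repeatedly comparing adjacent pairs and swapping
    them when out of order; O(n^2) comparisons in the worst case."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # largest element bubbles to the end
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                      # pass made no swaps: already sorted
            break
    return a
```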
• Time Complexity analysis for the following
concepts.
1. Insertion Sort
2. Bubble Sort
3. Merge Sort
4. Linear search
5. Binary search
Asymptotic Analysis

Session Learning Outcome-SLO


• Estimate algorithmic complexity
• Learn approximation tool
• Specify the behaviour of algorithm
Asymptotic Analysis
• Analysis of a given algorithm with larger values of input data
• Theory of approximation.
• Asymptote of a curve is a line that closely approximates a curve
but does not touch the curve at any point of time.
• Specify the behaviour of the algorithm when the input size
increases.
Asymptotic Analysis

• The order of growth of the running time of an algorithm
gives a simple characterization of the algorithm's efficiency.
• Asymptotic order is concerned with how the running time of
an algorithm increases with the size of the input, as the input
grows from small values to large values.

1. Big-Oh notation (O)


2. Big-Omega notation (Ω)
3. Theta notation (θ)
4. Little-oh notation (o)
5. Little-omega notation (ω)
Asymptotic Analysis
• Big-oh notation is used to define the worst-case running time of an
algorithm and concerned with large values of n.
• Definition: A function t(n) is said to be in O(g(n)), denoted as t(n) ϵ
O(g(n)), if t(n) is bounded above by some constant multiple of g(n)
for all large n. i.e., if there exist some positive constant c and some
non-negative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0
• O(g(n)): Class of functions t(n) that grow no faster than g(n).
• Big-oh puts asymptotic upper bound on a function.
Big-Oh Notation (O)
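The pair (c, n0) in the definition can be sanity-checked numerically; this is a sketch (not from the slides), and `is_big_oh_witness` is an illustrative helper name:

```python
def is_big_oh_witness(t, g, c, n0, upto=10_000):
    """Numerically check t(n) <= c*g(n) for all n0 <= n <= upto.

    This does not prove the bound (a proof must cover all n >= n0),
    but it is a quick sanity check for a chosen pair (c, n0)."""
    return all(t(n) <= c * g(n) for n in range(n0, upto + 1))

# 3n + 2 is in O(n): take c = 4 and n0 = 2, since 3n + 2 <= 4n for n >= 2.
assert is_big_oh_witness(lambda n: 3 * n + 2, lambda n: n, c=4, n0=2)
```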
Big-Omega notation (Ω)
• This notation is used to describe the best case running
time of algorithms and concerned with large values of
n.
• Definition: A function t(n) is said to be in Ω(g(n)),
denoted as t(n) ϵ Ω(g(n)), if t(n) is bounded below by
some positive constant multiple of g(n) for all large n.
i.e., there exist some positive constant c and some
non-negative integer n0. Such that
t(n) ≥cg(n) for all n ≥ n0
• It represents the lower bound of the resources required
to solve a problem.
Big-Omega notation (Ω)
Theta notation (θ)
• Definition: A function t(n) is said to be in θ(g(n)),
denoted t(n) ϵ θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of
g(n) for all large n. i.e., if there exist some positive
constant c1 and c2 and some non-negative integer
n0 such that
c2g(n) ≤ t(n) ≤ c1g(n) for all n > n0
θ(g(n)) = O(g(n)) ∩ Ω(g(n))
Theta notation (θ)
Little-oh notation (o)
• This notation gives a strict (non-tight) upper bound on the growth of a
function.
• Definition: A function t(n) is said to be in o(g(n)), denoted
t(n) ϵ o(g(n)), if for every positive constant c there exists
some non-negative integer n0 such that
t(n) < cg(n) for all n ≥ n0
Little-omega notation (ω)
• This notation gives a strict (non-tight) lower bound on the growth of a
function.
• Definition: t(n) = ω(g(n)) iff for every positive constant c there exists
some non-negative integer n0 such that t(n) > cg(n) for all n ≥ n0;
equivalently, lim n→∞ t(n)/g(n) = ∞.
• Find the upper bound, lower bound and tight bound
range for the following functions
• 2n + 5
• 3n + 2
• 3n + 3
• n² log n
• 10n² + 4n + 2
• 20n² + 80n + 10
• n!
• log n!
ALGORITHM DESIGN PARADIGMS

• Specifies the pattern to write or design an algorithm.

• Various algorithm paradigms are:


• Divide and Conquer
• Dynamic programming
• Backtracking
• Greedy Approach
• Branch and Bound

• Selection of the paradigms depends upon the problem to be addressed.


DIVIDE AND CONQUER

• The Divide and Conquer Paradigm is an algorithm design paradigm


which uses this simple process:
• It Divides the problem into smaller sub-parts until these sub-parts
become simple enough to be solved, and then the sub parts are solved
recursively, and then the solutions to these sub-parts can be combined to
give a solution to the original problem.
• Examples :
• Binary search
• Merge sort
• Quick sort
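Merge sort, one of the listed examples, illustrates the divide-conquer-combine pattern; a minimal Python sketch:

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, sort each half recursively, merge."""
    if len(a) <= 1:                      # base case: trivially sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # divide + conquer the left half
    right = merge_sort(a[mid:])          # divide + conquer the right half
    merged, i, j = [], 0, 0              # combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```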
DYNAMIC PROGRAMMING

• Dynamic Programming is an algorithmic paradigm that solves a given


complex problem by breaking it into sub-problems and stores the
results of sub-problems to avoid computing the same results again.
• It is an optimization technique
• Examples :
• All pairs of shortest path
• Fibonacci series
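The Fibonacci example can be sketched with bottom-up tabulation: storing each sub-problem's result means every value is computed exactly once, instead of repeatedly as in naive recursion:

```python
def fib(n):
    """Fibonacci via dynamic programming (bottom-up tabulation)."""
    if n < 2:
        return n
    table = [0, 1]                      # table[i] holds F(i)
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])   # reuse stored results
    return table[n]
```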
BACKTRACKING

• Backtracking is an algorithmic paradigm aimed at improving the time


complexity of the exhaustive search technique if possible.
• Backtracking does not generate all possible solutions first and checks
later.
• It tries to generate a solution and as soon as even one constraint fails,
the solution is rejected and the next solution is tried.
• A backtracking algorithm tries to construct a solution incrementally,
one small piece at a time. It's a systematic way of trying out different
sequences of decisions until we find one that works.
• Examples :
• 8 Queens problem
• Sum of subsets
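Sum of Subsets, one of the listed examples, can be sketched as a backtracking search; `subset_sums` is an illustrative name, and non-negative inputs are assumed so that exceeding the target is a valid pruning test:

```python
def subset_sums(nums, target):
    """Backtracking for Sum of Subsets: extend a partial solution one
    element at a time and abandon it as soon as the running sum would
    exceed the target (the failed constraint)."""
    solutions = []

    def backtrack(start, chosen, total):
        if total == target:
            solutions.append(list(chosen))   # record a complete solution
            return
        for i in range(start, len(nums)):
            if total + nums[i] > target:     # constraint fails: prune
                continue
            chosen.append(nums[i])           # try this piece
            backtrack(i + 1, chosen, total + nums[i])
            chosen.pop()                     # undo and try the next choice

    backtrack(0, [], 0)
    return solutions
```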
GREEDY APPROACH

• Greedy algorithms build a solution part by part, choosing the next


part in such a way, that it gives an immediate benefit.
• This approach is mainly used to solve optimization problems,
such as finding the shortest path between two vertices using Dijkstra's
algorithm.
• Examples
• Prim's algorithm
• Kruskal's algorithm
• Travelling salesman problem
• Graph / map coloring
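A classic greedy example is the fractional knapsack: always take the item with the best value-to-weight ratio (the immediate benefit), taking a fraction of the last item if it does not fit whole. A minimal Python sketch (`fractional_knapsack` is an illustrative name):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack.

    items: list of (value, weight) pairs; capacity: knapsack weight limit.
    Sort by value/weight ratio and take items greedily."""
    best_first = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in best_first:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total
```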
BRANCH AND BOUND

• Branch and bound is an algorithm design paradigm which is
generally used for solving combinatorial optimization
problems.
• These problems are typically exponential in terms of time
complexity and may require exploring all possible permutations
in the worst case.
• Examples (the 0/1 Knapsack problem, also solvable by other paradigms):
  Greedy algorithm for Fractional Knapsack
  DP solution for 0/1 Knapsack
  Backtracking solution for 0/1 Knapsack
Recurrence Relations
• A recurrence relation is an equation that defines a
sequence based on a rule that gives the next term as a
function of the previous term(s).
• So in algorithm analysis, a recurrence relation is a function
relating the amount of work needed to solve a problem
of size n to that needed to solve smaller problems (this is
closely related to its meaning in math).
• A recursive Fibonacci function, for instance, does three operations
(comparison, comparison, addition) and also calls itself recursively.
Recurrence Relations
Mathematical Analysis of Recursive Algorithms
• General Plan for Analyzing the Time Efficiency of Recursive
Algorithms

 Decide on a parameter indicating an input's size n.
 Identify the algorithm's basic operation.
 Check whether the number of times the basic operation is executed
can vary on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated separately.
 Set up a recurrence relation, with an appropriate initial condition,
for the number of times the basic operation is executed.
 Solve the recurrence.
Recurrence Relations
Recurrence Examples

s(n) = 0 if n = 0;  s(n) = c + s(n-1) if n > 0
s(n) = 0 if n = 0;  s(n) = n + s(n-1) if n > 0
T(n) = c if n = 1;  T(n) = 2T(n/2) + c if n > 1
T(n) = c if n = 1;  T(n) = aT(n/b) + cn if n > 1
Recurrence Relations
Recurrence Examples
• T(n) = T(n-1) + n → Θ(n²)
  Recursive algorithm that loops through the input to eliminate one item.

• T(n) = T(n/2) + c → Θ(log n)
  Recursive algorithm that halves the input in one step.

• T(n) = T(n/2) + n → Θ(n)
  Recursive algorithm that halves the input but must examine every
  item in the input.

• T(n) = 2T(n/2) + 1 → Θ(n)
  Recursive algorithm that splits the input into 2 halves and does a
  constant amount of other work.
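The last recurrence, T(n) = 2T(n/2) + 1, can be verified numerically for powers of two, where (assuming base case T(1) = 1) it has the exact closed form 2n - 1, confirming Θ(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2T(n/2) + 1 with assumed base case T(1) = 1, n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + 1

# For n = 2^k the closed form is T(n) = 2n - 1, which is Theta(n).
for k in range(11):
    n = 2 ** k
    assert T(n) == 2 * n - 1
```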
Recurrence Relations
Solution of Recurrence Relations

There are four methods for solving Recurrence:


• Substitution Method
• Iteration Method
• Recursion Tree Method
• Master Method
Recursion tree

• Recursion tree method is one of the methods to
solve recurrences.
• A recursion tree is a tree where each node represents
the cost of a certain recursive sub-problem.
• Sum up the numbers in each node to get the cost of
the entire algorithm.
Steps to Solve Recurrence Relations Using
Recursion Tree Method

Step- 1:
• Draw a recursion tree based on the given recurrence relation.

Step- 2:
Determine-
• Cost of each level
• Total number of levels in the recursion tree
• Number of nodes in the last level
• Cost of the last level

Step- 3:
• Add cost of all the levels of the recursion tree and simplify the expression so
obtained in terms of asymptotic notation.
Solve the Recurrence Relation using Substitution Method

1. T(n)=T(n-1)+n with initial condition T(0)=1.

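For exercise 1, repeated substitution unrolls to T(n) = 1 + (1 + 2 + … + n) = 1 + n(n+1)/2, i.e. Θ(n²); a sketch that checks this numerically:

```python
def T(n):
    """T(n) = T(n-1) + n with T(0) = 1, expanded iteratively."""
    t = 1            # T(0)
    for k in range(1, n + 1):
        t += k       # substitution step: T(k) = T(k-1) + k
    return t

# Repeated substitution gives T(n) = 1 + n(n+1)/2, which is Theta(n^2).
for n in range(50):
    assert T(n) == 1 + n * (n + 1) // 2
```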
Solve the Recurrence Relation using Recursion Tree and prove the
guess using substitution method

1. T(n) = 1 if n = 1;  T(n) = 2T(n/2) + n if n > 1

2. T(n) = 1 if n = 1;  T(n) = 3T(n/4) + cn² if n > 1

3. T(n) = 1 if n = 1;  T(n) = 3T(n/4) + cn if n > 1

4. T(n) = 1 if n = 1;  T(n) = T(n/2) + 1 if n > 1

5. T(n) = 1 if n = 0;  T(n) = 2T(n-1) + 1 if n > 0

6. T(n) = 1 if n = 1;  T(n) = T(n/2) + n if n > 1

7. T(n) = 1 if n = 1;  T(n) = 3T(n/2) + n if n > 1
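One of the recurrences above, T(n) = 2T(n/2) + n with T(1) = 1, solves to n log₂ n + n for powers of two: the recursion tree has log₂ n + 1 levels, each level costing n in total. A numeric check of that guess:

```python
from math import log2

def T(n):
    """T(1) = 1, T(n) = 2T(n/2) + n, evaluated for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Recursion-tree guess: T(n) = n*log2(n) + n, i.e. Theta(n log n).
for k in range(11):
    n = 2 ** k
    assert T(n) == n * int(log2(n)) + n
```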
