ADA m1

The document provides an overview of algorithms, including their definition, importance, and the relationship between algorithms and data structures. It emphasizes the significance of algorithm design and analysis in computer science, highlighting the need for efficiency in terms of time and space. Additionally, it discusses various algorithm design techniques, correctness, and performance analysis through concepts like time complexity and asymptotic notations.

ANALYSIS AND DESIGN OF ALGORITHMS

Design
• To plan and make decisions about something that is being built or created; to create the plans, drawings, etc., that show how it will be made.

Analysis
• The process of studying or examining something in detail in order to understand or explain it.
What are algorithms?
• Informally, an algorithm is any well-defined computational procedure that takes
some value, or set of values, as input and produces some value, or set of values,
as output.
• An algorithm is thus a sequence of computational steps that transform the input
into the output.
• We can also view an algorithm as a tool for solving a well-specified
computational problem.
(A computational problem is a problem that can be solved step-by-step
with a computer. These problems usually have a well-defined input,
constraints, and conditions that the output must satisfy.)

• A sequence of unambiguous instructions for solving a problem, i.e., for obtaining the required output for any legitimate input in a finite amount of time.
Why is the study of algorithms worthwhile?

• Published algorithms serve as a “cookbook”, but you may someday encounter a problem for which you cannot readily find a published algorithm.

• This course will teach you techniques of algorithm design and analysis
so that you can develop algorithms on your own, show that they give
the correct answer, and understand their efficiency.
Why study algorithms?
• Theoretical importance

– the core of computer science

• Practical importance

– A practitioner’s toolkit of known algorithms

– Framework for designing and analyzing algorithms for new problems

Data structures
• A data structure is a way to store and organize data in order to
facilitate access and modifications.
• No single data structure works well for all purposes, and so it is
important to know the strengths and limitations of several of them.

Algorithms + Data Structures = Programs


Algorithms as a technology

• Suppose computers were infinitely fast and computer memory was free.
• Would you have any reason to study algorithms?
• The answer is yes, if for no other reason than that you would still like to
demonstrate that your solution method terminates and does so with
the correct answer.
• Of course, computers may be fast, but they are not infinitely fast.
• Memory may be inexpensive, but it is not free.
• Computing time is therefore a bounded resource, and so is space in
memory.
• You should use these resources wisely, and algorithms that are efficient
in terms of time or space will help you do so.
• Furthermore, with the ever-increasing capacities of computers, we
use them to solve larger problems than ever before.

• In a comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiency between algorithms become particularly prominent.

• Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled developers from the novices.

• With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.
Analyzing algorithms

• Analyzing an algorithm has come to mean predicting the resources that the
algorithm requires.
• Occasionally, resources such as memory, communication bandwidth, or
computer hardware are of primary concern, but most often it is
computational time that we want to measure.
• Generally, by analyzing several candidate algorithms for a problem, we can
identify a most efficient one.
• Algorithm analysis provides a means to distinguish between what is
practically possible and what is practically impossible.
• Efficient algorithms lead to efficient programs that make better use of
computer resources.
• Efficient programs sell better.

• Programmers who write efficient programs are preferred.


Insertion sort and Merge sort
Insertion sort – c1·n^2
Merge sort – c2·n·log2 n
(c1 < c2)

Faster computer A (10 billion = 10^10 instructions per second) running insertion sort
vs. slower computer B (10 million = 10^7 instructions per second) running merge sort.
A is a thousand times faster than B.

Suppose that the world’s craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers.
Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, with the resulting code taking 50·n·log2 n instructions.

To sort 10 million numbers, computer A takes
2·(10^7)^2 instructions / 10^10 instructions per second = 20,000 seconds (more than 5.5 hours),
while computer B takes
50·10^7·log2(10^7) instructions / 10^7 instructions per second ≈ 1163 seconds (less than 20 minutes).
The future belongs to the computer scientist/engineer who has
– Knowledge: an up-to-date grasp of fundamental problems and solutions
– Ability: principles and techniques that can be adapted to solve new problems

What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a
problem, i.e., for obtaining a required output for any legitimate input in a
finite amount of time.
1. Input
2. Output
3. Definiteness
4. Finiteness
5. Effectiveness

Notion of algorithm and problem

    problem
       ↓
    algorithm
       ↓
input (or instance) → “computer” → output

algorithmic solution
(different from a conventional solution)
Important points
• Non-ambiguity
• Range of inputs
• The same algorithm can be represented in
different ways
• Several algorithms for solving the same
problem

gcd(m, n)

Euclid’s algorithm:
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

Consecutive integer checking algorithm:
1. t ← min(m, n)
2. if m % t = 0 goto 3, else goto 4
3. if n % t = 0 return t, else goto 4
4. t ← t − 1
5. goto 2
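The two procedures above can be sketched in C; `gcd_euclid` and `gcd_cic` are names chosen for this illustration, assuming positive integer inputs.

```c
/* Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)
   until n becomes 0; the remaining m is the gcd. */
int gcd_euclid(int m, int n)
{
    while (n != 0) {
        int r = m % n;
        m = n;
        n = r;
    }
    return m;
}

/* Consecutive integer checking: try t = min(m, n), then t - 1, t - 2, ...
   until t divides both m and n. Assumes m > 0 and n > 0. */
int gcd_cic(int m, int n)
{
    int t = (m < n) ? m : n;
    while (!(m % t == 0 && n % t == 0))
        t = t - 1;
    return t;
}
```

Both return the same result, but Euclid's algorithm needs far fewer iterations; the consecutive integer checking algorithm may examine every value from min(m, n) down to 1.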
Fundamentals of Algorithmic Problem Solving

Understand the problem

Decide on computational means


Exact vs. approximate solution
Data structures
Algorithm design technique

Design an algorithm

Prove correctness

Analyze the algorithm

Code the algorithm

What does it mean to understand the problem?

• From a practical perspective, the first thing you need to do before designing an algorithm is to understand completely the problem given.
• Read the problem’s description carefully and ask questions if you have
any doubts about the problem.
• Do a few small examples by hand, think about special cases, and ask
questions again if needed.

Deciding on computational means

• Ascertain the capabilities of the computational device the algorithm is intended for.
• The essence of the random-access machine (RAM) architecture is its central assumption that instructions are executed one after another, one operation at a time.
• Accordingly, algorithms designed to be executed on such machines are called
sequential algorithms.
• The central assumption of the RAM model does not hold for some newer
computers that can execute operations concurrently, i.e., in parallel.
• Algorithms that take advantage of this capability are called parallel algorithms.
• Still, studying the classic techniques for design and analysis of algorithms under
the RAM model remains the cornerstone of algorithmics for the foreseeable
future.
Choosing between Exact and Approximate Problem Solving

• The next principal decision is to choose between solving the problem exactly or
solving it approximately.
• In the former case, an algorithm is called an exact algorithm; in the latter case, an
algorithm is called an approximation algorithm.
• Why would one opt for an approximation algorithm?
• First, there are important problems that simply cannot be solved exactly for most
of their instances;
• examples include extracting square roots, solving nonlinear equations, and
evaluating definite integrals.
• Second, available algorithms for solving a problem exactly can be unacceptably slow
because of the problem’s intrinsic complexity.
Data structures
• One should pay close attention to choosing data structures appropriate
for the operations performed by the algorithm.
• Some algorithms would run longer if we used a linked list instead of an array in their implementation.
• Also note that some of the algorithm design techniques depend
intimately on structuring or restructuring data specifying a problem’s
instance.
• Many years ago, an influential textbook proclaimed the fundamental importance of both algorithms and data structures for computer programming by its very title: Algorithms + Data Structures = Programs.
• In the new world of object-oriented programming, data structures remain
crucially important for both design and analysis of algorithms.
Design an algorithm

• Build a computational model of the solving process.

• How do you design an algorithm to solve a given problem?
• This is the main question this course seeks to answer by teaching you several general design techniques.
• What is an algorithm design technique?
• An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.
• First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm.
• Learning such techniques is akin to learning to fish as opposed to being
given a fish caught by somebody else.
• It is not true, of course, that each of these general techniques will be
necessarily applicable to every problem you may encounter. But taken
together, they do constitute a powerful collection of tools that you will
find quite handy in your studies and work.
• Second, algorithms are the cornerstone of computer science. Every
science is interested in classifying its principal subject, and computer
science is no exception.
Prove correctness

• Correct output for every legitimate input in finite time


• Based on correct math formula
• By induction
Analyze the algorithm

Efficiency: time and space


Simplicity
Generality: range of inputs, special cases
Optimality: no other algorithm can do better

Coding
How the objects and operations in the algorithm are
represented in the chosen Programming language?
Study of Algorithms
1. How to devise algorithms?
Algorithm design techniques
2. How to validate algorithms?
To show that algorithm produces correct answer for all possible legal
inputs.
3. How to analyze algorithms?
Quantitative judgements.
4. How to test a program?
Debugging – Executing programs on sample data sets to determine
whether faulty results occur, & if so correct them.
Profiling – Process of executing a correct program on datasets and
measuring the time and space it takes to compute the results
Algorithm Specification
Space Complexity
• Space complexity of an algorithm is the amount of space it
needs to run to completion
Time Complexity
• Time complexity is the amount of computer time an algorithm
needs to run to completion
Analysis framework
2 kinds of efficiency.
1. Time efficiency – indicates how fast an algorithm in question runs.
2. Space efficiency – deals with the extra space the algorithm requires.
Time is of more concern.
Measuring an input’s size
– An algorithm’s efficiency is measured as a function of some parameter n indicating the algorithm’s input size.
E.g. sorting – the number of elements;
checking whether a number n is prime – the number of bits b in n’s binary representation
(b = ⌊log2 n⌋ + 1)
• Units for measuring run time
– Standard units – seconds, milliseconds
– Count the number of times each of the algorithm’s operations is executed.
– Identify the most important operation of the algorithm, called the basic operation – the operation contributing the most to the total running time – and count the number of times the basic operation is executed.

Run time T(n) ≈ cop · C(n)
where cop – execution time of the algorithm’s basic operation on a particular computer,
C(n) – number of times this operation needs to be executed for this algorithm.
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the
number of repetitions of the basic operation
as a function of input size
• Basic operation: the operation that
contributes most towards the running time of
the algorithm
T(n) ≈ cop · C(n)
where T(n) – running time, cop – execution time of the basic operation, C(n) – number of times the basic operation is executed (as a function of input size n).
Input size and basic operation examples

Problem                                   Input size measure                               Basic operation
Searching for key in a list of n items    Number of list’s items, i.e. n                   Key comparison
Multiplication of two matrices            Matrix dimensions or total number of elements    Multiplication of two numbers
Checking primality of a given integer n   n’s size = number of digits (in binary)          Division
Typical graph problem                     #vertices and/or edges                           Visiting a vertex or traversing an edge
Best-case, average-case, worst-case
• Worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, for which the algorithm runs the longest among all possible inputs of that size.
• It bounds the running time from above, i.e., guarantees that for any instance of size n, the running time will not exceed Cworst(n).
• Best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input of size n for which the algorithm runs the fastest among all possible inputs of that size.
• Average-case efficiency yields the necessary information about the algorithm’s behaviour on a typical or random input.
Best-case, average-case, worst-case
For some algorithms efficiency depends on form of input:

• Worst case: Cworst(n) – maximum over inputs of size n

• Best case: Cbest(n) – minimum over inputs of size n

• Average case: Cavg(n) – “average” over inputs of size n


– Number of times the basic operation will be executed on typical input
– NOT the average of worst and best case
– Expected number of basic operations considered as a random variable
under some assumption about the probability distribution of all possible
inputs
Worst case, best case and average case efficiencies
Algorithm Sequential_Search(A[0..n−1], k)
// Searches for a given value in an array.
// Input: An array A[0..n−1] and a search key k
// Output: The index of the first element of A
// that matches k, or −1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ k do
    i ← i + 1
if i < n return i
else return −1
Sequential search
Worst case – Cworst(n) = n
Best case – Cbest(n) = 1

Cavg(n) = [1·P/n + 2·P/n + … + i·P/n + … + n·P/n] + n·(1 − P)
        = P/n · [1 + 2 + … + i + … + n] + n(1 − P)
        = P/n · n(n + 1)/2 + n(1 − P)
        = P(n + 1)/2 + n(1 − P)

P – probability of a successful search (0 ≤ P ≤ 1);
the probability of the first match occurring in the ith position is the same for every i.
If P = 1 then
Cavg(n) = (n + 1)/2
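A minimal C sketch of the pseudocode above; `seq_search` is a name chosen for this illustration.

```c
/* Sequential search, following the pseudocode above: returns the index of
   the first element of a[0..n-1] equal to key, or -1 if there is none.
   Basic operation: the key comparison a[i] != key. */
int seq_search(const int a[], int n, int key)
{
    int i = 0;
    while (i < n && a[i] != key)
        i++;
    return (i < n) ? i : -1;
}
```

The best case makes 1 comparison (key in the first position); the worst case makes n comparisons (key in the last position or absent), matching Cbest(n) = 1 and Cworst(n) = n above.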
Analysis Framework
• Both time and space efficiencies are measured as functions of the algorithm’s
input size.
• Time efficiency is measured by counting the number of times the algorithm’s basic
operation is executed.
• Space efficiency is measured by counting the number of extra memory units
consumed by the algorithm.
• The efficiencies of some algorithms may differ significantly for inputs of the same
size. For such algorithms, we need to distinguish between the worst-case,
average-case, and best-case efficiencies.

• The framework’s primary interest lies in the order of growth of the algorithm’s
running time (extra memory/time units consumed) as its input size goes to
infinity.
Asymptotic Notations and Basic Efficiency Classes
• Asymptotic notations are mathematical tools used in computer
science and mathematics to describe the time and space complexity of
algorithms and functions.
• They provide a concise way to analyze the behavior of algorithms as
the input size grows towards infinity.
Big O Notation (O)

• Represents the upper bound or worst-case time


complexity of an algorithm.
• Denoted as O(g(n)), where g(n) is a function
describing the growth rate of the algorithm.
• It defines an upper limit on the growth rate of the
algorithm, ignoring constant factors and lower-order
terms.
• Example: O(n^2) represents a quadratic time
complexity, indicating that the algorithm's running
time grows no faster than the square of the input size.
Asymptotic Notations and Basic Efficiency Classes
• The efficiency analysis framework concentrates on the order of
growth of an algorithm’s basic operation count as the principal
indicator of the algorithm’s efficiency.
• Informally, O(g(n)) is the set of all functions with a lower or same
order of growth as g(n) (to within a constant multiple, as n goes to
infinity).
• Thus, to give a few examples, the following assertions are all true:
n ∈ O(n2), 100n + 5 ∈ O(n2), ½ n(n − 1) ∈ O(n2).
• On the other hand,
n3 ∉ O(n2), 0.00001n3 ∉ O(n2), n4 + n + 1 ∉ O(n2).
Omega Notation (Ω)

• The second notation, Ω(g(n)), stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
• Represents the lower bound or best-case time complexity of an
algorithm.
• Denoted as Ω(g(n)), where g(n) is a function describing the growth
rate of the algorithm.
• It defines a lower limit on the growth rate of the algorithm, ignoring constant factors and lower-order terms.
• Example: Ω(n) represents a linear time complexity, indicating that
the algorithm's running time grows at least as fast as the input size.
• For example, n3 ∈ Ω(n2) and ½ n(n − 1) ∈ Ω(n2), but 100n + 5 ∉ Ω(n2).
Theta Notation (θ)

• Finally, θ(g(n)) is the set of all functions that have the same order of
growth as g(n) (to within a constant multiple, as n goes to infinity).
• Represents both the upper and lower bounds or average-case time
complexity of an algorithm.
• Denoted as θ(g(n)), where g(n) is a function describing the growth
rate of the algorithm.
• It defines a tight bound on the growth rate of the algorithm,
capturing both the best-case and worst-case scenarios.
• Example: θ(n log n) represents a linearithmic time complexity, indicating that the algorithm's running time grows at the same rate as the product of the input size and its logarithm.
O-notation

• A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
O-notation
• As an example, let us formally prove one of the assertions
made in the introduction:
100n + 5 ∈ O(n2).
Indeed,
• 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n2.
• Thus, as values of the constants c and n0 required by the
definition, we can take 101 and 5, respectively.
• Alternatively, c = 105 and n0 = 1 also work, since 100n + 5 ≤ 105n for all n ≥ 1.
Big-oh
Ω-notation

• A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0.
• Here is an example of a formal proof that n3 ∈ Ω(n2):
n3 ≥ n2 for all n ≥ 0,
i.e., we can select c = 1 and n0 = 0.
Big-omega
Big-theta
Asymptotic order of growth
A way of comparing functions that ignores constant factors and
small input sizes

• O(g(n)): class of functions f(n) that grow no faster than g(n)

• Θ(g(n)): class of functions f(n) that grow at same rate as g(n)

• Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)


Time efficiency of non-recursive algorithms
General Plan for Analysis
• Decide on parameter n indicating input size
• Identify algorithm’s basic operation
• Determine worst, average, and best cases for input of size n
• Set up a sum for the number of times the basic operation is executed
• Simplify the sum using standard formulas and rules
Element uniqueness problem
Matrix multiplication
General Plan for Analyzing the Time Efficiency of Non-recursive
Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a
closed form formula for the count or, at the very least, establish its
order of growth.
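As an illustration of this plan, consider the element uniqueness problem named above; `unique_elements` is a name chosen for this sketch.

```c
#include <stdbool.h>

/* Element uniqueness: returns true iff all elements of a[0..n-1] are
   distinct. Input size: n. Basic operation: the comparison a[i] == a[j]
   in the innermost loop. In the worst case (all elements distinct) the
   comparison executes (n-1) + (n-2) + ... + 1 = n(n-1)/2 times,
   so Cworst(n) ∈ Θ(n^2). */
bool unique_elements(const int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return false;
    return true;
}
```

Note that the count depends not only on n but also on the input itself (an early duplicate stops the algorithm), so the worst and best cases must be considered separately, as step 3 of the plan requires.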
Mathematical Analysis of Recursive Algorithms
Tower of Hanoi
Step 1 − Move n−1 disks from source to aux
Step 2 − Move the nth disk from source to dest
Step 3 − Move n−1 disks from aux to dest

A recursive algorithm for Tower of Hanoi can be written as follows:

Algorithm Tower_Hanoi(disk, source, dest, aux)
    IF disk == 1 THEN
        move disk from source to dest
    ELSE
        Tower_Hanoi(disk − 1, source, aux, dest)   // Step 1
        move disk from source to dest              // Step 2
        Tower_Hanoi(disk − 1, aux, dest, source)   // Step 3
    END IF
END
#include <stdio.h>

int count = 0;

void tower(int n, char A, char B, char C)
{
    if (n == 1) {
        printf("Move disc 1 from %c to %c\n", A, C);
        count++;
        return;
    }
    tower(n - 1, A, C, B);
    printf("Move disc %d from %c to %c\n", n, A, C);
    count++;
    tower(n - 1, B, A, C);
}

int main(void)
{
    int n;
    printf("Enter the number of discs\n");
    scanf("%d", &n);
    tower(n, 'A', 'B', 'C');
    printf("Total number of moves = %d\n", count);
    return 0;
}
Tower of Hanoi
• The number of disks n is the obvious choice for the input’s size indicator, and so is
moving one disk as the algorithm’s basic operation.
• Clearly, the number of moves M(n) depends on n only, and we get the following
recurrence equation for it:

M(n) = M(n − 1) + 1+ M(n − 1) for n > 1.

• With the obvious initial condition M(1) = 1, we have the following recurrence
relation for the number of moves M(n):

M(n) = 2M(n − 1) + 1 for n > 1,


M(1) = 1.
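Solving this recurrence by backward substitution yields the closed form M(n) = 2^n − 1:

```latex
\begin{aligned}
M(n) &= 2M(n-1) + 1 \\
     &= 2\bigl[2M(n-2) + 1\bigr] + 1 = 2^{2}M(n-2) + 2 + 1 \\
     &= 2^{3}M(n-3) + 2^{2} + 2 + 1 \\
     &\;\;\vdots \\
     &= 2^{i}M(n-i) + 2^{i-1} + \dots + 2 + 1 = 2^{i}M(n-i) + 2^{i} - 1 \\
     &= 2^{n-1}M(1) + 2^{n-1} - 1 = 2^{n-1} + 2^{n-1} - 1 = 2^{n} - 1 .
\end{aligned}
```

So the number of moves, and hence the running time, grows as Θ(2^n): the algorithm is exponential in the number of disks.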
Tower of Hanoi
General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.


2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed
can vary on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated
separately.
4. Set up a recurrence relation, with an appropriate initial condition,
for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its
solution.
Order of Growth
Algorithm design strategies

• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Greedy approach
• Dynamic programming
• Backtracking and branch-and-bound
• Space and time tradeoffs
Brute Force
A straightforward approach, usually based directly on the
problem’s statement and definitions of the concepts
involved

Examples:
1. Computing a^n (a > 0, n a nonnegative integer)
2. Computing n!
3. Multiplying two matrices
4. Searching for a key of a given value in a list

Brute Force

Selection Sort


• Example of sorting with selection sort.
• Each line corresponds to one iteration of the algorithm, i.e., a pass through the list’s tail to the right of the vertical bar; an element in bold indicates the smallest element found.
• Elements to the left of the vertical bar are in their final positions and are not considered in this and subsequent iterations.
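A minimal C sketch of selection sort as traced above; `selection_sort` is a name chosen for this illustration.

```c
/* Selection sort: on the i-th pass, find the smallest element in
   a[i..n-1] and swap it into position i; the sorted prefix (left of
   the "vertical bar" in the trace) grows by one each pass. The basic
   operation a[j] < a[min] executes n(n-1)/2 times, i.e. Θ(n^2). */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int tmp = a[i];       /* swap a[i] and a[min] */
        a[i] = a[min];
        a[min] = tmp;
    }
}
```

The comparison count is the same for every input of size n, so selection sort has no separate best, worst, and average cases.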
Bubble Sort


• First two passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17.
• A new line is shown after a swap of two elements is done.
• The elements to the right of the vertical bar are in their final positions and are not considered in subsequent iterations of the algorithm.
Bubble Sort
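A minimal C sketch of bubble sort as traced above; `bubble_sort` is a name chosen for this illustration.

```c
/* Bubble sort: repeatedly compare adjacent elements and swap them if
   they are out of order; after pass i, the i+1 largest elements sit in
   their final positions at the right end (right of the "vertical bar").
   The comparison a[j] > a[j+1] executes n(n-1)/2 times, i.e. Θ(n^2). */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int tmp = a[j];      /* swap adjacent pair */
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}
```

A common refinement (not shown) stops early when a full pass makes no swaps, which makes the best case, an already sorted list, take only n − 1 comparisons.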
Sequential Search
• The algorithm simply compares successive
elements of a given list with a given search
key until either a match is encountered
(successful search) or the list is exhausted
without finding a match (unsuccessful
search).
• A simple extra trick is often employed in
implementing sequential search: if we
append the search key to the end of the
list, the search for the key will have to be
successful, and therefore we can eliminate
the end of list check altogether.
• Another straightforward improvement can
be incorporated in sequential search if a
given list is known to be sorted: searching
in such a list can be stopped as soon as an
element greater than or equal to the
search key is encountered.
String Matching

• Problem : given a string of n characters called the text and a string of m characters (m ≤ n)
called the pattern, find a substring of the text that matches the pattern.
• Brute-force Solution: align the pattern against the first m characters of the text and start
matching the corresponding pairs of characters from left to right until either all the m pairs
of the characters match (then the algorithm can stop) or a mismatching pair is encountered.
In the latter case, shift the pattern one position to the right and resume the character
comparisons, starting again with the first character of the pattern and its counterpart in the
text.
• Note that the last position in the text that can still be a beginning of a matching substring is
n − m (provided the text positions are indexed from 0 to n − 1). Beyond that position, there
are not enough characters to match the entire pattern; hence, the algorithm need not make
any comparisons there.
Brute-Force String Matching
• pattern: a string of m characters to search for
• text: a (longer) string of n characters to search in
• problem: find a substring in the text that matches the pattern

Brute-force algorithm
Step 1: Align pattern at beginning of text.
Step 2: Moving from left to right, compare each character of pattern to the corresponding character in text until
• all characters are found to match (successful search); or
• a mismatch is detected.
Step 3: While pattern is not found and the text is not yet exhausted, realign pattern one position to the right and repeat Step 2.
Examples of Brute-Force String Matching

1. Pattern: 001011
Text: 10010101101001100101111010

2. Pattern: happy
Text: It is never too late to have a happy
life.
Pseudocode and Efficiency

Time efficiency: Θ(mn) comparisons (in the worst case)

Worst-case time complexity: Θ(mn)
• The worst case occurs when the pattern does not appear in the text at all or appears only at the very end.
• For each of the (n − m + 1) possible starting positions in the text, the algorithm potentially makes m comparisons (one for each character in the pattern).
• The algorithm therefore performs Θ((n − m + 1)·m) comparisons, where n is the length of the text and m is the length of the pattern.
• Thus, the worst-case time complexity is Θ(nm).
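The three steps above can be sketched in C; `bf_string_match` is a name chosen for this illustration.

```c
#include <string.h>

/* Brute-force string matching: returns the index of the first occurrence
   of pattern in text, or -1 if there is none. The last viable starting
   position is n - m; each position costs at most m comparisons, giving
   O(nm) comparisons in the worst case. */
int bf_string_match(const char *text, const char *pattern)
{
    int n = (int)strlen(text);
    int m = (int)strlen(pattern);
    for (int i = 0; i <= n - m; i++) {   /* Steps 1 and 3: each alignment */
        int j = 0;
        while (j < m && pattern[j] == text[i + j])   /* Step 2 */
            j++;
        if (j == m)          /* all m characters matched */
            return i;
    }
    return -1;               /* text exhausted without a match */
}
```

On the first example from the slides, the pattern 001011 is found starting at index 15 of the text.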
