DAA or Algorithms

The document provides an introduction to algorithms, discussing their properties, efficiency, and complexity analysis using asymptotic notations such as Big O, Omega, and Theta. It includes a detailed explanation of the problem-solving cycle, types of algorithm analysis, and the Master Theorem for analyzing divide-and-conquer recurrences. Additionally, it covers sorting algorithms, their complexities, and the concept of stable versus unstable sorting.


Chapter-1 (Introduction)

Algorithms, Analysing Algorithms,
Efficiency of an Algorithm, Time and Space
Complexity, Asymptotic Notations (Big-Oh,
Omega, Theta), Time-Space Trade-off,
Complexity of Algorithms, Growth of Functions,
Performance Measurements.
Q: Find the largest number among three numbers.
1. Start
2. Read the three numbers to be compared, as A, B and C.
3. Check if A is greater than B.
   1. If true, then check if A is greater than C.
      1. If true, print 'A' as the greatest number.
      2. If false, print 'C' as the greatest number.
   2. If false, then check if B is greater than C.
      1. If true, print 'B' as the greatest number.
      2. If false, print 'C' as the greatest number.
4. End
#include <stdio.h>

int main()
{
    int A, B, C;

    printf("Enter the numbers A, B and C: ");
    scanf("%d %d %d", &A, &B, &C);

    if (A >= B && A >= C)
        printf("%d is the largest number.", A);

    if (B >= A && B >= C)
        printf("%d is the largest number.", B);

    if (C >= A && C >= B)
        printf("%d is the largest number.", C);

    return 0;
}
Introduction to Algorithm
• In mathematics and computer science, an algorithm is a finite
sequence of well-defined, computer-implementable instructions,
typically used to solve a class of problems or to perform a computation.
In short, a step-by-step procedure.
• An algorithm will accept zero or more inputs, but must generate at least
one output.
Properties of Algorithm
• Should terminate in finite time
• Unambiguous
• Input: zero or more; output: at least one
• Every instruction in the algorithm should be effective
• It should be deterministic
Algorithm vs Program
• An algorithm is written in the design phase; a program is written in the implementation phase.
• An algorithm needs domain knowledge; a program needs programmer knowledge.
• An algorithm can be written in any language; a program is written in a programming language.
• An algorithm is independent of hardware & software; a program is dependent on hardware & software.
• We analyse algorithms for time & space; we test programs for faults.
Problem Solving Cycle
• Problem Definition: Understand Problem
• Constraints & Conditions: Understand constraints if any
• Design Strategies (Algorithmic Strategy)
• Express & Develop the algo
• Validation (Dry run)
• Analysis (Space and Time analysis)
• Coding
• Testing & Debugging
• Installation
• Maintenance
Need for Analysis
• We analyse algorithms to compare the performance of different
algorithms and figure out which one is the best option for a given problem.
• The following parameters can be considered while analysing an algorithm:
• Time
• Space
• Bandwidth
• Register
• Battery power
Types of Analysis
• Experimental analysis (also called a posteriori or relative analysis)
• A priori analysis (also called independent or absolute analysis)

• Experimental or a posteriori or relative analysis: analysis of an
algorithm after it is converted to code. Implement both algorithms,
run the two programs on your computer for different inputs, and
see which one takes less time.
• Advantage: exact values, not rough estimates.
• Disadvantage: the final result depends not only on the algorithm
but also on many other factors such as the background software & hardware,
the programming language, even the temperature of the room.
• A priori or independent or absolute analysis: we analyse only the
algorithm itself, using asymptotic notations and mathematical tools,
i.e. before converting it into a program in a particular
programming language.
• It is a determination of the order of magnitude of a statement.
• In asymptotic analysis, we evaluate the performance of an algorithm in terms of
input size (we don't measure the actual running time). We calculate how the
time (or space) taken by an algorithm increases with the input size.
• Asymptotic analysis is not perfect, but it is the best way available for analysing
algorithms.

• It might be possible that those large inputs are never given to your software, and
an algorithm which is asymptotically slower always performs better in your
particular situation. So, you may end up choosing an algorithm that is
asymptotically slower but faster for your software.

• Advantage: a uniform result that depends only on the algorithm.

• Disadvantage: an estimated or approximate value, not an accurate and precise
value.
Asymptotic Notations
• Asymptotic notations are abstract notations for describing the behaviour of an
algorithm and determining the rate of growth of a function.
Big O Notation
• The Big O notation defines an upper bound of an algorithm: it bounds a function only from above.
• The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm.
• Many times, we can easily find an upper bound by simply looking at the algorithm.
• O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}
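• Example: f(n) = 3n + 2 is O(n), since taking c = 4 and n0 = 2 gives 0 <= 3n + 2 <= 4n for all n >= 2.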
Ω Notation
• Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an
asymptotic lower bound.
• Ω notation is useful when we have a lower bound on the time complexity of an algorithm.
• For a given function g(n), we denote by Ω(g(n)) the following set of functions:
• Ω (g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}.
Theta Notation
• Θ Notation: The theta notation bounds a function from above and below, so it defines exact
asymptotic behaviour.
• For a given function g(n), we denote by Θ(g(n)) the following set of functions:
• Θ(g(n)) = {f(n): there exist positive constants C1, C2 and n0 such that 0 <= C1*g(n) <= f(n) <=
C2*g(n) for all n >= n0}
• The above definition means, if f(n) is theta of g(n), then the value f(n) is always between
C1*g(n) and C2*g(n) for large values of n (n >= n0).
• The definition of theta also requires that f(n) must be non-negative for values of n greater
than n0.
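• Example: f(n) = 3n + 2 is Θ(n): taking c1 = 3, c2 = 4 and n0 = 2 gives 0 <= 3n <= 3n + 2 <= 4n for all n >= 2.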
Small notations
• Everything is the same as for the big notations, except that the
bound must be strict; the equal case is not allowed.

• Analogy of asymptotic notations with the comparison of two real numbers a and b:

f(n) is O(g(n))   ↔   a <= b
f(n) is Ω(g(n))   ↔   a >= b
f(n) is Θ(g(n))   ↔   a = b
f(n) is o(g(n))   ↔   a < b
f(n) is ω(g(n))   ↔   a > b
Conclusion
• The most useful notation is Theta, followed by Big O.

• The Omega notation is the least used among the three.
Master Theorem
• In the analysis of algorithms, the master theorem for divide-and-conquer
recurrences provides an asymptotic analysis (using Big O notation) for recurrence
relations of types that occur in the analysis of many divide and conquer
algorithms.
• The approach was first presented by Jon Bentley, Dorothea Haken, and James B.
Saxe in 1980, where it was described as a "unifying method" for solving such
recurrences. The name "master theorem" was popularized by the widely used
algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest,
and Stein.
T(n) = a T(n/b) + f(n)

• The above recurrence divides the problem into 'a' subproblems
recursively, with a >= 1.
• Each subproblem is of size n/b, with b > 1; subproblems of size below
some constant do not recurse.
• f(n) is the time to create the subproblems and combine their
results.
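• Example: merge sort makes a = 2 recursive calls on halves (b = 2) and spends f(n) = Θ(n) time splitting and merging, so its recurrence is T(n) = 2T(n/2) + n.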
Case 1
• If f(n) = O(n^(log_b a - ϵ)) for some constant ϵ > 0,
• then T(n) = Θ(n^(log_b a))

• Q T(n) = 4T(n/2) + n?
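• Answer: a = 4, b = 2, n^(log_2 4) = n²; f(n) = n = O(n^(2 - ϵ)), so Case 1 gives T(n) = Θ(n²).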

• Q T(n) = 9T(n/3) + n?
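• Answer: a = 9, b = 3, n^(log_3 9) = n²; f(n) = n = O(n^(2 - ϵ)), so Case 1 gives T(n) = Θ(n²).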

• Q T(n) = 9T(n/3) + n² ?
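• Answer: a = 9, b = 3, n^(log_3 9) = n²; here f(n) = n² = Θ(n^(log_b a)), so Case 1 does not apply; by Case 2 (below), T(n) = Θ(n² lg n).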



• Q T(n) = 8T(n/2) + n² ?
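• Answer: a = 8, b = 2, n^(log_2 8) = n³; f(n) = n² = O(n^(3 - ϵ)), so Case 1 gives T(n) = Θ(n³).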


• Q T(n) = 7T(n/2) + n² ?
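• Answer: a = 7, b = 2, log_2 7 ≈ 2.81; f(n) = n² = O(n^(log_2 7 - ϵ)), so Case 1 gives T(n) = Θ(n^(log_2 7)).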
Case 2
• If f(n) = Θ(n^(log_b a)),
• then T(n) = Θ(n^(log_b a) · lg n)

• Q T(n) = 2T(n/2) + n ?
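• Answer: a = 2, b = 2, n^(log_2 2) = n; f(n) = n = Θ(n), so Case 2 gives T(n) = Θ(n lg n) (the merge sort recurrence).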

• Q T(n) = T(2n/3) + 1 ?
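• Answer: a = 1, b = 3/2, n^(log_{3/2} 1) = n^0 = 1; f(n) = 1 = Θ(1), so Case 2 gives T(n) = Θ(lg n).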
Case 3
• If f(n) = Ω(n^(log_b a + ϵ)) for some constant ϵ > 0,
• and if a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently
large n (the regularity condition),
• then T(n) = Θ(f(n))
• Q T(n) = T(n/3) + n ?
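• Answer: a = 1, b = 3, n^(log_3 1) = 1; f(n) = n = Ω(n^(0 + ϵ)), and the regularity condition holds (1·f(n/3) = n/3 <= (1/3)·n), so Case 3 gives T(n) = Θ(n).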
• Q T(n) = 3T(n/4) + n lg n ?
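• Answer: a = 3, b = 4, log_4 3 ≈ 0.79; f(n) = n lg n = Ω(n^(log_4 3 + ϵ)), and 3·(n/4)·lg(n/4) <= (3/4)·n lg n, so Case 3 gives T(n) = Θ(n lg n).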
Iteration method
Q Consider the following three functions.
f1 = 10^n
f2 = n^(log n)
f3 = n^(√n)
Which one of the following options arranges the functions in
the increasing order of asymptotic growth rate?

f2 < f3 < f1
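(Taking logarithms: log f1 = n · log 10, log f2 = (log n)², log f3 = √n · log n. For large n, (log n)² < √n · log n < n · log 10, hence f2 < f3 < f1.)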
Q Consider the following functions from positive integers to real
numbers: 10, √n, n, log₂n, 100/n. The CORRECT arrangement of
the above functions in increasing order of asymptotic complexity is:

100/n < 10 < log₂n < √n < n
Ch-2
Sorting and Order Statistics

Concept of Searching: Sequential Search, Index Sequential
Search, Binary Search; comparison and analysis.
Internal Sorting: Insertion Sort, Selection Sort, Bubble Sort,
Shell Sort, Quick Sort, Two-Way Merge Sort, Heap Sort,
Radix Sort. Comparison of Sorting Algorithms, Sorting in
Linear Time, and practical considerations for internal sorting.
• The process of arranging data items (numeric or character) in a specific order, either
increasing or decreasing, is called sorting. It is an important process used
in a number of applications, as many times we require sorted data.

• There are a number of approaches available for sorting, and several parameters based on
which we judge the performance of these algorithms.
Sorting Algorithm    Best Case                  Worst Case
Selection            O(n²)                      O(n²)
Bubble               O(n²) / O(n) with flag     O(n²)
Insertion            O(n)                       O(n²)
Merge                O(n log n)                 O(n log n)
Heap                 O(n log n)                 O(n log n)


• Space complexity: internal (in-place) sorting vs external sorting.
• Internal/in-place sorting means the sorting algorithm does not require any
additional memory apart from the space occupied by the input itself, e.g. Heap Sort.
• On the other hand, some sorting algorithms require additional space; these are
called external sorting algorithms, e.g. Merge Sort.
• Stable or unstable: when the input sequence contains duplicate keys, we check
whether the output sequence preserves the relative order of those duplicates or
not.
• Stable: Bubble Sort, Insertion Sort. Unstable: Selection Sort, Heap Sort.
• Even after studying all these criteria, we see that different sorting algorithms have
different advantages in different scenarios; therefore we must understand the
actual logic of each algorithm before solving problems based on them.
Selection Sort
• The algorithm divides the input list into two parts: a sorted sublist of items which is built
up from left to right at the front (left) of the list and a sublist of the remaining unsorted
items that occupy the rest of the list.
• Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The
algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in
the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it
in sorted order), and moving the sublist boundaries one element to the right.
Selection Sort (Algo)

Selection sort (A, n)
{
    for k ← 1 to n-1
    {
        min = A[k]
        Loc = k
        for j ← k+1 to n
        {
            if (min > A[j])
            {
                min = A[j]
                Loc = j
            }
        }
        swap(A[k], A[Loc])
    }
}
Selection Sort (Analysis)
• Depends on structure or content? Structure
• Internal/External sort algorithm? Internal
• Stable/Unstable sort algorithm? Unstable
• Best case time complexity? O(n²)
• Worst case time complexity? O(n²)
• Algorithmic approach? Subtract and Conquer
Selection Sort (Conclusion)

• Selection sort is noted for its simplicity, and it has performance advantages over
more complicated algorithms in certain situations: in particular, the number of
swaps is small (n - 1 in the worst case), which matters when writes are expensive.

• It has an O(n²) time complexity, which makes it inefficient on large lists.
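
A runnable C translation of the pseudocode above (a minimal sketch; it uses 0-based array indexing, while the pseudocode is 1-based, and the function name selection_sort is our own):

#include <stdio.h>

/* Repeatedly select the minimum of the unsorted suffix A[k..n-1]
   and swap it to position k. */
void selection_sort(int A[], int n)
{
    for (int k = 0; k < n - 1; k++) {
        int loc = k;                       /* index of current minimum */
        for (int j = k + 1; j < n; j++)
            if (A[j] < A[loc])
                loc = j;
        int tmp = A[k]; A[k] = A[loc]; A[loc] = tmp;   /* swap */
    }
}

int main(void)
{
    int A[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof A / sizeof A[0];
    selection_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);               /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}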


Bubble / Sinking Sort
• Bubble sort, sometimes referred to as sinking sort, is a simple sorting
algorithm that repeatedly steps through the list, compares adjacent
elements and swaps them if they are in the wrong order. The pass
through the list is repeated until the list is sorted. The algorithm, which is
a comparison sort, is named for the way smaller or larger elements
"bubble" to the top of the list.
Bubble / Sinking Sort (Algo without flag)

Bubble sort (A, n)
{
    for k ← 1 to n-1
    {
        ptr = 1
        while (ptr <= n-k)
        {
            if (A[ptr] > A[ptr+1])
            {
                exchange(A[ptr], A[ptr+1])
            }
            ptr = ptr + 1
        }
    }
}
Bubble / Sinking Sort (Algo with flag)

Bubble sort (A, n)
{
    for k ← 1 to n-1
    {
        flag = 0
        ptr = 1
        while (ptr <= n-k)
        {
            if (A[ptr] > A[ptr+1])
            {
                exchange(A[ptr], A[ptr+1])
                flag = 1
            }
            ptr = ptr + 1
        }
        if (!flag)
        {
            break
        }
    }
}

(The flag is cleared at the start of each pass; if a complete pass performs no exchange, the list is already sorted and the algorithm stops early.)
Bubble / Sinking Sort (Analysis with flag)
• Depends on structure or content? Both
• Internal/External sort algorithm? Internal
• Stable/Unstable sort algorithm? Stable
• Best case time complexity? O(n)
• Worst case time complexity? O(n²)
• Algorithmic approach? Subtract and Conquer
Bubble / Sinking Sort (Conclusion)
• Even other O(n²) sorting algorithms, such as insertion sort and selection sort, generally run
faster than bubble sort, and are no more complex. Therefore, bubble sort is not a
practical sorting algorithm. This simple algorithm performs poorly in real-world use and
is used primarily as an educational tool. More efficient algorithms such as heap sort
or merge sort are used by the sorting libraries built into popular programming
languages such as Python and Java.

• When the list is already sorted (best case), the complexity of bubble sort with the flag is only O(n).
By contrast, most other algorithms, even those with better average-case complexity,
perform their entire sorting process on the set and thus are more complex. However,
not only does insertion sort share this advantage, it also performs better on a list
that is substantially sorted (having a small number of inversions).
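
A runnable C version of the flagged bubble sort above (a minimal sketch with 0-based indexing; the function name bubble_sort is our own):

#include <stdio.h>
#include <stdbool.h>

/* One pass bubbles the largest remaining element to the end;
   if a whole pass makes no swap, the array is already sorted. */
void bubble_sort(int A[], int n)
{
    for (int k = 1; k <= n - 1; k++) {
        bool swapped = false;              /* the "flag" of the pseudocode */
        for (int ptr = 0; ptr < n - k; ptr++) {
            if (A[ptr] > A[ptr + 1]) {
                int tmp = A[ptr]; A[ptr] = A[ptr + 1]; A[ptr + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped)                      /* best case: sorted input costs O(n) */
            break;
    }
}

int main(void)
{
    int A[] = {5, 1, 4, 2, 8};
    int n = sizeof A / sizeof A[0];
    bubble_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);               /* prints: 1 2 4 5 8 */
    printf("\n");
    return 0;
}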
Insertion Sort
• At each iteration, insertion sort removes one element from the input data, finds the location it
belongs within the sorted list, and inserts it there. It repeats until no input elements remain.

• At each array-position, it checks the value there against the largest value in the sorted list
(which happens to be next to it, in the previous array-position checked).

• If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct
position within the sorted list, shifts all the larger values up to make a space, and inserts into
that correct position.
• The resulting array after k iterations has the property where the first k + 1 entries are
sorted ("+1" because the first entry is skipped). In each iteration the first remaining
entry of the input is removed, and inserted into the result at the correct position, thus
extending the result:
Insertion Sort (Algo)

Insertion sort (A, n)
{
    for j ← 2 to n
    {
        key = A[j]
        i = j - 1
        while (i > 0 and A[i] > key)
        {
            A[i+1] = A[i]
            i = i - 1
        }
        A[i+1] = key
    }
}
Insertion Sort (Analysis)
• Depends on structure or content? Both
• Internal/External sort algorithm? Internal
• Stable/Unstable sort algorithm? Stable
• Best case time complexity? O(n)
• Worst case time complexity? O(n²)
• Algorithmic approach? Subtract and Conquer
Insertion Sort (Conclusion)
• Insertion sort is much less efficient on large lists than more advanced algorithms such
as heap sort (O(n log n)) or merge sort (O(n log n)). However, insertion sort provides
several advantages:

• It is efficient for (quite) small data sets, and much better than other quadratic sorting
algorithms such as selection sort and bubble sort.
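
A runnable C translation of the insertion sort pseudocode above (a minimal sketch with 0-based indexing; the function name insertion_sort is our own):

#include <stdio.h>

/* Insert A[j] into the already-sorted prefix A[0..j-1],
   shifting larger elements one position to the right. */
void insertion_sort(int A[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = A[j];
        int i = j - 1;
        while (i >= 0 && A[i] > key) {
            A[i + 1] = A[i];               /* shift right */
            i = i - 1;
        }
        A[i + 1] = key;                    /* place key in its slot */
    }
}

int main(void)
{
    int A[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof A / sizeof A[0];
    insertion_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);               /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}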
Merge Sort
• In computer science, merge sort is an efficient, general-purpose, comparison-based sorting algorithm.

• Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
• Conceptually, a merge sort works as follows:
• Divide the unsorted list into n sublists, each containing one element (a list of one
element is considered sorted).
• Repeatedly merge sublists to produce new sorted sublists until there is only one
sublist remaining. This will be the sorted list.
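
The two steps above translate directly into code. Below is a minimal runnable C sketch of a top-down merge sort (the function names merge and merge_sort and the scratch buffer are our own choices, not from the slides):

#include <stdio.h>
#include <string.h>

/* Merge the two sorted runs A[lo..mid] and A[mid+1..hi]
   using tmp[] as scratch space. */
static void merge(int A[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];  /* <= keeps the sort stable */
    while (i <= mid) tmp[k++] = A[i++];
    while (j <= hi)  tmp[k++] = A[j++];
    memcpy(&A[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof(int));
}

/* Split into halves, sort each, then merge:
   T(n) = 2T(n/2) + n = Θ(n log n). */
static void merge_sort(int A[], int tmp[], int lo, int hi)
{
    if (lo >= hi)                      /* a one-element list is sorted */
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(A, tmp, lo, mid);
    merge_sort(A, tmp, mid + 1, hi);
    merge(A, tmp, lo, mid, hi);
}

int main(void)
{
    int A[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof A / sizeof A[0];
    int tmp[7];                        /* extra buffer: merge sort is not in-place */
    merge_sort(A, tmp, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);           /* prints: 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}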
