DAA or Algorithms
/* fragment of a C example: printing the largest of three numbers */
if (C >= A && C >= B)
    printf("%d is the largest number.", C);
return 0;
Introduction to Algorithm
• In mathematics and computer science, an algorithm is a finite
sequence of well-defined, computer-implementable instructions,
typically used to solve a class of problems or to perform a
computation. In short, a step-by-step procedure.
• An algorithm accepts zero or more inputs, but generates at least
one output.
Properties of Algorithm
• Should terminate in finite time
• Unambiguous
• Zero or more inputs, at least one output
• Every instruction in the algorithm should be effective
• It should be deterministic
Algorithm                       Program
Written in the design phase     Written in the implementation phase
• It might be possible that such large inputs are never given to your software, and
an algorithm which is asymptotically slower always performs better for your
particular situation. So, you may end up choosing an algorithm that is
asymptotically slower but faster for your software.
• The Omega notation is the least used notation among all three.
Master Theorem
• In the analysis of algorithms, the master theorem for divide-and-conquer
recurrences provides an asymptotic analysis (using Big O notation) for recurrence
relations of types that occur in the analysis of many divide and conquer
algorithms.
• The approach was first presented by Jon Bentley, Dorothea Haken, and James B.
Saxe in 1980, where it was described as a "unifying method" for solving such
recurrences. The name "master theorem" was popularized by the widely used
algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest,
and Stein.
T(n) = a T(n/b) + f(n)
• The above recurrence divides the problem into 'a' subproblems
recursively, a >= 1.
• Each subproblem is of size n/b, with b > 1; subproblems of size less
than some constant k do not recurse.
• f(n) is the time to create the subproblems and combine their
results.
Case 1
• If f(n) = O(n^(log_b a − ϵ)) for some constant ϵ > 0,
• then T(n) = Θ(n^(log_b a))
Q T(n) = 4T(n/2) + n?
• Here a = 4, b = 2, so n^(log_2 4) = n^2 and f(n) = n = O(n^(2−ϵ)) with ϵ = 1.
• Case 1 applies: T(n) = Θ(n^2)
Q T(n) = 9T(n/3) + n?
• Here a = 9, b = 3, so n^(log_3 9) = n^2 and f(n) = n = O(n^(2−ϵ)) with ϵ = 1.
• Case 1 applies: T(n) = Θ(n^2)
Q T(n) = 2T(n/2) + n?
• Case 1: If f(n) = O(n^(log_b a − ϵ)) for some constant ϵ > 0,
then T(n) = Θ(n^(log_b a))
• Case 2: If f(n) = Θ(n^(log_b a)),
then T(n) = Θ(n^(log_b a) lg n)
• Here a = 2, b = 2, so n^(log_2 2) = n and f(n) = n = Θ(n).
• Case 2 applies: T(n) = Θ(n lg n)
Case 3
• If f(n) = Ω(n^(log_b a + ϵ)) for some constant ϵ > 0,
• and if a f(n/b) <= c f(n) for some constant c < 1 and all sufficiently
large n,
• then T(n) = Θ(f(n))
Q T(n) = T(2n/3) + 1?
• Checking against all three cases:
• Case 1: If f(n) = O(n^(log_b a − ϵ)) for some constant ϵ > 0, then T(n) = Θ(n^(log_b a))
• Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n)
• Case 3: If f(n) = Ω(n^(log_b a + ϵ)) for some constant ϵ > 0, and a f(n/b) <= c f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n))
• Here a = 1, b = 3/2, so n^(log_b a) = n^0 = 1 and f(n) = 1 = Θ(1).
• Case 2 applies: T(n) = Θ(lg n)
Q T(n) = T(n/3) + n?
• Applying the three cases above: a = 1, b = 3, so n^(log_3 1) = n^0 = 1 and f(n) = n = Ω(n^(0 + ϵ)) with ϵ = 1.
• Regularity: a f(n/b) = n/3 <= c n for c = 1/3 < 1.
• Case 3 applies: T(n) = Θ(n)
Q T(n) = 3T(n/4) + n lg n?
• Here a = 3, b = 4, so n^(log_4 3) ≈ n^0.793 and f(n) = n lg n = Ω(n^(0.793 + ϵ)) for small ϵ > 0.
• Regularity: 3 f(n/4) = (3n/4) lg(n/4) <= (3/4) n lg n = c f(n) with c = 3/4 < 1.
• Case 3 applies: T(n) = Θ(n lg n)
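When the driving term is a plain polynomial f(n) = n^d, choosing the case reduces to comparing d against the critical exponent log_b a. A minimal C sketch of that comparison (the helper name and the tolerance are my own; it deliberately does not cover non-polynomial terms such as n lg n):

#include <stdio.h>
#include <math.h>

/* Hypothetical helper: classifies T(n) = a*T(n/b) + n^d by the master
   theorem, assuming f(n) = n^d is a plain polynomial. For polynomial f,
   the Case 3 regularity condition a*(n/b)^d <= c*n^d always holds. */
static void classify(double a, double b, double d)
{
    double e = log(a) / log(b);              /* critical exponent log_b(a) */
    if (d < e - 1e-9)
        printf("Case 1: T(n) = Theta(n^%.3f)\n", e);
    else if (d > e + 1e-9)
        printf("Case 3: T(n) = Theta(n^%.3f)\n", d);
    else
        printf("Case 2: T(n) = Theta(n^%.3f lg n)\n", e);
}

int main(void)
{
    classify(4, 2, 1);   /* T(n) = 4T(n/2) + n  -> Case 1, Theta(n^2)    */
    classify(2, 2, 1);   /* T(n) = 2T(n/2) + n  -> Case 2, Theta(n lg n) */
    classify(1, 3, 1);   /* T(n) = T(n/3) + n   -> Case 3, Theta(n)      */
    return 0;
}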
Iteration method
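The iteration (repeated substitution) method unrolls a recurrence until a pattern emerges. A short worked expansion on a recurrence already solved above, assuming the standard unrolling:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = ...
     = 2^k T(n/2^k) + k·n

The unrolling stops when n/2^k = 1, i.e., k = log2 n, giving T(n) = n·T(1) + n log2 n = Θ(n lg n), matching the master theorem's Case 2 result.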
Q Consider the following three functions.
f1 = 10n
f2 = n log n
f3 = n√n
Which one of the following options arranges the functions in
the increasing order of asymptotic growth rate?
• f1 = Θ(n), f2 = Θ(n log n), f3 = Θ(n^1.5), so the answer is f1 < f2 < f3.
Q Consider the following functions from positive integers to real
numbers: 10, √n, n, log2 n, 100/n. The CORRECT arrangement of
the above functions in increasing order of asymptotic complexity is
100/n < 10 < log2 n < √n < n.
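A quick numerical check can make such orderings concrete. A throwaway C sketch (entirely illustrative) that evaluates the five functions at growing n:

#include <stdio.h>
#include <math.h>

/* Print the five functions side by side as n grows; the asymptotic
   ordering 100/n < 10 < log2(n) < sqrt(n) < n becomes visible. */
int main(void)
{
    for (double n = 10; n <= 1e6; n *= 10)
        printf("n=%8.0f  100/n=%8.4f  10=%2d  log2(n)=%6.2f  sqrt(n)=%8.1f  n=%8.0f\n",
               n, 100 / n, 10, log2(n), sqrt(n), n);
    return 0;
}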
• There are a number of approaches available for sorting, and some parameters based on
which we judge the performance of these algorithms.
Sorting Algorithm    Best Case     Worst Case
Selection sort       O(n^2)        O(n^2)
Bubble sort          O(n)          O(n^2)
Insertion sort       O(n)          O(n^2)
Merge sort           O(n log n)    O(n log n)
• Selection sort is noted for its simplicity and has performance advantages over
more complicated algorithms in certain situations (its number of swaps is
only n − 1 in the worst case).
Bubble / Shell / Sinking Sort (Algo)
Bubble sort (A, n)
{
    for k ← 1 to n-1
    {
        flag = 0
        ptr = 1
        while(ptr <= n-k)
        {
            if(A[ptr] > A[ptr+1])
            {
                exchange(A[ptr], A[ptr+1])
                flag = 1
            }
            ptr = ptr+1
        }
        if(!flag)
        {
            break
        }
    }
}
Bubble / Shell / Sinking Sort (Analysis with flag)
• Depends on structure or content? Both
• Internal/External sort algorithm? Internal
• Stable/Unstable sort algorithm? Stable
• Best case time complexity? O(n)
• Worst case time complexity? O(n^2)
• Algorithmic approach? Subtract and conquer
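A compact C rendering of the flagged bubble sort above, shifted to 0-based indexing (the function name is my own choice):

#include <stdio.h>
#include <stdbool.h>

/* Flagged bubble sort: stops early once a full pass makes no swap. */
static void bubble_sort(int a[], int n)
{
    for (int k = 0; k < n - 1; k++) {
        bool swapped = false;
        for (int i = 0; i < n - 1 - k; i++) {
            if (a[i] > a[i + 1]) {            /* adjacent pair out of order */
                int t = a[i];
                a[i] = a[i + 1];
                a[i + 1] = t;
                swapped = true;
            }
        }
        if (!swapped)                         /* no swaps: already sorted, the O(n) best case */
            break;
    }
}

int main(void)
{
    int a[] = {5, 1, 4, 2, 8};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                  /* prints: 1 2 4 5 8 */
    printf("\n");
    return 0;
}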
Bubble / Shell / Sinking Sort (Conclusion)
• Even other O(n^2) sorting algorithms, such as insertion sort and selection sort, generally run
faster than bubble sort, and are no more complex. Therefore, bubble sort is not a
practical sorting algorithm. This simple algorithm performs poorly in real-world use and
is used primarily as an educational tool. More efficient algorithms such as heap sort
or merge sort are used by the sorting libraries built into popular programming
languages such as Python and Java.
• When the list is already sorted (best case), the complexity of bubble sort is only O(n).
By contrast, most other algorithms, even those with better average-case complexity,
perform their entire sorting process on the set and thus are more complex. However,
not only does insertion sort share this advantage, it also performs better on a list
that is substantially sorted (having a small number of inversions).
Insertion Sort
• At each iteration, insertion sort removes one element from the input data, finds the location it
belongs within the sorted list, and inserts it there. It repeats until no input elements remain.
• At each array-position, it checks the value there against the largest value in the sorted list
(which happens to be next to it, in the previous array-position checked).
• If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct
position within the sorted list, shifts all the larger values up to make a space, and inserts into
that correct position.
• The resulting array after k iterations has the property where the first k + 1 entries are
sorted ("+1" because the first entry is skipped). In each iteration the first remaining
entry of the input is removed, and inserted into the result at the correct position, thus
extending the result:
Insertion Sort (Algo)
Insertion sort (A, n)
{
    for j ← 2 to n
    {
        key = A[j]
        i = j-1
        while(i > 0 and A[i] > key)
        {
            A[i+1] = A[i]
            i = i-1
        }
        A[i+1] = key
    }
}
Insertion Sort (Analysis)
• Depends on structure or content? Both
• Internal/External sort algorithm? Internal
• Stable/Unstable sort algorithm? Stable
• Best case time complexity? O(n)
• Worst case time complexity? O(n^2)
• Algorithmic approach? Subtract and conquer
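The same procedure in C with 0-based indexing (the function name is my own choice):

#include <stdio.h>

/* Insertion sort: grows a sorted prefix, inserting each new key into it. */
static void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = a[j];                    /* element to insert */
        int i = j - 1;
        while (i >= 0 && a[i] > key) {     /* shift larger values one slot right */
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;                    /* drop key into its correct slot */
    }
}

int main(void)
{
    int a[] = {12, 11, 13, 5, 6};
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);               /* prints: 5 6 11 12 13 */
    printf("\n");
    return 0;
}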
Insertion Sort (Conclusion)
• Insertion sort is much less efficient on large lists than more advanced algorithms such
as heap sort (O(n log n)) or merge sort (O(n log n)). However, insertion sort provides
several advantages:
• Efficient for (quite) small data sets, much better than other quadratic sorting algorithms
such as selection sort and bubble sort.
Merge Sort
• In computer science, merge sort is an efficient, general-purpose, comparison-based sorting algorithm.
• Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
• Conceptually, a merge sort works as follows:
• Divide the unsorted list into n sublists, each containing one element (a list of one
element is considered sorted).
• Repeatedly merge sublists to produce new sorted sublists until there is only one
sublist remaining. This will be the sorted list.
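A self-contained C sketch of this divide-and-conquer scheme in its common recursive top-down form (the names and the shared temp buffer are my own choices):

#include <stdio.h>
#include <string.h>

/* Merge two sorted halves a[lo..mid] and a[mid+1..hi] using buffer tmp. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps the sort stable */
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo + 1) * sizeof(int));
}

/* Divide, sort each half recursively, then merge. */
static void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi)                      /* one element: already sorted */
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    int tmp[7];
    merge_sort(a, tmp, 0, 6);
    for (int i = 0; i < 7; i++)
        printf("%d ", a[i]);           /* prints: 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}

Its recurrence is T(n) = 2T(n/2) + n, so by Case 2 of the master theorem merge sort runs in Θ(n lg n) in both the best and worst case.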