
UNIT-II

SETS and DISJOINT SET UNION

SET:- We assume that a set is a collection of the numbers 1, 2, 3, ..., n. We further assume that the sets being represented are pairwise disjoint. That is, if Si and Sj, i ≠ j, are two sets, then there is no element that is in both Si and Sj.

For example, when n = 12 the elements can be partitioned into three disjoint sets S1 = {2,4,9,10,12}, S2 = {1,3,8,11} and S3 = {5,6,7}.

Consider another example, when n = 10. The elements are divided into three sets S1 = {1,7,8,9}, S2 = {2,5,10} and S3 = {3,4,6}.

One possible tree representation of these three sets is shown below: S1 is a tree rooted at 1 with children 7, 8 and 9; S2 is rooted at 5 with children 2 and 10; S3 is rooted at 3 with children 4 and 6.

FIG-1: Possible tree representation of the sets

Each set is represented as a tree. Note that for each set we have linked the nodes from the children to the parent rather than linking from parent to children.
The operations that can be performed on these sets are:
1. Disjoint Set Union:
If Si and Sj are two disjoint sets, then their union Si U Sj is the set of all elements x such that x is in Si or in Sj. Thus, S1 U S2 = {1,7,8,9,2,5,10}.
Note that after S1 U S2 is created, S1 and S2 no longer exist independently; they are replaced by S1 U S2.
2. Find(i):
Given an element i, find the set containing i. Thus 10 is in the set S2, and 6 is in the set S3.
Union and Find operations:
Union operation:
To find the union of S1 and S2, simply make one of the trees a subtree of the other. S1 U S2 could be represented in either of the following ways.

In one representation the root 5 of S2 is made a child of the root 1 of S1 (so 1 has children 7, 8, 9 and 5, and 5 keeps its children 2 and 10); in the other, the root 1 of S1 is made a child of the root 5 of S2.

FIG-2: Possible representations of S1 U S2

With each set name, we keep a pointer to the root of the tree representing that set.
Fig. 3: Data representation for S1, S2 and S3

Instead of set names, we use the roots of the trees in the union and find operations. If we determine that element i is in a tree with root j, and j has a pointer to entry k in the set name table, then the set name is just name[k]. That is, name[1] = S1, name[2] = S2 and name[3] = S3.

If we wish to unite sets Si and Sj, then we wish to unite the trees with roots FindPointer(Si) and FindPointer(Sj). Here FindPointer is a function that takes a set name and determines the root of the tree that represents it. In many applications the set name is just the element at the root. The operation Find(i) now becomes: determine the root of the tree containing element i. The function Union(i,j) requires that the two trees with roots i and j be joined.

Since the set elements are numbered 1 to n, we represent the tree nodes using an array p[1:n], where n is the maximum number of elements. The ith element of this array represents the tree node that contains element i; it gives the parent pointer of the corresponding tree node.

i    [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]
p    -1    5   -1    3   -1    3    1    1    1     5
Figure 4: Array representation of S1, S2 and S3 of Fig. 1
Find(i) is implemented by following the parent indices, starting at i, until we reach a node with parent value -1.

For example, Find(2) starts at 2 and then moves to 2's parent, 5. Since p[5] is negative, we have reached the root.
The operation Union(i,j) is executed as follows: we are given the roots i and j of two trees. The first tree becomes a subtree of the second; the statement p[i] := j accomplishes the union.
Algorithm SimpleUnion(i,j)
{
    p[i] := j;
}
Algorithm SimpleFind(i)
{
    while (p[i] >= 0) do i := p[i];
    return i;
}
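The two routines translate almost directly into executable code. Below is a minimal Python sketch of SimpleUnion and SimpleFind operating on the parent array p of Figure 4; the function names simple_find and simple_union and the test calls are illustrative additions, not part of the original text.

    n = 10
    # Parent array p[1..n]; index 0 is unused so that indices match the text.
    # A value of -1 marks the root of a tree.
    p = [0, -1, 5, -1, 3, -1, 3, 1, 1, 1, 5]   # S1 = {1,7,8,9}, S2 = {2,5,10}, S3 = {3,4,6}

    def simple_find(i):
        # Follow parent pointers from i until a node with value -1 (a root) is reached.
        while p[i] >= 0:
            i = p[i]
        return i

    def simple_union(i, j):
        # i and j must be roots; the tree rooted at i becomes a subtree of j.
        p[i] = j

    print(simple_find(2))                         # 5  (2 is in the set rooted at 5, i.e. S2)
    simple_union(simple_find(1), simple_find(2))  # S1 U S2, now rooted at 5
    print(simple_find(8))                         # 5
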
Unit-II Divide and Conquer

General Method
Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems. Each subproblem must be solved separately, and finally the results of all these subproblems must be combined to obtain the result of the original problem. If the subproblems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied. Often the subproblems resulting from a divide-and-conquer design are of the same type as the original problem; in those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm. Subproblems that are small enough to be solved directly are not divided further. Finally, the solutions of all the subproblems are combined into a solution of the whole problem.
Control Abstraction
By a control abstraction we mean a procedure whose flow of control is clear but
whose primary operations are specified by other procedures whose precise
meanings are left undefined.
Assume that P is a problem to be solved.
1. Algorithm DivideAndConquer(P)
2. {
3.     if Small(P) then return S(P);
4.     else
5.     {
6.         divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
7.         apply DivideAndConquer to each of these subproblems;
8.         return Combine(DivideAndConquer(P1), DivideAndConquer(P2),
               ..., DivideAndConquer(Pk));
9.     }
10. }

Algorithm Control abstraction for Divide and Conquer
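As an illustration only (not part of the original text), the control abstraction can be sketched in Python, with Small, S, divide and Combine supplied as function parameters; the list-summing instantiation below is a made-up example of how the skeleton is filled in.

    # A minimal, generic sketch of the divide-and-conquer control abstraction.
    def d_and_c(P, small, solve, divide, combine):
        if small(P):
            return solve(P)                      # S(P): solve a small instance directly
        subproblems = divide(P)                  # P1, P2, ..., Pk
        return combine([d_and_c(Pi, small, solve, divide, combine) for Pi in subproblems])

    # Example instantiation: summing a list by splitting it in half.
    total = d_and_c(
        [3, 1, 4, 1, 5, 9, 2, 6],
        small=lambda P: len(P) <= 1,
        solve=lambda P: P[0] if P else 0,
        divide=lambda P: [P[:len(P) // 2], P[len(P) // 2:]],
        combine=sum,
    )
    print(total)   # 31
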


If the size of the problem P is n and the sizes of the k subproblems are n1, n2, ..., nk respectively, then the computing time of DivideAndConquer is described by the recurrence relation

    T(n) = g(n)                                   when n is small
    T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise,

where T(n) is the time for DivideAndConquer on an input of size n, g(n) is the time to compute the answer directly for small inputs, and f(n) is the time for dividing P and combining the solutions of the subproblems.

The time complexity of many divide-and-conquer algorithms is given by a recurrence of the form

    T(n) = T(1)                 n = 1
    T(n) = a T(n/b) + f(n)      n > 1,

where a and b are known constants, T(1) is known, and n is a power of b.

DEFECTIVE CHESS BOARD


A defective chessboard is a 2^k x 2^k board of squares with exactly one defective square. Some of the possible defective chessboards, for k < 2, are shown in Figure 3.0.

Figure 3-1 Triominoes with different orientations.

The defective square is shaded. When k = 0, there is only one possible defective chessboard. In general, for any k there are exactly 2^(2k) different defective chessboards, one for each possible position of the defect.


In the defective chessboard problem, the required task is to tile a defective chessboard using triominoes. The triominoes must cover all non-defective squares in such a way that no two triominoes overlap and no triomino covers the defective square. The number of triominoes required is (2^(2k) - 1)/3.
→ when k = 0, no triominoes are required
→ when k = 1, (2^2 - 1)/3 = 1 triomino is required
→ when k = 2, (2^4 - 1)/3 = 15/3 = 5 triominoes are required

Note that the divide-and-conquer method leads to an elegant solution to the defective chessboard problem. It suggests reducing the problem of tiling a 2^k x 2^k defective chessboard to that of tiling smaller defective chessboards. The 2^k x 2^k chessboard is divided into four 2^(k-1) x 2^(k-1) chessboards, as shown in Figure 3.2. When the partitions are created, only one partition contains the defective square, and tiling that board is a smaller instance of the same problem. To convert the remaining three boards into defective boards, we place one triomino at the corner formed by these three. Figure 3.2 shows this placement for the case in which the defect of the original 2^k x 2^k chessboard falls in the upper-left 2^(k-1) x 2^(k-1) board. We apply this partitioning technique recursively to tile the entire 2^k x 2^k defective chessboard; the recursion terminates when the board size has been reduced to 1 x 1.
Figure 3.2: Partitioning a 2^k x 2^k board

Suppose that P is the 2^k x 2^k board to be tiled. Dividing P into smaller instances is accomplished by dividing P into four instances of size 2^(k-1) x 2^(k-1) and placing a triomino at the center of P so as to cover one square in each of the three smaller instances that do not contain the defect. No additional work is needed to combine the smaller instances. A sketch of such a tiling algorithm is given below.
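The following Python sketch is one possible implementation of this strategy; the function name tile, the quadrant bookkeeping and the example board are illustrative additions, not taken from the text. Each triomino is labelled with a distinct number and the defective square is marked with -1.

    def tile(board, top, left, size, def_r, def_c, tile_id=[0]):
        # Tile the size x size sub-board whose top-left corner is (top, left) and
        # whose "defective" square is at (def_r, def_c).
        if size == 1:
            return
        tile_id[0] += 1
        t = tile_id[0]
        half = size // 2
        quads = [(top, left), (top, left + half), (top + half, left), (top + half, left + half)]
        defects = []
        for (r0, c0) in quads:
            if r0 <= def_r < r0 + half and c0 <= def_c < c0 + half:
                defects.append((def_r, def_c))       # this quadrant holds the real defect
            else:
                # Mark this quadrant's corner nearest the board centre; the three
                # marked corners together form one triomino placed at the centre.
                r = r0 + (half - 1 if r0 == top else 0)
                c = c0 + (half - 1 if c0 == left else 0)
                board[r][c] = t
                defects.append((r, c))
        for (r0, c0), (dr, dc) in zip(quads, defects):
            tile(board, r0, c0, half, dr, dc, tile_id)

    k = 2
    n = 2 ** k
    board = [[0] * n for _ in range(n)]
    board[1][2] = -1                                 # the defective square
    tile(board, 0, 0, n, 1, 2)
    for row in board:
        print(row)                                   # 5 triominoes, defect left as -1
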

Time complexity of the defective chessboard algorithm

Let t(k) denote the time taken to tile a 2^k x 2^k defective chessboard. When k = 0, the board has a single square and a constant d amount of time is spent. When k > 0, four recursive calls are made; these calls take 4t(k-1) time, and placing the central triomino takes a constant time c. The recurrence for the time complexity of the defective chessboard algorithm is therefore

    t(k) = d                 k = 0
    t(k) = 4 t(k-1) + c      k > 0

This recurrence is solved using the substitution method:

    t(k) = 4 t(k-1) + c
         = 4[4 t(k-2) + c] + c       = 4^2 t(k-2) + 5c
         = 4^2[4 t(k-3) + c] + 5c    = 4^3 t(k-3) + 21c
         = 4^3[4 t(k-4) + c] + 21c   = 4^4 t(k-4) + 85c
         = 4^4[4 t(k-5) + c] + 85c   = 4^5 t(k-5) + 341c
         ...
         = 4^k t(0) + c(4^(k-1) + ... + 4 + 1)
         = 4^k d + c(4^k - 1)/3

    t(k) = Θ(4^k) = Θ(number of triominoes needed)
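As a quick sanity check (an addition of my own, not part of the text), the recurrence and the closed form can be compared numerically for small k, with arbitrary constants d and c:

    d, c = 1, 1                                    # arbitrary constants for the check

    def t(k):
        # The recurrence t(0) = d, t(k) = 4 t(k-1) + c.
        return d if k == 0 else 4 * t(k - 1) + c

    for k in range(6):
        closed = d * 4 ** k + c * (4 ** k - 1) // 3
        print(k, t(k), closed)                     # the two values agree for every k
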

FINDING THE MAXIMUM AND MINIMUM

The problem is to find the maximum and minimum elements in a set of n elements. This problem can be solved using the divide-and-conquer technique. The time complexity of the problem is measured by the number of element comparisons. More importantly, when the array a[1:n] contains elements such as polynomials, vectors, very large numbers, or strings of characters, the cost of an element comparison is much higher than the cost of the other operations, so the time is determined mainly by the total cost of the element comparisons.

Algorithm MaxMin(a,n,max,min)
{
max:=min:=a[1];
for i:=2 to n do
{
if(a[i]>max) then max:=a[i];
else if(a[i]<min) then min:=a[i];
}
}

Algorithm-1 Finding maximum and minimum elements in the array


The best case occurs when the elements are in increasing order, and the number of element comparisons is n-1. The worst case occurs when the elements are in decreasing order, and the number of element comparisons is 2(n-1).

A divide-and-conquer algorithm for finding the maximum and minimum of an array is obtained as follows. Let the array contain n elements, and let the problem be
    P = (n, a[i], ..., a[j])
Base case: if n = 1 then max = min = a[i].
If n = 2 then the problem can be solved by making one comparison.
If n > 2 then the problem P is divided into smaller subproblems. For example, divide P into the two instances
    P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋]) and
    P2 = (n - ⌊n/2⌋, a[⌊n/2⌋ + 1], ..., a[n]).
After dividing P into P1 and P2, we solve the subproblems by recursively invoking the same divide-and-conquer algorithm. Then
    MAX(P) is the larger of MAX(P1) and MAX(P2), and
    MIN(P) is the smaller of MIN(P1) and MIN(P2).
A good way of keeping track of the recursive calls is to build a tree, adding a node each time a new call is made. For the MaxMin algorithm each node holds four items of information: i, j, max and min.
A recursive algorithm is given below for finding the MAX and MIN of the set of elements {a[i], a[i+1], ..., a[j]}. For sets containing more than two elements, the midpoint is determined and two new subproblems are generated. When the MAX and MIN of these subproblems are determined, the two maxima are compared and the two minima are compared to obtain the solution for the entire set.

Algorithm MaxMin(i, j, max, min)
// a[1:n] is a global array
{
    if (i = j) then max := min := a[i];
    else if (i = j-1) then    // two elements: one comparison
    {
        if (a[i] < a[j]) then
        {
            max := a[j]; min := a[i];
        }
        else
        {
            max := a[i]; min := a[j];
        }
    }
    else
    {
        mid := ⌊(i + j) / 2⌋;
        MaxMin(i, mid, max, min);
        MaxMin(mid+1, j, max1, min1);
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}
Algorithm -4: Recursive algorithm for finding maximum and minimum
elements in the array
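The following runnable Python version of the recursive MaxMin is a sketch only; it uses 0-based indexing and adds a comparison counter (my own addition) to illustrate the analysis that follows.

    # Recursive MaxMin with an element-comparison counter (illustrative sketch).
    def max_min(a, i, j, count):
        if i == j:                          # one element: no comparison
            return a[i], a[i]
        if i == j - 1:                      # two elements: one comparison
            count[0] += 1
            return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
        mid = (i + j) // 2
        max1, min1 = max_min(a, i, mid, count)
        max2, min2 = max_min(a, mid + 1, j, count)
        count[0] += 2                       # compare the two maxima and the two minima
        return max(max1, max2), min(min1, min2)

    a = [22, 13, -5, -8, 15, 60, 17, 31]
    count = [0]
    mx, mn = max_min(a, 0, len(a) - 1, count)
    print(mx, mn, count[0])                 # 60 -8 10   (10 = 3n/2 - 2 for n = 8)
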
Time complexity of the MaxMin algorithm

The number of element comparisons T(n) satisfies the recurrence

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2     n > 2
    T(n) = 1                            n = 2
    T(n) = 0                            n = 1

When n is a power of 2, n = 2^k, this is solved by substitution:

    T(n) = 2T(n/2) + 2
         = 2[2T(n/4) + 2] + 2 = 4T(n/4) + 4 + 2
         = 4[2T(n/8) + 2] + 4 + 2 = 8T(n/8) + 8 + 4 + 2
         ...
         = 2^(k-1) T(2) + (2^(k-1) + ... + 4 + 2)
         = 2^(k-1) + 2^k - 2
         = 3n/2 - 2

So MaxMin uses 3n/2 - 2 element comparisons, compared with 2(n-1) in the worst case of the straightforward algorithm.
MERGE SORT
Merge sort uses the divide-and-conquer technique. Its asymptotic time complexity is O(n log n) in the best, average and worst cases.
The elements are to be sorted into non-decreasing order. Given a sequence of n elements a[1], a[2], ..., a[n], the general idea of merge sort is to split it into two sets a[1], ..., a[⌊n/2⌋] and a[⌊n/2⌋+1], ..., a[n]. Each set is sorted individually, and the resulting sorted sequences are merged to produce a single sorted sequence of n elements.
ALGORITHM MergeSort(low, high):
// a[low : high] is a global array to be sorted.
// Small(P) is true if there is only one element to sort.
{
    if (low < high) then
    {
        // Divide P into subproblems: find where to split the set.
        mid := ⌊(low + high)/2⌋;
        // Solve the subproblems.
        MergeSort(low, mid);
        MergeSort(mid+1, high);
        // Combine the solutions.
        Merge(low, mid, high);
    }
}
Algorithm Merge(low, mid, high)
// Merges the sorted subarrays a[low:mid] and a[mid+1:high] using auxiliary array b.
{
    h := low; i := low; j := mid+1;
    while ((h <= mid) and (j <= high)) do
    {
        if (a[h] <= a[j]) then
        {
            b[i] := a[h]; h := h+1;
        }
        else
        {
            b[i] := a[j]; j := j+1;
        }
        i := i+1;
    }
    if (h > mid) then
        for k := j to high do
        {
            b[i] := a[k]; i := i+1;
        }
    else
        for k := h to mid do
        {
            b[i] := a[k]; i := i+1;
        }
    for k := low to high do
        a[k] := b[k];
}
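For reference, here is a compact runnable Python sketch of the same top-down merge sort; it mirrors the pseudocode above but uses 0-based indexing and a local auxiliary list b (the names are illustrative).

    # Top-down merge sort mirroring the pseudocode above (illustrative sketch).
    def merge_sort(a, low, high):
        if low < high:
            mid = (low + high) // 2
            merge_sort(a, low, mid)
            merge_sort(a, mid + 1, high)
            merge(a, low, mid, high)

    def merge(a, low, mid, high):
        b = []
        h, j = low, mid + 1
        while h <= mid and j <= high:
            if a[h] <= a[j]:
                b.append(a[h]); h += 1
            else:
                b.append(a[j]); j += 1
        b.extend(a[h:mid + 1])          # copy whichever half is left over
        b.extend(a[j:high + 1])
        a[low:high + 1] = b             # copy the merged run back into a

    a = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
    merge_sort(a, 0, len(a) - 1)
    print(a)    # [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
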
Example 1: Consider the array of 10 elements.
a[1 : 10] = (310, 285, 179, 652, 351, 423, 861, 254, 450, 520)
Algorithm MergeSort begins by splitting a[] into two subarrays of size five (a[1:5] and a[6:10]). The elements in a[1:5] are then split into two subarrays of sizes three and two (a[1:3] and a[4:5]). Next the items in a[1:3] are split into subarrays of sizes two and one (a[1:2] and a[3:3]), and the two values in a[1:2] are split a final time into one-element subarrays. Now the merging begins, and the data is in the form
(310 | 285 | 179 | 652, 351 | 423, 861, 254, 450, 520),
where vertical bars indicate the boundaries of subarrays. Elements a[1] and a[2] are merged to yield
(285, 310 | 179 | 652, 351 | 423, 861, 254, 450, 520).
Then a[3] is merged with a[1:2], producing
(179, 285, 310 | 652, 351 | 423, 861, 254, 450, 520).
Next, elements a[4] and a[5] are merged,
(179, 285, 310 | 351, 652 | 423, 861, 254, 450, 520),
and then a[1:3] is merged with a[4:5]:
(179, 285, 310, 351, 652 | 423, 861, 254, 450, 520).
At this point the second recursive call, on a[6:10], begins and proceeds in the same way. Elements a[6] and a[7] are merged, then a[8] is merged with a[6:7]:
(179, 285, 310, 351, 652 | 254, 423, 861 | 450, 520).
Next a[9] and a[10] are merged, and then a[6:8] with a[9:10]:
(179, 285, 310, 351, 652 | 254, 423, 450, 520, 861).
Now there are two sorted sublists, and the final merge produces the fully sorted result
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861).
Figure 1 shows the tree of recursive calls of MergeSort(1,10). The root node is (1,10), with children (1,5) and (6,10); (1,5) splits into (1,3) and (4,5), and (6,10) into (6,8) and (9,10); the splitting continues down to one-element subarrays such as (1,1), (2,2), (6,6) and (7,7).

Fig 1: Tree of calls of MergeSort(1,10)


Fig 1 shows the sequence of recursive calls produced by the merge sort algorithm. The pair of values in each node are the values of the parameters low and high. The splitting process continues until each subarray contains a single element.
Fig 2 is a tree representing the calls to the procedure Merge made by MergeSort. For example, the node containing 1, 2 and 3 represents the merging of a[1:2] with a[3:3].

Among its nodes are (1,1,2) and (6,6,7) at the lowest level, with (1,2,3), (4,4,5), (6,7,8) and (9,9,10) above them.

Fig 2: Tree of calls of Merge
The computing time (time complexity) of MergeSort is described by the recurrence relation

    T(n) = a                  n = 1, a a constant
    T(n) = 2T(n/2) + cn       n > 1, c a constant

When n is a power of 2, n = 2^k, we can solve this equation by repeated substitution:

    T(n) = 2(2T(n/4) + cn/2) + cn
         = 4T(n/4) + 2cn
         = 4(2T(n/8) + cn/4) + 2cn
         = 8T(n/8) + 3cn
         ...
         = 2^k T(1) + kcn
         = an + cn log n

It is easy to see that if 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)). Therefore T(n) = O(n log n). The space complexity of merge sort is 2n locations, since the auxiliary array b is as large as a.

STRASSEN’S MATRIX MULTIPLICATION


Let A and B be two n x n matrices. The product of A and B is C = AB, and C is also an n x n matrix. The (i,j)th element of C is computed as

    C(i,j) = Σ (k = 1 to n) A(i,k) · B(k,j)

The time complexity of this conventional matrix multiplication is O(n^3).


The divide-and-conquer strategy allows us to compute the matrix product with time complexity O(n^2.81).
For simplicity of presentation and computation, assume that n is a power of 2, that is, n = 2^k. In case n is not a power of 2, enough rows and columns of zeros are added to both A and B so that the resulting dimensions are a power of 2.

Imagine that A and B are each partitioned into four square submatrices, each submatrix having dimensions n/2 x n/2. Then the product AB can be computed by using the formula for the product of 2 x 2 matrices:

    | A11  A12 |     | B11  B12 |     | C11  C12 |
    | A21  A22 |  x  | B21  B22 |  =  | C21  C22 |          → 2.1

Then

    C11 = A11 B11 + A12 B21
    C12 = A11 B12 + A12 B22
    C21 = A21 B11 + A22 B21
    C22 = A21 B12 + A22 B22                                  → 2.2
Formulas 2.1 and 2.2 are used directly when n = 2, since then the submatrices are single elements. For n > 2, the elements of C can be computed using matrix multiplication and addition operations applied to matrices of size n/2 x n/2.
Since n is a power of 2, these matrix products can be recursively computed by
the same algorithm.
The recursive process continues until n becomes 2, at which point the products are computed directly.
To multiply A and B using equation 2.2, we need to perform eight multiplications of n/2 x n/2 matrices and four additions of n/2 x n/2 matrices. Two n/2 x n/2 matrices can be added in time cn^2 for some constant c.
The overall computing time T(n) of this divide-and-conquer matrix multiplication is given by the recurrence

    T(n) = b                    n <= 2
    T(n) = 8 T(n/2) + cn^2      n > 2

where b and c are constants. The solution of this recurrence is T(n) = O(n^3).


Hence there is no improvement over the conventional method. Since matrix multiplications are more expensive than matrix additions, we try to reduce the number of multiplications.

We can attempt to reformulate the equations for the Cij so as to have fewer multiplications and possibly more additions. Strassen discovered a way to compute the Cij of 2.2 using only 7 multiplications and 18 additions or subtractions.
Strassen's method involves first computing the seven n/2 x n/2 matrices P, Q, R, S, T, U and V as in 2.3; the Cij are then computed using the formulas in 2.4.
→ P, Q, R, S, T, U and V can be computed using 7 matrix multiplications and 10 matrix additions or subtractions.
→ The Cij require an additional 8 additions or subtractions.
    P = (A11 + A22)(B11 + B22)
    Q = (A21 + A22) B11
    R = A11 (B12 - B22)
    S = A22 (B21 - B11)
    T = (A11 + A12) B22
    U = (A21 - A11)(B11 + B12)
    V = (A12 - A22)(B21 + B22)                     → 2.3

    C11 = P + S - T + V
    C12 = R + T
    C21 = Q + S
    C22 = P + R - Q + U                            → 2.4
The resulting recurrence relation for T(n) is

    T(n) = b                    n <= 2
    T(n) = 7 T(n/2) + an^2      n > 2               → 2.5

where a and b are constants. The solution of this recurrence is

    T(n) = O(n^(log2 7)) ≈ O(n^2.81).

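A direct Python transcription of formulas 2.3 and 2.4, recursing down to 1 x 1 matrices, is sketched below; it assumes n is a power of 2 and uses plain nested lists rather than any matrix library, so it is meant only to illustrate the method.

    # Strassen's recursive matrix multiplication for n a power of 2 (illustrative sketch).
    def add(X, Y):      return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):      return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def strassen(A, B):
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        h = n // 2
        # Split A and B into quadrants A11..A22, B11..B22.
        A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
        A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
        B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
        B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
        # The seven products of formula 2.3.
        P = strassen(add(A11, A22), add(B11, B22))
        Q = strassen(add(A21, A22), B11)
        R = strassen(A11, sub(B12, B22))
        S = strassen(A22, sub(B21, B11))
        T = strassen(add(A11, A12), B22)
        U = strassen(sub(A21, A11), add(B11, B12))
        V = strassen(sub(A12, A22), add(B21, B22))
        # Recombine using formula 2.4.
        C11 = add(sub(add(P, S), T), V)
        C12 = add(R, T)
        C21 = add(Q, S)
        C22 = add(sub(add(P, R), Q), U)
        return [c1 + c2 for c1, c2 in zip(C11, C12)] + [c1 + c2 for c1, c2 in zip(C21, C22)]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(strassen(A, B))   # [[19, 22], [43, 50]]
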