Lecture Notes Chapter 2
Divide & Conquer
Divide and conquer is an algorithm design paradigm based on multi-branched recursion.
A divide and conquer algorithm works by recursively breaking down a problem into two or more
sub-problems of the same or related type until these become simple enough to be solved directly.
The solutions to the sub-problems are then combined to give a solution to the original problem.
1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively.
Base case: If the subproblems are small enough, just solve them by brute force.
3. Combine the subproblem solutions to give a solution to the original problem.
Examples
1- Compute the maximum value in a one-dimensional array using the divide and conquer
technique:
int max(int a, int b)
{
    if (a >= b) return a;
    else return b;
}

int findmax(int a[], int start, int end)
{
    if (end - start == 1) return max(a[start], a[end]);   // two elements: compare directly
    if (start == end) return a[start];                     // base case: single element
    int mid = (start + end) / 2;                           // divide
    return max(findmax(a, start, mid), findmax(a, mid + 1, end));   // conquer and combine
}
2- Compute the summation of one-dimensional array elements using the divide and conquer
technique:
int findsum(int a[], int start, int end)
{
    if (end - start == 1) return a[start] + a[end];   // two elements: add directly
    if (start == end) return a[start];                 // base case: single element
    int mid = (start + end) / 2;                       // divide
    return findsum(a, start, mid) + findsum(a, mid + 1, end);   // conquer and combine
}
3- Binary search using the divide and conquer technique (the array must be sorted):
bool binarySearch(int a[], int start, int end, int x)
{
    if (start <= end)
    {
        int mid = (start + end) / 2;
        if (a[mid] == x) return true;                              // found
        if (x > a[mid]) return binarySearch(a, mid + 1, end, x);   // search right half
        else return binarySearch(a, start, mid - 1, x);            // search left half
    }
    return false;                                                  // empty range: not found
}
4- Sequential search using the divide and conquer technique:
bool seqSearch(int a[], int start, int end, int x)
{
    if (start == end) return a[start] == x;    // base case: single element
    int mid = (start + end) / 2;               // divide
    return seqSearch(a, start, mid, x) || seqSearch(a, mid + 1, end, x);   // search both halves
}
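A small driver showing how the four functions above might be called (the test data is arbitrary, and binary search is run on a sorted copy of the array); it assumes all four functions are defined in the same C++ source file:

#include <iostream>

int main()
{
    int a[] = {3, 8, 1, 9, 4, 7};      // arbitrary unsorted data
    int s[] = {1, 3, 4, 7, 8, 9};      // sorted copy, required by binary search
    int n = 6;

    std::cout << findmax(a, 0, n - 1) << "\n";          // 9
    std::cout << findsum(a, 0, n - 1) << "\n";          // 32
    std::cout << binarySearch(s, 0, n - 1, 7) << "\n";  // 1 (found)
    std::cout << seqSearch(a, 0, n - 1, 5) << "\n";     // 0 (not found)
    return 0;
}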
5- Merge sort algorithm
A sorting algorithm based on divide and conquer. Its worst-case running time has a lower
order of growth than insertion sort. Because we are dealing with subproblems, we state each
subproblem as sorting a subarray 𝐴[𝑝. . 𝑟]. Initially, p = 1 and r = n, but these values change
as we recurse through subproblems.
To sort A[𝒑. . 𝒓]:
Divide by splitting into two subarrays 𝐴[𝑝. . 𝑞] and 𝐴[𝑞 + 1. . 𝑟], where q is the halfway point
of 𝐴[𝑝. . 𝑟].
Conquer by recursively sorting the two subarrays 𝐴[𝑝. . 𝑞] and 𝐴[𝑞 + 1. . 𝑟],
Combine by merging the two sorted subarrays A[p..q] and A[q + 1..r] to produce a single
sorted subarray A[p..r]. To accomplish this step, a procedure MERGE(A, p, q, r) is defined.
The recursion finishes when the subarray has just 1 element so that it’s trivially sorted.
The procedure MERGE-SORT(A, p, r) sorts the subarray A[p..r]; to sort the entire array, the initial call is MERGE-SORT(A, 1, n).

MERGE-SORT(A, p, r)
    if p < r                        // check for base case
        q = ⌊(p + r)/2⌋             // divide
        MERGE-SORT(A, p, q)         // conquer
        MERGE-SORT(A, q + 1, r)     // conquer
        MERGE(A, p, q, r)           // combine

The combine step merges the two sorted halves much like merging two sorted piles of cards: at each basic step we compare the two cards on top of the piles, remove the smaller one, and place it face down onto the output pile; when one input pile becomes empty, we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are comparing just the two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. We place on the bottom of each pile a sentinel card, which contains a special value that we use to simplify the code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile. Since we know in advance that exactly r − p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.
MERGE(A, p, q, r)
1   n1 = q - p + 1
2   n2 = r - q
3   let L[1..n1 + 1] and R[1..n2 + 1] be new arrays
4   for i = 1 to n1
5       L[i] = A[p + i - 1]
6   for j = 1 to n2
7       R[j] = A[q + j]
8   L[n1 + 1] = ∞
9   R[n2 + 1] = ∞
10  i = 1
11  j = 1
12  for k = p to r
13      if L[i] ≤ R[j]
14          A[k] = L[i]
15          i = i + 1
16      else A[k] = R[j]
17          j = j + 1
In detail, the MERGE procedure works as follows. Line 1 computes the length n1
of the subarray A[p..q], and line 2 computes the length n2 of the subarray
A[q + 1..r]. We create arrays L and R ("left" and "right"), of lengths n1 + 1
and n2 + 1, respectively, in line 3; the extra position in each array will hold the
sentinel. The for loop of lines 4–5 copies the subarray A[p..q] into L[1..n1],
and the for loop of lines 6–7 copies the subarray A[q + 1..r] into R[1..n2].
Lines 8–9 put the sentinels at the ends of the arrays L and R. Lines 10–17 perform
the r − p + 1 basic steps: at each step, the smaller of the two exposed elements
of L and R is copied back into A[p..r].
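For reference, a direct C++ translation of this pseudocode is sketched below. It uses 0-based indices and, as a simplifying assumption, INT_MAX as the sentinel (so it expects every array element to be strictly less than INT_MAX).

#include <climits>
#include <cstdio>
#include <vector>

// MERGE: merge the sorted subarrays A[p..q] and A[q+1..r] (0-based, inclusive).
void merge(std::vector<int>& A, int p, int q, int r)
{
    int n1 = q - p + 1;
    int n2 = r - q;
    std::vector<int> L(n1 + 1), R(n2 + 1);
    for (int i = 0; i < n1; ++i) L[i] = A[p + i];
    for (int j = 0; j < n2; ++j) R[j] = A[q + 1 + j];
    L[n1] = INT_MAX;                       // sentinel
    R[n2] = INT_MAX;                       // sentinel
    int i = 0, j = 0;
    for (int k = p; k <= r; ++k) {         // r - p + 1 basic steps
        if (L[i] <= R[j]) A[k] = L[i++];
        else              A[k] = R[j++];
    }
}

// MERGE-SORT: sort A[p..r] (0-based, inclusive).
void merge_sort(std::vector<int>& A, int p, int r)
{
    if (p < r) {                           // base case: one element is already sorted
        int q = (p + r) / 2;               // divide
        merge_sort(A, p, q);               // conquer
        merge_sort(A, q + 1, r);           // conquer
        merge(A, p, q, r);                 // combine
    }
}

int main()
{
    std::vector<int> A = {5, 2, 4, 7, 1, 3, 2, 6};
    merge_sort(A, 0, (int)A.size() - 1);
    for (int x : A) std::printf("%d ", x); // prints: 1 2 2 3 4 5 6 7
    std::printf("\n");
    return 0;
}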
Example of using Merge-Sort Algorithm
Merge Sort Performance
The worst-case time complexity of merge sort is O(n log n), where n is the size of the input.
The recurrence relation is:
T(n) = Θ(1)               for n = 1
T(n) = 2T(n/2) + Θ(n)     for n > 1
The recurrence basically summarizes the merge sort algorithm – Sort two lists of half the
original list’s size and add the n steps taken to merge the resulting two lists.
We can use the recursion tree method to compute the running time.
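A quick sketch of that argument, assuming n is a power of 2 and writing the Θ(n) merge cost as cn for some constant c:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + cn + cn
     = 8T(n/8) + cn + cn + cn
     ...
     = 2^i T(n/2^i) + i·cn

Each level of the recursion tree contributes a total cost of cn, and the recursion bottoms out when n/2^i = 1, i.e. after i = log₂(n) levels. Hence T(n) = n·T(1) + cn·log₂(n) = Θ(n log n).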
6- Karatsuba multiplication
The Karatsuba algorithm is used to perform fast multiplication of two n-digit numbers. The Karatsuba
algorithm is based on the divide and conquer technique and computes the product in less time
than normal multiplication.
The usual multiplication approach takes n² single-digit computations to achieve the final product, since a
multiplication has to be performed between every pair of digit positions in the two numbers. Then, the sub-
products are added to obtain the final product. This multiplication approach is known as Naive
Multiplication.
To understand this multiplication better, let us consider two 4-digit integers, 1456 and 6533, and
find the product using the Naive approach.
So, 1456 × 6533 =?
In this method of naive multiplication, given that the number of digits in both numbers is 4, 16 single-
digit × single-digit multiplications are performed. Thus, the time complexity of this approach
is O(n²): here it takes 4² = 16 steps to calculate the final product.
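To make the count concrete, here is a minimal C++ sketch of this naive (grade-school) scheme on digit arrays; the function name and the least-significant-digit-first representation are just for illustration. The nested loops perform exactly n × n single-digit multiplications.

#include <cstdio>
#include <vector>

// Naive multiplication of two numbers stored as digit vectors,
// least-significant digit first: every digit of x meets every digit of y.
std::vector<int> naive_multiply(const std::vector<int>& x, const std::vector<int>& y)
{
    std::vector<int> prod(x.size() + y.size(), 0);
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t j = 0; j < y.size(); ++j)
            prod[i + j] += x[i] * y[j];             // one single-digit product
    for (size_t k = 0; k + 1 < prod.size(); ++k) {  // propagate carries
        prod[k + 1] += prod[k] / 10;
        prod[k] %= 10;
    }
    while (prod.size() > 1 && prod.back() == 0)     // drop leading zeros
        prod.pop_back();
    return prod;
}

int main()
{
    std::vector<int> x = {6, 5, 4, 1};   // 1456
    std::vector<int> y = {3, 3, 5, 6};   // 6533
    std::vector<int> p = naive_multiply(x, y);
    for (auto it = p.rbegin(); it != p.rend(); ++it)
        std::printf("%d", *it);          // prints 9512048
    std::printf("\n");
    return 0;
}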
But as the value of n increases, this quadratic cost grows quickly. Hence, the Karatsuba algorithm is
adopted to perform faster multiplications.
Karatsuba Algorithm
The main idea of the Karatsuba algorithm is to reduce the four half-size sub-multiplications that
arise from splitting the two numbers to only three sub-multiplications.
For this algorithm, two n-digit numbers, X and Y, are taken as the input, and their product
P = X·Y is obtained as the output.
Step 1 − In this algorithm, we assume that n is a power of 2.
Step 2 − If n = 1, then we use multiplication tables to find P = XY.
Step 3 − If n > 1, each n-digit number is split in half and represented using the formulae:
X = 10^(n/2) X1 + X0
Y = 10^(n/2) Y1 + Y0
where X1, X0, Y1, Y0 each have n/2 digits.
Step 4 − Take a variable Z1 = Z3 − Z2 − Z0, where
Z2 = X1Y1,
Z0 = X0Y0,
Z3 = (X1 + X0)(Y1 + Y0),
so that Z1 = X1Y0 + X0Y1.
Step 5 − Then, the product P is obtained by substituting these values into the formula:
P = 10^n (Z2) + 10^(n/2) (Z1) + Z0
  = 10^n (X1Y1) + 10^(n/2) (X1Y0 + X0Y1) + X0Y0.
Step 6 − Recursively call the algorithm on the sub-problems (X1, Y1), (X0, Y0), and
(X1 + X0, Y1 + Y0); the three returned products give Z2, Z0, and Z3, respectively, as in the worked example below.
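For example, applying these formulae to the earlier numbers 1456 and 6533 (n = 4, so the split is at 10^2):

X1 = 14, X0 = 56, Y1 = 65, Y0 = 33
Z2 = X1Y1 = 14 × 65 = 910
Z0 = X0Y0 = 56 × 33 = 1848
Z3 = (X1 + X0)(Y1 + Y0) = 70 × 98 = 6860
Z1 = Z3 − Z2 − Z0 = 6860 − 910 − 1848 = 4102
P = 10^4 (910) + 10^2 (4102) + 1848 = 9,100,000 + 410,200 + 1,848 = 9,512,048

Only three half-size multiplications (Z2, Z0, Z3) are needed instead of the four products X1Y1, X1Y0, X0Y1, X0Y0.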
Karatsuba Pseudocode
procedure karatsuba(X, Y)
if (X < 10) or (Y < 10)
return X*Y
/* calculates the size of the numbers */
m = max(size_base10(X), size_base10(Y))
m2 = m/2
/* split the digit sequences about the middle */
X1, X0 = split_at(X, m2)
Y1, Y0 = split_at(Y, m2)
/* 3 calls made to numbers approximately half the size */
Z2 = karatsuba(X1, Y1)
Z3 = karatsuba((X0 + X1) , (Y0 + Y1))
Z0 = karatsuba(X0 , Y0)
return (Z2*10^(2*m2)) + ((Z3-Z2-Z0)*10^(m2)) + (Z0)
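A compilable C++ sketch of this pseudocode is given below. For simplicity it is restricted to inputs whose product fits in a 64-bit integer; split_at is realized with division and remainder by 10^m2, and the helper digits10 (an illustrative name) plays the role of size_base10.

#include <algorithm>
#include <cstdio>

// Number of decimal digits of a non-negative integer (at least 1);
// corresponds to size_base10 in the pseudocode.
static int digits10(long long v)
{
    int d = 1;
    while (v >= 10) { v /= 10; ++d; }
    return d;
}

// Karatsuba multiplication following the pseudocode above
// (valid only while all intermediate values fit in 64 bits).
long long karatsuba(long long X, long long Y)
{
    if (X < 10 || Y < 10)                  // base case: a single-digit factor
        return X * Y;

    int m  = std::max(digits10(X), digits10(Y));
    int m2 = m / 2;
    long long p = 1;
    for (int i = 0; i < m2; ++i) p *= 10;  // p = 10^m2

    long long X1 = X / p, X0 = X % p;      // split_at(X, m2)
    long long Y1 = Y / p, Y0 = Y % p;      // split_at(Y, m2)

    long long Z2 = karatsuba(X1, Y1);
    long long Z0 = karatsuba(X0, Y0);
    long long Z3 = karatsuba(X0 + X1, Y0 + Y1);
    long long Z1 = Z3 - Z2 - Z0;           // = X1*Y0 + X0*Y1

    return Z2 * p * p + Z1 * p + Z0;       // 10^(2*m2)*Z2 + 10^(m2)*Z1 + Z0
}

int main()
{
    std::printf("%lld\n", karatsuba(1456LL, 6533LL));   // prints 9512048
    return 0;
}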
Analysis
The Karatsuba algorithm is a recursive algorithm since it calls smaller instances of itself during
execution. According to the pseudocode, it calls itself thrice on n/2-digit numbers to achieve the
final product of two n-digit numbers.
Now, if T(n) represents the number of single-digit multiplications required to perform n-digit
multiplication (the linear-time additions and shifts do not change the final bound), we can write the following:
T(n) = 3T(n/2)
Applying T(n/2) = 3T(n/4) in the above equation, we get:
T(n) = 9T(n/4)
T(n) = 27T(n/8)
T(n) = 81T(n/16)
Etc.
In general, after i levels of recursion, T(n) = 3^i T(n/2^i). The recursion stops when n/2^i = 1,
i.e. when i = log₂(n), which gives T(n) = 3^(log₂ n) T(1) = n^(log₂ 3) T(1).
Using any appropriate method (the master theorem or the recursion tree method), we can confirm that
the time complexity of the Karatsuba algorithm for fast multiplication is O(n^(log₂ 3)) ≈ O(n^1.585).