Unit 4-Decrease and Conquer & Divide and Conquer

The decrease-and-conquer technique exploits the relationship between solving a problem for an instance of a given size and solving it for a smaller instance. It works by recursively solving smaller instances until the base case is reached. Two common variations are decrease-by-one, where the instance size is reduced by one on each iteration, and decrease-by-a-constant, where the size is reduced by a constant amount. Example algorithms that use this approach include insertion sort, which sorts an array by inserting elements into their correct position in a sorted sublist, and topological sorting, which linearly orders vertices in a directed acyclic graph.


DECREASE AND CONQUER

The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance. Once such a relationship is established, it can be exploited either top down or bottom up.

In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one (Figure below), although other constant size reductions do happen occasionally.

Consider, as an example, the exponentiation problem of computing a^n where a ≠ 0 and n is a nonnegative integer. The relationship between a solution to an instance of size n and an instance of size n − 1 is obtained by the obvious formula a^n = a^(n−1) · a. So the function f(n) = a^n can be computed either "top down" by using its recursive definition or "bottom up" by multiplying 1 by a, n times.
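Both computation orders can be sketched in Python (an illustrative sketch, not part of the original notes):

```python
def power_top_down(a, n):
    """Top down: uses the recursive definition a^n = a^(n-1) * a, with a^0 = 1."""
    if n == 0:
        return 1
    return power_top_down(a, n - 1) * a


def power_bottom_up(a, n):
    """Bottom up: multiplies 1 by a, n times."""
    result = 1
    for _ in range(n):
        result *= a
    return result
```

Both versions perform n multiplications; the recursive one additionally uses n stack frames.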

BHARATESH COLLEGE OF COMPUTER APPLICATIONS 1



Insertion Sort:
Insertion sort sorts a given array A[0..n − 1] using the decrease-by-one technique. We assume that the smaller problem of sorting the array A[0..n − 2] has already been solved to give us a sorted array of size n − 1: A[0] ≤ . . . ≤ A[n − 2]. All we need is to find an appropriate position for A[n − 1] among the sorted elements and insert it there. This is usually done by scanning the sorted subarray from right to left until the first element smaller than or equal to A[n − 1] is encountered, and then inserting A[n − 1] right after that element. The resulting algorithm is called straight insertion sort or simply insertion sort.

Here is pseudocode of this algorithm.

ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
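The pseudocode translates directly into Python (a sketch for illustration):

```python
def insertion_sort(a):
    """Sorts list a in place in nondecreasing order (decrease-by-one)."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        # Shift elements greater than v one position to the right.
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v  # insert v into its correct position
    return a
```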

Example:


The basic operation of the algorithm is the key comparison A[j] > v. The number of key comparisons in this algorithm obviously depends on the nature of the input. In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i − 1, . . . , 0. Since v = A[i], it happens if and only if A[j] > A[i] for j = i − 1, . . . , 0. For the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . , A[n − 2] > A[n − 1] (for i = n − 1). In other words, the worst-case input is an array of strictly decreasing values.
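Summing the comparisons over all iterations of the outer loop gives the worst-case count:

```latex
C_{\mathrm{worst}}(n) = \sum_{i=1}^{n-1} \sum_{j=0}^{i-1} 1
                      = \sum_{i=1}^{n-1} i
                      = \frac{(n-1)n}{2} \in \Theta(n^2)
```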

Best Case: Ω(n)

Average Case: Θ(n²)

Worst Case: O(n²)

Topological Sorting:
Topological sorting of a directed acyclic graph (DAG) is a linear ordering of its vertices such that for every directed edge u → v, vertex u comes before v in the ordering. A directed cycle in a digraph is a sequence of three or more of its vertices that starts and ends with the same vertex and in which every vertex is connected to its immediate predecessor by an edge directed from the predecessor to the successor. A DFS traversal of a digraph with a directed cycle must encounter a back edge; conversely, if a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for directed acyclic graph.

Output: 5 4 2 3 1 0

Explanation: The first vertex in a topological sorting is always a vertex with an in-degree of 0 (a vertex with no incoming edges). A topological sorting of the example graph is "5 4 2 3 1 0". There can be more than one topological sorting for a graph; another topological sorting of the same graph is "4 5 2 3 1 0".

Approach:
• Create a stack to store the nodes.
• Initialize a visited array of size N to keep a record of visited nodes.


• Run a loop from 0 till N:
  • If the node is not marked True in the visited array, call the recursive function for topological sort, which performs the following steps:
    • Mark the current node as True in the visited array.
    • Run a loop over all the nodes adjacent to the current node:
      • If the node is not marked True in the visited array, recursively call the topological sort function on it.
    • Push the current node onto the stack.
• Print all the elements in the stack.
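The steps above can be sketched in Python. The adjacency list below is an assumed six-vertex DAG consistent with the sample output "5 4 2 3 1 0" (the original graph figure is not reproduced in these notes):

```python
def topological_sort(n, adj):
    """DFS-based topological sort of a DAG with vertices 0..n-1.

    adj[u] lists the vertices v with a directed edge u -> v.
    """
    visited = [False] * n
    stack = []  # holds vertices in reverse topological order

    def dfs(u):
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                dfs(v)
        stack.append(u)  # push u only after all its descendants

    for u in range(n):
        if not visited[u]:
            dfs(u)
    return stack[::-1]  # reverse the stack to read off the ordering


# Assumed example graph: edges 5->2, 5->0, 4->0, 4->1, 2->3, 3->1.
adj = {0: [], 1: [], 2: [3], 3: [1], 4: [0, 1], 5: [2, 0]}
print(topological_sort(6, adj))  # -> [5, 4, 2, 3, 1, 0]
```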

DIVIDE AND CONQUER

Introduction
Divide-and-conquer algorithms work according to the following general plan:

1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the original problem.

The divide-and-conquer technique is diagrammed in Figure 5.1, which depicts the case of
dividing a problem into two smaller subproblems, by far the most widely occurring case (at
least for divide-and-conquer algorithms designed to be executed on a single-processor
computer).


Merge Sort
It sorts a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.

ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B[0..⌊n/2⌋ − 1])
    Mergesort(C[0..⌈n/2⌉ − 1])
    Merge(B, C, A) //see below

The merging of two sorted arrays can be done as follows. Two pointers (array
indices) are initialized to point to the first elements of the arrays being merged.
The elements pointed to are compared, and the smaller of them is added to a new array being constructed; after that, the index of the smaller element is incremented
to point to its immediate successor in the array it was copied from. This operation
is repeated until one of the two given arrays is exhausted, and then the remaining
elements of the other array are copied to the end of the new array.

ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else
    copy B[i..p − 1] to A[k..p + q − 1]
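The two routines above can be rendered in Python as follows (an illustrative sketch that returns a new list rather than sorting in place):

```python
def merge_sort(a):
    """Returns a new list with the elements of a in nondecreasing order."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    b = merge_sort(a[:mid])  # sort the first half
    c = merge_sort(a[mid:])  # sort the second half
    return merge(b, c)


def merge(b, c):
    """Merges two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            result.append(b[i])
            i += 1
        else:
            result.append(c[j])
            j += 1
    result.extend(b[i:])  # at most one of these two is nonempty
    result.extend(c[j:])
    return result
```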


Time Complexity of Merge Sort:

Best Case: Ω(n log n)

Average Case: Θ(n log n)

Worst Case: O(n log n)
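These bounds follow from the recurrence for the number of key comparisons when n is a power of 2 (merging two sorted halves takes at most n − 1 comparisons):

```latex
C(n) = 2\,C(n/2) + (n - 1), \quad C(1) = 0,
\qquad C_{\mathrm{worst}}(n) = n \log_2 n - n + 1 \in \Theta(n \log n)
```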


Quick Sort:

Quicksort is the other important sorting algorithm that is based on the divide-and-conquer approach. Unlike mergesort, which divides its input elements according to their position in the array, quicksort divides them according to their value. A partition is an arrangement of the array's elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it.

After a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays to the left and to the right of A[s] independently.

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])

ALGORITHM Partition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
// as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as
// this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
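A Python rendering of the same scheme (a sketch; the scans are written with explicit bounds checks since Python has no until-loop):

```python
def quicksort(a, l=0, r=None):
    """Sorts a[l..r] in place using the first element as the pivot."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)  # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
    return a


def partition(a, l, r):
    """Hoare-style partition of a[l..r]; returns the split position."""
    p = a[l]
    i, j = l + 1, r
    while True:
        while i <= r and a[i] < p:  # left-to-right scan stops at a[i] >= p
            i += 1
        while a[j] > p:             # right-to-left scan stops at a[j] <= p
            j -= 1                  # (a[l] == p keeps j from running off)
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]     # exchange and resume both scans
        i += 1
        j -= 1
    a[l], a[j] = a[j], a[l]         # place the pivot in its final position
    return j
```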


We start by selecting a pivot—an element with respect to whose value we are going to divide
the subarray. We will now scan the subarray from both ends, comparing the subarray’s
elements to the pivot. The left-to-right scan, denoted below by index pointer i, starts with the
second element. Since we want elements smaller than the pivot to be in the left part of the
subarray, this scan skips over elements that are smaller than the pivot and stops upon
encountering the first element greater than or equal to the pivot. The right-to-left scan,
denoted below by index pointer j, starts with the last element of the subarray. Since we want
elements larger than the pivot to be in the right part of the subarray, this scan skips over
elements that are larger than the pivot and stops on encountering the first element smaller
than or equal to the pivot.

After both scans stop, three situations may arise, depending on whether or not the scanning indices have crossed. If scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j, respectively.

If the scanning indices have crossed over, i.e., i > j, we will have partitioned the subarray
after exchanging the pivot with A[j ]:

Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, the value they are pointing to must be equal to p. Thus, we have the subarray partitioned, with the split position s = i = j:

We can combine the last case with the case of crossed-over indices (i > j ) by exchanging the
pivot with A[j ] whenever i ≥ j .


Example:

Best Case: Ω(n log n)

Average Case: Θ(n log n)

Worst Case: O(n²)


Binary Search:
Binary search is an efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m], and for the second half if K > A[m].

As an example, let us apply binary search to searching for K = 70 in the array

The iterations of the algorithm are given in the following table:

ALGORITHM BinarySearch(A[0..n − 1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n − 1] sorted in ascending order and
// a search key K
//Output: An index of the array's element that is equal to K
// or −1 if there is no such element
l ← 0; r ← n − 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m − 1
    else l ← m + 1
return −1
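A direct Python version; the sample array below is illustrative (the array figure from the K = 70 example is not reproduced in these notes):

```python
def binary_search(a, k):
    """Nonrecursive binary search; returns an index of k in sorted list a,
    or -1 if k is not present."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2  # floor of (l + r) / 2
        if k == a[m]:
            return m
        elif k < a[m]:
            r = m - 1     # continue in the first half
        else:
            l = m + 1     # continue in the second half
    return -1


a = [3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98]
print(binary_search(a, 70))  # -> 7
```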


Best Case: Ω(1)

Average Case: Θ(log n)

Worst Case: O(log n)

Binary Tree Traversals and Related Properties

A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and
two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root.
Since the definition itself divides a binary tree into two smaller structures of the same type,
the left subtree and the right subtree, many problems about binary trees can be solved by
applying the divide-and-conquer technique.

Example:

Consider a recursive algorithm for computing the height of a binary tree. The height is defined as the length of the longest path from the root to a leaf. Hence, it can be computed as the maximum of the heights of the root's left and right subtrees plus 1. (We have to add 1 to account for the extra level of the root.) Also note that it is convenient to define the height of the empty tree as −1. Thus, we have the following recursive algorithm.

ALGORITHM Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max{Height(TL), Height(TR)} + 1
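With the empty tree represented as None and a nonempty tree as a tuple (value, left, right), the algorithm is a few lines of Python (an illustrative representation, not from the notes):

```python
def height(t):
    """Height of a binary tree; the empty tree (None) has height -1."""
    if t is None:
        return -1
    _, left, right = t
    return max(height(left), height(right)) + 1
```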


The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder. All three traversals visit nodes of a binary tree
recursively, i.e., by visiting the tree’s root and its left and right subtrees. They differ only by
the timing of the root’s visit:
In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).

Example for preorder, inorder, postorder traversals.
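With a binary tree represented as None (empty) or a tuple (value, left, right), the three traversals differ only in where the root is visited; the example tree below is assumed for illustration:

```python
def preorder(t, out):
    if t is not None:
        value, left, right = t
        out.append(value)      # visit root first
        preorder(left, out)
        preorder(right, out)


def inorder(t, out):
    if t is not None:
        value, left, right = t
        inorder(left, out)
        out.append(value)      # visit root between the subtrees
        inorder(right, out)


def postorder(t, out):
    if t is not None:
        value, left, right = t
        postorder(left, out)
        postorder(right, out)
        out.append(value)      # visit root last


# Assumed example tree: root 1, left child 2 (with children 4 and 5),
# right child 3.
t = (1, (2, (4, None, None), (5, None, None)), (3, None, None))
```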
