Unit 4 - Decrease and Conquer & Divide and Conquer
Insertion Sort:
In insertion sort we sort a given array A[0..n − 1] using the decrease-by-one technique. Here we
assume that the smaller problem of sorting the array A[0..n − 2] has already been solved to
give us a sorted array of size n − 1: A[0] ≤ . . . ≤ A[n − 2]. We then need to find an appropriate
position for A[n − 1] among the sorted elements and insert it there. This is usually done by
scanning the sorted subarray from right to left until the first element smaller than or equal to
A[n − 1] is encountered, and inserting A[n − 1] right after that element. The resulting algorithm
is called straight insertion sort or simply insertion sort.
Example:
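As a concrete illustration, here is a minimal Python sketch of the algorithm described above
(the function name insertion_sort and the sample array are our own, not from the source):

def insertion_sort(a):
    """Sort list a in place using straight insertion sort."""
    for i in range(1, len(a)):
        v = a[i]                  # element to be inserted into a[0..i-1]
        j = i - 1
        # scan the sorted subarray a[0..i-1] from right to left
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]       # shift the larger element one position right
            j -= 1
        a[j + 1] = v              # insert v right after the first element <= v

a = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(a)
print(a)                          # [17, 29, 34, 45, 68, 89, 90]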
The basic operation of the algorithm is the key comparison A[j] > v. The number of key
comparisons in this algorithm obviously depends on the nature of the input. In the worst case,
A[j] > v is executed the largest number of times, i.e., for every j = i − 1, . . . , 0.
Since v = A[i], it happens if and only if A[j] > A[i] for j = i − 1, . . . , 0. For the worst-case
input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . , A[n − 2] > A[n − 1]
(for i = n − 1). In other words, the worst-case input is an array of strictly decreasing values.
Best Case: Ω(n)
Average Case: Θ(n²)
Worst Case: O(n²)
Topological Sorting:
Topological sorting of a Directed Acyclic Graph (DAG) is a linear ordering of its vertices
such that for every directed edge u → v, vertex u comes before v in the ordering. A directed
cycle in a digraph is a sequence of three or more of its vertices that starts and ends with the
same vertex and in which every vertex is connected to its immediate predecessor by an edge
directed from the predecessor to the successor. If a digraph has a directed cycle, any DFS
forest of it will contain a back edge; conversely, if a DFS forest of a digraph has no back
edges, the digraph is a dag, an acronym for directed acyclic graph. Hence topological sorting
is possible if and only if the digraph is a dag.
Example (the graph itself is not reproduced in the source; from the orderings quoted below, it
is presumably the DAG on vertices 0–5 with edges 5→2, 5→0, 4→0, 4→1, 2→3, and 3→1):
Output: 5 4 2 3 1 0
Explanation: The first vertex in a topological sorting is always a vertex with an in-degree of 0
(a vertex with no incoming edges). One topological sorting of this graph is "5 4 2 3 1 0".
There can be more than one topological sorting for a graph; another valid ordering of the
same graph is "4 5 2 3 1 0".
Approach (DFS-based; a code sketch follows this list):
• Create a stack to store the nodes.
• Initialize a visited array of size N to keep a record of visited nodes.
• For each unvisited node, run DFS from it; push a node onto the stack only after all the
nodes reachable from it have been visited.
• After all nodes are visited, pop the stack one by one: the resulting sequence is a
topological ordering.
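Here is a minimal Python sketch of this DFS-based approach, assuming the graph is given as
an adjacency list (the names topological_sort and dfs, and the example adjacency list, are our
own):

def topological_sort(n, adj):
    """Return one topological ordering of a DAG with vertices 0..n-1.
    adj maps each vertex to the list of vertices it points to."""
    visited = [False] * n
    stack = []                     # vertices in order of DFS finishing time

    def dfs(u):
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                dfs(v)
        stack.append(u)            # push u only after all its descendants

    for u in range(n):
        if not visited[u]:
            dfs(u)
    return stack[::-1]             # reversed finishing order = topological order

# Presumed example graph from the text:
adj = {0: [], 1: [], 2: [3], 3: [1], 4: [0, 1], 5: [2, 0]}
print(topological_sort(6, adj))    # [5, 4, 2, 3, 1, 0]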
Divide and Conquer - Introduction
Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm
is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the
original problem.
The technique most often divides a problem into two smaller subproblems, by far the most
widely occurring case (at least for divide-and-conquer algorithms designed to be executed on
a single-processor computer).
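To make the three steps concrete, here is a small Python sketch (our own illustration, not
from the source) that finds the largest element of an array by divide and conquer:

def max_dc(a, lo, hi):
    """Largest element of a[lo..hi], found by divide and conquer."""
    if lo == hi:                       # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2               # 1. divide into two halves
    left = max_dc(a, lo, mid)          # 2. conquer each half recursively
    right = max_dc(a, mid + 1, hi)
    return left if left >= right else right   # 3. combine the two answers

a = [29, 34, 17, 90, 45]
print(max_dc(a, 0, len(a) - 1))        # 90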
Merge Sort:
It sorts a given array A[0..n − 1] by dividing it into two halves
A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then
merging the two smaller sorted arrays into a single sorted one.
The merging of two sorted arrays can be done as follows. Two pointers (array
indices) are initialized to point to the first elements of the arrays being merged.
The elements pointed to are compared, and the smaller of them is added to a new
array being constructed; after that, the index of the smaller element is incremented
to point to its immediate successor in the array it was copied from. This operation
is repeated until one of the two given arrays is exhausted, and then the remaining
elements of the other array are copied to the end of the new array.
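A minimal Python sketch of this scheme (the function names mergesort and merge are ours):

def merge(b, c):
    """Merge two sorted lists b and c into one sorted list."""
    result = []
    i = j = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:               # copy the smaller front element
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    result.extend(b[i:])               # one list is exhausted:
    result.extend(c[j:])               # copy the rest of the other
    return result

def mergesort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2                  # split into two halves
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))

print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]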
Best Case: Ω(n log n)
Average Case: Θ(n log n)
Worst Case: O(n log n)
Quicksort:
Quicksort is the other important sorting algorithm based on the divide-and-conquer
approach. Unlike mergesort, which divides its input elements according to their
position in the array, quicksort divides them according to their value. A partition is an
arrangement of the array’s elements so that all the elements to the left of some element A[s]
are less than or equal to A[s], and all the elements to the right of A[s] are greater than or
equal to it:
after a partition is achieved, A[s] will be in its final position in the sorted array, and we can
continue sorting the two subarrays to the left and to the right of A[s] independently
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
//       indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])

ALGORITHM Partition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element
//       as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right
//       indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as
//       this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
We start by selecting a pivot—an element with respect to whose value we are going to divide
the subarray. We will now scan the subarray from both ends, comparing the subarray’s
elements to the pivot. The left-to-right scan, denoted below by index pointer i, starts with the
second element. Since we want elements smaller than the pivot to be in the left part of the
subarray, this scan skips over elements that are smaller than the pivot and stops upon
encountering the first element greater than or equal to the pivot. The right-to-left scan,
denoted below by index pointer j, starts with the last element of the subarray. Since we want
elements larger than the pivot to be in the right part of the subarray, this scan skips over
elements that are larger than the pivot and stops on encountering the first element smaller
than or equal to the pivot.
After both scans stop, three situations may arise, depending on whether or not the scanning
indices have crossed. If scanning indices i and j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j,
respectively.
If the scanning indices have crossed over, i.e., i > j, we will have partitioned the subarray
after exchanging the pivot with A[j].
Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, the value
they are pointing to must be equal to p. Thus, we have the subarray partitioned, with the split
position s = i = j.
We can combine the last case with the case of crossed-over indices (i > j) by exchanging the
pivot with A[j] whenever i ≥ j.
Example:
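As a concrete illustration, here is a minimal Python rendering of the pseudocode above (the
function names mirror the pseudocode; the test driver at the bottom is our own):

def partition(a, l, r):
    """Partition a[l..r] around the pivot a[l], Hoare-style; return split index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:     # left-to-right scan: stop at A[i] >= p
            i += 1
        j -= 1
        while a[j] > p:                # right-to-left scan: stop at A[j] <= p
            j -= 1
        if i >= j:                     # indices crossed (or met): done scanning
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]            # put the pivot in its final position
    return j

def quicksort(a, l=0, r=None):
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)         # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a)
print(a)                               # [1, 2, 3, 4, 5, 7, 8, 9]

Unlike the pseudocode, the sketch checks i ≥ j before swapping, so no undo swap is needed;
it also bounds the left-to-right scan so that i cannot run past r when the pivot is the largest
element of the subarray.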
Best Case: Ω(n log n)
Average Case: Θ(n log n)
Worst Case: O(n²)
Binary Search:
Binary search is an efficient algorithm for searching in a sorted array. It
works by comparing a search key K with the array's middle element A[m]. If they
match, the algorithm stops; otherwise, the same operation is repeated recursively
for the first half of the array if K < A[m], and for the second half if K > A[m].
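A minimal iterative Python sketch (the function name binary_search is ours; it returns the
index of K in the array, or -1 if K is absent):

def binary_search(a, k):
    """Search for key k in sorted list a; return its index or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        m = (lo + hi) // 2             # middle element
        if a[m] == k:
            return m                   # match: the algorithm stops
        elif k < a[m]:
            hi = m - 1                 # continue in the first half
        else:
            lo = m + 1                 # continue in the second half
    return -1                          # key is not in the array

a = [3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98]
print(binary_search(a, 70))            # 7
print(binary_search(a, 10))            # -1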
Best Case: Ω(1)
Average Case: Θ(log n)
Worst Case: O(log n)
Binary Tree Traversals:
A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and
two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root.
Since the definition itself divides a binary tree into two smaller structures of the same type,
the left subtree and the right subtree, many problems about binary trees can be solved by
applying the divide-and-conquer technique.
Example:
ALGORITHM Height(T )
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max{Height(TL), Height(TR)} + 1
The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder. All three traversals visit nodes of a binary tree
recursively, i.e., by visiting the tree’s root and its left and right subtrees. They differ only by
the timing of the root’s visit:
In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
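A minimal Python sketch of the three traversals (the Node class, the function names, and the
sample tree are ours):

class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def preorder(t):
    if t is None:
        return []
    return [t.item] + preorder(t.left) + preorder(t.right)    # root first

def inorder(t):
    if t is None:
        return []
    return inorder(t.left) + [t.item] + inorder(t.right)      # root in the middle

def postorder(t):
    if t is None:
        return []
    return postorder(t.left) + postorder(t.right) + [t.item]  # root last

#        a
#       / \
#      b   c
#     / \
#    d   e
t = Node('a', Node('b', Node('d'), Node('e')), Node('c'))
print(preorder(t))     # ['a', 'b', 'd', 'e', 'c']
print(inorder(t))      # ['d', 'b', 'e', 'a', 'c']
print(postorder(t))    # ['d', 'e', 'b', 'c', 'a']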