ADA Module 3 - Full

Analysis and Design of Algorithms module 3

Uploaded by

rashmi gs

Analysis and Design of Algorithms

TRANSFORM-AND-CONQUER: Balanced Search Trees, Heaps and Heapsort.

SPACE-TIME TRADEOFFS: Sorting by Counting: Comparison Counting Sort, Input Enhancement in String Matching: Horspool’s Algorithm
Transform and Conquer
These methods work as two-stage procedures. First, in the transformation stage, the problem’s instance is modified to an intermediate instance. Then, in the second or conquering stage, the intermediate instance is solved.

problem’s instance → transform → intermediate instance → conquer → solution
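As an illustration of the two stages (an example assumed here, not taken from the slides): the element-uniqueness problem can be solved by presorting, where sorting is the transformation stage and a scan of adjacent elements is the conquering stage.

```python
def all_distinct(a):
    """Return True if all elements of a are distinct (presorting sketch)."""
    b = sorted(a)                    # transformation stage: O(n log n)
    for i in range(len(b) - 1):      # conquering stage: O(n) scan
        if b[i] == b[i + 1]:         # duplicates end up adjacent after sorting
            return False
    return True
```

The brute-force alternative compares all pairs in O(n²); the transformed instance makes the conquering stage linear.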
Balanced Search Trees
Introduction:

• A binary search tree is one of the principal data structures for implementing dictionaries; it is an example of the representation-change technique.

• Elements in the left subtree are smaller than the element in the subtree’s root, and elements in the right subtree are greater than it.

• The time efficiency of searching, insertion, and deletion is in Θ(log n) in the average case, but degrades to Θ(n) in the worst case, when the tree becomes severely unbalanced.

Two approaches are used to preserve the logarithmic efficiency:

1. Instance-Simplification (Self-Balancing Trees)
• AVL Trees: the difference in heights of the left and right subtrees of every node must not exceed 1.
• Red-Black Trees: allow the height of one subtree to be up to twice the height of the other subtree of the same node.
• Rebalancing through Rotations: if an insertion or deletion violates the balance requirement, rotations are used to restore balance.

2. Representation-Change
• 2-3 Trees: each node can have 2 or 3 children.
• 2-3-4 Trees: each node can have 2, 3, or 4 children.
• B-Trees: a generalization allowing a larger number of children per node.
AVL Trees: Definition
An AVL tree is a binary search tree in which the balance factor of every node, defined as the difference between the heights of the node’s left and right subtrees, is either 0, +1, or −1.

Balance Factor = height of left subtree − height of right subtree

If an insertion of a new node makes an AVL tree unbalanced, we perform a rotation. A rotation in an AVL tree is a local transformation of the subtree rooted at a node whose balance factor has become either +2 or −2.

If there are several such nodes, we rotate the tree rooted at the unbalanced node that is the closest to the newly inserted leaf.

Types of rotations:

- Single
  • right rotation, or R-rotation
  • left rotation, or L-rotation
- Double
  • left-right rotation (LR-rotation)
  • right-left rotation (RL-rotation)
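The single rotations can be sketched on a plain binary-tree node. This is a minimal sketch, assuming a hypothetical `Node` helper class that is not part of the slides:

```python
class Node:
    """Hypothetical binary-tree node (key plus left/right children)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    """Single right rotation (R-rotation) of the subtree rooted at y.

    Returns the new subtree root. The mirror-image code gives the L-rotation,
    and the double rotations are compositions of the two single ones.
    """
    x = y.left          # left child becomes the new subtree root
    y.left = x.right    # x's right subtree is re-attached as y's left subtree
    x.right = y         # old root y becomes x's right child
    return x
```

For example, inserting 30, 20, 10 in that order produces a left-leaning chain with balance factor +2 at the root; a single R-rotation at the root restores balance with 20 as the new root.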
Construction of an AVL tree for the list

1) 10, 20, 30, 25, 27, 7, 4, 24

2) 5, 6, 8, 3, 2, 4, 7

3) C, O, M, P, U, T, E, R
2-3 Trees
A 2-3 tree is a tree that can have nodes of two kinds:

2-nodes and 3-nodes.

A 2-node contains a single key K and has two children:


the left child has keys < K, and
the right child has keys > K.

A 3-node contains two ordered keys K1 and K2 (K1 < K2) and has three
children.
The leftmost child has keys < K1,
the middle child has keys between K1 and K2, and
the rightmost child has keys > K2

NOTE: All leaf nodes in a 2-3 tree must be at the same level.
Steps to insert into a 2-3 Tree

• Insert the new key K in a leaf (unless the tree is empty).

• Keep the keys within a node in sorted order.
- If a leaf ends up with 3 keys, promote the middle key to the parent and split the remaining two keys into separate nodes.
- If the node with 3 keys is the root, split it and create a new root containing the middle key.

• Repeat the above steps until all nodes satisfy the properties of 2-3 trees.
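Searching a 2-3 tree follows the key ranges described above: at a 2-node compare against one key, at a 3-node against two. A minimal sketch, representing each node as a hypothetical `(keys, children)` pair (not the slides’ notation):

```python
def search_23(node, k):
    """Search a 2-3 tree for key k.

    A node is a (keys, children) pair: keys is a sorted list of 1 or 2 keys,
    children is a list of child nodes (empty for a leaf).
    """
    if node is None:
        return False
    keys, children = node
    if k in keys:
        return True
    if not children:                     # reached a leaf without finding k
        return False
    i = 0                                # pick the child whose range contains k
    while i < len(keys) and k > keys[i]:
        i += 1
    return search_23(children[i], k)
```

Because all leaves are at the same level, the search path length is the tree’s height, giving logarithmic search time.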
Solve 9, 5, 8, 3, 2, 4, 7

Solve 6, 5, 1, 9, 7, 8, 10
Heaps
A heap can be defined as a binary tree with keys assigned to its nodes, one key per node, provided the following two conditions are met:
1. The shape property: the binary tree is essentially complete (or simply complete), i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance or heap property: the key in each node is greater than or equal to the keys in its children.
Heaps
Examples:

• Heap: a tree that satisfies both conditions.
• Not a heap: a tree that fails to meet the first condition (shape property is violated).
• Not a heap: a tree that fails to meet the second condition (parental dominance is violated).

There are 2 types of heap:
• Maximum heap: a tree in which the value of each node is greater than or equal to the values of its child nodes.
• Minimum heap: a tree in which the value of each node is less than or equal to the values of its child nodes.
Important properties of heaps

1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to ⌊log2 n⌋.

2. The root of a heap always contains its largest element.

3. A node of a heap considered with all its descendants is also a heap.

4. A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion.


Array representation of Heaps

The heap’s elements are stored in positions 1 through n of such an array, leaving H[0] unused. In this representation:
1. The parental node keys will be in the first ⌊n/2⌋ positions of the array, while the leaf keys will occupy the last ⌈n/2⌉ positions.
2. The children of a key in the array’s parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 ≤ i ≤ n) will be in position ⌊i/2⌋.
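The index arithmetic can be sketched directly: the following checker (a helper written for this note, with `is_max_heap` a name chosen here) verifies parental dominance on a 1-based array with H[0] unused, exactly as described above.

```python
def is_max_heap(H):
    """Check the parental dominance condition for a 1-based heap array H.

    H[0] is unused; the heap occupies H[1..n]. Children of position i are
    at 2i and 2i + 1; only positions 1 .. n // 2 are parental.
    """
    n = len(H) - 1
    for i in range(1, n // 2 + 1):       # parental positions only
        for j in (2 * i, 2 * i + 1):     # left and right child indices
            if j <= n and H[i] < H[j]:   # a child may be missing at the end
                return False
    return True
```

Leaf positions need no check: a leaf with all its (zero) descendants is trivially a heap, in line with property 3.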
Construction of Heap
There are two approaches for constructing heap
1. Bottom-up heap construction.
2. Top-down heap construction.
Bottom-up heap construction
The algorithm initializes the essentially complete binary tree with n nodes by placing the keys in the order given and then “heapifies” the tree as follows.
Construction of Heap

1. Starting with the last parental node, the algorithm checks whether the
parental dominance holds for the key in this node.
2. If it does not, the algorithm exchanges the node’s key K with the
larger key of its children and checks whether the parental dominance
holds for K in its new position. This process continues until the
parental dominance for K is satisfied.
3. After completing the “heapification” of the sub-tree rooted at the
current parental node, the algorithm proceeds to do the same for the
node’s immediate predecessor.
4. The algorithm stops after this is done for the root of the tree.
Example:

Figure: Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows show key comparisons verifying the parental dominance.

Solve: 2, 7, 6, 5, 10, 8, 3
Construction of Heap

ALGORITHM HeapBottomUp(H[1..n])
//Constructs a heap from the elements of a given array
for i ← ⌊n/2⌋ downto 1 do            // for all non-leaf (parental) nodes
    k ← i; v ← H[k]; heap ← false
    while not heap and 2 ∗ k ≤ n do
        j ← 2 ∗ k                     // left child
        if j < n then                 // there are two children (right child exists)
            if H[j] < H[j + 1] then j ← j + 1   // if right child is larger
        if v ≥ H[j] then heap ← true            // if the current value v ≥ largest child
        else H[k] ← H[j]; k ← j                 // if the current value v < largest child
    H[k] ← v
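The pseudocode above translates line for line into Python. This is a sketch using a 1-based array with H[0] unused, as in the array representation section; `heap_bottom_up` is a name chosen here.

```python
def heap_bottom_up(H):
    """Heapify H[1..n] bottom-up; H[0] is unused. Modifies H and returns it."""
    n = len(H) - 1
    for i in range(n // 2, 0, -1):        # for all parental (non-leaf) nodes
        k, v = i, H[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k                     # left child
            if j < n and H[j] < H[j + 1]:
                j += 1                    # right child is larger
            if v >= H[j]:
                heap = True               # v dominates its larger child
            else:
                H[k] = H[j]               # sift the larger child up
                k = j
        H[k] = v                          # final position of the sifted key
    return H
```

Running it on the slides’ example list 2, 9, 7, 6, 5, 8 yields the heap 9, 6, 8, 2, 5, 7.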


Efficiency of Bottom-up heap construction algorithm
• Assume that n = 2^k − 1 for some k > 0, so that the heap’s tree is a complete binary tree.
• Let h be the height of the tree. According to the first property of heaps listed above, h = ⌊log2 n⌋, which equals ⌈log2 (n + 1)⌉ − 1 = k − 1 for these specific values of n.
• In the worst case of the heap construction, each key on level i of the tree will travel to the leaf level h.
• Since moving to the next level down requires two comparisons,
  1. one to find the larger child and
  2. the other to determine whether the exchange is required,
  the total number of key comparisons involving a key on level i will be 2(h − i).

Therefore, with 2^i keys on level i, the total number of key comparisons in the worst case is

  C_worst(n) = Σ_{i=0}^{h−1} 2(h − i) · 2^i = 2(n − log2(n + 1)) < 2n,

so a heap of size n can be constructed with fewer than 2n comparisons, i.e., in O(n) time.
In Summary:

• Bottom-Up Heap Construction: efficient, with O(n) time complexity.
• Top-Down Heap Construction: each insertion has O(log n) time complexity.
• Insertion: O(log n) time complexity.
• Deletion: O(log n) time complexity.
Heap sort
This is a two-stage algorithm that works as follows.
 Stage 1 (heap construction): Construct a heap for a given array.
 Stage 2 (maximum deletions): Apply the root-deletion operation
n − 1 times to the remaining heap.
Maximum Key Deletion from a heap
Step 1 Exchange the root’s key with the last key K of the heap.
Step 2 Decrease the heap’s size by 1.
Step 3 “Heapify” the smaller tree by sifting K down the tree exactly as in the bottom-up heap construction algorithm: repeat the exchange operation until the parental dominance condition holds for K in its new position.
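The two stages, heap construction followed by n − 1 maximum deletions, can be sketched in Python as follows (1-based array with H[0] unused; `sift_down` and `heapsort` are names chosen here, not from the slides).

```python
def sift_down(H, k, n):
    """Sift H[k] down until parental dominance holds within H[1..n]."""
    v = H[k]
    while 2 * k <= n:
        j = 2 * k                        # left child
        if j < n and H[j] < H[j + 1]:
            j += 1                       # right child is larger
        if v >= H[j]:
            break                        # dominance holds; stop sifting
        H[k] = H[j]
        k = j
    H[k] = v

def heapsort(H):
    """Sort H[1..n] ascending; returns the sorted keys as a plain list."""
    n = len(H) - 1
    for i in range(n // 2, 0, -1):       # stage 1: bottom-up heap construction
        sift_down(H, i, n)
    for size in range(n, 1, -1):         # stage 2: n - 1 maximum deletions
        H[1], H[size] = H[size], H[1]    # step 1: swap root with last key
        sift_down(H, 1, size - 1)        # steps 2-3: shrink heap, re-heapify
    return H[1:]
```

Each deleted maximum is parked at the end of the array, so the array is sorted in place when stage 2 finishes.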
Heap sort

Sorting the array 2, 9, 7, 6, 5, 8 by heapsort.

1. The time efficiency of heapsort is Θ(n log n) in both the worst and average cases.
2. Thus, heapsort’s time efficiency falls in the same class as that of mergesort. But heapsort is in-place, i.e., it does not require any extra storage.

*** Solve: 2, 7, 6, 5, 10, 8, 3


Introduction to Space and Time Trade-offs
• When time is more important than memory, we can pre-compute a function’s values and store them in a table.
• The idea is to pre-process the problem’s input, in whole or in part, and store the additional information obtained to accelerate solving the problem afterward. This approach is called the input-enhancement technique.
Sorting by Counting
• This is an example application of the input-enhancement technique to the sorting problem.
• The idea is to count, for each element of a list to be sorted, the total number of elements smaller than this element and store the results in a table.
• These counts indicate the positions of the elements in the sorted list. This algorithm is called comparison counting sort.
Example:

ALGORITHM ComparisonCountingSort(A[0..n − 1])
//Sorts an array by comparison counting
for i ← 0 to n − 1 do Count[i] ← 0          // initialize to 0
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do               // each pair is compared once
        if A[i] < A[j] then Count[j] ← Count[j] + 1
        else Count[i] ← Count[i] + 1
for i ← 0 to n − 1 do S[Count[i]] ← A[i]
return S

Time efficiency of this algorithm is:

  C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = n(n − 1)/2 ∈ Θ(n²)   // no. of iterations
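A direct Python rendering of comparison counting sort, as a sketch (the function name is chosen here): Count[i] ends up equal to the number of elements smaller than A[i], which is exactly A[i]’s position in the sorted output.

```python
def comparison_counting_sort(A):
    """Sort A by comparison counting; returns a new sorted list."""
    n = len(A)
    count = [0] * n                      # initialize counts to 0
    for i in range(n - 1):
        for j in range(i + 1, n):        # compare each pair (i, j) once
            if A[i] < A[j]:
                count[j] += 1
            else:
                count[i] += 1
    S = [None] * n
    for i in range(n):
        S[count[i]] = A[i]               # count[i] is A[i]'s final position
    return S
```

The pairwise loop performs n(n − 1)/2 comparisons regardless of input, so despite the extra table the algorithm stays in Θ(n²).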


Input Enhancement in String Matching
Pre-process the pattern to get some information about it, store this
information in a table, and then use this information during an actual
search for the pattern in a given text.
There are two algorithms which use this idea for string matching
1. Knuth-Morris-Pratt algorithm
2. Boyer-Moore algorithm
R. Horspool suggested a simplified version of the Boyer-Moore algorithm.
Horspool’s algorithm
• The algorithm compares the characters of the pattern with the characters of the text from right to left.
• If a mismatch occurs, the pattern needs to be shifted to the right.
• Horspool’s algorithm determines the shift size by looking at the character c of the text aligned against the last character of the pattern.

Four cases can occur during a comparison, and they determine the shift size:

Case 1: If there are no occurrences of c in the pattern, then we can safely shift the pattern by its entire length.
Example:

Case 2: If there are occurrences of c in the pattern but c is not the last character of the pattern, then the shift should align the rightmost occurrence of c in the pattern with c in the text.
Example:

Case 3: If c happens to be the last character in the pattern but there are no occurrences of c among its other m − 1 characters, then this is similar to Case 1 and the pattern should be shifted by the entire pattern’s length m.
Example:

Case 4: If c happens to be the last character in the pattern and there are other occurrences of c among its first m − 1 characters, then the shift is similar to that of Case 2: align the rightmost occurrence of c among the first m − 1 characters with c in the text.
Example:
Horspool’s algorithm to find a pattern in text
Example: find the pattern BARBER in the text JIM_SAW_ME_IN_A_BARBERSHOP.
The shift table is filled using the ShiftTable(P[ ]) algorithm as follows.

Pre-processing the string to find shift size
• We can pre-compute the shift sizes and store them in a table.
• The table will be indexed by all possible characters that can be encountered in a text, including, for natural language texts, the space, punctuation symbols, and other special characters.
• The table’s entries indicate the shift sizes computed by the following rule:

  t(c) = the pattern’s length m, if c is not among the first m − 1 characters of the pattern;
         otherwise, the distance from the rightmost occurrence of c among the first m − 1 characters of the pattern to the pattern’s last character.
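Putting the shift rule and the right-to-left scan together, a minimal Python sketch (function names chosen here; returns the index of the leftmost match, or −1 if there is none):

```python
def shift_table(pattern):
    """Build Horspool's shift table from the first m - 1 pattern characters.

    Characters absent from the table implicitly shift by m (handled via
    dict.get in horspool below).
    """
    m = len(pattern)
    table = {}
    for i in range(m - 1):               # the last character is excluded
        table[pattern[i]] = m - 1 - i    # distance to the pattern's last char
    return table

def horspool(pattern, text):
    """Horspool string matching; returns start index of first match or -1."""
    m, n = len(pattern), len(text)
    table = shift_table(pattern)
    i = m - 1                            # text index aligned with pattern's end
    while i <= n - 1:
        k = 0                            # count matched characters, right to left
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1             # full match found
        i += table.get(text[i], m)       # shift by t(c) for c = text[i]
    return -1
```

For the pattern BARBER the table entries are B = 2, A = 4, R = 3, E = 1, with every other character getting the full length 6, and the search locates BARBER at index 16 of JIM_SAW_ME_IN_A_BARBERSHOP.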
