DAA - Unit-2

Red-Black Trees

Red-black trees: Overview

• Red-black trees are a variation of binary search trees designed to ensure that the tree is balanced.
  » Height is O(lg n), where n is the number of nodes.
• Operations take O(lg n) time in the worst case.
• Published by my PhD supervisor, Leo Guibas, and Bob Sedgewick.
• Easiest of the balanced search trees, so they are used in STL map operations…

redblack - 2
Red-black Tree
• Binary search tree + 1 bit per node: the attribute color, which is either red or black.
• All other attributes of BSTs are inherited: key, left, right, and p.
• All empty trees (leaves) are colored black.
  » We use a single sentinel, nil[T], for all the leaves of red-black tree T, with color[nil] = BLACK.
  » The root's parent is also nil[T].
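
To make this layout concrete, here is a minimal Python sketch of a red-black tree node (a sketch following the slides' naming, not library code): the fields key, left, right, p, and color mirror the attributes above, and a single shared NIL object plays the role of the sentinel nil[T].

RED, BLACK = "RED", "BLACK"

class RBNode:
    def __init__(self, key, color=RED):
        self.key = key
        self.color = color      # either RED or BLACK
        self.left = None        # left child
        self.right = None       # right child
        self.p = None           # parent

# One sentinel stands in for every leaf and for the root's parent (nil[T]).
NIL = RBNode(key=None, color=BLACK)
NIL.left = NIL.right = NIL.p = NIL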

redblack - 3
Red-black Properties
1. Every node is either red or black.
2. The root is black.
3. Every leaf (nil) is black.
4. If a node is red, then both its children are black.
5. For each node, all paths from the node to descendant leaves contain the same number of black nodes.

redblack - 4
Red-black Tree – Example

[Figure: an example red-black tree with keys 26, 17, 41, 30, 47, 38, 50; every leaf pointer goes to the sentinel nil[T].]

Remember: every internal node has two children, even though nil leaves are not usually shown.
redblack - 5
Height of a Red-black Tree
• Height of a node:
  » h(x) = number of edges in a longest path to a leaf.
• Black-height of a node x, bh(x):
  » bh(x) = number of black nodes (including nil[T]) on the path from x to a leaf, not counting x.
• Black-height of a red-black tree is the black-height of its root.
  » By Property 5, black-height is well defined.

redblack - 6
Height of a Red-black Tree

[Figure: the example tree annotated with h and bh at every node; the root 26 has h = 4 and bh = 2.]

• Height of a node: h(x) = number of edges in a longest path to a leaf.
• Black-height of a node: bh(x) = number of black nodes on the path from x to a leaf, not counting x.
• How are they related?
  » bh(x) ≤ h(x) ≤ 2 bh(x)
redblack - 7
Operations on RB Trees
• All operations can be performed in O(lg n) time.
• The query operations, which don't modify the tree, are performed in exactly the same way as they are in BSTs.
• Insertion and Deletion are not straightforward. Why?

redblack - 8
Rotations

[Figure: Left-Rotate(T, x) turns x with right child y into y with left child x; Right-Rotate(T, y) is the inverse. The three subtrees keep their left-to-right order.]

redblack - 9
Rotations
• Rotations are the basic tree-restructuring operation for almost all balanced search trees.
• A rotation takes a red-black tree and a node,
• changes pointers to change the local structure, and
• won't violate the binary-search-tree property.
• Left rotation and right rotation are inverses.

[Figure: the Left-Rotate(T, x) / Right-Rotate(T, y) diagram from the previous slide.]
redblack - 10
Left Rotation – Pseudo-code
Left-Rotate (T, x)
1.  y ← right[x]            // Set y.
2.  right[x] ← left[y]      // Turn y's left subtree into x's right subtree.
3.  if left[y] ≠ nil[T]
4.     then p[left[y]] ← x
5.  p[y] ← p[x]             // Link x's parent to y.
6.  if p[x] = nil[T]
7.     then root[T] ← y
8.     else if x = left[p[x]]
9.             then left[p[x]] ← y
10.            else right[p[x]] ← y
11. left[y] ← x             // Put x on y's left.
12. p[x] ← y

redblack - 11
Rotation
• The pseudo-code for Left-Rotate assumes that
  » right[x] ≠ nil[T], and
  » the root's parent is nil[T].
• A left rotation on x makes x the left child of y, and turns y's left subtree into x's right subtree.
• The pseudocode for Right-Rotate is symmetric: exchange left and right everywhere.
• Time: O(1) for both Left-Rotate and Right-Rotate, since only a constant number of pointers are modified.
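
The pseudocode translates almost line for line into Python. The sketch below assumes the RBNode/NIL layout shown earlier and a hypothetical tree object T with a root attribute; it illustrates the procedure above and is not a drop-in library.

def left_rotate(T, x):
    """Rotate left around x; x.right must not be the NIL sentinel."""
    y = x.right
    x.right = y.left                # turn y's left subtree into x's right subtree
    if y.left is not NIL:
        y.left.p = x
    y.p = x.p                       # link x's parent to y
    if x.p is NIL:
        T.root = y                  # x was the root
    elif x is x.p.left:
        x.p.left = y
    else:
        x.p.right = y
    y.left = x                      # put x on y's left
    x.p = y

right_rotate is obtained by exchanging left and right everywhere.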

redblack - 12
Reminder: Red-black Properties
1. Every node is either red or black.
2. The root is black.
3. Every leaf (nil) is black.
4. If a node is red, then both its children are
black.

5. For each node, all paths from the node to


descendant leaves contain the same number of
black nodes.

redblack - 13
Insertion in RB Trees
• Insertion must preserve all red-black properties.
• Should an inserted node be colored red? Black?
• Basic steps:
  » Use Tree-Insert from BST (slightly modified) to insert a node x into T.
    • Procedure RB-Insert(x).
  » Color the node x red.
  » Fix the modified tree by re-coloring nodes and performing rotations to preserve the RB tree properties.
    • Procedure RB-Insert-Fixup.

redblack - 14
Insertion
RB-Insert(T, z)
1.  y ← nil[T]
2.  x ← root[T]
3.  while x ≠ nil[T]
4.     do y ← x
5.        if key[z] < key[x]
6.           then x ← left[x]
7.           else x ← right[x]
8.  p[z] ← y
9.  if y = nil[T]
10.    then root[T] ← z
11.    else if key[z] < key[y]
12.            then left[y] ← z
13.            else right[y] ← z
14. left[z] ← nil[T]
15. right[z] ← nil[T]
16. color[z] ← RED
17. RB-Insert-Fixup(T, z)

How does it differ from the Tree-Insert procedure of BSTs?
Which of the RB properties might be violated?
Fix the violations by calling RB-Insert-Fixup.

redblack - 15
Insertion – Fixup

redblack - 16
Insertion – Fixup
 Problem: we may have one pair of consecutive
reds where we did the insertion.
 Solution: rotate it up the tree and away…
Three cases have to be handled…
 http://www2.latech.edu/~apaun/RBTreeDemo.htm

redblack - 17
Insertion – Fixup
RB-Insert-Fixup (T, z)
1. while color[p[z]] = RED
2.    do if p[z] = left[p[p[z]]]
3.          then y ← right[p[p[z]]]
4.               if color[y] = RED
5.                  then color[p[z]] ← BLACK        // Case 1
6.                       color[y] ← BLACK           // Case 1
7.                       color[p[p[z]]] ← RED       // Case 1
8.                       z ← p[p[z]]                // Case 1

redblack - 18
Insertion – Fixup
RB-Insert-Fixup(T, z) (Contd.)
9.               else if z = right[p[z]]            // color[y] ≠ RED
10.                      then z ← p[z]              // Case 2
11.                           LEFT-ROTATE(T, z)     // Case 2
12.                      color[p[z]] ← BLACK        // Case 3
13.                      color[p[p[z]]] ← RED       // Case 3
14.                      RIGHT-ROTATE(T, p[p[z]])   // Case 3
15.    else (if p[z] = right[p[p[z]]]) (same as lines 3–14
16.          with "right" and "left" exchanged)
17. color[root[T]] ← BLACK
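
For reference, here is the same fix-up loop as a Python sketch, including the symmetric branch that the pseudocode leaves implicit. It assumes the RBNode/NIL conventions and the left_rotate/right_rotate sketches from earlier in this unit.

def rb_insert_fixup(T, z):
    while z.p.color == RED:
        if z.p is z.p.p.left:
            y = z.p.p.right                 # z's uncle
            if y.color == RED:              # Case 1: recolor, move z up two levels
                z.p.color = BLACK
                y.color = BLACK
                z.p.p.color = RED
                z = z.p.p
            else:
                if z is z.p.right:          # Case 2: rotate z into a left child
                    z = z.p
                    left_rotate(T, z)
                z.p.color = BLACK           # Case 3: recolor, rotate the grandparent
                z.p.p.color = RED
                right_rotate(T, z.p.p)
        else:                               # symmetric: p[z] is a right child
            y = z.p.p.left
            if y.color == RED:
                z.p.color = BLACK
                y.color = BLACK
                z.p.p.color = RED
                z = z.p.p
            else:
                if z is z.p.left:
                    z = z.p
                    right_rotate(T, z)
                z.p.color = BLACK
                z.p.p.color = RED
                left_rotate(T, z.p.p)
    T.root.color = BLACK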

redblack - 19
Correctness
Loop invariant:
 At the start of each iteration of the while loop,
» z is red.
» If p[z] is the root, then p[z] is black.
» There is at most one red-black violation:
• Property 2: z is a red root, or
• Property 4: z and p[z] are both red.

redblack - 20
Correctness – Contd.
• Initialization: the invariant holds at the start of the first iteration (RB-Insert has just colored z red).
• Termination: The loop terminates only if p[z] is black. Hence, property 4 is OK.
  The last line ensures property 2 always holds.
• Maintenance: We drop out when z is the root (since then p[z] is the sentinel nil[T], which is black). When we start the loop body, the only violation is of property 4.
  » There are 6 cases, 3 of which are symmetric to the other 3. We consider the cases in which p[z] is a left child.
  » Let y be z's uncle (p[z]'s sibling).

redblack - 21
Case 1 – uncle y is red

[Figure: grandparent C is black, p[z] = A and uncle y = D are red, and z = B is A's red child. After recoloring, A and D are black, C is red, and C becomes the new z. z is a right child here; similar steps apply if z is a left child.]

• p[p[z]] (z's grandparent) must be black, since z and p[z] are both red and there are no other violations of property 4.
• Make p[z] and y black → now z and p[z] are not both red. But property 5 might now be violated.
• Make p[p[z]] red → restores property 5.
• The next iteration has p[p[z]] as the new z (i.e., z moves up 2 levels).
redblack - 22
Case 2 – y is black, z is a right child

[Figure: p[z] = A is red with red right child z = B and black uncle y = C; Left-Rotate(T, p[z]) makes B the parent of A, so z is now a left child.]

• Left rotate around p[z]; p[z] and z switch roles → now z is a left child, and both z and p[z] are red.
• Takes us immediately to case 3.

redblack - 23
Case 3 – y is black, z is a left child

[Figure: p[z] = B is red with red left child z = A; the grandparent C is black, as is uncle y. After recoloring and Right-Rotate(T, p[p[z]]), B is the black root of the subtree with red children A and C.]

• Make p[z] black and p[p[z]] red.
• Then right rotate on p[p[z]]. Ensures property 4 is maintained.
• No longer have 2 reds in a row.
• p[z] is now black → no more iterations.

redblack - 24
Algorithm Analysis
• O(lg n) time to get through RB-Insert up to the call of RB-Insert-Fixup.
• Within RB-Insert-Fixup:
  » Each iteration takes O(1) time.
  » Each iteration but the last moves z up 2 levels.
  » O(lg n) levels → O(lg n) time.
  » Thus, insertion in a red-black tree takes O(lg n) time.
  » Note: there are at most 2 rotations overall.

redblack - 25
Deletion
• Deletion, like insertion, should preserve all the RB properties.
• The properties that may be violated depend on the color of the deleted node.
  » Red – OK. Why?
  » Black?
• Steps:
  » Do regular BST deletion.
  » Fix any violations of RB properties that may result.

redblack - 26
Deletion
RB-Delete(T, z)
1. if left[z] = nil[T] or right[z] = nil[T]
2.    then y ← z
3.    else y ← TREE-SUCCESSOR(z)
4. if left[y] ≠ nil[T]
5.    then x ← left[y]
6.    else x ← right[y]
7. p[x] ← p[y]              // Do this, even if x is nil[T]

redblack - 27
Deletion
RB-Delete (T, z) (Contd.)
8.  if p[y] = nil[T]
9.     then root[T] ← x
10.    else if y = left[p[y]]
11.            then left[p[y]] ← x
12.            else right[p[y]] ← x
13. if y ≠ z
14.    then key[z] ← key[y]
15.         copy y's satellite data into z
16. if color[y] = BLACK
17.    then RB-Delete-Fixup(T, x)
18. return y

The node passed to the fixup routine is the lone child of the spliced-out node, or the sentinel.

redblack - 28
RB Properties Violation
• If y is black, we could have violations of the red-black properties:
  » Prop. 1: OK.
  » Prop. 2: if y is the root and x is red, then the root has become red.
  » Prop. 3: OK.
  » Prop. 4: violation if p[y] and x are both red.
  » Prop. 5: any path containing y now has 1 fewer black node.

redblack - 29
RB Properties Violation
• Prop. 5: any path containing y now has 1 fewer black node.
  » Correct by giving x an "extra black."
  » Add 1 to the count of black nodes on paths containing x.
  » Now property 5 is OK, but property 1 is not.
  » x is either doubly black (if color[x] = BLACK) or red & black (if color[x] = RED).
  » The attribute color[x] is still either RED or BLACK; there are no new values for the color attribute.
  » In other words, the extra blackness on a node is by virtue of x pointing to the node.
• Remove the violations by calling RB-Delete-Fixup.

redblack - 30
Deletion – Fixup
RB-Delete-Fixup(T, x)
1. while x ≠ root[T] and color[x] = BLACK
2.    do if x = left[p[x]]
3.          then w ← right[p[x]]
4.               if color[w] = RED
5.                  then color[w] ← BLACK           // Case 1
6.                       color[p[x]] ← RED          // Case 1
7.                       LEFT-ROTATE(T, p[x])       // Case 1
8.                       w ← right[p[x]]            // Case 1

redblack - 31
RB-Delete-Fixup(T, x) (Contd.)
               /* x is still left[p[x]] */
9.               if color[left[w]] = BLACK and color[right[w]] = BLACK
10.                 then color[w] ← RED             // Case 2
11.                      x ← p[x]                   // Case 2
12.                 else if color[right[w]] = BLACK
13.                         then color[left[w]] ← BLACK  // Case 3
14.                              color[w] ← RED          // Case 3
15.                              RIGHT-ROTATE(T, w)      // Case 3
16.                              w ← right[p[x]]         // Case 3
17.                      color[w] ← color[p[x]]     // Case 4
18.                      color[p[x]] ← BLACK        // Case 4
19.                      color[right[w]] ← BLACK    // Case 4
20.                      LEFT-ROTATE(T, p[x])       // Case 4
21.                      x ← root[T]                // Case 4
22.    else (same as the then clause with "right" and "left" exchanged)
23. color[x] ← BLACK
redblack - 32
Deletion – Fixup
• Idea: move the extra black up the tree until
  » x points to a red & black node → turn it into a black node, or
  » x points to the root → just remove the extra black, or
  » we can do certain rotations and recolorings and finish.
• Within the while loop:
  » x always points to a nonroot doubly black node.
  » w is x's sibling.
  » w cannot be nil[T], since that would violate property 5 at p[x].
• 8 cases in all, 4 of which are symmetric to the other 4.

redblack - 33
Case 1 – w is red

[Figure: p[x] = B is black, x = A carries the extra black, and sibling w = D is red; after recoloring and Left-Rotate(T, p[x]), D is the subtree root and x's new sibling is the black node C.]

• w must have black children.
• Make w black and p[x] red (because w is red, p[x] couldn't have been red).
• Then left rotate on p[x].
• The new sibling of x was a child of w before the rotation → must be black.
• Go immediately to case 2, 3, or 4.

redblack - 34
Case 2 – w is black, both of w's children are black

[Figure: p[x] = B has color c; x = A is doubly black and sibling w = D is black with black children C and E. Removing one black from x and w and pushing it onto B makes B the new x.]

• Take 1 black off x (→ singly black) and off w (→ red).
• Move that black to p[x].
• Do the next iteration with p[x] as the new x.
• If we entered this case from case 1, then p[x] was red → the new x is red & black → the color attribute of the new x is RED → the loop terminates. The new x is then made black in the last line.
redblack - 35
Case 3 – w is black, w's left child is red, w's right child is black

[Figure: sibling w = D is black with red left child C and black right child E; after recoloring and Right-Rotate(T, w), x's new sibling is the black node C, whose right child D is red.]

• Make w red and w's left child black.
• Then right rotate on w.
• The new sibling w of x is black with a red right child → case 4.

redblack - 36
Case 4 – w is black, w's right child is red

[Figure: p[x] = B has color c, x = A is doubly black, and sibling w = D is black with red right child E; after recoloring and Left-Rotate(T, p[x]), D takes color c, B and E become black, and the extra black on x disappears.]

• Make w take p[x]'s color (c).
• Make p[x] black and w's right child black.
• Then left rotate on p[x].
• Remove the extra black on x (→ x is now singly black) without violating any red-black properties.
• All done. Setting x to the root causes the loop to terminate.
redblack - 37
Analysis
• O(lg n) time to get through RB-Delete up to the call of RB-Delete-Fixup.
• Within RB-Delete-Fixup:
  » Case 2 is the only case in which more iterations occur.
    • x moves up 1 level.
    • Hence, O(lg n) iterations.
  » Each of cases 1, 3, and 4 performs 1 rotation → at most 3 rotations in all.
  » Hence, O(lg n) time.

redblack - 38
Lemma "RB Height"
Consider a node x in an RB tree: the longest descending path from x to a leaf has length h(x), which is at most twice the length of the shortest descending path from x to a leaf.

Proof:
# black nodes on any path from x = bh(x)        (prop. 5)
  ≤ # nodes on the shortest path from x, s(x)   (prop. 1)
But there are no consecutive reds (prop. 4), and we end with a black leaf (prop. 3), so h(x) ≤ 2 bh(x).
Thus, h(x) ≤ 2 s(x).  QED
redblack - 39
Bound on RB Tree Height
• Lemma: the subtree rooted at any node x has ≥ 2^bh(x) − 1 internal nodes.
• Proof: by induction on the height of x.
  » Base case: height h(x) = 0 → x is a leaf → bh(x) = 0.
    The subtree has 2^0 − 1 = 0 internal nodes. ✓
  » Induction step: height h(x) = h > 0 and bh(x) = b.
    • Each child of x has height h − 1 and black-height either b (child is red) or b − 1 (child is black).
    • By the induction hypothesis, each child's subtree has ≥ 2^(b−1) − 1 internal nodes.
    • So the subtree rooted at x has ≥ 2·(2^(b−1) − 1) + 1 = 2^b − 1 internal nodes. (The +1 is for x itself.)
redblack - 40
Bound on RB Tree Height
• Lemma: the subtree rooted at any node x has ≥ 2^bh(x) − 1 internal nodes.
• Lemma 13.1: a red-black tree with n internal nodes has height at most 2 lg(n+1).
• Proof:
  » By the above lemma, n ≥ 2^bh − 1,
  » and since bh ≥ h/2, we have n ≥ 2^(h/2) − 1.
  » → h ≤ 2 lg(n + 1).

redblack - 41
B Tree
Introduction to B-Tree

• B-trees are balanced search trees.
• A node may have more than two children.
• A B-Tree stores all information in the leaves; internal nodes store only keys and child pointers.
• If an internal B-tree node x contains n[x] keys, then x has n[x]+1 children.
Example of B-Tree
Application of B-Tree

B-trees are designed to work on magnetic disks or other direct-access secondary storage devices.
Properties of B-Tree
• A B-Tree T is a rooted tree having the following properties:
1. Every node x has the following fields:
   a. n[x], the number of keys currently stored in node x.
   b. The n[x] keys themselves, stored in non-decreasing order, so that key1[x] ≤ key2[x] ≤ … ≤ keyn[x][x].
   c. leaf[x], a Boolean value that is TRUE if x is a leaf and FALSE if x is an internal node.
2. Each internal node x also contains n[x]+1 child pointers c1[x], c2[x], …, cn[x]+1[x].
3. All leaves have the same depth, which is the tree's height h.
Properties of B-Tree (cont.)
4. There are lower and upper bounds on the number of keys a node can contain. These bounds are expressed in terms of a fixed integer t ≥ 2 called the minimum degree of the B-Tree.
   – Every node other than the root must have at least t-1 keys; hence every internal node other than the root has at least t children. If the tree is nonempty, the root must have at least one key.
   – Every node can contain at most 2t-1 keys; therefore an internal node can have at most 2t children. We say that a node is full if it contains exactly 2t-1 keys.
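
As a concrete reference, here is a minimal Python sketch of the node fields (the names n, keys, c, and leaf follow the slides; storing t on the node is a convenience assumed by this sketch):

class BTreeNode:
    def __init__(self, t, leaf=True):
        self.t = t              # minimum degree (kept on the node for convenience)
        self.leaf = leaf        # TRUE if the node is a leaf
        self.keys = []          # at most 2t-1 keys, in non-decreasing order
        self.c = []             # n+1 child pointers when internal, empty when leaf

    @property
    def n(self):
        return len(self.keys)   # number of keys currently stored

    def is_full(self):
        return self.n == 2 * self.t - 1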
Basic operation on B-tree
• B-TREE-SEARCH: searching for a key in a B-Tree
• B-TREE-INSERT: inserting a key into a B-Tree
• B-TREE-CREATE: creating an empty B-Tree
• B-TREE-DELETE: deleting a key from a B-Tree
B-TREE-SEARCH(x, k) searches the subtree rooted at node x for key k.
Work within one node (scanning its keys): O(t).
Nodes visited: O(h) = O(log_t n), so the search makes O(log_t n) disk accesses and O(t log_t n) key comparisons in total.
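
A sketch of the search in Python over the BTreeNode layout above (0-based indexing; DISK-READ is omitted because this sketch keeps everything in memory):

def b_tree_search(x, k):
    """Return (node, index) locating key k in the subtree rooted at x, or None."""
    i = 0
    while i < x.n and k > x.keys[i]:     # O(t) linear scan within the node
        i += 1
    if i < x.n and k == x.keys[i]:
        return (x, i)
    if x.leaf:
        return None                      # fell off the tree: k is not present
    return b_tree_search(x.c[i], k)      # descend; at most O(log_t n) levels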
Creating an empty B tree
INSERT
B-TREE-INSERT ALGORITHM
B-TREE-INSERT(T, k)
1.  r ← root[T]
2.  if n[r] = 2t-1
3.     then s ← ALLOCATE-NODE()
4.          root[T] ← s
5.          leaf[s] ← FALSE
6.          n[s] ← 0
7.          c1[s] ← r
8.          B-TREE-SPLIT-CHILD(s, 1, r)
9.          B-TREE-INSERT-NONFULL(s, k)
10.    else B-TREE-INSERT-NONFULL(r, k)
B-TREE-SPLIT-CHILD ALGORITHM
B-TREE-SPLIT-CHILD(x, i, y)
1.  z ← ALLOCATE-NODE()
2.  leaf[z] ← leaf[y]
3.  n[z] ← t-1
4.  for j ← 1 to t-1
5.      do keyj[z] ← keyj+t[y]
6.  if not leaf[y]
7.     then for j ← 1 to t
8.             do cj[z] ← cj+t[y]
9.  n[y] ← t-1
10. for j ← n[x]+1 downto i+1
11.     do cj+1[x] ← cj[x]
12. ci+1[x] ← z
13. for j ← n[x] downto i
14.     do keyj+1[x] ← keyj[x]
15. keyi[x] ← keyt[y]
16. n[x] ← n[x]+1
17. DISK-WRITE(y)
18. DISK-WRITE(z)
19. DISK-WRITE(x)
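
The same split as a Python sketch over the BTreeNode class above (0-based lists, disk writes omitted); x is a nonfull node and y = x.c[i] is its full i-th child:

def b_tree_split_child(x, i):
    """Split the full child y = x.c[i] of x around y's median key."""
    t = x.t
    y = x.c[i]
    z = BTreeNode(t, leaf=y.leaf)

    z.keys = y.keys[t:]          # the t-1 largest keys move to the new node z
    if not y.leaf:
        z.c = y.c[t:]            # together with the corresponding t children
        y.c = y.c[:t]

    median = y.keys[t - 1]
    y.keys = y.keys[:t - 1]      # y keeps the t-1 smallest keys

    x.c.insert(i + 1, z)         # z becomes child i+1 of x
    x.keys.insert(i, median)     # the median key moves up into x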
[Figure: an example of B-TREE-SPLIT-CHILD with minimum degree t = 3.]
Deleting from B-Trees
The Concept
• You can delete a key entry from any node.
• Therefore, you must ensure that before/after deletion, the B-Tree maintains its properties.
• When deleting, you have to ensure that a node doesn't get too small (the minimum number of keys is t – 1). We prevent this by combining nodes together.
Deleting a key k
[Figures: the deletion cases – Case 1, Cases 2a and 2c, and Cases 3a and 3b – are illustrated on the accompanying slides.]
Binomial Heaps
DATA STRUCTURES: MERGEABLE HEAPS
• MAKE-HEAP ( )
– Creates & returns a new heap with no elements.
• INSERT (H,x)
– Inserts a node x whose key field has already been
filled into heap H.
• MINIMUM (H)
– Returns a pointer to the node in heap H whose key
is minimum.

CS 473 Lecture X 1
Mergeable Heaps

• EXTRACT-MIN (H)
– Deletes the node from heap H whose key is
minimum. Returns a pointer to the node.

• DECREASE-KEY (H, x, k)
– Assigns to node x within heap H the new value k
where k is smaller than its current key value.

CS 473 Lecture X 2
Mergeable Heaps
• DELETE (H, x)
– Deletes node x from heap H.

• UNION (H1, H2)


– Creates and returns a new heap that contains all
nodes of heaps H1 & H2.
– Heaps H1 & H2 are destroyed by this operation

CS 473 Lecture X 3
Binomial Trees
• A binomial heap is a collection of binomial
trees.
• The binomial tree Bk is an ordered tree defined
recursively
– B0 consists of a single node.
– Bk consists of two binomial trees Bk-1 linked together: the root of one is the leftmost child of the root of the other.
CS 473 Lecture X 4
Binomial Trees

[Figure: Bk formed by linking two copies of Bk-1; the root of one becomes the leftmost child of the root of the other.]
CS 473 Lecture X 5
Binomial Trees

[Figure: the binomial trees B0, B1, B2, B3, B4; the children of Bk's root are the roots of Bk-1, …, B1, B0.]

CS 473 Lecture X 6
Binomial Trees

[Figure: an alternative view of Bk: its root's children are the roots of Bk-1, Bk-2, …, B1, B0.]
CS 473 Lecture X 7
Properties of Binomial Trees (Cont.)

[Figure: the decompositions of Bk-1 and Bk used in proving the properties of binomial trees.]
CS 473 Lecture X 8
Binomial Heaps
A BINOMIAL HEAP H is a set of BINOMIAL
TREES that satisfies the following “Binomial
Heap Properties”
1. Each binomial tree in H is HEAP-ORDERED
• the key of a node is ≥ the key of the parent
• Root of each binomial tree in H contains the
smallest key in that tree.

CS 473 Lecture X 9
Binomial Heaps
2. There is at most one binomial tree in H whose
root has a given degree.

CS 473 Lecture X 10
Binomial Heaps
Example: A binomial heap with n = 13 nodes
[Figure: 13 = <1, 1, 0, 1> in binary, so H consists of B0, B2, and B3. head[H] points to the root list 10 → 1 → 6; the remaining keys 12, 25, 18, 8, 14, 29, 11, 17, 38, 27 fill B2 and B3.]
CS 473 Lecture X 11
Representation of Binomial Heaps
• Each binomial tree within a binomial heap is stored in
the left-child, right-sibling representation
• Each node X contains POINTERS
– p[x] to its parent
– child[x] to its leftmost child
– sibling[x] to its immediately right sibling
• Each node X also contains the field degree[x] which
denotes the number of children of X.

CS 473 Lecture X 12
Representation of Binomial Heaps
• Let x be a node with sibling[x] ≠ NIL
– Degree [sibling [x]]=degree[x]-1
if x is NOT A ROOT
– Degree [sibling [x]] > degree[x]
if x is a root

[Figure: the binomial heap from the previous example, shown in the left-child, right-sibling representation.]
CS 473 Lecture X 13
Operations on Binomial Heaps
CREATING A NEW BINOMIAL HEAP

MAKE-BINOMIAL-HEAP ( )          // RUNNING TIME = Θ(1)
    allocate H
    head[H] ← NIL
    return H
end

CS 473 Lecture X 14
Operations on Binomial Heaps
BINOMIAL-HEAP-MINIMUM (H)
    x ← head[H]
    y ← x
    min ← key[x]
    x ← sibling[x]
    while x ≠ NIL do
        if key[x] < min then
            min ← key[x]
            y ← x
        endif
        x ← sibling[x]
    endwhile
    return y
end

CS 473 Lecture X 15
Uniting two binomial heaps

• The BINOMIAL-HEAP-UNION procedure repeatedly links binomial trees whose roots have the same degree. The following procedure links the Bk-1 tree rooted at node y to the Bk-1 tree rooted at node z; that is, it makes z the parent of y. Node z thus becomes the root of a Bk tree.
BINOMIAL-LINK(y, z)
1 p[y] ← z
2 sibling[y] ← child[z]
3 child[z] ← y
4 degree[z] ← degree[z] + 1
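
In Python, over nodes carrying the p, child, sibling and degree fields of the left-child, right-sibling representation described earlier, the link is just a few pointer assignments (a sketch mirroring the pseudocode above):

def binomial_link(y, z):
    """Make y, the root of one B(k-1) tree, the leftmost child of z, the root of another."""
    y.p = z
    y.sibling = z.child
    z.child = y
    z.degree += 1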
BINOMIAL-HEAP-UNION(H1, H2)
The following procedure unites binomial heaps H1 and H2, returning
the resulting heap. It destroys the representations of H1 and H2
in the process. Besides BINOMIAL-LINK, the procedure uses an
auxiliary procedure BINOMIAL-HEAP-MERGE that merges the
root lists of H1 and H2 into a single linked list that is sorted by
degree into monotonically increasing order.

BINOMIAL-HEAP-UNION(H1, H2)
1 H ← MAKE-BINOMIAL-HEAP()
2 head[H] ← BINOMIAL-HEAP-MERGE(H1, H2)
3 free the objects H1 and H2 but not the lists they point to
4 if head[H] = NIL
5 then return H
6 prev-x ← NIL
7 x ← head[H]
8 next-x ← sibling[x]
9 while next-x ≠ NIL
10     do if (degree[x] ≠ degree[next-x]) or
             (sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
11           then prev-x ← x                        ▹ Cases 1 and 2
12                x ← next-x                        ▹ Cases 1 and 2
13 else if key[x] ≤ key[next-x]
14 then sibling[x] ← sibling[next-x] ▹ Case 3
15 BINOMIAL-LINK(next-x, x) ▹ Case 3
16 else if prev-x = NIL ▹ Case 4
17 then head[H] ← next-x ▹ Case 4
18 else sibling[prev-x] ← next-x ▹ Case 4
19 BINOMIAL-LINK(x, next-x) ▹ Case 4
20 x ← next-x ▹ Case 4
21 next-x ← sibling[x]
22 return H
Uniting Two Binomial Heaps
CASE 1: Occurs when degree [x] ≠ degree [next-x]
[Figure: root list … a → b → c → d … where b is the root of a Bk and c the root of a Bl with l > k; since the degrees differ, the pointers simply advance (prev-x ← x, x ← next-x).]
CS 473 Lecture X 19
Uniting Two Binomial Heaps: Cases
CASE 2: Occurs when x is the first of 3 roots of equal degree
degree [x] = degree [next-x] = degree [sibling[next-x]]
[Figure: root list a → b → c → d where b, c, and d are all roots of Bk trees; the pointers advance past b so that the pair c, d is handled on the next iteration.]
CS 473 Lecture X 20
Uniting Two Binomial Heaps: Cases
CASE 3 & 4: Occur when x is the first of 2 roots of equal degree
degree [x] = degree [next-x] ≠ degree [sibling [next-x]]

• Occur on the next iteration after any case


• Always occur immediately following CASE 2
• Two cases are distinguished by whether x or next-x has the
smaller key
• The root with the smaller key becomes the root of the linked tree

CS 473 Lecture X 21
Uniting Two Binomial Heaps: Cases
CASE 3 & 4 CONTINUED
[Figure: root list a → b → c → d where b and c are roots of Bk trees and d is the root of a Bl with l > k.
 Case 3 (key[b] ≤ key[c]): c is linked under b, which stays on the root list.
 Case 4 (key[b] > key[c]): b is linked under c, which takes b's place on the root list.]
CS 473 Lecture X 22
Inserting a node
• The following procedure inserts node x into binomial heap H ,
assuming that x has already been allocated and key[x] has
already been filled in.
BINOMIAL-HEAP-INSERT(H, x)
1 H′ ← MAKE-BINOMIAL-HEAP()
2 p[x] ← NIL
3 child[x] ← NIL
4 sibling[x] ← NIL
5 degree[x] ← 0
6 head[H′] ← x
7 H ← BINOMIAL-HEAP-UNION(H, H′)
Extracting the node with the minimum key
• The following procedure extracts the node with the minimum key
from binomial heap H and returns a pointer to the extracted
node.
BINOMIAL-HEAP-EXTRACT-MIN(H)
1 find the root x with the minimum key in the root list of
H, and remove x from the root list of H
2 H′ ← MAKE-BINOMIAL-HEAP()
3 reverse the order of the linked list of x's children,
and set head[H′] to point to the head of the resulting
list.
4 H ← BINOMIAL-HEAP-UNION(H, H′)
5 return x
Decreasing a key
• The following procedure decreases the key of a node x in a
binomial heap H to a new value k. It signals an error if k is
greater than x's current key.
BINOMIAL-HEAP-DECREASE-KEY(H, x, k)
1  if k > key[x]
2     then error "new key is greater than current key"
3  key[x] ← k
4  y ← x
5  z ← p[y]
6  while z ≠ NIL and key[y] < key[z]
7      do exchange key[y] ↔ key[z]
8         ▹ If y and z have satellite fields, exchange them, too.
9         y ← z
10        z ← p[y]
[Figure: example of decreasing key 26 to 7; the new key bubbles up toward the root.]
Deleting a key
• It is easy to delete a node x's key and satellite information from
binomial heap H in O(lg n) time. The following implementation
assumes that no node currently in the binomial heap has a key
of -∞.
BINOMIAL-HEAP-DELETE(H, x)
1 BINOMIAL-HEAP-DECREASE-KEY(H, x, -∞)
2 BINOMIAL-HEAP-EXTRACT-MIN(H)
Fibonacci Heaps
• Binomial heaps support the mergeable heap operations (INSERT, MINIMUM, EXTRACT-MIN, UNION, plus DECREASE-KEY and DELETE) in O(lg n) worst-case time.
• Fibonacci heaps support the mergeable heap operations that do not involve deleting an element in O(1) amortized time.

CS XXX Lecture X 1
Fibonacci Heaps

• Fibonacci heaps are especially desirable when the number of EXTRACT-MIN and DELETE operations is small relative to the number of other operations.
• Fibonacci heaps are loosely based on binomial heaps.
• If neither DECREASE-KEY nor DELETE is ever invoked, each tree in the collection is like a binomial tree.

CS XXX Lecture X 2
Fibonacci Heaps

• Fibonacci heaps differ from binomial heaps, however, in that they have a more relaxed structure, allowing improved asymptotic time bounds; work that maintains the structure is delayed until it is convenient to perform.
• Like a binomial heap, a Fibonacci heap is a collection of heap-ordered trees; however, the trees are not constrained to be binomial trees.
• Trees within Fibonacci heaps are rooted but unordered.
CS XXX Lecture X 3
Structure of Fibonacci Heaps
Each node x contains:
• a pointer p[x] to its parent,
• a pointer child[x] to one of its children.
  – The children of x are linked together in a circular, doubly-linked list, called the child list.

[Figure: node layout with the fields p, key, degree, mark, left, right, child.]
CS XXX Lecture X 4
Structure of Fibonacci Heaps
• Circular, doubly-linked lists have two
advantages for use in fib-heaps:
– we can remove a node in O(1) time
– given two such lists, we can concatenate them
in O(1) time.

CS XXX Lecture X 5
Structure of Fibonacci Heaps
• Two other fields in each node x:
  – degree[x]: the number of children in the child list of x.
  – mark[x]: a Boolean-valued field that
    • indicates whether node x has lost a child since the last time x was made the child of another node;
    • newly created nodes are unmarked;
    • a node x becomes unmarked whenever it is made the child of another node.
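
Putting the fields together, a minimal Python sketch of the node record (field names follow the slides; a fresh node forms a one-element circular list by pointing left and right at itself):

class FibNode:
    def __init__(self, key):
        self.key = key
        self.degree = 0        # number of children in the child list
        self.mark = False      # lost a child since last becoming someone's child?
        self.p = None          # parent
        self.child = None      # any one of the children
        self.left = self       # circular, doubly-linked list neighbours
        self.right = self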
CS XXX Lecture X 6
Structure of Fibonacci Heaps
[Figure: a Fibonacci heap whose root list contains the keys 23, 7, 3, 17, 24, with min[H] pointing to the node with key 3; a few of the non-root nodes (shaded) are marked.]
CS XXX Lecture X 7
Structure of Fibonacci Heaps
[Figure: the same Fibonacci heap drawn with its underlying pointers: circular, doubly-linked root and child lists.]
CS XXX Lecture X 8
Concatenation of Two Circular, Doubly-Linked Lists

[Figure: two root lists before concatenation – H1 with nodes a, b, c, d, e and min[H1] pointing to b, and H2 with nodes p, q, r, s and min[H2] pointing to q; x and y mark left[min[H1]] and left[min[H2]].]

CS XXX Lecture X 9
Concatenation of Two Circular, Doubly-Linked Lists

[Figure: after splicing, the two circular lists form a single root list containing a, b, …, p, q, r, s.]
CS XXX Lecture X 10
Concatenation of Two Circular, Doubly-Linked Lists

CONCATENATE (H1, H2)            // Running time is O(1)
x ← left[min[H1]]
y ← left[min[H2]]

right[x] ← min[H2]
left[min[H2]] ← x

right[y] ← min[H1]
left[min[H1]] ← y
end
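
A Python sketch of the same O(1) splice over the FibNode layout above (min1 and min2 stand for min[H1] and min[H2]; the function name is illustrative, not the book's API):

def concatenate_root_lists(min1, min2):
    """Splice the circular list containing min2 into the list containing min1."""
    x = min1.left          # node just before min1
    y = min2.left          # node just before min2
    x.right = min2
    min2.left = x
    y.right = min1
    min1.left = y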
CS XXX Lecture X 11
Potential Function
• A given fibonacci heap H
– t(H): the number of trees in root list of H
– m(H): the number of marked nodes in H
• The potential of fibonacci heap H is:
Φ(H) = t(H) + 2 m(H)
• A fibonacci heap application begins with an empty
heap:
the initial potential = 0
• The potential is non-negative at all subsequent
times.
CS XXX Lecture X 12
Mergeable Heap Operations
MAKE-HEAP, INSERT, MINIMUM,
EXTRACT-MIN, UNION
If only these operations are to be supported,
each fibonacci-heap is a collection of
unordered binomial trees.

CS XXX Lecture X 13
Mergeable Heap Operations
Creating a new Fibonacci heap:
MAKE-FIB-HEAP
– allocates and returns the Fibonacci heap object H,
– with n[H] = 0 and min[H] = NIL,
– so there are no trees in the heap.
Because t(H) = 0 and m(H) = 0, Φ(H) = 0, and
the amortized cost = O(1) = the actual cost.

CS XXX Lecture X 14
Mergeable Heap Operations
Inserting a node
FIB-HEAP-INSERT(H, x)
degree[x] ← 0
p[x] ← NIL
child[x] ← NIL
left[x] ← x
right[x] ← x
mark[x] ← FALSE
concatenate the root list containing x with root list H
if key[x] < key[min[H]] then
min[H] ← x
endif
n[H] ← n[H] + 1
end

CS XXX Lecture X 15
Mergeable Heap Operations
[Figure: the Fibonacci heap H before insertion, with min[H] pointing to the root 3, and the new node x with key 21.]

CS XXX Lecture X 16
Mergeable Heap Operations
[Figure: after FIB-HEAP-INSERT, the node with key 21 sits on the root list of H'; min[H'] still points to 3.]

CS XXX Lecture X 17
Mergeable Heap Operations
t(H') = t(H) + 1
• Increase in potential:
  Φ(H') − Φ(H) = [t(H) + 1 + 2m(H)] − [t(H) + 2m(H)] = 1
• The actual cost = O(1)
• The amortized cost = O(1) + 1 = O(1)
CS XXX Lecture X 18
Mergeable Heap Operations
Finding the minimum node:
Given by pointer min[H]
actual cost = O(1)
amortized cost = actual cost = O(1)
since the potential of H does not change

CS XXX Lecture X 19
Uniting Two Fibonacci Heaps

CS XXX Lecture X 20
Extracting the Minimum Node
Extracting the Minimum Node
Repeatedly execute the following steps until every root in the
root list has a distinct degree value
(1) Find two roots x and y in the root list with the same degree
where key[x] ≤ key[y]
(2) Link y to x : Remove y from the root list and make y a
child of x
This operation is performed by procedure FIB-HEAP-LINK
Procedure CONSOLIDATE uses an auxiliary pointer array A[0......D(n)]
A[i] = y : y is currently a root with degree[y] = i

CS XXX Lecture X 22
Extracting the Minimum Node
CONSOLIDATE(H)
    for i ← 0 to D(n[H])
        do A[i] ← NIL
    for each node w in the root list of H
        do x ← w
           d ← degree[x]
           while A[d] ≠ NIL
               do y ← A[d]
                  if key[x] > key[y] then
                      exchange x ↔ y
                  FIB-HEAP-LINK(H, y, x)
                  A[d] ← NIL
                  d ← d + 1
           A[d] ← x
    min[H] ← NIL
    for i ← 0 to D(n[H])
        do if A[i] ≠ NIL then
              add A[i] to the root list of H
              if min[H] = NIL or key[A[i]] < key[min[H]] then
                  min[H] ← A[i]

CS XXX Lecture X 23
Extracting the Minimum Node

FIB-HEAP-LINK(H,y,x)
remove y from the root list of H
make y a child of x, incrementing degree[x]
mark[y] ← FALSE
end

CS XXX Lecture X 24
Extracting the Minimum Node – Example

[Figures, slides 25–37: a step-by-step run of FIB-HEAP-EXTRACT-MIN on the example heap. The minimum root (key 3) is removed and its children 18, 52, 38 join the root list; CONSOLIDATE then walks the root list with the auxiliary array A[0..D(n)], repeatedly linking pairs of roots of equal degree (always making the root with the larger key a child of the root with the smaller key) until every remaining root has a distinct degree; finally min[H] is set to the smallest remaining root, the node with key 7, and the root list ends up as 7, 18, 38.]
CS XXX Lecture X 37
Decreasing a Key
FIB-HEAP-DECREASE-KEY(H, x, k)
if k > key[x]
    then error "new key is greater than current key"
else
    key[x] ← k
    y ← p[x]
    if y ≠ NIL and key[x] < key[y] then
        CUT(H, x, y)
        CASCADING-CUT(H, y)
    if key[x] < key[min[H]] then
        min[H] ← x

CS XXX Lecture X 38
Decreasing a Key

CUT(H, x, y)
    remove x from the child list of y, decrementing degree[y]
    add x to the root list of H
    p[x] ← NIL
    mark[x] ← FALSE
end

CS XXX Lecture X 39
Decreasing a Key
CASCADING-CUT(H, y)
    z ← p[y]
    if z ≠ NIL then
        if mark[y] = FALSE then
            mark[y] ← TRUE
        else
            CUT(H, y, z)
            CASCADING-CUT(H, z)
        endif
    endif
end
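
A Python sketch of CUT and CASCADING-CUT over the FibNode layout from earlier; H is assumed to be a heap object exposing its min pointer (the helper names are illustrative, not from the slides):

def _remove_from_list(x):
    x.left.right = x.right
    x.right.left = x.left

def _add_to_root_list(H, x):
    x.left = H.min
    x.right = H.min.right
    H.min.right.left = x
    H.min.right = x

def cut(H, x, y):
    """Cut x from its parent y and move it to the root list of H."""
    if y.child is x:                                  # y must not keep pointing at x
        y.child = x.right if x.right is not x else None
    _remove_from_list(x)
    y.degree -= 1
    _add_to_root_list(H, x)
    x.p = None
    x.mark = False

def cascading_cut(H, y):
    z = y.p
    if z is not None:
        if not y.mark:
            y.mark = True          # first child lost: just mark y
        else:
            cut(H, y, z)           # second child lost: cut y as well
            cascading_cut(H, z)    # and continue up the tree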

CS XXX Lecture X 40
Decreasing a Key

(a) The initial Fibonacci heap.

CS XXX Lecture X 41
Decreasing a Key

(b) The node with key 46 has its key decreased to 15.
The node becomes a root, and its parent (with key 24), which had
previously been unmarked, becomes marked.

CS XXX Lecture X 42
Decreasing a Key

• The node with key 35 has its key decreased to 5.


• The node, now with key 5, becomes a root.
• Its parent, with key 26, is marked, so a cascading cut occurs.
• The node with key 26 is cut from its parent and made an unmarked
root

CS XXX Lecture X 43
Decreasing a Key

(d) Another cascading cut occurs, since the node with key 24 is marked as well.
• This node is cut from its parent and made an unmarked root.
CS XXX Lecture X 44
Decreasing a Key

(e) The cascading cuts stop at this point, since the node with key 7 is a root.
(Even if this node were not a root, the cascading cuts would stop, since it is unmarked.)
Part (e) shows the result of the FIB-HEAP-DECREASE-KEY operation, with min[H] pointing to the new minimum node.
Lecture X 45
Amortized Cost of
FIB-HEAP-DECREASE-KEY
Procedure
Actual cost:
O(1) time plus the time required to perform the cascading cuts. Suppose CASCADING-CUT is recursively called c times; each call takes O(1) time exclusive of its recursive calls. Therefore,
the actual cost = O(1) + O(c) = O(c).

CS XXX Lecture X 46
Deleting a Node

FIB-HEAP-DELETE(H, x)
FIB-HEAP-DECREASE-KEY(H, x, -∞) => O(1)
FIB-HEAP-EXTRACT-MIN(H) => O(D(n))
end

Amortized Cost = O(1) + O(D(n)) = O(D(n))

CS XXX Lecture X 47
Skip Lists
What is a Skip List
• A skip list for a set S of distinct (key, element) items is a series of lists S0, S1, …, Sh such that
  – Each list Si contains the special keys +∞ and -∞
  – List S0 contains the keys of S in non-decreasing order
  – Each list is a subsequence of the previous one, i.e., S0 ⊇ S1 ⊇ … ⊇ Sh
  – List Sh contains only the two special keys
• Skip lists are one way to implement the dictionary ADT

S3 - +
S2 - 31 +
S1 - 23 31 34 64 +
S0 - 12 23 26 31 34 44 56 64 78 +
Implementation
• We can implement a skip list with quad-nodes
• A quad-node stores:
  – the item
  – a link to the node before
  – a link to the node after
  – a link to the node below
• Also, we define special keys PLUS_INF and MINUS_INF, and we modify the key comparator to handle them

[Figure: a quad-node holding item x with its four links.]
Search
• We search for a key x in a skip list as follows:
  – We start at the first position of the top list
  – At the current position p, we compare x with y ← key(after(p))
      x = y: we return element(after(p))
      x > y: we "scan forward"
      x < y: we "drop down"
  – If we try to drop down past the bottom list, we return NO_SUCH_KEY
• Example: search for 78

S3 - +
S2 - 31 +
S1 - 23 31 34 64 +
S0 - 12 23 26 31 34 44 56 64 78 +
Exercise: Search
• We search for a key x in a skip list as follows:
  – We start at the first position of the top list
  – At the current position p, we compare x with y ← key(after(p))
      x = y: we return element(after(p))
      x > y: we "scan forward"
      x < y: we "drop down"
  – If we try to drop down past the bottom list, we return NO_SUCH_KEY
• Ex 1: search for 64: list the (Si, node) pairs visited and the return value
• Ex 2: search for 27: list the (Si, node) pairs visited and the return value

S3 - +
S2 - 31 +
S1 - 23 31 34 64 +
S0 - 12 23 26 31 34 44 56 64 78 +
Insertion
• To insert an item (x, o) into a skip list, we use a randomized algorithm:
  – We repeatedly toss a coin until we get tails, and we denote with i the number of times the coin came up heads
  – If i ≥ h, we add to the skip list new lists Sh+1, …, Si+1, each containing only the two special keys
  – We search for x in the skip list and find the positions p0, p1, …, pi of the items with largest key less than x in each list S0, S1, …, Si
  – For j ← 0, …, i, we insert item (x, o) into list Sj after position pj
• Example: insert key 15, with i = 2
S3 - +
p2
S2 - + S2 - 15 +
p1
S1 - 23 + S1 - 15 23 +
p0
S0 - 10 23 36 + S0 - 10 15 23 36 +
Deletion
• To remove an item with key x from a skip list, we proceed as
follows:
– We search for x in the skip list and find the positions p0, p1 ,
…, pi of the items with key x, where position pj is in list Sj
– We remove positions p0, p1 , …, pi from the lists S0, S1, … , Si
– We remove all but one list containing only the two special keys
• Example: remove key 34

S3 - +
p2
S2 - 34 + S2 - +
p1
S1 - 23 34 + S1 - 23 +
p0
S0 - 12 23 34 45 + S0 - 12 23 45 +
Trie
Trie
• A trie (derived from retrieval) is a multiway tree
data structure used for storing strings over an
alphabet.
• It is used to store a large amount of strings.
• The pattern matching can be done efficiently
using tries.
Structure of Trie node:
• Every node of a Trie consists of multiple branches.
• Each branch represents a possible character of the keys.
• The last node of every key is marked as an end-of-word node.
• A Trie node field isEndOfWord is used to distinguish such a node as the end of a word.
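
A minimal Python sketch of this node structure (here children is a dictionary from character to child node; a fixed-size array indexed by character code is the other common choice):

class TrieNode:
    def __init__(self):
        self.children = {}           # one branch per possible next character
        self.isEndOfWord = False     # marks a node that ends a stored key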
A trie is also known as a digital tree or prefix tree.

[Figure: a trie containing the keys 1. aqua, 2. dad, 3. data, 4. days; the root branches on 'a' and 'd', and keys with a common prefix (dad, data, days) share nodes.]
Insert Operation in Trie:

• Inserting a key into a Trie is a simple operation.
• Every character of the input key is inserted as an individual Trie node. Note that children is an array of pointers (or references) to next-level trie nodes.
• The key character acts as an index into the children array.
• If the input key is new or an extension of an existing key, construct the non-existing nodes of the key and mark the end of the word for the last node.
• If the input key is a prefix of an existing key in the Trie, simply mark the last node of the key as the end of a word.
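
A sketch of insertion over the TrieNode layout above (using a dictionary in place of the children array):

def trie_insert(root, key):
    """Insert the string key into the trie rooted at root."""
    node = root
    for ch in key:
        if ch not in node.children:          # construct missing nodes along the path
            node.children[ch] = TrieNode()
        node = node.children[ch]
    node.isEndOfWord = True                  # mark the last node as end of word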
Inserting i) "and" and ii) "ant"

[Figure: the resulting trie; the two keys share the prefix "an".]
Advantages of tries
1. In tries the keys are searched using common prefixes, hence lookup is faster.
2. In a binary search tree the lookup time depends on the height of the tree; in a trie it depends only on the length of the key.
3. Tries take less space when they contain a large number of short strings, as nodes are shared between the keys.
4. Tries help with longest-prefix matching, when we want to find the key.
Search Operation in Trie:
• Searching for a key is similar to the insert operation; however, it only compares the characters and moves down. The search can terminate due to the end of the string or a missing key branch in the trie.
• In the former case, if the isEndOfWord field of the last node is true, then the key exists in the trie.
• In the latter case, the search terminates without examining all the characters of the key, since the key is not present in the trie.
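
And the matching search over the same TrieNode layout, as a sketch:

def trie_search(root, key):
    """Return True iff key was previously inserted into the trie."""
    node = root
    for ch in key:
        if ch not in node.children:          # missing branch: key cannot be present
            return False
        node = node.children[ch]
    return node.isEndOfWord                  # present only if this node ends a word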
Search in Trie
