CSE408
Single Source Shortest
Path Problem:
Dijkstra's Algorithm
Dijkstra’s algorithm
• Dijkstra’s algorithm is a popular algorithm for
solving single-source shortest path problems.
• The graph must have non-negative edge
weights.
• It uses Edge Relaxation.
Edge Relaxation
if d(u)+ w(u, v) < d(v)
then d(v) = d(u)+ w(u, v)
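The relaxation rule above can be sketched as a small helper; the array-based `dist` and the `Relax` class name are illustrative, not from the slides:

```java
// Edge relaxation: if reaching v through u improves the current
// estimate d(v), update it. Illustrative sketch.
public class Relax {
    // Returns true if the estimate for v was improved.
    public static boolean relax(int[] dist, int u, int v, int w) {
        if (dist[u] + w < dist[v]) {
            dist[v] = dist[u] + w;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] dist = {0, 10, 7};     // d(source)=0, d(1)=10, d(2)=7
        relax(dist, 2, 1, 2);        // source->2->1 costs 7+2=9 < 10
        System.out.println(dist[1]); // prints 9
    }
}
```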
Dijkstra’s Algorithm
Example
Dijkstra’s Algorithm
TC (binary heap) - O(V) init + O(V) heap build + O(V log V) for V extract-min operations + O(E log V) for edge relaxations; in a dense graph E = O(V^2), so the last term becomes O(V^2 log V)
Aggregate TC - O(V) + O(V) + O(V log V) + O(E log V) = O((V+E) log V), i.e. O(E log V) for a connected graph
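The heap-based version analyzed above can be sketched as follows. This is a minimal sketch using lazy deletion (stale queue entries are skipped when popped); the graph encoding and names are illustrative:

```java
import java.util.*;

// Dijkstra with a binary heap (java.util.PriorityQueue).
// Stale queue entries are skipped when popped, so the
// running time is O((V+E) log V).
public class Dijkstra {
    // graph[u] = list of {v, w} edges; returns dist[] from src.
    public static int[] shortestPaths(int[][][] graph, int src) {
        int n = graph.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        // Queue entries are {distanceEstimate, node}.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[]{0, src});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], u = top[1];
            if (d > dist[u]) continue;       // stale entry, skip
            for (int[] e : graph[u]) {       // relax every edge (u, v)
                int v = e[0], w = e[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{dist[v], v});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Edges: 0 --1--> 1, 0 --4--> 2, 1 --2--> 2, 2 --1--> 3
        int[][][] g = {
            {{1, 1}, {2, 4}},
            {{2, 2}},
            {{3, 1}},
            {}
        };
        System.out.println(Arrays.toString(shortestPaths(g, 0)));
        // prints [0, 1, 3, 4]
    }
}
```

`java.util.PriorityQueue` has no decrease-key operation, which is why duplicates are inserted and stale entries filtered, rather than updating entries in place.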
Limitations of Dijkstra Algorithm
• It may fail to find correct shortest path distances from
the source if the graph contains "negative weight edges".
• It cannot detect a negative weight cycle reachable
from the source, if one exists.
Bellman Ford Algorithm
• Computes correct shortest path distances from the
source even if negative weight edges exist (provided
no negative cycle is reachable from the source).
• Detects a negative weight cycle reachable from the source.
Bellman Ford Algorithm
Example
for i = 1 to V-1          // the outer loop runs V-1 times
    relax every edge      // O(E) per pass, O(1) per relaxation
one extra pass over all edges detects a negative cycle   // O(E)
TC - O(V.E) + O(E) = O(V.E)
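The passes above can be sketched directly. This is a minimal sketch; the edge-list encoding and the `BellmanFord` name are illustrative:

```java
import java.util.*;

// Bellman-Ford: V-1 passes of relaxing every edge, then one
// extra pass to detect a negative-weight cycle reachable
// from the source. Total cost O(V.E).
public class BellmanFord {
    static final long INF = Long.MAX_VALUE / 4;

    // edges[i] = {u, v, w}; returns dist[], or null if a negative
    // cycle reachable from src exists.
    public static long[] run(int n, int[][] edges, int src) {
        long[] dist = new long[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;
        for (int pass = 0; pass < n - 1; pass++) {   // V-1 passes
            for (int[] e : edges) {                  // O(E) per pass
                if (dist[e[0]] == INF) continue;     // skip unreached
                if (dist[e[0]] + e[2] < dist[e[1]])
                    dist[e[1]] = dist[e[0]] + e[2];
            }
        }
        for (int[] e : edges) {                      // detection pass
            if (dist[e[0]] == INF) continue;
            if (dist[e[0]] + e[2] < dist[e[1]]) return null;
        }
        return dist;
    }

    public static void main(String[] args) {
        // Negative edge, no negative cycle: 0->1 (4), 0->2 (5), 2->1 (-3)
        int[][] edges = {{0, 1, 4}, {0, 2, 5}, {2, 1, -3}};
        System.out.println(Arrays.toString(run(3, edges, 0)));
        // prints [0, 2, 5]
    }
}
```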
Huffman Coding
• Proposed by Dr. David A. Huffman in 1952
– “A Method for the Construction of Minimum-Redundancy Codes”
• Applicable to many forms of data
transmission
– Our example: text files
The Basic Algorithm
• Huffman coding is a form of statistical coding
• Not all characters occur with the same
frequency!
• Yet in ASCII every character is allocated the same
amount of space (8 bits)
The Basic Algorithm
• Are there any savings in tailoring code lengths to
character frequency?
• Code word lengths are then no longer fixed, as they are in
ASCII.
The (Real) Basic Algorithm
1. Scan text to be compressed and tally occurrence of all
characters.
2. Sort or prioritize characters based on number of
occurrences in text.
3. Build Huffman code tree based on
prioritized list.
4. Perform a traversal of tree to determine all code words.
5. Scan text again and create new file using the Huffman
codes.
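Step 1 above can be sketched as follows; the `FreqCount` and `tally` names are illustrative, not from the slides:

```java
import java.util.*;

// Step 1 of the algorithm: scan the text and tally how often
// each character occurs.
public class FreqCount {
    public static Map<Character, Integer> tally(String text) {
        Map<Character, Integer> freq = new TreeMap<>();
        for (char c : text.toCharArray()) {
            freq.merge(c, 1, Integer::sum);  // increment the count for c
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(tally("Eerie eyes seen near lake."));
    }
}
```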
Building a Tree Scan the original text
• Consider the following short text:
Eerie eyes seen near lake.
• Count up the occurrences of all characters in the
text
Building a Tree Scan the original text
Eerie eyes seen near lake.
• What characters are present?
E, e, r, i, space,
y, s, n, a, l, k, .
Building a Tree Scan the original text
Eerie eyes seen near lake.
• What is the frequency of each character in the text?
Char   Freq.    Char   Freq.    Char   Freq.
E      1        y      1        k      1
e      8        s      2        .      1
r      2        n      2
i      1        a      2
space  4        l      1
Building a Tree Prioritize characters
• Create binary tree nodes with character
and frequency of each character
• Place nodes in a priority queue
– The lower the occurrence, the higher the
priority in the queue
Building a Tree Prioritize characters
• Uses binary tree nodes

public class HuffNode
{
    public char myChar;
    public int myFrequency;
    public HuffNode myLeft, myRight;
}

// a priority queue ordered by myFrequency, smallest first
PriorityQueue<HuffNode> myQueue;
Building a Tree
• The queue after inserting all nodes
E i y l k . r s n a sp e
1 1 1 1 1 1 2 2 2 2 4 8
Building a Tree
• While priority queue contains two or more nodes
– Create new node
– Dequeue node and make it left subtree
– Dequeue next node and make it right subtree
– Frequency of new node equals sum of frequency of left
and right children
– Enqueue new node back into queue
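The loop above can be sketched with `java.util.PriorityQueue`. This is a minimal sketch; the `HuffBuild` and `Node` names are illustrative, and ties among equal frequencies are broken arbitrarily by the queue, so the exact tree may differ from the slides' (the total cost does not):

```java
import java.util.*;

// Build the Huffman tree: repeatedly dequeue the two
// lowest-frequency nodes, make them children of a new node whose
// frequency is their sum, and enqueue the new node.
public class HuffBuild {
    public static class Node {
        char ch; int freq; Node left, right;
        Node(char ch, int freq) { this.ch = ch; this.freq = freq; }
        Node(Node l, Node r) { freq = l.freq + r.freq; left = l; right = r; }
        boolean isLeaf() { return left == null; }
    }

    public static Node build(Map<Character, Integer> freq) {
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> a.freq - b.freq);
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            pq.add(new Node(e.getKey(), e.getValue()));
        while (pq.size() >= 2) {          // merge the two smallest
            Node left = pq.poll();
            Node right = pq.poll();
            pq.add(new Node(left, right));
        }
        return pq.poll();                 // the root
    }

    // Total encoded bits = sum over leaves of depth * frequency.
    public static int totalBits(Node n, int depth) {
        if (n.isLeaf()) return depth * n.freq;
        return totalBits(n.left, depth + 1) + totalBits(n.right, depth + 1);
    }

    public static void main(String[] args) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : "Eerie eyes seen near lake.".toCharArray())
            freq.merge(c, 1, Integer::sum);
        Node root = build(freq);
        System.out.println(root.freq + " " + totalBits(root, 0));
        // root.freq is 26; any optimal tie-break yields 84 total bits
    }
}
```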
Building a Tree
Starting from the queue

E i y l k . r s n a sp e
1 1 1 1 1 1 2 2 2 2 4 8

the merges proceed as follows (each step dequeues the two
lowest-frequency nodes and enqueues their sum; ties are broken
as in the slides):

1. E(1) + i(1) → 2
2. y(1) + l(1) → 2
3. k(1) + .(1) → 2
4. r(2) + s(2) → 4
5. n(2) + a(2) → 4
6. (E,i)(2) + (y,l)(2) → 4
7. (k,.)(2) + sp(4) → 6
8. (r,s)(4) + (n,a)(4) → 8
9. (E,i,y,l)(4) + (k,.,sp)(6) → 10
10. e(8) + (r,s,n,a)(8) → 16
11. (10) + (16) → 26
Building a Tree
After enqueueing this node there is only one node, the root
with frequency 26, left in the priority queue.
Building a Tree
Dequeue the single node left in the queue.
This tree contains the new code words for each character.
The frequency of the root node equals the number of characters
in the text: Eerie eyes seen near lake. (26 characters)
Encoding the File Traverse Tree for Codes
• Perform a traversal of the tree to obtain the new code words
• Going left is a 0, going right is a 1
• A code word is complete only when a leaf node is reached
Encoding the File Traverse Tree for Codes
Char   Code
E      0000
i      0001
y      0010
l      0011
k      0100
.      0101
space  011
e      10
r      1100
s      1101
n      1110
a      1111
1. Min number of bits used for any char = 2
2. Max number of bits used for any char = 4
3. Total number of bits = sum over characters of freq(c) * codeLength(c)
   (equivalently, the sum of the frequencies of all internal nodes):
   (1*4)+(1*4)+(1*4)+(1*4)+(1*4)+(1*4) for E, i, y, l, k, .
   + (4*3) for space + (8*2) for e
   + (2*4)+(2*4)+(2*4)+(2*4) for r, s, n, a
   = 84
4. Avg bits per char = 84/26 ≈ 3.23 bits/char
Encoding the File
• Rescan the text and encode the file using the new code words:

Eerie eyes seen near lake.

000010110000011001110001010110101111011010111001111101011111100011001111110100100101

(84 bits)
Encoding the File
• Have we made things any better?
• The Huffman-coded text takes 84 bits.
• ASCII would take 8 * 26 = 208 bits.
• A fixed-length code for the 12 distinct characters would need
4 bits per character: 4 * 26 = 104 bits. The savings would not be
as great.
Decoding the File
• How does the receiver know what the codes are?
• One option: a tree is constructed for each text file.
– Considers the character frequencies of that file
– Big hit on compression, especially for smaller files
• Another option: the tree is predetermined
– based on statistical analysis of text files or file types
• Data transmission is bit-based versus byte-based
Decoding the File
• Once the receiver has the tree, it scans the incoming bit stream
• 0 → go left
• 1 → go right
• A character is emitted each time a leaf is reached; decoding
then restarts at the root
• Try decoding these bit streams:

101000110111101111011

11110000110101
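The decoding walk (0 left, 1 right, emit at a leaf, restart at the root) can be sketched together with an encoder for a round-trip check. The `HuffCodec` name and helper methods are illustrative; tie-breaks in the queue may produce a different but equally optimal tree than the slides':

```java
import java.util.*;

// Huffman round trip: build the tree, derive code words by
// traversal, encode the text, then decode it by walking the tree.
public class HuffCodec {
    static class Node {
        char ch; int freq; Node left, right;
        Node(char ch, int freq) { this.ch = ch; this.freq = freq; }
        Node(Node l, Node r) { freq = l.freq + r.freq; left = l; right = r; }
    }

    static Node buildTree(String text) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) freq.merge(c, 1, Integer::sum);
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> a.freq - b.freq);
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            pq.add(new Node(e.getKey(), e.getValue()));
        while (pq.size() >= 2) pq.add(new Node(pq.poll(), pq.poll()));
        return pq.poll();
    }

    // Traversal: left appends 0, right appends 1, leaf records the code.
    static void codes(Node n, String prefix, Map<Character, String> out) {
        if (n.left == null) { out.put(n.ch, prefix); return; }
        codes(n.left, prefix + "0", out);
        codes(n.right, prefix + "1", out);
    }

    public static String encode(String text, Map<Character, String> code) {
        StringBuilder sb = new StringBuilder();
        for (char c : text.toCharArray()) sb.append(code.get(c));
        return sb.toString();
    }

    public static String decode(String bits, Node root) {
        StringBuilder sb = new StringBuilder();
        Node cur = root;
        for (char b : bits.toCharArray()) {
            cur = (b == '0') ? cur.left : cur.right;        // walk one bit
            if (cur.left == null) { sb.append(cur.ch); cur = root; } // leaf
        }
        return sb.toString();
    }

    public static String roundTrip(String text) {
        Node root = buildTree(text);
        Map<Character, String> code = new HashMap<>();
        codes(root, "", code);
        return decode(encode(text, code), root);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("Eerie eyes seen near lake."));
        // prints the original text back
    }
}
```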
Summary
• Huffman coding is a technique used to compress files
for transmission
• Uses statistical coding
– more frequently used symbols have shorter code words
• Works well for text and fax transmissions
• An application that uses several data structures