Maximum Likelihood Decoding
Unit 4
Channel Models: Hard vs Soft decisions
Binary Symmetric Channel (BSC)
- Discrete memoryless channel, binary i/p & o/p and symmetric transition probabilities
P(0|1) = P(1|0) = p
P(1|1) = P(0|0) = 1 - p
- Hard-decision channel: from all candidate codewords U(m), the decoder chooses the U(m) closest to Z in Hamming distance, i.e., the one for which the distance to Z is minimum (sketched below)
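A minimal sketch of this rule in Python, assuming a small illustrative codebook and a crossover probability p (both hypothetical, not taken from the slides): for p < 0.5, maximizing P(Z|U(m)) over a BSC is the same as picking the codeword at minimum Hamming distance from Z.

```python
import random

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def hard_decision_ml(z, codebook):
    """Hard-decision ML decoding: choose the codeword closest to Z in Hamming distance."""
    return min(codebook, key=lambda u: hamming(u, z))

# Hypothetical codebook, for illustration only (minimum distance 3).
codebook = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
]

z = bsc(codebook[2], p=0.1)           # transmit one codeword over the BSC
print(hard_decision_ml(z, codebook))  # decode: pick the codeword nearest to what was received
```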
Channel Models: Hard vs Soft decisions
Gaussian Channel
- Each demodulator o/p symbol is a value from a continuous alphabet
- A symbol cannot be labeled as a correct or incorrect detection decision
- Maximizing P(Z|U(m)) is equiv. to maximizing the inner product between the codeword sequence U(m) and Z
- Decoder chooses U(m) if it maximizes \sum_i \sum_{j=1}^{n} z_{ji} u_{ji}^{(m)}
- Equivalent to choosing the U(m) closest in Euclidean distance to Z (see the sketch below)
- Soft-decision channel
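A minimal sketch of the soft-decision metric, assuming the codeword bits are mapped to ±1 (an assumption; the slides do not specify the mapping) and a hypothetical received vector: because all ±1 codewords have equal energy, the codeword that maximizes the inner product is also the one closest to Z in Euclidean distance.

```python
import math

def inner_product(z, u):
    """Correlation metric: sum of z_ji * u_ji over the whole sequence."""
    return sum(zi * ui for zi, ui in zip(z, u))

def euclidean(z, u):
    """Euclidean distance between the received sequence and a codeword."""
    return math.sqrt(sum((zi - ui) ** 2 for zi, ui in zip(z, u)))

# Hypothetical example: two candidate codewords in +/-1 form and noisy soft demodulator outputs.
u0 = [+1, -1, +1, +1, -1, -1]
u1 = [-1, -1, +1, -1, +1, +1]
z  = [+0.8, -1.2, +0.3, +0.9, -0.4, -1.1]

best_corr = max((u0, u1), key=lambda u: inner_product(z, u))
best_dist = min((u0, u1), key=lambda u: euclidean(z, u))
assert best_corr is best_dist   # both criteria select the same codeword
```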
Viterbi Decoding Algorithm
- Performs ML decoding with reduced computational load by exploiting the special structure of the code trellis
- Calculates a measure of similarity, or distance, between the received signal at time ti and all trellis paths entering each state at time ti
- Trellis paths that cannot be candidates for the ML choice are no longer considered
- Surviving path: when two paths enter the same state, the path with the best metric is chosen; this is performed for all states, so the least likely paths are eliminated
- Optimum path: expressed as choosing the codeword with the maximum likelihood metric or the minimum distance metric
- Advantage: complexity is not a function of the no. of symbols in the codeword sequence (a decoder sketch follows below)
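A minimal hard-decision Viterbi decoder sketch, assuming a rate 1/2, K = 3 code with the common generators 111 and 101 (an assumption; the slides' encoder is not reproduced here): each step adds the branch metric, compares the two paths merging into each state, keeps only the survivor, and a final traceback recovers the input bits.

```python
# Rate 1/2, K = 3 convolutional code; the generators 111 and 101 are an assumption.
G = (0b111, 0b101)
N_STATES = 4                                   # 2**(K-1) trellis states

def branch_output(state, bit):
    """Branch word (2 code bits) produced when input `bit` leaves `state` (the previous 2 input bits)."""
    reg = (bit << 2) | state                   # shift-register contents, newest bit in the highest position
    return [bin(reg & g).count("1") & 1 for g in G]

def next_state(state, bit):
    return ((bit << 1) | (state >> 1)) & 0b11

def encode(bits):
    """Convolutional encoder; two zero tail bits flush the register back to state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        out += branch_output(state, b)
        state = next_state(state, b)
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding: add-compare-select at every state, then trace back the survivor."""
    INF = float("inf")
    metrics = [0] + [INF] * (N_STATES - 1)     # path (state) metrics; decoding starts in state 0
    history = []                               # per time step: the survivor (prev_state, input_bit) for each state
    for t in range(0, len(received), 2):
        z = received[t:t + 2]
        new_metrics = [INF] * N_STATES
        survivors = [None] * N_STATES
        for state in range(N_STATES):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):                 # extend the path out of `state` by both possible inputs
                ns = next_state(state, bit)
                bm = sum(a != b for a, b in zip(z, branch_output(state, bit)))  # Hamming branch metric
                m = metrics[state] + bm        # add
                if m < new_metrics[ns]:        # compare-select between the two merging paths
                    new_metrics[ns] = m
                    survivors[ns] = (state, bit)
        metrics = new_metrics
        history.append(survivors)
    state = metrics.index(min(metrics))        # best final state metric
    bits = []
    for survivors in reversed(history):        # trace back along the surviving path
        prev, bit = survivors[state]
        bits.append(bit)
        state = prev
    return bits[::-1][:-2]                     # drop the 2 tail bits

message = [1, 0, 1, 1, 0]
received = encode(message)
received[3] ^= 1                               # one channel bit error
assert viterbi_decode(received) == message
```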
Convolutional Encoder (rate 1/2, K = 3)
Encoder Trellis Diagram (rate 1/2, K = 3)
Decoder Trellis Diagram (rate 1/2, K = 3)
The basis
When any two paths merge into a single state, one of them can always be eliminated in the search for the optimum path (a small sketch follows the figure below)
Path metrics for two merging paths
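A minimal illustration of this basis, with hypothetical path metrics: of two paths merging into the same state, only the one with the better accumulated metric can lie on the optimum path, so the other is discarded.

```python
def select_survivor(path_a, metric_a, path_b, metric_b):
    """Keep the merging path with the better (smaller) accumulated distance metric."""
    return (path_a, metric_a) if metric_a <= metric_b else (path_b, metric_b)

# Hypothetical accumulated metrics for two paths merging into the same state.
survivor, metric = select_survivor([0, 1, 1], 2, [1, 0, 0], 4)
print(survivor, metric)   # the path with accumulated metric 2 survives
```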
Decoding Principle
- At each time ti, 2^(K-1) states are present in the trellis, and each state can be entered by means of two paths
- Decoding computes the metric for the two paths entering each state and eliminates one of them
- This computation is done for each of the 2^(K-1) states at time ti; the decoder then moves to time ti+1 and the process is repeated
- At any time, the winning path metric for each state is termed the state metric for that state at that time (a one-step sketch follows below)
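A minimal sketch of one decoding step, assuming K = 3 (so 2^(K-1) = 4 states) and hypothetical predecessor metrics and branch metrics: for each state, the two entering paths are extended, compared, and the winner's metric becomes the new state metric.

```python
# One add-compare-select step at time ti for a K = 3 trellis (4 states).
# The branch metrics and previous state metrics below are hypothetical, for illustration only.
predecessors = {                  # state -> the two (previous_state, branch_metric) pairs entering it
    "00": [("00", 1), ("01", 2)],
    "10": [("00", 1), ("01", 0)],
    "01": [("10", 2), ("11", 0)],
    "11": [("10", 0), ("11", 2)],
}
state_metric = {"00": 3, "01": 2, "10": 1, "11": 4}   # winning path metrics at time t(i-1)

new_state_metric = {}
for state, entering in predecessors.items():
    # Add: extend each entering path; compare-select: keep the better of the two.
    candidates = [state_metric[prev] + bm for prev, bm in entering]
    new_state_metric[state] = min(candidates)

print(new_state_metric)           # state metrics at time ti
```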
Selection of Survivor paths
Survivors at t2
Survivors at t3
Selection of Survivor paths (Contd.)
Metric comparisons at t4
Survivors at t4
- Only one surviving path exists between times t1 and t2; it is termed the common stem
- The transition was from state 00 to state 10, and since it is caused by input bit 1, the decoder output is 1 (see the sketch below)
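A minimal sketch of the common-stem idea, using hypothetical survivor paths stored as (state transition, input bit) pairs: once every survivor begins with the same oldest branch, that branch is final and the decoder can release the corresponding input bit.

```python
def common_stem_decision(survivor_paths):
    """If every survivor path starts with the same oldest branch, release its input bit."""
    oldest_branches = {path[0] for path in survivor_paths}
    if len(oldest_branches) == 1:             # all survivors agree on the common stem
        transition, input_bit = oldest_branches.pop()
        return input_bit
    return None                               # no decision can be released yet

# Hypothetical survivors at t4: all begin with the 00 -> 10 transition caused by input bit 1.
survivors = [
    [(("00", "10"), 1), (("10", "01"), 0), (("01", "00"), 0)],
    [(("00", "10"), 1), (("10", "01"), 0), (("01", "10"), 1)],
    [(("00", "10"), 1), (("10", "11"), 1), (("11", "01"), 0)],
    [(("00", "10"), 1), (("10", "11"), 1), (("11", "11"), 1)],
]
print(common_stem_decision(survivors))        # -> 1
```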
Selection of Survivor paths (Contd.)
Metric comparisons at t5
Survivors at t5
Selection of Survivor paths (Contd.)
Metric comparisons at t6
Survivors at t6
Sequential Decoding
- Proposed by Wozencraft and modified by Fano
- Generates hypotheses about the transmitted codeword sequence and computes a metric between these hypotheses and the received signal
- Goes forward as long as the metric indicates that its choices are likely; otherwise it goes backward, changing hypotheses until a likely one is found
- Can work with both hard and soft decisions; however, soft decisions are normally avoided because they require large storage and add complexity (a sketch follows below)
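The slides describe Fano's algorithm; the sketch below instead shows the closely related stack (ZJ) sequential decoder, which is simpler to write down. It assumes the same hypothetical rate 1/2, K = 3 code as in the Viterbi sketch, hard decisions over a BSC with an assumed crossover probability, and the Fano metric per branch.

```python
import heapq, math

# Assumed rate 1/2, K = 3 code (generators 111 and 101); hard-decision BSC with crossover P.
G, RATE, P = (0b111, 0b101), 0.5, 0.05

def branch_output(state, bit):
    reg = (bit << 2) | state
    return [bin(reg & g).count("1") & 1 for g in G]

def next_state(state, bit):
    return ((bit << 1) | (state >> 1)) & 0b11

def fano_metric(z, v):
    """Per-branch Fano metric for a BSC with crossover probability P and code rate RATE."""
    good, bad = math.log2(2 * (1 - P)) - RATE, math.log2(2 * P) - RATE
    return sum(good if a == b else bad for a, b in zip(z, v))

def stack_decode(received, n_info):
    """Stack (ZJ) sequential decoding: repeatedly extend the best-metric path in the code tree."""
    # Heap entries are (-metric, input_bits, state), so heappop returns the largest metric first.
    stack = [(0.0, [], 0)]
    while True:
        neg_metric, bits, state = heapq.heappop(stack)
        depth = len(bits)
        if depth == n_info + 2:               # full-length path (message + 2 tail bits) reached
            return bits[:n_info]
        z = received[2 * depth: 2 * depth + 2]
        choices = (0,) if depth >= n_info else (0, 1)   # tail bits are forced to 0
        for bit in choices:                   # hypothesize the next input bit and extend the path
            m = -neg_metric + fano_metric(z, branch_output(state, bit))
            heapq.heappush(stack, (-m, bits + [bit], next_state(state, bit)))

message = [1, 0, 1, 1, 0]
code = [1,1, 1,0, 0,0, 0,1, 0,1, 1,1, 0,0]    # encode(message) from the Viterbi sketch above
code[5] ^= 1                                  # one channel bit error
assert stack_decode(code, len(message)) == message
```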
Convolutional Encoder (rate 1/2, K = 3)
Tree Diagram (rate 1/2, K = 3)
Sequential Decoding Example