Module - 3 Error Correction Codes
Sem - V
Introduction :
Errors are introduced into the data when it passes through the channel. Channel noise
interferes with the signal and reduces the signal power. Transmission power and channel
bandwidth are the main parameters in the transmission of data over the channel. Together with these
parameters, the power spectral density of the channel noise determines the signal to noise ratio (SNR),
and the SNR in turn determines the probability of error. Using coding techniques, the SNR required
for a fixed probability of error is reduced.
Channel encoder :
The channel encoder adds bits (redundancy) to the message bits. The encoded signal
is then transmitted over the noisy channel.
Channel decoder :
It identifies the redundant bits and uses them to detect and correct errors in the
message bits, if any. Thus the number of errors introduced by channel noise is
minimized by the encoder and decoder. Because of the redundant bits, the overall data rate increases,
so the channel has to accommodate this increased data rate. The system also becomes slightly
more complex because of the coding techniques.
Types of codes:
Block codes: These codes consist of 'n' bits in one block or codeword. The codeword consists
of 'k' message bits and (n-k) redundant bits. Such block codes are called (n, k) block codes.
Convolutional codes: The coding operation is a discrete-time convolution of the input sequence with the impulse
response of the encoder. The convolutional encoder accepts the message bits continuously and
generates the encoded sequence continuously.
a. Linear code: If any two codewords of a linear code are added by modulo-2
arithmetic, the result is another codeword of the code.
b. Nonlinear code: The modulo-2 sum of two codewords is not necessarily another
codeword.
Types of errors
Random errors: These errors are due to white Gaussian noise in the channel. Errors
generated in a particular interval do not affect the performance of the system in subsequent
intervals; such errors are totally uncorrelated.
Burst errors: These errors are due to impulsive noise in the channel, caused for example by
lightning and switching transients. Errors generated in a particular interval do affect the
performance of the system in subsequent intervals.
Codeword: The encoded block of ‘n’ bits is called a codeword. It contains message bits and
redundant bits.
Block length: The number of bits ‘n’ after coding is called the block length of the
code.
Code rate: The ratio of the number of message bits (k) to the number of encoder output bits (n) is called the
code rate. The code rate r is defined by
r = k/n, with 0 < r < 1
If Rs is the message bit rate, the channel bit rate becomes Ro = (n/k) Rs.
Code Vectors : An n-bit codeword can be visualised in an n-dimensional space as a vector
whose elements are the bits of the codeword. For 3-bit code vectors there are 2^3 = 8 different
codewords, as shown below.
Sl.No   b2 = Z   b1 = Y   b0 = X
1 0 0 0
2 0 0 1
3 0 1 0
4 0 1 1
5 1 0 0
6 1 0 1
7 1 1 0
8 1 1 1
Table. Code vectors in 3 dimensional space
Hamming Distance:
The Hamming distance between two codewords is the number of bit positions in which they differ.
If dmin is the minimum Hamming distance between any two codewords of the code, then the numbers
of errors that can be detected and corrected are
t_det = dmin - 1
t_corr = ⌊(dmin - 1)/2⌋
where ⌊x⌋ denotes the largest integer not exceeding x.
Code efficiency :
Code efficiency = (message bits in the block)/(transmitted bits per block) = k/n
Weight of the Code :
The number of non zero elements in the transmitted code vector is called weight of the code.
It is denoted by w(X) where X is code vector.
In matrix form, [X]1×n = [M]1×k [G]k×n, and the generator matrix G can be represented as
G k×n = [Ik : Pk×q], where Ik is the k×k identity matrix and P is a k×q submatrix.
By solving the above equation the check vector can be obtained (all additions are mod-2 additions).
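As an illustration (not part of the original notes), the encoding operation X = MG with mod-2 additions can be sketched in Python; the generator matrix used below is the one of the (6,3) problem that follows.

import numpy as np

def encode(message_bits, G):
    # X = M G (mod 2): every codeword is a mod-2 combination of rows of G
    M = np.array(message_bits) % 2
    return (M @ np.array(G)) % 2

G = [[1, 0, 0, 0, 1, 1],      # [I3 : P] form
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]
print(encode([1, 0, 1], G))   # -> [1 0 1 1 0 1]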
Problem
The generator matrix for a (6,3) block code is given below. Find all code vectors of this
code.
G = [1 0 0 : 0 1 1
     0 1 0 : 1 0 1
     0 0 1 : 1 1 0]
Solution :
We know that
G k×n = [Ik : Pk×q]
Ik = [1 0 0
      0 1 0
      0 0 1]
Pk×q = [0 1 1
        1 0 1
        1 1 0]
Here k = 3, q = 3 and n = 6. The block size of the message vector is 3 bits, hence there are 2^3 = 8 message vectors,
as shown in the table.
C1 = M1·0 ⊕ M2·1 ⊕ M3·1 = M2 ⊕ M3
C2 = M1·1 ⊕ M2·0 ⊕ M3·1 = M1 ⊕ M3
C3 = M1·1 ⊕ M2·1 ⊕ M3·0 = M1 ⊕ M2
(iii) To determine the check bits and code vectors for every message vector:
For (0 0 0):
C1 = M2 ⊕ M3 = 0 ⊕ 0 = 0
C2 = M1 ⊕ M3 = 0 ⊕ 0 = 0
C3 = M1 ⊕ M2 = 0 ⊕ 0 = 0, i.e. [C1 C2 C3] = (0 0 0)
For (0 0 1):
C1 = M2 ⊕ M3 = 0 ⊕ 1 = 1
C2 = M1 ⊕ M3 = 0 ⊕ 1 = 1
C3 = M1 ⊕ M2 = 0 ⊕ 0 = 0, i.e. [C1 C2 C3] = (1 1 0)
Proceeding in the same way for the remaining message vectors gives the table below.
Sl.No   Message Bits (M1 M2 M3)   Check bits (C1 = M2⊕M3, C2 = M1⊕M3, C3 = M1⊕M2)   Complete Code Vector (M1 M2 M3 C1 C2 C3)
1 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 1 1 1 0 0 0 1 1 1 0
3 0 1 0 1 0 1 0 1 0 1 0 1
4 0 1 1 0 1 1 0 1 1 0 1 1
5 1 0 0 0 1 1 1 0 0 0 1 1
6 1 0 1 1 0 1 1 0 1 1 0 1
7 1 1 0 1 1 0 1 1 0 1 1 0
8 1 1 1 0 0 0 1 1 1 0 0 0
For every block code the parity check matrix can be defined as
H = [PT : Iq] q×n
where the submatrix PT (the transpose of P) is
PT = [P11 P21 … Pk1
      P12 P22 … Pk2
      …
      P1q P2q … Pkq] q×k
so that
[H] q×n = [P11 P21 … Pk1 : 1 0 … 0
           P12 P22 … Pk2 : 0 1 … 0
           …
           P1q P2q … Pkq : 0 0 … 1]
Hamming Codes:
These are (n, k) linear block codes that satisfy the following conditions: the number of check bits is
q = n - k ≥ 3, the block length is n = 2^q - 1 and the number of message bits is k = n - q.
We know that the code rate is
r = k/n, 0 < r < 1
r = (n - q)/n = 1 - q/n = 1 - q/(2^q - 1)
Since dmin is 3 for hamming code, it can detect double errors and correct single
errors.
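The relation n = 2^q - 1, k = n - q can be tabulated quickly; a small illustrative Python sketch (not part of the original notes):

for q in range(3, 8):
    n = 2 ** q - 1          # block length
    k = n - q               # message bits
    print(q, n, k, round(k / n, 3))   # the code rate approaches 1 as q grows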
Problem
The parity check matrix of a particular (7,4) linear block code is given below. Find all the code
vectors, the minimum distance and the error detecting and correcting capability of this code.
[H] = [1 1 1 0 : 1 0 0
       1 1 0 1 : 0 1 0
       1 0 1 1 : 0 0 1]
Solution :
Here n =7, k =4
The parity check matrix of q×n size is given and q = 3, n=7, k=4.
H = [PT : I3] 3×7
PT = [P11 P21 P31 P41     [1 1 1 0
      P12 P22 P32 P42  =   1 1 0 1
      P13 P23 P33 P43]     1 0 1 1] 3×4
Therefore
P = [1 1 1
     1 1 0
     1 0 1
     0 1 1] 4×3
The generator matrix is G = [I4 : P]:
G = [1 0 0 0 : 1 1 1
     0 1 0 0 : 1 1 0
     0 0 1 0 : 1 0 1
     0 0 0 1 : 0 1 1] 4×7
The check bits are obtained as C = MP:
[C1 C2 C3]1×3 = [M1 M2 M3 M4]1×4 [1 1 1
                                  1 1 0
                                  1 0 1
                                  0 1 1] 4×3
C1 = M1·1 ⊕ M2·1 ⊕ M3·1 ⊕ M4·0 = M1 ⊕ M2 ⊕ M3
C2 = M1·1 ⊕ M2·1 ⊕ M3·0 ⊕ M4·1 = M1 ⊕ M2 ⊕ M4
C3 = M1·1 ⊕ M2·0 ⊕ M3·1 ⊕ M4·1 = M1 ⊕ M3 ⊕ M4
For example, for the message vector (M1 M2 M3 M4) = (1 0 1 1):
C1 = M1 ⊕ M2 ⊕ M3 = 1 ⊕ 0 ⊕ 1 = 0
C2 = M1 ⊕ M2 ⊕ M4 = 1 ⊕ 0 ⊕ 1 = 0
C3 = M1 ⊕ M3 ⊕ M4 = 1 ⊕ 1 ⊕ 1 = 1
i.e. (C1 C2 C3) = (0 0 1). The complete set of code vectors is given below.
Sl.No   Message Bits (M1 M2 M3 M4)   Check bits (C1 C2 C3)   Complete Code Vector (M1 M2 M3 M4 C1 C2 C3)   Weight of the code vector
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 1 0 1 1 0 0 0 1 0 1 1 3
3 0 0 1 0 1 0 1 0 0 1 0 1 0 1 3
4 0 0 1 1 1 1 0 0 0 1 1 1 1 0 4
5 0 1 0 0 1 1 0 0 1 0 0 1 1 0 3
6 0 1 0 1 1 0 1 0 1 0 1 1 0 1 4
7 0 1 1 0 0 1 1 0 1 1 0 0 1 1 4
8 0 1 1 1 0 0 0 0 1 1 1 0 0 0 3
9 1 0 0 0 1 1 1 1 0 0 0 1 1 1 4
10 1 0 0 1 1 0 0 1 0 0 1 1 0 0 3
11 1 0 1 0 0 1 0 1 0 1 0 0 1 0 3
12 1 0 1 1 0 0 1 1 0 1 1 0 0 1 4
13 1 1 0 0 0 0 1 1 1 0 0 0 0 1 3
14 1 1 0 1 0 1 0 1 1 0 1 0 1 0 4
15 1 1 1 0 1 0 0 1 1 1 0 1 0 0 4
16 1 1 1 1 1 1 1 1 1 1 1 1 1 1 7
There are 2^k = 2^4 = 16 code vectors, listed along with their weights. The smallest weight of any non-zero code
vector is 3, therefore the minimum distance is dmin = 3.
dmin ≥ s + 1, i.e. 3 ≥ s + 1, therefore s ≤ 2: two errors can be detected. And dmin ≥ 2t + 1,
i.e. 3 ≥ 2t + 1, therefore t ≤ 1: one error can be corrected.
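The minimum distance can also be checked numerically. A short illustrative Python sketch (not from the original notes) that enumerates all codewords of the (7,4) code above:

import itertools
import numpy as np

G = np.array([[1,0,0,0,1,1,1],
              [0,1,0,0,1,1,0],
              [0,0,1,0,1,0,1],
              [0,0,0,1,0,1,1]])          # generator matrix of the (7,4) code above

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)]
d_min = min(int(c.sum()) for c in codewords if c.any())   # smallest non-zero weight
print(d_min)                                              # -> 3
print(d_min - 1, (d_min - 1) // 2)                        # detects 2 errors, corrects 1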
Definition of Syndrome:
Every valid code vector X satisfies XH^T = 0. When some errors are present, the received vector Y is
not a valid code vector and does not satisfy this property. The non-zero output of the product YH^T is
called the syndrome and it is used to detect the errors in Y. The syndrome, represented by S, can be
written as S = YH^T. Consider, for example, the (7,4) code with parity check matrix
[H] = [1 1 1 0 : 1 0 0
       0 1 1 1 : 0 1 0
       1 1 0 1 : 0 0 1]
The syndrome is a 3-bit vector, since q = 3. Therefore there are 2^q - 1 = 7 non-zero syndromes, and these
7 non-zero syndromes represent the 7 single-bit error patterns. The error vector E is an n-bit vector
representing the error pattern.
Sl.No   Bit in error   Bits of the error vector E (the non-zero bit shows the error position)
1 1st 1 0 0 0 0 0 0
2 2nd 0 1 0 0 0 0 0
3 3 rd 0 0 1 0 0 0 0
4 4th 0 0 0 1 0 0 0
5 5 th 0 0 0 0 1 0 0
6 6 th 0 0 0 0 0 1 0
7 7th 0 0 0 0 0 0 1
HT = [1 0 1
      1 1 1
      1 1 0
      0 1 1
      1 0 0
      0 1 0
      0 0 1]
For an error in the 1st bit:
S = EHT = (1 0 0 0 0 0 0) HT = (1 0 1)
For an error in the 2nd bit:
S = EHT = (0 1 0 0 0 0 0) HT = (1 1 1)
Thus the syndrome vectors for the single-bit error patterns are simply the rows of HT.
Let us consider the above (7,4) block code and the code vector X = (1 0 0 1 1 1 0). Let an error
be created in the 3rd bit, so
Y = (1 0 (1) 1 1 1 0)
(1) Calculate the syndrome:
S = YHT = (1 0 1 1 1 1 0) HT = (1 1 0)
(2) Determine the row of HT which is the same as S: the 3rd row.
(3) Determine E: E = (0 0 1 0 0 0 0).
(4) Correct vector X = Y ⊕ E:
X = (1 0 1 1 1 1 0) ⊕ (0 0 1 0 0 0 0) = (1 0 0 1 1 1 0)
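These four steps can be sketched in Python (illustrative only, using the parity check matrix of this example):

import numpy as np

H = np.array([[1,1,1,0,1,0,0],
              [0,1,1,1,0,1,0],
              [1,1,0,1,0,0,1]])
HT = H.T                                  # rows of HT = syndromes of single-bit errors

def correct_single_error(Y):
    Y = np.array(Y)
    S = (Y @ HT) % 2                      # step (1): S = Y HT
    if not S.any():
        return Y                          # zero syndrome: accept Y as it is
    for i, row in enumerate(HT):          # step (2): locate the matching row of HT
        if np.array_equal(row, S):
            E = np.zeros(7, dtype=int)    # step (3): single-bit error pattern
            E[i] = 1
            return (Y + E) % 2            # step (4): X = Y xor E
    return Y                              # no matching row: more than one error, not correctable

print(correct_single_error([1, 0, 1, 1, 1, 1, 0]))   # -> [1 0 0 1 1 1 0]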
Consider the same code vector X = (1 0 0 1 1 1 0) and let errors be created in the 3rd and 4th bits:
Y = (1 0 (1) (0) 1 1 0)
S = YHT = (1 0 1 0 1 1 0) HT = (1 0 1)
S is equal to the 1st row of HT, therefore the decoder takes E = (1 0 0 0 0 0 0). Thus the error
detection and correction goes wrong when two errors occur. The probability of occurrence of multiple errors is,
however, small compared to single errors. To handle double errors, extended Hamming codes are used; in these codes
one extra overall parity bit is provided, which allows double errors to be detected while single errors are corrected.
We know that for an (n,k) block code there are 2^q - 1 distinct non-zero syndromes. There are nC1 = n single
error patterns, nC2 double error patterns, nC3 triple error patterns, and so on. To correct all patterns of up to
t errors we therefore need
2^q - 1 ≥ nC1 + nC2 + nC3 + ….. + nCt
Hamming Bound
2^q ≥ 1 + nC1 + nC2 + nC3 + ….. + nCt
2^q ≥ Σ (i = 0 to t) nCi
2^(n-k) ≥ Σ (i = 0 to t) nCi
n - k ≥ log2 ( Σ (i = 0 to t) nCi )
1 - k/n ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )
Since the coding rate is r = k/n,
1 - r ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )
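The bound is easy to check numerically; an illustrative Python sketch (not from the notes):

from math import comb

def satisfies_hamming_bound(n, k, t):
    # 2**(n - k) >= sum of nCi for i = 0 .. t
    return 2 ** (n - k) >= sum(comb(n, i) for i in range(t + 1))

print(satisfies_hamming_bound(7, 4, 1))   # True : the (7,4) Hamming code can correct t = 1
print(satisfies_hamming_bound(7, 4, 2))   # False: it cannot correct t = 2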
Problem:
Show that all error patterns that differ by a codeword have the same syndrome.
Solution:
We know that S = EHT. This equation shows that the syndrome depends only on the error pattern and not on the
transmitted codeword: if Y = X ⊕ E, then
S = YHT = XHT ⊕ EHT = EHT, since XHT = 0.
Hence all error patterns that differ by a codeword have the same syndrome. Consider the (7,4) code with
[H] = [1 1 1 0 : 1 0 0
       1 1 0 1 : 0 1 0
       1 0 1 1 : 0 0 1]
Consider two code words
X2 = 0 0 0 1 0 1 1 ; X3 = 0 0 1 0 1 0 1
Y2 = (𝟏)0 0 1 0 1 1 ; Y3 = (𝟏) 0 1 0 1 0 1
HT = [1 1 1
      1 1 0
      1 0 1
      0 1 1
      1 0 0
      0 1 0
      0 0 1]
S2 = Y2 HT = (1 0 0 1 0 1 1) HT = (1 1 1)
S3 = Y3 HT = (1 0 1 0 1 0 1) HT = (1 1 1)
Thus the syndromes are S2 = S3 = (1 1 1) even though the two received vectors are different. This proves that all
error patterns that differ by a codeword have the same syndrome.
Parity check code:
If m1, m2, m3, …, mk are the bits of the k-bit message word, then the parity bit c1 is chosen so that
m1 ⊕ m2 ⊕ m3 ⊕ …… ⊕ mk ⊕ c1 = 0. If the message contains an even number of 1's, c1 = 0, and vice versa. For this
code the number of transmitted bits is n = k + 1 and q = 1. This code can detect a single bit error, but it cannot
correct it.
Repetition codes
In this code a single message bit is transmitted along with q = 2t parity bits, so k = 1 and the number of
transmitted bits is n = 2t + 1. This code can correct t errors per block, but it requires a large
bandwidth.
Hadamard Code
It is derived from the Hadamard matrix. Here n = 2^k and q = n - k = 2^k - k. The code rate is
r = k/n = k/2^k
This shows that the code rate becomes very small as k increases.
Extended Code
An (n,k) linear block code has a parity check matrix H. One column of zero elements (except the last
element) and one row of 1's are added to the parity check matrix. The code obtained with this enlarged
parity check matrix is called an extended code, and it is described as an (n+1, k) linear block code. The
parity check matrix size becomes (q+1) × (n+1); for example, the (7,4) code can be extended in this way.
The minimum distance of the extended code is
d e(min) = dmin + 1
Dual code :
Consider an (n,k) block code with generator matrix G and parity check matrix H satisfying HG^T = 0. The
(n, n-k), i.e. (n,q), block code generated by H is called the dual code; its generator matrix is H and its
parity check matrix is G.
Cyclic Codes :
Cyclic codes are a subclass of linear codes. A code C is cyclic if
(i) C is a linear code;
(ii) any cyclic shift of a codeword is also a codeword.
Properties :
Linearity property :- The sum of any two codewords is also a valid codeword. For example, if X1 and X2
are codewords, then X3 = X1 ⊕ X2 is also a codeword. This property shows that a cyclic code is also a
linear code.
Cyclic property :- Every cyclic shift of a valid code vector produces another code vector. Let us
consider an n-bit code vector X = (x n-1, x n-2, … x1, x0). Here x n-1, x n-2, … x1, x0 represent the
individual bits of the code vector. Shifting this code vector cyclically to the left by one position gives
X' = (x n-2, x n-3, … x1, x0, x n-1), which is also a valid code vector.
Polynomial representation :- The code vector can be represented by the code polynomial
X(p) = x n-1 p^(n-1) + x n-2 p^(n-2) + … + x1 p + x0
Here X(p) is a polynomial of degree (n-1) and p is an arbitrary variable of the polynomial. The
powers of p represent the positions of the codeword bits: p^(n-1) represents the MSB and p^0 represents the LSB.
1. These are algebraic codes. Algebraic operations such as addition, subtraction,
multiplication and division become very simple.
2. The positions of the bits are represented with the help of the powers of p in a polynomial.
Let M = {m k-1, m k-2, … m1, m0} be the k bits of the message vector. It can be represented by the
message polynomial
M(p) = m k-1 p^(k-1) + m k-2 p^(k-2) + … + m1 p + m0
The non-systematic code polynomial is then
X(p) = M(p)G(p)
Here G(p) is the generating polynomial of degree q. For an (n,k) cyclic code, q = n - k
represents the number of parity bits. The generating polynomial is given as
G(p) = p^q + g q-1 p^(q-1) + … + g1 p + 1
If M1, M2, M3 … etc are the other message vectors, then the corresponding code vectors can be
calculated as
𝑋1(𝑝) = 𝑀1(𝑝)𝐺(𝑝)
𝑋2(𝑝) = 𝑀2(𝑝)𝐺(𝑝)
X3(p) = M3(p)G(p), and so on. All the above code vectors are in non-systematic form and they
satisfy the cyclic property. Note that G(p) is the same for all code vectors.
Problem
The generator polynomial of (7,4) cyclic code is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 find all code vectors
in nonsystematic form.
Solution :
Here n =7, k =4 , Number of check bits q= n-k = 7- 4 = 3 and Block length n= 2q-1 = 8-1
= 7.
For example, consider the message vector M = (m3, m2, m1, m0) = (0 1 0 1). Then the
message polynomial is
M(p) = p^2 + 1
X(p) = M(p)G(p)
X(p) = (p^2 + 1)(p^3 + p + 1)
X(p) = p^5 + p^3 + p^2 + p^3 + p + 1
X(p) = p^5 + (1 ⊕ 1)p^3 + p^2 + p + 1
X(p) = p^5 + p^2 + p + 1
Note that the degree of the above polynomial is at most n - 1 = 6. The code vector of the above polynomial is
X = (x6 x5 x4 x3 x2 x1 x0) = (0 1 0 0 1 1 1)
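This GF(2) polynomial multiplication is easy to sketch in Python (illustrative only; coefficients stored lowest power first):

def poly_mul_gf2(a, b):
    # multiply two mod-2 polynomials given as coefficient lists (lowest power first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

M = [1, 0, 1, 0]           # M(p) = 1 + p^2, i.e. message (m3 m2 m1 m0) = (0 1 0 1)
G = [1, 1, 0, 1]           # G(p) = 1 + p + p^3
X = poly_mul_gf2(M, G)     # X(p) = M(p) G(p)
print(X)                   # -> [1, 1, 1, 0, 0, 1, 0]  = 1 + p + p^2 + p^5
print(X[::-1])             # highest power first: [0, 1, 0, 0, 1, 1, 1] as in the worked example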
Cyclic properties:
𝑋𝑔′ = (1 0 1 1 0 0 0 )
Problem
The generator polynomial of (7,4) cyclic code is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 find all code vectors in
systematic form.
Solution :
Here n =7, k =4 , Number of check bits q= n-k = 7- 4 = 3 and Block length n= 2q-1 = 8-1 = 7.
For example, consider the message vector M = (m3, m2, m1, m0) = (0 1 0 1). Then the message polynomial is
M(p) = p^2 + 1. In systematic form the check bit polynomial is the remainder
C(p) = rem [ p^q M(p) / G(p) ]
p^q M(p) = p^3 (p^2 + 1) = p^5 + p^3
p^q M(p) / G(p) = (p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0) / (p^3 + p + 1)
Carrying out the mod-2 division, the remainder is
C(p) = p^2 + 0p + 0
Therefore the check bits are (c2 c1 c0) = (1 0 0) and the systematic code vector is
X = (m3 m2 m1 m0 : c2 c1 c0) = (0 1 0 1 1 0 0).
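The mod-2 polynomial division can also be sketched in Python (illustrative only; coefficients lowest power first):

def poly_mod_gf2(dividend, divisor):
    # remainder of mod-2 polynomial division; divisor has a non-zero leading coefficient
    r = dividend[:]
    for shift in range(len(r) - len(divisor), -1, -1):
        if r[shift + len(divisor) - 1]:
            for i, d in enumerate(divisor):
                r[shift + i] ^= d
    return r[:len(divisor) - 1]

q = 3
M = [1, 0, 1, 0]                 # M(p) = 1 + p^2, message (m3 m2 m1 m0) = (0 1 0 1)
G = [1, 1, 0, 1]                 # G(p) = 1 + p + p^3
C = poly_mod_gf2([0] * q + M, G) # remainder of p^q M(p) divided by G(p)
print(C)                         # -> [0, 0, 1]  i.e. C(p) = p^2, check bits (c2 c1 c0) = (1 0 0)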
Problem :
If M(p) is the message polynomial with k data bits and C(p) is the remainder polynomial obtained by
dividing p^(n-k) M(p) by a proper generator polynomial G(p), prove that X(p) = p^(n-k) M(p) + C(p) is a
cyclic code in systematic form.
Problem : -
Obtain the generator matrix of the (7,4) cyclic code with G(p) = p^3 + p + 1, and find the code
vectors.
There will be four row polynomials, p^3 G(p), p^2 G(p), pG(p) and G(p), corresponding to the four rows
of the (non-systematic) generator matrix:
G = [1 0 1 1 0 0 0
     0 1 0 1 1 0 0
     0 0 1 0 1 1 0
     0 0 0 1 0 1 1]
For example, for the message vector (m3, m2, m1, m0) = (1 0 0 1),
X = (m3, m2, m1, m0) G = (1 0 1 1 0 0 0) ⊕ (0 0 0 1 0 1 1) = (1 0 1 0 0 1 1)
In systematic form the generator matrix is
G k×n = [Ik : Pk×q]
The tth row of this matrix is represented in polynomial form as
p^(n-t) / G(p) = Qt(p) + Rt(p) / G(p)
i.e. p^(n-t) + Rt(p) = Qt(p) G(p), where the remainder Rt(p) gives the parity bits of the tth row.
Find out the generator matrix for a systematic (7,4) cyclic code if G(p) = p^3 + p + 1.
Also find the parity check matrix.
Solution :
p^6 + R1(p) = Q1(p)(p^3 + p + 1), and similarly for the other rows; the parity bits of row t are the
remainder of p^(n-t) divided by G(p). Here
p^6 mod G(p) = p^2 + 1,  p^5 mod G(p) = p^2 + p + 1,  p^4 mod G(p) = p^2 + p,  p^3 mod G(p) = p + 1
so that
G k×n = [Ik : Pk×q] = [1 0 0 0 : 1 0 1
                       0 1 0 0 : 1 1 1
                       0 0 1 0 : 1 1 0
                       0 0 0 1 : 0 1 1]
For example, the message vector (1 1 0 0) gives the code vector (1 1 0 0 0 1 0).
The P submatrix is
P = [1 0 1
     1 1 1
     1 1 0
     0 1 1] 4×3
The parity check matrix is given by
H = [P^T : Iq] q×n = [1 1 1 0 : 1 0 0
                      0 1 1 1 : 0 1 0
                      1 1 0 1 : 0 0 1]
Problem : -
Design the encoder for (7,4) cyclic code generator polynomial is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 and verify
its operation for any message vector.
Solution:
Let us verify the operation for the message vector (m3, m2, m1, m0) = (1 1 0 0).
Every transmitted code vector satisfies X(p) = M(p)G(p) (or, in systematic form, X(p) = p^q M(p) + C(p)),
so X(p) is always exactly divisible by G(p). At the receiver the received polynomial Y(p) is divided by G(p):
Y(p)/G(p) = Quotient + Remainder/G(p)
If Y(p) = X(p), there are no errors and the remainder is zero.
If Y(p) ≠ X(p), there are errors and
Y(p)/G(p) = Q(p) + R(p)/G(p)
R(p) is a polynomial of degree less than or equal to q - 1. Multiplying both sides of the above equation by G(p),
Y(p) = Q(p)G(p) + R(p)
The error pattern E depends upon the remainder R: for every remainder R there is a specific error vector. So R is
the syndrome vector S, i.e. R(p) = S(p), and
Y(p)/G(p) = Q(p) + S(p)/G(p)
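As an illustrative sketch (not from the notes), the syndrome of a cyclic code can be computed as the remainder of Y(p) divided by G(p), here with polynomials packed into Python integers (bit i = coefficient of p^i):

def gf2_mod(y, g):
    # remainder of y(p) / g(p) over GF(2)
    while y.bit_length() >= g.bit_length():
        y ^= g << (y.bit_length() - g.bit_length())
    return y

G = 0b1011                    # G(p) = p^3 + p + 1
X = 0b0101100                 # systematic codeword (0 1 0 1 1 0 0) from the problem above
Y = X ^ 0b0010000             # introduce an error at the p^4 position
print(bin(gf2_mod(X, G)))     # -> 0b0   : zero syndrome, no error
print(bin(gf2_mod(Y, G)))     # -> 0b110 : non-zero syndrome, error detected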
Decoder
Operation :
Initially all the shift register contents are zero and the switch is closed in position 1.
The received vector Y is shifted bit by bit into the shift register.
Flipflops keep on changing the values according to input bits of Y and values of g1, g2 etc.
After all the bits of Y are shifted, q flipflops of shift register contains the q bit Syndrome vector.
The switch is then closed to position 2 and clock pulses are applied to the shift register.
The output is a syndrome vector S = (Sq-1, Sq-2, ….S1, S0).
• (n,1) Repetition codes. High coding gain (minimum distance always n), but very low rate: 1/n.
• (n,k) Hamming codes. Minimum distance always 3, thus they can detect 2 errors and correct one
error. n = 2^m - 1, k = n - m.
• Maximum-length codes. For every integer k ≥ 3 there exists a maximum-length code (n,k) with
n = 2^k - 1 and dmin = 2^(k-1).
• BCH codes. For every pair of integers m and t there exists a code with n = 2^m - 1 and n - k ≤ mt,
where t is the error correction capability.
• (n,k) Reed-Solomon (RS) codes. These work with k symbols of m bits each, which are encoded
to yield code words of n symbols. For these codes n = 2^m - 1 and dmin = n - k + 1.
• Nowadays BCH and RS codes are very popular due to their large dmin, the large number of available codes,
and easy generation.
Advantages :
Disadvantages:
• Error correction is complicated since the combinational logic circuits in error detector are
complex.
Convolutional Code:
Introduction:
Convolutional codes offer an approach to error control coding substantially different from that of
block codes. A convolutional encoder:
• encodes the entire data stream into a single codeword.
• maps information to code bits sequentially by convolving a sequence of
information bits with “generator” sequences.
• does not need to segment the data stream into blocks of fixed size
(Convolutional codes are often forced to block structure by periodic truncation).
• Is a machine with memory.
– This fundamental difference imparts a different nature to the design and
evaluation of the code.
• Block codes are based on algebraic/combinatorial techniques.
• Convolutional codes are based on construction techniques.
• Easy implementation using a linear finite-state shift register.
• A convolutional code is specified by three parameters (n, k, K).
– Rc = k / n is the coding rate, determining the number of data bits per coded bit.
– K is the constraint length of the convolutional code (the encoder has K-1
memory elements).
• The performance of a convolutional code depends on the coding rate and the
constraint length
– Longer constraint length K
• More powerful code
• More coding gain
– Coding gain: the difference between the signal to noise ratio (SNR) levels of the uncoded
and the coded system required to reach the same bit error rate (BER).
• More complex decoder
• More decoding delay
– Smaller coding rate Rc=k/n
• More powerful code due to extra redundancy
• Less bandwidth efficiency
Vector representation:
– Define n vectors, each with Kk elements (one vector for each modulo-2 adder). The i-th element
in each vector, is “1” if the i-th stage in the shift register is connected to the corresponding modulo-2
adder, and “0” otherwise.
Polynomial representation :
– Define n generator polynomials, one for each modulo-2 adder. Each polynomial is of
degree Kk-1 or less and describes the connection of the shift registers to the corresponding modulo-
2 adder.
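For illustration (not part of the original notes), the sketch below encodes a bit stream with a rate-1/2, K = 3 convolutional encoder, assuming the commonly used generator connections (1,1,1) and (1,0,1):

def conv_encode(bits, generators=((1, 1, 1), (1, 0, 1))):
    # rate-1/n feedforward encoder; each generator lists the taps on (current, previous, ...) bits
    K = len(generators[0])              # constraint length
    state = [0] * (K - 1)               # shift register contents, most recent bit first
    out = []
    for b in bits:
        window = [b] + state            # current input followed by the register
        for g in generators:
            out.append(sum(gi & wi for gi, wi in zip(g, window)) % 2)
        state = window[:K - 1]          # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))        # -> [1, 1, 1, 0, 0, 0, 0, 1]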
• A state diagram is simply a graph of the possible states of the encoder and the possible
transitions from one state to another. It can be used to show the relationship between the
encoder state, input, and output.
• The state diagram has 2^((K-1)k) nodes, each node standing for one encoder state.
• Nodes are connected by branches
– Every node has 2^k branches entering it and 2^k branches leaving it.
– The branches are labeled with c, where c is the output.
– When k=1
• The solid branch indicates that the input bit is 0.
• The dotted branch indicates that the input bit is 1.
– Split the state a (all-zero state) into initial and final states, remove the self loop
– Label each branch by the branch gain D^i, where i denotes the Hamming weight
of the n encoded bits on that branch
• Each path connecting the initial state and the final state represents a non-zero codeword
that diverges from and re-emerges with state a (all-zero state) only once.
Trellis Diagram
• Trellis diagram is an extension of state diagram which explicitly shows the passage of time.
– All the possible states are shown for each instant of time.
– Time is indicated by a movement to the right.
– The input data bits and output code bits are represented by a
• unique path through the trellis.
– The lines are labeled with c, where c is the output.
– After the second stage, each node in the trellis has 2^k incoming paths and 2^k outgoing paths.
– When k=1
• The solid line indicates that the input bit is 0.
• The dotted line indicates that the input bit is 1.
• Given the received code word r, determine the most likely path through the trellis.
(maximizing p(r|c'))
– Compare r with the code bits associated with each path
– Pick the path whose code bits are “closest” to r
– Measure distance using either Hamming distance for hard-decision decoding or
Euclidean distance for soft-decision decoding.
– Once the most likely path has been selected, the estimated data bits can be read
from the trellis diagram.
Implementation:
1. Initialization:
– Let Mt(i) be the path metric at the i-th node, the t-th stage in trellis
– Large metrics corresponding to likely paths; small metrics corresponding to
unlikely paths
– Initialize the trellis, set t=0 and M0(0)=0;
2. At stage (t+1),
– Branch metric calculation
•Compute the metric for each branch connecting the states at time t to states
at time (t+1)
• The metric is related to the likelihood probability between the
received bits and the code bits corresponding to that branch:
p(r(t+1)|c'(t+1))
– Trellis update
• At each state, pick the most likely path which has the largest metric and delete
the other paths
•Set M(t+1)(i)= the largest metric corresponding to the state i
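A minimal hard-decision Viterbi decoder can be sketched in Python (illustrative only, for the same rate-1/2, K = 3 encoder with generators (1,1,1) and (1,0,1) used in the earlier sketch); here the branch metric is the Hamming distance, which is minimized, equivalent to maximizing the likelihood for hard decisions:

def viterbi_decode(received, generators=((1, 1, 1), (1, 0, 1))):
    K = len(generators[0])
    n_out = len(generators)
    n_states = 1 << (K - 1)
    INF = float("inf")

    def branch(state, bit):
        # next state and encoder output for one input bit from a given state
        window = [bit] + [(state >> (K - 2 - i)) & 1 for i in range(K - 1)]
        out = [sum(g[i] & window[i] for i in range(K)) % 2 for g in generators]
        return (bit << (K - 2)) | (state >> 1), out

    steps = [received[i:i + n_out] for i in range(0, len(received), n_out)]
    metric = [0] + [INF] * (n_states - 1)        # start from the all-zero state
    paths = [[] for _ in range(n_states)]
    for r in steps:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                nxt, out = branch(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(out, r))
                if d < new_metric[nxt]:          # keep only the survivor path
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))  # -> [1, 0, 1, 1], the input used in the encoder sketch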
The error-event probability is bounded by
Pe ≤ Σ (d = dfree to ∞) a_d P2(d)
where a_d is the number of paths at Hamming distance d from the all-zero path and P2(d) is the pairwise error
probability of such a path. The BER is obtained by multiplying the error-event probability by the number of data
bit errors associated with each error event:
Pb ≤ Σ (d = dfree to ∞) f(d) a_d P2(d)
where f(d) is the number of data bit errors corresponding to the erroneous path with Hamming distance d.
Turbo Codes:
• Turbo codes were proposed by Berrou and Glavieux in the 1993 International
Conference in Communications.
• Performance within 0.5 dB of the channel capacity limit for BPSK was demonstrated.
• Features of turbo codes
• Parallel concatenated coding
• Recursive convolutional encoders
• Pseudo-random interleaving
• Iterative decoding
Pseudo-random Interleaving:
Solution:
• Make the code appear random, while maintaining enough structure to permit
decoding.
• This is the purpose of the pseudo-random interleaver.
• Turbo codes possess random-like properties.
• However, since the interleaving pattern is known, decoding is possible.
In coded systems:
• Performance is dominated by low weight code words.
• A “good” code: will produce low weight outputs with very low probability.
• An RSC code: Produces low weight outputs with fairly low probability.
• However, some inputs still cause low weight outputs.
• Because of the interleaver: The probability that both encoders have inputs that cause
low weight outputs is very low.
• Therefore the parallel concatenation of both encoders will produce a “good” code.
Iterative Decoding:
• There is one decoder for each elementary encoder.
• Each decoder estimates the a posteriori probability (APP) of each data bit.
• The APP’s are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.
• Performance generally improves from iteration to iteration, but follows a law of
diminishing returns
The Turbo-Principle:
Turbo codes get their name because the decoder uses feedback, like a turbo engine
• Long latency.
• Poor performance at very low BER.
• Because turbo codes operate at very low SNR, channel estimation and tracking is a critical
issue.
• The principle of iterative or “turbo” processing can be applied to other problems.
• Turbo-multiuser detection can improve performance of coded multiple-access systems.
PART A ( 2 marks)
1. Draw the code tree of a Convolutional code of code rate r=1/2 and Constraint length of K=3
starting from the state table and state diagram for an encoder which is commonly used.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the code Tree
2. Draw the trellis diagram of a Convolutional code of code rate r=1/2 and Constraint length of
K=3 starting from the state table and state diagram for an encoder which is commonly used.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the trellis diagram
3. Decode the given sequence 11 01 01 10 01 of a convolutional code with a code rate of r=1/2
and constraint length K=3, using the Viterbi decoding algorithm.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the code Tree
d. Decode the given sequence using trellis diagram.
4. Explain the construction of Block Code and explain how error syndrome is calculated
a. Representation of Block Code.
b. Generator Matrix.
c. Generation of Codewords.
d. Generation of Parity Check Matrix.
e. Calculation OF Error Syndrome.
5. Consider a (6,3) linear block code defined by the generator matrix
a. Determine if the code is a Hamming code. Find the parity check matrix in
systematic form.
b. Find the encoding table for the linear block code.
c. What is the minimum Hamming distance, and how many errors can the code detect and correct?
d. Find the decoding table for the linear block code.
e. Draw the hardware encoder diagram
6. The parity check bits of an (8,4) block code are given by
C1 = m1 + m2 + m4, C2 = m1 + m2 + m3, C3 = m1 + m3 + m4 and C4 = m2 + m3 + m4.
a. Find the generator matrix and parity check matrix for this code.
b. Find the minimum weight of this code.
c. Find the error detecting capabilities of this code.
7. Design the encoder for the (7,4) cyclic code generated by G(p) = p^3 + p + 1 and verify
its operation for any message vector.
8. Sketch the encoder and syndrome calculator for the generator polynomial
g(x) = 1 + x^2 + x^3 and obtain the syndrome for the received code word 1001011.