
TE EXTC Digital Communication

Sem - V

Module-3 Error Correction Codes

Introduction :

Errors are introduced into the data when it passes through the channel: channel noise interferes with the signal and reduces the received signal power. Transmission power and channel bandwidth are the main parameters in the transmission of data over a channel. Together with the power spectral density of the channel noise, these parameters determine the signal-to-noise ratio (SNR), and the SNR in turn determines the probability of error. Using coding techniques, the SNR required for a fixed probability of error is reduced.

Fig. Digital Communication Systems with channel encoding

Channel encoder :

The channel encoder adds redundant bits to the message bits. The encoded signal is then transmitted over the noisy channel.

Channel decoder :

It identifies the redundant bits and uses them to detect and correct errors in the message bits, if any. Thus the number of errors introduced by channel noise is minimized by the encoder and decoder. Because of the redundant bits the overall data rate increases, so the channel has to accommodate this higher data rate. The system also becomes slightly more complex because of the coding techniques.


Types of codes:

(1) Block Codes :

These codes consist of 'n' bits in one block or codeword. The codeword consists of 'k' message bits and (n-k) redundant bits. Such block codes are called (n, k) block codes.

(2) Convolutional Codes :

The coding operation is a discrete-time convolution of the input sequence with the impulse response of the encoder. The convolutional encoder accepts the message bits continuously and generates the encoded sequence continuously.

The codes can also be classified as linear or nonlinear codes

a. Linear code: If any two codewords of a linear code are added by modulo-2 arithmetic, the result is another codeword of the code.

b. Nonlinear code: The addition of two codewords does not necessarily produce another codeword.

Discrete memoryless channel:

A discrete channel consists of an input alphabet X, an output alphabet Y, and a probability transition matrix p(y|x). The channel is said to be memoryless if the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs or outputs. The "information" channel capacity of a discrete memoryless channel is

C = max over p(x) of I(X; Y)

where the maximum is taken over all possible input distributions p(x).

Methods of error correction

Forward Error Correction (FEC)

- Coding designed so that errors can be corrected at the receiver


- Appropriate for delay sensitive and one-way transmission (e.g.,
broadcast TV) of data
- Two main types, namely block codes and Convolutional codes.


Error Correction with retransmission or Automatic repeat request (ARQ)

- The decoder checks the received sequence.
- When it detects an error, it discards that part of the sequence and requests the transmitter for a retransmission.
- The transmitter then retransmits the part of the sequence in which the error was detected.
- Hence the decoder does not correct errors; it only detects them.
- ARQ has a low probability of error but it is slow.

Types of errors

Random errors: These errors are due to white Gaussian noise in the channel. Errors generated in a particular interval do not affect the performance of the system in subsequent intervals; they are totally uncorrelated.

Burst errors: These errors are due to impulsive noise in the channel, caused by lightning and switching transients. Errors generated in a particular interval do affect the performance of the system in subsequent intervals.

Important words in error control coding techniques:

Codeword: The encoded block of ‘n’ bits is called a codeword. It contains message bits and
redundant bits.
Block length: The number of bits ‘n’ after coding is called the block length of the
code.
Code rate: The ratio of the number of message bits (k) to the number of encoder output bits (n) is called the code rate. It is defined by

r = k/n,   0 < r < 1

Channel data rate: It is the bit rate at the output of the encoder. If the bit rate at the input of the encoder is Rs, then the channel data rate will be

Ro = (n/k) Rs
Code vectors: An 'n'-bit codeword can be visualised in an n-dimensional space as a vector whose elements are the bits of the codeword. For 3-bit code vectors there are 2^3 = 8 different codewords, as listed below.


Sl.No   b2 = Z   b1 = Y   b0 = X
1       0        0        0
2       0        0        1
3       0        1        0
4       0        1        1
5       1        0        0
6       1        0        1
7       1        1        0
8       1        1        1
Table. Code vectors in 3-dimensional space

Fig. Code vectors representing 3-bit codewords

Hamming Distance:

• Error control capability is determined by the Hamming distance


• The Hamming distance between two codewords is equal to the number of
differences between them, e.g., 10011011, 11010010 have a Hamming distance = 3
• Alternatively, can compute by adding codewords (mod 2) =01001001 (now count up the
ones)
• The maximum number of detectable errors is

𝑡𝑑𝑒𝑡 = 𝑑𝑚𝑖𝑛 − 1

• The maximum number of correctable errors is given by

tcorr = ⌊(dmin − 1)/2⌋

where dmin is the minimum Hamming distance between any two codewords and ⌊ ⌋ denotes the largest integer not exceeding the argument.


• From the example, tcorr = (3−1)/2 = 1, so one bit error can be corrected.
• A two-bit error will cause either a miscorrection or a failed correction.
• The number of errors that can be detected is tdet = dmin − 1.
• From the example, tdet = 3 − 1 = 2.
• All two-bit errors will be detected.
• A three-bit error may cause a failed detection, since it can change one codeword into another. (A quick numerical check of these figures is sketched below.)
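The following Python sketch (an illustration added here, not part of the derivation) verifies these numbers for the example pair of codewords 10011011 and 11010010 used above.

# Minimal sketch: Hamming distance and the resulting error-control capability
# for the example codewords used above.

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = hamming_distance("10011011", "11010010")   # expected: 3
t_det = d_min - 1                                  # detectable errors
t_corr = (d_min - 1) // 2                          # correctable errors (floor)
print(d_min, t_det, t_corr)                        # prints: 3 2 1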

Code efficiency :
Code efficiency = (message bits in the block)/(transmitted bits in the block) = k/n

Weight of the code:

The number of non-zero elements in the transmitted code vector is called the weight of the code. It is denoted by w(X), where X is the code vector.

Block Codes: ( consider only binary data)

• Data is grouped into blocks of length k bits (dataword or message)


• Each message is coded into blocks of length n bits (codeword), where in general
n>k
• This is known as (n,k) block code
• A vector notation is used for the message and codewords,
• Message M = (m1 m2 … mk)
• Codeword C = (c1 c2 … cn)

Linear Block Codes:

• Sum of two codewords will produce another codeword.


• It shows that any code vector can be expressed as a linear combination of other code vectors.
• Consider a code vector having message bits m1, m2, m3, …, mk and check bits c1, c2, c3, …, cq; then the code vector can be written as

X = (m1, m2, m3, …, mk, c1, c2, c3, …, cq)

- where q = n − k is the number of redundant (check) bits added by the encoder.

• The code vector can also be written as X = (M|C)


• M = k bit message vector
• C = q bit check vector (Check bits play the role of error correction and detection)
• The code vector can be represented as X = MG
• X = code vector of size 1×n (n bits)
• M = message vector of size 1×k (k bits)
• G = generator matrix of size k×n


In matrix form, [X]1×n = [M]1×k [G]k×n, and the generator matrix G can be represented as

[G]k×n = [Ik : Pk×q]

where Ik is the k×k identity matrix and P is a k×q submatrix.

The check vector can be represented as C = MP.

The expanded form is

[C1 C2 C3 … Cq]1×q = [M1 M2 M3 … Mk]1×k [ P11 P12 … P1q
                                           P21 P22 … P2q
                                           …
                                           Pk1 Pk2 … Pkq ]k×q

By solving the above equation the check bits can be obtained (all additions are modulo-2):

C1 = M1P11 ⊕ M2P21 ⊕ M3P31 ⊕ … ⊕ MkPk1

C2 = M1P12 ⊕ M2P22 ⊕ M3P32 ⊕ … ⊕ MkPk2

C3 = M1P13 ⊕ M2P23 ⊕ M3P33 ⊕ … ⊕ MkPk3, and so on.

Problem

The generator matrix for a (6,3) block code is given below. Find all code vectors of this code.

G = [ 1 0 0 : 0 1 1
      0 1 0 : 1 0 1
      0 0 1 : 1 1 0 ]
Solution :

(i) Determination of P Submatrix from generator matrix

We know that [G]k×n = [Ik : Pk×q]. Here

Ik = [ 1 0 0
       0 1 0
       0 0 1 ]

Pk×q = [ 0 1 1
         1 0 1
         1 1 0 ]


(ii) To obtain equations for check bits

Here k = 3, q = 3 and n = 6, so the message vector is 3 bits long. Hence there are 2^3 = 8 message vectors, as shown in the table.

Sl.No Message vector


M1 M2 M3
1 0 0 0
2 0 0 1
3 0 1 0
4 0 1 1
5 1 0 0
6 1 0 1
7 1 1 0
8 1 1 1

Then the check vector is

[C1 C2 C3] = [M1 M2 M3] [ 0 1 1
                          1 0 1
                          1 1 0 ]

C1 = M1·0 ⊕ M2·1 ⊕ M3·1 = M2 ⊕ M3

C2 = M1·1 ⊕ M2·0 ⊕ M3·1 = M1 ⊕ M3

C3 = M1·1 ⊕ M2·1 ⊕ M3·0 = M1 ⊕ M2

(iii) To determine check bits and code vectors for every message vector

For m1, m2, m3 = (000)

𝐶1 = 𝑀2 ⊕ 𝑀3 = 0 ⊕ 0 = 0

𝐶2 = 𝑀1 ⊕ 𝑀3 = 0 ⊕ 0 = 0

𝐶3 = 𝑀1 ⊕ 𝑀2 = 0 ⊕ 0 = 0 ie [𝐶1 𝐶2 𝐶3] = (0 0 0)

For (001)

𝐶1 = 𝑀2 ⊕ 𝑀3 = 0 ⊕ 1 = 1

𝐶2 = 𝑀1 ⊕ 𝑀3 = 0 ⊕ 1 = 1

𝐶3 = 𝑀1 ⊕ 𝑀2 = 0 ⊕ 0 = 0 ie [𝐶1 𝐶2 𝐶3] = (1 1 0)


Sl.No   Message bits   Check bits                              Complete code vector
        M1 M2 M3       C1 = M2⊕M3   C2 = M1⊕M3   C3 = M1⊕M2    M1 M2 M3 C1 C2 C3
1 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 1 1 1 0 0 0 1 1 1 0
3 0 1 0 1 0 1 0 1 0 1 0 1
4 0 1 1 0 1 1 0 1 1 0 1 1
5 1 0 0 0 1 1 1 0 0 0 1 1
6 1 0 1 1 0 1 1 0 1 1 0 1
7 1 1 0 1 1 0 1 1 0 1 1 0
8 1 1 1 0 0 0 1 1 1 0 0 0
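As a cross-check, the table above can be reproduced with a few lines of Python. This is a sketch added for illustration; it uses the generator matrix G derived in step (i).

import numpy as np
from itertools import product

# Generator matrix of the (6,3) code: G = [I3 : P], with P from step (i).
G = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])

# Encode all 2^3 message vectors as X = M.G (modulo-2) and print them.
for M in product([0, 1], repeat=3):
    X = np.mod(np.array(M) @ G, 2)
    print(M, '->', ''.join(map(str, X)))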

Parity Check Matrix:

For every block code the parity check matrix can be defined as

H = [PT : Iq]q×n

The submatrix P is represented as

P = [ P11 P12 … P1q
      P21 P22 … P2q
      …
      Pk1 Pk2 … Pkq ]k×q

PT = [ P11 P21 … Pk1
       P12 P22 … Pk2
       …
       P1q P2q … Pkq ]q×k

[H]q×n = [ P11 P21 … Pk1 : 1 0 … 0
           P12 P22 … Pk2 : 0 1 … 0
           …
           P1q P2q … Pkq : 0 0 … 1 ]q×n

Hamming codes: These are (n, k) linear block codes that satisfy the following conditions.

1. Number of check bits q ≥ 3.


2. Block length n= 2q-1

3. Number of message bits k = n-q


4. Minimum distance dmin = 3


We know that the code rate is

r = k/n,   0 < r < 1

For a Hamming code,

r = k/n = (n − q)/n = 1 − q/n = 1 − q/(2^q − 1)

Error detection and correction capabilities of Hamming codes

Since dmin = 3 for a Hamming code, it can detect double errors and correct single errors.

Problem

The parity check matrix of a particular (7,4) linear block code is given by

[H] = [ 1 1 1 0 : 1 0 0
        1 1 0 1 : 0 1 0
        1 0 1 1 : 0 0 1 ]

1. Find the generator Matrix G


2. List all the code vectors
3. What is the minimum distance between code vectors
4. How many errors can be detected and how many errors can be corrected.

Solution :

Here n =7, k =4

1. Number of check bits q= n-k = 7- 4 = 3.


2. Block length n = 2^q − 1 = 8 − 1 = 7. This shows that the given code is a Hamming code.

(1) To determine the P Submatrix

The parity check matrix of size q×n is given, with q = 3, n = 7, k = 4.

[H]3×7 = [ P11 P21 P31 P41 : 1 0 0
           P12 P22 P32 P42 : 0 1 0
           P13 P23 P33 P43 : 0 0 1 ]3×7

Since H = [PT : I3], the first four columns of H give

PT = [ 1 1 1 0
       1 1 0 1
       1 0 1 1 ]

Therefore

P = [ 1 1 1
      1 1 0
      1 0 1
      0 1 1 ]


(2) To obtain generator matrix G

[G]k×n = [Ik : Pk×q]   i.e.   [G]4×7 = [I4 : P4×3]

G = [ 1 0 0 0 : 1 1 1
      0 1 0 0 : 1 1 0
      0 0 1 0 : 1 0 1
      0 0 0 1 : 0 1 1 ]

(3) To find all Code words

The check bits of each codeword are obtained from C = MP:

[C1 C2 … Cq]1×q = [M1 M2 … Mk]1×k [ P11 P12 … P1q
                                     P21 P22 … P2q
                                     …
                                     Pk1 Pk2 … Pkq ]k×q

Here, with k = 4 and q = 3,

[C1 C2 C3]1×3 = [M1 M2 M3 M4]1×4 [ 1 1 1
                                   1 1 0
                                   1 0 1
                                   0 1 1 ]4×3

C1 = M1·1 ⊕ M2·1 ⊕ M3·1 ⊕ M4·0 = M1 ⊕ M2 ⊕ M3

C2 = M1·1 ⊕ M2·1 ⊕ M3·0 ⊕ M4·1 = M1 ⊕ M2 ⊕ M4

C3 = M1·1 ⊕ M2·0 ⊕ M3·1 ⊕ M4·1 = M1 ⊕ M3 ⊕ M4

For example if (m1, m2, m3, m4) = (1011)

𝐶1 = 𝑀1 ⊕ 𝑀2 ⊕ 𝑀3 = 1 ⊕ 0 ⊕ 1 = 0

𝐶2 = 𝑀1 ⊕ 𝑀2 ⊕ 𝑀4 = 1 ⊕ 0 ⊕ 1 = 0

𝐶3 = 𝑀1 ⊕ 𝑀3 ⊕ 𝑀4 = 1 ⊕ 1 ⊕ 1 = 1

Sl.No   Message bits   Check bits   Complete code vector     Weight of the
        M1 M2 M3 M4    C1 C2 C3     M1 M2 M3 M4 C1 C2 C3     code vector
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 1 0 1 1 0 0 0 1 0 1 1 3
3 0 0 1 0 1 0 1 0 0 1 0 1 0 1 3
4 0 0 1 1 1 1 0 0 0 1 1 1 1 0 4
5 0 1 0 0 1 1 0 0 1 0 0 1 1 0 3
6 0 1 0 1 1 0 1 0 1 0 1 1 0 1 4
7 0 1 1 0 0 1 1 0 1 1 0 0 1 1 4
8 0 1 1 1 0 0 0 0 1 1 1 0 0 0 3
9 1 0 0 0 1 1 1 1 0 0 0 1 1 1 4
10 1 0 0 1 1 0 0 1 0 0 1 1 0 0 3
11 1 0 1 0 0 1 0 1 0 1 0 0 1 0 3
12 1 0 1 1 0 0 1 1 0 1 1 0 0 1 4
13 1 1 0 0 0 0 1 1 1 0 0 0 0 1 3
14 1 1 0 1 0 1 0 1 1 0 1 0 1 0 4
15 1 1 1 0 1 0 0 1 1 1 0 1 0 0 4
16 1 1 1 1 1 1 1 1 1 1 1 1 1 1 7


(4) Minimum distance between code vectors

There are 2^k = 2^4 = 16 code vectors, listed above along with their weights. The smallest weight of any non-zero code vector is 3; therefore the minimum distance is dmin = 3.

(5) Error correction and Detection capabilities

dmin =3

dmin ≥ s + 1, so 3 ≥ s + 1 and therefore s ≤ 2; thus up to two errors can be detected. And dmin ≥ 2t + 1, so 3 ≥ 2t + 1 and therefore t ≤ 1; thus one error can be corrected.

Encoder of (7,4) Hamming code

Definition of Syndrome:

When errors are present in the received vector Y, it is no longer a valid code vector and it does not satisfy the parity check property:

if XH^T = (0 0 0 … 0) and YH^T = (0 0 0 … 0),

then Y is a valid code vector and is taken as X, i.e. no errors are detected; whereas

if YH^T ≠ (0 0 0 … 0), then X ≠ Y, i.e. some errors are present.


The non-zero output of the product YH^T is called the syndrome, and it is used to detect the errors in Y. The syndrome, represented by S, can be written as

[S]1×q = [Y]1×n [H^T]n×q,   where Y = X ⊕ E and X = Y ⊕ E.

Relationship between syndrome vector (S)and Error vector (E)

S = YHT = (𝑋 ⊕ 𝐸)𝐻𝑇 = 𝑋𝐻𝑇 ⊕ 𝐸𝐻𝑇

𝑆 = 𝐸𝐻𝑇 𝑠𝑖𝑛𝑐𝑒 𝑋𝐻𝑇 = 0

Detecting error with the help of syndrome: Problem

The parity check matrix of a (7,4) block code is given as

[H] = [ 1 1 1 0 : 1 0 0
        0 1 1 1 : 0 1 0
        1 1 0 1 : 0 0 1 ]

Calculate the syndrome vector.

(1) To determine the error patterns for single-bit errors

The syndrome is a 3-bit vector, since q = 3. Therefore there are 2^q − 1 = 7 non-zero syndromes, and the 7 single-bit error patterns can each be represented by one of these non-zero syndromes. The error vector E is an n-bit vector representing the error pattern.

Sl.No   Bit in error   Bits of error vector E (the non-zero bit shows the error position)
1 1st 1 0 0 0 0 0 0
2 2nd 0 1 0 0 0 0 0
3 3 rd 0 0 1 0 0 0 0
4 4th 0 0 0 1 0 0 0
5 5 th 0 0 0 0 1 0 0
6 6 th 0 0 0 0 0 1 0
7 7th 0 0 0 0 0 0 1


(2) Calculation of the syndrome

[S]1×q = [Y]1×n [H^T]n×q   and   [S]1×3 = [Y]1×7 [H^T]7×3

H^T = [ 1 0 1
        1 1 1
        1 1 0
        0 1 1
        1 0 0
        0 1 0
        0 0 1 ]

For example, the syndrome of a first-bit error is

S = EH^T = (1 0 0 0 0 0 0) [ 1 0 1
                             1 1 1
                             1 1 0
                             0 1 1
                             1 0 0
                             0 1 0
                             0 0 1 ]
𝑆 =(1⊕0⊕0⊕0⊕0⊕0⊕0 0⊕0⊕0⊕0⊕0⊕0⊕0 1⊕0⊕0⊕0⊕0⊕0⊕0)

S = (1 0 1)

The syndrome of a second-bit error is

S = EH^T = (0 1 0 0 0 0 0) [ 1 0 1
                             1 1 1
                             1 1 0
                             0 1 1
                             1 0 0
                             0 1 0
                             0 0 1 ]
𝑆 =(0⊕1⊕0⊕0⊕0⊕0⊕0 0⊕1⊕0⊕0⊕0⊕0⊕0 0⊕1⊕0⊕0⊕0⊕0⊕0)

S = (1 1 1)


The syndrome vectors are the rows of H^T:

Sl.No   Error vector E (the non-zero bit shows the error)   Syndrome vector
1 0 0 0 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0 1 0 1 1st of HT
3 0 1 0 0 0 0 0 1 1 1 2nd of HT
4 0 0 1 0 0 0 0 1 1 0 3rd of HT
5 0 0 0 1 0 0 0 0 1 1 4th of HT
6 0 0 0 0 1 0 0 1 0 0 5th of HT
7 0 0 0 0 0 1 0 0 1 0 6th of HT
8 0 0 0 0 0 0 1 0 0 1 7th of HT

Error correction using the syndrome vector

Let us consider the above (7,4) block code and the code vector

X = (1 0 0 1 1 1 0)

Let an error be created in the 3rd bit, so that the received vector is

Y = (1 0 (1) 1 1 1 0)

Now error correction can be done by the following steps:

(1) Calculate the syndrome S = YH^T.
(2) Find the row of H^T that is the same as S.
(3) If S matches the P-th row of H^T, the P-th bit is in error; hence write the corresponding error vector E.
(4) Obtain the corrected vector from X = Y ⊕ E.

(1) To obtain the syndrome vector

S = YH^T = (1 0 1 1 1 1 0) [ 1 0 1
                             1 1 1
                             1 1 0
                             0 1 1
                             1 0 0
                             0 1 0
                             0 0 1 ]  = (1 1 0)
(2) To determine the row of HT which is same as of S = 3rd row
(3) To determine E ; E = (0 0 1 0 0 0 0)


(4) Corrected vector: X = Y ⊕ E

𝑋 = (1 0 1 1 1 1 0) ⊕ ( 0 0 1 0 0 0 0 ) = ( 1 0 0 1 1 1 0 )

Thus the single bit error can be corrected using syndrome.
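These four steps are easy to automate. The Python sketch below (added for illustration) performs single-error correction for the (7,4) code of this example, using the H matrix given above.

import numpy as np

# Parity check matrix of the (7,4) code used in this example, H = [P^T : I3].
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
HT = H.T                                     # rows of H^T are the single-error syndromes

def correct_single_error(Y):
    """Return (corrected codeword, syndrome), assuming at most one bit error."""
    Y = np.array(Y)
    S = np.mod(Y @ HT, 2)                    # step (1): S = Y.H^T
    if not S.any():
        return Y, S                          # zero syndrome: accept Y as the codeword
    for p, row in enumerate(HT):             # step (2): find the matching row of H^T
        if np.array_equal(S, row):
            E = np.zeros(7, dtype=int)
            E[p] = 1                         # step (3): error vector with the p-th bit set
            return np.mod(Y ^ E, 2), S       # step (4): X = Y xor E
    return Y, S                              # no single-bit match (more than one error)

X, S = correct_single_error([1, 0, 1, 1, 1, 1, 0])
print(S, X)                                  # syndrome (1 1 0), corrected X = 1 0 0 1 1 1 0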

If a double error occurs:

Consider the same code vector X = (1 0 0 1 1 1 0) and let errors occur in the 3rd and 4th bits, so that

Y = (1 0 (1) (0) 1 1 0)

S = YH^T = (1 0 1 0 1 1 0) [ 1 0 1
                             1 1 1
                             1 1 0
                             0 1 1
                             1 0 0
                             0 1 0
                             0 0 1 ]  = (1 0 1)

S equals the 1st row of H^T, therefore E = (1 0 0 0 0 0 0), and the error detection and correction go wrong. However, the probability of occurrence of multiple errors is small compared to single errors. To handle double errors, extended Hamming codes are used: one extra overall parity bit is added, which increases dmin to 4 so that single errors are corrected and double errors are detected. We know that for an (n, k) block code there are 2^q − 1 distinct non-zero syndromes. There are nC1 = n single error patterns, nC2 double error patterns, nC3 triple error patterns, and so on. Therefore, to correct up to t errors,

2^q − 1 ≥ nC1 + nC2 + nC3 + … + nCt

Hamming Bound

2𝑞 ≥ 1 + 𝑛 𝐶1 + 𝑛 𝐶2 + 𝑛 𝐶3 + … . . +𝑛 𝐶𝑡


2^q ≥ Σ (i = 0 to t) nCi

2^(n−k) ≥ Σ (i = 0 to t) nCi

Taking the logarithm to base 2 on both sides,

n − k ≥ log2 ( Σ (i = 0 to t) nCi )

1 − k/n ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )

Since the coding rate is r = k/n,

1 − r ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )
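As a numerical illustration (a sketch added here, not part of the original notes), the bound can be checked for the (7,4) Hamming code: it holds for t = 1 but fails for t = 2, confirming that the code corrects only single errors.

from math import comb

def hamming_bound_ok(n: int, k: int, t: int) -> bool:
    """Check the Hamming bound 2^(n-k) >= sum_{i=0..t} nCi."""
    return 2 ** (n - k) >= sum(comb(n, i) for i in range(t + 1))

print(hamming_bound_ok(7, 4, 1))   # True : 8 >= 1 + 7
print(hamming_bound_ok(7, 4, 2))   # False: 8 <  1 + 7 + 21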

Problem:

For a linear block code, show with an example that

1. the syndrome depends only on the error pattern, not on the transmitted codeword;
2. all error patterns that differ by a codeword have the same syndrome.

Solution:

(1) The syndrome depends only on the error pattern, not on the transmitted codeword.

We know that S = EH^T.

This equation shows that the syndrome depends only on the error pattern E, not on the transmitted codeword.

(2) All error patterns that differ by a codeword have the same syndrome.

Syndrome for the first received vector:

S1 = Y1HT = (𝑋1 ⊕ 𝐸)𝐻𝑇 = 𝑋1𝐻𝑇 ⊕ 𝐸𝐻𝑇


𝑆1 = 𝐸𝐻𝑇 𝑠𝑖𝑛𝑐𝑒 𝑋1𝐻𝑇 = 0

Syndrome for the Second received code

S2 = Y2HT = (𝑋2 ⊕ 𝐸)𝐻𝑇 = 𝑋2𝐻𝑇 ⊕ 𝐸𝐻𝑇

S2 = EH^T   since X2H^T = 0

For example, let us consider

[H] = [ 1 1 1 0 : 1 0 0
        1 1 0 1 : 0 1 0
        1 0 1 1 : 0 0 1 ]
Consider two code words

X2 = 0 0 0 1 0 1 1 ; X3 = 0 0 1 0 1 0 1

Error is introduced in MSB

Y2 = (𝟏)0 0 1 0 1 1 ; Y3 = (𝟏) 0 1 0 1 0 1


S2 = Y2 H^T = (1 0 0 1 0 1 1) [ 1 1 1
                                1 1 0
                                1 0 1
                                0 1 1
                                1 0 0
                                0 1 0
                                0 0 1 ]  = (1 1 1)

S3 = Y3 H^T = (1 0 1 0 1 0 1) [ 1 1 1
                                1 1 0
                                1 0 1
                                0 1 1
                                1 0 0
                                0 1 0
                                0 0 1 ]  = (1 1 1)

Thus the syndromes S2 = S3 = (1 1 1) even though the two codewords are different. This demonstrates that all error patterns that differ by a codeword have the same syndrome.

Syndrome decoder for (n,k) block code

Other linear codes:

Single Parity bit code:

If m1, m2, m3, …, mk are the bits of the k-bit message word, then m1 ⊕ m2 ⊕ m3 ⊕ … ⊕ mk ⊕ c1 = 0. In the above equation c1 is the parity bit added to the message: if the message contains an even number of 1s then c1 = 0, and vice versa. For this code the number of transmitted bits is n = k + 1 and q = 1. This code can detect (but not correct) a single bit error.

Repetition codes

In this code a single message bit is transmitted together with q = 2t parity bits, so k = 1 and the number of transmitted bits is n = 2t + 1. This code can correct t errors per block, but it requires a large bandwidth.

Hadamard Code

It is derived from the Hadamard matrix. Here n = 2^k and q = n − k = 2^k − k. The code rate is r = k/n = k/2^k, which shows that the code rate will be very small.


Extended Code

An (n, k) linear block code has a parity check matrix H. One column of zero elements (except the last element) and one row of 1's are added to the parity check matrix as shown below.

The code formed by such a parity check matrix is called an extended code, and it is described as an (n+1, k) linear block code. The parity check matrix size becomes (q+1) × (n+1). For example, the (7,4) matrix is represented as below with the extended parity bits, and the minimum distance of the extended code is

de(min) = dmin + 1

Dual code :

Consider an (n, k) block code whose matrices satisfy HG^T = 0. The (n, n−k), i.e. (n, q), block code built from H is called the dual code. Its generator matrix and parity check matrix are

[G]q×n = [Iq×q : P^T q×k]   and   [H]k×n = [Pk×q : Ik×k]


Cyclic Codes :
Cyclic codes are a subclass of linear codes. A code C is cyclic if
(i) C is a linear code;
(ii)any cyclic shift of a codeword is also a codeword.

 Systematic cyclic codes


 Non Systematic cyclic codes.

Properties :

Linearity property: The sum of any two codewords is also a valid codeword. For example, if X1 and X2 are codewords, then X3 = X1 ⊕ X2 is also a codeword. This property shows that a cyclic code is also a linear code.

Cyclic property: Every cyclic shift of a valid code vector produces another code vector. Let us consider the n-bit code vector

X = {x(n−1), x(n−2), … , x1, x0}

Here x(n−1), x(n−2), … , x1, x0 represent the individual bits of the code vector. Shifting this code vector cyclically to the left gives

X′ = {x(n−2), x(n−3), … , x1, x0, x(n−1)}

Representation of codewords by a polynomial:

A codeword is represented by a polynomial of degree less than or equal to (n−1):

X(p) = x(n−1) p^(n−1) + x(n−2) p^(n−2) + … + x1 p + x0

Here X(p) is the code polynomial of degree (n−1), p is an arbitrary variable of the polynomial, and the powers of 'p' represent the positions of the bits in the codeword:

𝑝𝑛−1 𝑟𝑒𝑝𝑟𝑒𝑠𝑒𝑛𝑡𝑠 𝑀𝑆𝐵

𝑝0 𝑟𝑒𝑝𝑟𝑒𝑠𝑒𝑛𝑡𝑠 𝐿𝑆𝐵

𝑝1 𝑟𝑒𝑝𝑟𝑒𝑠𝑒𝑛𝑡𝑠 𝑆𝑒𝑐𝑜𝑛𝑑 𝑏𝑖𝑡 𝑓𝑟𝑜𝑚 𝐿𝑆𝐵 𝑠𝑖𝑑𝑒

The polynomial representation is used for the following reasons:

1. These are algebraic codes; algebraic operations such as addition, subtraction, multiplication and division become very simple.
2. The positions of the bits are represented by the powers of 'p' in the polynomial.

Generation of code vectors in nonsystematic Form


Let 𝑀 = {𝑚𝑘−1, 𝑚𝑘−2, … 𝑚1, 𝑚0} be k bits of message vector. Then it can be represented by
the polynomial as

𝑀(𝑝) = 𝑚𝑘−1𝑝𝑘−1 + 𝑚𝑘−2 𝑝𝑘−2 + … … … + 𝑚1𝑝 + 𝑚0

Let X(p) represent the codeword polynomial. It is given as

𝑋(𝑝) = 𝑀(𝑝)𝐺(𝑝)
Here G(p) is the generator polynomial of degree 'q'. For an (n, k) cyclic code, q = n − k is the number of parity bits. The generator polynomial is given as

G(p) = p^q + g(q−1) p^(q−1) + … + g1 p + 1

Here g(q−1), g(q−2), … , g1 are the coefficients of the generator polynomial.

If M1, M2, M3 … etc are the other message vectors, then the corresponding code vectors can be
calculated as

𝑋1(𝑝) = 𝑀1(𝑝)𝐺(𝑝)

𝑋2(𝑝) = 𝑀2(𝑝)𝐺(𝑝)

𝑋3(𝑝) = 𝑀3(𝑝)𝐺(𝑝) and So on. All the above code vectors in non systematic form and they
satisfy cyclic property. Note that G(p) is same for all code vectors.

Problem

The generator polynomial of (7,4) cyclic code is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 find all code vectors
in nonsystematic form.

Solution :

Here n =7, k =4 , Number of check bits q= n-k = 7- 4 = 3 and Block length n= 2q-1 = 8-1
= 7.
For example, consider the message vector M = (m3, m2, m1, m0) = (0 1 0 1). Then the message polynomial will be

𝑀(𝑝) = 𝑚𝑘−1𝑝𝑘−1 + 𝑚𝑘−2 𝑝𝑘−2 + … … … + 𝑚1𝑝 + 𝑚0

𝑀(𝑝) = 𝑚3𝑝3 + 𝑚2 𝑝2 + 𝑚1𝑝 + 𝑚0 = 𝑝2 + 1

and the given generator polynomial is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1


To obtain non systematic code vectors

𝑋(𝑝) = 𝑀(𝑝)𝐺(𝑝)

𝑋(𝑝) = ( 𝑝2 + 1 )(𝑝3 + 𝑝 + 1)

𝑋(𝑝) = (p5 + 𝑝3 + 𝑝2 + 𝑝3 + 𝑝 + 1)

𝑋(𝑝) = (p5 + 𝑝3 + 𝑝3 + 𝑝2 + 𝑝 + 1)

𝑋(𝑝) = (p5 + (1 ⊕ 1) 𝑝3 + 𝑝2 + 𝑝 + 1)

𝑋(𝑝) = (0p6+p5 + 0p4 + 0𝑝3 + 𝑝2 + 𝑝 + 1)

Note that the degree of above polynomial is n-1 = 6, The code vector of above polynomial is

𝑋 = (𝑥6𝑥5𝑥4𝑥3𝑥2𝑥1𝑥0) = (0 1 0 0 1 1 1)

List of code vectors in non systematic form is

Sl.No Message Bits Complete Code Vector


M3 M2 M1 M0 X6 X5 X4 X3 X2 X1 X0
1 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 1 0 0 0 1 0 1 1
3 0 0 1 0 0 0 1 0 1 1 0
4 0 0 1 1 0 0 1 1 1 0 1
5 0 1 0 0 0 1 0 1 1 0 0
6 0 1 0 1 0 1 0 0 1 1 1
7 0 1 1 0 0 1 1 1 0 1 0
8 0 1 1 1 0 1 1 0 0 0 1
9 1 0 0 0 1 0 1 1 0 0 0
10 1 0 0 1 1 0 1 0 0 1 1
11 1 0 1 0 1 0 0 1 1 1 0
12 1 0 1 1 1 0 0 0 1 0 1
13 1 1 0 0 1 1 1 0 1 0 0
14 1 1 0 1 1 1 1 1 1 1 1
15 1 1 1 0 1 1 0 0 0 1 0
16 1 1 1 1 1 1 0 1 0 0 1
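The product X(p) = M(p)G(p) is an ordinary polynomial multiplication with the coefficients reduced modulo 2. The Python sketch below (added for illustration) carries this out for the message (0 1 0 1) and reproduces the codeword 0100111 listed in the table.

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj          # modulo-2 addition of partial products
    return out

# M(p) = p^2 + 1 for message (m3 m2 m1 m0) = (0 1 0 1); G(p) = p^3 + p + 1.
M = [1, 0, 1, 0]                          # coefficients of 1, p, p^2, p^3
G = [1, 1, 0, 1]
X = poly_mul_gf2(M, G)                    # coefficients of X(p), lowest degree first
print(X[::-1])                            # [0, 1, 0, 0, 1, 1, 1] -> codeword 0100111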


Cyclic properties:

Let us consider X8 from the above table, X8 = (0 1 1 0 0 0 1).

Shifting this code vector cyclically by one position gives

X8′ = (1 0 1 1 0 0 0)

Thus a cyclic shift of X8 produces X9, which is also a code vector.

Generation of code vectors in systematic form

The Systematic form of the block code is,

𝑋 = (𝑘 𝑚𝑒𝑠𝑠𝑎𝑔𝑒 ∶ (𝑛 − 𝑘)𝑐ℎ𝑒𝑐𝑘 𝑏𝑖𝑡𝑠) = 𝑚𝑘−1, 𝑚𝑘−2, … 𝑚1, 𝑚0 ∶ 𝑐𝑞−1, 𝑐𝑞−2, … 𝑐1, 𝑐0

Here the check bits forms the polynomial as

𝐶(𝑝) = 𝑐𝑞−1𝑝𝑞−1 + 𝑐𝑞−2 𝑝𝑞−2 + … … … + 𝑐1𝑝 + 𝑐0


The check bit polynomial is

C(p) = rem [ p^q M(p) / G(p) ]

Problem

The generator polynomial of (7,4) cyclic code is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 find all code vectors in
systematic form.

Solution :
Here n =7, k =4 , Number of check bits q= n-k = 7- 4 = 3 and Block length n= 2q-1 = 8-1 = 7.

For example, consider the message vector M = (m3, m2, m1, m0) = (0 1 0 1). Then the message polynomial will be

𝑀(𝑝) = 𝑚𝑘−1𝑝𝑘−1 + 𝑚𝑘−2 𝑝𝑘−2 + … … … + 𝑚1𝑝 + 𝑚0

𝑀(𝑝) = 𝑚3𝑝3 + 𝑚2 𝑝2 + 𝑚1𝑝 + 𝑚0 = 𝑝2 + 1

and the given generator polynomial is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1


To obtain the systematic code vectors:

C(p) = rem [ p^q M(p) / G(p) ]

To obtain p^q M(p): since q = 3,

p^q M(p) = p^3 (p^2 + 1) = p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0

Dividing by G(p),

p^q M(p) / G(p) = (p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0) / (p^3 + p + 1)

Therefore C(p) = p^2 + 0p + 0

Since q = 3, the check bits are C = (c2, c1, c0) = (1 0 0). The code vector can be written as

𝑋 = ( 𝑚𝑘−1, 𝑚𝑘−2, … 𝑚1, 𝑚0 ∶ 𝑐𝑞−1, 𝑐𝑞−2, … 𝑐1, 𝑐0)

𝑋 = (𝑚3, 𝑚2, 𝑚1, 𝑚0 ∶ 𝑐2, 𝑐1, 𝑐0) = (0101 ∶ 100)
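The same check-bit computation can be done programmatically. The sketch below (added for illustration) divides p^q M(p) by G(p) over GF(2) and reproduces the check bits (1 0 0) obtained above.

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit lists, highest degree first."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:                                  # leading term present: xor-subtract the divisor
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]                # the q = deg(G) remainder bits

# Message (m3 m2 m1 m0) = (0 1 0 1): M(p) = p^2 + 1; generator G(p) = p^3 + p + 1.
M = [0, 1, 0, 1]
G = [1, 0, 1, 1]                                    # coefficients of p^3, p^2, p, 1
q = len(G) - 1

check = gf2_mod(M + [0] * q, G)                     # remainder of p^q M(p) / G(p)
print(M + check)                                    # [0, 1, 0, 1, 1, 0, 0] -> 0101:100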


The list of code vectors in systematic form is

Sl.No Message Bits Complete Code Vector


M3 M2 M1 M0 M3 M2 M1 M0 C2 C1 C0
1 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 1 0 0 0 1 0 1 1
3 0 0 1 0 0 0 1 0 1 1 0
4 0 0 1 1 0 0 1 1 1 0 1
5 0 1 0 0 0 1 0 0 1 1 1
6 0 1 0 1 0 1 0 1 1 0 0
7 0 1 1 0 0 1 1 0 0 0 1
8 0 1 1 1 0 1 1 1 0 1 0
9 1 0 0 0 1 0 0 0 1 0 1
10 1 0 0 1 1 0 0 1 1 1 0
11 1 0 1 0 1 0 1 0 0 1 1
12 1 0 1 1 1 0 1 1 0 0 0
13 1 1 0 0 1 1 0 0 0 1 0
14 1 1 0 1 1 1 0 1 0 0 1
15 1 1 1 0 1 1 1 0 1 0 0
16 1 1 1 1 1 1 1 1 1 1 1

Problem :

An n-digit code polynomial X(p) is obtained as X(p) = C(p) + p^(n−k) M(p), where M(p) is the message polynomial with k data bits and C(p) is the remainder polynomial obtained by dividing p^(n−k) M(p) by a proper generator polynomial G(p). Prove that X(p) is a systematic cyclic code if G(p) is a factor of p^n + 1 in the modulo-2 sense.

Generator and parity check matrix of a cyclic code

Non-systematic form

The generator matrix has size k×n. The generator polynomial is

G(p) = p^q + g(q−1) p^(q−1) + … + g1 p + 1

Multiplying both sides by p^i,

G(p) p^i = p^(i+q) + g(q−1) p^(i+q−1) + … + g1 p^(i+1) + p^i,   with i = (k−1), (k−2), … , 2, 1, 0

These k polynomials form the rows of the generator matrix.


Problem : -

Obtain the generator matrix of the (7,4) cyclic code with G(p) = p^3 + p + 1, and find the code vectors.

Solution :

Here n = 7, k = 4, number of check bits q = n − k = 7 − 4 = 3. For the given G(p),

G(p)p^i = p^(i+3) + p^(i+1) + p^i,   and since k − 1 = 3, i = 3, 2, 1, 0

There will be four polynomials corresponding to the four values of i, and these four polynomials represent the rows of the generator matrix:

i = 3: p^6 + p^4 + p^3  →  1 0 1 1 0 0 0
i = 2: p^5 + p^3 + p^2  →  0 1 0 1 1 0 0
i = 1: p^4 + p^2 + p    →  0 0 1 0 1 1 0
i = 0: p^3 + p + 1      →  0 0 0 1 0 1 1

Then the generator matrix is

G = [ 1 0 1 1 0 0 0
      0 1 0 1 1 0 0
      0 0 1 0 1 1 0
      0 0 0 1 0 1 1 ]

To obtain the code vectors, X = MG. For example, for M = (m3, m2, m1, m0) = (1 0 0 1),

X = (1 0 0 1) G = (1 0 1 1 0 0 0) ⊕ (0 0 0 1 0 1 1) = (1 0 1 0 0 1 1)


(follow the same procedure to find the code vectors of others)


Systematic generator matrix

The systematic form of the generator matrix is given by

[G]k×n = [Ik : Pk×q]

The t-th row of this matrix is represented in polynomial form as

t-th row of G = p^(n−t) + Rt(p),   where t = 1, 2, 3, … , k

p^(n−t) / G(p) = Quotient Qt(p) + Remainder Rt(p) / G(p)

i.e.  p^(n−t) = Qt(p) G(p) ⊕ Rt(p),   t = 1, 2, … , k

Parity check matrix

Find the generator matrix for a systematic (7,4) cyclic code if G(p) = p^3 + p + 1. Also find the parity check matrix.

Solution :

To obtain the generator polynomial

𝑡𝑡ℎ 𝑟𝑜𝑤 𝑜𝑓 𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑜𝑟 𝑝𝑜𝑙𝑦𝑛𝑜𝑚𝑖𝑎𝑙 𝑖𝑠

𝑝𝑛−𝑡 + 𝑅𝑡(𝑝) = 𝑄𝑡(𝑝)𝐺 (𝑝) 𝑤ℎ𝑒𝑟𝑒 𝑡 = 1,2,3, … 𝑘

Given n = 7, k = 4 and q = n − k = 3, the above equation becomes

𝑝7−𝑡 + 𝑅𝑡(𝑝) = 𝑄𝑡(𝑝)( 𝑝3 + 𝑝 + 1) 𝑤ℎ𝑒𝑟𝑒 𝑡 = 1,2,3,4

With t=1, the above equation becomes

𝑝6 + 𝑅𝑡(𝑝) = 𝑄𝑡(𝑝)( 𝑝3 + 𝑝 + 1)


To obtain 𝑹𝒕(𝒑), 𝑸𝒕(𝒑) for 1st Row

Here

𝑄𝑡(𝑝) = ( 𝑝3 + 𝑝 + 1)𝑎𝑛𝑑 𝑅𝑡(𝑝) = ( 𝑝2 + 1)


Putting those values
( 𝑝6 + 𝑝2 + 1) = ( 𝑝3 + 𝑝 + 1)( 𝑝3 + 𝑝 + 1)

The 1st row polynomial is therefore p^6 + p^2 + 1, i.e. the row (1 0 0 0 1 0 1).

The other row polynomials are obtained in the same way, for t = 2, 3, 4.

Converting the row polynomials into matrix form gives the generator matrix (a computational cross-check is sketched below).
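The sketch below (added for illustration) computes Rt(p) = remainder of p^(7−t) divided by G(p) for t = 1, …, 4 and prints the four rows [I4 : P] of the systematic generator matrix.

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit lists, highest degree first."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

n, k, G = 7, 4, [1, 0, 1, 1]           # G(p) = p^3 + p + 1
for t in range(1, k + 1):
    p_power = [1] + [0] * (n - t)      # p^(n-t), highest degree first
    R = gf2_mod(p_power, G)            # remainder R_t(p): the parity part of row t
    identity = [1 if i == t - 1 else 0 for i in range(k)]
    print(identity + R)
# Rows printed:
# [1, 0, 0, 0, 1, 0, 1]
# [0, 1, 0, 0, 1, 1, 1]
# [0, 0, 1, 0, 1, 1, 0]
# [0, 0, 0, 1, 0, 1, 1]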


To obtain the code vectors, X = MG. For example, for M = (m3, m2, m1, m0) = (1 1 0 0),

X = (1 1 0 0) [ 1 0 0 0 1 0 1
                0 1 0 0 1 1 1
                0 0 1 0 1 1 0
                0 0 0 1 0 1 1 ]  = (1 1 0 0 0 1 0)

(Follow the same procedure to find the other code vectors.)

To obtain the parity check matrix (H):

The systematic form of the generator matrix is [G]k×n = [Ik : Pk×q].
The P submatrix is

P = [ 1 0 1
      1 1 1
      1 1 0
      0 1 1 ]4×3

The parity check matrix is given by

[H]q×n = [P^T : Iq] = [ 1 1 1 0 : 1 0 0
                        0 1 1 1 : 0 1 0
                        1 1 0 1 : 0 0 1 ]

ENCODERS AND DECODERS

The encoder and decoder are built from flip-flops, which are connected to form shift registers.


If g = 1 the path is closed, and if g = 0 the path is open. The adders perform mod-2 addition.


Operation :

- The feedback switch is closed first.
- The output switch is connected to the message input.
- All shift register stages are initialized to zero.
- The k message bits are shifted to the transmitter and simultaneously shifted into the registers.
- After the k message bits have been shifted, the registers contain the 'q' check bits.
- The feedback switch is then opened and the output switch is connected to the check-bit position.

Problem : -

Design the encoder for (7,4) cyclic code generator polynomial is 𝐺(𝑝) = 𝑝3 + 𝑝 + 1 and verify
its operation for any message vector.

Solution:

The generator polynomial is 𝐺(𝑝) = 𝑝3 + 0𝑝2 + 𝑝 + 1 = 𝑝3 + 𝑔2𝑝2 + 𝑔1𝑝 + 1 and q=3;


Let us verify the operation for the message vector (m3, m2, m1, m0) = (1 1 0 0):

X = (m3, m2, m1, m0, c2, c1, c0) = (1 1 0 0 0 1 0)

Syndrome Decoding, Error Detection and Error Correction

We know that 𝑋 = 𝑌 ⊕ 𝐸 and 𝑌 = 𝑋 ⊕ 𝐸


In the polynomial 𝑌(𝑝) = 𝑋(𝑝) ⊕ 𝐸(𝑝)

𝑋(𝑝) = 𝑀(𝑝)𝐺(𝑝)

𝑌(𝑝) = 𝑀(𝑝)𝐺(𝑝) ⊕ 𝐸(𝑝)

Divide the received polynomial by G(p):

Y(p)/G(p) = Quotient + Remainder/G(p)

If Y(p) = X(p), there are no errors; then

X(p)/G(p) = Quotient + 0/G(p)   (the remainder is zero, since X(p) = M(p)G(p))

If Y(p) ≠ X(p), there are errors; then

Y(p)/G(p) = Q(p) + R(p)/G(p)

R(p) is a polynomial of degree less than or equal to q−1. Multiplying both sides of the above equation by G(p),

Y(p) = Q(p)G(p) + R(p)

M(p)G(p) ⊕ E(p) = Q(p)G(p) ⊕ R(p)

E(p) = M(p)G(p) ⊕ Q(p)G(p) ⊕ R(p)

E(p) = [M(p) ⊕ Q(p)] G(p) ⊕ R(p)

E depends upon the remainder R: for every remainder R there is a specific error pattern. So R acts as the syndrome vector S, i.e. R(p) = S(p):

Y(p)/G(p) = Q(p) + S(p)/G(p)

The syndrome vector is obtained by


S(p) = rem [ Y(p) / G(p) ]


Decoder

Operation :

 Initially all the shift register contents are zero and the switch is closed in position 1.
 The received vector Y is shifted bit by bit into the shift register.
 The flip-flops keep changing their values according to the input bits of Y and the values of g1, g2, etc.
 After all the bits of Y have been shifted in, the q flip-flops of the shift register contain the q-bit syndrome vector.
 The switch is then closed to position 2 and clocks are applied to the shift register.
 The output is the syndrome vector S = (Sq−1, Sq−2, … , S1, S0).

Some block codes that can be realized by cyclic codes

• (n,1) Repetition codes. High coding gain (the minimum distance is always n), but a very low rate: 1/n.
• (n,k) Hamming codes. The minimum distance is always 3, so they can detect 2 errors and correct one error; n = 2^m − 1, k = n − m.
• Maximum-length codes. For every integer k ≥ 3 there exists a maximum-length code (n, k) with n = 2^k − 1 and dmin = 2^(k−1).
• BCH codes. For every pair of integers m and t there exists a BCH code with n = 2^m − 1 and n − k ≤ mt, where t is the error correction capability.
• (n,k) Reed-Solomon (RS) codes. They work with k symbols of m bits each, which are encoded to yield codewords of n symbols; for these codes n = 2^m − 1 symbols and dmin = n − k + 1.
• Nowadays BCH and RS codes are very popular due to their large dmin, the large number of available codes, and easy generation.

Advantages :

• Simpler and easy to implement.


• Eliminate the storage needed for lookup table decoding. Therefore it is powerful and efficient.


• The encoder and decoder are simpler than those of non-cyclic codes.


• Well defined mathematical structure, hence very efficient decoder.

Disadvantages:

• Error correction is complicated since the combinational logic circuits in error detector are
complex.

Convolutional Code:

Introduction:

Convolutional codes offer an approach to error control coding substantially different from that of
block codes.
• encodes the entire data stream, into a single codeword.
• maps information to code bits sequentially by convolving a sequence of
information bits with “generator” sequences.
• does not need to segment the data stream into blocks of fixed size
(Convolutional codes are often forced to block structure by periodic truncation).
• Is a machine with memory.
– This fundamental difference imparts a different nature to the design and
evaluation of the code.
• Block codes are based on algebraic/combinatorial techniques.
• Convolutional codes are based on construction techniques.
• Easy implementation using a linear finite-state shift register.
• A convolutional code is specified by three parameters (n, k, K):

– k inputs and n outputs; in practice, usually k = 1 is chosen.

– Rc = k/n is the coding rate, determining the number of data bits per coded bit.

– K is the constraint length of the convolutional code (the encoder has K−1 memory elements).


• The performance of a convolutional code depends on the coding rate and the
constraint length
– Longer constraint length K
• More powerful code
• More coding gain
– Coding gain: the measure in the difference between the signal to noise ratio (SNR)
levels between the uncoded system and coded system required to reach the same bit error rate
(BER) level
• More complex decoder
• More decoding delay
– Smaller coding rate Rc=k/n
• More powerful code due to extra redundancy
• Less bandwidth efficiency

Convolutional encoder (rate ½, K=3)


– 3 shift-registers, where the first one takes the incoming data bit and the rest form the memory
of the encoder.
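A minimal sketch of such an encoder is given below (added for illustration). It assumes the generator connections (7, 5) in octal, i.e. g1 = (1 1 1) and g2 = (1 0 1), which is a commonly used rate-1/2, K = 3 encoder; other connection choices follow the same pattern.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K=3 convolutional encoder (assumed (7,5)-octal generators).
    Appends K-1 = 2 zero tail bits to flush the encoder memory."""
    state = [0, 0]                       # the two memory elements
    out = []
    for b in list(bits) + [0, 0]:        # data bits followed by the zero tail
        window = [b] + state             # current input plus memory contents
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)   # first output bit
        out.append(sum(w * g for w, g in zip(window, g2)) % 2)   # second output bit
        state = [b] + state[:-1]         # shift the register
    return out

print(conv_encode([1, 0, 1]))            # [1,1, 1,0, 0,0, 1,0, 1,1] for input 1 0 1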


Effective Code Rate

Initialize the memory before encoding the first bit (all-zero)


• Clear out the memory after encoding the last bit (all-zero)
• Hence, a tail of zero-bits is appended to data bits.


Vector representation:

– Define n vectors, each with Kk elements (one vector for each modulo-2 adder). The i-th element
in each vector, is “1” if the i-th stage in the shift register is connected to the corresponding modulo-2
adder, and “0” otherwise.

Polynomial representation :

– Define n generator polynomials, one for each modulo-2 adder. Each polynomial is of
degree Kk-1 or less and describes the connection of the shift registers to the corresponding modulo-
2 adder.


Polynomial representation : Example: m=(1 0 1)


State Diagram (2)

• A state diagram is simply a graph of the possible states of the encoder and the possible
transitions from one state to another. It can be used to show the relationship between the
encoder state, input, and output.
• The state diagram has 2^((K−1)k) nodes, each node standing for one encoder state.
• Nodes are connected by branches
– Every node has 2k branches entering it and 2k branches leaving it.
– The branches are labeled with c, where c is the output.
– When k=1
• The solid branch indicates that the input bit is 0.
• The dotted branch indicates that the input bit is 1.


Distance Properties of Convolutional Codes (1)


• The state diagram can be modified to yield information on code distance properties.
• How to modify the state diagram:


– Split the state a (all-zero state) into initial and final states, remove the self loop
– Label each branch by the branch gain Di, where i denotes the Hamming weight
of the n encoded bits on that branch
• Each path connecting the initial state and the final state represents a non-zero codeword
that diverges from and re-emerges with state a (all-zero state) only once.

Distance Properties of Convolutional Codes (2)


• Transfer function (which represents the input-output equation in the modified state diagram)
indicates the distance properties of the convolutional code by

• The minimum free distance dfree denotes


– The minimum weight of all the paths in the modified state diagram that
diverge from and re-emerge with the all-zero state a.
– The lowest power of the transfer function T(X)


Trellis Diagram

• Trellis diagram is an extension of state diagram which explicitly shows the passage of time.
– All the possible states are shown for each instant of time.
– Time is indicated by a movement to the right.
– The input data bits and output code bits are represented by a unique path through the trellis.
– The lines are labeled with c, where c is the output.
– After the second stage, each node in the trellis has 2^k incoming paths and 2^k outgoing paths.
– When k=1
• The solid line indicates that the input bit is 0.
• The dotted line indicates that the input bit is 1.


Maximum Likelihood Decoding

• Given the received code word r, determine the most likely path through the trellis.
(maximizing p(r|c'))
– Compare r with the code bits associated with each path
– Pick the path whose code bits are “closest” to r
– Measure distance using either the Hamming distance for hard-decision decoding or the Euclidean distance for soft-decision decoding
– Once the most likely path has been selected, the estimated data bits can be read from the trellis diagram


The Viterbi Algorithm

• A breakthrough in communications in the late 60’s


– Guaranteed to find the ML solution
• However, the complexity is only O(2^K)
• Complexity does not depend on the number of original data bits
– Is easily implemented in hardware
• Used in satellites, cell phones, modems, etc
• Example: Qualcomm Q1900

• Takes advantage of the structure of the trellis:


– Goes through the trellis one stage at a time
– At each stage, finds the most likely path leading into each state (the surviving path) and discards all other paths leading into that state (the non-surviving paths)
– Continues until the end of the trellis is reached
– At the end of the trellis, traces the most probable path from right to left and reads the data bits from the trellis
– Note that in principle the whole transmitted sequence must be received before a decision is made; in practice, however, storing stages over a length of about 5K is quite adequate

Implementation:
1. Initialization:
– Let Mt(i) be the path metric at the i-th node, the t-th stage in trellis
– Large metrics corresponding to likely paths; small metrics corresponding to
unlikely paths
– Initialize the trellis, set t=0 and M0(0)=0;
2. At stage (t+1),
– Branch metric calculation
•Compute the metric for each branch connecting the states at time t to states
at time (t+1)
• The metric is related to the likelihood probability between the
received bits and the code bits corresponding to that branch:
p(r(t+1)|c'(t+1))

– Branch metric calculation


• In hard decision, the metric could be the number of same bits between the received
bits and the code bits.

– Path metric calculation


• For each branch connecting the states at time t to states at time (t+1), add the
branch metric to the corresponding partial path metric Mt(i).


– Trellis update
• At each state, pick the most likely path which has the largest metric and delete
the other paths
•Set M(t+1)(i)= the largest metric corresponding to the state i

3. Set t=t+1; go to step 2 until the end of trellis is reached


4. Trace back
– Assume that the encoder ended in the all-zero state
– The most probable path leading into the last all-zero state in the trellis has the largest
metric
•Trace the path from right to left
•Read the data bits from the trellis
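A compact hard-decision Viterbi decoder for the rate-1/2, K = 3 code is sketched below (added for illustration). It assumes the same (7, 5) generator pair as the encoder sketch earlier and that the encoder was flushed back to the all-zero state with two tail bits. It minimises the accumulated Hamming distance, which is equivalent to maximising the bit-agreement metric described above.

def viterbi_decode(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Hard-decision Viterbi decoding of a rate-1/2, K=3 code (assumed (7,5) generators).
    'received' is a flat list of code bits; returns the decoded bits including the tail."""
    def branch(bit, state):
        # Encoder output pair and next state when 'bit' enters a register holding 'state'.
        window = (bit,) + state
        c = tuple(sum(w * g for w, g in zip(window, gen)) % 2 for gen in (g1, g2))
        return c, (bit,) + state[:-1]

    # survivors: state -> (path metric, decoded bits along the surviving path)
    survivors = {(0, 0): (0, [])}                       # encoding starts in the all-zero state
    for i in range(0, len(received), 2):
        r = tuple(received[i:i + 2])
        new = {}
        for state, (metric, path) in survivors.items():
            for bit in (0, 1):                          # extend each survivor by input 0 and 1
                c, nxt = branch(bit, state)
                m = metric + sum(a != b for a, b in zip(c, r))   # Hamming branch metric
                if nxt not in new or m < new[nxt][0]:   # keep only the best path into each state
                    new[nxt] = (m, path + [bit])
        survivors = new

    return survivors[(0, 0)][1]                         # trace back from the final all-zero state

received = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]               # encoder output for data bits 1 0 1
print(viterbi_decode(received))                         # [1, 0, 1, 0, 0]: data bits plus two tail zeros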


Error Rate of Convolutional Codes (1)


• An error event happens when an erroneous path is selected at the decoder
• Error-event probability:

Pe ≤ Σ (d = dfree to ∞) ad P2(d)

where ad is the number of paths with Hamming distance d, and P2(d) is the pairwise error probability of a path at Hamming distance d; it depends on the modulation scheme and on whether hard or soft decision decoding is used.

BER is obtained by multiplying the error-event probability by the number of data bit errors associated with
each error event.

BER is upper bounded by

Pb ≤ Σ (d = dfree to ∞) f(d) ad P2(d)

where f(d) is the number of data bit errors corresponding to the erroneous path with Hamming distance d.

Turbo Codes:

• Turbo codes were proposed by Berrou and Glavieux in the 1993 International
Conference in Communications.
• Performance within 0.5 dB of the channel capacity limit for BPSK was demonstrated.
• Features of turbo codes
• Parallel concatenated coding
• Recursive convolutional encoders
• Pseudo-random interleaving
• Iterative decoding

Pseudo-random Interleaving:

• The coding dilemma:


• Shannon showed that large block-length random codes achieve channel capacity.
• However, codes must have structure that permits decoding with reasonable
complexity.
• Codes with structure don’t perform as well as random codes.
• “Almost all codes are good, except those that we can think of.”


Solution:
• Make the code appear random, while maintaining enough structure to permit
decoding.
• This is the purpose of the pseudo-random interleaver.
• Turbo codes possess random-like properties.
• However, since the interleaving pattern is known, decoding is possible.

Why Interleaving and Recursive Encoding?

In a coded systems:
• Performance is dominated by low weight code words.
• A “good” code: will produce low weight outputs with very low probability.
• An RSC code: Produces low weight outputs with fairly low probability.
• However, some inputs still cause low weight outputs.
• Because of the interleaver: The probability that both encoders have inputs that cause
low weight outputs is very low.
• Therefore the parallel concatenation of both encoders will produce a “good” code.

Iterative Decoding:
• There is one decoder for each elementary encoder.
• Each decoder estimates the a posteriori probability (APP) of each data bit.
• The APP’s are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.
• Performance generally improves from iteration to iteration, but follows a law of
diminishing returns

The Turbo-Principle:

Turbo codes get their name because the decoder uses feedback, like a turbo engine


Turbo Code Summary:

• Turbo code advantages:


• Remarkable power efficiency in AWGN and flat-fading channels for moderately low BER.
• Design tradeoffs suitable for delivery of multimedia services.

Turbo code disadvantages:

• Long latency.
• Poor performance at very low BER.
• Because turbo codes operate at very low SNR, channel estimation and tracking is a critical
issue.
• The principle of iterative or “turbo” processing can be applied to other problems.
• Turbo-multiuser detection can improve performance of coded multiple-access systems.

QUESTIONS FOR PRACTICE

PART A ( 2 marks)

1. What is hamming distance?


2. Define code efficiency.
3. What is meant by systematic and non-systematic codes?
4. What is meant by linear code?
5. What are the error detection and correction capabilities of Hamming codes?
6. What is meant by cyclic codes?
7. How syndrome is calculated in Hamming codes and cyclic codes?
8. What is BCH code?
9. What is RS code?
10. What is difference between block codes and convolutional codes?
11. Define constraint length in convolutional code?
12. Define free distance and coding gain.
13. What is convolution code?
14. What is meant by syndrome of linear block code?
15. What are the advantages and disadvantages of convolutional codes?
16. Define the states of an encoder.
17. Compare the code tree and the trellis diagram.
18. Write the features of BCH Codes?
19. Define constraint length in convolutional codes?
20. Define constraint length in convolutional codes?
21. What is Golay codes?


PART B (12 Marks)

1. Draw the code tree of a Convolutional code of code rate r=1/2 and Constraint length of K=3
starting from the state table and state diagram for an encoder which is commonly used.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the code Tree
2. Draw the trellis diagram of a Convolutional code of code rate r=1/2 and Constraint length of
K=3 starting from the state table and state diagram for an encoder which is commonly used.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the trellis diagram
3. Decode the given sequence 11 01 01 10 01 of a convolutional code with a code rate of r=1/2
and constraint length K=3, using viterbi decoding algorithm.
a. Draw the state Diagram.
b. Draw the state Table.
c. Draw the code Tree
d. Decode the given sequence using trellis diagram.
4. Explain the construction of Block Code and explain how error syndrome is calculated
a. Representation of Block Code.
b. Generator Matrix.
c. Generation of Codewords.
d. Generation of Parity Check Matrix.
e. Calculation OF Error Syndrome.
5. Consider a (6,3) linear block code defined by the generator matrix

a. Determine if the code is a Hamming code. Find the parity check matrix in systematic form.
b. Find the encoding table for the linear block code.
c. What is the minimum Hamming distance, and how many errors can it detect and correct?
d. Find the decoding table for the linear block code.
e. Draw the hardware encoder diagram.

f. Draw the hardware syndrome generator diagram.

5. Prove that GH^T = HG^T = 0 for a systematic linear block code.

6. The parity check bits of an (8,4) block code are given by

C1 = m1 + m2 + m4,   C2 = m1 + m2 + m3,   C3 = m1 + m3 + m4   and   C4 = m2 + m3 + m4


Here m1, m2, m3, m4 are message bits.

a. Find the generator matrix and parity check matrix for this code.
b. Find the minimum weight of this code.
c. Find the error detecting capabilities of this code.

7. Design the encoder for the (7,4) cyclic code generated by G(p) = p^3 + p + 1 and verify its operation for any message vector.

8. Sketch the encoder and syndrome calculator for the generator polynomial g(x) = 1 + x^2 + x^3 and obtain the syndrome for the received codeword 1001011.
