
Error Control Using Forward Error Correction

 Error control for data integrity may be exercised by means of forward error
correction (FEC)
 Figure a shows the model of a digital communication system using such an
approach.
 The discrete source generates information in the form of binary symbols.

[Figure: Simplified models of a digital communication system. (a) Coding and modulation performed separately.]
Error Control Using Forward Error Correction

 The channel encoder in the transmitter accepts message bits and adds
redundancy according to a prescribed rule, thereby producing an encoded
data stream at a higher bit rate.
 The channel decoder in the receiver exploits the redundancy to decide, given a noisy version of the encoded data stream, which message bits in the original data stream were actually transmitted.
 The combined goal of the channel encoder and decoder is to minimize the
effect of channel noise.
Error Control Using Forward Error Correction

 That is, the number of errors between the channel encoder input (derived
from the source) and the channel decoder output (delivered to the user) is
minimized.
 For a fixed modulation scheme, the addition of redundancy in the coded
messages implies the need for increased transmission bandwidth. Moreover,
the use of error-control coding adds complexity to the system.
 Thus, the design trade-offs in the use of error-control coding to achieve
acceptable error performance include considerations of bandwidth and
system complexity.
Error Control Using Forward Error Correction

 These codes have been classified into block codes and convolutional codes.
 The distinguishing feature for this particular
classification is the presence or absence of
memory in the encoders for the two codes.
 The channel encoder produces bits at the rate
R0 = (n/k)Rs, where Rs is the bit rate of the
information source.
 The bit rate R0, coming out of the encoder, is
called the channel data rate.
Error Control Using Forward Error Correction

 The dimensionless ratio r = k/n is called the code rate, where 0 < r < 1.
 Thus, the code rate is a dimensionless ratio,
whereas the data rate produced by the source
and the channel data rate produced by the
encoder are both measured in bits per second.
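As a worked illustration of these two rates, here is a minimal Python sketch; the (n, k) = (7, 4) code and the source rate Rs are assumed example values, not taken from the slides.

```python
# Worked example of code rate r = k/n and channel data rate R0 = (n/k)Rs.
# The (n, k) = (7, 4) code and Rs = 1000 b/s are assumed values.
n, k = 7, 4          # codeword length and message length (assumed example)
Rs = 1000.0          # source bit rate in bits per second (assumed)
r = k / n            # code rate: dimensionless, 0 < r < 1
R0 = (n / k) * Rs    # channel data rate out of the encoder, in b/s

print(r)             # 4/7 ≈ 0.5714
print(R0)            # 1750.0
```

Note that R0 > Rs whenever n > k: the added redundancy raises the bit rate on the channel, which is the bandwidth cost discussed above.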
Error Control Using Forward Error Correction

 In a convolutional code, the encoding operation may be viewed as the discrete-time convolution of the input sequence with the impulse response of the encoder.
 The duration of the impulse response equals the
memory of the encoder.
 The channel encoder accepts message bits as a
continuous sequence and thereby generates a
continuous sequence of encoded bits at a higher
rate.
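The convolution view above can be sketched directly in code. This is a minimal rate-1/2 encoder; the generator sequences (1, 1, 1) and (1, 0, 1) (constraint length 3) are a common textbook choice assumed here for illustration, not taken from the slides.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    # Each output stream is the modulo-2 convolution of the input bits
    # with one generator sequence (the impulse response of that stream).
    K = len(g1)                      # constraint length = encoder memory + 1
    n_out = len(bits) + K - 1        # convolution length (memory flushed)
    out = []
    for i in range(n_out):
        v1 = v2 = 0
        for j in range(K):
            if 0 <= i - j < len(bits):
                v1 ^= g1[j] & bits[i - j]
                v2 ^= g2[j] & bits[i - j]
        out += [v1, v2]              # interleave the two output streams
    return out

print(conv_encode([1]))        # the impulse response: [1, 1, 1, 0, 1, 1]
print(conv_encode([1, 0, 1]))  # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

For a single input 1, the output is exactly the encoder's impulse response; for longer inputs, the output is the superposition (modulo-2) of shifted impulse responses, which is the convolution stated above.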
Error Control Using Forward Error Correction

 In the model depicted in Figure a, the operations of channel coding and modulation are performed separately in the transmitter, and likewise for the operations of detection and decoding in the receiver.
Error Control Using Forward Error Correction

 When bandwidth efficiency is of major concern, the most effective method of implementing forward error-correction coding is to combine it with modulation as a single function, as shown in Figure b.
 In this second approach, coding is redefined as a process of
imposing certain patterns on the transmitted signal and the
resulting code is called a trellis code.
Error Control Using Forward Error Correction

 Block codes, convolutional codes, and trellis codes represent the classical family of codes that follow traditional approaches rooted in algebraic mathematics in one form or another.
 In addition to these classical codes, we now
have a “new” generation of coding techniques
exemplified by turbo codes and low-density
parity-check (LDPC) codes.
Linear Block Codes
 Definition: A code is said to be linear if any
two codewords in the code can be added in
modulo-2 arithmetic to produce a third
codeword in the code.
ck = ci + cj
Example: code {0000, 0101, 1010, 1111}
ci = 0 1 0 1
cj = 1 0 1 0
-----------------
ck = 1 1 1 1
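The closure property in the example above can be checked exhaustively in a short Python sketch, representing each codeword as a 4-bit integer and modulo-2 addition as bitwise XOR.

```python
# Exhaustive check of the closure property for the example code:
# the modulo-2 (bitwise XOR) sum of any two codewords is again a codeword.
code = {0b0000, 0b0101, 0b1010, 0b1111}
closed = all((ci ^ cj) in code for ci in code for cj in code)
print(closed)  # True
```

The example in the text is one of the 16 cases this loop checks: 0101 ^ 1010 = 1111, which is in the code.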
Linear Block Codes
 Let m0, m1, …, m(k–1) constitute a block of k arbitrary message bits.
 Thus, we have 2^k distinct message blocks.
 Let this sequence of message bits be applied to a linear block encoder, producing an n-bit codeword whose elements are denoted by c0, c1, …, c(n–1).
Linear Block Codes
 For the code to possess a systematic structure,
a codeword is divided into two parts, one of
which is occupied by the message bits and the
other by the parity-check bits.
 Clearly, we have the option of sending the
message bits of a codeword before the parity-
check bits, or vice versa.
[Figure: block diagram of the channel encoder — message bits in, codeword (parity-check bits followed by message bits) out.]
Linear Block Codes
 The (n – k) leftmost bits of a codeword are identical to the corresponding parity-check bits and the k rightmost bits of the codeword are identical to the corresponding message bits.
 We may therefore write
ci = bi,          i = 0, 1, …, n – k – 1
ci = m(i+k–n),    i = n – k, n – k + 1, …, n – 1
Linear Block Codes
 The (n – k) parity-check bits are linear sums of the k message bits, as shown by the generalized relation
bi = p0i m0 + p1i m1 + … + p(k–1)i m(k–1)
 where the coefficients are defined as follows:
pij = 1 if bi depends on mj, and pij = 0 otherwise.
Linear Block Codes
 This system of equations may be rewritten in a compact form using matrix notation.
 To proceed with this reformulation, we respectively define the 1-by-k message vector m, the 1-by-(n – k) parity-check vector b, and the 1-by-n code vector c as follows:
m = [m0, m1, …, m(k–1)]
b = [b0, b1, …, b(n–k–1)]
c = [c0, c1, …, c(n–1)]
Linear Block Codes
 We may thus rewrite the set of simultaneous equations defining the parity-check bits in the compact matrix form
b = mP
 where P is the k-by-(n – k) coefficient matrix [pij], in which each element pij is 0 or 1.
Linear Block Codes
 c may be expressed as a partitioned row vector in terms of the vectors m and b as follows:
c = [b ⋮ m]
 Substituting b = mP into this equation and taking m as a common factor, we obtain
c = m[P ⋮ Ik]
 where Ik is the k-by-k identity matrix; the k-by-n matrix G = [P ⋮ Ik] is called the generator matrix.
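A minimal sketch of this systematic encoding rule, c = m[P ⋮ Ik] with the parity bits sent first. The coefficient matrix P below is an assumed (7, 4) example chosen for illustration; any k-by-(n – k) binary matrix would do.

```python
def encode(m, P):
    # Systematic linear block encoding: parity bits b = mP (mod 2),
    # followed by the k message bits, i.e. c = (b | m).
    k, nk = len(P), len(P[0])          # k message bits, n - k parity bits
    b = [sum(m[i] * P[i][j] for i in range(k)) % 2 for j in range(nk)]
    return b + list(m)

# Assumed (7, 4) coefficient matrix (illustrative, not from the slides).
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
print(encode([1, 0, 0, 1], P))  # [0, 1, 1, 1, 0, 0, 1]
```

The three leftmost bits of the output are the parity-check bits b = mP; the four rightmost bits echo the message, as the systematic structure above requires.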
Linear Block Codes
 The generator matrix G is said to be in the
canonical form, in that its k rows are linearly
independent; that is, it is not possible to
express any row of the matrix G as a linear
combination of the remaining rows.
 Using the definition of the generator matrix G, we may simplify the encoding rule to c = mG.
 The modulo-2 sum of any two codewords in the code is another codeword; this basic property of linear block codes is called closure.
Linear Block Codes
 To prove its validity, consider a pair of code
vectors ci and cj corresponding to a pair of
message vectors mi and mj, respectively.
 we may express the sum of ci and cj as
ci + cj = miG + mjG = (mi + mj)G
 The modulo-2 sum of mi and mj represents a new message vector.
 Correspondingly, the modulo-2 sum of ci and cj represents a new code vector.
Linear Block Codes
 There is another way of expressing the
relationship between the message bits and
parity-check bits of a linear block code.
 Let H denote an (n – k)-by-n matrix, defined as
H = [I(n–k) ⋮ PT]
 where PT is the (n – k)-by-k transpose of the coefficient matrix P, and I(n–k) is the (n – k)-by-(n – k) identity matrix.
Linear Block Codes
 Accordingly, we may perform the following multiplication of partitioned matrices:
HGT = PT + PT
 In modulo-2 arithmetic, the matrix sum PT + PT is 0. We therefore have
HGT = 0
Linear Block Codes
 Postmultiplying both sides of c = mG by HT and using GHT = 0, we obtain
cHT = mGHT = 0
 The matrix H is called the parity-check matrix of the code, and the equations specified by cHT = 0 are called parity-check equations.
 The generator equation c = mG and the parity-check equation cHT = 0 are basic to the description and operation of a linear block code.
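The identity cHT = 0 can be verified numerically. This sketch builds G = [P ⋮ Ik] and H = [I(n–k) ⋮ PT] for an assumed (7, 4) coefficient matrix P (illustrative, not from the slides) and checks every codeword.

```python
def matmul2(A, B):
    # Modulo-2 product of two binary matrices (lists of lists).
    return [[sum(a * b for a, b in zip(row, col)) % 2
             for col in zip(*B)] for row in A]

def eye(s):
    # s-by-s identity matrix.
    return [[int(i == j) for j in range(s)] for i in range(s)]

# Assumed (7, 4) coefficient matrix (illustrative).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
G = [P[i] + eye(4)[i] for i in range(4)]                   # G = [P : I4]
H = [eye(3)[i] + [row[i] for row in P] for i in range(3)]  # H = [I3 : PT]
HT = [list(col) for col in zip(*H)]

# Every codeword c = mG satisfies cHT = 0 (mod 2).
ok = all(matmul2(matmul2([[(m >> b) & 1 for b in range(4)]], G), HT)
         == [[0, 0, 0]] for m in range(16))
print(ok)  # True
```

Each row of G dotted with each row of H gives pij + pij = 0 in modulo-2 arithmetic, which is why the loop succeeds for all 2^k = 16 messages.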
Linear Block Codes
 These two equations are depicted in the form of
block diagrams in Figure
Syndrome: Definition and Properties

 The generator matrix G is used in the encoding operation at the transmitter.
 On the other hand, the parity-check matrix H is
used in the decoding operation at the receiver.
 Let r denote the 1-by-n received vector that
results from sending the code vector c over a
noisy binary channel.
 We express the vector r as the sum of the original
code vector c and a new vector e, as shown by
r=c+e
Syndrome: Definition and Properties

 The vector e is called the error vector or error pattern.
 The ith element of e equals 0 if the corresponding element of r is the same as that of c.
 On the other hand, the ith element of e equals 1 if the corresponding element of r is different from that of c, in which case an error is said to have occurred in the ith location.
 That is, for i = 1, 2, …, n, we have
ei = 1 if ri ≠ ci, and ei = 0 otherwise
Syndrome: Definition and Properties
 The receiver has the task of decoding the code
vector c from the received vector r. (c = r + e)
 The algorithm commonly used to perform this decoding operation starts with the computation of a 1-by-(n – k) vector called the error-syndrome vector, or simply the syndrome.
 The importance of the syndrome lies in the fact
that it depends only upon the error pattern.
 Given a 1-by-n received vector r, the
corresponding syndrome is formally defined as
s = rHT
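The syndrome computation s = rHT can be sketched as follows, reusing the assumed (7, 4) code with H = [I3 ⋮ PT] and an illustrative coefficient matrix P (not from the slides).

```python
# Assumed (7, 4) code: H = [I3 : PT] with an illustrative P.
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
H = [[int(i == j) for j in range(3)] + [row[i] for row in P]
     for i in range(3)]

def syndrome(r):
    # s = rHT (mod 2): the ith syndrome bit is the dot product of r
    # with the ith row of H.
    return [sum(rj * hij for rj, hij in zip(r, H[i])) % 2 for i in range(3)]

c = [0, 1, 1, 1, 0, 0, 1]   # a valid codeword of this code
r = c[:]
r[4] ^= 1                   # flip bit 4: simulate a single channel error
print(syndrome(c))          # [0, 0, 0] -- valid codeword
print(syndrome(r))          # [0, 1, 1] -- row 4 of HT (column 4 of H)
```

A valid codeword gives the all-zero syndrome; a single-bit error in location i gives the ith row of HT, which is exactly the corollary to Property 1 stated below.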
Syndrome: Definition and Properties

 The syndrome has the following important properties.
Property-1: The syndrome depends only on the error
pattern and not on the transmitted codeword.
Property-2: All error patterns that differ by a codeword
have the same syndrome.
Syndrome: Definition and Properties
 Property-1: The syndrome depends only on the error
pattern and not on the transmitted codeword
 To prove this property, recall that r = c + e; then
s = rHT
  = (c + e)HT
  = cHT + eHT
  = 0 + eHT
  = eHT
 Hence, the parity-check matrix H of a code permits us
to compute the syndrome s, which depends only upon
the error pattern e.
Syndrome: Definition and Properties
 Suppose that the error pattern e contains a pair of errors in locations i and j caused by the additive channel noise, as shown by
e = [0, …, 0, 1, 0, …, 0, 1, 0, …, 0]   (1s in locations i and j)
 Then, substituting this error pattern into s = eHT yields the syndrome
s = hi + hj
 where hi and hj are respectively the ith and jth rows of the matrix HT.
 In words, we may state the following corollary to Property 1:
 For a linear block code, the syndrome s is equal to the sum of those rows of the transposed parity-check matrix HT where errors have occurred due to channel noise.
Syndrome: Definition and Properties

 Property-2: All error patterns that differ by a codeword have the same syndrome.
 For k message bits, there are 2^k distinct code vectors, denoted ci, where i = 0, 1, …, 2^k – 1.
 Correspondingly, for any error pattern e we define the 2^k distinct vectors ei as follows:
ei = e + ci,   i = 0, 1, …, 2^k – 1
 Each ei has the same syndrome, since eiHT = (e + ci)HT = eHT + ciHT = eHT.
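Property 2 can be demonstrated numerically for the assumed (7, 4) example code (illustrative P, not from the slides): offsetting one error pattern e by every codeword leaves the syndrome unchanged.

```python
# Property-2 sketch: every vector ei = e + ci (the error pattern e
# offset by a codeword ci) produces the same syndrome eHT.
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]  # assumed coefficient matrix
G = [P[i] + [int(i == j) for j in range(4)] for i in range(4)]  # G = [P : I4]
H = [[int(i == j) for j in range(3)] + [row[i] for row in P]
     for i in range(3)]                                         # H = [I3 : PT]

def syndrome(v):
    return tuple(sum(vj * hij for vj, hij in zip(v, H[i])) % 2
                 for i in range(3))

# All 2^k = 16 codewords ci = mG.
codewords = [[sum(((m >> i) & 1) * G[i][j] for i in range(4)) % 2
              for j in range(7)] for m in range(16)]

e = [0, 0, 0, 0, 1, 0, 0]   # an assumed error pattern (error in location 4)
syndromes = {syndrome([a ^ b for a, b in zip(e, c)]) for c in codewords}
print(syndromes)            # a single value: {(0, 1, 1)}
```

All 16 offset patterns collapse to one syndrome because ciHT = 0 for every codeword; the decoder can therefore identify only the coset of e, not e itself.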
