Department of ECE
SUBJECT NAME: DIGITAL COMMUNICATION
Regulation: 2017
Year and Semester: III/V
UNIT II WAVEFORM CODING 9
Prediction filtering and DPCM - Delta Modulation - ADPCM & ADM principles - Linear Predictive Coding

UNIT III BASEBAND TRANSMISSION 9
Properties of Line codes - Power Spectral Density of Unipolar / Polar RZ & NRZ - Bipolar NRZ - Manchester - ISI - Nyquist criterion for distortionless transmission - Pulse shaping - Correlative coding - M-ary schemes - Eye pattern - Equalization

UNIT IV DIGITAL MODULATION SCHEME 9
Geometric Representation of signals - Generation, detection, PSD & BER of Coherent BPSK, BFSK & QPSK - QAM - Carrier Synchronization - Structure of Non-coherent Receivers - Principle of DPSK

UNIT V ERROR CONTROL CODING 9
Channel coding theorem - Linear Block codes - Hamming codes - Cyclic codes - Convolutional codes - Viterbi Decoder
REFERENCES:
1. B. Sklar, "Digital Communication Fundamentals and Applications", 2nd Edition, Pearson Education, 2009.
2. B. P. Lathi, "Modern Digital and Analog Communication Systems", 3rd Edition, Oxford University Press, 2007.
3. H. P. Hsu, Schaum Outline Series, "Analog and Digital Communications", TMH, 2006.
4. J. G. Proakis, "Digital Communication", 4th Edition, Tata McGraw Hill, 2001.
LESSON PLAN

S.No | Week | Topics to be Covered | No. of Hours | Text | Page No.
-    | -    | -                    | -            | R1, T1 | 165

UNIT II WAVEFORM CODING
11 | -  | Prediction filtering              | 2 | T1         | 109-113
12 | 4  | DPCM                              | 2 | T1, R1, R2 | 200-202, 836-840, 123-126
13 | -  | Delta Modulation                  | 1 | T1, R1, R2 | 203-208, 842-843, 336-359
14 | -  | ADPCM                             | 2 | T1, R2     | 211-215, 121-123
15 | 5  | ADM principles                    | 1 | T1         | 208-210
16 | -  | Linear Predictive Coding          | 1 | R1         | 853-854

24 | 8  | Eye pattern                       | 1 | T1, R1     | 261-262, 151-152
25 | -  | Equalization                      | 1 | T1         | 263-266
-  | -  | -                                 | 2 | T1         | 275-279
29 | -  | BFSK                              | 2 | T1         | 279-283
30 | 10 | QPSK                              | 1 | T1         | 284-290
31 | -  | QAM                               | 1 | T1         | 283-285
32 | -  | Carrier Synchronization           | 1 | T1         | 344-347
33 | 11 | Structure of Non-coherent Receivers | 2 | R1       | 194-200
34 | -  | Principle of DPSK                 | 1 | T1         | 307-310
36 | 12 | Linear Block codes                | 2 | T1, R2     | 370-378, 416-425
37 | -  | Hamming codes                     | 1 | T1, R2     | 378-379, 423-424
38 | -  | Cyclic codes                      | 2 | T1, R2     | 379-392, 425-434
39 | 13 | Convolutional codes               | 2 | T1, R2     | 393-402, 471-482
40 | -  | Viterbi Decoder                   | 1 | T1, R2     | 404-406, 482-485

Faculty Incharge                                         HoD
TABLE OF CONTENTS

UNIT | Q.NO | TITLE                                                      | PAGE NO
I    | 1-12 | PART A                                                     | 7-11
     |      | PART B                                                     |
     | 1    | Quantization                                               | 12-15
     | 2    | PCM                                                        | 16-18
     | 3    | Sampling Theorem                                           | 19-22
     | 4    | TDM & Logarithmic companding of speech signal              | 23-26
II   | 1-10 | PART A                                                     | 27-29
     |      | PART B                                                     |
     | 1    | Delta Modulation (DM) and Adaptive Delta Modulation (ADM)  | 30-35
     | 2    | Differential Pulse Code Modulation (DPCM)                  | 36-38
     | 3    | Adaptive Differential Pulse Code Modulation (ADPCM)        | 39-43
     | 4    | Linear Predictive Coding                                   | 44-46
     | 5    | Prediction Filter                                          | 46-49
III  | 1-10 | PART A                                                     | 50-53
     |      | PART B                                                     |
     | 1    | Nyquist first criterion to minimize ISI                    | 53-56
UNIT I
PART A

A band-limited signal of finite energy, which has no frequency components higher than W Hz, may be completely recovered from the knowledge of its samples taken at the rate of 2W samples per second.
2. What is companding? Sketch the input and output characteristics of the expander and compressor and also write the A-law and µ-law for compression. (MAY/JUNE 2016, NOV/DEC 2016, APRIL/MAY 2017)
The signal is compressed at the transmitter and expanded at the receiver. This is called companding. The combination of a compressor and an expander is called a compander.

3. State the sampling theorem for band-limited signals and the filters used to avoid aliasing (OR) State the low-pass sampling theorem. (NOV/DEC 2015)
If a finite-energy signal g(t) contains no frequencies higher than W hertz, it is completely determined by specifying its coordinates at a sequence of points spaced 1/2W seconds apart.
If a finite-energy signal g(t) contains no frequencies higher than W hertz, it may be completely recovered from its coordinates at a sequence of points spaced 1/2W seconds apart.
Filters to avoid aliasing:
o Before sampling, a low-pass pre-alias filter is used to attenuate those high-frequency components which do not contribute to the information content of the signal.
o The filtered signal is sampled at a rate slightly higher than the Nyquist rate 2W.
4. Define the quantizing process.
The conversion of an analog sample of the signal into digital form is called the quantizing process. The output is assigned a discrete value selected from a finite set of representation levels.

5. What is quantization?
While converting the signal value from analog to digital, quantization is performed. The analog value is assigned to the nearest digital value; this is called quantization. The quantized value is then converted into an equivalent binary value. The quantization levels are fixed depending upon the number of bits. Quantization is performed in every analog-to-digital conversion.

6. Define quantization noise power.
Quantization noise power is the variance of the quantizer error and is given by σ_Q² = Δ²/12, where Δ is the step size between the representation values of the quantizer.
7. What is meant by PCM? What are the noises present in a PCM system? What is the SNR of a PCM system if the number of quantization levels is 2^8?
In PCM the message signal is sampled, and the amplitude of each sample is rounded off to the nearest one of a finite set of discrete levels and encoded so that both time and amplitude are represented in discrete form. This allows the message to be transmitted by means of a digital waveform.
Noises present: aliasing noise, quantization noise, channel noise, intersymbol interference.
Signal-to-noise ratio: (SNR)_dB ≈ 1.8 + 6R; for L = 2^8 levels (R = 8 bits per sample), (SNR)_dB ≈ 49.8 dB.
9. What is the disadvantage of uniform quantization over non-uniform quantization?
SNR decreases with decrease in input power level at the uniform quantizer, but non-uniform quantization maintains a constant SNR for a wide range of input power levels. This type of quantization is called robust quantization.

10. What are the advantages and disadvantages of digital over analog communication systems? (APRIL/MAY 2011)
Advantages:
Ruggedness to channel noise and other interferences.
Reproduction of the message with high fidelity.
Security of information.
Disadvantages:
A digital communication system needs more bandwidth than an analog communication system.
11. What are the advantages and disadvantages of PCM?
Advantages:
... communication.
As it is digital, coding techniques are available for compression, encryption and error correction.
Disadvantages:
PCM signal generation and reception involve complex processes.
PCM requires much larger transmission bandwidth than analog modulation.
12. What is TDM? Write its advantages and disadvantages.
TDM (Time Division Multiplexing) enables the joint utilization of a common channel by a number of independent message sources, each of which is allotted a distinct time slot.
Advantages:
Only one carrier is in the medium at any given time.
PART B

1. Explain the process of quantization and obtain the expression for the signal to quantization noise ratio in the case of a uniform quantizer. (NOV/DEC 2016, APRIL/MAY 2017)
The conversion of an analog sample of the signal into a digital form is called the quantizing process. The quantizing process has a twofold effect:
1) The peak-to-peak range of the input sample is subdivided into a finite set of decision levels (or decision thresholds) that are aligned with the 'risers' of the staircase, and
2) The output is assigned a discrete value selected from a finite set of representation levels (or reconstruction levels) that are aligned with the 'treads' of the staircase.
There are two types of quantizers:
1) Uniform quantizer
2) Non-uniform quantizer

1) Uniform quantizer:
In uniform quantization, as in figure 1(a), the separation between the decision thresholds and the separation between the representation levels of the quantizer have a common value called the step size.
1. Mid tread type:
According to the staircase-like transfer characteristic of figure 1(a), the decision thresholds of the quantizer are located at ±Δ/2, ±3Δ/2, ±5Δ/2, ... and the representation levels are located at 0, ±Δ, ±2Δ, ..., where Δ is the step size. Since the origin lies in the middle of a tread of the staircase, it is referred to as a symmetric quantizer of the midtread type.

2. Mid riser type:
Figure 2(a) shows a staircase-like transfer characteristic in which the decision thresholds of the quantizer are located at 0, ±Δ, ±2Δ, ... and the representation levels are located at ±Δ/2, ±3Δ/2, ±5Δ/2, ..., where Δ is the step size. Since in this case the origin lies in the middle of a riser of the staircase, it is referred to as a symmetric quantizer of the midriser type.

Both the midtread and midriser quantizers are memoryless, that is, the quantizer output is determined only by the value of the corresponding input sample.

Fig 1 Two types of quantization: (a) midtread and (b) midrise.

Overload level:
The absolute value of the overload level is one half of the peak-to-peak range of input sample values.
Quantization Noise: The use of quantization introduces an error defined as the difference between the input signal m and the output signal v. This error is called quantization noise.
Let the quantizer input m be the sample value of a zero-mean random variable M. A quantizer g(.) maps the input random variable M of continuous amplitude into a discrete random variable V; their respective sample values are related by the equation
v = g(m)
Let the quantization error be denoted by the random variable Q of sample value q:
q = m − v, or Q = M − V
With the input M having zero mean, and the quantizer assumed to be symmetric as in figure (2), the quantizer output V and therefore the quantization error Q also have zero mean.

Figure (2) Illustration of the quantization process
Quantization error Q:
Consider an input m of continuous amplitude in the range (−m_max, m_max). Assuming a uniform quantizer of the midriser type, the step size of the quantizer is
Δ = 2 m_max / L
where L is the total number of representation levels.
For a uniform quantizer, the quantization error Q will have its sample values bounded by −Δ/2 ≤ q ≤ Δ/2. If the step size Δ is sufficiently small, it is reasonable to assume that the quantization error Q is a uniformly distributed random variable:
f_Q(q) = 1/Δ for −Δ/2 ≤ q ≤ Δ/2
       = 0 otherwise --------(1)
For this to be true, we must ensure that the incoming signal does not overload the quantizer. Then, with the mean of the quantization error being zero, its variance σ_Q² is the same as the mean-square value:
σ_Q² = E[Q²]
     = ∫ from −Δ/2 to Δ/2 of q² f_Q(q) dq ---------(2)
     = Δ²/12 ------------(3)
Let R denote the number of bits per sample used in the construction of the binary code. Then
L = 2^R --------(4)
Substituting the value of L from equation (4),
Δ = 2 m_max / 2^R
Now,
σ_Q² = Δ²/12
     = (1/3) m_max² 2^(−2R)
Let P be the average power of the message signal m(t). The output signal-to-noise ratio of the uniform quantizer is then
(SNR)_o = P / σ_Q²
        = (3P / m_max²) 2^(2R)
The above equation shows that the output signal-to-noise ratio of the quantizer increases exponentially with the number of bits per sample R (approximately 6 dB per bit).
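As a quick numerical check of the 6 dB-per-bit behaviour derived above, the following Python sketch quantizes a full-scale sinusoid with a midriser uniform quantizer and compares the simulated SNR with (SNR)_dB ≈ 1.8 + 6R. It is an illustrative sketch only; the test signal and bit depths are arbitrary choices, not values from the notes.

```python
import numpy as np

def uniform_quantize(x, n_bits, x_max):
    """Midriser uniform quantizer with L = 2**n_bits levels over (-x_max, x_max)."""
    L = 2 ** n_bits
    delta = 2 * x_max / L                          # step size = 2*m_max / L
    idx = np.clip(np.floor(x / delta), -L // 2, L // 2 - 1)
    return (idx + 0.5) * delta                     # representation levels at +/-delta/2, ...

t = np.arange(0, 1, 1e-5)
m = np.sin(2 * np.pi * 50 * t)                     # full-scale test signal, m_max = 1
for R in (4, 6, 8):
    v = uniform_quantize(m, R, x_max=1.0)
    q = m - v                                      # quantization error, bounded by +/-delta/2
    snr_db = 10 * np.log10(np.mean(m ** 2) / np.mean(q ** 2))
    print(f"R = {R} bits: simulated SNR = {snr_db:5.1f} dB, formula = {1.8 + 6 * R:5.1f} dB")
```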
2. Describe the PCM waveform coder and decoder with a neat sketch and list its merits compared with analog coders. (NOV/DEC 2015, APRIL/MAY 2017)

Pulse-Code Modulation
PCM is a discrete-time, discrete-amplitude waveform-coding process, by means of which an analog signal is directly represented by a sequence of coded pulses. Specifically, the transmitter consists of two components: a pulse-amplitude modulator followed by an analog-to-digital (A/D) converter. The latter component itself embodies a quantizer followed by an encoder. The receiver performs the inverse of these two operations: digital-to-analog (D/A) conversion followed by pulse-amplitude demodulation. The communication channel is responsible for transporting the encoded pulses from the transmitter to the receiver. Figure 3, a block diagram of the PCM system, shows the transmitter, the transmission path from the transmitter output to the receiver input, and the receiver. It is important to realize, however, that once distortion in the form of quantization noise is introduced into the encoded pulses, there is absolutely nothing that can be done at the receiver to compensate for that distortion.
Quantization in the Transmitter
The PAM representation of the message signal is then quantized in the analog-to-digital converter, thereby providing a new representation of the signal that is discrete in both time and amplitude.
By using a non-uniform quantizer with the feature that the step size increases as the separation from the origin of the input-output amplitude characteristic of the quantizer is increased, the large end-steps of the quantizer can take care of possible excursions of the voice signal into the large amplitude ranges that occur relatively infrequently.
The last signal-processing operation in the transmitter is that of line coding, the purpose of which is to represent each binary codeword by a sequence of pulses.
The most important feature of a PCM system is its ability to control the effects of distortion and noise produced by transmitting a PCM signal through the channel connecting the transmitter to the receiver. This capability is accomplished by reconstructing the PCM signal through a chain of regenerative repeaters located at sufficiently close spacing along the transmission path.

Figure 5 Block diagram of a regenerative repeater.

As illustrated in Figure 5, three basic functions are performed in a regenerative repeater: equalization, timing, and decision making. The equalizer shapes the received pulses so as to compensate for the effects of amplitude and phase distortions produced by the non-ideal transmission characteristics of the channel. The timing circuitry provides a periodic pulse train, derived from the received pulses, for sampling the equalized pulses at the instants of time where the SNR is a maximum. Each sample so extracted is compared with a predetermined threshold in the decision-making device. In each bit interval, a decision is then made on whether the received symbol is a 1 or a 0 by observing whether the threshold is exceeded or not. If the threshold is exceeded, a clean new pulse representing symbol 1 is transmitted to the next repeater; otherwise, another clean new pulse representing symbol 0 is transmitted.
3. Explain the sampling process, state the sampling theorem and discuss aliasing.

Through the use of the sampling process, an analog signal is converted into a corresponding sequence of samples that are usually spaced uniformly in time. The sampling rate is properly chosen in relation to the bandwidth of the message signal, so that the sequence of samples uniquely defines the original analog signal.

Frequency-Domain Description of Sampling
Consider an arbitrary signal g(t) of finite energy, which is specified for all time t. A segment of the signal g(t) is shown in Figure 6(a). Suppose that we sample the signal g(t) instantaneously and at a uniform rate, once every Ts seconds. Consequently, we obtain an infinite sequence of samples spaced Ts seconds apart and denoted by {g(nTs)}, where n takes on all possible integer values, positive as well as negative. We refer to Ts as the sampling period, and to its reciprocal fs = 1/Ts as the sampling rate. For obvious reasons, this ideal form of sampling is called instantaneous sampling.

Figure 6 The sampling process. (a) Analog signal. (b) Instantaneously sampled version of the signal.

The spectrum of the instantaneously sampled signal is
G_δ(f) = fs Σ over m of G(f − m fs)
where G(f) is the Fourier transform of the original signal g(t) and fs is the sampling rate.
Figure 7 (a) Spectrum of a strictly band-limited signal g(t). (b) Spectrum of the sampled version of g(t) for a sampling period Ts = 1/2W.

The Sampling Theorem
The sampling theorem for strictly band-limited signals of finite energy may be stated in two equivalent parts:
A band-limited signal of finite energy that has no frequency components higher than W hertz is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds.
A band-limited signal of finite energy that has no frequency components higher than W hertz may be completely recovered from knowledge of its samples taken at the rate of 2W samples per second.
Aliasing Phenomenon
Aliasing refers to the phenomenon of a high-frequency component in the spectrum of the signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version, as illustrated in Figure 8. The aliased spectrum, shown by the solid curve in Figure 8(b), pertains to the undersampled version of the message signal represented by the spectrum of Figure 8(a). To combat the effects of aliasing in practice, we may use two corrective measures:
Prior to sampling, a low-pass anti-aliasing filter is used to attenuate those high-frequency components of the signal that are not essential to the information being conveyed by the message signal g(t).
The filtered signal is sampled at a rate slightly higher than the Nyquist rate, as illustrated by the short sketch below.
Figure 8 (a) Spectrum of a signal. (b) Spectrum of an undersampled version of the signal exhibiting the aliasing phenomenon.
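The aliasing effect described above is easy to reproduce numerically. The following small Python sketch (the frequencies are arbitrary illustrative choices, not values from the notes) samples a 7 kHz tone at only 10 kHz, which is below the 14 kHz Nyquist rate, and shows that the samples are indistinguishable from those of a 3 kHz tone.

```python
import numpy as np

f_signal, fs = 7000.0, 10000.0                          # fs < 2*f_signal, so aliasing occurs
n = np.arange(32)
samples = np.cos(2 * np.pi * f_signal * n / fs)         # samples of the 7 kHz tone
alias = np.cos(2 * np.pi * (fs - f_signal) * n / fs)    # samples of a 3 kHz tone (the alias)
print("max difference between 7 kHz and 3 kHz sample sets:",
      np.max(np.abs(samples - alias)))                  # ~0: the two sets are identical
```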
Figure 9 (a) Spectrum of the anti-alias filtered signal. (b) Spectrum of the instantaneously sampled version of the signal, assuming a sampling rate greater than the Nyquist rate. (c) Magnitude response of the reconstruction filter.

The use of a sampling rate higher than the Nyquist rate also has the beneficial effect of easing the design of the reconstruction filter used to recover the original signal from its sampled version. Consider the example of a message signal that has been anti-alias (low-pass) filtered, resulting in the spectrum shown in Figure 9(a). The corresponding spectrum of the instantaneously sampled version of the signal is shown in Figure 9(b), assuming a sampling rate higher than the Nyquist rate. According to Figure 9(b), we readily see that the design of the reconstruction filter may be specified as follows:
The reconstruction filter is low-pass, with a passband extending from −W to W, which is itself determined by the anti-aliasing filter.
The reconstruction filter has a transition band extending (for positive frequencies) from W to (fs − W), where fs is the sampling rate.
4. Explain TDM and logarithmic companding of speech signals.

Time Division Multiplexing (TDM):
The block diagram of a TDM system is shown in Fig. 10. Each input message signal is first restricted in bandwidth by a low-pass anti-aliasing filter. The filter outputs are then applied to a commutator that takes a narrow sample of each input at a rate slightly higher than 2W, in accordance with the sampling theorem. Let Ts denote the sampling period so determined for each message signal; the time spacing between adjacent samples of the time-multiplexed signal is then Ts/N, where N is the number of input channels.
The use of a non-uniform quantizer implies non-uniform spacing between the representation levels. For example, the range of voltages covered by voice signals, from the peaks of loud talk to the weak passages of weak talk, is on the order of 1000 to 1. The use of a non-uniform quantizer is equivalent to passing the baseband signal through a compressor and then applying the compressed signal to a uniform quantizer.
Logarithmic companding of speech signals:
A particular form of compression law used in practice is the µ-law, which is defined by
|v| = ln(1 + µ|m|) / ln(1 + µ)
where m and v are the normalized input and output voltages and µ is a positive constant.
In figure 11(a), the µ-law is plotted for three different values of µ.

Figure 11 Compression laws: (a) µ-law (b) A-law

The A-law is given by
|v| = A|m| / (1 + ln A)             for 0 ≤ |m| ≤ 1/A
    = (1 + ln(A|m|)) / (1 + ln A)   for 1/A ≤ |m| ≤ 1
Figure: Input-output characteristics of the compressor and expander.
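To make the µ-law compressor/expander pair above concrete, here is a minimal Python sketch. It assumes normalized inputs in [−1, 1] and uses µ = 255 (the common telephony value); the function names are illustrative, not part of the notes.

```python
import numpy as np

MU = 255.0

def mu_compress(m):
    """mu-law compressor: |v| = ln(1 + mu|m|) / ln(1 + mu), sign preserved."""
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def mu_expand(v):
    """Inverse of mu_compress (the expander used at the receiver)."""
    return np.sign(v) * ((1.0 + MU) ** np.abs(v) - 1.0) / MU

m = np.linspace(-1, 1, 5)
v = mu_compress(m)
print(np.round(v, 3))                     # compressed values (small inputs are boosted)
print(np.allclose(mu_expand(v), m))       # True: the expander undoes the compressor
```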
UNIT II
PART A

1. Define the adaptive quantization and prediction schemes used in ADPCM. What are the limitations of forward estimation?
AQB: Adaptive quantization with backward estimation, in which samples of the quantizer output are used to derive the backward estimates.
APF: Adaptive prediction with forward estimation, in which unquantized samples of the input signal are used to derive the forward estimates of the predictor coefficients.
APB: Adaptive prediction with backward estimation, in which samples of the quantizer output and the prediction error are used to derive estimates of the predictor coefficients.
Limitations of forward estimation compared with backward estimation:
o Side information
o Buffering
o Delay
2. What are the advantages of delta modulation? (MAY/JUNE 2016)
Delta modulation transmits only one bit per sample. Thus the signaling rate and transmission channel bandwidth are quite small for delta modulation.
The transmitter and receiver implementation is very simple for delta modulation. There is no analog-to-digital converter involved in delta modulation.

3. What is a linear predictor? On what basis are the predictor coefficients determined? (MAY/JUNE 2016)
A predictor is said to be linear if the future sample value is a linear combination of present and past input samples. The predictor coefficients are determined so as to minimize the mean-square value of the prediction error.
4. What is the main advantage of DPCM over PCM?
The signaling rate and transmission bandwidth are reduced compared to PCM.
5. Define delta modulation and adaptive delta modulation. (NOV/DEC 2015)
Delta modulation is the one-bit version of differential pulse code modulation.
Types of noise in DM (drawbacks of DM): 1. slope overload noise (startup error) 2. granular noise (hunting).
Adaptive delta modulation: the performance of a delta modulator can be improved significantly by making the step size time-varying. During a steep segment of the input signal the step size is increased. Conversely, when the input signal is varying slowly, the step size is reduced. In this way the step size is adapting to the level of the signal. The resulting method is called adaptive delta modulation (ADM).

6. Define ADPCM.
A digital coding scheme that uses both adaptive quantization and adaptive prediction is called adaptive differential pulse-code modulation (ADPCM).
8. What is waveform coding?
Waveform coding uses the temporal characteristics, i.e. the time-varying parameters of the signal, to form an estimate of the waveform. It minimizes the error by taking the difference between the signal and its estimate. It requires a transmission bit rate greater than the signal bandwidth.

9. Differentiate the properties of temporal waveform coding and model based coding. [NOV 12]
1. Temporal waveform coding: the encoder uses the temporal characteristics of the waveform itself. Model based coding: it is based on a parametric model of how the signal is generated (for example, the speech production mechanism used in LPC).
PART B

1. (i) Explain delta modulation with its transmitter and receiver. (ii) Explain how adaptive delta modulation performs better and gains more SNR than delta modulation.

Delta Modulation (DM):
The difference between the input and a staircase approximation of it is quantized into two levels, namely ±δ, i.e. positive and negative differences.
If the approximation falls below the signal at any sample, it is increased by δ. On the other hand, if the approximation lies above the signal, it is diminished by δ.
Provided the signal does not change too rapidly from sample to sample, we find that the staircase approximation remains within ±δ of the input signal.
The basic principle of delta modulation may be formalized in the following set of discrete-time relations:
e(nTs) = x(nTs) − u(nTs − Ts)
b(nTs) = δ sgn[e(nTs)]
u(nTs) = u(nTs − Ts) + b(nTs)
where Ts is the sampling period, e(nTs) is the prediction error between the present sample x(nTs) and the latest staircase approximation, b(nTs) is the one-bit quantized version of e(nTs), and u(nTs) is the staircase approximation.
DM TRANSMITTER
It consists of a summer, a two-level quantizer and an accumulator interconnected as shown in the figure.
Assume that the accumulator is initially set to zero.
In the summer, the accumulator adds the quantizer output (±δ) to the previous sample approximation.

DM RECEIVER
The staircase approximation u(t) is reconstructed by passing the incoming sequence of positive and negative pulses through an accumulator in a manner similar to that used in the transmitter.
The out-of-band quantization noise in the high-frequency staircase waveform u(t) is then removed by a low-pass filter of bandwidth equal to the original message bandwidth.
Delta modulation offers two unique features: (1) a one-bit codeword for the output, which eliminates the need for word framing, and (2) simplicity of design for both the transmitter and the receiver.

QUANTIZATION ERROR
Delta modulation suffers from two types of error:
1) slope overload distortion and 2) granular noise.
SLOPE OVERLOAD DISTORTION
Let q(nTs) denote the quantizing error:
u(nTs) = x(nTs) + q(nTs)
To eliminate u(nTs − Ts), we may express the prediction error e(nTs) as
e(nTs) = x(nTs) − x(nTs − Ts) − q(nTs − Ts)
If we consider the maximum slope of the original input waveform x(t), then to avoid slope overload we require
δ/Ts ≥ max |dx(t)/dt|
Otherwise, the step size Δ = 2δ is too small for the staircase approximation u(t) to follow a steep segment of the input waveform x(t), with the result that u(t) falls behind x(t).

GRANULAR NOISE
In contrast to slope-overload distortion, granular noise occurs when the step size Δ is too large relative to the local slope characteristics of the input waveform x(t), thereby causing the staircase approximation u(t) to hunt around a relatively flat segment of the input waveform.
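A minimal Python sketch of the DM relations given above (e(n) = x(n) − u(n−1), one bit per sample, accumulator at the receiver) is shown next. The sampling rate, step size and test tone are arbitrary illustrative choices; with these values the slope-overload condition δ/Ts ≥ max|dx/dt| is satisfied.

```python
import numpy as np

def dm_encode(x, delta):
    """One-bit delta modulation encoder: returns the transmitted bit stream."""
    u_prev, bits = 0.0, []
    for sample in x:
        e = sample - u_prev                  # prediction error against last staircase value
        b = 1 if e >= 0 else 0               # one bit per sample
        u_prev += delta if b else -delta     # staircase approximation u(nTs)
        bits.append(b)
    return np.array(bits)

def dm_decode(bits, delta):
    """Accumulator, as in the DM receiver (low-pass filtering omitted for brevity)."""
    steps = np.where(bits == 1, delta, -delta)
    return np.cumsum(steps)

fs, f0, delta = 8000, 100, 0.1               # delta*fs > 2*pi*f0, so no slope overload
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
u = dm_decode(dm_encode(x, delta), delta)
print("mean |x - u|:", round(float(np.mean(np.abs(x - u))), 3))
```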
(ii) Explain how Adaptive Delta Modulation performs better and gains more SNR than delta modulation. [NOV/DEC 2016, APRIL/MAY 2017]
The performance of the delta modulator can be improved significantly by making the step size of the modulator time-varying.
During a steep segment of the input signal the step size is increased.
Conversely, when the input signal is varying slowly, the step size is reduced.
In this way the step size is adapted to the level of the input signal; the resulting scheme is called adaptive delta modulation (ADM).
Several ADM schemes exist to adjust the step size:
1) A discrete set of values is provided for the step size.
2) A continuous range of step-size variation is provided.

BLOCK DIAGRAM
The ADM transmitter consists of a summer, a two-level quantizer, an accumulator and logic for step-size control.

RECEIVER
The ADM receiver reconstructs the staircase approximation using the same step-size adaptation logic as the transmitter, followed by a low-pass filter.
2. Explain the DPCM transmitter and receiver and derive its SNR.

When a voice or video signal is sampled at a rate slightly higher than the Nyquist rate, the resulting samples are highly correlated, and the difference between adjacent samples has a variance that is smaller than the variance of the signal itself.
When these highly correlated samples are encoded as in a standard PCM system, the resulting encoded signal contains redundant information. Symbols that are not absolutely essential to the transmission of information are generated as a result of the encoding process; the idea of DPCM is to remove this redundancy before encoding.

DPCM TRANSMITTER
A scheme in which the prediction error (the difference between the input sample and its predicted value) is quantized and encoded is known as differential pulse-code modulation (DPCM).
The quantizer output may be represented as
v(nTs) = Q[e(nTs)]
       = e(nTs) + q(nTs)   (1)
where q(nTs) is the quantization error.
The quantizer output v(nTs) is added to the predicted value x̂(nTs) to produce the predictor input
u(nTs) = x̂(nTs) + v(nTs)   (2)
Substituting equation (1) into (2),
u(nTs) = x̂(nTs) + e(nTs) + q(nTs)   (3)
Since e(nTs) = x(nTs) − x̂(nTs), equation (3) can be rewritten as
u(nTs) = x(nTs) + q(nTs)   (4)
The quantized signal u(nTs) at the predictor input differs from the original input x(nTs) only by the quantization error q(nTs).
If the prediction is good, the variance of the prediction error e(nTs) will be smaller than the variance of x(nTs), so that fewer bits are needed to quantize it.
DPCM RECEIVER
The receiver for reconstructing the quantized version of the input is shown in the figure.
It consists of a decoder to reconstruct the quantized error signal.
The quantized version of the original input is reconstructed from the decoder output using the same predictor as used in the transmitter.
In the absence of channel noise, the encoded signal at the receiver input is identical to the encoded signal at the transmitter output.
The receiver output is equal to u(nTs), which differs from the original input x(nTs) only by the quantizing error q(nTs) incurred as a result of quantizing the prediction error e(nTs).
The transmitter and receiver operate on the same sequence of samples u(nTs).
SNR IN DPCM:
The output signal-to-quantization-noise ratio of a signal coder is
(SNR)_o = σ_X² / σ_Q²
where σ_X² is the variance of the input x(nTs) and σ_Q² is the variance of the quantization error q(nTs). Multiplying and dividing by the variance σ_E² of the prediction error,
(SNR)_o = (σ_X² / σ_E²)(σ_E² / σ_Q²)
        = G_P (SNR)_P
where G_P = σ_X² / σ_E² is the prediction gain produced by the differential quantization scheme, and
(SNR)_P = σ_E² / σ_Q²
is the prediction-error-to-quantization-noise ratio. A well-designed predictor makes G_P greater than unity, which is the gain of DPCM over ordinary PCM.
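The sketch below is a minimal first-order DPCM loop in Python, consistent with equations (1)-(4) above: the predictor is simply the previous reconstructed sample and the prediction error is uniformly quantized. The bit depth, error range and test signal are illustrative assumptions, not values from the notes.

```python
import numpy as np

def dpcm_first_order(x, n_bits=4, e_max=0.5):
    """First-order DPCM: x_hat is the previous reconstructed sample."""
    L = 2 ** n_bits
    delta = 2 * e_max / L
    x_hat, rec = 0.0, []
    for sample in x:
        e = sample - x_hat                                        # prediction error e(nTs)
        v = np.clip(np.round(e / delta), -L // 2, L // 2 - 1) * delta  # quantized error v(nTs)
        u = x_hat + v                                             # predictor input u(nTs)
        rec.append(u)
        x_hat = u                                                 # prediction for the next sample
    return np.array(rec)

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 200 * t)
u = dpcm_first_order(x)
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean((x - u) ** 2))
print("DPCM reconstruction SNR:", round(float(snr_db), 1), "dB")
```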
3. Explain ADPCM in detail.

Reduction in the number of bits per sample from 8 to 4 involves the combined use of adaptive quantization and adaptive prediction.
Adaptive means being responsive to the changing level and spectrum of the input speech signal.
The variation of performance with speakers and speech material, together with variations in signal level, is inherent in the speech communication process.
A digital coding scheme that uses both adaptive quantization and adaptive prediction is called adaptive differential pulse-code modulation (ADPCM).
The term "adaptive quantization" refers to a quantizer that operates with a time-varying step size Δ(nTs), where Ts is the sampling period. The step size is set as
Δ(nTs) = φ σ̂(nTs)
where φ is a constant and σ̂(nTs) is an estimate of the standard deviation of the quantizer input, computed continuously.
ADAPTIVE QUANTIZATION
The step-size estimate may be computed either from the unquantized input samples or from the quantizer output. The respective quantization schemes are referred to as adaptive quantization with forward estimation (AQF) and adaptive quantization with backward estimation (AQB).
In AQF the input samples are buffered and released after the estimate σ̂(nTs) has been obtained; this estimate is obviously independent of quantizing noise. Therefore we find that the step size Δ(nTs) obtained from AQF is more reliable than that from AQB.
However, the use of AQF requires the explicit transmission of level information to a remote decoder, i.e. additional side information that has to be transmitted to the receiver. AQF also requires buffering of the input samples, and a processing delay in the encoding operation results from the use of AQF.
The problems of level transmission, buffering and delay intrinsic to AQF are all avoided in the AQB scheme by using the quantizer output to extract the information needed for the computation of the step size Δ(nTs).
Since in AQB the step size is computed from the quantizer output, it is not obvious that the system will be stable. The system is indeed stable in the sense that if the quantizer input x(nTs) is bounded, then so is the backward estimate σ̂(nTs) and the corresponding step size Δ(nTs).

ADAPTIVE PREDICTION
The use of adaptive prediction in ADPCM is justified because speech signals are inherently nonstationary.
The two schemes for performing adaptive prediction are 1) adaptive prediction with forward estimation (APF) and 2) adaptive prediction with backward estimation (APB).
In APF, unquantized samples of the input signal are used to derive estimates of the predictor coefficients.
In the APF scheme, N unquantized samples of the input speech are first buffered and then released after computation of M predictor coefficients that are optimized for the buffered segment of input samples.
The choice of M involves a compromise between an adequate prediction gain and an acceptable amount of side information.
Likewise, the choice of the learning period or buffer length N involves a compromise between the rate at which information on the predictor coefficients must be updated and transmitted to the receiver, and the prediction performance attained.
ADAPTIVE PREDICTION WITH BACKWARD ESTIMATION (APB)
APF suffers from the same intrinsic disadvantages as AQF (side information, buffering and delay); these disadvantages are eliminated by using the APB scheme shown in the figure below.
The optimum predictor coefficients are estimated on the basis of quantized and transmitted data, so they can be updated as frequently as desired without any side information.
Since x̂(nTs) is the prediction of the speech input sample x(nTs), the prediction error can be written as
y(nTs) = u(nTs) − x̂(nTs)
where
u(nTs) represents a sample value of the predictor input,
x̂(nTs) is the corresponding sample value of the predictor output, and
y(nTs) is the prediction error.
The predictor is assumed to be of order M, as shown in the figure below. For adaptation of the predictor coefficients we use the least-mean-square (LMS) algorithm, which may be written as
ĥ_k(nTs + Ts) = ĥ_k(nTs) + µ y(nTs) u(nTs − kTs),   k = 1, 2, ..., M
where µ is the adaptation step size.
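The LMS update quoted above is easy to exercise numerically. The following Python sketch adapts an order-4 predictor to a synthetic second-order autoregressive signal; the signal model, order, step size µ and run length are all illustrative assumptions, not parameters from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
M, mu, N = 4, 0.05, 5000
h = np.zeros(M)                                   # adaptive predictor coefficients
u = np.zeros(N)
w = 0.3 * rng.standard_normal(N)
for n in range(2, N):                             # AR(2) test signal to be predicted
    u[n] = 1.2 * u[n - 1] - 0.5 * u[n - 2] + w[n]

sq_err = []
for n in range(M, N):
    past = u[n - M:n][::-1]                       # u(n-1), u(n-2), ..., u(n-M)
    y = u[n] - h @ past                           # prediction error y(n)
    h += mu * y * past                            # LMS coefficient update
    sq_err.append(y ** 2)

print("prediction-error power, first vs last 500 samples:",
      round(float(np.mean(sq_err[:500])), 3), round(float(np.mean(sq_err[-500:])), 3))
print("learned coefficients (approx. [1.2, -0.5, 0, 0]):", np.round(h, 2))
```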
4. Explain Linear Predictive Coding (LPC) of speech.

In the model of speech production used in LPC, the vocal tract is excited by a source whose nature depends on whether the speech sound is voiced or unvoiced.
Voiced sounds are produced by forcing air through the glottis with the tension of the vocal cords adjusted so that they vibrate in a relaxation oscillation, thereby producing quasi-periodic pulses of air that excite the vocal tract.
Unvoiced sounds are generated by forming a constriction at some point in the vocal tract and forcing air through the constriction at a high enough velocity to produce turbulence.
Examples of voiced and unvoiced sounds are "A" and "S".
The speech waveform in figure (a) is the result of the utterance "every salt breeze comes from the sea" by a male subject. The waveform of figure (b) corresponds to the "A" segment in the word "salt" and figure (c) corresponds to the "S" segment.
The generation of a voiced sound is modeled as the response of the vocal-tract filter excited by a periodic sequence of impulses spaced by a fundamental (pitch) period.
A linear predictive vocoder consists of a transmitter and a receiver, having the block diagram shown in the figure below.
The transmitter first performs analysis on the input speech signal, block by block.
Each block is 10-30 ms long, for which the speech production process may be treated as essentially stationary.
The parameters resulting from the analysis, namely the prediction-error filter (analyzer) coefficients, a voiced/unvoiced parameter, and the pitch period, provide a complete description of the particular segment of the input speech signal.
A digital representation of these parameters constitutes the transmitted signal.
The receiver first performs decoding, followed by synthesis of the speech signal; the standard result of this analysis/synthesis is an artificial-sounding reproduction of the original speech signal.
5. Briefly explain the Prediction Filter.

Prediction constitutes a special form of estimation: the requirement is to use a finite set of present and past samples of a stationary process to predict a sample of the process in the future.
The difference between the actual sample of the process at the (future) time of interest and the predictor output is called the prediction error.
By minimizing the mean-square value of the prediction error, the predictor can be designed as a special case of the Wiener filter. To do so we note the following:
1. The variance of the sample Xn, viewed as the desired response, equals R_X(0).
2. The cross-correlation between the tap input X(n−k) and the desired response Xn equals R_X(k).
3. The autocorrelation function of the predictor's tap input X(n−k) with another tap input X(n−m) is given by R_X(m−k).
The prediction error is
εn = Xn − X̂n
   = Xn − Σ from k=1 to M of h0k X(n−k)
The prediction error εn is computed from the present and past samples of the stationary process, namely Xn, X(n−1), ..., X(n−M), and the predictor coefficients h01, h02, ..., h0M, by using the structure shown in the figure, which is called a prediction-error filter.
The operation of prediction-error filtering is invertible. By rearranging the last equation,
Xn = εn + Σ from k=1 to M of h0k X(n−k)
that is, the present sample Xn is a linear combination of the "past" samples of the process X(n−1), ..., X(n−M), plus the "present" prediction error εn. The corresponding structure performs the inverse operation: given either of the two sequences {Xn} or {εn}, the other can be computed by means of a linear filtering operation.
The reason for representing samples of a stationary process {Xn} by samples of the corresponding prediction error {εn} is that the prediction-error variance σε² is less than the variance σX² of Xn. If Xn has zero mean, then so has εn, and the prediction-error variance is
σε² = E[εn²] = R_X(0) − Σ from k=1 to M of h0k R_X(k)
UNIT III
PART A

1. List the types of line codes.
Unipolar RZ and NRZ, polar RZ and NRZ, bipolar NRZ, Non-Return-to-Zero Inverted (NRZI) and Manchester encoding.
2. What is ISI? What are the causes of ISI? (MAY/JUNE 2016)
The transmitted signal undergoes dispersion and gets broadened during its transmission through the channel, so the pulses collide or overlap with adjacent symbols in the transmission. This overlapping is called Inter Symbol Interference (ISI).
Pulse shaping compresses the bandwidth of the data impulse to a small bandwidth greater than the Nyquist minimum, so that it does not spread in time and degrade the system's error performance through increased ISI.
The syndrome depends only on the error pattern and not on the transmitted code word. All error patterns that differ by a code word will have the same syndrome. With syndrome decoding, an (n, k) linear block code can correct up to t errors per code word if n and k satisfy the Hamming bound.
5. Define the following terms:
i) NRZ unipolar format
ii) NRZ polar format
iii) NRZ bipolar format
iv) Manchester format
NRZ unipolar format: binary 0 is represented by no pulse and binary 1 is represented by a positive pulse.
NRZ polar format: binary 1 is represented by a positive pulse and binary 0 is represented by a negative pulse.
NRZ bipolar format: binary 1 is represented by alternating positive and negative pulses and binary 0 by no pulse.
Manchester format: Binary 0: first half bit duration negative pulse and second half bit duration positive pulse. Binary 1: first half bit duration positive pulse and second half bit duration negative pulse.
6. Define the terms associated with the eye pattern.
Width of the eye: it defines the time interval over which the received waveform can be sampled without error from intersymbol interference.
Sensitivity of the eye: the sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.
Margin over noise: the height of the eye opening at a specified sampling time defines the margin over noise.
Applications of the eye pattern:
used to study the effect of ISI
Eye opening: additive noise in the signal
Eye overshoot/undershoot: peak distortion due to interruptions in the signal path
Eye width: timing synchronization and jitter effects.

7. State the Nyquist criterion for zero ISI.
The weighted pulse contribution ak p(iTb − kTb) for k = i must be free from ISI due to the overlapping tails of all other contributions (k ≠ i) at the sampling times t = iTb of the received signal.
The frequency function P(f) eliminates ISI for samples taken at intervals Tb provided that it satisfies
Σ over n of P(f − n/Tb) = Tb
PART B

1. Derive and explain the Nyquist first criterion to minimize ISI. [NOV 16, APRIL 17]

The transfer function of the channel and the transmitted pulse shape are specified, and the problem is to determine the transfer functions of the transmitting and receiving filters so as to reconstruct the transmitted data sequence {bk}.
The receiver extracts and then decodes the corresponding sequence of weights {ak} from the output y(t).
The extraction involves sampling the output y(t) at some time t = iTb.
The decoding requires that the weighted pulse contribution ak p(iTb − kTb) for k = i be free from ISI due to the overlapping tails of all other weighted pulse contributions with k ≠ i. This requires the overall pulse p(t) to satisfy
p(iTb − kTb) = 1 for i = k
             = 0 for i ≠ k
with p(0) = 1 by normalization, so that the receiver output simplifies to y(ti) = µ ai.
This implies zero intersymbol interference and assures perfect reception in the absence of noise.
To transform the condition into the frequency domain, consider the sequence of samples {p(nTb)}, n = 0, ±1, ±2, .... Sampling p(t) in the time domain makes its spectrum periodic:
Pδ(f) = Rb Σ over n of P(f − n Rb)
where Rb = 1/Tb is the bit rate. Pδ(f) is also the Fourier transform of an infinite periodic sequence of delta functions of period Tb, weighted by the sample values of p(t):
Pδ(f) = ∫ [ Σ over m of p(mTb) δ(t − mTb) ] exp(−j2πft) dt
Let the integer m = i − k. Then i = k corresponds to m = 0 and i ≠ k corresponds to m ≠ 0. Imposing the condition of zero ISI on the sample values of p(t) in the above integral,
Pδ(f) = ∫ p(0) δ(t) exp(−j2πft) dt = p(0)
by using the sifting property of the delta function. As p(0) = 1 by normalization, the condition for zero ISI is satisfied if
Σ over n of P(f − n Rb) = Tb
Thus the Nyquist criterion for distortionless baseband transmission is formulated in terms of the time function p(t) and the frequency function P(f).
IDEAL SOLUTION:
A frequency function P(f) satisfying the criterion is obtained by permitting only one non-zero component in the series for each f in the range −B0 to B0, where B0 denotes half the bit rate:
B0 = Rb/2
P(f) = (1/2B0) rect(f/2B0)
In this solution no frequencies of absolute value exceeding half the bit rate are needed. Hence one signal waveform that produces zero ISI is defined by the sinc function:
p(t) = sinc(2B0 t)

(a) ideal amplitude response (b) ideal basic pulse shape

The function p(t) is the impulse response of an ideal low-pass filter with passband amplitude response 1/(2B0) and bandwidth B0.
The function p(t) has its peak value at the origin and goes through zero at integer multiples of the bit duration Tb.
If the received waveform y(t) is sampled at the instants of time t = 0, ±Tb, ±2Tb, ..., then the pulses defined by µp(t − iTb), with arbitrary amplitude µ and index i = 0, ±1, ±2, ..., will not interfere with each other.
Practical difficulties with the ideal solution:
(i) The amplitude characteristic of P(f) must be flat from −B0 to B0 and zero elsewhere. This is physically unrealizable because of the abrupt transitions.
(ii) The function p(t) decreases as 1/|t| for large |t|, producing a slow rate of decay.
To evaluate the effect of a timing error Δt, put the nominal sampling index i equal to zero and sample at t = Δt. In the absence of noise,
y(Δt) = µ Σ over k of ak sinc[2B0(Δt − kTb)]
      = µ a0 sinc(2B0Δt) + (µ sin(2πB0Δt)/π) Σ over k≠0 of (−1)^k ak / (2B0Δt − k)
The first term on the right side defines the desired symbol, whereas the remaining series represents the intersymbol interference caused by the timing error Δt in sampling the output y(t).

PRACTICAL SOLUTION:
The practical difficulties caused by the ideal solution are overcome by extending the bandwidth from B0 = Rb/2 to a value between B0 and 2B0.
A particular form of P(f) that embodies this idea is the raised cosine spectrum:
P(f) = 1/(2B0)                                            for 0 ≤ |f| < f1
     = (1/4B0){1 + cos[π(|f| − f1)/(2B0 − 2f1)]}          for f1 ≤ |f| < 2B0 − f1
     = 0                                                  for |f| ≥ 2B0 − f1
The frequency f1 and the bandwidth B0 are related by α = 1 − f1/B0. The parameter α is called the rolloff factor.

Response for different rolloff factors: (a) Frequency response (b) Time response

The frequency response P(f), normalized by multiplying it by 2B0, is shown for three values of α, namely 0, 0.5 and 1. For α = 0.5 or 1, the rolloff characteristic of P(f) cuts off gradually compared with the ideal low-pass filter.
The time response p(t) is the inverse Fourier transform of P(f):
p(t) = sinc(2B0 t) cos(2παB0 t) / (1 − 16α²B0²t²)
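The zero-ISI property of the raised-cosine pulse p(t) given above can be verified numerically. The short Python sketch below evaluates p(t) at multiples of the bit duration and shows it is 1 at t = 0 and 0 at every other multiple of Tb; the function name and the chosen Tb and α are illustrative assumptions only.

```python
import numpy as np

def raised_cosine(t, Tb, alpha):
    """p(t) = sinc(2*B0*t) * cos(2*pi*alpha*B0*t) / (1 - 16*alpha^2*B0^2*t^2), B0 = 1/(2*Tb)."""
    B0 = 1.0 / (2.0 * Tb)
    denom = 1.0 - (4.0 * alpha * B0 * t) ** 2
    safe = np.where(np.abs(denom) < 1e-12, 1.0, denom)          # avoid division by zero
    cos_part = np.where(np.abs(denom) < 1e-12,
                        np.pi / 4.0,                             # limiting value at |t| = Tb/(2*alpha)
                        np.cos(2 * np.pi * alpha * B0 * t) / safe)
    return np.sinc(2 * B0 * t) * cos_part                        # np.sinc(x) = sin(pi*x)/(pi*x)

Tb, alpha = 1.0, 0.5
k = np.arange(-4, 5)
print(np.round(raised_cosine(k * Tb, Tb, alpha), 6))             # 1 at t = 0, 0 at other multiples of Tb
```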
2. Explain correlative coding (duobinary signaling) with and without precoding.

Correlative coding is a practical means of achieving the theoretical maximum signaling rate of 2B0 bits per second in a bandwidth of B0 Hz using realizable and perturbation-tolerant filters.

Duobinary Signaling:
Duo means doubling the transmission capacity of a straight binary system. The binary sequence {bk} is first converted into a two-level sequence {ak} (±1), and the duobinary coder adds to the present digit the value of the previous digit:
ck = ak + ak−1
The overall transfer function of the delay-and-add filter in cascade with the channel Hc(f) is
H(f) = Hc(f)[1 + exp(−j2πfTb)]
     = Hc(f)[exp(jπfTb) + exp(−jπfTb)] exp(−jπfTb)
     = 2 Hc(f) cos(πfTb) exp(−jπfTb)
For an ideal channel of bandwidth B0 = Rb/2,
Hc(f) = 1 for |f| ≤ Rb/2
      = 0 otherwise
The overall frequency response is therefore of the form of a half-cycle cosine function:
H(f) = 2 cos(πfTb) exp(−jπfTb) for |f| ≤ Rb/2
     = 0 otherwise
The original binary sequence can be detected from {ck} by subtracting a stored estimate of the previous symbol: if the previous estimate corresponds to a correct decision, then the current estimate will also be correct. The technique of using a stored estimate of the previous symbol is called decision feedback.
A drawback of this detection process is that once errors are made, they tend to propagate. This is due to the fact that a decision on the current binary digit bk depends on the correctness of the decision made on the previous binary digit bk−1.
Error propagation can be avoided by using precoding before the duobinary coding. The precoder converts the input binary sequence {bk} into another binary sequence {ak} by modulo-2 addition:
ak = bk ⊕ ak−1
The resulting precoder output {ak} is next applied to the duobinary coder, thereby producing the sequence {ck}, which is related to {ak} as follows:
ck = ak + ak−1
The precoding is a nonlinear operation.
Assume that symbol 1 at the precoder output is represented by +1 volt and symbol 0 by −1 volt. Then
ck = ±2 volts if bk is symbol 0
   = 0 volts  if bk is symbol 1
The decision rule for detecting the original input sequence {bk} from {ck} is:
if |ck| > 1 volt, decide bk = 0; if |ck| < 1 volt, decide bk = 1.
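The precoding and decision rule above can be checked with a few lines of Python. This is a purely symbol-level sketch (no pulse shaping or noise); the reference bit value and sequence length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
b = rng.integers(0, 2, 20)                       # binary input sequence {b_k}

a_prev, c = 1, []                                # reference precoder bit (arbitrary)
for bk in b:
    ak = bk ^ a_prev                             # precoder: a_k = b_k XOR a_{k-1}
    c.append((2 * ak - 1) + (2 * a_prev - 1))    # duobinary coder output in volts (0 -> -1 V, 1 -> +1 V)
    a_prev = ak

c = np.array(c)
b_hat = (np.abs(c) < 1).astype(int)              # decision rule: |c_k| < 1 V -> b_k = 1
print("input :", b)
print("output:", b_hat)
print("decoded without error:", bool(np.array_equal(b, b_hat)))
```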
Generalized form of correlative coding:
The generalized correlative coding scheme involves the use of a tapped-delay-line filter with tap weights w0, w1, ..., wN−1. The correlative sample ck is obtained as a superposition of N successive input sample values bk:
ck = Σ from n=0 to N−1 of wn bk−n
3. Explain adaptive equalization.

As prechannel equalization requires a feedback channel, adaptive equalization at the receiving side is considered.
This equalization can be achieved before data transmission by training the filter with a suitable training sequence transmitted through the channel, so as to adjust the filter parameters to optimal values.
The adaptive equalizer consists of a tapped-delay-line filter with 100 taps or more, and its coefficients are updated according to the LMS algorithm.
The adjustments to the filter coefficients are made in a step-by-step fashion synchronized with the incoming data. During training, a synchronized version of the known training signal is generated in the receiver and applied to the adaptive equalizer as the desired response. The training sequence may be a pseudo-noise (PN) sequence, and the length of the training sequence may be equal to or greater than the length of the adaptive equalizer.
When the training period is completed, the adaptive equalizer is switched to the decision-directed mode.
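As a rough illustration of the training mode described above, the Python sketch below adapts a short LMS tapped-delay-line equalizer on a known pseudo-random training sequence passed through a mild ISI channel. The channel taps, number of equalizer taps, decision delay and step size are illustrative assumptions, not values prescribed in the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
train = rng.choice([-1.0, 1.0], size=2000)                 # known training symbols
channel = np.array([0.1, 1.0, 0.25])                       # channel impulse response (mild ISI)
received = np.convolve(train, channel, mode="full")[1:1 + len(train)]
received += 0.01 * rng.standard_normal(len(train))         # additive noise

n_taps, mu = 11, 0.01
w = np.zeros(n_taps)
delay = n_taps // 2                                        # decision delay through the equalizer
for n in range(n_taps, len(train) - delay):
    x_vec = received[n - n_taps:n][::-1]                   # equalizer input vector
    e = train[n - delay] - w @ x_vec                       # error against the known training symbol
    w += mu * e * x_vec                                    # LMS tap update

# decision-directed check on the tail of the record
out = np.array([np.sign(w @ received[n - n_taps:n][::-1])
                for n in range(len(train) - 200, len(train) - delay)])
ref = train[len(train) - 200 - delay:len(train) - 2 * delay]
print("symbol errors after training:", int(np.sum(out != ref)))
```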
4. a) Determine the power spectral density of NRZ polar and unipolar data formats. [NOV/DEC 2015, APRIL/MAY 2017]

Unipolar format (on-off signaling):
Symbol 1 is represented by transmitting a pulse, whereas symbol 0 is represented by switching off the pulse. When the pulse occupies the full duration of a symbol the unipolar format is said to be of the non-return-to-zero (NRZ) type. When it occupies a fraction (usually one half) of the symbol duration it is said to be of the return-to-zero (RZ) type.

Polar format:
A positive pulse is transmitted for symbol 1 and a negative pulse for symbol 0. It can be of the NRZ or RZ type. A polar waveform has no dc component provided that 0s and 1s in the input data occur in equal proportion.
In general, a discrete PAM signal can be written as
X(t) = Σ over k of Ak v(t − kT) …..(1)
where v(t) is the basic pulse and T is the symbol duration, related to the bit duration by T = Tb log2 M; correspondingly one baud equals log2 M bits per second.
The source is characterized by the ensemble-averaged autocorrelation function
RA(n) = E[Ak Ak−n]
where E is the expectation operator. The power spectral density of the discrete PAM signal X(t) defined in equation (1) is given by
Sx(f) = (|V(f)|²/T) Σ over n of RA(n) exp(−j2πnfT) …..(2)
where V(f) is the Fourier transform of the basic pulse v(t).
i) NRZ Unipolar format:
The 0s and 1s of a random binary sequence occur with equal probability, so
P(Ak = 0) = P(Ak = a) = 1/2
Hence for n = 0,
E[Ak²] = 0²·P(Ak = 0) + a²·P(Ak = a) = a²/2
Consider next the product Ak Ak−n for n ≠ 0. This product has four possible values, namely 0, 0, 0 and a². Assuming that successive symbols in the binary sequence are statistically independent, these four values occur with a probability of 1/4 each. Hence we may write
RA(n) = a²/2 for n = 0
      = a²/4 for n ≠ 0 ……(3)
For the basic pulse v(t) we have a rectangular pulse of unit amplitude and duration Tb; hence the Fourier transform of v(t) equals
V(f) = Tb sinc(fTb) ……(4)
The use of equations (3) and (4) in (2), with T = Tb, yields the following result for the power spectral density of the NRZ unipolar format:
Sx(f) = (a²Tb/4) sinc²(fTb) [1 + Σ over n of exp(−j2πnfTb)] ..(5)
We next use Poisson's formula written in the form
Σ over n of exp(−j2πnfTb) = (1/Tb) Σ over n of δ(f − n/Tb) …(6)
where δ(f) denotes a Dirac delta function. Substituting equation (6) in (5) and recognizing that the sinc function sinc(fTb) has nulls at f = ±1/Tb, ±2/Tb, ..., we may simplify the expression for the power spectral density Sx(f) as
Sx(f) = (a²Tb/4) sinc²(fTb) + (a²/4) δ(f) ….(7)
The presence of the Dirac delta function δ(f) accounts for one half of the power contained in the unipolar waveform. Curve (a) shows a normalized plot of equation (7). Specifically, the power spectral density Sx(f) is normalized with respect to a²Tb, and f is normalized with respect to the bit rate 1/Tb. The power of the NRZ unipolar format lies mainly inside the main lobe of the sinc-shaped curve, which extends up to the bit rate 1/Tb.
ii) NRZ Polar format:
Consider a polar format of the NRZ type for which the binary data consists of independent and equally likely symbols, so that
RA(n) = a² for n = 0
      = 0  for n ≠ 0 …..(8)
The basic pulse v(t) for this format is the same as that for the unipolar format. Hence the use of equations (4) and (8) in equation (2), with the symbol period T = Tb, yields the power spectral density of the NRZ polar format as
Sx(f) = a²Tb sinc²(fTb)
The normalized form of this equation is plotted in curve (b). The power of the NRZ polar format lies inside the main lobe of the sinc-shaped curve, which extends up to the bit rate 1/Tb.
... contain long strings of 0s and 1s. This property does not hold for the unipolar and polar formats.

iii) NRZ Bipolar format:
The bipolar format has three levels: a, 0 and −a. Assuming that 1s and 0s occur with equal probability, the probabilities of the three levels are as follows:
P(Ak = a) = 1/4
P(Ak = 0) = 1/2
P(Ak = −a) = 1/4
Hence for n = 0 we may write
E[Ak²] = a²·P(Ak = a) + 0·P(Ak = 0) + a²·P(Ak = −a) = a²/2
For n = 1, the dibit represented by the sequence (Ak−1, Ak) can assume only four possible forms: (0,0), (0,1), (1,0) and (1,1). The respective values of the product Ak Ak−1 are 0, 0, 0 and −a², the last value resulting from the fact that successive 1s alternate in polarity. Each of the dibits occurs with probability 1/4, on the assumption that successive symbols in the binary sequence occur with equal probability. Hence we may write
E[Ak Ak−1] = −a²/4
For n > 1 we find that E[Ak Ak−n] = 0. Accordingly, for the NRZ bipolar format we have
RA(n) = a²/2   for n = 0
      = −a²/4  for |n| = 1
      = 0      for |n| > 1 …….(9)
where in the second line on the right side we have made note of the fact that RA(−n) = RA(n).
The basic pulse v(t) for the NRZ bipolar format has the Fourier transform V(f) = Tb sinc(fTb); hence, substituting the corresponding equations with T = Tb, the power spectral density of the NRZ bipolar format is given by
Sx(f) = (|V(f)|²/Tb)[RA(0) + 2RA(1) cos(2πfTb)]
      = a²Tb sinc²(fTb) sin²(πfTb)
The normalized form of this equation is plotted in curve (c). The power lies inside a bandwidth equal to the bit rate 1/Tb, and the spectral content of the NRZ bipolar format is relatively small around zero frequency.
iv) Manchester format (biphase baseband signaling):
Symbol 1 is represented by transmitting a positive pulse for one half of the symbol duration followed by a negative pulse for the remaining half of the symbol duration; for symbol 0 these two pulses are transmitted in reverse order.
The autocorrelation function RA(n) for the Manchester format is the same as that for the NRZ polar format. The basic pulse v(t) for the Manchester format consists of a doublet pulse of unit amplitude and total duration Tb. Hence the Fourier transform of the pulse equals
V(f) = jTb sinc(fTb/2) sin(πfTb/2)
Thus, substituting the corresponding equations, we find that the power spectral density is
Sx(f) = a²Tb sinc²(fTb/2) sin²(πfTb/2)
The normalized form of this equation is plotted in curve (d). The power lies inside a bandwidth equal to 2/Tb, i.e. twice the bit rate.
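For a side-by-side feel of the four results derived above, the sketch below evaluates the normalized PSDs (divided by a²Tb, with f expressed as a multiple of the bit rate 1/Tb, and omitting the delta-function term of the unipolar format) at a few frequencies. It is an illustrative evaluation of the formulas only.

```python
import numpy as np

f = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # f * Tb (frequency normalized to the bit rate)
unipolar  = 0.25 * np.sinc(f) ** 2                    # continuous part only; delta at f = 0 not shown
polar     = np.sinc(f) ** 2
bipolar   = np.sinc(f) ** 2 * np.sin(np.pi * f) ** 2
manchester = np.sinc(f / 2) ** 2 * np.sin(np.pi * f / 2) ** 2

for name, s in [("unipolar", unipolar), ("polar", polar),
                ("bipolar", bipolar), ("manchester", manchester)]:
    print(f"{name:10s}", np.round(s, 3))              # note: bipolar and Manchester are zero at f = 0
```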
5. Write short notes on the eye pattern and intersymbol interference. [NOV/DEC 2015, 2016]

Eye pattern: Eye patterns can be observed using an oscilloscope. The received wave is applied to the vertical deflection plates of an oscilloscope and a sawtooth wave at a rate equal to the transmitted symbol rate is applied to the horizontal deflection plates; the resulting display is called the eye pattern, as it resembles the human eye. The interior region of the eye pattern is called the eye opening.
The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI. It is apparent that the preferred time for sampling is the instant of time at which the eye is open widest.
The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.
The height of the eye opening at a specified sampling time is a measure of the margin over channel noise.
When the effect of ISI is severe, traces from the upper portion of the eye pattern cross traces from the lower portion, with the result that the eye is completely closed. In such a situation it is impossible to avoid errors due to the combined presence of ISI and noise in the system, and a solution has to be found to correct for them.
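An eye diagram of the kind described above can be generated numerically by folding a pulse-shaped random waveform into two-symbol segments and overlaying them. The Python sketch below builds such segments with a raised-cosine pulse; the samples-per-symbol, roll-off and pulse span are illustrative assumptions, and the matplotlib plotting call is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)
sps, n_sym, alpha = 16, 400, 0.5                     # samples per symbol, number of symbols, roll-off
t = np.arange(-8 * sps, 8 * sps + 1) / sps           # pulse support of +/- 8 symbol intervals
denom = 1 - (2 * alpha * t) ** 2
safe = np.where(np.abs(denom) < 1e-12, 1.0, denom)
p = np.sinc(t) * np.where(np.abs(denom) < 1e-12, np.pi / 4, np.cos(np.pi * alpha * t) / safe)

symbols = rng.choice([-1.0, 1.0], n_sym)
upsampled = np.zeros(n_sym * sps)
upsampled[::sps] = symbols
waveform = np.convolve(upsampled, p, mode="same")    # pulse-shaped random binary waveform

span = 2 * sps                                       # two symbol intervals per trace
traces = np.array([waveform[k * sps:k * sps + span]
                   for k in range((len(waveform) - span) // sps)])
# plotting every row of `traces` on top of each other gives the eye pattern;
# the column at index sps corresponds to the nominal sampling instant.
print("worst-case |sample| at the sampling instant (eye half-opening):",
      round(float(np.min(np.abs(traces[:, sps]))), 3))
```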
Intersymbol interference:
When the dispersed pulses originate from different symbol intervals and the channel bandwidth is close to the signal bandwidth, the spreading of the signal exceeds a symbol duration and causes the pulses to overlap or interfere with each other. This is known as intersymbol interference (ISI).

Baseband binary PAM system:
The incoming binary sequence {bk} consists of symbols 1 and 0, each of duration Tb.
The pulse amplitude modulator converts this binary sequence into a new sequence of short pulses whose amplitude ak is represented in polar form:
ak = +1 if symbol bk is 1
   = −1 if symbol bk is 0
This sequence is applied to the transmit filter of impulse response g(t). The transmitted signal is
s(t) = Σ over k of ak g(t − kTb)
The transmitted signal is modified when it is transmitted through the channel with impulse response h(t). In addition, the channel adds random noise to the signal at the receiver input. This signal is then passed through the receive filter. The resultant signal is sampled synchronously with the transmitter; the sampling instants are determined by a clock or timing signal.
The sequence of samples so obtained is used to reconstruct the original data sequence by means of a decision device. Each sample is compared with a threshold value.
If the sample value is greater than the threshold, the decision is made in favor of symbol 1. If the sample value is less than the threshold, the decision is made in favor of symbol 0. If the sample value is exactly equal to the threshold, the receiver makes a random guess about which symbol was transmitted. The receive filter output is
y(t) = µ Σ over k of ak p(t − kTb) + n(t)
where µ is a scaling factor and p(t) is the pulse to be defined below.
The delay t0 due to transmission through the system should be included with the pulse, but for simplification we take t0 to be zero. The scaled pulse µp(t) is obtained by the double convolution of the impulse response g(t) of the transmit filter, the impulse response h(t) of the channel, and the impulse response c(t) of the receive filter. Equivalently, in the frequency domain,
µP(f) = G(f) · H(f) · C(f)
Here n(t) is the noise produced at the output of the receive filter due to the channel noise w(t), where w(t) is white Gaussian noise with zero mean.
Sampling the receive filter output at time ti = iTb gives
y(ti) = µ ai + µ Σ over k≠i of ak p(iTb − kTb) + n(ti)
The first term is produced by the i-th transmitted bit. The second term represents the residual effect of all other transmitted bits on the decoding of the i-th bit; this residual effect is called intersymbol interference. The last term n(ti) represents the noise sample at time ti. In the absence of noise and ISI,
y(ti) = µ ai
UNIT IV
PART A

2. Distinguish BPSK, QAM and QPSK techniques. Write the expression for the signal set of QPSK. (MAY/JUNE 2016), (NOV/DEC 2015), (APRIL/MAY 2017)
BPSK: the phase of the carrier is shifted between two values according to the input bit sequence (1, 0).
QPSK: the phase of the carrier takes on one of four equally spaced values; the signal set of QPSK is
si(t) = √(2E/T) cos[2πfc t + (2i − 1)π/4],  i = 1, 2, 3, 4,  0 ≤ t ≤ T
QAM: both the amplitude and the phase of the carrier are varied, so that the message points form a two-dimensional rectangular constellation.
74
n
transmission bandwidth, a QPSK wave carries twice as many bits of information as the
g.i
corresponding binary PSKwave.
rin
5. List out the applications ofQAM.
Stereo broadcasting of AMsignals
Encoding color signals in analog TV broadcastingsystem.
Used inmodems ee
gin
Used in digital communicationsystem.
6. Give the two basic operations of a DPSK transmitter.
Differential encoding of the input binary wave, and phase-shift keying of the carrier.

7. Why is synchronization required and what are the three broad types of synchronization?
The signals from various sources are transmitted on a single channel by multiplexing. This requires synchronization between transmitter and receiver. Special synchronization bits are added to the transmitted signal for this purpose. Synchronization is also required for detectors to recover the digital data properly.
The three broad types are carrier synchronization, symbol (bit) synchronization and frame synchronization.
8. Define BER. [MAY 14]
The signal gets contaminated by several undesired waveforms in the channel. The net effect of all these degradations is errors in detection. The performance measure of this error is called the bit error rate (BER).

9. How can the BER of a system be improved? [NOV 12]
Increasing the transmitted signal power
Improving frequency filtering techniques
Proper modulation and demodulation techniques
Coding and decoding methods

10. Draw the constellation diagram of QAM. [NOV 10, MAY 13, NOV 14]
PART B

1. Explain the transmitter, receiver and signal space diagram of BPSK. [MAY/JUNE 2016, APRIL/MAY 2017]

In a coherent binary PSK system, the pair of signals s1(t) and s2(t) used to represent binary symbols 1 and 0 are defined by
s1(t) = √(2Eb/Tb) cos(2πfc t)                                   (1)
s2(t) = √(2Eb/Tb) cos(2πfc t + π) = −√(2Eb/Tb) cos(2πfc t)      (2)
where 0 ≤ t < Tb and Eb is the transmitted signal energy per bit.
A pair of sinusoidal waves that differ only in a relative phase shift of 180 degrees, as defined above, are referred to as antipodal signals.
From equations (1) and (2) there is only one basis function of unit energy, namely
Փ1(t) = √(2/Tb) cos(2πfc t),  0 ≤ t < Tb                        (3)
so that
s1(t) = √Eb Փ1(t),  0 ≤ t < Tb                                  (4)
and
s2(t) = −√Eb Փ1(t),  0 ≤ t < Tb                                 (5)
A coherent binary PSK system therefore has a signal space that is one-dimensional (N = 1), with two message points (M = 2), as shown in figure 1. The coordinates of the message points are
s11 = ∫ s1(t) Փ1(t) dt = +√Eb                                   (6)
and
s21 = ∫ s2(t) Փ1(t) dt = −√Eb                                   (7)
The message point corresponding to s1(t) is located at s11 = +√Eb and the message point corresponding to s2(t) is located at s21 = −√Eb.
The signal space of Figure 1 is partitioned into two regions:
1. the set of points closest to the message point at +√Eb (region Z1);
2. the set of points closest to the message point at −√Eb (region Z2).
Two kinds of error are possible. Signal s2(t) is transmitted but the noise is such that the received signal point falls inside region Z1, and so the receiver decides in favor of signal s1(t). Alternatively, signal s1(t) is transmitted but the noise is such that the received signal point falls inside region Z2, and the receiver decides in favor of s2(t).
To calculate the probability of error, the decision region associated with symbol 1, or signal s1(t), is given by
Z1: 0 < x1 < ∞
where x1 is the observation scalar
x1 = ∫ from 0 to Tb of x(t) Փ1(t) dt                            (8)
The conditional probability density function of X1, given that symbol 0 (i.e. signal s2(t)) was transmitted, is
f_X1(x1|0) = (1/√(πN0)) exp[−(x1 + √Eb)²/N0]                    (9)
The conditional probability of the receiver deciding in favor of symbol 1, given that symbol 0 was transmitted, is therefore
p10 = ∫ from 0 to ∞ of f_X1(x1|0) dx1                           (10)
Putting
z = (x1 + √Eb)/√N0                                              (11)
we obtain
p10 = (1/√π) ∫ from √(Eb/N0) to ∞ of exp(−z²) dz
    = (1/2) erfc(√(Eb/N0))                                      (12)
Similarly, the probability of the receiver deciding in favor of symbol 0, given that symbol 1 was transmitted, has the same value as in (12). Thus, averaging the conditional probabilities p10 and p01, the average probability of symbol error for coherent binary PSK equals
Pe = (1/2) erfc(√(Eb/N0))                                       (13)

Binary PSK Transmitter:
The carrier and the timing pulses used to generate the binary wave are usually extracted from a common master clock. The desired PSK wave is obtained at the modulator output.

Figure 2 Binary PSK transmitter
Binary PSK Receiver:
To detect the original binary sequence of 1s and 0s, the noisy PSK wave x(t) is applied to a correlator, which is also supplied with a locally generated coherent reference signal Փ1(t), as in figure 3. The correlator output is compared with a threshold of zero volts: if it exceeds zero the receiver decides in favor of symbol 1, otherwise in favor of symbol 0.
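Equation (13) is straightforward to verify with a baseband simulation of the correlator output x1 = ±√Eb plus Gaussian noise of variance N0/2. The Python sketch below does this; the chosen Eb/N0 points and bit count are arbitrary illustrative values.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(5)
Eb, n_bits = 1.0, 200_000
for ebn0_db in (2, 4, 6, 8):
    N0 = Eb / (10 ** (ebn0_db / 10))
    bits = rng.integers(0, 2, n_bits)
    x1 = np.sqrt(Eb) * (2 * bits - 1) + rng.normal(0, np.sqrt(N0 / 2), n_bits)  # correlator output
    ber = np.mean((x1 > 0).astype(int) != bits)                                 # threshold at 0 V
    print(f"Eb/N0 = {ebn0_db} dB: simulated Pe = {ber:.4f}, "
          f"(1/2)erfc(sqrt(Eb/N0)) = {0.5 * erfc(sqrt(Eb / N0)):.4f}")
```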
2. Explain the transmitter, receiver and signal space diagram of coherent binary FSK.

In a binary FSK system, symbols 1 and 0 are transmitted as one of two sinusoidal waves that differ in frequency:
si(t) = √(2Eb/Tb) cos(2πfi t),  0 ≤ t ≤ Tb
      = 0, elsewhere                                            (1)
where i = 1, 2 and Eb is the transmitted signal energy per bit; the transmitted frequency equals
fi = (nc + i)/Tb for some fixed integer nc and i = 1, 2         (2)
The signals s1(t) and s2(t) are orthogonal, and the corresponding orthonormal basis functions are
Փi(t) = √(2/Tb) cos(2πfi t),  0 ≤ t ≤ Tb
      = 0, elsewhere                                            (3)
where i = 1, 2. Correspondingly, the coefficient sij for i = 1, 2 and j = 1, 2 is defined by
sij = ∫ from 0 to Tb of si(t) Փj(t) dt
    = √Eb for i = j
    = 0   for i ≠ j                                             (4)
Thus a coherent binary FSK system has a signal space that is two-dimensional (N = 2) with two message points (M = 2), as in figure 1.
Figure 1 Signal space diagram for coherent binary FSK system

The two message points are defined by the signal vectors
s1 = [√Eb, 0]^T                                                 (5)
and
s2 = [0, √Eb]^T                                                 (6)
The observation vector x has two elements, x1 and x2, defined respectively by
x1 = ∫ from 0 to Tb of x(t) Փ1(t) dt                            (7)
x2 = ∫ from 0 to Tb of x(t) Փ2(t) dt                            (8)
After applying the decision rule, the observation space is partitioned into two decision regions, labeled Z1 and Z2, as shown in figure 1.
Accordingly, the receiver decides in favor of symbol 1 if the received signal point represented by the observation vector x falls inside region Z1; this occurs when x1 > x2. If x1 < x2, the received signal point falls inside region Z2 and the receiver decides in favor of symbol 0. The decision boundary separating region Z1 from region Z2 is defined by x1 = x2.
Define a new Gaussian random variable L whose sample value l is equal to the difference between x1 and x2:
l = x1 − x2                                                     (9)
The mean value of the random variable L depends on which binary symbol was transmitted.
Given that symbol 1 was transmitted, the Gaussian random variables X1 and X2, whose sample values are denoted by x1 and x2, have mean values equal to √Eb and zero, respectively. The conditional mean of L given that symbol 1 was transmitted is therefore
E[L|1] = √Eb − 0 = √Eb                                          (10)
If symbol 0 was transmitted, the random variables X1 and X2 have mean values equal to zero and √Eb, respectively. Correspondingly, the conditional mean of L given that symbol 0 was transmitted is
E[L|0] = 0 − √Eb = −√Eb                                         (11)
The variance of L is independent of which symbol was transmitted:
Var[L] = Var[X1] + Var[X2]
       = N0/2 + N0/2 = N0                                       (12)
Suppose symbol 0 was transmitted. An error occurs if l > 0, so the conditional probability of error is
p01 = P(l > 0 | symbol 0 was sent)                              (13)
    = (1/√(2πN0)) ∫ from 0 to ∞ of exp[−(l + √Eb)²/2N0] dl      (14)
Put
z = (l + √Eb)/√(2N0)                                            (15)
Then, changing the variable of integration from l to z, we may rewrite (14) as
p01 = (1/√π) ∫ from √(Eb/2N0) to ∞ of exp(−z²) dz
    = (1/2) erfc(√(Eb/2N0))                                     (16)
Similarly p10 has the same value, so the average probability of symbol error for coherent binary FSK is
Pe = (1/2) erfc(√(Eb/2N0))                                      (17)
In a binary FSK system we therefore have to double the bit energy-to-noise density ratio Eb/N0 to maintain the same bit error rate as in a coherent binary PSK system.
BFSK Transmitter
To transmit symbol 1, the oscillator in the upper channel with frequency f1 is switched on while the oscillator in the lower channel is switched off, so that frequency f1 is transmitted. To transmit symbol 0, the oscillator in the upper channel is switched off while the oscillator in the lower channel is switched on, with the result that frequency f2 is transmitted.
The two frequencies f1 and f2 are chosen to equal integer multiples of the bit rate 1/Tb, as in equation (2).
In the transmitter we assume that the two oscillators are synchronized, so that their outputs satisfy the requirements of the two orthonormal basis functions Փ1(t) and Փ2(t), as in equation (3).
To detect the original binary sequence given the noisy received wave x(t), the receiver shown in figure 3 is used.

BFSK Receiver
It consists of two correlators with a common input, which are supplied with locally generated coherent reference signals Փ1(t) and Փ2(t).
The correlator outputs are then subtracted, one from the other, and the resulting difference l is compared with a threshold of zero volts.
If l > 0 the receiver decides in favor of 1. If l < 0 it decides in favor of 0.
3. Explain the transmitter, receiver and signal space diagram of QPSK. [NOV/DEC 2015, 2016]

As with binary PSK, QPSK is characterized by the fact that the information carried by the transmitted wave is contained in the phase.
In quadriphase-shift keying (QPSK) the phase of the carrier takes on one of four equally spaced values:
si(t) = √(2E/T) cos[2πfc t + (2i − 1)π/4],  0 ≤ t ≤ T
      = 0, elsewhere                                            (1)
where i = 1, 2, 3, 4 and E is the transmitted signal energy per symbol, T is the symbol duration, and the carrier frequency fc equals nc/T for some fixed integer nc.
Each possible value of the phase corresponds to a unique pair of bits called a dibit. For example, the foregoing set of phase values may be used to represent the Gray-encoded set of dibits: 10, 00, 01 and 11.
Using a trigonometric identity, we may rewrite (1) in the equivalent form
si(t) = √(2E/T) cos[(2i − 1)π/4] cos(2πfc t) − √(2E/T) sin[(2i − 1)π/4] sin(2πfc t),  0 ≤ t ≤ T
      = 0, elsewhere                                            (2)
where i = 1, 2, 3, 4. Based on this representation, the following observations are made:
There are only two orthonormal basis functions, Փ1(t) and Փ2(t), contained in the expansion of si(t). The appropriate forms for Փ1(t) and Փ2(t) are
Փ1(t) = √(2/T) cos(2πfc t),  0 ≤ t ≤ T                          (3)
Փ2(t) = √(2/T) sin(2πfc t),  0 ≤ t ≤ T                          (4)
There are four message points, and the associated signal vectors are defined by
si = [√E cos((2i − 1)π/4),  −√E sin((2i − 1)π/4)]^T,  i = 1, 2, 3, 4   (5)
Thus a QPSK signal has a two-dimensional signal constellation (N = 2) and four message points (M = 4), as illustrated in Figure 1.
n
g.i
rin
Figure 1 Signal space diagram for coherent QPSK system
ee
To realize the decision rule for the detection of the transmitted data sequence
gin
the signal space is partitioned into fourregions
1. The set of points closest to the message point associated with signal vector
En
2. The set of points closest to the message point associated with signal vector
arn
3. The set of points closest to the message point associated with signal vector
Le
4. The set of points closest to the message point associated with signal vector
w.
x(t)=
i=1,2,3,4 (6)
where is the sample function of a white Gaussian noise process of zero
mean and power spectral density . The observation vector x of a coherent
QPSK receiver has two elements and that are definedby
88
x1=
= (7)
and x2=
= (8)
Where i=1,2,3,4.
Thus x1 and x2 are sample values of independent Gaussian random variables
n
with mean values equal to and =
g.i
respectively and with common varianceequalto .
rin
The decisionruleis toguess)= was transmitted if the
receivedsignalpoint associated with the observation vector x fallsinsideregion
eeguess
gin
was transmitted if the received signal point fallsinsideregion and soon.
The probability of correct decision equals the conditional probability of the
En
Both and are Gaussian random variables with a conditional mean equal
w.
(9)
Where the first integral on the right side is the conditional probability of the
event x1>0 and the second integral is the conditional probability of the event
x2>0 both given that signal wastransmitted.
Let
89
(10)
(11)
n
(12)
g.i
Accordingly
rin
=
= ee (13)
gin
The average probability of symbol error for coherent QPSK istherefore
=1-
En
= (14)
arn
In the region where (E/ ) we may ignore the second term on the right
side of equation (14) and so approximate the formula for the average probability
Le
(15)
ww
In a QPSK system we note that there are two bits per symbol. This means that
the transmitted signal energy per symbol is twice the signal energy per bit that
is
E=2 (16)
Thus expressing the average probability of symbol error in terms of the ratio
we maywrite
90
(17)
QPSK transmitter.
n
g.i
rin
ee
Figure 2 Block diagram of QPSK transmitter
gin
The input binary sequence is represented in polar form with symbols 1 and
andՓ2(t) .
The result is a pair of binary PSK waves which may be detected independently
due to the orthogonality of Փ1(t) andՓ2(t).
Finally the two binary PSK waves are added to produce the desired QPSK
wave. Note that the symbol duration T of a QPSK wave is twice as long as the
bit duration of the input binarywave.
91
That is for a given bit rate a QPSK wave requires half the transmission
bandwidth of the corresponding binary PSK wave. Equivalently for a given
transmission bandwidth a QPSK wave carries twice as many bits of information
as the corresponding binary PSKwave.
QPSK Receiver
The QPSK receiver consists of a pair of correlators with a common input and
supplied with a locally generated pair of coherent reference signals Փ1(t) and
Փ2(t) as shown in figure3.
n
g.i
The correlator outputs and are each compared with a threshold of zero
volts.
rin
If a decision is made in favor of symbol 1 for the upper or in phase
Similarlyif ee
a decision is made in favor of symbol 1 for
gin
thelowerorquadrature channel outputbutif a decision is made in favor of
symbol0
En
Finally these two binary sequences at the in-phase and quadrature channel
outputs are combined in a multiplexer to reproduce the original binary sequence
arn
n
the
g.i
Phase of the current signal waveform unchanged.
The receiver is equipped with a storage capability so that it can measure the
rin
relative phase difference between the waveforms received during two
successive bitintervals.
ee
DPSK is another non coherent orthogonal modulation. When it is considered
gin
over two bit intervals. Suppose the transmitted DPSK signalequals
Let S1(t) denote the transmitted DPSK for 0≤ t ≤ 2T bfor the case when we have
binary symbol 1 at the transmitter input for the second part of this interval
namely Tb ≤ t ≤ 2Tb. The transmission of leaves the carrier phase unchanged
Le
S1(t) =
ww
Let S2(t) denote the transmitted DPSK signal for 0≤ t ≤ 2Tb for the case when
we have binary symbol 0 at the transmitter input for Tb ≤ t ≤ 2Tb . The
transmission of 0 advances the carrier by phase by 180 0 and so we define S2(t)
as
93
S2(t) =
Here the equations S1(t) and S2(t) are indeed orthogonal over the two bit
interval 0≤ t ≤ 2Tb. In other words DPSK is a special case of non-coherent
orthogonal modulationwith
T= 2Tb and E= 2Eb
n
We find that the average probability of error for DPSK is givenby
g.i
Pe= )
rin
The next issue is generation and demodulation of DPSK. The differential
encoding process at the transmitter input starts with an arbitrary first bit serving
ee
as reference and there after the differentially encoded sequence {d k}is
generated by using logicalequation
gin
Where bkis the input binary digit at time KT b and dk-1 is the previous value of
En
differentially encoded digit. The use of an over bar denotes logical inversion.
The following table illustrates logical operation involved in the use of logical
arn
equation, assuming that the reference bit added to the differentially encoded
sequence {dk}is as 1. The differentially encoded sequence {dk} thus generated
Le
is used to phase shift key a carrier with the phase angles 0 and πradians.
w.
ww
The block diagram of DPSK transmitter consists, in part, of a logic network and
a one bit delay element interconnected coded sequence {d k} with the logical
equation. This sequence is amplitude level shifted and then used to modulate a
carrier wave of frequency fc, thereby producing the desired DPSKwave.
n
g.i
Fig: Block diagram for DPSK Transmitter
rin
ee
gin
En
band pass filter centered at the carrier frequency fc. So as to limit the noise
power. The filtered output and a delayed version of it, with the delay equal to
Le
the bit duration Tbare applied to the correlator. The resulting correlator output is
proportional to the cosine of the difference between the carrier phase angles in
w.
the two correlator outputs. The correlator output is finally compared with
threshold of 0 volts and decision is thereby made in favor of symbol 0 or symbol
ww
1.
If correlator output is positive – The phase difference between the waveforms
received during the pertinent pair of bit intervals lies inside the range –π/2 to
π/2. A decision is made in favour of symbol1.
If correlator output is negative - The phase difference lies outside the range –
π/2 to π/2. A decision is made in favour of symbol0.
95
n
g.i
rin
ee
gin
points for M=16. The corresponding signal constellations for the in phase and
quadrature components of the amplitude phase modulated wave asshown,
Le
S1(t) =
Where E0is the energy of the signal with the lowest amplitude and a iand biare a
w.
pair of independent integers chosen in accordance with the location of the pertinent
ww
message point. The signal S1(t) consists of two phase quadrature carriers , each of
which is modulated by a set of discrete amplitude hence the name called quadrature
amplitude modulation.
ᶲ1(t) and
ᶲ2(t) =
96
Pe‟=( 1- )erfc (
n
)
g.i
Where L is the square root of M
The probability of symbol error for M-ary QAM is givenby
rin
Pe = 1 – Pc
= 1 – ( 1- Pe‟)2
ee Pe = 2 Pe‟
gin
Where it is assumed that Pe‟ is small compared to unity and we find the
probability of symbol error for M-ary QAM is given by
En
Pe = 2( 1- ) erfc( )
arn
The transmitted energy in M-ary QAM is variable in that its instantaneous value
depends on the particular symbol transmitted. It is logical to express Pe in terms of the
average value of the transmitted energy rather than Eo . Assuming that the L
Le
amplitude levels of the in phase or quadrature component are equally likely wehave
w.
Eav= 2
Where the multiplying factor 2 accounts for the equal combination made by in
ww
phase and quadrature components. The limits of the summation take account of the
symmetric nature of the pertinent amplitude levels around zero we get
Eav=
97
Pe = 2( 1- ) erfc( )
n
g.i
Fig: Block diagram of M-ary QAM Transmitter
The serial to parallel converter accepts a binary sequence at a bit rate Rb=1/Tb
rin
and produces two parallel binary sequences whose bit rates are Rb/2 each. The 2 to L
level converters where L= , generate polar L level signals in response to the
ee
respective in phase and quadrature channel inputs. Quadrature carrier multiplexing of
gin
the two polar L level signals so generated produces desired M-ary QAM signal.
En
arn
Le
w.
98
n
i) The sum of two code words belonging to the code is also acodeword.
g.i
ii) The all zero word is always acodeword.
iii) The minimum distance between two code words of a linear code is equal to
rin
the minimum weight of thecode.
2. What is meant by constrained length of convolutional encoder? (MAY/JUNE
2016) ee
gin
Constraint length is the number of shift over which the single message bit can
influence the encoder output. It is expressed in terms of message bits.
En
capacity C and a source generates information at a rate less than C then there
exists a coding technique such that the output of the source may be transmitted
over the channel with an arbitrarily low probability of symbol error,
Le
For binary symmetric channel if the code rate r is less than the channel capacity
w.
C it is possible to find the code with error free transmission. If the code rate r is
greater than the channel capacity it is not possible to find thecode.
ww
4. What is cyclic code and List the properties of cyclic codes. (NOV/DEC2015)
A linear code is cyclic if every cyclic shift of the code vector produces some other valid
code vector.
Linearity Property: the sum of two code word is also a codeword
Cyclic property: Any cyclic shift of a code word is also a codeword
5. What is hamming distance and Write itscondition
The hamming distance .between two code vectors is equal to the number of
Elements in which they differ. For example, let the two code words be,
99
n
6 Define code efficiency, code, block rate, Hamming weight and minimum
g.i
distance
Code Efficiency
rin
The code efficiency is the ratio of message bits in a block to the transmitted bits for
that block by the encoder i.e., Code efficiency= (k/n)
ee
k=message bits n=transmitted bits.
gin
Code:
In (n,k) block code, the channel coder accepts information of k-bits blocks, it adds n-k
En
redundant bits to form n-bit block. This n-bit block is called the code word.
Block rate:
arn
Minimum distance.
The minimum distance dmin of a linear block code is defined as the smallest
ww
n
9. Find the hamming distance between 101010 and 010101. If the minimum
g.i
hamming distance of a (n, k) linear block code is 3, what is its minimum
hamming weight? [NOV12]
rin
Hamming Distance Calculation:
Codeword1: 101010 and Codeword2: 010101
d(x,y):ee 6
gin
For Linear BlockCode
Minimum hamming distance = minimum hamming weight
En
dmin ≤ 2t + 1
t ≥ (dmin – 1)/2
ww
101
PART-B
1. Describe the steps involved in generation of linear block codes define
and explain the properties of syndrome.
Linear Block Codes
Consider (n,k) linear block codes
It is a systematic code
Since message and parity bits are separate
b0, b1, …………, m0, m1
n
bn-k-1 ,……,mk-1
g.i
Message Order
rin
m = [m0, m1 ,……,mk-1] 1* k
Parity bits
ee
b = [b0, b1, …………, bn-k-1] 1* n-k
gin
Code word
x = [x0, x1 ,……,xn-1] 1*n
En
Coefficient Matrix
arn
P= k*n-k
b = mP
Le
IdentityMatrix
w.
Ik = k*k
ww
Generator Matrix
x=
x=
x=m
x=mG
G= k*n
102
n
g.i
rin
ee
gin
En
arn
Le
w.
ww
103
H= n-k*n
To prove the use of parity check matrix
W.K.T
n
X=MG
g.i
rin
Syndrome Decoding
ee
gin
Y=x+e
Y – receivedvector
e – error pattern
En
arn
S= y
Important properties of syndrome
Le
Property 1:
The syndrome depends only on the error pattern and not on th e transmitted
w.
code word
ww
S= y
S= (x+e)
=x
= 0+e
S=e
Property2
All error pattern that differs at most by a code word have the same syndrome
104
ei =e+xi i= 0,1,2,…….
Multiplyby
=e +0
=e
Property3
The syndrome S is the sum of those columns of the matrix H corresponding to
n
the error locations.
g.i
H = [h0, h1 ,……,hn-1]
S=
rin
= [e1, e2 ,……,en]
Property 4
ee S=
gin
With syndrome decoding an (n,k)LBC can correct upto t errors per codeword,
provided n & k satisfy the hamming bound
En
arn
Where =
Hamming weight - It is defined as the number of non zero elements in the code
vector.
ww
Minimum distance (dmin) -The minimum distance of a linear block code is the
smallest hamming weight of the non- zero code vector
Errordetection- It
can detect S number oferrors
Errorcorrection - It
can correct t number oferrors
105
n
In order to achieve such a high level of performance, we may have to resort
g.i
to the use of channelcoding.
Aim
rin
It is used to increase the resistance of a digital communication system to
channelnoise.
Channel coding consistsof ee
gin
Mapping the incoming data sequence into a channel input sequence,and
Inverse mapping the channel output sequence into an output data sequence in
En
such a way that the overall effect of channel noise on the system isminimized.
The mapping operation is performed in the transmitter by means of an encoder,
arn
r = k/n
„r‟is less thanunity.
Statement: The channel coding theorem for a discrete memoryless channelis
stated in two parts asfollows.
1. Let a discrete memoryless source with an alphabet „ζ‟ have entropy H(ζ)
and produce symbols once every Ts seconds. Let a discrete memoryless
channel have capacity C and be used once every T seconds. Then,if
n
g.i
There exists a coding scheme for which the source output can be
transmitted over thechannel and be reconstructed with an arbitrarily
rin
small probability of error. The parameterC/Tc is called the criticalrate.
2. Conversely,if
ee
gin
it is not possible to transmit information over the channel and reconstruct
En
The channel coding theorem does not show us how to construct a good code.
Rather, that it tells us that if the condition is satisfied, then good codes do exist.
Le
the probability of error can be made arbitrarily low by the use of a suitable
encoding scheme.
But the ratio Tc/Ts equals the code rate of the encoder:
n
Condition can also be written as
g.i
r≤C
If r ≤ C , the code has low probability of error.
rin
ee
gin
En
arn
Le
w.
ww
108
3. For (6,3) systematic linear block code, the code word comprises I1 , I2, I3,
P1, P2, P3 where the three parity check bits P1, P2 and P3 are formed from the
information bits as follows:
P1 = I2
P2 = I3
P3 = I3
Find
i. The parity checkmatrix
n
ii. The generatormatrix
g.i
iii. All possible codewords.
rin
iv. Minimum weight and minimum distanceand
v. The error detecting and correcting capability of thecode.
ee
vi. If the received sequence is 10000. Calculate the syndrome and
decode thereceivedsequence. (16)
gin
[DEC 10]
Solution:
En
Given: n=6K
=3
Le
w.
ww
109
(ii) GeneratorMatrix:
n
(iii)All Possible Codewords:
g.i
b = mP
where b Parity bits
rin
mmessage bits
No ofParitybits = n – k = 6 – 3 =3
ee
No of message bits=k =3
gin
En
b1= m1 m2
b2= m1 m3
arn
b3= m2 m3
Le
w.
ww
n
g.i
It can detect upto 2 errors.
Error Correction:
rin
ee
gin
En
(vi) Syndrome:
Le
SYNDROME TABLE:
SYNDROME ERROR PATTERN
000 000000
110
110 100000
101 010000
011 001000
100 000100
010 000010
001 000001
n
g.i
rin
The correct codeword is 111000
ee
gin
4. Consider a (7, 4) linear block code whose parity check matrix is givenby
En
Generator Matrix:
Given
ww
H=
H=
PT
111
P=
G=
Given K=4
G=
G=
n
g.i
b. Error Detection:
rin
To find dmin, we have to write the table for the codewords.
b= mP
No. ofparity bits = n-k =7-4 =3
ee
gin
No.of message bits = k=4
En
=
arn
b1= m1 m2 m3
b2 = m 1 m2 m4
Le
b3 = m 1 m3 m4
w.
ww
112
n
g.i
rin
ee
gin
En
dmin =3
arn
Le
w.
C. Error Correction:
113
n
g.i
rin
Decoder:
ee
gin
En
arn
Le
w.
ww
114
5. Determine the generator polynomial g(x) for a (7, 4) cyclic code, and find code
vectors for the following data vectors 1010, 1111, and 1000. (8)[NOV 11,
MAY14]
Given :
To find generator polynomial
It is a factor of (xn+1)
Here n=7
x7+1 = (1+x) (1+x+x3) (1+x2+x3)
n
Generator must have a maximum power of n-k.
g.i
Here n-k = 7-4 = 3
Therefore generator must be a term with power 3
rin
So (1+x+x3) and (1+x2+x3) can be used as generator.
Assume (1+x+x3) is a generator
g(x) = 1+x+x3 ee
gin
(i) Consider data vector 1010:
m1 =1010
En
m1(x) = 1+x2
Step 1:
arn
Step 2:
Divide x3m1(x) by g(x)
ww
x3+x5 / (1+x+x3)
Quotient = q(x) = x2
Remainder = R(x) = x2
115
Step 3:
Add the remainder R(x) to x3m1(x)
C1(x) = x2 + ( x5+x3)
= x2+x3+x5
C1 = 0011010
(ii) Consider data vector 1111:
m2 =1111
n
m2(x) = 1+x+x2+x3
g.i
Step 1:
Multiply m2(x) by xn-k
rin
xn-k= x7-3 = x3
ee
x3m2(x) = x3(1+x+x2+x3) = x3+x4+x5+x6
gin
Step 2:
En
Quotient=q(x)
ww
=x3+x2+1
Remainder=R(x) =x2+x+1
= 1+x+x2
Step 3:
Add the remainder R(x) to x3m2(x)
C2(x) = (1+x+x2)+ ( x3+x4+x5+x6)
= 1+x+x2+x3+x4+x5+x6
C2 = 1111111
116
n
g.i
rin
ee
gin
En
arn
Le
w.
ww
117
m1 = 1000
m3(x) = 1
Step 1:
Multiply m3(x) by xn-k
xn-k = x7-3 = x3
x3m3(x) = x3(1) = x3
Step 2:
Divide x3m3(x) by g(x)
n
3
x3 / (1+x+x )
g.i
rin
ee
Quotient = q(x) = 1
Remainder = R(x) =x+1
gin
Step 3:
En
= 1+x+x3
C3 = 1101000
Le
w.
ww
118
H=
n
g.i
f) Check whether it is a hammingcode
g) If the received sequence is [0101100]. Calculate the syndrome and decode
rin
the receivedsequence.
h) Illustrate the relation between the minimum distance and the structure of
Solution:
ee
parity check matrix H by considering the code word[0101100].
gin
Coefficient Matrix:
Given
En
H=
arn
W.k.t H=
From above equation
Le
w.
a) COEFFICIENT MATRIX:
ww
P= =
b) GENERATOR MATRIX:
G =
119
Given n=7,K=4
G =
c) All possiblecodewords
b = mP
b
n
g.i
No.of parity bits = n-k = 7-4 =3
m
rin
No.of message bits = k= 4
ee =
gin
b1= m1 m3 m4
b2 = m 1 m2 m4
En
b3 = m 2 m3 m4
arn
Le
w.
ww
120
n
g.i
rin
ee
gin
d) MINIMUM WEIGHT & MINIMUMDISTANCE
En
Minimum weight =3
In LBC,
Minimum distance = minimum weight
Le
Error detection:
ww
120
n
f) TO CHECK WHETHER IT IS A HAMMINGCODE
g.i
1) Yes
2) Block length n =2q-1
rin
q = n-k
= 7-4
q =3 ee
gin
n =2q -1
7 = 23 -1
En
7 = 8-1
7=7 Yes
arn
3) No.of Messagebits
K =2q –q-1
4 = 23- 3-1
Le
= 8-4
w.
4 =4
4) No. of Paritybits
ww
q = n-k
3 = 7-4
3=3 Yes
Since it satisfies all the conditions. It is a hamming code.
g) Syndrome:
Received sequence = r = 0101100
Syndrome S = rHT
121
n
g.i
=
SYNDROME TABLE:
rin
SYNDROME ERROR PATTERN
ee
000 0000000
gin
110 10 0 0 0 0 0
0 11 0100000
En
10 1 0010000
111 0001000
arn
100 0000100
010 0000010
Le
001 0000001
w.
= 0101100+0000000
= 0101100
0101100 is the correct code word
n
Solution:
g.i
Given
Rate = ½
rin
Constraintlength=3
Generatorvectorg1= &
ee
g2 =Input
gin
message m =10011
1) Encoder:
En
2) Dimension of thecode:
123
3) Coderate:
4) Constraintlength:
Definition:
No of shifts over which the msg bit can influence the encoder output.
n
g.i
Here it is 3.
5) Output sequence: Given
rin
Generatorvectorg1= &
g2 =
ee
Input message m = 10011
gin
In Polynomial Representation
g1(D) = 1+D+D2
En
= 1+D3+D4
Output of Upper Path
Le
= 1+D3+D4+D+D4+D5+D2+D5+D6
= 1+D+D2+D3+D6
ww
x1 = {1 1 1 1 0 01}
Output of Lower Path
x2(D) = m(D) g2(D)
= (1+D3+D4) (1+D2)
= 1+D3+D4+D2+D5+D6
= 1+D2+D3+D4+D5+D6
x1 = {1 0 1 1 1 11}
124
Overall output
The switch moves between upper and lower path alternatively
Code word = {11 10 11 11 01 01 11}
n
3. Coderate
g.i
4. Constraintlength
5. Obtain the encoded output for the input message 10011. UsingTime
rin
DomainApproach
Solution :
Given:
ee
gin
Rate = ½
Constraintlength=3
Generatorvector = &
En
=
arn
1. Encoder
Le
w.
ww
2. Dimension of thecode:
The encoder takes 1 input at a time. So k = 1
It generates 2 output bits. So n =2
Dimension = (n, k) = (2, 1)
3. Coderate:
125
4. Constraintlength:
Number of shifts over which the message bits can influence the
encoder output.
Here it is 3.
5. Outputsequence
Generator Sequence of Top Adder
↑ ↑ ↑
n
g.i
Generator Sequence of BottomAdder
rin
↑↑ ↑
ee
gin
Message Sequence
En
↑ ↑↑↑ ↑
arn
i = 0, 1, 2, 3, 4, 5,6
ww
l = 0, 1, 2
i=0
l=0
126
= 1*1
=1
i=1
l = 0,1
n
= 1*0 1*1
g.i
=0 1
rin
=1
i=2
ee l = 0, 1, 2
gin
En
=0 0 1
=1
Le
i=3
l = 0, 1, 2
w.
ww
= 0
0
=1
i=4
127
l= 0, 1, 2
= 1 0
=0
n
g.i
i=5
rin
l = 0, 1, 2
ee
gin
=1*m5 1*1 1*1
En
=1 1
=0
arn
i=6
Le
l = 0, 1, 2
w.
ww
=1*m6 1*1
=1
128
i = 0, 1, 2, 3, 4, 5, 6
l = 0, 1, 2
n
i=0
g.i
l=0
rin
ee
gin
= 1*1 = 1
i=1
l = 0,1
En
arn
Le
= 1*0 0*1
=0 0
w.
=0
ww
i=2
l = 0, 1, 2
= 0*0 1*1
= 0 1
129
=1
i=3
l = 0, 1, 2
n
=1 0 0
g.i
=1
rin
i=4
l = 0, 1, 2
ee
gin
En
=1 0 0
arn
=1
i=5
Le
l = 0, 1, 2
w.
ww
= 1
=1
i=6
l = 0, 1, 2
130
=1
n
Overall output
g.i
rin
The Switch moves between upper & lower path alternatively
Code word =
ee
gin
9. Arate convolutional encoder has generator vectorsg1= , ,
Solution: Given:
Le
K =1, n=3
w.
1. Encoder:
0 0 a
0 1 b
1 0 c
1 1d
State Table:
Output
In x1 = m
Current x2 = m Next
SNo state m1 m state
n
2
x3=m m2
g.i
m2 m1 m x1 x2 x3 m1 m
1 a= 0 0 0 0 0 0 0 0 =a
rin
1 1 1 1 0 1 =b
2 b= 0 1 0 0 1 0 1 0 =c
1 1 0 1 1 1 =d
3 c=
ee 1 0 0
1
0
1
1
0
1
0
0 0 =a
0 1 =b
gin
4 d= 1 1 0 0 0 1 1 0 =c
1 1 1 0 1 1 =d
Trellis diagram:
En
Input lines
0
arn
1
Le
w.
ww
State Diagram:
132
n
Code Tree:
g.i
rin
ee
gin
En
arn
Le
w.
133
n
g.i
Step1: Consider first 3 bits & stage1
rin
ee
gin
Step2: Consider first 6 bits & stage 1 &2
En
arn
Le
w.
134
Survivors
n
g.i
rin
Step4: Consider all the bits and 4stages
ee
gin
En
arn
Le
Survivors
w.
ww
135
n
1. Whatis aliasing? Pageno:07
g.i
2. What is companding? Sketch the input and output characteristics of expander
andcompressor. Pageno:07
rin
3. What are the advantages ofdeltamodulator? Pageno:27
4. What is linear predictor? On what basis predictor coefficients aredetermined.
5. What arelinecodes? ee Pageno:50
gin
6. What is ISI? What are the causesofISI? Pageno:50
7. Distinguish coherent andnon-coherentreception Pageno:74
En
8. What is QPSK? Write the expression for the signal setofQPSK Pageno:74
9. What is alinearcode? Pageno:99
arn
PART-B (5 * 16 = 80)
Le
w.
11.(a)i)Statethelowpasssamplingtheoremandexplainthereconstructionof the
signal fromitssamples. (9) Pageno:19-22
ww
Page no:16-18
136
ii) What is TDM? Explain the difference between analog TDM and digital TDM.
(6) Pageno:23-26
12. (a) i) draw the block diagram of ADPCM system and explain its function (10)
Page no:39-43
ii) A delta modulator with a fixed step size of 0.75 V, is given a sinusoidal
message signal. If the sampling frequency is 30 times the nyquist rate.
Determine the maximum permissible amplitude of the message signal if slope
n
overload is to be avoided. (6)
g.i
OR
(b) i) Draw the block diagram of an adaptive delta modulator with continuously
rin
variable step sizeandexplain. (10) Page no:30-35
ii) Compare PCM system with delta modulation system (6)
ee
gin
13. (a) i) Sketch the power spectra of (a) Polar RZ and (b) bipolar RZ signals. (8)
Page no:64-70
En
ii) Compare the various line coding techniques and list their merits and demerits
(8)
arn
OR
(b) i) Draw the block diagram of duo binary signaling scheme without andwith
precoderandexplain. (9) pageno:57-62
Le
14.(a)ExplainthegenerationanddetectionofacoherentbinaryFSKsignaland derive
ww
G=
n
OR
g.i
(b) i) The generation of a polynomial of a (7,4) cyclic code is 1+x2+x3. Develop
rin
encoder and syndrome calculator forthiscode (8)
ii) Explain the Viterbi algorithm forconvolutionalcode. (8)
ee
gin
En
arn
Le
w.
ww
138
n
1. State sampling theorem for band limited signals and filters to avoidaliasing
g.i
2. Write the two fold effects of quantization process. Pageno:8
5. Define APFandAPB. Pageno:27
rin
6. Write the limitations ofdeltamodulation. Pageno:27
7. List the propertiesof syndrome. Pageno:50
ee
8. Compare M-ary PSK andM-aryQAM Pageno:51
gin
9. Draw the block diagram of coherent BFSK receiver. Pageno:74
10. Distinguish BPSK andQPSKtechniques Pageno:74
En
PART-B (5 * 16 = 80)
11. (a) Describe the process of sampling and how the message is reconstructed
Le
from its samples. Also illustrate the effect of aliasing with neat sketch.
w.
(16)
Page no:19-22
ww
OR
(b) Describe the PCM waveform coder and decoder with neat sketch and list the
merits compared withanalogcoders. (16)
Page no:16-18
ii) Explain how adaptive delta modulation performs better and gains more
SNR thandeltamodulation (8)
Page no:39-43
OR
(b) Illustrate how the adaptive time domain coder codes the speech at low
bit rate and compare it with frequency domain coder.
13 (a) i) Describe modified duo binary coding technique and its performanceby
illustrating its frequency and impulse responses (10) Pageno:57-62
n
ii) Determine the power spectral density of NRZ bipolar and unipolardata
g.i
formats. Assume that ones and zeros in the binary data occur with equal
probability. (6) Pageno:64-70
rin
OR
b) i) Describe how eye pattern illustrates the performance of a data
ee
transmission system with respect to ISI with neat sketch (10) Page no:71-
gin
73
ii) Illustrate the modes of operation of adaptive equalizer with neat block
En
describe how it reproduces with the minimum probability of symbol error with
neatsketch Pageno:86-92
OR
Le
of PSK
15 a) For a systemic linear block codes the 3 parity check digits P1, P2, P3
ww
are givenby
PKn-k=
i) Construct generatormatrix
ii) Construct code generated by thematrix
iii) Determine error correctingcapacity
iv) Decode the received words with anexample
140
OR
n
g.i
rin
ee
gin
En
arn
Le
w.
ww
141
n
5.Define Correlative level coding. Page No :52
g.i
6.For the binary data 01101001 draw the unipolar and RZ signal Page No :66
7.Distinguish coherent vs non coherent digital modulation techniques.Page No :74
8. Draw a block diagram of a coherent BFSK receiver. Page No :74
rin
9.Generate the cyclic code for (n,k) syndrome calculator. Page No :115
10.Define channel coding theorem. Page No :99
ee
PART B-(5*16 =80 marks)
11.(a) Illustrate and describe the types of quantizer? Describe the midtread and
gin
midrise type characteristics of uniform quantizer with a suitable diagram. (16)
Page No:12-15
Or
(b) Draw and explain the TDM with its applications.(16) Page No :22-26
En
12.(a) Describe delta modulation system in detail with a neat block diagram.Also,
arn
Or
(b)Describe how eye pattern is helpful to obtain the performance of the system
in detail with a neat sketch. (16)
Page No :71-73
14.(a) (i) Describe the generation and detection of Coherent binary PSK
Signals. (10) Page no 77-80
(ii) Illustrate the power spectra of binary PSK signal. (6) Page no 77-80
Or
142
(b) (i) Describe the generation and detection of Coherent QPSK Signals .(12)
Page No :86
(ii)Illustrate the power spectra of QPSK signal. (4) Page No :86
15.(a)(i) Describe the cyclic codes with the linear and cyclic property.Also represent
the cyclic property of a code word in polynomial notation. (12) Page no 115-117
(ii) List the different types of errors detected by CRC code. (4)
Or
(b)Describe how the errors are corrected using Hamming code with an
example. (12)
(ii) The code vector [1110010] is sent, the received vector is
[1100010].Calculate the syndrome. (4)
n
g.i
rin
ee
gin
En
arn
Le
w.
ww
143
n
Part a –(10*2=20 marks)
g.i
1.A certain lowpass bandlimited signal x(t) is sampled and the spectrum of the
sampled version has the first guard band from 1500 Hz to 1900 Hz.What is the
sampling frequency?What is the maximum frequency of the signal?
rin
2.What is companding?Sketch the characteristics of a comparator. Page No :7
3.What is meant by granular noise in a delta modulation system? How can it be
avoided?
ee Page No:33
4.What is a linear predictor? On What basis are the predictor coefficients
determined? Page No:46
gin
5.State the desirable properties of line codes. Page No :52
6.What is an eye diagram? Page No :51
7.What is QPSK?Write down an expression for the signal set. Page No :74
En
code?
Le
Page No :12&16-18
ww
144
12.(a) With neat diagram, explain the adaptive delta modulation and demodulation
system in detail. Page No :34-35
Or
(b) Explain the operation of DPCM encoder and decoder with neat block
diagrams. Page No :36-38
13.(a) Derive the power spectral density of unipolar NRZ data format and list its
n
properties Page No :65-70
g.i
Or
(b) (i) Describe the Nyquist‟s criteria for distortion less base band
rin
transmission. (10)
Page No :53-56
ISI? ee
(ii) What is a “raised Cosine spectrum”? Discuss how does it help to avoid
(6)
gin
14.(a) Explain in detail the detection and generation of BPSK system.Derive the
expression for its bit error probability. Page No :77-84
Or
En
15.(a) The generator polynomial of a (7,4) linear systematic cyclic block code is
1+x+x3. Determine the correct code word transmitted, if the received word is
(i) 1011011 and (ii) 1101111
Le
Or
(b) A rate 1/3 convo
lutional encoder with constraint length of 3 uses the generator
w.
Page No : 131-135
__________________________
145