UNIT –I MULTIRATE SIGNAL PROCESSING
Signal
System
Advantages of Digital over Analog Signal Processing
• Digital signal processing offers greater accuracy and precision by working with discrete
values, reducing errors from environmental factors and component tolerances.
• It is less susceptible to noise and interference, whereas analog processing can suffer from
distortion and degradation over time.
• Digital systems allow flexibility and programmability, enabling modifications through
software updates instead of requiring circuit redesigns.
• Stored digital signals do not degrade, ensuring consistent reproducibility, unlike analog
storage, which loses quality over time.
• Error detection and correction mechanisms in digital processing help maintain accuracy,
which is not easily achievable in analog systems.
• Complex operations like filtering, compression, and encryption are more efficiently
performed in digital processing, while analog circuits require additional hardware for such
tasks.
• Digital compression techniques enable efficient bandwidth usage, whereas analog signals
typically require more bandwidth for transmission.
• Digital processing is scalable and can be integrated into microprocessors, FPGAs, and
ASICs, making devices more compact, while analog circuits require larger components.
• Over time, digital systems tend to be more cost-effective due to mass production and lower
maintenance costs compared to analog systems, which rely on precise, sometimes
expensive components.
• Digital systems allow real-time adaptive processing, such as dynamic noise cancellation,
which analog systems cannot achieve as effectively.
• Digital processing maintains signal integrity over long distances, whereas analog signals
degrade due to attenuation and interference.
Basic Elements of Digital Signal Processing System
Convolution
To summarize the process,
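As a concrete illustration of the convolution sum y(n) = Σk x(k) h(n − k), here is a minimal sketch; the sequences x and h below are illustrative assumptions, not taken from the notes:

```python
import numpy as np

def convolve(x, h):
    """Direct evaluation of the convolution sum y(n) = sum_k x(k) h(n - k)."""
    N = len(x) + len(h) - 1          # length of the linear convolution
    y = np.zeros(N)
    for n in range(N):
        for k in range(len(x)):
            if 0 <= n - k < len(h):  # keep only terms where both sequences are defined
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])        # example input sequence (assumed)
h = np.array([1.0, 1.0])             # example impulse response (assumed)
y = convolve(x, h)
```

In practice the same result is obtained with numpy's built-in np.convolve.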
Cross Correlation and Auto Correlation
Z Transform
Inverse Z Transform
Discrete Fourier Transform
Multirate Signal Processing
Decimation by a factor D
The process of reducing the sampling rate by an integer factor D is called decimation of the sampling rate. It is also called down-sampling by the factor D.
A decimator comprises two blocks: a decimation filter and a down sampler. The decimation filter bandlimits the signal before the decimation operation, and the down sampler then decreases the sampling rate by the integer factor D.
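These two blocks can be sketched as follows, with a crude 5-tap moving average standing in for a proper decimation filter; the test signal, filter, and factor D below are illustrative assumptions:

```python
import numpy as np

def decimate(x, D, h):
    """Bandlimit x with FIR filter h, then keep every D-th sample."""
    v = np.convolve(x, h, mode="same")   # decimation (anti-aliasing) filter
    return v[::D]                        # down sampler: y(m) = v(mD)

D = 4
n = np.arange(64)
x = np.cos(2 * np.pi * 0.02 * n)         # slowly varying test signal (assumed)
h = np.ones(5) / 5                        # crude lowpass: 5-tap moving average (assumed)
y = decimate(x, D, h)                     # output has 64 / 4 = 16 samples
```

A real decimation filter would be designed with a cutoff at π/D; the moving average here only illustrates the structure.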
Interpolation by a factor I
The process of increasing the sampling rate of a signal is called "interpolation" or "sampling rate expansion"
(or "upsampling").
Let I be the interpolation factor. The interpolator simply inserts (I - 1) zeros between successive samples of x(n).
The block diagram representing interpolation is as
follows:
• x(n): Input signal with sampling frequency Fx.
• Upsampler (↑ I): Increases the sampling rate by
factor I.
• Anti-imaging filter h(n): Removes high-frequency
artifacts introduced by upsampling.
• y(n): Output signal with sampling frequency Fy.
Before Sampling
After Sampling
The upsampler inserts (I - 1) = 2 zeros between each sample.
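The upsampler and anti-imaging filter can be sketched as follows; the input sequence and the simple zero-order-hold filter h are illustrative assumptions:

```python
import numpy as np

def interpolate(x, I, h):
    """Insert (I - 1) zeros between samples, then apply the anti-imaging filter h."""
    v = np.zeros(len(x) * I)
    v[::I] = x                            # upsampler: v(nI) = x(n), zeros elsewhere
    return np.convolve(v, h, mode="same") # anti-imaging filter removes spectral images

I = 3
x = np.array([1.0, 2.0, 3.0, 4.0])        # example input (assumed)
h = np.ones(I)                            # crude anti-imaging filter: zero-order hold (assumed)
y = interpolate(x, I, h)                  # output has 4 * 3 = 12 samples
```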
Sampling Rate Conversion by a Rational Factor
The sampling rate conversion can be accomplished by cascading an interpolator with a decimator, as shown
in the diagram.
x(n) = v(nI), equivalently x(k) = v(kI)
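Cascading the two operations gives a sketch of rate conversion by the rational factor I/D: upsample by I, filter once, then downsample by D. The signal and the 3-tap filter below are assumptions for illustration:

```python
import numpy as np

def resample_rational(x, I, D, h):
    """Rate conversion by I/D: upsample by I, filter, then downsample by D."""
    v = np.zeros(len(x) * I)
    v[::I] = x                            # interpolator inserts (I - 1) zeros
    w = np.convolve(v, h, mode="same")    # one filter serves as anti-imaging and anti-aliasing
    return w[::D]                         # decimator keeps every D-th sample

x = np.arange(12, dtype=float)            # example input (assumed)
y = resample_rational(x, I=3, D=2, h=np.ones(3))  # 12 * 3 / 2 = 18 output samples
```

The single filter is possible because the interpolation filter and decimation filter operate at the same (high) rate and can be combined into one lowpass with cutoff min(π/I, π/D).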
Relation between the spectrum X(w) and Y(w)
Sampling rate conversion of bandpass Signal
Principle of Bandpass Signal Processing
The bandpass signal is converted to a lowpass signal with the help of mixing. The lowpass signal contains all
the information that is present in the bandpass signal. The lowpass signal is then interpolated or decimated.
Applications of Multirate Digital Signal Processing
Digital Filter Banks
Subband coding of Speech Signal
Subband coding is a method in which the speech signal is divided into a number of frequency subbands, and each subband is encoded separately with a bit allocation matched to its perceptual importance.
Quadrature Mirror Filter
Analysis
Decimation and Interpolation
Synthesis
UNIT 2 DISCRETE RANDOM PROCESS
(Equation 1)
(Equation A)
Put l=1 in equation (A)
Substitute (C) in (B)
Representing (B) & (C) in matrix form:
From (C)
Substitute the value of rx(1)
Filters for generating random processes from white noise and inverse filter:
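A sketch of the idea: white noise passed through a shaping filter generates a colored (here AR(1)) process, and the inverse (whitening) filter recovers the white noise. The coefficient a = 0.9 is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.9                                   # AR(1) coefficient (assumed)
w = rng.standard_normal(1000)             # white noise input

# Shaping filter H(z) = 1 / (1 - a z^-1): x(n) = a x(n-1) + w(n)
x = np.zeros_like(w)
x[0] = w[0]
for n in range(1, len(w)):
    x[n] = a * x[n - 1] + w[n]

# Inverse (whitening) filter 1 - a z^-1 recovers the white noise exactly
w_rec = x.copy()
w_rec[1:] -= a * x[:-1]
```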
UNIT -III LINEAR PREDICTION AND FILTERING
Linear Prediction
Linear estimation and prediction
An important task in signal processing is the estimation of signals in the presence of noise. The estimation problem is to recover the desired signal from the corrupted data with minimum error.
The filter (or algorithm), together with the criterion that enables this estimation, is known as an estimator.
Forward linear prediction:
Consider the problem of predicting future values of a stationary random process from the past values of the process. For this, consider a one-step forward linear predictor, which predicts the value x(n) from the past values x(n−1), x(n−2), …, x(n−p).
Linearly predicted value of x(n) is
Where,
• ap(k) - predictor coefficients of the one-step forward linear predictor
• The negative sign is included for mathematical convenience
Let fp(n) be the forward prediction error, defined as the difference between the value x(n) and its predicted value x̂(n).
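With the sign convention above, x̂(n) = −Σk ap(k) x(n − k), so fp(n) = x(n) + Σk ap(k) x(n − k). A minimal sketch; the data and the single coefficient are illustrative assumptions:

```python
import numpy as np

def forward_prediction_error(x, a):
    """f_p(n) = x(n) + sum_{k=1}^{p} a_p(k) x(n - k)  (prediction error filter)."""
    p = len(a)
    f = np.zeros(len(x))
    for n in range(len(x)):
        f[n] = x[n]
        for k in range(1, p + 1):
            if n - k >= 0:
                f[n] += a[k - 1] * x[n - k]
    return f

x = np.array([1.0, 2.0, 4.0, 8.0])        # example data: doubles each step (assumed)
a = np.array([-2.0])                       # a_1(1) = -2, i.e. x_hat(n) = 2 x(n-1)
f = forward_prediction_error(x, a)         # error is zero wherever the predictor is exact
```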
Linear prediction is equivalent to linear filtering, where the predictor is embedded in the linear filter.
This is called a prediction error filter with input sequence x(n) and output sequence fp(n).
The realization for the prediction error filter is a direct-form FIR filter with a system function.
The direct-form FIR filter is equivalent to an all-zero lattice filter. The lattice filter is described by a
recursive equation.
error
P Stage Lattice Filter
The output of the p stage lattice filter is expressed as,
Backward linear prediction:
Substituting (1) in (2)
The backward linear predictor is realized by a direct-form FIR filter structure or as a lattice structure.
The lattice structure provides both forward and backward linear prediction.
The coefficients of a backward linear predictor are the complex conjugate of the coefficients of the
forward linear predictor, but they occur in reverse order.
The above equation is used to obtain the direct-form FIR filter coefficients am(k) from the reflection coefficients Km. The minimum mean square error is the same as that of forward prediction.
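For real-valued data (where the conjugate reduces to the value itself), this step-up recursion, am(k) = am−1(k) + Km am−1(m − k) with am(m) = Km, can be sketched as follows; the reflection coefficients below are illustrative assumptions:

```python
import numpy as np

def step_up(K):
    """Convert reflection coefficients K_1..K_p into direct-form
    predictor coefficients a_p(1)..a_p(p) for real-valued data:
        a_m(k) = a_{m-1}(k) + K_m a_{m-1}(m - k),   a_m(m) = K_m
    """
    a = np.array([])
    for m, k_m in enumerate(K, start=1):
        a_new = np.zeros(m)
        a_new[m - 1] = k_m                         # a_m(m) = K_m
        for k in range(1, m):                      # update the lower-order coefficients
            a_new[k - 1] = a[k - 1] + k_m * a[m - k - 1]
        a = a_new
    return a

a = step_up([0.5, -0.25])                          # example reflection coefficients (assumed)
```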
WIENER FILTERS FOR FILTERING AND PREDICTION
There are three special cases in linear estimation.
FIR WIENER FILTER
In matrix Form
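In matrix form the optimal FIR Wiener filter is found by solving the Wiener–Hopf equations Rx w = rdx. A minimal sketch; the correlation values, and the use of rx(0) in place of rd(0) in the MMSE line, are illustrative assumptions:

```python
import numpy as np

rx = np.array([1.0, 0.5, 0.25])           # autocorrelation r_x(0..2) (assumed)
rdx = np.array([0.9, 0.4, 0.2])           # cross-correlation r_dx(0..2) (assumed)

# Toeplitz autocorrelation matrix R_x built from r_x(|i - j|)
R = np.array([[rx[abs(i - j)] for j in range(3)] for i in range(3)])
w = np.linalg.solve(R, rdx)               # optimal FIR Wiener coefficients
mmse = rx[0] - rdx @ w                    # MMSE = r_d(0) - r_dx^T w  (r_d(0) = r_x(0) assumed)
```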
IIR WIENER FILTER
In the design of the IIR Wiener filter, we consider two cases:
i. Non causal IIR Wiener filter
ii. Causal IIR Wiener filter
Non-Causal IIR Wiener Filter
DISCRETE KALMAN FILTER
Using this state representation, the AR(p) process and the observation model can be written in matrix form as,
The estimated error is given by,
Differentiating,
Error covariance matrix is given by,
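A minimal scalar sketch of the predict/update cycle, assuming a state model x(n) = a x(n−1) + w(n) with observation y(n) = x(n) + v(n); all numerical values below are illustrative assumptions:

```python
import numpy as np

a, q, r = 0.9, 0.1, 1.0                   # state coefficient, process and measurement
                                          # noise variances (all assumed)
x_hat, P = 0.0, 1.0                       # initial estimate and error covariance
estimates = []
y = [1.0, 0.8, 0.9, 1.1]                  # example measurements (assumed)
for yn in y:
    # Prediction step
    x_pred = a * x_hat
    P_pred = a * P * a + q
    # Update step: Kalman gain, state estimate, error covariance
    K = P_pred / (P_pred + r)
    x_hat = x_pred + K * (yn - x_pred)
    P = (1 - K) * P_pred
    estimates.append(x_hat)
```

The error covariance P shrinks as measurements arrive, reflecting growing confidence in the estimate.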
UNIT –IV ADAPTIVE FILTERING
FIR ADAPTIVE FILTER
ADAPTIVE FILTER BASED ON STEEPEST DESCENT METHOD
The steepest descent algorithm is as follows,
On substituting,
LMS ALGORITHM
The minimum mean square error is given as,
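The LMS weight update w(n+1) = w(n) + μ e(n) x(n) can be sketched in a system-identification setting; the unknown system h_true and the step size μ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.7, -0.3])            # unknown system to identify (assumed)
mu = 0.05                                  # step size (assumed)

w = np.zeros(2)                            # adaptive filter weights
x = rng.standard_normal(2000)              # white input signal
for n in range(1, len(x)):
    xn = np.array([x[n], x[n - 1]])        # current input vector
    d = h_true @ xn                        # desired signal (unknown system output)
    e = d - w @ xn                         # error signal
    w = w + mu * e * xn                    # LMS weight update
```

With noiseless desired data the weights converge to h_true; with noise they fluctuate around it, with a misadjustment proportional to μ.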
ADAPTIVE ECHO CANCELLATION
RLS ALGORITHM
The minimum mean square error is given as,
Let,
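A minimal sketch of the RLS recursions (gain vector, a priori error, weight update, and inverse-correlation update); the forgetting factor, initialization, and unknown system below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.7, -0.3])            # unknown system to identify (assumed)
lam = 0.99                                 # forgetting factor (assumed)

w = np.zeros(2)                            # adaptive filter weights
P = 100.0 * np.eye(2)                      # inverse correlation matrix estimate (large init)
x = rng.standard_normal(500)               # white input signal
for n in range(1, len(x)):
    xn = np.array([x[n], x[n - 1]])
    d = h_true @ xn                        # desired signal
    g = P @ xn / (lam + xn @ P @ xn)       # gain vector
    e = d - w @ xn                         # a priori error
    w = w + e * g                          # weight update
    P = (P - np.outer(g, xn @ P)) / lam    # inverse correlation matrix update
```

RLS converges much faster than LMS at the cost of O(p²) operations per sample.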
UNIT V SPECTRUM ESTIMATION
PERIODOGRAM
Case i) Representation in terms of the autocorrelation
Taking Discrete Time Fourier Transform leads to an estimate of the power spectrum known as the
periodogram
Case ii) Representation in terms of x(n)
Let xN(n) be the finite-length signal of length N that is equal to x(n) over the interval [0, N − 1] and is zero otherwise,
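From this definition the periodogram is |XN(e^jω)|² / N, which can be evaluated at the FFT frequencies; the test sinusoid below is an illustrative assumption:

```python
import numpy as np

def periodogram(x):
    """P_per at the FFT frequencies: |X_N(e^jw)|^2 / N."""
    N = len(x)
    X = np.fft.fft(x)
    return np.abs(X) ** 2 / N

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)          # sinusoid exactly at FFT bin 8 (assumed)
P = periodogram(x)                          # peaks at bin 8 with value (N/2)^2 / N = 16
```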
FILTER BANK REPRESENTATION OF PERIODOGRAM
The frequency response of this filter is
Since the integrand equals 1, integrating with respect to ω and substituting the limits, we get
To Estimate the Power Spectrum
PERIODOGRAM OF WHITE NOISE
Performance of the periodogram
Problem
Variance of the Periodogram
Therefore,
MODIFIED PERIODOGRAM
The periodogram is proportional to the squared magnitude of the Fourier transform of the
windowed signal
Although a rectangular window has a narrow main lobe compared to other windows and,
therefore, produces the least amount of spectral smoothing, it has relatively large sidelobes that
may lead to masking of weak narrowband components.
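A sketch of the modified periodogram, which windows the data and normalizes by the window power U = (1/N) Σ |w(n)|²; the Hamming window choice and the test signal are illustrative assumptions:

```python
import numpy as np

def modified_periodogram(x, window):
    """Periodogram of the windowed signal, normalized by N*U,
    where U = (1/N) sum |w(n)|^2 compensates for the window power."""
    N = len(x)
    w = window(N)
    U = np.sum(w ** 2) / N
    X = np.fft.fft(x * w)
    return np.abs(X) ** 2 / (N * U)

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)          # sinusoid at FFT bin 8 (assumed)
P = modified_periodogram(x, np.hamming)    # Hamming window trades mainlobe width for
                                           # lower sidelobes, reducing masking
```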
BARTLETT’S METHOD: PERIODOGRAM AVERAGING
Unlike either the periodogram or the modified periodogram, Bartlett's method produces a consistent estimate of the power spectrum. This method estimates the power spectrum of a random process by averaging periodograms. The data is partitioned into K non-overlapping sequences xi(n), i = 0, 1, …, K − 1, each of length L, so the total length is N = KL.
The periodogram of xi(n), for n = 0, 1, …, L − 1 and i = 0, 1, …, K − 1,
= (1/N) Σ_{i=0}^{K−1} F[rxx(k) wB(k)]
= (1/N)(1/2π) Σ_{i=0}^{K−1} [Px(e^jω) ∗ WB(e^jω)]
The expected value can be approximated using
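The averaging procedure itself can be sketched as follows; the white-noise test input and the choice K = 16 are illustrative assumptions:

```python
import numpy as np

def bartlett_psd(x, K):
    """Average the periodograms of K non-overlapping segments of length L = N // K."""
    L = len(x) // K
    P = np.zeros(L)
    for i in range(K):
        xi = x[i * L:(i + 1) * L]              # i-th non-overlapping segment
        P += np.abs(np.fft.fft(xi)) ** 2 / L   # periodogram of the segment
    return P / K                               # average over the K segments

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                  # white noise: flat spectrum expected
P = bartlett_psd(x, K=16)
```

Averaging K periodograms reduces the variance by roughly a factor of K, at the cost of frequency resolution (each segment is only L = N/K samples long).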
WELCH METHOD: AVERAGING MODIFIED PERIODOGRAM
If there is no overlap, then D=L
Substituting n − m = k and m = n − k,
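Welch's method can be sketched with segment offset D (D = L gives no overlap, D = L/2 gives 50% overlap); the window choice and parameter values below are illustrative assumptions:

```python
import numpy as np

def welch_psd(x, L, D, window=np.hamming):
    """Average modified periodograms of length-L segments offset by D samples."""
    w = window(L)
    U = np.sum(w ** 2) / L                     # window power normalization
    starts = range(0, len(x) - L + 1, D)       # segment start positions
    P = np.zeros(L)
    for s in starts:
        seg = x[s:s + L] * w                   # windowed segment
        P += np.abs(np.fft.fft(seg)) ** 2 / (L * U)
    return P / len(list(starts))               # average over all segments

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                  # white noise test input (assumed)
P = welch_psd(x, L=128, D=64)                  # 50% overlap (assumed settings)
```

The overlap admits more segments than Bartlett's method for the same record length, further reducing the variance of the estimate.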
Limitations of the Non-Parametric Methods
Non-parametric methods are not designed to incorporate into the estimation procedure any prior information that may be available about the process.
PARAMETRIC METHODS
Step 1: Select an appropriate model (AR, MA, or ARMA) for the process, based on prior knowledge of how the process is generated.
Step 2: Estimate the Model parameters from the given data
Step 3: From these Model Parameters Estimate the power Spectrum
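The three steps can be sketched for an AR(p) model using the autocorrelation (Yule–Walker) approach; the model order, test process, and FFT size below are illustrative assumptions:

```python
import numpy as np

def ar_spectrum(x, p, nfft=256):
    """Step 2: estimate AR(p) parameters from the Yule-Walker equations;
    Step 3: form the spectrum P_x(e^jw) = sigma^2 / |A(e^jw)|^2."""
    N = len(x)
    # Biased autocorrelation estimates r(0)..r(p)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -r[1:])          # AR coefficients a(1)..a(p)
    sigma2 = r[0] + r[1:] @ a               # model (prediction error) variance
    A = np.fft.fft(np.concatenate(([1.0], a)), nfft)
    return sigma2 / np.abs(A) ** 2

# Step 1 (assumed here): an AR(1) model, tested on x(n) = 0.9 x(n-1) + w(n)
rng = np.random.default_rng(0)
w = rng.standard_normal(4096)
x = np.zeros_like(w)
for n in range(1, len(w)):
    x[n] = 0.9 * x[n - 1] + w[n]
P = ar_spectrum(x, p=1)                     # lowpass spectrum, peaked at omega = 0
```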
Advantages
It eliminates the need for windowing.
It provides better resolution than non-parametric methods.
The problem of spectral leakage is eliminated.
Types of Model
Auto Regressive Model
Moving Average Model
Auto Regressive Moving Average