APPENDIX D
Matched Filters
Consider a known signal s(t) corrupted by additive white Gaussian noise w(t), resulting in the received signal

\[
x(t) = s(t) + w(t), \qquad 0 \le t \le T \tag{D.1}
\]

where T is the signaling interval. Two points concerning this problem are noteworthy:

1. The white noise w(t) has a constant power spectral density:

\[
S_W(f) = \frac{N_0}{2} \qquad \text{for all } f \text{ in the entire interval } -\infty < f < \infty \tag{D.2}
\]
The power spectral density of white noise is illustrated in Fig. D.1(a). For a sta-
tionary random process, the autocorrelation function is the inverse Fourier trans-
form of the power spectral density. (See Appendix C.) It follows, therefore, that
the autocorrelation function of white noise consists of a Dirac delta function
δ (τ), weighted by N0/2, as shown in Fig. D.1(b). That is,
\[
R_W(\tau) = E[W(t)W(t - \tau)] = \frac{N_0}{2}\,\delta(\tau) \tag{D.3}
\]
where E is the statistical expectation operator. Accordingly, any two different
samples of white noise are uncorrelated, no matter how closely together in time
they are taken. If the white noise w(t) is also Gaussian, then the two samples are
statistically independent. In a sense, white Gaussian noise represents the ultimate
in randomness.
FIGURE D.1 (a) Power spectrum S_W(f) = N_0/2 of the additive white noise W(t). (b) Autocorrelation function R_W(τ) = (N_0/2)δ(τ) of W(t).
2. Since the signal s(t) is known and therefore deterministic, it follows that s(t) and
w(t) are as uncorrelated (i.e., dissimilar) as they could ever be.
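Point 1 lends itself to a quick numerical illustration before we proceed. The sketch below is not part of the original text; the values of N0 and the sampling rate fs are illustrative choices. It generates a sampled stand-in for white Gaussian noise and estimates its autocorrelation at several lags, mirroring Eq. (D.3).

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 2.0        # illustrative noise spectral density (watts/hertz)
fs = 1000.0     # illustrative sampling rate (hertz)
n = 100_000

# A sampled stand-in for white noise of two-sided PSD N0/2:
# independent Gaussian samples of variance (N0/2)*fs.
w = rng.normal(0.0, np.sqrt(N0 / 2 * fs), size=n)

# Estimate the autocorrelation at a few lags: large at lag 0,
# near zero elsewhere, mirroring R_W(tau) = (N0/2)*delta(tau).
for lag in (0, 1, 5, 50):
    r = np.mean(w[: n - lag] * w[lag:])
    print(f"lag {lag:3d}: {r:10.2f}")
```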
In light of point 2, we may intuitively state that, for the problem described herein, the
optimum receiver consists of a correlator with two inputs, one being the noisy received
signal x(t) and the other being a locally generated replica of the known signal s(t), as
shown in Fig. D.2. For obvious reasons, this optimum receiver is known as the correla-
tion receiver.
Another way of constructing the optimum receiver is to use a matched filter,
defined as a linear filter whose impulse response h(t) is a time-reversed, delayed ver-
sion of the known signal s(t); that is,
\[
h(t) =
\begin{cases}
s(T - t), & 0 \le t \le T \\
0, & \text{otherwise}
\end{cases} \tag{D.4}
\]
FIGURE D.2 Correlation receiver: the received signal x(t) is multiplied by a locally generated replica of the known signal s(t), and the product is integrated over the signaling interval 0 ≤ t ≤ T to produce the output sample y.

FIGURE D.3 Matched filter receiver: the received signal x(t) is applied to a filter with impulse response h(t) = s(T − t), whose output is sampled at t = T to produce y.
Figure D.3 shows a matched filter receiver, which consists of a matched filter followed
by a sampler that is activated at the end of the signaling interval t = T. The important
point to note here is that the correlation receiver of Fig. D.2 and the matched filter
receiver of Fig. D.3 are equivalent insofar as their overall output samples are con-
cerned. Specifically, for the same input signal and at the end of a signaling interval, the
resulting output samples produced by these two receivers are identical.
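This equivalence is easy to verify numerically. The following sketch approximates both receivers in discrete time; the particular signal s(t), the interval T, and the sampling step dt are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3                                 # sampling step (illustrative)
T = 1.0                                   # signaling interval (illustrative)
t = np.arange(0.0, T, dt)
s = np.sin(2 * np.pi * 5 * t)             # known signal s(t) (illustrative)
x = s + rng.normal(0.0, 1.0, t.size)      # received signal x(t) = s(t) + w(t)

# Correlation receiver (Fig. D.2): multiply by the replica, integrate over [0, T].
y_corr = np.sum(x * s) * dt

# Matched filter receiver (Fig. D.3): filter with h(t) = s(T - t), sample at t = T.
h = s[::-1]                               # time reversal implements s(T - t)
y_mf = (np.convolve(x, h) * dt)[t.size - 1]

print(y_corr, y_mf)                       # agree to within round-off
```

The agreement is exact rather than approximate: sampling the discrete convolution at the end of the interval reproduces the correlator sum term for term.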
Since, by assumption, the white noise w(t) is a sample function of a Gaussian process W(t), it follows that the receiver output y is a sample value of a Gaussian-distributed random variable Y. To complete the characterization of the receiver output, we need to determine its mean and variance.
The output of the correlation receiver may be written as y = E + ∫₀ᵀ w(t)s(t) dt, where E = ∫₀ᵀ s²(t) dt is the energy of the known signal s(t). The mean of the random variable Y is therefore

\[
\mu_Y = E[Y]
= E + E\left[\int_0^T W(t)s(t)\,dt\right]
= E + \int_0^T E[W(t)]\,s(t)\,dt
= E \tag{D.7}
\]
where we have used two facts: First, the known signal s(t) is deterministic and there-
fore unaffected by the expectation operator E. Second, by assumption, the mean of
the white noise process W(t) is zero.
The variance of the random variable Y is
\[
\sigma_Y^2 = E[(Y - \mu_Y)^2]
= E\left[\int_0^T\!\!\int_0^T W(t_1)W(t_2)s(t_1)s(t_2)\,dt_1\,dt_2\right]
= \int_0^T\!\!\int_0^T E[W(t_1)W(t_2)]\,s(t_1)s(t_2)\,dt_1\,dt_2 \tag{D.8}
\]
Invoking Eq. (D.3), we may write

\[
E[W(t_1)W(t_2)] = \frac{N_0}{2}\,\delta(t_1 - t_2) \tag{D.9}
\]

Hence,

\[
\sigma_Y^2 = \frac{N_0}{2}\int_0^T\!\!\int_0^T \delta(t_1 - t_2)\,s(t_1)s(t_2)\,dt_1\,dt_2
= \frac{N_0}{2}\int_0^T s^2(t_1)\,dt_1
= \frac{N_0 E}{2} \tag{D.10}
\]

where the second equality follows from the sifting property of the Dirac delta function, and the last from the definition of the signal energy E.
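As a sanity check on Eqs. (D.7) and (D.10), the following Monte Carlo sketch (not from the original text; the signal, the value of N0, and the sampling step are illustrative) simulates the correlator output over many noise realizations and compares the sample mean and variance with E and N0E/2.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s = np.sin(2 * np.pi * 5 * t)             # illustrative known signal
E = np.sum(s**2) * dt                     # signal energy
N0 = 0.1                                  # illustrative noise spectral density

# Sampled white noise of two-sided PSD N0/2 has per-sample variance N0/(2*dt).
trials = 5_000
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))

y = (s + w) @ s * dt                      # correlator output, one value per trial
print("mean:", y.mean(), " expected:", E)
print("var :", y.var(),  " expected:", N0 * E / 2)
```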
The random variable Y is therefore Gaussian with mean µ_Y = E and variance σ_Y² = N_0E/2, so its probability density function is

\[
f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\left(-\frac{(y - \mu_Y)^2}{2\sigma_Y^2}\right)
= \frac{1}{\sqrt{\pi N_0 E}}\exp\left(-\frac{(y - E)^2}{N_0 E}\right) \tag{D.11}
\]
which is illustrated graphically in Fig. D.4. Let H1 denote the hypothesis that the known signal s(t) is present and H0 the hypothesis that it is absent. Given that H1 is true, the receiver makes an error whenever the output sample y falls below the decision threshold λ; that is,

\[
\text{Prob}(\text{say } H_0 \mid H_1 \text{ is true}) = \int_{-\infty}^{\lambda} f_Y(y)\,dy \tag{D.12}
\]

Substituting Eq. (D.11) into Eq. (D.12) yields

\[
\text{Prob}(\text{say } H_0 \mid H_1 \text{ is true}) = \frac{1}{\sqrt{\pi N_0 E}}\int_{-\infty}^{\lambda}\exp\left(-\frac{(y - E)^2}{N_0 E}\right) dy \tag{D.13}
\]
FIGURE D.4 The probability density function f_Y(y) of the receiver output given that the known signal s(t) is present; the area under f_Y(y) to the left of the threshold λ represents the conditional probability of error.
Changing the variable of integration from y to

\[
z = \frac{y - E}{\sqrt{N_0 E}} \tag{D.14}
\]

we may rewrite Eq. (D.13) as

\[
\text{Prob}(\text{say } H_0 \mid H_1 \text{ is true}) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{(\lambda - E)/\sqrt{N_0 E}} \exp(-z^2)\,dz
= \frac{1}{\sqrt{\pi}}\int_{(E - \lambda)/\sqrt{N_0 E}}^{\infty} \exp(-z^2)\,dz \tag{D.15}
\]

where the second form follows from the even symmetry of the integrand.
At this point in the discussion, we digress briefly to introduce a function that is closely
related to the Gaussian distribution: the error function, defined by
\[
\text{erf}(u) = \frac{2}{\sqrt{\pi}}\int_0^u \exp(-z^2)\,dz \tag{D.16}
\]
Table E.1 of Appendix E gives values of the error function erf(u) for the argument u
in the interval 0 ≤ u ≤ 3.3. The error function has two useful properties:
1. Symmetry property, described by

\[
\text{erf}(-u) = -\text{erf}(u) \tag{D.17}
\]

2. Asymptotic property: as u approaches infinity, erf(u) approaches unity; that is,

\[
\frac{2}{\sqrt{\pi}}\int_0^{\infty} \exp(-z^2)\,dz = 1 \tag{D.18}
\]

The complementary error function is defined by

\[
\text{erfc}(u) = 1 - \text{erf}(u) = \frac{2}{\sqrt{\pi}}\int_u^{\infty} \exp(-z^2)\,dz \tag{D.19}
\]
Using Eq. (D.19), we may express the conditional probability of error of Eq. (D.15) in the compact form

\[
\text{Prob}(\text{say } H_0 \mid H_1 \text{ is true}) = \frac{1}{2}\,\text{erfc}\!\left(\frac{E - \lambda}{\sqrt{N_0 E}}\right) \tag{D.21}
\]
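Equation (D.21) can be evaluated directly with the standard complementary error function. In the sketch below, the values of E, N0, and the threshold λ (written lam) are illustrative choices, not taken from the text.

```python
from math import erfc, sqrt

E = 1.0        # signal energy in joules (illustrative)
N0 = 0.1       # noise spectral density in watts/hertz (illustrative)
lam = E / 2    # decision threshold; the midpoint is an illustrative choice

# Eq. (D.21): Prob(say H0 | H1 is true) = (1/2) erfc((E - lam) / sqrt(N0 * E))
p_error = 0.5 * erfc((E - lam) / sqrt(N0 * E))
print(p_error)    # ~0.013 for these values
```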
Concerning Eq. (D.21), the following point is noteworthy:

• The signal energy E and noise spectral density N0 have different physical interpretations, in that E is measured in joules whereas N0 is measured in watts/hertz; yet these two units are in fact equal, since one watt/hertz is one watt-second, which is one joule.
In the absence of noise, the output of the matched filter is

\[
y(t) = R_s(T - t), \qquad w(t) = 0 \tag{D.22}
\]
\[
y(t) = R_s(t - T) \tag{D.23}
\]

where, in the second line, we have used the fact that the autocorrelation function of a signal of finite energy is an even function of the lag (see Appendix A). In words, Eq. (D.23) states that the output of a filter matched to an input signal is equal to the autocorrelation function of that signal, delayed by an amount equal to the duration of the signal.
When the known signal s(t) is complex valued, the definition of the matched filter in Eq. (D.4) is modified to read

\[
h(t) =
\begin{cases}
s^*(T - t), & 0 \le t \le T \\
0, & \text{otherwise}
\end{cases} \tag{D.24}
\]

where the asterisk denotes complex conjugation. Except for this minor modification, everything else presented in this appendix remains intact.
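A minimal sketch of the complex case follows; the complex exponential used for s(t) is an illustrative choice. The filter is the conjugated, time-reversed signal, and in the absence of noise its output sampled at t = T equals the signal energy E, which in the complex case is the integral of |s(t)|² over the signaling interval.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s = np.exp(2j * np.pi * 5 * t)            # illustrative complex-valued signal

h = np.conj(s[::-1])                      # h(t) = s*(T - t), per Eq. (D.24)
y = np.convolve(s, h) * dt                # noise-free matched filter output

print(y[t.size - 1])                      # value at t = T ...
print(np.sum(np.abs(s)**2) * dt)          # ... equals the signal energy E
```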