
Digital Signal Processing

Ali Özgür Yöntem
(slides by Markus Kuhn)

Computer Laboratory, University of Cambridge

https://www.cl.cam.ac.uk/teaching/2122/{DSP,L314}/

Michaelmas 2021
CST Part II (75%) Unit / Part III / MPhil ACS

These notes are provided as an aid for following the lectures, and are not a substitute for attending.
dsp-slides-4up.pdf 2021-08-09 14:40 70836be
(slide 1)

Signals

▶ flow of information
▶ measured quantity that varies with time (or position)
▶ electrical signal received from a transducer
  (microphone, thermometer, accelerometer, antenna, etc.)
▶ electrical signal that controls a process

Continuous-time signals: voltage, current, temperature, speed, . . .
Discrete-time signals: daily minimum/maximum temperature,
lap intervals in races, sampled continuous signals, . . .

Electronics (unlike optics) can only deal easily with time-dependent
signals. Spatial signals, such as images, are typically first converted
into a time signal with a scanning process (TV, fax, etc.).
(slide 2)

Signal processing

Signals may have to be transformed in order to
▶ amplify or filter out embedded information
▶ detect patterns
▶ prepare the signal to survive a transmission channel
▶ prevent interference with other signals sharing a medium
▶ undo distortions contributed by a transmission channel
▶ compensate for sensor deficiencies
▶ find information encoded in a different domain

To do so, we also need
▶ methods to measure, characterise, model and simulate transmission channels
▶ mathematical tools that split common channels and transformations
  into easily manipulated building blocks
(slide 3)

Analog electronics

Passive networks (resistors, capacitors, inductances, crystals, SAW
filters), non-linear elements (diodes, . . . ), (roughly) linear
operational amplifiers

[Figure: passive RLC network: Uin applied across R in series with a
parallel LC tank giving Uout; frequency response peaking at
ω = 1/√(LC) (ω = 2πf); step input Uin and ringing response Uout over t]

Advantages:
▶ passive networks are highly linear over a very large dynamic range
  and large bandwidths
▶ analog signal-processing circuits require little or no power
▶ analog circuits cause little additional interference

(Uin − Uout)/R = (1/L) · ∫_{−∞}^{t} Uout dτ + C · dUout/dt
(slide 4)
Digital signal processing

Analog/digital and digital/analog converter, CPU, DSP, ASIC, FPGA.

Advantages:
▶ noise is easy to control after initial quantization
▶ highly linear (within limited dynamic range)
▶ complex algorithms fit into a single chip
▶ flexibility, parameters can easily be varied in software
▶ digital processing is insensitive to component tolerances, aging,
  environmental conditions, electromagnetic interference

But:
▶ discrete-time processing artifacts (aliasing)
▶ can require significantly more power (battery, cooling)
▶ digital clock and switching cause interference
(slide 5)

Some DSP applications

communication systems: modulation/demodulation, channel equalization, echo cancellation
consumer electronics: perceptual coding of audio and video (DAB, DVB, DVD), speech synthesis, speech recognition
music: synthetic instruments, audio effects, noise reduction
medical diagnostics: magnetic-resonance and ultrasonic imaging, X-ray computed tomography, ECG, EEG, MEG, AED, audiology
geophysics: seismology, oil exploration
astronomy: VLBI, speckle interferometry
transportation: radar, radio navigation
security: steganography, digital watermarking, biometric identification, surveillance systems, signals intelligence, electronic warfare
engineering: control systems, feature extraction for pattern recognition, sensor-data evaluation
(slide 6)

Objectives

By the end of the course, you should be able to
▶ apply basic properties of time-invariant linear systems
▶ understand sampling, aliasing, convolution, filtering, the pitfalls of spectral estimation
▶ explain the above in time and frequency domain representations
▶ use filter-design software
▶ visualise and discuss digital filters in the z-domain
▶ use the FFT for convolution, deconvolution, filtering
▶ implement, apply and evaluate simple DSP applications, e.g. in MATLAB
▶ apply transforms that reduce correlation between several signal sources
▶ understand the basic principles of several widely-used modulation and image-coding techniques.
(slide 7)

Textbooks

▶ R.G. Lyons: Understanding digital signal processing. 3rd ed., Prentice-Hall, 2010. (£68)
▶ A.V. Oppenheim, R.W. Schafer: Discrete-time signal processing. 3rd ed., Prentice-Hall, 2007. (£47)
▶ J. Stein: Digital signal processing – a computer science perspective. Wiley, 2000. (£133)
▶ S.W. Smith: Digital signal processing – a practical guide for engineers and scientists. Newnes, 2003. (£48)
▶ K. Steiglitz: A digital signal processing primer – with applications to digital audio and computer music. Addison-Wesley, 1996. (£67)
▶ Sanjit K. Mitra: Digital signal processing – a computer-based approach. McGraw-Hill, 2002. (£38)
(slide 8)
Sequences and systems

A discrete sequence {xn}_{n=−∞}^{∞} is a sequence of numbers
. . . , x−2, x−1, x0, x1, x2, . . .
where xn denotes the n-th number in the sequence (n ∈ Z). A discrete
sequence maps integer numbers onto real (or complex) numbers.
We normally abbreviate {xn}_{n=−∞}^{∞} to {xn}, or to {xn}n if the running index is not obvious.
The notation is not well standardized. Some authors write x[n] instead of xn, others x(n).

Where a discrete sequence {xn} samples a continuous function x(t) as
xn = x(ts · n) = x(n/fs),
we call ts the sampling period and fs = 1/ts the sampling frequency.

A discrete system T receives as input a sequence {xn} and transforms it
into an output sequence {yn} = T{xn}:
. . . , x2, x1, x0, x−1, . . . → [discrete system T] → . . . , y2, y1, y0, y−1, . . .
(slide 9)

Some simple sequences

Unit-step sequence:
un = 0 for n < 0; 1 for n ≥ 0
[stem plot of un over n = . . . , −3, −2, −1, 0, 1, 2, 3, . . .]

Impulse sequence:
δn = 1 for n = 0; 0 for n ≠ 0
δn = un − un−1
[stem plot of δn over n = . . . , −3, −2, −1, 0, 1, 2, 3, . . .]
(slide 10)

Sinusoidal sequences

A cosine wave, amplitude A, frequency f, phase offset ϕ:
x(t) = A · cos(2πf t + ϕ)     (the argument 2πf t + ϕ is the phase)

Sampling it at sampling rate fs results in the discrete sequence {xn}:
xn = A · cos(2πf n/fs + ϕ) = A · cos(ω̇n + ϕ)
where ω̇ = 2πf/fs is the frequency expressed in radians per sample.

MATLAB/Octave example:
  n=0:40; fs=8000;
  f=400; x=cos(2*pi*f*n/fs);
  stem(n, x); ylim([-1.1 1.1])
This shows 41 samples (≈ 1/200 s = 5 ms) of an f = 400 Hz sine wave,
sampled at fs = 8 kHz.
Exercise: Try f = 0, 1000, 2000, 3000, 4000, 5000 Hz. Try negative f.
Try sine instead of cosine. Try adding phase offsets ϕ of ±π/4, ±π/2, and ±π.
(slide 11)

Properties of sequences

A sequence {xn} is
periodic ⇔ ∃k > 0 : ∀n ∈ Z : xn = xn+k
Is a continuous function with period tp still periodic after sampling?

absolutely summable ⇔ Σ_{n=−∞}^{∞} |xn| < ∞

square summable ⇔ Σ_{n=−∞}^{∞} |xn|² < ∞ ⇔ “energy signal”
(the sum is the “energy”)

0 < lim_{k→∞} 1/(1+2k) · Σ_{n=−k}^{k} |xn|² < ∞ ⇔ “power signal”
(the limit is the “average power”)

This energy/power terminology reflects that if U is a voltage supplied to a load
resistor R, then P = U I = U²/R is the power consumed, and ∫ P(t) dt the energy. It
is used even if we drop physical units (e.g., volts) for simplicity in calculations.
(slide 12)
A brief excursion into measuring signal intensity

Root-mean-square (RMS) signal strength
DC = direct current (constant), AC = alternating current (zero mean)

Consider a time-variable signal f(t) over time interval [t1, t2]:
DC component = mean voltage = 1/(t2 − t1) · ∫_{t1}^{t2} f(τ) dτ
AC component = f(t) − DC component

How can we state the strength of an AC signal?
The root-mean-square signal strength (voltage, etc.)
rms = √( 1/(t2 − t1) · ∫_{t1}^{t2} f²(τ) dτ )
is the strength of a DC signal of equal average power.

RMS of a sine wave:
√( 1/(2πk) · ∫_{0}^{2πk} [A · sin(τ + ϕ)]² dτ ) = A/√2   for all k ∈ N, A, ϕ ∈ R
(slide 13)

Perception of signal strength

Sensation limit (SL) = lowest intensity stimulus that can still be perceived
Difference limit (DL) = smallest perceivable stimulus difference at given intensity level

Weber’s law
Difference limit ∆φ is proportional to the intensity φ of the stimulus
(except for a small correction constant a, to describe deviation of
experimental results near SL):
∆φ = c · (φ + a)

Fechner’s scale
Define a perception intensity scale ψ using the sensation limit φ0 as the
origin and the respective difference limit ∆φ = c · φ as a unit step. The
result is a logarithmic relationship between stimulus intensity and scale value:
ψ = log_c (φ/φ0)
(slide 14)
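The DC, AC and RMS definitions of slide 13 translate directly into a few
lines of MATLAB/Octave. The following sketch is not from the original
slides; the signal parameters are arbitrary example values:

  % Numerical check of the RMS formulas on a sampled sine wave.
  fs = 10000; t = (0:fs-1)/fs;        % 1 s of samples at 10 kHz
  A = 3; f = 50;
  x = A*sin(2*pi*f*t) + 1.5;          % 50 Hz AC part plus 1.5 V DC offset
  dc = mean(x);                       % DC component = mean voltage
  ac = x - dc;                        % AC component
  rms = sqrt(mean(ac.^2));            % root-mean-square of the AC part
  disp([dc, rms, A/sqrt(2)])          % rms comes out close to A/sqrt(2)

Because the 1 s window covers an integer number of 50 Hz periods, the
estimate matches A/√2 almost exactly.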

Fechner’s scale matches older subjective intensity scales that follow
differentiability of stimuli, e.g. the astronomical magnitude numbers for
star brightness introduced by Hipparchos (≈150 BC).

Stevens’ power law
A sound that is 20 DL over SL is perceived as more than twice as loud as
one that is 10 DL over SL, i.e. Fechner’s scale does not describe well
perceived intensity. A rational scale attempts to reflect subjective
relations perceived between different values of stimulus intensity φ.
Stanley Smith Stevens observed that such rational scales ψ follow a power law:
ψ = k · (φ − φ0)^a
Example coefficients a: brightness 0.33, loudness 0.6, heaviness 1.45,
temperature (warmth) 1.6.
(slide 15)

Units and decibel

Communications engineers often use logarithmic units:
▶ Quantities often vary over many orders of magnitude → difficult to
  agree on a common SI prefix (nano, micro, milli, kilo, etc.)
▶ Quotient of quantities (amplification/attenuation) usually more
  interesting than difference
▶ Signal strength usefully expressed as field quantity (voltage, current,
  pressure, etc.) or power, but quadratic relationship between these
  two (P = U²/R = I²R) rather inconvenient
▶ Perception is logarithmic (Weber/Fechner law → slide 14)
Plus: Using magic special-purpose units has its own odd attractions (→ typographers, navigators)

Neper (Np) denotes the natural logarithm of the quotient of a field
quantity F and a reference value F0. (rarely used today)
Bel (B) denotes the base-10 logarithm of the quotient of a power P and
a reference power P0. Common prefix: 10 decibel (dB) = 1 bel.
(slide 16)
Decibel

Where P is some power and P0 a 0 dB reference power, or equally where
F is a field quantity and F0 the corresponding reference level:
10 dB · log10(P/P0) = 20 dB · log10(F/F0)

Common reference values are indicated with a suffix after “dB”:
0 dBW = 1 W
0 dBm = 1 mW = −30 dBW
0 dBµV = 1 µV
0 dBu = 0.775 V = √(600 Ω × 1 mW)
0 dBSPL = 20 µPa (sound pressure level)
0 dBSL = perception threshold (sensation limit)
0 dBFS = full scale (clipping limit of analog/digital converter)

Remember:
3 dB = 2× power, 6 dB = 2× voltage/pressure/etc.
10 dB = 10× power, 20 dB = 10× voltage/pressure/etc.

W.H. Martin: Decibel – the new name for the transmission unit. Bell Syst. Tech. J., Jan. 1929.
ITU-R Recommendation V.574-4: Use of the decibel and neper in telecommunication.
(slide 17)

Types of discrete systems

. . . , x2, x1, x0, x−1, . . . → [discrete system T] → . . . , y2, y1, y0, y−1, . . .

A causal system cannot look into the future:
yn = f(xn, xn−1, xn−2, . . .)
A memory-less system depends only on the current input value:
yn = f(xn)
A delay system shifts a sequence in time:
yn = xn−d
T is a time-invariant system if for any d
{yn} = T{xn} ⇐⇒ {yn−d} = T{xn−d}.
T is a linear system if for any pair of sequences {xn} and {x′n}
T{a · xn + b · x′n} = a · T{xn} + b · T{x′n}.
(slide 18)
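The two dB formulas of slide 17 differ only in their prefactor; a short
MATLAB/Octave sketch (added here for illustration, not part of the
original slides) makes the distinction concrete:

  P_dB = @(P, P0) 10*log10(P./P0);   % power quantities
  F_dB = @(F, F0) 20*log10(F./F0);   % field quantities (voltage, pressure, ...)
  P_dB(2, 1)      % doubling the power  -> approx.  3 dB
  F_dB(10, 1)     % 10x the voltage     ->         20 dB
  P_dB(1e-3, 1)   % 1 mW re 1 W         ->        -30 dBW (= 0 dBm)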

Example: M-point moving average system

yn = 1/M · Σ_{k=0}^{M−1} xn−k = (xn−M+1 + · · · + xn−1 + xn)/M

It is causal, linear, time-invariant, with memory. With M = 4:
[Figure: input sequence x and smoothed output sequence y]
(slide 19)

Example: exponential averaging system

yn = α · xn + (1 − α) · yn−1 = α · Σ_{k=0}^{∞} (1 − α)^k · xn−k

It is causal, linear, time-invariant, with memory. With α = 1/2:
[Figure: input sequence x and smoothed output sequence y]
(slide 20)
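Both example systems are constant-coefficient difference equations and
can be run with MATLAB/Octave’s filter(b, a, x) (introduced on
slide 25). A minimal sketch, with an arbitrary pulse as test input:

  x = [zeros(1,8), ones(1,16), zeros(1,8)];  % test input: a rectangular pulse
  M = 4;                                     % 4-point moving average
  y1 = filter(ones(1,M)/M, 1, x);
  alpha = 0.5;                               % exponential averaging
  y2 = filter(alpha, [1, -(1-alpha)], x);    % yn - (1-alpha)*yn-1 = alpha*xn
  stem(0:length(x)-1, [x; y1; y2]');         % compare input and both outputs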
Example: accumulator system

yn = Σ_{k=−∞}^{n} xk

It is causal, linear, time-invariant, with memory.
[Figure: input sequence x and accumulated output sequence y]
(slide 21)

Example: backward difference system

yn = xn − xn−1

It is causal, linear, time-invariant, with memory.
[Figure: input sequence x and differenced output sequence y]
(slide 22)

Other examples

Time-invariant non-linear memory-less systems:
yn = x²n,   yn = log2 xn,   yn = max{min{⌊256 xn⌋, 255}, 0}

Linear but not time-invariant systems:
yn = xn for n ≥ 0, 0 for n < 0   (i.e., yn = xn · un)
yn = x⌊n/4⌋
yn = xn · ℜ(e^{jω̇n})

Linear time-invariant non-causal systems:
yn = 1/2 · (xn−1 + xn+1)
yn = Σ_{k=−9}^{9} xn+k · (sin(πkω)/(πkω)) · [0.5 + 0.5 · cos(πk/10)]
(slide 23)

Constant-coefficient difference equations

Of particular practical interest are causal linear time-invariant systems of the form
yn = b0 · xn − Σ_{k=1}^{N} ak · yn−k
[Block diagram: xn scaled by b0 feeds yn; feedback taps −a1, −a2, −a3 act
on yn−1, yn−2, yn−3 obtained through z−1 delay elements]

Block diagram representation of sequence operations:
Addition: xn, x′n → (+) → xn + x′n
Multiplication by constant: xn → [a] → a xn
Delay: xn → [z−1] → xn−1

The ak and bm are constant coefficients.
(slide 24)
or

yn = Σ_{m=0}^{M} bm · xn−m
[Block diagram: feed-forward taps b0, b1, b2, b3 acting on xn, xn−1, xn−2, xn−3]

or the combination of both:

Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m
[Block diagram: direct form I, feed-forward taps b0 . . . b3 and feedback
taps −a1 . . . −a3, output scaled by a0−1]

The MATLAB function filter is an efficient implementation of the last variant.
(slide 25)

Convolution

Another example of an LTI system is
yn = Σ_{k=−∞}^{∞} ak · xn−k
where {ak} is a suitably chosen sequence of coefficients.

This operation over sequences is called convolution and is defined as
{pn} ∗ {qn} = {rn} ⇐⇒ ∀n ∈ Z : rn = Σ_{k=−∞}^{∞} pk · qn−k.

If {yn} = {an} ∗ {xn} is a representation of an LTI system T, with
{yn} = T{xn}, then we call the sequence {an} the impulse response of
T, because {an} = T{δn}, as {an} ∗ {δn} = {an}, Σk ak · δn−k = an.

If f and g are continuous functions, their convolution is defined similarly as the integral
(f ∗ g)(t) = ∫_{−∞}^{∞} f(s) g(t − s) ds.
But what is the continuous equivalent of {δn}? More on that later . . .
(slide 26)
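For finite sequences, MATLAB/Octave offers both views of the same
operation: conv computes the convolution sum directly, and filter
evaluates the difference equation. A small sketch (example values, not
from the slides) showing they agree for an FIR system (a = 1):

  b = [1 2 1]/4;              % example impulse response {a_k} of length 3
  x = randn(1, 20);           % arbitrary input sequence
  y1 = filter(b, 1, x);       % difference-equation implementation
  y2 = conv(b, x);            % convolution sum, length 20+3-1
  max(abs(y1 - y2(1:20)))     % -> 0 up to rounding: identical where defined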

Convolution examples

[Figure: example sequences A, B, C, D, E, F and the convolutions
A∗B, A∗C, C∗A, A∗E, D∗E, A∗F]
(slide 27)

Properties of convolution

For arbitrary sequences {pn}, {qn}, {rn} and scalars a, b:
▶ Convolution is associative
  ({pn} ∗ {qn}) ∗ {rn} = {pn} ∗ ({qn} ∗ {rn})
▶ Convolution is commutative
  {pn} ∗ {qn} = {qn} ∗ {pn}
▶ Convolution is linear
  {pn} ∗ {a · qn + b · rn} = a · ({pn} ∗ {qn}) + b · ({pn} ∗ {rn})
▶ The impulse sequence (slide 10) is neutral under convolution
  {pn} ∗ {δn} = {δn} ∗ {pn} = {pn}
▶ Sequence shifting is equivalent to convolving with a shifted impulse
  {pn−d}n = {pn} ∗ {δn−d}n
(slide 28)
All LTI systems just apply convolution

Proof:
Any sequence {xn} can be decomposed into a weighted sum of shifted
impulse sequences:
{xn} = Σ_{k=−∞}^{∞} xk · {δn−k}

Let’s see what happens if we apply a linear(∗) time-invariant(∗∗) system T
to such a decomposed sequence:
T{xn} = T( Σ_{k=−∞}^{∞} xk · {δn−k} ) =(∗) Σ_{k=−∞}^{∞} xk · T{δn−k}
      =(∗∗) Σ_{k=−∞}^{∞} xk · ({δn−k} ∗ T{δn}) = ( Σ_{k=−∞}^{∞} xk · {δn−k} ) ∗ T{δn}
      = {xn} ∗ T{δn}    q.e.d.

⇒ The impulse response T{δn} fully characterizes an LTI system.
(slide 29)

Direct form I and II implementations

[Block diagrams: direct form I (feed-forward taps b0 . . . b3, then feedback
taps −a1 . . . −a3 with gain a0−1, two delay chains) = direct form II
(feedback section first, both sections sharing a single z−1 delay chain)]

The block diagram representation of the constant-coefficient difference
equation on slide 25 is called the direct form I implementation.
The number of delay elements can be halved by using the commutativity
of convolution to swap the two feedback loops, leading to the direct form
II implementation of the same LTI system.
These two forms are only equivalent with ideal arithmetic (no rounding errors and range limits).
(slide 30)

Convolution: optics example

If a projective lens is out of focus, the blurred image is equal to the
original image convolved with the aperture shape (e.g., a filled circle):
[Figure: sharp image ∗ disk = blurred image; lens with aperture a, image
plane at distance s, focal plane at distance f]

Point-spread function h (disk, r = as/2f):
h(x, y) = 1/(r²π) if x² + y² ≤ r²; 0 if x² + y² > r²

Original image I, blurred image B = I ∗ h, i.e.
B(x, y) = ∫∫ I(x−x′, y−y′) · h(x′, y′) · dx′ dy′
(slide 31)

Convolution: electronics example

[Figure: RC low-pass: Uin across series resistor R and capacitor C to
ground, Uout across C; sine-wave and step responses of Uout over t]

Any passive network (resistors, capacitors, inductors) convolves its input
voltage Uin with an impulse response function h, leading to
Uout = Uin ∗ h, that is
Uout(t) = ∫_{−∞}^{∞} Uin(t − τ) · h(τ) · dτ

In the above example:
(Uin − Uout)/R = C · dUout/dt,   h(t) = 1/(RC) · e^{−t/(RC)} for t ≥ 0; 0 for t < 0
(slide 32)
Adding sine waves

Adding together sine waves of equal frequency, but arbitrary amplitude
and phase, results in another sine wave of the same frequency:
A1 · sin(ωt + ϕ1) + A2 · sin(ωt + ϕ2) = A · sin(ωt + ϕ)

Why?
Think of A · sin(ωt + ϕ) as the height of an arrow of length A, rotating
ω/(2π) times per second, with start angle ϕ (radians) at t = 0.
Consider two more such arrows, of length A1 and A2, with start angles ϕ1 and ϕ2.
A1 and A2 stuck together are as high as A, all three rotating at the same frequency.

But adding sine waves as vectors (A1, ϕ1) and (A2, ϕ2) in polar coordinates is cumbersome:
A = √(A1² + A2² + 2 A1 A2 cos(ϕ2 − ϕ1)),
tan ϕ = (A1 sin ϕ1 + A2 sin ϕ2)/(A1 cos ϕ1 + A2 cos ϕ2)
(slide 33)

Cartesian coordinates for sine waves

Sine waves of any amplitude A and phase (start angle) ϕ can be
represented as linear combinations of sin(ωt) and cos(ωt):
A · sin(ωt + ϕ) = x · sin(ωt) + y · cos(ωt),   cos(ωt) = sin(ωt + 90°)
where
x = A · cos(ϕ), y = A · sin(ϕ)
and
A = √(x² + y²), tan ϕ = y/x.
Base: two rotating arrows with start angles 0° [height = sin(ωt)] and 90° [height = cos(ωt)].

Adding two sine waves as vectors in Cartesian coordinates is simple:
f1(t) = x1 · sin(ωt) + y1 · cos(ωt)
f2(t) = x2 · sin(ωt) + y2 · cos(ωt)
f1(t) + f2(t) = (x1 + x2) · sin(ωt) + (y1 + y2) · cos(ωt)
(slide 34)

Why are sine waves useful?

1) Sine-wave sequences form a family of discrete sequences that is
closed under convolution with arbitrary sequences.

Convolution of a discrete sequence {xn} with another sequence {hn} is
nothing but adding together scaled and delayed copies of {xn}.
Think again of {hn} as decomposed into a sum of impulses:
{xn} ∗ {hn} = {xn} ∗ Σk hk · {δn−k}n = Σk hk · ({xn} ∗ {δn−k}n) = Σk hk · {xn−k}n

If {xn} is a sampled sine wave of frequency f, i.e.
xn = Ax · sin(2πf t + φx)
then {yn} = {xn} ∗ {hn} = Σk hk · {xn−k}n is another sampled sine
wave of frequency f, i.e. for each {hn} there exists a pair (Ay, φy) with
yn = Ay · sin(2πf t + φy)
The equivalent applies for continuous sine waves and convolution.
(slide 35)

Why are sine waves useful?

2) Sine waves are orthogonal to each other

The term “orthogonal” is used here in the context of an (infinitely dimensional)
vector space, where the “vectors” are functions of the form f : R → R
(or f : R → C) and the scalar product is defined as
f · g = ∫ f(t) · g(t) dt.

Over integer (half-)periods:
m, n ∈ N, m ≠ n ⇒ ∫_{0}^{π} sin(nt) sin(mt) dt = 0
m, n ∈ N ⇒ ∫_{−π}^{π} sin(nt) cos(mt) dt = 0

We can even (with some handwaving) extend this to improper integrals:
∫_{−∞}^{∞} sin(ω1 t + ϕ1) · sin(ω2 t + ϕ2) dt “=” 0
⇐⇒ ω1 ≠ ω2 ∨ ϕ1 − ϕ2 = (2k + 1)π/2 (k ∈ Z)

They can be used to form an orthogonal function basis for a transform.
(slide 36)
[Figure: sin(1t), sin(2t), and their product sin(1t)·sin(2t) over
t ∈ [0, 2π]]
(slide 37)

Why are exponential functions useful?

Adding together two exponential functions with the same base z, but
different scale factor and offset, results in another exponential function
with the same base:
A1 · z^{t+ϕ1} + A2 · z^{t+ϕ2} = A1 · z^t · z^{ϕ1} + A2 · z^t · z^{ϕ2}
                              = (A1 · z^{ϕ1} + A2 · z^{ϕ2}) · z^t = A · z^t

Likewise, if we convolve a sequence {xn} of values
. . . , z−3, z−2, z−1, 1, z, z², z³, . . .
xn = z^n with an arbitrary sequence {hn}, we get {yn} = {z^n} ∗ {hn},
yn = Σ_{k=−∞}^{∞} xn−k · hk = Σ_{k=−∞}^{∞} z^{n−k} · hk = z^n · Σ_{k=−∞}^{∞} z^{−k} · hk = z^n · H(z)
where H(z) is independent of n.

Exponential sequences are closed under convolution with arbitrary sequences.
The same applies in the continuous case.
(slide 38)

Why are complex numbers so useful?

1) They give us all n solutions (“roots”) of equations involving
polynomials up to degree n (the “√−1 = j” story).
2) They give us the “great unifying theory” that combines sine and
exponential functions:
cos(θ) = 1/2 · (e^{jθ} + e^{−jθ})
sin(θ) = 1/(2j) · (e^{jθ} − e^{−jθ})
or
cos(ωt + ϕ) = 1/2 · (e^{j(ωt+ϕ)} + e^{−j(ωt+ϕ)})
or
cos(ω̇n + ϕ) = ℜ(e^{j(ω̇n+ϕ)}) = ℜ[(e^{jω̇})^n · e^{jϕ}]
sin(ω̇n + ϕ) = ℑ(e^{j(ω̇n+ϕ)}) = ℑ[(e^{jω̇})^n · e^{jϕ}]

Notation: ℜ(a + jb) := a, ℑ(a + jb) := b and (a + jb)∗ := a − jb, where j² = −1 and a, b ∈ R.
Then ℜ(x) = 1/2 · (x + x∗) and ℑ(x) = 1/(2j) · (x − x∗) for all x ∈ C.
(slide 39)

We can now represent sine waves as projections of a rotating complex
vector. This allows us to represent sine-wave sequences as exponential
sequences with basis e^{jω̇}.
A phase shift in such a sequence corresponds to a rotation of a complex vector.

3) Complex multiplication allows us to modify the amplitude and phase
of a complex rotating vector using a single operation and value.

Rotation of a 2D vector in (x, y)-form is notationally slightly messy, but
fortunately j² = −1 does exactly what is required here:
(x3, y3)ᵀ = ( x2  −y2 ; y2  x2 ) · (x1, y1)ᵀ = (x1 x2 − y1 y2, x1 y2 + x2 y1)ᵀ
z1 = x1 + jy1, z2 = x2 + jy2
z1 · z2 = x1 x2 − y1 y2 + j(x1 y2 + x2 y1)
[Figure: vectors (x1, y1), (x2, y2), (−y2, x2) and the product (x3, y3)]
(slide 40)
Complex phasors

Amplitude and phase are two distinct characteristics of a sine function
that are inconvenient to keep separate notationally.
Complex functions (and discrete sequences) of the form
(A · e^{jϕ}) · e^{jωt} = A · e^{j(ωt+ϕ)} = A · [cos(ωt + ϕ) + j · sin(ωt + ϕ)]
(where j² = −1) are able to represent both amplitude A ∈ R+ and phase
ϕ ∈ [0, 2π) in one single algebraic object A · e^{jϕ} ∈ C.

Thanks to complex multiplication, we can also incorporate in one single
factor both a multiplicative change of amplitude and an additive change
of phase of such a function. This makes discrete sequences of the form
xn = e^{jω̇n}
eigensequences with respect to an LTI system T, because for each ω̇,
there is a complex number (eigenvalue) H(ω̇) such that
T{xn} = H(ω̇) · {xn}
In the notation of slide 38, where the argument of H is the base, we would write H(e^{jω̇}).
(slide 41)

Recall: Fourier transform

We define the Fourier integral transform and its inverse as
F{g(t)}(f) = G(f) = ∫_{−∞}^{∞} g(t) · e^{−2πjf t} dt
F−1{G(f)}(t) = g(t) = ∫_{−∞}^{∞} G(f) · e^{2πjf t} df

Many equivalent forms of the Fourier transform are used in the literature. There is no strong
consensus on whether the forward transform uses e^{−2πjf t} and the backwards transform e^{2πjf t}, or
vice versa. The above form uses the ordinary frequency f, whereas some authors prefer the angular
frequency ω = 2πf:
F{h(t)}(ω) = H(ω) = α ∫_{−∞}^{∞} h(t) · e^{∓jωt} dt
F−1{H(ω)}(t) = h(t) = β ∫_{−∞}^{∞} H(ω) · e^{±jωt} dω
This substitution introduces factors α and β such that αβ = 1/(2π). Some authors set α = 1
and β = 1/(2π), to keep the convolution theorem free of a constant prefactor; others prefer the
unitary form α = β = 1/√(2π), in the interest of symmetry.
(slide 42)

Properties of the Fourier transform

If
x(t) •−◦ X(f) and y(t) •−◦ Y(f)
are pairs of functions that are mapped onto each other by the Fourier
transform, then so are the following pairs.

Linearity:
a x(t) + b y(t) •−◦ a X(f) + b Y(f)
Time scaling:
x(at) •−◦ 1/|a| · X(f/a)
Frequency scaling:
1/|a| · x(t/a) •−◦ X(af)
(slide 43)

Time shifting:
x(t − ∆t) •−◦ X(f) · e^{−2πjf ∆t}
Frequency shifting:
x(t) · e^{2πj∆f t} •−◦ X(f − ∆f)
Parseval’s theorem (total energy):
∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df
(slide 44)
Fourier transform example: rect and sinc

The Fourier transform of the “rectangular function”
rect(t) = 1 if |t| < 1/2; 1/2 if |t| = 1/2; 0 otherwise
is the “(normalized) sinc function”
F{rect(t)}(f) = ∫_{−1/2}^{1/2} e^{−2πjf t} dt = sin(πf)/(πf) = sinc(f)
and vice versa
F{sinc(t)}(f) = rect(f).

Some noteworthy properties of these functions:
▶ ∫_{−∞}^{∞} sinc(t) dt = 1 = ∫_{−∞}^{∞} rect(t) dt
▶ sinc(0) = 1 = rect(0)
▶ ∀n ∈ Z \ {0} : sinc(n) = 0
[Figure: rect(t) on [−1/2, 1/2] and sinc(f) over f ∈ [−3, 3]]
(slide 45)

Convolution theorem

Convolution in the time domain is equivalent to (complex) scalar
multiplication in the frequency domain:
F{(f ∗ g)(t)} = F{f(t)} · F{g(t)}

Proof: z(r) = ∫_s x(s) y(r − s) ds ⇐⇒
∫_r z(r) e^{−jωr} dr = ∫_r ∫_s x(s) y(r − s) e^{−jωr} ds dr
= ∫_s x(s) ∫_r y(r − s) e^{−jωr} dr ds
= ∫_s x(s) e^{−jωs} ∫_r y(r − s) e^{−jω(r−s)} dr ds     [substituting t := r − s]
= ∫_s x(s) e^{−jωs} ∫_t y(t) e^{−jωt} dt ds = ∫_s x(s) e^{−jωs} ds · ∫_t y(t) e^{−jωt} dt.

Convolution in the frequency domain corresponds to scalar multiplication
in the time domain:
F{f(t) · g(t)} = F{f(t)} ∗ F{g(t)}
This second form is also called “modulation theorem”, as it describes what happens in the
frequency domain with amplitude modulation of a signal (see slide 54).
The proof is very similar to the one above.

Both equally work for the inverse Fourier transform:
F−1{(F ∗ G)(f)} = F−1{F(f)} · F−1{G(f)}
F−1{F(f) · G(f)} = F−1{F(f)} ∗ F−1{G(f)}
(slide 46)

Dirac delta function

The continuous equivalent of the impulse sequence {δn} is known as
Dirac delta function δ(x). It is a generalized function, defined such that
δ(x) = 0 for x ≠ 0; ∞ for x = 0
∫_{−∞}^{∞} δ(x) dx = 1

and can be thought of as the limit of function sequences such as
δ(x) = lim_{n→∞} (0 if |x| ≥ 1/n; n/2 if |x| < 1/n)
or
δ(x) = lim_{n→∞} n/√π · e^{−n²x²}

The delta function is mathematically speaking not a function, but a distribution, that is an
expression that is only defined when integrated.
(slide 47)

Some properties of the Dirac delta function:

∫_{−∞}^{∞} f(x) δ(x − a) dx = f(a)
∫_{−∞}^{∞} e^{±2πjxa} dx = δ(a)
Σ_{i=−∞}^{∞} e^{±2πjixa} = 1/|a| · Σ_{i=−∞}^{∞} δ(x − i/a)
δ(ax) = 1/|a| · δ(x)

Fourier transform:
F{δ(t)}(f) = ∫_{−∞}^{∞} δ(t) · e^{−2πjf t} dt = e^0 = 1
F−1{1}(t) = ∫_{−∞}^{∞} 1 · e^{2πjf t} df = δ(t)
(slide 48)
Linking the Dirac delta with the Fourier transform

The Fourier transform of 1 follows from the Dirac delta’s ability to
sample inside an integral:
g(t) = F−1(F(g))(t)
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} g(s) · e^{−2πjf s} · ds ) · e^{2πjf t} · df
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjf s} · e^{2πjf t} · df ) · g(s) · ds
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjf (t−s)} · df ) · g(s) · ds
where the inner integral acts as δ(t − s).
So if δ has the property
g(t) = ∫_{−∞}^{∞} δ(t − s) · g(s) · ds
then
∫_{−∞}^{∞} e^{−2πjf (t−s)} df = δ(t − s)
(slide 49)

[Figure: ∫_{−∞}^{∞} e^{2πjtf} df = δ(t), illustrated by the partial sum
Σ_{i=1}^{10} cos(2πfi t) ≈ δ(t) with f1, . . . , f10 ∈ [0, 3] chosen uniformly
at random: already a sum of 10 cosines shows a sharp peak at t = 0]
(slide 50)


[Figure: Σ_{n=−∞}^{∞} e^{±2πjnt} = Σ_{n=−∞}^{∞} δ(t − n), illustrated by the
partial sum Σ_{n=1}^{5} cos(2πnt) ≈ Σ_{n=−∞}^{∞} δ(t − n): peaks at every
integer t]
(slide 51)

Sine and cosine in the frequency domain

cos(2πf0 t) = 1/2 · e^{2πjf0 t} + 1/2 · e^{−2πjf0 t}
sin(2πf0 t) = 1/(2j) · e^{2πjf0 t} − 1/(2j) · e^{−2πjf0 t}

F{cos(2πf0 t)}(f) = 1/2 · δ(f − f0) + 1/2 · δ(f + f0)
F{sin(2πf0 t)}(f) = −j/2 · δ(f − f0) + j/2 · δ(f + f0)

[Figure: spectra: real impulses of weight 1/2 at ±f0 for the cosine;
imaginary impulses of weight ∓j/2 at ±f0 for the sine]

As any x(t) ∈ R can be decomposed into sine and cosine functions, the spectrum of any
real-valued signal will show the symmetry X(−f) = [X(f)]∗, where ∗ denotes the complex
conjugate (i.e., negated imaginary part).
(slide 52)
Fourier transform symmetries

We call a function x(t)
odd if x(−t) = −x(t)
even if x(−t) = x(t)
and ·∗ is the complex conjugate, such that (a + jb)∗ = (a − jb).
Then
x(t) is real ⇔ X(−f) = [X(f)]∗
x(t) is imaginary ⇔ X(−f) = −[X(f)]∗
x(t) is even ⇔ X(f) is even
x(t) is odd ⇔ X(f) is odd
x(t) is real and even ⇔ X(f) is real and even
x(t) is real and odd ⇔ X(f) is imaginary and odd
x(t) is imaginary and even ⇔ X(f) is imaginary and even
x(t) is imaginary and odd ⇔ X(f) is real and odd
(slide 53)

Example: amplitude modulation

Communication channels usually permit only the use of a given frequency
interval, such as 300–3400 Hz for the analog phone network or 590–598
MHz for TV channel 36. Modulation with a carrier frequency fc shifts
the spectrum of a signal x(t) into the desired band.
Amplitude modulation (AM):
y(t) = A · cos(2πt fc) · x(t)

[Figure: baseband spectrum X(f) on −fl < f < fl, convolved with carrier
impulses at ±fc, gives Y(f) with copies around ±fc]

The spectrum of the baseband signal in the interval −fl < f < fl is
shifted by the modulation to the intervals ±fc − fl < f < ±fc + fl.
How can such a signal be demodulated?
(slide 54)
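As a quick illustration of this spectral shift (not part of the original
slides; signal and carrier values are arbitrary), one can modulate a
two-tone baseband signal in MATLAB/Octave and look at the magnitude
spectrum:

  fs = 48000; t = (0:fs-1)/fs;                 % 1 s of samples, 1 Hz DFT bins
  x  = sin(2*pi*300*t) + 0.5*sin(2*pi*800*t);  % baseband signal, fl < 1 kHz
  fc = 10000;                                  % carrier frequency
  y  = cos(2*pi*fc*t) .* x;                    % AM: energy at fc±300, fc±800 Hz
  plot(0:fs-1, abs(fft(y))); xlim([0 2*fc]);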

Sampling using a Dirac comb

The loss of information in the sampling process that converts a
continuous function x(t) into a discrete sequence {xn} defined by
xn = x(ts · n) = x(n/fs)
can be modelled through multiplying x(t) by a comb of Dirac impulses
s(t) = ts · Σ_{n=−∞}^{∞} δ(t − ts · n)
to obtain the sampled function
x̂(t) = x(t) · s(t)
The function x̂(t) now contains exactly the same information as the
discrete sequence {xn}, but is still in a form that can be analysed using
the Fourier transform on continuous functions.
(slide 55)

The Fourier transform of a Dirac comb

s(t) = ts · Σ_{n=−∞}^{∞} δ(t − ts · n) = Σ_{n=−∞}^{∞} e^{2πjnt/ts}
is another Dirac comb
S(f) = F{ ts · Σ_{n=−∞}^{∞} δ(t − ts n) }(f)
     = ts · ∫_{−∞}^{∞} Σ_{n=−∞}^{∞} δ(t − ts n) e^{−2πjf t} dt = Σ_{n=−∞}^{∞} δ(f − n/ts).

[Figure: Dirac comb s(t) with impulse spacing ts; its spectrum S(f), a
Dirac comb with impulse spacing fs]
(slide 56)
Sampling and aliasing

[Figure: the samples of cos(2πtf) and of cos(2πt(k·fs ± f)) coincide at
the sampling instants]

Sampled at frequency fs, the function cos(2πtf) cannot be distinguished
from cos[2πt(k fs ± f)] for any k ∈ Z.
(slide 57)

Frequency-domain view of sampling

[Figure: time domain: x(t) · s(t) = x̂(t); frequency domain:
X(f) ∗ S(f) = X̂(f), spectrum copies spaced fs apart]

Sampling a signal in the time domain corresponds in the frequency
domain to convolving its spectrum with a Dirac comb. The resulting
copies of the original signal spectrum in the spectrum of the sampled
signal are called “images”.
(slide 58)

Discrete-time Fourier transform (DTFT)

The Fourier transform of a sampled signal
x̂(t) = ts · Σ_{n=−∞}^{∞} xn · δ(t − ts · n)
is
F{x̂(t)}(f) = X̂(f) = ∫_{−∞}^{∞} x̂(t) · e^{−2πjf t} dt = ts · Σ_{n=−∞}^{∞} xn · e^{−2πj(f/fs)n}

The inverse transform is
x̂(t) = ∫_{−∞}^{∞} X̂(f) · e^{2πjf t} df   or   xm = ∫_{−fs/2}^{fs/2} X̂(f) · e^{2πj(f/fs)m} df.

The DTFT is also commonly expressed using the normalized frequency
ω̇ = 2πf/fs (radians per sample), and the notation
X(e^{jω̇}) = Σn xn · e^{−jω̇n}
is customary, to highlight both the periodicity of the DTFT and its
relationship with the z-transform of {xn} (see slide 125).
(slide 59)

[Figure: two example sequences (time-domain samples) and one period of
their DTFTs, real and imaginary parts]
(slide 60)
[Figure: four further example sequences (time-domain samples) and one
period of their DTFTs, real and imaginary parts]
(slides 61, 62)

[Figure: two more example sequences (time-domain samples) and one
period of their DTFTs, real and imaginary parts]
(slide 63)

Properties of the DTFT

The DTFT is periodic:
X̂(f) = X̂(f + k fs) or X(e^{jω̇}) = X(e^{j(ω̇+2πk)}) ∀k ∈ Z

Beyond that, the DTFT is just the Fourier transform applied to a discrete
sequence, and inherits the properties of the continuous Fourier transform, e.g.
▶ Linearity
▶ Symmetries
▶ Convolution and modulation theorem:
  {xn} ∗ {yn} = {zn} ⇐⇒ X(e^{jω̇}) · Y(e^{jω̇}) = Z(e^{jω̇})
  and
  xn · yn = zn ⇐⇒ ∫_{−π}^{π} X(e^{jω̇′}) · Y(e^{j(ω̇−ω̇′)}) dω̇′ = Z(e^{jω̇})
(slide 64)
Nyquist limit and anti-aliasing filters

If the (double-sided) bandwidth of a signal to be sampled is larger than
the sampling frequency fs, the images of the signal that emerge during
sampling may overlap with the original spectrum.
Such an overlap will hinder reconstruction of the original continuous
signal by removing the aliasing frequencies with a reconstruction filter.
Therefore, it is advisable to limit the bandwidth of the input signal to the
sampling frequency fs before sampling, using an anti-aliasing filter.
In the common case of a real-valued base-band signal (with frequency
content down to 0 Hz), all frequencies f that occur in the signal with
non-zero power should be limited to the interval −fs/2 < f < fs/2.
The upper limit fs/2 for the single-sided bandwidth of a baseband signal
is known as the “Nyquist limit”.
(slide 65)

Nyquist limit and anti-aliasing filters

[Figure: left, without anti-aliasing filter: X(f) with single-sided
bandwidth beyond fs/2 and X̂(f) with overlapping images; right, with
anti-aliasing filter: X(f) band-limited to the Nyquist limit fs/2 and
X̂(f) with separated images and a reconstruction filter]

Anti-aliasing and reconstruction filters both suppress frequencies outside |f| < fs/2.
(slide 66)

Reconstruction of a continuous band-limited waveform

The ideal anti-aliasing filter for eliminating any frequency content above
fs/2 before sampling with a frequency of fs has the Fourier transform
H(f) = 1 if |f| < fs/2; 0 if |f| > fs/2 = rect(ts f).
This leads, after an inverse Fourier transform, to the impulse response
h(t) = fs · sin(πtfs)/(πtfs) = 1/ts · sinc(t/ts).
The original band-limited signal can be reconstructed by convolving this
with the sampled signal x̂(t), which eliminates the periodicity of the
frequency domain introduced by the sampling process:
x(t) = h(t) ∗ x̂(t)
Note that sampling h(t) gives the impulse function: h(t) · s(t) = δ(t).
(slide 67)

[Figure: impulse response of the ideal low-pass filter with cut-off
frequency fs/2, plotted over t · fs ∈ [−3, 3]: a sinc pulse with zero
crossings at all non-zero integer multiples of ts]
(slide 68)
Reconstruction filter example

[Figure: sampled signal, interpolation result, and the scaled/shifted
sin(x)/x pulses whose sum forms the reconstruction]
(slide 69)

If before being sampled with xn = x(n/fs) the signal x(t) satisfied the
Nyquist limit
F{x(t)}(f) = ∫_{−∞}^{∞} x(t) · e^{−2πjf t} dt = 0 for all |f| ≥ fs/2
then it can be reconstructed by interpolation with h(t) = 1/ts · sinc(t/ts):

x(t) = ∫_{−∞}^{∞} h(s) · x̂(t − s) · ds
     = ∫_{−∞}^{∞} 1/ts · sinc(s/ts) · ts · Σ_{n=−∞}^{∞} xn · δ(t − s − ts · n) · ds
     = Σ_{n=−∞}^{∞} xn · ∫_{−∞}^{∞} sinc(s/ts) · δ(t − s − ts · n) · ds
     = Σ_{n=−∞}^{∞} xn · sinc((t − ts · n)/ts) = Σ_{n=−∞}^{∞} xn · sinc(t/ts − n)
     = Σ_{n=−∞}^{∞} xn · sin π(t/ts − n) / (π(t/ts − n))
(slide 70)
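The interpolation formula can be tried out numerically. A minimal
MATLAB/Octave sketch (example parameters chosen here; sinc is defined
inline so the script does not depend on a toolbox):

  sinc_ = @(t) (t==0) + sin(pi*t)./(pi*t + (t==0));  % sinc with sinc_(0) = 1
  fs = 10; ts = 1/fs;
  n  = 0:19;                         % 20 samples of a 1.3 Hz tone (< fs/2)
  xn = cos(2*pi*1.3*n*ts);
  t  = 0:0.001:1.9;                  % dense time axis for the reconstruction
  x  = zeros(size(t));
  for k = n
    x = x + xn(k+1) * sinc_(t/ts - k);   % sum of scaled, shifted sinc pulses
  end
  plot(t, x, n*ts, xn, 'o');         % reconstruction passes through the samples

Away from the ends of the finite sample block, the curve closely matches
the original 1.3 Hz tone.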

Reconstruction filters

The mathematically ideal form of a reconstruction filter for suppressing
aliasing frequencies interpolates the sampled signal xn = x(ts · n) back
into the continuous waveform
x(t) = Σ_{n=−∞}^{∞} xn · sin π(t/ts − n) / (π(t/ts − n)).

Choice of sampling frequency
Due to causality and economic constraints, practical analog filters can only
approximate such an ideal low-pass filter. Instead of a sharp transition between the
“pass band” (< fs/2) and the “stop band” (> fs/2), they feature a “transition band”
in which their signal attenuation gradually increases.
The sampling frequency is therefore usually chosen somewhat higher than twice the
highest frequency of interest in the continuous signal (e.g., 4×). On the other hand,
the higher the sampling frequency, the higher are CPU, power and memory
requirements. Therefore, the choice of sampling frequency is a tradeoff between signal
quality, analog filter cost and digital subsystem expenses.
(slide 71)

Interpolation through convolution

[Figure: each sample scales a shifted sinc pulse; the sum of these pulses
reconstructs the continuous waveform]
(slide 72)
Band-pass signal sampling

Sampled signals can also be reconstructed if their spectral components
remain entirely within the interval n · fs/2 < |f| < (n + 1) · fs/2 for some
n ∈ N. (The baseband case discussed so far is just n = 0.)

[Figure: X(f) occupying bands around ±(5/4)fs, anti-aliasing filter,
X̂(f) with interleaved images, reconstruction filter; here n = 2]

In this case, the aliasing copies of the positive and the negative
frequencies will interleave instead of overlap, and can therefore be
removed again later by a reconstruction filter.
The ideal reconstruction filter for this sampling technique will only allow frequencies in the interval
[n · fs/2, (n + 1) · fs/2] to pass through. The impulse response of such a band-pass filter can be
obtained by amplitude modulating a low-pass filter, or by subtracting two low-pass filters:
h(t) = fs · sin(πtfs/2)/(πtfs/2) · cos(2πtfs · (2n + 1)/4)
     = (n + 1)fs · sin(πt(n + 1)fs)/(πt(n + 1)fs) − nfs · sin(πtnfs)/(πtnfs).
(slide 73)

Spectrum of a periodic signal

A signal x(t) that is periodic with frequency fp can be factored into a
single period ẋ(t) convolved with an impulse comb p(t). This
corresponds in the frequency domain to the multiplication of the
spectrum of the single period with a comb of impulses spaced fp apart.

[Figure: x(t) = ẋ(t) ∗ p(t) in the time domain; X(f) = Ẋ(f) · P(f) in
the frequency domain]
(slide 74)

Spectrum of a sampled signal

A signal x(t) that is sampled with frequency fs has a spectrum that is
periodic with a period of fs.

[Figure: x(t) · s(t) = x̂(t) in the time domain; X(f) ∗ S(f) = X̂(f) in
the frequency domain]
(slide 75)

Continuous vs discrete Fourier transform

▶ Sampling a continuous signal makes its spectrum periodic
▶ A periodic signal has a sampled spectrum

We sample a signal x(t) with fs, getting x̂(t). We take n consecutive
samples of x̂(t) and repeat these periodically, getting a new signal ẍ(t)
with period n/fs. Its spectrum Ẍ(f) is sampled (i.e., has non-zero
value) at frequency intervals fs/n and repeats itself with a period fs.
Now both ẍ(t) and its spectrum Ẍ(f) are finite vectors of length n.

[Figure: periodic sampled signal ẍ(t) and its sampled periodic spectrum Ẍ(f)]
(slide 76)
If x(t) has period tp = n · ts, then after sampling it at rate ts we have
ẍ(t) = x(t) · s(t) = ts · Σ_{i=−∞}^{∞} xi · δ(t − ts · i)
     = ts · Σ_{l=−∞}^{∞} Σ_{i=0}^{n−1} xi · δ(t − ts · (i + nl))
and the Fourier transform of that is
F{ẍ(t)}(f) = Ẍ(f) = ∫_{−∞}^{∞} ẍ(t) · e^{−2πjf t} dt
= ts · Σ_{l=−∞}^{∞} Σ_{i=0}^{n−1} xi · e^{−2πj(f/fs)·(i+nl)}
= ts · Σ_{l=−∞}^{∞} e^{−2πj(f/fs)·nl} · Σ_{i=0}^{n−1} xi · e^{−2πj(f/fs)·i}
where the first factor equals 1/(ts n) · Σl δ(f − (l/n) fs).
Recall that Σ_{i=−∞}^{∞} e^{±2πjixa} = 1/|a| · Σ_{i=−∞}^{∞} δ(x − i/a) and map x = f, a = n/fs and i = l.

After substituting k := f/fp = (f/fs) · n, i.e. f/fs = k/n and f = k fp:
Ẍ(k fp) = 1/n · Σ_{l=−∞}^{∞} δ(k fp − l fp) · Σ_{i=0}^{n−1} xi · e^{−2πj(ki/n)}
Here the first factor is δ(0) if k ∈ Z and 0 if k ∉ Z; the second sum is Xk.
Show that Xk = Xk±n for all k ∈ Z.
(slide 77)

Discrete Fourier Transform (DFT)

Xk = Σ_{i=0}^{n−1} xi · e^{−2πj(ik/n)}      xk = 1/n · Σ_{i=0}^{n−1} Xi · e^{2πj(ik/n)}

The n-point DFT multiplies a vector with an n × n matrix
Fn = ( e^{−2πj(ik/n)} )_{i,k = 0, . . . , n−1}
whose first row and first column are all 1 and whose entry in row i,
column k is e^{−2πj(ik/n)}.

Fn · (x0, x1, x2, . . . , xn−1)ᵀ = (X0, X1, X2, . . . , Xn−1)ᵀ
1/n · Fn∗ · (X0, X1, X2, . . . , Xn−1)ᵀ = (x0, x1, x2, . . . , xn−1)ᵀ
(slide 78)
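The matrix form of the DFT is easy to check against a built-in FFT. A
short MATLAB/Octave sketch (illustrative; n = 8 chosen arbitrarily):

  n = 8;
  F = exp(-2i*pi*(0:n-1)'*(0:n-1)/n);  % F(i+1,k+1) = e^(-2 pi j ik/n)
  x = randn(n, 1);
  X = F*x;                             % DFT as a matrix-vector product
  max(abs(X - fft(x)))                 % -> ~1e-15: matches fft()
  xr = F'*X/n;                         % inverse DFT: (1/n) Fn* X
  max(abs(xr - x))                     % -> ~1e-15: recovers x

Note that F' in MATLAB/Octave is the conjugate transpose, which here
equals Fn∗ because Fn is symmetric.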

Discrete Fourier Transform visualized

[Figure: the 8-point DFT as a matrix-vector product
F8 · (x0, . . . , x7)ᵀ = (X0, . . . , X7)ᵀ; each matrix row is a sampled
complex phasor]

The n-point DFT of a signal {xi} sampled at frequency fs contains in
the elements X0 to Xn/2 of the resulting frequency-domain vector the
frequency components 0, fs/n, 2fs/n, 3fs/n, . . . , fs/2, and contains in
Xn−1 downto Xn/2 the corresponding negative frequencies. Note that
for a real-valued input vector, both X0 and Xn/2 will be real, too.
Why is there no phase information recovered at fs/2?
(slide 79)

Inverse DFT visualized

[Figure: the inverse 8-point DFT as a matrix-vector product
1/8 · F8∗ · (X0, . . . , X7)ᵀ = (x0, . . . , x7)ᵀ]
(slide 80)
Fast Fourier Transform (FFT)

(Fn {xi}_{i=0}^{n−1})[k] = Σ_{i=0}^{n−1} xi · e^{−2πj(ik/n)}
= Σ_{i=0}^{n/2−1} x2i · e^{−2πj(ik/(n/2))} + e^{−2πj(k/n)} · Σ_{i=0}^{n/2−1} x2i+1 · e^{−2πj(ik/(n/2))}
= (F_{n/2} {x2i})[k] + e^{−2πj(k/n)} · (F_{n/2} {x2i+1})[k],               if k < n/2
= (F_{n/2} {x2i})[k − n/2] + e^{−2πj(k/n)} · (F_{n/2} {x2i+1})[k − n/2],   if k ≥ n/2

The DFT over n-element vectors can be reduced to two DFTs over
n/2-element vectors plus n multiplications and n additions, leading to
log2 n rounds and n log2 n additions and multiplications overall,
compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and
support for non-power-of-2 sizes is available at http://www.fftw.org/.
(slide 81)

Efficient real-valued FFT

The symmetry properties of the Fourier transform applied to the discrete
Fourier transform {Xi}_{i=0}^{n−1} = Fn {xi}_{i=0}^{n−1} have the form
∀i : xi = ℜ(xi) ⇐⇒ ∀i : Xn−i = Xi∗
∀i : xi = j · ℑ(xi) ⇐⇒ ∀i : Xn−i = −Xi∗

These two symmetries, combined with the linearity of the DFT, allow us
to calculate two real-valued n-point DFTs
{X′i}_{i=0}^{n−1} = Fn {x′i}_{i=0}^{n−1}      {X″i}_{i=0}^{n−1} = Fn {x″i}_{i=0}^{n−1}
simultaneously in a single complex-valued n-point DFT, by composing its input as
xi = x′i + j · x″i
and decomposing its output as
X′i = 1/2 · (Xi + Xn−i∗)      X″i = 1/(2j) · (Xi − Xn−i∗)
where Xn = X0.
To optimize the calculation of a single real-valued FFT, use this trick to calculate the two half-size
real-value FFTs that occur in the first round.
(slide 82)
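The packing trick can be verified directly with fft(). A sketch (not
from the slides; n = 16 is an arbitrary example size):

  n  = 16;
  x1 = randn(1, n);  x2 = randn(1, n);   % two real-valued input vectors
  X  = fft(x1 + 1i*x2);                  % one complex n-point FFT
  Xc = conj(X([1, n:-1:2]));             % X*_{n-i}, using X_n = X_0
  X1 = (X + Xc)/2;                       % DFT of x1
  X2 = (X - Xc)/(2i);                    % DFT of x2
  max(abs(X1 - fft(x1)))                 % -> ~1e-15
  max(abs(X2 - fft(x2)))                 % -> ~1e-15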

Fast complex multiplication

Calculating the product of two complex numbers as
(a + jb) · (c + jd) = (ac − bd) + j(ad + bc)
involves four (real-valued) multiplications and two additions.

The alternative calculation
(a + jb) · (c + jd) = (α − β) + j(α + γ)  with  α = a(c + d), β = d(a + b), γ = c(b − a)
provides the same result with three multiplications and five additions.
The latter may perform faster on CPUs where multiplications take three
or more times longer than additions.
This “Karatsuba multiplication” is most helpful on simpler microcontrollers. Specialized
signal-processing CPUs (DSPs) feature 1-clock-cycle multipliers. High-end desktop processors use
pipelined multipliers that stall where operations depend on each other.
(slide 83)

FFT-based convolution

Calculating the convolution of two finite sequences {xi}_{i=0}^{m−1} and {yi}_{i=0}^{n−1}
of lengths m and n via
zi = Σ_{j=max{0,i−(n−1)}}^{min{m−1,i}} xj · yi−j,   0 ≤ i < m + n − 1
takes mn multiplications.

Can we apply the FFT and the convolution theorem to calculate the
convolution faster, in just O(m log m + n log n) multiplications?
{zi} = F−1 (F{xi} · F{yi})
There is obviously no problem if this condition is fulfilled:
{xi} and {yi} are periodic, with equal period lengths
In this case, the fact that the DFT interprets its input as a single period
of a periodic signal will do exactly what is needed, and the FFT and
inverse FFT can be applied directly as above.
(slide 84)
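In MATLAB/Octave the whole FFT convolution, including the zero padding
discussed on the next slide, fits in a few lines (a sketch with arbitrary
example lengths):

  x = randn(1, 100);  y = randn(1, 30);  % two finite sequences, m = 100, n = 30
  m = length(x);  n = length(y);
  L = 2^nextpow2(m + n - 1);             % pad to the next power of two
  z = ifft(fft(x, L) .* fft(y, L));      % fft(x, L) zero-pads x to length L
  z = real(z(1:m+n-1));                  % keep the linear-convolution part
  max(abs(z - conv(x, y)))               % -> ~1e-13: matches direct convolution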
[Figure: sequences A, B and F−1[F(A)·F(B)] (circular wrap-over) versus
zero-padded sequences A′, B′ and F−1[F(A′)·F(B′)] (correct linear
convolution)]
Both sequences are padded with zero values to a length of at least m + n − 1.
This ensures that the start and end of the resulting sequence do not overlap.
(slide 85)

In the general case, measures have to be taken to prevent a wrap-over:
Zero padding is usually applied to extend both sequence lengths to the
next higher power of two (2^⌈log2(m+n−1)⌉), which facilitates the FFT.
With a causal sequence, simply append the padding zeros at the end.
With a non-causal sequence, values with a negative index number are
wrapped around the DFT block boundaries and appear at the right end.
In this case, zero-padding is applied in the center of the block, between
the last and first element of the sequence.
Thanks to the periodic nature of the DFT, zero padding at both ends has
the same effect as padding only at one end.

If both sequences can be loaded entirely into RAM, the FFT can be
applied to them in one step. However, one of the sequences might be too
large for that. It could also be a realtime waveform (e.g., a telephone
signal) that cannot be delayed until the end of the transmission.
In such cases, the sequence has to be split into shorter blocks that are
separately convolved and then added together with a suitable overlap.
(slide 86)

Each block is zero-padded at both ends and then convolved as before:
[Figure: overlap-add block convolution: the blocks are convolved
separately; the regions originally added as zero padding are, after
convolution, aligned to overlap with the unpadded ends of their
respective neighbour blocks, and the overlapping parts of the blocks are
then added together]
(slide 87)

Deconvolution

A signal u(t) was distorted by convolution with a known impulse
response h(t) (e.g., through a transmission channel or a sensor problem).
The “smeared” result s(t) was recorded.
Can we undo the damage and restore (or at least estimate) u(t)?
[Figure: example images with u ∗ h = s; deconvolution attempts to
recover u from s]
(slide 88)
The convolution theorem turns the problem into one of multiplication:
s(t) = ∫ u(t − τ) · h(τ) · dτ
s = u ∗ h
F{s} = F{u} · F{h}
F{u} = F{s}/F{h}
u = F−1 {F{s}/F{h}}

In practice, we also record some noise n(t) (quantization, etc.):
c(t) = s(t) + n(t) = ∫ u(t − τ) · h(τ) · dτ + n(t)
Problem – At frequencies f where F{h}(f) approaches zero, the noise
will be amplified (potentially enormously) during deconvolution:
ũ = F−1 {F{c}/F{h}} = u + F−1 {F{n}/F{h}}
(slide 89)

Typical workarounds:
▶ Modify the Fourier transform of the impulse response, such that
  |F{h}(f)| > ε for some experimentally chosen threshold ε.
▶ If estimates of the signal spectrum |F{s}(f)| and the noise
  spectrum |F{n}(f)| can be obtained, then we can apply the
  “Wiener filter” (“optimal filter”)
  W(f) = |F{s}(f)|² / (|F{s}(f)|² + |F{n}(f)|²)
  before deconvolution:
  ũ = F−1 {W · F{c}/F{h}}
(slide 90)
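A sketch of the threshold workaround in MATLAB/Octave (everything here,
the signal, the two-tap smearing filter, the noise level and the
threshold, is a hypothetical example choice; the DFT implements circular
convolution, which keeps the demo simple):

  n = 256;
  u = randn(1, n);                         % "true" signal
  h = [0.5, 0.5, zeros(1, n-2)];           % smearing filter; F{h} = 0 at fs/2
  c = real(ifft(fft(u).*fft(h))) + 1e-3*randn(1, n);  % recorded: s + noise
  H = fft(h);
  H(abs(H) < 0.05) = 0.05;                 % clamp |F{h}| above the threshold
  ut = real(ifft(fft(c)./H));              % regularized estimate of u
  plot(1:n, u, 1:n, ut);                   % close, except near fs/2 content

Without the clamping line, the near-zero values of F{h} around fs/2
would amplify the noise enormously, as described above.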

Vowel "A" sung at varying pitch Different vovels at constant pitch


8000
8000
7000
7000
6000
6000
Frequency (Hz)

5000

Frequency (Hz)
5000
4000
4000
3000

2000 3000

1000 2000

0 1000
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5
Time
[w,fs, bits] = auread('sing.au');
0
specgram(w,2048,fs);
0.5 1 1.5 2 2.5 3 3.5 4
ylim([0 8e3]); xlim([0 4.5]); aeiou.au
Time
saveas(gcf, 'sing.eps', 'eps2c');

91 92
[Figure: cos(2π·[0:15]/16·4) and cos(2π·[0:15]/16·4.2), each shown as
time-domain samples with one period of DTFT magnitude and the 16-point
DFT magnitude: the 4-cycle tone gives a single sharp bin, the 4.2-cycle
tone a lower, leaky peak]
(slide 93)

Spectral estimation

We introduced the DFT as a special case of the continuous Fourier
transform, where the input is sampled and periodic.
If the input is sampled, but not periodic, the DFT can still be used to
calculate an approximation of the Fourier transform of the original
continuous signal. However, there are two effects to consider. They are
particularly visible when analysing pure sine waves.
Sine waves whose frequency is a multiple of the base frequency (fs/n) of
the DFT are identical to their periodic extension beyond the size of the
DFT. They are, therefore, represented exactly by a single sharp peak in
the DFT. All their energy falls into one single frequency “bin” in the
DFT result.
Sine waves with other frequencies, which do not match exactly one of the
output frequency bins of the DFT, are still represented by a peak at the
output bin that represents the nearest integer multiple of the DFT’s base
frequency. However, such a peak is distorted in two ways:
▶ Its amplitude is lower (down to 63.7%).
▶ Much signal energy has “leaked” to other frequencies.
(slide 94)
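The two cases can be reproduced in one line each of MATLAB/Octave,
mirroring the 16-point example plotted on slide 93:

  n = 16; i = 0:n-1;
  X1 = abs(fft(cos(2*pi*4.0*i/n)));  % 4 cycles: all energy in bins 4 and 12
  X2 = abs(fft(cos(2*pi*4.2*i/n)));  % 4.2 cycles: lower peak, leakage everywhere
  [X1; X2]'                          % compare the two magnitude spectra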

[Figure: peak DFT magnitude as the frequency of an input sine wave
sweeps from that of bin 15 to that of bin 16]
The leakage of energy to other frequency bins not only blurs the estimated spectrum.
The peak amplitude also changes significantly as the frequency of a tone changes from
that associated with one output bin to the next, a phenomenon known as scalloping.
In the above graphic, an input sine wave gradually changes from the frequency of bin
15 to that of bin 16 (only positive frequencies shown).
(slide 95)

Windowing

[Figure: a sine wave and its discrete Fourier transform; the same sine
wave multiplied with a window function and its discrete Fourier
transform]
(slide 96)
The reason for the leakage and scalloping losses is easy to visualize with the
help of the convolution theorem:
The operation of cutting a sequence of the size of the DFT input vector out of
a longer original signal (the one whose continuous Fourier spectrum we try to
estimate) is equivalent to multiplying this signal with a rectangular function.
This destroys all information and continuity outside the “window” that is fed
into the DFT.
Multiplication with a rectangular window of length T in the time domain is
equivalent to convolution with sin(πf T)/(πf T) in the frequency domain.
The subsequent interpretation of this window as a periodic sequence by the
DFT leads to sampling of this convolution result (sampling meaning
multiplication with a Dirac comb whose impulses are spaced fs/n apart).
Where the window length was an exact multiple of the original signal period,
sampling of the sin(πf T)/(πf T) curve leads to a single Dirac pulse, and the
windowing causes no distortion. In all other cases, the effects of the convolution
become visible in the frequency domain as leakage and scalloping losses.
(slide 97)

Some better window functions

[Figure: rectangular, triangular, Hann and Hamming windows plotted over [0, 1]]
All these functions are 0 outside the interval [0,1].
(slide 98)

[Figure: cos(2π·[0:15]/16·4.2) without and with a Hann window
(cos(2π·[0:15]/16·4.2).*hann(16)), each with DTFT and 16-point DFT
magnitude: the window strongly reduces the leakage skirts]
(slide 99)

[Figure: DTFT magnitude (dB) over normalized frequency (×π rad/sample)
of 64-point rectangular, triangular, Hann and Hamming windows: the
rectangular window has the narrowest main lobe, the others much lower
side lobes]
(slide 100)
Numerous alternatives to the rectangular window have been proposed
that reduce leakage and scalloping in spectral estimation. These are
vectors multiplied element-wise with the input vector before applying the
DFT to it. They all force the signal amplitude smoothly down to zero at
the edge of the window, thereby avoiding the introduction of sharp jumps
in the signal when it is extended periodically by the DFT.

Three examples of such window vectors {wi}_{i=0}^{n−1} are:
Triangular window (Bartlett window):
wi = 1 − |1 − i/(n/2)|
Hann window (raised-cosine window, Hanning window):
wi = 0.5 − 0.5 × cos(2πi/(n − 1))
Hamming window:
wi = 0.54 − 0.46 × cos(2πi/(n − 1))
(slide 101)

Does zero padding increase DFT resolution?

The two figures below show two spectra of the 16-element sequence
si = cos(2π · 3i/16) + cos(2π · 4i/16), i ∈ {0, . . . , 15}.
The left plot shows the DFT of the windowed sequence
xi = si · wi, i ∈ {0, . . . , 15}
and the right plot shows the DFT of the zero-padded windowed sequence
xi = si · wi for i ∈ {0, . . . , 15}; 0 for i ∈ {16, . . . , 63}
where wi = 0.54 − 0.46 × cos(2πi/15) is the Hamming window.
[Figure: DFT without zero padding (16 points); DFT with 48 zeros
appended to the window (64 points)]
(slide 102)
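The Hamming-window and zero-padding example above can be reproduced
with a few MATLAB/Octave lines (fft(x, 64) appends the 48 zeros):

  i = 0:15;
  s = cos(2*pi*3*i/16) + cos(2*pi*4*i/16);
  w = 0.54 - 0.46*cos(2*pi*i/15);               % Hamming window as defined above
  x = s .* w;
  subplot(1,2,1); stem(0:15, abs(fft(x)));      % 16-point DFT
  subplot(1,2,2); stem(0:63, abs(fft(x, 64)));  % zero-padded to 64 points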

[Figure: cos(2π·[0:15]/16·3.3) + cos(2π·[0:15]/16·4), 16 samples, with
DTFT and DFT magnitudes; below, the same sequence zero-padded to 64
samples versus 64 actual samples: only the longer window separates the
two spectral lines]
(slides 103, 104)
Applying the discrete Fourier transform (DFT) to an n-element long
real-valued sequence samples the DTFT of that sequence at n/2 + 1
discrete frequencies.
The DTFT spectrum has already been distorted by multiplying the
(hypothetically longer) signal with a windowing function that limits its
length to n non-zero values and forces the waveform down to zero
outside the window. Therefore, appending further zeros outside the
window will not affect the DTFT.
The frequency resolution of the DFT is the sampling frequency divided by
the block size of the DFT. Zero padding can therefore be used to increase
the frequency resolution of the DFT, to sample the DTFT at more
places. But that does not change the limit imposed on the frequency
resolution (i.e., blurriness) of the DTFT by the length of the window.
Note that zero padding does not add any additional information to the
signal. The DTFT has already been “low-pass filtered” by being
convolved with the spectrum of the windowing function. Zero padding in
the time domain merely causes the DFT to sample the same underlying
DTFT spectrum at a higher resolution, thereby making it easier to
visually distinguish spectral lines and to locate their peak more precisely.
(slide 105)

Digital filters

Filter: suppresses (removes, attenuates) unwanted signal components.
▶ low-pass filter – suppress all frequencies above a cut-off frequency
▶ high-pass filter – suppress all frequencies below a cut-off frequency,
  including DC (direct current = 0 Hz)
▶ band-pass filter – suppress signals outside a frequency interval (= passband)
▶ band-stop filter (aka: band-reject filter) – suppress signals inside a
  single frequency interval (= stopband)
▶ notch filter – narrow band-stop filter, ideally suppressing only a single frequency

For digital filters, we also distinguish
▶ finite impulse response (FIR) filters
▶ infinite impulse response (IIR) filters
depending on how far their memory reaches back in time.
(slide 106)
Window-based design of FIR filters

Recall that the ideal continuous low-pass filter with cut-off frequency fc
has the frequency characteristic
H(f) = 1 if |f| < fc; 0 if |f| > fc = rect(f/(2fc))
and the impulse response
h(t) = 2fc · sin(2πtfc)/(2πtfc) = 2fc · sinc(2fc · t).

Sampling this impulse response with the sampling frequency fs of the
signal to be processed will lead to a periodic frequency characteristic
that matches the periodic spectrum of the sampled signal.
There are two problems though:
▶ the impulse response is infinitely long
▶ this filter is not causal, that is h(t) ≠ 0 for t < 0
(slide 107)

Solutions:
▶ Make the impulse response finite by multiplying the sampled h(t)
  with a windowing function
▶ Make the impulse response causal by adding a delay of half the
  window size

The impulse response of an n-th order low-pass filter is then chosen as
hi = 2fc/fs · sin[2π(i − n/2)fc/fs] / (2π(i − n/2)fc/fs) · wi
where {wi} is a windowing sequence, such as the Hamming window
wi = 0.54 − 0.46 × cos(2πi/n)
with wi = 0 for i < 0 and i > n.
Note that for fc = fs/4, we have hi = 0 for all even values of i. Therefore, this special case
requires only half the number of multiplications during the convolution. Such “half-band” FIR
filters are used, for example, as anti-aliasing filters wherever a sampling rate needs to be halved.
(slide 108)
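The design formula translates directly into MATLAB/Octave. A sketch for
the n = 30, fc = 0.25 × fs/2 example that follows on slide 111 (sinc is
defined inline; fir1 from the Signal Processing toolbox / Octave signal
package should produce nearly the same taps, up to its DC-gain
normalization):

  n  = 30; i = 0:n;
  fs = 2; fc = 0.25;                   % normalized so that fs/2 = 1
  t  = 2*(i - n/2)*fc/fs;              % sinc argument: 2(i - n/2) fc/fs
  h  = 2*fc/fs * ((t==0) + sin(pi*t)./(pi*t + (t==0)));  % windowless taps
  w  = 0.54 - 0.46*cos(2*pi*i/n);      % Hamming window
  b  = h .* w;                         % FIR taps b_0 ... b_n
  % compare: b - fir1(30, 0.25) should be close to zero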
FIR low-pass filter design examples

[Figure: windowed-sinc impulse responses and DTFT magnitudes for filter
order n = 10 and n = 30: the higher order gives a narrower transition band]
(slides 109, 110)

FIR low-pass filter design example (MATLAB)

[Figure: z-plane zero locations, impulse response (31 samples), magnitude
response (dB) and phase response (degrees) over normalized frequency]
order: n = 30, cutoff frequency (−6 dB): fc = 0.25 × fs/2, window: Hamming
MATLAB DSP Toolbox functions: b = fir1(30, 0.25), zplane, impz, freqz, freqzplot
(slide 111)

Filter performance

An ideal filter has a gain of 1 in the pass-band and a gain of 0 in the stop
band, and nothing in between.
A practical filter will have
▶ frequency-dependent gain near 1 in the passband
▶ frequency-dependent gain below a threshold in the stopband
▶ a transition band between the pass and stop bands

We truncate the ideal, infinitely-long impulse response by multiplication
with a window sequence.
In the frequency domain, this will convolve the rectangular frequency
response of the ideal low-pass filter with the frequency characteristic of
the window.
The width of the main lobe determines the width of the transition band,
and the side lobes cause ripples in the passband and stopband.
(slide 112)
Low-pass to band-pass filter conversion (modulation)

To obtain a band-pass filter that attenuates all frequencies f outside the
range fl < f < fh, we first design a low-pass filter with a cut-off
frequency (fh − fl)/2. We then multiply its impulse response with a sine
wave of frequency (fh + fl)/2, effectively amplitude modulating it, to
shift its centre frequency. Finally, we apply a window function:

    hi = (fh − fl)/fs · sin[π(i − n/2)(fh − fl)/fs] / [π(i − n/2)(fh − fl)/fs] · cos[πi(fh + fl)/fs] · wi

[Figure: the band-pass response H(f), with pass-bands at fl < |f| < fh,
arises as the convolution of a low-pass response of cut-off (fh − fl)/2
with Dirac impulses at ±(fh + fl)/2.]

Band-pass filter example (modulation)

[Figures: time-domain samples and DTFT magnitude (one period) of the
order n = 30 low-pass prototype (top) and of the modulated band-pass
filter (bottom).]
113 114
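As a sketch in MATLAB, with illustrative band edges fl and fh (not from the slides):

    % Band-pass FIR via modulation of a low-pass prototype (sketch)
    n  = 30; fs = 8000;
    fl = 1000; fh = 2000;               % assumed pass-band edges
    i  = 0:n; t = i - n/2;
    w  = 0.54 - 0.46*cos(2*pi*i/n);     % Hamming window
    lp = (fh-fl)/fs * sinc((fh-fl)/fs * t);   % low-pass, cut-off (fh-fl)/2
    b  = lp .* cos(pi*i*(fh+fl)/fs) .* w;     % shift centre to (fh+fl)/2
    freqz(b, 1, 1024, fs)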

Low-pass to high-pass filter conversion (freq. inversion) High-pass filter example (freq. inversion)
In order to turn the spectrum X(f ) of a real-valued signal xi sampled at 0.3 1.2

fs into an inverted spectrum X 0 (f ) = [X(fs /2 − f )]∗ = X(f ± fs /2), we 0.25 1


DTFT mag

merely have to shift the periodic spectrum by fs /2: 0.2


0.8
0
X (f ) X(f ) 0.15
0.6
0.1
= ∗ 0.05
0.4

0.2
... ... ... ... 0

-0.05 0
−fs 0 fs f −fs 0 fs f − f2s 0 fs
2
f 0 10 20 30 - -¾ -½ -¼ 0 ¼ ½ ¾
time-domain samples DTFT frequency (1 period)
This can be accomplished by multiplying the sampled sequence xi with
0.3 1.2
yi = cos πfs t = cos πi = e jπi , which is nothing but multiplication with DTFT mag
the sequence 0.2 1

. . . , 1, −1, 1, −1, 1, −1, 1, −1, . . . 0.1 0.8

0 0.6
So in order to design a discrete high-pass filter that attenuates all
-0.1 0.4
frequencies f outside the range fc < |f | < fs /2, we merely have to
design a low-pass filter that attenuates all frequencies outside the range -0.2 0.2

−fc < f < fc , and then multiply every second value of its impulse -0.3 0
0 10 20 30 - -¾ -½ -¼ 0 ¼ ½ ¾
response with −1. time-domain samples DTFT frequency (1 period)
115 116
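In MATLAB, the frequency inversion is a one-liner applied to the prototype’s coefficients; a sketch:

    % Low-pass to high-pass conversion by frequency inversion (sketch)
    b_lp = fir1(30, 0.25);                       % low-pass prototype
    b_hp = b_lp .* (-1).^(0:length(b_lp)-1);     % negate every second tap
    freqz(b_hp)          % pass-band is now fc < |f| < fs/2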
High-pass filter example (subtract from impulse seq.)

[Figures: time-domain samples and DTFT magnitude (one period) of a
low-pass filter (top) and of the high-pass filter obtained by
subtracting it from an impulse sequence (bottom).]

Linear phase filters

A filter where the Fourier transform H(f) of its impulse response h(t) is
real-valued will not affect the phase of the filtered signal at any
frequency. Only the amplitudes will be affected.

    ∀f ∈ R : H(f) ∈ R  ⇐⇒  ∀t ∈ R : h(t) = [h(−t)]∗

A phase-neutral filter with a real-valued frequency response will have an
even impulse response, and will therefore usually be non-causal.
To make such a filter causal, we have to add a delay ∆t (half the length
of the impulse response). This corresponds to multiplication with
e^{−2πjf∆t} in the frequency domain:

    h(t − ∆t) •−◦ H(f) · e^{−2πjf∆t}

Filters that delay the phase of a signal at each frequency by the time ∆t
therefore add to the phase angle a value −2πf∆t, which increases
linearly with f. They are therefore called linear-phase filters.
This is the closest one can get to phase-neutrality with causality.
117 118

Finite impulse response (FIR) filter

    yn = Σ_{m=0}^{M} bm · xn−m

M = 3:

[Block diagram, direct form: xn passes through a chain of z^{−1} delays;
the taps xn, xn−1, xn−2, xn−3 are multiplied by b0, b1, b2, b3 and
summed to give yn.]

Transposed implementation:

[Block diagram: the multipliers b3, b2, b1, b0 feed a chain of adders
and z^{−1} delays towards the output yn.]

Infinite impulse response (IIR) filter

    Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m        Usually normalize: a0 = 1

    yn = ( Σ_{m=0}^{M} bm · xn−m − Σ_{k=1}^{N} ak · yn−k ) / a0

max{M, N} = “filter order”

Direct form I implementation:

[Block diagram: a feed-forward section with taps b0, b1, b2, b3 and
gain a0^{−1}, followed by a feedback section with taps −a1, −a2, −a3
acting on the delayed outputs yn−1, yn−2, yn−3.] (see slide 25)
119 120
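The difference equation above translates directly into a sample-by-sample loop. The following function sketch (save as df1.m) reproduces what the built-in filter(b, a, x) computes far more efficiently:

    % Direct-form IIR/FIR difference equation (sketch of what filter() does)
    function y = df1(b, a, x)
      M = length(b) - 1; N = length(a) - 1;
      y = zeros(size(x));
      for n = 1:length(x)
        for m = 0:min(M, n-1)          % feed-forward taps
          y(n) = y(n) + b(m+1) * x(n-m);
        end
        for k = 1:min(N, n-1)          % feedback taps
          y(n) = y(n) - a(k+1) * y(n-k);
        end
        y(n) = y(n) / a(1);            % normalize by a0
      end
    end

For example, df1([0.8], [1 -0.2], x) matches filter([0.8], [1 -0.2], x).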
Infinite impulse response (IIR) filter – direct form II

    yn = ( Σ_{m=0}^{M} bm · xn−m − Σ_{k=1}^{N} ak · yn−k ) / a0

Direct form II:
[Block diagram: a single shared z^{−1} delay chain, with the feedback
coefficients −a1, −a2, −a3 and gain a0^{−1} applied on its input side
and the feed-forward coefficients b0, b1, b2, b3 on its output side.]

Transposed direct form II:
[Block diagram: the corresponding transposed structure, with each delay
stage accumulating bm · xn − am · yn.]

Polynomial representation of sequences

We can represent sequences {xn} as polynomials:

    X(v) = Σ_{n=−∞}^{∞} xn v^n

Example of polynomial multiplication:

      (1 + 2v + 3v²) · (2 + 1v)
        2 + 4v + 6v²
          + 1v + 2v² + 3v³
      = 2 + 5v + 8v² + 3v³

Compare this with the convolution of two sequences (in MATLAB):

    conv([1 2 3], [2 1]) equals [2 5 8 3]
121 122

Convolution of sequences is equivalent to polynomial multiplication:

    {hn} ∗ {xn} = {yn}  ⇒  yn = Σ_{k=−∞}^{∞} hk · xn−k

    H(v) · X(v) = ( Σ_{n=−∞}^{∞} hn v^n ) · ( Σ_{n=−∞}^{∞} xn v^n )
                = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} hk · xn−k · v^n

Note how the Fourier transform of a sequence can be accessed easily
from its polynomial form:

    X(e^{−jω̇}) = Σ_{n=−∞}^{∞} xn e^{−jω̇n}

Example of polynomial division:

    1/(1 − av) = 1 + av + a²v² + a³v³ + · · · = Σ_{n=0}^{∞} a^n v^n

[Long division: dividing 1 by (1 − av) yields the quotient terms
1 + av + a²v² + · · · with remainders av, a²v², a³v³, . . .; the
corresponding one-pole feedback structure multiplies the delayed
output yn−1 by a.]

Rational functions (quotients of two polynomials) can provide
convenient closed-form representations for infinitely-long exponential
sequences, in particular the impulse responses of IIR filters.
123 124
The z-transform

The z-transform of a sequence {xn} is defined as:

    X(z) = Σ_{n=−∞}^{∞} xn z^{−n}

Note that this differs only in the sign of the exponent from the polynomial representation discussed
on the preceding slides.
Recall that the above X(z) is exactly the factor with which an
exponential sequence {z^n} is multiplied, if it is convolved with {xn}:

    {z^n} ∗ {xn} = {yn}
    ⇒ yn = Σ_{k=−∞}^{∞} z^{n−k} xk = z^n · Σ_{k=−∞}^{∞} z^{−k} xk = z^n · X(z)

The z-transform defines for each sequence a continuous complex-valued
surface over the complex plane C.
For finite sequences, its value is defined across the entire complex plane
(except possibly at z = 0 or |z| = ∞).
For infinite sequences, it can be shown that the z-transform converges
only for the region

    lim_{n→∞} |x_{n+1}/x_n| < |z| < lim_{n→−∞} |x_{n+1}/x_n|

The z-transform identifies a sequence unambiguously only in conjunction with a given region of
convergence. In other words, there exist different sequences, that have the same expression as their
z-transform, but that converge for different amplitudes of z.
The z-transform is a generalization of the discrete-time Fourier
transform, which it contains on the complex unit circle (|z| = 1):

    t_s^{−1} · F{x̂(t)}(f) = X(e^{jω̇}) = Σ_{n=−∞}^{∞} xn e^{−jω̇n}

where ω̇ = 2π f/fs.
125 126

Properties of the z-transform

If X(z) is the z-transform of {xn}, we write here {xn} •−◦ X(z).
If {xn} •−◦ X(z) and {yn} •−◦ Y(z), then:

Linearity:
    {axn + byn} •−◦ aX(z) + bY(z)
Convolution:
    {xn} ∗ {yn} •−◦ X(z) · Y(z)
Time shift:
    {xn+k} •−◦ z^k X(z)
Remember in particular: delaying by one sample is multiplication with z^{−1}.
Time reversal:
    {x−n} •−◦ X(z^{−1})
Multiplication with exponential:
    {a^{−n} xn} •−◦ X(az)
Complex conjugate:
    {x∗n} •−◦ X∗(z∗)
Real/imaginary value:
    {ℜ{xn}} •−◦ (X(z) + X∗(z∗))/2
    {ℑ{xn}} •−◦ (X(z) − X∗(z∗))/(2j)
Initial value:
    x0 = lim_{z→∞} X(z)   if xn = 0 for all n < 0
127 128
Some example sequences and their z-transforms:

    xn                                      X(z)
    δn                                      1
    un                                      z/(z − 1) = 1/(1 − z^{−1})
    a^n un                                  z/(z − a) = 1/(1 − az^{−1})
    n un                                    z/(z − 1)²
    n² un                                   z(z + 1)/(z − 1)³
    e^{an} un                               z/(z − e^a)
    (n−1 choose k−1) e^{a(n−k)} un−k        1/(z − e^a)^k
    sin(ω̇n + ϕ) un                          [z² sin(ϕ) + z sin(ω̇ − ϕ)] / [z² − 2z cos(ω̇) + 1]

Example:
What is the z-transform of the impulse response {hn}
of the discrete system yn = xn + ayn−1?

    yn = xn + ayn−1
    Y(z) = X(z) + az^{−1} Y(z)
    Y(z) − az^{−1} Y(z) = X(z)
    Y(z)(1 − az^{−1}) = X(z)
    Y(z)/X(z) = 1/(1 − az^{−1}) = z/(z − a)

Since {yn} = {hn} ∗ {xn}, we have Y(z) = H(z) · X(z) and therefore

    H(z) = Y(z)/X(z) = z/(z − a) = 1 + az^{−1} + a²z^{−2} + · · ·

where polynomial long division returns the causal impulse response

    h0 = 1, h1 = a, h2 = a², . . . , hn = a^n for all n ≥ 0

We have applied here the linearity of the z-transform, and its time-shift and convolution properties.
129 130
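This geometric impulse response is easy to confirm numerically by feeding a unit impulse into filter; the value a = 0.5 is an arbitrary choice:

    % Impulse response of y(n) = x(n) + a*y(n-1): expect h(n) = a^n
    a = 0.5;
    h = filter(1, [1 -a], [1 zeros(1, 9)]);
    disp([h; a.^(0:9)])      % both rows agree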

z-transform of recursive filter structures

Consider the discrete system defined by

    Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l

or equivalently

    a0 yn + Σ_{l=1}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l

    yn = a0^{−1} · ( Σ_{l=0}^{m} bl · xn−l − Σ_{l=1}^{k} al · yn−l )

[Block diagram: direct form I with feed-forward taps b0, b1, . . . , bm,
gain a0^{−1}, and feedback taps −a1, . . . , −ak.]

What is the z-transform H(z) of its impulse response {hn}, where
{yn} = {hn} ∗ {xn}?

Using the linearity and time-shift property of the z-transform:

    Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l

    Σ_{l=0}^{k} al z^{−l} · Y(z) = Σ_{l=0}^{m} bl z^{−l} · X(z)

    Y(z) Σ_{l=0}^{k} al z^{−l} = X(z) Σ_{l=0}^{m} bl z^{−l}

    H(z) = Y(z)/X(z) = ( Σ_{l=0}^{m} bl z^{−l} ) / ( Σ_{l=0}^{k} al z^{−l} )

    H(z) = (b0 + b1 z^{−1} + b2 z^{−2} + · · · + bm z^{−m}) / (a0 + a1 z^{−1} + a2 z^{−2} + · · · + ak z^{−k})
131 132
The z-transform of the impulse response {hn} of the causal LTI system
defined by

    Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l

with {yn} = {hn} ∗ {xn} is the rational function

    H(z) = (b0 + b1 z^{−1} + b2 z^{−2} + · · · + bm z^{−m}) / (a0 + a1 z^{−1} + a2 z^{−2} + · · · + ak z^{−k})

(bm ≠ 0, ak ≠ 0) which can also be written as

    H(z) = (z^k / z^m) · (b0 z^m + b1 z^{m−1} + b2 z^{m−2} + · · · + bm) / (a0 z^k + a1 z^{k−1} + a2 z^{k−2} + · · · + ak).

H(z) has m zeros and k poles at non-zero locations in the z plane, plus
k − m zeros (if k > m) or m − k poles (if m > k) at z = 0.
This function can be converted into the form

    H(z) = (b0/a0) · Π_{l=1}^{m} (1 − cl · z^{−1}) / Π_{l=1}^{k} (1 − dl · z^{−1})
         = (b0/a0) · z^{k−m} · Π_{l=1}^{m} (z − cl) / Π_{l=1}^{k} (z − dl)

where the cl are the non-zero positions of zeros (H(cl) = 0) and the dl
are the non-zero positions of the poles (i.e., z → dl ⇒ |H(z)| → ∞) of
H(z). Except for a constant factor, H(z) is entirely characterized by the
position of these zeros and poles.
On the unit circle z = e^{jω̇}, H(e^{jω̇}) is the discrete-time Fourier transform
of {hn} (ω̇ = πf / (fs/2)). The DTFT amplitude can also be expressed in
terms of the relative position of e^{jω̇} to the zeros and poles:

    |H(e^{jω̇})| = (b0/a0) · Π_{l=1}^{m} |e^{jω̇} − cl| / Π_{l=1}^{k} |e^{jω̇} − dl|

133 134

Example: a single-pole filter

Consider this IIR filter:

    yn = 0.8 xn + 0.2 yn−1,    a0 = 1, a1 = −0.2, b0 = 0.8
    xn = δn ⇒ yn = 0.8 · 0.2^n   (impulse response)

Its z-transform

    H(z) = 0.8/(1 − 0.2 · z^{−1}) = 0.8z/(z − 0.2)

has one pole at z = d1 = 0.2 and one zero at z = 0.
[Figures: surface plot of the amplitude |H(z)| over the complex plane,
the impulse response, and the magnitude response over normalized
frequency (×π rad/sample).]

Run this LTI filter at sampling frequency fs and test it with sinusoidal
input (frequency f, amplitude 1): xn = cos(2πf n/fs)
Output: yn = A(f) · cos(2πf n/fs + θ(f))
What are the gain A(f) and phase delay θ(f) at frequency f?
Answer:

    A(f) = |H(e^{j2πf/fs})|
    θ(f) = ∠H(e^{j2πf/fs}) = tan^{−1}( ℑ{H(e^{j2πf/fs})} / ℜ{H(e^{j2πf/fs})} )

Example: fs = 8 kHz, f = 2 kHz (normalized frequency f/(fs/2) = 0.5) ⇒ Gain A(2 kHz) =
|H(e^{jπ/2})| = |H(j)| = |0.8j/(j − 0.2)| = |0.8j(−j − 0.2)/((j − 0.2)(−j − 0.2))| =
|(0.8 − 0.16j)/1.04| = √(0.8² + 0.16²)/1.04 = 0.784. . .

135 136
Visual verification in MATLAB:

    n = 0:15;
    fs = 8000;
    f = 1500;
    x = cos(2*pi*f*n/fs);
    b = [0.8]; a = [1 -0.2];
    y1 = filter(b,a,x);
    z = exp(j*2*pi*f/fs);
    H = 0.8*z/(z-0.2);
    A = abs(H);
    theta = atan(imag(H)/real(H));
    y2 = A*cos(2*pi*f*n/fs+theta);
    plot(n, x, 'bx-', ...
         n, y1, 'go-', ...
         n, y2, 'r+-')
    legend('x', ...
           'y (time domain)', ...
           'y (z-transform)')
    ylim([-1.1 1.8])

[Figure: the filtered sequence y1 and the prediction y2 from the
z-transform coincide after the initial transient.]

How do poles affect time domain?

H(z) = z/(z − 0.7) = 1/(1 − 0.7·z^{−1})
[Figure: z-plane pole at z = 0.7; impulse response decays quickly.]

H(z) = z/(z − 0.9) = 1/(1 − 0.9·z^{−1})
[Figure: z-plane pole at z = 0.9; impulse response decays slowly.]
137 138

H(z) = z/(z − 1) = 1/(1 − z^{−1})
[Figure: pole at z = 1; the impulse response is a constant unit step.]

H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^{−1})
[Figure: pole at z = 1.1, outside the unit circle; the impulse response
grows exponentially.]

H(z) = z²/[(z − 0.9·e^{jπ/6})·(z − 0.9·e^{−jπ/6})] = 1/(1 − 1.8 cos(π/6) z^{−1} + 0.9²·z^{−2})
[Figure: complex-conjugate pole pair inside the unit circle; the impulse
response is a decaying oscillation.]

H(z) = z²/[(z − e^{jπ/6})·(z − e^{−jπ/6})] = 1/(1 − 2 cos(π/6) z^{−1} + z^{−2})
[Figure: complex-conjugate pole pair on the unit circle; the impulse
response oscillates with constant amplitude.]
139 140
H(z) = z²/[(z − 0.9·e^{jπ/2})·(z − 0.9·e^{−jπ/2})] = 1/(1 − 1.8 cos(π/2) z^{−1} + 0.9²·z^{−2}) = 1/(1 + 0.9²·z^{−2})
[Figure: pole pair at ±0.9j; the impulse response is a decaying
oscillation at a quarter of the sampling frequency.]

H(z) = z/(z + 1) = 1/(1 + z^{−1})
[Figure: pole at z = −1; the impulse response alternates in sign with
constant amplitude.]

IIR filter design goals

The design of a filter starts with specifying the desired parameters:

I The passband is the frequency range where we want to approximate
  a gain of one.
I The stopband is the frequency range where we want to approximate
  a gain of zero.
I The order of a filter is the maximum of the number of zeros or poles
  it has in the z-domain, which is the maximum delay (in samples)
  needed to implement it.
I Both passband and stopband will in practice not have gains of
  exactly one and zero, respectively, but may show several deviations
  from these ideal values, and these ripples may have a specified
  maximum quotient between the highest and lowest gain.
I There will in practice not be an abrupt change of gain between
  passband and stopband, but a transition band where the frequency
  response will gradually change from its passband to its stopband
  value.
141 142

IIR filter design techniques

The designer can then trade off conflicting goals such as: small transition
band, low order, low ripple amplitude or absence of ripples.
Design techniques for making these tradeoffs for analog filters (involving
capacitors, resistors, coils) can also be used to design digital IIR filters:

Butterworth filters: Have no ripples, gain falls monotonically across the pass
and transition band. Within the passband, the gain drops slowly down to
√(1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor
2^N per octave (N · 20 dB/decade).

Chebyshev type I filters: Distribute the gain error uniformly throughout the
passband (equiripples) and drop off monotonically outside.

Chebyshev type II filters: Distribute the gain error uniformly throughout the
stopband (equiripples) and drop off monotonically in the passband.

Elliptic filters (Cauer filters): Distribute the gain error as equiripples both in
the passband and stopband. This type of filter is optimal in terms of the
combination of the passband-gain tolerance, stopband-gain tolerance, and
transition-band width that can be achieved at a given filter order.

IIR filter design in MATLAB

The aforementioned filter-design techniques are implemented in the
MATLAB Signal Processing Toolbox in the functions butter, cheby1,
cheby2, and ellip. They output the coefficients an and bn of the
difference equation that describes the filter.
These can be applied with filter to a sequence, or can be visualized with
zplane as poles/zeros in the z-domain, with impz as an impulse response,
and with freqz as an amplitude and phase spectrum.
Call filterDesigner for an interactive GUI.
[Screenshot: MATLAB Filter Designer]
143 144
Cascade of filter sections

Higher-order IIR filters can be numerically unstable (quantization noise).
A commonly used trick is to split a higher-order IIR filter design into a
cascade of l second-order (biquad) filter sections of the form:

    H(z) = (b0 + b1 z^{−1} + b2 z^{−2}) / (1 + a1 z^{−1} + a2 z^{−2})

[Block diagram: direct form II biquad with taps b0, b1, b2 and
feedback coefficients −a1, −a2.]

Filter sections H1, H2, . . . , Hl are then applied sequentially to the input
sequence, resulting in a filter

    H(z) = Π_{k=1}^{l} Hk(z) = Π_{k=1}^{l} (bk,0 + bk,1 z^{−1} + bk,2 z^{−2}) / (1 + ak,1 z^{−1} + ak,2 z^{−2})

Each section implements one pair of poles and one pair of zeros. Jackson’s algorithm for pairing
poles and zeros into sections: pick the pole pair closest to the unit circle, and place it into a
section along with the nearest pair of zeros; repeat until no poles are left.

Butterworth filter design example

[Figures: z-plane poles/zeros, impulse response, magnitude (dB) and
phase responses over normalized frequency.]
order: 1, cutoff frequency (−3 dB): 0.25 × fs/2
145 146
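In MATLAB, this splitting into biquad sections can be done with tf2sos; a sketch (the 6th-order elliptic design here is just an arbitrary example):

    % Split a higher-order IIR design into second-order sections (sketch)
    [b, a] = ellip(6, 3, 40, 0.25);     % example 6th-order elliptic low-pass
    sos = tf2sos(b, a);                 % one row [b0 b1 b2 1 a1 a2] per section
    y = sosfilt(sos, randn(1, 100));    % apply the cascade to a test signal

Applying the sections one after another avoids the large coefficient spread (and hence rounding noise) of the single direct-form polynomial.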

Butterworth filter design example Chebyshev type I filter design example


z Plane Impulse Response z Plane Impulse Response
1 1 1 1
Imaginary Part

Imaginary Part
0.5 0.5
Amplitude

Amplitude
0.5 0.5
0 0

−0.5 −0.5
0 0
−1 −1
−1 0 1 0 10 20 −1 0 1 0 10 20
Real Part n (samples) Real Part n (samples)
0 0 0 0
Phase (degrees)

Phase (degrees)
Magnitude (dB)

−20 −200 Magnitude (dB) −20 −200

−40 −400 −40 −400

−60 −600 −60 −600


0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1
Normalized Frequency (×π rad/sample) Normalized Frequency (×π rad/sample) Normalized Frequency (×π rad/sample) Normalized Frequency (×π rad/sample)

order: 5, cutoff frequency (−3 dB): 0.25 × fs /2 order: 5, cutoff frequency: 0.5 × fs /2, pass-band ripple: −3 dB
147 148
Chebyshev type II filter design example

[Figures: z-plane poles/zeros, impulse response, magnitude (dB) and
phase responses over normalized frequency.]
order: 5, cutoff frequency: 0.5 × fs/2, stop-band ripple: −20 dB

Elliptic filter design example

[Figures: z-plane poles/zeros, impulse response, magnitude (dB) and
phase responses over normalized frequency.]
order: 5, cutoff frequency: 0.5 × fs/2, pass-band ripple: −3 dB, stop-band ripple: −20 dB
149 150

Notch filter design example

[Figures: z-plane poles/zeros, impulse response, magnitude (dB) and
phase responses over normalized frequency.]
order: 2, cutoff frequency: 0.25 × fs/2, −3 dB bandwidth: 0.05 × fs/2

Peak filter design example

[Figures: z-plane poles/zeros, impulse response, magnitude (dB) and
phase responses over normalized frequency.]
order: 2, cutoff frequency: 0.25 × fs/2, −3 dB bandwidth: 0.05 × fs/2
151 152
Summary: FIR vs IIR filters

FIR filters:
+ easy to construct linear-phase filters (symmetric impulse response)
+ numerically stable
  No poles means: none can get dangerously close to the unit circle.
− higher order, i.e. computationally expensive

IIR filters:
+ can achieve given transition bands with lower order, i.e.
  computationally less expensive, as a few multiplications and delays
  can achieve long impulse responses (slowly decaying oscillations)
− can become numerically unstable
  (i.e., impulse response not absolutely summable)
− generally not linear phase, and less control over phase behaviour

Zero-phase IIR filtering (filtfilt)

In non-realtime applications, where the entire input sequence is available
in advance, a simple trick can be used to apply an IIR filter H without
causing any phase change in the filtered signal. (MATLAB: filtfilt)

1 apply the (causal) filter H normally in forward direction
2 time-reverse the resulting sequence
3 apply the filter H again (i.e., in backwards direction)
4 time-reverse the resulting sequence again

This is equivalent to applying the filter twice, once normally and once
with a time-reversed impulse response.
Reversing a real-valued sequence in the time domain corresponds to taking the complex conjugate
in the frequency domain.
Resulting filter G (for hn ∈ R):

    {gn} = {hn} ∗ {h−n}
    G(e^{jω̇}) = H(e^{jω̇}) · H(e^{−jω̇}) = H(e^{jω̇}) · H∗(e^{jω̇}) = |H(e^{jω̇})|²

In MATLAB (not optimized):

    y = fliplr(filter(b, a, fliplr(filter(b, a, x))))
153 154

Random variables, vectors, and processes

Let S be the set of all possible outcomes (results) of some experiment.
We call S the sample space of that experiment.
A random variable X is a function

    X : S → E

that assigns to each outcome ζ ∈ S a value X(ζ) ∈ E, where usually
E ⊆ R or E ⊆ C.
A random vector X⃗(ζ) = (x1(ζ), x2(ζ), . . . , xn(ζ))^T is a vector of n
random variables, or equivalently a random variable that outputs vectors,
e.g. X⃗(ζ) ∈ R^n.
A continuous-time random process X : S → E^R is a function that maps
each experimental outcome ζ ∈ S onto a continuous-time function xζ(t),
and a discrete-time random process X : S → E^Z maps each outcome ζ
onto a discrete sequence {xζ,n}n.
The ensemble of a random process is the set of all functions (or
sequences) from which it picks its output.
In the following, we will usually omit outcome parameter ζ from random variables, etc., for
notational convenience, and use boldface roman to distinguish random variables from samples.

Random sequences

A discrete-time random process or random sequence {xn} can also be
thought of as a discrete sequence of random variables

    . . . , x−2, x−1, x0, x1, x2, . . .

Each time we repeat an experiment, we observe one realization or sample
sequence

    . . . , x−2, x−1, x0, x1, x2, . . .

of that random process. (We cannot observe the outcome ζ directly.)
Each individual random variable xn in a random sequence is
characterized by its probability distribution function

    Pxn(a) = Prob(xn ≤ a)

and the entire random process is characterized completely by all joint
probability distribution functions

    Pxn1,...,xnk(a1, . . . , ak) = Prob(xn1 ≤ a1 ∧ . . . ∧ xnk ≤ ak)

for all possible sets {xn1, . . . , xnk} and all k > 0.
155 156
Two random variables xn and xm are called independent if

    Pxn,xm(a, b) = Pxn(a) · Pxm(b)

The derivative pxn(a) = P′xn(a) is called the probability density function.
This helps us to define quantities such as the

I expected value E(xn) = ∫ a pxn(a) da
I mean-square value (average power) E(|xn|²) = ∫ |a|² pxn(a) da
I variance Var(xn) = E[|xn − E(xn)|²] = E(|xn|²) − |E(xn)|²
I correlation Cor(xn, xm) = E(xn · x∗m)
I covariance Cov(xn, xm) = E[(xn − E(xn)) · (xm − E(xm))∗] =
  E(xn x∗m) − E(xn)E(xm)∗

The expected value E(·) is a linear operator: E(ax) = aE(x) and
E(x + y) = E(x) + E(y).
Variance is not linear, but Var(ax) = a²Var(x) and, if x and y are
independent, Var(x + y) = Var(x) + Var(y).

Stationary processes

A random sequence is called strict-sense stationary if

    Pxn1+l,...,xnk+l(a1, . . . , ak) = Pxn1,...,xnk(a1, . . . , ak)

for any shift l and any number k, that is if all joint probability
distributions are time invariant.
If the above condition holds at least for k = 1, then the mean

    E(xn) = mx

and variance

    E(|xn − mx|²) = σx²

are constant over all n. (σx is also called standard deviation).
If the above condition holds in addition also for k = 2, we call the
random sequence wide-sense stationary (WSS).
If a sequence is strict-sense stationary, it is always also wide-sense stationary, but not vice versa.
157 158

A wide-sense stationary random process {xn} can not only be
characterized by its mean mx = E(xn) and variance σx² = E(|xn − mx|²)
over all sample positions n.
It can, in addition, also be characterized by its autocorrelation sequence

    φxx(k) = E(xn+k · x∗n)

The autocorrelation sequence of a zero-mean version of a sequence is
called the autocovariance sequence

    γxx(k) = E[(xn+k − mx) · (xn − mx)∗] = φxx(k) − |mx|²

where γxx(0) = σx².
A pair of stationary random processes {xn} and {yn} can, in addition,
be jointly wide-sense stationary and therefore be characterized by their
crosscorrelation sequence

    φxy(k) = E(xn+k · yn∗)

Their crosscovariance sequence is then

    γxy(k) = E[(xn+k − mx) · (yn − my)∗] = φxy(k) − mx m∗y

The complex conjugates are only needed with complex-valued sequences.

Ergodic processes

If . . . , x−2, x−1, x0, x1, x2, . . . is a WSS random sequence, then we can
estimate the mean value and auto-correlation sequence from these
random variables from any location n as

    mx = E(xn)
    φxx(k) = E(xn+k x∗n)

What if we have just one sample sequence . . . , x−2, x−1, x0, x1, x2, . . .?
If we still can estimate mean and auto-correlation from that as

    mx = lim_{L→∞} 1/(2L+1) Σ_{n=−L}^{L} xn ≈ 1/N Σ_{n=1}^{N} xn   for large N

    φxx(k) = lim_{L→∞} 1/(2L+1) Σ_{n=−L}^{L} xn+k x∗n ≈ 1/N Σ_{n=1}^{N} xn+k x∗n

then we call the process mean ergodic and correlation ergodic, resp.
Ergodicity means that single-sample-sequence time averages are identical to averages over the
entire ensemble for a random process, or, in other words, variation along the time axis looks similar
to variation across the ensemble.
159 160
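For a correlation-ergodic process, these time averages can be computed from a single recorded sequence; a sketch, where the AR(1) test signal is an arbitrary choice:

    % Estimate mean and autocorrelation from one sample sequence (sketch)
    N = 1e5;
    x = filter(1, [1 -0.9], randn(1, N));   % example: an AR(1) process
    mx = mean(x);                           % time-average estimate of E(xn)
    phi = xcorr(x, 20, 'unbiased');         % estimates E(x(n+k)x(n)), |k| <= 20
    stem(-20:20, phi)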
Deterministic crosscorrelation sequence

For deterministic finite-energy sequences {xn} and {yn}, we can define
their crosscorrelation sequence as

    cxy(k) = Σ_{i=−∞}^{∞} xi+k · yi∗ = Σ_{i=−∞}^{∞} xi · y∗i−k.

If {xn} is similar to {yn}, but lags l elements behind (xn ≈ yn−l), then cxy(l) will be a peak in
the crosscorrelation sequence. It can therefore be used to locate shifted versions of a known
sequence in another one.
MATLAB’s xcorr function calculates the crosscorrelation sequence for two finite sequences
(vectors) of equal length (or zero-pads the shorter one). Option unbiased divides for each lag
through the length of the overlap, to estimate E(xn+k yn) from a pair of sample sequences {xn}
and {yn} from jointly correlation-ergodic WSS processes {xn} and {yn}.
This crosscorrelation sequence is essentially just convolution, with the
second input sequence mirrored:

    {cxy(n)} = {xn} ∗ {y−n}

It can therefore be calculated equally easily via the DTFT:

    Cxy(e^{jω̇}) = X(e^{jω̇}) · Y∗(e^{jω̇})

Swapping the input sequences mirrors the output sequence: cxy(k) = c∗yx(−k).

Demonstration of covert spread-spectrum communication:

    n = randn(1,10000);
    pattern=2*round(rand(1,1000))-1;
    p1 = [zeros(1,2000), pattern, zeros(1,7000)];
    p2 = [zeros(1,4000), pattern, zeros(1,5000)];
    r = n + p1/3 - p2/3;
    figure(1)
    plot([n;p1/3-3;p2/3-4;r-6]');
    figure(2)
    plot(conv(r,fliplr(pattern)));
    % or: plot(xcorr(r,pattern));

[Figure: the embedded pattern is invisible in the noisy waveform r, but
its crosscorrelation with the known pattern shows two clear peaks of
opposite sign at the two embedding positions.]
161 162

Deterministic autocorrelation sequence

Equivalently, we define the deterministic autocorrelation sequence in the
time domain as

    cxx(k) = Σ_{i=−∞}^{∞} xi+k x∗i

This is just the sequence convolved with a time-reversed version of itself:

    {cxx(k)} = {xi} ∗ {x∗−i}

This corresponds in the frequency domain to

    Cxx(e^{jω̇}) = X(e^{jω̇}) · X∗(e^{jω̇}) = |X(e^{jω̇})|².

In other words, the DTFT Cxx(e^{jω̇}) of the autocorrelation sequence
{cxx(n)} of a sequence {xn} is identical to the squared amplitudes of
the DTFT, or power spectrum, of {xn}.
This suggests, that the DTFT of the autocorrelation sequence of a
random process might be a suitable way for defining the power spectrum
of that random process.
What can we say about the phase in the Fourier spectrum of a time-invariant random process?

Power spectrum of a random sequence

For a zero-mean wide-sense stationary random sequence {xn} with
absolutely summable autocorrelation sequence

    φxx(k) = E(xn+k · x∗n)

we call the DTFT

    Φxx(e^{jω̇}) = Σ_{n=−∞}^{∞} φxx(n) · e^{−jω̇n}

of its autocorrelation sequence the power density spectrum (PDS) or
power spectrum of {xn}.
The power spectrum is real, even (for real-valued sequences),
non-negative and periodic.
163 164
The autocorrelation of a sequence {xn} with power spectrum Φxx(e^{jω̇}) is

    φxx(k) = 1/(2π) ∫_{−π}^{π} Φxx(e^{jω̇}) e^{jkω̇} dω̇

Since the variance of {xn} is

    Var(xn) = φxx(0) = 1/(2π) ∫_{−π}^{π} Φxx(e^{jω̇}) dω̇

we can interpret

    1/(2π) ∫_{2π fl/fs}^{2π fh/fs} Φxx(e^{jω̇}) dω̇

as the variance of the output of an ideal band-pass filter applied to {xn}
with cut-off frequencies 0 ≤ fl < fh.

Filtered random sequences

Let {xn} be a random sequence from a WSS random process. The output

    yn = Σ_{k=−∞}^{∞} hk · xn−k = Σ_{k=−∞}^{∞} hn−k · xk

of an LTI applied to it will then be another random sequence, characterized by

    my = E(yn) = E( Σ_{k=−∞}^{∞} hk · xn−k ) = Σ_{k=−∞}^{∞} hk · E(xn−k) = mx Σ_{k=−∞}^{∞} hk

and, with φxx(k) = E(xn+k · x∗n),

    φyy(k) = Σ_{i=−∞}^{∞} φxx(k − i) chh(i),   where chh(k) = Σ_{i=−∞}^{∞} hi+k h∗i.

In other words:

    {yn} = {hn} ∗ {xn}  ⇒  {φyy(n)} = {chh(n)} ∗ {φxx(n)},
                            Φyy(e^{jω̇}) = |H(e^{jω̇})|² · Φxx(e^{jω̇})

Similarly:

    {yn} = {hn} ∗ {xn}  ⇒  {φyx(n)} = {hn} ∗ {φxx(n)},
                            Φyx(e^{jω̇}) = H(e^{jω̇}) · Φxx(e^{jω̇})
165 166

Summary:

    {yn} = {hn} ∗ {xn}  ⇒  {φxx(n)} −(∗{hn})→ {φyx(n)} −(∗{h∗−n})→ {φyy(n)}

Proofs:

    φyx(l) = E(x∗n · yn+l) = E( x∗n · Σ_{k=−∞}^{∞} hk · xn+l−k )
           = Σ_{k=−∞}^{∞} hk · E(x∗n · xn+l−k)      (WSS)
           = Σ_{k=−∞}^{∞} hk · φxx(l − k)

    φyy(l) = E(yn∗ · yn+l) = E( Σ_{k=−∞}^{∞} h∗k x∗n−k · Σ_{m=−∞}^{∞} hm xn+l−m )
           = Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} h∗k · hm · E(x∗n−k · xn+l−m)      (WSS)
           = Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} h∗k · hm · φxx(l + k − m)
           (substituting i := m − k)
           = Σ_{i=−∞}^{∞} φxx(l − i) · Σ_{k=−∞}^{∞} h∗k · hk+i = Σ_{i=−∞}^{∞} φxx(l − i) · chh(i)

White noise

A random sequence {xn} is a white noise signal, if mx = 0 and
φxx(k) = σx² δk.
The power spectrum of a white noise signal is flat:

    Φxx(e^{jω̇}) = σx².

A commonly used form of white noise is white Gaussian noise (WGN),
where each random variable xn is independent and identically distributed
(i.i.d.) according to the normal-distribution probability density function

    pxn(x) = 1/√(2πσx²) · e^{−(x−mx)²/(2σx²)}

Application example:
Where an LTI {yn} = {hn} ∗ {xn} can be observed to operate on white
noise {xn} with φxx(k) = σx² δk, the crosscorrelation between input and
output will reveal the impulse response of the system:

    φyx(k) = σx² · hk

where φyx(k) = φ∗xy(−k) = E(yn+k · x∗n).

167 168
Demonstration of covert spread-spectrum radar:

    x = randn(1,10000);
    h = [0 0 0.4 0 0 0.3 0 0 0.2 0 0];
    y = conv(x, h);
    figure(1)
    plot(1:length(x), x, 1:length(y), y-5);
    figure(2)
    c = conv(fliplr(x),y);
    stem(c(length(c)/2-20:length(c)/2+20));

[Figure: the crosscorrelation of the white-noise probe x with the
channel output y reveals the channel impulse response h as three peaks.]

Spectral estimation: periodogram

Estimate the amplitude spectrum of the noisy discrete sequence

    xk = sin(2πk · 8/64) + sin(2πk · 14.32/64) + nk   with φnn(i) = 4δi

    n = 64;      % block length
    m = 1000;    % blocks averaged
    k = 1:(n*m);
    x = 2*randn(1, n*m) + ...
        sin(2*pi*k*8/n) + ...
        sin(2*pi*k*14.32/n);
    s1 = abs(fft(x(1:n))/n);
    s2 = abs(fft(x(1:8*n))/(8*n));

s1: Absolute values of a single 64-element DFT of {xn} (rect. window).
The flat spectrum of white noise is only an expected value. In a single
discrete Fourier transform of such a sequence, the significant variance of
the noise spectrum becomes visible. It almost drowns the two peaks from
the sine waves.
s2: Absolute values of a single 512-element DFT of {xn} (rect. window).
With an 8× larger window, the bandwidth of each frequency bin is now
reduced 8×, so the sine functions stand out better from the noise.
However, the variance in each frequency bin relative to the expected value
remains the same.
169 170

Spectral estimation: averaging

Estimate the amplitude spectrum of the same noisy discrete sequence

    xk = sin(2πk · 8/64) + sin(2πk · 14.32/64) + nk   with φnn(i) = 4δi

    n = 64;      % block length
    m = 1000;    % blocks averaged
    k = 1:(n*m);
    x = 2*randn(1, n*m) + ...
        sin(2*pi*k*8/n) + ...
        sin(2*pi*k*14.32/n);
    xx = reshape(x, n, m)';
    s3 = mean(abs(fft(xx, n, 2)/n),1);
    s4 = abs(mean(fft(xx, n, 2)/n,1));

s3: {xn} cut into 1000 consecutive 64-sample windows, showing the
average of the absolute values of the DFT of each window.
Non-coherent averaging: discard phase information first.
This better approximates the shape of the power spectrum, with a flat noise floor.
s4: Same 1000 windows, but this time the complex values of the DFTs are
averaged before the absolute value is taken ⇒ coherent averaging.
Because the DFT is linear, this is identical to first averaging all 1000 windows and then applying
a single DFT and taking its absolute value.
The windows start 64 samples apart. Only periodic waveforms with a period length that
divides 64 are not averaged away. This periodic averaging step suppresses both the noise
and the second sine wave.

Welch’s method for estimating PSD

“Periodogram”: single-rectangular-window DTFT power spectrum of a
random sequence {xn}: |X(ω̇)|² with X(ω̇) = Σ_{n=0}^{N−1} xn · e^{−2πjnω̇}.
Problem: Var[|X(ω̇)|²] / E[|X(ω̇)|²] does not drop with increasing window length N.
“Welch’s method” for estimating the PSD makes three improvements:

I Reduce leakage using a non-rectangular window sequence {wi}
  (“modified periodogram”)
I To reduce the variance, average K periodograms of length N.
I Triangular, Hamming, Hanning, etc. windows can be used with 50%
  overlap (L = N/2), such that all samples contribute with equal weight.

    xk,n = x_{k·L+n} · wn,   0 ≤ k < K, 0 ≤ n < N

    Xk(ω̇) = Σ_{n=0}^{N−1} xk,n · e^{−2πjnω̇}

    P(ω̇) = (1/K) Σ_{k=0}^{K−1} |Xk(ω̇)|²
171 172
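The toolbox function pwelch implements this; a minimal hand-rolled version of the averaged modified periodogram might look like this (the test sequence and the Hamming window are illustrative choices):

    % Welch PSD estimate (sketch of what pwelch does)
    x = 2*randn(1, 64000) + sin(2*pi*(1:64000)*8/64);  % test sequence
    N = 64; L = N/2;                  % block length, 50% overlap
    w = hamming(N)';                  % window sequence
    K = floor((length(x) - N)/L) + 1; % number of overlapping blocks
    P = zeros(1, N);
    for k = 0:K-1
      P = P + abs(fft(x(k*L + (1:N)) .* w)).^2;
    end
    P = P / K;                        % average the K periodograms
    plot(0:N-1, 10*log10(P))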
Periodic averaging

If a signal x(t) has a periodic component with period length tp, then we
can isolate this periodic component from discrete sequence xn = x(n/fs)
by periodic averaging

    x̄n = lim_{L→∞} 1/(2L+1) Σ_{i=−L}^{L} xn+pi ≈ 1/N Σ_{i=1}^{N} xn+pi,   n ∈ {0, . . . , p − 1}

but only if the period length in samples p = tp · fs is an integer.
Otherwise {xn} may need to be interpolated and resampled at an integer multiple of tp^{−1} first.
Periodic averaging of x(t) corresponds in the time domain to convolution
with a windowed Dirac comb a(t) = w(t) · Σi δ(t − tp i):

    x̄(t) = ∫s x(t − s) · a(s) ds

In the frequency domain, this means multiplication with a tp^{−1} spaced
Dirac comb that has been convolved with W(f).

Parametric models of the power spectrum

If we understand the physical process that generates a random sequence,
we may be able to model and estimate its power spectrum more
accurately, with fewer parameters.
If {xn} can be modeled as white noise filtered by an LTI system H(e^{jω̇}),
then

    Φxx(e^{jω̇}) = σw² |H(e^{jω̇})|².

Often such an LTI can be modeled as an IIR filter with

    H(e^{jω̇}) = (b0 + b1 z^{−1} + b2 z^{−2} + · · · + bm z^{−m}) / (a0 + a1 z^{−1} + a2 z^{−2} + · · · + ak z^{−k}).

The auto-regressive moving-average model ARMA(k, m) is

    xn = Σ_{l=0}^{m} bl · wn−l − Σ_{l=1}^{k} al · xn−l

where {wn} is stationary white noise with variance σw².
There is also the simpler AR(k) model xn = wn − Σ_{l=1}^{k} al · xn−l.
173 174
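With an integer period length p, periodic averaging is a one-line reshape; a sketch with assumed values of p and N:

    % Periodic averaging (sketch): integer period length p assumed
    p = 64; N = 1000;                          % assumed period, no. of periods
    x = sin(2*pi*(0:p*N-1)/p) + randn(1, p*N); % test signal: tone + noise
    xbar = mean(reshape(x, p, N), 2)';         % average N periods of length p
    plot(0:p-1, xbar)                          % noise reduced by factor sqrt(N)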

IQ sampling / downconversion / complex baseband signal

Consider a signal x(t) ∈ R in which only frequencies fl < |f| < fh are of
interest. This band has a centre frequency of fc = (fl + fh)/2 and a
bandwidth B = fh − fl. It can be sampled efficiently (at the lowest
possible sampling frequency) by downconversion:

I Shift its spectrum by −fc:

    y(t) = x(t) · e^{−2πjfc t}

I Low-pass filter it with a cut-off frequency of B/2:

    z(t) = B ∫_{−∞}^{∞} y(τ) · sinc((t − τ)B) · dτ  •−◦  Z(f) = Y(f) · rect(f/B)

I Sample the result at sampling frequency fs ≥ B:

    zn = z(n/fs)

[Figure: the spectrum X(f), its shifted version Y(f), the anti-aliasing
filter, and the resulting baseband spectra Z(f) and sampled Ẑ(f).]

Shifting the center frequency fc of the interval of interest to 0 Hz (DC)
makes the spectrum asymmetric. This leads to a complex-valued
time-domain representation
(∃f : Z(f) ≠ [Z(−f)]∗ =⇒ ∃t : z(t) ∈ C \ R).
175 176
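These three steps map directly onto a few lines of MATLAB; all parameter values below are illustrative assumptions:

    % IQ downconversion of a real band-pass signal (sketch)
    fs = 96000; fc = 12000; B = 4000;   % assumed rates and band
    t = (0:fs-1)/fs;                    % one second of signal
    x = cos(2*pi*(fc+1000)*t);          % test tone 1 kHz above fc
    y = x .* exp(-2j*pi*fc*t);          % shift spectrum by -fc
    lp = fir1(200, B/fs);               % low-pass, cut-off B/2
    z = filter(lp, 1, y);               % complex baseband signal
    plot(real(z(1:500)), imag(z(1:500)))   % phasor rotating at +1 kHz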
IQ upconversion / interpolation

Given a discrete sequence of downconverted samples zn ∈ C recorded
with sampling frequency fs at centre frequency fc (as on slide 175), how
can we reconstruct a continuous waveform x̃(t) ∈ R that matches the
original signal x(t) within the frequency interval fl to fh?
Reconstruction steps:

I Interpolation of complex baseband signal (remove aliases):

    z̃(t) = Σ_{n=−∞}^{∞} zn · sinc(t · fs − n)

I Upconvert by modulating a complex phasor at carrier frequency fc.
  Then discard the imaginary part (to reconstruct the negative
  frequency components of the original real-valued signal):

    x̃(t) = 2ℜ( z̃(t) · e^{2πjfc t} )
         = 2ℜ( (ℜ(z̃(t)) + jℑ(z̃(t))) · (cos 2πfc t + j sin 2πfc t) )
         = 2ℜ(z̃(t)) · cos 2πfc t − 2ℑ(z̃(t)) · sin 2πfc t

Recall that 2ℜ(c) = c + c∗ for all c ∈ C.

Example: IQ downconversion of a sine wave

What happens if we downconvert the input signal

    x(t) = A · cos(2πf t + φ) = (A/2) · e^{2πjft + jφ} + (A/2) · e^{−2πjft − jφ}

using centre frequency fc and bandwidth B < 2fc with |f − fc| < B/2?
After frequency shift:

    y(t) = x(t) · e^{−2πjfc t} = (A/2) · e^{2πj(f−fc)t + jφ} + (A/2) · e^{−2πj(f+fc)t − jφ}

After low-pass filter with cut-off frequency B/2 < fc < f + fc:

    z(t) = (A/2) · e^{2πj(f−fc)t + jφ}

After sampling:

    zn = (A/2) · e^{2πj(f−fc)n/fs + jφ}

177 178

Software-defined radio (SDR) front end

IQ downconversion in SDR receiver:
[Block diagram: the antenna signal x(t) is multiplied with cos(2πfc t)
and with a −90°-shifted copy, each product is low-pass filtered and
sampled, yielding the in-phase samples I and quadrature samples Q.]

The real part ℜ(z(t)) is also known as “in-phase” signal (I) and
the imaginary part ℑ(z(t)) as “quadrature” signal (Q).

IQ upconversion in SDR transmitter:
[Block diagram: the samples zn are converted to impulses, interpolated,
multiplied with a cos/sin carrier pair at fc (one path shifted by +90°),
and summed to form the antenna signal x̃(t).]

In SDR, x(t) is the antenna voltage and zn appears on the
digital interface with the microprocessor.

SDR front-end hardware examples

Low-cost USB-dongle receivers: ≈£20
Realtek RTL2832U/R820T (RTL-SDR)
USB2, fs < 2.5 MHz, fc = 24–1776 MHz, 8-bit IQ samples
http://osmocom.org/projects/sdr/wiki/rtl-sdr

Mid range transceivers: £250–£2k
HackRF One, Ettus USRP B200/N200, etc.
USB3 or 1-Gbit Ethernet, fs = 10–50 MHz,
fc = 0–6 GHz, 16-bit IQ samples

High-end measurement kit: £3k–£40k
National Instruments (NI), Rohde&Schwarz, etc.
10 Gbit/PCIe, FPGA, fs = 60–1000 MHz,
fc = 0–14 GHz, float32 IQ samples in volts

SDR front ends are also commonly used today in military radios,
spectrum surveillance, amateur-radio stations, mobile-phone base
stations, MRI machines, radars, etc.
179 180
Visualization of IQ representation of sine waves

[Block diagram: IQ downconverter as on the previous slide, producing
z(t) with components I and Q.]

Recall these products of sine and cosine functions:

I cos(x) · cos(y) = ½ cos(x − y) + ½ cos(x + y)
I sin(x) · sin(y) = ½ cos(x − y) − ½ cos(x + y)
I sin(x) · cos(y) = ½ sin(x − y) + ½ sin(x + y)

Consider: (with x = 2πfc t)

I sin(x) = cos(x − ½π)
I cos(x) · cos(x) = ½ + ½ cos 2x
I sin(x) · sin(x) = ½ − ½ cos 2x
I sin(x) · cos(x) = 0 + ½ sin 2x
I cos(x) · cos(x − ϕ) = ½ cos(ϕ) + ½ cos(2x − ϕ)
I sin(x) · cos(x − ϕ) = ½ sin(ϕ) + ½ sin(2x − ϕ)

IQ representation of amplitude-modulated signals

Assume voice signal s(t) contains only frequencies below B/2.
Antenna signal amplitude-modulated with carrier frequency fc:

    x(t) = s(t) · A · cos(2πfc t + ϕ)

After IQ downconversion with centre frequency f′c ≈ fc:

    z(t) = (A/2) · s(t) · e^{2πj(fc−f′c)t + jϕ}

[Figure: the phasor z(t) in the complex plane stays on a fixed line
through the origin for perfect tuning, and rotates slowly otherwise.]

With perfect receiver tuning f′c = fc:

    z(t) = (A/2) · s(t) · e^{jϕ}

Reception techniques:

I Non-coherent demodulation (requires s(t) ≥ 0):

    s(t) = (2/A) · |z(t)|

I Coherent demodulation (requires knowing ϕ and f′c = fc):

    s(t) = (2/A) · ℜ[z(t) · e^{−jϕ}]
181 182

IQ representation of frequency-modulated signals

In frequency modulation, the voice signal s(t) changes the
carrier frequency fc: fc(t) = fc + k · s(t)
Compared to a constant-frequency carrier signal cos(2πfc t + ϕ), to allow variable frequency, we
need to replace the phase-accumulating term 2πfc t with an integral 2π ∫ fc(t) dt.
Frequency-modulated antenna signal:

    x(t) = A · cos( 2π ∫₀ᵗ [fc + k · s(τ)] dτ + ϕ )
         = A · cos( 2πfc t + 2πk ∫₀ᵗ s(τ) dτ + ϕ )

After IQ downconversion from centre frequency fc:

    z(t) = (A/2) · e^{2πjk ∫₀ᵗ s(τ)dτ + jϕ}

Therefore, s(t) is proportional to the rotational rate of z(t).

Frequency demodulating IQ samples

Determine s(t) from downconverted signal z(t) = (A/2) · e^{2πjk ∫₀ᵗ s(τ)dτ + jϕ}.
First idea: measure the angle ∠z(t), where the angle operator ∠ is
defined such that ∠ae^{jφ} = φ (a, φ ∈ R, a > 0). Then take its derivative:

    s(t) = 1/(2πk) · d/dt ∠z(t)

Problem: angle ambiguity, ∠ works only for −π ≤ φ < π.
Ugly hack: MATLAB function unwrap removes 2π jumps from sample sequences
Better idea: first take the complex derivative

    dz(t)/dt = (A/2) · 2πjk · s(t) · e^{2πjk ∫₀ᵗ s(τ)dτ + jϕ}

then divide by z(t):  (dz(t)/dt) / z(t) = 2πjk · s(t)
Other practical approaches:

I s(t) ∝ ℑ[ (dz(t)/dt) · z∗(t) ] / |z(t)|²
I s(t) ∝ ∠( z(t)/z(t − ∆t) ) / ∆t
183 184
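On sampled IQ data the last approach becomes essentially one line; a self-contained sketch with an assumed deviation constant k and a simulated test tone:

    % FM demodulation of IQ samples: s(n) ~ angle(z(n)*conj(z(n-1)))
    fs = 48000; k = 5000;                  % assumed deviation constant (Hz)
    t = (0:fs-1)/fs;
    s = sin(2*pi*440*t);                   % test modulating tone
    z = 0.5*exp(2j*pi*k*cumsum(s)/fs);     % simulated IQ baseband signal
    sd = angle(z(2:end) .* conj(z(1:end-1))) * fs/(2*pi*k);  % demodulate
    plot(t(2:500), sd(1:499), t(2:500), s(2:500))            % curves overlap

Multiplying z(n) by the conjugate of the previous sample measures the phase advance per sample without any unwrap step.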
Digital modulation schemes

Pick zn ∈ C from a constellation of 2^n symbols to encode n bits:

[Constellation diagrams: ASK (amplitude levels 0, 1 on the real axis),
BPSK (±1), QPSK (four phases, labels 00, 01, 11, 10), 8PSK (eight
phases, labels 000–111), 16QAM (4 × 4 grid), FSK (two frequencies).]

Basic model of a modem

[Block diagram: bits bj ∈ {0, 1} → symbols ai ∈ S → impulse train
Σ ai δ(t − its) → transmit filter ∗ht(t) → IQ upconversion → LTI
channel (∗hc(t), noise +n(t)) → IQ downconversion → receive filter
∗hr(t) → sampling → data slicer → bits]

IQ up/down conversion: only required for pass-band channels (e.g., radio)
185 186

Pulse Amplitude Modulation (PAM)

Baseband transmission (e.g., for wires), no IQ up/down conversion

I binary PAM: ai ∈ S = {−A, A} ⊂ R
  1 bit/symbol ⇒ bit rate (bit/s) = symbol rate (baud)
I m-ary PAM: ai ∈ S = {A1, . . . , Am} ⊂ R
  k = log2 m bit/symbol ⇒ bit rate (bit/s) > symbol rate (baud)
I bit sequence {bj} → symbol sequence {ai},
  ai = f(bki, . . . , bki+k−1)

Pulse generator (symbol rate fs = ts^{−1}):

    x̂(t) = Σi ai · δ(t − its)

Transmit filter: x = x̂ ∗ ht, X(f) = X̂(f) · Ht(f)

    x(t) = Σi ai · ht(t − its)

Channel:

    z(t) = ∫₀^∞ hc(s) x(t − s) ds + n(t)

PAM reception

Receive filter applied to channel output z(t):

    y(t) = ∫₀^∞ hr(s) z(t − s) ds

Initial symbol pulses x̂(t) have now passed through three LTIs:

    y = h ∗ x̂
    h = ht ∗ hc ∗ hr
    H(f) = Ht(f) · Hc(f) · Hr(f)

Sample y(t) at times tn = nts + td with delay td where pulse magnitude
is largest:

    yn = y(nts + td) = Σi ai h((n − i)ts + td) + vn

where vn = v(nts + td) is the sampled noise v = n ∗ hr.
Data slicer: compare yn against thresholds and convert detected nearest
symbol a′n ∈ S back into bits b′kn, . . . , b′kn+k−1.

187 188
Synchronization

The receiver needs to know the times tn when to sample y(t).

I Local clock generators have temperature-dependent frequency drift.
I In some transmission systems, the transmitter provides the sample
  clock on a separate wire (or wire pair).
  For example: DVI and HDMI video cables contain four wire pairs: three transmitting
  red/green/blue pixel bytes (using an 8b/10b line encoding), and one providing a pixel clock
  signal, which the receiver multiplies 10× to get a bit clock.
I More commonly, the receiver has to extract the sample clock from
  the received signal, for example by tracking the phase of transitions
  (phase locked loop, PLL).
  This works reliably only if there are regular transitions.
  • Some systems use a line encoding (e.g., Manchester code, 8b/10b
    encoding) to ensure regular transitions.
    Some line encodings add a spectral line at the symbol rate, which the receiver can
    extract with a band-pass filter, others first require a non-linear step, e.g. squaring.
  • Others use a scrambler: the data bits bi are XORed with the output
    of synchronized deterministic random-bit generators (e.g., a
    maximum-length linear feedback shift register), in both the sender
    and recipient, to make long runs of the same symbol unlikely.

Intersymbol interference

For notational convenience: set td = 0 and allow h(t) to be non-causal.

    yn = an h(0) + Σ_{i≠n} ai h((n − i)ts) + vn

Ideally, we want

    h(its) = 1 for i = 0,  h(its) = 0 for i ≠ 0

otherwise yn will depend on other (mainly previous) symbols, not just on
an ⇒ intersymbol interference. (See also: interpolation function)

Nyquist ISI criterion

    yn = an + vn  ⇔  h(t) · Σi δ(t − its) = δ(t)
                  ⇔  H(f) ∗ fs Σi δ(f − ifs) = 1
                  ⇔  Σi H(f − ifs) = ts
189 190

Some possible pulse-shape choices

I h(t) = rect(t/ts) •−◦ H(f) = ts sinc(f/fs)
  Rectangular pulses may be practical on fibre optics and short cables, where there are no
  bandwidth restrictions. Not suitable for radio: bandwidth high compared to symbol rate.
I h(t) = sinc(t/ts) •−◦ H(f) = ts rect(f/fs)
  Most bandwidth efficient pulse shape, but very long symbol waveform, very sensitive to
  clock synchronization errors.
I Raised-cosine filter: rectangle with half-period cosine transitions

    H(f) = ts                                      for |f| ≤ fs/2 − β
    H(f) = ts cos²( π/(4β) · (|f| − fs/2 + β) )    for fs/2 − β < |f| ≤ fs/2 + β
    H(f) = 0                                       for |f| > fs/2 + β

    h(t) = sinc(t/ts) · cos(2πβt) / (1 − (4βt)²)

  Transition width (roll-off) β with 0 ≤ β ≤ fs/2; for β = 0 this is H(f) = ts rect(f/fs).
I Gaussian filter: both h(t) and H(f) are Gaussians (self-Fourier)
  Fastest transition without overshoot in either time or frequency domain, but does not satisfy
  Nyquist ISI criterion.

Optimal transmit and receive filtering

Nyquist ISI criterion dictates H(f) = Ht(f) · Hc(f) · Hr(f).
Bandwidth limits guide the choice of Ht(f), and the channel dictates Hc(f) and N(f).
How should we then choose Ht(f) and Hr(f)?
Select a received pulse spectrum Pr(f), e.g. raised cosine. Then for some
arbitrary gain factor k > 0:

    H(f) = Ht(f) · Hc(f) · Hr(f) = k · Pr(f)

Optimal filters
Minimize the noise variance Var(vn) = ∫ N(f)|Hr(f)|² df at the slicer
relative to the symbol distance:

    |Hr(f)| = √( Pr(f) / (N(f) Hc(f)) )

    |Ht(f)| = k · √( Pr(f) N(f) / Hc(f) )

If N(f) and Hc(f) are flat: |Hr(f)| = |Ht(f)|/k′, e.g. root raised cosine.

191 192
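The Nyquist ISI criterion is easy to verify numerically for the raised-cosine pulse: sampled at multiples of ts, it vanishes everywhere except at t = 0. A sketch, with an arbitrary roll-off value:

    % Raised-cosine pulse: check Nyquist ISI criterion h(i*ts) = delta_i
    ts = 1; beta = 0.3;            % roll-off, 0 <= beta <= 1/(2*ts)
    i = -8:8;                      % symbol-spaced sampling instants
    h = sinc(i/ts) .* cos(2*pi*beta*i) ./ (1 - (4*beta*i).^2);
    disp(h)                        % 1 at i = 0, ~0 elsewhere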
Audiovisual data compression

Structure of modern audiovisual communication systems:

[Block diagram: signal → sensor + sampling → perceptual coding →
entropy coding → channel coding → noisy channel → channel decoding
→ entropy decoding → perceptual decoding → display → human senses]

Audio-visual lossy coding today typically consists of these steps:

I A transducer converts the original stimulus into a voltage.
I This analog signal is then sampled and quantized.
  The digitization parameters (sampling frequency, quantization levels) are preferably chosen
  generously beyond the ability of human senses or output devices.
I The digitized sensor-domain signal is then transformed into a
  perceptual domain.
  This step often mimics some of the first neural processing steps in humans.
I This signal is quantized again, based on a perceptual model of what level
  of quantization-noise humans can still sense.
I The resulting quantized levels may still be highly statistically dependent.
  A prediction or decorrelation transform exploits this and produces a less
  dependent symbol sequence of lower entropy.
I An entropy coder turns that into an apparently-random bit string, whose
  length approximates the remaining entropy.

The first neural processing steps in humans are in effect often a kind of decorrelation transform;
our eyes and ears were optimized like any other AV communications system. This allows us to use
the same transform for decorrelating and transforming into a perceptually relevant domain.
193 194

Outline of the remaining lectures

I Quick review of entropy coding
I Transform coding: techniques for converting sequences of
  highly-dependent symbols into less-dependent lower-entropy
  sequences.
  • run-length coding
  • decorrelation, Karhunen-Loève transform (PCA)
  • Discrete cosine transform
I Introduction to some characteristics and limits of human senses
  • perceptual scales and sensitivity limits
  • colour vision
I Quantization techniques to remove information that is irrelevant to
  human senses

Entropy coding review – Huffman

    Entropy: H = Σ_{α∈A} p(α) · log2 (1/p(α)) = 2.3016 bit

[Figure: Huffman code tree for symbols u (p = 0.35), v (0.20),
w (0.20), x (0.15), y (0.05), z (0.05); internal nodes 0.10, 0.25,
0.40, 0.60 and root 1.00, giving code lengths 2, 2, 2, 3, 4, 4 bits.]

Mean codeword length: 2.35 bit

Huffman’s algorithm constructs an optimal code-word tree for a set of
symbols with known probability distribution. It iteratively picks the two
elements of the set with the smallest probability and combines them into
a tree by adding a common root. The resulting tree goes back into the
set, labeled with the sum of the probabilities of the elements it combines.
The algorithm terminates when less than two elements are left.
195 196
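Both numbers on this slide can be checked quickly; huffmandict is in the Communications Toolbox (the symbol names are arbitrary labels):

    % Entropy vs. Huffman mean codeword length for the example distribution
    p = [0.35 0.20 0.20 0.15 0.05 0.05];
    H = sum(p .* log2(1./p))                          % 2.3016 bit
    [dict, avglen] = huffmandict(num2cell('uvwxyz'), p);
    avglen                                            % 2.35 bit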
Entropy coding review – arithmetic coding

Partition [0,1] according to symbol probabilities:

    u: [0, 0.35), v: [0.35, 0.55), w: [0.55, 0.75), x: [0.75, 0.9),
    y: [0.9, 0.95), z: [0.95, 1.0]

Encode text wuvw . . . as numeric value (0.58. . . ) in nested intervals:

[Figure: the interval narrows from [0.55, 0.75] for w, to [0.55, 0.62]
for u, to [0.5745, 0.5885] for v, to [0.5822, 0.5850] for the final w.]

Arithmetic coding

Several advantages:

I Length of output bitstring can approximate the theoretical
  information content of the input to within 1 bit.
I Performs well with probabilities > 0.5, where the information per
  symbol is less than one bit.
I Interval arithmetic makes it easy to change symbol probabilities (no
  need to modify code-word tree) ⇒ convenient for adaptive coding

Can be implemented efficiently with fixed-length arithmetic by rounding
probabilities and shifting out leading digits as soon as leading zeros
appear in interval size. Usually combined with adaptive probability
estimation.
Huffman coding remains popular because of its simplicity and lack of patent-licence issues.
197 198

Coding of sources with memory and correlated symbols

Run-length coding:
[Example: runs of identical symbols are replaced by their run
lengths, e.g. ↓ 5 7 12 3 3.]

Predictive coding:
[Block diagram: the encoder transmits g(t) = f(t) − P(f(t−1), f(t−2), . . .);
the decoder recovers f(t) = g(t) + P(f(t−1), f(t−2), . . .) using the
same predictor.]

Delta coding (DPCM): P(x) = x

Linear predictive coding: P(x1, . . . , xn) = Σ_{i=1}^{n} ai xi

Old (Group 3 MH) fax code

I Run-length encoding plus modified Huffman code
I Fixed code table (from eight sample pages)
I separate codes for runs of white and black pixels
I termination code in the range 0–63 switches between black and white code
I makeup code can extend length of a run by a multiple of 64
I termination run length 0 needed where run length is a multiple of 64
I single white column added on left side before transmission
I makeup codes above 1728 equal for black and white
I 12-bit end-of-line marker: 000000000001 (can be prefixed by up to
  seven zero-bits to reach next byte boundary)

Example: line with 2 w, 4 b, 200 w, 3 b, EOL →
1000|011|010111|10011|10|000000000001

    pixels  white code   black code
    0       00110101     0000110111
    1       000111       010
    2       0111         11
    3       1000         10
    4       1011         011
    5       1100         0011
    6       1110         0010
    7       1111         00011
    8       10011        000101
    9       10100        000100
    10      00111        0000100
    11      01000        0000101
    12      001000       0000111
    13      000011       00000100
    14      110100       00000111
    15      110101       000011000
    16      101010       0000010111
    ...     ...          ...
    63      00110100     000001100111
    64      11011        0000001111
    128     10010        000011001000
    192     010111       000011001001
    ...     ...          ...
    1728    010011011    0000001100101
199 200
Modern (JBIG) fax code

Performs context-sensitive arithmetic coding of binary pixels. Both encoder and
decoder maintain statistics on how the black/white probability of each pixel
depends on these 10 previously transmitted neighbours:

[Figure: 10-pixel context template surrounding the pixel “?” to be coded.]

Based on the counted numbers nblack and nwhite of how often each pixel value
has been encountered so far in each of the 1024 contexts, the probability for
the next pixel being black is estimated as

    pblack = (nblack + 1) / (nwhite + nblack + 2)

The encoder updates its estimate only after the newly counted pixel has been
encoded, such that the decoder knows the exact same statistics.
Joint Bi-level Expert Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/

Statistical dependence

Random variables X, Y are dependent iff ∃x, y:

    P(X = x ∧ Y = y) ≠ P(X = x) · P(Y = y).

If X, Y are dependent, then

    ⇒ ∃x, y : P(X = x | Y = y) ≠ P(X = x) ∨
              P(Y = y | X = x) ≠ P(Y = y)
    ⇒ H(X|Y) < H(X) ∨ H(Y|X) < H(Y)

Application
Where x is the value of the next symbol to be transmitted and y is the
vector of all symbols transmitted so far, accurate knowledge of the
conditional probability P(X = x | Y = y) will allow a transmitter to
remove all redundancy.
An application example of this approach is JBIG, but there y is limited to
10 past single-bit pixels and P(X = x | Y = y) is only an estimate.
201 202

Practical limits of measuring conditional probabilities

The practical estimation of conditional probabilities, in their most general
form, based on statistical measurements of example signals, quickly
reaches practical limits. JBIG needs an array of only 2^11 = 2048 counting
registers to maintain estimator statistics for its 10-bit context.
If we wanted to encode each 24-bit pixel of a colour image based on its
statistical dependence of the full colour information from just ten
previous neighbour pixels, the required number of

    (2^24)^11 ≈ 3 × 10^80

registers for storing each probability will exceed the estimated number of
particles in this universe. (Neither will we encounter enough pixels to
record statistically significant occurrences in all (2^24)^10 contexts.)
This example is far from excessive. It is easy to show that in colour
images, pixel values show statistically significant dependence across colour
channels, and across locations more than eight pixels apart.
A simpler approximation of dependence is needed: correlation.

Correlation

Two random variables X ∈ R and Y ∈ R are correlated iff

    E{[X − E(X)] · [Y − E(Y)]} ≠ 0

where E(· · · ) denotes the expected value of a random-variable term.
Correlation implies dependence, but dependence does not always lead to
correlation. However, most dependency in audio-visual data is a
consequence of correlation, which is algorithmically much easier to
exploit.
[Figure: a ring-shaped point cloud, dependent but not correlated.]
Positive correlation: higher X ⇔ higher Y, lower X ⇔ lower Y
Negative correlation: lower X ⇔ higher Y, higher X ⇔ lower Y
203 204
Correlation of neighbour pixels

[Figure: scatter plots of the values of neighbour pixels at distances 1,
2, 4 and 8 (value range 0–256); the correlation visibly decreases with
increasing distance.]

Covariance and correlation

We define the covariance of two random variables X and Y as

    Cov(X, Y) = E{[X − E(X)] · [Y − E(Y)]} = E(X · Y) − E(X) · E(Y)

and the variance as Var(X) = Cov(X, X) = E{[X − E(X)]²}.
The Pearson correlation coefficient

    ρX,Y = Cov(X, Y) / √(Var(X) · Var(Y))

is a normalized form of the covariance. It is limited to the range [−1, 1].
If the correlation coefficient has one of the values ρX,Y = ±1, this
implies that X and Y are exactly linearly dependent, i.e. Y = aX + b,
with a = Cov(X, Y)/Var(X) and b = E(Y) − a E(X).
205 206
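Such scatter plots and the matching correlation coefficients are quick to produce; a sketch for a greyscale image (the file name is a placeholder):

    % Correlation of horizontal neighbour pixels at distance d (sketch)
    img = double(imread('example.png'));   % placeholder file, greyscale
    for d = [1 2 4 8]
      x = img(:, 1:end-d); y = img(:, 1+d:end);
      r = corrcoef(x(:), y(:));
      fprintf('distance %d: rho = %.3f\n', d, r(1,2));
    end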

Covariance Matrix

For a random vector X⃗ = (X1, X2, . . . , Xn) ∈ R^n we define the
covariance matrix

    Cov(X⃗) = E( (X⃗ − E(X⃗)) · (X⃗ − E(X⃗))^T ) = (Cov(Xi, Xj))_{i,j} =

    ( Cov(X1, X1)  Cov(X1, X2)  Cov(X1, X3)  · · ·  Cov(X1, Xn) )
    ( Cov(X2, X1)  Cov(X2, X2)  Cov(X2, X3)  · · ·  Cov(X2, Xn) )
    ( Cov(X3, X1)  Cov(X3, X2)  Cov(X3, X3)  · · ·  Cov(X3, Xn) )
    (      ⋮             ⋮             ⋮        ⋱        ⋮      )
    ( Cov(Xn, X1)  Cov(Xn, X2)  Cov(Xn, X3)  · · ·  Cov(Xn, Xn) )

The elements of a random vector X⃗ are uncorrelated if and only if
Cov(X⃗) is a diagonal matrix.
Cov(X, Y) = Cov(Y, X), so all covariance matrices are symmetric:
Cov(X⃗) = Cov^T(X⃗).

Decorrelation by coordinate transform

[Figure: scatter plots of neighbour-pixel value pairs before and after
the transform; probability distributions and entropies: correlated value
pair H = 13.90 bit, decorrelated value 1 H = 7.12 bit, decorrelated
value 2 H = 4.75 bit.]

Idea: Take the values of a group of correlated symbols (e.g., neighbour
pixels) as a random vector. Find a coordinate transform (multiplication
with an orthonormal matrix) that leads to a new random vector whose
covariance matrix is diagonal. The vector components in this transformed
coordinate system will no longer be correlated. This will hopefully reduce
the entropy of some of these components.

207 208
Theorem: Let X⃗ ∈ R^n and Y⃗ ∈ R^n be random vectors that are linearly
dependent with Y⃗ = AX⃗ + b, where A ∈ R^{n×n} and b ∈ R^n are
constants. Then

    E(Y⃗) = A · E(X⃗) + b
    Cov(Y⃗) = A · Cov(X⃗) · A^T

Proof: The first equation follows from the linearity of the expected-value
operator E(·), as does E(A · X⃗ · B) = A · E(X⃗) · B for matrices A, B.
With that, we can transform

    Cov(Y⃗) = E( (Y⃗ − E(Y⃗)) · (Y⃗ − E(Y⃗))^T )
            = E( (AX⃗ − AE(X⃗)) · (AX⃗ − AE(X⃗))^T )
            = E( A(X⃗ − E(X⃗)) · (X⃗ − E(X⃗))^T A^T )
            = A · E( (X⃗ − E(X⃗)) · (X⃗ − E(X⃗))^T ) · A^T
            = A · Cov(X⃗) · A^T

Quick review: eigenvectors and eigenvalues

We are given a square matrix A ∈ R^{n×n}. The vector x ∈ R^n is an
eigenvector of A if there exists a scalar value λ ∈ R such that

    Ax = λx.

The corresponding λ is the eigenvalue of A associated with x.
The length of an eigenvector is irrelevant, as any multiple of it is also an
eigenvector. Eigenvectors are in practice normalized to length 1.

Spectral decomposition

Any real, symmetric matrix A = A^T ∈ R^{n×n} can be diagonalized into the
form

    A = U ΛU^T,

where Λ = diag(λ1, λ2, . . . , λn) is the diagonal matrix of the ordered
eigenvalues of A (with λ1 ≥ λ2 ≥ · · · ≥ λn), and the columns of U are
the n corresponding orthonormal eigenvectors of A.
209 210

Karhunen-Loève transform (KLT)

We are given a random vector variable X⃗ ∈ ℝⁿ. The correlation of the
elements of X⃗ is described by the covariance matrix Cov(X⃗).

How can we find a transform matrix A that decorrelates X⃗, i.e. that
turns Cov(AX⃗) = A · Cov(X⃗) · Aᵀ into a diagonal matrix? A would
provide us the transformed representation Y⃗ = AX⃗ of our random
vector, in which all elements are mutually uncorrelated.

Note that Cov(X⃗) is symmetric. It therefore has n real eigenvalues
λ1 ≥ λ2 ≥ · · · ≥ λn and a set of associated mutually orthogonal
eigenvectors b1, b2, . . . , bn of length 1 with

    Cov(X⃗)bi = λi bi.

We convert this set of equations into matrix notation using the matrix
B = (b1, b2, . . . , bn) that has these eigenvectors as columns and the
diagonal matrix D = diag(λ1, λ2, . . . , λn) that consists of the
corresponding eigenvalues:

    Cov(X⃗)B = BD

B is orthonormal, that is BBᵀ = I.
Multiplying the above from the right with Bᵀ leads to the spectral
decomposition

    Cov(X⃗) = BDBᵀ

of the covariance matrix. Similarly multiplying instead from the left with
Bᵀ leads to

    Bᵀ Cov(X⃗) B = D

and therefore shows with

    Cov(BᵀX⃗) = D

that the eigenvector matrix Bᵀ is the wanted transform.

The Karhunen-Loève transform (also known as Hotelling transform or
Principal Component Analysis) is the multiplication of a correlated
random vector X⃗ with the orthonormal eigenvector matrix Bᵀ from the
spectral decomposition Cov(X⃗) = BDBᵀ of its covariance matrix. This
leads to a decorrelated random vector BᵀX⃗ whose covariance matrix is
diagonal.
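A minimal NumPy sketch of this recipe (added here for illustration): estimate Cov(X⃗) from samples, compute the orthonormal eigenvector matrix B, and verify that Bᵀ decorrelates the samples.

```python
import numpy as np

rng = np.random.default_rng(1)
# m samples of a correlated 3-dimensional random vector (columns of S)
m = 10000
S = rng.standard_normal((3, m))
S[1] += 0.8 * S[0]                  # introduce correlation between components
S[2] += 0.5 * S[0] + 0.3 * S[1]

C = np.cov(S)                       # covariance estimate (uses the m-1 normalization)
lam, B = np.linalg.eigh(C)          # eigenvalues (ascending) and orthonormal eigenvectors
B = B[:, ::-1]                      # reorder: lambda_1 >= ... >= lambda_n

Y = B.T @ (S - S.mean(axis=1, keepdims=True))   # KLT: Y = B^T (X - E(X))
print(np.round(np.cov(Y), 4))       # approximately diagonal
```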
Karhunen-Loève transform example I

[Figure: colour image and its red, green and blue channels, each shown separately as a black/white image]

The colour image (left) has m = r² pixels, each of which is an
n = 3-dimensional RGB vector

    Ix,y = (rx,y, gx,y, bx,y)ᵀ

The three rightmost images show each of these colour planes separately
as a black/white image.

We want to apply the KLT on a set of such ℝⁿ colour vectors.
Therefore, we reformat the image I into an n × m matrix of the form

    S = ( r1,1  r1,2  r1,3  · · ·  rr,r )
        ( g1,1  g1,2  g1,3  · · ·  gr,r )
        ( b1,1  b1,2  b1,3  · · ·  br,r )

We can now define the mean colour vector

    S̄c = (1/m) · Σ_{i=1}^{m} Sc,i,        S̄ = (0.4839, 0.4456, 0.3411)ᵀ

and the covariance matrix

    Cc,d = (1/(m − 1)) · Σ_{i=1}^{m} (Sc,i − S̄c)(Sd,i − S̄d)

    C = ( 0.0328  0.0256  0.0160 )
        ( 0.0256  0.0216  0.0140 )
        ( 0.0160  0.0140  0.0109 )

["m − 1" because S̄c only estimates the mean]

The resulting covariance matrix C has three eigenvalues 0.0622, 0.0025,
and 0.0006:

    C · ( 0.7167, 0.5833,  0.3822)ᵀ = 0.0622 · ( 0.7167, 0.5833,  0.3822)ᵀ
    C · (−0.5509, 0.1373,  0.8232)ᵀ = 0.0025 · (−0.5509, 0.1373,  0.8232)ᵀ
    C · (−0.4277, 0.8005, −0.4198)ᵀ = 0.0006 · (−0.4277, 0.8005, −0.4198)ᵀ

It can thus be diagonalized as C = U · D · Uᵀ with

    U = ( 0.7167  −0.5509  −0.4277 )
        ( 0.5833   0.1373   0.8005 ) ,    D = diag(0.0622, 0.0025, 0.0006)
        ( 0.3822   0.8232  −0.4198 )

(e.g. using MATLAB’s singular-value decomposition function svd).

Karhunen-Loève transform example I (continued)

We finally apply the orthogonal 3 × 3 transform matrix U, which we just
used to diagonalize the covariance matrix, to the entire image:

    T = Uᵀ · (S − S̄′) + S̄′

where S̄′ denotes the 3 × m matrix in which every column is the mean
colour vector S̄ = (S̄1, S̄2, S̄3)ᵀ, i.e. the mean is subtracted before the
rotation and added back afterwards.

The resulting transformed image

    T = ( u1,1  u1,2  u1,3  · · ·  ur,r )
        ( v1,1  v1,2  v1,3  · · ·  vr,r )
        ( w1,1  w1,2  w1,3  · · ·  wr,r )

consists of three new “colour” planes whose pixel values no longer have
any correlation to the pixels at the same coordinates in another plane.
[The bear disappeared from the last of these (w), which represents
mostly some of the original green grass in the background.]

[Figures: the red/green/blue planes before the KLT; the u/v/w planes after the KLT; and projections on the eigenvector subspaces v = w = 0 and w = 0, compared with the original. Photo courtesy of Robert E. Barber]

Spatial correlation

The previous example used the Karhunen-Loève transform in order to
eliminate correlation between colour planes. While this is of some
relevance for image compression, far more correlation can be found
between neighbour pixels within each colour plane.

In order to exploit such correlation using the KLT, the sample set has to
be extended from individual pixels to entire images. The underlying
calculation is the same as in the preceding example, but this time the
columns of S are entire (monochrome) images. The rows are the
different images found in the set of test images that we use to examine
typical correlations between neighbour pixels.
In other words, we use the same formulas as in the previous example, but this time n is the
number of pixels per image and m is the number of sample images. The Karhunen-Loève
transform is here no longer a rotation in a 3-dimensional colour space, but it operates now in a
much larger vector space that has as many dimensions as an image has pixels.
To keep things simple, we look in the next experiment only at m = 9000 1-dimensional “images”
with n = 32 pixels each. As a further simplification, we use not real images, but random noise
that was filtered such that its amplitude spectrum is proportional to 1/f , where f is the frequency.
The result would be similar in a sufficiently large collection of real test images.
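The following added sketch (not from the notes) reproduces this setup: m = 9000 noise columns of n = 32 samples, filtered in the frequency domain so that their amplitude spectrum falls off roughly as 1/f, from which the covariance matrix and its eigenvector columns are estimated.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 9000

# White noise, filtered so that the amplitude spectrum is ~ 1/f.
X = rng.standard_normal((n, m))
f = np.fft.rfftfreq(n)
f[0] = f[1]                      # avoid division by zero at DC
S = np.fft.irfft(np.fft.rfft(X, axis=0) / f[:, None], n=n, axis=0)

C = np.cov(S)                    # n x n covariance matrix
lam, U = np.linalg.eigh(C)       # eigenvalues ascending, eigenvectors in columns
U = U[:, ::-1]                   # reorder: largest eigenvalue first
print(lam[::-1][:5])             # energy concentrates in few components
```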
Karhunen-Loève transform example II

[Figures: matrix columns of S filled with samples of 1/f filtered noise; covariance matrix C; matrix U with eigenvector columns]

[Figures: matrix U′ with normalised KLT eigenvector columns; matrix with Discrete Cosine Transform base vector columns]

Breakthrough: Ahmed/Natarajan/Rao discovered the DCT as an
excellent approximation of the KLT for typical photographic images, but
far more efficient to calculate.
Ahmed, Natarajan, Rao: Discrete Cosine Transform. IEEE Transactions on Computers, Vol. 23,
January 1974, pp. 90–93.

Discrete cosine transform (DCT)

The forward and inverse discrete cosine transform

    Su = (Cu/√(N/2)) · Σ_{x=0}^{N−1} sx · cos((2x + 1)uπ / 2N)

    sx = Σ_{u=0}^{N−1} (Cu/√(N/2)) · Su · cos((2x + 1)uπ / 2N)

with

    Cu = 1/√2  for u = 0
    Cu = 1     for u > 0

is an orthonormal transform:

    Σ_{x=0}^{N−1} (Cu/√(N/2)) · cos((2x + 1)uπ / 2N) · (Cu′/√(N/2)) · cos((2x + 1)u′π / 2N)
        = 1 for u = u′,  0 for u ≠ u′

DCT base vectors for N = 8:

[Figure: the eight DCT base vectors u = 0, . . . , 7, each plotted over x = 0, . . . , 7]
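The DCT formulas above translate directly into a transform matrix; the following added sketch builds it for N = 8 and checks orthonormality and perfect reconstruction:

```python
import numpy as np

def dct_matrix(N):
    """Rows of the result are the orthonormal DCT base vectors of length N."""
    u = np.arange(N)[:, None]
    x = np.arange(N)[None, :]
    C = np.where(u == 0, 1 / np.sqrt(2), 1.0)
    return C / np.sqrt(N / 2) * np.cos((2 * x + 1) * u * np.pi / (2 * N))

A = dct_matrix(8)
assert np.allclose(A @ A.T, np.eye(8))   # orthonormal: A A^T = I

s = np.random.default_rng(3).standard_normal(8)
S = A @ s                                # forward DCT
assert np.allclose(A.T @ S, s)           # inverse DCT reconstructs s
```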
Discrete cosine transform – 2D

The 2-dimensional variant of the DCT applies the 1-D transform on both
rows and columns of an image:

    Su,v = (Cu/√(N/2)) · (Cv/√(N/2)) ·
           Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} sx,y · cos((2x + 1)uπ / 2N) · cos((2y + 1)vπ / 2N)

    sx,y = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} (Cu/√(N/2)) · (Cv/√(N/2)) ·
           Su,v · cos((2x + 1)uπ / 2N) · cos((2y + 1)vπ / 2N)

A range of fast algorithms have been found for calculating 1-D and 2-D
DCTs (e.g., Ligtenberg/Vetterli).

Whole-image DCT

[Figure: 2D discrete cosine transform (log₁₀ coefficient magnitudes) of a test image, next to the original image. Photo courtesy of SIPI, University of Southern California]
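Because the 2-D DCT is separable, it can be written as the 1-D transform applied first to the columns and then to the rows of a block; a small added sketch:

```python
import numpy as np

def dct_matrix(N):  # as in the previous sketch
    u, x = np.arange(N)[:, None], np.arange(N)[None, :]
    C = np.where(u == 0, 1 / np.sqrt(2), 1.0)
    return C / np.sqrt(N / 2) * np.cos((2 * x + 1) * u * np.pi / (2 * N))

def dct2(block):
    """Separable 2-D DCT: 1-D transform on columns, then on rows."""
    A = dct_matrix(block.shape[0])
    return A @ block @ A.T

def idct2(coeffs):
    A = dct_matrix(coeffs.shape[0])
    return A.T @ coeffs @ A

block = np.random.default_rng(4).standard_normal((8, 8))
assert np.allclose(idct2(dct2(block)), block)   # perfect reconstruction
```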

Whole-image DCT, 80% coefficient cutoff

[Figure: 80% truncated 2D DCT (log₁₀) and the image reconstructed from it]

Whole-image DCT, 90% coefficient cutoff

[Figure: 90% truncated 2D DCT (log₁₀) and the image reconstructed from it]

Whole-image DCT, 95% coefficient cutoff

[Figure: 95% truncated 2D DCT (log₁₀) and the image reconstructed from it]

Whole-image DCT, 99% coefficient cutoff

[Figure: 99% truncated 2D DCT (log₁₀) and the image reconstructed from it]
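The truncation experiments above can be reproduced along the following lines (an added sketch; `keep=0.2` corresponds to the “80% coefficient cutoff”, i.e. the smallest-magnitude 80% of coefficients are set to zero, and SciPy’s `dctn` stands in for a hand-built whole-image transform):

```python
import numpy as np
from scipy.fft import dctn, idctn

def truncate_dct(img, keep=0.2):
    """Zero all but the largest-magnitude fraction `keep` of DCT coefficients."""
    S = dctn(img, norm='ortho')                 # whole-image 2-D DCT
    thresh = np.quantile(np.abs(S), 1 - keep)   # keep=0.2 -> "80% cutoff"
    S[np.abs(S) < thresh] = 0
    return idctn(S, norm='ortho')               # reconstruct from truncated DCT

img = np.random.default_rng(5).standard_normal((64, 64))  # placeholder image
approx = truncate_dct(img, keep=0.2)
```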

Base vectors of 8×8 DCT

[Figure: the 64 base images of the 8 × 8 two-dimensional DCT, indexed by u, v = 0, . . . , 7]

RGB video colour coordinates

Hardware interface (VGA): red, green, blue signals with 0–0.7 V.
Electron-beam current and photon count of cathode-ray displays are
roughly proportional to (v − v0)^γ, where v is the video-interface or
control-grid voltage and γ is a device parameter that is typically in the
range 1.5–3.0. In broadcast TV, this CRT non-linearity is compensated
electronically in TV cameras. A welcome side effect is that it
approximates Stevens’ scale and therefore helps to reduce perceived noise.

Software interfaces map RGB voltage linearly to {0, 1, . . . , 255} or 0–1.
How numeric RGB values map to colour and luminosity depends at
present still highly on the hardware and sometimes even on the operating
system or device driver.

The “sRGB” standard aims to standardize the meaning of an RGB value
with the parameter γ = 2.2 and with standard colour coordinates of the
three primary colours.
https://www.w3.org/Graphics/Color/sRGB, IEC 61966-2-1 at https://bsol.bsigroup.com/
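As an added, simplified illustration of the γ = 2.2 mapping (the actual sRGB transfer function is piecewise, with a short linear segment near black, so this pure power law is only an approximation):

```python
import numpy as np

GAMMA = 2.2

def encode_gamma(linear):
    """Map linear-light intensity in [0, 1] to an 8-bit gamma-encoded value."""
    return np.round(255 * np.clip(linear, 0, 1) ** (1 / GAMMA)).astype(np.uint8)

def decode_gamma(code):
    """Map an 8-bit gamma-encoded value back to linear-light intensity."""
    return (code / 255) ** GAMMA

print(encode_gamma(np.array([0.0, 0.2, 0.5, 1.0])))   # [  0 123 186 255]
```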
YUV video colour coordinates

The human eye processes colour and luminosity at different resolutions.
To exploit this phenomenon, many image transmission systems use a
colour space with a “luminance” coordinate

    Y = 0.3R + 0.6G + 0.1B

and colour (“chrominance”) components

    V = R − Y = 0.7R − 0.6G − 0.1B
    U = B − Y = −0.3R − 0.6G + 0.9B

If based on gamma-corrected R′, G′, B′ then Y′ is called “luma” and U′, V′ “chroma”.

YUV transform example

[Figure: original image, Y channel, U and V channels. Images: Pennebaker/Mitchell (1992)]

The centre image shows only the luminance channel as a black/white
image. In the right image, the luminance channel (Y) was replaced with
a constant, such that only the chrominance information remains.
This example and the next only make sense when viewed in colour. On a black/white printout of
this slide, only the Y channel information will be present.
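Since the YUV transform is linear, it is just a 3 × 3 matrix applied to each RGB pixel; an added sketch using the coefficients above:

```python
import numpy as np

# Rows: Y, U, V as linear combinations of (R, G, B), per the formulas above.
RGB_TO_YUV = np.array([[ 0.3,  0.6,  0.1],   # Y
                       [-0.3, -0.6,  0.9],   # U = B - Y
                       [ 0.7, -0.6, -0.1]])  # V = R - Y

def rgb_to_yuv(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] to (Y, U, V)."""
    return rgb @ RGB_TO_YUV.T

print(rgb_to_yuv(np.array([1.0, 1.0, 1.0])))  # white: Y = 1, U = V = 0
```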

Y versus UV sensitivity example

[Figure: original image; image with blurred U and V channels; image with blurred Y channel. Photo courtesy of Karel de Gendre]

In the centre image, the chrominance channels have been severely
low-pass filtered (Gaussian impulse response). But the human eye
perceives this distortion as far less severe than if the exact same filtering
is applied to the luminance channel (right image).

YCrCb video colour coordinates

Since −0.7 ≤ V ≤ 0.7 and −0.9 ≤ U ≤ 0.9, a more convenient
normalized encoding of chrominance is:

    Cb = U/2.0 + 0.5

    Cr = V/1.6 + 0.5

[Figure: the Cb–Cr plane (both axes 0–1) at luminance levels Y = 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]

Modern image compression techniques operate on Y , Cr, Cb channels
separately, using half the resolution of Y for storing Cr, Cb.
Some digital-television engineering terminology:
If each pixel is represented by its own Y , Cr and Cb byte, this is called a “4:4:4” format. In the
compacter “4:2:2” format, a Cr and Cb value is transmitted only for every second pixel, reducing
the horizontal chrominance resolution by a factor two. The “4:2:0” format transmits in alternating
lines either Cr or Cb for every second pixel, thus halving the chrominance resolution both
horizontally and vertically. The “4:1:1” format reduces the chrominance resolution horizontally by
a quarter and “4:1:0” does so in both directions. [ITU-R BT.601]
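An added sketch of the normalized chrominance encoding and of chroma subsampling by 2 × 2 averaging (a common simplification; broadcast “4:2:0” actually alternates Cr and Cb lines as described above):

```python
import numpy as np

def yuv_to_ycrcb(y, u, v):
    """Normalize chrominance per the formulas above."""
    cb = u / 2.0 + 0.5
    cr = v / 1.6 + 0.5
    return y, cr, cb

def subsample_2x2(chroma):
    """Halve chroma resolution in both directions by averaging 2x2 blocks."""
    h, w = chroma.shape
    c = chroma[:h - h % 2, :w - w % 2]          # crop to even dimensions
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```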
Quantization

[Figure: staircase transfer functions of a uniform/linear quantizer and of a non-uniform quantizer]

Quantization is the mapping from a continuous or large set of values
(e.g., analog voltage, floating-point number) to a smaller set of (typically
2⁸, 2¹² or 2¹⁶) values.
This introduces two types of error:
I the amplitude of quantization noise reaches up to half the maximum
  difference between neighbouring quantization levels
I clipping occurs where the input amplitude exceeds the value of the
  highest (or lowest) quantization level

Example of a linear quantizer (resolution R, peak value V ):

    y = max(−V, min(V, R · ⌊x/R + 1/2⌋))

Adding a noise signal that is uniformly distributed on [0, 1] instead of adding 1/2 helps to spread the
frequency spectrum of the quantization noise more evenly. This is known as dithering.

Variant with even number of output values (no zero):

    y = max(−V, min(V, R · (⌊x/R⌋ + 1/2)))

Improving the resolution by a factor of two (i.e., adding 1 bit) reduces
the quantization noise by 6 dB.

Linearly quantized signals are easiest to process, but analog input levels
need to be adjusted carefully to achieve a good tradeoff between the
signal-to-quantization-noise ratio and the risk of clipping. Non-uniform
quantization can reduce quantization noise where input values are not
uniformly distributed and can approximate human perception limits.
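Both quantizer formulas transcribe directly into NumPy (an added sketch; the dithered variant replaces the fixed 1/2 offset with uniform noise as described above):

```python
import numpy as np

def quantize(x, R, V):
    """Linear quantizer: y = max(-V, min(V, R*floor(x/R + 1/2)))."""
    return np.clip(R * np.floor(x / R + 0.5), -V, V)

def quantize_dithered(x, R, V, rng=np.random.default_rng()):
    """Replace the 1/2 offset with noise uniform on [0, 1) (dithering)."""
    return np.clip(R * np.floor(x / R + rng.random(np.shape(x))), -V, V)

x = np.linspace(-1.2, 1.2, 7)
print(quantize(x, R=0.25, V=1.0))
```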

Logarithmic quantization

Rounding the logarithm of the signal amplitude makes the quantization
error scale-invariant and is used where the signal level is not very
predictable. Two alternative schemes are widely used to make the
logarithm function odd and linearize it across zero before quantization:

µ-law:

    y = V · log(1 + µ|x|/V) / log(1 + µ) · sgn(x)        for −V ≤ x ≤ V

A-law:

    y = A|x| / (1 + log A) · sgn(x)                      for 0 ≤ |x| ≤ V/A
    y = V(1 + log(A|x|/V)) / (1 + log A) · sgn(x)        for V/A ≤ |x| ≤ V

European digital telephone networks use A-law quantization (A = 87.6), North American ones use
µ-law (µ = 255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T
G.711]

[Figure: µ-law (US) and A-law (Europe) compression curves; signal voltage (−V to V) over byte value (−128 to 128)]
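An added sketch of µ-law companding with the ITU-T G.711 parameter µ = 255; the expander simply inverts the compressor formula:

```python
import numpy as np

def mu_law_compress(x, V=1.0, mu=255):
    """y = V * log(1 + mu*|x|/V) / log(1 + mu) * sgn(x), for -V <= x <= V."""
    return V * np.log1p(mu * np.abs(x) / V) / np.log1p(mu) * np.sign(x)

def mu_law_expand(y, V=1.0, mu=255):
    """Inverse of the compressor."""
    return V / mu * np.expm1(np.abs(y) / V * np.log1p(mu)) * np.sign(y)

x = np.array([-1.0, -0.1, 0.0, 0.01, 0.5])
assert np.allclose(mu_law_expand(mu_law_compress(x)), x)
```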
Joint Photographic Experts Group – JPEG

Working group “ISO/TC97/SC2/WG8 (Coded representation of picture and audio information)”
was set up in 1982 by the International Organization for Standardization.

Goals:
I continuous tone gray-scale and colour images
I recognizable images at 0.083 bit/pixel
I useful images at 0.25 bit/pixel
I excellent image quality at 0.75 bit/pixel
I indistinguishable images at 2.25 bit/pixel
I feasibility of 64 kbit/s (ISDN fax) compression with late 1980s
  hardware (16 MHz Intel 80386)
I workload equal for compression and decompression

The JPEG standard (ISO 10918) was finally published in 1994.
William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand
Reinhold, New York, ISBN 0442012721, 1993.
Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the ACM
34(4):30–44, April 1991, http://doi.acm.org/10.1145/103085.103089

Summary of the baseline JPEG algorithm

The most widely used lossy method from the JPEG standard:
I Colour component transform: 8-bit RGB → 8-bit YCrCb
I Reduce resolution of Cr and Cb by a factor 2
I For the rest of the algorithm, process Y , Cr and Cb components
  independently (like separate gray-scale images)
  The above steps are obviously skipped where the input is a gray-scale image.
I Split each image component into 8 × 8 pixel blocks
  Partial blocks at the right/bottom margin may have to be padded by repeating the last
  column/row until a multiple of eight is reached. The decoder will remove these padding
  pixels.
I Apply the 8 × 8 forward DCT on each block
  On unsigned 8-bit input, the resulting DCT coefficients will be signed 11-bit integers.
I Quantization: divide each DCT coefficient by the corresponding
  value from an 8 × 8 table, then round to the nearest integer:
  The two standard quantization-matrix examples for luminance and chrominance are:

      16  11  10  16  24  40  51  61        17  18  24  47  99  99  99  99
      12  12  14  19  26  58  60  55        18  21  26  66  99  99  99  99
      14  13  16  24  40  57  69  56        24  26  56  99  99  99  99  99
      14  17  22  29  51  87  80  62        47  66  99  99  99  99  99  99
      18  22  37  56  68 109 103  77        99  99  99  99  99  99  99  99
      24  35  55  64  81 104 113  92        99  99  99  99  99  99  99  99
      49  64  78  87 103 121 120 101        99  99  99  99  99  99  99  99
      72  92  95  98 112 100 103  99        99  99  99  99  99  99  99  99

I apply DPCM coding to quantized DC coefficients from DCT
I read remaining quantized values from DCT in zigzag pattern
I locate sequences of zero coefficients (run-length coding)
I apply Huffman coding on zero run-lengths and magnitude of AC
  values
I add standard header with compression parameters

http://www.jpeg.org/
Example implementation: http://www.ijg.org/
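To make the quantization step concrete, here is an added sketch (not from the notes) that DCT-transforms one 8 × 8 block and quantizes it with the luminance table above; the level shift by 128 follows baseline JPEG's signed-input convention, and SciPy's orthonormal `dctn` stands in for a real JPEG DCT implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard luminance quantization-matrix example from the JPEG standard:
Q_LUM = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize_block(block):
    """8x8 DCT of a pixel block (values 0..255), then table-driven quantization."""
    S = dctn(block - 128.0, norm='ortho')     # level shift, orthonormal 2-D DCT
    return np.round(S / Q_LUM).astype(int)    # divide by table entry, round

def dequantize_block(q):
    """Approximate inverse: multiply back and inverse-DCT."""
    return idctn(q * Q_LUM, norm='ortho') + 128.0

block = np.random.default_rng(6).integers(0, 256, (8, 8))
print(quantize_block(block))
```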

Outlook

Further topics that we have not covered in this brief introductory tour
through DSP, but for the understanding of which you should now have a
good theoretical foundation:
I multirate systems
I effects of rounding errors
I adaptive filters
I DSP hardware architectures
I sound effects

If you find any typo or mistake in these lecture notes, please email Markus.Kuhn@cl.cam.ac.uk.
