Notes Week 1

EE 3008A Linear Systems and Signal processing

Dr. Alan Pak Tao Lau


Department of Electrical Engineering,
The Hong Kong Polytechnic University
Sept. 2025

Chapter 1 Introduction
Objectives:
• To provide a broad overview of communication systems
• To describe the main components of a communication system
• To introduce the concept of a signal
• To describe the classification of communications
• To explain the importance of modulation in communications
• To describe the key factors to evaluate the performance of a communication system

The word “telecommunications” combines “tele,” meaning “over a distance,” with “communications,” meaning “the process of conveying a message.”
During communication, the message is transmitted from its source to a destination. This transmission is achieved by
the use of a communication system.

Figure 1.1 Communication in our daily lives.

In communication, the physical message, such as sound, words, or pictures, is converted into an electrical message called a signal, and this electrical signal is conveyed through some medium to a distant place, where it is reconverted into the physical message. Thus, a communication system has the following components:

Figure 1.2 A communication system model.

The source originates a message, such as a human voice, a television picture, or data.


If the message is not electrical, it is converted by an input transducer into an electrical waveform referred to as the baseband signal or message signal; e.g., in a telephone system, the human voice is converted into an electric current variation.
The transmitter modifies the message signal for efficient transmission.
The channel is a medium -- such as a wire, coaxial cable, an optical fiber, or a radio link -- through which the transmitter output is sent.
The receiver reprocesses (demodulates) the output signal from the channel. The receiver output is fed to the output transducer, which converts the electrical signal into its original form, the message.
The destination is the unit to which the message is communicated.

1.1 Classification of communication systems:


A communication system is divided into two categories depending on the transmission media (channel) used: line
communication system, and wireless communication system.
In line communication, transmission is carried out on the transmission line.
Examples of transmission line: wire, coaxial cable, optical fiber, etc.

In wireless communication, signals from various sources are transmitted through a common medium – open space.
Examples of wireless communication: radio, microwave, etc.

Figure 1.3 Wireless and wireline communication system.

A communication system can be divided into analog communication system and digital communication system,
according to the characteristics of transmitted signals.

Figure 1.4 Analog and digital communication systems

In addition, communication systems can be divided into duplex and simplex systems. Duplex systems are those in which one can both send and receive information (such as a telephone or computer network). Simplex systems are those which allow only one-way communication (such as TV broadcasting or traffic lights).

A signal is a set of information or data, represented as a function of time.


e.g. the variation of electric current that contains the message.

Figure 1.5 A signal represented as a function of time.

Noise refers to an undesired signal which carries no information. It is a random and unpredictable signal produced by natural processes both internal and external to the system. When such random variations are superimposed on an information-bearing signal, and the noise amplitude is larger than that of the signal, the message may be partially corrupted or totally obliterated, and the information cannot be correctly received.
There are two types of noise: external noise and internal noise.
Examples of external noise: interference from signals transmitted on nearby channels, natural noise from lightning, etc.
Examples of internal noise: thermal noise, generated by the thermal motion of electrons in conductors, and shot noise, caused by random variations in the arrival of electrons or holes at the output of a device.
Noise is one of the basic factors that set the limit of communication system performance.

Noise is unavoidable. Why?


At any temperature above absolute zero, thermal energy causes microscopic particles to exhibit random motion. The
random motion of charged particles such as electrons generates random currents or voltages called thermal noise.
Thermal noise exists in every communication system.
1.2 How to evaluate the performance of a communication system?
The performance of a communication system is usually evaluated by two key factors:
1. Efficiency, which determines the capacity of the transmission channel;
2. Reliability, which determines the signal quality.

In an analog communication system, efficiency is measured by transmission channel bandwidth, B, and reliability
is measured by system output signal-to-noise ratio (S/N).

In a digital communication system, efficiency is measured by bit rate, R, and reliability is measured by bit error rate,
Pb.

Bit rate: R = n/T (bits/sec) where n is the number of bits sent in T seconds

Bit error rate: Pb = number of error bits / total number of bits


e.g. a digital telephone system requires Pb < 10^−3 to 10^−6 and data communication requires Pb < 10^−9.
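The two digital performance measures above can be sketched directly from their definitions (the numbers below are made up for illustration):

```python
# A minimal sketch of bit rate and bit error rate, from R = n/T and
# Pb = (number of error bits) / (total number of bits).
def bit_rate(n_bits, t_seconds):
    """R = n / T in bits per second."""
    return n_bits / t_seconds

def bit_error_rate(n_error_bits, n_total_bits):
    """Pb = number of error bits / total number of bits."""
    return n_error_bits / n_total_bits

# Hypothetical numbers: 1 Mbit sent in 2 seconds, 3 of those bits in error.
R = bit_rate(1_000_000, 2)          # 500000.0 bits/sec
Pb = bit_error_rate(3, 1_000_000)   # 3e-06, within the telephone-system range
print(R, Pb)
```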

We will talk more about this later in the course.

Chapter 2 Signal and Systems Analysis
Objectives:
• To define the energy and power of a signal
• To classify different types of signals
• To introduce special functions commonly used in telecommunication systems
• To define linear and time-invariant systems
• To define convolution
• To introduce Fourier series and Fourier transform
• To explain the concept of negative frequency
• To show how the signal may be described in either the time domain or the frequency domain and establish
the relationship between these descriptions.
• To introduce power spectral density

Reference: Chapters 1 and 2 of Signals and Systems by A. V. Oppenheim and A. S. Willsky, 2nd edition, Prentice Hall, 1997.

In a communication system, the received waveform is usually separated into a desired part containing the information and an undesired part. The desired part is the signal and the undesired part is the noise.
A signal may describe a wide variety of physical phenomena. Although signals can be represented in many ways, in
communication systems the information in a signal can be represented as a function of time.
2.1 Classification of signals
The most useful method of signal representation for any given situation depends on the type of signal being considered.
Signals can be classified in various ways which are not mutually exclusive:

2.1.1. Continuous and discrete signals


Continuous signals are signals in which the time variable is a continuous variable. In other words, the signal is defined
at any point in time.
Discrete signals are signals that are defined only at discrete times. For these signals, the time variable takes on only discrete values; the signal is defined at those times and undefined at all others. For example, signals associated with computers are discrete. Another example is stock market data. Why? Because the value of a stock is only defined during trading hours; it is not defined during weekends, for example.

To distinguish between continuous-time and discrete-time signals, we will use the symbol t to denote the
continuous-time independent variable and n to denote the discrete-time variable. In addition, for continuous-time
signals we will enclose the variable in parentheses (.), whereas for discrete-time signals we will use brackets [.] to
enclose the variable. Continuous-time and discrete-time signals are illustrated graphically in the following figure. It
is important to note that the discrete-time signal x[n] is defined only for integer values of the variable. Our choice of
graphical representation for x[n] emphasizes this fact, and for further emphasis we will on occasion refer to x[n] as a
discrete-time sequence.

Figure 2.1 Graphical representations of (a) continuous-time and (b) discrete-time signals

2.1.2. Periodic and nonperiodic signals


A periodic signal is one that repeats itself exactly after a fixed length of time. It is defined by

g(t) = g(t + T) for all t and g[n] = g[n + N] for all n (2.1)

where the smallest positive number that satisfies the above equation is called the period.
The above equation simply says that shifting the signal by a period or an integer number of periods to the left or right
leaves the waveform unchanged. Consequently, a periodic signal is fully described by specifying its behavior over
any one period.
Any signal for which there is no value of T satisfying the above equation is said to be nonperiodic, or aperiodic.
Information-carrying signals are normally nonperiodic.

Figure 2.2 Periodic signals.

2.1.3. Deterministic and random signals


A signal, whose physical description is known completely, in either a mathematical form or a graphical form, is a
deterministic signal.

g(t) = cos(ωc t),   g[n] = cos(Ωc n) (2.2)

A signal that is known only in terms of a probabilistic description, such as its mean value, mean squared value, and so on, rather than a complete mathematical or graphical description, is a random signal, e.g.

n(t),   E[n(t)] = 0,   E[n²(t)] = σ² (2.3)

Most noise signals are random signals.
2.2 Power and Energy of a signal

We know that the power of a voltage or current signal in a circuit is given by P = V²/R = I²R. Since our course is focused on signals, we will neglect R. The energy E and average power P of a signal are defined as

E = ∫_{−∞}^{∞} |g(t)|² dt        E = Σ_{n=−∞}^{∞} |g[n]|²

P = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} |g(t)|² dt        P = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} |g[n]|² (2.4)
We will explain later why we need to use the magnitude squared, rather than the simple square, of a signal when calculating power and energy.
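As a quick numerical illustration of Eq. (2.4) for discrete-time signals (the sample values below are made up):

```python
import numpy as np

# Energy of a finite-length discrete signal: E = sum over n of |g[n]|^2.
g = np.array([1.0, -2.0, 2.0])    # the nonzero samples of a finite-energy g[n]
E = np.sum(np.abs(g) ** 2)
print(E)                          # 9.0

# For a periodic g[n], the limit in Eq. (2.4) reduces to the energy of one
# period divided by the period length; e.g. g[n] = cos(pi n / 2), N = 4.
one_period = np.array([1.0, 0.0, -1.0, 0.0])
P = np.sum(np.abs(one_period) ** 2) / len(one_period)
print(P)                          # 0.5
```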

2.3 Transformations of the time variable


A central concept in signal and system analysis is that of the transformation of a signal. We will focus on a very
limited but important class of elementary signal transformations that involve simple modification of the time
variable. Graphically, it involves the modification of the graph along the time axis. As we will see in this and
subsequent chapters, these elementary transformations allow us to introduce several basic properties of signals and
systems. In later chapters, we will find that they also play an important role in defining and characterizing far richer
and important classes of systems.

A simple and very important example of transforming the independent variable of a signal is a time shift. A time
shift in discrete time is illustrated in the following figure, in which we have two signals x[n] and x[n − n0 ] that
are identical in shape, but that are displaced or shifted relative to each other.

Figure 2.3. Shifting of a discrete-time signal in time.

We will also encounter time shifts in continuous time, as illustrated in the following figure, in which x(t − t0 )
represents a delayed (if t0 is positive) or advanced (if t0 is negative) version of x(t ) . The following figure shows
what x(t − t0 ) looks like when t0 is negative.

Figure 2.4. Shifting of a continuous-time signal by a time t0.

A second basic transformation of the time axis is that of time reversal. The following figures serve as examples.
Graphically, x(−t ), x[−n] will look like a reflection of x(t ), x[n] along the y-axis respectively.

Figure 2.5 Reflection of continuous-time and discrete-time signals about t=0 and n=0 respectively.

Another transformation is that of time scaling. In the following figure, we have illustrated three signals,
x(t ), x(2t ), x(t / 2) . Graphically, x(2t ), x(t / 2) looks like a ‘compressed’ or ‘stretched out’ versions of x(t )
respectively.

Figure 2.6 Time scaling for continuous-time function.

In general, it is often necessary to determine the effect of transforming x(t ) to obtain a signal of the form x(at + b)
for given numbers a, b . What will the graphs look like?

Figure 2.7 Continuous-time signal and its transformations.
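The shift, reversal, and scaling operations above can be checked numerically; the triangular pulse x(t) = max(1 − |t|, 0) below is a hypothetical test signal chosen for illustration:

```python
import numpy as np

# A hypothetical test pulse: triangle on [-1, 1] with peak 1 at t = 0.
def x(t):
    return np.maximum(1.0 - np.abs(t), 0.0)

# Time shift: x(t - 1) moves the peak from t = 0 to t = 1.
print(float(x(1.0 - 1.0)))        # 1.0, the peak now sits at t = 1
# Time reversal: x(-t) reflects the graph about the vertical axis.
print(float(x(-0.5)))             # equals x(0.5) here since this x is even
# Time scaling: x(2t) compresses by 2, x(t/2) stretches by 2.
print(float(x(2 * 0.25)))         # 0.5, the compressed pulse at t = 0.25
print(float(x(0.25 / 2)))         # 0.875, the stretched pulse at t = 0.25
```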

2.4 Special functions commonly used in telecommunication systems


There is a particular class of functions which plays an important role in signal analysis, they are called singularity
functions. These functions are discontinuous or have discontinuous derivatives.
Singularity functions are mathematical idealizations and, strictly speaking, do not occur in physical systems. They are
useful in signal analysis because they serve as good approximations to certain limiting conditions in physical systems.
Here, we are going to discuss two types of singularity functions: unit step function and Dirac delta function (unit
impulse function).

2.4.1. Exponential and sinusoidal functions


jt
g[n] = e jn
dg (t )
g (t ) = e dt
= j e jt
Euler’s formula

e jt = cos(t ) + j sin(t )

Figure 2.8 Discrete-time sinusoidal function.

Exponential functions (as will be shown later) are an important class of functions in the systems we are going to study in this course.

2.4.2. The sinc function


We will encounter this function quite often.
sinc(t) = sin(πt) / (πt)

The sinc function is shown in the following figure as a function of t. It has zero crossings at t = ±1, ±2, ±3, ... and we define sinc(0) = 1. It is an important function that we will use in this course.

Figure 2.9 The sinc function.
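NumPy’s built-in `np.sinc` happens to use the same normalized definition as these notes, which makes quick checks easy:

```python
import numpy as np

# np.sinc(t) computes sin(pi t) / (pi t), with sinc(0) defined as 1 --
# the same convention used in the notes.
print(np.sinc(0.0))                      # 1.0 by definition
print(abs(np.sinc(1.0)) < 1e-12)         # True: zero crossing at t = 1 (up to roundoff)
print(np.sinc(0.5))                      # sin(pi/2)/(pi/2) = 2/pi
```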

Example 2.1
Plot y = 3sinc(4t ).

2.4.3. Unit step function


For continuous-time systems, the unit step function is defined by

u(t) = 1 for t > 0, 0 for t < 0

(u(t) is not defined at t = 0, or you may define u(0) = 1 or u(0) = ½.)


Similarly, for discrete-time systems,
u[n] = 1 for n ≥ 0, 0 for n < 0
Here are some examples of the amplitude-scaled and time-shifted versions of step functions.

Figure 2.10. Amplitude-scaled and time-shifted step functions.

Exercise: How do you express the following functions using combinations of amplitude-scaled and time-shifted
step functions?

2.4.4. Unit impulse function
In continuous-time systems, the unit impulse function is defined as follows:

δ(t) = 0 for t ≠ 0,   ∫_{−∞}^{∞} δ(t) dt = 1

Figure 2.11 Delta function in continuous time.

δ(t) = lim_{ε→0} [u(t + ε/2) − u(t − ε/2)] / ε

In discrete time, the unit impulse function is given by

δ[n] = 1 for n = 0, 0 for n ≠ 0

Figure 2.12 Delta function in discrete-time.

2.4.5. Relationship between δ(t) and u(t)

∫_{−∞}^{t} δ(τ) dτ = u(t)
du(t)/dt = δ(t) (2.5)

Proof:
(1) When t > 0, ∫_{−∞}^{t} δ(τ) dτ = ∫_{−∞}^{0−} δ(τ) dτ + ∫_{0−}^{t} δ(τ) dτ = ∫_{0−}^{t} δ(τ) dτ = 1 = u(t)
(2) When t < 0, ∫_{−∞}^{t} δ(τ) dτ = 0 = u(t)

Similarly, in discrete time,

Σ_{k=−∞}^{n} δ[k] = { 1, n ≥ 0; 0, n < 0 } = u[n]

u[n] − u[n − 1] = δ[n]

2.4.6. Multiplication of a function by δ(t)

f(t)δ(t) = f(0)δ(t), if f(t) is continuous at 0.

f(t)δ(t − t0) = f(t0)δ(t − t0), if f(t) is continuous at t0.

Proof:
(1) If f(t) is continuous at 0, then f(0) exists.
When t = 0, f(t)δ(t) = f(0)δ(t).
When t ≠ 0, f(t)δ(t) = 0 = f(0)δ(t).
So f(t)δ(t) = f(0)δ(t).

2.4.7. Sampling property of δ(t), δ[n]

∫_{−∞}^{∞} f(t)δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0)

∫_{−∞}^{∞} f(t)δ(t − T) dt = f(T)

Σ_{n=−∞}^{∞} f[n]δ[n] = f[0]

Σ_{n=−∞}^{∞} f[n]δ[n − N] = f[N]

Proof: ∫_{−∞}^{∞} f(t)δ(t − T) dt = f(T) ∫_{−∞}^{∞} δ(t − T) d(t − T) = f(T)

2.4.8. Representation of Discrete-time signals in terms of impulses
The key idea in visualizing how the discrete-time unit impulse can be used to construct any discrete-time-signals is
to think of a discrete-time signal as a sequence of individual impulses. Consider the signal x[n] depicted in the
following figure.

Figure 2.13 Decomposition of a discrete-time signal into a weighted sum of shifted impulses.

In this case, we can write x[n] as the sum of the five scaled, shifted impulse sequences shown in the figure. Therefore, the sum of the five sequences in the figure equals x[n] for −2 ≤ n ≤ 2. More generally, by including
additional shifted, scaled impulses, we can write

𝑥[𝑛] = ⋯ 𝑥[−3]𝛿[𝑛 + 3] + 𝑥[−2]𝛿[𝑛 + 2] + 𝑥[−1]𝛿[𝑛 + 1] + 𝑥[0]𝛿[𝑛] + 𝑥[1]𝛿[𝑛 − 1] + 𝑥[2]𝛿[𝑛 − 2] + ⋯

For any value of n, only one of the terms on the right-hand side of the above equation is nonzero, and the scaling
associated with that term is precisely x[n]. Writing this summation in a more compact form, we have

x[n] = Σ_{k=−∞}^{+∞} x[k] δ[n − k].

This corresponds to the representation of an arbitrary sequence as a linear combination of shifted unit impulses δ[n − k], where the weights in this linear combination are x[k].

2.5 System
Physical systems in the broadest sense are an interconnection of components, devices, or subsystems. In contexts
ranging from signal processing and communications to electromechanical motors and automotive vehicles, a system can
be viewed as a process in which input signals are transformed by the system or cause the system to respond in some
way, resulting in other signals as outputs. For example, if you talk on a cell phone, your voice will be converted by
the microphone to an electrical signal and passed through the phone network to the recipient’s cell phone where it is
converted to sound through the speaker. In this case, the communication network will be referred to as the system.
Mathematically, a system is defined as a set of rules that associates an input function to an output function.

Figure 2.14 System with input and output.

where g(t) is the input signal (or source signal); y(t) is the output signal (or response signal); h(t) is the system response when the input is a unit impulse function and is known as the unit impulse response of the system. We will talk about h(t) a lot later in the course.

Symbolically, input and response are represented as g (t ) → y (t ) and read as input g(t) causes a response y(t). For
discrete-time systems with input g[ n] and output y[n] , we will write g[n] → y[n] .

Classification of systems
2.5.1. Linear and nonlinear systems
For inputs g1(t), g2(t) with corresponding outputs y1(t), y2(t) respectively, a system is said to be linear if the following
properties hold:

1) ag1(t) → ay1(t)
2) ag1(t) + bg2(t) → ay1(t) +by2(t) for any scalar a,b

Otherwise, the system is nonlinear.

Example 2.2
The system y(t) = 2t g(t) is linear.
Proof: Let g1(t) and g2(t) be two inputs and y1(t) = 2t g1(t) and y2(t) = 2t g2(t) be the two corresponding outputs. Now, if the input to the system is g(t) = a g1(t) + b g2(t) for some scalars a and b, the output will be

y(t) = 2t(a g1(t) + b g2(t)) = a(2t g1(t)) + b(2t g2(t)) = a y1(t) + b y2(t)

Thus the system is linear.


Example 2.3
The system y(t) = g²(t) is nonlinear.
Proof: Let g1(t) and g2(t) be two inputs and y1(t) = g1²(t) and y2(t) = g2²(t) be the two corresponding outputs. Now, if the input to the system is g(t) = g1(t) + g2(t), the output will be

y(t) = (g1(t) + g2(t))² = g1²(t) + 2 g1(t) g2(t) + g2²(t).

However, y1(t) + y2(t) = g1²(t) + g2²(t), and hence y(t) ≠ y1(t) + y2(t). Therefore, when the input g(t) is a linear combination of two other inputs, g(t) = g1(t) + g2(t), the output y(t) is not the corresponding linear combination of outputs y1(t) + y2(t). Therefore, the system is not linear.
(Note: why don’t we need to prove the same thing for general coefficients a and b?)
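Examples 2.2 and 2.3 can be spot-checked numerically on sampled signals (the particular inputs g1, g2 below are arbitrary choices):

```python
import numpy as np

# Sampled test inputs -- any two distinct signals will do for a spot-check.
t = np.linspace(-1.0, 1.0, 5)
g1 = np.sin(t)
g2 = t ** 2
a, b = 2.0, -3.0

linear_sys = lambda g: 2 * t * g    # Example 2.2: y(t) = 2t g(t)
square_sys = lambda g: g ** 2       # Example 2.3: y(t) = g(t)^2

# Superposition holds for the linear system...
lhs = linear_sys(a * g1 + b * g2)
rhs = a * linear_sys(g1) + b * linear_sys(g2)
print(np.allclose(lhs, rhs))        # True

# ...but fails for the squaring system (cross term 2 g1 g2 appears).
lhs2 = square_sys(g1 + g2)
rhs2 = square_sys(g1) + square_sys(g2)
print(np.allclose(lhs2, rhs2))      # False
```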

2.5.2. Time-invariant and time-varying systems


A system is time-invariant if a time shift in the input results in a corresponding time shift in the output. Mathematically,
for input g(t) and corresponding output y(t), a system is time-invariant if

g(t – t0) → y(t – t0) for any t0.

Figure 2.15 The time-invariance property.

Any system not meeting this requirement is said to be time-varying.

Example 2.4
A system y(t) = x(t)cos(2πf0t) is a time-varying system.
Proof: For input x(t − t0), the output is given by x(t − t0)cos(2πf0t). However, this output is not the same as y(t − t0) = x(t − t0)cos(2πf0(t − t0)). In other words, if the input is x(t − t0), the output is not y(t − t0), and hence the system is not time-invariant; it is time-varying.
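Example 2.4 can also be checked numerically (the pulse x(t) and the shift t0 below are arbitrary choices):

```python
import numpy as np

# Numerical spot-check that y(t) = x(t) cos(2 pi f0 t) is time-varying.
f0 = 1.0
t = np.linspace(0.0, 2.0, 9)
x = lambda t: np.exp(-t ** 2)       # a hypothetical input pulse
t0 = 0.25                           # shift by a quarter of the carrier period

# Output when the *input* is shifted: x(t - t0) cos(2 pi f0 t)
out_for_shifted_input = x(t - t0) * np.cos(2 * np.pi * f0 * t)
# The *shifted output* y(t - t0): x(t - t0) cos(2 pi f0 (t - t0))
shifted_output = x(t - t0) * np.cos(2 * np.pi * f0 * (t - t0))

# The two disagree because the carrier cos(2 pi f0 t) does not shift with x.
print(np.allclose(out_for_shifted_input, shifted_output))   # False
```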

2.5.3. Convolution in Linear and Time-Invariant (LTI) systems


Consider a discrete-time LTI system. If we apply the impulse function δ[n] as the input, let h[n] be the output:

δ[n] → h[n]

We call h[n] the impulse response of the LTI system. Now, since the system is linear, if we apply a scaled version of the impulse, b δ[n], the output will be b h[n]:

b δ[n] → b h[n]

Furthermore, since the system is time-invariant, if we apply a delayed version of the impulse, δ[n − k], the output will be a delayed version of the impulse response, h[n − k]:

δ[n − k] → h[n − k]

Now for a general input g[n], we can express it as

g[n] = … + g[−1]δ[n + 1] + g[0]δ[n] + g[1]δ[n − 1] + …

With the properties of LTI systems, the output signal corresponding to g[n] will be given by

y[n] = … + g[−1]h[n + 1] + g[0]h[n] + g[1]h[n − 1] + …

     = Σ_{k=−∞}^{∞} g[k] h[n − k]

Similarly, for continuous-time LTI systems, the output signal y(t) with input g(t) is given by

y(t) = ∫_{−∞}^{∞} g(τ) h(t − τ) dτ

where h(t) is the impulse response.

Convolution between two functions is commonly written as


y[n] = g[n] * h[n],   y(t) = g(t) * h(t)
The following are some examples of convolution.

Example 2.5. Compute the convolution between g[n] and h[n] given by the following figures: g[n] has g[0] = 2 and g[1] = 1 (zero elsewhere); h[n] has h[0] = h[1] = h[2] = 1 (zero elsewhere).
Let the output signal by y[n]. In this example, the convolution of g[n] and h[n] is

y[n] = … + g[−1]h[n + 1] + g[0]h[n] + g[1]h[n − 1] + g[2]h[n − 2] + …

     = g[0]h[n] + g[1]h[n − 1].
In particular,
y[0] = g[0]h[0 − 0] + g[1]h[0 − 1] = 2∙1 + 1∙0 = 2
y[1] = g[0]h[1 − 0] + g[1]h[1 − 1] = 2∙1 + 1∙1 = 3
y[2] = g[0]h[2 − 0] + g[1]h[2 − 1] = 2∙1 + 1∙1 = 3
y[3] = g[0]h[3 − 0] + g[1]h[3 − 1] = 2∙0 + 1∙1 = 1
y[4] = y[5] = y[6]= …= 0
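The hand computation above can be confirmed with NumPy’s built-in convolution:

```python
import numpy as np

# Example 2.5 with np.convolve: g[n] = {2, 1}, h[n] = {1, 1, 1}.
g = np.array([2, 1])        # g[0] = 2, g[1] = 1, zero elsewhere
h = np.array([1, 1, 1])     # h[0] = h[1] = h[2] = 1, zero elsewhere
y = np.convolve(g, h)       # full linear convolution, gives y[0] .. y[3]
print(y)                    # [2 3 3 1], matching the hand-computed values
```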

Example 2.6. Compute the convolution between g(t) = e−atu(t) and h(t) = u(t) where u(t) is the step function.

For t < 0, there is no overlap between g(τ) and h(t − τ), so y(t) = 0.

For t > 0,

y(t) = ∫_{−∞}^{∞} g(τ) h(t − τ) dτ = ∫_{0}^{t} e^{−aτ} dτ = [−(1/a) e^{−aτ}]_{0}^{t} = (1/a)(1 − e^{−at})

