Lecture 4

Source Coding:

1- Sampling theorem: Sampling of signals is the
fundamental operation in digital communication. A
continuous-time signal is first converted to a discrete-time
signal by the sampling process, and it should be possible to
recover or reconstruct the signal completely from its samples.
The sampling theorem states that:
a. A band-limited signal of finite energy, which has no
frequency components higher than W Hz, is completely
described by specifying the values of the signal at instants of
time separated by 1/(2W) seconds.
b. A band-limited signal of finite energy, which has no
frequency components higher than W Hz, may be
completely recovered from the knowledge of its samples
taken at the rate of 2W samples per second.
When the sampling rate is chosen as fs = 2fm, each spectral
replicate is separated from each of its neighbors by a
frequency band exactly equal to fs hertz, and the analog
waveform can theoretically be completely recovered from the
samples by the use of filtering. It should be clear that if
fs > 2fm, the replications move farther apart in
frequency, making it easier to perform the filtering operation.
When the sampling rate is reduced, such that fs < 2fm, the
replications overlap and some information is lost. This
phenomenon is called aliasing.
A band-limited signal having no spectral components above
fm hertz can be determined uniquely by values sampled at
uniform intervals of Ts ≤ 1/(2fm) seconds.
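To make the aliasing effect concrete, here is a minimal numerical sketch (my own illustration; the 250 Hz tone and 400 Hz sampling rate are arbitrary choices): a tone undersampled below its Nyquist rate produces exactly the same sample values as a lower-frequency alias.

```python
import numpy as np

fs = 400.0                      # sampling rate in Hz, below the 500 Hz Nyquist rate
n = np.arange(32)               # sample indices
t = n / fs                      # sampling instants

x_250 = np.sin(2 * np.pi * 250 * t)    # fm = 250 Hz tone, undersampled
x_150 = np.sin(2 * np.pi * 150 * t)    # alias at |fs - fm| = 150 Hz

# The undersampled 250 Hz tone yields the negated samples of a 150 Hz tone,
# i.e. the same spectral magnitude: the 250 Hz information is lost.
print(np.allclose(x_250, -x_150))      # True
```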

The sampling rate fs = 1/Ts is chosen so that fs ≥ 2fm. The
sampling rate fs = 2fm is called the Nyquist rate.
Number of samples in 1 sec = 1/Ts = fs.
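The claim that the waveform can be recovered from its samples by filtering can be checked numerically. The sketch below (an illustration, not from the notes) performs ideal reconstruction by sinc interpolation, the time-domain form of ideal low-pass filtering; the tone frequency and window length are arbitrary.

```python
import numpy as np

fm, fs = 250.0, 1000.0               # 250 Hz tone sampled at 4*fm (above Nyquist)
Ts = 1.0 / fs
n = np.arange(-500, 500)             # finite window of sample indices
samples = np.sin(2 * np.pi * fm * n * Ts)

def reconstruct(t):
    # Ideal low-pass reconstruction: x(t) = sum_n x[n] * sinc((t - n*Ts) / Ts)
    return np.sum(samples * np.sinc((t - n * Ts) / Ts))

t0 = 0.0123                          # an instant between sampling points
print(reconstruct(t0))               # ~ sin(2*pi*250*t0)
print(np.sin(2 * np.pi * fm * t0))   # original waveform value
```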
Example: Find the Nyquist rate and Nyquist interval for the
following signals.

i- m(t) = sin(500πt)/πt
ii- m(t) = (1/2π) cos(4000πt) cos(1000πt)
Solution:
i- ω = 500π ∴ 2πf = 500π → f = 250 Hz
Nyquist interval = 1/(2fmax) = 1/(2 × 250) = 2 ms.
Nyquist rate = 2fmax = 2 × 250 = 500 Hz

ii- m(t) = (1/2π) cos(4000πt) cos(1000πt) = (1/4π)[cos(5000πt) + cos(3000πt)]
f1 = 5000π/2π = 2500 Hz
f2 = 3000π/2π = 1500 Hz
Then the highest frequency is 2500 Hz.
Nyquist interval = 1/(2fmax) = 1/(2 × 2500) = 0.2 ms.
Nyquist rate = 2fmax = 2 × 2500 = 5000 Hz
H.W1:
Find the Nyquist interval and Nyquist rate for the following:
i- …
ii- …

Example:
A waveform [20 + 20sin(500t + 30°)] is to be sampled
periodically and reproduced from these sample values. Find the
maximum allowable time interval between sample values, and
how many sample values need to be stored in order to
reproduce 1 sec of this waveform.
Solution:
x(t) = 20 + 20sin(500t + 30°)
ωt = 500t → 2πf = 500 → f = 79.58 Hz
Thus, the highest frequency component in the signal is 79.58 Hz.
The minimum sampling rate will be twice the signal
frequency:
fs(min) = 2 × 79.58 = 159.15 Hz
Ts(max) = 1/fs(min) = 1/159.15 = 6.28 ms.

Number of samples in 1 sec = 1/Ts(max) = 159.15 ≈ 160 samples.
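The same arithmetic as a short script (a sketch; math.ceil is used for the stored-sample count):

```python
import math

omega = 500.0                        # rad/s, from 20 + 20*sin(500t + 30 deg)
f = omega / (2 * math.pi)            # highest frequency component, Hz
fs_min = 2 * f                       # minimum sampling rate (Nyquist rate)
Ts_max = 1 / fs_min                  # maximum allowable sampling interval

print(f"f = {f:.2f} Hz, fs(min) = {fs_min:.2f} Hz")   # 79.58 Hz, 159.15 Hz
print(f"Ts(max) = {Ts_max * 1e3:.2f} ms")             # 6.28 ms
print(f"samples for 1 s: {math.ceil(fs_min)}")        # 160
```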


2- Source Coding

An important problem in communications is the efficient
representation of data generated by a discrete source. The
process by which this representation is accomplished is called
source encoding. An efficient source encoder must satisfy two
functional requirements:

i- The code words produced by the encoder are in binary form.

ii- The source code is uniquely decodable, so that the original
source sequence can be reconstructed perfectly from the
encoded binary sequence.

The entropy for a source with statistically independent
symbols:

H(X) = −Σi pi log2(pi)  bits/symbol

We have:
max[H(X)] = log2(m)
where m is the number of source symbols. A code efficiency
can therefore be defined as:
η = H(X) / max[H(X)]
The overall code length, L, can be defined as the average code
word length:

L = Σi pi li  bits/symbol

where li is the length of the codeword for symbol i. The code
efficiency can be found by:

η = H(X) / L

Note that max[H(X)] is measured in bits/symbol, while L is
measured in bits/codeword.
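These definitions map directly to code. A minimal sketch (function names are my own; the example distribution is illustrative):

```python
import math

def entropy(probs):
    # H(X) = -sum_i p_i * log2(p_i), in bits/symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

def avg_code_length(probs, lengths):
    # L = sum_i p_i * l_i, in bits/symbol
    return sum(p * l for p, l in zip(probs, lengths))

probs = [0.5, 0.25, 0.25]       # example source statistics
lengths = [1, 2, 2]             # e.g. codewords 0, 10, 11

H = entropy(probs)
L = avg_code_length(probs, lengths)
print(H, L, H / L)              # 1.5 bits/symbol, 1.5 bits, efficiency 1.0
```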


a. Fixed-Length Code Words
If the alphabet X consists of the 7 symbols {a, b, c, d, e, f, g},
then the following fixed-length code of block length L = 3
could be used.
C(a) = 000
C(b) = 001
C(c) = 010
C(d) = 011
C(e) = 100
C(f) = 101
C(g) = 110.
The encoded output contains L bits per source symbol. For the
above example the source sequence bad... would be encoded
into 001000011... . Note that the output bits are simply run
together (or, more technically, concatenated). This method is
nonprobabilistic; it takes no account of whether some symbols
occur more frequently than others, and it works robustly
regardless of the symbol frequencies.
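A direct rendering of this fixed-length scheme in code (the table is copied from above; the decoder is added to show that a fixed block length makes parsing trivial):

```python
CODE = {'a': '000', 'b': '001', 'c': '010', 'd': '011',
        'e': '100', 'f': '101', 'g': '110'}
DECODE = {v: k for k, v in CODE.items()}

def encode(symbols):
    # Concatenate the 3-bit codeword of each source symbol.
    return ''.join(CODE[s] for s in symbols)

def decode(bits):
    # Fixed length: simply split the bit stream into 3-bit blocks.
    return ''.join(DECODE[bits[i:i + 3]] for i in range(0, len(bits), 3))

print(encode('bad'))            # 001000011, as in the text
print(decode('001000011'))      # bad
```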

This is used when the source produces almost equiprobable
messages, i.e., p(x1) ≈ p(x2) ≈ … ≈ p(xn); then, for binary
coding:
1- L = r bits/message if n = 2^r (n = 2, 4, 8, 16, … and r
is an integer), which gives η = 100%.
2- L = r bits/message if 2^(r−1) < n < 2^r, i.e., r = ⌈log2(n)⌉,
which gives less efficiency.

Example: For ten equiprobable messages coded in a fixed length
code then
n = 10 → L = ⌈log2(10)⌉ = 4 bits
And
η = H(X)/L = log2(10)/4 = 3.32/4 ≈ 83%

Example: For eight equiprobable messages coded in a fixed
length code then
n = 8 = 2^3 → L = 3 bits and η = log2(8)/3 = 100%
Example: Find the efficiency of a fixed length code used to
encode messages obtained from throwing a fair die (a) once,
(b) twice, (c) 3 times.
Solution:
a- For a fair die, the messages obtained from it are
equiprobable with a probability of 1/6.
n = 6 → L = ⌈log2(6)⌉ = 3 bits/message
H(X) = log2(6) = 2.585 bits/symbol
η = H(X)/L = 2.585/3 = 86.2%

b- For two throws the possible messages are
n = 6 × 6 = 36 with equal probabilities.
L = ⌈log2(36)⌉ = 6 bits,
i.e., 6/2 = 3 bits/symbol per message,
while
η = H(X)/L = log2(36)/6 = 5.17/6 = 86.2%

c- For three throws the possible messages are
n = 6 × 6 × 6 = 216 with equal probabilities.
L = ⌈log2(216)⌉ = 8 bits,
i.e., 8/3 = 2.67 bits/symbol per message,
while
η = H(X)/L = log2(216)/8 = 7.755/8 = 96.9%
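The pattern in this example can be checked programmatically. A short sketch, assuming as above that L = ⌈log2 n⌉ bits for n equiprobable messages:

```python
import math

def fixed_length_efficiency(n):
    # n equiprobable messages: H = log2(n), codeword length L = ceil(log2(n))
    H = math.log2(n)
    L = math.ceil(math.log2(n))
    return H / L

for throws in (1, 2, 3):
    n = 6 ** throws                             # messages from 'throws' die throws
    print(throws, fixed_length_efficiency(n))   # 0.861..., 0.861..., 0.969...
```

Grouping three throws into one codeword raises the efficiency because 216 sits much closer to 2^8 than 6 does to 2^3.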
b. Variable-Length Code Words
When the source symbols are not equally probable, a
more efficient encoding method is to use variable-length
code words. For example, a variable-length code for the
alphabet X = {a, b, c} and its lengths might be given by
C(a)= 0 l(a)=1
C(b)= 10 l(b)=2
C(c)= 11 l(c)=2
The major property that is usually required from any variable-
length code is that of unique decodability. The above code C
for the alphabet X = {a, b, c} is easily shown to be uniquely
decodable. However, a code such as C(a) = 0, C(b) = 1,
C(c) = 01 is not uniquely decodable, even though the
codewords are all different: if the source decoder observes 01,
it cannot determine whether the source emitted (a b) or (c).
Prefix-free codes: A prefix code is a type of code system
(typically a variable-length code) distinguished by its
possession of the "prefix property", which requires that there
is no code word in the system that is a prefix (initial segment)
of any other code word in the system. For example:
{a = 0, b = 110, c = 10, d = 111} is a prefix code. When
message probabilities are not equal, we use variable-length
codes. The following properties need to be considered when
attempting to use variable-length codes:
1) Unique decoding:
Example:
Consider an alphabet of 4 symbols represented by binary
digits as follows:
A=0
B=01
C=11
D=00
If we receive the code word 0011, it is not known whether the
transmission was DC or AAC. This code is not, therefore,
uniquely decodable.
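Unique decodability can be tested mechanically with the Sardinas-Patterson algorithm (a standard test, though not named in these notes; this is a compact sketch). It rejects the code above and accepts the prefix code of the next example:

```python
def is_uniquely_decodable(codewords):
    # Sardinas-Patterson: build sets of "dangling suffixes"; the code is
    # uniquely decodable iff no dangling suffix is itself a codeword.
    cset = set(codewords)

    def tail(a, b):
        # Suffix left over when a is a proper prefix of b, else None.
        return b[len(a):] if b.startswith(a) and a != b else None

    suffixes = {tail(a, b) for a in cset for b in cset} - {None}
    seen = set()
    while suffixes:
        if suffixes & cset:
            return False                 # a dangling suffix is a codeword
        seen |= suffixes
        nxt = set()
        for w in suffixes:
            for c in cset:
                for d in (tail(c, w), tail(w, c)):
                    if d is not None:
                        nxt.add(d)
        suffixes = nxt - seen            # iterate until no new suffixes appear
    return True

print(is_uniquely_decodable(['0', '01', '11', '00']))    # False (DC vs AAC)
print(is_uniquely_decodable(['0', '10', '110', '111']))  # True
```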
2) Instantaneous decoding:
Example:
Consider an alphabet of 4 symbols represented by binary
digits as follows:
A=0
B=10
C=110
D=111
This code can be instantaneously decoded since no complete
codeword is a prefix of a larger codeword. This is in contrast
to the previous example, where A is a prefix of both B and D.
This code is also a 'comma code', as the symbol zero
indicates the end of a codeword, except for the all-ones word,
whose length is known.
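Instantaneous decoding is easy to demonstrate: because no codeword is a prefix of another, the decoder can emit a symbol the moment its buffered bits match a codeword. A minimal sketch using the code above:

```python
CODE = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
LOOKUP = {v: k for k, v in CODE.items()}

def decode_instantaneous(bits):
    # Accumulate bits; the prefix property guarantees that the first match
    # is the only possible codeword, so it can be emitted immediately.
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in LOOKUP:
            out.append(LOOKUP[buf])
            buf = ''
    return ''.join(out)

print(decode_instantaneous('0101100111'))   # ABCAD
```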
Example:
Consider an alphabet of 4 symbols represented by binary
digits as follows:
A=0
B=01
C=011
D=111
The code is identical to the previous example but with the bits
time-reversed. It is still uniquely decodable, but no longer
instantaneous, since earlier codewords are now prefixes of
later ones.
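A prefix-property check makes the contrast between the last two codes explicit (a small sketch; note that unique decodability is a weaker condition than the prefix property):

```python
from itertools import permutations

def is_prefix_free(codewords):
    # Prefix property: no codeword may be an initial segment of another.
    return not any(b.startswith(a) for a, b in permutations(codewords, 2))

print(is_prefix_free(['0', '10', '110', '111']))   # True  -> instantaneous
print(is_prefix_free(['0', '01', '011', '111']))   # False -> not instantaneous
```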
