
TE EXTC Digital Communication Sem - V

Module-1
Introduction
Communication has been one of the greatest needs of the human race. It is essential to form
social unions, to educate the young, and to express a myriad of emotions and needs. Good
communication is central to a civilized society.

1.1 Digital communication system


Digital communication systems are communication systems in which information propagates through the system as discrete (digital) symbols. A digital sequence serves as the interface between the source and the channel input (and likewise between the channel output and the final destination).

[Figure: Block diagram of a digital communication system]

1. Information Source and Input Transducer:


The source of information can be analog or digital, e.g. analog: an audio or video signal; digital: a teletype signal. In digital communication, the signal produced by this source is converted into a digital signal consisting of 1s and 0s. For this we need a source encoder.

2. Channel Encoder:
The information sequence is passed through the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy into the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in transmitting the signal through the channel. For example, the encoder may take k bits of the information sequence and map them to a unique n-bit sequence called a code word, as in the sketch below.
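
To make the k-to-n mapping concrete, here is a minimal Python sketch of a (3,1) repetition code, chosen purely as an illustration; the syllabus does not prescribe this particular code:

```python
def repetition_encode(bits, n=3):
    """Map each information bit to an n-bit code word (here n = 3)."""
    return [b for bit in bits for b in [bit] * n]

def repetition_decode(coded, n=3):
    """Majority-vote decoding: recover each bit from its n-bit code word."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

info = [1, 0, 1, 1]
tx = repetition_encode(info)   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
rx = tx.copy()
rx[1] ^= 1                     # flip one bit to simulate channel noise
print(repetition_decode(rx))   # [1, 0, 1, 1] - the single error is corrected
```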

3. Channel:
The communication channel is the physical medium used for transmitting signals from the transmitter to the receiver. In a wireless system the channel is the atmosphere; in traditional telephony it is a wire; there are also optical channels, underwater acoustic channels, etc. Channels are further classified on the basis of their properties and characteristics, e.g. the AWGN channel.

4. Channel Decoder:
The received sequence of numbers is passed through the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.

5. Source Encoder:
In digital communication we convert the signal from the source into a digital signal, as mentioned above. The point to remember is that we would like to use as few binary digits as possible to represent the signal, so that this efficient representation of the source output contains little or no redundancy. This sequence of binary digits is called the information sequence.

6. Source Decoder:
At the end, if an analog signal is desired, the source decoder tries to decode the sequence from knowledge of the encoding algorithm, which results in an approximate replica of the input at the transmitter end.

Advantages of digital communication


1. Digital communication can be carried out over large distances through the internet and other media.
2. Digital communication provides facilities such as video conferencing, which save a great deal of time, money and effort.
3. It is easy to mix signals and data using digital techniques.
4. Digital communication is faster, easier and cheaper.
5. It can tolerate noise and interference.
6. Errors can be detected and corrected easily because of channel coding.
7. It is used in military applications.
8. Excellent processing techniques are available for digital signals, such as data compression, image processing, channel coding and equalization.

Limitations of digital communication


1) Generally, more bandwidth is required than for analog systems.
2) Synchronization is required.
3) Power consumption is high (due to the various stages of conversion).
4) The circuits are complex, and more sophisticated devices are needed.
5) Sampling error is introduced.
6) Because square waves are more affected by noise, sine waves are sent over the channel while square pulses are used within the devices.
Elaboration:
In information theory, the entropy of a random variable quantifies the average level of uncertainty
or information associated with the variable's potential states or possible outcomes. This measures
the expected amount of information needed to describe the state of the variable, considering the
distribution of probabilities across all potential states.


The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A
Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's
theory defines a data communication system composed of three elements: a source of data,
a communication channel, and a receiver. The "fundamental problem of communication" – as
expressed by Shannon – is for the receiver to be able to identify what data was generated by the
source, based on the signal it receives through the channel. Shannon considered various ways to
encode, compress, and transmit messages from a data source, and proved in his source coding
theorem that the entropy represents an absolute mathematical limit on how well data from the
source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened
this result considerably for noisy channels in his noisy-channel coding theorem.

Entropy in information theory is directly analogous to the entropy in statistical thermodynamics.


The analogy results when the values of the random variable designate energies of microstates, so
Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance
to other areas of mathematics such as combinatorics and machine learning. The definition can be
derived from a set of axioms establishing that entropy should be a measure of how informative the
average outcome of a variable is. For a continuous random variable, differential entropy is
analogous to entropy.
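
The quantity being described is H(X) = -Σ p(x) log2 p(x). A minimal Python sketch, assuming a discrete source with a known probability distribution:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over non-zero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-symbol source: a skewed distribution carries less information per symbol
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits/symbol (maximum for 4 symbols)
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
```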


The average information rate represents the amount of information transmitted from a
source per unit of time, typically measured in bits per second. It's calculated by
multiplying the entropy (or average information content per symbol) by the rate at which
the symbols are emitted by the source. Essentially, it quantifies how much information a
source is generating or transmitting over time.

Key Concepts:
 Entropy (H): Represents the average information content per symbol, often interpreted as a
measure of the uncertainty associated with the source output.
 Symbol Emission Rate (r): The number of symbols or messages generated by the source per
unit of time.
 Information Rate (R): The product of entropy and symbol emission rate (R = rH).
Formula:
R = rH

Where:
 R = Information rate (bits/second)
 r = Rate at which messages are generated (messages/second)
 H = Entropy or average information content (bits/message)

Example:
If a source generates 2B samples (messages) per second with an entropy of 1.8 bits per sample, the information rate would be:
R = 2B messages/second × 1.8 bits/message = 3.6B bits/second
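
A short Python sketch of the R = rH relation, reproducing the worked example above (the value B = 4000 Hz is an arbitrary illustrative assumption):

```python
B = 4000     # assumed bandwidth-like parameter B (illustrative value only)
r = 2 * B    # symbol emission rate: 2B messages/second
H = 1.8      # entropy in bits/message
R = r * H    # information rate in bits/second
print(R)     # 14400.0 bits/second, i.e. 3.6B as in the example
```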

An Additive White Gaussian Noise (AWGN) channel is a model for a communication channel where the only impairment to the signal is noise, which is additive, white, and Gaussian. This model is used to understand and simulate the behavior of communication systems in noisy environments.

Key characteristics of an AWGN channel:


 Additive: The noise is added to the transmitted signal.
 White: The noise has a uniform power spectral density across the frequency band. This means
the noise has the same amount of power at all frequencies within the bandwidth.
 Gaussian: The noise has a normal distribution in the time domain. This means the noise
amplitudes have a bell-shaped distribution, with the majority of values clustered around the
average value.
Applications and Importance:
 Channel Modeling:
AWGN channels are often used as a basic model for various communication systems, especially
those with strong signals and minimal other impairments like fading or interference.
 Simulations:
AWGN is used in simulations to test the performance of communication systems under noisy
conditions.
 Satellite and Deep Space Communications:
AWGN can be a good model for satellite and deep space links, where the signal may be affected
by background noise but not by other common terrestrial impairments.
Mathematical Model:
The output of an AWGN channel (Y(t)) can be represented as the sum of the input
signal (X(t)) and the noise (N(t)):
Y(t) = X(t) + N(t).

where:
 X(t) is the input signal waveform.
 N(t) is the white Gaussian noise process with a specific noise power spectral density (N0).
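
A minimal NumPy sketch of the Y(t) = X(t) + N(t) model, generating white Gaussian noise at a chosen signal-to-noise ratio (the 10 dB target and the 5 Hz test sinusoid are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1e-3)                 # 1 s of samples at 1 kHz
x = np.sin(2 * np.pi * 5 * t)             # input signal X(t): a 5 Hz sinusoid

snr_db = 10                               # target signal-to-noise ratio (assumed)
signal_power = np.mean(x ** 2)
noise_power = signal_power / 10 ** (snr_db / 10)
n = rng.normal(0, np.sqrt(noise_power), x.shape)  # white Gaussian noise N(t)

y = x + n                                 # channel output Y(t) = X(t) + N(t)
print(f"measured SNR: {10 * np.log10(signal_power / np.mean(n ** 2)):.1f} dB")
```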
Limitations:
 Terrestrial Wireless Links:

AWGN is not a good model for most terrestrial wireless links due to factors like fading, multipath
propagation, and interference.

 Real-world systems:
Real-world communication channels are more complex and often involve other impairments
beyond AWGN.

The Shannon-Hartley Theorem provides a theoretical limit on the channel capacity of a communication channel. It states that the channel capacity (C) is equal to the bandwidth (B) multiplied by the logarithm base 2 of 1 plus the signal-to-noise ratio (S/N): C = B log2(1 + S/N). This formula indicates that increasing the bandwidth or improving the signal-to-noise ratio will increase the channel capacity, allowing a higher rate of reliable information transmission.
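
A short Python sketch of the formula; the 3 kHz bandwidth and 30 dB SNR are illustrative, telephone-channel-order assumptions:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 3000                    # bandwidth in Hz (assumed)
snr_db = 30                 # signal-to-noise ratio in dB (assumed)
snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
print(channel_capacity(B, snr))   # roughly 29,900 bits/second
```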

Key Components:
 Channel Capacity (C):
The maximum rate at which information can be reliably transmitted over a channel.
 Bandwidth (B):
The range of frequencies available for transmission, measured in Hertz (Hz).
 Signal-to-Noise Ratio (S/N):
The ratio of the power of the desired signal to the power of the noise.
 log2(1 + S/N):
The logarithm base 2 of (1 + S/N) indicates how much the signal-to-noise ratio contributes to the
increase in channel capacity.
Significance:
 Theoretical Limit:
The Shannon-Hartley Theorem sets a theoretical upper bound on how much information can be
reliably transmitted over a channel with a given bandwidth and noise level.
 Foundation for Communication Systems:
It is a fundamental concept in communication theory, guiding the design and optimization of
communication systems.
 Trade-offs:
It highlights the trade-offs between bandwidth, signal-to-noise ratio, and the achievable data
rate.
Source coding, also known as data compression, is the process of encoding data to
reduce its size for storage or transmission. Huffman coding and Shannon-Fano coding
are two popular lossless source coding methods that assign variable-length binary
codes to characters based on their frequency of occurrence.

Source Coding:

 Source coding, or data compression, aims to reduce redundancy in data.
 It involves converting data into a more compact representation without losing information.
 This process reduces the number of resources required to store and transmit data.
Huffman Coding:
 Huffman coding is a lossless data compression technique that assigns variable-length binary
codes to symbols based on their frequency of occurrence.
 More frequent symbols are assigned shorter codes, while less frequent symbols are assigned
longer codes.
 It builds a binary tree where each node represents a symbol and its probability.
 Huffman coding is known for its efficiency and simplicity.
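
A compact Python sketch of Huffman coding using the usual heap-based tree construction; the four-symbol source and its probabilities are assumed for illustration only:

```python
import heapq

def huffman_codes(probabilities):
    """Build Huffman codes by repeatedly merging the two least-probable nodes."""
    # Each heap entry: (probability, tie-breaker, {symbol: code-so-far})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)
        p2, _, codes2 = heapq.heappop(heap)
        # Prefix '0' to one subtree and '1' to the other, then merge them
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}   # assumed example source
print(huffman_codes(probs))   # {'A': '0', 'B': '10', 'C': '111', 'D': '110'}
```

For this assumed source the average code length is 1.9 bits/symbol, close to the source entropy of about 1.85 bits/symbol.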
Shannon-Fano Coding:
 Shannon-Fano coding is another lossless data compression technique that assigns variable-length
binary codes based on the probabilities of symbols.
 It divides the symbols into two sets based on their probabilities, and assigns codes recursively.
 While Shannon-Fano coding is simpler than Huffman coding, it may not always produce optimal prefix codes.
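
A comparable sketch of the recursive Shannon-Fano split described above, using the same assumed four-symbol source; for this particular distribution it happens to give the same code lengths as Huffman coding:

```python
def shannon_fano(symbols, codes=None, prefix=""):
    """Recursive Shannon-Fano: split probability-sorted symbols into two groups
    whose total probabilities are as nearly equal as possible."""
    if codes is None:
        symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
        codes = {}
    if len(symbols) == 1:
        codes[symbols[0][0]] = prefix or "0"
        return codes
    total = sum(p for _, p in symbols)
    # Choose the split point that minimises the probability imbalance
    best_split, best_diff, running = 1, float("inf"), 0.0
    for i, (_, p) in enumerate(symbols[:-1]):
        running += p
        diff = abs(total - 2 * running)      # |left_total - right_total|
        if diff < best_diff:
            best_diff, best_split = diff, i + 1
    shannon_fano(symbols[:best_split], codes, prefix + "0")
    shannon_fano(symbols[best_split:], codes, prefix + "1")
    return codes

probs = [("A", 0.4), ("B", 0.3), ("C", 0.2), ("D", 0.1)]   # assumed example source
print(shannon_fano(probs))   # {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
```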

Key Differences between Huffman and Shannon-Fano:

Feature            | Huffman Coding                                           | Shannon-Fano Coding
Basis              | Symbol frequency                                         | Symbol probabilities
Efficiency         | Generally more efficient; provides optimal prefix codes  | Less efficient; may not always produce optimal prefix codes
Complexity         | Higher; involves building a binary tree                  | Lower; involves simpler recursive division
Code Lengths       | Variable-length codes that are generally shorter         | Variable-length codes, but may be longer
Algorithm Approach | Bottom-up (building a tree from individual symbols)      | Top-down (dividing symbols into sets)
Optimal Codes      | Always produces optimal prefix codes                     | Does not always produce optimal prefix codes

Benefits of Huffman and Shannon-Fano Coding:


 Lossless Compression: Both techniques preserve the original data's integrity.
 Variable-Length Codes: Assigning codes of varying lengths based on frequency/probability
allows for more efficient representation of data.
 Improved Compression: Both methods can significantly reduce the size of data compared to
fixed-length coding schemes.
