Dcom Viva Imp

The document covers fundamental concepts in information theory, digital communication, modulation techniques, and error control systems. It defines key terms such as entropy, source coding, and channel capacity, and explains various modulation schemes and error correction methods. Additionally, it differentiates between various coding techniques and their applications in digital communication systems.


Unit 1: Information Theory and Source Coding

1. Define entropy.

○ Entropy is a measure of the average information content per source symbol. It quantifies the uncertainty or randomness in the information source.

2. What are the properties of entropy?

○ Non-negativity, maximum entropy occurs for a uniform distribution, and it is additive for independent sources.
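The definition and properties above can be checked numerically. A minimal Python sketch (illustrative, not part of the original notes):

```python
from math import log2

def entropy(p):
    """Shannon entropy H(X) in bits per symbol for a discrete pmf p."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# Maximum entropy for a uniform 4-symbol source: log2(4) = 2 bits/symbol.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0
# A skewed source is more predictable, so it carries less information.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```

The uniform case attaining the maximum illustrates the second property listed above.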

3. List the components of a digital communication system.

○ Source, Source Encoder, Channel Encoder, Modulator, Channel, Demodulator, Channel Decoder, Source Decoder, Destination.

4. Explain Shannon's Source Coding Theorem.

○ It states that the average codeword length per source symbol cannot be less than the entropy of the source, i.e., L >= H(X).

5. What is source coding? Why is it needed?

○ Source coding reduces redundancy in source data to represent information with fewer bits, enhancing transmission efficiency.

6.​ Differentiate between Shannon-Fano and Huffman coding.

○​ Huffman coding is optimal and always gives the most efficient prefix code,
whereas Shannon-Fano is not always optimal.
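Huffman's optimality comes from always merging the two least probable symbols. A short sketch that computes the resulting codeword lengths (illustrative Python; the symbols and probabilities are an assumed example):

```python
import heapq

def huffman_lengths(freqs):
    """Codeword length per symbol for a probability dict, via Huffman merging."""
    heap = [(w, [s]) for s, w in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)   # two least probable groups
        w2, s2 = heapq.heappop(heap)
        for s in s1 + s2:              # every merge adds one bit to each member
            lengths[s] += 1
        heapq.heappush(heap, (w1 + w2, s1 + s2))
    return lengths

lengths = huffman_lengths({'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1})
print(lengths)  # average length 1.9 bits/symbol, close to the entropy of about 1.85
```

The average length (0.4*1 + 0.3*2 + 0.2*3 + 0.1*3 = 1.9 bits) approaches the source entropy, as the Source Coding Theorem requires.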

7. What is differential entropy?

○ Differential entropy extends the concept of entropy to continuous-valued random variables.

8. Define mutual information.

○ Mutual information is the amount of information one random variable contains about another; I(X;Y) = H(X) - H(X|Y).
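Mutual information can be computed directly from a joint distribution. A small sketch (assuming the joint pmf is given as a nested list p[x][y]):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as a 2-D list of p(x, y)."""
    px = [sum(row) for row in joint]            # marginal of X
    py = [sum(col) for col in zip(*joint)]      # marginal of Y
    return sum(
        p * log2(p / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0
    )

# Independent variables share no information:
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
# Perfectly correlated binary variables share one full bit:
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
```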

9. What is channel capacity?

○ Channel capacity is the maximum rate at which information can be reliably transmitted over a communication channel.

10.​State Shannon's Channel Coding Theorem.

○​ For a given channel with capacity C, it is possible to transmit at any rate R < C
with arbitrarily low probability of error.
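For a band-limited AWGN channel the capacity takes the Shannon-Hartley form C = B log2(1 + S/N). A quick numeric check (the telephone-channel figures are illustrative):

```python
from math import log2

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

# A 3.1 kHz telephone channel at 30 dB SNR (linear SNR = 1000):
print(round(awgn_capacity(3100, 1000)))  # roughly 30,898 bits/s
```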

11. What is the significance of the channel capacity theorem?

○ It provides a theoretical upper bound on the reliable data transmission rate of a channel.

12.​How is joint entropy defined?

○​ H(X, Y) is the entropy of a pair of random variables and measures the total
uncertainty of the pair.

13.​Explain conditional entropy.

○​ H(X|Y) measures the amount of uncertainty remaining in X given that Y is known.

14.​List the properties of mutual information.

○​ Non-negative, symmetric: I(X;Y) = I(Y;X), and I(X;X) = H(X).

15. Why is source coding important in digital communication?

○ To reduce transmission bandwidth and increase efficiency by removing redundancy.

Unit 2: Baseband Modulation and Transmission

16. What is baseband transmission?

○ Baseband transmission refers to the transmission of a signal without modulation, using its original frequency range directly over the channel.

17. Define PAM (Pulse Amplitude Modulation).

○ PAM is a modulation technique where the amplitude of discrete pulses is varied in accordance with the message signal.

18.​List different types of PAM.

○​ 2-level PAM (binary), 4-level PAM, 8-level PAM, etc.

19.​What is the power spectral density (PSD) of PAM signals?

○​ The PSD of PAM depends on the pulse shape and the symbol rate; it helps
analyze bandwidth and signal performance.

20. Define Inter-Symbol Interference (ISI).

○ ISI is the distortion that occurs when previous symbols interfere with the current symbol due to channel time dispersion.

21.​What are the causes of ISI?

○​ Limited bandwidth, multipath propagation, and improper filtering.

22.​How can ISI be reduced?

○​ Using equalizers, pulse shaping filters (like Raised Cosine), and correlative
coding techniques.

23. What is correlative coding?

○ Correlative coding intentionally introduces ISI in a controlled way to increase bandwidth efficiency, e.g., Duo-binary coding.

24.​Explain the role of an equalizer.

○​ An equalizer compensates for ISI by adjusting the received signal to restore the
original transmitted symbols.

25.​What are the types of equalizers?

○​ Linear equalizer, Decision Feedback Equalizer (DFE), Adaptive Equalizer.

26. What is an eye diagram?

○ An eye diagram is a graphical representation of a digital signal from an oscilloscope, used to assess ISI and signal quality.

27.​How is the quality of a digital signal judged using an eye pattern?

○​ A wide open eye indicates minimal ISI and good signal quality; a closed eye
suggests severe ISI.

28.​Why is pulse shaping important in digital transmission?

○​ It reduces bandwidth and minimizes ISI for effective signal recovery.

29.​Differentiate between linear and nonlinear equalizers.

○​ Linear equalizers use linear filters, while non-linear equalizers use decision
feedback or adaptive filtering to handle complex ISI.

30.​State the Nyquist criterion for zero ISI.

○​ The Nyquist criterion ensures that the pulse shaping filter produces zero ISI by
spacing pulses at intervals where they do not interfere.
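The raised-cosine pulse mentioned in Q22 is the classic example of a pulse satisfying the Nyquist criterion: sampled at integer multiples of the symbol period T, it is zero everywhere except at t = 0. A sketch of its time-domain form (illustrative; the roll-off value is an assumed example):

```python
from math import pi, sin, cos

def raised_cosine(t, T, beta):
    """Raised-cosine pulse h(t) with symbol period T and roll-off factor beta."""
    if abs(t) < 1e-12:
        return 1.0                      # peak at the sampling instant
    if beta > 0 and abs(abs(t) - T / (2 * beta)) < 1e-12:
        # limit value at the points where the closed form is 0/0
        return (pi / 4) * sin(pi / (2 * beta)) / (pi / (2 * beta))
    sinc = sin(pi * t / T) / (pi * t / T)
    return sinc * cos(pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)

# Zero ISI: the pulse vanishes at every other symbol instant kT, k != 0.
print([round(raised_cosine(k * 1.0, 1.0, 0.5), 9) for k in range(3)])
```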
Unit 3: Bandpass Modulation and Demodulation

31. What is bandpass modulation?

● Bandpass modulation involves shifting the baseband signal to a higher frequency range for efficient transmission over physical channels like RF.

32.​List common digital bandpass modulation schemes.

●​ ASK, FSK, BPSK, QPSK, QAM.

33.​What is the general structure of a digital bandpass transmitter?

●​ Source → Encoder → Modulator → Bandpass filter → Transmit antenna.

34.​What is the role of a demodulator in digital communication?

●​ It extracts the original baseband signal from the modulated carrier signal.

35.​Define Amplitude Shift Keying (ASK).

●​ In ASK, the carrier's amplitude is varied according to digital data while frequency and
phase remain constant.

36.​Define Frequency Shift Keying (FSK).

●​ In FSK, different frequencies represent different digital symbols.

37.​What is Binary Phase Shift Keying (BPSK)?

●​ BPSK uses two phases (0° and 180°) to represent binary 1 and 0, providing robust
noise immunity.

38.​Define Quaternary Phase Shift Keying (QPSK).

●​ QPSK uses four distinct phase shifts to represent two bits per symbol, improving
bandwidth efficiency.

39.​Explain Quadrature Amplitude Modulation (QAM).

●​ QAM combines amplitude and phase variations to transmit multiple bits per symbol,
e.g., 16-QAM, 64-QAM.

40. What is a signal space diagram?

● A graphical representation of modulation schemes in terms of their amplitude and phase components.

41. Compare the bandwidth efficiency of ASK, FSK, and PSK.

● PSK and QAM are more bandwidth-efficient than ASK and FSK due to their multi-bit-per-symbol capability.
42.​Which modulation is more power-efficient: ASK or PSK? Why?

●​ PSK is more power-efficient as it has constant amplitude and better noise performance.

43.​What is the bit error rate (BER)?

●​ BER is the probability of a bit being incorrectly received due to noise or distortion in the
channel.
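BER can be estimated by Monte Carlo simulation and compared against theory; for BPSK over AWGN the theoretical value is Pb = 0.5 * erfc(sqrt(Eb/N0)). A pure-Python sketch (illustrative; bit count and seed are arbitrary choices):

```python
import random
from math import erfc, sqrt

def bpsk_ber_theory(ebno_db):
    """Theoretical BPSK bit error rate over AWGN."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * erfc(sqrt(ebno))

def bpsk_ber_sim(ebno_db, n_bits=100_000, seed=1):
    """Monte Carlo BER estimate for BPSK (+1/-1 symbols) over AWGN."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    sigma = sqrt(1 / (2 * ebno))          # noise std dev for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0, sigma)
        if (received > 0) != bool(bit):   # hard decision at the threshold 0
            errors += 1
    return errors / n_bits

print(bpsk_ber_theory(4.0), bpsk_ber_sim(4.0))  # the two agree closely
```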

44.​How does QPSK compare to BPSK in terms of BER and bandwidth?

●​ QPSK has the same BER performance as BPSK but transmits twice the data per
bandwidth.

45.​What are the applications of QAM?

●​ Used in cable TV, DSL, Wi-Fi, and LTE systems for high-speed data transmission.

46.​What is bandwidth efficiency?

●​ Bandwidth efficiency is the number of bits transmitted per second per Hz of bandwidth,
measured in bps/Hz.

47.​List advantages of FSK.

●​ Robust against amplitude fading and noise; useful in wireless and modem
communication.

48.​Why is BPSK preferred in noisy environments?

●​ Due to its constant envelope and maximum distance between symbols in signal space.

49. Differentiate between coherent and non-coherent detection.

● Coherent detection requires carrier phase synchronization, while non-coherent detection does not.

50. Which modulation scheme is most bandwidth efficient and why?

● Higher-order QAM (e.g., 64-QAM) is most bandwidth-efficient because it transmits more bits per symbol.
Unit 4: Error Control Systems

51. What are error control systems?

● Techniques used to detect and correct errors in digital communication systems to ensure reliable data transmission.

52.​List types of error control.

●​ Error detection, error correction, Automatic Repeat reQuest (ARQ), Forward Error
Correction (FEC).

53.​What are error control codes?

●​ Structured methods for encoding data so that errors can be detected and/or corrected at
the receiver.

54.​What is a linear block code?

●​ A type of error-correcting code where the encoder maps k-bit data into n-bit codewords
using linear operations.
55.​Define a codeword.

●​ A codeword is the encoded n-bit output that includes both data and redundancy bits.

56.​What is a generator matrix?

●​ A matrix used to generate codewords from input data in linear block codes.

57.​What is a systematic linear block code?

●​ A code where the original data appears unchanged in the codeword followed by parity
bits.

58.​Define parity check matrix.

●​ A matrix used to check for errors in received codewords and compute the syndrome.

59.​What is syndrome decoding?

●​ A technique that uses the syndrome to detect and possibly correct errors in linear block
codes.
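Questions 56-59 fit together in one worked example: a systematic (7,4) Hamming code, encoding with a generator matrix G and error detection with a parity check matrix H. This is one common construction, sketched in Python (not necessarily the exact matrices used in class):

```python
# Systematic (7,4) Hamming code: codeword = [d1 d2 d3 d4 | p1 p2 p3]
G = [  # generator matrix rows (all arithmetic is mod 2)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [  # parity check matrix: syndrome = H * c^T
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """Map 4 data bits to a 7-bit codeword using G (mod 2)."""
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """3-bit syndrome of a received word; all zeros means no detected error."""
    return [sum(w * h for w, h in zip(word, row)) % 2 for row in H]

c = encode([1, 0, 1, 1])
print(c, syndrome(c))  # valid codeword, zero syndrome
```

A single-bit error makes the syndrome equal the corresponding column of H, which is exactly what syndrome decoding exploits to locate and correct it.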

60.​What is Hamming distance?

●​ The number of bit positions in which two codewords differ; used to determine
error-detecting and correcting capabilities.

61.​What is minimum distance of a code?

●​ The smallest Hamming distance between any two valid codewords; determines the
error correction capability.

62. How many errors can a linear block code detect?

● A code with minimum distance dmin can detect up to dmin - 1 errors.

63. How many errors can it correct?

● It can correct up to t = ⌊(dmin - 1)/2⌋ errors.
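Questions 60-63 reduce to two small helpers (an illustrative Python sketch):

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def correction_capability(dmin):
    """Errors guaranteed correctable: t = floor((dmin - 1) / 2)."""
    return (dmin - 1) // 2

print(hamming_distance("1011101", "1001001"))  # 2
print(correction_capability(3))  # 1, e.g. Hamming codes
print(correction_capability(7))  # 3
```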

64.​What are cyclic codes?

●​ A subclass of linear block codes where cyclic shifts of a codeword produce another
valid codeword.

65.​What is the generator polynomial in cyclic codes?

●​ A polynomial that defines the cyclic code and is used to encode data polynomially.

66. List properties of binary cyclic codes.

● Cyclically invariant, represented by polynomials, and can be encoded using shift registers.

67.​What is the advantage of cyclic codes?

●​ Simple implementation using linear feedback shift registers (LFSRs) for both encoding
and error detection.

68.​What is CRC (Cyclic Redundancy Check)?

●​ An error-detecting code widely used in digital networks and storage devices, based on
cyclic codes.

69.​Describe the structure of a cyclic encoder using shift registers.

●​ A feedback shift register with XOR gates based on the generator polynomial.

70.​What is a syndrome in cyclic codes?

●​ The remainder obtained after dividing the received polynomial by the generator
polynomial.
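The polynomial division behind CRC and cyclic-code syndromes can be sketched in a few lines. The generator g(x) = x^3 + x + 1 here is an assumed textbook example, not a standardized CRC polynomial:

```python
def mod2_remainder(bits, gen):
    """Remainder of the polynomial `bits` divided by `gen` over GF(2)."""
    reg = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if reg[i]:                        # leading term present: XOR g(x) in
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]

def crc_encode(data, gen):
    """Append the CRC: remainder of data * x^deg(g) divided by g(x)."""
    rem = mod2_remainder(data + [0] * (len(gen) - 1), gen)
    return data + rem

g = [1, 0, 1, 1]                   # g(x) = x^3 + x + 1 (assumed example)
cw = crc_encode([1, 0, 1, 1, 0, 1], g)
print(cw)                          # [1, 0, 1, 1, 0, 1, 0, 1, 1]
print(mod2_remainder(cw, g))       # [0, 0, 0]: valid codeword, zero syndrome
```

Any nonzero remainder at the receiver flags a detected error, exactly the syndrome idea from Q70.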

71.​What is a convolutional code?

●​ A type of FEC code where each input bit affects multiple output bits using shift registers
and modulo-2 adders.

72.​Differentiate between block and convolutional codes.

●​ Block codes work on blocks of bits; convolutional codes work on bit streams with
memory.
73.​What is a constraint length in convolutional codes?

●​ The number of bits in the encoder memory that affects the output.

74.​What is a code tree?

●​ A tree representation of all possible state transitions in a convolutional encoder.

75.​What is a trellis diagram?

●​ A time-sequenced diagram showing all possible state transitions and output sequences
of a convolutional code.

76.​Explain state diagram in convolutional codes.

●​ A finite state machine showing the states and transitions of the encoder based on input
bits.
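The state-machine view can be made concrete with the standard rate-1/2, constraint-length-3 encoder with generators 7 and 5 in octal (a common textbook choice, not necessarily the one from class):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length K = 3,
    generator polynomials g1 = 111 and g2 = 101 (7 and 5 octal)."""
    state = [0, 0]                   # two memory elements give 4 states
    out = []
    for b in bits + [0, 0]:          # two tail bits flush the encoder to state 00
        out.append(b ^ state[0] ^ state[1])  # g1 taps: input + both memory bits
        out.append(b ^ state[1])             # g2 taps: input + oldest memory bit
        state = [b, state[0]]                # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

The tail bits at the end are exactly the ones Q95 refers to: they return the encoder to the zero state so the Viterbi decoder can terminate the trellis.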

77.​What is Viterbi decoding?

●​ A maximum likelihood decoding algorithm that finds the most likely transmitted
sequence using the trellis.

78.​What is free distance in convolutional codes?

●​ The minimum Hamming distance between any two distinct paths in the trellis;
determines error performance.

79.​How does Viterbi decoding achieve error correction?

●​ By comparing path metrics and selecting the path with the minimum error.

80.​What is soft decision decoding?

●​ A decoding method that considers signal amplitude (confidence) levels rather than just
0 or 1 decisions.

81.​List advantages of convolutional coding.

●​ Good performance in low SNR, suitable for real-time transmission, and efficient
decoding via Viterbi algorithm.

82.​What is interleaving and why is it used?

●​ Interleaving rearranges data bits to mitigate burst errors by spreading them out in time
or space.
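A block interleaver is the simplest version: write row by row into an array, read column by column, so a burst of consecutive channel errors is spread across many codewords. A small sketch (illustrative dimensions):

```python
def interleave(bits, rows, cols):
    """Write bits row-wise into a rows x cols array, read them column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    return interleave(bits, cols, rows)

print(interleave([0, 1, 2, 3, 4, 5], 2, 3))  # [0, 3, 1, 4, 2, 5]
```

After deinterleaving, a burst of `rows` consecutive errors lands as single errors in `rows` different codewords, which a single-error-correcting code can then fix.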

83.​Differentiate between ARQ and FEC.


●​ ARQ retransmits corrupted data upon request; FEC corrects errors without
retransmission.

84.​What is Hybrid ARQ?

●​ A combination of ARQ and FEC to improve reliability and throughput.

85.​What are burst errors?

●​ Multiple consecutive bits in a data stream are affected by errors, often due to fading or
interference.

86.​Which codes are best for burst error correction?

●​ Cyclic codes and convolutional codes with interleaving.

87.​What is the role of redundancy in error control?

●​ Redundancy allows detection and correction of errors by adding extra bits.

88.​Compare single-bit and double-bit error correction.

●​ Single-bit correction can fix only one bit; double-bit correction needs more redundancy
and complexity.

89.​What is meant by decoding complexity?

●​ It refers to the computational resources required to decode the received data correctly.

90.​Why are LDPC and Turbo codes used in modern systems?

●​ They offer near-Shannon limit performance and are used in 5G, satellite, and deep
space communication.

91.​Differentiate between hard and soft decision decoding.

●​ Hard decision uses binary input; soft decision uses analog signal values for better
accuracy.

92.​What is the importance of free distance in convolutional codes?

●​ Greater free distance improves the error-correcting capability.

93.​Why are error control codes essential in wireless systems?

●​ Due to high noise and fading, robust error control ensures data integrity.

94.​What is puncturing in convolutional codes?


●​ A technique to increase code rate by removing some output bits systematically.

95.​What are tail bits in convolutional encoding?

●​ Extra bits added at the end to bring the encoder back to the zero state for proper
decoding.

96.​What are the trade-offs in choosing an error control code?

●​ Between redundancy, decoding complexity, and error correction capability.

97.​List applications of convolutional codes.

●​ Mobile communication, satellite, telemetry, and deep-space communication.

98.​How are error correction codes implemented in hardware?

●​ Using logic circuits, shift registers, and XOR gates.

99.​What is code rate in error control coding?

●​ Code rate = k/n, where k is the number of data bits, and n is the total number of
transmitted bits.

100.​ Why is understanding coding theory essential for communication engineers?

●​ It is critical for designing reliable, efficient, and robust digital communication systems.​

Differentiate Between Questions (All Units)

1. Shannon-Fano vs Huffman Coding: Shannon-Fano may not produce optimal prefix codes; Huffman always gives the most efficient prefix code.

2. Entropy vs Differential Entropy: Entropy is for discrete random variables; differential entropy is for continuous random variables.

3. Joint Entropy vs Conditional Entropy: Joint entropy measures the total uncertainty of two variables; conditional entropy measures the remaining uncertainty in one variable given the other.

4. Source Coding vs Channel Coding: Source coding removes redundancy from data; channel coding adds redundancy to protect data from errors.

5. Baseband vs Bandpass Transmission: Baseband transmits the signal in its original frequency range; bandpass shifts the signal to a higher frequency range.

6. ISI vs Noise: ISI is caused by pulse spreading in the channel; noise is an external random disturbance.

7. Linear vs Non-linear Equalizers: Linear equalizers use linear filtering; non-linear equalizers use decision-based feedback.

8. Eye Diagram (Open vs Closed): An open eye indicates good signal quality and low ISI; a closed eye indicates poor signal quality and high ISI.

9. ASK vs FSK: ASK varies the amplitude of the carrier; FSK varies the frequency of the carrier.

10. BPSK vs QPSK: BPSK transmits 1 bit per symbol; QPSK transmits 2 bits per symbol.

11. QPSK vs QAM: QPSK uses only phase variation; QAM uses both amplitude and phase variation.

12. Coherent vs Non-Coherent Detection: Coherent detection requires phase synchronization; non-coherent detection does not require a phase reference.

13. BER vs SNR: BER measures bit errors per total bits sent; SNR measures signal strength relative to noise.

14. Linear Block vs Convolutional Codes: Block codes operate on fixed-length data blocks; convolutional codes operate on bit streams with memory.

15. Generator Matrix vs Parity Check Matrix: The generator matrix is used for encoding; the parity check matrix is used for error detection.

16. Syndrome vs Codeword: The syndrome detects errors; the codeword contains encoded data and redundancy.

17. Cyclic Codes vs Convolutional Codes: Cyclic codes are block-based and polynomial-driven; convolutional codes are stream-based and memory-driven.

18. Trellis Diagram vs Code Tree: The trellis is time-structured and compact; the code tree is exhaustive and suitable only for small constraint lengths.

19. Hard vs Soft Decision Decoding: Hard decision uses binary 0/1 values; soft decision uses signal amplitudes for better accuracy.

20. ARQ vs FEC: ARQ uses retransmission to correct errors; FEC corrects errors without retransmission.
Comparison-Based Viva Questions (All Units)

1. Is Huffman coding better than Shannon-Fano? Why?

● Yes, Huffman is better because it always produces an optimal prefix code with the shortest average codeword length.

2. Is source coding or channel coding more important for compression?

● Source coding is more important, as it removes redundancy from the source data, leading to better compression.

3. Is BPSK or QPSK better for bandwidth efficiency?

● QPSK is better since it transmits 2 bits per symbol, doubling the bandwidth efficiency compared to BPSK.

4. Is FSK better than ASK in noisy environments?

● Yes, FSK is better because frequency is less susceptible to noise than the amplitude variations in ASK.

5. Is coherent or non-coherent detection better in performance?

● Coherent detection is better, as it uses phase synchronization for improved error performance.

6. Is Huffman coding or arithmetic coding better for large alphabets?

● Arithmetic coding is better for large symbol sets, as it provides more efficient compression than Huffman.

7. Is QAM better than PSK for high data rates?

● Yes, QAM is better because it can encode more bits per symbol using both amplitude and phase variations.

8. Is PSK or FSK more bandwidth efficient?

● PSK is more bandwidth efficient, as it uses smaller frequency spacing than FSK.

9. Is cyclic code or block code easier to implement in hardware?

● Cyclic codes are easier, due to their implementation using shift registers and polynomial arithmetic.

10. Is Viterbi decoding or brute-force decoding better for convolutional codes?

● Viterbi decoding is better because it is an efficient maximum likelihood algorithm with lower complexity.

11. Is the eye diagram or the BER test better for assessing signal quality?

● The BER test is better for numerical performance analysis, while the eye diagram is good for visualizing timing and distortion.

12. Is joint entropy more informative than individual entropy?

● Yes, it gives the combined uncertainty of two variables and is essential for analyzing correlated sources.

13. Is mutual information or entropy better for channel analysis?

● Mutual information is better, as it quantifies the actual information transferred through the channel.

14. Is convolutional coding or block coding better for streaming data?

● Convolutional coding is better since it handles continuous bit streams with memory, ideal for real-time applications.

15. Is soft decision decoding or hard decision decoding better?

● Soft decision is better because it uses the confidence levels of received signals, offering superior error correction.

16. Is channel capacity affected more by bandwidth or noise?

● Noise has a stronger impact, as it reduces the signal-to-noise ratio, directly limiting reliable data transmission rates.

17. Is QAM suitable for power-constrained systems? Why or why not?

● No, because QAM requires linear amplification and is sensitive to amplitude noise, increasing power demands.

18. Is FEC or ARQ better for real-time communication?

● FEC is better since it avoids delays from retransmission, which is critical in real-time systems.

19. Is parity check or CRC better for burst error detection?

● CRC is better, as it can detect longer burst errors using polynomial division, unlike simple parity checks.

20. Is interleaving more critical in block codes or convolutional codes?

● It is more critical in convolutional codes, to spread burst errors over time and improve decoding success.
