NATIONAL INSTITUTE OF TECHNOLOGY CALICUT
Department of Electronics and Communication Engineering
WINTER 2024
Course Supervisor: Dr. Deepthi P P
EC6405E – STATISTICAL SIGNAL PROCESSING
Laboratory Experiment-2
Lab Report
(Tool: MATLAB)
SUBMITTED BY
Syamantak Sarkar (M230922EC)
M. Tech – Signal Processing
CODE:
% Define the length of the signal
N = 1000;
% Generate random noise sequence v(n)
v = randn(1, N);
% Compute x(n) based on the given equation
x = 0.447 * (v + circshift(v, -1) + circshift(v, -2) + circshift(v, -3) + circshift(v, -4));
% Prediction using a 2-tap Wiener filter
A = [1 0.8; 0.8 1]; % autocorrelation matrix R_x (lags 0 and 1)
B = [0.8; 0.6];     % cross-correlation vector for one-step prediction (lags 1 and 2)
W = A \ B;          % solve the Wiener-Hopf equations
w0 = W(1);
w1 = W(2);
% Initialize the predicted signal vector
x_hat = zeros(1, N);
% Iterate over the samples to predict x_hat(n)
for n = 3:N
x_hat(n) = w0 * x(n-1) + w1 * x(n-2);
end
% Plot the generated signal x(n)
figure;
subplot(3,1,1);
stem(x, 'LineWidth', 1);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Original Signal x(n)');
grid on;
% Plot the predicted signal x_hat(n)
subplot(3,1,2);
stem(x_hat, 'LineWidth', 1);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Predicted Signal x_{hat}(n)');
grid on;
% Compute the error signal
error_signal = x - x_hat;
% Plot the error signal
subplot(3,1,3);
stem(error_signal, 'LineWidth', 1);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Error Signal (x(n) - x_{hat}(n))');
grid on;
% Define the parameters for quantization
n_bits = 3; % Number of bits
vmax_error = max(error_signal); % Maximum value of the error signal
vmin_error = min(error_signal); % Minimum value of the error signal
vmax_x = max(x); % Maximum value of the original signal
vmin_x = min(x); % Minimum value of the original signal
% Compute the ratio of the squares of the ranges and take the logarithm
Snr_improvement = 10*log10(((vmax_x - vmin_x)^2) / ((vmax_error - vmin_error)^2));
% Print the ratio
fprintf('SNR improvement (in dB): %.2f dB\n', Snr_improvement);
% Define the parameters for quantization
n_bits = 3; % Number of bits
vmax = max(error_signal); % Maximum value of the error signal
vmin = min(error_signal); % Minimum value of the error signal
% Compute the quantization step sizes and SNRs for DPCM and PCM
sum_x_squared = sum(x.^2); % Sum of squares of the original signal x(n)
expect_x = (1/N) * sum_x_squared; % Average signal power E[x^2(n)]
del_dpcm = (vmax - vmin) / (2^n_bits); % Quantization step when the error signal is coded (DPCM)
SNR_dpcm = expect_x / (del_dpcm^2 / 12); % Signal-to-quantization-noise ratio for DPCM
del_pcm = (max(x) - min(x)) / (2^n_bits); % Quantization step when x(n) itself is coded (PCM)
SNR_pcm = expect_x / (del_pcm^2 / 12); % Signal-to-quantization-noise ratio for PCM
% Print the results
fprintf('SNR for DPCM: %.2f dB\n', 10*log10(SNR_dpcm));
fprintf('SNR for PCM: %.2f dB\n', 10*log10(SNR_pcm));
OUTPUT:
SNR improvement (in dB): 5.13 dB
SNR for DPCM: 17.69 dB
SNR for PCM: 12.56 dB
INFERENCE:
Here, r_x(k) = 1 - 0.2|k| is the autocorrelation function used for the Wiener filter design; a short cross-check of the 2-tap design is sketched below.
• The SNR for DPCM is considerably greater than the SNR for PCM, as can be seen from the output.
• The average SNR is higher when the signal is predicted in the absence of noise than when noise is present.
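As a quick cross-check of the 2-tap design above, the short sketch below rebuilds the matrices A and B directly from r_x(k) = 1 - 0.2|k| and solves the Wiener-Hopf equations with the backslash operator instead of an explicit inverse; the resulting minimum mean-square prediction error corresponds to the P2 value reported in the next part.
% Cross-check of the 2-tap Wiener predictor (sketch; same model as above)
rx = @(k) 1 - 0.2*abs(k);        % assumed autocorrelation function
R  = [rx(0) rx(1); rx(1) rx(0)]; % autocorrelation matrix (equals A above)
p  = [rx(1); rx(2)];             % cross-correlation vector (equals B above)
w  = R \ p;                      % Wiener-Hopf solution, preferred over inv(R)*p
mmse = rx(0) - w.' * p;          % minimum mean-square prediction error
fprintf('w = [%.4f %.4f], MMSE = %.4f\n', w(1), w(2), mmse);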
CODE:
% Define k values
k = 0:10;
N=2;
% Calculate autocorrelation function
r_x = autocorr_func(k);
% Define the matrix W
Rx_mat = autocorrelationMatrix(N);
% Define the vector v
rx_vect = autocorrelationvector(N);
% Solve the Wiener-Hopf equations Rx_mat * w = rx_vect for the 2-tap predictor
w = Rx_mat \ transpose(rx_vect);
% Display the result
disp('w matrix(L=2) --> ');
disp(w);
% L=3 Tap Calculations
P_2n = r_x(1)- (w(1,1)*r_x(2)) - (w(2,1)*r_x(3));
r_3=[conj(r_x(4))] -(w(1,1)*conj(r_x(3))) -(w(2,1)*conj(r_x(2)));
T_3 =(-r_3/P_2n);
w_21 = w(1,1)+(T_3*w(2,1));
w_22=w(2,1)+(T_3*w(1,1));
w_23 = T_3;
P_3 = (1-((T_3)*(T_3)))*P_2n;
P_3n = r_x(1)- (w_21*r_x(2)) - (w_22*r_x(3))-(w_23*r_x(4));
fprintf('P2 = ');
disp(P_2n);
fprintf('Gamma_3 = ');
disp(r_3);
fprintf('T_3 = ');
disp(T_3 );
disp('w matrix (L=3) --> ');
disp(w_21);
disp(w_22);
disp(w_23);
% L=4 Tap Calculations
r_4 = (conj(r_x(5))) - (w_21*conj(r_x(4))) - (w_22*conj(r_x(3))) - (w_23*conj(r_x(2)));
T_4 =(-r_4/P_3);
w_31 = w_21+(T_4*w_23);
w_32=w_22+(T_4*w_22);
w_33 = w_23+(T_4*w_21);
w_34 = T_4;
fprintf('P3 = ');
disp(P_3);
fprintf('Gamma_4 = ');
disp(r_4);
fprintf('T_4 = ');
disp(T_4);
disp('w matrix(L=4)--> ');
disp(w_31);
disp(w_32);
disp(w_33);
disp(w_34);
% L=5 Tap Calculations
P_4n = r_x(1)- (w_31*r_x(2)) - (w_32*r_x(3))-(w_33*r_x(4)) -(w_34*r_x(5));
P_4 = (1-((T_4)*(T_4)))*P_3;
r_5 = (conj(r_x(6))) - (w_31*conj(r_x(5))) - (w_32*conj(r_x(4))) - (w_33*conj(r_x(3))) - (w_34*conj(r_x(2)));
T_5 =(-r_5/P_4);
w_41 = w_31+(T_5*w_34);
w_42=w_32+(T_5*w_33);
w_43 = w_33+(T_5*w_32);
w_44 = w_34+(T_5*w_31);
w_45 = T_5;
fprintf('P4 = ');
disp(P_4);
fprintf('Gamma_5 = ');
disp(r_5);
fprintf('T_5 = ');
disp(T_5);
disp('w matrix(L=5) -->');
disp(w_41);
disp(w_42);
disp(w_43);
disp(w_44);
disp(w_45);
% Prediction using the Wiener 5-tap filter
N = 1000;
v = randn(1, N);
x = zeros(1, N);
% Compute x(n) based on the given equation
x = 0.447 * (v + circshift(v, -1) + circshift(v, -2) + circshift(v, -3) + circshift(v, -4));
% Initialize the predicted signal vector
x_hat = zeros(1, N);
w0 = w_41;
w1 = w_42;
w2 = w_43;
w3 = w_44;
w4 = w_45;
% Iterate over the samples to predict x_hat(n)
for n = 6:N
x_hat(n) = w0*x(n-1) + w1*x(n-2) + w2*x(n-3) + w3*x(n-4) + w4*x(n-5);
end
% Plot the generated signal x(n)
figure;
subplot(3,1,1);
stem(x);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Original Signal x(n)');
grid on;
% Plot the predicted signal x_hat(n)
subplot(3,1,2);
stem(x_hat);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Predicted Signal x_{hat}(n)');
grid on;
% Compute the error signal
error_signal = x - x_hat;
% Plot the error signal
subplot(3,1,3);
stem(error_signal);
xlabel('Sample (n)');
ylabel('Amplitude');
title('Error Signal (x(n) - x_{hat}(n))');
grid on;
% Define the parameters for quantization
n_bits = 3; % Number of bits
vmax_error = max(error_signal); % Maximum value of the error signal
vmin_error = min(error_signal); % Minimum value of the error signal
vmax_x = max(x); % Maximum value of the original signal
vmin_x = min(x); % Minimum value of the original signal
% Compute the ratio of the squares of the ranges and take the logarithm
Snr_improvement = 10*log10(((vmax_x - vmin_x)^2) / ((vmax_error - vmin_error)^2));
fprintf('vmax error: %.2f \n',vmax_error);
fprintf('vmin_error: %.2f \n',vmin_error);
fprintf('vmax_x: %.2f\n',vmax_x);
fprintf('vmin_x: %.2f \n',vmin_x);
% Print the ratio
fprintf( 'SNR improvement(in dB): %.2f dB\n', Snr_improvement);
% Compute the quantization step sizes and SNRs for DPCM and PCM
sum_x_squared = sum(x.^2); % Sum of squares of the original signal x(n)
fprintf('sum of x squares: %.0f\n', sum_x_squared);
expect_x = sum_x_squared / N; % Average signal power E[x^2(n)]
del_dpcm = (vmax_error - vmin_error) / (2^n_bits); % Quantization step when the error signal is coded (DPCM)
SNR_dpcm = expect_x / (del_dpcm^2 / 12); % Signal-to-quantization-noise ratio for DPCM
del_pcm = (vmax_x - vmin_x) / (2^n_bits); % Quantization step when x(n) itself is coded (PCM)
SNR_pcm = expect_x / (del_pcm^2 / 12); % Signal-to-quantization-noise ratio for PCM
% Print the results
fprintf('SNR for DPCM: %.2f dB\n', abs(10*log10(SNR_dpcm)));
fprintf('SNR for PCM: %.2f dB\n', 10*log10(SNR_pcm));
function Rx = autocorrelationMatrix(N)
Rx = zeros(N);
for i = 1:N
for j = i:N
Rx(i, j) = autocorr_func(i - j);
Rx(j, i) = Rx(i, j);
end
end
end
%rx vector creation
function rx = autocorrelationvector(N)
rx = zeros(1, N);
for i = 1:N
rx(i) = autocorr_func(i);
end
end
function r_x = autocorr_func(k)
r_x = 1 - 0.2 * abs(k);
end
OUTPUT:
w matrix(L=2) -->
0.8889
-0.1111
P2 = 0.3556
Gamma_3 = -0.0444
T_3 = 0.1250
w matrix (L=3) -->
0.8750
-1.0686e-15
0.1250
P3 = 0.3500
Gamma_4 = -0.2500
T_4 = 0.7143
w matrix(L=4)-->
0.9643
-1.8319e-15
0.7500
0.7143
P4 = 0.1714
Gamma_5 = -1.0214
T_5 = 5.9583
w matrix(L=5) -->
5.2202
4.4687
0.7500
6.4598
5.9583
vmax error: 56.28
vmin_error: -50.75
vmax_x: 2.91
vmin_x: -3.03
SNR improvement(in dB): -25.11 dB
sum of x squares: 1096
SNR for DPCM: 11.34 dB
SNR for PCM: 13.77 dB
INFERENCE:
The coefficients of the five-tap Wiener estimator are much larger in magnitude, as can be seen from the output.
• The filter coefficients change at each step as the number of taps is increased from L = 2 to L = 5, so each step requires recalculating all of the filter coefficients; a compact sketch of this order recursion is given below.
• For both PCM and DPCM, the SNR values exceed those of the L = 2 case. Since the correlation function extends over several lags, increasing the number of taps (i.e., the dependence on earlier inputs) should make the prediction more and more accurate.
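The step-by-step L = 3, 4 and 5 calculations above follow an order-update (Levinson-Durbin) pattern. For reference, a compact sketch of the standard Levinson-Durbin recursion for the same autocorrelation is given below; it is only a sketch of the general order-update idea, and its sign convention for the last tap need not coincide with the hand-worked steps above.
% Levinson-Durbin order recursion for the one-step predictor (sketch)
% r(k+1) holds the autocorrelation at lag k, with r_x(k) = 1 - 0.2|k|
L = 5;
r = 1 - 0.2*abs(0:L);
w = r(2) / r(1);                 % order-1 predictor tap
P = r(1) * (1 - (r(2)/r(1))^2);  % order-1 prediction-error power
for m = 2:L
gamma = r(m+1) - w * r(m:-1:2).'; % innovation term Gamma_m
k = gamma / P;                    % reflection coefficient (sign convention varies between texts)
w = [w - k*fliplr(w), k];         % order update of all predictor taps
P = P * (1 - k^2);                % updated prediction-error power
fprintf('Order %d: P = %.4f, taps = [%s ]\n', m, P, num2str(w, ' %.4f'));
end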
CODE:
% Generation of random signal
N = 1000;
v = randn(1, N);
x = zeros(1, N);
y = zeros(1, N);
% Compute x(n) based on the given equation
x = 0.447 * (v + circshift(v, -1) + circshift(v, -2) + circshift(v, -3) + circshift(v, -4));
% Plotting the generated random signal x(n)
figure;
subplot(2, 1, 1);
stem(1:N, x);
xlabel('n');
ylabel('x(n)');
title('Generated random signal x(n)')
% Adding white noise to the signal x(n)
u = normrnd(0, sqrt(0.25), 1, N); % zero-mean white noise with variance 0.25
for n = 1:N
y(n) = x(n) + u(n);
end
% Plotting the signal with added white noise
subplot(2, 1, 2);
stem(1:N, y);
title('Y (x + white noise)');
% Prediction using a 2-tap Wiener filter operating on the noisy observation y(n)
A = [1.25 0.8; 0.8 1.25]; % autocorrelation matrix of y(n): R_x plus the noise variance on the diagonal
B = [0.8; 0.6];           % cross-correlation between x(n) and the past observations
W = A \ B;                % solve the Wiener-Hopf equations
w0 = W(1);
w1 = W(2);
% Initialize the predicted signal vector
x_hat = zeros(1, N);
% Iterate over the samples to predict x_hat(n)
for n = 3:N
x_hat(n) = w0 * y(n-1) + w1 * y(n-2);
end
% Plotting the generated signal, predicted signal, and error signal
figure;
subplot(3, 1, 1);
stem(1:N, x);
title('Generated Signal x(n)');
subplot(3, 1, 2);
stem(1:N, x_hat);
title('Predicted Signal x_{hat}(n)');
error_signal = x - x_hat;
subplot(3, 1, 3);
stem(1:N, error_signal);
title('Error Signal (x(n) - x_{hat}(n))');
% Define the parameters for quantization
n_bits = 3; % Number of bits
vmax_error = max(error_signal); % Maximum value of the error signal
vmin_error = min(error_signal); % Minimum value of the error signal
vmax_x = max(x); % Maximum value of the original signal
vmin_x = min(x); % Minimum value of the original signal
% Compute the ratio of the squares of the ranges and take the logarithm
Snr_improvement = 10*log10(((vmax_x - vmin_x)^2) / ((vmax_error - vmin_error)^2));
fprintf('vmax error: %.2f \n',vmax_error);
fprintf('vmin_error: %.2f \n',vmin_error);
fprintf('vmax_x: %.2f\n',vmax_x);
fprintf('vmin_x: %.2f \n',vmin_x);
% Print the ratio
fprintf( 'SNR improvement(in dB): %.2f dB\n', Snr_improvement);
% Compute the quantization step sizes and SNRs for DPCM and PCM
sum_x_squared = sum(x.^2); % Sum of squares of the original signal x(n)
fprintf('sum of x squares: %.0f\n', sum_x_squared);
expect_x = sum_x_squared / N; % Average signal power E[x^2(n)]
del_dpcm = (vmax_error - vmin_error) / (2^n_bits); % Quantization step when the error signal is coded (DPCM)
SNR_dpcm = expect_x / (del_dpcm^2 / 12); % Signal-to-quantization-noise ratio for DPCM
del_pcm = (vmax_x - vmin_x) / (2^n_bits); % Quantization step when x(n) itself is coded (PCM)
SNR_pcm = expect_x / (del_pcm^2 / 12); % Signal-to-quantization-noise ratio for PCM
% Print the results
fprintf('SNR for DPCM: %.2f dB\n', 10*log10(SNR_dpcm));
fprintf('SNR for PCM: %.2f dB\n', 10*log10(SNR_pcm));
OUTPUT:
vmax error: 2.40
vmin_error: -2.52
vmax_x: 2.66
vmin_x: -3.30
SNR improvement(in dB): 1.65 dB
sum of x squares: 1005
SNR for DPCM: 15.03 dB
SNR for PCM: 13.38 dB
COLOURED NOISE:
CODE:
% Generation of random signal
N = 1000;
v = randn(1, N);
x = zeros(1, N);
y = zeros(1, N);
% Compute x(n) based on the given equation
x = 0.447 * (v + circshift(v, -1) + circshift(v, -2) + circshift(v, -3) + circshift(v, -4));
% Plotting the generated random signal x(n)
figure;
subplot(2, 1, 1);
stem(1:N, x);
xlabel('n');
ylabel('x(n)');
title('Generated random signal x(n)')
% Adding coloured noise to the signal x(n)
% Coloured noise: a first-order moving average of the white sequence
u = zeros(1, N);
u(1) = sqrt(0.25 / 2) * v(1);
for i = 2:1:N
u(i) = sqrt(0.25 / 2) * v(i - 1) + sqrt(0.25 / 2) * v(i);
end
for n = 1:N
y(n) = x(n) + u(n);
end
% Plotting the signal with added coloured noise
subplot(2, 1, 2);
stem(1:N, y);
title('Y (x + coloured noise)');
% Prediction using a 2-tap Wiener filter operating on the noisy observation y(n)
A = [1.25 0.925; 0.8 0.925];
B = [0.8; 0.6];
W = A \ B;
w0 = W(1);
w1 = W(2);
% Initialize the predicted signal vector
x_hat = zeros(1, N);
% Iterate over the samples to predict x_hat(n)
for n = 3:N
x_hat(n) = w0 * y(n-1) + w1 * y(n-2);
end
% Plotting the generated signal, predicted signal, and error signal
figure;
subplot(3, 1, 1);
stem(1:N, x);
title('Generated Signal x(n)');
subplot(3, 1, 2);
stem(1:N, x_hat);
title('Predicted Signal x_{hat}(n)');
error_signal = x - x_hat;
subplot(3, 1, 3);
stem(1:N, error_signal);
title('Error Signal (x(n) - x_{hat}(n))');
% Define the parameters for quantization
n_bits = 3; % Number of bits
vmax_error = max(error_signal); % Maximum value of the error signal
vmin_error = min(error_signal); % Minimum value of the error signal
vmax_x = max(x); % Maximum value of the original signal
vmin_x = min(x); % Minimum value of the original signal
% Compute the ratio of the squares of the ranges and take the logarithm
Snr_improvement = 10*log10(((vmax_x - vmin_x)^2) / ((vmax_error - vmin_error)^2));
fprintf('vmax error: %.2f \n',vmax_error);
fprintf('vmin_error: %.2f \n',vmin_error);
fprintf('vmax_x: %.2f\n',vmax_x);
fprintf('vmin_x: %.2f \n',vmin_x);
% Print the ratio
fprintf( 'SNR improvement(in dB): %.2f dB\n', Snr_improvement);
% Compute the quantization step sizes and SNRs for DPCM and PCM
sum_x_squared = sum(x.^2); % Sum of squares of the original signal x(n)
fprintf('sum of x squares: %.0f\n', sum_x_squared);
expect_x = sum_x_squared / N; % Average signal power E[x^2(n)]
del_dpcm = (vmax_error - vmin_error) / (2^n_bits); % Quantization step when the error signal is coded (DPCM)
SNR_dpcm = expect_x / (del_dpcm^2 / 12); % Signal-to-quantization-noise ratio for DPCM
del_pcm = (vmax_x - vmin_x) / (2^n_bits); % Quantization step when x(n) itself is coded (PCM)
SNR_pcm = expect_x / (del_pcm^2 / 12); % Signal-to-quantization-noise ratio for PCM
% Print the results
fprintf('SNR for DPCM: %.2f dB\n', 10*log10(SNR_dpcm));
fprintf('SNR for PCM: %.2f dB\n', 10*log10(SNR_pcm));
OUTPUT:
vmax error: 2.44
vmin_error: -2.61
vmax_x: 2.97
vmin_x: -2.86
SNR improvement(in dB): 1.24 dB
sum of x squares: 1031
SNR for DPCM: 14.92 dB
SNR for PCM: 13.67 dB
INFERENCE:
When predictions are made from a noisy observation, the resulting SNR is lower.
The SNR is lower for the coloured-noise case than for the white-noise case.
In general, the prediction SNR begins to fall as soon as noise is added to the signal.
The following table compares the DPCM and PCM SNR values obtained in the different cases; a small cross-check of this trend is sketched after the table.
SNR (dB)   L = 2 Wiener filter    L = 5 Wiener filter    L = 2 Wiener filter    L = 2 Wiener filter
           prediction             prediction             prediction             prediction
           (no noise)             (no noise)             (white noise)          (coloured noise)
DPCM       15.69                  16.34                  15.03                  14.92
PCM        12.56                  13.77                  13.38                  13.67
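As a small cross-check of the trend summarised in the table, the sketch below compares the theoretical minimum prediction-error power of the 2-tap Wiener predictor with and without additive white observation noise of variance 0.25 (the coloured-noise case is not modelled here); a larger error power corresponds to a lower achievable prediction SNR.
% Theoretical 2-tap prediction-error power, noise-free vs. white observation noise (sketch)
Rx = [1 0.8; 0.8 1];         % autocorrelation matrix of x(n)
Ry = Rx + 0.25*eye(2);       % autocorrelation matrix of y(n) = x(n) + white noise
p  = [0.8; 0.6];             % cross-correlation between x(n) and the past observations
w_clean = Rx \ p;            % predictor designed on the clean signal
w_noisy = Ry \ p;            % predictor operating on the noisy observation
P_clean = 1 - w_clean.' * p; % MMSE = r_x(0) - w' * p
P_noisy = 1 - w_noisy.' * p;
fprintf('MMSE without noise: %.4f, with white noise: %.4f\n', P_clean, P_noisy);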
Inferences:
• DPCM SNR is higher than PCM SNR in all the experiments. This is because in DPCM the error signal is encoded and transmitted, so the quantization-error energy is much smaller than in PCM, where the signal x(n) itself is quantized and transmitted; a small illustration is sketched below.
• When the signal is predicted from the white-noise-corrupted observation, the error signal has a smaller swing, so the SNR improvement is better than when predicting from the coloured-noise-corrupted observation.
• For the L = 5 tap case the SNR improvement is poor; this may be due to overfitting.
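To make the first inference concrete, the sketch below quantizes x(n) directly (as in PCM) and the 2-tap prediction error (as in open-loop DPCM) with the same 3-bit uniform quantizer and compares the resulting quantization-noise powers and SNRs. It is only an illustrative sketch under the same signal model; the exact numbers change from run to run because the signal is random.
% DPCM vs. PCM with the same 3-bit uniform quantizer (illustrative sketch)
N = 1000;
v = randn(1, N);
x = 0.447 * (v + circshift(v,-1) + circshift(v,-2) + circshift(v,-3) + circshift(v,-4));
w = [1 0.8; 0.8 1] \ [0.8; 0.6]; % 2-tap Wiener predictor
e = zeros(1, N);
for n = 3:N
e(n) = x(n) - (w(1)*x(n-1) + w(2)*x(n-2)); % prediction error (open-loop DPCM)
end
n_bits = 3;
quant = @(s) round(s / ((max(s)-min(s))/2^n_bits)) * ((max(s)-min(s))/2^n_bits);
q_pcm  = x - quant(x); % quantization error when x(n) itself is coded
q_dpcm = e - quant(e); % quantization error when the prediction error is coded
fprintf('Quantization-noise power  PCM: %.4f   DPCM: %.4f\n', mean(q_pcm.^2), mean(q_dpcm.^2));
fprintf('SNR (dB)                  PCM: %.2f   DPCM: %.2f\n', ...
10*log10(mean(x.^2)/mean(q_pcm.^2)), 10*log10(mean(x.^2)/mean(q_dpcm.^2)));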