
Assignment

Assignment No: 01
Assignment Name: Image Restoration and Reconstruction
Course Title: Image Processing
Course Code: CSE355
Sec: 3
Submission Date: 14-09-2025
Submitted To: Md Mijanur Rahman

Submitted By:

Name                ID
Md Sabbir Ahmed     2022200000115
Md Saniul Islam     2022200000144
Abdullah Al Ahad    2022200000117
Eyamin Hasan        2022200000122


Chapter 5
Image restoration and reconstruction

Table of Contents

5.1 Noise Model

· Gaussian noise

· Rayleigh noise

· Erlang(Gamma) noise

· Exponential noise

· Uniform noise

5.2 Filtering

1. Mean Filter

· Arithmetic Mean Filter

· Geometric Mean Filter

· Harmonic Mean Filter

· Contra-Harmonic Mean Filter

2. Median Filter

· Max filter

· Min Filter

· Alpha-Trimmed Mean Filter

· Midpoint Filter

5.3 Adaptive Filter

· Adaptive Median Filter

· Adaptive Local Noise Reduction Filter


5.4 Periodic Noise Reduction Using Frequency Domain Filtering

· Notch Filtering

· Optimum Notch Filtering

5.5 Linear Positive Invariant Degradation

5.6 Estimating the Degradation Function

· Estimation By Image Observation

· Estimation By Image Experimentation

· Estimation By Image Mathematical Modeling

5.7 Inverse Filtering

5.8 Minimum Mean Square Error (Wiener) Filtering

5.9 Constrained Least Squares (CLS) Filtering

5.10 Geometric Mean Filter

5.11 Image Reconstruction From Projection

· Computed Tomography(CT)

· Projection and Radon Transform

· Backprojection

· The Fourier Slice Theorem

· Reconstruction using Parallel-Beam Filtered Backprojection

· Reconstruction using Fan-Beam Filtered Backprojection


5.1 Noise Model:
Gaussian Noise
Gaussian noise is a type of statistical noise that follows the normal distribution, also known
as the Gaussian distribution. It is one of the most commonly used noise models in image
processing.

This noise is called “Gaussian” because the random values (noise values) follow a
bell-shaped curve when plotted as a histogram.

The PDF of a Gaussian random variable, z, is defined by the following familiar expression:

p(z) = (1 / (√(2π) σ)) e^(−(z − z̄)² / 2σ²),  −∞ < z < ∞
Where:

● z = the gray level (pixel intensity value)
● z̄ = the mean of the distribution (center of the curve; default is 0 for pure noise)
● σ = the standard deviation (controls the spread/variation of the noise)
● p(z) = the probability density of intensity value z
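The following minimal NumPy sketch (not from the chapter text; the helper name and parameter defaults are illustrative) adds such noise to an image. np.random.normal draws samples that follow exactly this bell-shaped p(z):

import numpy as np

def add_gaussian_noise(image, mu=0.0, sigma=15.0):
    # np.random.normal draws samples whose histogram follows the bell-shaped p(z)
    noise = np.random.normal(mu, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)  # keep the 8-bit intensity range

# Example: a flat gray image; the histogram of (noisy - 100) is approximately Gaussian.
img = np.full((64, 64), 100, dtype=np.uint8)
noisy = add_gaussian_noise(img, mu=0.0, sigma=10.0)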

Rayleigh Noise
Rayleigh noise is a type of additive noise commonly found in image processing, especially in
applications like radar imaging, laser systems, and medical ultrasound images. It follows a
Rayleigh probability distribution, which is non-Gaussian and positively skewed (i.e., not
symmetric).
This type of noise typically arises in situations where the image signal results from the
magnitude of two orthogonal Gaussian noise components (e.g., in coherent imaging systems).
Because of its nature, Rayleigh noise tends to brighten image regions and affects
high-intensity areas more than low ones.

The PDF of Rayleigh noise is given by

p(z) = (2/b)(z − a) e^(−(z − a)²/b)  for z ≥ a,  and  p(z) = 0  for z < a

Here,

a = location parameter; shifts the distribution along the horizontal axis (often set to 0 for simplicity).

b = scale parameter; controls the spread or width of the curve (how much the values vary). A larger b → more spread.

z = noise value added to a pixel

p(z) = probability density of that value

Erlang (Gamma) Noise


Erlang noise is a type of multiplicative noise often modeled by the Gamma distribution. It
is commonly used to describe noise in signals where the noise is related to the signal’s
intensity — for example, in radar or ultrasound imaging.

The PDF of Erlang noise is

p(z) = (a^b z^(b−1) / (b − 1)!) e^(−az),  z ≥ 0

where a > 0, b is a positive integer, and “!” indicates factorial. The mean and variance of z are

z̄ = b/a

σ² = b/a²

a>0: This is the rate parameter – it controls how fast the distribution decays.​

b: This is the shape parameter (a positive integer), representing the number of events being
summed.​

z: The random variable (like image intensity).

Exponential Noise
Exponential noise is used to model random variations (like brightness in images). It has the
probability density function:
p(z) = a e^(−az),  z ≥ 0

where a > 0 is the rate parameter.

Mean: z̄ = 1/a

Variance: σ² = 1/a²

Uniform Noise
Uniform noise is a type of random noise where all values within a certain range are equally
likely to occur. It follows a uniform probability distribution.

In other words, if noise values range between a and b, then every value in that interval has the same probability.

p(z) = 1/(b − a)  for a ≤ z ≤ b,  and  p(z) = 0  otherwise

The mean and variance of z are

z̄ = (a + b)/2

σ² = (b − a)²/12

Noise Type        Histogram Shape                   Typical Usage

Gaussian          Bell-shaped, symmetric            Camera sensors, electronics
Rayleigh          Skewed (one-sided)                Radar, sonar images
Erlang (Gamma)    Skewed, smoother than Rayleigh    Medical imaging
Exponential       Highly skewed                     Particle detection, gamma scan
Uniform           Flat across all values            Artificial noise testing
Salt and pepper   Spikes at 0 and 255               Transmission/sensor faults

Salt-and-Pepper Noise
Salt-and-pepper noise is a type of image noise where some pixels in an image are randomly
replaced with either the minimum value (0 — black) or the maximum value (255 —
white).
It appears as scattered white and black dots, resembling salt and pepper sprinkled over the
image.
This noise is typically caused by:
●​ Transmission errors
●​ Bit errors in transmission
●​ Faulty memory or sensor issues
●​ Analog-to-digital conversion problems
It mainly affects individual pixels, making it different from smoother noise types like
Gaussian noise.
In salt-and-pepper noise:
●​ Salt refers to the white pixels — these are pixels set to the maximum intensity value
(e.g., 255 in an 8-bit image). So, "salt" = bright, white specks.
●​ Pepper refers to the black pixels — these are pixels set to the minimum intensity
value (0 in an 8-bit image). So, "pepper" = dark, black specks.

Together, they create a noisy image with randomly scattered bright and dark pixels, like
grains of salt and pepper sprinkled over it.

Left: Original grayscale image without any noise.

Right: Same image corrupted with 5% salt-and-pepper noise, where random pixels are
replaced with:
Black (0) → Pepper noise
White (255) → Salt noise
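A minimal NumPy sketch of this corruption model (the helper name and the 5% default density are illustrative choices):

import numpy as np

def add_salt_pepper(image, density=0.05):
    out = image.copy()
    u = np.random.rand(*image.shape)               # one uniform draw per pixel
    out[u < density / 2] = 0                       # pepper: minimum intensity (black)
    out[(u >= density / 2) & (u < density)] = 255  # salt: maximum intensity (white)
    return out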
Comparison between Salt-and-Pepper Noise and Gaussian Noise:

Feature                    Salt-and-Pepper Noise                                  Gaussian Noise

Type                       Impulse noise                                          Additive noise
Pixel corruption pattern   Random pixels set to 0 or 255                          All pixels slightly altered by a Gaussian distribution
Formula                    Probabilistic pixel replacement (see earlier)          p(z) = (1/(√(2π)σ)) e^(−(z−z̄)²/2σ²)
Statistical distribution   Bimodal (spikes at 0 and 255)                          Normal (bell-curved), centred at the mean
Visual appearance          Sharp black and white spikes                           Overall grainy or blurry look
Cause in real life         Transmission errors, sensor faults                     Sensor thermal noise, low light
Removal filter             Median, adaptive median, contraharmonic                Gaussian, Wiener
Best restoration model     Median filtering                                       Gaussian or Wiener filtering
Effect on ML models        Sharp outliers confuse models; may alter edges heavily Gradual noise degrades general features
Histogram effect           Spikes at 0 and 255                                    Broadens histogram around the mean value
5.2 Filtering:
Spatial Filters for Noise Reduction:

1.​ Mean Filter (Averaging Filter):


o​ Calculates the average of the pixel values within the mask window.
o​ Smooths the image by reducing intensity variations caused by noise.
o​ Good for Gaussian noise but blurs edges.
2.​ Median Filter:
o​ Takes the median of pixel values within the mask window.
o​ Effective at removing Salt-and-Pepper noise.
o​ Preserves edges better than mean filter.
3.​ Adaptive Filters:
o​ Adjust the filtering based on local image characteristics such as variance,
preserving edges and details.

Mean Filter
Arithmetic Mean Filter

●​ Definition: Each pixel is replaced with the average of pixel values in its
neighbourhood.
● Formula:

f̂(x,y) = (1/mn) Σ_{(r,c)∈S_xy} g(r,c)

where:

●​ g(r,c) = input image pixels


●​ Sxy​= neighbourhood window of size m×n
(a) X-ray image of a circuit board; (b) result of filtering with the arithmetic mean filter

Effect:
●​ Smooths image by reducing intensity variations.
●​ Reduces Gaussian noise well.
●​ Disadvantage: Blurs edges and fine details.

0 0 0 0 0 0 0

0 12 25 30 28 15 0

0 20 100 35 40 22 0

0 18 25 90 45 20 0

0 22 28 40 50 18 0

0 15 20 25 30 10 0

0 0 0 0 0 0 0

Input image

Given 5x5 pixel image (intensity values):


Here we use zero padding so that every pixel of the image can be filtered.
Let the kernel (window size) be 3x3 for each pixel.
Let us compute the filtered value at the pixel with value 90.
Now, sum all the values in the 3x3 neighbourhood of that pixel:
Σ g(r,c) = 100 + 35 + 40 + 25 + 90 + 45 + 28 + 40 + 50 = 453
We know,

f̂ = (1/mn) Σ g(r,c)

so now,

f̂ = (1/(3×3)) × 453 = 453/9 = 50.33

0 0 0 0 0 0 0

0 39.25 37 43 28.33 26.25 0

0 33.33 39.44 46.44 36.11 28.33 0

0 35.5 42 50.33 40 32.5 0

0 21.33 31.44 39.22 36.44 28.83 0

0 21.25 25 32.16 28.83 27 0

0 0 0 0 0 0 0

Output image
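The same computation can be checked with a short NumPy sketch using the zero padding and 3x3 window described above; the printed value reproduces the 50.33 obtained for the pixel with value 90:

import numpy as np

img = np.array([[12, 25, 30, 28, 15],
                [20, 100, 35, 40, 22],
                [18, 25, 90, 45, 20],
                [22, 28, 40, 50, 18],
                [15, 20, 25, 30, 10]], dtype=np.float64)

padded = np.pad(img, 1, mode='constant', constant_values=0)  # zero padding
out = np.zeros_like(img)
for x in range(img.shape[0]):
    for y in range(img.shape[1]):
        window = padded[x:x + 3, y:y + 3]   # 3x3 neighbourhood S_xy
        out[x, y] = window.mean()           # (1/mn) * sum of g(r,c)

print(round(out[2, 2], 2))  # 50.33, the filtered value at the 90-pixel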

Geometric Mean Filter


●​ Definition: Each pixel is replaced with the geometric mean of the pixel values in its
neighbourhood.
● Formula: f̂(x,y) = [ ∏_{(r,c)∈S_xy} g(r,c) ]^(1/mn)

where

o​ g(r,c) = input image pixels


o​ Sxy= neighbourhood window of size m×n
(a) X-ray image of a circuit board; (b) result of filtering with the geometric mean filter

●​ Effect:
o​ Reduces noise while preserving edges and details better than the arithmetic
mean filter.
o​ Works well for Gaussian noise and speckle noise.
o​ Does not blur edges as much as the arithmetic mean filter.

0 0 0 0 0 0 0

0 12 25 30 28 15 0

0 20 100 35 40 22 0

0 18 25 90 45 20 0

0 22 28 40 50 18 0

0 15 20 25 30 10 0

0 0 0 0 0 0 0

Input image

Given 5x5 pixel image (intensity values):


Here we use zero padding so that every pixel of the image can be filtered. Let the kernel (window size) be 3x3 for each pixel.
Let us compute the filtered value at the pixel with value 90.
Now, multiply all the values in the 3x3 neighbourhood:
∏ g(r,c) = 100 × 35 × 40 × 25 × 90 × 45 × 28 × 40 × 50 = 7.938e14

We know,

f̂ = (7.938e14)^(1/(3×3)) = (7.938e14)^(1/9)
f̂ = 45.25

0 0 0 0 0 0 0

0 0 0 0 0 0 0

0 0 30.87 51.78 42.25 0 0

0 0 27.32 45.25 27.13 0 0

0 0 32.45 28.93 30.46 0 0

0 0 0 0 0 0 0

0 0 0 0 0 0 0
Output image

Harmonic Mean Filter


●​ Definition: Each pixel is replaced with the harmonic mean of the pixel values in its
neighborhood.
● Formula:

f̂(x,y) = mn / Σ_{(r,c)∈S_xy} (1/g(r,c))

where:
●​ g(r,c) = input image pixels
●​ Sxy​= neighbourhood window of size m×n

(a) X-ray image of a circuit board; (b) result of filtering with the harmonic mean filter

●​ Effect:
o​ Effective at removing salt noise (white spots).
o​ Preserves details better than arithmetic mean filter.
o​ Not suitable for images corrupted by pepper noise (black spots).

0 0 0 0 0 0 0

0 12 25 30 28 15 0

0 20 100 35 40 22 0

0 18 25 90 45 20 0

0 22 28 40 50 18 0

0 15 20 25 30 10 0

0 0 0 0 0 0 0

Given 5x5 pixel image (intensity values):


Here we use zero padding so that every pixel of the image can be filtered. Let the kernel (window size) be 3x3 for each pixel.
Let us compute the filtered value at the pixel with value 90.
Now, calculate the harmonic mean of all the values in the 3x3 neighbourhood:

Σ_{(r,c)∈S_xy} 1/g(r,c) = 1/100 + 1/35 + 1/40 + 1/25 + 1/90 + 1/45 + 1/28 + 1/40 + 1/50 = 0.21761

f̂ = (3×3)/0.21761 = 9/0.21761

f̂ ≈ 41.36

0 0 0 0 0 0 0

0 25.2 35.2 26 45 18 0

0 17.13 60.63 29.3 34.56 23.66 0

0 22.8 30.13 41 40.6 22.44 0

0 25.3 24.93 35.42 29.75 16.3 0

0 16.33 20.2 25.3 24.56 13.23 0

0 0 0 0 0 0 0
Output image

Contraharmonic Mean Filter


●​ Definition: A generalized mean filter that reduces specific types of noise depending
on its parameter Q.
● Formula:

f̂(x,y) = Σ_{(r,c)∈S_xy} g(r,c)^(Q+1) / Σ_{(r,c)∈S_xy} g(r,c)^Q

where:

● g(r,c) = input image pixels
● Sxy = neighbourhood window of size m×n
● Q = order of the filter (a real number)

Effect:
●​ Q>0: Removes pepper noise (black spots).
●​ Q<0: Removes salt noise (white spots).
●​ Q=0: Reduces to Arithmetic Mean Filter.

0 0 0 0 0 0 0

0 12 25 30 28 15 0

0 20 100 35 40 22 0
0 18 25 90 45 20 0

0 22 28 40 50 18 0

0 15 20 25 30 10 0

0 0 0 0 0 0 0

Input image

Given 5x5 pixel image (intensity values):


Here we use zero padding so that every pixel of the image can be filtered. Let the kernel (window size) be 3x3 for each pixel.
Let us compute the filtered value at the pixel with value 90, for both Q = 1 and Q = −1.

Case 1: Q = 1

f̂ = Σ g(r,c)^(Q+1) / Σ g(r,c)^Q = Σ g(r,c)² / Σ g(r,c)

Σ g(r,c)² = 100² + 35² + 40² + 25² + 90² + 45² + 28² + 40² + 50² = 28459

Σ g(r,c) = 100 + 35 + 40 + 25 + 90 + 45 + 28 + 40 + 50 = 453

f̂ = 28459/453 ≈ 62.82
0 0 0 0 0 0 0

0 16.25 35.6 44.2 25.6 22.25 0

0 25.6 35.2 37.24 32.2 24.66 0

0 19.2 23.5 62.83 40.6 24.3 0

0 22 32.5 37.56 45.6 19.6 0

0 18.6 27.56 26.4 32 12.3 0

0 0 0 0 0 0 0

Output Image

Case 2: Q = −1

f̂ = Σ g(r,c)^(Q+1) / Σ g(r,c)^Q = Σ g(r,c)⁰ / Σ g(r,c)^(−1)

Σ g(r,c)⁰ = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 9

Σ g(r,c)^(−1) = 1/100 + 1/35 + 1/40 + 1/25 + 1/90 + 1/45 + 1/28 + 1/40 + 1/50 = 0.21761

f̂ = 9/0.21761 ≈ 41.36
0 0 0 0 0 0 0

0 12 25 30 28 15 0

0 20 100 35 40 22 0

0 18 25 41.35 45 20 0

0 22 28 40 50 18 0

0 15 20 25 30 10 0

0 0 0 0 0 0 0

​ ​ Output image
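Both cases can be checked with a short NumPy sketch of the contraharmonic filter (skipping zero-valued padding pixels inside each window is an added safeguard for negative Q, not something prescribed by the text):

import numpy as np

def contraharmonic(img, Q, size=3):
    pad = size // 2
    g = np.pad(img.astype(np.float64), pad, mode='constant')  # zero padding
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            w = g[x:x + size, y:y + size].ravel()
            w = w[w > 0]                      # skip zeros so g^Q stays finite for Q < 0
            out[x, y] = (w ** (Q + 1)).sum() / (w ** Q).sum()
    return out

img = np.array([[12, 25, 30, 28, 15], [20, 100, 35, 40, 22], [18, 25, 90, 45, 20],
                [22, 28, 40, 50, 18], [15, 20, 25, 30, 10]])
print(round(contraharmonic(img, Q=1)[2, 2], 2))   # 62.82 (the text's table shows 62.83)
print(round(contraharmonic(img, Q=-1)[2, 2], 2))  # 41.36 (the harmonic mean; text: 41.35)

Q = 0 reduces to the arithmetic mean filter, and Q = −1 reproduces the harmonic mean filter, which is why the two worked values agree.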

Median Filter
●​ Definition: A nonlinear filter that replaces each pixel with the median value of its
neighbourhood pixels.
●​ Formula: f(x,y)=median{g(r,c)}
●​ Process:
1.​ Collect all pixel values in the m×n neighbourhood.
2.​ Sort the values.
3.​ Replace the central pixel with the middle value.

(a) corrupted by salt-and-pepper noise; (b) result of one pass of the median filter

●​ Effect:
1.​ Very effective at removing Salt-and-Pepper noise.
2.​ Preserves edges much better than the Arithmetic Mean Filter.
3.​ Does not blur sharp details as much as averaging filters.

55 70 90 122 21
88 36 49 32 45
67 100 50 87 65
34 80 120 56 34
65 75 39 45 27
​ ​ ​ ​ ​
Input image

Given 5x5 pixel image (intensity values):


Let kernel (window size) =3x3 for each pixel value.
Let compute the filtered value at pixel with value 50.
The values in this neighbourhood are
[36,49,32,100,50,87,80,120,56]
Now sort the values
[32,36,49,50,56,80,87,100,120]
Now find the median value from the sorted list
The median value is 56
Replace the original value with 56.

0 49 90 36 0
55 67 40 50 32
36 67 56 50 34
65 67 75 50 34
0 39 45 34 0
​ ​ ​ ​ ​ Output image
Max Filter
●​ Definition: Replaces each pixel with the maximum value in its neighbourhood.
●​ Formula: f(x,y)=max{g(r,c)}

(a) corrupted by salt-and-pepper noise; (b) result of filtering with the max filter

●​ Effect:
o​ Removes pepper noise (black spots).
o​ Brightens image regions.
o​ Useful for highlighting bright details.
●​ Disadvantage: Can exaggerate bright regions and lose dark detail.
55 70 90 122 21
88 36 49 32 45
67 100 50 87 65
34 80 120 56 34
65 75 39 45 27
​ ​ ​ ​ ​ Input image

Given 5x5 pixel image (intensity values):


Let kernel (window size) =3x3 for each pixel value.
Let compute the filtered value at pixel with value 50.
The values in this neighbourhood are
[36,49,32,100,50,87,80,120,56]

Find the maximum value

f(x,y)=max{g(r,c)}

f(3,3)=max(36,49,32,100,50,87,80,120,56)
f(3,3)=120

80 90 122 122 122

100 10 122 122 122


100 120 120 120 87
100 120 120 120 87

80 120 120 120 56

​ ​ ​ ​ ​
Output image

Min Filter
●​ Definition: Replaces each pixel with the minimum value in its neighbourhood.
●​ Formula: f(x,y)=min{g(r,c)}

(a) corrupted by salt-and-pepper noise; (b) result of filtering with the min filter


●​ Effect:
o​ Removes salt noise (white spots).
o​ Darkens image regions.
o​ Useful for highlighting dark details.
●​ Disadvantage: Can exaggerate dark regions and lose bright detail.
55 70 90 122 21
88 36 49 32 45
67 100 50 87 65
34 80 120 56 34
65 75 39 45 27
​ ​ ​ ​ ​ Input image

Given 5x5 pixel image (intensity values):


Let kernel (window size) =3x3 for each pixel value.
Let compute the filtered value at pixel with value 50.
The values in this neighbourhood are

[36,49,32,100,50,87,80,120,56]

Find the minimum value

f(x,y)=min{g(r,c)}

f(3,3)=min(36,49,32,100,50,87,80,120,56)

f(3,3)=32

0 0 0 0 0
0 36 32 21 0
0 36 32 82 0
0 34 39 34 0
0 0 0 0 0
​ ​ ​ ​
​ Output image

Alpha-Trimmed Mean Filter


●​ Definition: A filter that removes a fixed number of the highest and lowest pixel
values in the neighbourhood, then averages the rest.
● Formula:

f̂(x,y) = (1/(mn − d)) Σ_{(r,c)∈S_xy} g_R(r,c)

where:
● g_R(r,c) = the remaining pixel values after removing the extremes
● d = number of pixels discarded (usually even, with d/2 lowest and d/2 highest removed)
● m×n = neighbourhood size

(a) corrupted by salt-and-pepper noise; (b) result of filtering with the alpha-trimmed mean filter
●​ Effect:
o​ For d=0 → it becomes the Arithmetic Mean Filter.
o​ For large d → it approaches the Median Filter.
o​ Good balance for removing salt-and-pepper noise while still reducing
Gaussian noise.
o​ Useful when an image has both Gaussian and impulse noise.
55 70 90 122 21
88 36 49 32 45
67 100 50 87 65
34 80 120 56 34
65 75 39 45 27
​ ​ ​ ​ ​
Input image

Given 5x5 pixel image (intensity values):

Let kernel (window size) =3x3 for each pixel value.


Let compute the filtered value at pixel with value 50.
The values in this neighbourhood are
[36,49,32,100,50,87,80,120,56]
Now sort the values
[32,36,49,50,56,80,87,100,120]
Remove the extremes (d = 4):
Remove d/2 = 4/2 = 2 smallest and 2 largest values.
Remove smallest: [32, 36]
Remove largest: [100, 120]

g_R(r,c) = {49, 50, 56, 80, 87}

f̂(x,y) = (1/(mn − d)) Σ g_R(r,c)

f̂(3,3) = (1/(3×3 − 4)) (49 + 50 + 56 + 80 + 87)

f̂(3,3) = (1/(9 − 4)) (322) = 322/5

f̂(3,3) = 64.4

Now replace the original pixel value in the output image.

45.3   74.5   85.6   98.6   22.13
76.5   55.4   42.6   36.13  46
24.5   105.26 64.4   74.8   72.56
25.3   70     112.3  62.2   37.15
67.3   75     43.2   42.8   26.4
Output image

Mid-Point Filter

The mid-point filter computes the midpoint between the maximum and minimum values in the area encompassed by the filter:

f̂(x,y) = (1/2) [ max_{(r,c)∈S_xy}{g(r,c)} + min_{(r,c)∈S_xy}{g(r,c)} ]

Effect:
●​ Good for reducing uniform noise.
●​ Not very effective for impulse noise (salt-and-pepper) or Gaussian noise.
●​ Preserves edges better than mean filtering because it only considers extremes.

55 70 90 122 21
88 36 49 32 45
67 100 50 87 65
34 80 120 56 34
65 75 39 45 27
​ ​ ​ ​ ​ Input image

Given 5x5 pixel image (intensity values):


Let kernel (window size) =3x3 for each pixel value.
Let compute the filtered value at pixel with value 50.

The values in this neighbourhood are

[36,49,32,100,50,87,80,120,56]

Now find the maximum and minimum values:

max{g(r,c)} = max{36, 49, 32, 100, 50, 87, 80, 120, 56} = 120

min{g(r,c)} = min{36, 49, 32, 100, 50, 87, 80, 120, 56} = 32

Midpoint:

f̂(x,y) = (1/2) [max{g(r,c)} + min{g(r,c)}]

f̂(3,3) = (1/2) [120 + 32] = (1/2) [152]

f̂(3,3) = 76
Replace the original pixel value in the output image.

65     73.2   86.4   112.5  21.4
71.2   34.4   42.35  31.2   40
60.2   86.5   76     82.3   60.8
32.23  70.3   106.2  45.6   45.3
52.8   73     32.3   40.1   23.9
Output image
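All of these order-statistic responses for the worked 3x3 neighbourhood can be verified with one small NumPy sketch (the helper name and structure are illustrative):

import numpy as np

def order_statistic(window, mode, d=4):
    s = np.sort(np.asarray(window).ravel())   # sorted neighbourhood values
    if mode == 'median':
        return np.median(s)
    if mode == 'max':
        return s[-1]
    if mode == 'min':
        return s[0]
    if mode == 'midpoint':
        return 0.5 * (s[0] + s[-1])
    if mode == 'alpha':                       # alpha-trimmed: drop d/2 lowest and d/2 highest
        return s[d // 2: len(s) - d // 2].mean()

w = [36, 49, 32, 100, 50, 87, 80, 120, 56]    # neighbourhood of the 50-pixel
for mode in ('median', 'max', 'min', 'midpoint', 'alpha'):
    print(mode, order_statistic(w, mode))
# prints median 56, max 120, min 32, midpoint 76, alpha (d=4) 64.4, matching the worked examples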

5.3 Adaptive Filter


●​ Definition: A filter that adjusts its behavior based on local image statistics (mean,
variance, etc.) instead of applying the same operation everywhere.
●​ Process:
o​ Calculates local mean and variance in a neighborhood.
o​ Compares with global image variance.
o​ Adapts smoothing strength:

▪​ Strong smoothing in noisy regions.

▪​ Weak smoothing in detailed regions (edges, textures).


(a) corrupted by additive noise; (b) result of filtering with an adaptive filter

Effect:
o​ Reduces Gaussian noise while preserving edges and details.
o​ More powerful than mean/median filters because it adapts locally.
o​ Used in applications like medical imaging, satellite imaging, and
photography.
●​ Disadvantage:
o​ Computationally expensive compared to simple filters.

Adaptive Median Filter


●​ Definition: An advanced nonlinear filter that adjusts the size of its neighbourhood
window to remove noise more effectively.
●​ Process:
1.​ Start with a small window (e.g., 3×3).
2.​ Check if the median is a valid (non-noise) pixel:
▪​ If yes → replace center pixel with the median.
▪​ If no → increase the window size.
3.​ Repeat until:
▪​ A valid median is found, or
▪​ The maximum window size is reached.
(a) corrupted by salt-and-pepper noise; (b) filtered by a median filter; (c) filtered by an adaptive median filter
●​ Effect:
1.​ Very effective for salt-and-pepper noise, even with high noise density.
2.​ Preserves edges and fine details better than a standard median filter.
3.​ Handles variable noise levels in different regions of the image.
●​ Disadvantage:
1.​ More computationally expensive than the standard median filter.
Summary of Filters:

Filter Name           Best For              Notes

Arithmetic mean       Gaussian noise        Simple average, but blurs the image
Geometric mean        Multiplicative noise  Less blurring
Harmonic mean         Salt noise            Reduces high values
Contraharmonic mean   Salt or pepper noise  Adjust Q to remove the specific noise
Median filter         Salt-and-pepper       Preserves edges while denoising
Max filter            Pepper noise          Emphasizes brighter regions
Min filter            Salt noise            Emphasizes dark regions
Midpoint filter       Uniform noise         Simple and fast for low noise
Adaptive Local Noise Reduction Filter
It’s a spatial filter that reduces noise while trying to preserve image details (like edges and
fine structures).​
Unlike simple mean/median filters, this one adapts based on local statistics of the image

Working Principle
The filter assumes that the noise is additive white Gaussian noise (AWGN) with known variance σ_η².

For each pixel (x,y):

1.​ Take a local neighborhood (e.g., m×n window).


2. Compute:
o Local mean: z̄_Sxy
o Local variance: σ²_Sxy
3. Replace the pixel using the formula:

f̂(x,y) = g(x,y) − (σ_η² / σ²_Sxy) [g(x,y) − z̄_Sxy]

Where:
● f̂(x,y) → restored pixel (noise-reduced output).
● g(x,y) → noisy pixel at location (x,y).
● σ_η² → noise variance (assumed known, or estimated from flat areas of the image).
● σ²_Sxy → local variance in the neighbourhood window around (x,y).
● z̄_Sxy → local mean in that same neighbourhood.

Intuition
1. If σ_η² is zero, the filter should return the value of g at (x,y). This is the trivial, zero-noise case in which g equals f at (x,y).
2. If the local variance σ²_Sxy is high relative to σ_η², the filter should return a value close to g at (x,y). A high local variance is typically associated with edges, and these should be preserved.
3. If the two variances are equal, we want the filter to return the arithmetic mean of the pixels in Sxy. This condition occurs when the local area has the same properties as the overall image, and local noise is then reduced by averaging.
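A direct NumPy sketch of this rule (capping the variance ratio at 1 is the usual practical safeguard for when the noise-variance estimate exceeds the local variance, and reflect padding is an illustrative choice):

import numpy as np

def adaptive_local_noise(g, noise_var, size=3):
    pad = size // 2
    p = np.pad(g.astype(np.float64), pad, mode='reflect')
    out = np.zeros(g.shape)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = p[x:x + size, y:y + size]
            local_mean = w.mean()            # z-bar over S_xy
            local_var = w.var()              # sigma^2 over S_xy
            # ratio 0 -> return g (case 1); ratio near 0 at edges (case 2);
            # ratio 1 -> return the local mean (case 3)
            ratio = 1.0 if local_var == 0 else min(noise_var / local_var, 1.0)
            out[x, y] = g[x, y] - ratio * (g[x, y] - local_mean)
    return out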

5.4 Periodic Noise Reduction Using Frequency Domain Filtering:


Notch Filtering
A Notch Filter is a frequency domain filter that removes specific frequency
components from an image while keeping others unchanged.

How It Works

●​ Take the Fourier Transform of the noisy image.


●​ Periodic noise shows up as distinct bright spikes (sinusoidal components) in
the frequency spectrum, away from the center.
●​ Apply a notch reject filter (NRF) to remove those spikes.
●​ Perform the inverse Fourier Transform to get the cleaned image.
Type of Notch Filter

1.​ Ideal Notch Reject Filter (INRF):

H_NR(u,v) = ∏_{k=1}^{Q} H_k(u,v) H_{−k}(u,v)

Removes frequencies within a sharp cutoff radius around the notch centers D_k.

2.​ Butterworth Notch Reject Filter (BNRF):

H(u,v) = ∏_{k=1}^{3} [ 1 / (1 + [D_0k/D_k(u,v)]^n) ] [ 1 / (1 + [D_0k/D_{−k}(u,v)]^n) ]

Has a smoother transition (controlled by the order n).

3.​ Gaussian Notch Reject Filter (GNRF):


H(u,v) = 1 − e^(−D_k²(u,v) / 2D_0k²) · e^(−D_{−k}²(u,v) / 2D_0k²)

Gives the smoothest result (no ringing effects).

Here:

● D_k(u,v), D_{−k}(u,v) = distances from the noise peaks (symmetric about the center).
● D_0k = cutoff radius.
● n = order of the filter.

Use Case
●​ Removing periodic noise in scanned images, satellite images, medical images, etc.
●​ Example: If a scanned photo has vertical stripes → those correspond to spikes in
Fourier domain → notch filter removes them.
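A minimal NumPy sketch that builds a Butterworth notch-reject transfer function on a centred frequency grid (the notch locations, cutoff d0, and order n are user-supplied; the tiny epsilon only avoids division by zero at the notch centres):

import numpy as np

def butterworth_notch_reject(shape, centers, d0=10.0, n=2):
    M, N = shape
    u = np.arange(M)[:, None] - M / 2            # centred frequency coordinates
    v = np.arange(N)[None, :] - N / 2
    H = np.ones(shape)
    for (uk, vk) in centers:                     # each noise spike and its symmetric mate
        Dk = np.sqrt((u - uk) ** 2 + (v - vk) ** 2)
        Dmk = np.sqrt((u + uk) ** 2 + (v + vk) ** 2)
        H *= 1.0 / (1.0 + (d0 / np.maximum(Dk, 1e-8)) ** n)
        H *= 1.0 / (1.0 + (d0 / np.maximum(Dmk, 1e-8)) ** n)
    return H

# Usage on a centred spectrum:
# F = np.fft.fftshift(np.fft.fft2(img))
# img_clean = np.fft.ifft2(np.fft.ifftshift(F * H)).real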
Optimum Notch Filtering
An optimum notch filter is a frequency domain filter specifically designed to remove
periodic noise in images while preserving as much useful detail as possible.

It improves upon standard notch reject filters (ideal, Butterworth, Gaussian) by optimizing the balance between noise removal and image detail preservation.

Linear Position-Invariant (LPI) Degradation
●​ When an image is captured, it may be blurred or degraded due to the imaging
system (motion blur, atmospheric turbulence, defocus, etc.).
●​ If the degradation process is:
1.​ Linear → output is a linear combination of inputs.
2.​ Position-invariant (space-invariant) → the degradation is the same
everywhere in the image (does not change with location).

Then the system is modeled as a convolution:

g(x,y)=h(x,y)∗f(x,y)+η(x,y)

Where:

●​ f(x,y)→ original (true) image


●​ h(x,y)→ point spread function (PSF) of the system
●​ g(x,y) → degraded (observed) image
● η(x,y) → additive noise

Fourier Domain Representation


Using the Convolution Theorem:

G(u,v)=H(u,v)⋅F(u,v)+N(u,v)

Where:

●​ F(u,v) = Fourier Transform of the original image


●​ H(u,v) = Transfer function (FT of PSF)
●​ G(u,v) = Fourier Transform of degraded image
●​ N(u,v) = noise spectrum

So, degradation in the spatial domain corresponds to multiplication in frequency domain.

5.5 Linear, Positive, Invariant Degradation:


When an image is degraded (blurred, noisy, etc.), we often assume a mathematical model of
degradation:

g(x,y)=h(x,y)∗f(x,y)+η(x,y)

Where:

●​ g(x,y) → observed degraded image


●​ f(x,y) → original (ideal) image
●​ h(x,y) → point spread function (PSF) of the system (degradation function)
●​ η(x,y) → additive noise
●​ ∗ → convolution operator

This model is called Linear Positive Invariant (LPI) degradation.

Explanation of Each Term

1.​ Linear
○​ The degradation process is assumed to be linear.
○​ Means: If two inputs are combined, their degraded outputs also combine
linearly.
○ Example: If f₁ and f₂ are two images, then

D{a f₁ + b f₂} = a D{f₁} + b D{f₂}

2.​ Positive
○​ Pixel intensities are non-negative.
○​ Both original image f(x,y) and degraded image g(x,y) should have values ≥ 0.
3.​ Invariant
○​ The degradation function h(x,y) does not change over space.
○​ Meaning: The blur or distortion is the same across the whole image (spatially
invariant).
○​ If the system is shift-invariant, convolution can be used to model degradation.

5.6 Estimation of Degradation Function


When an image is degraded (blurred, noisy, or distorted), we can estimate the degradation
function 𝐻(𝑢, 𝑣) to restore it. There are three main approaches:

1. Estimation by Image Observation (Blind Estimation)


Concept:
●​ Estimate 𝐻(𝑢, 𝑣) directly from the degraded image.

●​ Assumptions:
a.​ Linear system: output proportional to input.
b.​ Space-invariant: degradation is the same across the image.
Process:
1.​ Select a subimage (region of interest):
o​ Choose a small area with strong features (edges, contrast).
o​ Avoid flat or noisy regions.
2.​ Reduce noise influence:
o​ Pick regions with high signal-to-noise ratio.
3.​ Process the subimage:
o Let the degraded subimage be g_s(x,y).
o Let the estimated original be f̂_s(x,y).

4. Transform into the frequency domain:

H_s(u,v) = G_s(u,v) / F̂_s(u,v)

Where:

o G_s(u,v) = Fourier transform of the degraded subimage
o F̂_s(u,v) = Fourier transform of the estimated original subimage
o H_s(u,v) = estimated degradation function
5.​ Deduce the degradation function:
o​ Analyze the shape of 𝐻𝑠(𝑢, 𝑣).
o​ Example: Radial plot looks Gaussian → global 𝐻(𝑢, 𝑣) is Gaussian.
6.​ Extend from local to global:
o​ Apply 𝐻(𝑢, 𝑣) to the entire image.
7.​ Use in restoration:
o​ Apply Inverse Filtering or Wiener Filtering using 𝐻(𝑢, 𝑣).

2. Estimation by Image Experimentation


Concept:
●​ If the original system is available, we can experimentally determine 𝐻(𝑢, 𝑣).
●​ Principle: Linear, space-invariant systems are fully characterized by their impulse
response.
Process:
1.​ Replicate acquisition environment:
o​ Use same/similar equipment.
o​ Capture images under different settings to match degradation.
2.​ Generate an impulse input:
o​ Impulse (delta function) approximated by a tiny bright dot on dark
background.
3.​ Capture impulse response:
o​ Pass impulse through system → output is the impulse response:

ℎ(𝑥, 𝑦)

4. Frequency domain:

H(u,v) = G(u,v) / A

Where:

o G(u,v) = Fourier transform of the observed impulse image
o A = amplitude of the impulse
5.​ Use in restoration:
o​ Apply 𝐻(𝑢, 𝑣) in Inverse/Wiener filtering to restore image.

3. Estimation by Image Modeling


Concept:
●​ Mathematically model the degradation due to environmental or acquisition conditions.
●​ Once 𝐻(𝑢, 𝑣) is known, the original image can be restored.
Example 1: Atmospheric Turbulence (Hufnagel & Stanley Model)

H(u,v) = e^(−k(u² + v²)^(5/6))

● k = constant depending on the turbulence strength
● Small k → mild turbulence
● Large k → severe turbulence
● Used in astronomy: observing stars and planets

Example 2: Motion Blur (Uniform Linear Motion)

● Let exposure time = T, total displacement = a along the x-direction.

Spatial domain (blurred image):

g(x,y) = ∫₀^T f(x − (a/T)t, y) dt

Frequency domain:

G(u,v) = H(u,v) F(u,v)

Degradation function for uniform motion in the x-direction:

H(u,v) = (sin(πua) / (πua)) e^(−jπua) = sinc(ua) e^(−jπua)

● Explains streaking along the motion direction.


Summary of Key Equations:
1. Observation-based:

H_s(u,v) = G_s(u,v) / F̂_s(u,v)

2. Experiment-based:

H(u,v) = G(u,v) / A

3. Atmospheric turbulence:

H(u,v) = e^(−k(u² + v²)^(5/6))

4. Motion blur:

H(u,v) = sinc(ua) e^(−jπua),  G(u,v) = H(u,v) F(u,v)
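As a concrete instance of the modeling approach, here is a NumPy sketch of the motion-blur transfer function in the form given above, H(u,v) = sinc(ua) e^(−jπua) (the displacement a and the fftfreq-based frequency convention are illustrative choices):

import numpy as np

def motion_blur_H(M, N, a=0.1):
    u = np.fft.fftfreq(M)[:, None] * M                  # integer frequency indices along u
    H = np.sinc(u * a) * np.exp(-1j * np.pi * u * a)    # np.sinc(x) = sin(pi x)/(pi x)
    return np.broadcast_to(H, (M, N)).copy()            # blur acts along x only

# Degradation model in frequency: G = motion_blur_H(M, N) * F  (plus noise N(u,v))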

5.7 Inverse Filtering:


Inverse filtering is a restoration technique used in image processing to recover an original
image that has been blurred or degraded by a known filter (usually due to motion blur,
out-of-focus lens, or system imperfections).

The basic idea:

●​ An image gets degraded:

g(x,y) = h(x,y) ∗ f(x,y) + n(x,y)

where:

o​ g(x,y) → degraded image


o​ h(x,y) → degradation function (blurring filter)
o​ f(x,y) → original image
o​ n(x,y)→ noise
o ∗ → convolution
●​ In frequency domain:

G(u,v) = H(u,v)⋅F(u,v) + N(u,v)

●​ To restore, we try to invert the degradation:


F̂(u,v) = G(u,v) / H(u,v)

This is called inverse filtering.

Steps in Inverse Filtering


1. Take the Fourier Transform of the degraded image to obtain G(u,v).
2. Divide by the degradation function H(u,v).
3. Apply the Inverse Fourier Transform to get the restored image.

Example (No Noise)


Suppose the blur is caused by a motion filter H(u,v).

If there is no noise, dividing works perfectly:

F̂(u,v) = G(u,v) / H(u,v)

● We get back the original image.

Problem with Noise


When noise N(u,v) is present:

F̂(u,v) = F(u,v) + N(u,v) / H(u,v)

● If H(u,v) is very small at some frequencies, the division amplifies the noise.
● This makes inverse filtering very sensitive to noise.

That’s why in practice, Wiener filtering is often used instead, since it balances between
noise suppression and restoration.
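A NumPy sketch of inverse filtering with the common practical safeguard of leaving frequencies untouched where |H| is tiny (the threshold eps is an illustrative choice, not part of the pure method):

import numpy as np

def inverse_filter(g, H, eps=0.1):
    G = np.fft.fft2(g)
    small = np.abs(H) < eps                  # frequencies where division would explode
    F_hat = np.where(small, G, G / np.where(small, 1.0, H))
    return np.fft.ifft2(F_hat).real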

5.8 Minimum Mean Square Error (Wiener) Filtering


Wiener filtering is an optimal restoration technique used in image and signal processing.​
It improves over inverse filtering by taking both the degradation function and noise
statistics into account.
The goal:

●​ Minimize the mean square error (MSE) between the restored image and the original
image.
●​ Unlike inverse filtering, Wiener filtering balances de-blurring and noise
suppression.

Mathematical Formulation
The Wiener filter is defined as:

F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + S_η(u,v)/S_f(u,v) ) ] G(u,v)

Where:

● H(u,v) → degradation function (blur filter)
● H*(u,v) → complex conjugate of H(u,v)
● S_η(u,v) → power spectrum of the noise
● S_f(u,v) → power spectrum of the original image
● F̂(u,v) → restored image spectrum

Intuition
● Inverse filtering divides only by H(u,v).
● Wiener filtering adds the correction term S_η/S_f to prevent amplifying noise when H(u,v) is very small.
● It finds a compromise: restore image detail without boosting noise too much.

Special Cases
If noise = 0 → the Wiener filter reduces to the inverse filter:

F̂(u,v) = (1/H(u,v)) G(u,v)

If blur = 0 (no degradation, only noise) → Wiener filter acts as a noise smoother.
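A NumPy sketch of the Wiener filter using the common scalar approximation K ≈ S_η/S_f for when the power spectra are unknown (K is a tuning constant, an assumption rather than something prescribed above):

import numpy as np

def wiener_filter(g, H, K=0.01):
    G = np.fft.fft2(g)
    H2 = np.abs(H) ** 2
    W = np.conj(H) / (H2 + K)     # equals (1/H) * |H|^2 / (|H|^2 + K)
    return np.fft.ifft2(W * G).real

# K -> 0 recovers the inverse filter; larger K suppresses noise but smooths more.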

5.9 Constrained Least Squares (CLS) Filtering


CLS filtering is an image restoration method that reduces blur while controlling the
amplification of noise by introducing a smoothness constraint.

It is based on the idea:

●​ Simply minimizing error (like inverse filtering) amplifies noise.


●​ Adding a constraint (regularization) helps produce a more stable and smooth
restored image.
Mathematical Formulation
We want to minimize the smoothness criterion

C = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [∇²f(x,y)]²

subject to the constraint

‖g − Hf̂‖² = ‖η‖²

Where:

● g → degraded image
● H → degradation function
● f → original image (to be estimated)
● ∇² → Laplacian operator (measures the smoothness of the image)
● γ → regularization parameter (controls the trade-off between fidelity and smoothness)

Frequency Domain Solution

In the frequency domain, the CLS filter is:

F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + γ |P(u,v)|² ) ] G(u,v)

Where:

● H(u,v) → Fourier transform of the blur function
● P(u,v) → Fourier transform of the Laplacian operator
● γ → regularization parameter

This looks similar to Wiener filtering, but instead of noise statistics, it uses a smoothness
penalty.

Intuition
● Inverse filtering: divides only by H; unstable.
● Wiener filtering: balances noise vs. blur using noise and signal statistics.
● CLS filtering: balances noise vs. blur using a smoothness constraint (no need for noise statistics).

5.10 The geometric-mean filter:


In frequency domain the degraded image is
𝐺(𝑢, 𝑣) = 𝐻(𝑢, 𝑣)𝐹(𝑢, 𝑣) + 𝑁(𝑢, 𝑣),
where 𝐻 is the degradation (PSF in frequency), 𝐹 is the true original spectrum, and 𝑁 is
additive noise.
The geometric-mean filter is written as a geometric combination of two transfer functions (inverse and parametric Wiener). One convenient algebraic form is:

F̂(u,v) = [ H*(u,v) / |H(u,v)|² ]^a [ H*(u,v) S_F(u,v) / ( |H(u,v)|² S_F(u,v) + b ) ]^(1−a) G(u,v),

where

● H* = complex conjugate of H,
● |H|² = H H*,
● S_F(u,v) = power spectral density (PSD) of the original image,
● a ∈ [0, 1] and b ≥ 0 are real constants (or can be made frequency dependent),
● G(u,v) is the observed degraded image spectrum.

The two bracketed terms are raised to the powers a and 1 − a; hence the “geometric mean” name when a = 1/2.

Algebraic simplification
Multiply the two brackets; note both contain H*, so the combined exponent of H* is a + (1 − a) = 1. The filter therefore simplifies to:

F̂(u,v) = H*(u,v) G(u,v) |H(u,v)|^(−2a) ( S_F(u,v) / ( |H(u,v)|² S_F(u,v) + b ) )^(1−a).

This form makes the behavior easier to read: there is always an H*G factor (like a matched filter), modified by two amplitude factors:

● |H|^(−2a) (pushes toward inverse filtering as a → 1),
● ( S_F / (|H|² S_F + b) )^(1−a) (pushes toward Wiener behavior as a → 0).

Special / limiting cases — sanity checks


● a = 1: bracket 1 only ⇒ inverse filter:

F̂ = (H*/|H|²) G = G/H   (classic inverse filter).

Very aggressive; amplifies noise where |H| is small.

● a = 0: bracket 2 only ⇒ parametric Wiener:

F̂ = ( H* S_F / (|H|² S_F + b) ) G.

If b is chosen as the noise PSD S_N (or b = S_N/S_F, depending on convention), this reduces to the standard Wiener filter.

● a = 1/2, b = 1: both factors equally weighted, giving the geometric mean of the inverse and parametric Wiener filters (called spectrum equalization when b = 1). Intuitively this balances deblurring and noise suppression.

Intuition: what 𝑎 and 𝑏 do


●​ 𝑎 controls the trade-off between deblurring (inverse) vs. denoising (Wiener).

o​ 𝑎 → 1 → stronger deblurring, but more noise amplification.

o​ 𝑎 → 0 → more smoothing/regularization, less noise amplification.

●​ 𝑏 controls the strength of the noise term in the Wiener-like factor. Larger 𝑏 → more
smoothing (more like low-pass); smaller 𝑏 → more aggressive restoration (less
smoothing).

●​ Both can be made frequency dependent, i.e. 𝑎(𝑢, 𝑣), 𝑏(𝑢, 𝑣), for
spatially/frequency-varying behavior.
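A NumPy sketch of the simplified form derived above; all fractional powers act on positive real quantities, and the small floor on |H|² is a numerical safeguard not present in the formula:

import numpy as np

def geometric_mean_filter(G, H, SF, a=0.5, b=1.0):
    H2 = np.maximum(np.abs(H) ** 2, 1e-12)          # guard against |H| = 0
    return np.conj(H) * G * H2 ** (-a) * (SF / (H2 * SF + b)) ** (1 - a)

# a = 1 acts as the inverse filter, a = 0 as the parametric Wiener filter,
# and a = 0.5 with b = 1 gives the geometric mean (spectrum equalization).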

5.11 Image Reconstruction from Projections (CT)


🔹 Concept
●​ CT (Computed Tomography): reconstructs a 2-D image from multiple 1-D
projections (X-ray absorption profiles).
●​ Each projection = set of raysums (line integrals of absorption along rays).
●​ Backprojection: process of smearing a projection back over the image plane along
the same direction it was captured.
●​ With multiple projections from different angles → overlapping regions reinforce the
true object → reconstruction emerges.
Issue: Backprojection alone causes blurring/halo (star-shaped artifacts). Needs filtering
(→ filtered backprojection).
🔹 Process (Step-by-Step)
1.​ Projection
o​ X-rays pass through object → detectors measure absorption profile.
o​ Mathematically:

𝑝θ(𝑡) = ∫ 𝑓(𝑥, 𝑦) δ(𝑡 − 𝑥𝑐𝑜𝑠θ − 𝑦𝑠𝑖𝑛θ) 𝑑𝑥 𝑑𝑦
−∞
​ where 𝑓(𝑥, 𝑦) = object, θ = projection angle, 𝑡 = detector position.

2.​ Backprojection
o​ Spread projection back across image plane:
f_BP(x,y) = ∫₀^π p_θ(x cosθ + y sinθ) dθ

o​ Result approximates the object, but blurred (halo effect).


3.​ Multiple Projections
o​ Few projections → poor reconstruction (star-like blur).
o​ Many projections (e.g., 32) → object shape clearer, halo more circular.
o​ Projections 180° apart are mirror images → only need 0°–180° range.

🔹 Why Halo Appears


●​ Backprojection smears projection values equally → spreads energy across entire
image.
●​ Regions of intersection (true object) become brighter.
●​ But smearing creates low-frequency blur, seen as a halo around the object.

🔹 Practical Takeaways
●​ Backprojection alone is not enough → filtered backprojection (FBP) is required to
remove halo and sharpen.
●​ Number of projections matters: more = better approximation.
●​ Symmetry: only half-circle angles needed (0°–180°).

🔹 Key Math (Radon Transform)


● Projection (Radon Transform):

p_θ(t) = ∫∫ f(x,y) δ(t − x cosθ − y sinθ) dx dy

● Backprojection:

f_BP(x,y) = ∫₀^π p_θ(x cosθ + y sinθ) dθ

Projections and the Radon Transform


🔹 Idea
●​ In CT, each projection is a set of line integrals (raysums) of the object’s absorption
function.
●​ Collecting projections at multiple angles → we can reconstruct the 2D image.
●​ Mathematical backbone: the Radon Transform.

🔹 Line Representation
A line in the plane can be represented in normal form (instead of slope-intercept):
𝑥𝑐𝑜𝑠θ + 𝑦𝑠𝑖𝑛θ = 𝑟

●​ 𝑥, 𝑦: coordinates in the image plane

●​ θ: angle of the projection (rotation of X-ray beam)

●​ 𝑟: perpendicular distance of line from origin

Every projection corresponds to summing values of the object function f(x,y) along such lines.

🔹 Radon Transform (Continuous)

The Radon Transform of an image f(x,y) is defined as:

g(r,θ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x,y) δ(x cosθ + y sinθ − r) dx dy

●​ Formula name: Radon Transform


●​ Symbols:

o​ 𝑓(𝑥, 𝑦): object/image function (absorption at point (𝑥, 𝑦))

o​ 𝑔(𝑟, θ): projection value at angle θ and distance 𝑟

o​ δ(·): Dirac delta (picks only points lying exactly on the line)

o​ 𝑥, 𝑦: spatial coordinates

o​ 𝑟, θ: projection coordinates (distance, angle)

This formula says: sum (integrate) all pixel values of f(x,y) that lie on the line x cosθ + y sinθ = r.

🔹 Radon Transform (Discrete)

For a digital image (size M × N):

g(r,θ) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x,y) δ(x cosθ + y sinθ − r)

●​ Same concept → now pixels are summed instead of continuous integration.

●​ Each projection is obtained by scanning across all 𝑟 at a fixed θ.


🔹 Example: Circular Disk
Suppose we have a circular object of radius R, centered at the origin:

f(x,y) = A  if x² + y² ≤ R²,  and  0 otherwise

●​ 𝐴: absorption constant inside the circle

●​ 𝑅: radius

Because of circular symmetry, projections are identical for all angles θ. So, it is enough to compute for θ = 0.

Projection:

g(r) = ∫_{−√(R²−r²)}^{√(R²−r²)} A dy

Solution:

g(r) = 2A√(R² − r²)  for |r| ≤ R,  and  0 for |r| > R

●​ Symbols:

o​ 𝑔(𝑟): projection value at distance 𝑟

o​ 𝑅: radius of disk

o​ 𝐴: constant absorption inside disk

Interpretation: the projection is the chord length through the circle multiplied by 𝐴.

🔹 How it Works (Intuition)


●​ Each projection = “shadow” of object along one direction.
●​ Radon Transform = mathematical tool to describe all shadows at all angles.
●​ Symmetric objects (like circles) → projections look the same in every direction.
●​ Later, inverse Radon Transform is used to reconstruct the full image.

Example: Radon Transform of a Circular Disk (worked solution)

Given: A circular disk centered at the origin with radius R and constant intensity A:

f(x,y) = A  if x² + y² ≤ R²,  and  0 otherwise.
Goal: compute the projection (Radon transform) 𝑔(𝑟, θ). Because the disk is rotationally
symmetric, 𝑔 will not depend on θ. So we compute 𝑔(𝑟) for θ = 0 and that result holds for
all angles.

1. Start from the Radon transform (continuous)

g(r,θ) = ∬_{ℝ²} f(x,y) δ(x cosθ + y sinθ − r) dx dy.

For θ = 0, cosθ = 1 and sinθ = 0, so this becomes

g(r) = ∬ f(x,y) δ(x − r) dx dy.

The delta δ(x − r) forces x = r. Integrate out x:

g(r) = ∫_{−∞}^{∞} f(r,y) dy.

Interpretation: for fixed 𝑟 we sum (integrate) the values of 𝑓 along the vertical line 𝑥 = 𝑟.

2. Determine the integration limits (where the line intersects the disk)
Points (r,y) lie inside the disk iff

r² + y² ≤ R²  ⇒  y² ≤ R² − r².

So the permitted y-values satisfy

y ∈ [ −√(R² − r²), √(R² − r²) ].

If |r| > R, the line misses the disk and the projection is zero.

3. Evaluate the integral

Inside the disk, f(r,y) = A. Thus, for |r| ≤ R,

g(r) = ∫_{−√(R²−r²)}^{√(R²−r²)} A dy = 2A√(R² − r²).

For |r| > R, g(r) = 0.

So the final result is

g(r) = 2A√(R² − r²)  for |r| ≤ R,  and  0 for |r| > R.

4. Quick consequences & intuition

● g(0) = 2AR: the maximum projection (through the disk center) equals the chord length 2R times the amplitude A.
● g(r) is semicircular in shape vs. r: it is proportional to √(R² − r²).
● Because the object is rotationally symmetric, g(r,θ) is the same for every angle θ; i.e. g(r,θ) = g(r).

5. Discrete (digital) note — how to compute numerically


For a digital image 𝑓[𝑥, 𝑦] (pixel grid), a projection at angle θ is approximated by summing
pixels intersecting each line 𝑥𝑐𝑜𝑠θ + 𝑦𝑠𝑖𝑛θ = 𝑟. In practice:
● Loop over detector positions r_j;

● For each pixel, compute its signed distance d = x cosθ + y sinθ;

● Accumulate the pixel value into the bin for the nearest r_j (or use interpolation / the Siddon algorithm for accuracy);

● This yields a discrete sinogram g[r_j, θ_k].
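A quick NumPy check of the disk result: the θ = 0 projection of a discretised disk (column sums) matches 2A√(R² − r²) up to roughly one pixel of discretisation error (the grid size and radius are illustrative):

import numpy as np

A, R, n = 1.0, 30.0, 101
y, x = np.mgrid[-(n // 2): n // 2 + 1, -(n // 2): n // 2 + 1]
disk = np.where(x ** 2 + y ** 2 <= R ** 2, A, 0.0)      # f(x,y)

g0 = disk.sum(axis=0)                                   # projection at theta = 0
r = x[0]                                                # detector coordinate
analytic = np.where(np.abs(r) <= R,
                    2 * A * np.sqrt(np.maximum(R ** 2 - r ** 2, 0.0)), 0.0)
print(np.max(np.abs(g0 - analytic)))                    # small (about one pixel)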
BACKPROJECTION
1. Short statement (what backprojection does)
A backprojection takes each 1-D projection 𝑔(𝑟, θ) and “smears” it back across the image
along the line 𝑥𝑐𝑜𝑠θ + 𝑦𝑠𝑖𝑛θ = 𝑟. Summing these smeared projections over angles
reconstructs an image — this sum is the laminogram (a blurred image).

2. Key formulas (continuous)


Single-angle backprojection (contribution at angle θ)
f_θ(x,y) = g(x cosθ + y sinθ, θ)

This says: for each image point (𝑥, 𝑦), take the projection value at detector coordinate
𝑟 = 𝑥𝑐𝑜𝑠θ + 𝑦𝑠𝑖𝑛θ.

Sum (integral) over all angles → backprojected image


f_BP(x,y) = ∫₀^π f_θ(x,y) dθ = ∫₀^π g(x cosθ + y sinθ, θ) dθ.

Discrete version (angles θ_k):

f_BP(x,y) ≈ Σ_{k=0}^{K−1} g(x cosθ_k + y sinθ_k, θ_k) Δθ.
Symbols
●​ 𝑓(𝑥, 𝑦): true image (unknown)

●​ 𝑔(𝑟, θ): Radon transform / projection at distance 𝑟 and angle θ


●​ 𝑟 = 𝑥𝑐𝑜𝑠θ + 𝑦𝑠𝑖𝑛θ: detector coordinate corresponding to point (𝑥, 𝑦) for angle θ

●​ 𝑓θ(𝑥, 𝑦): backprojection from single angle θ

●​ 𝑓𝐵𝑃(𝑥, 𝑦): laminogram (sum of backprojections)

●​ ∆θ: angular increment (discrete case)

3. Why plain backprojection blurs (intuition + math)


●​ Backprojection is basically a smoothing operator: each projection value is copied
along a whole line — that spreads energy and acts like a low-pass filter.
●​ Mathematically, backprojection corresponds to convolution of the true image with a
point spread function that behaves like 1/ρ (in polar frequency), which emphasizes
low frequencies and produces a halo/blur (star-like artifacts when few angles).
Short: backprojection reconstructs geometry but overweights low frequencies → blur.

4. How to fix it — Filtered Backprojection (FBP) (core idea)


Use the Central Slice Theorem (aka Projection Slice Theorem) + a frequency-domain
correction:

Central Slice Theorem


Let G_θ(ω) be the 1-D Fourier transform (FT) of the projection g(r,θ) with respect to r:

G_θ(ω) = ∫_{−∞}^{∞} g(r,θ) e^(−jωr) dr.

Then the theorem states:


G_θ(ω) = F(ω cosθ, ω sinθ),

where F(k_x, k_y) is the 2-D FT of the image f(x,y). In words: the 1-D FT of the projection at angle θ equals the 2-D FT of the image sampled along the radial line at angle θ in the frequency domain.

From that to reconstruction


1. Reconstruct F(k_x, k_y) by combining the radial samples G_θ(ω) for all θ.

2. Invert the 2-D FT to get f(x,y). Doing the algebra gives a reconstruction formula that contains a factor |ω| in the 1-D frequency domain (this is the ramp).

Filtered backprojection formula (continuous)


Filter each projection in frequency by the ramp |ω|, then backproject:

f(x,y) = ∫₀^π [ (g(·,θ) * h)(x cosθ + y sinθ) ] dθ

where:
● h(r) is the filter whose frequency response is H(ω) = |ω| (the ramp).
● The convolution * is in the detector (r) variable.

Equivalently, the frequency-domain expression:

f(x,y) = ∫₀^π ∫_{−∞}^{∞} |ω| G_θ(ω) e^(jω(x cosθ + y sinθ)) dω dθ.

Takeaway: Filtering multiplies high-frequency components by |ω| to compensate for the


1/|ω| weighting introduced by backprojection, thus restoring correct amplitude across
frequencies.

5. Discrete implementation (practical algorithm)

1. Collect the sinogram g[r_j, θ_k] (discrete r, θ).

2. For each angle θ_k:
a. Compute the 1-D FFT along r: G_θk[ω].
b. Multiply by the filter H[ω] (ramp or windowed ramp): Q_θk[ω] = H[ω] G_θk[ω].
c. Inverse-FFT to get the filtered projection q_θk[r].

3. Backproject: for each image pixel (x,y), sum the contributions q_θk[r = x cosθ_k + y sinθ_k] over all k (use interpolation because r may not land exactly on discrete bins). Multiply by Δθ.

4. Normalize/clamp intensities.
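A compact NumPy sketch of steps 1-4 above, with a Ram-Lak ramp and linear interpolation (the square image grid, the fftfreq-based ramp, and the clipping at the detector edges are implementation choices):

import numpy as np

def filtered_backprojection(sinogram, thetas):
    # sinogram[j, k] = g(r_j, theta_k); returns an n_r x n_r image
    n_r, n_th = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_r))[:, None]                  # H[omega] = |omega|
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp, axis=0).real

    c = n_r // 2
    y, x = np.mgrid[-c:n_r - c, -c:n_r - c]                      # image grid
    img = np.zeros((n_r, n_r))
    for k, th in enumerate(thetas):
        r = x * np.cos(th) + y * np.sin(th) + c                  # detector coordinate
        r0 = np.clip(r.astype(int), 0, n_r - 2)
        w = np.clip(r - r0, 0.0, 1.0)                            # linear interpolation
        img += (1 - w) * filtered[r0, k] + w * filtered[r0 + 1, k]
    return img * (np.pi / len(thetas))                           # Delta-theta weighting

# Usage: thetas = np.linspace(0, np.pi, 180, endpoint=False)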


Filters used in practice
●​ Ram-Lak (ramp): 𝐻(ω) = |ω|. Sharp, good resolution, amplifies noise.

●​ Shepp-Logan: ramp × sinc window; reduces noise at highest freq.


●​ Hamming / Hann: smoother windows to reduce ringing/noise.​
Pick based on noise vs resolution tradeoff.

6. Practical notes & tips


●​ Angular sampling: need enough angles to avoid angular undersampling artifacts (star
lines). Typical CT uses hundreds of angles.

●​ Interpolation: for backprojection use linear interpolation along 𝑟 when mapping


pixel (𝑥, 𝑦) to detector bins.

●​ Zero-padding & windowing in FFT reduces wrap-around and ringing.


●​ Noise amplification: ramp boosts high frequencies; combine ramp with smooth
window to avoid blowing up noise.
●​ Limited-angle problems: if angles don’t span 0–π well, reconstruction is degraded
(elongation/artifacts).

Fourier-Slice Theorem (Projection-Slice Theorem)


Core idea:​
The 1D Fourier Transform of a projection = a slice of the 2D Fourier Transform of the
object.
Math steps:
1.​ Projection at angle θ:

g_θ(r) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x,y) δ(r − x cosθ − y sinθ) dx dy

2. Take the 1D Fourier Transform with respect to r:

G_θ(v) = ∫ g_θ(r) e^(−j2πvr) dr

3. Substituting and simplifying gives:

G_θ(v) = F(v cosθ, v sinθ)

Means: projection FT = line (slice) in 2D FT space.
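The θ = 0 case of the theorem can be verified directly in NumPy: the 1-D FFT of a projection (sums along one axis) equals the corresponding axis slice of the 2-D FFT:

import numpy as np

f = np.random.rand(64, 64)          # any test image f(x,y)
p = f.sum(axis=0)                   # projection at theta = 0 (line integrals along one axis)
lhs = np.fft.fft(p)                 # 1-D FT of the projection, G_0(v)
rhs = np.fft.fft2(f)[0, :]          # matching slice of the 2-D FT of f
print(np.allclose(lhs, rhs))        # True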

Image Reconstruction via Filtered Backprojection


If we want 𝑓(𝑥, 𝑦), we don’t directly invert 2D FT (too expensive).​
Instead:
1.​ Compute 1D FT of each projection:

𝐺θ(𝑣)

2. Multiply by the ramp filter |v| (aka Ram-Lak), together with an optional window:

G̃_θ(v) = |v| G_θ(v) · W(v)

(W(v) = window, e.g. Hamming/Hann, to reduce ringing).

3. Inverse FT → filtered projection:

g̃_θ(r) = ∫ G̃_θ(v) e^(j2πvr) dv

4. Backproject all filtered projections (sum over θ):

f(x,y) = ∫₀^π g̃_θ(x cosθ + y sinθ) dθ

Key Notes
●​ Ramp filter: boosts high frequencies → fixes blur from plain backprojection.
●​ Windowing (Hamming/Hann): reduces ringing but adds slight blur.
●​ Sampling:
o​ Enough rays (per projection) = good radial sampling.
o​ Enough angles (θ steps) = avoids streak artifacts.

RECONSTRUCTION USING FAN-BEAM FILTERED BACKPROJECTIONS

Fan-Beam Geometry
●​ Source rotates on a circle around the object.
●​ Detectors arranged along a circular arc.
●​ Angles:

o​ 𝑏 = source angle (wrt y-axis)

o​ 𝑎 = detector angle (wrt center ray)

● A fan-beam ray can be converted to parallel-beam parameters:

u = b + a  and  r = D sin a

(D = distance from the source to the rotation center)

Fan-Beam Reconstruction Formula

Start with the parallel-beam filtered backprojection formula:

f(x,y) = ∫₀^π g̃_θ(x cosθ + y sinθ) dθ

Convert to fan-beam coordinates (a, b):

f(x,y) = ∫_{−a_m}^{a_m} ∫₀^{2π} p(a,b) cos(γ) db da

● p(a,b) = fan-beam projection
● a_m = maximum detector angle covering the ROI
● γ = angle between the ray and the center ray

Step-by-Step (Math)
1. Convert fan-ray → parallel-ray:

r = D sin a,  u = b + a

2. Weighting factor:

● Fan-beam rays are longer further from the source → multiply by the weight:

w = 1/(2R)

(R = distance from the source to the pixel being reconstructed)

3. Convolution for filtering:

● Use a ramp-derived filter (as in the parallel-beam case), adapted to the fan geometry:

h(a) = (1/2) (a / sin a)²

● Weight the projections before filtering:

q(a,b) = p(a,b) cos a

● Reconstruction formula (convolution form):

f(x,y) = ∫_{−a_m}^{a_m} ∫₀^{2π} [ q(a,b) * h(a) ] db da

Key Points
● Fan-beam = the more realistic geometry in modern CT.

● Convert fan → parallel using u = b + a, r = D sin a.

● Use weighted convolution → corrects for varying ray lengths.

● Ramp filter + windowing (Hamming/Hann) still applies.
● Backproject all filtered projections over the full circle.
