Spatial filtering modifies an image by replacing the value of each pixel by a function of the
values of the pixel and its neighbors. If the operation performed on the image pixels is
linear, then the filter is called a linear spatial filter. Otherwise, the filter is a nonlinear
spatial filter.
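As a concrete contrast (a minimal sketch; SciPy's ndimage filters are used purely for illustration), a mean filter is linear because each output pixel is a weighted sum of neighborhood values, while a median filter is nonlinear because an order statistic is not a weighted sum:

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (64, 64)).astype(float)

mean_filtered   = ndimage.uniform_filter(img, size=3)   # linear: average (weighted sum) of the 3x3 neighborhood
median_filtered = ndimage.median_filter(img, size=3)    # nonlinear: median of the 3x3 neighborhood
```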
✔ Any system that satisfies linearity and shift invariance is a linear shift-invariant system (LSIS).
✔ An LSIS is completely characterized by its impulse response: its output is the convolution of the input with that impulse response.
✔ Correlation consists of moving the center of a kernel over an image, and computing the sum of products at each
location.
✔ The mechanics of spatial convolution are the same, except that the correlation kernel is rotated by 180°.
✔ Thus, when the values of a kernel are symmetric about its center, correlation and convolution yield the same result.
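A small sketch (assuming SciPy's ndimage routines) that makes the 180° relationship explicit: convolution equals correlation with the kernel rotated by 180°, and the two coincide when the kernel is symmetric about its center:

```python
import numpy as np
from scipy import ndimage

img  = np.arange(25, dtype=float).reshape(5, 5)
asym = np.arange(9, dtype=float).reshape(3, 3)        # asymmetric kernel
sym  = np.array([[1., 2., 1.],
                 [2., 4., 2.],
                 [1., 2., 1.]])                       # symmetric about its center

# Convolution is correlation with the kernel rotated by 180 degrees.
print(np.allclose(ndimage.convolve(img, asym),
                  ndimage.correlate(img, np.rot90(asym, 2))))   # True

# For a symmetric kernel, correlation and convolution give the same result.
print(np.allclose(ndimage.correlate(img, sym),
                  ndimage.convolve(img, sym)))                  # True
```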
How do you deal with the edge values?
Now the result looks more like a natural image, with no blocky artifacts.
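The slide does not name a specific border strategy here, so this is a sketch (NumPy) of the common options: zero padding, replicating the border values, and mirroring, where the latter two avoid the dark frame and blocky artifacts that zero padding can introduce:

```python
import numpy as np

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])

# Pad by one pixel on each side so a 3x3 kernel can be centered on border pixels.
zero_pad      = np.pad(img, 1, mode='constant', constant_values=0)  # zeros outside the image
replicate_pad = np.pad(img, 1, mode='edge')                         # repeat the nearest border value
mirror_pad    = np.pad(img, 1, mode='reflect')                      # mirror the image across its border
```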
The Gaussian function in the discrete domain: the maximum value is at the center, and the values drop off as you move away from the center.
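A minimal sketch (the kernel size and sigma are illustrative) of sampling the Gaussian on a discrete grid; the largest coefficient sits at the center and the values fall off with distance from it:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sample a 2-D Gaussian on a size x size integer grid centered at the middle coefficient."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                      # normalize so the coefficients sum to 1

k = gaussian_kernel(5, 1.0)
print(k.max() == k[2, 2])                   # True: the peak is the center coefficient
```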
So, for larger masks, the value of separable kernels lies in their computational efficiency: the image can be convolved with the row and column factors separately instead of with the full 2-D kernel.
We know from matrix theory that a matrix resulting from the product of a column vector and a
row vector always has a rank of 1. By definition, a separable kernel is formed by such a product.
Therefore, to determine if a kernel is separable, all we have to do is determine if its rank is 1.
If a matrix has rank 1, it means:
The matrix contains only one linearly independent row or column, and all other rows or
columns are scalar multiples of that one.
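A sketch (NumPy; the 3×3 kernel is just an example) of the rank-1 test and of recovering the column and row factors, here via the SVD:

```python
import numpy as np

kernel = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]])            # outer product of [1, 2, 1] with itself

print(np.linalg.matrix_rank(kernel))         # 1 -> the kernel is separable

# Recover a column vector c and a row vector r with kernel = outer(c, r).
U, S, Vt = np.linalg.svd(kernel)
c = U[:, 0] * np.sqrt(S[0])
r = Vt[0, :] * np.sqrt(S[0])
print(np.allclose(kernel, np.outer(c, r)))   # True
```

Convolving with c and then with r costs far fewer multiplications per pixel than convolving with the full 2-D kernel, which is where the efficiency gain noted above comes from.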
✔ Linear spatial filtering consists of convolving an image with a filter kernel.
✔ Convolving a smoothing kernel with an image blurs the image, with the degree of blurring determined by the size of the kernel and the values of its coefficients.
✔ The lowpass filter blurred the image and its noise reduction performance was poor.
✔ The superiority in all respects of median over lowpass filtering in this case is evident.
1. Choose a Window (Kernel):
A small window of size k×k (e.g., 3×3, 5×5) is moved across the image pixel by pixel.
2. Extract Neighbors:
For each pixel, extract the pixel values covered by the window. These are the neighboring values, including the current pixel.
3. Sort the Values:
Sort these k×k values in ascending order.
4. Find the Median:
The median value (the middle value in the sorted list) is chosen.
5. Replace the Center Pixel:
Replace the central pixel of the window with this median value.
6. Slide the Window:
Move the window to the next pixel and repeat the process.

Worked example, 3×3 window:
[100, 102, 255]
[ 98,  96, 103]
[250, 255, 105]
Sorted list:
[96, 98, 100, 102, 103, 105, 250, 255, 255]
Median = 103 → the center pixel (96) is replaced with 103.
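A direct sketch of the steps above in plain NumPy (borders are left unchanged for simplicity), applied to the 3×3 window from the example:

```python
import numpy as np

def median_filter(img, k=3):
    """Median-filter the interior of img with a k x k window; border pixels are left as-is."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1]   # step 2: extract neighbors
            out[i, j] = np.median(window)                    # steps 3-5: sort, take the middle, replace
    return out

window = np.array([[100, 102, 255],
                   [ 98,  96, 103],
                   [250, 255, 105]])
print(np.median(window))    # 103.0 -> the center value 96 is replaced by 103
```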
✔ Edges in digital images often are ramp-like transitions in intensity, in which case the first derivative of the image
would result in thick edges because the derivative is nonzero along a ramp.
✔ On the other hand, the second derivative would produce a double edge one pixel thick, separated by zeros.
✔ From this, we conclude that the second derivative enhances fine detail much better than the first derivative, a
property ideally suited for sharpening images.
✔ Also, second derivatives require fewer operations to implement than first derivatives
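A sketch of second-derivative sharpening (assuming the sharpening rule referenced as Eq. (3-54) in the caption below has the form g = f + c * ∇²f, and using the standard 4-neighbor Laplacian kernel with c = −1):

```python
import numpy as np
from scipy import ndimage

laplacian_kernel = np.array([[0.,  1., 0.],
                             [1., -4., 1.],
                             [0.,  1., 0.]])     # 4-neighbor discrete Laplacian

def sharpen(img, c=-1.0):
    """g = f + c * Laplacian(f); with this kernel, c = -1 adds the detected detail back to f."""
    lap = ndimage.convolve(img.astype(float), laplacian_kernel, mode='nearest')
    return np.clip(img + c * lap, 0, 255)
```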
FIGURE 3.46 (a) Blurred image of the North Pole of the moon. (b) Laplacian image obtained using the kernel in Fig. 3.45(a). (c) Image sharpened using Eq. (3-54) with c = −1. (d) Image sharpened using the same procedure, but with the kernel in Fig. 3.45(b). (Original image courtesy of NASA.)
We note in the dark image that the most populated histogram bins are concentrated on the lower (dark) end of the intensity scale.
Similarly, the most populated bins of the light image are biased toward the higher end of the scale.
An image with low contrast has a narrow histogram located typically toward the middle of the intensity
scale, as Fig. 3.16(c) shows.
The components of the histogram of the high-contrast image cover a wide range of the intensity scale, and the distribution of pixels is not too far from uniform, with few bins being much higher than the others.
Intuitively, it is reasonable to conclude that an image whose pixels tend to occupy the entire range of
possible intensity levels and, in addition, tend to be distributed uniformly, will have an appearance of
high contrast and will exhibit a large variety of gray tones.
The net effect will be an image that shows a great deal of gray-level detail and has a high dynamic
range.
Histogram shape is related to image appearance
As you can see, lower intensity levels (0, 1, 2) dominate, so the image is likely dark with low contrast.
This is more spread out than the original histogram, where most values were in [0–2].
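To make this concrete, a short sketch (the tiny image is made up for illustration) that computes the normalized histogram p_r(r_k) = n_k / MN used throughout this section:

```python
import numpy as np

def normalized_histogram(img, levels):
    """Return p_r(r_k) = n_k / MN for an integer-valued grayscale image."""
    counts = np.bincount(img.ravel(), minlength=levels)   # n_k for each gray level
    return counts / img.size                              # divide by the number of pixels MN

# Tiny synthetic "dark" image: most pixels sit at the low intensities 0-2.
img = np.array([[0, 1, 2, 1],
                [2, 0, 1, 0],
                [1, 2, 0, 7]])
print(normalized_histogram(img, levels=8))   # mass concentrated in bins 0-2 -> dark, low contrast
```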
Histogram equalization example (MN = 4096 pixels, L = 8 gray levels):

Gray level r_k:                          0      1      2      3      4      5      6      7
Number of pixels n_k:                  790   1023    850    656    329    245    122     81
p_r(r_k) = n_k / MN:                  0.19   0.25   0.21   0.16   0.08   0.06   0.03   0.02
s_k = (L-1) * Σ_{j=0..k} p_r(r_j):    1.33   3.08   4.55   5.67   6.23   6.65   6.86   7.00
Rounded values:                          1      3      5      6      6      7      7      7

Pixel counts in the histogram-equalized image: 790 at level 1, 1023 at level 3, 850 at level 5, 985 at level 6, and 448 at level 7.

s_0 → 1,  s_1 → 3,  s_2 → 5,  s_3 → 6,  s_4 → 6,  s_5, s_6, s_7 → 7
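The rows of this table can be reproduced with a few lines of NumPy (a sketch; the pixel counts are those of the example, and small differences in the intermediate s_k values come from the table rounding p_r to two decimals first):

```python
import numpy as np

n_k = np.array([790, 1023, 850, 656, 329, 245, 122, 81])   # histogram of the input image
L = 8
MN = n_k.sum()                                             # 4096 pixels

p_r = n_k / MN                        # p_r(r_k) = n_k / MN
s_k = (L - 1) * np.cumsum(p_r)        # s_k = (L-1) * sum_{j=0..k} p_r(r_j)
print(np.round(s_k, 2))               # approximately 1.35 3.10 4.55 5.67 6.23 6.65 6.86 7.00
print(np.round(s_k).astype(int))      # [1 3 5 6 6 7 7 7] -> the rounded equalization mapping
```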
•Originally, most pixels were clustered in lower intensities (dark pixels) ⇒ low contrast.
•After equalization:
• Intensities were spread across the full range (1, 3, 5, 6, 7).
• Some intensity levels were pushed up (e.g., 2 → 5, 3 → 6, etc.).
• This causes low and mid-level intensities to become more distinguishable, revealing more texture and detail.
Thus, contrast increases because the distribution of intensity levels is stretched across a wider range, making differences
between objects and background more visually noticeable.
FIGURE 3.20 Left column: Images from Fig. 3.16. Center column: Corresponding
histogram-equalized images. Right column: histograms of the images in the
center column (compare with the histograms in Fig. 3.16).
HISTOGRAM MATCHING (SPECIFICATION)
As explained in the last section, histogram equalization produces a transformation function that seeks to generate an output image
with a uniform histogram.
When automatic enhancement is desired, this is a good approach to consider because the results from this technique are
predictable and the method is simple to implement.
However, there are applications in which histogram equalization is not suitable.
In particular, it is useful sometimes to be able to specify the shape of the histogram that we wish the processed image to have. The
method used to generate images that have a specified histogram is called histogram matching or histogram specification.
Input Image
Gray level r_k:                          0      1      2      3      4      5      6      7
Number of pixels n_k:                  790   1023    850    656    329    245    122     81
p_r(r_k) = n_k / MN  (MN = 4096):     0.19   0.25   0.21   0.16   0.08   0.06   0.03   0.02
s_k = (L-1) * Σ_{j=0..k} p_r(r_j):    1.33   3.08   4.55   5.67   6.23   6.65   6.86   7.00
Rounded values:                          1      3      5      6      6      7      7      7

Pixel counts in the histogram-equalized image: 790 at level 1, 1023 at level 3, 850 at level 5, 985 at level 6, and 448 at level 7.
Target Image
Gray level z_q:                          0      1      2      3      4      5      6      7
p_z(z_q):                             0.00   0.00   0.00   0.15   0.20   0.30   0.20   0.15
G(z_q) = (L-1) * Σ_{i=0..q} p_z(z_i): 0.00   0.00   0.00   1.05   2.45   4.55   5.95   7.00
Rounded values:                          0      0      0      1      2      5      6      7
Mapping from input levels to target levels (for each s_k, find the z_q whose G(z_q) is closest):

r_k    s_k    z_q
0      1      3
1      3      4
2      5      5
3      6      6
4      6      6
5      7      7
6      7      7
7      7      7

Map: 0 → 3,  1 → 4,  2 → 5,  3 → 6,  4 → 6,  5 → 7,  6 → 7,  7 → 7
Modified Image
Gray level z_q:        0     1     2     3      4     5     6     7
Number of pixels:      0     0     0   790   1023   850   985   448
The first step is to obtain the histogram-equalized values, which we did in Example 3.5:
s_0 → 1,  s_1 → 3,  s_2 → 5,  s_3 → 6,  s_4 → 6,  s_5, s_6, s_7 → 7
Compute G(z_q), the CDF of the specified histogram.
For each s_k, find the corresponding value of z_q for which G(z_q) is closest to s_k. Store these mappings from s to z.
When more than one value of z_q gives the same match (i.e., the mapping is not unique), choose the smallest value by convention.
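A sketch of this matching step in NumPy, using the histograms of the example; np.argmin returns the first (smallest) index on ties, which implements the convention just stated:

```python
import numpy as np

L   = 8
n_k = np.array([790, 1023, 850, 656, 329, 245, 122, 81])          # input histogram
p_z = np.array([0.00, 0.00, 0.00, 0.15, 0.20, 0.30, 0.20, 0.15])  # specified (target) histogram

s_k = np.round((L - 1) * np.cumsum(n_k / n_k.sum())).astype(int)  # [1 3 5 6 6 7 7 7]
G   = np.round((L - 1) * np.cumsum(p_z)).astype(int)              # [0 0 0 1 2 5 6 7]

# For each r_k, map its equalized value s_k to the z_q whose G(z_q) is closest.
mapping = np.array([np.argmin(np.abs(G - s)) for s in s_k])
print(mapping)   # [3 4 5 6 6 7 7 7] -> 0->3, 1->4, 2->5, 3->6, 4->6, 5..7->7
```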
First and last columns of the target histogram:
z_q:               0   1   2   3   4   5   6   7
G(z_q) (rounded):  0   0   0   1   2   5   6   7

Last two columns of the input histogram:
s_k:                 1     3     5     6     7
Number of pixels:  790  1023   850   985   448
Modified Image
Gray level z_q:        0     1     2     3      4     5     6     7
Number of pixels:      0     0     0   790   1023   850   985   448
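As a quick check (a short sketch), applying this map to the input pixel counts reproduces the modified-image histogram above:

```python
import numpy as np

n_k     = np.array([790, 1023, 850, 656, 329, 245, 122, 81])   # input histogram
mapping = np.array([3, 4, 5, 6, 6, 7, 7, 7])                   # r_k -> z_q from the matching step

matched = np.zeros(8, dtype=int)
np.add.at(matched, mapping, n_k)   # accumulate the counts of all r_k that map to the same z_q
print(matched)                     # [0 0 0 790 1023 850 985 448]
```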