
Image Processing:

Image Enhancement

Suman Halder, Assistant Professor


Image Enhancement
Ø Image enhancement is the improvement of digital image quality without
knowledge about the source of degradation.
Ø The goal of image enhancement is to improve the perception of information
in images for better image processing techniques.
Ø It is the process of improving the visual quality of the image by manipulating
the gray levels of the pixels for a specific application.
Ø Image enhancement techniques can be divided into two broad categories:
1. Spatial domain methods, which operate directly on pixels
2. Frequency domain methods, which operate on the Fourier transform of
an image.
Ø Image enhancement techniques are used as pre-processing tools for other
image processing techniques
Spatial Domain Method
Ø Spatial domain method refers to the image plane itself and is based on direct
manipulation of pixels in an image.
g(x, y) = T[f(x, y)]
where f(x, y) is the input image and g(x, y) is the processed image. T is an
operator on f, defined over some neighbourhood N of (x, y). For N we mostly
use a rectangular subimage that is centered at (x, y).
Ø In the spatial domain, the operations are performed by a convolution



Frequency Domain Method
Ø Frequency domain method is based on modifying the Fourier transform of an
image. Let g(x, y) be the image formed by the convolution of f(x, y) and a
linear position-invariant operator h(x, y).
g(x, y) = h(x, y) * f(x, y)
Ø Applying the convolution theorem,
G(u, v) = H(u, v) . F(u, v)
where G, H, and F are the Fourier transform of g, h and f respectively. The
transform H(u, v) is the transfer function of a linear system or process.
Ø Applying the inverse Fourier transform on G(u, v) yields the output image:
g(x, y) = Ƒ-1[H(u, v) . F(u, v)]



Contrast Enhancement
Ø Contrast is the difference in luminance or colour that makes an object
distinguishable. In visual perception of the real world, contrast is determined
by the difference in the colour and brightness of the object and other objects
within the same field of view.
Ø A low contrast image looks grayish or foggy. A high contrast image appears
almost poster-like.
Ø Poor contrast may be caused by inadequate lighting, aperture size, shutter
speed and/or non-linear mapping of the image intensity.
Ø The definitions of contrast represent a ratio of the type
(luminance difference) / (average luminance)



Contrast Enhancement
Ø Contrast enhancement is a process that makes the image features stand out
more clearly by making optimal use of the colors available on the display or
output device.
Ø Contrast manipulations involve changing the range of values in an image in
order to increase contrast.
Ø For example, an image might start with a range of values between 40 and 90.
When this is stretched to a range of 0 to 255, the differences between
features are accentuated.
Ø Contrast enhancement can be done using histogram stretching, linear
stretching, or non-linear stretching.



Linear Stretching
Ø Linear contrast enhancement, also referred to as linear stretching, linearly
expands the original digital values of the image into a new distribution.
Ø Methods of linear stretching are:
o Minimum-Maximum Linear Stretching
o Percentage Linear Stretching
o Standard deviation contrast stretch
o Piecewise Linear Stretching



Minimum-Maximum Linear Stretching
Ø The original minimum and maximum values of the data are assigned to a
newly specified set of values that utilize the full range of available brightness
values.
Ø The transformation function can be represented by a straight line having the
slope (nmax - nmin) / (omax - omin), always greater than or equal to 1. If the
slope is 1, there is no enhancement.
Ø So, the transformation function is
g(x, y) = T(o) = [(nmax - nmin) / (omax - omin)] × (f(x, y) - omin) + nmin
Ø This transformation function shifts and stretches the gray level range of the
input image [omin, omax] to occupy the entire range [nmin, nmax]



Minimum-Maximum Linear Stretching
Ø Consider an image with a minimum brightness value of 45 and a maximum
value of 205. When such an image is viewed without enhancements, the
values of 0 to 44 and 206 to 255 are not displayed. Important spectral
differences can be detected by stretching the minimum value of 45 to 0 and
the maximum value of 205 to 255.
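As a minimal pure-Python sketch of the min-max stretch above (the function name and list-of-lists image layout are illustrative assumptions, not from the slides):

```python
def minmax_stretch(img, n_min=0, n_max=255):
    """Map the input range [o_min, o_max] linearly onto [n_min, n_max]."""
    o_min = min(min(row) for row in img)
    o_max = max(max(row) for row in img)
    scale = (n_max - n_min) / (o_max - o_min)
    return [[round((p - o_min) * scale + n_min) for p in row] for row in img]

# The slide's example: an image spanning 45..205 stretched to 0..255
print(minmax_stretch([[45, 125, 205]]))  # [[0, 128, 255]]
```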



Percentage Linear Stretching
Ø The percentage linear contrast stretch is similar to the minimum-maximum
linear contrast stretch, except that this method uses specified minimum and
maximum values that lie at a certain percentage of pixels from the mean of
the histogram.
Ø A certain percentage of the top or bottom pixel values of the image will be
set to 0 or 255; the rest of the values will be linearly stretched to 0 to 255.
Ø A standard deviation from the mean is often used to push the tails of the
histogram beyond the original minimum and maximum values.
Ø If the percentage coincides with a standard deviation percentage, then it is
called a standard deviation contrast stretch.
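A sketch of the percentage linear stretch, assuming a symmetric clip of the lowest and highest pct% of pixel values (the helper name and clip convention are assumptions):

```python
def percent_stretch(img, pct=2):
    """Clip the lowest and highest pct% of pixel values to 0/255, then
    linearly stretch the remaining range to [0, 255]."""
    flat = sorted(p for row in img for p in row)
    k = len(flat) * pct // 100
    lo, hi = flat[k], flat[-1 - k]          # percentile cut points
    return [[min(255, max(0, round((p - lo) * 255 / (hi - lo))))
             for p in row] for row in img]

# with pct=25, the lowest/highest quarter of values saturate to 0/255
print(percent_stretch([[10, 40, 90, 200]], pct=25))  # [[0, 0, 255, 255]]
```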



Nonlinear Stretching
Ø Nonlinear contrast enhancement, also referred to as a Nonlinear stretching
that non-linearly reduces the contrast in the very light or dark parts of the
image associated with the tails of a normally distributed histogram.
Ø Nonlinear stretching applies when the gray level histogram shows high
population of pixels in the lower gray level zone of the gray scale, need to
stretch this portion more at the cost of compressing the higher gray level
zone.
Ø Some pixels that originally have differently values are now assigned the
same value (perhaps loss information), while other value that were once very
close together are now spread out, increasing the contrast between them.
Ø It is customary to use logarithmic transformation function or exponential
transformation function.



Histogram
Ø An image histogram is a graphical display of the values of the pixels in a
digital image.
Ø The histogram plots the number of pixels (frequency) in the image (vertical
axis) with a gray level intensity value (horizontal axis).
Ø Histogram is a graph showing the number of pixels (frequency) in an image
at each different intensity value found in that image.
Ø Histograms for 8-bit grayscale images graphically display 256 different
possible intensities with the distribution of pixels amongst those grayscale values.
Ø For color images, either individual histograms of the red, green and
blue channels can be taken, or a 3-D histogram can be produced.
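Computing the histogram described above takes a single pass over the pixels; a minimal sketch (list-of-lists image layout assumed):

```python
def histogram(img, levels=256):
    """Count the number of pixels (frequency) at each gray-level intensity."""
    h = [0] * levels
    for row in img:
        for p in row:
            h[p] += 1
    return h

print(histogram([[0, 1, 1], [2, 1, 0]], levels=4))  # [2, 3, 1, 0]
```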



Histogram
The histogram of the picture of Einstein would look like this:



Applications of Histograms
Ø Histograms have many uses in image processing. A histogram is used to
analyze an image; properties of an image can be predicted by a detailed study
of its histogram.
Ø Another use of the histogram is for brightness adjustment. The brightness of
the image can be adjusted given the details of its histogram. Histograms are
also used in adjusting the contrast of an image.
Ø Another important use of the histogram is to equalize an image. Gray level
intensities are expanded along the x-axis to produce a high contrast image.
Ø The histogram is widely used in thresholding, as it improves the appearance
of the image. This is mostly used in computer vision.



Histogram sliding
Ø In histogram sliding, the complete histogram is shifted rightwards or
leftwards. When a histogram is shifted towards the right or left, clear
changes are seen in the brightness of the image.
Ø The brightness of the image is defined by the intensity of light which is
emitted by a particular light source.
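Histogram sliding amounts to adding a clipped constant offset per pixel; a minimal sketch:

```python
def slide_histogram(img, offset):
    """Shift every gray level by `offset`, clipping to [0, 255]:
    positive offsets brighten the image, negative offsets darken it."""
    return [[min(255, max(0, p + offset)) for p in row] for row in img]

print(slide_histogram([[10, 250]], 20))  # [[30, 255]]
```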



Histogram Stretching
Ø In histogram stretching, contrast of an image is increased. The contrast of an
image is defined between the maximum and minimum value of pixel intensity.
Ø If we want to increase the contrast of an image, its histogram will be fully
stretched to cover the dynamic range of the histogram.
Ø From the histogram of an image, we can check whether the image has low or
high contrast.
g(x, y) = [ (f(x, y) - fmin) / (fmax - fmin) ] × (2^bpp - 1)
where fmin and fmax are the minimum and maximum gray levels of the input
image and bpp is the number of bits per pixel





Histogram Equalization
Ø Histogram equalization is used for equalizing all the pixel values of an image.
Transformation is done in such a way that uniform flattened histogram is
produced.
Ø Histogram equalization increases the dynamic range of pixel values and
makes an equal count of pixels at each level which produces a flat histogram
with high contrast image.
Ø While stretching a histogram, the shape of the histogram remains the same,
whereas in histogram equalization the shape of the histogram changes.
Ø The first two steps are calculating the PMF and CDF. All pixel values of the
image will then be equalized.



Histogram Equalization
Ø Probability Mass Function (PMF): It gives the probability of each number in
the data set (frequency of each element).
Ø Cumulative Distributed Function (CDF): Cumulative sum of values calculated
by PMF.
Ø The CDF is calculated using the histogram. The CDF grows monotonically,
which is necessary for histogram equalization.

h(v) = round( (cdf(v) - cdfmin) / (M × N - cdfmin) × (L - 1) )

where v is the pixel intensity, cdfmin is the minimum non-zero value of the
cumulative distribution function, M × N is the image's number of pixels and
L is the number of gray levels used
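The PMF/CDF steps and the h(v) mapping above can be sketched in pure Python (list-of-lists image layout assumed):

```python
def equalize(img, levels=256):
    """Histogram equalization via
    h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))."""
    flat = [p for row in img for p in row]
    hist = [0] * levels                 # PMF up to the 1/(M*N) factor
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0                  # cumulative sum of the histogram
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:                    # constant image: nothing to equalize
        return [[0 for _ in row] for row in img]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

print(equalize([[52, 55], [61, 59]]))  # [[0, 85], [255, 170]]
```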





Noise Models
Ø Gaussian Noise: poor illumination
Ø Laplacian Noise
Ø Uniform Noise
Ø Rayleigh Noise: range image
Ø Gamma: laser imaging
Ø Impulse (Salt-and-Pepper) Noise : faulty switch during imaging
Ø Exponential Noise



Convolution
Ø Convolution is a process that produces the output image g(x, y) by
convolving, in the spatial domain, an original image f(x, y) with a spatial
mask h(x, y).
g(x, y) = h(x, y) * f(x, y)
Ø Generally, a spatial mask is much smaller than the input image (3×3, 5×5,
or 7×7) and sometimes it is known as the kernel. For a square kernel of
size M×M, we can calculate the output image with the following formula:

g(x, y) = Σ_{i=-M/2}^{M/2} Σ_{j=-M/2}^{M/2} h(i, j) f(x - i, y - j)

Ø The convolution process is performed by sliding the mask over the
original image, normally starting at the top left to the bottom right.
Ø Boundary columns and rows of an (N×N) image are neglected, so for a 3×3
mask the filtered image is of size (N-2)×(N-2)
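A direct sketch of the convolution formula above, skipping the boundary rows and columns as described (the function name is illustrative):

```python
def convolve(img, kernel):
    """g(x, y) = sum over i, j of h(i, j) * f(x - i, y - j); boundary
    rows/columns are skipped, so an N×N input gives an (N-2)×(N-2)
    output for a 3×3 kernel."""
    r = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = []
    for y in range(r, h - r):
        row = []
        for x in range(r, w - r):
            s = 0.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    s += kernel[i + r][j + r] * img[y - i][x - j]
            row.append(s)
        out.append(row)
    return out

box = [[1 / 9] * 3 for _ in range(3)]                  # normalized 3×3 box
smoothed = convolve([[9] * 4 for _ in range(4)], box)  # 2×2 output, all ~9
```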
Smoothing
Ø Images may contain various types of noise that reduce the quality of the
image.
Ø Blurring or smoothing is the technique for reducing image noise and
improving its quality.
Ø Usually, it is achieved by convolving an image with a low pass filter that
removes high-frequency content like edges from the image.
Ø Different techniques can be used for image smoothing, like Averaging,
Gaussian Blur and the Median Filter



Image Averaging
Ø A noisy image g(x, y), formed by the addition of a certain amount of noise η(x,
y) to an original image f (x, y)
g(x, y) = f(x, y) + η(x, y)
Ø It is assumed that at every pair (x, y) the noise is uncorrelated and has an
average value of zero. The goal is to reduce the noise effects by averaging a
set of noisy images {gi(x, y)}

ḡ(x, y) = (1/M) Σ_{i=1}^{M} gi(x, y)

Ø Averaging of the image is done by applying a convolution operation on the
image with a normalized box filter. In the convolution operation, the filter or
kernel slides across the image, the average of all the pixels under the kernel
area is found, and the central element of the image is replaced with this average.



Image Averaging
Ø Image averaging is a digital image processing technique that is often
employed to enhance video images that have been corrupted by random
noise.
Ø The algorithm operates by computing an average or arithmetic mean of the
intensity values for each pixel position in a set of captured images from the
same scene.
Ø The main use of image averaging is to reduce the noise effects by adding a
set of noisy images {gi(x, y)}.
ḡ(x, y) = (1/M) Σ_{i=1}^{M} gi(x, y) and E{ḡ(x, y)} = f(x, y)

Ø As M increases, E{ḡ(x, y)} remains f(x, y) while the noise variance
decreases, so ḡ(x, y) gets closer to f(x, y)
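A sketch of image averaging on synthetic data; the constant test image and noise parameters are illustrative assumptions:

```python
import random

def average_images(imgs):
    """Pixel-wise arithmetic mean of a set of noisy images {g_i(x, y)}."""
    m = len(imgs)
    h, w = len(imgs[0]), len(imgs[0][0])
    return [[sum(img[y][x] for img in imgs) / m for x in range(w)]
            for y in range(h)]

# 64 noisy copies of a constant image f = 100 with zero-mean Gaussian noise
random.seed(0)
noisy = [[[100 + random.gauss(0, 10) for _ in range(8)] for _ in range(8)]
         for _ in range(64)]
avg = average_images(noisy)
# each averaged pixel ends up far closer to 100 than any single noisy frame
```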



Spatial filtering
Ø The use of spatial masks for image processing is called spatial filtering, and
the masks are called spatial filters.
Ø Two types of filters: Linear filters and Non-Linear filters
Ø Linear filters: If an explicit mask is used, then it is said to be Linear filters.
Ø Examples of Linear filters are Low-pass filter, High-pass filter, High-boost
filter, Derivative filters
Ø Non-Linear filters: They do not explicitly use coefficients. Their operation is
based directly on the values of the pixels in the neighbourhood under
consideration.
Ø Examples of Non-Linear filters are Maximum filter, Minimum filter, Median
filter



Mean Filter
Ø The mean filter simply replaces each pixel value in an image with the mean
value of its neighbors, including itself.
Ø It has the effect of eliminating pixel values which are unrepresentative of their
surroundings. Mean filtering is usually thought of as a convolution filter. Like
other convolutions it is based around a kernel, which represents the shape
and size of the neighborhood to be sampled when calculating the mean.
Ø Often a 3×3 square kernel is used, although larger kernels (e.g. 5×5
squares) can be used for more severe smoothing.
Ø A small kernel can be applied more than once in order to produce a similar
but not identical effect as a single pass with a large kernel.
Ø The smoothing of an image depends upon the kernel size. If the kernel size is
large then it removes the small features of the image. But if the kernel size is
too small then it is not able to remove the noise.
Arithmetic Mean Filter
Ø This is the simplest mean filter. Let Sxy represent the filter window of size
m×n, centered at the point (x, y).
Ø The arithmetic mean filter simply calculates the average value of the pixels
in the window area, and then assigns the average value to the pixel at the
center of the window:
f̂(x, y) = (1/mn) Σ_{(s, t) ∈ Sxy} g(s, t)
Ø It can remove uniform noise and Gaussian noise, but it will cause a certain
degree of blurring of the image.



Arithmetic Mean Filter
Ø It has some effect on salt and pepper noise, but not much. It just makes
them blurred.



Geometric Mean
Ø Each pixel of the filtered image is given by the 1/mn-th power of the
product of the pixels in the template window:
f̂(x, y) = [ Π_{(s, t) ∈ Sxy} g(s, t) ]^(1/mn)
Ø Compared with the arithmetic mean filter, the geometric mean filter can
better remove Gaussian noise and retain more edge information of the image.
Ø It is very sensitive to the value 0. As long as there is a pixel with a gray
value of 0 in the filter window, the output of the filter will be 0.
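A sketch of the geometric mean filter; the boundary handling and square window are simplifying assumptions:

```python
def geometric_mean_filter(img, size=3):
    """Each output pixel is the product of the size×size window pixels
    raised to the power 1/(size*size); boundary pixels are skipped."""
    r = size // 2
    h, w = len(img), len(img[0])
    out = []
    for y in range(r, h - r):
        row = []
        for x in range(r, w - r):
            prod = 1.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    prod *= img[y + i][x + j]
            row.append(prod ** (1.0 / (size * size)))
        out.append(row)
    return out

# a single 0 in the window forces the output to 0, as noted above
print(geometric_mean_filter([[0, 4, 4], [4, 4, 4], [4, 4, 4]]))  # [[0.0]]
```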



Harmonic Mean
Ø The harmonic mean filter works well for salt noise, but fails for pepper noise.
Ø It also does well with other kinds of noise, such as Gaussian noise
f̂(x, y) = mn / Σ_{(s, t) ∈ Sxy} (1 / g(s, t))



Median Filter
Ø The median filter is one of the most widely used non-linear filtering
techniques to remove salt-and-pepper noise.
Ø The median filter technique is very similar to the averaging filtering technique.
It computes the median of all the pixels under the kernel window and the
central pixel is replaced with this median value instead of the average value.
Ø The median is calculated by first sorting all the pixel values within the k×k
window in numerical order and then replacing the pixel being considered with
the middle pixel value.
Ø Median Blurring always reduces the noise effectively because in this filtering
technique the central element is always replaced by some pixel value in the
image. But in the mean filters, the central element is a newly calculated value
which may be a pixel value in the image or a new value.
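A sketch of the k×k median filter described above (boundary pixels skipped for simplicity):

```python
def median_filter(img, k=3):
    """Replace each pixel with the median of its k×k neighbourhood."""
    r = k // 2
    h, w = len(img), len(img[0])
    out = []
    for y in range(r, h - r):
        row = []
        for x in range(r, w - r):
            window = sorted(img[y + i][x + j]
                            for i in range(-r, r + 1)
                            for j in range(-r, r + 1))
            row.append(window[len(window) // 2])  # middle of the sorted list
        out.append(row)
    return out

# a salt spike (255) among similar values is replaced by the median
print(median_filter([[10, 12, 11], [13, 255, 12], [11, 10, 13]]))  # [[12]]
```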



Median Filter
Ø The following figure illustrates the principle with an example of a 3x3
window median filter applied on pixel with value 255, which will be
replaced with a value of 96.



Median Filter
Ø Median filtering is a nonlinear operation often used in image processing to
reduce "salt and pepper" noise.



Low-pass Filtering
Ø The most basic of filtering operations is the low-pass filter.
Ø Low-pass filters eliminate high frequencies, resulting in the removal of
edges and sharpness in images, thus blurring/smoothing the overall image.
Ø A low-pass filter, also called a "blurring" or "smoothing" filter, averages out
rapid changes in intensity.
Ø The simplest low-pass filter just calculates the average of a pixel and all of its
eight immediate neighbors. The result replaces the original value of the pixel.
The process is repeated for every pixel in the image.
Ø Each output pixel is the average of all the pixels in the considered
neighbourhood; for this reason lowpass spatial filtering is also called
neighbourhood averaging.
Ø Often a 3×3 square kernel is used, although larger kernels (e.g. 5×5
squares) can be used for more severe smoothing.
Image Sharpening
Ø Image sharpening is an effect applied to digital images to give them a
sharper appearance.
Ø As the opposite of low-pass filtering for image smoothing and noise reduction,
high-pass filtering can sharpen the image (the edges are more pronounced),
thereby enhancing and emphasizing the detailed information (high spatial
frequency components) in the image.



High Pass Filtering
Ø The high-pass filters eliminate low frequencies, resulting in sharper images.
Ø The kernel of the high pass filter is designed to increase the brightness of the
center pixel relative to neighboring pixels. The kernel array usually contains a
single positive value at its center, which is completely surrounded by negative
values.
Ø High-pass filtering can be carried out by subtracting the low-pass filtered
image from its original version, which can be considered as all-pass filtered
by an impulse or delta function kernel:



High Pass Filtering
Ø When a given image I0 is convolved with this delta function kernel, it is not changed:
Wap * I0 = I0
Ø As convolution is a linear operation
Ihp = Iap - Ilp = Wap * I0 - Wlp * I0 = (Wap - Wlp) * I0 = Whp * I0

Ø where Whp = Wap - Wlp is the high-pass kernel corresponding to the low-pass
kernel Wlp.
Ø The all-pass filter in frequency domain, a constant, is the Fourier transform of
the all-pass convolution kernel, an impulse, in spatial domain.



High Pass Filtering
Ø We can obtain a high-pass filtering kernel corresponding to each of the low-
pass filter kernels by subtracting the low-pass kernel from the all-pass kernel.
The resulting kernels are various forms of high-pass filtering kernels, also
called the Laplace operators.

Ø The sum of all elements of the resulting high-pass filter is always zero.
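The W_hp = W_ap − W_lp construction can be sketched directly (the helper name is hypothetical):

```python
def highpass_from_lowpass(w_lp):
    """W_hp = W_ap - W_lp: subtract the low-pass kernel from the all-pass
    (unit impulse) kernel of the same size."""
    m = len(w_lp)
    c = m // 2
    return [[(1.0 if (i, j) == (c, c) else 0.0) - w_lp[i][j]
             for j in range(m)] for i in range(m)]

w_hp = highpass_from_lowpass([[1 / 9] * 3 for _ in range(3)])
# the elements of the resulting high-pass kernel sum to zero
```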
High-boost Filtering
Ø It is often desirable to emphasize high frequency components representing
the image details (by means such as sharpening) without eliminating low
frequency components representing the basic form of the signal.
Ø In this case, the high-boost filter can be used to enhance high frequency
component while still keeping the low frequency components:

Ihb = I0 + c·Ihp = (Wap + c·Whp) * I0 = Whb * I0

where c is a constant and Whb = Wap + c·Whp is the high boost convolution
kernel



High-boost Filtering

Ø High-boost filtering has advantages over normal highpass filtering because
the resulting image is not darkened.
Ø However, boosting the higher frequencies may also result in the revealing of
noise.



High-boost Filtering

Ø An example of high-boost filtering obtained by the above high-boost
convolution kernel with c = 2



High-boost Filtering
Ø A special case of unsharp masking
Ø HP filters cut the zero frequency component, namely the mean value. The
resulting image is zero mean and looks very dark. High boost filtering sums
the original image to the result of HPF in order to get an image with sharper
(emphasized) edges but with same range of gray values as the original one.
Ø In formula
High pass: fhp(x, y) = f(x, y) - flp(x, y)
High boost: fhb(x, y) = Af(x, y) - flp(x, y)
= Af(x, y) - f(x, y) + fhp(x, y)
= (A-1)f(x, y) + fhp(x, y)
For A=1 the high-boost corresponds to the HP
For A>1 the contribution of the original image becomes larger
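The high-boost relation f_hb = A·f − f_lp above can be sketched pixel-wise:

```python
def high_boost(f, f_lp, a=1.5):
    """f_hb(x, y) = A*f(x, y) - f_lp(x, y) = (A-1)*f + f_hp."""
    return [[a * f[y][x] - f_lp[y][x] for x in range(len(f[0]))]
            for y in range(len(f))]

# with A = 1 this reduces to the plain high-pass: f - f_lp
print(high_boost([[10]], [[8]], a=2))  # [[12]]
```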
Derivative Filtering
Ø Derivative filters provide a quantitative measurement for the rate of change in
pixel brightness information present in a digital image. When a derivative
filter is applied to a digital image, the resulting information about brightness
change rates can be used to enhance contrast, detect edges and boundaries,
and to measure feature orientation.
Ø The disadvantage of the previous methods is that they are based on
averaging the gray-levels of the pixels in a region, which actually results in a
blurring of the image. Because averaging uses a sum (and therefore, in the
continuous case, it can also be expressed as integration), one might expect
that differentiation has the opposite effect (the result will be a sharper image).



Derivative Filtering
Ø The formula for the 1st derivative of a function is as follows:
∂f/∂x = f(x+1) - f(x)
Ø It is just the difference between subsequent values and measures the rate of
change of the function.
Ø The formula for the 2nd derivative of a function is as follows:
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
Ø It simply takes into account the values both before and after the current value.
Derivative Filtering
Ø Differentiation is expressed by using the gradient:
∇f = [ ∂f/∂x, ∂f/∂y ]
where ∇f is the gradient of the function f(x, y) at the coordinates (x, y) (note
that the gradient is a vector).
Ø The magnitude of this vector:
|∇f| = √[ (∂f/∂x)² + (∂f/∂y)² ]
Ø The magnitude is based on the Euclidean metric, and numerous approaches
to image differentiation are based on it. The Manhattan metric can be used
instead of the Euclidean metric.



Derivative Filtering
Ø The 2nd order derivative is more useful for image enhancement than the 1st
order derivative.
Ø The Laplacian is defined as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y²
where the partial 2nd order derivative in the x direction is defined as follows:
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
and in the y direction as follows:
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)


Derivative Filtering
Ø Therefore, the Laplacian can be given as follows:
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
Ø It can be easily implemented using the following filter:
0  1  0
1 -4  1
0  1  0
or its negative
0 -1  0
-1  4 -1
0 -1  0
Ø The result of a Laplacian filtering is not an enhanced image. Subtract the
Laplacian result from the original image to generate the final sharpened
enhanced image.
g(x, y) = f(x, y) - ∇²f



Derivative Filtering
g(x, y) = f(x, y) - ∇²f
= f(x, y) - [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)]
= 5f(x, y) - f(x+1, y) - f(x-1, y) - f(x, y+1) - f(x, y-1)
Ø Therefore, the final filter can be implemented as:
0 -1  0
-1  5 -1
0 -1  0


Derivative Filtering

In the final sharpened image, edges and fine detail are much more obvious



Derivative Filtering
Ø To include the diagonal directions, the filter can be implemented as follows:
1  1  1
1 -8  1
1  1  1
Variants of the Laplacian
Ø These Laplacian filters are very sensitive to noise. To counter this, the image
is often Gaussian smoothed before applying the Laplacian filter. This pre-
processing step reduces the high frequency noise components prior to the
differentiation step.



Homomorphic Filtering
Ø Homomorphic filtering is a generalized technique for signal and image
processing involving a non-linear mapping to a different domain, in which
linear filter techniques are applied, followed by mapping back to the original
domain.
Ø Homomorphic filtering simultaneously normalizes the brightness across an
image and increases contrast.
Ø Homomorphic filtering is used for removing multiplicative noise, improving
the appearance of grayscale images, and correcting non-uniform illumination
in images.



Homomorphic Filtering
Ø Illumination-reflectance model: An image can be modeled as the product
of an illumination function and the reflectance function at every point.
I(x, y) = L(x, y) × R(x, y)
where L(x,y) is illumination and R(x, y) is reflectance
Ø The illumination-reflectance model can be used to solve the problem of
improving the quality of an image that has been acquired under poor
illumination conditions.
Ø For many images, illumination is the primary contributor to the dynamic range
and varies slowly in space, while the reflectance component represents the
details of object edges and varies rapidly in space.



Homomorphic Filtering
Ø The idea of the homomorphic filter is to separate the components of the
illumination-reflectance model and apply two different transfer functions to
have more control.
f(x, y) = i(x, y) · r(x, y)
Ø The problem with Fourier Transform is that the product of two functions is not
separable since Fourier Transform is not distributive over multiplication
Ƒ[f(x,y)] = Ƒ[i(x,y) . r(x,y)] ≠ Ƒ[i(x,y)] . Ƒ[r(x,y)]



Homomorphic Filtering
The steps involved in homomorphic filtering:
1. A logarithmic function is applied to the input image so that it can be
expressed as a sum of its illumination and reflectance components:
z(x, y) = ln f(x, y) = ln [i(x, y).r(x, y)] = ln i(x, y) + ln r(x, y)
2. Then apply the Fourier Transform to the log of these components
Ƒ[z(x, y)] = Ƒ[ln i(x, y)] + Ƒ[ln r(x, y)]
Or Z(u, v) = Fi(u, v) + Fr(u, v)
3. We process Z(u, v) by means of Filter Function H(u, v), so design filters
separately for illumination and reflectance components. Transfer functions of
the filters can be different for these two components.
S(u, v) = H(u, v)Z(u, v) = H(u, v)[Fi(u, v) + Fr(u, v)]
= H(u, v)Fi(u, v) + H(u, v)Fr(u, v), where S(u, v) is the FT of result
Homomorphic Filtering
4. Apply the Inverse Fourier Transform to the filtered image
s(x, y) = Ƒ-1[S(u, v)]
= Ƒ-1[H(u, v)Fi(u, v) + H(u, v)Fr(u, v)]
= Ƒ-1[H(u, v)Fi(u, v)] + Ƒ-1[H(u, v)Fr(u, v)]
= i’(x, y) +r’(x, y)
5. Now, to offset the logarithm applied in step 1, apply the antilog
(exponential) function to recover the original image.
g(x, y) = e^s(x, y) = e^(i'(x, y) + r'(x, y)) = e^i'(x, y) · e^r'(x, y) = i0(x, y) r0(x, y)
Here, i0(x, y) and r0(x, y) are the illumination and reflectance components of
the output image
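The five steps above can be sketched with NumPy's FFT. The Gaussian-shaped high-emphasis transfer function (gamma_l for low frequencies, gamma_h for high, with cutoff d0) is a common but assumed choice, not taken from the slides:

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=2.0, d0=10.0):
    z = np.log(img.astype(float) + 1e-6)            # step 1: ln f = ln i + ln r
    Z = np.fft.fftshift(np.fft.fft2(z))             # step 2: Fourier transform
    h, w = img.shape
    v, u = np.mgrid[0:h, 0:w]
    d2 = (u - w // 2) ** 2 + (v - h // 2) ** 2      # squared distance from centre
    # assumed high-emphasis H(u, v): gamma_l at DC rising to gamma_h
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * d0 ** 2)))
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # steps 3-4: filter, inverse FFT
    return np.exp(s)                                # step 5: antilog

flat = np.full((8, 8), 50.0)
out = homomorphic(flat, gamma_l=1.0, gamma_h=1.0)   # H = 1: image unchanged
```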



Gaussian Filter
Ø The values inside the kernel are computed by the Gaussian function:
G(x, y) = (1 / (2πσ²)) e^(-(x² + y²) / (2σ²))
Ø A 3×3 Gaussian kernel approximation (2D) with standard deviation σ = 1 is
commonly given as (1/16) × [ 1 2 1 ; 2 4 2 ; 1 2 1 ]
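A sketch of sampling the Gaussian function on a kernel grid; normalizing the kernel to unit sum is a common convention assumed here:

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Sample G(x, y) = exp(-(x² + y²) / (2σ²)) on a size×size grid
    and normalize so that the kernel sums to 1."""
    r = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)] for y in range(-r, r + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

k = gaussian_kernel()  # 3×3, sigma = 1: centre weight largest, corners smallest
```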



Frequency Domain Enhancement
Ø Frequency filters process an image in the frequency domain.
Ø The image is Fourier transformed into the frequency domain, multiplied with
the filter function, and then re-transformed into the spatial domain.



Frequency Domain Enhancement
Ø Basics of filtering in the frequency domain
1. Multiply the input image by (-1)x+y to center the transform to u = M/2 and v
= N/2 (if M and N are even numbers, then the shifted coordinates will be
integers)
2. Compute F(u,v), the DFT of the image from (1)
3. Multiply F(u,v) by a filter function H(u,v)
4. Compute the inverse DFT of the result in (3)
5. Obtain the real part of the result in (4)
6. Multiply the result in (5) by (-1)x+y to cancel the multiplication of the input
image.
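The six steps above can be sketched with NumPy's FFT, here paired with an ideal low-pass mask (both function names are illustrative):

```python
import numpy as np

def ideal_lpf(shape, d0):
    """H(u, v) = 1 where D(u, v) <= D0, else 0; the transform is centred
    at (M/2, N/2) after step 1."""
    v, u = np.mgrid[0:shape[0], 0:shape[1]]
    d = np.sqrt((u - shape[1] / 2) ** 2 + (v - shape[0] / 2) ** 2)
    return (d <= d0).astype(float)

def freq_domain_filter(img, H):
    x, y = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    sign = (-1.0) ** (x + y)
    F = np.fft.fft2(img * sign)      # steps 1-2: centre, then DFT
    g = np.fft.ifft2(H * F)          # steps 3-4: filter, inverse DFT
    return np.real(g) * sign         # steps 5-6: real part, undo centering

img = np.full((8, 8), 3.0)
out = freq_domain_filter(img, ideal_lpf(img.shape, 100.0))  # passes everything
```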



Frequency Domain Enhancement
G(u, v) = H(u, v) F(u, v)
Ø H(u,v) is the filter transfer function, which is the DFT of the filter impulse
response
Ø The implementation consists in multiplying point-wise the filter H(u,v) with the
function F(u,v)
Ø Real filters are called zero phase shift filters because they don’t change the
phase of F(u,v)
Ø Filtered image is obtained by taking the inverse DFT of the resulting image
Ø It can happen that the filtered image has spurious imaginary components
even though the original image f(x,y) and the filter h(x,y) are real. These are
due to numerical errors and are neglected.
Ø The final result is thus the real part of the filtered image
Frequency Domain Enhancement
Ø The concept of filtering is easier to visualize in the frequency domain.
Ø Therefore, enhancement of image can be done in the frequency domain,
based on its DFT.
Ø The form of the filter determines the effects of the operator.
Ø There are three different kinds of filters:
o low-pass filter
o high-pass filter
o band-pass filter



Frequency Domain Enhancement
Ø Edges and sharp transitions in gray values in an image contribute
significantly to high-frequency content of its Fourier transform
Ø Regions of relatively uniform gray values in an image contribute to low-
frequency content of its Fourier transform.
Ø Hence, an image can be smoothed in the Frequency domain by attenuating
the high-frequency content of its Fourier transform
Ø For simplicity, we will consider only those filters that are real and symmetric.



Low-pass filter
Ø A low-pass filter removes the high-frequency components of an image and keeps the low-frequency components. It is used to smooth the image by attenuating high frequencies while preserving low frequencies, and it helps in removing aliasing effects.
Ø Different kinds of low-pass filters:
q Ideal low-pass filter
q Butterworth low-pass filter
q Gaussian low-pass filter



Ideal low-pass filter
Ø The simplest low-pass filter is the Ideal Low-pass Filter (ILPF).
Ø A 2-D ideal low-pass filter is specified by the function:

              H(u, v) = 1   if D(u, v) ≤ D0
              H(u, v) = 0   if D(u, v) > D0

Ø Where D0 is a positive constant. The ILPF passes all frequencies within a circle of radius D0 from the origin without attenuation and cuts off all frequencies outside the circle.
Ø This D0 is the transition point between H(u, v) = 1 and H(u, v) = 0, so it is termed the cutoff frequency.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
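As a sketch (assuming a centered spectrum, i.e. the origin of the frequency plane moved to the middle of the array), the ILPF mask can be generated like this:

```python
import numpy as np

def ideal_lowpass(shape, D0):
    """Ideal low-pass transfer function on a centered frequency grid:
    H(u, v) = 1 inside the circle of radius D0 around the centre, 0 outside."""
    M, N = shape
    u = np.arange(M) - M // 2                        # centered frequency coordinates
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # D(u, v) = sqrt(u^2 + v^2)
    return (D <= D0).astype(float)                   # sharp cutoff at D0
```

The sharp 0/1 boundary of this mask is exactly what produces the ringing effect discussed next.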
Ideal low-pass filter

Perspective plot, image representation and cross section of Ideal low-pass filter



Ideal low-pass filter
Ø The drawback of this filter function is a ringing effect that occurs along the
edges of the filtered image.

Original image (left) and result of an ideal low-pass filter of radius 30 (right)



Butterworth low-pass filter
Ø The transfer function of a Butterworth low-pass filter (BLPF) of order n
with cutoff frequency at distance D0 from the origin is defined as:

              H(u, v) = 1 / [1 + (D(u, v) / D0)^(2n)]

Ø Where D0 is a positive constant. The BLPF passes low frequencies (D(u, v) < D0) with little attenuation and increasingly attenuates frequencies above D0.
Ø This D0 is the transition point between H(u, v) = 1 and H(u, v) = 0, so it is termed the cutoff frequency. But instead of making a sharp cutoff (like the ILPF), it introduces a smooth transition from 1 to 0 to reduce ringing artifacts.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
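The transfer function above can be sketched directly in NumPy (again assuming a centered spectrum; the function name and grid convention are illustrative choices, not from the slides):

```python
import numpy as np

def butterworth_lowpass(shape, D0, n):
    """Butterworth low-pass filter H(u, v) = 1 / (1 + (D/D0)^(2n))
    on a centered frequency grid."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # D(u, v)
    return 1.0 / (1.0 + (D / D0) ** (2 * n))         # smooth roll-off, no sharp edge
```

Note the characteristic property: at the cutoff D(u, v) = D0 the response is exactly 0.5, and larger n makes the transition steeper (approaching the ILPF as n grows).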
Butterworth low-pass filter

Perspective plot, image representation and cross section of the Butterworth low-pass filter



Butterworth low-pass filter

Original image (left) and result of a Butterworth low-pass filter of order 2 with cutoff radius 20 (right)



Gaussian low-pass filter
Ø The transfer function of a Gaussian low-pass filter (GLPF) is defined as:

              H(u, v) = e^(−D²(u, v) / 2D0²)

Ø Where D0 is a positive constant. The GLPF passes low frequencies (D(u, v) < D0) with little attenuation and increasingly attenuates frequencies above D0.
Ø This D0 is termed the cutoff frequency and plays the role of the standard deviation σ: at D(u, v) = D0 the filter falls to e^(−1/2) ≈ 0.607 of its maximum value.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
Ø The inverse Fourier transform of a Gaussian filter is also a Gaussian.
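A minimal NumPy sketch of the GLPF mask, under the same centered-spectrum assumption as before (function name is illustrative):

```python
import numpy as np

def gaussian_lowpass(shape, D0):
    """Gaussian low-pass filter H(u, v) = exp(-D^2(u, v) / (2 D0^2))
    on a centered frequency grid; D0 acts as the standard deviation."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2     # D^2(u, v)
    return np.exp(-D2 / (2.0 * D0 ** 2))       # bell-shaped roll-off, no ringing
```

Because the mask is itself a Gaussian, its inverse DFT is also Gaussian-shaped, which is why this filter produces no ringing in the spatial domain.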



Gaussian low-pass filter

Perspective plot, image representation and cross section of the Gaussian low-pass filter



Gaussian low-pass filter
Ø A Gaussian low-pass filter can be used to connect broken text
Ø Different Gaussian low-pass filters can be used to remove blemishes in a photograph
Ø Better results can be achieved with a Gaussian-shaped filter function
Ø The advantage is that the Gaussian has the same shape in the spatial and Fourier domains, and therefore does not incur the ringing effect in the spatial domain of the filtered image
Ø The computational cost of the spatial filter increases with the standard deviation (i.e. with the size of the filter kernel), whereas the cost of a frequency-domain filter is independent of the filter function
Ø Hence, the spatial Gaussian filter is more appropriate for narrow low-pass filters, while the Butterworth filter is a better implementation for wide low-pass filters
High-pass filter
Ø A high-pass filter removes the low-frequency components of an image and keeps the high-frequency components. It is used to sharpen the image by attenuating low frequencies while preserving high frequencies.
Ø High-pass filters are precisely the reverse of low-pass filters, so
              Hhp(u, v) = 1 − Hlp(u, v)
Ø Different kinds of high-pass filters:
q Ideal high-pass filter
q Butterworth high-pass filter
q Gaussian high-pass filter
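Since Hhp(u, v) = 1 − Hlp(u, v), any low-pass mask can be turned into its high-pass counterpart with a single subtraction. A minimal sketch (the example mask is a hypothetical 1-D cross-section, used only for illustration):

```python
import numpy as np

def to_highpass(H_lp):
    """Convert a low-pass transfer function into the corresponding
    high-pass one via Hhp(u, v) = 1 - Hlp(u, v)."""
    return 1.0 - H_lp

# Hypothetical 1-D cross-section of an ideal low-pass mask:
H_lp = np.array([0.0, 1.0, 1.0, 1.0, 0.0])   # passes the centre band
H_hp = to_highpass(H_lp)                     # now passes the outer band
```

This is why the three high-pass filters below mirror the three low-pass filters exactly, with the roles of the pass and stop bands swapped.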



Ideal high-pass filter
Ø The simplest high-pass filter is the Ideal High-pass Filter (IHPF).
Ø A 2-D ideal high-pass filter is specified by the function:

              H(u, v) = 0   if D(u, v) ≤ D0
              H(u, v) = 1   if D(u, v) > D0

Ø Where D0 is a positive constant. The IHPF passes all frequencies outside a circle of radius D0 from the origin without attenuation and cuts off all frequencies within the circle.
Ø This D0 is the transition point between H(u, v) = 0 and H(u, v) = 1, so it is termed the cutoff frequency.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
Ideal high-pass filter

Perspective plot, image representation and cross section of Ideal high-pass filter



Butterworth high-pass filter
Ø The transfer function of a Butterworth high-pass filter (BHPF) of order n
with cutoff frequency at distance D0 from the origin is defined as:

              H(u, v) = 1 / [1 + (D0 / D(u, v))^(2n)]

Ø Where D0 is a positive constant. The BHPF passes high frequencies (D(u, v) > D0) with little attenuation and increasingly attenuates frequencies below D0.
Ø This D0 is the transition point between H(u, v) = 0 and H(u, v) = 1, so it is termed the cutoff frequency. But instead of making a sharp cutoff (like the IHPF), it introduces a smooth transition from 0 to 1 to reduce ringing artifacts.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
Butterworth high-pass filter

Perspective plot, image representation and cross section of the Butterworth high-pass filter



Gaussian high-pass filter
Ø The transfer function of a Gaussian high-pass filter (GHPF) is defined as:

              H(u, v) = 1 − e^(−D²(u, v) / 2D0²)

Ø Where D0 is a positive constant. The GHPF passes high frequencies (D(u, v) > D0) with little attenuation and increasingly attenuates frequencies below D0.
Ø This D0 is termed the cutoff frequency and plays the role of the standard deviation σ.
Ø D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane,
i.e. D(u, v) = √(u² + v²)
Ø The inverse Fourier transform of a Gaussian filter is also a Gaussian.
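A minimal NumPy sketch of the GHPF mask, assuming a centered spectrum as in the earlier examples (the function name is an illustrative choice):

```python
import numpy as np

def gaussian_highpass(shape, D0):
    """Gaussian high-pass filter H(u, v) = 1 - exp(-D^2(u, v) / (2 D0^2))
    on a centered frequency grid: the complement of the Gaussian low-pass."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2     # D^2(u, v)
    return 1.0 - np.exp(-D2 / (2.0 * D0 ** 2)) # blocks the centre, passes the outside
```

The mask is exactly 1 minus the Gaussian low-pass mask, so it is 0 at the spectrum centre and approaches 1 far from it, with no ringing in the spatial domain.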



Gaussian high-pass filter



Syllabus Summary
Ø Introduction [3L]
Background, Digital Image Representation, Fundamental steps in Image Processing,
Elements of Digital Image Processing - Image Acquisition, Storage, Processing,
Communication, Display
Ø Digital Image Formation [4L]
A Simple Image Model, Geometric Model- Basic Transformation (Translation, Scaling,
Rotation), Perspective Projection, Sampling & Quantization - Uniform & Non uniform
Ø Mathematical Preliminaries [9L]
Neighbours of pixels, Connectivity, Relations, Equivalence & Transitive Closure;
Distance Measures, Arithmetic/Logic Operations, Fourier Transformation, Properties of
the Two-Dimensional Fourier Transform, Discrete Fourier Transform, Discrete Cosine &
Sine Transform.
Syllabus Summary
Ø Image Enhancement [8L]
Spatial Domain Method, Frequency Domain Method, Contrast Enhancement - Linear &
Nonlinear Stretching, Histogram Processing; Smoothing - Image Averaging, Mean Filter,
Low-pass Filtering; Image Sharpening - High-pass Filtering, High-boost Filtering, Derivative
Filtering, Homomorphic Filtering; Enhancement in the Frequency Domain - Low-pass
Filtering, High-pass Filtering
Ø Image Restoration [7L]
Degradation Model, Discrete Formulation, Algebraic Approach to Restoration-Unconstrained &
Constrained; Constrained Least Square Restoration, Restoration by Homomorphic Filtering,
Geometric Transformation - Spatial Transformation, Gray Level Interpolation
Ø Image Segmentation [7L]
Point Detection, Line Detection, Edge detection, Combined detection, Edge Linking & Boundary
Detection - Local Processing, Global Processing via The Hough Transform; Thresholding -
Foundation, Simple Global Thresholding, Optimal Thresholding; Region Oriented Segmentation
- Basic Formulation, Region Growing by Pixel Aggregation, Region Splitting & Merging
References
Ø Gonzalez, R. C. and Woods, R. E. [2007]. Digital Image Processing (3rd edition), Prentice Hall
Ø Gose, E. and Johnsonbaugh, R. [1999]. Pattern Recognition and Image Analysis
Ø Duda, R. O., Hart, P. E. and Stork, D. G. [2001]. Pattern Classification, John Wiley & Sons, NY
