
IT – 78 DIGITAL IMAGE PROCESSING

LAB MANUAL

Prepared by
P. Venkatakrishnan
J. Jennifer Ranjani
List of Experiments

 Image Enhancement using Intensity Transform

 Bit Plane Slicing and Histogram Processing

 Image Enhancement in Spatial Domain

 Properties of 2D Transform

 Implementation of Successive Doubling Algorithm in FFT

 Image Enhancement in Frequency Domain

 Denoising using Wavelet Thresholding

 Line, Point and Edge Detection

 Huffman Coding
Properties of 2D Fourier Transform

Aim:
To verify the Translation and Separable properties of 2D Fourier Transform

Theory:

The 2D Fourier Transform satisfies several useful properties, including translation and separability. By the translation property, multiplying the image by exp[j2π(u0x + v0y)/N] and taking the transform of the product shifts the origin of the frequency plane to the point (u0, v0). Similarly, multiplying F(u,v) by the conjugate of the above exponential term and taking the inverse transform moves the origin of the spatial plane to (x0, y0).
The separability property offers the advantage that F(u,v) or f(x,y) can be obtained in two steps by successive application of the 1D Fourier Transform or its inverse.

Algorithm:

Translation
1) Input the gray scale image and resize it to ensure that M and N are even
2) Transform it into the Fourier domain
3) Translate the image by (M/2 + 1, N/2 + 1), since the computer implementation indexes from 1
4) Take the inverse FFT to show that the coordinate (1, 1) is shifted to (M/2, N/2), which is the centre of the image
5) Thus the translation property of the Fourier transform is verified.
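The steps above can be sketched in Python (the manual prescribes no language). A naive DFT is applied to a small hypothetical 4x4 image; with (u0, v0) = (M/2, N/2) the exponential reduces to (-1)^(x+y), and the spectrum of the modulated image is the original spectrum circularly shifted to the centre:

```python
import cmath

def dft2(img):
    """Naive 2D DFT of a small M x N image (list of lists)."""
    M, N = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

# Hypothetical 4x4 test image; M and N are even, as step 1 requires
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 8, 7, 6],
       [5, 4, 3, 2]]
M, N = 4, 4

# Multiply by exp(j2*pi*(u0*x/M + v0*y/N)) with (u0, v0) = (M/2, N/2),
# which reduces to (-1)^(x+y)
mod = [[img[x][y] * (-1) ** (x + y) for y in range(N)] for x in range(M)]

F = dft2(img)
Fmod = dft2(mod)

# Translation property: the DFT of the modulated image at (u, v) equals
# the DFT of the original at (u - M/2, v - N/2), taken modulo (M, N)
ok = all(abs(Fmod[u][v] - F[(u - M // 2) % M][(v - N // 2) % N]) < 1e-9
         for u in range(M) for v in range(N))
print(ok)  # True
```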

Separable Property
1) Input the gray scale image.
2) Perform a 2D Fourier transformation on the image.
3) Now perform a 1D transformation along the rows and then another 1D
transformation along the columns (the reverse order is also possible)
4) Compare the values of the 2D FFT and the two 1D FFT.
5) Both are equal and thus the separable property of the Fourier transform is
verified.
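The separability check can be sketched as follows, again with a naive 1D DFT on a hypothetical 2x2 image (values chosen arbitrarily):

```python
import cmath

def dft1(seq):
    """Naive 1D DFT."""
    N = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def dft2_direct(img):
    """Direct 2D DFT over both indices at once."""
    M, N = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

def dft2_separable(img):
    """2D DFT via successive 1D DFTs: rows first, then columns."""
    M, N = len(img), len(img[0])
    rows = [dft1(row) for row in img]                # 1D DFT of each row
    cols = [dft1([rows[x][v] for x in range(M)])     # 1D DFT down each column
            for v in range(N)]
    return [[cols[v][u] for v in range(N)] for u in range(M)]

img = [[1, 2], [3, 4]]
A = dft2_direct(img)
B = dft2_separable(img)
print(all(abs(A[u][v] - B[u][v]) < 1e-9 for u in range(2) for v in range(2)))  # True
```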

Result:
Thus the translation and separability properties of the Fourier Transform are implemented and verified.
Implementation of Successive Doubling Algorithm

Aim:
To implement the Fast Fourier Transform based on the successive doubling algorithm and to compare its computational complexity with that of the DFT

Theory:

The number of complex multiplications and additions in the computation of the DFT is proportional to N². Proper decomposition of the DFT equation can make the number of multiplications and additions proportional to N log₂ N. This is called the Fast Fourier Transform. The FFT algorithm is based on a method called successive doubling.

Algorithm:

1) Calculate the Discrete Fourier Transform of the given input and record its computation time
2) Let K = M/2
3) Calculate Feven and Fodd for the first K samples
4) Calculate F for the first K samples using the expression
F(u) = 0.5 [Feven(u) + Fodd(u) exp(-j2πu/2K)], where u = 1, 2, …, K
5) F for the remaining samples can be computed by
F(u + K) = 0.5 [Feven(u) - Fodd(u) exp(-j2πu/2K)], where u = 1, 2, …, K
6) Compare the computation time of the successive doubling algorithm with that of the DFT
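The recursion can be sketched in Python as below. Note that the 0.5 factor in the expressions above comes from a 1/M-normalized DFT definition; this sketch uses the unnormalized convention so that it matches the direct DFT it is checked against:

```python
import cmath

def dft(x):
    """Direct DFT: O(N^2) complex multiplications."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fft(x):
    """Radix-2 FFT by successive doubling; len(x) must be a power of 2.

    Cost is O(N log2 N) versus O(N^2) for the direct DFT above.
    """
    N = len(x)
    if N == 1:
        return x[:]
    even = fft(x[0::2])   # F_even from the even-indexed samples
    odd = fft(x[1::2])    # F_odd from the odd-indexed samples
    K = N // 2
    F = [0j] * N
    for u in range(K):
        w = cmath.exp(-2j * cmath.pi * u / N) * odd[u]
        F[u] = even[u] + w       # F(u)   = F_even(u) + W_N^u F_odd(u)
        F[u + K] = even[u] - w   # F(u+K) = F_even(u) - W_N^u F_odd(u)
    return F

x = [float(n % 7) for n in range(64)]
A, B = dft(x), fft(x)
print(all(abs(a - b) < 1e-6 for a, b in zip(A, B)))  # True
```

Timing both functions on larger inputs (step 6) shows the N log₂ N growth of the FFT against the N² growth of the direct DFT.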

Result:
Thus the Fast Fourier Transform based on the successive doubling algorithm is implemented and its computational complexity is compared with that of the DFT.
Image Compression using KL Transform

Aim:
To compress the multispectral image using KL transform

Theory:

The Karhunen-Loeve (KL) transform, also called the Hotelling transform, decorrelates the bands of a multispectral image by projecting its pixel vectors onto the eigenvectors of their covariance matrix. Most of the variance is packed into the components with the largest eigenvalues, so the lower-variance components can be discarded with little loss; this is the basis for compression.
Algorithm:
1) For the given n-band multispectral image, form an n-dimensional vector from each set of corresponding pixels in the bands.
2) Calculate the mean vector mx and the covariance matrix of these vectors
3) Form a matrix A whose rows are the eigenvectors of the covariance matrix, ordered so that the first row of A is the eigenvector corresponding to the largest eigenvalue and the last row corresponds to the smallest eigenvalue
4) Map each vector x into y by using A as a transformation matrix:
y = A(x - mx)
5) The vector x can be recovered using the expression
x = A^T y + mx
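For a two-band case the steps can be sketched in pure Python, since the covariance matrix is then 2x2 and its eigendecomposition has a closed form. The band values below are hypothetical:

```python
import math

# Hypothetical two-band image flattened to pixel vectors x = (band1, band2)
band1 = [52.0, 55.0, 61.0, 59.0, 79.0, 61.0, 76.0, 61.0]
band2 = [57.0, 54.0, 70.0, 66.0, 91.0, 68.0, 88.0, 70.0]
n = len(band1)

# Step 2: mean vector and 2x2 covariance matrix
m1, m2 = sum(band1) / n, sum(band2) / n
c11 = sum((a - m1) ** 2 for a in band1) / n
c22 = sum((b - m2) ** 2 for b in band2) / n
c12 = sum((a - m1) * (b - m2) for a, b in zip(band1, band2)) / n

# Step 3: eigenvalues/eigenvectors of [[c11, c12], [c12, c22]] in closed form
tr, det = c11 + c22, c11 * c22 - c12 * c12
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # lam1 >= lam2

def unit_eigvec(lam):
    # (c12, lam - c11) solves the eigenvector equation when c12 != 0
    vx, vy = c12, lam - c11
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

A = [unit_eigvec(lam1), unit_eigvec(lam2)]  # rows ordered by eigenvalue

# Step 4: y = A (x - m); Step 5: x = A^T y + m (A is orthogonal)
def forward(x):
    d = (x[0] - m1, x[1] - m2)
    return tuple(A[i][0] * d[0] + A[i][1] * d[1] for i in range(2))

def inverse(y):
    return (A[0][0] * y[0] + A[1][0] * y[1] + m1,
            A[0][1] * y[0] + A[1][1] * y[1] + m2)

x0 = (band1[0], band2[0])
print(inverse(forward(x0)))  # recovers (52.0, 57.0) up to rounding
```

Dropping the second component of y (the smaller eigenvalue) before inverting gives the compressed reconstruction.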
Image Enhancement using Gray level Transformation

Aim:
To enhance an image using negative transformation, contrast stretching and gray
level slicing

Theory:
Negative transformation is particularly suited for enhancing white or gray image details embedded in dark regions of an image, especially when black areas are dominant in size.
Contrast stretching (often called normalization) is a simple image enhancement technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains to span a desired range of values, e.g. the full range of pixel values that the image type allows. It differs from the more sophisticated histogram equalization in that it can only apply a linear scaling function to the image pixel values; as a result the 'enhancement' is less harsh.
Gray level slicing is done to highlight a specific range of gray levels in the given image. Prominent applications include enhancing features such as masses of water in satellite imagery and highlighting flaws in X-ray images.

Algorithm:

Image Negative
1) Input the image with gray levels in the range [0, L-1]
2) The Negative Transformation is obtained by transforming the original gray
level r into s using s = L – 1 – r
3) Thus the photographic negative of the image is obtained

Contrast Stretching

1) Set the upper and lower limits. For 8-bit gray level images, the lower and upper limits might be 0 and 255. Call the lower and upper limits a and b respectively.
2) Find the lowest and highest pixel values currently present in the image. Call these c and d. Each pixel P is then scaled using the following function:
Pout = (P - c) ((b - a) / (d - c)) + a
3) Alternatively, take a histogram of the image and select c and d at, say, the 5th and 95th percentiles of the histogram. This prevents outliers from affecting the scaling too much.
4) The scaled pixels give the contrast stretched image.
Gray Level Slicing
1) Input the gray scale image
2) Input the range (A, B) of pixel intensities to be highlighted
3) The first method sets all the pixels with intensities in this range to a higher intensity value and leaves the rest unchanged
4) The second method sets all the pixels with intensities in this range to a higher intensity value and sets the pixels outside this range to zero
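The three transformations can be sketched together in Python on a hypothetical list of 8-bit pixel values (the highlighted range and output level are arbitrary choices):

```python
L = 256  # 8-bit gray levels in [0, L-1]

pixels = [0, 30, 90, 120, 150, 200, 255]

# Image negative: s = L - 1 - r
negative = [L - 1 - r for r in pixels]

# Contrast stretching: map the current range [c, d] onto [a, b]
a, b = 0, 255
c, d = min(pixels), max(pixels)
stretched = [round((r - c) * (b - a) / (d - c) + a) for r in pixels]

# Gray level slicing over the range [A, B]
A_lo, B_hi, HIGH = 100, 180, 255
# Method 1: highlight [A, B], leave the rest unchanged
sliced1 = [HIGH if A_lo <= r <= B_hi else r for r in pixels]
# Method 2: highlight [A, B], set everything else to zero
sliced2 = [HIGH if A_lo <= r <= B_hi else 0 for r in pixels]

print(negative)  # [255, 225, 165, 135, 105, 55, 0]
print(sliced2)   # [0, 0, 0, 255, 255, 0, 0]
```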

Result
Thus the various gray level transformations are applied to an image and the enhancement is achieved.
Bit plane Slicing and Histogram Processing

Aim:
1) To show, using bit plane slicing, that most of the information in an image is carried by the most significant bits
2) To perform histogram equalization on an image to improve its contrast

Theory:

Slicing the image into its bit planes plays an important role in image processing; one application of this technique is data compression. The lower bit planes contain less information, and more of the visual information lies in the higher bit planes.
Histogram equalization improves contrast; its goal is to obtain an approximately uniform histogram. The technique can be applied to a whole image or to just a part of an image. Histogram equalization will not "flatten" a histogram; it redistributes the intensity distribution. If the histogram of an image has many peaks and valleys, it will still have peaks and valleys after equalization, but they will be shifted. Because of this, "spreading" is a better term than "flattening" to describe histogram equalization. Histogram equalization is a point process, so no new intensities are introduced into the image; existing values are simply mapped to new values.

Algorithm:

Bit Plane Slicing


1) Input the 8 bit gray scale image.
2) Find the binary equivalent of all the pixels
3) Form 8 bit planes
4) Set the lower bit plane arrays to zero, and reconstruct the original image with
these values
5) Compare the reconstructed image with the original image
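The slicing steps can be sketched with bit masks in Python (the pixel values and the choice of keeping the top four planes are arbitrary):

```python
# 8-bit pixels; keep only the top k bit planes and reconstruct
pixels = [0, 13, 200, 255, 97, 128, 64, 31]

def keep_top_planes(p, k):
    """Zero the lowest (8 - k) bit planes of pixel p."""
    mask = ((1 << k) - 1) << (8 - k)
    return p & mask

recon = [keep_top_planes(p, 4) for p in pixels]

# The worst-case error from dropping the 4 lower planes is 2^4 - 1 = 15
err = max(abs(a - b) for a, b in zip(pixels, recon))
print(recon, err)  # [0, 0, 192, 240, 96, 128, 64, 16] 15
```

The small maximum error illustrates step 5: the reconstruction from the high planes stays close to the original.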
Histogram Equalization
1) Input the gray scale image.
2) Compute the histogram of the image
3) Store the running sums of the histogram values in an array. In this array, element 1 contains the sum of histogram elements 1 and 0, and element 255 contains the sum of histogram elements 255, 254, 253, …, 1, 0.
4) Normalize this array by multiplying each element by (maximum pixel value / number of pixels).
5) Using this array as a lookup table, transform the input image
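A Python sketch of the equalization steps on a tiny hypothetical image follows; "maximum pixel value" in step 4 is taken here as L - 1 = 255:

```python
# Histogram equalization of a hypothetical 8-bit image
img = [52, 55, 61, 59, 79, 61, 76, 61, 52, 55, 61, 59, 79, 61, 76, 61]
L = 256
n = len(img)

# Step 2: histogram
hist = [0] * L
for p in img:
    hist[p] += 1

# Step 3: running sums (cdf[k] = sum of hist[0..k])
cdf = [0] * L
run = 0
for k in range(L):
    run += hist[k]
    cdf[k] = run

# Step 4: normalize by (maximum pixel value / number of pixels)
lut = [round(cdf[k] * (L - 1) / n) for k in range(L)]

# Step 5: use the lookup table to transform the image
eq = [lut[p] for p in img]
print(min(eq), max(eq))  # 32 255 - the narrow input range now spans [0, 255]
```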

Result
1) By bit plane slicing it is shown that the most significant bits carry most of the visual information
2) The contrast of an image is enhanced using histogram equalization
Denoising using Wavelet Thresholding

Aim:
To remove Gaussian noise from an image using wavelet shrinkage.

Theory:

Wavelet shrinkage is usually performed using one of two predominant thresholding schemes. The hard threshold Hh removes coefficients below a threshold value T determined by the noise variance; this is sometimes referred to as the "keep or kill" method. The soft threshold filter Hs shrinks the wavelet coefficients above and below the threshold, reducing coefficients toward zero. It has been shown that if we desire the resulting signal to be smooth, the soft threshold filter should be used; the hard threshold filter, however, can give better results for some images. A small threshold value leaves the result close to the noisy input, while a large threshold value introduces bias. For certain applications, the optimal threshold is simply computed as a constant c times the noise variance. The universal method assigns a threshold equal to the noise standard deviation times sqrt(2 log(n)), where n is the sample size.

Algorithm:

1) Input the 8 bit gray scale image.
2) Choose an appropriate threshold.
3) Represent the image in the wavelet domain
4) Perform wavelet shrinkage by either soft or hard thresholding; compare the results to identify the better shrinkage method.
5) Invert the wavelet transform to get the denoised image.
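The two shrinkage rules and the universal threshold can be sketched as follows; the coefficients and noise level are hypothetical, and a full denoiser would first transform the image into the wavelet domain (step 3) and invert it afterwards (step 5):

```python
import math

def hard_threshold(coeffs, T):
    """'Keep or kill': zero coefficients with magnitude at most T."""
    return [w if abs(w) > T else 0.0 for w in coeffs]

def soft_threshold(coeffs, T):
    """Shrink every surviving coefficient toward zero by T."""
    return [math.copysign(max(abs(w) - T, 0.0), w) for w in coeffs]

def universal_threshold(sigma, n):
    """Universal threshold: sigma * sqrt(2 log n)."""
    return sigma * math.sqrt(2.0 * math.log(n))

# Hypothetical detail coefficients: a few large ones carry the signal,
# the small ones are mostly noise
w = [9.0, -7.5, 0.4, -0.3, 6.2, 0.1, -0.2, 0.5]
T = universal_threshold(sigma=1.0, n=len(w))  # about 2.04

print(hard_threshold(w, T))  # large coefficients kept unchanged
print(soft_threshold(w, T))  # large coefficients shrunk toward zero
```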

Result

Thus, using wavelet shrinkage, Gaussian noise is removed from the image.
Line, Point and Edge Detection

Aim:
To perform line detection, point detection, and edge detection in the given image.

Theory:

Points, lines and edges are the three basic types of gray level discontinuities in a digital image. The most common way to look for discontinuities is to run a mask through the image. An isolated point is quite different from its surroundings and is thus easily detectable with a simple mask. The next level of complexity is line detection: four masks are used to detect horizontal lines, vertical lines, and lines oriented at +45° and -45°. Each mask responds most strongly to lines oriented in its direction.
Edge detection is the most common approach for detecting meaningful discontinuities in gray level. The Sobel, Prewitt and Roberts cross detectors are commonly used edge detectors.

Algorithm:

1) Input the 8 bit gray scale image.


2) To perform point detection the following kernel is used:
[-1 -1 -1; -1 8 -1; -1 -1 -1]
3) Horizontal and vertical edges can be detected using the kernels
[-1 -2 -1; 0 0 0; 1 2 1] and [-1 0 1; -2 0 2; -1 0 1] respectively
4) For line detection, determine the responses R1, R2, R3 and R4 of the horizontal, +45°, vertical and -45° kernels
[-1 -1 -1; 2 2 2; -1 -1 -1], [-1 -1 2; -1 2 -1; 2 -1 -1], [-1 2 -1; -1 2 -1; -1 2 -1] and [2 -1 -1; -1 2 -1; -1 -1 2] respectively.
5) If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, the point is more likely to be associated with a line in the direction of mask i.
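A Python sketch of steps 2, 4 and 5, using hypothetical 5x5 test images (an isolated bright pixel and a horizontal line):

```python
def response(img, kernel, x, y):
    """Correlate a 3x3 kernel with img centred at (x, y)."""
    return sum(kernel[i][j] * img[x + i - 1][y + j - 1]
               for i in range(3) for j in range(3))

# Step 2: point detection on an isolated bright pixel
point = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
img_pt = [[10] * 5 for _ in range(5)]
img_pt[2][2] = 90
print(response(img_pt, point, 2, 2))  # 8*90 - 8*10 = 640

# Steps 4-5: run all four line masks and keep the largest |Ri|
masks = {
    "horizontal": [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]],
    "+45":        [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],
    "vertical":   [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]],
    "-45":        [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],
}
img_ln = [[10] * 5 for _ in range(5)]
for y in range(5):
    img_ln[2][y] = 90  # a horizontal line through row 2
R = {name: abs(response(img_ln, m, 2, 2)) for name, m in masks.items()}
print(max(R, key=R.get))  # horizontal
```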

Result

Thus the points, lines, and edges are detected from the given image.
Huffman Codes

Aim:
To code the symbols of an information source, using Huffman coding.

Theory:

Huffman coding yields the smallest possible number of code symbols per source symbol. It creates the optimal code for a set of symbols and probabilities, subject to the constraint that the symbols be coded one at a time. After the code has been created, coding and decoding are accomplished in a simple lookup-table manner. The result is called an instantaneous, uniquely decodable block code.
It is called a block code because each source symbol is mapped into a fixed sequence of code symbols. It is instantaneous because each code word in a string of code symbols can be decoded without referencing succeeding symbols. It is uniquely decodable because any string of code symbols can be decoded by examining the individual symbols of the string in a left-to-right manner.

Algorithm:

1) Create a series of source reductions by ordering the probabilities of the symbols under consideration.
2) Combine the two lowest-probability symbols into a single symbol and replace them in the next source reduction.
3) Repeat step 2 until a reduced source with two symbols is reached.
4) Code each reduced source, starting with the smallest source and working back to the original source.
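The reduction-and-merge procedure can be sketched with a priority queue; the six-symbol probabilities below are a hypothetical example:

```python
import heapq
from itertools import count

def huffman(probs):
    """Build Huffman codes for {symbol: probability} by source reduction."""
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # the two least probable "symbols"
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # prepend a bit on the
        merged.update({s: "1" + c for s, c in c2.items()})  # way back up
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"a2": 0.4, "a6": 0.3, "a1": 0.1, "a4": 0.1, "a3": 0.06, "a5": 0.04}
codes = huffman(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)
print(codes)
print(round(avg, 2))  # 2.2 code symbols per source symbol
```

Any valid Huffman tie-breaking yields the same average length, even if individual code lengths differ.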

Result
Thus the symbols of an information source are coded with the optimal number of code symbols per source symbol, using Huffman codes.
Image Enhancement in the Spatial Domain

Aim:
To enhance an image in the spatial domain using smoothening and sharpening
filters

Theory:

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. The term spatial refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image. Smoothening filters are used for blurring and noise reduction. Blurring is used in preprocessing steps, such as the removal of small details from an image prior to object extraction and the bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition. Uses of image sharpening range from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems.

Algorithm:

1) Input the image to be enhanced.
2) Replace the value of every pixel by the average of the gray levels in the neighborhood defined by the filter mask.
3) Sharp transitions in gray level are thereby reduced.
4) Apply the Laplacian operator over the image to highlight gray level discontinuities and deemphasize regions with slowly varying gray levels.
5) Recover the background features by adding the original and Laplacian images, thereby preserving the sharpening effect of the Laplacian operation.
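A Python sketch of the averaging and Laplacian steps on a hypothetical 4x4 image. A positive-centre Laplacian mask is assumed so that step 5 is a plain addition; a real implementation would also clip the result back to [0, 255]:

```python
def filter3x3(img, kernel, scale=1.0):
    """Apply a 3x3 mask to the interior pixels (borders left unchanged)."""
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            out[x][y] = scale * sum(kernel[i][j] * img[x + i - 1][y + j - 1]
                                    for i in range(3) for j in range(3))
    return out

box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]      # averaging (smoothening) mask
lap = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]  # Laplacian mask, positive centre

img = [[10, 10, 10, 10],
       [10, 80, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]

smooth = filter3x3(img, box, scale=1 / 9)   # steps 2-3: sharp spike is averaged down
lap_img = filter3x3(img, lap)               # step 4: discontinuity highlighted
# Step 5: add the original and Laplacian images to sharpen
sharpened = [[img[x][y] + lap_img[x][y] for y in range(4)] for x in range(4)]
print(round(smooth[1][1], 2), sharpened[1][1])  # 17.78 360
```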

Result:
Thus the image is enhanced using the smoothening and sharpening filters in the
spatial domain
Image Enhancement in the Frequency Domain

Aim:
To enhance an image in the frequency domain using smoothening and sharpening
filters

Theory:

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. Frequency domain processing techniques are based on modifying the Fourier transform of the image.
Edges and sharp transitions in the gray levels of an image contribute significantly to the high frequency content of its Fourier transform. Smoothening is therefore achieved in the frequency domain by attenuating a range of the high frequency components of the transform of a given image. Image sharpening can be achieved by a high pass filtering process, which attenuates the low frequency components without disturbing the high frequency information in the Fourier transform.

Algorithm:

1) Input the image to be enhanced.


2) The basic model of filtering in the frequency domain is given by
G(u,v) = H(u,v) F(u,v)
3) Compute the Fourier transform of the input image.
4) Compute the distance D(u,v) = [(u - M/2)² + (v - N/2)²]^(1/2)
5) The ideal low pass filter is given by
H(u,v) = 1 if D(u,v) ≤ D0, and 0 if D(u,v) > D0
6) The ideal high pass transfer function is given by
H(u,v) = 1 if D(u,v) > D0, and 0 if D(u,v) ≤ D0
7) Take the inverse transform of G(u,v) to get the enhanced image
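The low pass case can be sketched with a naive DFT on a small hypothetical image. A checkerboard is pure high frequency, so an ideal low pass filter with a small D0 should leave only the mean gray level:

```python
import cmath, math

def dft2(img, inverse=False):
    """Naive 2D (inverse) DFT of a small M x N image."""
    M, N = len(img), len(img[0])
    s = 1 if inverse else -1
    out = [[sum(img[x][y] * cmath.exp(s * 2j * cmath.pi * (u * x / M + v * y / N))
                for x in range(M) for y in range(N))
            for v in range(N)] for u in range(M)]
    if inverse:
        out = [[val / (M * N) for val in row] for row in out]
    return out

M = N = 8
# Checkerboard of 50s and 100s; mean gray level is 75
img = [[(x + y) % 2 * 50.0 + 50.0 for y in range(N)] for x in range(M)]

F = dft2(img)
D0 = 2.0

def H_lowpass(u, v):
    """Ideal LPF: 1 if D(u,v) <= D0, measured from the centred spectrum."""
    uc, vc = (u + M // 2) % M, (v + N // 2) % N  # shift so DC sits at centre
    D = math.hypot(uc - M / 2, vc - N / 2)
    return 1.0 if D <= D0 else 0.0

# G(u,v) = H(u,v) F(u,v), then invert to get the filtered image
G = [[H_lowpass(u, v) * F[u][v] for v in range(N)] for u in range(M)]
g = dft2(G, inverse=True)

# The alternation sits at the highest frequency (M/2, N/2) and is removed;
# only the mean gray level survives
print(round(g[0][0].real, 6), round(g[3][4].real, 6))  # 75.0 75.0
```

Swapping the inequality in H_lowpass gives the ideal high pass filter of step 6.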

Result:
Thus the image is enhanced using the smoothening and sharpening filters in the
frequency domain
