Lab Manual
Prepared by
P. Venkatakrishnan
J. Jennifer Ranjani
List of Experiments
Properties of 2D Fourier Transform
Implementation of Successive Doubling Algorithm
Image Compression using KL Transform
Image Enhancement using Gray Level Transformation
Bit Plane Slicing and Histogram Processing
Denoising using Wavelet Thresholding
Line, Point and Edge Detection
Huffman Codes
Image Enhancement in the Spatial Domain
Image Enhancement in the Frequency Domain
Properties of 2D Fourier Transform
Aim:
To verify the Translation and Separable properties of 2D Fourier Transform
Theory:
Algorithm:
Translation
1) Input the gray-scale image and resize it so that M and N are both even
2) Transform the image into the Fourier domain
3) Translate the image by (M/2 + 1, N/2 + 1), since the computer implementation
indexes arrays from 1
4) Take the inverse FFT to show that the coordinate (1, 1) is shifted to (M/2, N/2),
which is the centre of the image
5) Thus the translation property of the Fourier transform is verified
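The translation steps above can be sketched in NumPy; note that NumPy indexes from 0, so the centre lands at index (M/2, N/2). The random array is a hypothetical stand-in for the gray-scale image:

```python
import numpy as np

# Hypothetical even-sized gray-scale image (any 2D array works).
M, N = 8, 8
rng = np.random.default_rng(0)
f = rng.random((M, N))

# Translation property: multiplying f(x, y) by (-1)^(x+y) in the spatial
# domain translates the spectrum so zero frequency lands at (M/2, N/2).
x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
F_centred = np.fft.fft2(f * (-1.0) ** (x + y))

# np.fft.fftshift applies the same centring directly to the spectrum.
assert np.allclose(F_centred, np.fft.fftshift(np.fft.fft2(f)))
```
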
Separable Property
1) Input the gray scale image.
2) Perform a 2D Fourier transformation on the image.
3) Now perform a 1D transformation along the rows and then another 1D
transformation along the columns (the order may also be reversed)
4) Compare the values of the 2D FFT and the two successive 1D FFTs.
5) Both are equal, and thus the separable property of the Fourier transform is
verified.
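The comparison can be sketched in NumPy on a hypothetical test array:

```python
import numpy as np

# Separability: the 2D FFT equals a 1D FFT along every row followed by a
# 1D FFT along every column (the order may be swapped).
rng = np.random.default_rng(1)
f = rng.random((8, 8))                      # hypothetical gray-scale image

F_2d = np.fft.fft2(f)
F_rows_then_cols = np.fft.fft(np.fft.fft(f, axis=1), axis=0)
F_cols_then_rows = np.fft.fft(np.fft.fft(f, axis=0), axis=1)

assert np.allclose(F_2d, F_rows_then_cols)
assert np.allclose(F_2d, F_cols_then_rows)
```
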
Result:
Thus the Translation and Separability properties of the Fourier Transform are
implemented and verified.
Implementation of Successive Doubling Algorithm
Aim:
To implement the fast Fourier transform based on the successive doubling
algorithm and to compare its computational complexity with that of the DFT
Theory:
Algorithm:
1) Calculate the Discrete Fourier Transform of the given input and record its
computation time
2) Let the value of K = M/2
3) Calculate Feven and Fodd for the first K samples
4) Calculate F also for the first K samples using the following expression
F(u) = 0.5 [Feven(u) + Fodd(u) exp(−j2πu/(2K))] where u = 0, 1, …, K − 1
5) F for the remaining samples can be computed by
F(u + K) = 0.5 [Feven(u) − Fodd(u) exp(−j2πu/(2K))] where u = 0, 1, …, K − 1
6) Compare the computational time required to implement the successive
doubling algorithm with that of DFT
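A minimal NumPy sketch of the recursion, written with the 1/M-normalised DFT convention implied by the 0.5 factor in the expressions above (the input length is assumed to be a power of two):

```python
import numpy as np

def dft(f):
    """Direct O(M^2) DFT with the 1/M normalisation used above."""
    M = len(f)
    u = np.arange(M).reshape(-1, 1)
    x = np.arange(M)
    return (f * np.exp(-2j * np.pi * u * x / M)).sum(axis=1) / M

def fft_doubling(f):
    """Successive doubling (radix-2 decimation-in-time) FFT."""
    M = len(f)
    if M == 1:
        return np.asarray(f, dtype=complex)
    K = M // 2
    Feven = fft_doubling(f[0::2])            # even-indexed samples
    Fodd = fft_doubling(f[1::2])             # odd-indexed samples
    u = np.arange(K)
    W = np.exp(-2j * np.pi * u / (2 * K))    # twiddle factors
    F = np.empty(M, dtype=complex)
    F[:K] = 0.5 * (Feven + Fodd * W)         # F(u)
    F[K:] = 0.5 * (Feven - Fodd * W)         # F(u + K)
    return F

f = np.random.default_rng(2).random(16)      # hypothetical input signal
assert np.allclose(fft_doubling(f), dft(f))
```

Timing the two functions on progressively longer inputs exhibits the O(M log M) versus O(M^2) gap that step 6 asks for.
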
Result:
Thus the fast Fourier transform based on the successive doubling algorithm is
implemented and its computational complexity is compared with that of the DFT.
Image Compression using KL Transform
Aim:
To compress the multispectral image using KL transform
Theory:
Algorithm:
1) For the given n-band multispectral image, calculate the n-dimensional
vector from each set of corresponding pixels in the images.
2) Calculate the mean and covariance matrix for the vectors
3) Form a matrix A whose rows are the eigenvectors of the covariance
matrix, ordered so that the first row of A is the eigenvector
corresponding to the largest eigenvalue and the last row corresponds to the
smallest eigenvalue
4) Map each vector x into y using A as the transformation matrix:
y = A(x − mx)
5) The vector x can be recovered using the expression
x = A^T y + mx
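The steps above can be sketched in NumPy on a hypothetical 3-band image:

```python
import numpy as np

# Hypothetical 3-band multispectral image, 16x16 pixels per band.
rng = np.random.default_rng(3)
bands = rng.random((3, 16, 16))
X = bands.reshape(3, -1)               # one 3-vector per pixel (columns)

mx = X.mean(axis=1, keepdims=True)     # mean vector
C = np.cov(X)                          # 3x3 covariance matrix

# Rows of A are the eigenvectors of C, ordered by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
A = eigvecs[:, order].T

Y = A @ (X - mx)                       # forward transform: y = A(x - mx)
X_rec = A.T @ Y + mx                   # inverse transform: x = A^T y + mx

assert np.allclose(X, X_rec)           # A is orthogonal, so recovery is exact
```

Compression comes from keeping only the first k rows of A (the components with the largest eigenvalues); reconstruction is then approximate rather than exact.
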
Image Enhancement using Gray level Transformation
Aim:
To enhance an image using negative transformation, contrast stretching and gray
level slicing
Theory:
Negative transformation is particularly suited for enhancing white or gray image
details embedded in dark regions of an image, especially when black areas are dominant
in size.
Contrast stretching (often called normalization) is a simple image enhancement
technique that attempts to improve the contrast in an image by "stretching" the range of
intensity values it contains to span a desired range of values, e.g. the full range of pixel
values that the image type concerned allows. It differs from the more sophisticated
histogram equalization in that it can only apply a linear scaling function to the image
pixel values. As a result the "enhancement" is less harsh.
Gray-level slicing is done to highlight a specific range of gray levels in the given
image. Prominent applications include enhancing features such as masses of water in
satellite imagery and highlighting flaws in X-ray images.
Algorithm:
Image Negative
1) Input the image with gray levels in the range [0, L-1]
2) The Negative Transformation is obtained by transforming the original gray
level r into s using s = L – 1 – r
3) Thus the photographic negative of the image is obtained
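The negative transformation can be sketched in NumPy for an 8-bit image (the ramp array is a hypothetical test image):

```python
import numpy as np

# Image negative for an 8-bit image (L = 256): s = L - 1 - r.
L = 256
r = np.arange(L, dtype=np.uint8).reshape(16, 16)   # hypothetical gray ramp
s = (L - 1 - r.astype(int)).astype(np.uint8)       # negative transformation

assert s[0, 0] == 255 and s[-1, -1] == 0           # black and white swap
```
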
Contrast Stretching
1) Set the upper and lower limits. For 8-bit gray level images, the lower and
upper limits might be 0 and 255. Call the lower and the upper limits a, and b
respectively.
2) Find the lowest and highest pixel values currently present in the image.
Call these c and d. Then each pixel P is scaled using the following function:
Pout = (P − c) ((b − a) / (d − c)) + a
3) Alternatively, take a histogram of the image, and then select c and d at, say,
the 5th and 95th percentiles in the histogram. This prevents outliers from
affecting the scaling so much.
4) The scaled pixels give the contrast-stretched image.
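The scaling can be sketched in NumPy; the low-contrast image and the percentile choices here are hypothetical:

```python
import numpy as np

# Contrast stretching: map input range [c, d] onto output range [a, b]
# using Pout = (P - c) * (b - a) / (d - c) + a, with c and d taken at the
# 5th and 95th percentiles to suppress outliers.
rng = np.random.default_rng(4)
img = rng.integers(60, 180, size=(32, 32)).astype(float)  # low-contrast image

a, b = 0.0, 255.0
c, d = np.percentile(img, 5), np.percentile(img, 95)

stretched = np.clip((img - c) * (b - a) / (d - c) + a, a, b)

assert stretched.min() >= a and stretched.max() <= b
assert stretched.std() > img.std()    # contrast (spread) has increased
```
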
Gray Level Slicing
1) Input the Gray scale image
2) Input the range of pixel intensities (A, B) to be highlighted
3) In the first method, set all the pixels with intensities in this range to a higher
intensity value, leaving the remaining pixels unchanged
4) In the second method, set all the pixels with intensities in this range to a higher
intensity value and set the pixels outside this range to zero
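Both methods can be sketched in NumPy; the image and the range (A, B) are hypothetical:

```python
import numpy as np

# Gray-level slicing for a hypothetical range [A, B].
rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
A, B = 100, 150
mask = (img >= A) & (img <= B)

# Method 1: brighten the selected range, keep other pixels unchanged.
method1 = img.copy()
method1[mask] = 255

# Method 2: brighten the selected range, suppress everything else.
method2 = np.where(mask, 255, 0).astype(np.uint8)

assert np.all(method1[mask] == 255)
assert np.all(method1[~mask] == img[~mask])
assert np.all(method2[~mask] == 0)
```
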
Result:
Thus the various gray-level transformations are applied to an image and
enhancement is achieved
Bit plane Slicing and Histogram Processing
Aim:
1) To show, using bit plane slicing, that most of the information in an image is
carried by the most significant bits
2) To perform histogram equalization on an image to improve its contrast
Theory:
Slicing the image at different planes (bit-planes) plays an important role in image
processing. An application of this technique is data compression. The lower bit planes
contain less information and more of the visual information is in the higher bit planes.
Histogram equalization improves contrast and its goal is to obtain a uniform
histogram. This technique can be used on a whole image or just on a part of an image.
Histogram equalization will not "flatten" a histogram. It redistributes intensity
distributions. If the histogram of any image has many peaks and valleys, it will still have
peaks and valleys after equalization, but the peaks and valleys will be shifted. Because of
this, "spreading" is a better term than "flattening" to describe histogram equalization.
Histogram equalization is a point process, and new intensities will not be introduced into
the image. Existing values will be mapped to new values.
Algorithm:
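One possible NumPy sketch of both parts, using a hypothetical random image:

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)

# --- Bit plane slicing: plane k holds bit k of every pixel. ---
planes = [(img >> k) & 1 for k in range(8)]
# Reconstructing from the top 4 planes keeps most of the visual information;
# the error is at most the value of the discarded lower 4 bits.
top4 = sum(planes[k].astype(int) << k for k in range(4, 8)).astype(np.uint8)
assert np.all(np.abs(img.astype(int) - top4.astype(int)) < 16)

# --- Histogram equalization: map each level through the scaled CDF. ---
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum() / img.size
equalized = np.round(255 * cdf[img]).astype(np.uint8)
assert equalized.max() == 255          # the mapping reaches the full range
```
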
Result:
1) By bit plane slicing it is shown that the most significant bits carry
most of the visual information
2) The contrast of an image is enhanced using histogram equalization
Denoising using Wavelet Thresholding
Aim:
To remove the Gaussian noise in an image using wavelet shrinkage.
Theory:
Algorithm:
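One possible sketch uses a single-level 1D Haar transform implemented directly in NumPy, soft thresholding, and the universal threshold; a full implementation would use a 2D multilevel DWT (e.g. via PyWavelets):

```python
import numpy as np

def haar(x):
    """Single-level 1D Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return a, d

def ihaar(a, d):
    """Inverse of haar(): perfect reconstruction."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(w, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(7)
clean = np.repeat([0.0, 4.0, 0.0, 4.0], 64)       # piecewise-constant signal
sigma = 0.5
noisy = clean + rng.normal(0, sigma, clean.size)  # additive Gaussian noise

a, d = haar(noisy)
t = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
denoised = ihaar(a, soft(d, t))

# Shrinking the detail coefficients reduces the noise energy.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```
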
Result
Thus, using wavelet shrinkage, the Gaussian noise in the image is removed.
Line, Point and Edge Detection
Aim:
To perform line detection, point detection, and edge detection in the given image.
Theory:
Points, lines and edges are the three basic types of gray-level discontinuities in a
digital image. The most common way to look for discontinuities is to run a mask through
the image. An isolated point will be quite different from its surroundings and is thus
easily detectable by running a simple mask. The next level of complexity is line
detection. Four masks are used to detect horizontal lines, vertical lines, and lines oriented
at +45° and −45°. Each mask responds more strongly to lines oriented in its direction.
Edge detection is the most common approach for detecting meaningful
discontinuities in gray level. The Sobel, Prewitt, and Roberts cross detectors are
commonly used edge detectors.
Algorithm:
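The mask-based detection described in the theory can be sketched in NumPy with standard 3x3 masks (the single-point test image is hypothetical):

```python
import numpy as np

def apply_mask(img, k):
    """Valid-region correlation of a 3x3 mask with the image."""
    M, N = img.shape
    out = np.zeros((M - 2, N - 2))
    for i in range(M - 2):
        for j in range(N - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

point = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])   # isolated points
hline = np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]])  # horizontal lines
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])      # vertical edges

# A single bright point in a dark field gives its strongest response
# exactly at the point's location.
img = np.zeros((9, 9))
img[4, 4] = 1.0
response = np.abs(apply_mask(img, point))
assert response.argmax() == response.size // 2
```
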
Result
Thus the points, lines, and edges are detected from the given image.
Huffman Codes
Aim:
To code the symbols of an information source, using Huffman coding.
Theory:
Huffman coding yields the smallest possible number of code symbols per source
symbol. It also creates the optimal code for a set of symbols and probabilities subject to
the constraint that the symbols be coded one at a time. After the code has been created,
coding and decoding are accomplished in a simple look-up-table manner. It is called an
instantaneous, uniquely decodable block code.
It is called a block code because each source symbol is mapped into a fixed
sequence of code symbols. It is instantaneous because each code word in a string of code
symbols can be decoded without referencing succeeding symbols. It is uniquely
decodable, because any string of code symbols can be decoded by examining the
individual symbols of the string in a left to right manner.
Algorithm:
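One way to sketch the construction in Python is with a min-heap that repeatedly merges the two least probable nodes; the source alphabet and probabilities below are hypothetical:

```python
import heapq

def huffman_codes(probs):
    """probs: dict symbol -> probability; returns dict symbol -> bit string."""
    # Each heap entry: (probability, tiebreak counter, {symbol: code-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.4, "b": 0.3, "c": 0.1, "d": 0.1, "e": 0.06, "f": 0.04}
codes = huffman_codes(probs)

# More probable symbols never receive longer code words.
assert len(codes["a"]) <= len(codes["f"])
```

Because every code word sits at a leaf of the merge tree, no code word is a prefix of another, which is what makes the code instantaneous.
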
Result
Thus the symbols of an information source are coded with the optimal number of
code symbols per source symbol, using Huffman codes.
Image Enhancement in the Spatial Domain
Aim:
To enhance an image in the spatial domain using smoothing and sharpening
filters
Theory:
Algorithm:
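A minimal NumPy sketch, assuming a 3x3 averaging (box) mask for smoothing and a Laplacian mask for sharpening; the random test image is hypothetical:

```python
import numpy as np

def filter3x3(img, k):
    """Valid-region correlation of a 3x3 mask with the image."""
    M, N = img.shape
    out = np.zeros((M - 2, N - 2))
    for i in range(M - 2):
        for j in range(N - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

box = np.full((3, 3), 1.0 / 9.0)                       # smoothing mask
laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)

rng = np.random.default_rng(8)
img = rng.random((16, 16))

smoothed = filter3x3(img, box)
# Centre-positive Laplacian: adding its response to the image sharpens it.
sharpened = img[1:-1, 1:-1] + filter3x3(img, laplacian)

assert smoothed.std() < img.std()        # averaging reduces variation
assert sharpened.std() > smoothed.std()  # sharpening boosts detail
```
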
Result:
Thus the image is enhanced using the smoothing and sharpening filters in the
spatial domain
Image Enhancement in the Frequency Domain
Aim:
To enhance an image in the frequency domain using smoothing and sharpening
filters
Theory:
Algorithm:
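A minimal NumPy sketch, assuming a Gaussian low-pass filter for smoothing and its complement for sharpening; the image and the cutoff D0 are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)
img = rng.random((32, 32))
M, N = img.shape

# Distance of each frequency from the centre of the shifted spectrum.
u = np.arange(M) - M // 2
v = np.arange(N) - N // 2
D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)

D0 = 8.0                                  # hypothetical cutoff frequency
H = np.exp(-(D ** 2) / (2 * D0 ** 2))     # Gaussian low-pass filter

F = np.fft.fftshift(np.fft.fft2(img))     # centred spectrum
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
sharpened = np.real(np.fft.ifft2(np.fft.ifftshift(F * (1 - H))))

assert smoothed.std() < img.std()                 # high frequencies removed
assert np.allclose(smoothed + sharpened, img)     # H and 1-H split the spectrum
```
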
Result:
Thus the image is enhanced using the smoothing and sharpening filters in the
frequency domain