Digital Image Processing 2020-2021 - Multimedia Course
Third Stage / Second Semester
Course Instructor: Asst. Prof. Dr. Nassir H. Salman
Digital Image Processing, 2nd ed. www.imageprocessingbook.com
IMAGE FORMATION
% Read, display, and inspect an image, then mirror it
A = imread('c:\lena.jpg');
figure
imshow(A)
imfinfo('C:\lena.jpg')

% Mirror the image about the vertical axis (image assumed 380x380)
for i = 1 : 380
    for j = 1 : 380
        B(i, 380+1-j) = A(i, j);
    end
end

figure
subplot(2,2,1), imshow(A)
subplot(2,2,2), imshow(B)
subplot(2,2,3), imshow(A)
subplot(2,2,4), imhist(B)
size(B)
L = 2^K  (an image quantized to K bits per pixel has L gray levels)
• Neighborhood
• Adjacency
• Connectivity
• Paths
• Adjacency
Let V be the set of intensity values
IMAGE FORMATION
From lecture one:
Image formation
Sensor types
Image types
Image file size
Introduction
• What is Digital Image Processing?
Digital Image
— A two-dimensional function f(x, y), where x and y are spatial coordinates.
The amplitude of f at the point (x, y) is called the intensity or gray level.
Pixel
— The elements of a digital image
Image definition
(Figure: the Blue, Green, and Red channels of a colour image)
As a matrix:

f(x, y) = [ a(0,0)      a(0,1)      ...  a(0,N−1)
            a(1,0)      a(1,1)      ...  a(1,N−1)
            ...
            a(M−1,0)    a(M−1,1)    ...  a(M−1,N−1) ]

© 2002 R. C. Gonzalez & R. E. Woods
Pixels in image
The digital image produced by the digitizer goes into temporary storage on a
suitable device. In response to instructions from the operator, the computer
calls up and executes image processing programs from a library.
During execution, the input image is read into the computer line by line.
Operating on one or several lines, the computer generates the output image,
pixel by pixel, and stores it on the output data storage device, line by line.
Example
Chapter 1: Introduction
2- X-ray imaging
3- Ultraviolet imaging
Satellite imaging
1- Spectral bands
Results of automated reading of the plate content by the system
(Diagram: fundamental steps in DIP — image acquisition, image enhancement, image restoration, colour image processing, wavelets & multiresolution processing, compression, morphological processing, segmentation, representation & description, and object recognition, all built on a knowledge base and starting from the problem domain)
Fundamental Steps in DIP:
Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and, if the output of the
camera or sensor is not already in digital form, it is digitized using an
analogue-to-digital converter.
Image acquisition equipment
Cont. Fundamental Steps in DIP:
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable
than the original for a specific application.
The idea behind enhancement techniques is to bring out details that are
hidden, or simply to highlight certain features of interest in an image.
Common techniques include:
•Filtering with morphological operators.
•Histogram equalization.
•Noise removal using a Wiener filter.
•Linear contrast adjustment.
•Median filtering.
•Unsharp mask filtering.
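The course demonstrates these techniques in MATLAB; as a rough cross-check of the idea, histogram equalization can be sketched in Python with NumPy (the function name and the tiny test image below are illustrative, not from the lecture):

```python
import numpy as np

def equalize_hist(img):
    """Histogram-equalize an 8-bit grayscale image (NumPy sketch)."""
    # Count how many pixels take each of the 256 gray levels
    hist = np.bincount(img.ravel(), minlength=256)
    # Cumulative distribution, normalized to [0, 1]
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]
    # Lookup table mapping old levels onto the full [0, 255] range
    lut = np.round(255 * cdf).astype(np.uint8)
    # Map every pixel through the lookup table
    return lut[img]

# A low-contrast image whose values are squeezed into [100, 103]
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103]], dtype=np.uint8)
out = equalize_hist(img)   # values now spread across [64, 255]
```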
Cont. Fundamental Steps in DIP:
Step 3: Image Restoration
- Improving the appearance of an image.
- Restoration techniques tend to be based on mathematical or probabilistic
models of image degradation. Enhancement, on the other hand, is based on
human subjective preferences regarding what constitutes a "good"
enhancement result.
Cont. Fundamental Steps in DIP:
Step 4: Colour Image Processing
Use the colour of the image to extract
features of interest in an image
Color to grey and negative
Cont. Fundamental Steps in DIP:
Step 5: Wavelets
Wavelets are the foundation for representing images at various degrees of
resolution. They are used for image data compression.
Fundamental Steps in DIP:
Step 6: Compression
Techniques for reducing the storage
required to save an image or the
bandwidth required to transmit it.
Fundamental Steps in DIP:
Step 7: Morphological Processing
Tools for extracting image components
that are useful in the representation and
description of shape.
Fundamental Steps in DIP:
Step 8: Segmentation
A fully convolutional network classifies the object class for each pixel
within an image; that means there is a label for each pixel.
Ground truths are "true and accurate" segmentations, typically made by one
or more human experts.
Fundamental Steps in DIP:
Step 9: Representation and Description
- Representation: Decide whether the data should be represented as a
boundary or as a complete region. It almost always follows the output of a
segmentation stage.
- Boundary representation: Focuses on external shape characteristics, such
as corners and inflections.
- Region representation: Focuses on internal properties, such as texture or
skeletal shape.
Extracted the three regions
Selected three regions on the original image
(Diagram: components of a typical general-purpose DIP system — image sensors, specialized image processing hardware, image processing software, hardcopy devices, and a network, applied to a problem domain)
Components of an Image Processing System
1. Image Sensors
Two elements are required to acquire
digital images. The first is the physical
device that is sensitive to the energy
radiated by the object we wish to image
(Sensor). The second, called a digitizer,
is a device for converting the output of the
physical sensing device into digital form.
Components of an Image Processing System
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as
an arithmetic logic unit (ALU), which performs arithmetic and
logical operations in parallel on entire images.
Lecture 4
Examples of forming digital images
A colormap is an m-by-3 matrix of real numbers between 0.0 and 1.0. Each row is an
RGB vector that defines one color. The kth row of the colormap defines the kth color,
where map(k,:) = [r(k) g(k) b(k)] specifies the intensity of red, green, and blue.
colormap(map) sets the colormap to the matrix map. If any values in map are outside
the interval [0 1], MATLAB returns the error Colormap must have values in [0,1].
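The same row-indexed lookup that MATLAB's colormap performs can be sketched in NumPy (the particular map and indexed image below are illustrative):

```python
import numpy as np

# A 4-color colormap: each row is an [r, g, b] triple in [0, 1]
cmap = np.array([
    [0.0, 0.0, 0.0],   # color 1: black
    [1.0, 0.0, 0.0],   # color 2: red
    [0.0, 1.0, 0.0],   # color 3: green
    [1.0, 1.0, 1.0],   # color 4: white
])

# The same validity check MATLAB applies: every entry must be in [0, 1]
assert cmap.min() >= 0.0 and cmap.max() <= 1.0

# An indexed image: each pixel stores a row index into the colormap
idx = np.array([[0, 1],
                [2, 3]])
rgb = cmap[idx]    # shape (2, 2, 3): pixel colors looked up from the map
```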
2nd example
Example 3
Image Statistics
Arithmetic Mean, Standard Deviation, and Variance
Lecture 4: the statistical properties of the digital image, with applications
• Useful statistical features of an image are its arithmetic mean, standard
deviation, and variance. These are well-known mathematical constructs that,
when applied to a digital image, can reveal important information.
• The arithmetic mean is the image's average value.
• The standard deviation is a measure of the frequency distribution,
or range of pixel values, of an image. If an image is supposed to be
uniform throughout, the standard deviation should be small. A
small standard deviation indicates that the pixel intensities do not
stray very far from the mean; a large value indicates a greater
range.
• The standard deviation is the square root of the variance.
• The variance is a measure of how spread out a distribution is. It is computed as the
average squared deviation of each number from its mean
THE VARIANCE
• The variance is a measure of how spread out a distribution is. It is
computed as the average squared deviation of each number from its mean.
For example, for the numbers 1, 2, and 3, the mean is 2 and the variance is
((1−2)² + (2−2)² + (3−2)²)/3 = 2/3.

The formula (in summation notation) for the variance of a population is

σ² = (1/N) Σᵢ (xᵢ − μ)²
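A quick numerical check of the worked example (population variance of 1, 2, 3), sketched in Python; note that NumPy's `np.var` uses the population definition by default:

```python
import numpy as np

x = np.array([1, 2, 3], dtype=float)

mean = x.mean()                    # (1 + 2 + 3) / 3 = 2
var = ((x - mean) ** 2).mean()     # population variance: 2/3
std = np.sqrt(var)                 # standard deviation = sqrt(variance)
```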
The mirrored (left-right flipped) image B (N x M) of A (N x M) can be
obtained as B(i, M + 1 − j) = A(i, j), (i = 0,…,N − 1; j = 0,…,M − 1).

clear B;
A = imread('c:\lena.jpg');
for i = 1 : 380
    for j = 1 : 380
        B(i, 380 + 1 - j) = A(i, j);
    end
end
figure
subplot(1,2,1), imshow(A)
subplot(1,2,2), imshow(B)
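The same mirroring can be sketched in NumPy for comparison (the small integer array below stands in for an image):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)    # stand-in for an N x M image

# Explicit loop, mirroring the MATLAB code: B(i, M+1-j) = A(i, j)
N, M = A.shape
B = np.zeros_like(A)
for i in range(N):
    for j in range(M):
        B[i, M - 1 - j] = A[i, j]

# The same left-right flip as a library one-liner
assert np.array_equal(B, np.fliplr(A))
```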
The cropped image B (N1xN2) of A (N xM), starting from (n1, n2), can be
obtained as B(k, l) = A(n1+k, n2+l) (k = 0,………,N1-1; l = 0,….. ,N2-1).
A = imread('c:\lena.jpg');
for k = 1 : 64
    for j = 1 : 128
        B(k, j) = A(220 + k, 220 + j);
    end
end
figure
subplot(1,2,1), imshow(A)
subplot(1,2,2), imshow(B)
Digital image enhancement techniques, with examples
Enhancement Techniques
- Spatial domain: operates on the pixels of the image
- Frequency domain: operates on the Fourier transform of the image
Intensity Transformations and Spatial Filtering:
Intensity transformation Functions
g(x,y)=T[f(x,y)]
f(x,y) – input image,
g(x,y) – output image
T is an operator on f defined over a neighborhood of point (x,y)
Intensity Transformation
• 1 x 1 is the smallest possible neighborhood.
• In this case g depends only on value of f at a
single point (x,y)
and we call T an intensity (gray-level
mapping) transformation and write
s = T(r)
where s and r denote the intensity of g and f, respectively, at any point (x, y).
Spatial Domain Methods
• In these methods an operation (linear or non-linear) is performed on the
pixels in the neighborhood of coordinate (x,y) in the input image F, giving
the enhanced image F’
• Neighborhood can be any shape but generally
it is rectangular ( 3x3, 5x5, 9x9 etc)
g(x,y) = T[f(x,y)]
Grey Scale Manipulation
• Simplest form of window (1x1)
• Assume input gray scale values are in range [0, L-1] (in 8 bit images
L = 256)
• nth power/root transformation: s = c·rⁿ
• s is the output intensity, r the input intensity, and c a constant
A = imread('d:\flowers.jpg');
A = rgb2gray(A);
C = 1;
n = 0.5;
B = C * double(A).^n;    % apply s = c*r^n to every pixel
figure
subplot(1,2,1), imshow(A,[])    % before
subplot(1,2,2), imshow(B,[])    % after
Some intensity transform functions
• f is the input image, gamma controls the curve, and [low_in high_in] and
[low_out high_out] are used for clipping. Values below low_in are clipped
to low_out and values above high_in are clipped to high_out. For the
purposes of this lab, we use [] for both [low_in high_in] and
[low_out high_out]; this means that the full range of the input is mapped
to the full range of the output.
Contrast Stretching
• To increase the dynamic range of the gray
levels in the image being processed.
contd…
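A minimal linear contrast-stretch sketch in Python, assuming we map the image's own minimum/maximum onto the full [0, 255] range (the helper name is illustrative):

```python
import numpy as np

def stretch_contrast(img):
    """Linearly map [img.min(), img.max()] onto the full range [0, 255]."""
    lo, hi = float(img.min()), float(img.max())
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return np.round(out).astype(np.uint8)

# A narrow-range image occupying only [100, 150]
img = np.array([[100, 110],
                [140, 150]], dtype=np.uint8)
out = stretch_contrast(img)   # now spans the full dynamic range [0, 255]
```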
Colour resolution: the number of colours = 2^(bits per pixel)
Fig. 4.16: Additive and subtractive color. (a): RGB is used to specify additive color. (b):
CMY is used to specify subtractive color
The primary colours in the RGB system, plus black and white
The RGB system:

Color    R     G     B
Red      255   0     0
Green    0     255   0
Blue     0     0     255
White    255   255   255
Black    0     0     0
C        0     255   255    (G+B)
M        255   0     255    (B+R)
Y        255   255   0      (R+G)
CMY System (the equations below follow from comparing the two tables):

Color    C     M     Y
Red      0     255   255    (absorbs C and reflects M, Y)
Green    255   0     255
Blue     255   255   0
White    0     0     0
Black    255   255   255
C        255   0     0      C = 255 − R
M        0     255   0      M = 255 − G
Y        0     0     255    Y = 255 − B
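The table's conversion equations can be checked directly; a short Python sketch (the helper name is illustrative):

```python
import numpy as np

def rgb_to_cmy(rgb):
    """C = 255 - R, M = 255 - G, Y = 255 - B (8-bit values)."""
    return 255 - np.asarray(rgb, dtype=int)

# Rows of the tables above
red_cmy = rgb_to_cmy([255, 0, 0])        # -> [0, 255, 255]
white_cmy = rgb_to_cmy([255, 255, 255])  # -> [0, 0, 0]
black_cmy = rgb_to_cmy([0, 0, 0])        # -> [255, 255, 255]
```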
Computing the CMYK values from CMY, where L is the smallest of the CMY values:

C′ = (C − L) / (255 − L)
M′ = (M − L) / (255 − L)
Y′ = (Y − L) / (255 − L)
K  = L / 255

Exercise: convert the values (96, 134, 200) from the RGB system to the CMYK system.
First: convert from the RGB system to CMY, then from CMY to CMYK.
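A sketch of the exercise in Python, using the CMYK formulas as given above (the helper name and the zero-division guard for pure black are my additions):

```python
def rgb_to_cmyk(r, g, b):
    """RGB -> CMY -> CMYK, with L = min(C, M, Y) as in the formulas above."""
    c, m, y = 255 - r, 255 - g, 255 - b    # RGB -> CMY
    L = min(c, m, y)                       # smallest of the CMY values
    if L == 255:                           # pure black: avoid dividing by 0
        return 0.0, 0.0, 0.0, 1.0
    scale = 255 - L
    return (c - L) / scale, (m - L) / scale, (y - L) / scale, L / 255

C, M, Y, K = rgb_to_cmyk(96, 134, 200)
# CMY = (159, 121, 55), L = 55, so C' = 104/200 = 0.52, M' = 66/200 = 0.33,
# Y' = 0, and K = 55/255 ≈ 0.216
```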
From previous lecture 5: filters, masks (windows), using filters, filter and image
Lecture 6
Spatial Filtering process
Mean filter
unfiltered values
5 3 6
2 1 9
8 4 7
5 + 3 + 6 + 2 + 1 + 9 + 8 + 4 + 7 = 45
45 / 9 = 5
mean filtered
* * *
* 5 *
* * *
median filter
• The median filter is also a sliding-window
spatial filter, but it replaces the center value
in the window with the median of all the
pixel values in the window. As for the mean
filter, the kernel is usually square but can be
any shape. An example of median filtering
of a single 3x3 window of values is shown
below.
unfiltered values
6 2 0
3 97 4
19 3 10
in order:
0, 2, 3, 3, 4, 6, 10, 19, 97
median filtered
* * *
* 4 *
* * *
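The same window can be checked in Python:

```python
import numpy as np

# The 3x3 window of unfiltered values from the slide
window = np.array([[6,  2,  0],
                   [3, 97,  4],
                   [19, 3, 10]])

# Sort the nine values and take the middle (5th) one
ordered = np.sort(window.ravel())    # 0, 2, 3, 3, 4, 6, 10, 19, 97
median = int(np.median(window))      # the outlier 97 is replaced by 4
```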
• This illustrates one of the celebrated features
of the median filter: its ability to remove
'impulse' noise (outlying values, either high or
low).
• The median filter is also widely claimed to be
'edge-preserving' since it theoretically
preserves step edges without blurring.
• However, in the presence of noise it does blur
edges in images slightly.
Basic idea of Spatial Filtering
• Spatial filtering is sometimes also known as
neighborhood processing. Neighborhood processing is
an appropriate name because you define a center point
and perform an operation (or apply a filter) on only
those pixels in a predetermined neighborhood of that
center point.
• The result of the operation is one value, which
becomes the value at the center point's location in the
modified image. Each point in the image is processed
with its neighbors. The general idea is shown below as
a "sliding filter" that moves throughout the image to
calculate the value at the center location
The following diagram illustrates in further detail how the filter is
applied. The filter (an averaging filter) is applied to location (2,2).
• Notice how the resulting value is placed at location 2,2
in the filtered image.
• The breakdown of how the resulting value of 251
(rounded up from 250.66) was calculated
mathematically is:
• = 251*0.1111 + 255*0.1111 + 250*0.1111 +
251*0.1111 + 244*0.1111 + 255*0.1111 + 255*0.1111
+ 255*0.1111 + 240*0.1111
• = 27.88888 + 28.33333 + 27.77777 + 27.88888 +
27.11111 + 28.33333 + 28.33333 + 28.33333 +
26.66666
• = 250.66
The following illustrates the averaging filter applied to
location 4,4.
• Once again, the mathematical breakdown of
how 125 (rounded up from 124.55) was
calculated is below:
• = 240*0.1111 + 183*0.1111 + 0*0.1111 +
250*0.1111 + 12*0.1111 + 87*0.1111 +
255*0.1111 + 0*0.1111 + 94*0.1111
• = 26.6666 + 20.3333 + 0 + 27.7777 + 1.3333 +
9.6666 + 28.3333 + 0 + 10.4444
• = 124.55
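Both worked averages can be verified numerically; a NumPy sketch using the exact 1/9 coefficients rather than the rounded 0.1111 (the neighborhood values are taken from the two diagrams above):

```python
import numpy as np

# 3x3 averaging filter: every coefficient is 1/9 (~0.1111)
w = np.full((3, 3), 1 / 9)

# Neighborhood around location (2,2) from the first diagram
n1 = np.array([[251, 255, 250],
               [251, 244, 255],
               [255, 255, 240]])

# Neighborhood around location (4,4) from the second diagram
n2 = np.array([[240, 183,  0],
               [250,  12, 87],
               [255,   0, 94]])

r1 = (n1 * w).sum()    # 250.66..., rounds to 251
r2 = (n2 * w).sum()    # 124.55..., rounds to 125
```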
Boundary Options
% cycle through the array and apply the filter
for i = 1 : x
    for j = 1 : y
        img(i,j) = g(i,j)*w(1,1) + g(i+1,j)*w(2,1) + g(i+2,j)*w(3,1) ...         % first column
                 + g(i,j+1)*w(1,2) + g(i+1,j+1)*w(2,2) + g(i+2,j+1)*w(3,2) ...   % second column
                 + g(i,j+2)*w(1,3) + g(i+1,j+2)*w(2,3) + g(i+2,j+2)*w(3,3);      % third column
    end
end
% Convert to uint8--otherwise there are double values and the expected
% range is [0, 1] when the image is displayed
img = uint8(img);
example
examples
Edge detection
smooth
Fourier Transforms
Mathematical Fourier transforms of digital images
The Fourier transform is a representation of an image as a sum of
complex exponentials of varying magnitudes, frequencies, and
phases.
• The DFT is used to convert an image from the spatial domain into the
frequency domain. In other words, it allows us to separate high-frequency
from low-frequency coefficients and to neglect or alter specific
frequencies, leading to an image with less information but still with a
convenient level of quality.
Frequency domain
We first transform the image to its frequency distribution. Then our
black-box system performs whatever processing it has to perform; the output
of the black box in this case is not an image but a transform. After
performing the inverse transformation, it is converted back into an image,
which is then viewed in the spatial domain.
example
Background: writing F(u,v) = R(u,v) + jI(u,v),

Magnitude (spectrum):            |F(u,v)| = [R²(u,v) + I²(u,v)]^(1/2)
Phase (spectrum):                φ(u,v) = arctan( I(u,v) / R(u,v) )
Magnitude-phase representation:  F(u,v) = |F(u,v)| e^(jφ(u,v))
Power spectrum:                  P(u,v) = |F(u,v)|² = R²(u,v) + I²(u,v)
2D DFT Properties: spatial-domain differentiation, frequency-domain
differentiation, the distribution law, the Laplacian, spatial-domain
periodicity, and frequency-domain periodicity.
Computation of 2D-DFT
• To compute the 1D-DFT of a 1D signal x (as a vector):
      x̃ = F_N x
  To compute the inverse 1D-DFT:
      x = (1/N) F_N* x̃
• To compute the 2D-DFT of an image X (as a matrix):
      X̃ = F_N X F_N
  To compute the inverse 2D-DFT:
      X = (1/N²) F_N* X̃ F_N*
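These matrix formulas can be checked against a library FFT; a NumPy sketch (note that, like the matrix form above, np.fft.fft2 applies no 1/N factor on the forward transform):

```python
import numpy as np

N = 4
W = np.exp(-2j * np.pi / N)          # W_N = e^(-j*2*pi/N)
k = np.arange(N)
F = W ** np.outer(k, k)              # F[u, x] = W_N^(u*x)

X = np.array([[1, 3, 6, 8],
              [9, 8, 8, 2],
              [5, 4, 2, 3],
              [6, 6, 3, 3]], dtype=float)

Xt = F @ X @ F                       # forward 2D-DFT: X~ = F X F
assert np.allclose(Xt, np.fft.fft2(X))   # agrees with the library FFT

Xr = (F.conj() @ Xt @ F.conj()) / N**2   # inverse: X = (1/N^2) F* X~ F*
assert np.allclose(Xr.real, X)
```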
W_N = e^(−j2π/N); for N = 4, W_4 = e^(−j2π/4) = e^(−jπ/2) = −j

Remember the Fourier transform matrices:

       [ 1    1          1            ···  1             ]
F_N =  [ 1    W_N        W_N²         ···  W_N^(N−1)     ]
       [ ·    ·          ·                 ·             ]
       [ 1    W_N^(N−1)  W_N^(2(N−1)) ···  W_N^((N−1)²)  ]

       [ 1    1          1            ···  1             ]
F_N* = [ 1    W_N⁻¹      W_N⁻²        ···  W_N^(1−N)     ]
       [ ·    ·          ·                 ·             ]
       [ 1    W_N^(1−N)  W_N^(2(1−N)) ···  W_N^(−(N−1)²) ]

Relationship: F_N⁻¹ = (1/N) F_N*, so X = (1/N²) F_N* X̃ F_N*.

In particular, for N = 4:

      [ 1   1   1   1 ]          [ 1   1   1   1 ]
F_4 = [ 1  −j  −1   j ]   F_4* = [ 1   j  −1  −j ]
      [ 1  −1   1  −1 ]          [ 1  −1   1  −1 ]
      [ 1   j  −1  −j ]          [ 1  −j  −1   j ]
Computation of the Fourier matrices: with W_4 = e^(−j2π/4) = −j,

           [  77    2    3    2 ]             [  0   −5    0    5 ]
X̃_real =   [   4  −11   −4   −5 ]   X̃_imag =  [ −9    8   −7   −4 ]
           [ −13   −6  −11   −6 ]             [  0   13    0  −13 ]
           [   4   −5   −4  −11 ]             [  9    4    7   −8 ]

Magnitude:                                Phase:

           [ 77    5.39   3     5.39 ]             [  0    −1.19   0     1.19 ]
X̃_mag =    [ 9.85 13.60   8.06  6.40 ]   X̃_phase = [ −1.15   2.51 −2.09 −2.47 ]
           [ 13   14.32  11    14.32 ]             [  3.14   2.00  3.14 −2.00 ]
           [ 9.85  6.40   8.06 13.60 ]             [  1.15   2.47  2.09 −2.51 ]
Computation of 2D-DFT: Example
• Compute the inverse 2D-DFT:  X = (1/4²) F_4* X̃ F_4*

         [ 1   1   1   1 ] [  77     2−5j     3      2+5j  ] [ 1   1   1   1 ]
X = 1/16 [ 1   j  −1  −j ] [ 4−9j  −11+8j   −4−7j   −5−4j  ] [ 1   j  −1  −j ]
         [ 1  −1   1  −1 ] [ −13   −6+13j  −11     −6−13j  ] [ 1  −1   1  −1 ]
         [ 1  −j  −1   j ] [ 4+9j   −5+4j   −4+7j  −11−8j  ] [ 1  −j  −1   j ]

        [ 1   1   1   1 ] [  21     21     19    16  ]
  = 1/4 [ 1   j  −1  −j ] [ −4−3j  −1−2j   4−5j  5+j ]
        [ 1  −1   1  −1 ] [ −9     −7     −3     6   ]
        [ 1  −j  −1   j ] [ −4+3j  −1+2j   4+5j  5−j ]

    [ 1  3  6  8 ]
  = [ 9  8  8  2 ]        MATLAB function: ifft2
    [ 5  4  2  3 ]
    [ 6  6  3  3 ]
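The worked example follows the fft2/ifft2 convention; a NumPy cross-check of a few of the entries:

```python
import numpy as np

X = np.array([[1, 3, 6, 8],
              [9, 8, 8, 2],
              [5, 4, 2, 3],
              [6, 6, 3, 3]], dtype=float)

Xt = np.fft.fft2(X)

# The DC term is the sum of all pixels: X~(0,0) = 77
assert np.isclose(Xt[0, 0], 77)
# A couple of the other entries from the slide
assert np.isclose(Xt[0, 1], 2 - 5j)
assert np.isclose(Xt[1, 1], -11 + 8j)

# ifft2 recovers the original image
assert np.allclose(np.fft.ifft2(Xt).real, X)
```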
From previous lecture 7: Fourier transforms (recap of the definitions above).
example
Background
Detect lines
The masks below will extract lines that are one pixel thick and running in
a particular direction.

Line Detection (cont…)
(Figure: a binary image of a wire-bond mask; the result of processing with
a −45° line detector; and the result after thresholding the filtering
result. Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
Edge Detection
An edge is a set of connected pixels that lie on the boundary between two
regions.
The convolution theorem states that
    f(x,y) * h(x,y) ⇔ F(u,v) H(u,v)
and, conversely,
    f(x,y) h(x,y) ⇔ F(u,v) * H(u,v)
where the symbol "*" indicates convolution of the two functions. The
important thing to extract from this is that the multiplication of two
Fourier transforms corresponds to the convolution of the associated
functions in the spatial domain. This means we can perform linear spatial
filtering as a simple component-wise multiplication in the frequency domain.
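The theorem can be demonstrated on a small 1D signal (circular convolution, matching the DFT's built-in periodicity; the signals are illustrative):

```python
import numpy as np

# Two small 1D signals, length N = 4
f = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 1.0])

# Circular convolution computed directly from the definition
N = len(f)
g_direct = np.array([sum(f[m] * h[(n - m) % N] for m in range(N))
                     for n in range(N)])

# Convolution theorem: multiply the transforms, then invert
g_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real

assert np.allclose(g_direct, g_fft)
```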
Lecture 9
Dr Nassir H. Salman
From previous lecture 8
Fourier transform and its inverse
For an image f(x, y) of size M x N:

    F(u, v) = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^(−j2π(ux/M + vy/N)),
        u = 0, 1, 2, …, M − 1 and v = 0, 1, 2, …, N − 1

    f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) e^(j2π(ux/M + vy/N)),
        x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1

Fourier transform
• The Fourier transform of a discrete function (DFT) of one variable, f(x),
x = 0, 1, …, M − 1, is given by the equation:

    1) F(u) = (1/M) Σ_{x=0}^{M−1} f(x) e^(−j2πux/M),  for u = 0, 1, 2, …, M − 1

or, using Euler's formula:

    2) F(u) = (1/M) Σ_{x=0}^{M−1} f(x) [cos(2πux/M) − j sin(2πux/M)]

In polar form:

    3) F(u) = |F(u)| e^(jφ(u)),  where  |F(u)| = [R²(u) + I²(u)]^(1/2)
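Equation 1) can be checked against a library FFT; a NumPy sketch (np.fft.fft omits the 1/M factor used in this definition, so we divide to match):

```python
import numpy as np

M = 8
x = np.arange(M, dtype=float)     # a sample signal f(x)

# Equation 1): F(u) = (1/M) * sum_x f(x) e^(-j*2*pi*u*x/M)
u = np.arange(M).reshape(-1, 1)
n = np.arange(M).reshape(1, -1)
F_manual = (np.exp(-2j * np.pi * u * n / M) @ x) / M

# np.fft.fft computes the same sum without the 1/M factor
assert np.allclose(F_manual, np.fft.fft(x) / M)
```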
(Figure: low and high frequencies along the u axis of the spectrum)
Example:
From [Gonzalez
& Woods]
Log-Magnitude Visualization
2D-DFT
centered
s = c log(1 + r)
Apply to Images
f=zeros(30,30);
f(5:24,13:17)=1;
F=fft2(f, 256,256);
F2=fftshift(F);
figure,imshow(log(1+abs(F2)),[])
g(x, y) = f(x, y) * h(x, y)
(Figure: the image f embedded in a zero-padded array)
function img = myfilter(f, w)
%MYFILTER Performs spatial correlation
%   I = MYFILTER(f, w) produces an image that has undergone correlation.
%   f is the original image
%   w is the filter (assumed to be 3x3)
%   The original image is padded with 0's
%Author: Nova Scheidt

% check that w is 3x3
[m, n] = size(w);
if m ~= 3 | n ~= 3
    error('Filter must be 3x3')
end

% then, store f within g (padded with a border of zeros)
[x, y] = size(f);
g = zeros(x + 2, y + 2);
for i = 1 : x
    for j = 1 : y
        g(i+1, j+1) = f(i, j);
    end
end

% cycle through the array and apply the filter
for i = 1 : x
    for j = 1 : y
        img(i,j) = g(i,j)*w(1,1) + g(i+1,j)*w(2,1) + g(i+2,j)*w(3,1) ...         % first column
                 + g(i,j+1)*w(1,2) + g(i+1,j+1)*w(2,2) + g(i+2,j+1)*w(3,2) ...   % second column
                 + g(i,j+2)*w(1,3) + g(i+1,j+2)*w(2,3) + g(i+2,j+2)*w(3,3);      % third column
    end
end

% Convert to uint8--otherwise there are double values and the expected
% range is [0, 1] when the image is displayed
img = uint8(img);

(Figure: the 3x3 mask W11…W33 sliding over the zero-padded image f)