Enhancement in Spatial Domain
Spatial Domain
Principal Objective of Enhancement
Enhancement refers to the accentuation, or sharpening, of image features such as edges, boundaries, or contrast to make the graphic display more useful for display and analysis.
It does not increase the inherent information content of the data; it only increases the dynamic range of the chosen features.
Principal Objective of Enhancement
Process an image so that the result is more suitable than the original image for a specific application.
What counts as "suitable" depends on the application: a method that is quite useful for enhancing one image may not be the best approach for enhancing another.
2 Domains
Spatial domain (image plane): techniques based on direct manipulation of the pixels in an image.
Frequency domain: techniques based on modifying the Fourier transform of an image.
There are also enhancement techniques based on various combinations of methods from these two categories.
Good Images
For human vision: the visual evaluation of image quality is a highly subjective process, so it is hard to standardize the definition of a good image.
For machine perception: the evaluation task is easier; a good image is one that gives the best machine recognition results.
A certain amount of trial and error is usually required before a particular image enhancement approach is selected.
Spatial Domain
Procedures that operate directly on pixels:
g(x,y) = T[f(x,y)]
where
f(x,y) is the input image,
g(x,y) is the processed (output) image, and
T is an operator on f defined over some neighborhood of (x,y).
Mask/Filter
The neighborhood of a point (x,y) can be defined using a square/rectangular (most commonly used) or circular subimage area centered at (x,y).
The center of the subimage is moved from pixel to pixel, starting at the top left corner.
Point Processing
Neighborhood = 1x1 pixel.
g depends only on the value of f at (x,y).
T is a gray-level (intensity or mapping) transformation function:
s = T(r)
where
r = gray level of f(x,y)
s = gray level of g(x,y)
Contrast Stretching
Produces higher contrast than the original by darkening the levels below m and brightening the levels above m in the original image.
Thresholding
Produces a two-level (binary) image.
Mask Processing or Filter
The neighborhood is bigger than 1x1 pixel.
Use a function of the values of f in a predefined neighborhood of (x,y) to determine the value of g at (x,y).
The values of the mask coefficients determine the nature of the process.
Used in techniques such as image sharpening and image smoothing.
3 Basic Gray-Level Transformation Functions
Linear functions: negative and identity transformations.
Logarithm functions: log and inverse-log transformations.
Power-law functions: nth power and nth root transformations.
(Figure: plots of these transformation curves, output gray level s versus input gray level r.)
Identity Function
Output intensities are identical to input intensities.
It is included in the graph only for completeness.
Negatives
s = L - 1 - r
Reverses the intensity levels of an image.
Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
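For illustration only (not part of the original slides), a minimal NumPy sketch of the negative transformation, assuming `img` is an 8-bit grayscale array:

```python
import numpy as np

def negative(img, L=256):
    # s = (L - 1) - r; cast to a signed type first to avoid unsigned wrap-around
    return ((L - 1) - img.astype(np.int32)).astype(np.uint8)
```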
Example of Negative Image
Log
The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
Used to expand the values of dark pixels in an image while compressing the higher-level values.
Log Transformations
Compress the dynamic range of images with large variations in pixel values.
Example of an image with a large dynamic range: a Fourier spectrum image, which can have an intensity range from 0 to 10^6 or higher.
Without compression we cannot see a significant degree of detail, as it will be lost in the display.
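As a minimal illustrative sketch (not from the original slides; `img` is assumed to be a non-negative grayscale array), the log transformation can be written as:

```python
import numpy as np

def log_transform(img, L=256):
    # s = c * log(1 + r), with c chosen so the largest input value maps to L - 1
    r = img.astype(np.float64)
    c = (L - 1) / np.log1p(r.max())   # assumes the image is not all zeros
    return (c * np.log1p(r)).astype(np.uint8)
```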
Example of Logarithm Image
19
Power-Law Transformations
s = c r^γ
c and γ are positive constants.
Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
c = γ = 1: identity function.
(Figure: plots of s = c r^γ for various values of γ, with c = 1 in all cases.)
Gamma Correction
Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5 (e.g. a monitor with γ = 2.5).
Displayed directly, the picture becomes darker.
Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ); here 1/γ = 1/2.5 = 0.4.
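A hedged sketch of gamma correction as described above (the function name and the assumption of an 8-bit input are mine, not the slides'):

```python
import numpy as np

def gamma_correct(img, gamma=2.5, c=1.0, L=256):
    # pre-process with s = c * r**(1/gamma) on intensities normalized to [0, 1]
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, 1.0 / gamma)          # e.g. gamma = 2.5 -> exponent 0.4
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)
```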
Another example: gamma correction of an MRI image (figure panels a-d).
Another example (figure panels a-d).
Contrast Stretching
Increases the dynamic range of the gray levels in the image being processed.
(b) a low-contrast image: can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition.
(c) result of contrast stretching: (r1,s1) = (rmin, 0) and (r2,s2) = (rmax, L-1).
(d) result of thresholding.
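For illustration, a minimal NumPy sketch of the simple linear stretch in (c), mapping (rmin, 0) and (rmax, L-1) (not the full piecewise-linear transformation):

```python
import numpy as np

def contrast_stretch(img, L=256):
    # linear stretch mapping (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1)
    r = img.astype(np.float64)
    r_min, r_max = r.min(), r.max()           # assumes r_max > r_min
    return ((r - r_min) / (r_max - r_min) * (L - 1)).astype(np.uint8)
```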
Gray-Level Slicing
Highlights a specific range of gray levels in an image.
Displays a high value for all gray levels in the range of interest and a low value for all other gray levels.
(a) a transformation that highlights the range [A,B] of gray levels and reduces all others to a constant level.
(b) a transformation that highlights the range [A,B] but preserves all other levels.
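A small illustrative sketch of both slicing variants (function and default values are my own choices, not from the slides):

```python
import numpy as np

def gray_level_slice(img, A, B, high=255, low=10, preserve=False):
    # variant (a): levels in [A, B] -> high, all others -> a constant low value
    # variant (b): levels in [A, B] -> high, all other levels preserved
    in_range = (img >= A) & (img <= B)
    background = img if preserve else np.full_like(img, low)
    return np.where(in_range, high, background).astype(np.uint8)
```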
Bit-Plane Slicing
Highlights the contribution made to total image appearance by specific bits.
Suppose each pixel is represented by 8 bits (one 8-bit byte): bit-plane 7 is the most significant, bit-plane 0 the least significant.
Higher-order bits contain the majority of the visually significant data.
Useful for analyzing the relative importance played by each bit of the image.
Example
The (binary) image for bit-plane 7 can be obtained by processing the input image with a thresholding gray-level transformation:
map all levels between 0 and 127 to 0;
map all levels between 128 and 255 to 255.
(Figure: bit-plane 7 and bit-plane 6.)
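A minimal sketch of bit-plane extraction for an 8-bit image (illustrative only):

```python
import numpy as np

def bit_plane(img, plane):
    # extract bit plane `plane` (0 = least significant, 7 = most significant)
    # of an 8-bit image and scale it to {0, 255} for display
    return (((img >> plane) & 1) * 255).astype(np.uint8)

# bit_plane(img, 7) is equivalent to the thresholding described above:
# levels 0..127 -> 0 and levels 128..255 -> 255.
```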
Histogram Processing
Histogram of a digital image with gray levels in
the range [0,L-1] is a discrete function
h(rk) = nk
Where
rk : the kth gray level
nk : the number of pixels in the image having gray
level rk
h(rk) : histogram of a digital image with gray levels rk
31
Normalized Histogram
Obtained by dividing each histogram value at gray level rk by the total number of pixels in the image, n:
p(rk) = nk / n, for k = 0, 1, ..., L-1
p(rk) gives an estimate of the probability of occurrence of gray level rk.
The sum of all components of a normalized histogram is equal to 1.
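A minimal NumPy sketch of the normalized histogram (assuming `img` holds integer gray levels in [0, L-1]):

```python
import numpy as np

def normalized_histogram(img, L=256):
    # h(r_k) = n_k, then p(r_k) = n_k / n; the p(r_k) values sum to 1
    nk = np.bincount(img.ravel(), minlength=L)
    return nk / img.size
```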
Histogram Processing
Basic for numerous spatial domain
processing techniques
Used effectively for image enhancement
Information inherent in histograms also
is useful in image compression and
segmentation
33
Example
(Figure: histograms h(rk) or p(rk) versus rk.)
Dark image: the components of the histogram are concentrated on the low side of the gray scale.
Bright image: the components of the histogram are concentrated on the high side of the gray scale.
Example
Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.
High-contrast image: the histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.
Histogram Equalization
Since a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image.
We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.
Histogram Transformation
s = T(r), where 0 ≤ r ≤ 1
T(r) satisfies:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1;
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
(Figure: the curve s = T(r), mapping rk to sk = T(rk).)
2 Conditions of T(r)
Single-valued (one-to-one) guarantees that the inverse transformation will exist.
The monotonicity condition preserves the increasing order from black to white in the output image, so it will not produce a negative image.
0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels.
The inverse transformation from s back to r is r = T⁻¹(s), 0 ≤ s ≤ 1.
Probability Density Function
The gray levels in an image may be viewed
as random variables in the interval [0,1]
PDF is one of the fundamental descriptors
of a random variable
39
Random Variables
Random variables often are a source of
confusion when first encountered.
This need not be so, as the concept of a
random variable is in principle quite
simple.
40
Random Variables
A random variable, x, is a real-valued function
defined on the events of the sample space, S.
In words, for each event in S, there is a real
number that is the corresponding value of the
random variable.
Viewed yet another way, a random variable
maps each event in S onto the real line.
That is it. A simple, straightforward definition.
41
Random Variables
Part of the confusion often found in
connection with random variables is the
fact that they are functions.
The notation also is partly responsible
for the problem.
42
Random Variables
In other words, although typically the
notation used to denote a random
variable is as we have shown it here, x, or
some other appropriate variable,
to be strictly formal, a random variable
should be written as a function x(·)
where the argument is a specific event
being considered.
43
Random Variables
However, this is seldom done, and, in our experience, trying to be formal by using function notation complicates the issue more than it clarifies.
Thus, we will opt for the less formal notation, with the warning that it must be kept clearly in mind that random variables are functions.
Random Variables
Example:
Consider the experiment of drawing a single
card from a standard deck of 52 cards.
Suppose that we define the following events.
A: a heart; B: a spade; C: a club; and D: a
diamond, so that S = {A, B, C, D}.
A random variable is easily defined by letting
x = 1 represent event A, x = 2 represent
event B, and so on.
45
Random Variables
As a second illustration,
consider the experiment of throwing a single die and
observing the value of the up-face.
We can define a random variable as the numerical
outcome of the experiment (i.e., 1 through 6), but
there are many other possibilities.
For example, a binary random variable could be defined simply by letting x = 0 represent the event that the outcome of a throw is an even number and x = 1 otherwise.
46
Random Variables
Note
the important fact in the examples just given that the probabilities of the events have not changed;
all a random variable does is map events onto the real line.
47
Random Variables
Thus far we have been concerned with
random variables whose values are
discrete.
To handle continuous random variables
we need some additional tools.
In the discrete case, the probabilities of
events are numbers between 0 and 1.
48
Random Variables
When dealing with continuous quantities
(which are not denumerable) we can no
longer talk about the "probability of an
event" because that probability is zero.
This is not as unfamiliar as it may seem.
49
Random Variables
For example,
given a continuous function we know that the
area of the function between two limits a
and b is the integral from a to b of the
function.
However, the area at a point is zero because the integral from, say, a to a is zero.
We are dealing with the same concept in the
case of continuous random variables.
50
Random Variables
Thus, instead of talking about the probability of a
specific value, we talk about the probability that
the value of the random variable lies in a specified
range.
In particular, we are interested in the probability
that the random variable is less than or equal to
(or, similarly, greater than or equal to) a specified
constant a.
We write this as
F(a) = P(x ≤ a)
51
Random Variables
If this function is given for all values of a (i.e., -∞ < a < ∞), then the values of random variable x have been defined.
Function F is called the cumulative probability
distribution function or simply the cumulative
distribution function (cdf).
The shortened term distribution function also
is used.
52
Random Variables
Observe that the notation we have used makes
no distinction between a random variable and
the values it assumes.
If confusion is likely to arise, we can use more
formal notation in which we let capital letters
denote the random variable and lowercase
letters denote its values.
For example, the cdf using this notation is written as
F_X(x) = P(X ≤ x)
53
Random Variables
When confusion is not likely, the cdf
often is written simply as F(x).
This notation will be used in the following
discussion when speaking generally about
the cdf of a random variable.
54
Random Variables
Due to the fact that it is a probability, the cdf has the following properties:
1. F(-∞) = 0
2. F(∞) = 1
3. 0 ≤ F(x) ≤ 1
4. F(x1) ≤ F(x2) if x1 < x2
5. P(x1 < x ≤ x2) = F(x2) - F(x1)
6. F(x⁺) = F(x)
The probability density function (pdf) of random variable x is defined as the derivative of the cdf:
p(x) = dF(x)/dx
Random Variables
The pdf satisfies the following properties:
1. p(x) ≥ 0 for all x
2. ∫_{-∞}^{∞} p(x) dx = 1
3. F(x) = ∫_{-∞}^{x} p(w) dw
4. P(x1 < x ≤ x2) = ∫_{x1}^{x2} p(w) dw
Random Variables
The preceding concepts are applicable to discrete
random variables.
In this case, there is a finite number of events and we talk about probabilities, rather than probability density functions.
Integrals are replaced by summations and,
sometimes, the random variables are subscripted.
For example, in the case of a discrete variable
with N possible values we would denote the
probabilities by P(xi), i=1, 2,…, N.
58
Random Variables
If a random variable x is transformed by a monotonic transformation function T(x) to produce a new random variable y, the probability density function of y can be obtained from knowledge of T(x) and the probability density function of x, as follows:
p_y(y) = p_x(x) |dx/dy|
where the vertical bars signify the absolute value.
Random Variables
A function T(x) is monotonically
increasing if T(x1) < T(x2) for x1 < x2, and
A function T(x) is monotonically
decreasing if T(x1) > T(x2) for x1 < x2.
The preceding equation is valid if T(x) is
an increasing or decreasing monotonic
function.
60
Applied to Image
Let
pr(r) denote the PDF of random variable r
ps(s) denote the PDF of random variable s
If pr(r) and T(r) are known and T⁻¹(s) satisfies condition (a), then ps(s) can be obtained using the formula:
ps(s) = pr(r) |dr/ds|
Transformation Function
A transformation function is the cumulative distribution function (CDF) of random variable r:
s = T(r) = ∫_0^r pr(w) dw
where w is a dummy variable of integration.
Note: T(r) depends on pr(r).
Cumulative Distribution Function
The CDF is the integral of a probability density function (which is always positive), i.e. the area under the function.
Thus, the CDF is always single-valued and monotonically increasing, and therefore satisfies condition (a).
We can use the CDF as a transformation function.
Finding ps(s) from a Given T(r)
ds/dr = dT(r)/dr = d/dr [ ∫_0^r pr(w) dw ] = pr(r)
Substituting into ps(s) = pr(r) |dr/ds| yields
ps(s) = pr(r) |1/pr(r)| = 1, where 0 ≤ s ≤ 1
ps(s)
Since ps(s) is a probability density function, it must be zero outside the interval [0,1]; its integral over all values of s must equal 1.
A ps(s) of this form is called a uniform probability density function.
ps(s) is always uniform, independent of the form of pr(r).
Thus,
s = T(r) = ∫_0^r pr(w) dw
yields a random variable s characterized by a uniform probability density function, ps(s) = 1 for 0 ≤ s ≤ 1.
Discrete Transformation Function
The probability of occurrence of gray level rk in an image is approximated by
pr(rk) = nk / n, where k = 0, 1, ..., L-1
The discrete version of the transformation is
sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj / n, where k = 0, 1, ..., L-1
Histogram Equalization
Thus, an output image is obtained by mapping
each pixel with level rk in the input image into a
corresponding pixel with level sk in the output
image
In discrete space, it cannot be proved in
general that this discrete transformation will
produce the discrete equivalent of a uniform
probability density function, which would be a
uniform histogram
69
Example
(Figure: before and after histogram equalization.)
Example
(Figure: before and after histogram equalization.)
The quality is not improved much because the original image already has a broad gray-level scale.
Example
4x4 image, gray scale = [0,9]:
2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4
(Figure: histogram of the image, number of pixels versus gray level.)
Gray level (j):     0    1    2      3      4      5      6      7      8      9
No. of pixels nj:   0    0    6      5      4      1      0      0      0      0
Σ nj (j = 0..k):    0    0    6      11     15     16     16     16     16     16
sk = Σ nj / n:      0    0    6/16   11/16  15/16  16/16  16/16  16/16  16/16  16/16
sk x 9:             0    0    3.3    6.1    8.4    9      9      9      9      9
rounded:            0    0    3      6      8      9      9      9      9      9
Example
Output image after histogram equalization, gray scale = [0,9]:
3 6 6 3
8 3 8 6
6 3 6 9
3 8 3 8
(Figure: histogram of the output image, number of pixels versus gray level.)
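For illustration, a minimal NumPy sketch of discrete histogram equalization that reproduces the 4x4 example above (the rounding step is an assumption consistent with the table):

```python
import numpy as np

def histogram_equalize(img, L):
    # s_k = round((L - 1) * sum_{j<=k} n_j / n), then map every pixel r_k -> s_k
    nk = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(nk) / img.size            # T(r_k) in [0, 1]
    mapping = np.round(cdf * (L - 1)).astype(int)
    return mapping[img]

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
print(histogram_equalize(img, L=10))
# [[3 6 6 3]
#  [8 3 8 6]
#  [6 3 6 9]
#  [3 8 3 8]]
```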
Note
It is clearly seen that histogram equalization distributes gray levels up to the maximum gray level (white), because the cumulative distribution function reaches 1 at the top of the range 0 ≤ r ≤ L-1.
If the cumulative counts of two gray levels are only slightly different, they may be mapped to slightly different or even the same output level, since the processed gray levels of the output image have to be rounded to integers.
Thus the discrete transformation function cannot guarantee a one-to-one mapping.
Histogram Matching (Specification)
Histogram equalization has a disadvantage: it can generate only one type of output image.
With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
It does not have to be a uniform histogram.
Consider the Continuous Domain
s = T(r) = ∫_0^r pr(w) dw    (histogram equalization of the input)
z = G⁻¹(s) = G⁻¹[T(r)]
Assume G⁻¹ exists and satisfies conditions (a) and (b).
We can then map an input gray level r to an output gray level z.
Procedure Conclusion
1. Obtain the transformation function T(r) by calculating the histogram equalization of the input image:
s = T(r) = ∫_0^r pr(w) dw
Example
Assume an image has a gray-level probability density function pr(r) as shown:
pr(r) = -2r + 2 for 0 ≤ r ≤ 1, and 0 elsewhere
Note that ∫_0^1 pr(w) dw = 1.
Example
We would like to apply histogram specification with the desired probability density function pz(z) as shown:
pz(z) = 2z for 0 ≤ z ≤ 1, and 0 elsewhere
Note that ∫_0^1 pz(w) dw = 1.
Step 1:
Obtain the transformation function T(r):
s = T(r) = ∫_0^r pr(w) dw = ∫_0^r (-2w + 2) dw = [-w² + 2w]_0^r = -r² + 2r
T(r) is a one-to-one mapping function.
Step 2:
Obtain the transformation function G(z):
G(z) = ∫_0^z 2w dw = z²
Step 3:
Obtain the inverse transformation function G⁻¹:
G(z) = T(r)
z² = -r² + 2r
z = sqrt(2r - r²)
We can guarantee that 0 ≤ z ≤ 1 when 0 ≤ r ≤ 1.
Discrete Formulation
sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj / n,  k = 0, 1, 2, ..., L-1
G(zk) = Σ_{i=0}^{k} pz(zi) = sk,  k = 0, 1, 2, ..., L-1
zk = G⁻¹[T(rk)] = G⁻¹(sk),  k = 0, 1, 2, ..., L-1
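A hedged sketch of the discrete formulation (ties in inverting G are broken toward the smallest z here; that choice is mine, not stated on the slides):

```python
import numpy as np

def histogram_match(img, target_pdf, L=256):
    # s_k = T(r_k): equalize the input; G(z_k): cumulative target pdf;
    # z_k = G^-1(s_k), taken here as the smallest z with G(z) >= s_k
    nk = np.bincount(img.ravel(), minlength=L)
    s = np.cumsum(nk) / img.size
    G = np.cumsum(target_pdf) / np.sum(target_pdf)
    z = np.searchsorted(G, s, side='left').clip(0, L - 1)
    return z[img]
```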
Example
(Figure: result image after histogram equalization, the transformation function used, and the histogram of the result image.)
Histogram equalization does not make the result image look better than the original image. Considering the histogram of the result image, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image. As a consequence, the output image is light and has a washed-out appearance.
Solve the Problem
A reasonable approach is to modify the histogram of that image so that it does not have this property.
Histogram Specification
(1) the transformation function G(z) obtained from
G(zk) = Σ_{i=0}^{k} pz(zi) = sk,  k = 0, 1, 2, ..., L-1
(2) the inverse transformation G⁻¹(s)
Result image and its histogram
92
Note
Histogram processing methods are global
processing, in the sense that pixels are
modified by a transformation function
based on the gray-level content of an
entire image.
Sometimes, we may need to enhance details over small areas in an image, which is called local enhancement.
93
a) Original image
(slightly blurred to
reduce noise)
b) global histogram
96
Logic Operations
Logic operations are performed on gray-level images; the pixel values are processed as binary numbers.
Light represents a binary 1, and dark represents a binary 0.
The NOT operation is equivalent to the negative transformation.
Example of AND Operation
100
Image Subtraction
(a) original fractal image
(b) result of setting the four lower-order bit planes to zero
(refer to bit-plane slicing: the higher planes contribute significant detail, the lower planes contribute more to fine detail)
image (b) is nearly identical visually to image (a), with a very slight drop in overall contrast due to less variability of the gray-level values in the image.
(c) difference between (a) and (b) (nearly black)
(d) histogram equalization of (c) (performs a contrast-stretching transformation)
Mask Mode Radiography
h(x,y) is the mask, an X-ray image of a region of a patient's body captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source.
f(x,y) is an X-ray image taken after injection of a contrast medium (iodine) into the patient's bloodstream.
Images are captured at TV rates, so the doctor can see how the medium propagates through the various arteries in the area being observed, in a movie-showing mode.
The displayed image is f(x,y) with the mask subtracted out.
Note:
the background is dark because it does not change much between the two images (the effect of subtraction);
the difference area is bright because it has a big change.
Note
We may have to adjust the gray scale of the subtracted image to [0, 255] (if 8 bits are used):
first, find the minimum gray value of the subtracted image;
second, find the maximum gray value of the subtracted image;
set the minimum value to zero and the maximum to 255, with the remaining values scaled linearly within this interval.
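A minimal sketch of subtraction followed by the rescaling just described (function name is my own; an 8-bit output is assumed):

```python
import numpy as np

def subtract_and_rescale(f, h, L=256):
    # g = f - h may contain negative values, so rescale linearly to [0, L-1]
    g = f.astype(np.int32) - h.astype(np.int32)
    g_min, g_max = g.min(), g.max()           # assumes g_max > g_min
    g = (g - g_min) / (g_max - g_min) * (L - 1)
    return g.astype(np.uint8)
```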
Image Averaging
If the noise η(x,y) has zero mean and is uncorrelated, consider the average of K noisy images gi(x,y):
ḡ(x,y) = (1/K) Σ_{i=1}^{K} gi(x,y)
Then it can be shown that
E{ḡ(x,y)} = f(x,y)
σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
where σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η.
Image Averaging
Note: the images gi(x,y) (noisy images)
must be registered (aligned) in order to
avoid the introduction of blurring and
other artifacts in the output image.
108
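An illustrative sketch of averaging a stack of registered noisy images (the simulation comment mirrors the example on the next slide; variable names are assumptions):

```python
import numpy as np

def average_images(noisy_images):
    # average K registered noisy images g_i(x, y); for zero-mean, uncorrelated
    # noise the variance of the result drops by a factor of K
    return np.mean(np.stack(noisy_images, axis=0), axis=0)

# e.g. noisy = [f + np.random.normal(0, 64, f.shape) for _ in range(16)]
#      g_bar = average_images(noisy)
```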
Example
(a) original image
(b) image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels
(c)-(f) results of averaging K = 8, 16, 64, and 128 noisy images
Spatial Filtering
Uses a filter (also called a mask, kernel, template, or window).
The values in a filter subimage are referred to as coefficients, rather than pixels.
Our focus will be on masks of odd sizes, e.g. 3x3, 5x5, ...
Spatial Filtering Process
Simply move the filter mask from point to point in an image.
At each point (x,y), the response of the filter at that point is calculated using a predefined relationship:
R = w1 z1 + w2 z2 + ... + w_mn z_mn = Σ_{i=1}^{mn} wi zi
Linear Filtering
Linear filtering of an image f of size MxN with a filter mask of size mxn is given by the expression
g(x,y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t)
where a = (m-1)/2 and b = (n-1)/2.
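A minimal, direct (unoptimized) sketch of this expression, with zero padding at the borders as my own assumption:

```python
import numpy as np

def linear_filter(f, w):
    # g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t), m x n mask with odd m, n;
    # border pixels are handled here by zero padding
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))
    g = np.zeros(f.shape, dtype=np.float64)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            g += w[s + a, t + b] * fp[a + s: a + s + f.shape[0],
                                      b + t: b + t + f.shape[1]]
    return g

# e.g. w = np.ones((3, 3)) / 9.0 implements the 3x3 box (averaging) filter
# used for smoothing in the following slides.
```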
Smoothing Linear Filters
Replacing the value of every pixel in an image by the average of the gray levels in its neighborhood reduces the "sharp" transitions in gray levels.
Sharp transitions come from:
random noise in the image
edges of objects in the image
Thus, smoothing can reduce noise (desirable) and blur edges (undesirable).
3x3 Smoothing Linear Filters
(Figure: two 3x3 smoothing filter masks.)
General Form: Smoothing Mask
A weighted-average smoothing filter of size mxn (m and n odd) is given by
g(x,y) = [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t) ] / [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) ]
Median Filters
Replace the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
Quite popular because, for certain types of random noise (impulse noise, i.e. salt-and-pepper noise), they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.
Median Filters
Force points with distinct gray levels to be more like their neighbors.
Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n²/2 (one-half the filter area), are eliminated by an n x n median filter.
Eliminated = forced to take the median intensity of the neighbors.
Larger clusters are affected considerably less.
Example : Median Filters
124
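A minimal, unoptimized sketch of an n x n median filter (border handling by reflection is my own choice):

```python
import numpy as np

def median_filter(f, size=3):
    # replace each pixel by the median of its size x size neighborhood
    # (reflected border); the pixel itself is included in the median
    a = size // 2
    fp = np.pad(f, a, mode='reflect')
    out = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = np.median(fp[x:x + size, y:y + size])
    return out
```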
Sharpening Spatial Filters
to highlight fine detail in an image
or to enhance detail that has been
blurred, either in error or as a natural
effect of a particular method of image
acquisition.
125
Blurring vs. Sharpening
As we know, blurring can be done in the spatial domain by pixel averaging in a neighborhood.
Since averaging is analogous to integration, we can guess that sharpening must be accomplished by spatial differentiation.
Derivative Operator
The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied.
Thus, image differentiation
enhances edges and other discontinuities (including noise), and
deemphasizes areas with slowly varying gray-level values.
First-Order Derivative
A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x+1) - f(x)
Second-Order Derivative
Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
First- and Second-Order Derivatives of f(x,y)
When we consider an image function of two variables, f(x,y), we deal with partial derivatives along the two spatial axes.
Gradient operator:
∇f = [ ∂f(x,y)/∂x , ∂f(x,y)/∂y ]
Laplacian operator (a linear operator):
∇²f = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²
Discrete Form of Laplacian
From
∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)
we obtain
∇²f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)
Resulting Laplacian Mask
Laplacian Mask Implemented with an Extension to the Diagonal Neighbors
Other Implementations of Laplacian Masks
Mask of Laplacian + Addition
g(x,y) = f(x,y) - [ f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y) ]
       = 5f(x,y) - [ f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) ]
The corresponding mask is
 0 -1  0
-1  5 -1
 0 -1  0
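For illustration, a minimal NumPy sketch of sharpening with this composite mask (edge replication at the borders and clipping to [0, 255] are my own assumptions):

```python
import numpy as np

def laplacian_sharpen(f):
    # composite mask [[0,-1,0],[-1,5,-1],[0,-1,0]]:
    # g = 5*f(x,y) - (sum of the four horizontal/vertical neighbours)
    f = f.astype(np.float64)
    fp = np.pad(f, 1, mode='edge')
    g = 5 * f - (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:])
    return np.clip(g, 0, 255).astype(np.uint8)
```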
Example
140
Note
g(x,y) = f(x,y) - ∇²f(x,y)  or  g(x,y) = f(x,y) + ∇²f(x,y), depending on the sign of the Laplacian mask used.
The composite sharpening masks decompose into an identity mask plus a Laplacian mask:
 0 -1  0     0  0  0      0 -1  0
-1  5 -1  =  0  1  0  +  -1  4 -1
 0 -1  0     0  0  0      0 -1  0

 0 -1  0     0  0  0      0 -1  0
-1  9 -1  =  0  1  0  +  -1  8 -1
 0 -1  0     0  0  0      0 -1  0
Unsharp Masking
f_s(x,y) = f(x,y) - f̄(x,y)
where f̄(x,y) is a blurred version of f(x,y):
sharpened image = original image - blurred image
High-Boost Filtering
f_hb(x,y) = A f(x,y) - f̄(x,y)
          = (A-1) f(x,y) + f(x,y) - f̄(x,y)
          = (A-1) f(x,y) + f_s(x,y)
A generalized form of unsharp masking, with A ≥ 1.
High-Boost Filtering
f_hb(x,y) = (A-1) f(x,y) + f_s(x,y)
If we use the Laplacian filter to create the sharpened image f_s(x,y) with addition of the original image:
f_s(x,y) = f(x,y) - ∇²f(x,y)   (center coefficient of the Laplacian mask negative)
f_s(x,y) = f(x,y) + ∇²f(x,y)   (center coefficient positive)
High-Boost Filtering
This yields
f_hb(x,y) = A f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
f_hb(x,y) = A f(x,y) + ∇²f(x,y)   if it is positive
with A ≥ 1.
If A = 1, it becomes the "standard" Laplacian sharpening.
Example
147
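A hedged sketch of high-boost filtering with the negative-center Laplacian (the value A = 1.2 and the border/clipping choices are assumptions for illustration):

```python
import numpy as np

def high_boost(f, A=1.2):
    # f_hb = A*f - Laplacian(f), using the 4-neighbour Laplacian (negative centre)
    f = f.astype(np.float64)
    fp = np.pad(f, 1, mode='edge')
    lap = (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:]) - 4 * f
    return np.clip(A * f - lap, 0, 255).astype(np.uint8)  # A = 1: standard Laplacian sharpening
```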
Gradient Operator
∇f = [Gx, Gy] = [ ∂f/∂x , ∂f/∂y ]
First derivatives are implemented using the magnitude of the gradient:
∇f = mag(∇f) = [ Gx² + Gy² ]^(1/2) = [ (∂f/∂x)² + (∂f/∂y)² ]^(1/2)
commonly approximated as
∇f ≈ |Gx| + |Gy|
The magnitude becomes nonlinear.
Gradient Masks
Consider the 3x3 neighborhood
z1 z2 z3
z4 z5 z6
z7 z8 z9
Simplest approximation (2x2):
Gx = (z8 - z5)  and  Gy = (z6 - z5)
∇f = [ Gx² + Gy² ]^(1/2) = [ (z8 - z5)² + (z6 - z5)² ]^(1/2)
∇f ≈ |z8 - z5| + |z6 - z5|
Gradient Masks
Roberts cross-gradient operators (2x2):
Gx = (z9 - z5)  and  Gy = (z8 - z6)
∇f = [ Gx² + Gy² ]^(1/2) = [ (z9 - z5)² + (z8 - z6)² ]^(1/2)
∇f ≈ |z9 - z5| + |z8 - z6|
Gradient Masks
Sobel operators (3x3):
Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
∇f ≈ |Gx| + |Gy|
The weight value 2 achieves some smoothing by giving more importance to the center point.
Note
the summation of coefficients in all
masks equals 0, indicating that they
would give a response of 0 in an area of
constant gray level.
152
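For illustration, a minimal NumPy sketch of the Sobel gradient magnitude using the |Gx| + |Gy| approximation (edge replication at the borders is my own assumption):

```python
import numpy as np

def sobel_gradient(f):
    # approximate gradient magnitude |Gx| + |Gy| with the Sobel masks
    f = f.astype(np.float64)
    fp = np.pad(f, 1, mode='edge')
    # Gx = (z7 + 2*z8 + z9) - (z1 + 2*z2 + z3): bottom row minus top row
    gx = (fp[2:, :-2] + 2 * fp[2:, 1:-1] + fp[2:, 2:]) \
       - (fp[:-2, :-2] + 2 * fp[:-2, 1:-1] + fp[:-2, 2:])
    # Gy = (z3 + 2*z6 + z9) - (z1 + 2*z4 + z7): right column minus left column
    gy = (fp[:-2, 2:] + 2 * fp[1:-1, 2:] + fp[2:, 2:]) \
       - (fp[:-2, :-2] + 2 * fp[1:-1, :-2] + fp[2:, :-2])
    return np.abs(gx) + np.abs(gy)
```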
Example
153
Example of Combining Spatial Enhancement Methods
We want to sharpen the original image and bring out more skeletal detail.
Problems: the narrow dynamic range of the gray levels and the high noise content make the image difficult to enhance.
Example of Combining Spatial
Enhancement Methods
solve :