
Image Enhancement in the

Spatial Domain
Principal Objective of Enhancement
 Enhancement refers to accentuation, or sharpening, of image features such as edges, boundaries, or contrast, to make the graphic display more useful for display and analysis.
 It does not increase the inherent information content in the data; it only increases the dynamic range of the chosen features.
2
Principal Objective of Enhancement
 Process an image so that the result will be more suitable than the original image for a specific application.
 Suitability depends on the application: a method that is quite useful for enhancing one image may not necessarily be the best approach for enhancing other images.
3
2 domains
 Spatial Domain : (image plane)
 Techniques are based on direct manipulation of
pixels in an image
 Frequency Domain :
 Techniques are based on modifying the Fourier
transform of an image
 There are some enhancement techniques based
on various combinations of methods from these
two categories.

4
Good images
 For human visual
 The visual evaluation of image quality is a highly
subjective process.
 It is hard to standardize the definition of a good
image.
 For machine perception
 The evaluation task is easier.
 A good image is one which gives the best machine
recognition results.
 A certain amount of trial and error usually is
required before a particular image
enhancement approach is selected. 5
Spatial Domain
 Procedures that operate
directly on pixels.
g(x,y) = T[f(x,y)]
where
 f(x,y) is the input image
 g(x,y) is the processed image
 T is an operator on f defined over some neighborhood of (x,y)
6
Mask/Filter
 The neighborhood of a point (x,y) can be defined by using a square/rectangular (commonly used) or circular subimage area centered at (x,y)
 The center of the subimage is moved from pixel to pixel, starting at the top left corner

7
Point Processing
 Neighborhood = 1x1 pixel
 g depends on only the value of f at (x,y)
 T = gray level (or intensity or mapping)
transformation function
s = T(r)
 Where

r = gray level of f(x,y)

s = gray level of g(x,y)

8
Contrast Stretching
 Produce higher contrast than the original by
 darkening the levels below m in the original image
 brightening the levels above m in the original image

9
Thresholding
 Produce a two-level
(binary) image

10
Mask Processing or Filter
 Neighborhood is bigger than 1x1 pixel
 Use a function of the values of f in a
predefined neighborhood of (x,y) to determine
the value of g at (x,y)
 The value of the mask coefficients determine
the nature of the process
 Used in techniques
 Image Sharpening
 Image Smoothing

11
3 basic gray-level
transformation functions
 Linear functions: negative and identity transformations
 Logarithmic functions: log and inverse-log transformations
 Power-law functions: nth power and nth root transformations
[Figure: plot of the basic transformation curves (negative, log, nth root, identity, nth power, inverse log); output gray level s vs. input gray level r]
12
Identity function
 Output intensities are identical to input intensities.
 Included in the graph only for completeness.
13
Image Negatives
 An image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, ...
 Negative transformation: s = L - 1 - r
 Reverses the intensity levels of an image.
 Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
14
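As a concrete illustration of the negative transformation s = L - 1 - r, here is a minimal NumPy sketch, assuming an 8-bit grayscale image stored as a uint8 array (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def negative(image: np.ndarray, L: int = 256) -> np.ndarray:
    """Apply the negative transformation s = L - 1 - r to a grayscale image."""
    return (L - 1 - image.astype(np.int32)).astype(np.uint8)

# Example: dark pixels become bright, revealing detail embedded in dark regions.
img = np.array([[0, 10, 200], [255, 128, 5]], dtype=np.uint8)
print(negative(img))   # [[255 245  55], [  0 127 250]]
```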
Example of Negative Image
Original mammogram showing a small lesion of a breast; the negative image gives a better view for analysis.
15
Log Transformations
s = c log(1 + r)
 c is a constant and r ≥ 0
 The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
 Used to expand the values of dark pixels in an image while compressing the higher-level values.
16
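A minimal sketch of the log transformation s = c log(1 + r). Choosing c so that the maximum input maps to the top of the 8-bit display range is an assumption for illustration; the slides only require c to be a constant:

```python
import numpy as np

def log_transform(image: np.ndarray) -> np.ndarray:
    """s = c log(1 + r), with c chosen so the output spans [0, 255]."""
    r = image.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)

# A Fourier-spectrum-like array with a huge dynamic range becomes displayable:
spectrum = np.array([[0.0, 10.0], [1e3, 1.5e6]])
print(log_transform(spectrum))  # small values are expanded, large ones compressed
```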
Log Transformations
 It compresses the dynamic range of images with large variations in pixel values
 Example of an image with a large dynamic range: a Fourier spectrum image
 It can have an intensity range from 0 to 10^6 or higher.
 Without such compression, the significant detail will be lost in the display.

17
Example of Logarithm Image
Fourier spectrum with range 0 to 1.5 x 10^6; result after applying the log transformation with c = 1, range 0 to 6.2.
18
Inverse Logarithm Transformations
 Does the opposite of the log transformation
 Used to expand the values of high pixels in an image while compressing the darker-level values.

19
Power-Law Transformations
s = c r^γ
 c and γ are positive constants
 Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
 c = γ = 1 gives the identity function
[Figure: plots of s = c r^γ for various values of γ (c = 1 in all cases); output gray level s vs. input gray level r]
20
Gamma correction
 Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5
 With γ = 2.5, the displayed picture becomes darker than intended.
 Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ), e.g. 1/γ = 1/2.5 = 0.4
21
Another example: MRI (panels a-d)
 (a) a magnetic resonance image of an upper thoracic human spine with a fracture dislocation and spinal cord impingement
 The picture is predominantly dark; an expansion of gray levels is desirable, which needs γ < 1
 (b) result after power-law transformation with γ = 0.6, c = 1
 (c) transformation with γ = 0.4 (best result)
 (d) transformation with γ = 0.3 (below an acceptable level)
22
Effect of decreasing gamma
 When the  is reduced too much, the
image begins to reduce contrast to the
point where the image started to have
very slight “wash-out” look, especially in
the background

23
Another example (panels a-d)
 (a) the image has a washed-out appearance; it needs a compression of gray levels, which needs γ > 1
 (b) result after power-law transformation with γ = 3.0 (suitable)
 (c) transformation with γ = 4.0 (suitable)
 (d) transformation with γ = 5.0 (high contrast; the image has areas that are too dark, and some detail is lost)
24
Piecewise-Linear
Transformation Functions
 Advantage:
 The form of piecewise functions can be
arbitrarily complex
 Disadvantage:
 Their specification requires considerably
more user input

25
Contrast Stretching
 increase the dynamic range of
the gray levels in the image
 (b) a low-contrast image: can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition
 (c) result of contrast
stretching: (r1,s1) = (rmin,0) and
(r2,s2) = (rmax,L-1)
 (d) result of thresholding

26
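A minimal piecewise contrast-stretching sketch for the common case above, (r1,s1) = (rmin, 0) and (r2,s2) = (rmax, L-1), i.e. stretching the occupied gray range to the full [0, L-1] range (the function name and the uint8 output are assumptions for illustration):

```python
import numpy as np

def stretch_contrast(image: np.ndarray, L: int = 256) -> np.ndarray:
    """Linearly map [rmin, rmax] in the input to [0, L-1] in the output."""
    r = image.astype(np.float64)
    rmin, rmax = r.min(), r.max()
    if rmax == rmin:                       # flat image: nothing to stretch
        return image.copy()
    s = (r - rmin) * (L - 1) / (rmax - rmin)
    return s.round().astype(np.uint8)

low_contrast = np.array([[100, 110], [120, 140]], dtype=np.uint8)
print(stretch_contrast(low_contrast))      # values now span 0..255
```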
Gray-level slicing
 Highlighting a specific
range of gray levels in an
image
 Display a high value of all
gray levels in the range of
interest and a low value for
all other gray levels
 (a) transformation highlights
range [A,B] of gray level and
reduces all others to a
constant level
 (b) transformation highlights
range [A,B] but preserves all
other levels

27
Bit-plane slicing
 Highlighting the contribution made to total image appearance by specific bits
 Suppose each pixel is represented by 8 bits (one byte), from bit-plane 7 (most significant) down to bit-plane 0 (least significant)
 Higher-order bits contain the majority of the visually significant data
 Useful for analyzing the relative importance played by each bit of the image
28
Example
 The (binary) image for
bit-plane 7 can be
obtained by processing
the input image with a
thresholding gray-level
transformation.
 Map all levels between 0
and 127 to 0
 Map all levels between 129
and 255 to 255

An 8-bit fractal image


29
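A minimal bit-plane slicing sketch for an 8-bit image; extracting plane 7 behaves like the thresholding just described, since bit 7 is 0 for the lower half of the gray scale and 1 for the upper half (the helper name is illustrative):

```python
import numpy as np

def bit_plane(image: np.ndarray, plane: int) -> np.ndarray:
    """Return bit-plane `plane` (0 = least significant, 7 = most significant)
    of an 8-bit image as a binary 0/255 image."""
    bits = (image >> plane) & 1
    return (bits * 255).astype(np.uint8)

img = np.array([[0, 127, 128, 255]], dtype=np.uint8)
print(bit_plane(img, 7))   # [[  0   0 255 255]] -- the most significant plane
```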
8 bit planes
[Figure: the eight bit planes of the fractal image, from bit-plane 7 (most significant) down to bit-plane 0 (least significant)]
30
Histogram Processing
 Histogram of a digital image with gray levels in
the range [0,L-1] is a discrete function
h(rk) = nk
 Where
 rk : the kth gray level
 nk : the number of pixels in the image having gray
level rk
 h(rk) : histogram of a digital image with gray levels rk

31
Normalized Histogram
 obtained by dividing each histogram value at gray level rk by the total number of pixels in the image, n:
p(rk) = nk / n
 For k = 0,1,…,L-1
 p(rk) gives an estimate of the probability of
occurrence of gray level rk
 The sum of all components of a normalized
histogram is equal to 1
32
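A minimal sketch of computing the histogram h(rk) = nk and the normalized histogram p(rk) = nk / n for an 8-bit image, using NumPy's bincount (names are illustrative):

```python
import numpy as np

def histogram(image: np.ndarray, L: int = 256):
    """Return h(rk) = nk and the normalized histogram p(rk) = nk / n."""
    h = np.bincount(image.ravel(), minlength=L)   # nk for k = 0..L-1
    p = h / image.size                            # p(rk), which sums to 1
    return h, p

img = np.array([[0, 0, 1], [1, 1, 255]], dtype=np.uint8)
h, p = histogram(img)
print(h[0], h[1], h[255])   # 2 3 1
print(p.sum())              # 1.0
```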
Histogram Processing
 Basic for numerous spatial domain
processing techniques
 Used effectively for image enhancement
 Information inherent in histograms also
is useful in image compression and
segmentation

33
Example
[Figure: images and their histograms, h(rk) or p(rk) plotted against rk]
Dark image
Components of histogram
are concentrated on the low
side of the gray scale.

Bright image
Components of histogram
are concentrated on the
high side of the gray scale.

34
Example
Low-contrast image
histogram is narrow
and centered toward
the middle of the
gray scale
High-contrast image
histogram covers broad
range of the gray scale
and the distribution of
pixels is not too far from
uniform, with very few
vertical lines being much
higher than the others
35
Histogram Equalization
 As the low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range will improve the quality of the image.
 We can do this by adjusting the probability density function of the original histogram of the image so that the probability spreads equally

36
Histogram transformation
s = T(r), where 0 ≤ r ≤ 1
 T(r) satisfies
 (a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
 (b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
[Figure: a transformation curve s = T(r), mapping rk to sk = T(rk)]
37
2 Conditions of T(r)
 Single-valued (one-to-one relationship) guarantees that the inverse transformation will exist
 The monotonicity condition preserves the increasing order from black to white in the output image, so it won't cause a negative image
 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels
 The inverse transformation from s back to r is r = T^(-1)(s), 0 ≤ s ≤ 1
38
Probability Density Function
 The gray levels in an image may be viewed
as random variables in the interval [0,1]
 PDF is one of the fundamental descriptors
of a random variable

39
Random Variables
 Random variables often are a source of
confusion when first encountered.
 This need not be so, as the concept of a
random variable is in principle quite
simple.

40
Random Variables
 A random variable, x, is a real-valued function
defined on the events of the sample space, S.
 In words, for each event in S, there is a real
number that is the corresponding value of the
random variable.
 Viewed yet another way, a random variable maps each event in S onto the real line.
 That is it. A simple, straightforward definition.

41
Random Variables
 Part of the confusion often found in connection with random variables is the fact that they are functions.
 The notation also is partly responsible for the problem.

42
Random Variables
 In other words, although typically the
notation used to denote a random
variable is as we have shown it here, x, or
some other appropriate variable,
 to be strictly formal, a random variable
should be written as a function x(·)
where the argument is a specific event
being considered.
43
Random Variables
 However, this is seldom done, and, in our
experience, trying to be formal by using
function notation complicates the issue
more than the clarity it introduces.
 Thus, we will opt for the less formal notation, with the warning that it must be kept clearly in mind that random variables are functions.
44
Random Variables
 Example:
 Consider the experiment of drawing a single
card from a standard deck of 52 cards.
 Suppose that we define the following events.
A: a heart; B: a spade; C: a club; and D: a
diamond, so that S = {A, B, C, D}.
 A random variable is easily defined by letting
x = 1 represent event A, x = 2 represent
event B, and so on.

45
Random Variables
 As a second illustration,
 consider the experiment of throwing a single die and
observing the value of the up-face.
 We can define a random variable as the numerical
outcome of the experiment (i.e., 1 through 6), but
there are many other possibilities.
 For example, a binary random variable could be
defined simply by letting x = 0 represent the event
that the outcome of throw is an even number and
x = 1 otherwise.

46
Random Variables
 Note the important fact in the examples just given that the probabilities of the events have not changed;
 all a random variable does is map events onto the real line.

47
Random Variables
 Thus far we have been concerned with
random variables whose values are
discrete.
 To handle continuous random variables
we need some additional tools.
 In the discrete case, the probabilities of
events are numbers between 0 and 1.

48
Random Variables
 When dealing with continuous quantities
(which are not denumerable) we can no
longer talk about the "probability of an
event" because that probability is zero.
 This is not as unfamiliar as it may seem.

49
Random Variables
 For example,
 given a continuous function, we know that the area of the function between two limits a and b is the integral from a to b of the function.
 However, the area at a point is zero, because the integral from, say, a to a is zero.
 We are dealing with the same concept in the case of continuous random variables.

50
Random Variables
 Thus, instead of talking about the probability of a
specific value, we talk about the probability that
the value of the random variable lies in a specified
range.
 In particular, we are interested in the probability
that the random variable is less than or equal to
(or, similarly, greater than or equal to) a specified
constant a.
 We write this as
F(a) = P(x ≤ a)
51
Random Variables
 If this function is given for all values of a (i.e., -∞ < a < ∞), then the values of random variable x have been defined.
 Function F is called the cumulative probability
distribution function or simply the cumulative
distribution function (cdf).
 The shortened term distribution function also
is used.

52
Random Variables
 Observe that the notation we have used makes
no distinction between a random variable and
the values it assumes.
 If confusion is likely to arise, we can use more
formal notation in which we let capital letters
denote the random variable and lowercase
letters denote its values.
 For example, the cdf using this notation is written as
F_X(x) = P(X ≤ x)
53
Random Variables
 When confusion is not likely, the cdf
often is written simply as F(x).
 This notation will be used in the following
discussion when speaking generally about
the cdf of a random variable.

54
Random Variables
 Due to the fact that it is a probability,
the cdf has the following properties:

1. F(-) = 0
2. F() = 1
3. 0  F(x)  1
4. F(x1)  F(x2) if x1 < x2
5. P(x1 < x  x2) = F(x2) – F(x1)
6. F(x+) = F(x),

where x+ = x + , with  being a positive,


infinitesimally small number. 55
Random Variables
The probability density function
(pdf or shortly called density function)
of random variable x is defined as the
derivative of the cdf:

p(x) = dF(x)/dx
56
Random Variables
The pdf satisfies the following (standard) properties:
1. p(x) ≥ 0 for all x
2. ∫ p(x) dx = 1, integrated over all x
3. F(x) = ∫ p(w) dw, integrated from -∞ to x
4. P(x1 < x ≤ x2) = ∫ p(x) dx, integrated from x1 to x2
57
Random Variables
 The preceding concepts are applicable to discrete
random variables.
 In this case, there is a finite number of events, and we talk about probabilities rather than probability density functions.
 Integrals are replaced by summations and,
sometimes, the random variables are subscripted.
 For example, in the case of a discrete variable
with N possible values we would denote the
probabilities by P(xi), i=1, 2,…, N.

58
Random Variables
 If a random variable x is transformed by a
monotonic transformation function T(x) to
produce a new random variable y,
 the probability density function of y can be
obtained from knowledge of T(x) and the
probability density function of x, as follows:
p_y(y) = p_x(x) |dx/dy|
where the vertical bars signify the absolute value.
59
Random Variables
 A function T(x) is monotonically
increasing if T(x1) < T(x2) for x1 < x2, and
 A function T(x) is monotonically
decreasing if T(x1) > T(x2) for x1 < x2.
 The preceding equation is valid if T(x) is
an increasing or decreasing monotonic
function.

60
Applied to Image
 Let
 pr(r) denote the PDF of random variable r
 ps (s) denote the PDF of random variable s
 If pr(r) and T(r) are known, and T^(-1)(s) satisfies condition (a), then ps(s) can be obtained using the formula:
ps(s) = pr(r) |dr/ds|
61
Applied to Image

The PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.

62
Transformation function
 A transformation function is a cumulative
distribution function (CDF) of random
variable r :
s = T(r) = ∫₀^r pr(w) dw
where w is a dummy variable of integration
Note: T(r) depends on pr(r)
63
Cumulative
Distribution function
 The CDF is the integral of a probability density function (which is always positive), i.e. the area under the function
 Thus, the CDF is always single-valued and monotonically increasing
 Thus, CDF satisfies the condition (a)
 We can use CDF as a transformation
function

64
Finding ps(s) from a given T(r)

ds/dr = dT(r)/dr = d/dr [ ∫₀^r pr(w) dw ] = pr(r)

Substituting dr/ds = 1/pr(r) into ps(s) = pr(r) |dr/ds| yields

ps(s) = pr(r) · 1/pr(r) = 1, where 0 ≤ s ≤ 1
65
ps(s)
 As ps(s) is a probability function, it must
be zero outside the interval [0,1] in this
case because its integral over all values
of s must equal 1.
 Called ps(s) as a uniform probability
density function
 ps(s) is always uniform, independent of the form of pr(r)

66
s = T(r) = ∫₀^r pr(w) dw
yields a random variable s characterized by a uniform probability density function, ps(s) = 1 for 0 ≤ s ≤ 1.
67
Discrete transformation function
 The probability of occurrence of gray level rk in an image is approximated by
pr(rk) = nk / n,   where k = 0, 1, ..., L-1
 The discrete version of the transformation is
sk = T(rk) = Σ (j=0 to k) pr(rj) = Σ (j=0 to k) nj / n,   where k = 0, 1, ..., L-1
68
Histogram Equalization
 Thus, an output image is obtained by mapping
each pixel with level rk in the input image into a
corresponding pixel with level sk in the output
image
 In discrete space, it cannot be proved in
general that this discrete transformation will
produce the discrete equivalent of a uniform
probability density function, which would be a
uniform histogram
69
Example
before after Histogram
equalization

70
Example
before after Histogram
equalization

The quality is not improved much because the original image already has a broad gray-level scale
71
Example
4x4 image, gray scale = [0, 9]:
2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4
[Figure: histogram of the image, number of pixels vs. gray level 0-9]
72
Gray level (j):        0    1    2      3      4      5      6      7      8      9
No. of pixels (nj):    0    0    6      5      4      1      0      0      0      0
Running sum Σ nj:      0    0    6      11     15     16     16     16     16     16
s = Σ nj / n:          0    0    6/16   11/16  15/16  16/16  16/16  16/16  16/16  16/16
s × 9 (rounded):       0    0    3.3→3  6.1→6  8.4→8  9      9      9      9      9
Example (continued)
Output image after histogram equalization, gray scale = [0, 9]:
3 6 6 3
8 3 8 6
6 3 6 9
3 8 3 8
[Figure: histogram of the equalized image, number of pixels vs. gray level 0-9]
74
Note
 It is clearly seen that
 Histogram equalization distributes the gray levels so as to reach the maximum gray level (white), because the cumulative distribution function equals 1 when 0 ≤ r ≤ L-1
 If the cumulative counts of gray levels are only slightly different, they will be mapped to slightly different or even the same gray levels, since we have to round the processed gray level of the output image to an integer
 Thus the discrete transformation function can't guarantee a one-to-one mapping relationship
75
Histogram Matching
(Specification)
 Histogram equalization has a disadvantage
which is that it can generate only one type
of output image.
 With Histogram Specification, we can
specify the shape of the histogram that
we wish the output image to have.
 It doesn’t have to be a uniform histogram
76
Consider the continuous domain
 Let pr(r) denote the continuous probability density function of the gray levels of the input image, r
 Let pz(z) denote the desired (specified) continuous probability density function of the gray levels of the output image, z
 Let s be a random variable with the property
s = T(r) = ∫₀^r pr(w) dw    (histogram equalization)
where w is a dummy variable of integration
77
Next, we define a random variable z with the property
G(z) = ∫₀^z pz(t) dt = s    (histogram equalization)
where t is a dummy variable of integration.
Thus s = T(r) = G(z).
Therefore, z must satisfy the condition
z = G^(-1)(s) = G^(-1)[T(r)]
Assuming G^(-1) exists and satisfies conditions (a) and (b), we can map an input gray level r to an output gray level z.
78
Procedure Conclusion
1. Obtain the transformation function T(r) by calculating the histogram equalization of the input image
s = T(r) = ∫₀^r pr(w) dw
2. Obtain the transformation function G(z) by calculating the histogram equalization of the desired density function
G(z) = ∫₀^z pz(t) dt = s
79
Procedure Conclusion
3. Obtain the inverse transformation function G^(-1):
z = G^(-1)(s) = G^(-1)[T(r)]
4. Obtain the output image by applying the processed gray levels from the inverse transformation function to all the pixels in the input image
80
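A minimal discrete histogram-specification sketch following the procedure above: equalize the input (T), equalize the desired histogram (G), and map each level through G^(-1)[T(r)]. Inverting G with a nearest-value search is an assumption; the slides do not prescribe how G^(-1) is computed in the discrete case:

```python
import numpy as np

def match_histogram(image: np.ndarray, desired_hist: np.ndarray) -> np.ndarray:
    """Map gray levels so the output histogram approximates desired_hist."""
    L = desired_hist.size
    n = image.size
    T = np.cumsum(np.bincount(image.ravel(), minlength=L)) / n   # T(rk) = sk
    G = np.cumsum(desired_hist) / desired_hist.sum()             # G(zk)
    # z = G^-1(s): for each sk, pick the level zk whose G(zk) is closest to sk
    mapping = np.array([np.argmin(np.abs(G - s)) for s in T], dtype=np.uint8)
    return mapping[image]

img = np.array([[10, 10, 20], [20, 20, 30]], dtype=np.uint8)
desired = np.zeros(256); desired[100:200] = 1     # push levels toward 100-199
print(np.unique(match_histogram(img, desired)))
```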
Example
Assume an image has a gray-level probability density function pr(r) as shown:
pr(r) = -2r + 2 for 0 ≤ r ≤ 1, and 0 elsewhere
so that ∫₀^1 pr(w) dw = 1
[Figure: plot of pr(r), a line falling from pr(0) = 2 to pr(1) = 0]
81
Example
We would like to apply histogram specification with the desired probability density function pz(z) as shown:
pz(z) = 2z for 0 ≤ z ≤ 1, and 0 elsewhere
so that ∫₀^1 pz(w) dw = 1
[Figure: plot of pz(z), a line rising from pz(0) = 0 to pz(1) = 2]
82
Step 1:
Obtain the transformation function T(r):
s = T(r) = ∫₀^r pr(w) dw = ∫₀^r (-2w + 2) dw = [-w² + 2w]₀^r = -r² + 2r
This is a one-to-one mapping function.
83
Step 2:
Obtain the transformation function G(z):
G(z) = ∫₀^z 2w dw = z²
84
Step 3:
Obtain the inverse transformation function G^(-1):
G(z) = T(r)
z² = -r² + 2r
z = √(2r - r²)
We can guarantee that 0 ≤ z ≤ 1 when 0 ≤ r ≤ 1.
85
Discrete formulation
sk = T(rk) = Σ (j=0 to k) pr(rj) = Σ (j=0 to k) nj / n,   k = 0, 1, 2, ..., L-1
G(zk) = Σ (i=0 to k) pz(zi) = sk,   k = 0, 1, 2, ..., L-1
zk = G^(-1)[T(rk)] = G^(-1)(sk),   k = 0, 1, 2, ..., L-1
86
Example
 The image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale
Image of a Mars moon
87
Image Equalization
[Figure: result image after histogram equalization, the transformation function for histogram equalization, and the histogram of the result image]
Histogram equalization doesn't make the result image look better than the original image. Considering the histogram of the result image, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image. As a consequence, the output image is light and has a washed-out appearance.
88
Solve the problem
 Since the problem with the transformation function of the histogram equalization was caused by a large concentration of pixels in the original image with levels near 0,
 a reasonable approach is to modify the histogram of that image so that it does not have this property
[Figure: the histogram equalization transformation function vs. the histogram specification transformation function]
89
Histogram Specification
 (1) the transformation function G(z) obtained from
G(zk) = Σ (i=0 to k) pz(zi) = sk,   k = 0, 1, 2, ..., L-1
 (2) the inverse transformation G^(-1)(s)
90
Result image and its histogram
[Figure: original image, the result after applying the histogram transformation, and the output image's histogram]
Notice that the output histogram's low end has shifted right toward the lighter region of the gray scale, as desired.
91
Note
 Histogram specification is a trial-and-
error process
 There are no rules for specifying
histograms, and one must resort to
analysis on a case-by-case basis for any
given enhancement task.

92
Note
 Histogram processing methods are global
processing, in the sense that pixels are
modified by a transformation function
based on the gray-level content of an
entire image.
 Sometimes, we may need to enhance
details over small areas in an image,
which is called a local enhancement.
93
Local Enhancement
a) Original image (slightly blurred to reduce noise)
b) global histogram equalization (enhances noise and slightly increases contrast, but the structure is not changed)
c) local histogram equalization using a 7x7 neighborhood (reveals the small squares inside the larger ones of the original image)
 define a square or rectangular neighborhood and move the center of this area from pixel to pixel.
 at each location, the histogram of the points in the neighborhood is computed, and either a histogram equalization or a histogram specification transformation function is obtained.
 another approach used to reduce computation is to utilize nonoverlapping regions, but it usually produces an undesirable checkerboard effect.
94
Explain the result in c)
 Basically, the original image consists of many
small squares inside the larger dark ones.
 However, the small squares were too close in
gray level to the larger ones, and their sizes
were too small to influence global histogram
equalization significantly.
 So, when we use the local enhancement
technique, it reveals the small areas.
 Note also the finer noise texture that results from the local processing using relatively small neighborhoods.
95
Enhancement using Arithmetic/
Logic Operations
 Arithmetic/logic operations are performed on a pixel-by-pixel basis between two or more images
 except the NOT operation, which is performed on a single image

96
Logic Operations
 When logic operations are performed on gray-level images, the pixel values are processed as binary numbers
 light represents a binary 1, and dark represents a binary 0
 NOT operation = negative transformation

97
Example of AND Operation

original image; AND image mask; result of the AND operation
98
Example of OR Operation

original image; OR image mask; result of the OR operation
99
Image Subtraction

g(x,y) = f(x,y) – h(x,y)

 enhancement of the differences between


images

100
a b
c d
Image Subtraction
 a). original fractal image
 b). result of setting the four
lower-order bit planes to zero

refer to the bit-plane slicing

the higher planes contribute
significant detail

the lower planes contribute more
to fine detail

 image b) is nearly identical visually to image a), with a very slight drop in overall contrast due to less variability of the gray-level values in the image.
 c). difference between a). and b).
(nearly black)
 d). histogram equalization of c).
(perform contrast stretching
transformation)

101
Mask mode radiography
 h(x,y) is the mask, an X-ray image of a region of a patient's body captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source
 f(x,y) is an X-ray image taken after injecting a contrast medium (iodine) into the patient's bloodstream
 images are captured at TV rates, so the doctor can see how the medium propagates through the various arteries in the area being observed, in a movie-showing mode
[Figure: mask image; an image (taken after injection of the contrast medium) with the mask subtracted out]
Note:
• the background is dark because it doesn't change much in both images (the effect of subtraction)
• the difference area is bright because it has a big change
102
Note
 We may have to adjust the gray scale of the subtracted image to [0, 255] (if 8 bits are used)
 first, find the minimum gray value of the subtracted image
 second, find the maximum gray value of the subtracted image
 set the minimum value to zero and the maximum to 255
 the rest are scaled into [0, 255] by multiplying each value by 255/max
 Subtraction is also used in segmentation of moving pictures to track the changes
 after subtracting the sequenced images, what is left should be the moving elements in the image, plus noise
103


Image Averaging
 consider a noisy image g(x,y) formed by the addition of noise η(x,y) to an original image f(x,y):
g(x,y) = f(x,y) + η(x,y)
104
Image Averaging
 if the noise has zero mean and is uncorrelated, and ḡ(x,y) is the image formed by averaging K different noisy images, then
ḡ(x,y) = (1/K) Σ (i=1 to K) gi(x,y)
105
Image Averaging
 then
σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
where σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η
 as K increases, the variability (noise) of the pixel value at each location (x,y) decreases.
106
Image Averaging
 thus
E{ḡ(x,y)} = f(x,y)
where E{ḡ(x,y)} is the expected value of ḡ (the output after averaging), which equals the original image f(x,y)
107
Image Averaging
 Note: the images gi(x,y) (noisy images)
must be registered (aligned) in order to
avoid the introduction of blurring and
other artifacts in the output image.

108
a b
c d
Example e f

 a) original image
 b) image corrupted by
additive Gaussian noise
with zero mean and a
standard deviation of 64
gray levels.
 c). -f). results of
averaging K = 8, 16, 64
and 128 noisy images
109
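A minimal sketch of noise reduction by image averaging: simulate K registered noisy copies of the same scene with zero-mean Gaussian noise (standard deviation 64, as in the example above) and average them. The scene, K, and the random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                 # noise-free original image
K = 64                                       # number of noisy observations

noisy = [f + rng.normal(0.0, 64.0, f.shape) for _ in range(K)]  # g_i = f + eta_i
g_bar = np.mean(noisy, axis=0)               # averaged image

print(np.std(noisy[0] - f))   # roughly 64: the noise level of a single image
print(np.std(g_bar - f))      # roughly 64 / sqrt(K) = 8: variance reduced by 1/K
```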
Spatial Filtering
 use filter (can also be called as
mask/kernel/template or window)
 the values in a filter subimage are referred to as coefficients, rather than pixels.
 our focus will be on masks of odd sizes,
e.g. 3x3, 5x5,…
110
Spatial Filtering Process
 simply move the filter mask from point to
point in an image.
 at each point (x,y), the response of the
filter at that point is calculated using a
predefined relationship.
R w1 z1  w2 z 2  ...  wmn z mn
mn
 wi zi
i i
111
Linear Filtering
 Linear filtering of an image f of size M×N with a filter mask of size m×n is given by the expression
g(x,y) = Σ (s=-a to a) Σ (t=-b to b) w(s,t) f(x+s, y+t)
where a = (m-1)/2 and b = (n-1)/2
 To generate a complete filtered image, this equation must be applied for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1
112
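A minimal direct implementation of the linear filtering expression above (correlation of an m×n mask with the image). Zero padding at the borders is an assumption for illustration, since the slides do not specify how the image edges are handled:

```python
import numpy as np

def linear_filter(f: np.ndarray, w: np.ndarray) -> np.ndarray:
    """g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t), with zero padding at the edges."""
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode="constant")
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])   # mask centered at (x,y)
    return g

box = np.ones((3, 3)) / 9.0                  # 3x3 averaging (box) mask
img = np.zeros((5, 5)); img[2, 2] = 9.0
print(linear_filter(img, box))               # the single bright pixel is spread out
```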
Smoothing Spatial Filters
 used for blurring and for noise reduction
 blurring is used in preprocessing steps,
such as
 removal of small details from an image prior
to object extraction
 bridging of small gaps in lines or curves
 noise reduction can be accomplished by
blurring with a linear filter and also by a
nonlinear filter
113
Smoothing Linear Filters
 output is simply the average of the pixels
contained in the neighborhood of the filter
mask.
 called averaging filters or lowpass filters.

114
Smoothing Linear Filters
 replacing the value of every pixel in an image
by the average of the gray levels in the
neighborhood will reduce the “sharp”
transitions in gray levels.
 sharp transitions
 random noise in the image
 edges of objects in the image
 thus, smoothing can reduce noises (desirable)
and blur edges (undesirable)

115
3x3 Smoothing Linear Filters
[Figure: a 3x3 box filter (all coefficients 1, normalized by 1/9) and a 3x3 weighted-average filter (normalized by 1/16)]
In the weighted average, the center is the most important and other pixels are weighted inversely as a function of their distance from the center of the mask.
116
Weighted average filter
 the basic strategy behind weighting the
center point the highest and then
reducing the value of the coefficients as
a function of increasing distance from
the origin is simply an attempt to
reduce blurring in the smoothing
process.

117
General form : smoothing mask
 filter of size m×n (m and n odd)
g(x,y) = [ Σ (s=-a to a) Σ (t=-b to b) w(s,t) f(x+s, y+t) ] / [ Σ (s=-a to a) Σ (t=-b to b) w(s,t) ]
where the denominator is the summation of all the coefficients of the mask
118
a b
c d
Example e f

 a). original image 500x500 pixel


 b). - f). results of smoothing
with square averaging filter
masks of size n = 3, 5, 9, 15 and
35, respectively.
 Note:
 big mask is used to eliminate small
objects from an image.
 the size of the mask establishes
the relative size of the objects
that will be blended with the
background.
119
Example

original image; result after smoothing with a 15x15 averaging mask; result of thresholding
After smoothing and thresholding, what remains are the largest and brightest objects in the image.
120
Order-Statistics Filters
(Nonlinear Filters)
 the response is based on ordering
(ranking) the pixels contained in the
image area encompassed by the filter
 example
 median filter: R = median{zk | k = 1, 2, ..., n×n}
 max filter: R = max{zk | k = 1, 2, ..., n×n}
 min filter: R = min{zk | k = 1, 2, ..., n×n}
 note: n×n is the size of the mask

121
Median Filters
 replaces the value of a pixel by the median of
the gray levels in the neighborhood of that
pixel (the original value of the pixel is included
in the computation of the median)
 quite popular because, for certain types of random noise (impulse noise, i.e. salt-and-pepper noise), they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.

122
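A minimal 3x3 median-filter sketch. Replicating edge pixels for border handling is an assumption; the key point is that an isolated impulse-noise pixel is replaced by the median of its neighborhood:

```python
import numpy as np

def median_filter3(f: np.ndarray) -> np.ndarray:
    """Replace each pixel by the median of its 3x3 neighborhood."""
    fp = np.pad(f, 1, mode="edge")
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(fp[x:x + 3, y:y + 3])
    return g

img = np.full((5, 5), 50, dtype=np.uint8)
img[2, 2] = 255                      # a single salt-noise pixel
print(median_filter3(img)[2, 2])     # 50: the impulse is removed
```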
Median Filters
 forces the points with distinct gray levels to
be more like their neighbors.
 isolated clusters of pixels that are light or
dark with respect to their neighbors, and
whose area is less than n2/2 (one-half the
filter area), are eliminated by an n x n median
filter.
 eliminated = forced to have the value equal the
median intensity of the neighbors.
 larger clusters are affected considerably less
123
Example : Median Filters

124
Sharpening Spatial Filters
 to highlight fine detail in an image
 or to enhance detail that has been
blurred, either in error or as a natural
effect of a particular method of image
acquisition.

125
Blurring vs. Sharpening
 as we know, blurring can be done in the spatial domain by pixel averaging in a neighborhood
 since averaging is analogous to integration
 thus, we can guess that the sharpening
must be accomplished by spatial
differentiation.

126
Derivative operator
 the strength of the response of a derivative
operator is proportional to the degree of
discontinuity of the image at the point at which
the operator is applied.
 thus, image differentiation
 enhances edges and other discontinuities (noise)
 deemphasizes area with slowly varying gray-level
values.

127
First-order derivative
 a basic definition of the first-order
derivative of a one-dimensional function
f(x) is the difference

∂f/∂x = f(x+1) - f(x)

128
Second-order derivative
 similarly, we define the second-order
derivative of a one-dimensional function
f(x) is the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

129
First and Second-order
derivative of f(x,y)
 when we consider an image function of
two variables, f(x,y), at which time we
will dealing with partial derivatives along
the two spatial axes.
f ( x, y ) f ( x, y ) f ( x, y )
Gradient operator f   
xy x y
Laplacian operator 2 2
2  f ( x, y )  f ( x, y )
(linear operator)  f  2
 2
x y 130
Discrete Form of Laplacian
From
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)
we obtain
∇²f = [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)]
131
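A minimal sketch of the discrete Laplacian above and the corresponding sharpening step g = f - ∇²f (the form used when the mask has a negative center coefficient). Zero padding at the borders is an assumption for illustration:

```python
import numpy as np

def laplacian(f: np.ndarray) -> np.ndarray:
    """4-neighbor Laplacian: f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)-4f(x,y)."""
    fp = np.pad(f.astype(np.float64), 1, mode="constant")
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] +
            fp[1:-1, 2:] + fp[1:-1, :-2] - 4.0 * fp[1:-1, 1:-1])

def sharpen(f: np.ndarray) -> np.ndarray:
    """g(x,y) = f(x,y) - Laplacian(f), clipped back to the 8-bit range."""
    g = f.astype(np.float64) - laplacian(f)
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.zeros((5, 5), dtype=np.uint8); img[:, 2:] = 100   # a vertical edge
print(sharpen(img))   # the edge is accentuated relative to the flat regions
```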
Resulting Laplacian mask:
0  1  0
1 -4  1
0  1  0
132
Laplacian mask implemented with an extension to the diagonal neighbors:
1  1  1
1 -8  1
1  1  1
133
Other implementations of Laplacian masks
[Figure: Laplacian masks with the signs of all coefficients reversed, i.e. with a positive center coefficient]
These give the same result, but we have to keep the sign of the center coefficient in mind when combining (adding / subtracting) a Laplacian-filtered image with another image.
134
Effect of Laplacian Operator
 as it is a derivative operator,
 it highlights gray-level discontinuities in an
image
 it deemphasizes regions with slowly varying
gray levels
 tends to produce images that have
 grayish edge lines and other discontinuities,
all superimposed on a dark,
 featureless background.
135
Correcting the effect of the featureless background
 easily done by adding the original and Laplacian images
 be careful with which Laplacian filter is used:
g(x,y) = f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
g(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
136
Example
 a). image of the North
pole of the moon
 b). Laplacian-filtered
image with
1 1 1
1 -8 1
1 1 1

 c). Laplacian image scaled


for display purposes
 d). image enhanced by
addition with original
image
137
Mask of Laplacian + addition
 to simplify the computation, we can create a single mask which does both operations: the Laplacian filter and the addition of the original image.

138
Mask of Laplacian + addition
g ( x, y )  f ( x, y )  [ f ( x  1, y )  f ( x  1, y )
 f ( x, y  1)  f ( x, y  1)  4 f ( x, y )]
5 f ( x, y )  [ f ( x  1, y )  f ( x  1, y )
 f ( x, y  1)  f ( x, y  1)]

0 -1 0
-1 5 -1
0 -1 0
139
Example

140
Note
g(x,y) = f(x,y) - ∇²f(x,y) or g(x,y) = f(x,y) + ∇²f(x,y), depending on the sign of the center coefficient, which leads to the composite masks:

 0 -1  0     0  0  0      0 -1  0
-1  5 -1  =  0  1  0  +  -1  4 -1
 0 -1  0     0  0  0      0 -1  0

-1 -1 -1     0  0  0     -1 -1 -1
-1  9 -1  =  0  1  0  +  -1  8 -1
-1 -1 -1     0  0  0     -1 -1 -1
141
Unsharp masking
fs(x,y) = f(x,y) - f̄(x,y)
sharpened image = original image - blurred image
 subtracting a blurred version of an image (f̄) from the original produces a sharpened output image.
142
High-boost filtering
fhb(x,y) = A·f(x,y) - f̄(x,y)
fhb(x,y) = (A-1)·f(x,y) + f(x,y) - f̄(x,y) = (A-1)·f(x,y) + fs(x,y)
 generalized form of unsharp masking
 A ≥ 1
143
High-boost filtering
fhb(x,y) = (A-1)·f(x,y) + fs(x,y)
 if we use the Laplacian filter to create the sharpened image fs(x,y) with addition of the original image:
fs(x,y) = f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
fs(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
144
High-boost filtering
 this yields
fhb(x,y) = A·f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
fhb(x,y) = A·f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
145
High-boost Masks
[Figure: high-boost masks, a 4-neighbor mask with center coefficient A+4 and an 8-neighbor mask with center coefficient A+8]
 A ≥ 1
 if A = 1, it becomes "standard" Laplacian sharpening
146
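A minimal high-boost filtering sketch using the Laplacian-based form fhb = A·f - ∇²f (the negative-center convention above). Edge-replicated borders and the value of A are assumptions for illustration:

```python
import numpy as np

def high_boost(f: np.ndarray, A: float = 1.5) -> np.ndarray:
    """f_hb = A*f - Laplacian(f), i.e. the composite mask with center A+4."""
    fp = np.pad(f.astype(np.float64), 1, mode="edge")
    lap = (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2]
           - 4.0 * fp[1:-1, 1:-1])
    fhb = A * f.astype(np.float64) - lap
    return np.clip(fhb, 0, 255).astype(np.uint8)

# With A = 1 this reduces to standard Laplacian sharpening; larger A keeps more
# of the (possibly dark) original image in the result.
img = np.zeros((5, 5), dtype=np.uint8); img[:, 2:] = 100
print(high_boost(img, A=1.0))
print(high_boost(img, A=2.0))
```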
Example

147
Gradient Operator
∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
 first derivatives are implemented using the magnitude of the gradient:
∇f ≈ mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
 commonly approximated as
∇f ≈ |Gx| + |Gy|
(the magnitude becomes nonlinear)
148
Neighborhood notation:
z1 z2 z3
z4 z5 z6
z7 z8 z9
Gradient Mask
 simplest approximation, 2x2:
Gx = (z8 - z5) and Gy = (z6 - z5)
∇f = [Gx² + Gy²]^(1/2) = [(z8 - z5)² + (z6 - z5)²]^(1/2)
∇f ≈ |z8 - z5| + |z6 - z5|
149
Gradient Mask
 Roberts cross-gradient operators, 2x2:
Gx = (z9 - z5) and Gy = (z8 - z6)
∇f = [Gx² + Gy²]^(1/2) = [(z9 - z5)² + (z8 - z6)²]^(1/2)
∇f ≈ |z9 - z5| + |z8 - z6|
150
Gradient Mask
 Sobel operators, 3x3:
Gx = (z7 + 2·z8 + z9) - (z1 + 2·z2 + z3)
Gy = (z3 + 2·z6 + z9) - (z1 + 2·z4 + z7)
∇f ≈ |Gx| + |Gy|
the weight value 2 is used to achieve some smoothing by giving more importance to the center point
151
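A minimal Sobel gradient-magnitude sketch using the |Gx| + |Gy| approximation above (edge-replicated borders are an assumption; the z-labels in the comments follow the neighborhood notation from the slides):

```python
import numpy as np

def sobel_magnitude(f: np.ndarray) -> np.ndarray:
    """Approximate gradient magnitude |Gx| + |Gy| with the Sobel operators."""
    fp = np.pad(f.astype(np.float64), 1, mode="edge")
    # Gx = (z7 + 2*z8 + z9) - (z1 + 2*z2 + z3): bottom row minus top row
    gx = ((fp[2:, :-2] + 2 * fp[2:, 1:-1] + fp[2:, 2:]) -
          (fp[:-2, :-2] + 2 * fp[:-2, 1:-1] + fp[:-2, 2:]))
    # Gy = (z3 + 2*z6 + z9) - (z1 + 2*z4 + z7): right column minus left column
    gy = ((fp[:-2, 2:] + 2 * fp[1:-1, 2:] + fp[2:, 2:]) -
          (fp[:-2, :-2] + 2 * fp[1:-1, :-2] + fp[2:, :-2]))
    return np.abs(gx) + np.abs(gy)

img = np.zeros((5, 5)); img[:, 2:] = 100.0      # a vertical edge
print(sobel_magnitude(img))                      # strong response along the edge
```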
Note
 the summation of coefficients in all
masks equals 0, indicating that they
would give a response of 0 in an area of
constant gray level.

152
Example

153
Example of Combining Spatial
Enhancement Methods
 want to sharpen the
original image and bring
out more skeletal
detail.
 problems: narrow
dynamic range of gray
level and high noise
content makes the
image difficult to
enhance

154
Example of Combining Spatial
Enhancement Methods
 solve :

1. Laplacian to highlight fine detail


2. gradient to enhance prominent
edges
3. gray-level transformation to
increase the dynamic range of
gray levels
155