
Digital Image Processing

Introduction
What is an image?
An image may be defined as a two-dimensional function, f (x, y), where x and y are spatial (plane)
coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of
the image at that point.
A digital image is composed of a finite number of elements, each of which has a particular location and value.
What is an image?
- The image data structure is a 2D array of pixel values.
- Pixel values are gray levels in the range 0-255 or RGB colors.
- Array values can be any data type (bit, byte, int, float, double, etc.)
Examples of Digital Images
a) Natural landscape

b) Synthetically generated scene

c) Poster graphic

d) Computer screenshot

e) Black and white illustration

f) Barcode

g) Fingerprint

h) X‐ray

i) Microscope slide

j) Satellite Image

k) Radar image

l) Astronomical object
Relationship with other Fields
● Computer vision (AI): aims to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs.
● Image analysis: its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects).
● Image processing: applies operations on an image that result in a new image.
Key Stages in Digital Image Processing
● Image acquisition: could be as simple as being given an image that is already in digital form.
● Image enhancement: the process of manipulating an image so the result is more suitable than the original for a specific application.
● Image restoration: techniques tend to be based on mathematical or probabilistic models of image degradation.
● Morphological processing: deals with tools for extracting image components that are useful in the representation and description of shape.
● Segmentation: partitions an image into its constituent parts or objects.
● Feature extraction and description: assigns quantitative attributes to the detected features.
● Object recognition: the process that assigns a label (e.g., “vehicle”) to an object based on its feature descriptors.
● Image compression: techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.
● Color image processing: fundamental concepts in color models and basic color processing in a digital domain.
Digital Image Fundamentals
A SIMPLE IMAGE FORMATION MODEL
Images are two-dimensional functions of the form f(x, y). The value of f at spatial coordinates (x, y) is a scalar quantity whose physical meaning is determined by the source of the image, and whose values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves). As a consequence, f(x, y) must be nonnegative and finite.

f(x,y) = i(x,y) r(x,y)

where i(x,y) is the illumination incident on the scene and r(x,y) is the reflectance of the objects in the scene.


Image sampling and quantization
Image sampling and Quantization
To digitize an image, we have to sample the function in both coordinates and also in
amplitude. Digitizing the coordinate values is called sampling. Digitizing the amplitude values
is called quantization.
Digital Image representation

We assume that the discrete levels are equally spaced and that they are integers in the range [0, L-1].

k is the number of bits assigned to each pixel, so the number of gray levels is L = 2^k.

b is the total number of bits needed to store an M × N image: b = M × N × k.
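As a small illustration (not from the slides), the storage formula can be evaluated directly; the 1024 × 1024, 8-bit image used here is an assumed example.

```python
# Assumed example: storage needed for an M x N image with k bits per pixel.
M, N, k = 1024, 1024, 8          # hypothetical image dimensions and bit depth
L = 2 ** k                       # number of gray levels, L = 2^k
b = M * N * k                    # total bits needed to store the image
print(f"Gray levels L = {L}, storage b = {b} bits = {b // 8} bytes")
```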


Sensing elements
Spatial resolution
Spatial resolution can be stated in several ways; the most common are line pairs per unit distance and dots (pixels) per unit distance.

Dots per unit distance is a measure of image resolution used in the printing and publishing industry.

Example: dots per inch (dpi).

In computer vision, resolution is usually stated as the total pixel count of the image, e.g., 5 megapixels.
Intensity resolution
Intensity resolution is the number of intensity levels used to represent the image.

The more intensity levels used, the finer the level of detail discernible in an image.

Intensity resolution is usually given in terms of the number of bits used to store each intensity level.
Intensity resolution

Using an insufficient number of intensity levels (fewer than about 16) in smooth areas of a digital image causes false contouring.
Relationships between pixels

(a) An arrangement of pixels. (b) Pixels that are 8-adjacent. (c) m-adjacency.

m-connectivity eliminates the multiple-path connections that arise in 8-connectivity.


Distance measure
The Euclidean distance between p = (x, y) and q = (u, v) is defined as:
De(p, q) = [(x - u)^2 + (y - v)^2]^(1/2)

The D4 distance (called the city-block distance) between p and q is defined as:
D4(p, q) = |x - u| + |y - v|

The D8 distance (called the chessboard distance) between p and q is defined as:
D8(p, q) = max(|x - u|, |y - v|)

Distance measure
[Figure: example 5×5 grids of pixel values.]
Arithmetic operations
● Arithmetic operations are used extensively in most branches of image processing.
● Arithmetic operations for two pixels p and q:
○ Addition: p + q, used in image averaging to reduce noise.
○ Subtraction: p - q, a basic tool in medical imaging.
○ Multiplication: p × q
○ Division: p / q
● Arithmetic operations on entire images are carried out pixel by pixel.
Logic operations
● AND: p AND q (p·q)
● OR: p OR q (p+q)
● COMPLEMENT: NOT q (q')
● Logic operations apply only to binary images.
● Arithmetic operations apply to multivalued pixels.
● Logic operations are used for tasks such as masking, feature detection, and shape analysis.
● Logic operations are performed pixel by pixel.
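A short sketch of the logic operations on binary images, using NumPy boolean arrays; the two small 3×3 masks are made up for illustration.

```python
import numpy as np

# Hypothetical 3x3 binary images (True = foreground).
p = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 1]], dtype=bool)
q = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]], dtype=bool)

print(p & q)    # AND: p AND q, useful for masking
print(p | q)    # OR:  p OR q
print(~q)       # COMPLEMENT: NOT q
```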
Mask operations
Besides pixel-by-pixel processing on entire images, arithmetic and logic operations are used in neighborhood-oriented operations.

Let the value assigned to a pixel be a function of its gray level and the gray level of
its neighbors.
Mask operations
Replace the gray value of pixel Z5 with the average gray value of its neighborhood within a 3×3 mask.

[Figure: input image, 3×3 averaging mask, and output image.]
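A minimal sketch of the 3×3 averaging mask operation described above, written with plain NumPy loops for clarity; the 5×5 input array is a made-up example.

```python
import numpy as np

def average_3x3(img):
    """Replace each interior pixel with the mean of its 3x3 neighborhood."""
    out = img.astype(float)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

img = np.random.randint(0, 256, (5, 5))   # hypothetical 5x5 gray-level image
print(average_3x3(img))
```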
Digital Image Processing
Image enhancement
Principal Objective of Enhancement
● Process an image so that the result will be more suitable than the original image for a specific application.

● Techniques are problem oriented.

● A method that is quite useful for enhancing one image may not necessarily be the best approach for enhancing other images.

● No general theory of image enhancement exists.


Processing Domain
● Spatial Domain (image plane):
Techniques are based on direct manipulation of pixels in an image.

❖ Gray-level transformations.

❖ Histogram processing.
❖ Arithmetic/logic operations.
❖ Filtering techniques.

● Frequency Domain:
Techniques are based on modifying the Fourier transform of an image.
Spatial Domain

g(x,y) = T[f(x,y)]
where
- f(x,y) is the input image
- g(x,y) is the processed (output) image
- T is an operator on f defined over some neighborhood of (x,y)
Spatial Domain
● A neighborhood of a point (x,y) can be defined using a square/rectangular (most common) or circular subimage area centered at (x,y).
● The center of the subimage is moved from pixel to pixel, starting at the top-left corner.
Point Processing
● Neighborhood = 1×1 pixel
● g depends only on the value of f at (x,y)
● T = gray level (or intensity or mapping) transformation function
○ s = T(r)
● Where
○ r = gray level of f(x,y)
○ s = gray level of g(x,y)
Contrast Stretching
● Produce higher contrast than the original by:

○ darkening the levels below k in the original image

○ brightening the levels above k in the original image
Thresholding
● Produce a two-level (binary)
image
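A small sketch of thresholding to produce a two-level (binary) image; the threshold value k = 128 and the random test image are assumptions.

```python
import numpy as np

def threshold(img, k=128):
    """Map gray levels >= k to 255 and all others to 0 (two-level image)."""
    return np.where(img >= k, 255, 0).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # hypothetical image
print(threshold(img, k=128))
```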
Intensity Transformation Functions
❏ Thresholding
❏ Log transformation
❏ Power-law (Gamma correction)
❏ Piecewise-linear transformation
❏ Histogram processing
Image Negatives
Example of negative Image
Log Transformations
Example of Logarithm Image
Power-Law Transformations
● MRI example
● The picture is dark.
● When γ is reduced too much, contrast decreases to the point where the image begins to take on a very slight “washed-out” look, especially in the background.
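A compact sketch of the point transformations discussed above (negative, log, and power-law/gamma); it assumes an 8-bit image (L = 256), and the constants c and γ are illustrative choices, not values from the slides.

```python
import numpy as np

L = 256                                   # assumed number of gray levels (8-bit image)

def negative(img):
    """s = (L - 1) - r"""
    return (L - 1 - img).astype(np.uint8)

def log_transform(img, c=None):
    """s = c * log(1 + r); c chosen so the output spans [0, L-1]."""
    c = (L - 1) / np.log(L) if c is None else c
    return (c * np.log1p(img.astype(float))).astype(np.uint8)

def gamma_transform(img, gamma=0.4, c=1.0):
    """s = c * r^gamma on intensities normalized to [0, 1] (gamma correction)."""
    r = img.astype(float) / (L - 1)
    return np.clip(c * r ** gamma * (L - 1), 0, L - 1).astype(np.uint8)

img = np.random.randint(0, L, (4, 4), dtype=np.uint8)     # hypothetical image
print(negative(img), log_transform(img), gamma_transform(img, 0.4), sep="\n")
```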
Another Example
Piecewise-Linear Transformation Functions
● Advantage:
○ Allows more control over the complexity of T(r).
● Disadvantage:
○ Their specification requires considerably more user input.
● Contrast stretching.
● Gray-level slicing.
● Bit-plane slicing.
Contrast Stretching
Gray-level slicing
Bit-plane slicing
❏ Highlighting the contribution made to total image appearance by specific bits.
❏ Suppose each pixel is represented by 8 bits.
❏ Higher-order bits contain the majority of the visually significant data.
❏ Useful for analyzing the relative importance of each bit of the image.
Bit-plane slicing
● The (binary) image for bit plane 7 can be obtained by processing the input image with a thresholding gray-level transformation:

❏ Map all levels between 0 and 127 to 0.

❏ Map all levels between 128 and 255 to 255.
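A short sketch of bit-plane slicing for an 8-bit image; extracting plane 7 (the most significant bit) is equivalent to the thresholding just described. The random test image is a placeholder.

```python
import numpy as np

def bit_plane(img, plane):
    """Return the binary image for the given bit plane (0 = LSB, 7 = MSB)."""
    return (img >> plane) & 1

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # hypothetical 8-bit image
msb = bit_plane(img, 7)            # 1 wherever the pixel value is >= 128
print(msb * 255)                   # scale to 0/255 for display
```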
Bit-plane slicing
Histogram
❏ Used effectively for image enhancement

❏ Information inherent in histograms also is useful in image compression and

segmentation

❏ Data-dependent pixel-based image enhancement method.


Histogram equalization
● Spreading out the intensities in an image to improve dark or washed out images
● Output images have uniform intensity distribution
Histogram Equalization Implementation
1. Obtain the histogram of the input image.

2. For each input gray level k, compute the cumulative sum.

3. For each gray level k, scale the sum by (max gray level)/(number of pixels).

4. Discretize the result obtained in 3.

5. Replace each gray level k in the input image by the corresponding level

obtained in 4.
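A minimal NumPy sketch of the five steps above for an 8-bit image; the random test image is a placeholder and L = 256 is assumed.

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization of a gray-level image with L levels."""
    hist = np.bincount(img.ravel(), minlength=L)          # step 1: histogram
    cdf = hist.cumsum()                                   # step 2: cumulative sum
    s = (L - 1) * cdf / img.size                          # step 3: scale
    s = np.round(s).astype(np.uint8)                      # step 4: discretize
    return s[img]                                         # step 5: map each gray level

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8) # hypothetical input image
print(equalize(img).min(), equalize(img).max())
```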
Color Image Histograms
Two types:

1. Intensity histogram:

● Convert color image to grayscale


● Display histogram of grayscale

2. Individual Color Channel Histograms:

● Display a separate histogram for each of the R, G, and B channels.
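A brief sketch of both histogram types for a color image, assuming an H×W×3 RGB array; the random image and the luminance weights used for the grayscale conversion are assumptions.

```python
import numpy as np

rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # hypothetical RGB image

# 1. Intensity histogram: convert to grayscale first (assumed luminance weights).
gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)
intensity_hist = np.bincount(gray.ravel(), minlength=256)

# 2. Individual channel histograms for R, G, B.
channel_hists = [np.bincount(rgb[..., c].ravel(), minlength=256) for c in range(3)]
print(intensity_hist.sum(), [h.sum() for h in channel_hists])
```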
Enhancement using Arithmetic/Logic Operations
● Arithmetic/logic operations are performed on a pixel-by-pixel basis between two or more images,
● except the NOT operation, which is performed on a single image.
● When a logic operation is performed on gray-level images, the pixel values are processed as binary numbers.
● NOT operation = negative transformation.
Example of AND Operation
Example of OR Operation
Image Subtraction
g(x,y) = f(x,y) − h(x,y)
● Enhancement of the differences between images.
● Image similarity.
● We may have to adjust the gray scale of the subtracted image to [0, 255] (if 8 bits are used).
● Subtraction is also used in segmentation of moving pictures to track changes.
● After subtracting successive frames of a sequence, what is left should be the moving elements in the image, plus noise.
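A small sketch of image subtraction with rescaling of the result back to [0, 255], as noted above; the two frames are made-up arrays.

```python
import numpy as np

def subtract_and_rescale(f, h):
    """g = f - h, then shift/scale the result into the displayable range [0, 255]."""
    g = f.astype(np.int16) - h.astype(np.int16)      # difference may be negative
    g = g - g.min()                                  # shift so the minimum is 0
    if g.max() > 0:
        g = 255 * g / g.max()                        # scale to [0, 255]
    return g.astype(np.uint8)

f = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # hypothetical frame 1
h = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # hypothetical frame 2
print(subtract_and_rescale(f, h))
```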

[Figure: difference image A − B and the result of histogram equalization applied to A − B.]
Image Averaging
Consider a noisy image modeled as:
g(x,y) = f(x,y) + η(x,y)
where f(x,y) is the original image and η(x,y) is a noise process.
Objective: reduce the noise content by averaging a set of noisy images.
Define an image formed by averaging K different noisy images:
ḡ(x,y) = (1/K) Σ gi(x,y), i = 1, …, K

The expected value of ḡ (the output after averaging) is the original image f(x,y).
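A brief sketch of averaging K noisy versions of the same scene; the constant test image, noise level, and K = 50 are simulated values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                        # hypothetical noise-free image
K = 50                                              # number of noisy observations

# g_i(x, y) = f(x, y) + eta_i(x, y); average the K noisy images.
noisy = [f + rng.normal(0, 20, f.shape) for _ in range(K)]
g_bar = np.mean(noisy, axis=0)

print(np.std(noisy[0] - f), np.std(g_bar - f))      # noise std drops roughly by sqrt(K)
```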


Image Averaging
Spatial Filtering
● Use a filter (also called a mask, kernel, template, or window).
● The values in a filter subimage are referred to as coefficients, rather than pixels.
● Our focus will be on masks of odd sizes, e.g., 3×3, 5×5, …
● Simply move the filter mask from point to point in the image.
● At each point (x,y), the response of the filter at that point is calculated using a predefined relationship.
Smoothing Spatial Filters
● The output is simply the average of the pixels contained in the neighborhood of the filter mask.

● Also called averaging filters or lowpass filters.

● Sharp details are lost.


Smoothing Spatial Filters
● Reduce the "sharp" transitions in gray levels.

● Sharp transitions come from:
○ random noise in the image
○ edges of objects in the image

● Thus, smoothing can reduce noise (desirable) but also blurs edges (may be undesirable).
Smoothing Spatial Filters
Smoothing Spatial Filters: Example
Order-Statistics Filters (Nonlinear Filters)
❏ Nonlinear spatial filters whose response is based on ordering (ranking) the pixels
contained in the filter mask and then replacing the value of the center pixel with the
result of the ranking operation

example

❏ median filter : R = median{zk |k = 1,2,…,n x n}


❏ max filter : R = max{zk |k = 1,2,…,n x n}
❏ min filter : R = min{zk |k = 1,2,…,n x n}

note: n x n is the size of the mask


Order-Statistics Filters (Nonlinear Filters)
❏ Popular for certain types of random noise, e.g., impulse (salt-and-pepper) noise.
❏ They provide excellent noise-reduction capabilities, with considerably less blurring than linear filters of similar size.
❏ The median filter forces points with distinct gray levels to be more like their neighbors.
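A minimal sketch of a 3×3 median filter written directly with NumPy loops; in practice a library routine such as scipy.ndimage.median_filter would normally be used. The noisy test image is made up.

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

img = np.random.randint(0, 256, (6, 6), dtype=np.uint8)   # hypothetical noisy image
img[2, 2] = 255                                           # simulated "salt" pixel
print(median_filter_3x3(img))
```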
Order-Statistics Filters (Nonlinear Filters)

[Figure: comparison of results from nonlinear (median) and linear (averaging) filters.]
Sharpening Spatial Filters
❏ To highlight fine detail in an image,
❏ or to enhance detail that has been blurred, either in error or as an effect of the image acquisition method.
❏ Sharpening is accomplished by spatial differentiation,
❏ in contrast to averaging, which is analogous to integration.
Sharpening Spatial Filters
❏ First-order derivative (1D)
❏ A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x+1) − f(x)

❏ Second-order derivative (1D)
❏ Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference
∂²f/∂x² = f(x+1) + f(x−1) − 2 f(x)
Edges in digital images often are ramp-like transitions in intensity, in which case the first derivative of the image would result in thick edges, because the derivative is nonzero along a ramp. On the other hand, the second derivative would produce a double edge one pixel thick, separated by zeros.
First and Second-order derivative of f(x,y) (2D)
❏ When we consider an image function of two variables, f(x,y), we deal with partial derivatives along the two spatial axes.
Discrete Form of Laplacian
∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y)
The Laplacian mask can also be implemented with an extension that includes the diagonal neighbors.
Other implementations of Laplacian masks
Laplacian Operator
❏ Isotropic filters: response is independent of direction (rotation-invariant).
❏ The simplest isotropic derivative operator is the Laplacian

To get a sharp image:

❏ Simply add the Laplacian image to (or subtract it from) the original; be careful about the sign convention of the Laplacian filter used.
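A short sketch of Laplacian sharpening, assuming the 4-neighbor mask with a negative center, in which case the Laplacian result is subtracted from the original (the sign must match the mask used, as noted above). SciPy's ndimage.convolve is assumed to be available; the test image is random.

```python
import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(img):
    """g = f + c * laplacian(f), with c = -1 for a mask whose center is negative."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)      # 4-neighbor Laplacian mask
    lap = convolve(img.astype(float), kernel, mode="nearest")
    return np.clip(img - lap, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # hypothetical image
print(laplacian_sharpen(img))
```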
Example
(a) Image of the north pole of the moon.
(b) Laplacian-filtered image.
(c) Laplacian image scaled for display purposes.
(d) Image enhanced by subtraction with the original image.
Mask of Laplacian + addition
❏ To simplify the computation, we can create a single mask that performs both operations: Laplacian filtering and addition of the original image.
Unsharp masking
1. Blur the original image.
2. Subtract the blurred image from the original (the resulting difference is called
the mask.)
3. Add the mask to the original.
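A compact sketch of the three unsharp-masking steps, using a Gaussian blur from SciPy; the blur width sigma and the weight k are assumptions (k > 1 gives high-boost filtering, described next).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, k=1.0):
    """1. blur  2. mask = original - blurred  3. add k * mask to the original."""
    f = img.astype(float)
    blurred = gaussian_filter(f, sigma=sigma)                # step 1
    mask = f - blurred                                       # step 2
    return np.clip(f + k * mask, 0, 255).astype(np.uint8)    # step 3 (k > 1 => high-boost)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)      # hypothetical image
print(unsharp_mask(img, sigma=2.0, k=1.5))
```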
High-boost filtering

❏ A generalized form of unsharp masking:
f_hb(x,y) = A·f(x,y) − f_blur(x,y)

❏ A ≥ 1
High-boost filtering

If we use a Laplacian filter to create the sharpened image fs(x,y) (with addition of the original image), then
f_hb(x,y) = (A − 1)·f(x,y) + fs(x,y)
High-boost Masks

A ≥ 1

If A = 1, high-boost filtering becomes “standard” Laplacian sharpening.


Example
Use of First Derivatives for Enhancement-The Gradient
❏ First derivatives in image processing are implemented using the magnitude of the gradient.
Magnitude of the gradient:
∇f = [Gx² + Gy²]^(1/2), where Gx = ∂f/∂x and Gy = ∂f/∂y

Commonly approximated as:
∇f ≈ |Gx| + |Gy|
Gradient Mask
❏ Sobel operators, 3×3.
❏ An approximation using absolute values.
❏ The weight value 2 achieves some smoothing by giving more importance to the center point.

The sum of the coefficients in each mask equals 0, indicating that the masks give a response of 0 in an area of constant gray level.
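A brief sketch of the Sobel gradient magnitude using the absolute-value approximation |Gx| + |Gy|; SciPy's ndimage.convolve is assumed and the test image is random.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(img):
    """Approximate gradient magnitude as |Gx| + |Gy| with the 3x3 Sobel masks."""
    gx_kernel = np.array([[-1, -2, -1],
                          [ 0,  0,  0],
                          [ 1,  2,  1]], dtype=float)   # responds to horizontal edges
    gy_kernel = gx_kernel.T                              # responds to vertical edges
    f = img.astype(float)
    gx = convolve(f, gx_kernel, mode="nearest")
    gy = convolve(f, gy_kernel, mode="nearest")
    return np.abs(gx) + np.abs(gy)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # hypothetical image
print(sobel_magnitude(img))
```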
Example
Example of Combining Spatial Enhancement Methods
❏ We want to sharpen the original image and bring out more skeletal detail.

❏ Problems: the narrow dynamic range of gray levels and high noise content make the image difficult to enhance.
Example of Combining Spatial Enhancement Methods
Solution:

1. Laplacian to highlight fine detail.

2. Gradient to enhance prominent edges.

3. Gray-level transformation to increase the dynamic range of gray levels.
