
Digital Image Processing
2020-2021
Third Stage / Second Semester
Course Instructor: Asst. Prof. Dr. Nassir H. Salman
Slides adapted from Digital Image Processing, 2nd ed., © 2002 R. C. Gonzalez & R. E. Woods (www.imageprocessingbook.com).

IMAGE FORMATION

When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

Image Sensing and Acquisition

A digital image is nothing more than data—numbers indicating variations of red, green, and blue at a particular location on a grid of pixels.

Image acquisition using a single sensor

This arrangement is used in high-precision scanning: a film negative is mounted on a drum whose mechanical rotation provides displacement in one dimension, while the single sensor is mounted on a lead screw that provides motion in the perpendicular direction. In this way a 2-D image is generated with a single sensor, because there are relative displacements in both the x- and y-directions between the sensor and the area to be imaged.
Image acquisition using a line sensor (sensor strip)

Applications include medical and industrial computerized axial tomography (CAT), magnetic resonance imaging (MRI), and positron emission tomography (PET).

An in-line arrangement of sensors in the form of a sensor strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction. This is the type of arrangement used in most flatbed scanners, where strips with 4000 or more in-line sensors are possible. The number of sensors in the strip establishes the sampling limitation in one image direction. With a single sensor, by contrast, sampling is accomplished by selecting the number of individual mechanical increments.


Image acquisition using sensor arrays

[Figure: a continuous image projected onto a sensor array, then sampled and quantized.]

Pixel Values

• Each pixel of an image stored in a computer has a pixel value which describes how bright that pixel is, and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground or background. For a grayscale image, the pixel value is a single number that represents the brightness of the pixel. The most common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to 255. Typically zero is taken to be black and 255 to be white; values in between make up the different shades of gray.
• To represent color images, separate red, green, and blue components must be specified for each pixel (assuming an RGB colorspace), so the pixel 'value' is actually a vector of three numbers. Often the three components are stored as three separate 'grayscale' images known as color planes (one each for red, green, and blue), which have to be recombined when displaying or processing.
• Multi-spectral images can contain more than three components per pixel; by extension these are stored in the same way, as a vector pixel value or as separate color planes (see, for example, the 7-band Landsat TM satellite images discussed later).
• The actual grayscale or color component intensities for each pixel may not be stored explicitly. Often, all that is stored for each pixel is an index into a colormap in which the actual intensities or colors can be looked up.
• Although simple 8-bit integers or vectors of 8-bit integers are the most common sorts of pixel values, some image formats support other types, for instance 32-bit signed integers or floating-point values. Such values are extremely useful in image processing because they allow processing whose intermediate results are not necessarily 8-bit integers. If this approach is used, it is usually necessary to set up a colormap that relates particular ranges of pixel values to particular displayed colors.
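As a small illustration of these ideas, here is a minimal MATLAB sketch (cameraman.tif and peppers.png are standard test images shipped with the Image Processing Toolbox; the coordinates are arbitrary):

g = imread('cameraman.tif');     % 8-bit grayscale image
v = g(50, 100)                   % a single pixel value: one 8-bit integer in [0, 255]
c = imread('peppers.png');       % RGB image
rgb = squeeze(c(50, 100, :))     % the pixel 'value' is a vector of three numbers
R = c(:, :, 1);                  % the red color plane, itself a 'grayscale' image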
MATLAB code to read and display images

Examples:

A = imread('c:\lena.jpg');    % read the image (assumed here to be 380x380)
figure
imshow(A)                     % display it
imfinfo('C:\lena.jpg')        % show file information
% Flip the image left-right, one pixel at a time:
for i = 1:380
    for j = 1:380
        B(i, 380+1-j) = A(i, j);
    end
end


figure
subplot(2,2,1)
imshow(A)          % original image
subplot(2,2,2)
imshow(B)          % flipped image
subplot(2,2,3)
imshow(A)
subplot(2,2,4)
imhist(B)          % histogram of the flipped image
size(B)            % report the dimensions of B


Representing Digital Images

The number of discrete intensity (gray) levels is typically an integer power of 2: L = 2^k, where k is the number of bits used to store each pixel (e.g., k = 8 gives L = 256 levels).


Spatial and Intensity Resolution

[Figures: the effects of reducing spatial resolution and of reducing the number of intensity levels.]

Basic Relationships Between Pixels

• Neighborhood

• Adjacency

• Connectivity

• Paths

• Regions and boundaries

• Neighbors of a pixel p at coordinates (x, y):

  - 4-neighbors of p, denoted N4(p):
    (x-1, y), (x+1, y), (x, y-1), and (x, y+1)

  - 4 diagonal neighbors of p, denoted ND(p):
    (x-1, y-1), (x+1, y+1), (x+1, y-1), and (x-1, y+1)

  - 8-neighbors of p, denoted N8(p):
    N8(p) = N4(p) ∪ ND(p)
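These definitions translate directly into code. The following is a minimal sketch (the function name and the 1-based MATLAB coordinates are my own; boundary pixels would need their out-of-range neighbors discarded):

function [N4, ND, N8] = neighbors(x, y)
% NEIGHBORS  Coordinate lists of the 4-, diagonal, and 8-neighbors of p = (x, y).
N4 = [x-1 y; x+1 y; x y-1; x y+1];           % horizontal and vertical neighbors
ND = [x-1 y-1; x+1 y+1; x+1 y-1; x-1 y+1];   % diagonal neighbors
N8 = [N4; ND];                               % N8(p) = N4(p) U ND(p)
end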


• Adjacency
  Let V be the set of intensity values used to define adjacency.

  - 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
  - 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

IMAGE FORMATION

From Lecture 1:
 - Image formation
 - Sensor types
 - Image types
 - Image file size



Introduction

• What is Digital Image Processing?

Digital Image
— A two-dimensional function f(x, y), where x and y are spatial coordinates; the amplitude of f is called the intensity or gray level at the point (x, y).

Digital Image Processing
— Processing digital images by means of a computer. It covers low-, mid-, and high-level processes:
  low-level: inputs and outputs are images
  mid-level: outputs are attributes extracted from input images
  high-level: an ensemble of recognition of individual objects

Pixel
— The elements of a digital image


Image definition

• An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
• When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

Digital image representation

[Figure: a color image decomposed into its Red, Green, and Blue planes.]


Digital image array representation

An M x N digital image can be written as the matrix

f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & & & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}

or, equivalently, in terms of matrix elements a_{i,j}:

f(x,y) = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\ \vdots & & & \vdots \\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}


Sources for Images

• Electromagnetic (EM) energy spectrum
• Acoustic
• Ultrasonic
• Electronic
• Synthetic images produced by computer

Electromagnetic (EM) energy spectrum

Major fields in which digital image processing is widely used:

• Gamma-ray imaging: nuclear medicine and astronomical observations
• X-rays: medical diagnostics (X-rays of the body), industry, astronomy, etc.
• Ultraviolet: lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations
• Visible and infrared bands: light microscopy, astronomy, remote sensing, industry, and law enforcement
• Microwave band: radar imaging
• Radio band: medicine (such as MRI) and astronomy

Pixels in an image

1. Assume a window or image with a given WIDTH and HEIGHT.
2. The pixel array then has a total number of elements equal to WIDTH * HEIGHT.
3. For any given (X, Y) point in the window, the location in our one-dimensional pixel array is LOCATION = X + Y*WIDTH (with zero-based X and Y), as sketched below.
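A small sketch of this indexing rule (the WIDTH and the coordinates are hypothetical; note that the formula assumes zero-based X and Y, whereas MATLAB arrays are 1-based and column-major):

WIDTH = 640;                 % assumed image width
X = 10; Y = 3;               % zero-based pixel coordinates
LOCATION = X + Y*WIDTH       % zero-based offset into a row-major pixel array
% The equivalent 1-based, column-major MATLAB linear index into an
% HEIGHT-by-WIDTH matrix would be: idx = sub2ind([HEIGHT WIDTH], Y+1, X+1)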



The digital image processing field

Digital image processing deals with the manipulation of digital images by means of a digital computer. It is a subfield of signals and systems, but focuses particularly on images. DIP is concerned with developing a computer system that is able to perform processing on an image: the input of the system is a digital image, the system processes that image using efficient algorithms, and it gives an image as an output. The most common example is Adobe Photoshop, one of the most widely used applications for processing digital images.


[Figure] In the figure, an image has been captured by a camera and sent to a digital system that removes all the other details and focuses on the water drop, zooming in on it in such a way that the quality of the image remains the same.
(Source: http://www.tutorialspoint.com/dip/)
Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. [1]

Interest in digital image processing methods stems from two principal application areas:
1. improvement of pictorial information for human interpretation; and
2. processing of image data for storage, transmission, and representation for autonomous machine perception.


Complete system for image processing

The digital image produced by the digitizer goes into temporary storage on a suitable device. In response to instructions from the operator, the computer calls up and executes image processing programs from a library. During execution, the input image is read into the computer line by line; operating upon one or several lines, the computer generates the output image pixel by pixel and stores it on the output data storage device, line by line.
During the processing, the pixels may be modified. After processing, the final product is displayed by a process that is the reverse of digitization: the gray level of each pixel is used to determine the brightness of the corresponding point on a display screen. The processed image is thereby made visible and once again amenable to human interpretation. The brightness of each pixel is represented by a numeric value: grayscale images typically contain values in the range from 0 to 255, with 0 representing black, 255 representing white, and values in between representing shades of gray.
Types of Computerized Processes

1. Low-level processes
2. Mid-level processes
3. High-level processes
========================
1. A low-level process involves primitive operations, such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. At this level, both the inputs and outputs are digital images.

2. Mid-level processes

Involve tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for a computer, and classification (recognition) of individual objects. The inputs are digital images, and the outputs are attributes extracted from those images (i.e., edges, contours, and the identity of individual objects).

[Figures: a pears color image and the corresponding grayscale image.]

3. High-level processes

Involve making sense of recognized objects, as in image analysis. For example, if a digital image contains a number of objects, a program may analyze the image and extract the objects. Digital image processing thus encompasses processes whose inputs and outputs are images and, in addition, processes that extract attributes from images, up to and including the recognition of individual objects.


Example

Consider the area of automated analysis of text. The stages are:
• acquiring an image of the area containing the text;
• preprocessing the image;
• extracting (segmenting) the individual characters;
• describing the characters in a way suitable for computer processing;
• recognizing those characters.


Where images are used

• Medicine (diagnostics, brain images, X-rays)
• Web sites
• Books and magazines
• TV, movies, graphics
• Digital cameras
• Official application forms
• IDs
• Licences
• Satellites

The Electromagnetic Spectrum

[Figure: the electromagnetic spectrum.]

Examples of Fields That Use Digital Image Processing (Chapter 1: Introduction)

1. Gamma-ray imaging
2. X-ray imaging
3. Ultraviolet imaging

[Figures: example images for each modality.]

Satellite Imaging

1. Spectral bands

[Figures: Landsat satellite images in the individual TM bands, and the color composite obtained by combining TM bands 5, 4, and 2.]

Satellite image example: Basra, Iraq, April 4, 2003 (Landsat 7, band combination 7-4-2).

Images used in meteorology

[Figure: weather satellite image.]


Examples: Automated Visual Inspection

[Figures: the area in which the imaging system detected a licence plate, and the results of automated reading of the plate content by the system.]

Examples: MRI (radio band)

Examples: ultrasound imaging


Lecture 3: DIP steps

1. From Lecture 2: Digital image definition
• An image can be defined as a two-dimensional function f(x, y), where:
  - x, y: spatial coordinates
  - f: the amplitude at any pair of coordinates (x, y), called the intensity or gray level of the image at that point
• x, y, and f are all finite and discrete quantities.
Fundamental Steps in Digital Image Processing:

[Block diagram: starting from the problem domain, Image Acquisition → Image Enhancement → Image Restoration → Colour Image Processing → Wavelets & Multiresolution Processing → Compression → Morphological Processing → Segmentation → Representation & Description → Object Recognition, all connected to a central Knowledge Base. Outputs of the earlier processes are generally images; outputs of the later processes are generally image attributes.]
Fundamental Steps in DIP:
Step 1: Image Acquisition
The image is captured by a sensor (e.g., a camera) and digitized, if the output of the camera or sensor is not already in digital form, using an analogue-to-digital converter.
[Figure: image acquisition equipment.]
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for a specific application. The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image. Typical techniques:
• Filtering with morphological operators
• Histogram equalization
• Noise removal using a Wiener filter
• Linear contrast adjustment
• Median filtering
• Unsharp mask filtering
Step 3: Image Restoration
Also improves the appearance of an image, but restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
Step 4: Colour Image Processing
Uses the colour of the image to extract features of interest.
[Figure: colour to grey conversion and the negative.]

Step 5: Wavelets
Wavelets are the foundation for representing images in various degrees of resolution, and are used for image data compression.
Step 6: Compression
Techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.

Step 7: Morphological Processing
Tools for extracting image components that are useful in the representation and description of shape. At this step there is a transition from processes that output images to processes that output image attributes.
Step 8: Image Segmentation
Segmentation procedures partition an image into its constituent parts or objects.

Important tip: the more accurate the segmentation, the more likely recognition is to succeed.

Ground truth of a satellite image means the collection of information at a particular location. It allows satellite image data to be related to real features and materials on the ground. This information is frequently used for the calibration of remote sensing data and for comparing results against the ground truth. Ground truths are "true and accurate" segmentations that are typically made by one or more human experts. A fully convolutional network classifies the object class for each pixel within an image; that is, there is a label for each pixel.
Step 9: Representation and Description
• Representation: deciding whether the data should be represented as a boundary or as a complete region. Representation almost always follows the output of a segmentation stage.
  - Boundary representation: focuses on external shape characteristics, such as corners and inflections.
  - Region representation: focuses on internal properties, such as texture or skeletal shape.
[Figures: three regions selected on the original image, the extracted regions, and the calculated edge map of the segmented regions.]
• Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing (mainly recognition).
• Description, also called feature selection, deals with extracting attributes that result in some information of interest.
Step 10: Recognition and Interpretation
Recognition: the process that assigns a label to an object based on the information provided by its description.
[Figure: all the pixels in region 1 have label 1, and so on for the other regions, according to some criteria.]

As a conclusion, the Knowledge Base:
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.

The DIP steps can be summarized as in the diagram above.
Components of an Image Processing System

[Block diagram: image sensors (driven by the problem domain) feed specialized image processing hardware and a general-purpose computer; the computer connects to mass storage, image processing software, image displays, hardcopy devices, and a network, forming a typical general-purpose DIP system.]
1. Image Sensors
Two elements are required to acquire digital images. The first is the physical device that is sensitive to the energy radiated by the object we wish to image (the sensor). The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form.
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images. This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed: this unit performs functions that require fast data throughput that the typical main computer cannot handle.
3. Computer
The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.

4. Image Processing Software
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
5. Mass Storage Capability
Mass storage capability is a must in image processing applications. An image of size 1024 x 1024 pixels requires one megabyte of storage space if the image is not compressed (at 8 bits per pixel). Digital storage for image processing applications falls into three principal categories:
1. short-term storage for use during processing;
2. on-line storage for relatively fast recall;
3. archival storage, characterized by infrequent access.
One method of providing short-term storage is computer memory; another is specialized boards, called frame buffers, that store one or more images and can be accessed rapidly.

On-line storage allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts). It generally takes the form of magnetic disks and optical-media storage; the key factor characterizing on-line storage is frequent access to the stored data.

Finally, archival storage is characterized by massive storage requirements but infrequent need for access.
6. Image Displays
The displays in use today are mainly color (preferably flat-screen) TV monitors. Monitors are driven by the outputs of the image and graphics display cards that are an integral part of a computer system.

7. Hardcopy Devices
Used for recording images; they include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
8. Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks this typically is not a problem, but communications with remote sites via the internet are not always as efficient.

Review of Lecture 3:
Fundamental Steps in Digital Image Processing (the block diagram above is repeated here for review).

Lecture 4: Examples of forming digital images

A colormap is an m-by-3 matrix of real numbers between 0.0 and 1.0. Each row is an RGB vector that defines one color: the kth row of the colormap defines the kth color, where map(k,:) = [r(k) g(k) b(k)] specifies the intensity of red, green, and blue. colormap(map) sets the colormap to the matrix map. If any values in map are outside the interval [0, 1], MATLAB returns the error "Colormap must have values in [0,1]".
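A minimal sketch of an indexed image and its colormap (the 8x9 test pattern and the three-color map are made up for illustration):

idx = repmat(uint8([0 1 2]), 8, 3);   % an 8x9 indexed image with values 0..2
map = [0 0 0;                         % index 0 -> black
       1 0 0;                         % index 1 -> red
       1 1 1];                        % index 2 -> white
figure
imshow(idx, map)                      % each pixel is looked up in the colormap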
Lecture 4 (continued): statistical properties of a digital image, with applications

Image Statistics: arithmetic mean, standard deviation, and variance

• Useful statistical features of an image are its arithmetic mean, standard deviation, and variance. These are well-known mathematical constructs that, when applied to a digital image, can reveal important information.
• The arithmetic mean is the image's average value.
• The standard deviation is a measure of the frequency distribution, or range of pixel values, of an image. If an image is supposed to be uniform throughout, the standard deviation should be small. A small standard deviation indicates that the pixel intensities do not stray very far from the mean; a large value indicates a greater range.
• The standard deviation is the square root of the variance.
• The variance is a measure of how spread out a distribution is. It is computed as the average squared deviation of each number from its mean.
The Variance
• The variance is a measure of how spread out a distribution is. It is computed as the average squared deviation of each number from its mean. For example, for the numbers 1, 2, and 3, the mean is (1 + 2 + 3)/3 = 2 and the variance is

σ² = ((1−2)² + (2−2)² + (3−2)²) / 3 = 2/3

The formula (in summation notation) for the variance in a population is

σ² = (1/N) Σᵢ (xᵢ − μ)²

where μ is the mean and N is the number of scores.
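These statistics can be computed directly in MATLAB (a sketch; tire.tif is a test image shipped with the Image Processing Toolbox):

I = im2double(imread('tire.tif'));
m = mean2(I)      % arithmetic mean of all pixel values
s = std2(I)       % standard deviation
v = s^2           % variance: the square of the standard deviation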


The flipped image B (N x M) of A (N x M), mirrored about the vertical axis, can be obtained as B(i, M+1−j) = A(i, j) for i = 1,…,N and j = 1,…,M:

clear B;
A = imread('c:\lena.jpg');       % assumed 380x380 image
for i = 1:380
    for j = 1:380
        B(i, 380+1-j) = A(i, j); % mirror about the vertical axis
    end
end

figure
subplot(1,2,1)
imshow(A)
subplot(1,2,2)
imshow(B)
The cropped image B (N1 x N2) of A (N x M), starting from (n1, n2), can be obtained as B(k, l) = A(n1+k, n2+l) for k = 1,…,N1 and l = 1,…,N2:

clear B;                           % clear any previous (larger) B
A = imread('c:\lena.jpg');
for k = 1:64
    for j = 1:128
        B(k, j) = A(220+k, 220+j); % a 64x128 region starting at (220, 220)
    end
end
figure
subplot(1,2,1)
imshow(A)
subplot(1,2,2)
imshow(B)
Digital image enhancement techniques, with examples

Enhancement Techniques:
• Spatial domain: operates directly on the pixels of the image
• Frequency domain: operates on the Fourier transform of the image
Intensity Transformations and Spatial Filtering: intensity transformation functions

Changes in intensity are made through intensity transformation functions:

• photographic negative (using imcomplement)
• gamma transformation (using imadjust)
• logarithmic transformations (using c*log(1+f))
• contrast-stretching transformations (using 1./(1+(m./(double(f)+eps)).^E))
Representing a digital image

The value f(x, y) at each (x, y) is called the intensity level or gray level.

Intensity Transformations and Filters

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of the point (x, y).

Intensity Transformation
• A 1 x 1 neighborhood is the smallest possible. In this case g depends only on the value of f at a single point (x, y), and we call T an intensity (gray-level mapping) transformation, written

s = T(r)

where s and r denote, respectively, the intensity of g and f at any point (x, y).
Spatial Domain Methods
• In these methods an operation (linear or non-linear) is performed on the pixels in the neighborhood of coordinate (x, y) in the input image f, giving the enhanced image g:

g(x, y) = T[f(x, y)]

• The neighborhood can be any shape, but generally it is rectangular (3x3, 5x5, 9x9, etc.).
Grey Scale Manipulation
• Simplest form of window (1 x 1)
• Assume the input gray-scale values are in the range [0, L−1] (in 8-bit images, L = 256)
• nth root transformation: s = c·(r)^n, where r is the input intensity, s the output intensity, and c a constant.

A = imread('d:\flowers.jpg');
A = rgb2gray(A);              % convert RGB to grayscale
C = 1;
n = 0.5;                      % nth root transformation with n = 1/2
B = C * double(A).^n;
figure
subplot(1,2,1)
imshow(A, [])                 % before
subplot(1,2,2)
imshow(B, [])                 % after
Some intensity transform functions

• Linear: negative, identity
• Logarithmic: log, inverse log
• Power-law: nth power, nth root

Image negatives (example: a brain image and its negative)
Denote by [0, L−1] the intensity levels of the image. The image negative is obtained by s = L − 1 − r, as sketched below.
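A minimal sketch of the negative transformation for an 8-bit image, where L = 256 (imcomplement gives the same result):

A = imread('cameraman.tif');     % 8-bit grayscale image, L = 256
neg = 255 - A;                   % s = L - 1 - r
% neg = imcomplement(A);         % equivalent toolbox function
figure
subplot(1,2,1), imshow(A)
subplot(1,2,2), imshow(neg)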


Power-Law Transformation
• s = c·r^γ, where c and γ are positive constants
• Used for gamma correction

A = imread('d:\flowers.jpg');
A = rgb2gray(A);
C = 1;
gamma1 = 0.6;                     % γ < 1 brightens the image
B = C * double(A).^gamma1;
figure
subplot(1,2,1)
imshow(A, [])                     % before applying gamma
subplot(1,2,2)
imshow(B, [])                     % after applying gamma
Gamma transformation in MATLAB (using imadjust)

imadjust(f, [low_in high_in], [low_out high_out], gamma)

With gamma transformations, you can curve the grayscale components either to brighten the intensity (when gamma is less than one) or darken the intensity (when gamma is greater than one). Here f is the input image and gamma controls the curve; [low_in high_in] and [low_out high_out] are used for clipping: values below low_in are clipped to low_out, and values above high_in are clipped to high_out. For the purposes of this lab we use [] for both [low_in high_in] and [low_out high_out], which means that the full range of the input is mapped to the full range of the output.
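For example, a short sketch (the gamma value 0.5 is an arbitrary choice):

f = imread('cameraman.tif');
g = imadjust(f, [], [], 0.5);    % full input range -> full output range; gamma < 1 brightens
figure, imshow(g)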
Contrast Stretching
• Increases the dynamic range of the gray levels in the image being processed.
• The locations of (r1, s1) and (r2, s2) control the shape of the transformation function:
  - If r1 = s1 and r2 = s2, the transformation is a linear function and produces no changes.
  - If r1 = r2, s1 = 0 and s2 = L−1, the transformation becomes a thresholding function that creates a binary image.
  - Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast.
  - Generally, r1 ≤ r2 and s1 ≤ s2 is assumed.
Example: contrast-stretching transform in MATLAB

g = 1./(1 + (m./(double(f) + eps)).^E)

I = imread('tire.tif');
I2 = im2double(I);
m = mean2(I2)                              % use the image mean as the midpoint m
contrast1 = 1./(1 + (m./(I2 + eps)).^4);   % E = 4
contrast2 = 1./(1 + (m./(I2 + eps)).^5);   % E = 5
contrast3 = 1./(1 + (m./(I2 + eps)).^10);  % E = 10: steeper, closer to thresholding
imshow(I2)
figure, imshow(contrast1)
figure, imshow(contrast2)
figure, imshow(contrast3)
Again, some intensity transformation functions

Power-Law (Gamma) transformation: s = c·r^γ, with c and γ positive constants. It curves the grayscale components either to brighten the intensity (when γ < 1) or darken the intensity (when γ > 1).

Contrast stretching: a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device. Contrast-stretching transformations increase the contrast between the darks and the lights; in the limiting case they become a thresholding function.

Log transformations: s = c·log(1 + r), where c is a constant and r ≥ 0. They map a narrow range of low intensity values in the input into a wider range of output levels; the opposite is true for higher values of the input levels.
Logarithmic transformation in MATLAB

I = imread('tire.tif');
imshow(I)
I2 = im2double(I);
J  = 1*log(1 + I2);    % c = 1
J2 = 2*log(1 + I2);    % c = 2
J3 = 5*log(1 + I2);    % c = 5
figure, imshow(J)
figure, imshow(J2)
figure, imshow(J3)
(Review from the previous lecture: the intensity transform functions above and the gamma-correction example.)
Image Processing Course: Lecture 5
Color systems, number systems, and filters
Dr. Nassir H. Salman

Number of colors and image file size

number of colors = 2^(color resolution)

image file size = image resolution x color resolution

Color resolution = the number of bits used to record each pixel: 1 bit gives a binary image, 8 bits give a gray image, etc.
Image resolution = rows x columns = M x N = the number of pixels in the image. A worked sketch follows.
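A small worked sketch of these formulas (the dimensions are hypothetical):

M = 1024; N = 1024;     % image resolution: rows x columns
k = 8;                  % color resolution: bits per pixel
ncolors = 2^k           % number of levels = 256 (a gray image)
nbytes = M*N*k/8        % uncompressed size = 1,048,576 bytes (1 megabyte)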
Decimal, binary, and hexadecimal number systems:

Decimal | Binary | Hexadecimal
0       | 0000   | 0
1       | 0001   | 1
2       | 0010   | 2
3       | 0011   | 3
4       | 0100   | 4
5       | 0101   | 5
6       | 0110   | 6
7       | 0111   | 7
8       | 1000   | 8
9       | 1001   | 9
10      | 1010   | A
11      | 1011   | B
12      | 1100   | C
13      | 1101   | D
14      | 1110   | E
15      | 1111   | F
Representing a color pixel:

11011001 00111011 00001111
  red      green    blue

Representing the color pixel in hexadecimal:

1101 1001 | 0011 1011 | 0000 1111
 D    9   |  3    B   |  0    F
   D9     |    3B     |    0F

→ #D93B0F
Fundamentals of Multimedia, Chapter 4 (Li & Drew, Prentice Hall, 2003)

Primary and Secondary Colors

• Colors are seen as variable combinations of the so-called primary colors R, G, and B.

Mixture of lights (additive primaries)
• The primary colors of light can be added to produce the secondary colors of light: magenta (M = R + B), cyan (C = G + B), and yellow (Y = R + G).
• Mixing the three primaries in the right intensities produces white light.

Mixture of pigments (subtractive primaries)
• A primary color of pigments is defined as one that subtracts or absorbs a primary color of light and reflects or transmits the other two.
• The primary colors of pigments are magenta, cyan, and yellow; the secondary colors are R, G, and B.
• The proper combination of the three pigment primaries produces black.
Fig. 4.16: color combinations that result from combining primary colors in the two situations. (a) Additive color: RGB is used to specify additive color. (b) Subtractive color: CMY is used to specify subtractive color.

Primary colors in the RGB system, plus black and white:

Color | Red | Green | Blue
Red   | 255 |   0   |   0
Green |   0 | 255   |   0
Blue  |   0 |   0   | 255
White | 255 | 255   | 255
Black |   0 |   0   |   0
C     |   0 | 255   | 255   (G+B)
M     | 255 |   0   | 255   (B+R)
Y     | 255 | 255   |   0   (R+G)
CMY System:

Color |  C  |  M  |  Y
Red   |   0 | 255 | 255   (absorbs C and reflects M, Y)
Green | 255 |   0 | 255
Blue  | 255 | 255 |   0
White |   0 |   0 |   0
Black | 255 | 255 | 255
C     | 255 |   0 |   0
M     |   0 | 255 |   0
Y     |   0 |   0 | 255

Comparing the two tables gives the conversion equations:
C = 255 − R
M = 255 − G
Y = 255 − B
Computing the CMYK values from CMY, where L is the minimum of the C, M, Y values:

C' = (C − L) / (255 − L)
M' = (M − L) / (255 − L)
Y' = (Y − L) / (255 − L)
K  = L / 255
Example: convert the values (96, 134, 200) from the RGB system to the CMYK system.
First convert from RGB to CMY, then from CMY to CMYK:

C = 255 − R = 255 − 96  = 159
M = 255 − G = 255 − 134 = 121
Y = 255 − B = 255 − 200 = 55
→ CMY = (159, 121, 55), with L = min(159, 121, 55) = 55

C' = (C − L)/(255 − L) = (159 − 55)/(255 − 55) = 0.52
M' = (M − L)/(255 − L) = (121 − 55)/(255 − 55) = 0.33
Y' = (Y − L)/(255 − L) = (55 − 55)/(255 − 55)  = 0
K  = L/255 = 55/255 = 0.216

→ CMYK = (52%, 33%, 0%, 21.6%)
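The whole conversion can be wrapped in a short MATLAB function (a sketch; the function name is my own, the inputs are on the 0-255 scale, and the CMYK outputs are fractions in [0, 1]):

function [c, m, y, k] = rgb2cmyk_sketch(R, G, B)
% Convert RGB -> CMY -> CMYK using the formulas above
% (assumes L < 255, i.e., the input color is not pure black).
C = 255 - R;  M = 255 - G;  Y = 255 - B;   % RGB to CMY
L = min([C M Y]);                          % L is the smallest of the CMY values
c = (C - L) / (255 - L);
m = (M - L) / (255 - L);
y = (Y - L) / (255 - L);
k = L / 255;
end

For instance, [c, m, y, k] = rgb2cmyk_sketch(96, 134, 200) returns (0.52, 0.33, 0, 0.216), matching the worked example.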
Example: convert the color #7AB50F to the CMY color space.
First convert the base-16 RGB representation to decimal, then convert from RGB to CMY:

(7A)₁₆ = 7 x 16 + 10 = 122 → red
(B5)₁₆ = 11 x 16 + 5 = 181 → green
(0F)₁₆ = 0 x 16 + 15 = 15  → blue

C = 255 − R = 255 − 122 = 133
M = 255 − G = 255 − 181 = 74
Y = 255 − B = 255 − 15  = 240
→ CMY = (133, 74, 240)
Filters: masks (windows), using filters, filters and images (introduced in Lecture 5).

Lecture 6: The Spatial Filtering Process

There are two main types of filtering applied to images:
• spatial domain filtering
• frequency domain filtering

In a later lab we will talk about frequency domain filtering, which makes use of the Fourier Transform. For spatial domain filtering, we perform filtering operations directly on the pixels of an image.

Spatial filtering is a technique that uses a pixel and its neighbors to select a new value for the pixel. The simplest type of spatial filtering is called linear filtering: it attaches a weight to the pixels in the neighborhood of the pixel of interest, and these weights are used to blend those pixels together to provide a new value for the pixel of interest. Linear filtering can be used to smooth, blur, sharpen, or find the edges of an image.

[Figures: an original image (upper left) followed by smoothed, blurred, sharpened, and edge-detected versions.]

Non-linear filters, by contrast, are useful for smoothing only smooth areas, enhancing only strong edges, or removing speckles from images. In signal processing it is often desirable to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.
What are the mean and median filters?

The mean filter is a simple sliding-window spatial filter that replaces the center value in the window with the average (mean) of all the pixel values in the window. The window, or kernel, is usually square but can be any shape. An example of mean filtering of a single 3x3 window of values is shown below.

Unfiltered values:
5 3 6
2 1 9
8 4 7

5 + 3 + 6 + 2 + 1 + 9 + 8 + 4 + 7 = 45, and 45 / 9 = 5

Mean filtered:
* * *
* 5 *
* * *

The center value (previously 1) is replaced by the mean of all nine values (5).
The median filter is also a sliding-window spatial filter, but it replaces the center value in the window with the median of all the pixel values in the window. As for the mean filter, the kernel is usually square but can be any shape. An example of median filtering of a single 3x3 window of values is shown below.

Unfiltered values:
6  2  0
3  97 4
19 3  10

In order: 0, 2, 3, 3, 4, 6, 10, 19, 97

Median filtered:
* * *
* 4 *
* * *

The center value (previously 97) is replaced by the median of all nine values (4).

This illustrates one of the celebrated features of the median filter: its ability to remove 'impulse' noise (outlying values, either high or low). The median filter is also widely claimed to be 'edge-preserving', since it theoretically preserves step edges without blurring. However, in the presence of noise it does blur edges in images slightly. A comparative sketch follows.
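The two filters are easy to compare in MATLAB (a sketch; the noise density and window size are arbitrary choices, and medfilt2/fspecial are Image Processing Toolbox functions):

I = im2double(imread('tire.tif'));        % any grayscale test image
J = imnoise(I, 'salt & pepper', 0.05);    % add impulse noise
meanF = imfilter(J, fspecial('average', 3), 'replicate');   % 3x3 mean filter
medF  = medfilt2(J, [3 3]);                                 % 3x3 median filter
figure
subplot(1,3,1), imshow(J),     title('noisy')
subplot(1,3,2), imshow(meanF), title('mean 3x3')
subplot(1,3,3), imshow(medF),  title('median 3x3')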
Basic idea of Spatial Filtering

Spatial filtering is sometimes also known as neighborhood processing. Neighborhood processing is an appropriate name because you define a center point and perform an operation (or apply a filter) on only those pixels in a predetermined neighborhood of that center point. The result of the operation is one value, which becomes the value at the center point's location in the modified image. Each point in the image is processed with its neighbors. The general idea is that of a "sliding filter" that moves throughout the image to calculate the value at each center location.

The following diagram illustrates in further detail how the filter is applied. The filter (an averaging filter) is applied at location (2,2); notice how the resulting value is placed at location (2,2) in the filtered image. The breakdown of how the resulting value of 251 (rounded up from 250.66) was calculated, using the 3x3 averaging weight 0.1111 = 1/9, is:

= 251*0.1111 + 255*0.1111 + 250*0.1111 + 251*0.1111 + 244*0.1111 + 255*0.1111 + 255*0.1111 + 255*0.1111 + 240*0.1111
= 27.88 + 28.33 + 27.77 + 27.88 + 27.11 + 28.33 + 28.33 + 28.33 + 26.66
= 250.66
The following illustrates the averaging filter applied at location (4,4). Once again, the mathematical breakdown of how 125 (rounded up from 124.55) was calculated:

= 240*0.1111 + 183*0.1111 + 0*0.1111 + 250*0.1111 + 12*0.1111 + 87*0.1111 + 255*0.1111 + 0*0.1111 + 94*0.1111
= 26.66 + 20.33 + 0 + 27.77 + 1.33 + 9.66 + 28.33 + 0 + 10.44
= 124.55
Boundary Options

(See section 3.5 in your textbook.)

The example above deliberately applied the filter at location (2,2), because there is an inherent problem when working at the corners and edges: some of the "neighbors" are missing. Consider location (1,1): there are no upper neighbors and no neighbors to the left. Two solutions, zero padding and replicating, are shown below; the pixels highlighted in blue have been added to the original image.

Zero padding is the default. You can also specify a value other than zero to use as a padding value. Another solution is replicating the pixel values along the edges.

As a note, if your filter were larger than 3x3, then the "border padding" would have to be extended. For a filter of size 3x3, 'replicate' and 'symmetric' yield the same results.

The following images show the results of the four different boundary options. The filter used below is a 5x5 averaging filter that was created with the following syntax:

h = fspecial('average', 5)
[Figure: the image f embedded in a larger array of zeros (zero padding).]
The following MATLAB function demonstrates how spatial filtering may be applied to an image:

function img = myfilter(f, w)
%MYFILTER Performs spatial correlation
%   I = MYFILTER(f, w) produces an image that has undergone correlation.
%   f is the original image
%   w is the filter (assumed to be 3x3)
%   The original image is padded with 0's
%   Author: Nova Scheidt

% check that w is 3x3
[m, n] = size(w);
if m ~= 3 || n ~= 3
    error('Filter must be 3x3')
end

% get size of f
[x, y] = size(f);

% create padded f (called g): first fill with zeros, then store f within g
g = zeros(x+2, y+2);
for i = 1:x
    for j = 1:y
        g(i+1, j+1) = f(i, j);
    end
end

% cycle through the array and apply the filter
for i = 1:x
    for j = 1:y
        img(i,j) = g(i,j)*w(1,1)   + g(i+1,j)*w(2,1)   + g(i+2,j)*w(3,1) ...   % first column
                 + g(i,j+1)*w(1,2) + g(i+1,j+1)*w(2,2) + g(i+2,j+1)*w(3,2) ... % second column
                 + g(i,j+2)*w(1,3) + g(i+1,j+2)*w(2,3) + g(i+2,j+2)*w(3,3);    % third column
    end
end

% convert to uint8--otherwise there are double values and the expected
% range is [0, 1] when the image is displayed
img = uint8(img);
[Figures: example results of edge detection and smoothing filters.]
Lecture 7: Fourier Transforms (the mathematical Fourier transforms of digital images)

The Fourier transform is a representation of an image as a sum of complex exponentials of varying magnitudes, frequencies, and phases: f(x, y) is an image, and F(u, v) are its Fourier coefficients.

• The Fourier Transform is an important image processing tool which is used to decompose an image into its sine and cosine components. The output of the transformation represents the image in the Fourier or frequency domain, while the input image is the spatial-domain equivalent.
• The DFT is used to convert an image from the spatial domain into the frequency domain; in other words, it allows us to separate high-frequency from low-frequency coefficients and to neglect or alter specific frequencies, leading to an image with less information but still with a convenient level of quality.
• The Fourier transform is a mathematical formula by which we can extract the frequency-domain components of a continuous time-domain signal. Using the Fourier transform we can process a time-domain signal in the frequency domain, applying various frequency-domain filters.
• If the signal is discrete, the Discrete Fourier Transform (DFT) is used to analyse it.
• The Fast Fourier Transform (FFT) is commonly used to transform an image between the spatial and frequency domains. Unlike other domains such as Hough and Radon, the FFT method preserves all original data. Moreover, the FFT fully transforms images into the frequency domain, unlike time-frequency or wavelet transforms.
Difference between the spatial domain and the frequency domain

In the spatial domain, we deal with images as they are: the values of the pixels change with respect to the scene. In the frequency domain, we deal with the rate at which the pixel values are changing in the spatial domain.

Frequency-domain processing: we first transform the image to its frequency distribution. Then a "black box" system performs whatever processing it has to perform; the output of the black box in this case is not an image but a transformation. After performing the inverse transformation, it is converted back into an image, which is then viewed in the spatial domain.
Background

• The frequency domain refers to the plane of the two-dimensional discrete Fourier transform of an image.
• The purpose of the Fourier transform is to represent a signal as a linear combination of sinusoidal signals of various frequencies, via Euler's formula e^{jθ} = cos θ + j sin θ.
2-D Fourier transform for a digital image f(x, y) of size M x N:

F(u,v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi(ux/M + vy/N)}, \quad u = 0,1,\ldots,M-1,\; v = 0,1,\ldots,N-1

IDFT (inverse discrete Fourier transform):

f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v)\, e^{j2\pi(ux/M + vy/N)}, \quad x = 0,1,\ldots,M-1,\; y = 0,1,\ldots,N-1
The Fourier transform

• The Fourier transform plays a critical role in a broad range of image processing applications, including enhancement, analysis, restoration, and compression.
• Example: if f(m, n) is a function of two discrete spatial variables m and n, then the two-dimensional Fourier transform of f(m, n) is defined by the relationship

F(\omega_1,\omega_2) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f(m,n)\, e^{-j\omega_1 m} e^{-j\omega_2 n}

• The variables ω1 and ω2 are frequency variables; their units are radians per sample.
• F(ω1, ω2) is often called the frequency-domain representation of f(m, n).
• F(ω1, ω2) is a complex-valued function that is periodic both in ω1 and ω2, with period 2π. Because of the periodicity, usually only the range −π ≤ ω1, ω2 ≤ π is displayed.
• Note that F(0,0) is the sum of all the values of f(m, n). For this reason, F(0,0) is often called the constant component or DC component of the Fourier transform.
• The inverse of a transform is an operation that, when performed on a transformed image, produces the original image. The inverse two-dimensional Fourier transform is given by

f(m,n) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} F(\omega_1,\omega_2)\, e^{j\omega_1 m} e^{j\omega_2 n}\, d\omega_1\, d\omega_2

• Roughly speaking, this equation means that f(m, n) can be represented as a sum of an infinite number of complex exponentials (sinusoids) with different frequencies. The magnitude and phase of the contribution at the frequencies (ω1, ω2) are given by F(ω1, ω2).
The Meaning of the DFT and Spatial Frequencies

• Important concept: any signal can be represented as a linear combination of a set of basic components.
  - Fourier components: sinusoidal patterns
  - Fourier coefficients: weighting factors assigned to the Fourier components
• Spatial frequency: the frequency of a Fourier component.
• Not to be confused with electromagnetic frequencies (e.g., the frequencies associated with light colors).
Real part, imaginary part, magnitude, phase, spectrum

With F(u,v) = R(u,v) + jI(u,v):

Real part: R(u,v)
Imaginary part: I(u,v)
Magnitude-phase representation: F(u,v) = |F(u,v)|\, e^{j\phi(u,v)}
Magnitude (spectrum): |F(u,v)| = \sqrt{R^2(u,v) + I^2(u,v)}
Phase (spectrum): \phi(u,v) = \tan^{-1}\!\big(I(u,v)/R(u,v)\big)
Power spectrum: P(u,v) = |F(u,v)|^2
2D DFT properties (stated in the lecture slides): spatial-domain differentiation, frequency-domain differentiation, the distribution law, the Laplacian, spatial-domain periodicity, and frequency-domain periodicity.
Computation of the 2D-DFT

• To compute the 1D-DFT of a 1D signal x (as a vector): x̃ = F_N x; the inverse 1D-DFT is x = (1/N) F_N^* x̃.
• To compute the 2D-DFT of an image X (as a matrix): X̃ = F_N X F_N; the inverse 2D-DFT is X = (1/N²) F_N^* X̃ F_N^*.

Recall the Fourier transform matrices, with W_N = e^{-j2\pi/N}:

F_N = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & W_N & \cdots & W_N^{N-1} \\ \vdots & & & \vdots \\ 1 & W_N^{N-1} & \cdots & W_N^{(N-1)^2} \end{bmatrix}, \qquad
F_N^* = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & W_N^{-1} & \cdots & W_N^{1-N} \\ \vdots & & & \vdots \\ 1 & W_N^{1-N} & \cdots & W_N^{-(N-1)^2} \end{bmatrix}

with the relationship F_N^{-1} = (1/N) F_N^*.

In particular, for N = 4: W_4 = e^{-j2\pi/4} = e^{-j\pi/2} = -j, and

F_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -j & -1 & j \\ 1 & -1 & 1 & -1 \\ 1 & j & -1 & -j \end{bmatrix}, \qquad
F_4^* = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & j & -1 & -j \\ 1 & -1 & 1 & -1 \\ 1 & -j & -1 & j \end{bmatrix}

(Useful trigonometric identities for evaluating the exponentials: sin(3x) = sin(2x)cos(x) + cos(2x)sin(x); cos(3x) = cos(2x + x) = cos(2x)cos(x) − sin(2x)sin(x).)
Computation of the 2D-DFT: example

A 4x4 image:

X = \begin{bmatrix} 1 & 3 & 6 & 8 \\ 9 & 8 & 8 & 2 \\ 5 & 4 & 2 & 3 \\ 6 & 6 & 3 & 3 \end{bmatrix}

Compute its 2D-DFT, X̃ = F_4 X F_4 (MATLAB function: fft2). The intermediate product is

F_4 X = \begin{bmatrix} 21 & 21 & 19 & 16 \\ -4-3j & -1-2j & 4-5j & 5+j \\ -9 & -7 & -3 & 6 \\ -4+3j & -1+2j & 4+5j & 5-j \end{bmatrix}

and multiplying by F_4 on the right gives

X̃ = \begin{bmatrix} 77 & 2-5j & 3 & 2+5j \\ 4-9j & -11+8j & -4-7j & -5-4j \\ -13 & -6+13j & -11 & -6-13j \\ 4+9j & -5+4j & -4+7j & -11-8j \end{bmatrix}

Here X̃(0,0) = 77 is the lowest-frequency (DC) component, the sum of all pixel values; the entry X̃(2,2) = −11 (indices from 0) is the highest-frequency component.

Real part and imaginary part:

X̃_real = \begin{bmatrix} 77 & 2 & 3 & 2 \\ 4 & -11 & -4 & -5 \\ -13 & -6 & -11 & -6 \\ 4 & -5 & -4 & -11 \end{bmatrix}, \qquad
X̃_imag = \begin{bmatrix} 0 & -5 & 0 & 5 \\ -9 & 8 & -7 & -4 \\ 0 & 13 & 0 & -13 \\ 9 & 4 & 7 & -8 \end{bmatrix}

Magnitude and phase:

X̃_magnitude = \begin{bmatrix} 77 & 5.39 & 3 & 5.39 \\ 9.85 & 13.60 & 8.06 & 6.40 \\ 13 & 14.32 & 11 & 14.32 \\ 9.85 & 6.40 & 8.06 & 13.60 \end{bmatrix}, \qquad
X̃_phase = \begin{bmatrix} 0 & -1.19 & 0 & 1.19 \\ -1.15 & 2.51 & -2.09 & -2.47 \\ 3.14 & 2.00 & 3.14 & -2.00 \\ 1.15 & 2.47 & 2.09 & -2.51 \end{bmatrix}
Computation of the 2D-DFT: example (continued)

Compute the inverse 2D-DFT (MATLAB function: ifft2):

X = \frac{1}{4^2}\, F_4^*\, X̃\, F_4^* = \frac{1}{16}\, F_4^* \begin{bmatrix} 77 & 2-5j & 3 & 2+5j \\ 4-9j & -11+8j & -4-7j & -5-4j \\ -13 & -6+13j & -11 & -6-13j \\ 4+9j & -5+4j & -4+7j & -11-8j \end{bmatrix} F_4^* = \begin{bmatrix} 1 & 3 & 6 & 8 \\ 9 & 8 & 8 & 2 \\ 5 & 4 & 2 & 3 \\ 6 & 6 & 3 & 3 \end{bmatrix}

which recovers the original image X.
From previous lecture 7:

Fourier Transforms
(Mathematical Fourier transforms of digital images)

The Fourier transform is a representation of an image as a sum of complex
exponentials of varying magnitudes, frequencies, and phases:

    f(x, y) is an image  →  F(u, v) are its Fourier coefficients
The Fourier Transform is an important image processing tool which is used to
decompose an image into its sine and cosine components. The output of the
transformation represents the image in the Fourier or frequency domain,
while the input image is the spatial-domain equivalent.

The DFT is used to convert an image from the spatial domain into the
frequency domain; in other words, it allows us to separate high-frequency
from low-frequency coefficients and to neglect or alter specific
frequencies, leading to an image with less information but still with a
convenient level of quality.

The Fourier transform is a mathematical formula by which we can extract the
frequency-domain components of a continuous time-domain signal. Using the
Fourier transform we can process a time-domain signal in the frequency
domain, applying various frequency-domain filters to the signal.

If the signal is discrete, the Discrete Fourier Transform is used to analyse
it.

The Fast Fourier Transform (FFT) is commonly used to transform an image
between the spatial and frequency domains. Unlike other domains such as
Hough and Radon, the FFT method preserves all original data. Plus, the FFT
fully transforms images into the frequency domain, unlike time-frequency or
wavelet transforms.
Difference between spatial domain and frequency domain

In the spatial domain, we deal with images as they are: the values of the
pixels change with respect to the scene. In the frequency domain, we deal
with the rate at which the pixel values are changing in the spatial domain.

Frequency domain processing
We first transform the image to its frequency distribution. Then a black-box
system performs whatever processing it has to perform; its output is not an
image but a transformation. After performing the inverse transformation, it
is converted back into an image, which is then viewed in the spatial domain.
Background

The frequency domain refers to the plane of the two-dimensional discrete
Fourier transform of an image. The purpose of the Fourier transform is to
represent a signal as a linear combination of sinusoidal signals of various
frequencies.
Lecture today (Lecture 8): Image Segmentation

– The segmentation problem
– Finding points, lines and edges
The Segmentation Problem

Segmentation attempts to partition the pixels of an image into groups that
strongly correlate with the objects in the image. It is typically the first
step in any automated computer vision application.
Segmentation Examples

[Figures: examples of segmented images.]
Detection Of Discontinuities

There are three basic types of grey-level discontinuities that we tend to
look for in digital images:
– Points
– Lines
– Edges

We typically find discontinuities using masks and correlation.
Point Detection

Point detection can be achieved simply using the mask below. Points are
detected at those pixels in the subsequent filtered image that are above a
set threshold.
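The mask figure did not survive the export; the standard point-detection
mask from Gonzalez & Woods, with a minimal usage sketch (assuming a
grayscale image f already in the workspace):

% Point detection with the standard Gonzalez & Woods mask.
w = [-1 -1 -1; -1 8 -1; -1 -1 -1];    % point-detection (Laplacian-style) mask
g = abs(imfilter(double(f), w));      % magnitude of the filter response
T = max(g(:));                        % one common choice of threshold
points = g >= T;                      % binary image of detected points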
Point Detection (cont…)

[Figures: X-ray image of a turbine blade; the result of point detection;
the result of thresholding.]
Line Detection

The next level of complexity is to try to detect lines. The masks below will
extract lines that are one pixel thick and running in a particular
direction.
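The masks were lost in the export; the standard one-pixel-thick line
detection masks from Gonzalez & Woods, sketched in MATLAB (assuming a
grayscale image f):

% Line-detection masks (horizontal, +45 degrees, vertical, -45 degrees).
w_h   = [-1 -1 -1;  2  2  2; -1 -1 -1];   % horizontal lines
w_p45 = [-1 -1  2; -1  2 -1;  2 -1 -1];   % +45 degree lines
w_v   = [-1  2 -1; -1  2 -1; -1  2 -1];   % vertical lines
w_m45 = [ 2 -1 -1; -1  2 -1; -1 -1  2];   % -45 degree lines
g = imfilter(double(f), w_m45);           % strongest response on -45 lines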
Line Detection (cont…)

[Figures: binary image of a wire-bond mask; the result after processing
with the -45° line detector; the result of thresholding the filtered
image.]
Edge Detection

An edge is a set of connected pixels that lie on the boundary between two
regions.
Edges & Derivatives

We have already spoken about how derivatives are used to find
discontinuities:
– the 1st derivative tells us where an edge is
– the 2nd derivative can be used to show edge direction
Derivatives & Noise

Derivative-based edge detectors are extremely sensitive to noise; we need to
keep this in mind.
Common Edge Detectors

Given a 3×3 region of an image, the following edge-detection filters can be
used (see the Sobel sketch below).
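The filter figures were lost in the export; as one concrete instance, a
Sobel sketch in MATLAB (assuming a grayscale image f):

% Sobel masks: one 3x3 mask per gradient direction.
wx = [-1 -2 -1; 0 0 0; 1 2 1];        % responds to horizontal edges
wy = [-1 0 1; -2 0 2; -1 0 1];        % responds to vertical edges
gx = imfilter(double(f), wx);         % horizontal gradient component
gy = imfilter(double(f), wy);         % vertical gradient component
g  = sqrt(gx.^2 + gy.^2);             % combined edge (gradient magnitude) image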
Edge Detection Example

[Figures, over several slides: original image; horizontal gradient
component; vertical gradient component; combined edge image; further edge
detection examples.]
Edge Detection Problems

Often, problems arise in edge detection because there is too much detail:
for example, the brickwork in the previous example. One way to overcome this
is to smooth images prior to edge detection, as in the sketch below.
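A minimal sketch of smoothing before edge detection, assuming a grayscale
image f:

% A 5x5 averaging filter suppresses fine detail such as the brickwork.
fs = imfilter(double(f), fspecial('average', 5), 'replicate');
e  = edge(fs, 'sobel');               % binary edge map of the smoothed image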
Edge Detection Example With Smoothing

[Figures: original image; horizontal gradient component; vertical gradient
component; combined edge image, all after smoothing.]
Laplacian Edge Detection

We encountered the 2nd-order-derivative-based Laplacian filter already. The
Laplacian is typically not used by itself for edge detection, as it is too
sensitive to noise; usually, when used for edge detection, the Laplacian is
combined with a smoothing Gaussian filter.
Laplacian Of Gaussian

The Laplacian of Gaussian (or Mexican hat) filter uses the Gaussian for
noise removal and the Laplacian for edge detection.
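A sketch using fspecial's built-in LoG mask; the 13×13 size and σ = 2 are
illustrative choices, and a grayscale image f is assumed:

% Laplacian of Gaussian: smoothing and second derivative in one mask.
h = fspecial('log', 13, 2);               % 13x13 LoG (Mexican hat), sigma = 2
g = imfilter(double(f), h, 'replicate');  % LoG response of the image
e = edge(f, 'log', [], 2);                % or find zero crossings via edge()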
Laplacian Of Gaussian Example

[Figures: LoG edge detection example.]
Summary

In this lecture we have begun looking at segmentation, and in particular
edge detection. Edge detection is massively important, as it is in many
cases the first step to object recognition.
Convolution and Correlation in Image Processing

Convolution and correlation in image processing are quite similar, except
that the kernel is rotated 180 degrees between the two operations.

For example, take the image A = [10 20 30; 40 50 60; 70 80 90] and the
kernel h = [1 2 3; 4 5 6; 7 8 9]. The correlation of the two matrices is the
sum of the element-wise products of the two matrices.
For convolution, the operation is almost the same, except that the kernel is
rotated 180 degrees:

Correlation result = (10×1) + (20×2) + (30×3) + (40×4) + (50×5) + (60×6)
                   + (70×7) + (80×8) + (90×9) = 2850

Convolution result = (10×9) + (20×8) + (30×7) + (40×6) + (50×5) + (60×4)
                   + (70×3) + (80×2) + (90×1) = 1650

In real image convolution and correlation, the kernel slides over every
single pixel and performs the above operation at each position to form a new
image.
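Both results can be reproduced in MATLAB with one line each:

A = [10 20 30; 40 50 60; 70 80 90];
h = [1 2 3; 4 5 6; 7 8 9];
corr_result = sum(sum(A .* h))             % 2850
conv_result = sum(sum(A .* rot90(h, 2)))   % 1650: kernel rotated 180 degrees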
Frequency Domain Versions of Spatial Filters

The following convolution theorem shows an interesting relationship between
the spatial domain and the frequency domain:

    f(x, y) * h(x, y)  ⇔  F(u, v) H(u, v)

and, conversely,

    f(x, y) h(x, y)  ⇔  F(u, v) * H(u, v)

where the symbol "*" indicates convolution of the two functions. The
important thing to extract out of this is that the multiplication of two
Fourier transforms corresponds to the convolution of the associated
functions in the spatial domain. This means we can perform linear spatial
filters as a simple component-wise multiply in the frequency domain.
Lecture 9

• Frequency Domain Filtering

Dr Nassir H. Salman
From previous lecture 8:
One-Dimensional Fourier transform and its inverse (the Fourier transform pair)

• The Fourier transform, F(u), of a single-variable continuous function f(x)
  is defined by:

      1)  F(u) = ∫_{-∞}^{∞} f(x) e^{-j2πux} dx,    where j = √(-1)

• Conversely, given F(u), we can obtain f(x) by means of the inverse Fourier
  transform:

      2)  f(x) = ∫_{-∞}^{∞} F(u) e^{j2πux} du

These indicate the important fact that a function can be recovered from its
transform.
Fourier transform

• The two equations above can be extended to two variables, giving the
  Fourier transform and its inverse in 2-D:

      1)  F(u, v) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) e^{-j2π(ux+vy)} dx dy

      2)  f(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(u, v) e^{j2π(ux+vy)} du dv

• For a digital image f(x, y) of size M × N, the discrete transform pair is:

      F(u, v) = (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(ux/M + vy/N)},
                for u = 0, 1, 2, …, M-1  and  v = 0, 1, 2, …, N-1

      f(x, y) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u, v) e^{j2π(ux/M + vy/N)},
                for x = 0, 1, 2, …, M-1  and  y = 0, 1, 2, …, N-1
Fourier transform

• The discrete Fourier transform (DFT) of a function of one variable, f(x),
  x = 0, 1, …, M-1, is given by:

      1)  F(u) = (1/M) Σ_{x=0}^{M-1} f(x) e^{-j2πux/M},   for u = 0, 1, 2, …, M-1

• The inverse DFT (note that the 1/M factor appears in the forward
  transform, not here):

      2)  f(x) = Σ_{u=0}^{M-1} F(u) e^{j2πux/M},   for x = 0, 1, 2, …, M-1
How to Compute the 1-D F(u)

• To compute F(u) we start by substituting u = 0 in the exponential term and
  then summing over all values of x. We then substitute u = 1 in the
  exponential and repeat the summation over all values of x. We repeat this
  process for all M values of u in order to obtain the complete Fourier
  transform:

      F(u) = (1/M) Σ_{x=0}^{M-1} f(x) e^{-j2πux/M}

• It takes approximately M² summations and multiplications to compute the
  discrete Fourier transform this way. Like f(x), the transform is a
  discrete quantity, and it has the same number of components as f(x).
  Similar comments apply to the computation of the inverse Fourier
  transform. (A sketch of this direct evaluation follows.)
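A direct O(M²) evaluation of the sum above in MATLAB, checked against the
built-in fft (which omits this lecture's 1/M factor); the sample signal is
an arbitrary illustration:

f = [2 3 4 4 8 8 16 16];               % any 1-D sample signal
M = numel(f);
F = zeros(1, M);
for u = 0:M-1                          % one pass per frequency u
    for x = 0:M-1                      % sum over all M samples
        F(u+1) = F(u+1) + f(x+1) * exp(-1j*2*pi*u*x/M);
    end
end
F = F / M;                             % this lecture's 1/M normalization
max(abs(F - fft(f)/M))                 % ~0 up to round-off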
• An important property of the discrete transform pair is that the discrete
  Fourier transform and its inverse always exist. This can be shown by
  substituting either of

      1)  F(u) = (1/M) Σ_{x=0}^{M-1} f(x) e^{-j2πux/M}
      2)  f(x) = Σ_{u=0}^{M-1} F(u) e^{j2πux/M}

  into the other, making use of the following orthogonality property of the
  exponentials:

      Σ_{x=0}^{M-1} e^{j2πrx/M} e^{-j2πux/M} = M  if r = u,  and 0 otherwise
• The concept of the frequency domain follows from Euler's formula:

      e^{jθ} = cos θ + j sin θ

• After substituting into the 1-D Fourier transform, it becomes:

      F(u) = (1/M) Σ_{x=0}^{M-1} f(x) [cos(2πux/M) - j sin(2πux/M)],
             for u = 0, 1, 2, …, M-1

• Thus we see that each term of the Fourier transform [that is, the value of
  F(u) for each value of u] is composed of the sum of all values of the
  function f(x). The values of f(x), in turn, are multiplied by sines and
  cosines of various frequencies. The domain (the values of u) over which
  the values of F(u) range is appropriately called the frequency domain.

• The Fourier transform may be viewed as a "mathematical prism" that
  separates a function into various components based on frequency content.
• In general we see from the equations above that the components of the
  Fourier transform are complex quantities. As in the analysis of complex
  numbers, it is sometimes convenient to express F(u) in polar coordinates:

      F(u) = |F(u)| e^{jφ(u)}

  where

      |F(u)| = [R²(u) + I²(u)]^{1/2}

  is called the magnitude or spectrum of the Fourier transform, and

      φ(u) = tan⁻¹[I(u) / R(u)]

  is called the phase angle or phase spectrum of the transform.

  The power spectrum, or spectral density, is defined as the square of the
  Fourier spectrum:

      P(u) = |F(u)|² = R²(u) + I²(u)
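In MATLAB these quantities fall out of abs and angle directly; a small
sketch for a 1-D signal f (MATLAB's fft omits this lecture's 1/M factor):

F = fft(f);                 % 1-D DFT of a signal f (unnormalized)
R = real(F); I = imag(F);
mag   = abs(F);             % |F(u)| = sqrt(R.^2 + I.^2), the spectrum
phase = angle(F);           % phi(u) = atan2(I, R), the phase spectrum
P     = mag.^2;             % the power spectrum P(u)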
Lecture today 9:

Simple 1-D Example of the DFT
[Figure: a) M = 1024, A = 1, K = 8 non-zero points.]

Chapter 4: Image Enhancement in the Frequency Domain
Centered Representation (MATLAB function: fftshift)

[Diagram: the (u, v) plane after centering; the low frequencies lie at the
middle and the high frequencies toward the four corners (±N/2, ±N/2).]

Example: [Figure]
Log-Magnitude Visualization

To display a spectrum: 2D-DFT → centered (fftshift) → log intensity
transformation s = c log(1 + r). The same chain applies to images.

How to Display a Fourier Spectrum using MATLAB

The steps behind displaying a Fourier spectrum can be combined into the
following MATLAB calls:

f = zeros(30,30);
f(5:24,13:17) = 1;
F = fft2(f, 256, 256);
F2 = fftshift(F);
figure, imshow(log(1 + abs(F2)), [])

Notice in these calls to imshow, the second argument is empty square
brackets. This maps the minimum value in the image to black and the maximum
value in the image to white.
• The following convolution theorem shows an interesting relationship
  between the spatial domain and the frequency domain:

      f(x, y) * h(x, y)  ⇔  F(u, v) H(u, v)

• and, conversely,

      f(x, y) h(x, y)  ⇔  F(u, v) * H(u, v)

where the symbol "*" indicates convolution of the two functions. The
important thing to extract out of this is that the multiplication of two
Fourier transforms corresponds to the convolution of the associated
functions in the spatial domain. This means we can perform linear spatial
filters as a simple component-wise multiply in the frequency domain.

This suggests that we could use Fourier transforms to speed up spatial
filters. This only works for large images that are correctly padded, where
multiple transformations are applied in the frequency domain before moving
back to the spatial domain.

When applying Fourier transforms, padding is very important. Because images
are infinitely tiled in the frequency domain, filtering produces wraparound
artefacts if you don't zero-pad the image to a larger size. The paddedsize
function calculates a correct padding size to avoid this problem; it can
also help optimize the performance of the DFT by providing power-of-2
padding sizes. See paddedsize's help header for details on how to do this.
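A minimal sketch of this padded-filtering workflow, assuming a grayscale
image f and a small spatial mask w already in the workspace, and using a
simple 2*size(f) rule in place of the paddedsize call:

f = double(f);
PQ = 2 * size(f);                    % pad to avoid wraparound artefacts
F = fft2(f, PQ(1), PQ(2));           % fft2 zero-pads out to PQ
H = fft2(w, PQ(1), PQ(2));           % transform of the spatial mask
g = real(ifft2(H .* F));             % multiply, then inverse transform
g = g(1:size(f,1), 1:size(f,2));     % crop back to the original size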
Basic Steps in DFT Filtering
2D-DFT (Frequency) Domain Filtering

Convolution Theorem:

    input image f(x, y) → impulse response (filter) h(x, y) → output image g(x, y)

    spatial domain:    g(x, y) = f(x, y) * h(x, y)
    frequency domain:  G(u, v) = F(u, v) H(u, v)

(each signal is related to its transform by the DFT/IDFT pair)

Frequency Domain Filtering. Filter design: design H(u, v).
2D-DFT Domain Filter Design

• Ideal lowpass, bandpass and highpass filters correspond to a low-frequency
  mask, a mid-frequency mask and a high-frequency mask respectively.
Example: Applying the Sobel Filter in the Frequency Domain

For example, let's apply the Sobel filter to a picture in both the spatial
domain and the frequency domain, as in the sketch below.
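A sketch of the two routes, assuming a grayscale image f. Note that imfilter
correlates while DFT multiplication convolves, and the DFT result is
displaced by the filter origin, so the two outputs agree only up to a sign
flip and a small shift (plus border effects):

h = fspecial('sobel');                       % 3x3 horizontal Sobel mask
gs = imfilter(double(f), h);                 % spatial-domain filtering
PQ = 2 * size(f);
G  = fft2(h, PQ(1), PQ(2)) .* fft2(double(f), PQ(1), PQ(2));
gf = real(ifft2(G));
gf = gf(1:size(f,1), 1:size(f,2));           % frequency-domain filtering
figure, imshow(abs(gs), []), figure, imshow(abs(gf), [])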
Frequency Domain Specific Filters

As you have already seen, based on the property that multiplying the FFTs of
two functions from the spatial domain produces the convolution of those
functions, you can use Fourier transforms as a fast convolution on large
images. Note that on small images it is faster to work in the spatial
domain. However, you can also create filters directly in the frequency
domain. There are three commonly discussed filters in the frequency domain:

• Lowpass filters, sometimes known as smoothing filters
• Highpass filters, sometimes known as sharpening filters
• Notch filters, sometimes known as band-stop filters
Lowpass filters:
• create a blurred (or smoothed) image
• attenuate the high frequencies and leave the low frequencies of the
  Fourier transform relatively unchanged

Three main lowpass filters are discussed in Digital Image Processing Using
MATLAB:
1. ideal lowpass filter (ILPF)
2. Butterworth lowpass filter (BLPF)
3. Gaussian lowpass filter (GLPF)

In the formulas below, D0 is a specified nonnegative number and D(u,v) is
the distance from point (u,v) to the center of the filter.
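The table of formulas itself did not survive the export; the standard
definitions (with D0 the cutoff and n the Butterworth order) are:

    ILPF:  H(u, v) = 1 if D(u, v) ≤ D0,  and 0 if D(u, v) > D0
    BLPF:  H(u, v) = 1 / (1 + [D(u, v)/D0]^{2n})
    GLPF:  H(u, v) = e^{-D²(u, v) / (2 D0²)}

A minimal GLPF sketch that also sets up the variables the display code below
expects; the football.jpg demo image, the 2*size(f) padding rule and the D0
value are illustrative assumptions, not necessarily the original example's
choices:

f = im2double(rgb2gray(imread('football.jpg')));
PQ = 2 * size(f);                           % zero-pad to twice the image size
F = fft2(f, PQ(1), PQ(2));
[V, U] = meshgrid(0:PQ(2)-1, 0:PQ(1)-1);    % frequency index grids
U(U > PQ(1)/2) = U(U > PQ(1)/2) - PQ(1);    % wrap so the origin is at (1,1)
V(V > PQ(2)/2) = V(V > PQ(2)/2) - PQ(2);
D = sqrt(U.^2 + V.^2);                      % distance D(u,v) from the origin
D0 = 0.05 * PQ(1);                          % cutoff (illustrative choice)
H = exp(-(D.^2) / (2 * D0^2));              % Gaussian lowpass transfer function
LPFS_football = H .* F;                     % filtered (lowpass) spectrum
LPF_football = real(ifft2(LPFS_football));  % back to the spatial domain
LPF_football = LPF_football(1:size(f,1), 1:size(f,2));   % crop the padding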
MATLAB code:

% Display the blurred image
figure, imshow(LPF_football, [])

% Display the Fourier spectrum:
% move the origin of the transform to the center of the frequency rectangle
Fc = fftshift(F);
Fcf = fftshift(LPFS_football);
% use abs to compute the magnitude and log to brighten the display
S1 = log(1 + abs(Fc));
S2 = log(1 + abs(Fcf));
figure, imshow(S1, [])
figure, imshow(S2, [])
Highpass filters:
• sharpen (or show the edges of) an image
• attenuate the low frequencies and leave the high frequencies of the
  Fourier transform relatively unchanged
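A common construction derives the highpass transfer function from a lowpass
one, Hhp(u, v) = 1 - Hlp(u, v); a sketch reusing H, F and f from the
Gaussian lowpass example above:

Hhp = 1 - H;                                % highpass from the lowpass GLPF
HPF_football = real(ifft2(Hhp .* F));
HPF_football = HPF_football(1:size(f,1), 1:size(f,2));
figure, imshow(HPF_football, [])            % mostly edges and fine detail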
Introduction

• What is Digital Image Processing?

Digital Image
— A two-dimensional function f(x, y), where x and y are spatial coordinates.
  The amplitude of f is called the intensity or gray level at the point (x, y).

Digital Image Processing
— Processing digital images by means of a computer; it covers low-, mid-,
  and high-level processes:
  low-level: inputs and outputs are images
  mid-level: outputs are attributes extracted from input images
  high-level: an ensemble of recognition of individual objects

Pixel
— The elements of a digital image

Image definition

• An image may be defined as a two-dimensional function, f(x, y), where x
  and y are spatial (plane) coordinates, and the amplitude of f at any pair
  of coordinates (x, y) is called the intensity or gray level of the image
  at that point.
• When x, y, and the amplitude values of f are all finite, discrete
  quantities, we call the image a digital image.

Digital image representation

[Figure: a colour image represented as separate red, green and blue
channels.]

Sources for Images

• Electromagnetic (EM) energy spectrum
• Acoustic
• Ultrasonic
• Electronic
• Synthetic images produced by computer

Electromagnetic (EM) energy spectrum

Major fields in which digital image processing is widely used:

Gamma-ray imaging: nuclear medicine and astronomical observations
X-rays: medical diagnostics (X-rays of the body), industry, and astronomy, etc.
Ultraviolet: lithography, industrial inspection, microscopy, lasers,
biological imaging, and astronomical observations
Visible and infrared bands: light microscopy, astronomy, remote sensing,
industry, and law enforcement
Microwave band: radar imaging
Radio band: medicine (such as MRI) and astronomy

Types of Computerized Processes

• 1. Low-Level Processes
• 2. Mid-Level Processes
• 3. High-Level Processes
========================
• 1. Low-level processes involve primitive operations, such as image
  preprocessing to reduce noise, contrast enhancement, and image sharpening.
  At this level, both the inputs and the outputs are digital images.

2. Mid-Level Processes

• Involve tasks such as segmentation (partitioning an image into regions or
  objects), description of those objects to reduce them to a form suitable
  for a computer, and classification (recognition) of individual objects.
  The inputs are digital images and the outputs are attributes extracted
  from those images (i.e., edges, contours, and the identity of individual
  objects).

3. High-Level Processes

• Involve making sense of recognized objects, as in image analysis. For
  example, if a digital image contains a number of objects, a program may
  analyze the image and extract the objects.
• So digital image processing encompasses processes whose inputs and outputs
  are images and, in addition, processes that extract attributes from
  images, up to and including the recognition of individual objects.
Fundamental Steps in Digital Image Processing

[Diagram: starting from the problem domain, Image Acquisition → Image
Enhancement → Image Restoration → Colour Image Processing → Wavelets &
Multiresolution Processing → Image Compression → Morphological Processing →
Segmentation → Representation & Description → Object Recognition, all
supported by a common Knowledge Base. The outputs of the earlier processes
are generally images; the outputs of the later processes are generally
image attributes.]

Fundamental Steps in DIP:

Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and digitized, using an
analogue-to-digital converter, if the output of the camera or sensor is not
already in digital form.

Cont. Fundamental Steps in DIP:

Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable
than the original for a specific application. The idea behind enhancement
techniques is to bring out details that are hidden, or simply to highlight
certain features of interest in an image. Examples:

• Filtering with morphological operators
• Histogram equalization
• Noise removal using a Wiener filter
• Linear contrast adjustment
• Median filtering
• Unsharp mask filtering

Cont. Fundamental Steps in DIP:

Step 3: Image Restoration
– Improving the appearance of an image.
– Restoration techniques tend to be based on mathematical or probabilistic
  models of image degradation. Enhancement, on the other hand, is based on
  human subjective preferences regarding what constitutes a "good"
  enhancement result.

Cont. Fundamental Steps in DIP:

Step 4: Colour Image Processing
Use the colour of the image to extract features of interest in an image.

[Figure: colour to grey conversion and the image negative.]

The DIP steps can be summarized as:

[Figure: summary diagram of the DIP steps.]

Components of an Image Processing System

[Diagram: a typical general-purpose DIP system consists of image sensors
capturing the problem domain, specialized image processing hardware, a
computer running image processing software, mass storage, image displays,
hardcopy devices, and a network connection.]

Image Statistics: Arithmetic Mean, Standard Deviation, and Variance
(Statistical properties of a digital image, with application)

• Useful statistical features of an image are its arithmetic mean, standard
  deviation, and variance. These are well-known mathematical constructs
  that, when applied to a digital image, can reveal important information.
• The arithmetic mean is the image's average value.
• The standard deviation is a measure of the frequency distribution, or
  range of pixel values, of an image. If an image is supposed to be uniform
  throughout, the standard deviation should be small. A small standard
  deviation indicates that the pixel intensities do not stray very far from
  the mean; a large value indicates a greater range.
• The standard deviation is the square root of the variance.
• The variance is a measure of how spread out a distribution is. It is
  computed as the average squared deviation of each value from its mean.
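A small sketch of computing the three statistics, assuming a grayscale
image f (mean2 and std2 are the Image Processing Toolbox whole-array
versions):

f = double(f);
m = mean2(f)      % arithmetic mean: the image's average value
s = std2(f)       % standard deviation
v = s^2           % variance: the average squared deviation from the mean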

Filters (masks, windows)

[Figures: filter masks; using filters; a filter positioned over an image.]

• Linear filtering can be used to smooth, blur, sharpen, or find the edges
  of an image. The following four images demonstrate what spatial filtering
  can do; the original image is shown in the upper left-hand corner.

[Figures: smooth, blur, sharpen, find the edges.]
Zero padding is the default. You can also specify a value other than zero to
use as a padding value. Another solution is replicating the pixel values
along the edges:

• As a note, if your filter were larger than 3×3, then the "border padding"
  would have to be extended. For a filter of size 3×3, 'replicate' and
  'symmetric' yield the same results.
• The following images show the results of the four different boundary
  options. The filter used below is a 5×5 averaging filter that was created
  with the following syntax:

h = fspecial('average', 5)
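A sketch of those boundary options with imfilter, assuming an image f and
the averaging filter h above:

g0 = imfilter(f, h);                 % zero padding (the default)
gr = imfilter(f, h, 'replicate');    % repeat the border pixel values
gs = imfilter(f, h, 'symmetric');    % mirror-reflect across the border
gc = imfilter(f, h, 'circular');     % treat the image as periodic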


[Figure: an image f embedded in a larger array of zeros (zero padding).]

The following MATLAB function demonstrates how spatial filtering may be
applied to an image:

function img = myfilter(f, w)
%MYFILTER Performs spatial correlation
%   I = MYFILTER(f, w) produces an image that has undergone correlation.
%   f is the original image
%   w is the filter (assumed to be 3x3)
%   The original image is padded with 0's
%Author: Nova Scheidt

% check that w is 3x3
[m, n] = size(w);
if m ~= 3 | n ~= 3
    error('Filter must be 3x3')
end

% get the dimensions of f and create g, a copy of f surrounded by a
% one-pixel border of zeros
[x, y] = size(f);
g = zeros(x + 2, y + 2);

% then, store f within g
for i = 1:x
    for j = 1:y
        g(i+1, j+1) = f(i, j);
    end
end

% cycle through the array and apply the filter
for i = 1:x
    for j = 1:y
        img(i,j) = g(i,j)*w(1,1) + g(i+1,j)*w(2,1) + g(i+2,j)*w(3,1) ...       % first column
                 + g(i,j+1)*w(1,2) + g(i+1,j+1)*w(2,2) + g(i+2,j+1)*w(3,2) ... % second column
                 + g(i,j+2)*w(1,3) + g(i+1,j+2)*w(2,3) + g(i+2,j+2)*w(3,3);    % third column
    end
end