Digital Image Processing (ECE 501)
Introduction
Subject Coordinator
Dr. Bhupendra Singh Kirar
Department of Electronics and Communication Engineering
E-Mail ID: bhup17@gmail.com
Google Scholar ID: https://scholar.google.co.in/citations?user=cBu3VZwAAAAJ&hl=en
Scopus ID: 57195508883
ORCID iD: 0000-0002-0417-8709
Digital Image Processing
By
Dr. Bhupendra Singh Kirar
Assistant Professor
Department of Electronics and Communication Engineering
Indian Institute of Information Technology, Bhopal
TOPICS (Previous ppt)
• Introduction
• What is Digital Image Processing?
• The Origins of Digital Image Processing
• Examples of Fields that Use Digital Image Processing
• Fundamental Steps in Digital Image Processing
• Components of an Image Processing System
TOPICS
• Digital Image Fundamentals
• Elements of Visual Perception
• Light and the Electromagnetic Spectrum
• Image Sensing and Acquisition
• Image Sampling and Quantization
• Some Basic Relationships Between Pixels
• Introduction to the Basic Mathematical Tools Used in Digital Image Processing
ELEMENTS OF VISUAL PERCEPTION
• The eye is nearly a sphere, with an average diameter of approximately 20 mm.
• Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.
Cornea
• The cornea is a tough, transparent tissue that covers
the anterior surface of the eye.
• Continuous with the cornea, the sclera is an opaque
membrane that encloses the remainder of the optic
globe.
Choroid
• The choroid lies directly below the sclera.
• This membrane contains a network of blood vessels that
serve as the major source of nutrition to the eye.
• The choroid coat is heavily pigmented and hence helps to
reduce the amount of extraneous light entering the eye
and the backscatter within the optical globe.
• At its anterior extreme, the choroid is divided into the
ciliary body and the iris diaphragm.
• The latter contracts or expands to control the amount of light that enters the eye.
• The front of the iris contains the visible
pigment of the eye, whereas the back contains a
black pigment.
• The lens is made up of concentric layers of fibrous
cells and is suspended by fibers that attach to the
ciliary body.
• It contains 60 to 70% water, about 6% fat, and more
protein than any other tissue in the eye.
Retina
• The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion.
• When the eye is properly focused, light from an object
outside the eye is imaged on the retina.
• Pattern vision is afforded by the distribution of discrete
light receptors over the surface of the retina.
• There are two classes of receptors: cones and rods.
• The cones in each eye number between 6 and 7
million.
• They are located primarily in the central portion of
the retina, called the fovea, and are highly sensitive
to color.
• Muscles controlling the eye rotate the eyeball until
the image of an object of interest falls on the fovea.
• Cone vision is called photopic or bright-light vision.
• The number of rods is much larger:
Some 75 to 150 million are distributed over
the retinal surface.
• The absence of receptors in the area where the optic nerve exits the eye results in the so-called blind spot.
• Fig. shows that cones are most dense in the center of the retina (in the center area of the fovea).
IMAGE FORMATION IN THE EYE
• The principal difference between the lens of the eye
and an ordinary optical lens is that the former is
flexible.
• The shape of the lens is controlled by tension in the
fibers of the ciliary body.
• To focus on distant objects, the controlling muscles
cause the lens to be relatively flattened.
• Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye.
• The distance between the center of the lens and the retina, called the focal length, varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum.
• When the eye focuses on an object farther away, the lens exhibits its lowest refractive power.
• When the eye focuses on a nearby object, the lens is
most strongly refractive.
• For example, suppose the observer is looking at a tree 15 m high at a distance of 100 m.
• If h is the height in mm of that object in the retinal image, the geometry of Fig. yields 15/100 = h/17, or h = 2.55 mm.
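A minimal sketch of this similar-triangles calculation in Python; the 17 mm lens-to-retina distance is the value used in the example above, and the function name is illustrative only.

def retinal_image_height_mm(object_height_m, object_distance_m, lens_to_retina_mm=17.0):
    """Similar triangles: object_height / object_distance = h / lens_to_retina."""
    return object_height_m / object_distance_m * lens_to_retina_mm

# The tree example from the slide: 15 m tall, viewed from 100 m.
print(retinal_image_height_mm(15, 100))  # 2.55 (mm)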
BRIGHTNESS ADAPTATION AND DISCRIMINATION
Light and the Electromagnetic Spectrum
• Sir Isaac Newton discovered that when a beam of sunlight is passed through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colors ranging from violet at one end to red at the other.
The electromagnetic spectrum
• The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy.
• Wavelength (λ) and frequency (ν) are related by the expression λ = c/ν, where c is the speed of light (2.998 × 10^8 m/s).
• The energy of the electromagnetic spectrum is given by the expression E = hν, where h is Planck's constant.
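A small Python sketch of these two relations, assuming the standard SI values of c (quoted above) and of Planck's constant; the example frequency (green light near 5.5 × 10^14 Hz) is illustrative only.

C = 2.998e8        # speed of light, m/s (value quoted in the slide)
H = 6.626e-34      # Planck's constant, J*s

def wavelength_m(frequency_hz):
    """lambda = c / nu"""
    return C / frequency_hz

def photon_energy_j(frequency_hz):
    """E = h * nu"""
    return H * frequency_hz

# Example: green light at about 5.5e14 Hz
nu = 5.5e14
print(wavelength_m(nu))      # ~5.45e-07 m (545 nm)
print(photon_energy_j(nu))   # ~3.6e-19 J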
IMAGE SENSING AND ACQUISITION
Most of the images in which we are interested are generated by the combination of an “illumination”
source and the reflection or absorption of energy from that source by the elements of the “scene” being
imaged.
Illumination may originate from a source of electromagnetic energy, such as a radar, infrared, or X-ray system, ultrasound, or even a computer-generated illumination pattern.
Depending on the nature of the source, illumination energy is reflected from, or transmitted through,
objects.
IMAGE SENSING AND ACQUISITION
An example in the first category is light reflected from a planar surface.
An example in the second category is when X-rays pass through a patient’s body for the purpose of
generating a diagnostic X-ray image.
In some applications, the reflected or transmitted energy is focused onto a photo converter (e.g., a
phosphor screen) that converts the energy into visible light.
Electron microscopy and some applications of gamma imaging use this approach.
IMAGE SENSING AND ACQUISITION
Figure 2.12 shows the three principal sensor
arrangements used to transform incident energy
into digital images.
1. Image Acquisition Using a Single Sensing Element
2. Image Acquisition Using Sensor Strips
3. Image Acquisition Using Sensor Arrays
The idea is simple: Incoming energy is
transformed into a voltage by a combination of the
input electrical power and sensor material that is
responsive to the type of energy being detected.
The output voltage waveform is the response of the
sensor, and a digital quantity is obtained by
digitizing that response.
IMAGE SENSING AND ACQUISITION
IMAGE ACQUISITION USING A SINGLE SENSING ELEMENT
Figure shows the components of a single sensing
element.
A familiar sensor of this type is the photodiode, which
is constructed of silicon materials and whose output is a
voltage proportional to light intensity.
Using a filter in front of a sensor improves its
selectivity.
IMAGE SENSING AND ACQUISITION
IMAGE ACQUISITION USING A SINGLE SENSING ELEMENT
For example, an optical green-transmission filter favors
light in the green (pass) band of the color spectrum. As
a consequence, the sensor output would be stronger for
green light than for other visible light components.
In order to generate a 2-D image using a single sensing element, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged.
Figure shows an arrangement used in high-precision scanning, where a film negative is mounted onto a
drum whose mechanical rotation provides displacement in one dimension. The sensor is mounted on a lead
screw that provides motion in the perpendicular direction.
A light source is contained inside the drum. As the light passes through the film, its intensity is modified by
the film density before it is captured by the sensor. This "modulation" of the light intensity causes
corresponding variations in the sensor voltage, which are ultimately converted to image intensity levels by
digitization.
This method is an inexpensive way to obtain high-resolution images because
mechanical motion can be controlled with high precision.
The main disadvantages of this method are that it is slow and not readily portable.
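The sketch below illustrates, in Python, how relative x- and y-displacements with a single sensing element build up a 2-D image one sample at a time. Here read_sensor is a hypothetical stand-in for positioning the sensor (drum rotation along one axis, lead screw along the other) and reading its digitized output; it is not an API from any particular system.

import numpy as np

def scan_with_single_sensor(read_sensor, rows, cols):
    """Build a 2-D image one sample at a time with a single sensing element.

    read_sensor(i, j) is a hypothetical callback that positions the sensor
    at displacement (i, j) and returns the digitized sensor voltage there.
    """
    image = np.zeros((rows, cols))
    for i in range(rows):          # displacement along one direction
        for j in range(cols):      # displacement along the perpendicular direction
            image[i, j] = read_sensor(i, j)
    return image

# Toy stand-in for the sensor: a simple synthetic intensity pattern.
demo = scan_with_single_sensor(lambda i, j: (i + j) % 256, rows=4, cols=5)
print(demo)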
IMAGE SENSING AND ACQUISITION
IMAGE ACQUISITION USING SENSOR STRIPS
A geometry used more frequently than single sensors is an in-line sensor strip, as in Fig.
The strip provides imaging elements in one direction. Motion perpendicular to the strip provides imaging
in the other direction.
In-line sensors are used routinely in airborne imaging applications, in which the imaging system is
mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
One dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are
mounted perpendicular to the direction of flight.
An imaging strip gives one line of an image at a time, and the motion of the strip relative to the scene
completes the other dimension of a 2-D image.
IMAGE ACQUISITION USING SENSOR STRIPS
Sensor strips in a ring configuration are used in
medical and industrial imaging
to obtain cross-sectional (“slice”) images of 3-D
objects, as Fig. shows.
A rotating X-ray source provides illumination, and
X-ray sensitive sensors opposite the source collect
the energy that passes through the object. This is the
basis for medical and industrial computerized axial
tomography (CAT) imaging.
The output of the sensors is processed by
reconstruction algorithms whose objective is to
transform the sensed data into meaningful cross
sectional images.
A 3-D digital volume consisting of stacked images is generated as the object is
moved in a direction perpendicular to the sensor ring.
Other modalities of imaging based on the CAT principle include magnetic resonance
imaging (MRI) and positron emission tomography (PET).
IMAGE SENSING AND ACQUISITION
IMAGE ACQUISITION USING SENSOR ARRAYS
Figure shows individual sensing elements arranged in the form of a 2-D array.
Because the sensor array in Fig. is two dimensional, its
key advantage is that a complete image can be obtained
by focusing the energy pattern onto the surface of the
array.
Motion obviously is not necessary, unlike the sensor arrangements discussed in the preceding two sections.
This is also the predominant arrangement found in digital
cameras.
A typical sensor for these cameras is a CCD (charge-coupled device).
IMAGE SENSING AND ACQUISITION
IMAGE ACQUISITION USING SENSOR ARRAYS
A CCD (charge-coupled device) array, which can be
manufactured with a broad range of sensing properties and can
be packaged in rugged arrays of 4000x4000 elements or more.
CCD sensors are used widely in digital cameras and other light-sensing instruments.
Electromagnetic and ultrasonic sensing devices frequently are
arranged in this manner.
The response of each sensor is proportional to the integral of the
light energy projected onto the surface of the sensor, a property
that is used in astronomical and other applications requiring low
noise images.
Fig. CCD-Charge Coupled Device
IMAGE ACQUISITION USING SENSOR ARRAYS
Figure 2.15 shows the
principal manner in which
array sensors are used.
This figure shows the energy
from an illumination source
being reflected from a scene.
The first function performed
by the imaging system in Fig.
2.15(c) is to collect the
incoming energy and focus it
onto an image plane.
IMAGE ACQUISITION USING SENSOR ARRAYS
If the illumination is light, the
front end of the imaging
system is an optical lens that
projects the viewed scene
onto the focal plane of the
lens, as Fig. 2.15(d) shows.
The sensor array, which is
coincident with the focal
plane, produces outputs
proportional to the integral of
the light received at each
sensor.
IMAGE ACQUISITION USING SENSOR ARRAYS
Digital and analog circuitry
sweep these outputs and
convert them to an analog
signal, which is then digitized
by another section of the
imaging system.
The output is a digital image,
as shown diagrammatically in
Fig. 2.15(e).
A SIMPLE IMAGE FORMATION MODEL
We denote images by two-dimensional functions of the form f(x, y).
The value of f at spatial coordinates (x, y) is a scalar quantity whose physical meaning is determined by the source of the image, and whose values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves).
The value or amplitude of f at spatial coordinates (x, y) gives the intensity (brightness) of the image at that point.
As a consequence, f(x, y) must be nonnegative and finite; that is, 0 ≤ f(x, y) < ∞.
A SIMPLE IMAGE FORMATION MODEL
Function f ( x, y ) is characterized by two components:
1. The amount of source illumination incident on the scene being viewed, and
2. The amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance components, and are
denoted by i( x, y) and r(x, y) , respectively.
The nature of i( x, y) is determined by the illumination source, and
r(x,y) is determined by the characteristics of the imaged objects.
Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
A SIMPLE IMAGE FORMATION MODEL
The two functions combine as a product to form f(x, y):

  f(x, y) = i(x, y) r(x, y)
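A minimal sketch of this product model, assuming NumPy; the illumination and reflectance values are synthetic, chosen only to respect the bounds stated above (illumination nonnegative, reflectance between 0 and 1).

import numpy as np

# Synthetic illumination (nonnegative) and reflectance (0 = total absorption,
# 1 = total reflectance) components on a small 4x4 grid.
i = np.full((4, 4), 5000.0)                  # illumination, lm/m^2
r = np.clip(np.random.rand(4, 4), 0.0, 1.0)  # reflectance of the imaged objects

f = i * r   # f(x, y) = i(x, y) * r(x, y)
print(f.min(), f.max())   # values stay within [0, 5000] here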
A SIMPLE IMAGE FORMATION MODEL
Some typical values of illumination and reflectance
Illumination
The following are typical values of i( x, y) :
➢ On a clear day, the sun may produce in excess of 90 000 lm/m2 of
illumination on the surface of the earth.
➢ This value decreases to less than 10 000 lm/m2 on a cloudy day.
➢ On a clear evening, a full moon yields about 0.1 lm/m2 of illumination.
➢ The typical illumination level in a commercial office is about 1 000
lm/m2.
A SIMPLE IMAGE FORMATION MODEL
Some typical values of illumination and reflectance
Reflectance
Similarly, the following are typical values of r(x,y) :
➢ 0.01 for black velvet
➢ 0.65 for stainless steel
➢ 0.80 for flat-white wall paint
➢ 0.90 for silver-plated metal
➢ 0.93 for snow
A SIMPLE IMAGE FORMATION MODEL
Let the intensity (gray level) of a monochrome image at any coordinates (x, y) be denoted by ℓ = f(x, y).
Then ℓ lies in the range Lmin ≤ ℓ ≤ Lmax.
A SIMPLE IMAGE FORMATION MODEL
The interval [Lmin, Lmax] is called the intensity (or gray) scale.
Common practice is to shift this interval numerically to the interval [0, 1], or [0, C], where ℓ = 0 is considered black and ℓ = 1 (or C) is considered white on the scale.
All intermediate values are shades of gray varying from black to white.
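A small sketch of this numerical shift, assuming NumPy and an image whose maximum intensity exceeds its minimum; the sample values are arbitrary.

import numpy as np

def rescale_intensity(img, c=1.0):
    """Shift the intensity scale [Lmin, Lmax] to [0, c]: 0 = black, c = white.

    Assumes the image is not constant (Lmax > Lmin).
    """
    img = img.astype(float)
    lmin, lmax = img.min(), img.max()
    return (img - lmin) / (lmax - lmin) * c

raw = np.array([[120.0, 340.0], [560.0, 890.0]])
print(rescale_intensity(raw))        # values in [0, 1]
print(rescale_intensity(raw, 255))   # values in [0, 255]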
IMAGE SAMPLING AND QUANTIZATION
There are numerous ways to acquire images, but
our objective in all is the same: to generate digital images from sensed data.
The output of most sensors is a continuous voltage waveform whose amplitude and
spatial behavior are related to the physical phenomenon being sensed.
To create a digital image, we need to convert the continuous sensed data into a
digital format.
This requires two processes:
1. Sampling and
2. Quantization
IMAGE SAMPLING AND QUANTIZATION
BASIC CONCEPTS IN SAMPLING AND QUANTIZATION
Sampling: digitizing the coordinate values.
Quantization: digitizing the amplitude values.
IMAGE SAMPLING AND QUANTIZATION
Figure 2.16(a) shows a continuous image f that we want to convert to digital form.
An image may be continuous with respect to the x- and y-coordinates, and also in
amplitude.
To digitize it, we have to sample the function in both coordinates and
also in amplitude.
Digitizing the coordinate values is called sampling.
Digitizing the amplitude values is called quantization.
IMAGE SAMPLING AND QUANTIZATION
The one-dimensional function in Fig. 2.16(b) is a plot of amplitude (intensity
level) values of the continuous image along the line segment AB in Fig. 2.16(a).
The random variations are due to image noise.
IMAGE SAMPLING AND QUANTIZATION
To sample this function, we take equally spaced samples along line AB, as shown in
Fig. 2.16(c).
IMAGE SAMPLING AND QUANTIZATION
The samples are shown as small dark squares superimposed on the function, and
their (discrete) spatial locations are indicated by corresponding tick marks in the
bottom of the figure.
The set of dark squares constitute the sampled function.
However, the values of the samples still span (vertically) a continuous range of
intensity values.
In order to form a digital function, the intensity values also must be converted
(quantized) into discrete quantities. The vertical gray bar in Fig. 2.16(c) depicts the
intensity scale divided into eight discrete intervals, ranging from black to white.
The vertical tick marks indicate the specific value assigned to each of the eight
intensity intervals.
IMAGE SAMPLING AND QUANTIZATION
The continuous intensity levels are quantized by assigning one of the eight values
to each sample, depending on the vertical proximity of a sample to a vertical tick
mark.
The digital samples resulting from both sampling and quantization are shown as
white squares in Fig. 2.16(d).
Starting at the top of the continuous image and carrying out this procedure
downward, line by line, produces a two-dimensional digital image.
It is implied in Fig. 2.16 that, in addition to the number of discrete levels used, the
accuracy achieved in quantization is highly dependent on the noise content of the
sampled signal.
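A short Python sketch of the procedure described above, applied to a synthetic 1-D intensity profile (a stand-in for a line such as AB in Fig. 2.16) with eight intensity levels, assuming NumPy; each sample is assigned the nearest of the eight discrete values, mirroring the "vertical proximity" rule.

import numpy as np

def sample_and_quantize(f, num_samples, num_levels=8):
    """Sample a continuous 1-D profile f(t), t in [0, 1], then quantize it.

    Sampling: evaluate f at equally spaced locations.
    Quantization: map each sample to the nearest of num_levels equally
    spaced intensity values in [0, 1].
    """
    t = np.linspace(0.0, 1.0, num_samples)       # sampling locations
    samples = np.array([f(x) for x in t])        # still continuous in amplitude
    levels = np.linspace(0.0, 1.0, num_levels)   # the eight discrete intensities
    idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    return t, levels[idx]

# A toy continuous profile: a slow variation plus a little noise.
profile = lambda x: 0.4 + 0.3 * np.sin(6.28 * x) + 0.05 * np.random.randn()
locations, quantized = sample_and_quantize(profile, num_samples=16)
print(quantized)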
IMAGE SAMPLING AND QUANTIZATION
In practice, the method of sampling is determined by the sensor arrangement used
to generate the image.
When an image is generated by a single sensing element
combined with mechanical motion, as in Fig. 2.13, the output of the sensor is
quantized in the manner described above.
However, spatial sampling is accomplished by selecting the number of individual
mechanical increments at which we activate the
sensor to collect data.
Mechanical motion can be very exact so, in principle, there is
almost no limit on how fine we can sample an image using this approach.
IMAGE SAMPLING AND QUANTIZATION
In practice, limits on sampling accuracy are determined by other factors, such as the
quality of the optical components used in the system.
When a sensing strip is used for image acquisition, the number of sensors in the
strip establishes the samples in the resulting image in one direction, and mechanical
motion establishes the number of samples in the other.
Quantization of the sensor outputs completes the process of generating a digital
image.
When a sensing array is used for image acquisition, no motion is required.
The number of sensors in the array establishes the limits of sampling in both
directions.
Quantization of the sensor outputs is as explained above.
IMAGE SAMPLING AND QUANTIZATION
Figure 2.17 illustrates this concept.
Figure 2.17(a) shows a continuous image projected onto the plane of a 2-D
sensor.
Figure 2.17(b) shows the image after sampling and quantization.
The quality of a digital image is determined to a large degree by the number of
samples and discrete intensity levels used in sampling and quantization.
However, as we will show later in this section, image content also plays a role in
the choice of these parameters.
IMAGE SAMPLING AND QUANTIZATION
REPRESENTING DIGITAL IMAGES
Let f (s, t) represent a continuous image function of two continuous variables, s and
t.
We convert this function into a digital image by sampling and quantization.
Suppose that we sample the continuous image into a digital image, f (x, y), containing M
rows and N columns, where (x, y) are discrete coordinates.
For notational clarity and convenience, we use integer values for these discrete coordinates:
x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1.
REPRESENTING DIGITAL IMAGES
Thus, for example, the value of the digital image at the origin is f (0 , 0), and its value at the
next coordinates along the first row is f ( 0 , 1).
Here, the notation (0, 1) is used to denote the second sample along the first row. It does not
mean that these are the values of the physical coordinates when the image was sampled.
In general, the value of a digital image at any coordinates (x, y) is denoted f ( x, y ), where x
and y are integers.
When we need to refer to specific coordinates ( i, j), we use the notation f ( i, j ), where the
arguments are integers.
The section of the real plane spanned by the coordinates of an image is called the spatial
domain, with x and y being referred to as spatial variables or spatial coordinates.
REPRESENTING DIGITAL IMAGES
Figure 2.18 shows three
ways of representing
f ( x, y ).
REPRESENTING DIGITAL IMAGES
Figure 2.18(a) is a plot of the function,
with two axes determining spatial location
and the third axis being the values of f as a
function of x and y.
This representation is useful when working with grayscale sets whose elements are expressed as triplets of the form (x, y, z), where x and y are spatial coordinates and z is the value of f at coordinates (x, y).
REPRESENTING DIGITAL IMAGES
This representation is more common, and it shows f ( x, y )
as it would appear on a computer display or photograph.
Here, the intensity of each point in the display is
proportional to the value of f at that point.
In this figure, there are only three equally spaced intensity
values.
If the intensity is normalized to the interval [ 0, 1], then
each point in the image has the value 0, 0.5, or 1.
A monitor or printer converts these three values to black,
gray, or white, respectively, as in Fig. 2.18(b).
This type of representation includes color images, and
allows us to view results at a glance.
REPRESENTING DIGITAL IMAGES
As Fig. 2.18(c) shows, the third representation is an array (matrix) composed of
the numerical values of f (x, y).
This is the representation used for computer processing.
REPRESENTING DIGITAL IMAGES
In equation form, we write the representation of an M×N numerical array as

  f(x, y) = [ f(0, 0)       f(0, 1)       ...   f(0, N − 1)
              f(1, 0)       f(1, 1)       ...   f(1, N − 1)
              ...           ...           ...   ...
              f(M − 1, 0)   f(M − 1, 1)   ...   f(M − 1, N − 1) ]    (2-9)

The right side of this equation is a digital image represented as an array of real numbers.
Each element of this array is called an image element, picture element, pixel, or
pel.
We will use the terms image and pixel throughout the lectures to denote a digital
image and its elements.
REPRESENTING DIGITAL IMAGES
Figure 2.19 shows a graphical representation of an image array, where the x- and y-axes are used to denote the rows and columns of the array.
Specific pixels are values of the array at a fixed pair of coordinates.
We generally use f(i, j) when referring to a pixel with coordinates (i, j).
We can also represent a digital image in a traditional matrix form:

  A = [ a0,0       a0,1       ...   a0,N−1
        a1,0       a1,1       ...   a1,N−1
        ...        ...        ...   ...
        aM−1,0     aM−1,1     ...   aM−1,N−1 ]    (2-10)

Clearly, aij = f(i, j), so Eqs. (2-9) and (2-10) denote identical arrays.
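A small NumPy illustration of this array representation and the coordinate convention above; the pixel values are arbitrary.

import numpy as np

# A small 3x4 digital image (M = 3 rows, N = 4 columns) as a numerical array.
f = np.array([[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120]])

M, N = f.shape
print(f[0, 0])   # value at the origin (top left corner)
print(f[0, 1])   # second sample along the first row
print(f[2, 3])   # f(M-1, N-1), the bottom-right pixel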
REPRESENTING DIGITAL IMAGES
As Fig. 2.19 shows, we define the origin of an image at the top left corner.
This is a convention based on the fact that many image displays (e.g., TV monitors) sweep
an image starting at the top left and moving to the right, one row at a time.
More important is the fact that the first element of a matrix is by convention at the top left of
the array.
Choosing the origin of f (x, y) at that point makes sense mathematically because digital
images in reality are matrices.
In fact, as you will see, sometimes we use x and y interchangeably in equations with the rows (r) and columns (c) of a matrix.
REPRESENTING DIGITAL IMAGES
It is important to note that the representation in Fig. 2.19, in which the positive x-axis extends downward and the positive y-axis extends to the right, is precisely the right-handed Cartesian coordinate system with which you are familiar, but shown rotated by 90° so that the origin appears at the top left.
The center of an M × N digital image with origin at (0, 0) and range to (M−1, N−1) is obtained by dividing M and N by 2 and rounding down to the nearest integer.
This operation sometimes is denoted using the floor operator, ( ⌊M/2⌋, ⌊N/2⌋ ), as shown in Fig. 2.19. This holds true for M and N even or odd. For example, the center of an image of size 1023 × 1024 is at (511, 512).
Some programming languages (e.g., MATLAB) start indexing at 1 instead of at 0. The center of an image in that case is found at ( ⌊M/2⌋ + 1, ⌊N/2⌋ + 1 ).
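A minimal sketch of these two center calculations, using Python's integer (floor) division; the function names are illustrative only.

def center_zero_based(M, N):
    """Origin at (0, 0): center is (floor(M/2), floor(N/2))."""
    return (M // 2, N // 2)

def center_one_based(M, N):
    """Indexing starting at 1 (e.g., MATLAB): center is (floor(M/2)+1, floor(N/2)+1)."""
    return (M // 2 + 1, N // 2 + 1)

print(center_zero_based(1023, 1024))  # (511, 512), as in the example above
print(center_one_based(1023, 1024))   # (512, 513)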
REPRESENTING DIGITAL IMAGES
Sometimes, the range of values spanned by the gray scale is referred to as the
dynamic range, a term used in different ways in different fields.
Here, we define the dynamic range of an imaging system to be the ratio of the maximum
measurable intensity to the minimum detectable intensity level in the system.
As a rule, the upper limit is determined by saturation and the lower limit by noise, although
noise can be present also in lighter intensities.
Figure 2.20 shows examples of saturation and slight visible noise.
Because the darker regions are composed primarily of pixels
with the minimum detectable intensity, the background in Fig. 2.20 is the noisiest
part of the image; however, dark background noise typically is much harder to see.
REPRESENTING DIGITAL IMAGES
The dynamic range establishes the lowest and highest intensity levels that a system
can represent and, consequently, that an image can have.
Closely associated with this concept is image contrast, which we define as the difference in
intensity between the highest and lowest intensity levels in an image.
The contrast ratio is the ratio of these two quantities.
When an appreciable number of pixels in an image have a high
dynamic range, we can expect the image to have high contrast.
Conversely, an image with low dynamic range typically has a dull, washed-out gray look.
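A minimal sketch of these two definitions (contrast as the difference between the highest and lowest intensity levels, contrast ratio as their ratio), assuming NumPy and a nonzero minimum intensity when forming the ratio; the sample values are arbitrary.

import numpy as np

def contrast_and_ratio(img):
    """Contrast = max - min intensity; contrast ratio = max / min.

    Assumes the minimum intensity is nonzero when forming the ratio.
    """
    lo, hi = float(img.min()), float(img.max())
    return hi - lo, (hi / lo if lo > 0 else float("inf"))

img = np.array([[12, 200], [35, 180]])
print(contrast_and_ratio(img))   # (188.0, ~16.7)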
Representing Digital Images
• The representation of an M×N numerical array as

  f(x, y) = [ f(0, 0)       f(0, 1)       ...   f(0, N − 1)
              f(1, 0)       f(1, 1)       ...   f(1, N − 1)
              ...           ...           ...   ...
              f(M − 1, 0)   f(M − 1, 1)   ...   f(M − 1, N − 1) ]
Representing Digital Images
• The representation of an M×N numerical array as

  A = [ a0,0       a0,1       ...   a0,N−1
        a1,0       a1,1       ...   a1,N−1
        ...        ...        ...   ...
        aM−1,0     aM−1,1     ...   aM−1,N−1 ]
Representing Digital Images
• The representation of an M×N numerical array in MATLAB (indices start at 1)

  f(x, y) = [ f(1, 1)    f(1, 2)    ...   f(1, N)
              f(2, 1)    f(2, 2)    ...   f(2, N)
              ...        ...        ...   ...
              f(M, 1)    f(M, 2)    ...   f(M, N) ]
Representing Digital Images
• Discrete intensity interval [0, L−1], where L = 2^k
• The number b of bits required to store an M × N digitized image is b = M × N × k
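A one-line Python sketch of this storage calculation; the 1024 × 1024, k = 8 example is illustrative.

def bits_to_store(M, N, k):
    """b = M * N * k, where the image has L = 2**k intensity levels."""
    return M * N * k

# Example: a 1024 x 1024 image with 256 levels (k = 8) needs 8,388,608 bits (1 MB).
print(bits_to_store(1024, 1024, 8))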
THANK YOU VERY MUCH