
UNIT - 2

Image Sensing and Acquisition

There are three principal sensor arrangements, each of which produces an electrical output proportional to light intensity:
(i) Single imaging sensor (ii) Line sensor (iii) Array sensor

Fig: (i) Single imaging sensor (ii) Line sensor (iii) Array sensor


Image Acquisition using a single sensor

The most common sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to the incident light. Placing a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favours light in the green band of the colour spectrum, so the sensor output will be stronger for green light than for the other components of the visible spectrum.

Fig: Combining a single sensor with motion to generate a 2-D image

In order to generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged. One arrangement used in high-precision scanning mounts a film negative onto a drum whose mechanical rotation provides displacement in one dimension, while the single sensor is mounted on a lead screw that provides motion in the perpendicular direction. Since mechanical motion can be controlled with high precision, this method is an inexpensive (but slow) way to obtain high-resolution images.
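The drum-and-lead-screw arrangement can be sketched as two nested loops, one per motion axis. The `read_sensor` callback here is a hypothetical stand-in for the physical read-out, not part of any real scanner API:

```python
import numpy as np

def scan_with_single_sensor(read_sensor, rows, cols):
    """Build a 2-D image one sample at a time from a single sensor.

    read_sensor(x, y) is a hypothetical stand-in for the sensor
    read-out at lead-screw position x and drum angle y.
    """
    image = np.zeros((rows, cols))
    for x in range(rows):        # lead-screw steps (perpendicular motion)
        for y in range(cols):    # drum-rotation steps
            image[x, y] = read_sensor(x, y)
    return image

# A synthetic "scene" in place of real hardware
scene = lambda x, y: (x + y) % 256
img = scan_with_single_sensor(scene, 4, 5)
print(img.shape)   # (4, 5)
```

The slowness of the method is visible in the sketch: every one of the rows × cols samples requires its own mechanical step.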
Image Acquisition using Sensor Strips

Fig: (a) Image acquisition using linear sensor strip (b) Image acquisition using circular
sensor strip.

The strip provides imaging elements in one direction, and motion perpendicular to the strip provides imaging in the other direction. This is the type of arrangement used in most flatbed scanners. Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of the image at a time, and the motion of the strip relative to the scene completes the other dimension of the two-dimensional image. Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object (the sensors obviously have to be sensitive to X-ray energy). This is the basis for medical and industrial computerized axial tomography (CAT) imaging.
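The line-at-a-time acquisition described above can be sketched by stacking strip read-outs. The `read_line` callback is a made-up placeholder for one strip read-out at a given flight or scan position:

```python
import numpy as np

def acquire_with_strip(read_line, num_lines):
    """Assemble a 2-D image from a linear sensor strip.

    read_line(i) is a hypothetical stand-in for one strip read-out
    at position i; it returns a 1-D array of samples.
    """
    lines = [read_line(i) for i in range(num_lines)]
    return np.vstack(lines)      # motion supplies the second dimension

strip = lambda i: np.full(6, i)  # synthetic strip: constant line i
img = acquire_with_strip(strip, 3)
print(img.shape)                 # (3, 6)
```

Each strip read-out gives one full row at once, which is why this arrangement is much faster than the single-sensor scan.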

Image Acquisition using Sensor Arrays

Fig: An example of the digital image acquisition process: (a) energy (illumination) source (b) an element of a scene (c) imaging system (d) projection of the scene onto the image plane (e) digitized image

This type of arrangement is found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more. CCD sensors are used widely in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images.
The first function performed by the imaging system is to collect the incoming energy and focus it
onto an image plane. If the illumination is light, the front end of the imaging system is a lens,
which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident
with the focal plane, produces outputs proportional to the integral of the light received at each
sensor.
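The "integral of the light received" can be illustrated numerically: with sampled irradiance values, the integral over the exposure becomes a sum. The proportionality constant k, the time step dt, and the sample values below are all made up for illustration:

```python
import numpy as np

def sensor_response(irradiance_samples, dt, k=1.0):
    """Approximate the sensor output as k times the integral of the
    incident light energy over the exposure (rectangle rule)."""
    return k * np.sum(irradiance_samples) * dt

e = np.array([0.2, 0.4, 0.4, 0.2])   # irradiance over four time steps
print(sensor_response(e, dt=0.25))   # 0.3
```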

A Simple Image Formation Model

An image is defined by a two-dimensional function f(x,y). The value or amplitude of f at spatial coordinates (x,y) is a positive scalar quantity. When an image is generated from a physical process, its values are proportional to the energy radiated by the physical source. As a consequence, f(x,y) must be nonzero and finite; that is,

0 < f(x,y) < ∞

The function f(x,y) may be characterized by two components: (1) the amount of source illumination incident on the scene being viewed and (2) the amount of illumination reflected by the objects in the scene. These are called the illumination and reflectance components, denoted i(x,y) and r(x,y) respectively. The two functions combine as a product to form f(x,y):

f(x,y)=i(x,y) r(x,y)

where 0 < i(x,y) < ∞ and 0 ≤ r(x,y) ≤ 1; r(x,y) = 0 means total absorption and r(x,y) = 1 means total reflectance.
We call the intensity of a monochrome image at any coordinates (x,y) the gray level (l) of the image at that point; that is, l = f(x,y).
The value of l ranges over the interval [0, L-1], where l = 0 indicates black and l = L-1 indicates white. All intermediate values are shades of gray varying from black to white.
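The product model can be checked with a small numeric example. The illumination and reflectance values below are invented for illustration:

```python
import numpy as np

# Illumination i(x,y) and reflectance r(x,y) combine as a product.
i = np.array([[100.0, 100.0],
              [ 50.0,  50.0]])   # source illumination, 0 < i < inf
r = np.array([[0.0, 1.0],
              [0.5, 0.2]])       # reflectance: 0 absorbs, 1 reflects all
f = i * r                        # f(x,y) = i(x,y) r(x,y)
print(f)
# [[  0. 100.]
#  [ 25.  10.]]
```

Note how a perfectly absorbing surface (r = 0) yields f = 0 regardless of how strong the illumination is.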

Image Sampling and Quantization

Sampling and quantization are the two processes used to convert a continuous (analog) image into a digital image. Image sampling refers to the discretization of the spatial coordinates (the x- and y-axes), whereas quantization refers to the discretization of the gray-level values (the amplitude axis).

(Given a continuous image, f(x,y), digitizing the coordinate values is called sampling and
digitizing the amplitude (intensity) values is called quantization.)
The one-dimensional function shown in fig 2.16(b) is a plot of the amplitude (gray-level) values of the continuous image along the line segment AB in fig 2.16(a). The random variations are due to image noise. To sample this function, we take equally spaced samples along line AB, as shown in fig 2.16(c). In order to form a digital function, the gray-level values must also be converted (quantized) into discrete quantities. The right side of fig 2.16(c) shows the gray-level scale divided into eight discrete levels, ranging from black to white. The result of both sampling and quantization is shown in fig 2.16(d).
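The two steps can be sketched on a 1-D profile standing in for the scan line AB. The profile, sampling step, and number of levels here are arbitrary choices for illustration:

```python
import numpy as np

def sample_and_quantize(f, step, levels):
    """Sample a dense 1-D profile and quantize its amplitude.

    f      : dense array standing in for the continuous gray-level
             profile along a line such as AB
    step   : keep every step-th value (spatial sampling)
    levels : number of discrete gray levels (amplitude quantization)
    """
    sampled = f[::step]                                  # discretize position
    lo, hi = f.min(), f.max()
    q = np.round((sampled - lo) / (hi - lo) * (levels - 1))
    return q.astype(int)                                 # discretize amplitude

profile = np.linspace(0.0, 1.0, 100)   # smooth ramp from black to white
print(sample_and_quantize(profile, 25, 8))
```

With eight levels, as in fig 2.16(c), every sample is forced onto one of the values 0 through 7.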

Representing Digital Image


An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
Fig: Coordinate convention used to represent digital images

Fig: Zoomed image, where small white boxes inside the image represent pixels

A digital image is composed of a finite number of elements, referred to as picture elements, image elements, pels, or pixels. Pixel is the term most widely used to denote the elements of a digital image.

We can represent an M×N digital image as a compact matrix:

f(x,y) = [ f(0,0)      f(0,1)      ...   f(0,N-1)
           f(1,0)      f(1,1)      ...   f(1,N-1)
           ...         ...         ...   ...
           f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) ]

When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a
digital image. The field of digital image processing refers to processing digital images by
means of a digital computer.
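The matrix representation maps directly onto an array: rows index x (0 to M-1) and columns index y (0 to N-1). The toy intensity values below are invented for illustration:

```python
import numpy as np

# An M x N digital image is an M x N matrix of intensity values.
M, N = 3, 4
f = np.arange(M * N).reshape(M, N)   # toy intensities 0..11
print(f.shape)   # (3, 4) -> M rows, N columns
print(f[2, 3])   # the element f(x=2, y=3), here 11
```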
