GIMA-RS-chapter5
Remote sensing images from spaceborne sensors with resolutions ranging from 1 km to less than 1 m are becoming increasingly available at reasonable cost. For some remote sensing sensors, large archives covering periods of over 20 years are already available via the World Wide Web (e.g., Landsat, NOAA-AVHRR). As a result, remote sensing is now applied in a large number of fields, ranging from agriculture, environmental monitoring and forestry to oceanography.
However, the remote sensing data acquisition process is affected by several factors that can
decrease the quality of the images collected. This may have an impact on the accuracy of the
image analysis. Image rectification and restoration aims to correct distorted and degraded
image data to create a more faithful representation of the original scene. This step is often
termed preprocessing because it normally precedes the actual image analysis that extracts
information from an image for a specific application (see chapter 5: Image processing).
Typical preprocessing operations include (1) radiometric preprocessing to adjust digital values for effects such as a hazy atmosphere, and/or (2) geometric preprocessing to bring an image into registration with a map or another image.
Although certain preprocessing procedures are frequently used, there can be no definitive list
of “standard” preprocessing steps. Each application of remote sensing data requires individual
choices on the preprocessing steps required. However, keep in mind that preprocessing
changes the original data. Therefore, the choices should tailor preprocessing to the data at
hand and the needs for the specific application, using only those preprocessing operations
essential to obtain a specific result. For example, early image analysis in remote sensing often
directly employed the digital numbers (DNs) produced by the sensor to estimate land surface
variables like Leaf Area Index (LAI). It is now realized that for an accurate quantification of
surface variables, preprocessing steps are required to convert DNs to physical quantities like
radiance and reflectance. This means that preprocessing must be performed whenever you want to compare remote sensing images taken at different observation dates and times (i.e., under different atmospheric conditions), at different altitudes, or with different sensor systems.
In this chapter we describe the basic concepts of preprocessing in remote sensing. We will
give some examples of regularly occurring distortions in remote sensing images and give an
overview of the most common techniques for radiometric and geometric preprocessing.
Finally, we will present some recent developments in image quality assessment, which can be used to assess images prior to analysis, and in automated preprocessing chains such as those applied for MODIS. The chapter will focus on the preprocessing of optical remote sensing data. For
preprocessing of other types of remote sensing data (e.g., thermal), handbooks as listed in the
last section of this chapter can be consulted.
To correct remotely sensed data, internal and external errors must be determined. Internal
errors are created by the sensor itself. They are generally systematic (predictable) and
stationary (constant) and may be determined from prelaunch or in-flight radiometric
calibration measurements. External errors are due to platform perturbations, and the
influence of the atmosphere and specific characteristics of the remotely sensed object, which
are variable in nature. Atmospheric correction and geometric correction are the most
common preprocessing steps to account for external errors in remotely sensed imagery.
Often remote sensing data (e.g., Landsat, NOAA-AVHRR) are represented as DNs when purchased from the data providers. A digital number (DN) is simply a measure of the strength of the electrical current produced by a light-sensitive cell: the more energy that falls on the sensor, the stronger the electrical current and, after analog-to-digital (AD) conversion, the larger the stored DN. However, for many quantitative applications, instead of DNs,
measurements of absolute radiances are required. For example, such conversions are
necessary when changes in absolute reflectance of objects are to be measured over time using
different sensors (e.g., multispectral sensor on Landsat-3 versus the one on Landsat-5). Also,
these conversions are important for the development of mathematical models that physically
relate image data to quantitative ground measurements (e.g., vegetation characteristics like
LAI and biomass, and water quality data).
The raw data or DNs as acquired by the remote sensing sensor depend on the characteristics
of the detector and the amount of energy received. The electromagnetic energy that reaches
the sensor in the case of optical satellite sensors, is originating from the sun (Figure 5.1).
There are several pathways by which this electromagnetic energy, or light from the sun, reaches the
sensor (Schott, 2007). The most important ones are:
• Direct reflected light: photons that originate from the sun, pass through the atmosphere,
are reflected from the earth’s surface, and propagate back through the atmosphere to the
sensor (pathway A);
• Skylight: photons that originate from the sun, are scattered by the atmosphere, and are
then reflected by the earth to the sensor (pathway B);
• Air light: photons that originate from the sun, are scattered by the atmosphere, and are reflected directly to the sensor without reaching the earth (pathway C).
The summed energy of these pathways results in the upwelling spectral radiance at the top of
the atmosphere (TOA). This energy is represented by the remote sensing sensor in the form of
DNs.
Several preprocessing steps are required to convert DNs to absolute radiance values or surface reflectance. These steps are represented in the so-called spectral preprocessing chain.
Figure 5.2 shows how the spectral representation of a remotely sensed spectrum changes
during the different preprocessing steps (Ustin et al., 2004). In this case the spectrum is
represented by a vegetation pixel. In the upper left panel you see what the spectral reflectance of this pixel looks like when it is measured on the ground with a field spectrometer (so-called
reflectance at top of canopy). The upper right panel in Figure 5.2 shows the “true” upwelling
spectral radiance for this pixel measured at the top of the atmosphere (at the elevation of the
satellite sensor). In other words this is the amount of energy which the sensor measures for
one specific area (the pixel area) of the earth surface. The middle panel illustrates how this
radiance is measured by the sensor in the form of digital numbers. In a next step, the so-called
radiometric calibration, these data must be calibrated to account for instrument performance,
resulting in a simplified radiance spectrum (lower left panel of Figure 5.2). You can see that
the form of this curve approximates the true radiance at the top of the atmosphere (upper right
panel). By taking into account the original solar irradiance, it is possible to transform the
calibrated radiance at the sensor to reflectance at top of atmosphere (not shown in figure).
Finally, to derive surface spectral reflectance we have to account for atmospheric influence
(Figure 5.1). By using an atmospheric correction procedure we can account for the scattering
and absorbing properties of gases and particles in the atmosphere. In the lower right panel of
Figure 5.2 the resulting surface reflectance spectrum of the vegetated pixel after correction is
shown. This spectrum shows a close match with a plant reflectance spectrum as measured in
the field (upper left panel of Figure 5.2). In the following paragraphs the different steps of the
spectral preprocessing chain will be described in more detail.
The radiance measured by a sensor (Figure 5.2) for a specific object on the earth is influenced
by factors such as changes in scene illumination, atmospheric conditions, viewing geometry, and
instrument response characteristics. Through the application of radiometric correction
techniques we are able to reduce and correct for the influence of these factors. In this paragraph
we will explain the different steps of the radiometric correction process which are dependent
upon the characteristics of the sensor used to acquire the image data. The atmospheric
correction procedure is presented in a separate paragraph.
Cloud cover is often a problem in optical remote sensing. This problem can be overcome by
taking a sequence of images (say on five consecutive days) and cutting and pasting together
an image that represents a cloud-free composite (e.g., MODIS NDVI product: 16 days).
However, for this application it is necessary to correct for differences in sun elevation and
earth-sun distance. The sun elevation correction accounts for the seasonal position of the sun
relative to the earth. The earth-sun distance correction is applied to normalize for seasonal
changes in the distance between the earth and the sun. The parameters for these corrections
are normally part of the ancillary data supplied with the image and depend on date and time of
image acquisition.
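The two normalizations described above are usually combined with the radiance-to-reflectance conversion in the standard formula ρ = π · L · d² / (ESUN · cos θs), where L is the at-sensor radiance, d the earth-sun distance in astronomical units, ESUN the band's mean exoatmospheric solar irradiance, and θs the solar zenith angle (90° minus the sun elevation). A minimal Python sketch, where the ESUN value, the acquisition geometry, and the simple cosine approximation of d are illustrative assumptions, not values for any particular sensor:

```python
import math

def earth_sun_distance(day_of_year):
    """Earth-sun distance in astronomical units (simple cosine
    approximation; adequate for illustration)."""
    return 1.0 - 0.01672 * math.cos(math.radians(0.9856 * (day_of_year - 4)))

def toa_reflectance(radiance, esun, sun_elevation_deg, day_of_year):
    """Convert at-sensor spectral radiance (W m-2 sr-1 um-1) to
    top-of-atmosphere reflectance, normalizing for both the sun
    elevation and the seasonal earth-sun distance."""
    d = earth_sun_distance(day_of_year)
    solar_zenith = math.radians(90.0 - sun_elevation_deg)
    return (math.pi * radiance * d ** 2) / (esun * math.cos(solar_zenith))

# Example: a hypothetical radiance of 80 W m-2 sr-1 um-1 in a band with
# ESUN = 1550 W m-2 um-1, sun elevation 45 degrees, mid-June acquisition.
rho = toa_reflectance(80.0, 1550.0, 45.0, 167)
```

For physically plausible inputs the resulting reflectance lies between 0 and 1, which also provides a quick sanity check on the ancillary parameters.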
Another radiometric data processing step required for quantitative applications is the process
that converts recorded sensor digital numbers to an absolute scale of spectral radiance (watts per square meter per steradian per micrometer) that is independent of the image-forming characteristics of the
sensor (e.g., integration time, band centre, intensity of input signal). Normally, detectors are
designed to produce a linear response to incident spectral radiance. Figure 5.3 shows the
linear radiometric response function typical of an individual Landsat TM channel (band). The
figure shows that increasing DN values correspond to increasing radiance values for the
indicated TM channel. When a sensor is built, accurate measurements of the radiometric
properties of the response function are made before the sensor is sent into space. This is called
preflight calibration. Each spectral band of the sensor has its own response function, and its
characteristics can be monitored after the launch of the sensor in space using onboard
calibration lamps (in-flight calibration). In this way, changes in the sensor calibration during its operational use can be monitored and, if required, adjusted.
Figure 5.3 shows that for the Landsat TM sensor the relationship between radiance and DN
values can be characterized by a linear fit (Lillesand et al., 2004):
DN = A0 + A1* L (5.1)
where DN is the recorded digital number value, A1 is the slope of the response function
(channel gain), L is the measured spectral radiance, and A0 is the intercept of the response
function (channel offset). In Figure 5.3, LMIN is the spectral radiance corresponding to a DN
response of 0 and LMAX is the minimum radiance required to generate the maximum DN
(here 255). This means that LMAX represents the radiance at which the channel saturates.
The inverse of the radiometric response function can be used to convert a measured DN in a
particular band to absolute units of spectral radiance. For most sensors, these so-called
preflight coefficients, channel gain and offset, are provided by the sensor builders and they
can be found in remote sensing handbooks.
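Inverting equation 5.1 gives L = (DN − A0) / A1. A short sketch of this inversion, using hypothetical (not official) LMIN/LMAX calibration values; gain and offset are derived from the two calibration points (DN = 0 at LMIN, DN = 255 at LMAX):

```python
def dn_to_radiance(dn, lmin, lmax, dn_max=255):
    """Invert the linear radiometric response function DN = A0 + A1*L.
    The gain and offset follow from the two calibration points:
    DN = 0 at L = lmin and DN = dn_max at L = lmax."""
    gain = dn_max / (lmax - lmin)   # A1, channel gain (DN per radiance unit)
    offset = -gain * lmin           # A0, channel offset, so that DN(lmin) = 0
    return (dn - offset) / gain     # L = (DN - A0) / A1

# Illustrative calibration values for one band:
# LMIN = -1.5 and LMAX = 152.1 W m-2 sr-1 um-1.
radiance = dn_to_radiance(128, -1.5, 152.1)
```

By construction, a DN of 0 maps back to LMIN and the maximum DN maps back to LMAX, the saturation radiance of the channel.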
A special kind of error in remote sensing images related to sensor characteristics is called
image noise. Image noise is any unwanted disturbance in image data due to limitations in the
sensing, signal digitization, or data recording process. It can be the result of periodic drift or
malfunction of a detector, electronic interference between sensor components, or intermittent data losses in the data transmission and recording sequence of an image (Lillesand et al., 2004).
Noise can either degrade or totally mask the true radiometric information content of a digital
image. In most cases these kinds of errors can already be detected by a visual check of the raw DN data (Figure 5.4). Specialized procedures are available to remove or restore image noise features. When it is known that certain types of image noise occur for a sensor, this information is often provided by the data provider, or the noise is removed before delivery of the image. Well-known types of image noise in remote sensing are:
• Striping or banding is a systematic noise type and is related to sensors that sweep multiple
scan lines simultaneously. This stems from variations in the response of the individual
detectors used within each band. For example the radiometric response of one of the six
detectors of the early Landsat MSS sensor tended to drift over time (Figure 5.4 left). This
resulted in relatively higher or lower values along every sixth line in the image data. A
common way to destripe an image is the histogram method (Lillesand et al., 2004).
• Another line-oriented noise problem is line drop, where a number of adjacent pixels along
a line (or an entire line) may contain erroneous DNs (Figure 5.4 right). This problem is
solved by replacing the defective DNs with the average of the values for the pixels
occurring in the lines just above and below.
• Bit errors are a good example of random noise within an image. Such noise causes images
to have a “salt and pepper” or “snowy” appearance. This kind of noise can be removed by
using moving neighborhood windows, where all pixels are compared to their neighbors. If
the difference between a given pixel and its surroundings exceeds a certain threshold, the
pixel is assumed to contain noise and is replaced by an average value of the surrounding
pixels.
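The line drop and bit error repairs described above can be sketched in a few lines of pure Python. The threshold and the 3×3 neighbourhood are illustrative choices, and an image is represented simply as a list of rows of DNs:

```python
def repair_dropped_line(image, row):
    """Replace a dropped scan line by the average of the lines
    directly above and below it."""
    above, below = image[row - 1], image[row + 1]
    image[row] = [(a + b) / 2 for a, b in zip(above, below)]
    return image

def despeckle(image, threshold):
    """Remove bit-error ('salt and pepper') noise: if a pixel differs
    from the mean of its 8 neighbours by more than `threshold`,
    replace it by that neighbourhood mean."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neigh = [image[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            mean = sum(neigh) / 8.0
            if abs(image[i][j] - mean) > threshold:
                out[i][j] = mean
    return out
```

Production software typically uses more refined variants (e.g., median filters), but the principle of comparing each pixel to its local neighbourhood is the same.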
Figure 5.4: Examples of image noise showing (left) the striping effect for Landsat MSS and (right) dropped lines for Landsat TM (adapted from ccrs.nrcan.gc.ca).
As indicated in Figure 5.1, the composition of the atmosphere has an important effect on the
measurement of radiance with remote sensing. The atmosphere consists mainly of molecular
nitrogen and oxygen (clean dry air). In addition, it contains water vapour and particles
(aerosols) such as dust, soot, water droplets and ice crystals. For certain applications of
remote sensing, information on the atmospheric conditions themselves is required, for example to determine ozone and NO2 concentrations as indicators of smog, or for weather forecasting. However, for most land applications the adverse effects of the atmosphere need to be removed before remotely
sensed data can be properly analyzed. The atmosphere affects the radiance measured at any
pixel in an image in two different ways. On the one hand, it reduces the energy illuminating
the earth surface for example through absorption of light. This affects the direct reflected
light: pathway A of Figure 5.1. On the other hand, the atmosphere acts as a reflector itself; the resulting diffuse radiation is caused by scattering (pathways B and C of Figure 5.1). This
means that the most important step in atmospheric correction is to distinguish “real” radiance
as reflected by the earth surface from the disturbing path radiance originating from
atmospheric scattering. When meteorological field data on the composition of the atmosphere
during the image acquisition are available, it is possible to reduce these effects using
atmospheric correction models.
Several atmospheric correction models are available which vary a great deal in complexity. In
principle they correct for two main effects: scattering and absorption. Scattering can be
described as disturbance of the electromagnetic field by the constituents of the atmosphere
resulting in a change of the direction and the spectral distribution of the energy in the beam.
Three kinds of scattering effects can be distinguished:
• Rayleigh scattering is generated through the influence of air molecules on radiation. This
Rayleigh scattering is wavelength dependent, with shorter wavelengths showing larger
scattering effects. This wavelength dependency explains for example why we see the sky
as blue. Blue light has a shorter wavelength and is thus scattered more than green and red light, which have longer wavelengths. As a result, in every direction you look, some of
this scattered blue light reaches your eyes. Since you see the blue light from everywhere
overhead, the sky looks blue.
• Mie scattering is the result of the influence of aerosols on radiance. Aerosols are small
solid or liquid particles that remain suspended in the air and follow the motion of the air
within certain broad limits. The particles concerned have a diameter of 0.1 to 10 times the wavelength in question. Aerosols are the most uncertain factor in calculating solar
radiation on the ground. They are highly variable in size, distribution, composition, and
optical properties.
• Non-selective scattering, finally, does not depend on the wavelength of the radiation.
The relevant particles, like dust, smoke and rain, are much larger than the wavelength.
This type of scattering is of minor importance on a clear day.
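The wavelength dependence of Rayleigh scattering noted above (scattering strength proportional to λ⁻⁴) can be illustrated with a small calculation; the representative wavelengths chosen for blue and red light are assumptions:

```python
def rayleigh_relative(wavelength_um, reference_um=0.55):
    """Relative Rayleigh scattering strength, proportional to
    wavelength**-4, normalized to a reference wavelength."""
    return (reference_um / wavelength_um) ** 4

# Blue light (~0.45 um) versus red light (~0.65 um):
ratio = rayleigh_relative(0.45) / rayleigh_relative(0.65)
# ratio is roughly 4.4: blue is scattered about four times more
# strongly than red, which is why the clear sky looks blue.
```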
Absorption takes place due to the presence of molecules in the atmosphere. Their influence on
the attenuation of radiation varies highly with wavelength. Di-atomic oxygen (O2), di-atomic
nitrogen (N2), atomic oxygen (O), nitrogen (N) and ozone (O3) are the five principal absorbers
in the ultraviolet and visible spectrum. Of the gases that absorb solar electromagnetic
radiation in the infrared wavelengths, the most important are H2O, CO2, O2, N2O, CO, O3 and
N2. Some wavelengths are absorbed completely: e.g., the far-ultraviolet (<0.20 μm) by atomic and molecular oxygen, nitrogen and ozone. Others are hardly absorbed at all. These
form the so-called atmospheric windows. For example, the bands of Landsat TM have been chosen, as far as possible, within these windows.
A simple method to correct for atmospheric effects like haze in an image is the so-called darkest pixel method (Liang et al., 2001). In this method, objects are identified that are known to have very low reflectance values (and thus have a dark appearance in the image).
For example, the reflectance of deep clear water is essentially zero in the near-infrared region
of the spectrum. Therefore any signal measured over this kind of water represents signal
originating from the atmosphere only (path radiance). To correct for the atmospheric haze, the
measured signal value is subtracted from all image pixels in that band. Figure 5.5 shows the
result of a more complex atmospheric correction method for a Landsat TM image (Liang et
al., 2001). This method also accounts for aerosol composition of the atmosphere and so-called
adjacency effects. The final result of an atmospheric correction procedure is a set of surface reflectance values for all image pixels, which can be used for further image processing, e.g., classification and variable estimation (LAI).
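A sketch of the darkest pixel (dark object) method for one band: the lowest DN is assumed to represent pure path radiance and is subtracted from every pixel. The tiny example band is hypothetical:

```python
def dark_object_subtraction(band):
    """Darkest-pixel haze correction for one band: the lowest DN in
    the band is taken as atmospheric path radiance and subtracted
    from every pixel, clamping results at zero."""
    darkest = min(min(row) for row in band)
    return [[max(dn - darkest, 0) for dn in row] for row in band]

# Near-infrared band containing deep clear water, whose reflectance
# should be essentially zero; its DN of 58 is treated as the haze offset.
band = [[58, 120], [95, 210]]
corrected = dark_object_subtraction(band)
```

In practice the dark object is identified per band (haze is strongly wavelength dependent), so the subtracted offset differs between bands.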
Figure 5.5: Example of a Landsat TM image before (left) and after (right) atmospheric correction (from Liang et al., 2001).
The previous paragraphs have mainly focused on the correction of remote sensing images in the spectral domain. However, distortions in the spatial domain also commonly occur during remote sensing data acquisition. Geometric correction of remote sensing imagery is normally implemented as a two-step procedure. First, those distortions are considered that are
systematic. Secondly, distortions that are random, or unpredictable, are corrected. Systematic
errors are predictable in nature and can be corrected using data from the orbit of the platform
and knowledge of internal sensor distortion. Common types of systematic distortions are: scan
skew, mirror-scan velocity, panoramic distortions, platform velocity, earth rotation,
perspective (Jensen, 1996). Most commercially available remote sensing data (e.g., Landsat,
SPOT) already have much of the systematic error removed.
Unsystematic errors are corrected based on geometric registration of remote sensing imagery
to a known ground coordinate system (e.g., topographic map). The geometric registration
process involves the identification of the image coordinates of ground control points (GCP),
clearly discernible points (e.g., road crossings), in the distorted image and an available map.
These well-distributed GCP pairs are used to determine the proper transformation equations.
The equations are then applied to the original (row and column) image coordinates to map
them into their new ground coordinates. This approach is called image-to-map registration.
An alternative would be to register one (or more) images to another image, the so-called
image-to-image registration. This procedure is often applied for multitemporal image
comparison when for example land cover changes are monitored using Landsat TM images.
Different resampling methods can be applied to calculate the new pixel values from the
original digital pixel values in the uncorrected image. An overview of these resampling
methods is given in chapter 7 of the handbook Introduction to Geographic Information
Systems of Chang (2006).
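A common first-order choice for the transformation equations is an affine model, x' = a·x + b·y + c (and likewise for y'), fitted to the GCP pairs by least squares. A sketch using NumPy, with hypothetical GCPs describing a pure shift:

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Least-squares first-order (affine) transformation from image
    (column, row) coordinates to map coordinates, estimated from
    ground control point pairs. Needs at least 3 GCPs."""
    src = np.asarray(image_xy, dtype=float)
    dst = np.asarray(map_xy, dtype=float)
    # Design matrix [x, y, 1] for x' = a*x + b*y + c (and likewise y')
    A = np.column_stack([src, np.ones(len(src))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                      # shape (3, 2): one column per output axis

def apply_affine(coeffs, xy):
    """Map image coordinates into ground coordinates."""
    xy = np.asarray(xy, dtype=float)
    return np.column_stack([xy, np.ones(len(xy))]) @ coeffs

# Hypothetical GCPs: a pure shift of (+100, +200) map units
gcp_img = [(0, 0), (10, 0), (0, 10), (10, 10)]
gcp_map = [(100, 200), (110, 200), (100, 210), (110, 210)]
c = fit_affine(gcp_img, gcp_map)
```

With exactly three GCPs the six coefficients are fully determined; using more well-distributed GCPs allows the registration accuracy to be checked via the least-squares residuals (RMS error).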
Figure 5.6: Geometric distortions due to aircraft orientation. Gray boundaries represent nominal coverage; black boundaries represent actual coverage (from Schott, 2007).
Space-based sensor platforms are usually geometrically stabilized such that the only motion
of the platform during the imaging is the along-track motion of the spacecraft (e.g., platform
velocity). Aircraft sensors are often not stabilized so that the orientation of the aircraft can
change from one line to the next, or, in extreme cases, even from pixel to pixel within a line
(Schott, 2007). An overview of possible distortions due to aircraft movement is given in
Figure 5.6. Pitch and yaw effects are generally relatively constant errors typically removed in
preprocessing. Roll effects may vary considerably on a line-to-line basis and can be corrected
using measurements from a gyroscope. Figure 5.7 shows an image before and after roll
compensation was performed using signals recorded from a gyroscope. The roll effect is
especially observed in the buildings in the left part of the image.
Figure 5.7: Example image before (left) and after (right) lines were shifted to correct for roll distortion of an airborne sensor acquisition (from Schott, 2007).
5.7 Recent developments in preprocessing
With the increasing use of remote sensing for global monitoring of the earth (e.g., using the
medium resolution MODIS and MERIS sensors), it becomes very important to know the
quality of the images that are acquired and compared with each other. To facilitate the quality
assessment of remote sensing images and products, recent developments are aiming at the
standardization of the preprocessing process. This means that for a specific remote sensing
sensor automated processing chains are developed that process the acquired images from raw
data to surface reflectance and finally to a set of higher order products (NDVI, LAI, albedo
etc.). A good example of this approach is the development of the processing chain for the
Moderate Resolution Imaging Spectroradiometer (MODIS). Specific preprocessing software is
developed to generate standard products which can be applied in a broad range of application
fields (modis.gsfc.nasa.gov). An important strength of this approach is that automated image
quality checks are built in and the quality of every image is described and made available to
the user.
Aspinall, R.J., W.A. Marcus and J.W. Boardman, 2002. Considerations in collecting,
processing, and analyzing spatial resolution hyperspectral data for environmental
investigations. Journal of Geographical Information Systems 4: 15-29.
Chang, K., 2006. Introduction to Geographic Information Systems, third edition. McGraw-
Hill Higher education, New York.
Jensen, J.R., 2005. Introductory digital image processing, third edition. Prentice Hall, Upper Saddle River, NJ.
Liang, S., H. Fang and M. Chen, 2001. Atmospheric correction of Landsat ETM+ land surface imagery – Part I: methods. IEEE Transactions on Geoscience and Remote Sensing 39: 2490-2498.
Liang, S., 2004. Quantitative remote sensing of land surfaces. John Wiley and Sons, Inc., New York.
Lillesand, T.M., R.W. Kiefer and J.W. Chipman, 2008. Remote sensing and image interpretation. Sixth edition. J. Wiley & Sons, Inc., New York.
Schott, J.R., 2007. Remote sensing. The image chain approach. 2nd Ed., Oxford University
Press, New York.
Ustin S.L., D.A. Roberts, J.A. Gamon, G.P. Asner and R.O. Green., 2004. Using imaging
spectroscopy to study ecosystem processes and properties. BioScience 54: 523-534.
Web sources
http://ccrs.nrcan.gc.ca/resource/index_e.php: remote sensing tutorials available from Natural
Resources Canada (NRCan)
http://modis.gsfc.nasa.gov/: home page for the MODIS satellite sensor
http://envisat.esa.int/instruments/meris/: home page for the MERIS satellite sensor