Remote Sens., Volume 4, Issue 2 (February 2012) – 10 articles, Pages 327–560

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
3184 KiB  
Article
Two Linear Unmixing Algorithms to Recognize Targets Using Supervised Classification and Orthogonal Rotation in Airborne Hyperspectral Images
by Amir Averbuch and Michael Zheludev
Remote Sens. 2012, 4(2), 532-560; https://doi.org/10.3390/rs4020532 - 21 Feb 2012
Cited by 13 | Viewed by 8143
Abstract
The goal of the paper is to detect pixels that contain targets of known spectra. The target can be present in a sub- or above pixel. Pixels without targets are classified as background pixels. Each pixel is treated via the content of its neighborhood. A pixel whose spectrum is different from its neighborhood is classified as a “suspicious point”. In each suspicious point there is a mix of target(s) and background. The main objective in a supervised detection (also called “target detection”) is to search for a specific given spectral material (target) in hyperspectral imaging (HSI) where the spectral signature of the target is known a priori from laboratory measurements. In addition, the fractional abundance of the target is computed. To achieve this we present two linear unmixing algorithms that recognize targets with known (given) spectral signatures. The CLUN is based on automatic feature extraction from the target’s spectrum. These features separate the target from the background. The ROTU algorithm is based on embedding the spectra space into a special space by random orthogonal transformation and on the statistical properties of the embedded result. Experimental results demonstrate that the targets’ locations were extracted correctly and these algorithms are robust and efficient.
(This article belongs to the Special Issue Hyperspectral Remote Sensing)
Figures
Figure 1. Exemplary points that were taken from the area pointed to by the arrow (the full image is given in the second figure in Section 2.1).
Figure 2. Zoom on the exemplary point in Figure 1. Left: the marked pixel contains the target with the spectrum T as a whole pixel. Center: the marked pixel is a mixed pixel with the spectrum P. Right: the marked pixel is a pure background pixel with the spectrum B.
Figure 3. Spectra of the background, the mixed pixel and the target from Figures 1 and 2.
Figure 4. The dataset “desert” is a hyperspectral image of a desert area taken by an airplane flying 10,000 feet above sea level. The resolution is 1.3 m/pixel, 286 × 2,640 pixels per waveband with 168 wavebands.
Figure 5. The dataset “city” is a hyperspectral image of a city taken by an airplane flying 10,000 feet above sea level. The resolution is 1.5 m/pixel, 294 × 501 pixels per waveband with 28 wavebands.
Figure 6. The dataset “field” is a hyperspectral image of a field taken by an airplane flying 9,500 feet above sea level. The resolution is 1.2 m/pixel, 286 × 300 pixels per waveband with 50 wavebands.
Figure 7. The “unmixing histogram” for Figure 3. The x-axis is the b partition of [0, 1], ε = 0.01. The y-axis is the number of coordinates that belong to each interval in the b partition.
Figure 8. The solid line corresponds to the real spectrum of the background and the dotted line corresponds to the estimated spectrum of the background.
Figure 9. Random spectral vector signatures.
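The fractional-abundance step described in the abstract rests on the standard linear mixing model, in which a mixed pixel is a convex combination of a target and a background spectrum. The sketch below is only a minimal illustration of that model with invented spectra; it is not the authors' CLUN or ROTU algorithm.

```python
import numpy as np

def fractional_abundance(pixel, target, background):
    """Estimate the fraction a of a known target spectrum in a pixel,
    assuming the linear mixing model pixel ~= a*target + (1-a)*background.
    Closed-form least-squares solution, clipped to the physical range [0, 1]."""
    d = target - background
    a = np.dot(pixel - background, d) / np.dot(d, d)
    return float(np.clip(a, 0.0, 1.0))

# Toy example with a 168-band spectrum (the band count of the "desert" cube);
# all values below are synthetic, not taken from the paper.
rng = np.random.default_rng(0)
target = rng.uniform(0.2, 0.8, 168)
background = rng.uniform(0.1, 0.4, 168)
mixed = 0.3 * target + 0.7 * background + rng.normal(0, 0.005, 168)
print(round(fractional_abundance(mixed, target, background), 3))  # close to 0.3
```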
1065 KiB  
Article
Recovery of Forest Canopy Parameters by Inversion of Multispectral LiDAR Data
by Andrew Wallace, Caroline Nichol and Iain Woodhouse
Remote Sens. 2012, 4(2), 509-531; https://doi.org/10.3390/rs4020509 - 17 Feb 2012
Cited by 63 | Viewed by 10399
Abstract
We describe the use of Bayesian inference techniques, notably Markov chain Monte Carlo (MCMC) and reversible jump MCMC (RJMCMC) methods, to recover forest structural and biochemical parameters from multispectral LiDAR (Light Detection and Ranging) data. We use a variable dimension, multi-layered model to represent a forest canopy or tree, and discuss the recovery of structure and depth profiles that relate to photochemical properties. We first demonstrate how simple vegetation indices such as the Normalized Differential Vegetation Index (NDVI), which relates to canopy biomass and light absorption, and Photochemical Reflectance Index (PRI) which is a measure of vegetation light use efficiency, can be measured from multispectral data. We further describe and demonstrate our layered approach on single wavelength real data, and on simulated multispectral data derived from real, rather than simulated, data sets. This evaluation shows successful recovery of a subset of parameters, as the complete recovery problem is ill-posed with the available data. We conclude that the approach has promise, and suggest future developments to address the current difficulties in parameter inversion.
(This article belongs to the Special Issue Laser Scanning in Forests)
Figures
Figure 1. (a) Multispectral pulsed laser system built and used for measurement of a small conifer at SELEX GALILEO. (b) Cupressus macrocarpa was the species used for the measurements.
Figure 2. (a) Normalised data for stressed conifer. (b) Normalised data for unstressed conifer. The data at 532 and 550 nm were taken at the same time on the same part of the tree; the data at 690 nm and 780 nm were taken later on a different part of the tree.
Figure 3. Comparing (a) NDVI and (b) PRI as a function of time (depth) on Cupressus macrocarpa.
Figure 4. (a) Photograph of the small conifer mounted on a mixed soil and grass baseboard. The height of the conifer was ∼1.10 m. (b) Full waveform from the central pixel, (c) from a peripheral pixel, (d) integrated signal formed by summing all pixel contributions.
Figure 5. (a) Integrated response from all 100 pixels of the tree image. The blue curve shows the real, raw data, and the red curve the result of a synthetic, noise-free waveform generated from the parameter estimation by RJMCMC. (b) The trace of k for 10^5 iterations, and the a-posteriori distribution on k, the number of layers used to approximate the waveform.
Figure 6. Variation of NDVI as a function of Cab and leaf/bark ratio.
Figure 7. (a) Assumed distribution of current, first-year, and older leaves using published data. (b) Normalised distributions of leaf, bark and soil used in the simulations. (c) Resulting Cab profile using previous data. (d) Example of generated multispectral signature with added noise.
Figure 8. (a) An example of the recovered waveform in comparison with the synthesized data at 531 nm. (b) The recovered areas for a known abundance profile. (c) A comparison of the recovered Cab concentrations against the true values. The root mean square error is 1.96 µg·cm−2.
Figure 9. Results from repeated RJMCMC analysis on noisy multispectral data. (a–d) show fitted results from one trial at wavelengths 531 nm, 570 nm, 690 nm and 780 nm, respectively. (e) shows the recovered area values as a function of bin number through the canopy for all 15 trials. (f) shows as scattered points the recovered Cab values, as a solid black line a regression fit to the scattered points, and as a solid red line the ground truth, i.e., the modeled Cab profile.
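The two indices named in the abstract, NDVI and PRI, have standard band-ratio definitions. The sketch below shows only that index arithmetic on invented per-channel return intensities; it does not reproduce the paper's RJMCMC inversion or any instrument calibration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR (~780 nm) and red (~690 nm) signals."""
    return (nir - red) / (nir + red)

def pri(r531, r570):
    """Photochemical Reflectance Index from the 531 nm and 570 nm channels."""
    return (r531 - r570) / (r531 + r570)

# Hypothetical per-depth-bin channel intensities from a multispectral waveform
r531, r570 = np.array([0.08, 0.06]), np.array([0.07, 0.07])
red, nir = np.array([0.05, 0.04]), np.array([0.40, 0.30])
print(ndvi(nir, red), pri(r531, r570))
```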
586 KiB  
Article
Estimating Biophysical Parameters of Individual Trees in an Urban Environment Using Small Footprint Discrete-Return Imaging Lidar
by Rupesh Shrestha and Randolph H. Wynne
Remote Sens. 2012, 4(2), 484-508; https://doi.org/10.3390/rs4020484 - 15 Feb 2012
Cited by 61 | Viewed by 12983
Abstract
Quantification of biophysical parameters of urban trees is important for urban planning, and for assessing carbon sequestration and ecosystem services. Airborne lidar has been used extensively in recent years to estimate biophysical parameters of trees in forested ecosystems. However, similar studies are largely lacking for individual trees in urban landscapes. Prediction models to estimate biophysical parameters such as height, crown area, diameter at breast height, and biomass for over two thousand individual trees were developed using best subsets multiple linear regression for a study area in central Oklahoma, USA using point cloud distributional metrics from an Optech ALTM 2050 lidar system. A high level of accuracy was attained for estimating individual tree height (R2 = 0.89), dbh (R2 = 0.82), crown diameter (R2 = 0.90), and biomass (R2 = 0.67) using lidar-based metrics for pooled data of all tree species. More variance was explained in species-specific estimates of biomass (R2 = 0.68 for Juniperus virginiana to 0.84 for Ulmus parviflora) than in estimates from broadleaf deciduous (R2 = 0.63) and coniferous (R2 = 0.45) taxonomic groups—or the data set analysed as a whole (R2 = 0.67). The metric crown area performed particularly well for most of the species-specific biomass equations, which suggests that tree crowns should be delineated accurately, whether manually or using automatic individual tree detection algorithms, to obtain a good estimation of biomass using lidar-based metrics.
(This article belongs to the Special Issue Laser Scanning in Forests)
Figures
Figure 1. Study area at Tinker Air Force Base, Oklahoma. Trees measured are shown in darker shades.
Figure 2. Distribution of trees in each class showing field-measured total height (solid line) and lidar-measured maximum height HMAX (dotted line) of conifers and broadleaves.
Figure 3. Relationship between field-measured height and lidar-measured maximum height.
Figure 4. Relationship between field-measured height and lidar-measured maximum height in (a) broadleaf trees (total 2292 trees) and (b) conifers (total 1058 trees).
Figure 5. Relationship between field-measured height and lidar-measured maximum height in major species of the study area. Tree species abbreviations: JUVI = Juniperus virginiana, PINI = Pinus nigra, ULPU = Ulmus pumila, PYCA = Pyrus calleryana, PLOC = Platanus occidentalis, FRPE = Fraxinus pennsylvanica, ULPA = Ulmus parviflora, ACSA = Acer saccharinum, QUSH = Quercus shumardii.
Figure 6. Relationship of field-measured crown radius (m) with (a) lidar-measured crown radius (m) and (b) lidar-measured maximum height (m) for all trees.
Figure 7. Relationship between total ground-predicted tree biomass and lidar-predicted biomass of major species of the study area. Tree species abbreviations: JUVI = Juniperus virginiana, PINI = Pinus nigra, ULPU = Ulmus pumila, PYCA = Pyrus calleryana, PLOC = Platanus occidentalis, FRPE = Fraxinus pennsylvanica, ULPA = Ulmus parviflora, ACSA = Acer saccharinum, QUSH = Quercus shumardii.
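The prediction models in the abstract regress field-measured tree attributes on distributional metrics of the lidar point cloud. The sketch below is a simplified, synthetic-data illustration of that general workflow (ordinary least squares on a few made-up height metrics), not the paper's best-subsets procedure or its actual predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def height_metrics(z):
    """Simple distributional metrics from the lidar return heights of one tree crown."""
    return [z.max(), np.percentile(z, 75), z.mean(), z.std()]

# Hypothetical training set: per-tree return-height arrays and "field-measured" heights
rng = np.random.default_rng(1)
trees = [rng.uniform(0, h, 200) for h in rng.uniform(4, 20, 50)]
field_height = np.array([z.max() * rng.uniform(0.98, 1.05) for z in trees])

X = np.array([height_metrics(z) for z in trees])
model = LinearRegression().fit(X, field_height)
print("R^2 =", round(model.score(X, field_height), 2))
```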
2125 KiB  
Article
How Robust Are Burn Severity Indices When Applied in a New Region? Evaluation of Alternate Field-Based and Remote-Sensing Methods
by C. Alina Cansler and Donald McKenzie
Remote Sens. 2012, 4(2), 456-483; https://doi.org/10.3390/rs4020456 - 9 Feb 2012
Cited by 134 | Viewed by 15431
Abstract
Remotely sensed indices of burn severity are now commonly used by researchers and land managers to assess fire effects, but their relationship to field-based assessments of burn severity has been evaluated only in a few ecosystems. This analysis illustrates two cases in which methodological refinements to field-based and remotely sensed indices of burn severity developed in one location did not show the same improvement when used in a new location. We evaluated three methods of assessing burn severity in the field: the Composite Burn Index (CBI)—a standardized method of assessing burn severity that combines ecologically significant variables related to burn severity into one numeric site index—and two modifications of the CBI that weight the plot CBI score by the percentage cover of each stratum. Unexpectedly, models using the CBI had higher R2 and better classification accuracy than models using the weighted versions of the CBI. We suggest that the weighted versions of the CBI have lower accuracies because weighting by percentage cover decreases the influence of the dominant tree stratum, which should have the strongest relationship to optically sensed reflectance, and increases the influence of the substrates strata, which should have the weakest relationship with optically sensed reflectance in forested ecosystems. Using a large data set of CBI plots (n = 251) from four fires and CBI scores derived from additional field-based assessments of burn severity (n = 388), we predicted two metrics of image-based burn severity, the Relative differenced Normalized Burn Ratio (RdNBR) and the differenced Normalized Burn Ratio (dNBR). Predictive models for RdNBR showed slightly better classification accuracy than for dNBR (overall accuracy = 62%, Kappa = 0.40, and overall accuracy = 59%, Kappa = 0.36, respectively), whereas dNBR had slightly better explanatory power, but strong differences were not apparent. RdNBR may provide little or no improvement over dNBR in systems where pre-fire reflectance is not highly variable, but may be more appropriate for comparing burn severity among regions.
(This article belongs to the Special Issue Advances in Remote Sensing of Wildland Fires)
Figures
Figure 1. The four fires assessed in this study, labeled and outlined in black, are located in the northern Cascade Range of Washington, USA. The polygon within the Tripod fire is an area that burned approximately 30 years earlier, and was unburned by the fire.
Figure 2. Regression models for the WCBI data set (n = 146) between (a) dNBR and CBI, (b) dNBR and WCBI, (c) RdNBR and CBI, and (d) RdNBR and WCBI. Using WCBI instead of CBI allowed for the use of a linear model, but a similar amount of variance was explained in all models.
Figure 3. Linear regression models for the GeoCBI data set (n = 52) from the two fires measured after the GeoCBI was published in 2008. Models are for (a) dNBR and CBI, (b) dNBR and WCBI, (c) dNBR and GeoCBI, (d) RdNBR and CBI, (e) RdNBR and WCBI, and (f) RdNBR and GeoCBI.
Figure 4. Top: CBI and WCBI for (a) understory strata (substrates, herbs and low shrubs, tall shrubs), (b) overstory strata (intermediate trees and dominant trees), and (c) the plot. Bottom: CBI and GeoCBI for (d) understory strata (substrates, herbs and low shrubs, tall shrubs), (e) overstory strata (intermediate trees and dominant trees), and (f) the plot. Dashed lines show theoretical CBI = WCBI, or CBI = GeoCBI.
Figure 5. Regression models for (a) dNBR, and (b) RdNBR.
Figure 6. The influence of each stratum’s WCBI score on the total plot WCBI score plotted against the influence of CBI score on the total plot CBI score. Strata that were not assessed (i.e., 0% cover) were excluded from this analysis. Dashed lines represent the mean CBI and WCBI influence. Comparison of the dashed lines shows that use of WCBI increased the influence of the substrates and herbs and low shrubs strata, and decreased the influence of tall shrubs, intermediate trees, and dominant trees.
Figure 7. Examples of (a,b) low, (c,d) moderate, and (e,f) high severity field plots. Plots on the left (a,c,e) are examples of plots with little difference between CBI and WCBI. Plots on the right are examples of plots where the use of WCBI notably decreased (b,d) or increased (f) the estimate of field-based burn severity.
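The two image-based severity metrics compared in the abstract are simple band arithmetic on pre- and post-fire imagery. The sketch below uses one common formulation of dNBR and RdNBR; scaling conventions (for example, whether NBR is multiplied by 1,000 before entering the RdNBR denominator) vary between studies, so treat this as illustrative rather than as the authors' exact implementation.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance (e.g., Landsat TM bands 4 and 7)."""
    return (nir - swir) / (nir + swir)

def dnbr(nbr_pre, nbr_post):
    """Differenced NBR, here scaled by 1000 as is common practice."""
    return 1000.0 * (nbr_pre - nbr_post)

def rdnbr(nbr_pre, nbr_post):
    """Relative dNBR: dNBR divided by the square root of the pre-fire NBR magnitude."""
    return dnbr(nbr_pre, nbr_post) / np.sqrt(np.abs(nbr_pre))

# Hypothetical pre- and post-fire NBR values for one pixel
pre, post = 0.55, 0.10
print(round(dnbr(pre, post), 1), round(rdnbr(pre, post), 1))
```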
708 KiB  
Article
Satellite NDVI Assisted Monitoring of Vegetable Crop Evapotranspiration in California’s San Joaquin Valley
by Lee F. Johnson and Thomas J. Trout
Remote Sens. 2012, 4(2), 439-455; https://doi.org/10.3390/rs4020439 - 6 Feb 2012
Cited by 136 | Viewed by 13484
Abstract
Reflective bands of Landsat-5 Thematic Mapper satellite imagery were used to facilitate the estimation of basal crop evapotranspiration (ETcb), or potential crop water use, in San Joaquin Valley fields during 2008. A ground-based digital camera measured green fractional cover (Fc) of 49 commercial fields planted to 18 different crop types (row crops, grains, orchard, vineyard) of varying maturity over 11 Landsat overpass dates. Landsat L1T terrain-corrected images were transformed to surface reflectance and converted to normalized difference vegetation index (NDVI). A strong linear relationship between NDVI and Fc was observed (r2 = 0.96, RMSE = 0.062). The resulting regression equation was used to estimate Fc for crop cycles of broccoli, bellpepper, head lettuce, and garlic on nominal 7–9 day intervals for several study fields. Prior relationships developed by weighing lysimeter were used to transform Fc to fraction of reference evapotranspiration, also known as basal crop coefficient (Kcb). Measurements of grass reference evapotranspiration from the California Irrigation Management Information System were then used to calculate ETcb for each overpass date. Temporal profiles of Fc, Kcb, and ETcb were thus developed for the study fields, along with estimates of seasonal water use. Daily ETcb retrieval uncertainty resulting from error in satellite-based Fc estimation was < 0.5 mm/d, with seasonal uncertainty of 6–10%. Results were compared with FAO-56 irrigation guidelines and prior lysimeter observations for reference.
Figures
Figure 1. Study area. Polygons show fields used for objective #1 (NDVI-Fc relationship). Fields designated as A-J were used for objective #2 (crop cycle monitoring) (see also the table in Section 2.4). Location of the WSREC lysimeter shown for reference.
Figure 2. Measurement of green fractional cover (Fc) for low and high stature crops using a multispectral camera.
Figure 3. Historical average daily reference evapotranspiration (ETo) for the study area, exemplified by data from the CIMIS Five Points station at UC-WSREC. Note the seasonality effect, with higher values in summer.
Figure 4. Relationship between Landsat NDVI and ground measurements of Fc. A total of 18 major SJV crop types and 11 satellite overpasses are represented. The number of observations of each crop type is shown in parentheses.
Figure 5. Comparison of Equation (1) with published relationships for multiple horticultural crops [23], wheat [21,22], barley [19], and grape [20]. Each line covers the approximate data range of the respective study.
Figure 6. Landsat time-series of mean Fc beginning at the apparent start of the development stage for four different crops, with trendlines. Letters represent fields of Table 2. Observed profiles for fields E, F, G, and H terminate prior to harvest due to cloud cover past DOY 319. The full growth cycle was captured for the other fields. The trendlines exclude the final plotted datapoints for fields C, D, I, and J, which represent apparent post-harvest conditions and are shown for reference only. Fc measurements obtained during the respective lysimeter experiments [15] are also shown for reference.
Figure 7. Landsat-based time-series of Kcb for four different crops, with trendlines. Letters represent fields of Table 2. The dashed line is the planning guideline for crop development, mid-season, and late-season stages from FAO-56 Tables 11 and 17 [4] as available (FAO does not provide stage duration for garlic). FAO development stage start dates were roughly aligned with observed dates for comparison purposes.
Figure 8. Cumulative basal evapotranspiration (ETcb) for four different crops, excluding the initial stage, developed from lysimeter equations (Table 3) constrained by satellite-based Fc and combined with CIMIS reference evapotranspiration. Letters represent fields of Table 2. Dotted portions of the bellpepper (C,D) and lettuce (I,J) lines represent the harvest period. Garlic profiles terminate at irrigation cutoff. Lysimeter [15] and FAO profiles are provided for reference as available. The dashed portion of the lysimeter line for bellpepper represents the reported harvest period. Satellite-based profiles for fields E, F, G, and H are incomplete due to cloud cover.
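The processing chain in the abstract runs NDVI → fractional cover (Fc) → basal crop coefficient (Kcb) → ETcb. The sketch below only shows the shape of that chain; the slopes and intercepts are placeholders, not the paper's fitted NDVI-Fc regression or its crop-specific lysimeter relationships.

```python
def fc_from_ndvi(ndvi, slope=1.26, intercept=-0.18):
    """Green fractional cover from NDVI via a linear regression.
    The slope/intercept here are placeholders, not the paper's fitted values."""
    return max(0.0, min(1.0, slope * ndvi + intercept))

def kcb_from_fc(fc, a=1.13, b=0.0):
    """Basal crop coefficient from fractional cover (illustrative linear form;
    the paper used crop-specific lysimeter-derived relationships)."""
    return a * fc + b

def etcb(kcb, eto):
    """Basal crop evapotranspiration (mm/day) = Kcb x grass reference ET (mm/day)."""
    return kcb * eto

# Hypothetical overpass NDVI and CIMIS reference ET for that day
ndvi, eto = 0.65, 6.2
fc = fc_from_ndvi(ndvi)
print(round(fc, 2), round(etcb(kcb_from_fc(fc), eto), 2))
```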
1114 KiB  
Article
Burned Area Mapping in Greece Using SPOT-4 HRVIR Images and Object-Based Image Analysis
by Anastasia Polychronaki and Ioannis Z. Gitas
Remote Sens. 2012, 4(2), 424-438; https://doi.org/10.3390/rs4020424 - 3 Feb 2012
Cited by 36 | Viewed by 10798
Abstract
The devastating series of fire events that occurred during the summers of 2007 and 2009 in Greece made evident the need to develop an operational mechanism for mapping burned areas in an accurate and timely fashion. In this work, Système pour l’Observation de la Terre (SPOT)-4 HRVIR images are introduced in an object-based classification environment in order to develop a classification procedure for burned area mapping. The development of the procedure was based on two images and then tested for its transferability to other burned areas. Results from the SPOT-4 HRVIR burned area mapping showed very high classification accuracies (kappa coefficient of 0.86), while the object-based classification procedure that was developed proved to be transferable when applied to other study areas.
(This article belongs to the Special Issue Advances in Remote Sensing of Wildland Fires)
Figures
Graphical abstract
Figure 1. Location of the four study areas, extent of each image (orange boxes) and corresponding available SPOT-4 HRVIR images: (1) Peloponnese, (2) East Attica, (3) Parnitha, and (4) Pelion.
Figure 2. Flowchart of the methodology followed during the development of the classification procedure.
Figure 3. (a) Detail of the SPOT-4 HRVIR image depicting the East Attica fire (R: NIR band, G: red band, B: green band), (b) classification of the seed burned area objects (in yellow) and water bodies (in blue), (c) refinement of the classification using the grow region algorithm. Unclassified objects with a mean value of NBR less than −0.15 were finally merged with objects classified as “burned”, (d) final classification of the burned areas.
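Classification accuracy in the abstract is reported as a kappa coefficient. For readers less familiar with the measure, here is a small sketch of computing Cohen's kappa from a burned/unburned confusion matrix; the validation counts are made up and not taken from the paper.

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows = reference, cols = classified)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical burned/unburned accuracy assessment (counts of validation samples)
cm = [[420, 30],   # reference burned:   classified burned / unburned
      [25, 525]]   # reference unburned: classified burned / unburned
print(round(kappa(cm), 2))
```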
1365 KiB  
Article
An Object-Based Image Analysis Method for Monitoring Land Conversion by Artificial Sprawl Use of RapidEye and IRS Data
by Stéphane Dupuy, Eric Barbe and Maud Balestrat
Remote Sens. 2012, 4(2), 404-423; https://doi.org/10.3390/rs4020404 - 2 Feb 2012
Cited by 28 | Viewed by 12833
Abstract
In France, in the peri-urban context, urban sprawl dynamics are particularly strong with huge population growth as well as a land crisis. The increase and spreading of built-up areas from the city centre towards the periphery takes place to the detriment of natural and agricultural spaces. The conversion of land with agricultural potential is all the more worrying as it is usually irreversible. The French Ministry of Agriculture therefore needs reliable and repeatable spatial-temporal methods to locate and quantify loss of land at both local and national scales. The main objective of this study was to design a repeatable method to monitor land conversion characterized by artificial sprawl: (i) We used an object-based image analysis to extract artificial areas from satellite images; (ii) We built an artificial patch that consists of aggregating all the peripheral areas that characterize artificial areas. The “artificialized” patch concept is an innovative extension of the urban patch concept, but differs in the nature of its components and in the continuity distance applied; (iii) The diachronic analysis of artificial patch maps enables characterization of artificial sprawl. The method was applied at the scale of four departments (similar to provinces) along the coast of Languedoc-Roussillon, in the South of France, based on two satellite datasets, one acquired in 1996–1997 (Indian Remote Sensing) and the other in 2009 (RapidEye). In the four departments, we measured an increase in artificial areas from 113,000 ha in 1997 to 133,000 ha in 2009, i.e., an 18% increase in 12 years. The product is a map valid at the 1/15,000 scale, usable at the scale of a commune (the smallest territorial division used for administrative purposes in France) and adaptable to departmental and regional scales. The method is reproducible in homogenous spatial-temporal terms, so that it could be used periodically to assess changes in land conversion rates in France as a whole.
(This article belongs to the Special Issue Remote Sensing in Support of Environmental Policy)
Figures
Graphical abstract
Figure 1. The study area in the Languedoc-Roussillon region, in the South of France. The method was applied at the scale of the four coastal departments.
Figure 2. Order of steps in the fine characterization of artificial sprawl.
Figure 3. OBIA procedure for mapping artificial areas from 2009 RapidEye images. This procedure is composed of two successive segmentation-classification steps.
Figure 4. The mathematical morphology operation used to produce an artificial patch. The process combines two basic operations, dilation and erosion, to merge separate objects.
Figure 5. Result of the classification: artificial areas in 1996–1997 and 2009.
Figure 6. Result of the dilation/erosion process: artificialized patches in 1996–1997 and 2009.
Figure 7. Combining the artificialized patches enabled us to locate and quantify artificial sprawl between 1996–1997 and 2009: fine-scale extract.
Figure 8. Combining the artificialized patches enabled us to locate and quantify artificial sprawl between 1996–1997 and 2009: coarse-scale extract.
Figure 9. Increase in artificial sprawl between 1996–1997 and 2009.
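The "artificialized patch" is built by aggregating artificial areas that lie within a continuity distance of one another, which the figures describe as a dilation followed by an erosion. Below is a minimal sketch of that morphological closing on a toy binary mask using scipy.ndimage; the structuring-element size and distances are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def artificial_patch(mask, distance_px):
    """Merge artificial objects separated by less than a continuity distance
    using a morphological closing (dilation followed by erosion)."""
    structure = np.ones((2 * distance_px + 1, 2 * distance_px + 1), dtype=bool)
    dilated = ndimage.binary_dilation(mask, structure=structure)
    return ndimage.binary_erosion(dilated, structure=structure)

# Two built-up objects three pixels apart merge into a single patch
mask = np.zeros((20, 20), dtype=bool)
mask[5:9, 2:7] = True
mask[5:9, 10:15] = True
patch = artificial_patch(mask, distance_px=2)
_, n = ndimage.label(patch)
print("objects before:", ndimage.label(mask)[1], "after:", n)
```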
3693 KiB  
Article
Tree Species Detection Accuracies Using Discrete Point Lidar and Airborne Waveform Lidar
by Nicholas R. Vaughn, L. Monika Moskal and Eric C. Turnblom
Remote Sens. 2012, 4(2), 377-403; https://doi.org/10.3390/rs4020377 - 2 Feb 2012
Cited by 88 | Viewed by 13339
Abstract
Species information is a key component of any forest inventory. However, when performing forest inventory from aerial scanning Lidar data, species classification can be very difficult. We investigated changes in classification accuracy while identifying five individual tree species (Douglas-fir, western redcedar, bigleaf maple, red alder, and black cottonwood) in the Pacific Northwest United States using two data sets: discrete point Lidar data alone and discrete point data in combination with waveform Lidar data. Waveform information included variables which summarize the frequency domain representation of all waveforms crossing individual trees. Discrete point data alone provided 79.2 percent overall accuracy (kappa = 0.74) for all 5 species and up to 97.8 percent (kappa = 0.96) when comparing individual pairs of these 5 species. Incorporating waveform information improved the overall accuracy to 85.4 percent (kappa = 0.817) for five species, and in several two-species comparisons. Improvements were most notable in comparing the two conifer species and in comparing two of the three hardwood species.
(This article belongs to the Special Issue Laser Scanning in Forests)
Figures
Figure 1. An example waveform and probable associated discrete return points. The 120 waveform samples are shown as circles and a spline fit to these data appears as a solid line. A peak detector might detect two peaks at about 338 and 347 meters and return the intensity value when the peak is detected, as shown with x marks. Without knowledge of future sample values, real-time peak detection algorithms usually produce a slight lag in peak location.
Figure 2. A map of the University of Washington Arboretum in the city of Seattle. The Arboretum boundary is shown as a red line. The helicopter flight path is plotted as a blue line and the associated Lidar coverage area is blue-tinted.
Figure 3. Three-dimensional plot of discrete point Lidar data extracted from waveform Lidar data. This view looks northwest at the area outlined with a tan-colored dotted line in Figure 2. The image shows the variable density of the trees in the University of Washington Arboretum. A thicker patch of black cottonwood and red alder is visible in front of the elevated freeway ramp. Most of the crowns of larger conifers are visible in other parts of the image. Outside the Arboretum, many neighborhood streets and power lines are visible.
Figure 4. The combination of all component waves from the discrete Fourier transform, given in Equation (15), passes through each point in the original time series.
Figure 5. (a) A visual depiction of the variable r_area. The top eight layers of a tree’s voxel representation, set U, are shown projected to two dimensions. The variable r_area is the area of this projection divided by the area of the convex hull encompassing the row and column centers of this projection. (b) A visualization of the crown surface model parametrized by the variables s_a and s_b. Only the voxels in set T are shown.
Figure 6. Loadings of the first three principal components of the Fourier median variables (m1 to m30).
Figure 7. Loadings of the first three principal components of the Fourier interquartile range variables (q1 to q30).
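The waveform variables summarize the frequency-domain representation of all waveforms crossing a tree (the median and interquartile-range variables m1-m30 and q1-q30 named in the figure captions). Below is a simplified sketch of one way to derive such features with NumPy's FFT on synthetic waveforms; the paper's exact normalization and pre-processing are not reproduced.

```python
import numpy as np

def fourier_features(waveforms, n_coeffs=30):
    """Summarize the frequency-domain content of the waveforms crossing one tree:
    per-frequency median and interquartile range of the FFT magnitudes
    (a simplified stand-in for the paper's m and q variables)."""
    mags = np.abs(np.fft.rfft(waveforms, axis=1))[:, 1:n_coeffs + 1]
    med = np.median(mags, axis=0)
    iqr = np.percentile(mags, 75, axis=0) - np.percentile(mags, 25, axis=0)
    return med, iqr

# Hypothetical set of 120-sample waveforms for a single tree crown
rng = np.random.default_rng(2)
t = np.arange(120)
waveforms = np.exp(-0.5 * ((t - 60) / rng.uniform(3, 8, (40, 1))) ** 2) \
            + rng.normal(0, 0.02, (40, 120))
med, iqr = fourier_features(waveforms)
print(med.shape, iqr.shape)  # (30,) (30,)
```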
1619 KiB  
Article
Inter-Sensor Comparison between THEOS and Landsat 5 TM Data in a Study of Two Crops Related to Biofuel in Thailand
by Naruemon Phongaksorn, Nitin K. Tripathi, Sivanappan Kumar and Peeyush Soni
Remote Sens. 2012, 4(2), 354-376; https://doi.org/10.3390/rs4020354 - 1 Feb 2012
Cited by 10 | Viewed by 9191
Abstract
Knowledge of the spatial distribution of biofuel crops is an important criterion to determine the sustainability of biofuel energy production. Remotely sensed image analysis is a proven and effective tool for describing the spatial distribution of crops using vegetation characteristics. Increases in the number of options and availability of satellite sensors have expanded the horizon of choices of imagery sources for appropriate image acquisitions. The Thailand Earth Observation System (THEOS) satellite is one of the newest satellite sensors. The growing number of satellite sensors warrants their comparative evaluation and the standardization of data obtained from various sensors. This study conducted an inter-sensor comparison of the visible/near-infrared surface reflectance and Normalized Difference Vegetation Index (NDVI) data collected from the Landsat 5 Thematic Mapper (TM) and THEOS. The surface reflectance and the derived NDVI of the sensors were randomly obtained for two biofuel crops, namely, cassava and sugarcane. These crops had low values of visible surface reflectance, which were not significantly (p < 0.05) different. In contrast, the crops had high values of near-infrared surface reflectance that differed significantly (p > 0.05) between the crops. Strong linear relationships between the remote sensing products for the examined sensors were obtained for both cassava and sugarcane. The regression models that were developed can be used to compute the NDVI for THEOS using those determined from Landsat 5 TM and vice versa for the given biofuel crops.
(This article belongs to the Special Issue Remote Sensing for Sustainable Energy Systems)
Figures
Figure 1. Study areas and locations of sample fields marked on a THEOS image. (a) Cassava site in Nakhon Ratchasima Province. (b) Sugarcane site in Suphanburi Province.
Figure 2. Multispectral images from THEOS and Landsat 5 TM for the study area: (a) THEOS for cassava. (b) Landsat 5 TM for cassava. (c) THEOS for sugarcane. (d) Landsat 5 TM for sugarcane.
Figure 3. Spectral responses of the THEOS and Landsat 5 TM multi-spectral sensors. Source: Geo-Informatics and Space Technology Development Agency (Public Organization) [11]; European Space Agency (ESA) [26].
Figure 4. Mean of the percent surface reflectance in the blue, green, red and near-infrared bands of THEOS and Landsat 5 TM for cassava and sugarcane.
Figure 5. Relationships between THEOS and Landsat 5 TM surface reflectances for two biofuel crops. (a) Blue. (b) Green. (c) Red. (d) Near-infrared (NIR) bands. (e) NDVI.
Figure 6. The LULC mapping of Nakhon Ratchasima Province (cassava site): (a) THEOS, (b) Landsat 5 TM; and Suphanburi Province (sugarcane site): (c) THEOS, (d) Landsat 5 TM.
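The inter-sensor comparison in the abstract combines significance testing of band-wise differences with linear regressions that map one sensor's values onto the other's. The toy sketch below uses synthetic NDVI samples and scipy for a paired t-test and a cross-sensor regression; the numbers are invented and the paper's fitted models are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical per-field NDVI samples from the two sensors over the same crops
ndvi_landsat = rng.uniform(0.3, 0.8, 60)
ndvi_theos = 0.95 * ndvi_landsat + 0.02 + rng.normal(0, 0.02, 60)

# Paired t-test: do the two sensors differ systematically?
t_stat, p_value = stats.ttest_rel(ndvi_theos, ndvi_landsat)

# Cross-sensor regression so one sensor's NDVI can be predicted from the other's
slope, intercept, r, p, se = stats.linregress(ndvi_landsat, ndvi_theos)
print(f"p(paired t) = {p_value:.3f}, THEOS = {slope:.2f} * TM + {intercept:.2f}, r^2 = {r**2:.2f}")
```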
3154 KiB  
Article
Vegetation Cover Analysis of Hazardous Waste Sites in Utah and Arizona Using Hyperspectral Remote Sensing
by Jungho Im, John R. Jensen, Ryan R. Jensen, John Gladden, Jody Waugh and Mike Serrato
Remote Sens. 2012, 4(2), 327-353; https://doi.org/10.3390/rs4020327 - 31 Jan 2012
Cited by 43 | Viewed by 12304
Abstract
This study investigated the usability of hyperspectral remote sensing for characterizing vegetation at hazardous waste sites. The specific objectives of this study were to: (1) estimate leaf-area-index (LAI) of the vegetation using three different methods (i.e., vegetation indices, red-edge positioning (REP), and machine learning regression trees), and (2) map the vegetation cover using machine learning decision trees based on either the scaled reflectance data or mixture tuned matched filtering (MTMF)-derived metrics and vegetation indices. HyMap airborne data (126 bands at 2.3 × 2.3 m spatial resolution), collected over the U.S. Department of Energy uranium processing sites near Monticello, Utah and Monument Valley, Arizona, were used. Grass and shrub species were mixed on an engineered disposal cell cover at the Monticello site while shrub species were dominant in the phytoremediation plantings at the Monument Valley site. Regression trees resulted in the best calibration performance of LAI estimation (R2 > 0.80). The use of REPs failed to accurately predict LAI (R2 < 0.2). The use of the MTMF-derived metrics (matched filter scores and infeasibility) and a range of vegetation indices in decision trees improved the vegetation mapping when compared to the decision tree classification using just the scaled reflectance. Results suggest that hyperspectral imagery is useful for characterizing biophysical characteristics (LAI) and vegetation cover on capped hazardous waste sites. However, it is believed that the vegetation mapping would benefit from the use of higher spatial resolution hyperspectral data due to the small size of many of the vegetation patches (< 1 m) found on the sites.
Figures
Graphical abstract
Figure 1. The HyMap imagery of the two study sites (RGB = hyperspectral bands 24, 17, 11). The yellow symbols are in situ sampling locations.
Figure 2. Digital image processing flow diagram.
Figure 3. The correlation matrices using the vegetation index approach to estimate LAI: using (a) VI1 and (b) VI2 for the Monticello, UT site; and using (c) VI1 and (d) VI2 for the Monument Valley, AZ site.
Figure 4. The scatterplots between each of the red-edge positions (REP) and LAI: using (a) LI_REP, (b) LG_REP, and (c) LE_REP for the Monticello, UT site; and using (d) LI_REP, (e) LG_REP, and (f) LE_REP for the Monument Valley, AZ site. The R2 and RMSEs from calibration (CAL) and cross-validation (CV) are also provided.
Figure 5. LAI estimation using the regression tree approach for (a) the Monticello, UT site, and (b) the Monument Valley, AZ site. The R2 and RMSEs from calibration (CAL) and cross-validation (CV) are summarized in the plots.
Figure 6. The estimated LAI distribution maps for (a) the Monticello site, and (b) the Monument Valley site. The dirt road and other land cover classes were masked out for the Monticello site.
Figure 7. The relationships between the matched filter scores and the percent cover of the vegetation species for the Monticello site: (a) sagebrush, (b) rabbitbrush, (c) wheatgrass, and (d) litter.
Figure 8. The relationships between the matched filter scores and the percent cover of the vegetation species for the Monument Valley site: (a) greasewood and (b) saltbush.
Figure 9. Box plots showing the performance variation of the multiple decision trees using different sets of training and testing samples: (a) for the Monticello site, and (b) for the Monument Valley site. MV represents the decision trees using the MTMF-derived metrics and vegetation indices and REF represents the decision trees using the original scaled reflectance data.
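One of the three LAI approaches in the abstract is red-edge positioning. The sketch below shows the common four-point linear-interpolation REP estimator as a generic illustration; the paper evaluates several REP variants (LI, LG, LE), and the band reflectances used here are hypothetical.

```python
def rep_linear_interpolation(r670, r700, r740, r780):
    """Red-edge position (nm) by the four-point linear-interpolation method
    (one of several REP estimators; the paper compares LI, LG and LE variants)."""
    r_edge = (r670 + r780) / 2.0  # reflectance at the assumed inflection point
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Hypothetical narrow-band reflectances resampled from hyperspectral channels
print(round(rep_linear_interpolation(0.04, 0.10, 0.42, 0.52), 1))  # a position in the 700-740 nm range
```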