Earth Observation Technology Cluster: Innovative Sensor Systems for Advanced Land Surface Studies

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 November 2012) | Viewed by 101077

Special Issue Editors


Guest Editor
Department of Geography, Mary Immaculate College, South Circular Road, V94 VN26 Limerick, Ireland
Interests: remote sensing; land cover classification; scales of observation; environmental applications

Guest Editor
School of Geography, University of Nottingham, Nottingham, UK
Interests: remote sensing; earth observation; ecology; biogeography; earth system science

Guest Editor
School of Geography, University of Nottingham, Nottingham NG7 2RD, UK

Special Issue Information

Dear Colleagues,

This special issue focuses on innovative technology used in remote sensing of the terrestrial, or land, surface. The Earth Observation Technology Cluster is an initiative to promote development and communication in this field (www.eotechcluster.org.uk). The observation or measurement of some property of the land surface is central to a wide range of scientific investigations conducted in many different disciplines, and in practice there is much consistency in the instruments used for observation and the techniques used to map and model the environmental phenomena of interest. Using remote sensing technology as a unifying theme, this initiative provides an opportunity for the presentation of novel developments from, and cross-fertilisation of ideas between, the many and diverse members of the terrestrial remote sensing community. The scope of the special issue covers the full range of remote sensing operations, from new platform and sensor development, through image retrieval and analysis, to data applications and environmental modelling. Example topics include novel remote sensing platforms such as unmanned aerial vehicles; emerging instrumentation such as Fourier transform infrared spectroscopy and terrestrial LiDAR; modern image retrieval and storage techniques such as networked data transmission and distributed computing; new image analysis and modelling approaches such as hypertemporal observation; and contemporary and significant application areas such as circumpolar and cryospheric remote sensing. Research papers and innovative review papers are invited on any topic under the broad theme of technological developments in remote sensing of the land surface.

Dr. Paul Aplin
Dr. Doreen Sandra Boyd
Dr. Alison Marsh
Guest Editors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Editorial

Editorial
Innovative Technologies for Terrestrial Remote Sensing
by Paul Aplin and Doreen S. Boyd
Remote Sens. 2015, 7(4), 4968-4972; https://doi.org/10.3390/rs70404968 - 22 Apr 2015
Cited by 1 | Viewed by 6540
Abstract
Characterizing and monitoring terrestrial, or land, surface features, such as forests, deserts, and cities, are fundamental and continuing goals of Earth Observation (EO). EO imagery and related technologies are essential for increasing our scientific understanding of environmental processes, such as carbon capture and albedo change, and for managing and safeguarding environmental resources, such as tropical forests, particularly over large areas or the entire globe. This measurement or observation of some property of the land surface is central to a wide range of scientific investigations and industrial operations, involving individuals and organizations from many different backgrounds and disciplines. However, the process of observing the land provides a unifying theme for these investigations, and in practice there is much consistency in the instruments used for observation and the techniques used to map and model the environmental phenomena of interest. There is therefore great potential benefit in exchanging technological knowledge and experience among the many and diverse members of the terrestrial EO community. [...]
Figures:
  • Figure 1. Earth Observation Technology Cluster word cloud, showing the initiative’s principal EO technology connections and collaborations.

Research

Article
Mapping Complex Urban Land Cover from Spaceborne Imagery: The Influence of Spatial Resolution, Spectral Band Set and Classification Approach
by Rahman Momeni, Paul Aplin and Doreen S. Boyd
Remote Sens. 2016, 8(2), 88; https://doi.org/10.3390/rs8020088 - 23 Jan 2016
Cited by 109 | Viewed by 14228
Abstract
Detailed land cover information is valuable for mapping complex urban environments. Recent enhancements to satellite sensor technology promise fit-for-purpose data, particularly when processed using contemporary classification approaches. We evaluate this promise by comparing the influence of spatial resolution, spectral band set and classification approach for mapping detailed urban land cover in Nottingham, UK. A WorldView-2 image provides the basis for a set of 12 images with varying spatial and spectral characteristics, and these are classified using three different approaches (maximum likelihood (ML), support vector machine (SVM) and object-based image analysis (OBIA)) to yield 36 output land cover maps. Classification accuracy is evaluated independently and McNemar tests are conducted between all paired outputs (630 pairs in total) to determine which classifications are significantly different. Overall accuracy varied between 35% for ML classification of 30 m spatial resolution, 4-band imagery and 91% for OBIA classification of 2 m spatial resolution, 8-band imagery. The results demonstrate that spatial resolution is clearly the most influential factor when mapping complex urban environments, and modern “very high resolution” or VHR sensors offer great advantage here. However, the advanced spectral capabilities provided by some recent sensors, coupled with contemporary classification approaches (especially SVMs and OBIA), can also lead to significant gains in mapping accuracy. Ongoing development in instrumentation and methodology offers huge potential here and implies that urban mapping opportunities will continue to grow.
Figures:
  • Figure 1. Nottingham, UK study area location and WorldView-2 image (© DigitalGlobe, Inc. All Rights Reserved).
  • Figure 2. WorldView-2 spectral wavebands (top line) and spectral band subsets used for comparative analysis.
  • Figure 3. The 36 image data set/classifier combinations (4 spatial resolutions × 3 spectral band sets × 3 classifiers) used for comparative classification analysis.
  • Figure 4. Multi-stage object-based classification procedure.
  • Figure 5. Land cover maps (detail) for the 36 data set/classifier combinations, plus the WorldView-2 image (© DigitalGlobe, Inc. All Rights Reserved) and MasterMap vector data (© Crown Copyright and Database Right 2015, Ordnance Survey, Digimap Licence).
  • Figure 6. Overall land cover classification accuracies for the 36 data set/classifier combinations (ML = maximum likelihood, SVM = support vector machine, OBIA = object-based image analysis).
  • Figure 7. Matrix of McNemar test z values showing the statistical significance of differences between all classification pairs (ML = maximum likelihood, SVM = support vector machine, OBIA = object-based image analysis; 30 m, 10 m, 4 m and 2 m refer to spatial resolution; 4b, 6b and 8b refer to the number of spectral bands; z values ≥ 3.2, highlighted grey, are statistically significant at the 99% confidence level).
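
The comparison above rests on pairwise McNemar tests: 36 maps assessed against the same validation pixels give 36 × 35 / 2 = 630 pairs. The sketch below shows the usual form of this test for two thematic maps, using synthetic labels; it is an illustration of the statistic, not the authors' code, and the class names and accuracy levels are hypothetical.

```python
import numpy as np

def mcnemar_z(reference, map_a, map_b):
    """McNemar z statistic for two classifications of the same validation samples.

    A large |z| means the two maps' accuracies differ significantly; the paper
    treats z >= 3.2 as significant at the 99% confidence level.
    """
    correct_a = map_a == reference
    correct_b = map_b == reference
    f_ab = np.sum(correct_a & ~correct_b)   # samples right in A but wrong in B
    f_ba = np.sum(~correct_a & correct_b)   # samples wrong in A but right in B
    if f_ab + f_ba == 0:
        return 0.0
    return (f_ab - f_ba) / np.sqrt(f_ab + f_ba)

# Toy validation set with three hypothetical urban classes (0 = building, 1 = road, 2 = grass)
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, 500)
map_obia = np.where(rng.random(500) < 0.9, ref, rng.integers(0, 3, 500))  # mostly agrees with reference
map_ml = np.where(rng.random(500) < 0.5, ref, rng.integers(0, 3, 500))    # much noisier map
print(f"z = {mcnemar_z(ref, map_obia, map_ml):.2f}")
```
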
Article
Testing the Application of Terrestrial Laser Scanning to Measure Forest Canopy Gap Fraction
by F. Alberto Ramirez, Richard P. Armitage and F. Mark Danson
Remote Sens. 2013, 5(6), 3037-3056; https://doi.org/10.3390/rs5063037 - 19 Jun 2013
Cited by 33 | Viewed by 7421
Abstract
Terrestrial laser scanners (TLS) have the potential to revolutionise measurement of the three-dimensional structure of vegetation canopies for applications in ecology, hydrology and climate change. This potential has been the subject of recent research that has attempted to measure forest biophysical variables from TLS data and make comparisons with two-dimensional data from hemispherical photography. This research presents a systematic comparison between forest canopy gap fraction estimates derived from TLS measurements and hemispherical photography. The TLS datasets used in the research were obtained between April 2008 and March 2009 at Delamere Forest, Cheshire, UK. The analysis of canopy gap fraction estimates derived from TLS data highlighted the repeatability and consistency of the measurements in comparison with those from coincident hemispherical photographs. The comparison also showed that estimates computed considering only the number of hits and misses registered in the TLS datasets were consistently lower than those estimated from hemispherical photographs. To examine this difference, the potential information available in the intensity values recorded by the TLS was investigated and a new method to estimate canopy gap fraction was proposed. The new approach produced gap fractions closer to those estimated from hemispherical photography, but the research also highlighted the limitations of single-return TLS data for this application.
Figures:
  • Geographic location of Delamere Forest, UK.
  • Full frame hemispherical photograph (top) and laser scanner image displayed in hemispherical projection (bottom) at the broadleaved deciduous plot (9 March 2008).
  • Intensity-range distributions (maximum values) corresponding to TLS datasets acquired at broadleaved deciduous (a) and needle-leaved evergreen (b) plots. The measurements were obtained on 22 July 2008 (solid circles) and 19 March 2009 (empty circles).
  • Gap fractions derived from the point-based method (empty circles), intensity-based method (solid circles) and hemispherical photography (solid squares) using datasets collected at the needle-leaved evergreen plot.
  • Gap fractions derived from the point-based method (empty circles), intensity-based method (solid circles) and hemispherical photography (solid squares) using datasets collected at the broadleaved deciduous plot.
  • Gap fractions derived from the point-based method (empty circles), intensity-based method (solid circles) and hemispherical photography (solid squares) using datasets collected at the larch plot.
  • Differences in gap fractions derived from datasets collected at the needle-leaved evergreen plot, (a) intensity-based (IB)–point-based (PB) methods and (b) intensity-based (IB) method–hemispherical photography (HP).
  • Differences in gap fractions derived from datasets collected at the broadleaved deciduous plot, (a) intensity-based (IB)–point-based (PB) methods and (b) intensity-based (IB) method–hemispherical photography (HP).
  • Differences in gap fractions derived from datasets collected at the larch plot, (a) intensity-based (IB)–point-based (PB) methods and (b) intensity-based (IB) method–hemispherical photography (HP).
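
The point-based estimate referred to in the abstract is simply the proportion of emitted pulses that pass through the canopy unintercepted, while the paper's intensity-based refinement treats weak returns as partial interceptions. The sketch below illustrates both ideas on synthetic numbers; the intensity weighting is a simplified stand-in rather than the authors' exact formulation, and full_hit_intensity is a hypothetical calibration value for a fully intercepted pulse.

```python
import numpy as np

def gap_fraction_point_based(n_hits, n_misses):
    """Point-based gap fraction: fraction of emitted pulses with no canopy return."""
    return n_misses / (n_hits + n_misses)

def gap_fraction_intensity_based(hit_intensities, n_misses, full_hit_intensity):
    """Intensity-weighted variant: each hit counts as a partial interception in
    proportion to its return intensity relative to a fully intercepted pulse."""
    partial = np.clip(np.asarray(hit_intensities) / full_hit_intensity, 0.0, 1.0)
    n_pulses = partial.size + n_misses
    return (n_misses + np.sum(1.0 - partial)) / n_pulses

# Toy numbers for one zenith ring of a hemispherical scan (arbitrary intensity units)
rng = np.random.default_rng(1)
intensities = rng.uniform(200, 1000, size=7000)
print("point-based    :", round(gap_fraction_point_based(7000, 3000), 3))
print("intensity-based:", round(gap_fraction_intensity_based(intensities, 3000, 1000.0), 3))
```

Because partial hits contribute to the gap term, the intensity-based estimate comes out larger than the point-based one, consistent with the direction of the difference reported in the abstract.
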
Article
Exploring the Potential for Automatic Extraction of Vegetation Phenological Metrics from Traffic Webcams
by David E. Morris, Doreen S. Boyd, John A. Crowe, Caroline S. Johnson and Karon L. Smith
Remote Sens. 2013, 5(5), 2200-2218; https://doi.org/10.3390/rs5052200 - 10 May 2013
Cited by 28 | Viewed by 9261
Abstract
Phenological metrics are of potential value as direct indicators of climate change. Usually they are obtained via either satellite imaging or ground-based manual measurements; both are bespoke and therefore costly, and both have problems associated with scale and quality. An increase in the use of camera networks for monitoring infrastructure offers a means of obtaining images for use in phenological studies, where the only necessary outlay would be for data transfer, storage, processing and display. Here a pilot study is described that uses image data from a traffic monitoring network to demonstrate that it is possible to obtain usable information from the data captured. There are several challenges in using this network of cameras for automatic extraction of phenological metrics, not least the low quality of the images and frequent camera motion. Although questions remain to be answered concerning the optimal employment of these cameras, this work illustrates that, in principle, image data from camera networks such as these could be used as a means of tracking environmental change in a low-cost, highly automated and scalable manner that would require little human involvement.
Figures:
  • Map of England showing locations of the cameras used for the results shown in this paper. The bold lines represent the roads covered by Highways Agency cameras.
  • Flow-chart describing the image collection and alignment process, along with example images output at different stages of the process.
  • Graphical description of the determination of phenological dates. A, season maximum; B, start of senescence, 80% between D and A; C, end of senescence, 20% between D and A; D, inter-seasonal minimum; E, start of leaf-up, 20% between D and G; F, end of leaf-up, 80% between D and G; G, season maximum. Vertical lines show intersections for date read-off.
  • Views of the cameras shown in Figure 1, after realignment and averaging over all images, with corresponding calculated masks (white) of vegetation used for analysis.
  • Calculated greenness levels using Equation (1) for each camera shown in Figure 1 over the period of analysis. The crosses show individual data points and the continuous curves show the smoothed data used for date extraction.
  • Ground truth dates versus dates automatically calculated from cameras; (a) end of senescence 2011; (b) start of green-up 2012. Note: missing points are due to lack of camera data for visual inspection.
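
The figure captions above refer to a greenness index (Equation (1)) and to phenological dates read off where a smoothed greenness curve crosses 20% and 80% of the range between the inter-seasonal minimum and the seasonal maximum. The sketch below uses the green chromatic coordinate G/(R+G+B) as a stand-in for Equation (1), which is not reproduced here, and applies the threshold read-off to a synthetic green-up curve; all values are illustrative.

```python
import numpy as np

def greenness(rgb_image, veg_mask):
    """Mean green chromatic coordinate G/(R+G+B) over masked vegetation pixels.
    (Stand-in index; the paper's Equation (1) is not reproduced here.)"""
    rgb = rgb_image.astype(float)[veg_mask]
    return np.mean(rgb[:, 1] / np.clip(rgb.sum(axis=1), 1, None))

def threshold_crossing(days, values, level):
    """Day at which a rising smoothed series first crosses 'level' (linear read-off).
    Assumes the series does cross the level within the analysed period."""
    i = np.argmax(values >= level)
    frac = (level - values[i - 1]) / (values[i] - values[i - 1])
    return days[i - 1] + frac * (days[i] - days[i - 1])

# Synthetic smoothed spring curve: inter-seasonal minimum D rising to seasonal maximum G
days = np.arange(150)
curve = 0.32 + 0.10 / (1.0 + np.exp(-(days - 90) / 8.0))
lo, hi = curve.min(), curve.max()
start_leaf_up = threshold_crossing(days, curve, lo + 0.2 * (hi - lo))
end_leaf_up = threshold_crossing(days, curve, lo + 0.8 * (hi - lo))
print(f"start of leaf-up ~ day {start_leaf_up:.0f}, end of leaf-up ~ day {end_leaf_up:.0f}")
```
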
Article
Generating Virtual Images from Oblique Frames
by Antonio M. G. Tommaselli, Mauricio Galo, Marcus V. A. De Moraes, José Marcato, Jr., Carlos R. T. Caldeira and Rodrigo F. Lopes
Remote Sens. 2013, 5(4), 1875-1893; https://doi.org/10.3390/rs5041875 - 15 Apr 2013
Cited by 48 | Viewed by 8209
Abstract
Image acquisition systems based on multi-head arrangements of digital cameras are attractive alternatives, enabling a larger imaging area than a single frame camera. The calibration of this kind of system can be performed in several steps or by using simultaneous bundle adjustment with relative orientation stability constraints. This paper addresses the steps of the proposed approach for system calibration, image rectification, registration and fusion. Experiments with terrestrial and aerial images acquired with two Fuji FinePix S3Pro cameras were performed. The experiments focused on assessing the results of self-calibrating bundle adjustment with and without relative orientation constraints, and the effects on registration and fusion when generating virtual images. The experiments have shown that the images can be accurately rectified and registered with the proposed approach, achieving residuals smaller than one pixel.
Figures:
  • Resulting rectified images of dual cameras: (a) left image from camera 2, and (b) right image from camera 1; (c) resulting fused image from two rectified images after registration, and (d) cropped without the borders.
  • Dual head system with two Fuji S3 Pro cameras.
  • (a) Image of the calibration field; (b) origin of the arbitrary object reference system; and (c) existing targets and distances directly measured with a precision calliper for quality control.
  • Root Mean Squared Error (RMSE) of the check distances.
  • Estimated standard deviations of f, x0 and y0 for both cameras.
  • Standard deviations of the computed base components.
  • Standard deviations of rotation elements of the relative rotation matrix computed from estimated exterior orientation parameters (EOP).
  • (a) Set of virtual images used in the fusion experiments; (b) reduced set used in the bundle block adjustment.
  • Average values of the standard deviations of discrepancies in tie point coordinates for 5 rectified image pairs with different sets of Interior Orientation Parameters (IOP) and Relative Orientation Parameters (ROP).
Article
Azimuth-Variant Signal Processing in High-Altitude Platform Passive SAR with Spaceborne/Airborne Transmitter
by Wen-Qin Wang and Huaizong Shao
Remote Sens. 2013, 5(3), 1292-1310; https://doi.org/10.3390/rs5031292 - 14 Mar 2013
Cited by 4 | Viewed by 7789
Abstract
High-altitude platforms (HAPs), or near-space vehicles, offer several advantages over current low Earth orbit (LEO) satellites and airplanes, because a HAP is not constrained by orbital mechanics or fuel consumption. These advantages provide potential for specific remote sensing applications that require persistent monitoring or fast revisit frequency. This paper investigates azimuth-variant signal processing in HAP-borne bistatic synthetic aperture radar (BiSAR) with a spaceborne or airborne transmitter for high-resolution remote sensing. The system configuration, azimuth-variant Doppler characteristics and two-dimensional echo spectrum are analyzed. Conceptual system simulation results are also provided. Since the azimuth-variant BiSAR geometry brings a challenge for developing high-precision data processing algorithms, we propose an image formation algorithm using equivalent velocity and nonlinear chirp scaling (NCS) to address the azimuth-variant signal processing problem. The proposed algorithm is verified by numerical simulation results.
Figures:
  • General geometry of the HAP-borne passive BiSAR system.
  • Azimuth-variant Doppler characteristics. (a) Configuration A; (b) Configuration B; (c) Configuration C.
  • Targets with the same range delay at zero Doppler but different range histories.
  • The constraints of Loffeld's BiSAR spectrum model: (a) in small squint-angle cases; (b) in cases of small difference between vt and vr.
  • Azimuth-variant BiSAR geometry and its equivalent model with stationary receiver.
  • Range errors caused by the equivalent BiSAR geometry model. (a) Spaceborne transmitter (Rt0 = 800 km, vt = 7,600 m/s); (b) airborne transmitter (Rt0 = 15 km, vt = 100 m/s).
  • The NCS-based azimuth-variant bistatic SAR imaging algorithm.
  • Processing results with the combined equivalent velocity and NCS algorithm. (a) Configuration A; (b) Configuration B; (c) Configuration C.
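
As a rough illustration of the azimuth-variant geometry discussed in the abstract, the sketch below computes a bistatic range history and the corresponding instantaneous Doppler for a moving spaceborne transmitter and a receiver treated as stationary (as in the equivalent-model figure). The wavelength, receiver range and time span are assumptions for illustration; only the transmitter numbers echo the range-error figure caption, and this is not the authors' equivalent-velocity/NCS algorithm.

```python
import numpy as np

wavelength = 0.03                  # ~X-band wavelength in metres (assumed)
Rt0, vt = 800e3, 7600.0            # spaceborne transmitter: closest range and velocity
Rr0 = 20e3                         # HAP receiver slant range (assumed), treated as stationary

t = np.linspace(-2.0, 2.0, 4001)   # slow (azimuth) time in seconds
Rt = np.sqrt(Rt0**2 + (vt * t)**2)     # transmitter slant-range history
Rr = np.full_like(t, Rr0)              # stationary receiver: constant range
R_bi = Rt + Rr                         # bistatic range history (one-way on each leg)

f_doppler = -np.gradient(R_bi, t) / wavelength   # instantaneous Doppler of the echo
f_rate = np.gradient(f_doppler, t)               # azimuth chirp (Doppler) rate

print(f"Doppler span: {f_doppler.min():.0f} Hz .. {f_doppler.max():.0f} Hz")
print(f"Doppler rate at beam centre: {f_rate[t.size // 2]:.1f} Hz/s")
```
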
Article
New Microslice Technology for Hyperspectral Imaging
by Robert Content, Simon Blake, Colin Dunlop, David Nandi, Ray Sharples, Gordon Talbot, Tom Shanks, Danny Donoghue, Nikolaos Galiatsatos and Peter Luke
Remote Sens. 2013, 5(3), 1204-1219; https://doi.org/10.3390/rs5031204 - 6 Mar 2013
Cited by 10 | Viewed by 9287
Abstract
We present the results of a project to develop a proof of concept for a novel hyperspectral imager based on the use of advanced micro-optics technology. The technology gives considerably more spatial elements than a classic pushbroom, which translates into far more light being integrated per unit of time. This permits observation at higher spatial and/or spectral resolution, of darker targets, and under lower illumination, as in the early morning. Observations of faint glow at night should also be possible but need further study. A full instrument for laboratory demonstration and field tests has now been built and tested. It has about 10,000 spatial elements and spectra 150 pixels long. It is made of a set of cylindrical fore-optics followed by a new, innovative optical system called a microslice Integral Field Unit (IFU), which is itself followed by a standard spectrograph. The fore-optics plus microslice IFU split the field into a large number of small slit-like images that are dispersed in the spectrograph. Our goal is to build instruments with at least hundreds of thousands of spatial elements.
Figures:
  • Basic principle of the microslice spectrograph.
  • Layout of the collimator.
  • The prototype snapshot hyperspectral imager assembled in its field configuration with fold mirror. The cover is not fitted.
  • One of the microlens array elements in the slicer being aligned using a set of micromanipulators. The array is illuminated from below using a red laser and the spot pattern matched to a printed target produced from ray tracing of the ideal optical design.
  • Magnified images of the output from the microslicer when illuminated with a white-light continuum source. These virtual slitlets are used as inputs to the spectrograph, as shown in Figure 1. A cross-section through one of these slices is shown in Figure 6.
  • The bottom curve shows a cross-section across one of the slitlet images produced by the microslicer (Figure 5). The sharpness of the edges is used to determine the image quality of the microslice system. The upper curve shows the derivative of this profile; the width of the two peaks corresponds closely to the spatial Point Spread Function (PSF) of the system (76% in-slit energy) and is less than 1 pixel on the spectrograph detector.
  • White light spectra on the detector with the whole system and uniform illumination.
  • Detector response with white light illumination and a 475–650 nm bandpass filter. Each pixel is made of 3 layers; in each layer one of the three colors is detected. The three lines show the counts (arbitrary units) in the three different layers of the Foveon sensor.
  • (Left) Image of the stripe target made of black and white parallel lines 1.7 pixels wide plus a central white gap. The gap is visible in the second row of slices. (Right) Profiles one pixel wide through 6 different spectra showing the contrast in the red at left, then green, blue, red, green & blue.
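
The cross-section figure above estimates the system PSF from the derivative of a slitlet edge profile. Below is a small sketch of that idea on a synthetic slitlet; the PSF width, slit edges and pixel grid are arbitrary values, not the instrument's.

```python
import numpy as np
from scipy.special import erf

def fwhm(x, y):
    """Full width at half maximum of a single peak, with linear interpolation at the crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

# Synthetic slitlet cross-section: a boxcar slit blurred by a Gaussian PSF
pixels = np.arange(200.0)
sigma_psf = 1.2                                   # assumed PSF width (detector pixels)
profile = 0.5 * (erf((pixels - 80.0) / (np.sqrt(2) * sigma_psf))
                 - erf((pixels - 120.0) / (np.sqrt(2) * sigma_psf)))
deriv = np.gradient(profile, pixels)              # two peaks, one per slit edge
left_edge = pixels < 100                          # isolate the left-edge peak
print(f"edge-derivative FWHM ~ {fwhm(pixels[left_edge], deriv[left_edge]):.2f} px "
      f"(true Gaussian FWHM = {2.355 * sigma_psf:.2f} px)")
```
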
Article
Decision Tree and Texture Analysis for Mapping Debris-Covered Glaciers in the Kangchenjunga Area, Eastern Himalaya
by Adina Racoviteanu and Mark W. Williams
Remote Sens. 2012, 4(10), 3078-3109; https://doi.org/10.3390/rs4103078 - 18 Oct 2012
Cited by 105 | Viewed by 13759
Abstract
In this study we use visible, short-wave infrared and thermal Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data, validated with high-resolution QuickBird (QB) and WorldView-2 (WV2) imagery, for mapping debris cover in the eastern Himalaya using two independent approaches: (a) a decision tree algorithm, and (b) texture analysis. The decision tree algorithm was based on multi-spectral and topographic variables, such as band ratios, surface reflectance, kinetic temperature from ASTER bands 10 and 12, slope angle, and elevation. The decision tree algorithm resulted in 64 km2 classified as debris-covered ice, which represents 11% of the glacierized area. Overall, for ten glacier tongues in the Kangchenjunga area, there was an area difference of 16.2 km2 (25%) between the ASTER and the QB areas, with mapping errors mainly due to clouds and shadows. Texture analysis techniques included co-occurrence measures, geostatistics and filtering in the spatial/frequency domain. Debris cover had the highest variance and entropy and the lowest homogeneity of all terrain classes, for example a mean variance of 15.27 compared to 0 for clouds and 0.06 for clean ice. Results of the texture image for debris-covered areas were comparable with those from the decision tree algorithm, with an 8% area difference between the two techniques.
Figures:
  • Location map of the study area (the Sikkim Himalaya). The figure on the right is a false color composite (FCC) ASTER 543. In this composite, glacier ice is shown as turquoise, vegetation is green, clouds are white and bare land is red. The yellow arrow points to the clouds obstructing the view of glaciers on the southern part of the image. Also shown is the subset area on which texture analysis was performed (Section 2.4). Numbers refer to glaciers mentioned in the text: 1 - S. Lonak; 2 - Broken; 3 - E. Langpo; 4 - Ramtang; 5 - Yalung; 6 - Zemu; 7 - Tongshuong; 8 - Talung; 9 - E. Rathong; 10 - Onglaktang; 11 - Kangchenjunga; 12 - Umaram Kang.
  • A section of the debris-covered tongue of Onglaktang glacier, seen from Goecha La (∼5,300 m). The morphology of the debris-covered tongue is visible as cones and mounds (A) and ice walls (B). Also visible are the sharp moraine ridge crest (C), lateral (periglacial) moraine (D) and seasonal snow on the glacier tongue and on the lateral moraines (E). The photograph was taken in November 2006.
  • Sikkim section of the Swiss Foundation for Alpine Research topographic map at 1:150,000, used as baseline data in this study. Black solid lines represent glaciers digitized manually from the 1970s topographic map.
  • ASTER color composite 321 showing ROIs digitized manually and shown as solid polygons: debris (red), snow/ice (blue), bare rock and sand (yellow) and clouds (magenta). All four classes were used for the GLCM texture measures; only the first three classes were used for the geostatistical analysis (Section 4.4.3).
  • Results of the clean ice/snow delineation based on the 2001 ASTER scene, using the NDSI algorithm. The letters point to: A - correctly classified clean ice and snow; B - proglacial lakes misclassified as snow/ice; C - shadows on the glacier surface; D - nunataks; E - debris-covered tongue of Zemu glacier, missed by the NDSI algorithm; F - transient snow misclassified as glacier ice; and G - clouds. Adapted from Racoviteanu et al. [41].
  • (a) AST08 product (kinetic temperature band) for Zemu glacier, with two transects: across the Zemu glacier and surroundings (transect #1, in red) and along the glacier (transect #2, in green); (b) ASTER color composite 432 shown for comparison; (c) surface temperature across Zemu and surrounding surfaces (direction NW to SE); the lower left graph points to the sharp lateral moraine ridge crests visible as bright pixels on the surface temperature image; (d) surface temperature along the tongue of Zemu glacier (direction SW to NE). Surface temperature generally increases towards the glacier terminus, indicating a thicker debris cover.
  • The ENVI decision tree based on topographic and multispectral criteria and thresholds. Each criterion consists of a conditional statement (left-side ellipsoids). Every time a condition is fulfilled, the resulting class/binary map (right-side rectangular boxes) is excluded from the area of potential debris, resulting in a final map of suitable areas for debris cover.
  • Four of the classes resulting from the multispectral criteria, shown as examples of the output of the criteria in the decision tree in Figure 7: (a) NDSI map with clean glacier areas in blue; (b) ASTER band 4 with cloud outlines in yellow; (c) NDVI map with vegetation mask in green; and (d) HSV 235 color transform with the hue component for non-vegetated bare rock/sand shown in red.
  • Results of the decision tree classification for the 2001 ASTER scene. Potential debris-covered ice is shown in red and is overlaid on a 321 color composite of the ASTER scene.
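
The texture analysis contrasts GLCM variance, entropy and homogeneity between debris, clean ice and clouds. The sketch below shows how such measures can be computed over an image window with scikit-image (assuming version ≥ 0.19 for the graycomatrix/graycoprops names); the window size, grey-level quantisation and toy data are illustrative and not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

def glcm_texture(window, levels=64):
    """GLCM variance, entropy and homogeneity for one image window (fixed 0-255 radiometry)."""
    q = (np.clip(window, 0, 255) // (256 // levels)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                       # normalised co-occurrence probabilities
    i, _ = np.indices(p.shape)
    mean = np.sum(i * p)
    variance = np.sum((i - mean) ** 2 * p)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    return variance, entropy, homogeneity

# Toy windows: a rough, heterogeneous "debris" chip versus a near-uniform "clean ice" chip
rng = np.random.default_rng(2)
samples = {"debris": rng.normal(120, 35, (32, 32)), "clean ice": rng.normal(220, 2, (32, 32))}
for name, win in samples.items():
    v, e, h = glcm_texture(win)
    print(f"{name:9s}  variance={v:7.2f}  entropy={e:5.2f}  homogeneity={h:4.2f}")
```

As in the paper's comparison, the heterogeneous window shows much higher variance and entropy and lower homogeneity than the uniform one.
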
Article
Gigapixel Imaging and Photogrammetry: Development of a New Long Range Remote Imaging Technique
by Matthew J. Lato, George Bevan and Michael Fergusson
Remote Sens. 2012, 4(10), 3006-3021; https://doi.org/10.3390/rs4103006 - 10 Oct 2012
Cited by 28 | Viewed by 10676
Abstract
The use of terrestrial remote imaging techniques, specifically LiDAR (Light Detection And Ranging) and digital stereo-photogrammetry, is widely proven and accepted for the mapping of geological structure and the monitoring of mass movements. The use of such technologies can be limited, however: LiDAR generally by the cost of acquisition, and stereo-photogrammetry by the trade-off between the possible resolution within the scene and the spatial extent of the coverage. The objective of this research is to test a hybrid gigapixel photogrammetry method and investigate optimal equipment configurations for use in mountainous terrain. The scope of the work included field testing at variable ranges, angles and resolutions, and in variable geological and climatological settings. Original field work was carried out in Canada to test various lenses and cameras, and detailed field mapping excursions were conducted in Norway. The key findings of the research are example data generated by gigapixel photogrammetry, a detailed discussion of optimal photography equipment for gigapixel imaging, and implementations of the imaging possibilities for rockfall mapping. This paper presents a discussion of a new terrestrial 3-dimensional imaging technique. The findings of this research will directly benefit natural hazard mapping programs in which rockfall potential must be recorded and the use of standard 3-dimensional imaging techniques cannot be applied.
Figures:
  • Resectioned GigaPan camera stations (blue) and corresponding bundle points (yellow) in ADAMTech CalibCam. (a) The peak from Nebbit Mountain in the Gudvangen Valley, Norway; the base-to-distance ratio is 1:7. (b) An outcrop in the Rjukan Valley, Norway; the base-to-distance ratio is 1:4.
  • GigaPan Epic Pro set up with a D7000 and a 300 mm f4 lens. Note that the no-parallax point (NPP) is behind the sensor plane.
  • (a) 1.91 gigapixel image, captured with a Nikon D300s and a 135 mm f2 lens. (b) Gigapixel image zoomed to its native resolution, illustrating fractures contributing to rockfall. (c) Gigapixel image zoomed to its native resolution, illustrating open fractures in the rock mass.
  • Manual pano-gimbal head with a 400 mm lens and 2× teleconverter on a D7000: effective focal length of 1,200 mm.
  • Overview map and pictures of field sites in Norway in which gigapixel photogrammetry was tested (note: scale, coordinates and north arrow apply to the background map only).
  • (a) Three-dimensional surface model constructed from the gigapixel imagery collected in Gudvangen, Norway. Original photographic data were collected using a Nikon D3s with a 400 mm f4 lens at a distance of approximately 1,200 m. Red directional lighting (from below) highlights all overhanging surfaces on the rock face. (b) Zoomed-in section of the 3D model illustrating failed rock blocks and the high level of detail in the data. (c) Stereonet plot of discontinuities measured from the 3D mesh; three joint-set families are identified.
  • (a) Photograph of the site under evaluation in the Rjukan Valley; the planar sliding surface as well as potentially unstable blocks are outlined. (b) Meshed gigapixel photogrammetry data of all data points positioned above the potential planar sliding surface. (c) Stereonet evaluation of the kinematic stability of the highlighted discontinuity with respect to the slope face, indicating instability.
Article
Capability of C-Band SAR for Operational Wetland Monitoring at High Latitudes
by Julia Reschke, Annett Bartsch, Stefan Schlaffer and Dmitry Schepaschenko
Remote Sens. 2012, 4(10), 2923-2943; https://doi.org/10.3390/rs4102923 - 1 Oct 2012
Cited by 62 | Viewed by 11673
Abstract
Wetlands store large amounts of carbon, and depending on their status and type, they release specific amounts of methane gas to the atmosphere. The connection between wetland type and methane emission has been investigated in various studies and utilized in climate change monitoring and modelling. For improved estimation of methane emissions, land surface models require information such as the wetland fraction and its dynamics over large areas. Existing datasets of wetland dynamics present the total amount of wetland (fraction) for each model grid cell, but do not discriminate between different wetland types such as permanent lakes, periodically inundated areas or peatlands. Wetland types influence methane fluxes differently, and thus their contributions to the total wetland fraction should be quantified. Wetlands of permafrost regions in particular are expected to have a strong impact on future climate due to soil thawing. In this study, ENVISAT ASAR Wide Swath data were tested for operational monitoring of the distribution of areas with a long-term SW near 1 (hSW) in northern Russia (SW = degree of saturation with water, 1 = saturated), which is a specific characteristic of peatlands. For the whole of northern Russia, hSW areas were delineated and discriminated from dynamic and open water bodies for the years 2007 and 2008. The area identified with this method amounts to approximately 300,000 km2 in northern Siberia in 2007. It overlaps with zones of high carbon storage. Comparison with a range of related datasets (static and dynamic) showed that hSW represents not only peatlands but also temporary wetlands associated with post-forest-fire conditions in permafrost regions. Annual long-term monitoring of change in boreal and tundra environments is possible with the presented approach. Sentinel-1, the successor of ENVISAT ASAR, will provide data that may allow continuous monitoring of these wetland dynamics in the future, complementing global observations of wetland fraction.
Figures:
  • Extent of mosaics processed for the ALANIS Methane dataset, with IDs included (e.g., 11 corresponds to the Ob region and 12 to the Lena basin and delta).
  • Extent of permafrost of the West Siberian Lowlands (IIASA dataset [26]). The test areas of the Ob basin and delta and the Lena basin and delta are outlined in black. The extents of subsets and transects used for validation are shown in red, green and blue (compare Chapter 4: Assessment).
  • Major vegetation zones of the West Siberian Lowlands (IIASA dataset [26]). The test areas of the Ob basin and delta and the Lena basin and delta are outlined in black. The extents of subsets and transects used for validation are shown in red, green and blue (compare Chapter 4: Assessment).
  • Schematic representation of the classification method to derive hSW areas.
  • (Left) hSW areas and inundation classification using the minimum and maximum of the time series data; (right) hSW areas and inundation classification using the 5th and 95th percentiles of the time series data. Artifacts and data errors are minimized, but the extent of hSW areas is reduced as well. Universal Polar Stereographic projection; extent: S2 in Figure 3.
  • The 95th percentile (left) and the 5th percentile (middle) of the backscatter time series, which are used in the classification tree, in comparison to single-day backscatter (right), mapped in Universal Polar Stereographic projection. Extent: S1 in Figure 3.
  • Percentage of hSW area relative to the total area of the mosaics covering northern Russia. For location see Figure 1.
  • Comparison of the hSW classification (left) with the carbon content map (right; IIASA dataset [32]).
  • Comparison of hSW area (light blue) and open water body area (dark blue) derived from ASAR WS data statistics with the area of the regional wetland product (red) in different landscape types of the Ob and Lena regions. The dynamics of total wetland areas are comparable in most regions. The locations of the transects are shown in Figure 3.
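
The abstract describes separating open water, temporarily inundated areas and long-term saturated (hSW) areas from the 5th and 95th percentiles of the ASAR backscatter time series. A schematic per-pixel version of that idea is sketched below; the dB thresholds and class logic are placeholders, not the paper's classification tree, and the toy stack is synthetic.

```python
import numpy as np

def classify_wetland(sigma0_db, water_db=-17.0, wet_db=-12.5):
    """Per-pixel classification of a C-band backscatter time series (dB values).

    Placeholder thresholds, not the values used in the paper:
      1 = open water body        (dark in nearly every acquisition)
      2 = temporarily inundated  (dark only in part of the time series)
      3 = candidate hSW area     (consistently low, but above the open-water level)
      0 = other land
    sigma0_db has shape (time, rows, cols).
    """
    p5 = np.percentile(sigma0_db, 5, axis=0)
    p95 = np.percentile(sigma0_db, 95, axis=0)
    classes = np.zeros(p5.shape, dtype=np.uint8)
    classes[p95 < water_db] = 1
    classes[(p5 < water_db) & (p95 >= water_db)] = 2
    classes[(classes == 0) & (p95 < wet_db)] = 3
    return classes

# Toy stack: 20 acquisitions over a small tile, with an inserted lake and a saturated patch
rng = np.random.default_rng(3)
stack = rng.normal(-9.0, 1.5, (20, 100, 100))                    # ordinary land
stack[:, 40:60, 40:60] = rng.normal(-19.0, 1.0, (20, 20, 20))    # permanent open water
stack[:, 0:20, :] = rng.normal(-14.0, 0.8, (20, 20, 100))        # saturated (hSW-like) surface
print(np.bincount(classify_wetland(stack).ravel(), minlength=4))
```
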