
Airborne Laser Scanning

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (21 October 2016) | Viewed by 187820

Special Issue Editors


Prof. Jie Shan
Guest Editor
School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
Interests: automated aerospace image and LiDAR mapping; geospatial modeling and analysis; geosocial data mining

Special Issue Information

Dear Colleagues,

Airborne laser scanning has recently undergone a revolution in technology and a wave of innovation in practical applications. Among numerous developments, we have notably experienced progressive changes from discrete-return recording to waveform recording, from single-spectral (single-band) to multispectral laser scanning, and from traditional single-pulse collection to multi-pulse (Geiger-mode) and single-photon collection. Additionally, UAV laser scanning is emerging. As a result of these technological advancements, the operational platform altitude may vary from tens of meters to over ten thousand meters; point densities range from a couple of points per square meter to tens or even a hundred points per square meter; and the recorded targets can be the air, the canopy, the ground, or even features below the water surface. Representative applications of these advancements include, among others, air quality detection, biophysical estimation, high-definition terrain generation, topographic mapping, coastal mapping, change detection, and various types of 3D modeling.

As Guest Editors, we would like to dedicate this Special Issue to documenting these revolutionary developments and innovative applications in a timely manner. Well-prepared, unpublished submissions that address one or more of the following topics in airborne laser scanning are solicited:

  • Advances in laser scanning systems
  • Accurate direct sensor geo-referencing
  • Point cloud generation from LiDAR measurements
  • Filtering, segmentation, clustering and classification of LiDAR point clouds
  • Feature or object extraction and reconstruction from LiDAR point clouds
  • Combined use of laser scanning data and other geospatial data
  • Feasibility studies with new systems and sensors
  • New applications

Prof. Jie Shan
Prof. Juha Hyyppä
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Airborne LiDAR
  • Airborne Laser Scanning
  • Point clouds
  • Direct georeferencing
  • Clustering and classification
  • Segmentation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

Article
Multi-Feature Registration of Point Clouds
by Tzu-Yi Chuang and Jen-Jer Jaw
Remote Sens. 2017, 9(3), 281; https://doi.org/10.3390/rs9030281 - 16 Mar 2017
Cited by 14 | Viewed by 7077
Abstract
Light detection and ranging (LiDAR) has become a mainstream technique for the rapid acquisition of 3-D geometry. Current LiDAR platforms can be broadly categorized into spaceborne LiDAR systems (SLS), airborne LiDAR systems (ALS), mobile LiDAR systems (MLS), and terrestrial LiDAR systems (TLS). Point cloud registration between different scans of the same platform or of different platforms is essential for establishing a complete scene description and improving geometric consistency. The discrepancies in data characteristics must be handled properly for precise transformation estimation. This paper proposes a multi-feature registration scheme that utilizes point, line, and plane features extracted from raw point clouds to register scans acquired within the same LiDAR system or across different platforms. By exploiting the full geometric strength of the features, different features are used exclusively or in combination with others. The uncertainty of feature observations is also considered within the proposed method, in which the registration of multiple scans can be achieved simultaneously. A simulated test with an ideal geometry and simplified data was performed to assess the contribution of different features to point cloud registration in an essential fashion. In addition, three real cases of registration between LiDAR scans from a single platform and between those acquired by different platforms were demonstrated to validate the effectiveness of the proposed method. The experimental results show that the proposed model, with simultaneous and weighted adjustment, rendered satisfactory registration results: not only can features inherent in the scene be more fully exploited to increase the robustness and reliability of transformation estimation, but the weak geometry of poorly overlapping scans can also be handled better than when only a single type of feature is used. The registration errors of multiple scans in all tests were less than the point interval or the positional error of the LiDAR data, whichever dominated.
(This article belongs to the Special Issue Airborne Laser Scanning)
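As a concrete illustration of the point-feature case, the core estimation can be sketched as a weighted rigid-body fit: given matched 3-D points with per-correspondence weights reflecting observation uncertainty, the rotation and translation follow from a weighted SVD (Kabsch) solution. This is only a sketch of one ingredient; the paper's full model also incorporates line and plane features and adjusts multiple scans simultaneously, and the weights here are assumed inputs.

    import numpy as np

    def weighted_rigid_transform(src, dst, w):
        """Estimate R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
        w = np.asarray(w, dtype=float) / np.sum(w)
        mu_s, mu_d = w @ src, w @ dst                      # weighted centroids
        H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))   # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        return R, t

    # Recover a known transform from noisy synthetic matches (equal weights).
    rng = np.random.default_rng(0)
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([1.0, -2.0, 0.5])
    src = rng.uniform(-10.0, 10.0, (50, 3))
    dst = src @ R_true.T + t_true + rng.normal(scale=0.01, size=(50, 3))
    R, t = weighted_rigid_transform(src, dst, np.ones(50))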
Show Figures

Graphical abstract
Figure 1: The demonstration of cross-platform point cloud registration.
Figure 2: Two-endpoint and four-parameter forms of the line-based transformation.
Figure 3: Normal vector and polar forms of the plane-based transformation.
Figure 4: Illustrations of experimental configurations.
Figure 5: Quality assessment.
Figure 6: Quality assessment (r indicates the redundancy).
Figure 7: Illustrations of feature combinations.
Figure 8: Assessment of the transformation quality of feature combinations.
Figure 9: The structure for feature acquisition.
Figure 10: Illustration of the terrestrial scans.
Figure 11: The feature extraction and matching results.
Figure 12: The registration results by the proposed method.
Figure 13: Terrestrial and airborne LiDAR point clouds.
Figure 14: The feature extraction results.
Figure 15: The feature matching results.
Figure 16: The discrepancies in each scan pair.
Figure 17: The terrestrial registration results viewed from different directions.
Figure 18: The feature correspondences and check features.
Figure 19: Visual inspections of the registered point clouds.
Figure 20: Mobile and airborne LiDAR data.
Figure 21: Cross-platform LiDAR registration.
Article
Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space
by Bisheng Yang, Ronggang Huang, Jianping Li, Mao Tian, Wenxia Dai and Ruofei Zhong
Remote Sens. 2017, 9(1), 14; https://doi.org/10.3390/rs9010014 - 27 Dec 2016
Cited by 58 | Viewed by 7528
Abstract
Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as doing so can balance user requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. Scale space is a sound theory for the multi-scale representation of an object from a coarser level to a finer level. This paper therefore proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, it generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, it robustly extracts building points by using features based on the TRG. Finally, it reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and show good performance in distinguishing buildings from vegetation and other objects, while automatically reconstructing building LoDs from the finest building points.
(This article belongs to the Special Issue Airborne Laser Scanning)
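A rough sketch of the scale-space idea follows (this is not the paper's improved morphological reconstruction, which operates on segments and recovers inclined roofs): building points rasterized to a height grid are opened with a growing grey-scale structuring window, so that small roof structures vanish at coarser scales. The cell size and scale list are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def height_scale_space(dsm, cell=0.5, scales_m=(0, 2, 4, 8)):
        """Return one progressively generalized height grid per scale (meters)."""
        levels = []
        for s in scales_m:
            if s == 0:
                levels.append(dsm.copy())                 # finest level: input grid
            else:
                k = max(1, int(round(s / cell)))          # window size in cells
                levels.append(ndimage.grey_opening(dsm, size=(k, k)))
        return levels

    # Toy example: a 6 m building block carrying a small 9 m rooftop structure.
    dsm = np.zeros((40, 40))
    dsm[10:30, 10:30] = 6.0
    dsm[18:22, 18:22] = 9.0
    lods = height_scale_space(dsm)   # the rooftop bump survives only at fine scales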
Show Figures

Graphical abstract
Figure 1: Improved morphological reconstruction for a building: (a) raw point cloud; (b) cross-section of the raw points with segment widths annotated; (c) morphological reconstruction at the 2 m scale, where parts of the inclined roofs T_3^0 and T_4^0 are flattened; (d) recovery of the inclined segments; (e) modification of false segments, where T_1^1 and T_5^1 are flattened onto the larger segment.
Figure 2: Generation of the scale space and the topological relationship graph (TRG) within a building: (a-c) morphological reconstruction at scales s = 0 m, 2 m and 4 m; (d-f) corresponding cross-sections, with the cross plane shown in Figure 1a; (g) the TRG, generated by relinking segments from adjacent levels.
Figure 3: Four types of relationship between two adjacent segments, dotted in different colors.
Figure 4: Labeling the relationship between pairs of adjacent segments at each TRG level; adjacent segments should share the same father node, otherwise the relationship is left unlabeled and derived from the father nodes.
Figure 5: Flowchart of generating building LoDs from airborne LiDAR point clouds.
Figure 6: An example of building point detection: (a) raw point cloud with a building and several nearby trees; (b) filtering result separating ground and non-ground points; (c) non-ground segments; (d) building candidate regions from grouped non-ground segments; (e) TRG classification result, with one candidate region labeled as a building; (f) final building points after removing nearby non-building points.
Figure 7: Generating the TRG for building candidate region B of Figure 6d: (a) raw point cloud; (b-d) segmentation results at three scales, each segment uniquely identified; (e) the generated TRG.
Figure 8: Modifying one TRG (Figure 7e) from the finest to the coarsest level after building point detection; segments classified entirely as non-building are removed from the TRG.
Figure 9: Raw point clouds of the Toronto dataset provided by the International Society for Photogrammetry and Remote Sensing (ISPRS).
Figure 10: Detecting buildings from the Toronto dataset: (a) filtering result; (b) non-ground segments; (c) building candidate regions; (d) TRG classification result; (e,f) extracted buildings, dotted in different colors.
Figure 11: Generating the scale space and the corresponding TRG for building candidate region PB in Figure 10c: (a) point clouds of the region; (b-e) segmentation results at four scales; (f) generated TRGs.
Figure 12: Extracting building points and modifying the TRG for region PB: (a) TRG classification, with two TRGs classified as non-building and one as a building; (b) final building point detection; (c,d) TRG modification, with one segment node removed.
Figure 13: Reconstructing the building LoDs of region PB at scales of 0 m, 2 m, 4 m and 8 m; roof structures change from complicated to simple with increasing scale.
Figure 14: Building LoDs for the entire scene at scales of 0, 2, 4, 8, 16 and 32 m; buildings with fewer levels reuse their maximum-scale model at larger scales.
Figure 15: Evaluation result provided by ISPRS: yellow pixels are true positives, red pixels false positives, and blue pixels false negatives.
Figure 16: A building detection result: (a) top view; (b) side view; (c) cross-section showing preserved roof furniture and annex structures with vegetation and noise removed; (d) the corresponding model at the 0 m scale.
Figure 17: An example of problems in the building LoDs: dormers missed during building point detection leave the models of some levels incomplete (shown at scales of 0 to 16 m).
Figure 18: Comparison of LoDs from CityGML and the proposed method for a connected building.
Figure 19: Comparison of LoDs from CityGML and the proposed method for a building with multiple stories.
Article
Scanning, Multibeam, Single Photon Lidars for Rapid, Large Scale, High Resolution, Topographic and Bathymetric Mapping
by John J. Degnan
Remote Sens. 2016, 8(11), 958; https://doi.org/10.3390/rs8110958 - 18 Nov 2016
Cited by 88 | Viewed by 11460
Abstract
Several scanning, single-photon-sensitive, 3D imaging lidars are described herein that operate at aircraft above-ground levels (AGLs) between 1 and 11 km and at speeds in excess of 200 knots. With 100 beamlets and laser fire rates up to 60 kHz, we at the Sigma Space Corporation (Lanham, MD, USA) have interrogated up to 6 million ground pixels per second, all of which can record multiple returns from volumetric scatterers such as tree canopies. High range resolution has been achieved through the use of subnanosecond laser pulsewidths, detectors, and timing receivers. The systems are presently being deployed on a variety of aircraft to demonstrate their utility in multiple applications, including large-scale surveying, bathymetry, and forestry. Efficient noise filters, suitable for near-real-time imaging, have been shown to effectively eliminate the solar background during daytime operations. Geolocation elevation errors measured to date are at the subdecimeter level. Key differences between our Single Photon Lidars and competing Geiger-mode lidars are also discussed.
(This article belongs to the Special Issue Airborne Laser Scanning)
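The two-stage noise filter described for Figure 7 below lends itself to a compact sketch: a coarse elevation histogram first isolates the interval containing the surface, then finer bins reject most of the residual solar noise. The 90 m and ~5 m bin widths follow the figure caption; the density threshold is an assumption.

    import numpy as np

    def two_stage_filter(z, coarse_bin=90.0, fine_bin=5.0, k=3.0):
        """Return a boolean mask of photon elevations judged to be signal."""
        # Stage 1: keep the most populated coarse elevation interval.
        edges1 = np.arange(z.min(), z.max() + coarse_bin, coarse_bin)
        counts1, _ = np.histogram(z, bins=edges1)
        lo = edges1[np.argmax(counts1)]
        keep = (z >= lo) & (z < lo + coarse_bin)
        # Stage 2: within that interval, keep fine bins whose photon counts
        # rise well above the mean (assumed threshold k * mean).
        zs = z[keep]
        edges2 = np.arange(zs.min(), zs.max() + fine_bin, fine_bin)
        counts2, _ = np.histogram(zs, bins=edges2)
        dense = counts2 > k * counts2.mean()
        idx = np.clip(np.digitize(zs, edges2) - 1, 0, len(counts2) - 1)
        out = np.zeros(z.shape, dtype=bool)
        out[np.flatnonzero(keep)[dense[idx]]] = True
        return out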
Show Figures

Graphical abstract
Figure 1: Comparison of Single Photon Lidars with conventional Discrete Return and Digitized Waveform lidars interacting with a tree canopy (courtesy of D. Harding, NASA GSFC).
Figure 2: Leafcutter, the first Sigma Single Photon Lidar (SPL) to split the laser beam into 100 beamlets; its dual wedge scanner generated linear raster scans at 45° to the flight line or conical scans with cone half angles up to 13.5°, giving 15 cm ground pixel separation at the 1 km design AGL and contiguous along-track and cross-track coverage in a single pass.
Figure 3: A collage of daytime Leafcutter images: low-reflectance (10% to 15%) surfaces at AGLs of 1 km or less, and high-reflectance cryospheric surfaces in Greenland and Antarctica at AGLs up to 2.5 km, color-coded by lidar-derived surface elevation (blue = low, red = high); the bottom two images include bathymetry.
Figure 4: NASA Mini-ATM (Airborne Topographic Mapper) and its designated host aircraft, the Viking 300 micro-UAV.
Figure 5: Moderate-altitude HRQLS-1 and HRQLS-2 lidars and the King Air B200 host aircraft.
Figure 6: The NASA MABEL pushbroom lidar, jointly developed by NASA Goddard Space Flight Center and Sigma Space Corporation, generating 2D surface profiles in Greenland from an AGL of 20 km; surface returns stand out against the dense "salt-and-pepper" solar noise caused by the high reflectance (typically 80% to 96%) of snow and ice at 532 nm.
Figure 7: Automated two-stage filtering of HRQLS-1 data from a single overflight of a residential community in Oakland, MD: the raw point cloud spans a 690 m range gate; the first-stage filter isolates a 90 m interval containing the surface plus roughly 13% of the total noise, and the second stage uses narrower (~5 m) range bins to eliminate the vast majority of the remaining noise.
Figure 8: Color-coded elevation map of Garrett County, MD (~1700 km²), generated by HRQLS-1 from an AGL of 2.3 km in roughly 12 flight hours at 278 km/h, with a 17° cone half angle, a 1.36 km swath, and a mapping rate of 378 km²/h (red = 857 m, blue = 551 m).
Figure 9: A Garrett County coal mine in which buildings, conveyor belts, and even black coal piles are clearly visible.
Figure 10: HRQLS-1 point cloud profiles showing different growth patterns within 1 km² of forested area in Garrett County, MD: (a) short even-aged stand with little understory; (b) uneven-aged stand with tall trees and dense midstory; (c) even-aged stand with some mid- and understory growth; (d) tall open stand with distinct understory (courtesy of the University of Maryland [11]).
Figure 11: HRQLS-1 lidar image and digital color photograph of the area surrounding the Naval Postgraduate School in Monterey, California.
Figure 12: "Fused" HRQLS-1 lidar-photographic 3D image of the Naval Postgraduate School in Monterey, California.
Figure 13: Colored HRQLS-1 topo-bathymetric 3D point cloud of a hilltop monastery and the beach at Pt. Lobos near Monterey, CA, with a 2D profile extending from the monastery into the Pacific Ocean to an optical depth of 17.3 m (physical depth 13 m); vertical grid 10 m, horizontal grid 50 m.
Figure 14: Two HRQLS-1 passes over a cruise ship docked at Ft. Lauderdale, Florida, and multiple passes over a power line grid in North Carolina yielding over 40 points per square meter from an AGL of 1.83 km at 296 km/h.
Figure 15: (a) Fraction of pixels recording surface returns as a function of surface signal strength n and mean noise photons detected within a half range gate; (b) ratio of signal to noise counts for the same two parameters.
Figure 16: Surface detection probabilities for SPL and Geiger Mode (GM) lidars versus unobscured signal strength for a tree canopy with 40% one-way transmission; unlike the GM lidar, the SPL can "power" through the canopy by increasing the laser pulse energy.
Figure 17: Relative performance of SPL and GM lidars over one-way canopy transmissions from 0.1 to 1 (γ = 1 assumed), showing the increasing advantage of the SPL technique in detecting the under-canopy surface as canopy transmission decreases.
Article
Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar
by Juan Carlos Fernandez-Diaz, William E. Carter, Craig Glennie, Ramesh L. Shrestha, Zhigang Pan, Nima Ekhtari, Abhinav Singhania, Darren Hauser and Michael Sartori
Remote Sens. 2016, 8(11), 936; https://doi.org/10.3390/rs8110936 - 10 Nov 2016
Cited by 146 | Viewed by 14347
Abstract
In this paper we present a description of a new multispectral airborne mapping light detection and ranging (lidar) system, along with performance results obtained from two years of data collection and test campaigns. The Titan multiwave lidar is manufactured by Teledyne Optech Inc. (Toronto, ON, Canada) and emits laser pulses at the 1550, 1064, and 532 nm wavelengths simultaneously through a single oscillating-mirror scanner at pulse repetition frequencies (PRFs) that range from 50 to 300 kHz per wavelength (a maximum combined PRF of 900 kHz). The Titan system can perform simultaneous mapping in terrestrial and very shallow water environments, and its multispectral capability enables new applications, such as the production of false color active imagery derived from the lidar return intensities and the automated classification of targets and land covers. Field tests and mapping projects performed over the past two years demonstrate the capability to classify five land covers in urban environments with an accuracy of 90%, map bathymetry under more than 15 m of water, and map thick vegetation canopies at sub-meter vertical resolutions. In addition to its multispectral and performance characteristics, the Titan system is designed with several redundancy and diversity schemes that have proven beneficial for both operations and the improvement of data quality.
(This article belongs to the Special Issue Airborne Laser Scanning)
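Producing the false color active imagery mentioned above is conceptually simple: rasterized per-channel intensities are contrast-stretched and stacked, with 1550 nm as red and the 1064 and 532 nm channels as green and blue (the mapping used in the figures below). The percentile stretch is an assumed normalization, not Optech's processing chain.

    import numpy as np

    def stretch(band, p_lo=2, p_hi=98):
        """Percentile contrast stretch to [0, 1]."""
        lo, hi = np.percentile(band, [p_lo, p_hi])
        return np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

    def false_color(i1550, i1064, i532):
        """Stack 1550 nm as red, 1064 nm as green, 532 nm as blue."""
        return np.dstack([stretch(i1550), stretch(i1064), stretch(i532)])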
Show Figures

Graphical abstract
Figure 1: The Titan's operational wavelengths with reference to reflectance spectra of different land cover features and the Landsat 8 Operational Land Imager (OLI) passive imaging bands.
Figure 2: Pulse repetition frequency (PRF) versus range operation regions for NCALM's Titan sensor, with range ambiguity regions shown as solid colored bands.
Figure 3: Laser shot density envelope for a single pass and single channel of the Titan sensor.
Figure 4: The Titan multispectral lidar sensor integrated into a DHC-6 Twin Otter aircraft: (a) installation layout from the port side of the sensor head; (b) view from the front looking aft, with the control rack in the foreground; (c) the sensor head through the mapping port, with the rectangular laser output window on the right and the DIMAC camera lens behind the circular window.
Figure 5: Intensity and structural images from the Titan multispectral data: (a-c) intensity images for the 1550, 1064 and 532 nm channels; (d) false color multispectral intensity image (1550 nm = red, 1064 nm = green, 532 nm = blue); (e-g) structural images based on return height spread, height above ground, and returns per pulse; (h) ground cover classification results.
Figure 6: Spectral and spatial products for the archaeological site of Teotihuacan in central Mexico: (a) false color multispectral intensity image; (b) digital surface model (DSM) derived from the lidar spatial data; (c) perspective view of the intensity image draped over a 3D surface model based on the lidar DSM.
Figure 7: Potential spectral separability of loose and compacted snow at McMurdo Station, with roads of compacted ice and snow marked by yellow arrows: (a) aerial oblique photo at the time of collection; (b) perspective view of a 3D surface model overlaid with false color intensity; (c,d) active intensity images from the 1550 and 1064 nm channels.
Figure 8: The bathymetric test area at the East Pass near Destin, FL: the solid red line is the test line flown repeatedly under different configurations, the white line the track of validation samples from an acoustic Doppler current profiler, and the yellow rectangle the coverage of one test line, whose derived bathymetric elevations are shown offset to the east of the pass.
Figure 9: Sample of a bathymetric survey around Green Cay, Bahamas: (a) point cloud of first returns of the bathymetric channel, colored by flight line and intensity; (b) topographic and bathymetric color map of water depths and island elevations.
Figure 10: Bathymetric depth accuracy assessment: (a) the SonTek acoustic Doppler current profiler (ADCP) and GPS antenna mounted on a small catamaran; (b) dispersion plot of the assessment results.
Figure 11: Titan footprints and surface illumination from a single pass: (a) intensity rendering of a test swath from channel 1; (b) positions and footprints of returns from all channels within the red sample square, illustrating how much of the target surface the laser beams illuminate.
Figure 12: Image maps of the Taylor and Pearse valleys, Antarctica: (a) topographic relief based on the lidar DEM; (b) laser return density.
Article
Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar
by Hao Tang, Anu Swatantran, Terence Barrett, Phil DeCola and Ralph Dubayah
Remote Sens. 2016, 8(9), 771; https://doi.org/10.3390/rs8090771 - 19 Sep 2016
Cited by 48 | Viewed by 7965
Abstract
Airborne single-photon lidar (SPL) is a new technology that holds considerable potential for forest structure and carbon monitoring at large spatial scales because it acquires 3D measurements of vegetation faster and more efficiently than conventional lidar instruments. However, SPL instruments use green-wavelength (532 nm) lasers, which are sensitive to background solar noise, and therefore SPL point clouds require more elaborate noise filtering than those of other lidar instruments to determine canopy heights, particularly in daytime acquisitions. Histogram-based aggregation is a commonly used approach for removing noise from photon-counting lidar data, but it reduces the resolution of the dataset. Here we present an alternative voxel-based spatial filtering method that filters noise points efficiently while largely preserving the spatial integrity of SPL data. We develop and test our algorithms on an experimental SPL dataset acquired over Garrett County in Maryland, USA. We then compare canopy attributes retrieved using our new algorithm with those obtained from the conventional histogram-binning approach. Our results show that canopy heights derived using the new algorithm agree strongly with field-measured heights (r² = 0.69, bias = 0.42 m, RMSE = 4.85 m) and discrete return lidar (DRL) heights (r² = 0.94, bias = 1.07 m, RMSE = 2.42 m). The results are consistently better than height accuracies from the histogram method (field data: r² = 0.59, bias = 0.00 m, RMSE = 6.25 m; DRL: r² = 0.78, bias = −0.06 m, RMSE = 4.88 m). Furthermore, we find that the spatial filtering method retains fine-scale canopy structure detail and has lower errors over steep slopes. We therefore believe that automated spatial filtering algorithms such as the one presented here can support large-scale canopy structure mapping from airborne SPL data.
(This article belongs to the Special Issue Airborne Laser Scanning)
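The kernel of the voxel-based filter can be condensed to a few lines: photons are binned into 3-D voxels and a point is kept only when its voxel's photon count exceeds a noise threshold. The voxel dimensions and count threshold below are illustrative assumptions, and the published algorithm additionally weighs voxel neighborhoods, as the discussion of voxel sizes in Figure 6 below suggests.

    import numpy as np

    def voxel_filter(xyz, vox=(1.5, 1.5, 1.0), min_count=4):
        """Boolean mask keeping points whose voxel holds >= min_count photons."""
        ijk = np.floor(xyz / np.asarray(vox)).astype(np.int64)   # voxel indices
        _, inverse, counts = np.unique(ijk, axis=0,
                                       return_inverse=True, return_counts=True)
        return counts[inverse] >= min_count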
Show Figures

Graphical abstract
Figure 1: Overview of SPL data acquired from the High Resolution Quantum Lidar System (HRQLS): (a) the conical scanning mechanism with forward and backward scans; (b) point clouds acquired in an individual scan; (c) a cross-sectional profile composited from multiple scans; (d) 3D point clouds including photons from canopy, terrain and solar noise.
Figure 2: Flowchart of deriving canopy height from SPL data using two independent methods: the histogram-based method and the spatial-filtering method.
Figure 3: (a) Field heights versus SPL canopy heights (p99) for both methods; (b) height differences between field data and SPL as a function of plot-averaged slope: histogram method ΔH = 0.43 × Slope − 4.05 (r² = 0.28, p < 0.01); spatial-filtering method ΔH = 0.26 × Slope − 2.07 (r² = 0.18, p < 0.01). Symbols distinguish method (red: histogram; blue: spatial filtering) and forest type (deciduous broadleaf, coniferous, mixed).
Figure 4: (a) Canopy heights from discrete return lidar (DRL) versus SPL for both methods; (b) height differences between DRL and SPL versus slope: histogram method ΔH = 0.21 × Slope − 1.88 (r² = 0.11, p < 0.01); no significant slope relationship for the spatial-filtering method (p = 0.77). Same legend as Figure 3.
Figure 5: A plot-level comparison on a slope of about 30° (DRL canopy height 33.87 m, field height 30.3 m): raw level 1 HRQLS data with noise above and below the canopy-terrain layer (left); the histogram method's pseudo-waveform with identified canopy top (616.15 m), ground peak (574.35 m) and canopy height (42.3 m) (center); and the spatial-filtering result with ground points in blue, canopy points in red and an estimated canopy height (p99) of 35.46 m (right).
Figure 6: Impact of voxel size on noise removal at the individual tree level, for three combinations of horizontal (xy) and vertical (z) resolution: all three identify the majority of noise photons above the canopy and below the ground, but an extra-fine voxel may miss the top of individual trees and an extra-coarse horizontal resolution may miss an entire small tree in open space.
Article
Evaluation of Single Photon and Geiger Mode Lidar for the 3D Elevation Program
by Jason M. Stoker, Qassim A. Abdullah, Amar Nayegandhi and Jayna Winehouse
Remote Sens. 2016, 8(9), 767; https://doi.org/10.3390/rs8090767 - 19 Sep 2016
Cited by 73 | Viewed by 12303
Abstract
Data acquired by Harris Corporation's (Melbourne, FL, USA) Geiger-mode IntelliEarth™ sensor and Sigma Space Corporation's (Lanham-Seabrook, MD, USA) Single Photon HRQLS sensor were evaluated and compared to accepted 3D Elevation Program (3DEP) data and surveyed ground control to assess the suitability of these new technologies for the 3DEP. Although these sensors cannot currently collect data that meet the USGS lidar base specification, this is partially because the specification was written specifically for linear-mode systems. With a little effort on the part of the manufacturers of the new lidar systems and the USGS lidar specifications team, data from these systems could soon serve the 3DEP program and its users. Many of the shortcomings noted in this study are reported to have been corrected or improved upon in the next generation of sensors.
(This article belongs to the Special Issue Airborne Laser Scanning)
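For readers unfamiliar with the NVA and VVA checkpoint statistics referenced in Figure 3 below, a minimal sketch following ASPRS positional accuracy practice: NVA is reported as 1.96 × RMSEz over non-vegetated checkpoints, and VVA as the 95th percentile of absolute errors over vegetated ones. Inputs are assumed to be arrays of lidar-minus-survey elevation differences.

    import numpy as np

    def nva(dz_nonveg):
        """Non-vegetated Vertical Accuracy at the 95% confidence level."""
        return 1.96 * np.sqrt(np.mean(np.square(dz_nonveg)))

    def vva(dz_veg):
        """Vegetated Vertical Accuracy: 95th percentile of absolute errors."""
        return float(np.percentile(np.abs(dz_veg), 95))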
Show Figures

Graphical abstract
Figure 1: Areas of interest used for processing and assessing bare earth.
Figure 2: Location of the leaf-on linear-mode collection (LMWptLO15), IntelliEarth™ daytime collection (GMHarLO15_7.5kDT), and IntelliEarth™ sensor 2 leaf-off collection (GMHarLF15_26k).
Figure 3: Location of checkpoints used for vertical accuracy assessments; plus signs are NVA checkpoints, triangles are VVA checkpoints.
Figure 4: Example of range walk.
Figure 5: Cross section used for comparisons, overlaid on imagery, with sample profiles of HRQLS leaf-on, linear-mode leaf-on, linear-mode leaf-off, and IntelliEarth™ leaf-on data.
Figure 6: Example intensity images from a linear-mode system (top) and the IntelliEarth™ system (bottom).
Figure 7: Correlations between Dewberry's and Woolpert's IntelliEarth™ DEM differences (top) and HRQLS DEM differences (bottom) from accepted 3DEP lidar; r = 0.74 and 0.55, respectively.
Article
Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud
by Xiangyun Hu and Yi Yuan
Remote Sens. 2016, 8(9), 730; https://doi.org/10.3390/rs8090730 - 5 Sep 2016
Cited by 155 | Viewed by 13562
Abstract
Airborne laser scanning (ALS) point cloud data are well suited to digital terrain model (DTM) extraction given their high accuracy in elevation. Existing filtering algorithms that eliminate non-ground points mostly depend on assumptions about, or representations of, terrain features; these assumptions cause errors when the scene is complex. This paper proposes a new method for ground point extraction based on deep learning using deep convolutional neural networks (CNNs). For every point with spatial context, the neighboring points within a window are extracted and transformed into an image. The classification of a point can then be treated as the classification of an image; the point-to-image transformation is carefully crafted by considering the height information in the neighborhood area. After being trained on approximately 17 million labeled ALS points, the deep CNN model can learn how a human operator recognizes a point as a ground point or not. The model performs better than typical existing algorithms in terms of error rate, indicating the significant potential of deep-learning-based methods for feature extraction from point clouds.
(This article belongs to the Special Issue Airborne Laser Scanning)
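The point-to-image step at the heart of the method can be sketched as follows: neighbors within a square window around a query point are rasterized into a small grid whose cells hold mean height relative to the query point, yielding an image a CNN can classify. The window size, grid resolution, and mean aggregation are illustrative assumptions rather than the paper's exact crafted transformation.

    import numpy as np

    def point_to_image(xyz, q, window=12.0, px=16):
        """Rasterize neighbors of query point q into a px-by-px relative-height image."""
        dx, dy = xyz[:, 0] - q[0], xyz[:, 1] - q[1]
        near = (np.abs(dx) < window / 2) & (np.abs(dy) < window / 2)
        cell = window / px
        ix = np.clip(((dx[near] + window / 2) / cell).astype(int), 0, px - 1)
        iy = np.clip(((dy[near] + window / 2) / cell).astype(int), 0, px - 1)
        total = np.zeros((px, px))
        count = np.zeros((px, px))
        np.add.at(total, (iy, ix), xyz[near, 2] - q[2])   # sum of relative heights
        np.add.at(count, (iy, ix), 1)
        return np.where(count > 0, total / np.maximum(count, 1), 0.0)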
Show Figures

Graphical abstract
Figure 1: Workflow of the proposed approach; "T" denotes samples of ground points and "F" samples of non-ground points.
Figure 2: Point-to-image transformation (source: own study in the "FugroViewer").
Figure 3: The architecture of the proposed deep CNN.
Figure 4: Four examples of the training ALS point clouds with different terrain features: (a,b) flat terrain with buildings and farmland; (c,d) mountainous terrain. White denotes ground points, green denotes non-ground points.
Figure 5: Training samples of the feature images corresponding to (a) ground points and (b) non-ground points.
Figure 6: Detailed comparison of methods across 15 samples in the ISPRS dataset: error rates of (a) TerraScan, (b) Mongus 2012, (c) SGF, (d) Axelsson, (e) Mongus 2014, (f) deep CNN.
Figure 7: Error rates of (a) TerraScan and (b) deep CNN in 40 test cases.
Figure 8: Comparison of root mean square error (RMSE) between the generated DTMs and the ground truths.
Figure 9: Detailed DTM differences between the proposed method and TerraScan. Column (a): ground-truth TIN-rendered gray images of the test data. Columns (b,d): filtering results of TerraScan and deep CNN (white = correctly classified ground, green = correctly classified non-ground, red = accepted non-ground, blue = rejected ground). Columns (c,e): TIN-rendered DTMs from each result; in column (c), blue ellipses mark type I errors and red ellipses type II errors.
Figure 10: Details of the plain area: (a) raw ALS data; (b) ground truth; (c) TerraScan result; (d) deep CNN result.
Figure 11: Details of the mountain area: (a) raw ALS data; (b) ground truth; (c) TerraScan result; (d) deep CNN result.
Figure 12: Details of the complex area: (a) raw ALS data; (b) ground truth; (c) TerraScan result; (d) deep CNN result.
Figure 13: (a) An area where the deep CNN model accepts many wrong ground points (same color scheme as Figure 9); (b) profile of the area; (c) DTM of the area from the deep CNN model; (d) ground-truth DTM. The RMSE between the two DTMs in this section is 0.1 m.
Article
Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods
by Huan Ni, Xiangguo Lin, Xiaogang Ning and Jixian Zhang
Remote Sens. 2016, 8(9), 710; https://doi.org/10.3390/rs8090710 - 1 Sep 2016
Cited by 127 | Viewed by 16627
Abstract
This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D point clouds. The method, named Analysis of Geometric Properties of Neighborhoods (AGPN), includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes the geometric properties of each query point's neighborhood, then combines RANdom SAmple Consensus (RANSAC) with an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting within the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method achieves promising, reliable performance in detecting edges and tracing feature lines in 3D point clouds. Moreover, AGPN is insensitive to the point density of the input data.
(This article belongs to the Special Issue Airborne Laser Scanning)
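The angular gap test used in the edge detection step admits a compact illustration: a point's neighbors are projected onto its local plane, their bearing angles around the point are sorted, and the point becomes an edge candidate when the largest angular gap is large (an interior point on a surface has neighbors all around it, so its maximum gap stays small). The sketch fits the plane with PCA purely for brevity, whereas the paper uses a RANSAC fit; the π/2 threshold is an assumption.

    import numpy as np

    def max_angular_gap(p, neighbors):
        """Largest gap (radians) between sorted bearings of neighbors around p."""
        d = neighbors - p
        # Local plane basis: the two dominant principal directions of the neighborhood.
        _, _, Vt = np.linalg.svd(d - d.mean(axis=0), full_matrices=False)
        u, v = Vt[0], Vt[1]
        ang = np.sort(np.arctan2(d @ v, d @ u))
        gaps = np.diff(np.concatenate([ang, [ang[0] + 2.0 * np.pi]]))  # wrap around
        return gaps.max()

    def is_edge_candidate(p, neighbors, gap_thresh=np.pi / 2):
        return max_angular_gap(p, neighbors) > gap_thresh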
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>The definition of two types of edges.</p>
Full article ">Figure 2
<p>Overview of the proposed AGPN.</p>
Full article ">Figure 3
<p>Flowchart for the edge detection step in AGPN.</p>
Full article ">Figure 4
Figure 4. Local plane (rendered in red) fitted by the RANSAC algorithm in the nearest-neighbor point set P: (a,b) show two types of neighbor point sets, with three planes in (a) and two planes in (b).
Figure 5. Distribution of the nearest neighbors of an unlabeled point on a surface: (a) the neighborhood of an interior point; (b) the neighborhood of a point on a boundary.
Figure 6. Distribution of the nearest neighbors of an unlabeled point on a surface-intersecting structure: (a) the neighborhood of an interior point on one of the intersecting surfaces; (b) the neighborhood of a fold-edge point; (c) the neighborhood of a point on two intersecting surfaces with different point densities.
Figure 7. Distribution of G_θ; each point is colored according to its value of G_θ.
Figure 8. Feature line tracing: (a) feature line segments generated by the region-growing method; (b) feature line segments generated by the proposed feature line tracing method. The traced segments are marked in different colors.
Figure 9. Average correctness and mislabeled rates for different values of d_r^1 and d_r^2: (a) results of the edge detection step for different values of d_r^1; (b) results of the feature line tracing step for different values of d_r^2.
Figure 10. Normal estimation in a neighborhood with two intersecting planes: (a) PCA-Normal n; (b) RANSAC-Normal n estimated by our method.
Figure 11. (a) Original input data without down-sampling; (b) the input data down-sampled to 6673 points; (c) correctness rates under different densities.
Figure 12. Results for a small area: (a) the four surfaces and three kinds of edges in the area; (b) the edges detected by our method; (c) the feature line segments traced by our method.
Figure 13. Results of Site 1: (a) edge detection result overlaid on the original input data; (b) edges; (c) traced feature line segments depicted in different colors; (d–f) details of edges and traced feature line segments, demarcated by colored outlines corresponding to (a).
Figure 14. Results of Site 2: (a) edge detection result overlaid on the original input data; (b) edges; (c) traced feature line segments depicted in different colors; (d–f) details of edges and traced feature line segments, demarcated by colored outlines corresponding to (a).
Figure 15. Comparison of the results of AGPN and existing methods: (a) original input data; (b) edges detected by our edge detection method; (c) edges detected by PCL, with the optimal results obtained by testing ten sets of parameters; (d) feature line segments traced by our method; (e) feature line segments traced by the method of [13].
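">
The captions above revolve around one operation: fitting a local plane with RANSAC inside a nearest-neighbor point set, so that the estimated normal (the RANSAC-Normal of Figure 10) is not tilted by a second, intersecting surface the way a PCA normal is. The following minimal NumPy sketch illustrates that idea; the helper name, iteration count, and tolerance are illustrative assumptions, not the AGPN implementation.

    import numpy as np

    def ransac_plane(P, n_iter=200, inlier_tol=0.05, rng=np.random.default_rng(0)):
        """Fit one dominant plane to a small neighborhood P (k x 3 array).

        Returns (unit normal, point on plane, inlier mask). Because the
        plane comes from the largest consensus set, points belonging to a
        second intersecting surface are rejected as outliers instead of
        tilting the normal, unlike a PCA fit of the whole neighborhood.
        """
        best_mask, best_n, best_p = None, None, None
        for _ in range(n_iter):
            tri = P[rng.choice(len(P), 3, replace=False)]   # minimal sample
            n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
            norm = np.linalg.norm(n)
            if norm < 1e-12:                                # degenerate triple
                continue
            n /= norm
            dist = np.abs((P - tri[0]) @ n)                 # point-plane distances
            mask = dist < inlier_tol
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_n, best_p = mask, n, tri[0]
        return best_n, best_p, best_mask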
Article
Detecting Terrain Stoniness From Airborne Laser Scanning Data †
by Paavo Nevalainen, Maarit Middleton, Raimo Sutinen, Jukka Heikkonen and Tapio Pahikkala
Remote Sens. 2016, 8(9), 720; https://doi.org/10.3390/rs8090720 - 31 Aug 2016
Cited by 16 | Viewed by 6033
Abstract
Three methods to estimate the presence of ground surface stones from publicly available Airborne Laser Scanning (ALS) point clouds are presented. The first method approximates the local curvature by local linear multi-scale fitting, and the second method uses discrete-differential Gaussian curvature based on the ground surface triangulation. The third, baseline method applies Laplace filtering to a Digital Elevation Model (DEM) on a 2 m regular grid. All methods produce an approximate Gaussian curvature distribution, which is then vectorized and classified by logistic regression. Two training data sets consisted of 88 and 674 polygons of mass-flow deposits, respectively. The sample polygons are located in sparse-canopy boreal forest, where the density of ALS ground returns is sufficiently high to reveal information about terrain micro-topography. The surface stoniness of each polygon sample was categorized for supervised learning by expert observation on site. The leave-pair-out (L2O) cross-validation of the local linear fit method yields areas under the curve of AUC = 0.74 and AUC = 0.85 on the two data sets, respectively. This performance can be expected to suit real-world applications such as detecting coarse-grained sediments for infrastructure construction. A wall-to-wall predictor based on the study is also demonstrated.
(This article belongs to the Special Issue Airborne Laser Scanning)
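As a concrete reading of the baseline method in the abstract, the sketch below applies a discrete Laplace filter to a 2 m DEM patch, summarizes the response distribution of one sample polygon as a histogram vector, and leaves classification to logistic regression. The kernel normalization, bin edges, and function names are illustrative assumptions, not the authors' exact configuration.

    import numpy as np
    from scipy.ndimage import convolve
    from sklearn.linear_model import LogisticRegression

    # 3x3 discrete Laplace kernel: center height minus the average height
    # of the surrounding ring, cf. the average circumferential height
    # difference of Figure 6 below.
    LAPLACE = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]]) / 8.0

    def polygon_feature(dem_patch, bins=np.linspace(-0.5, 0.5, 21)):
        """Histogram of Laplace responses inside one sample polygon."""
        response = convolve(dem_patch, LAPLACE, mode="nearest")
        hist, _ = np.histogram(response, bins=bins, density=True)
        return hist

    # With X stacking one histogram per polygon and y holding the expert
    # stoniness labels (0/1), the classifier stage is simply
    #   clf = LogisticRegression().fit(X, y)
    # and clf.predict_proba on new cells gives a wall-to-wall stoniness map.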
Figure 1">
Figure 1. The process flow; the methods covered in this paper are highlighted. Data formats: (1) 2 m raster; (2) point cloud; (3) task-specific TIN model; (4) curvature value sets; (5) sample vectors. LLC can optionally use either the original point cloud (2) or the vertex points (3) produced by the SAF TIN model. Wall-to-wall classification is a possibility provided by the resulting binary classifier.
Figure 2. Upper left: the site near Kemijärvi, Finland; the research area is covered by 120 open-data .las files spanning 1080 km². Upper right: the relative locations of the sample polygons, with the number of sample sets in parentheses. Lower left: a view of a sample site in boreal forest. Lower right: approximately the same view after solid angle filtering of the point cloud (see Section 2.4); the stone formation is circled. The location is on UTM map sheet T5212C3, polygon 11240.
Figure 3. A stony (upper row) and a non-stony (lower row) sample polygon. The original polygons are approximated by 10 m × 10 m batches. The ground height (2 m DEM) and its discrete Laplace operator signals with 2 m and 4 m radius are depicted. Border noise has been removed from the actual analysis. The 100 m scale is aligned to north.
Figure 4. The solid angle distribution of positive and negative samples in the data2015 data set. The averages are well separated, but the variation among samples is high.
Figure 5. Approximate properties of the data2015 data set (similar figures for data2014 are not available). Left: the number of stones in a spatial partition as the partitioning range (grid size δ) changes; a sensible approximation of, e.g., local ground inclination is possible only with at least 3 points per grid square. Right: the difference between positive and negative samples lies mainly in the stone size distribution; the practical detection limit is approximately 1.0 m.
Figure 6. Left: the Laplace difference operator returns the height difference between the center point (1) and the average of points A; the modified Laplace difference operator does the same using points B. Each of these two kernels defines an average circumferential height difference Z̄. Right: the geometric relation between Z̄ and the approximate mean curvature κ_H; the horizontal line represents the average ground level at the circumference.
Figure 7. Curvature distributions produced by each method. Upper left: LLC with grid size 2 m. Upper right: LLC with grid size 4 m; a larger grid size results in a narrow band around κ = 0. Lower left: DEM curvatures are characterized by kurtosis. Lower right: the LTC distribution.
Figure 8. Left: the local height from the DEM files over a 30 km × 36 km area; the scale is oriented northwards, and the general location of the rectangle is shown in the upper left of Figure 2. Right: stoniness probability from the DEC method; the scale gives the probability of stones on a particular pixel. Roads and waterways are classified as stony areas; the LLC and LTC methods are much less sensitive to roads and constructed details.
Figure 9. Solid angle filtering: (A) the set of triangles T_k adjoining a point p_k, seen from above; (B) a compartment ijl of the vertex point p_k shown in detail. The solid angle Ω_k is the sum of the compartment angles ω_ilj of Equation (A2); p_l is an arbitrary point directly below the vertex point p_k.
Figure 10. Left: an individual local plane P(p_k, n_k) at grid point c_k and its parameters (the local plane center point p_k and normal n_k). The triangulation T of the grid avoids squares with incomplete data; a local cloud point set Q_{c_k} and the neighboring triangles T_k ⊂ T of grid slot c_k are also depicted. Center: a stone revealed by two adjacent tilted planes; this stone produces a signal at grid size δ = 2 m. Note the number of planes missing due to a lack of cloud points. Right: the grid of size δ = 4 m at the same spot; the stone does not appear and the local variation has disappeared, but the grid is almost full, approximating the sample polygon shape.
Article
A Sparsity-Based Regularization Approach for Deconvolution of Full-Waveform Airborne Lidar Data
by Mohsen Azadbakht, Clive S. Fraser and Kourosh Khoshelham
Remote Sens. 2016, 8(8), 648; https://doi.org/10.3390/rs8080648 - 8 Aug 2016
Cited by 31 | Viewed by 6843
Abstract
Full-waveform lidar systems capture the complete backscattered signal from the interaction of the laser beam with targets located within the laser footprint. The resulting data have advantages over discrete-return lidar, including higher accuracy of the range measurements and the possibility of retrieving additional returns from weak and overlapping pulses. In addition, radiometric characteristics of targets, e.g., the target cross-section, can be retrieved from the waveforms. However, waveform restoration and removal of the effect of the emitted system pulse from the returned waveform are critical for precise range measurement, 3D reconstruction and target cross-section extraction. In this paper, a sparsity-constrained regularization approach for deconvolution of the returned lidar waveform and restoration of the target cross-section is presented. Primal-dual interior point methods are exploited to solve the resulting nonlinear convex optimization problem. The optimal regularization parameter is determined by the L-curve method, which provides high consistency under varied conditions. Quantitative evaluation and visual assessment of the results show the superior performance of the proposed regularization approach, in both removal of the effect of the system waveform and reconstruction of the target cross-section, compared to other prominent deconvolution approaches. This demonstrates the potential of the proposed approach for improving the accuracy of both range measurements and geophysical attribute retrieval. The feasibility and consistency of the presented approach in processing a variety of lidar data acquired under different system configurations is also highlighted.
(This article belongs to the Special Issue Airborne Laser Scanning)
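In symbols, the paper solves min_x 0.5*||Hx - y||^2 + lam*||x||_1, where y is the recorded waveform, H the Toeplitz convolution matrix built from the system waveform, and x the sparse target response. The sketch below substitutes a much simpler solver, ISTA, for the primal-dual interior-point method actually used; the function name and parameter values are illustrative assumptions.

    import numpy as np
    from scipy.linalg import toeplitz

    def ista_deconvolve(y, h, lam=0.05, n_iter=500):
        """Sparse deconvolution: argmin_x 0.5*||Hx - y||^2 + lam*||x||_1.

        ISTA: a gradient step on the quadratic term followed by
        soft-thresholding, the proximal operator of the l1 norm.
        """
        h = np.asarray(h, dtype=float)
        col = np.r_[h, np.zeros(len(y) - len(h))]       # first column of H
        H = toeplitz(col, np.r_[h[0], np.zeros(len(y) - 1)])
        L = np.linalg.norm(H, 2) ** 2                   # Lipschitz constant of the gradient
        x = np.zeros(len(y))
        for _ in range(n_iter):
            z = x - H.T @ (H @ x - y) / L               # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

Sweeping lam and plotting log ||Hx - y|| against log ||x||_1 traces the L-curve; the corner of that curve is the optimal regularization parameter the paper selects (cf. Figure 1 below).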
Figure 1">
Figure 1. Optimal regularization parameter on the L-curve.
Figure 2. (a) The synthetic system waveform; (b–d) selected sample waveforms at different noise levels.
Figure 3. (a) A typical recorded waveform in the NSW dataset; (b) its corresponding original system waveform.
Figure 4. (a–f) Qualitative comparison of sample waveforms retrieved by the proposed method and by the R-L, Wiener filter, Tikhonov regularization, and Gaussian decomposition methods.
Figure 5. Comparison of the proposed method with the others at different noise levels in terms of three metrics: (a) SAM distance; (b) correlation coefficient; (c) Fréchet distance.
Figure 6. (a,b) Negative amplitudes resulting from Gaussian decomposition. Red curves represent individual scatterers, while the sum of all Gaussian functions is shown in black.
Figure 7. L-curve plots for the proposed and Tikhonov regularization approaches on the same waveform at two noise levels: (a) σ = 0.01; (b) σ = 0.05.
Figure 8. (a) Estimated system waveforms based on blind deconvolution; (b) restored target responses using different methods.
Figure 9. (a) The system waveform reconstructed by blind deconvolution versus the average of the recorded system waveforms; (b) comparison of the signals restored using the original incomplete system waveform and the waveform retrieved by blind deconvolution.
Figure 10. (a) L-curve results for the l1-norm regularization and Tikhonov regularization methods; (b) signals restored by the different methods.
Figure 11. (a) The raw received waveform and its noise-reduced version; (b) the differential cross-sections recovered by the different methods.
Figure 12. (a) Target cross-section; (b) reflectance; (c) backscatter coefficient for the selected targets, aggregated over different scan angles.
Article
Extracting Canopy Surface Texture from Airborne Laser Scanning Data for the Supervised and Unsupervised Prediction of Area-Based Forest Characteristics
by Mikko T. Niemi and Jari Vauhkonen
Remote Sens. 2016, 8(7), 582; https://doi.org/10.3390/rs8070582 - 9 Jul 2016
Cited by 18 | Viewed by 6314
Abstract
Area-based analyses of airborne laser scanning (ALS) data are an established approach for obtaining wall-to-wall predictions of forest characteristics over vast areas. Analyses of sparse data in particular are based on the height value distributions, which do not produce optimal information on the horizontal forest structure. We evaluated the complementary potential of features quantifying the textural variation of ALS-based canopy height models (CHMs) for both supervised (linear regression) and unsupervised (k-means clustering) analyses. Based on a comprehensive literature review, we identified a total of four texture analysis methods that produce rotation-invariant features of different order and scale. The CHMs and the textural features were derived from practical sparse-density, leaf-off ALS data originally acquired for ground elevation modeling. The features were extracted from a circular window of 254 m² and related to boreal forest characteristics observed on altogether 155 field sample plots. Features based on gray-level histograms, the distribution of forest patches, and gray-level co-occurrence matrices were related to plot volume, basal area, and mean diameter with coefficients of determination (R²) of up to 0.63–0.70, whereas features measuring the uniformity of local binary patterns of the CHMs performed more poorly. Overall, the textural features compared favorably with benchmark features based on the point data, indicating that the textural features contain additional information useful for the prediction of forest characteristics. Due to the developed processing routines for raster data, the CHM features may be extracted with a lower computational burden, which promotes their use for applications such as pre-stratification or guiding field plot sampling based solely on ALS data.
(This article belongs to the Special Issue Airborne Laser Scanning)
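Among the four texture families mentioned in the abstract, gray-level co-occurrence matrices (GLCMs) are the most straightforward to reproduce. The sketch below, assuming scikit-image ≥ 0.19, quantizes a CHM window to a few gray levels and averages the GLCM features over four angles to obtain rotation-invariant values; the gray-level count, height cap, and feature selection are illustrative assumptions, not the study's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    def chm_glcm_features(chm, levels=16, max_height=30.0):
        """Rotation-invariant GLCM features for one plot-sized CHM window."""
        # Quantize canopy heights to integer gray levels for the GLCM.
        q = np.clip(chm / max_height * (levels - 1), 0, levels - 1).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=levels, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()             # mean over the 4 angles
                for p in ("contrast", "homogeneity", "energy", "correlation")}

The resulting feature vectors could then feed scikit-learn's LinearRegression for the supervised case, or KMeans for the unsupervised clustering described in the abstract.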
Figure 1">
Figure 1. (a) Points with height values >5 m above ground level extracted from an example plot; (b–f) canopy height models interpolated from the point data using pixel sizes of 0.5 m, 0.75 m, 1.0 m, 2.0 m, and 3.0 m, respectively. In (a), the dot sizes are scaled according to the height values, and the tick marks of each sub-plot correspond to a horizontal distance of 2 m.
Figure 2. The R² between selected textural features and total stem volume with pixel sizes of 0.5 m and 1.0 m. The interpolation parameter α was set to 0.1, 0.5, 1, 2, …, 9, and 10.
Figure 3. The effect of pixel size on the coefficients of determination (R²) between selected textural features and plot-level forest attributes (total stem volume, basal area, mean diameter (DBH), and DBH variation). The interpolation parameter α was set to 5.
Figure 4. (a) The relationship between the airborne laser scanning estimate of mean height (H) × canopy cover (CC) and plot volume (V) in the data studied. Filled and open circles correspond to clusters containing single or several plots, respectively, and the size of an open circle indicates the dispersion of the plot from its cluster center. (b) The development of the root-mean-square error when V is predicted from Equation (6) with an increasing number of sample plots used for model fitting, selected in order of decreasing dispersion.
Article
An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation
by Wuming Zhang, Jianbo Qi, Peng Wan, Hongtao Wang, Donghui Xie, Xiaoyan Wang and Guangjian Yan
Remote Sens. 2016, 8(6), 501; https://doi.org/10.3390/rs8060501 - 15 Jun 2016
Cited by 1133 | Viewed by 53603
Abstract
Separating point clouds into ground and non-ground measurements is an essential step in generating digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms require a number of complicated parameters to be set up carefully to achieve high accuracy. In this paper, we present a new filtering method that needs only a few easy-to-set integer and Boolean parameters. In the proposed approach, the LiDAR point cloud is inverted, and a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined, producing an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points with the generated surface. Benchmark datasets provided by ISPRS (International Society for Photogrammetry and Remote Sensing) Working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help users without much experience to apply LiDAR data and related technology in their own applications more easily.
(This article belongs to the Special Issue Airborne Laser Scanning)
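A heavily simplified sketch of the cloth mechanics follows: the cloud is inverted, a raster cloth falls under gravity, particles are clamped wherever they would pass through a LiDAR point (the intersection check), and movable particles are pulled toward their neighbors (the internal forces). Grid handling, the wrap-around boundary of np.roll, and all parameter values are simplifying assumptions, not the published CSF implementation.

    import numpy as np

    def csf_ground_surface(z_inv, n_iter=500, dz=0.1, rigidness=0.5):
        """Minimal cloth simulation on a rasterized, inverted point cloud.

        z_inv holds, per cell, the maximum inverted height -z (i.e. the
        lowest original return). Returns the settled cloth heights; the
        approximate ground surface is -cloth, and points close to it can
        be labeled as ground.
        """
        cloth = np.full_like(z_inv, z_inv.max() + 1.0)  # start above the surface
        movable = np.ones(z_inv.shape, dtype=bool)
        for _ in range(n_iter):
            cloth[movable] -= dz                        # gravity step
            hit = cloth <= z_inv                        # intersection check
            cloth[hit] = z_inv[hit]
            movable &= ~hit                             # clamped particles stop moving
            neigh = (np.roll(cloth, 1, 0) + np.roll(cloth, -1, 0) +
                     np.roll(cloth, 1, 1) + np.roll(cloth, -1, 1)) / 4.0
            cloth[movable] += rigidness * (neigh - cloth)[movable]  # internal forces
        return cloth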
Figure 1">
Figure 1. Overview of the cloth simulation algorithm.
Figure 2. Schematic illustration of the mass-spring model. Each circle indicates a particle and each line represents a spring.
Figure 3. Main steps in CSF: (a) initial state, with the cloth placed above the inverted LiDAR measurements; (b) the displacement of each particle is calculated under the influence of gravity, so some particles may end up below the ground measurements; (c) intersection check: particles below the ground are moved onto the ground and set as unmovable; (d) internal forces: the movable particles are moved according to forces produced by neighboring particles.
Figure 4. Constraint between particles.
Figure 5. Parameterization of rigidness.
Figure 6. Post-processing of steep slopes.
Figure 7. Illustration of a strongly connected component (SCC). Movable particles are handled from 1 to 18.
Figure 8. Results for each group, with samp31, samp11, and samp53 as representatives: (first column) the original datasets; (second column) the DTMs generated from the reference data; (third column) the DTMs produced by the CSF algorithm; (last column) the spatial distributions of the type I and type II errors.
Figure 9. Removal of large buildings in an urban area: (a) cross-sections from (b) and (c); (b) dataset 1; (c) the produced DTM. This dataset contains a number of connected large, low buildings (see the cross-section); when the cloth is relatively hard it does not drop into this large hole, so these buildings can be removed.
Figure 10. Preservation of microtopography: (a) cross-sections from (b) and (c); (b) dataset 2; (c) the produced DTM. When post-processing is enabled the cloth sticks to the surface more closely, and some small steep slopes are preserved.
Figure 11. Sparse ground measurements: (a) cross-sections from (b) and (c); (b) dataset 3; (c) the produced DTM. In some hilly areas parts of the cloth may not stick to the ground well, which causes classification errors (bare-earth points may be treated as object points).
Figure 12. Preservation of steep slopes: (a) cross-sections from (b) and (c); (b) dataset 4; (c) the produced DTM. The main objects in this dataset are vegetation, and it contains a large number of ground measurements, so the cloth can be made very soft to fit the terrain shape as closely as possible with little regard for the type II error. Combined with post-processing, large steep slopes are preserved well.
Figure 13. Total errors for each time step: (a) group I; (b) group II; (c) group III; (d) mean.
Figure 14. Total errors for each grid resolution: (a) group I; (b) group II; (c) group III; (d) mean.
Figure 15. The influence of h_cc on total errors: (a) group I; (b) group II; (c) group III; (d) mean.
Figure 16. Maximum height variation (M_HV) and average height variation (A_HV).
Figure 17. Simulated cloth over an area with steep slopes: (a) the simulated cloth before post-processing; (b) the simulated cloth after post-processing.
Figure 18. Illustration of a bridge: the height variation along the road is much less than that in the direction of the river.
Article
Three-Dimensional Reconstruction of Building Roofs from Airborne LiDAR Data Based on a Layer Connection and Smoothness Strategy
by Yongjun Wang, Hao Xu, Liang Cheng, Manchun Li, Yajun Wang, Nan Xia, Yanming Chen and Yong Tang
Remote Sens. 2016, 8(5), 415; https://doi.org/10.3390/rs8050415 - 16 May 2016
Cited by 16 | Viewed by 7728
Abstract
A new approach for three-dimensional (3-D) reconstruction of building roofs from airborne light detection and ranging (LiDAR) data is proposed; it comprises four steps. Building roof points are first extracted from the LiDAR data using the reversed iterative mathematic morphological (RIMM) algorithm and a density-based method. The corresponding relations between points and rooftop patches are then established through a smoothness strategy involving "seed point selection, patch growth, and patch smoothing." Layer-connection points are then generated to represent a layer in the horizontal direction and to connect different layers in the vertical direction. Finally, by connecting neighboring layer-connection points, building models are constructed at the second level of detail. The key contributions of this approach are the use of layer-connection points and the smoothness strategy for building model reconstruction. Experimental results are analyzed from several aspects, namely the correctness and completeness, the deviation of the reconstructed building roofs, and the influence of elevation on 3-D roof reconstruction. In the two experimental regions used in this paper, the completeness and correctness of the reconstructed rooftop patches were about 90% and 95%, respectively. For deviation accuracy, the average deviation distance and standard deviation were 0.05 m and 0.18 m in the best case, and 0.12 m and 0.25 m in the worst case. The experimental results demonstrate promising correctness, completeness, and deviation accuracy, with satisfactory 3-D building roof models.
(This article belongs to the Special Issue Airborne Laser Scanning)
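The "seed point selection, patch growth" portion of the smoothness strategy can be pictured as a flood fill over point neighborhoods constrained by normal agreement. The sketch below is one plausible reading under that assumption; the neighbor radius, angle tolerance, minimum patch size, and seed ordering are illustrative choices, not the authors' rules.

    import numpy as np
    from scipy.spatial import cKDTree

    def grow_rooftop_patches(points, normals, radius=1.0,
                             angle_tol_deg=10.0, min_size=30):
        """Group roof points (n x 3) into planar patches by flood-filling
        over neighbors whose normals agree with the current point's normal.
        Returns a patch label per point (-2 marks discarded small patches)."""
        tree = cKDTree(points[:, :2])
        labels = np.full(len(points), -1)
        cos_tol = np.cos(np.radians(angle_tol_deg))
        patch_id = 0
        for seed in np.argsort(-points[:, 2]):          # seed high points first
            if labels[seed] != -1:
                continue
            labels[seed] = patch_id
            stack, members = [seed], []
            while stack:
                i = stack.pop()
                members.append(i)
                for j in tree.query_ball_point(points[i, :2], radius):
                    if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_tol:
                        labels[j] = patch_id
                        stack.append(j)
            if len(members) < min_size:                 # too small: treat as noise
                labels[np.array(members)] = -2
            else:
                patch_id += 1
        return labels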
Figure 1">
Figure 1. Smoothing of building rooftop points: (a) original points; (b) segmented points; (c) smoothed points.
Figure 2. An example of layer-connection points: (a) yellow points represent the first layer (ground), blue points the second layer, and red points the third layer; (b) an enlarged view; (c,d) lines connecting points from two layers; (e) a line connecting points from three layers.
Figure 3. Example of merging rooftop patches into a layer: (a) points with different colors represent different rooftop patches, and the blue line represents the intersection line between S1 and S2; (b) the red and blue points together represent one roof layer.
Figure 4. Calculation of layer-connection points; points with different colors represent different roof layers, and the blue rectangle represents the x–y coordinates of the derived layer-connection point: (a) the points inside the five cells belong to the same roof layer; (b–e) the points inside the five cells belong to different roof layers.
Figure 5. An example of building model reconstruction: (a) layer-connection points; (b) rooftop construction; (c) wall construction.
Figure 6. Experimental region 1: (a) aerial orthophotos with 0.3 m resolution (no-data areas shown by yellow boxes); (b) airborne LiDAR data; (c) no-data areas (black), corresponding to the lettered yellow boxes in (a).
Figure 7. Experimental region 2: (a) aerial orthophotos with 0.3 m resolution; (b) airborne LiDAR data.
Figure 8. Reconstruction results in region 1: (a) an overview; (b) a side view of the local reconstructed roof models; (c,d) building roof models for the red box in (b).
Figure 9. Reconstruction results in region 2: (a) an overview; (b) a side view of the local reconstructed roof models; (c,d) building roof models for the red box in (b).
Figure 10. Deviation distances between the reconstructed building roof models and the LiDAR-derived validation data, represented by points of different colors: (a,b) regions 1 and 2, respectively.
Figure 11. Evaluation of roof deviations at different elevations; the solid squares represent the average deviation distance in each elevation range, and the error bars represent the positive and negative deviations from each average: (a,b) regions 1 and 2, respectively.
Figure 12. Comparison of approaches (abbreviated App.) A, B, and C: (a–c) the reconstructed roof models of buildings 1, 2, and 3, respectively.
Figure 13. Comparison of approaches (abbreviated App.) A, B, and C: (a–c) the reconstructed roof models of buildings 4, 5, and 6, respectively.
Figure 14. Roughness comparison between approaches A and B: (a,c) roughness of the roof models reconstructed using approach A for buildings 4 and 6; (b,d) roughness of the roof models reconstructed using approach B for buildings 4 and 6, respectively. Data are represented by points of different colors.
Article
Detection and Segmentation of Small Trees in the Forest-Tundra Ecotone Using Airborne Laser Scanning
by Marius Hauglin and Erik Næsset
Remote Sens. 2016, 8(5), 407; https://doi.org/10.3390/rs8050407 - 11 May 2016
Cited by 17 | Viewed by 5563
Abstract
Due to expected climate change and an increased focus on forests as a potential carbon sink, it is of interest to map and monitor even marginal forests where trees exist close to their tolerance limits, such as small pioneer trees in the forest-tundra ecotone. Such small trees might indicate tree line migration and the expansion of forests into treeless areas. Airborne laser scanning (ALS) has been suggested and tested as a tool for this purpose, and in the present study a novel procedure for the identification and segmentation of small trees is proposed. The study was carried out in the Rollag municipality in southeastern Norway, where ALS data and field measurements of individual trees were acquired. The point density of the ALS data was eight points per m², and the field tree heights ranged from 0.04 to 6.3 m, with a mean of 1.4 m. The proposed method is based on an allometric model relating field-measured tree height to crown diameter, and another model relating field-measured tree height to ALS-derived height. These models are calibrated with local field data. Using these simple models, every positive above-ground height derived from the ALS data can be related to a crown diameter, and by assuming a circular crown shape, this crown diameter can be extended to a crown segment. Applying the model to all ALS echoes with a positive above-ground height yields an initial map of possible circular crown segments. The final crown segments are then derived by applying a set of simple rules to this initial "map" of segments. The resulting segments were validated by comparison with field-measured crown segments. Overall, 46% of the field-measured trees were successfully detected, and the detection rate increased with tree size; for trees with height >3 m the detection rate was 80%. The relatively large detection errors were partly due to inherent limitations of the ALS data: a substantial fraction of the smaller trees was hit by no or only a few laser pulses. This prevents reliable detection of changes at the individual tree level, but monitoring changes at the area level could be a possible application of the method. The results further showed that some variation must be expected when the method is used for repeated measurements, although no significant differences in the mean number of segmented trees were found over an intensively measured test area of 11.4 ha.
(This article belongs to the Special Issue Airborne Laser Scanning)
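Both calibrated models are linear (see Figure 5 below), so chaining them maps every positive ALS echo height to a circular crown segment. A minimal sketch with hypothetical function names, where the arguments stand in for the local field calibration data:

    import numpy as np

    def calibrate(h_field, crown_d, h_als):
        """Fit the two linear models: crown diameter ~ field height
        (Equation (1)) and field height ~ ALS-derived height (Equation (2))."""
        a1, b1 = np.polyfit(h_field, crown_d, 1)
        a2, b2 = np.polyfit(h_als, h_field, 1)
        return (a1, b1), (a2, b2)

    def echo_to_crown_segment(x, y, z_als, m1, m2):
        """Map one ALS echo with above-ground height z_als to a circular
        crown segment (x, y, radius) by chaining the two models."""
        (a1, b1), (a2, b2) = m1, m2
        h = a2 * z_als + b2          # ALS height -> predicted tree height
        d = a1 * h + b1              # tree height -> predicted crown diameter
        return x, y, max(d, 0.0) / 2.0

The initial map of circular segments produced this way is then pruned and merged by the rule set of the paper, e.g., the overlap-based merging illustrated in Figure 3.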
Figure 1">
Figure 1. The landscape and vegetation in the study area.
Figure 2. Visualization of five field-measured trees and the ALS2012 echoes in the corresponding area, viewed from above (upper figure) and from the side (lower figure). The field-measured heights and crown extents are colored red, and the ALS echoes are colored from grey to black, with the highest echoes in black. Note that the terrain height has been subtracted from the ALS echo heights (see text for details) and that only a sample of the trees was measured.
Figure 3. Graphical representation of steps in the segmentation procedure: (a) laser echoes viewed from above, with darker color indicating echoes higher above ground; (b) each echo is associated with a circular segment; the segment of the echo highest above ground is created first, and echoes inside this segment are treated as reflected from it; (c) overlapping circular segments are merged based on their degree of overlap (see text for details), with the echoes of the smaller segments added as vertices of the larger segment to form the final segment (shown in black).
Figure 4. Flow diagram outlining the segmentation process.
Figure 5. Observed versus predicted values for the two linear models: the height–crown diameter model of Equation (1) (top) and the ALS height–field height model of Equation (2) (bottom).
Figure 6. Single-tree segments from the described procedure (hollow segments) and field-measured crown ellipses of detected (light grey) and undetected (dark grey) trees. Only a sample of the trees was measured in the field; the ALS echoes are colored according to above-ground height.
Figure 7. Visualization of single-tree segments and ALS2012 echoes covering the same area as Figure 2, viewed from above (upper figure) and from the side (lower figure). The segment extents and estimated tree heights are colored green, and the ALS echoes are colored from grey to black, with the highest echoes in black. The terrain height has been subtracted from the ALS echo heights (see Section 2.3 for details).
Article
Fast and Accurate Plane Segmentation of Airborne LiDAR Point Cloud Using Cross-Line Elements
by Teng Wu, Xiangyun Hu and Lizhi Ye
Remote Sens. 2016, 8(5), 383; https://doi.org/10.3390/rs8050383 - 5 May 2016
Cited by 19 | Viewed by 7718
Abstract
Plane segmentation is an important step in feature extraction and 3D modeling from light detection and ranging (LiDAR) point clouds. The accuracy and speed of plane segmentation are two issues that are difficult to balance, particularly when dealing with massive point clouds of millions of points. A fast and easy-to-implement plane segmentation algorithm based on cross-line element growth (CLEG) is proposed in this study. The point cloud is converted into grid data, and the points are segmented into line segments with the Douglas-Peucker algorithm. Each point is then assigned to a cross-line element (CLE) obtained by segmenting the points in the cross directions. A CLE determines one plane; this is the rationale of the algorithm. After a seed CLE is selected, CLE growth and point growth are combined to obtain the segmented facets. The CLEG algorithm is validated by comparing it with popular methods, such as RANSAC, 3D Hough transformation, principal component analysis (PCA), iterative PCA, and a state-of-the-art global-optimization-based algorithm. Experiments indicate that the CLEG algorithm runs much faster than the other algorithms: it produces accurate segmentation at a speed of 6 s per 3 million points. The proposed method also exhibits good accuracy.
(This article belongs to the Special Issue Airborne Laser Scanning)
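The first stage of CLEG splits the height profile of each grid row and column into straight line segments with the Douglas-Peucker algorithm; the two segments crossing at a point then form its cross-line element. A self-contained sketch of the Douglas-Peucker step on a 2-D profile follows; the tolerance value is an illustrative assumption.

    import numpy as np

    def douglas_peucker(pts, tol=0.1):
        """Recursively simplify a polyline (n x 2 array of, e.g., grid
        index and height). Points farther than tol from the chord between
        the endpoints become breakpoints, splitting the profile into the
        straight segments used to build cross-line elements."""
        if len(pts) < 3:
            return pts
        start, end = pts[0], pts[-1]
        seg = end - start
        seg_len = np.linalg.norm(seg)
        dx = pts[:, 0] - start[0]
        dy = pts[:, 1] - start[1]
        if seg_len == 0:
            d = np.hypot(dx, dy)                        # degenerate chord
        else:
            d = np.abs(seg[0] * dy - seg[1] * dx) / seg_len  # distance to chord
        i = int(np.argmax(d))
        if d[i] <= tol:
            return np.vstack([start, end])              # one straight segment
        left = douglas_peucker(pts[:i + 1], tol)
        right = douglas_peucker(pts[i:], tol)
        return np.vstack([left[:-1], right])            # merge, dropping duplicate

Applied to a row profile z as douglas_peucker(np.column_stack([np.arange(len(z)), z])), the returned breakpoints delimit the line segments that are then grown in the CLE step.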
Figure 1">
Figure 1. Workflow of plane segmentation using cross-line elements.
Figure 2. Line segmentation in four directions.
Figure 3. Seed CLE and the neighbors of its cross-point.
Figure 4. Points on a CLE may not lie on a plane.
Figure 5. A cross-line element and its characteristics.
Figure 6. Cross-point of a CLE.
Figure 7. Step one of CLE growth.
Figure 8. Segmentation of roof points in Vaihingen: (a) complex roof; (b) with noise points; (c) with small planes.
Figure 9. Segmentation of roof points in the Wuhan area: (a) normal structure; (b) complex structure; (c) symmetric structure.
Figure 10. Segmentation of roof points in the Guangzhou area: (a) weak edge; (b) symmetric structure; (c) complex structure.
Figure 11. A disadvantage of the proposed method.
Figure 12. Segmentation results in the Vaihingen area using dataset (a).
Figure 13. Segmentation results in the Wuhan area using dataset (b).
Figure 14. Segmentation results in the Guangzhou area using dataset (c).
Figure 15. Segmentation results in Vaihingen using dataset (d).
Figure 16. Segmentation results in the Wuhan area using dataset (e).
Figure 17. Segmentation results in the Guangzhou area using dataset (f).
Figure 18. Segmentation results in the Guangzhou area using dataset (g).
Figure 19. The influence of minimum line length: (a) corresponding image; (b) l = 1.8 m, a narrow plane is missed; (c) l = 3.0 m, small planes are missed; (d) l = 4.2 m, more small planes are missed; (e) l = 6.0 m, a large plane is missed; (f) l = 7.2 m, more large planes are missed.