Search Results (1,343)

Search Parameters:
Keywords = point cloud data processing

21 pages, 23509 KiB  
Article
Learning Airfoil Flow Field Representation via Geometric Attention Neural Field
by Li Xiao, Mingjie Zhang and Xinghua Chang
Appl. Sci. 2024, 14(22), 10685; https://doi.org/10.3390/app142210685 - 19 Nov 2024
Viewed by 56
Abstract
Numerical simulation in fluid dynamics can be computationally expensive and difficult to carry out. To enhance efficiency, developing high-performance and accurate surrogate models is crucial, and deep learning shows potential here. This paper introduces geometric attention (GeoAttention), a method that leverages attention mechanisms to encode geometry represented as a point cloud, thereby enhancing the neural network's generalizability across different geometries. Furthermore, by integrating GeoAttention with a neural field, we propose the geometric attention neural field (GeoANF), designed specifically for learning representations of airfoil flow fields. GeoANF embeds observational data into a latent space independently of the specific discretization process, constructing a mapping that relates geometric shape to the corresponding flow fields under given initial conditions. We use the public AirfRANS dataset to evaluate our approach; GeoANF significantly surpasses the baseline models on four key performance metrics, particularly in volume flow field and surface pressure measurements, achieving mean squared errors of 0.0038 and 0.0089, respectively.
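As a rough illustration of the cross-attention idea summarized in this abstract, the following Python sketch (illustrative dimensions and random embeddings, not the authors' architecture) shows how flow-field query points could attend to embeddings of airfoil boundary points and be concatenated with the resulting geometric feature before decoding.

```python
# A minimal sketch (not the paper's code) of the GeoAttention idea: each flow-field
# query point attends to embeddings of airfoil boundary points to obtain a
# geometry-aware feature. All dimensions and data are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n_query, m_boundary)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # (n_query, d_model)

rng = np.random.default_rng(0)
n_query, m_boundary, d_model = 256, 128, 64
query_embed = rng.normal(size=(n_query, d_model))        # encoded query points z
boundary_embed = rng.normal(size=(m_boundary, d_model))  # encoded airfoil boundary points

# Queries come from the flow-field points; keys/values from the airfoil geometry.
g = scaled_dot_product_attention(query_embed, boundary_embed, boundary_embed)

# Concatenate the point representation with its geometric feature before decoding
# to flow-field quantities with an MLP, as described in the abstract.
fused = np.concatenate([query_embed, g], axis=1)          # (n_query, 2 * d_model)
print(fused.shape)
```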
Figures
Figure 1. CFD grid data with pixel representation.
Figure 2. Implicit neural representation.
Figure 3. Airfoil flow field visualization.
Figure 4. Airfoil optimization and GeoANF.
Figure 5. GeoAttention neural field overview: inference and training phases, the three-layer ReLU MLP NN block, the GeoAttention module, and the neural field decoder.
Figure 6. Multi-head attention.
Figure 7. Two GeoAttention situations.
Figure 8. GeoANF training process.
Figure 9. Flow field comparison between the prediction and CFD.
Figure 10. Streamtrace visualization comparison.
Figure 11. Spearman correlation visualization (full data regime).
Figure 12. Error comparison with the real flow field: total error and the errors of velocity-x, velocity-y, pressure, and μt.
Figure 13. Pressure coefficient distribution on the airfoil surface for two CFD cases.
Figure 14. Normalized GeoAttention weight visualization.
18 pages, 46116 KiB  
Article
Structural Complexity Significantly Impacts Canopy Reflectance Simulations as Revealed from Reconstructed and Sentinel-2-Monitored Scenes in a Temperate Deciduous Forest
by Yi Gan, Quan Wang and Guangman Song
Remote Sens. 2024, 16(22), 4296; https://doi.org/10.3390/rs16224296 - 18 Nov 2024
Viewed by 249
Abstract
Detailed three-dimensional (3D) radiative transfer models (RTMs) enable a clear understanding of the interactions between light, biochemistry, and canopy structure, but they are rarely explicitly evaluated due to the limited availability of 3D canopy structure data, leading to a lack of knowledge on how canopy structure and leaf characteristics affect radiative transfer processes within forest ecosystems. In this study, the newly released 3D RTM Eradiate was extensively evaluated based on both virtual scenes, reconstructed using the quantitative structure model (QSM) by adding leaves to point clouds generated from terrestrial laser scanning (TLS) data, and real scenes monitored by Sentinel-2 in a typical temperate deciduous forest. The effects of structural parameters on reflectance were investigated through sensitivity analysis, and the performance of the 3D model was compared with the 5-Scale and PROSAIL radiative transfer models. The results showed that the Eradiate-simulated reflectance achieved good agreement with the Sentinel-2 reflectance, especially in the visible and near-infrared spectral regions. Furthermore, the simulated reflectance, particularly in the blue and shortwave infrared spectral bands, was clearly shown to be influenced by canopy structure when using the Eradiate model. This study demonstrated that the Eradiate RTM, based on an explicit 3D representation, is capable of providing accurate radiative transfer simulations in the temperate deciduous forest and hence provides a basis for understanding tree interactions and their effects on ecosystem structure and functions.
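The study compares simulated and Sentinel-2 reflectance using RMSE, mean bias (MB), and mean gross error (MGE). The short sketch below (with made-up band values, not the study's data) shows how such band-wise metrics are computed.

```python
# A minimal sketch (not the study's code) of the band-wise metrics used to compare
# RTM-simulated canopy reflectance with Sentinel-2 reflectance: root-mean-square
# error (RMSE), mean bias (MB), and mean gross error (MGE).
import numpy as np

simulated = np.array([0.031, 0.052, 0.047, 0.310, 0.228, 0.121])  # stand-in band reflectances
observed  = np.array([0.029, 0.055, 0.049, 0.295, 0.240, 0.130])  # stand-in Sentinel-2 values

diff = simulated - observed
rmse = np.sqrt(np.mean(diff ** 2))
mb   = np.mean(diff)               # positive = simulation overestimates reflectance
mge  = np.mean(np.abs(diff))
print(f"RMSE={rmse:.4f}, MB={mb:+.4f}, MGE={mge:.4f}")
```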
Figures
Figure 1. The workflow of this study.
Figure 2. The locations of the seven TLS-measured points with a Sentinel-2 Level-2A MSI image (R: B4; G: B3; B: B2) as the base map (CRS: EPSG:6676, JGD2011 / Japan Plane Rectangular CS VIII).
Figure 3. The basic leaf shape and information used when adding leaves to the branches.
Figure 4. Reconstruction of the 3D forest scene from TLS point clouds: co-registration of multiple scans, vegetation/ground segmentation, seed clustering, Dijkstra tree segmentation, TreeQSM reconstruction, and FaNNI foliage insertion.
Figure 5. Seven reconstructed virtual forest scenes for RTM simulations.
Figure 6. Global sensitivity analysis of input parameters to the bidirectional reflectance factor (BRF) in the Eradiate radiative transfer model.
Figure 7. Influence of leaf area index (LAI, 1.0 to 7.0) on the simulated BRF of the Eradiate, 5-Scale, and PROSAIL models at a solar zenith angle of 30° over different view zenith angles.
Figure 8. Relationships of reflectance at different LAI levels between Eradiate and the 5-Scale and PROSAIL RTMs.
Figure 9. Comparison of simulated reflectance from Eradiate, 5-Scale, and PROSAIL with reflectance extracted from Sentinel-2 MSI images (lines and shaded areas depict the mean and standard deviation).
Figure 10. Performance (RMSE, MB, MGE) of the Eradiate, 5-Scale, and PROSAIL RTMs against Sentinel-2-extracted reflectance.
13 pages, 2762 KiB  
Article
Advanced Point Cloud Techniques for Improved 3D Object Detection: A Study on DBSCAN, Attention, and Downsampling
by Wenqiang Zhang, Xiang Dong, Jingjing Cheng and Shuo Wang
World Electr. Veh. J. 2024, 15(11), 527; https://doi.org/10.3390/wevj15110527 - 15 Nov 2024
Viewed by 223
Abstract
To address the challenges of limited detection precision and insufficient segmentation of small to medium-sized objects in dynamic and complex scenarios, such as the dense intermingling of pedestrians, vehicles, and various obstacles in urban environments, we propose an enhanced methodology. Firstly, we integrated a point cloud processing module utilizing the DBSCAN clustering algorithm to effectively segment and extract critical features from the point cloud data. Secondly, we introduced a fusion attention mechanism that significantly improves the network's capability to capture both global and local features, thereby enhancing object detection performance in complex environments. Finally, we incorporated a CSPNet downsampling module, which substantially boosts the network's overall performance and processing speed while reducing computational costs through advanced feature map segmentation and fusion techniques. The proposed method was evaluated on the KITTI dataset. Under moderate difficulty, the BEV mAP for detecting cars, pedestrians, and cyclists reached 87.74%, 55.07%, and 67.78%, reflecting improvements of 1.64%, 5.84%, and 5.53% over PointPillars. For 3D mAP, the detection accuracy for cars, pedestrians, and cyclists reached 77.90%, 49.22%, and 62.10%, with improvements of 2.91%, 5.69%, and 3.03% compared to PointPillars.
(This article belongs to the Special Issue Recent Advances in Intelligent Vehicle)
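The point cloud processing module described in the abstract is built around DBSCAN clustering. The sketch below (assuming scikit-learn is available; the eps and min_samples values are illustrative, not the paper's settings) shows the basic clustering step on an (N, 3) LiDAR array.

```python
# A minimal sketch of segmenting a LiDAR point cloud into object clusters with DBSCAN
# before feature extraction; parameters and data are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(5000, 3) * [50.0, 50.0, 3.0]   # stand-in for an (N, 3) LiDAR scan

# eps: neighborhood radius in meters; min_samples: minimum points around a core point.
labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(points)

clusters = [points[labels == k] for k in set(labels) if k != -1]  # -1 marks noise
print(f"{len(clusters)} clusters, {np.sum(labels == -1)} noise points")
```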
Figures
Figure 1. PointPillars network architecture.
Figure 2. Comparison of the point cloud before and after processing.
Figure 3. Feature extraction incorporating the attention mechanism.
Figure 4. Flowchart of the CSPNet network.
Figure 5. Comparison of PointPillars with the proposed algorithm (baseline on the left, proposed approach on the right of each scene); panels (a,d) show improvements for false detections caused by under-segmentation of small objects, and (b,c) show improvements for missed detections caused by occlusion.
27 pages, 27328 KiB  
Article
An Aerial Photogrammetry Benchmark Dataset for Point Cloud Segmentation and Style Translation
by Meida Chen, Kangle Han, Zifan Yu, Andrew Feng, Yu Hou, Suya You and Lucio Soibelman
Remote Sens. 2024, 16(22), 4240; https://doi.org/10.3390/rs16224240 - 14 Nov 2024
Viewed by 471
Abstract
The recent surge in diverse 3D datasets spanning various scales and applications marks a significant advancement in the field. However, the comprehensive process of data acquisition, refinement, and annotation at a large scale poses a formidable challenge, particularly for individual researchers and small teams. To this end, we present a novel synthetic 3D point cloud generation framework that can produce detailed outdoor aerial photogrammetric 3D datasets with accurate ground truth annotations without the labor-intensive and time-consuming data collection and annotation processes. Our pipeline procedurally generates synthetic environments, mirroring real-world data collection and 3D reconstruction processes. A key feature of our framework is its ability to replicate consistent quality, noise patterns, and diversity similar to real-world datasets. This is achieved by adopting UAV flight patterns that resemble those used in real-world data collection processes (e.g., the cross-hatch flight pattern) across various procedurally generated synthetic terrains, thereby ensuring data consistency akin to real-world scenarios. Moreover, the generated datasets are enriched with precise semantic and instance annotations, eliminating the need for manual labeling. Our approach has led to the development and release of the Semantic Terrain Points Labeling—Synthetic 3D (STPLS3D) benchmark, an extensive outdoor 3D dataset encompassing over 16 km², featuring up to 19 semantic labels. We also collected, reconstructed, and annotated four real-world datasets for validation purposes. Extensive experiments on these datasets demonstrate our synthetic datasets' effectiveness, superior quality, and their value as a benchmark dataset for further point cloud research.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
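The abstract mentions replicating real UAV survey patterns such as the cross-hatch flight pattern. A minimal sketch of generating such a waypoint pattern over a rectangular area is given below; the extent, line spacing, and altitude are illustrative assumptions, not the framework's actual parameters.

```python
# A minimal sketch of a cross-hatch UAV flight pattern over a rectangular synthetic
# terrain: lawn-mower passes in one direction followed by perpendicular passes.
import numpy as np

def crosshatch_waypoints(width, height, spacing, altitude):
    """Return an (N, 3) array of waypoints covering a width x height area."""
    waypoints = []
    for i, y in enumerate(np.arange(0.0, height + 1e-9, spacing)):   # west-east passes
        xs = [0.0, width] if i % 2 == 0 else [width, 0.0]            # alternate turn direction
        waypoints += [(x, y, altitude) for x in xs]
    for i, x in enumerate(np.arange(0.0, width + 1e-9, spacing)):    # north-south passes
        ys = [0.0, height] if i % 2 == 0 else [height, 0.0]
        waypoints += [(x, y, altitude) for y in ys]
    return np.array(waypoints)

print(crosshatch_waypoints(width=400.0, height=300.0, spacing=50.0, altitude=80.0).shape)
```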
Figures
Graphical abstract.
Figure 1. The proposed synthetic data generation pipeline.
Figure 2. Class distribution of the real dataset of STPLS3D (logarithmic vertical axis).
Figure 3. Additional examples of synthetic and real-world point clouds in the STPLS3D dataset.
Figure 4. Class distribution of the synthetic subsets of STPLS3D (logarithmic vertical axis; see the appendix for the semantic category definitions).
Figure 5. Qualitative comparison of tree crowns generated by ray casting, synthetic photogrammetry, and real photogrammetry.
Figure 6. Example visualization of the FDc dataset.
Figure 7. Comparison of a real image and point cloud with synthetic data and the style transfer result.
18 pages, 982 KiB  
Review
Remote Sensing and GIS in Natural Resource Management: Comparing Tools and Emphasizing the Importance of In-Situ Data
by Sanjeev Sharma, Justin O. Beslity, Lindsey Rustad, Lacy J. Shelby, Peter T. Manos, Puskar Khanal, Andrew B. Reinmann and Churamani Khanal
Remote Sens. 2024, 16(22), 4161; https://doi.org/10.3390/rs16224161 - 8 Nov 2024
Viewed by 900
Abstract
Remote sensing (RS) and Geographic Information Systems (GISs) provide significant opportunities for monitoring and managing natural resources across various temporal, spectral, and spatial resolutions. There is a critical need for natural resource managers to understand the expanding capabilities of image sources, analysis techniques, and in situ validation methods. This article reviews key image analysis tools in natural resource management, highlighting their unique strengths across diverse applications such as agriculture, forestry, water resources, soil management, and natural hazard monitoring. Google Earth Engine (GEE), a cloud-based platform introduced in 2010, stands out for its vast geospatial data catalog and scalability, making it ideal for global-scale analysis and algorithm development. ENVI, known for advanced multi- and hyperspectral image processing, excels in vegetation monitoring, environmental analysis, and feature extraction. ERDAS IMAGINE specializes in radar data analysis and LiDAR processing, offering robust classification and terrain analysis capabilities. Global Mapper is recognized for its versatility, supporting over 300 data formats and excelling in 3D visualization and point cloud processing, especially in UAV applications. eCognition leverages object-based image analysis (OBIA) to enhance classification accuracy by grouping pixels into meaningful objects, making it effective in environmental monitoring and urban planning. Lastly, QGIS integrates these remote sensing tools with powerful spatial analysis functions, supporting decision-making in sustainable resource management. Together, these tools, when paired with in situ data, provide comprehensive solutions for managing and analyzing natural resources across scales.
Figures
Figure 1. Articles published using different image analysis tools in different time intervals.
Figure 2. Map of sites identified and included in the database.
19 pages, 16743 KiB  
Article
Low-Cost and Contactless Survey Technique for Rapid Pavement Texture Assessment Using Mobile Phone Imagery
by Zhenlong Gong, Marco Bruno, Margherita Pazzini, Anna Forte, Valentina Alena Girelli, Valeria Vignali and Claudio Lantieri
Sustainability 2024, 16(22), 9630; https://doi.org/10.3390/su16229630 - 5 Nov 2024
Viewed by 498
Abstract
Collecting pavement texture information is crucial to understand the characteristics of a road surface and to have essential data to support road maintenance. Traditional texture assessment techniques often require expensive equipment and complex operations. To ensure cost sustainability and reduce traffic closure times, this study proposes a rapid, cost-effective, and non-invasive surface texture assessment technique. The technique consists of capturing a set of images of a road surface with a mobile phone; the images are then used to reconstruct the 3D surface through photogrammetric processing and to derive the roughness parameters used to assess the pavement texture. The results indicate that pavement images taken by a mobile phone can be used to reconstruct the 3D surface and extract texture features with sufficient accuracy, meeting the requirements of time-effective documentation. To validate the effectiveness of this technique, the surface structure of the pavement was analyzed in situ using a 3D structured light projection scanner and rigorous photogrammetry with a high-end reflex camera. The results demonstrated that increasing the point cloud density can enhance the detail level of the real surface 3D representation, but it leads to variations in road surface roughness parameters. Therefore, an appropriate density should be chosen when performing three-dimensional reconstruction using mobile phone images. Mobile phone photogrammetry performs well in detecting shallow road surface textures but has certain limitations in capturing deeper textures. The texture parameters and the Abbott curve obtained using all three methods are comparable and fall within the same range of acceptability. This finding demonstrates the feasibility of using a mobile phone for pavement texture assessments with appropriate settings.
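Pavement texture is summarized here with roughness parameters and the Abbott (bearing-area) curve. The sketch below, which uses synthetic height samples rather than the study's point clouds, shows one common way such quantities can be derived from surface heights.

```python
# A minimal sketch (not the study's code) of deriving an Abbott (bearing-area) curve
# and a simple roughness parameter from gridded surface heights extracted out of a
# photogrammetric point cloud.
import numpy as np

heights = np.random.normal(loc=0.0, scale=0.4, size=10_000)  # stand-in surface heights (mm)

def abbott_curve(z, n_levels=100):
    """Return (material ratio %, cut depth) pairs describing the bearing-area curve."""
    ratios = np.linspace(0.0, 100.0, n_levels)
    # The height exceeded by p% of the surface is the (1 - p/100) quantile.
    depths = np.quantile(z, 1.0 - ratios / 100.0)
    return ratios, depths

ratios, depths = abbott_curve(heights)
rq = np.sqrt(np.mean((heights - heights.mean()) ** 2))  # RMS roughness
print(f"Rq = {rq:.3f} mm; Abbott curve sampled at {len(ratios)} material ratios")
```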
Figures
Figure 1. Asphalt mixture grading curves.
Figure 2. The overall workflow of close-range photogrammetry (CRP).
Figure 3. (a) Parallel-axis capture; (b) schematic of the shooting platform.
Figure 4. Example of an image acquired by the reflex camera containing coded targets.
Figure 5. (a) The structured-light scanner employed; (b) a 3D point cloud obtained with it.
Figure 6. Image of the pavement sweeping site.
Figure 7. Dense point cloud from the mobile-phone-based CRP technique.
Figure 8. Cloud maps with different point cloud sizes and cloud maps from the scanner.
Figure 9. Abbott curves from different point cloud sizes and from the scanner.
Figure 10. Abbott curves for the different methods at five locations.
Figure 11. Roughness parameters for the different locations.
Figure 12. Cloud maps at different locations from mobile-phone-based CRP.
Figure 13. Mobile phone point cloud of Location 4, colored by the Z-value differences (in mm) with respect to the cloud scanned with the SLS.
Figure 14. Cloud maps in the case of contamination.
Figure 15. Abbott curves for the different methods on four samples.
Figure 16. Roughness parameters for the different locations in the case of contamination.
29 pages, 9173 KiB  
Article
Structure Deterioration Identification and Model Updating for Prestressed Concrete Bridges Based on Massive Point Cloud Data
by Zhe Sun, Sihan Zhao, Bin Liang and Zhansheng Liu
Appl. Sci. 2024, 14(21), 10007; https://doi.org/10.3390/app142110007 - 1 Nov 2024
Viewed by 607
Abstract
As a critical component of the transportation system, the safety of bridges is directly related to public safety and the smooth flow of traffic. This study addresses these issues by focusing on the identification of bridge structure deterioration and the updating of finite element models, proposing a systematic research framework. First, this study presents a preprocessing method for bridge point cloud data and determines the parameter ranges for key algorithms through parameter tuning. Subsequently, based on the massive point cloud data, this research explores and optimizes the methods for identifying bridge cracks and spatial deformations, significantly enhancing the accuracy and efficiency of identification. On this basis, the particle swarm optimization algorithm is employed to optimize the key parameters in crack detection, ensuring the reliability and precision of the algorithm. Additionally, the study summarizes the methods for detecting bridge structural deformations based on point cloud data and establishes a framework for updating the bridge model. Finally, by integrating the results of bridge crack and deformation detection and combining Bayesian model correction and adaptive nested sampling methods, this research sets up the process for updating finite element model parameters and applies it to the analysis of actual bridge point cloud data.
(This article belongs to the Special Issue Infrastructure Management and Maintenance: Methods and Applications)
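The abstract relies on particle swarm optimization (PSO) to tune crack-detection parameters. The following sketch is a generic, minimal PSO loop with a placeholder objective; the bounds, swarm size, and coefficients are illustrative and not taken from the paper.

```python
# A minimal particle swarm optimization (PSO) sketch (not the paper's implementation)
# of the kind used to tune two crack-detection parameters, e.g. a density-estimation
# neighborhood threshold and a sparsity threshold. The objective is a placeholder.
import numpy as np

def objective(params):
    """Placeholder cost: in the study this would score crack-detection quality."""
    x, y = params
    return (x - 0.08) ** 2 + (y - 1.0) ** 2

rng = np.random.default_rng(1)
n_particles, n_iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
lo, hi = np.array([0.001, 0.1]), np.array([1.0, 2.0])      # parameter bounds

pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best parameters:", gbest)
```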
Figures
Figure 1. Vision map of the research methods.
Figure 2. Point cloud secondary thinning process and effect diagram.
Figure 3. Noise and outliers in the bridge point cloud data.
Figure 4. Point cloud filtering of the 2017 bridge data.
Figure 5. Manual registration renderings of point clouds.
Figure 6. Framework of the mixed-modeling crack detection process based on spatiotemporal data.
Figure 7. Correspondence between flocking principles and the particle swarm optimization algorithm.
Figure 8. Particle swarm optimization algorithm flowchart.
Figure 9. Framework of the deformation detection process model.
Figure 10. Segmentation rendering of bridge piers.
Figure 11. Flow chart for updating the parameters of the bridge finite element model.
Figure 12. Elevations and plans of the continuous rigid-frame section of Hedong Bridge (unit: cm).
Figure 13. Standard control group operation results.
Figure 14. Crack detection results for different proximity thresholds of neighboring points in density estimation (0.01, 0.08, and 1.0).
Figure 15. Crack detection results for different tuning parameters of the sparsity threshold (0.5, 1.0, and 2.0).
Figure 16. Particle swarm optimization results and crack detection result graph.
Figure 17. Three-dimensional graph of parameter correlation.
25 pages, 24649 KiB  
Article
Power Corridor Safety Hazard Detection Based on Airborne 3D Laser Scanning Technology
by Shuo Wang, Zhigen Zhao and Hang Liu
ISPRS Int. J. Geo-Inf. 2024, 13(11), 392; https://doi.org/10.3390/ijgi13110392 - 1 Nov 2024
Viewed by 655
Abstract
Overhead transmission lines are widely deployed across both mountainous and plain areas and serve as critical infrastructure for China's electric power industry. The rapid advancement of three-dimensional (3D) laser scanning technology, with airborne LiDAR at its core, enables high-precision and rapid scanning of the detection area, offering significant value in identifying safety hazards along transmission lines in complex environments. In this paper, five transmission lines, spanning a total of 160 km in the mountainous area of Sanmenxia City, Henan Province, China, serve as the primary research objects, yielding several insights. The location and elevation of each power tower pole are determined using an Unmanned Aerial Vehicle (UAV), which assesses the direction and elevation changes in the transmission lines. Moreover, point cloud data of the transmission line corridor are acquired and archived using a UAV equipped with LiDAR during variable-height flight. The processing of the 3D laser point cloud of the power corridor involves denoising, line repair, thinning, and classification. By calculating the clearance, horizontal, and vertical distances between the power towers, transmission lines, and other surface features, in conjunction with safety distance requirements, information about potential hazards can be generated. The detection results for these five transmission lines reveal 54 general hazards, 22 major hazards, and one emergency hazard of the vegetation type. The hazards under the current working conditions are mainly of the vegetation type, while the crossing hazards involve power lines and buildings. The detection results were submitted to the local power department in a timely manner, and relevant measures were taken to eliminate hazards and ensure the normal supply of power resources. The research in this paper provides a basis and an important reference for identifying the potential safety hazards of transmission lines in Henan Province and other complex environments and for solving existing problems in the manual inspection of transmission lines.
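Hazard screening here reduces to computing clearance (3D), horizontal (2D), and vertical (H) distances between classified wire points and surrounding objects. The sketch below (assuming SciPy is available; the safety threshold and point clouds are stand-ins, not regulation values or survey data) illustrates a KD-tree-based version of that check.

```python
# A minimal sketch of checking clearance distances between classified transmission-line
# points and vegetation points with a KD-tree; the 3D/2D/H naming follows the article's
# figure legend, and all values are illustrative.
import numpy as np
from scipy.spatial import cKDTree

line_pts = np.random.rand(2000, 3) * [100, 100, 5] + [0, 0, 20]   # stand-in wire points
veg_pts = np.random.rand(50000, 3) * [100, 100, 15]               # stand-in vegetation points

tree = cKDTree(veg_pts)
clearance, idx = tree.query(line_pts)                  # 3D (clearance) distance per wire point
nearest = veg_pts[idx]
horizontal = np.linalg.norm(line_pts[:, :2] - nearest[:, :2], axis=1)  # 2D distance
vertical = np.abs(line_pts[:, 2] - nearest[:, 2])                      # H distance

SAFE_CLEARANCE = 7.0  # meters; an illustrative threshold, not the regulation value
hazard_mask = clearance < SAFE_CLEARANCE
print(f"{hazard_mask.sum()} wire points violate the clearance threshold")
```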
Figures
Figure 1. Research method flow chart.
Figure 2. Location of the study area in Henan Province.
Figure 3. Distribution of transmission lines.
Figure 4. Topographic map of the study area: (a) DEM; (b) topographic profile.
Figure 5. Spatial location information of a power tower.
Figure 6. Schematic of the UAV flying at variable altitude.
Figure 7. Flight path of the UAV at variable altitude.
Figure 8. Schematic of crossing an intersecting line with the "6-point crossing method".
Figure 9. Schematic of the statistical filtering algorithm.
Figure 10. Point cloud map of the power corridor before denoising.
Figure 11. Point cloud map of the power corridor after denoising.
Figure 12. Point cloud map of the scene environment after denoising.
Figure 13. Flowchart for point cloud repair of transmission lines.
Figure 14. Color point cloud data generation.
Figure 15. Point cloud classification results of the power corridor.
Figure 16. True-color tinted point cloud image of the power corridor.
Figure 17. Elevation-tinted point cloud image of the power corridor.
Figure 18. Category-tinted point cloud image of the power corridor.
Figure 19. Feature-tinted point cloud image of the power corridor.
Figure 20. Point cloud distance measurement results (3D: clearance distance; 2D: horizontal distance; H: vertical distance).
Figure 21. Detection result of a transmission line crossing another line.
Figure 22. Detection result of a transmission line crossing a road.
Figure 23. Vegetation clearance distance not meeting the provision (No. 010-011).
Figure 24. Vegetation clearance distance not meeting the provision (No. 024-025).
Figure 25. Elevation system.
26 pages, 284813 KiB  
Article
Automatic Method for Detecting Deformation Cracks in Landslides Based on Multidimensional Information Fusion
by Bo Deng, Qiang Xu, Xiujun Dong, Weile Li, Mingtang Wu, Yuanzhen Ju and Qiulin He
Remote Sens. 2024, 16(21), 4075; https://doi.org/10.3390/rs16214075 - 31 Oct 2024
Viewed by 633
Abstract
As cracks are a precursor feature of landslide deformation, they can provide forecasting information that is useful for the early identification of landslides and for determining motion instability characteristics. However, it is difficult to solve the size-effect and noise-filtering problems associated with the currently available automatic crack detection methods under complex conditions using single remote sensing data sources. This article uses multidimensional target scene images obtained by UAV photogrammetry as the data source. Firstly, fully considering the multidimensional image characteristics of different crack types, the article accomplishes the initial identification of landslide cracks by using six algorithm models with indicators including the roughness, slope, and eigenvalue rate of the point cloud and the pixel gradient, gray value, and RGB value of the images. Secondly, the initial extraction results are processed through a morphological repair step and three filtering algorithms (based on crack orientation, length, and frequency) to address background noise. Finally, this article proposes a multidimensional information fusion method based on minimum-risk Bayesian probability to fuse the identification results derived from different models at the decision level. The results show that the six tested algorithm models can be used to effectively extract landslide cracks, providing Area Under the Curve (AUC) values between 0.6 and 0.85. After the repairing and filtering steps, the proposed method removes complex noise and minimizes the loss of real cracks, thus increasing the accuracy of each model by 7.5–55.3%. The multidimensional data fusion method solves issues associated with the spatial scale effect during crack identification, and the F-score of the fusion model is 0.901.
(This article belongs to the Topic Landslides and Natural Resources)
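Decision-level fusion in this article is based on minimum-risk Bayesian probability. The sketch below is a simplified, single-pixel illustration of that decision rule; the product-rule combination of model probabilities and the loss values in the risk matrix are assumptions made for the example, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' implementation) of minimum-risk Bayesian decision
# fusion: combine per-model crack probabilities for one pixel and pick the class whose
# expected loss under the risk matrix is smallest.
import numpy as np

# Illustrative posterior crack probabilities from several detection models for one pixel.
model_probs = np.array([0.72, 0.55, 0.80, 0.40, 0.65, 0.70])

# Naive fusion of independent evidence (product rule), normalized over the two classes.
p_crack = np.prod(model_probs)
p_clean = np.prod(1.0 - model_probs)
posterior = np.array([p_clean, p_crack]) / (p_clean + p_crack)   # [P(non-crack), P(crack)]

# Risk matrix lam[decision, true class]; missing a crack is penalized more heavily,
# mirroring the risk ratios explored in the article.
lam = np.array([[0.0, 2.0],    # decide non-crack: loss 0 if clean, 2 if actually a crack
                [1.0, 0.0]])   # decide crack:     loss 1 if clean, 0 if actually a crack

expected_risk = lam @ posterior           # expected loss of each decision
decision = int(np.argmin(expected_risk))  # 0 = non-crack, 1 = crack
print(posterior, expected_risk, "crack" if decision else "non-crack")
```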
Figures
Figure 1. Location of the study area: satellite image with traffic conditions, UAV optical image of the landslide (May 2021), and the main deformation area with the DSM.
Figure 2. Flight route and terrain products of the UAV operation at WuLiPo: planned route and checkpoints, FeiMa D200 drone, terrain-following route, DOM, and 3D point cloud.
Figure 3. Field investigation results at WuLiPo: section A–A' and material composition, manual crack survey, and on-site photos of the main cracks.
Figure 4. Flow chart of the automatic landslide crack detection process utilizing multidimensional data fusion.
Figure 5. Schematic of the image pre-processing method.
Figure 6. 2D and 3D characteristics of landslide cracks at different scales, with schematics of tensile and shear crack formation.
Figure 7. Schematic of the K-D tree local-neighborhood search in the point cloud used to generate the crack-extraction indicators.
Figure 8. Crack edge threshold segmentation using grayscale images and the Sobel gradient map.
Figure 9. Object classification results based on maximum likelihood supervision.
Figure 10. Repair of the crack binary image using the morphological closure operation.
Figure 11. Principles of the crack-filtering method: orientational filtering convolution, DBSCAN clustering of single cracks, and orientation, frequency, and length filtering.
Figure 12. WuLiPo orthophoto and 3D point cloud pre-processing results.
Figure 13. Crack recognition results of each model: point cloud roughness, eigenvalue ratios, slope, their grid conversions, the Sobel binary image, and the preprocessed grayscale image.
Figure 14. WuLiPo image classification results from maximum likelihood supervised learning.
Figure 15. Crack pixel binary classification confusion matrix.
Figure 16. ROC curve test results of each crack identification and classification model.
Figure 17. Semantic segmentation results of cracks at WuLiPo derived using the various models.
Figure 18. Effects of the repairing and filtering processes on the initial crack extraction results of each model.
Figure 19. TPR, FPR, and precision of the crack extraction models before and after repair filtering.
Figure 20. Crack identification results of WuLiPo: automatic detection (gradient value and slope segmentation) versus the manual investigation.
Figure 21. Distribution of the image fusion features and the posterior probability comparison for WuLiPo cracks.
Figure 22. Bayesian probability fusion results under different risk-factor ratios.
Figure 23. Changes in TPR, FPR, precision, and F-score under different risk ratios in the Bayesian probability fusion.
34 pages, 11021 KiB  
Article
Comprehensive Review of Tunnel Blasting Evaluation Techniques and Innovative Half Porosity Assessment Using 3D Image Reconstruction
by Jianjun Shi, Yang Wang, Zhengyu Yang, Wenxin Shan and Huaming An
Appl. Sci. 2024, 14(21), 9791; https://doi.org/10.3390/app14219791 - 26 Oct 2024
Viewed by 700
Abstract
To meet the increasing demand for rapid and efficient evaluation of tunnel blasting quality, this study presents a comprehensive review of the current state of the art in tunnel blasting evaluation, organized into five key areas: Blasting Techniques and Optimization, 3D Reconstruction and Visualization, Monitoring and Assessment Technologies, Automation and Advanced Techniques, and Half Porosity in Tunnel Blasting. Each section provides an in-depth analysis of the latest research and developments, offering insights into enhancing blasting efficiency, improving safety, and optimizing tunnel design. Building on this foundation, we introduce a digital identification method for assessing half porosity through 3D image reconstruction. Utilizing the Structure from Motion (SfM) technique, we reconstruct the 3D contours of tunnel surfaces and bench faces after blasting. Curvature values are employed as key indicators for extracting 3D point cloud data of boreholes. The acquired post-blasting point cloud data is processed using software that incorporates the RANSAC algorithm to accurately project and fit the borehole data, leading to the determination of the target circle and borehole axis. The characteristics of the boreholes are analyzed based on the fitting results, culminating in the calculation of half porosity. Field experiments conducted on the Huangtai Tunnel (AK20 + 970.5 to AK25 + 434), part of the new National Highway 109 project, provided data from shell holes generated during blasting. These data were analyzed and compared with traditional on-site measurements to validate the proposed method's effectiveness. The half porosity computed using this technique was 58.7%, showing minimal deviation from the traditional measurement of 60%. This methodology offers significant advantages over conventional measurement techniques, including easier equipment acquisition, non-interference with construction activities, a comprehensive detection range, rapid processing speed, reduced costs, and improved accuracy. The findings demonstrate the method's potential for broader application in tunnel blasting assessments.
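The half-porosity workflow projects borehole (half-cast) points to 2D and fits circles with RANSAC. The following sketch is a generic RANSAC circle fit on synthetic arc points; the tolerance, iteration count, and radius are illustrative, not the study's settings.

```python
# A minimal RANSAC circle-fitting sketch (not the study's software) for projected 2D
# borehole points: sample three points, solve the circumscribed circle, and keep the
# model with the most inliers.
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three non-collinear 2D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(p1 - center)

def ransac_circle(points, n_iters=500, tol=0.003, rng=np.random.default_rng(0)):
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        model = circle_from_3pts(*sample)
        if model is None:
            continue
        center, radius = model
        residual = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = np.count_nonzero(residual < tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (center, radius)
    return best_model, best_inliers

# Stand-in data: a noisy circular arc such as a projected half-cast borehole (meters).
theta = np.random.default_rng(1).uniform(0, np.pi, 300)
pts = np.c_[0.021 * np.cos(theta), 0.021 * np.sin(theta)] + np.random.normal(0, 0.001, (300, 2))
(center, radius), inliers = ransac_circle(pts)
print(center, radius, inliers)
```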
Figures
Figure 1. 3D reconstruction workflow for tunnel blasting evaluation.
Figure 2. Pillow distortion and barrel distortion.
Figure 3. (a) Reprojection error; (b) Euclidean distance; (c) pixel change difference; (d) chessboard position.
Figure 4. Camera shooting position schematic.
Figure 5. Image acquisition schematic.
Figure 6. Projection error schematic.
Figure 7. Flow chart of the experiment.
Figure 8. Tunnel location map.
Figure 9. Huangtai tunnel.
Figure 10. Slagging after the tunnel blast.
Figure 11. Tunnel palm face.
Figure 12. Sparse reconstruction.
Figure 13. Dense reconstruction.
Figure 14. Elevation ramp diagram.
Figure 15. Segmented point cloud model.
Figure 16. Contrast before and after filtering.
Figure 17. Visualization of half-hole point cloud data.
Figure 18. Extraction and analysis of the single blast-hole point cloud model.
Figure 19. Borehole point cloud projection.
Figure 20. Schematic diagram of arc fitting.
Figure 21. Individual borehole identification.
Figure 22. Borehole length diagram.
Figure 23. Borehole fitting position.
22 pages, 15919 KiB  
Article
A Unified Virtual Model for Real-Time Visualization and Diagnosis in Architectural Heritage Conservation
by Federico Luis del Blanco García, Alejandro Jesús González Cruz, Cristina Amengual Menéndez, David Sanz Arauz, Jose Ramón Aira Zunzunegui, Milagros Palma Crespo, Soledad García Morales and Luis Javier Sánchez-Aparicio
Buildings 2024, 14(11), 3396; https://doi.org/10.3390/buildings14113396 - 25 Oct 2024
Viewed by 594
Abstract
The aim of this paper is to propose a workflow for the real-time visualization of virtual environments that supports diagnostic tasks in heritage buildings. The approach integrates data from terrestrial laser scanning (3D point clouds and meshes), along with panoramic and thermal images, into a unified virtual model. Additionally, the methodology incorporates several post-processing stages designed to enhance the user experience in visualizing both the building and its associated damage. The methodology was tested on the Medieval Templar Church of Vera Cruz in Segovia, utilizing a combination of visible and infrared data, along with manually prepared damage maps. The project results demonstrate that the use of a hybrid digital model—combining 3D point clouds, polygonal meshes, and panoramic images—is highly effective for real-time rendering, providing detailed visualization while maintaining adaptability for mobile devices with limited computational power.
(This article belongs to the Special Issue Selected Papers from the REHABEND 2024 Congress)
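Preparing sizeable scans for real-time and mobile rendering typically involves heavy decimation, such as the 1 cm point cloud mentioned in the abstract. A minimal sketch using the Open3D library is shown below; the file name church_scan.ply is hypothetical, and Open3D is assumed to be installed.

```python
# A minimal sketch of preparing a TLS point cloud for real-time mobile rendering by
# voxel downsampling it to roughly 1 cm resolution; the input file name is hypothetical.
import open3d as o3d

cloud = o3d.io.read_point_cloud("church_scan.ply")        # hypothetical input scan
downsampled = cloud.voxel_down_sample(voxel_size=0.01)    # one point per 1 cm voxel
print(len(cloud.points), "->", len(downsampled.points), "points")
o3d.io.write_point_cloud("church_scan_1cm.ply", downsampled)
```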
Figures
Figure 1. From scan to VR/AR: workflow from the 3D point cloud to a three-dimensional mesh and a digital twin for virtual and augmented reality.
Figure 2. Workflow from 360° images and thermographic images to an immersive virtual reality scene.
Figure 3. Exterior view of La Veracruz Church.
Figure 4. Geometric analysis of the Church based on a simplified 3D model created before the point cloud was generated.
Figure 5. 3D point cloud of the Church: (a) outdoors; (b) indoors.
Figure 6. Floor plan of the church with the locations of the 360-degree images.
Figure 7. The 360° infrared images used to map the virtual models.
Figure 8. Overlaid thermographic images of one of the walls of the central nave.
Figure 9. Real-time visualization of the three-dimensional mesh (top) and of the low-resolution (1 cm) point cloud with adjusted point size (bottom).
Figure 10. Real-time visualization of the low-resolution (1 cm) point cloud for mobile devices: selective Eye-Dome Lighting (left) versus the default UE5 system (right).
Figure 11. Blueprints for creating an interactive material selection menu in UE5.
Figure 12. Different materials applied to a wall, highlighting areas of dampness and cracks.
Figure 13. Applying texture mapping to 360° images.
Figure 14. Mapping of color alterations caused by moisture using 360-degree infrared images.
Figure 15. Screenshots of the scene using 360-degree images with a selection interface, visualizing lesions: biological colonization, detachment, efflorescence, erosion, cracks, and moisture stains.
Figure 16. Screenshots of the scene using 360-degree infrared images, visualizing moisture.
Figure 17. Comparison of the virtual-reality system resources used by each digital model.
Figure 18. Workflow for a virtual reality project designed for visualization across various platforms, allowing material changes to inspect the building's damage.
Figure 19. Comparison of the scan-generated model with the initial Church documentation.
17 pages, 13097 KiB  
Article
Airborne LiDAR Point Cloud Classification Using Ensemble Learning for DEM Generation
by Ting-Shu Ciou, Chao-Hung Lin and Chi-Kuei Wang
Sensors 2024, 24(21), 6858; https://doi.org/10.3390/s24216858 - 25 Oct 2024
Viewed by 538
Abstract
Airborne laser scanning (ALS) point clouds have emerged as a predominant data source for the generation of digital elevation models (DEMs) in recent years. Traditionally, generating a DEM from ALS point clouds involves point cloud classification or ground point filtering to extract ground points, followed by labor-intensive post-processing to correct misclassified ground points. Current deep learning techniques leverage geometric recognition for ground point classification. However, deep learning classifiers are generally trained using 3D point clouds with simple geometric terrains, which decreases the performance of model inference. In this study, a point-based deep learning model with boosting ensemble learning and a set of geometric features as the model inputs is proposed. With the ensemble learning strategy, this study integrates specialized ground point classifiers designed for different terrains to boost classification robustness and accuracy. In experiments, ALS point clouds containing various terrains were used to evaluate the feasibility of the proposed method. The results demonstrated that the proposed method can improve point cloud classification and the quality of the generated DEMs. The classification accuracy and F1 score are improved from 80.9% to 92.2% and from 82.2% to 94.2%, respectively, by using the proposed methods. In addition, the DEM generation error, in terms of root mean square error (RMSE), is reduced from 0.318–1.362 m to 0.273–1.032 m by using the proposed ensemble learning.
(This article belongs to the Section Radar Sensors)
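The core idea is an ensemble of terrain-specialized ground-point classifiers. The sketch below is a much-simplified soft-voting analogue using random forests on synthetic features, not the paper's DGCNN-based boosting ensemble; it only illustrates how per-terrain probabilities can be combined.

```python
# A minimal soft-voting sketch of combining terrain-specialized ground-point
# classifiers: each model votes a ground probability per point and the averaged
# probability is thresholded. Data and features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in geometric features per point (e.g., height above ground, roughness, slope).
X_urban = rng.normal(size=(1000, 3)); y_urban = (X_urban[:, 0] < 0).astype(int)
X_mount = rng.normal(size=(1000, 3)); y_mount = (X_mount[:, 1] < 0).astype(int)

urban_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_urban, y_urban)
mountain_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mount, y_mount)

X_mixed = rng.normal(size=(500, 3))                    # points from a mixed-terrain tile
p_ground = (urban_clf.predict_proba(X_mixed)[:, 1] +
            mountain_clf.predict_proba(X_mixed)[:, 1]) / 2.0
is_ground = p_ground > 0.5
print(f"{is_ground.sum()} of {len(X_mixed)} points labeled ground")
```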
Figures
Figure 1. Network structure of the DGCNN segmentation model.
Figure 2. The edge convolution operation.
Figure 3. Inconsistent intensity values in the point cloud data.
Figure 4. Workflow of ground point determination using ensemble learning.
Figure 5. Spatial distribution and examples of the mountain, urban, and mixed training datasets.
Figure 6. Results of the urban classifier M_urban applied to an urban dataset: ground truth versus prediction, with a point cloud profile (ground points in orange).
Figure 7. Results of the mountain classifier M_mountain applied to mountain data: ground truth versus prediction, with a point cloud profile (ground points in orange).
Figure 8. Comparison of the prediction results of three ground point extraction processes on the mixed dataset (profile locations in red, ground points in orange).
Figure 9. Classification result of the AHN dataset using the proposed method.
Figure 10. Comparison of ground points in the ground truth and the prediction.
Figure 11. Error maps of the generated DEM.
17 pages, 3301 KiB  
Article
Stereo and LiDAR Loosely Coupled SLAM Constrained Ground Detection
by Tian Sun, Lei Cheng, Ting Zhang, Xiaoping Yuan, Yanzheng Zhao and Yong Liu
Sensors 2024, 24(21), 6828; https://doi.org/10.3390/s24216828 - 24 Oct 2024
Viewed by 505
Abstract
In many robotic applications, creating a map is crucial, and 3D maps provide a method for estimating the positions of other objects or obstacles. Most previous research processes 3D point clouds through projection-based or voxel-based models, but both approaches have certain limitations. This paper proposes a hybrid localization and mapping method using stereo vision and LiDAR. Unlike traditional single-sensor systems, we construct a pose optimization model by matching ground information between LiDAR maps and visual images. We use stereo vision to extract ground information and fuse it with LiDAR tensor voting data to establish coplanarity constraints. Pose optimization is achieved through a graph-based optimization algorithm and a local window optimization method. The proposed method is evaluated using the KITTI dataset and compared against the ORB-SLAM3, F-LOAM, LOAM, and LeGO-LOAM methods. Additionally, we generate 3D point cloud maps for the corresponding sequences and high-definition point cloud maps of the streets in sequence 00. The experimental results demonstrate significant improvements in trajectory accuracy and robustness, enabling the construction of clear, dense 3D maps.
(This article belongs to the Section Navigation and Positioning)
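The abstract above mentions coplanarity constraints between stereo-derived ground information and LiDAR data, used as terms in a graph-based pose optimization. As a minimal, hedged illustration of what such a constraint can look like, the sketch below evaluates point-to-plane residuals for ground points under a candidate pose; the plane parameterization (n, d) and the residual form are assumptions, not the paper's formulation.

```python
# Hedged sketch of a point-to-plane (coplanarity) residual, the kind of term a
# pose-graph optimizer could use to tie LiDAR ground points to a stereo-derived
# ground plane. Not the paper's formulation.
import numpy as np

def coplanarity_residuals(T, points, plane_n, plane_d):
    """Signed distances of pose-transformed points to the plane n.x + d = 0."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous Nx4
    pts_w = (T @ pts_h.T).T[:, :3]                           # apply the 4x4 pose T
    return pts_w @ plane_n + plane_d                         # one residual per point

# Toy usage: identity pose, ground plane z = 0, two near-ground points.
T = np.eye(4)
pts = np.array([[1.0, 2.0, 0.02], [0.5, -1.0, -0.05]])
print(coplanarity_residuals(T, pts, np.array([0.0, 0.0, 1.0]), 0.0))  # [ 0.02 -0.05]
```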
Show Figures

Figure 1
<p>Pose optimization based on ground information. <span class="html-italic">T</span>, <span class="html-italic">p</span>, and <span class="html-italic">q</span> represent the transformation matrix, points on the plane, and points off the plane, respectively.</p>
Full article ">Figure 2
<p>The stereo sensor model and the coordinate systems used [<a href="#B34-sensors-24-06828" class="html-bibr">34</a>].</p>
Full article ">Figure 3
<p>Region of interest extraction. (<b>a</b>) Left image. (<b>b</b>) Right image. (<b>c</b>) Disparity image. (<b>d</b>) v-disparity. (<b>e</b>) u-disparity. (<b>d</b>,<b>e</b>) are derived from (<b>c</b>). (<b>f</b>) Large obstacles removed by suppressing peak values in (<b>e</b>). (<b>g</b>) v-disparity based on (<b>f</b>); the red line is the disparity profile of the ground plane. (<b>h</b>) Detected ground plane and region of interest (RoI); the RoI is shown in the red box. (<b>i</b>) City 3D reconstruction; green represents the ground.</p>
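Figure 3 outlines the ground detection pipeline: build v-/u-disparity maps from the disparity image, suppress obstacles, and fit the ground profile as a line in v-disparity. The minimal sketch below illustrates that general idea on a synthetic disparity map; the histogramming, the simple least-squares line fit, and the tolerance used for the ground mask are assumptions rather than the authors' implementation.

```python
# Hedged sketch of v-disparity-based ground extraction in the spirit of panels (d)-(h):
# histogram disparities per image row, fit the dominant line, and label pixels
# consistent with it. Thresholds and the line-fit method are assumptions.
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Per-row histogram of disparity values (the v-disparity map)."""
    h = disparity.shape[0]
    vdisp = np.zeros((h, max_disp), dtype=int)
    for v in range(h):
        row = disparity[v]
        valid = (row > 0) & (row < max_disp)
        vdisp[v] = np.bincount(row[valid].astype(int), minlength=max_disp)
    return vdisp

def fit_ground_line(vdisp, min_votes=20):
    """Least-squares fit of d = a*v + b through each row's strongest disparity bin."""
    rows, disps = [], []
    for v in range(vdisp.shape[0]):
        d = int(np.argmax(vdisp[v]))
        if d > 0 and vdisp[v, d] >= min_votes:
            rows.append(v)
            disps.append(d)
    a, b = np.polyfit(rows, disps, deg=1)
    return a, b

def ground_mask(disparity, a, b, tol=2.0):
    """Pixels whose disparity matches the ground-line prediction for their row."""
    h = disparity.shape[0]
    expected = a * np.arange(h)[:, None] + b
    return (np.abs(disparity - expected) < tol) & (disparity > 0)

# Toy usage: a synthetic tilted ground plane with mild noise.
rng = np.random.default_rng(0)
disp = np.clip(np.arange(240)[:, None] * 0.2 + rng.normal(0, 0.3, (240, 320)), 0, 63)
a, b = fit_ground_line(v_disparity(disp))
print(f"slope {a:.2f}, intercept {b:.2f}, ground pixels {ground_mask(disp, a, b).sum()}")
```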
Full article ">Figure 4
<p>Graph-structure optimization. <span class="html-italic">P</span> represents the nodes of visual points, and <span class="html-italic">X</span> represents the pose of the frame. “Ground” denotes the ground information extracted from the 3D reconstruction.</p>
Full article ">Figure 5
<p>Trajectory estimates in the KITTI dataset. (<b>a</b>) 00. (<b>b</b>) 01. (<b>c</b>) 05. (<b>d</b>) 07. (<b>e</b>) 08. (<b>f</b>) 09.</p>
Full article ">Figure 6
<p>3D point cloud maps generated for the corresponding sequences. (<b>a</b>) 00. (<b>b</b>) 01. (<b>c</b>) 05. (<b>d</b>) 07. (<b>e</b>) 08. (<b>f</b>) 09.</p>
Full article ">Figure 7
<p>3D reconstruction based on road constraints, where green represents the road. (<b>a</b>) 00. (<b>b</b>) 01. (<b>c</b>) 05. (<b>d</b>) 07. (<b>e</b>) 08. (<b>f</b>) 09.</p>
Full article ">Figure 8
<p>High-definition display of point clouds for some streets in the 00 sequence. The image in the top left corner is a 3D reconstruction of the entire city, and the other images depict details of its streets (<b>a</b>–<b>e</b>).</p>
Full article ">
33 pages, 28774 KiB  
Article
Quality Evaluation of Sizeable Surveying-Industry-Produced Terrestrial Laser Scanning Point Clouds That Facilitate Building Information Modeling—A Case Study of Seven Point Clouds
by Sander Varbla, Raido Puust and Artu Ellmann
Buildings 2024, 14(11), 3371; https://doi.org/10.3390/buildings14113371 - 24 Oct 2024
Viewed by 490
Abstract
Terrestrial laser scanning can provide high-quality, detailed point clouds, with state-of-the-art research reporting the potential for sub-centimeter accuracy. However, state-of-the-art research may not represent real-world practices reliably. This study aims to deliver a different perspective through collaboration with the surveying industry, where time [...] Read more.
Terrestrial laser scanning can provide high-quality, detailed point clouds, with state-of-the-art research reporting the potential for sub-centimeter accuracy. However, state-of-the-art research may not represent real-world practices reliably. This study aims to deliver a different perspective through collaboration with the surveying industry, where time constraints and productivity requirements limit the effort that can go into ensuring point cloud quality. Seven sizeable buildings’ point clouds (490 to 1392 scanning stations) are evaluated qualitatively and quantitatively. Quantitative evaluations based on independent total station control surveys indicate that sub-centimeter accuracy is achievable for smaller point cloud portions (e.g., a single building story) but caution against such optimism for sizable point clouds of large, multi-story buildings. The control surveys reveal common registration errors around the 5 cm range, resulting from complex surface geometries, as in stairways. Potentially hidden from visual inspection, such systematic errors can cause misalignments between point cloud portions in the compound point cloud structure, which could be detrimental to further applications of the point clouds. The study also evaluates point cloud georeferencing, affirming the resection method’s capability of providing high consistency and an accuracy of a few centimeters. Following the study’s findings, practical recommendations for terrestrial laser scanning surveys and data processing are formulated. Full article
(This article belongs to the Special Issue BIM Uptake and Adoption: New Perspectives)
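The evaluation described above relies on total station validation points and, per Figures 8 and 11, on baseline discrepancies (Equation (1)) and a scale error estimate (Equation (2)). Those equations are not reproduced in this listing, so the sketch below only illustrates one plausible reading: a discrepancy as the point-cloud baseline length minus the total station baseline length, summarized by the mean of absolute discrepancies (MoAD) and standard deviation (SD), with a scale error taken from the slope of discrepancy against baseline length.

```python
# Hedged sketch of a baseline-discrepancy check between total station (TS) validation
# points and the same points extracted from a TLS point cloud. The discrepancy and
# scale-error definitions here are assumptions, not the authors' exact formulas.
import itertools
import numpy as np

def baseline_discrepancies(ts_xyz, pc_xyz):
    """Length differences (point cloud minus TS) over all point pairs, plus TS lengths."""
    diffs, lengths = [], []
    for i, j in itertools.combinations(range(len(ts_xyz)), 2):
        l_ts = np.linalg.norm(ts_xyz[i] - ts_xyz[j])
        l_pc = np.linalg.norm(pc_xyz[i] - pc_xyz[j])
        diffs.append(l_pc - l_ts)
        lengths.append(l_ts)
    return np.array(diffs), np.array(lengths)

# Toy usage: four validation points; the "point cloud" adds ~1 cm noise and a tiny scale bias.
rng = np.random.default_rng(1)
ts = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [30.0, 20.0, 5.0], [0.0, 20.0, 5.0]])
pc = ts * 1.0002 + rng.normal(0, 0.01, ts.shape)
d, L = baseline_discrepancies(ts, pc)
moad, sd = np.mean(np.abs(d)), np.std(d, ddof=1)
scale_ppm = np.polyfit(L, d, deg=1)[0] * 1e6   # fitted slope expressed in parts per million
print(f"MoAD {moad * 100:.2f} cm, SD {sd * 100:.2f} cm, scale ~{scale_ppm:.0f} ppm")
```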
Show Figures

Figure 1
<p>Overview of the experimental process. The headings refer to sections where further details on a specific aspect can be found.</p>
Full article ">Figure 2
<p>The CON point cloud (also refer to <a href="#buildings-14-03371-t001" class="html-table">Table 1</a>); the views depict opposite facades.</p>
Full article ">Figure 3
<p>Distribution of TLS surveyed buildings (in color) of the Tallinn University of Technology (also refer to <a href="#buildings-14-03371-t001" class="html-table">Table 1</a>), assessed within the frames of this study. Background orthophoto originates from the Estonian Land Board.</p>
Full article ">Figure 4
<p>The U03 + U03B C1 point cloud (also refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a> and <a href="#buildings-14-03371-t001" class="html-table">Table 1</a>); the views depict opposite facades.</p>
Full article ">Figure 5
<p>Reflections (red arrows) and an occlusion (blue arrow) in the U04 (refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a>) point cloud (<b>a</b>), and occlusions in the U03 + U03B C3 (refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a>) point cloud ((<b>b</b>); the point cloud is colored according to intensity information—blue denotes high intensity and reddish low).</p>
Full article ">Figure 6
<p>A selection of more significant registration errors in different point clouds: basement floor corridor wall discrepancies (horizontal plane) in the CON point cloud (<b>a</b>), vertical discrepancies in the eaves overhang of the U05 (refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a>) point cloud (<b>b</b>), vertical outer wall discrepancies in the U06A (refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a>) point cloud at a window (<b>c</b>), and vertical (4.3 cm) and horizontal (6.6 cm) discrepancies in the U03 + U03B C3 (refer to <a href="#buildings-14-03371-f003" class="html-fig">Figure 3</a>) point cloud at an exterior doorway (<b>d</b>). Point clouds are colored according to intensity information—blue denotes high intensity, and the reddish color low.</p>
Full article ">Figure 7
<p>Descriptive statistics of discrepancies between validation points’ coordinates and those extracted from point clouds. Black lines denote mean values, colored bars standard deviation estimates, and colored crosses minimum and maximum discrepancies. Note that the CON-associated statistics represent an indoor survey, whereas all others describe outdoor surveys’ results.</p>
Full article ">Figure 8
<p>Histograms and descriptive statistics (MoAD—mean of absolute discrepancies; SD—standard deviation) of baseline discrepancies (cf. Equation (1)). Note that the CON-associated statistics represent an indoor survey, whereas all others describe outdoor surveys’ results.</p>
Full article ">Figure 9
<p>Descriptive statistics of discrepancies between validation points’ coordinates and those extracted from each CON point cloud floor; the statistics of all 59 validation points are the same as in <a href="#buildings-14-03371-f007" class="html-fig">Figure 7</a>. Black lines denote mean values, colored bars standard deviation estimates, and colored crosses minimum and maximum discrepancies. Notice that more significant discrepancies are associated with the basement floor (height) and third floor (<span class="html-italic">X</span>- and <span class="html-italic">Y</span>-coordinates).</p>
Full article ">Figure 10
<p>Discrepancies between validation points’ coordinates and those extracted from the CON point cloud portion of the basement floor (sub-plots (<b>a</b>–<b>c</b>)) and the third floor (sub-plots (<b>d</b>–<b>f</b>)). The dashed red lines show the corresponding discrepancy trends. Note that discrepancies have been projected along a baseline aligned with the corridor connecting the secondary stairway (left) to the main stairway (right; refer to <a href="#buildings-14-03371-f0A1" class="html-fig">Figure A1</a>). Residual standard deviation (SD) describes discrepancy residuals relative to the trend.</p>
Full article ">Figure 11
<p>Baseline discrepancies (the same as in <a href="#buildings-14-03371-f008" class="html-fig">Figure 8</a>) plotted relative to the total station estimated baseline lengths for U03 C1 (<b>a</b>) and U03 C3 (<b>b</b>) outdoor and U03 C1 (<b>c</b>) and U03 C3 (<b>d</b>) third-floor surveys. The dashed red lines show the corresponding trends, and the scale error values are estimated according to Equation (2).</p>
Full article ">Figure 12
<p>Descriptive statistics of discrepancies between validation points’ coordinates and those extracted from the point clouds of the U03 building’s third floor. Black lines denote mean values, colored bars standard deviation estimates, and colored crosses minimum and maximum discrepancies.</p>
Full article ">Figure 13
<p>Histograms and descriptive statistics (MoAD—mean of absolute discrepancies; SD—standard deviation) of baseline discrepancies (cf. Equation (1)) representing the U03 building’s third floor.</p>
Full article ">Figure 14
<p>Discrepancies between validation points’ coordinates and those extracted from the U03 C1 (sub-plots (<b>a</b>–<b>c</b>)) and U03 C3 (sub-plots (<b>d</b>–<b>f</b>)) point clouds. The dashed red lines show the corresponding discrepancy trends. Note that discrepancies have been projected along a baseline aligned with the corridor connecting the secondary stairway (left) to the main stairway (right; refer to <a href="#buildings-14-03371-f0A3" class="html-fig">Figure A3</a>). Residual standard deviation (SD) describes discrepancy residuals relative to the trend. Notice vertical scale offsets in the C3-associated sub-plots.</p>
Full article ">Figure A1
<p>Outdoor survey network points (red dots) of the CON building (<b>left</b>) and descriptive statistics (based on 24 comparisons) of discrepancies between initial coordinates and control measurements (initial coordinates were subtracted from the latter) describing the CON building total station survey (<b>right</b>). On the <b>left</b> sub-plot: the green triangle shows the location of base station for the total station survey; the red arrow points roughly to the location of the main stairway, and the blue arrow points to the secondary stairway; background orthophoto originates from the Estonian Land Board. On the <b>right</b> sub-plot: black lines denote mean values, colored bars standard deviation estimates, and colored crosses minimum and maximum discrepancies.</p>
Full article ">Figure A2
<p>Mini prism and tripod used during total station surveys.</p>
Full article ">Figure A3
<p>Outdoor survey network points (blue and red dots) for the main campus. Blue dots denote points that were initially measured using RTK-GNSS. The yellow dot denotes a temporary point established with a wooden stake, and the green triangle shows the location of the total station survey’s base station. The red arrow points to the location of the main stairway, and the blue arrow points to the secondary stairway of the U03 building. Background orthophoto originates from the Estonian Land Board.</p>
Full article ">Figure A4
<p>Scheme of resection traverse loops’ closings. Yellow dots denote survey network points used for closing the loops. Red triangles and arrows show survey stations’ locations and prism sights for determining the initial coordinates of survey network points, whereas blue triangles and arrows denote survey stations’ locations and prism sights of control measurements. Numbers associated with <span class="html-italic">X</span>- and <span class="html-italic">Y</span>-coordinates and heights show discrepancies between initial coordinates and control measurements (initial coordinates were subtracted from the latter). Background orthophoto originates from the Estonian Land Board.</p>
Full article ">
23 pages, 5405 KiB  
Article
CPH-Fmnet: An Optimized Deep Learning Model for Multi-View Stereo and Parameter Extraction in Complex Forest Scenes
by Lingnan Dai, Zhao Chen, Xiaoli Zhang, Dianchang Wang and Lishuo Huo
Forests 2024, 15(11), 1860; https://doi.org/10.3390/f15111860 - 23 Oct 2024
Viewed by 568
Abstract
The three-dimensional reconstruction of forests is crucial in remote sensing technology, ecological monitoring, and forestry management, as it yields precise forest structure and tree parameters, providing essential data support for forest resource management, evaluation, and sustainable development. Nevertheless, forest 3D reconstruction now encounters [...] Read more.
The three-dimensional reconstruction of forests is crucial in remote sensing technology, ecological monitoring, and forestry management, as it yields precise forest structure and tree parameters, providing essential data support for forest resource management, evaluation, and sustainable development. Nevertheless, forest 3D reconstruction still faces obstacles, including high equipment costs, low data collection efficiency, and complex data processing. This work introduces a unique deep learning model, CPH-Fmnet, designed to enhance the accuracy and efficiency of 3D reconstruction in intricate forest environments. CPH-Fmnet enhances the FPN Encoder-Decoder Architecture by meticulously incorporating the Channel Attention Mechanism (CA), Path Aggregation Module (PA), and High-Level Feature Selection Module (HFS), alongside the integration of the pre-trained Vision Transformer (ViT), thereby significantly improving the model’s global feature extraction and local detail reconstruction abilities. We selected three representative sample plots in Haidian District, Beijing, China, as the study area and captured image sequences of the forest stands with an iPhone. Comparative experiments with the conventional SfM + MVS and MVSFormer models, along with comprehensive parameter extraction and ablation studies, substantiated the enhanced efficacy of the proposed CPH-Fmnet model in addressing difficult circumstances such as intricate occlusions, poorly textured areas, and variations in lighting. The test results show that the model outperforms current methods on several evaluation criteria, achieving an RMSE of 1.353, an MAE of only 5.1%, an r value of 1.190, and a forest reconstruction rate of 100%. Furthermore, the model produced a more compact and precise 3D point cloud while accurately determining the properties of the forest trees. The findings indicate that CPH-Fmnet offers an innovative approach for forest resource management and ecological monitoring, characterized by low cost, high accuracy, and high efficiency. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
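CPH-Fmnet incorporates a Channel Attention Mechanism (CA), for which Figure 3 cites an external reference; its exact design is not given in this listing. The sketch below shows a generic squeeze-and-excitation style channel attention block in PyTorch purely as an illustration of the mechanism; the reduction ratio and global-average pooling are assumptions.

```python
# Hedged, generic sketch of a squeeze-and-excitation style channel attention block.
# Not CPH-Fmnet's actual CA module; illustrative of the general mechanism only.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP -> channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # rescale each channel of the feature map

# Toy usage on a 64-channel feature map.
feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)          # torch.Size([2, 64, 32, 32])
```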
Show Figures

Figure 1
<p>Overview of the study area.</p>
Full article ">Figure 2
<p>The architecture of CPH-Fmnet. Three sub-modules: pre-trained ViT, CPHFPN, and 3D UNet.</p>
Full article ">Figure 3
<p>Channel attention mechanism framework [<a href="#B37-forests-15-01860" class="html-bibr">37</a>].</p>
Full article ">Figure 4
<p>Comparison of 3D reconstruction results of different methods in three representative forest scenes. (<b>a</b>–<b>c</b>) correspond to plot 1, plot 2, and plot 3, respectively.</p>
Full article ">Figure 5
<p>Comparison of tree trunk detail reconstruction results using different methods.</p>
Full article ">Figure 6
<p>Comparison of tree crown detail reconstruction results using different methods.</p>
Full article ">Figure 7
<p>Comparison of extraction results of diameter at breast height of a single tree using different methods.</p>
Full article ">