Search Results (103)

Search Parameters:
Keywords = kd-tree

14 pages, 3993 KiB  
Article
Automated Defect Detection through Flaw Grading in Non-Destructive Testing Digital X-ray Radiography
by Bata Hena, Gabriel Ramos, Clemente Ibarra-Castanedo and Xavier Maldague
NDT 2024, 2(4), 378-391; https://doi.org/10.3390/ndt2040023 - 4 Oct 2024
Viewed by 375
Abstract
Process automation utilizes specialized technology and equipment to automate and enhance production processes, leading to higher manufacturing efficiency, higher productivity, and cost savings. The aluminum die casting industry has gained significantly from the implementation of process automation solutions in manufacturing, serving safety-critical sectors such as the automotive and aerospace industries. However, this method of component fabrication is very susceptible to generating manufacturing flaws, necessitating adequate non-destructive testing (NDT) to ascertain the fitness for use of such components. Machine learning has taken center stage in recent years as a tool for developing automated solutions for detecting and classifying flaws in digital X-ray radiography. These machine learning-based solutions have increasingly been developed and deployed for component inspection to keep pace with the high production throughput of manufacturing industries. This work focuses on the development of a defect grading algorithm that assesses detected flaws to ascertain whether they constitute a defect that could render a component unfit for use. Guided by ASTM 2973-15 (Standard Digital Reference Images for Inspection of Aluminum and Magnesium Die Castings), a grading pipeline utilizing k-dimensional (K-D) trees was developed to effectively structure detected flaws, enabling the system to make decisions based on acceptable grading terms. This solution is dynamic in its conformity to different grading criteria and offers the possibility of automated decision making (Accept/Reject) in digital X-ray radiography applications.
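A minimal sketch of the kd-tree step described above: detected flaws are indexed by centroid so each flaw's neighbourhood can be queried and graded against acceptance thresholds. The function name, radius, and thresholds are illustrative placeholders, not values from the paper or from ASTM 2973.

```python
import numpy as np
from scipy.spatial import cKDTree

def grade_flaws(centroids, sizes, radius=50.0, max_count=3, max_size=2.0):
    """Toy grading pass: a flaw is rejected if too many flaws, or any flaw
    that is too large, fall within `radius` pixels of it. All thresholds
    are illustrative placeholders, not ASTM values."""
    tree = cKDTree(centroids)                  # spatial index over flaw centroids
    verdicts = []
    for c in centroids:
        neighbours = tree.query_ball_point(c, r=radius)  # flaws near this one
        cluster_sizes = sizes[neighbours]
        reject = len(neighbours) > max_count or cluster_sizes.max() > max_size
        verdicts.append("Reject" if reject else "Accept")
    return verdicts

# Example: five flaws with (x, y) centroids and equivalent sizes in mm
centroids = np.array([[10, 10], [12, 14], [15, 11], [300, 300], [305, 298]])
sizes = np.array([0.5, 0.8, 0.6, 1.2, 0.4])
print(grade_flaws(centroids, sizes))
```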
Figures:
Figure 1. Process flow in NDT according to ASTM E1316-22a.
Figure 2. Grading references from ASTM 2973 with enhanced visualization of porosity flaws, arranged in order of increasing severity from 1 to 4.
Figure 3. Workflow for creating input images with real flaws, each flaw color-coded to differentiate between four distinct classes.
Figure 4. (a) An image with a random distribution of generated flaws; (b) a random classification of flaws into four distinct color codes. Original image size is 3098 × 3097 pixels.
Figure 5. Schematic representation of the proposed defect grading methodology.
Figure 6. Rows (a–d) represent porosity, shrinkage, cold fill, and foreign body; columns (i–iii) show the isolated flaw class, bounding boxes, and highlighted bounding boxes. Bounding box color indicates the grade: grade 1 (blue), grade 2 (green), grade 3 (orange), grade 4 (brown).
17 pages, 8360 KiB  
Article
Mode I Stress Intensity Factor Solutions for Cracks Emanating from a Semi-Ellipsoidal Pit
by Hasan Saeed, Robin Vancoillie, Farid Mehri Sofiani and Wim De Waele
Materials 2024, 17(19), 4777; https://doi.org/10.3390/ma17194777 - 28 Sep 2024
Viewed by 453
Abstract
In linear elastic fracture mechanics, the stress intensity factor (SIF) describes the magnitude of the stress singularity near a crack tip caused by remote stress and is related to the rate of fatigue crack growth. The literature lacks SIF solutions for cracks emanating from a three-dimensional semi-ellipsoidal pit. This study undertakes a comprehensive parametric investigation of the Mode I stress intensity factor (KI) for cracks originating from a semi-ellipsoidal pit in a plate. The work utilizes finite element analysis, controlled by Python scripts, to conduct an extensive study of the effect of various pit dimensions and crack lengths on KI. Two cracks in the shape of a circular arc are introduced at the pit mouth perpendicular to the loading direction. The KI values are calculated using the displacement extrapolation method. The effects of the normalized geometric parameters pit-depth-to-pit-width (a/2c), pit-depth-to-plate-thickness (a/t), and crack-radius-to-pit-depth (R/a) are investigated. Based on correlation analysis, the crack-radius-to-pit-depth ratio (R/a) is found to be the dominant parameter. The data obtained from 216 FEA simulations are incorporated into a predictive model using a k-dimensional (k-d) tree and the k-Nearest Neighbour (k-NN) algorithm.
(This article belongs to the Special Issue Plastic Deformation and Mechanical Behavior of Metallic Materials)
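The k-d tree/k-NN surrogate described above can be sketched as a distance-weighted k-NN regressor over the three normalized parameters, with the neighbour search backed by a k-d tree. The training data below are random placeholders standing in for the 216 FEA samples; only k = 4 and the parameter names are taken from the abstract and figure captions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in for the 216 FEA samples: columns are (a/2c, a/t, R/a),
# target is the geometry factor beta (values here are random placeholders).
rng = np.random.default_rng(0)
X = rng.uniform([0.25, 0.01, 0.1], [1.0, 0.5, 1.0], size=(216, 3))
beta = 1.0 + 0.5 * X[:, 2] - 0.3 * X[:, 1] + 0.05 * rng.normal(size=216)

# k = 4 as in the paper's MSE study; the k-d tree backend handles the
# nearest-neighbour search, and distance weighting interpolates between samples.
model = KNeighborsRegressor(n_neighbors=4, algorithm="kd_tree",
                            weights="distance").fit(X, beta)

query = np.array([[0.66, 0.1, 0.65]])   # a/2c, a/t, R/a as in the mesh study
print("predicted beta:", model.predict(query)[0])
```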
Figures:
Figure 1. SEM image of a propagating fatigue crack that initiated from the pit mouth (reproduced with permission from [16]).
Figure 2. Schematic representation of the geometrical model of a pitted plate with emanating cracks for finite element analysis.
Figure 3. Flowchart of the script used to automate the finite element modelling and analysis.
Figure 4. Representation of a hexahedral FE mesh: (a) prismatic view of the plate with pit, (b) close-up view of the pit, (c) cross-sectional view of the pit with a crack, and (d) close-up view of the crack front at the pit mouth.
Figure 5. Three primary modes of crack propagation encountered in fracture mechanics (adapted from [35]).
Figure 6. (a) Crack tip mesh with illustration of the reference frame in the node for which the SIF is calculated; (b) illustration of the displacement extrapolation method to determine the SIF value at the crack tip.
Figure 7. Illustration of the radial path used to calculate K_I: (a) arc length (yellow) and (b) projected length (blue) perpendicular to the crack front (red) for the lower crack tip.
Figure 8. (a) K_I error (%) and (b) computational time, versus number of elements at the crack, for a pit with a/2c = 0.66, R/a = 0.65, and a/t = 0.1.
Figure 9. Stress plots of three different pit configurations: a/2c = (0.25, 0.5, 1.0), R = 0.1 mm, a = 1.5 mm, at an applied stress of 100 MPa.
Figure 10. Scatter plot of K_I (MPa·√mm) at the upper and lower crack tips, organized by (a) groups of a/2c values and (b) groups of R/a values.
Figure 11. Evolution of K_I (MPa·√mm) for the upper crack tip for a/t equal to (a) 0.01, (b) 0.04, (c) 0.07, (d) 0.1, (e) 0.3, and (f) 0.5, with varying a/2c and R/a values.
Figure 12. Evolution of K_I (MPa·√mm) for the lower crack tip for a/t equal to (a) 0.01, (b) 0.04, (c) 0.07, (d) 0.1, (e) 0.3, and (f) 0.5, with varying a/2c and R/a values.
Figure 13. (a) Correlation of normalized geometric parameters with β at both crack tips. (b) Adjusted correlation analysis excluding two specific cases with a/t = 0.3 and 0.5.
Figure 14. (a) Mean squared error (MSE) for different values of the number of nearest neighbours (k); (b) actual vs. predicted values of β (upper crack tip) using the k-NN model for k = 4.
Figure 15. (a) An example of a k-d tree nearest-neighbour structure in 3D for shape values at the upper crack tip, and (b) predicted β value using the nearest-neighbour approach.
18 pages, 5098 KiB  
Article
Prediction of Greenhouse Area Expansion in an Agricultural Hotspot Using Landsat Imagery, Machine Learning and the Markov–FLUS Model
by Melis Inalpulat
Sustainability 2024, 16(19), 8456; https://doi.org/10.3390/su16198456 - 28 Sep 2024
Viewed by 540
Abstract
Greenhouses (GHs) are important elements of agricultural production and help to ensure food security, aligning with the United Nations Sustainable Development Goals (SDGs). However, environmental concerns remain due to the excessive use of plastics. It is therefore important to understand past and future trends in the spatial distribution of GH areas, for which remote sensing data provide rapid and valuable information. The present study aimed to determine GH area changes in an agricultural hotspot, Serik, Türkiye, using 2008 and 2022 Landsat imagery and machine learning, and to predict future patterns (2036 and 2050) via the Markov–FLUS model. The performances of the random forest (RF), k-nearest neighbors (KNN), and k-dimensional tree k-nearest neighbors (KD-KNN) algorithms were compared for GH discrimination. The RF algorithm gave the highest accuracies, of over 90%. GH areas were found to have increased by 73% between 2008 and 2022, with the majority of new areas converted from agricultural lands. Markov-based predictions showed that GHs are likely to increase by 43% and 54% before 2036 and 2050, respectively, and reliable simulations were generated with the FLUS model. This study is believed to serve as a baseline for future research by providing the first attempt at visualizing future GH conditions in the Turkish Mediterranean region.
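The KD-KNN classifier compared above amounts to k-NN classification with the neighbour search handled by a k-d tree, as in the sketch below. Band values, labels, and k are synthetic placeholders; the paper's actual training samples and class scheme are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in training pixels: rows are Landsat band reflectances, labels are
# LULC classes (0 = greenhouse, 1 = agriculture, 2 = other); values synthetic.
rng = np.random.default_rng(1)
train_pixels = rng.random((500, 6))
train_labels = rng.integers(0, 3, size=500)

# "KD-KNN": plain k-NN with the neighbour search backed by a k-d tree.
clf = KNeighborsClassifier(n_neighbors=5, algorithm="kd_tree")
clf.fit(train_pixels, train_labels)

scene = rng.random((1000, 6))           # flattened image pixels to classify
lulc_map = clf.predict(scene)           # per-pixel class labels
print(np.bincount(lulc_map))
```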
Figures:
Figure 1. Location of the study area within Antalya province and Türkiye.
Figure 2. The steps implemented during this study.
Figure 3. The ancillary data used in the ANN training step.
Figure 4. Produced LULC maps of 2008: (a) LULC2008-RF, (b) LULC2008-KNN, and (c) LULC2008-KD-KNN.
Figure 5. Produced LULC maps of 2022: (a) LULC2022-RF, (b) LULC2022-KNN, and (c) LULC2022-KD-KNN.
Figure 6. LULC class areas (ha) of LULC2008-RF, LULC2008-KNN, LULC2008-KD-KNN, LULC2022-RF, LULC2022-KNN, and LULC2022-KD-KNN.
Figure 7. LULC conversions from one class to another between 2008 and 2022.
Figure 8. LULC conversion map representing converted pixel locations.
Figure 9. Predictions for class areas (ha) of LULC2036 and LULC2050.
Figure 10. Simulation of predicted class areas: (a) LULC2036 and (b) LULC2050.
20 pages, 4893 KiB  
Article
Interactive 3D Vase Design Based on Gradient Boosting Decision Trees
by Dongming Wang, Xing Xu, Xuewen Xia and Heming Jia
Algorithms 2024, 17(9), 407; https://doi.org/10.3390/a17090407 - 11 Sep 2024
Viewed by 552
Abstract
Traditionally, ceramic design began with sketches on rough paper and later evolved into using CAD software for more complex designs and simulations. With technological advancements, optimization algorithms have gradually been introduced into ceramic design to enhance design efficiency and creative diversity. The use of Interactive Genetic Algorithms (IGAs) for ceramic design is a new approach, but an IGA requires a significant amount of user evaluation, which can result in user fatigue. To overcome this problem, this paper introduces the LightGBM and CatBoost algorithms to improve the IGA, because their excellent predictive capabilities can assist users in evaluations. The algorithms are also applied to a vase design platform for validation. First, bicubic Bézier surfaces are used for modeling, and the genetic encoding of the vase is designed with appropriate evolutionary operators selected. Second, user data from the online platform are collected to train and optimize the LightGBM and CatBoost algorithms. Finally, LightGBM and CatBoost are combined with an IGA and applied to the vase design platform to verify their effectiveness. Compared with traditional IGAs and with KD tree, Random Forest, and XGBoost proxy models, the IGAs improved with LightGBM and CatBoost perform better overall, requiring fewer evaluations and less time; their R² values, 0.816 and 0.839, respectively, are higher than those of the other proxy models. The improved method proposed in this paper can effectively alleviate user fatigue and enhance the user experience of participating in product design.
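A minimal sketch of the proxy-evaluator idea: a gradient-boosting regressor trained on past user scores predicts the fitness of new offspring, so only a few designs need human evaluation. It assumes the lightgbm package is available; the genotype encoding, score range, and hyperparameters below are illustrative, not the paper's settings.

```python
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(2)
# Stand-in for collected user data: binary genotypes of evaluated vases
# and the scores users gave them (both synthetic here).
genotypes = rng.integers(0, 2, size=(300, 48)).astype(float)
scores = rng.uniform(1, 10, size=300)

proxy = LGBMRegressor(n_estimators=200, learning_rate=0.05)
proxy.fit(genotypes, scores)            # train the surrogate evaluator

# New offspring are scored by the proxy; only uncertain or elite designs
# would be passed back to the user, reducing evaluation fatigue.
offspring = rng.integers(0, 2, size=(20, 48)).astype(float)
predicted_fitness = proxy.predict(offspring)
print(predicted_fitness[:5])
```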
Figures:
Figure 1. Proxy model flowchart.
Figure 2. Updating the proxy model.
Figure 3. Composition of vase coding.
Figure 4. Roulette wheel selection.
Figure 5. Graphic interaction mechanism.
Figure 6. Algorithm flow chart.
Figure 7. Comparison of evolutionary strategies.
Figure 8. Comparison of predicted and actual values for five proxy models.
Figure 9. Interactive interface.
Figure 10. (a) Comparison of average fitness values. (b) Comparison of average fitness values in the last generation.
Figure 11. (a) Comparison of average maximum fitness values. (b) Comparison of average maximum fitness values in the last generation.
Figure 12. Mean value and standard deviation of evaluation numbers.
Figure 13. Mean value and standard deviation of evaluation time.
25 pages, 7594 KiB  
Article
A Novel Point Cloud Adaptive Filtering Algorithm for LiDAR SLAM in Forest Environments Based on Guidance Information
by Shuhang Yang, Yanqiu Xing, Dejun Wang and Hangyu Deng
Remote Sens. 2024, 16(15), 2714; https://doi.org/10.3390/rs16152714 - 24 Jul 2024
Cited by 1 | Viewed by 665
Abstract
To address the issue of accuracy in Simultaneous Localization and Mapping (SLAM) for forested areas, a novel point cloud adaptive filtering algorithm is proposed in this paper, based on point cloud data obtained by backpack Light Detection and Ranging (LiDAR). The algorithm employs a K-D tree to construct the spatial position information of the 3D point cloud, deriving a linear model, the guidance information, from both the original and filtered point cloud data. The parameters of the linear model are determined by minimizing a cost function with an optimization strategy, and a guided point cloud filter is then constructed from these parameters. Comparing the diameter at breast height (DBH) and tree height before and after filtering with the measured true values shows that the accuracy of SLAM mapping is significantly improved after filtering. The Mean Absolute Error (MAE) values of DBH before and after filtering are 2.20 cm and 1.16 cm; the Root Mean Square Error (RMSE) values are 4.78 cm and 1.40 cm; and the relative RMSE values are 29.30% and 8.59%. For tree height, the MAE values before and after filtering are 0.76 m and 0.40 m; the RMSE values are 1.01 m and 0.50 m; and the relative RMSE values are 7.33% and 3.65%. The experimental results validate that the proposed adaptive point cloud filtering method based on guidance information is an effective preprocessing method for enhancing the accuracy of SLAM mapping in forested areas.
(This article belongs to the Special Issue Remote Sensing and Smart Forestry II)
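A simplified, self-guided version of the filter described above: each point's k-d tree neighbourhood is fit with the classic guided-filter linear model q = a·p + b, with k and ε playing the roles they have in the paper's Figure 11 parameter study. This is a sketch of the general guided-filter idea, not the paper's exact adaptive, cost-minimizing formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def guided_point_filter(points, k=10, eps=0.05):
    """Self-guided point cloud filter: fit q = a*p + b over each point's
    k nearest neighbours and replace the point with the model output."""
    tree = cKDTree(points)                     # k-d tree over the cloud
    _, idx = tree.query(points, k=k)           # k nearest neighbours per point
    filtered = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbh = points[nbrs]
        mean = nbh.mean(axis=0)
        var = nbh.var(axis=0).mean()           # scalar neighbourhood variance
        a = var / (var + eps)                  # shrink less in structured areas
        b = (1.0 - a) * mean
        filtered[i] = a * points[i] + b        # pull noisy points to the model
    return filtered

cloud = np.random.default_rng(3).normal(size=(2000, 3))
print(guided_point_filter(cloud, k=10, eps=0.05).shape)
```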
Figures:
Figure 1. Study area.
Figure 2. The BLS system.
Figure 3. Data collection trajectories: (a) Plot 1, (b) Plot 2, (c) Plot 3, (d) Plot 4, (e) Plot 7, (f) Plot 8, (g) Plot 13, (h) Plot 14, (i) Plot 15.
Figure 4. Location of the adaptive guided filter in SLAM algorithms.
Figure 5. Flowchart of the adaptive guided point cloud filter.
Figure 6. A single-frame point cloud before and after filtering: (a) original point cloud; (b) filtered point cloud.
Figure 7. The forest point cloud before and after filtering: (a) original point cloud; (b) filtered point cloud.
Figure 8. Comparison of DBH errors at various plots before and after implementing the adaptive guided point cloud filtering algorithm: (a) bias; (b) MAE; and (c) RMSE.
Figure 9. Comparison of tree height errors at various plots before and after implementing the adaptive guided point cloud filtering algorithm: (a) bias; (b) MAE; and (c) RMSE.
Figure 10. Original point cloud frame.
Figure 11. Effect of different parameters of the adaptive guided point cloud filtering algorithm on the point cloud frame: (a–d) k = 5 with ε = 0.02, 0.05, 0.10, 0.20; (e–h) k = 10 with ε = 0.02, 0.05, 0.10, 0.20; (i–l) k = 20 with ε = 0.02, 0.05, 0.10, 0.20.
Figure 12. Error analysis of tree parameters under different sample plot slopes: (a) DBH; (b) tree height.
Figure 13. Accuracy assessment for DBH estimation before and after filtering: (a) original point cloud; (b) filtered point cloud.
Figure 14. Accuracy assessment for DBH estimation across multiple intervals before and after filtering: (a) original point cloud; (b) filtered point cloud. The blue radial line represents the correlation coefficient (R²), the horizontal and vertical axes the standard deviation (SD) of the estimated and true values, the red arcs the standard deviation of the true DBH values, the colored symbols the models for the five DBH intervals, and the distance of each symbol to the observation point the Root Mean Square Error (RMSE).
Figure 15. Accuracy assessment for tree height estimation before and after filtering: (a) original point cloud; (b) filtered point cloud.
Figure 16. Accuracy assessment for tree height estimation across multiple intervals before and after filtering: (a) original point cloud; (b) filtered point cloud. Interpretation as in Figure 14, with the red arcs showing the standard deviation of the true tree height values and the colored symbols covering the two tree height intervals.
Figure 17. Comparison of the tree height point cloud before and after filtering (red: original; black: filtered): (a) dense single-tree point clouds; (b) single-tree point clouds with ground noise; and (c) sparse single-tree point clouds.
22 pages, 13840 KiB  
Article
Tree Canopy Volume Extraction Fusing ALS and TLS Based on Improved PointNeXt
by Hao Sun, Qiaolin Ye, Qiao Chen, Liyong Fu, Zhongqi Xu and Chunhua Hu
Remote Sens. 2024, 16(14), 2641; https://doi.org/10.3390/rs16142641 - 19 Jul 2024
Cited by 1 | Viewed by 594
Abstract
Canopy volume is a crucial biological parameter for assessing tree growth, accurately estimating forest Above-Ground Biomass (AGB), and evaluating ecosystem stability. Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS) are advanced precision mapping technologies that capture highly accurate point clouds for forest digitization studies. Despite advances in calculating canopy volume, challenges remain in accurately extracting the canopy and removing gaps. This study proposes a canopy volume extraction method based on an improved PointNeXt model, fusing ALS and TLS point cloud data. The improved PointNeXt is first utilized to extract the canopy, enhancing extraction accuracy and mitigating under-segmentation and over-segmentation issues. To effectively calculate canopy volume, the canopy is divided into multiple levels, each projected onto the xOy plane. An improved Mean Shift algorithm, combined with a KdTree, is then employed to remove gaps and obtain the parts of the real canopy. Subsequently, a convex hull algorithm is utilized to calculate the area of each part, and the sum of the areas of all parts multiplied by their heights yields the canopy volume. The proposed method's performance is tested on a dataset comprising poplar, willow, and cherry trees. The improved PointNeXt model achieves a mean intersection over union (mIoU) of 98.19% on the test set, outperforming the original PointNeXt by 1%. Regarding canopy volume, the algorithm's Root Mean Square Error (RMSE) is 0.18 m³, and a high correlation is observed between predicted and measured canopy volumes, with an R-squared (R²) value of 0.92. The proposed method therefore acquires canopy volume effectively and efficiently, providing a stable and accurate technical reference for forest biomass statistics.
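The volume computation described above (level slicing, xOy projection, convex hull area, and area × height summation) can be sketched as below. The Mean Shift gap-removal step is omitted and the level count is an illustrative parameter; for 2D input, scipy's ConvexHull reports the enclosed area through its volume attribute.

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_volume(points, n_levels=10):
    """Slice the canopy into horizontal levels, project each slice onto the
    xOy plane, take the 2D convex hull area, and sum area * slice height.
    The paper's Mean Shift gap removal is omitted in this sketch."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_levels + 1)
    dz = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)][:, :2]   # project slice onto xOy
        if len(sl) >= 3:
            # for 2D input, ConvexHull.volume is the enclosed area
            volume += ConvexHull(sl).volume * dz
    return volume

canopy = np.random.default_rng(4).uniform(-1, 1, size=(5000, 3))
print(f"approximate volume: {canopy_volume(canopy):.3f}")
```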
Figures:
Figure 1. Geographic location map of the study area.
Figure 2. Field scanning scene.
Figure 3. Distribution of the foundations of the 17 stations in the sample site.
Figure 4. Raw point cloud data: (a) TLS point cloud; (b) ALS point cloud.
Figure 5. Overall framework.
Figure 6. Point cloud data after registration.
Figure 7. Filtering results of the tree point cloud: (a) original tree point cloud; (b) filtered and denoised tree point cloud.
Figure 8. Tree extraction results: (a,b) including brackets, people, and weeds; (c,d) pretreated to contain no weeds.
Figure 9. The result of PointNet++.
Figure 10. Overall network structure of the improved PointNeXt model.
Figure 11. Grouped vector attention block.
Figure 12. Group coding function.
Figure 13. EdgeConv block.
Figure 14. EdgeConv calculation principle.
Figure 15. Parts of the dataset: (a) area1, (b) area2, (c) area3, (d) area4, (e) area5, and (f) area6.
Figure 16. Semantic segmentation results.
Figure 17. Comparison before and after improvement: (a) based on the initial Mean Shift; (b) based on the improved Mean Shift.
Figure 18. Different presentation methods: (a) voxel-based; (b) 3D convex hull; (c) HPCP-CC.
Figure 19. Scatter plots of the three algorithms compared to the manually measured true volume: (a) voxel-based; (b) 3D convex hull; (c) HPCP-CC.
17 pages, 10669 KiB  
Article
Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
by Yujing Wang, Fang Ye, Jiusun Zeng, Jinhui Cai and Wangsen Huang
Sensors 2024, 24(14), 4434; https://doi.org/10.3390/s24144434 - 9 Jul 2024
Viewed by 518
Abstract
Aiming at the problem of the difficult segmentation of adherent images, due to the not fully convex shape of peanut pods, their complex surface texture, and their diverse structures, a multimodal fusion algorithm is proposed to achieve 2D segmentation of adherent peanut images with the assistance of 3D point clouds. Firstly, the point cloud of a moving peanut is captured line by line using a line structured light imaging system, its three-dimensional shape is obtained through splicing, and a local surface-fitting algorithm is combined with this to calculate normal vectors and curvature. Seed points are selected based on the principle of minimum curvature, and neighboring points are searched using the KD-Tree algorithm. The point cloud is filtered and segmented according to the normal angle and curvature thresholds until the point cloud segmentation of the individual peanut is complete, and the two-dimensional contour of the individual peanut model is then extracted using the rolling ball method. A search template is established, multiscale feature matching is applied to the adherent image to achieve region localization, and finally the segmentation region is optimized by an opening operation. The experimental results show that the algorithm improves segmentation accuracy, reaching 96.8%.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)
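A minimal sketch of the curvature-seeded region growing described above: the minimum-curvature point seeds one region, and k-d tree neighbours join if their normals deviate little, continuing the growth where curvature stays low. Normals and curvature are assumed precomputed, and the thresholds are illustrative, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def region_grow(points, normals, curvature, k=15,
                angle_thresh=np.deg2rad(15), curv_thresh=0.05):
    """Grow one region from the minimum-curvature seed: neighbours join if
    their normal deviates less than angle_thresh, and become new seeds if
    their curvature is below curv_thresh."""
    tree = cKDTree(points)
    seed = int(np.argmin(curvature))          # minimum-curvature seed point
    region, visited = {seed}, {seed}
    queue = deque([seed])
    while queue:
        i = queue.popleft()
        _, nbrs = tree.query(points[i], k=k)
        for j in nbrs:
            if j in visited:
                continue
            visited.add(j)
            cos = abs(np.dot(normals[i], normals[j]))
            if np.arccos(np.clip(cos, -1, 1)) < angle_thresh:
                region.add(j)
                if curvature[j] < curv_thresh:
                    queue.append(j)           # smooth points keep growing
    return np.fromiter(region, dtype=int)

rng = np.random.default_rng(5)
pts = rng.normal(size=(1000, 3))
nrm = rng.normal(size=(1000, 3)); nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
crv = rng.random(1000)
print(len(region_grow(pts, nrm, crv)), "points in the grown region")
```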
Figures:
Figure 1. The different shapes of peanut pods.
Figure 2. The multimodal fusion segmentation algorithm flow.
Figure 3. The peanut 3D reconstruction flow.
Figure 4. Line structured light 3D schematic.
Figure 5. Three-dimensional (3D) region growing algorithm flow.
Figure 6. Boundary extraction by the rolling ball method.
Figure 7. Geometric relationship diagram of the rolling ball method.
Figure 8. Matching-based segmentation algorithm flow.
Figure 9. Peanut template-matching segmentation effect: (a) peanut contour template; (b) matching result; (c) segmentation result.
Figure 10. Comparison of the opening operation: (a) before the opening operation; (b) after the opening operation.
Figure 11. Peanut 3D reconstruction effect: (a) peanut 2D original; (b) peanut 3D point cloud.
Figure 12. Adherent peanut 3D segmentation.
Figure 13. Segmentation results of different algorithms: (a) watershed algorithm based on distance transformation; (b) watershed algorithm based on gradient transformation; (c) proposed algorithm.
Figure 14. Adhesion segmentation experiment under different light source intensities.
Figure 15. Segmentation results under different adhesion conditions.
Figure 16. Peanut effective proportion adhesion segmentation experiment.
Figure 17. Adhesion segmentation experiment with different numbers of similar samples.
18 pages, 7593 KiB  
Article
A Hybrid Improved SAC-IA with a KD-ICP Algorithm for Local Point Cloud Alignment Optimization
by Yinbao Cheng, Haiman Chu, Yaru Li, Yingqi Tang, Zai Luo and Shaohui Li
Photonics 2024, 11(7), 635; https://doi.org/10.3390/photonics11070635 - 2 Jul 2024
Viewed by 716
Abstract
To overcome the incompleteness of point cloud data obtained when laser scanners scan complex surfaces, multi-view point cloud data need to be aligned for use. A hybrid improved SAC-IA with a KD-ICP algorithm is proposed for local point cloud alignment optimization. The scanned point cloud data are preprocessed with statistical filtering as well as uniform down-sampling. The sample consensus initial alignment (SAC-IA) algorithm is improved by introducing a dissimilarity vector for initial point cloud alignment. In addition, the iterative closest point (ICP) algorithm is improved by incorporating a bidirectional KD-tree to form the KD-ICP algorithm for fine point cloud alignment. Finally, the algorithms are compared in terms of runtime and alignment accuracy. The algorithms are implemented in Visual Studio 2013 with a Point Cloud Library environment and evaluated in test and practical experiments. The overall alignment method runs 40–50% faster. The improved SAC-IA algorithm provides better transformed poses, and combining it with the KD-ICP algorithm to select corresponding nearest-neighbour pairs improves both the accuracy and the applicability of the alignment.
(This article belongs to the Special Issue Recent Advances in 3D Optical Measurement)
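One fine-alignment iteration in the spirit of the bidirectional KD-ICP above might look like the sketch below: correspondences are kept only when source and target points are mutually nearest (checked with two k-d trees), and the rigid transform is solved in closed form by SVD (the Kabsch method). This is a generic simplification, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def kd_icp_step(src, tgt):
    """One ICP iteration with bidirectional k-d tree matching: keep pair
    (i, j) only if j is the nearest target to src[i] AND i is the nearest
    source to tgt[j], then solve the rigid transform by SVD."""
    fwd = cKDTree(tgt).query(src)[1]          # src -> nearest tgt index
    bwd = cKDTree(src).query(tgt)[1]          # tgt -> nearest src index
    keep = np.flatnonzero(bwd[fwd] == np.arange(len(src)))  # mutual pairs
    p, q = src[keep], tgt[fwd[keep]]
    p0, q0 = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(p0.T @ q0)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return src @ R.T + t

rng = np.random.default_rng(6)
target = rng.normal(size=(500, 3))
source = target + 0.05                        # small known offset
for _ in range(10):
    source = kd_icp_step(source, target)
print(np.abs(source - target).mean())         # should shrink toward 0
```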
Figures:
Figure 1. Scope of influence.
Figure 2. Flowchart of the improved alignment algorithm.
Figure 3. Classical sample: (a) partial point cloud; (b) complete point cloud.
Figure 4. Initial alignment of the classical sample: (a) initial position; (b) SAC-IA; (c) RANSAC; (d) improved SAC-IA.
Figure 5. Local differences in the initial alignment of the dragon point cloud.
Figure 6. Comparison of initial alignment algorithms for classical samples: (a) Bunny sample; (b) Dragon sample.
Figure 7. Fine alignment of the classical sample: (a) SAC-IA + ICP; (b) SAC-IA + KD-ICP; (c) improved SAC-IA + ICP; (d) the proposed algorithm.
Figure 8. Laser scanner experimental setup.
Figure 9. Scanned workpiece model: (a) scanned data; (b) ideal CAD data.
Figure 10. Scanned workpiece before and after preprocessing: (a) original scanned point cloud; (b) preprocessed point cloud.
Figure 11. Initial alignment of the scanned workpiece model: (a) initial position; (b) SAC-IA; (c) RANSAC; (d) improved SAC-IA.
Figure 12. Comparison of initial alignment algorithms for the scanned workpiece model.
Figure 13. Fine alignment of the scanned workpiece model: (a) SAC-IA + ICP; (b) SAC-IA + KD-ICP; (c) improved SAC-IA + ICP; (d) the proposed algorithm.
15 pages, 4809 KiB  
Article
LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(13), 2521; https://doi.org/10.3390/electronics13132521 - 27 Jun 2024
Viewed by 639
Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving the super-resolution reconstruction of LiDAR point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree from the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods.
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
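The nearest-neighbour correspondence and inverse distance weighting steps described above reduce to the sketch below, which assumes the two clouds are already aligned (the improved RANSAC step is out of scope here); k and the clouds themselves are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_fuse(low_res, high_res, k=4, eps=1e-9):
    """For every low-resolution point, find its k nearest (already aligned)
    high-resolution points with a k-d tree and blend them with inverse
    distance weights."""
    tree = cKDTree(high_res)
    dists, idx = tree.query(low_res, k=k)
    weights = 1.0 / (dists + eps)                 # inverse distance weights
    weights /= weights.sum(axis=1, keepdims=True)
    # weighted average of each point's high-res neighbourhood
    return (weights[..., None] * high_res[idx]).sum(axis=1)

rng = np.random.default_rng(7)
low = rng.normal(size=(200, 3))
high = rng.normal(size=(5000, 3))
print(idw_fuse(low, high).shape)                  # (200, 3) fused points
```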
Figures:
Figure 1. DenseFPNet.
Figure 2. Flowchart of the point-by-point weighted fusion algorithm for point clouds.
Figure 3. Diagram of the point cloud alignment process.
Figure 4. Frame diagram of the proposed point cloud super-resolution reconstruction method.
Figure 5. Feature-level fusion unit of DenseFPNet.
Figure 6. Schematic diagram of the various files of the dataset built in this paper.
Figure 7. Visual comparison with SPSG and CIRCLE.
Figure 8. Qualitative comparison of the ablation experiments.
22 pages, 30137 KiB  
Article
Satellite Image Cloud Automatic Annotator with Uncertainty Estimation
by Yijiang Gao, Yang Shao, Rui Jiang, Xubing Yang and Li Zhang
Fire 2024, 7(7), 212; https://doi.org/10.3390/fire7070212 - 25 Jun 2024
Viewed by 1109
Abstract
In satellite imagery, clouds obstruct ground information, directly impacting various downstream applications; cloud annotation/cloud detection therefore serves as the initial preprocessing step in remote sensing image analysis. Recently, deep learning methods have improved significantly in the field of cloud detection, but training these methods requires abundant annotated data, which in turn requires experts with professional domain knowledge. Moreover, the influx of remote sensing data from new satellites has further increased the cost of cloud annotation. To address the dependence on labeled datasets and professional domain knowledge, this paper proposes an automatic cloud annotation method for satellite remote sensing images, CloudAUE. Unlike traditional approaches, CloudAUE does not rely on labeled training datasets and can be operated by users without domain expertise. To handle the irregular shapes of clouds, CloudAUE first employs a convex hull algorithm to select cloud and non-cloud regions with polygons: the cloud region is selected first, with points at its edges chosen sequentially as polygon vertices to form a polygon enclosing the region, and the same selection is then performed on non-cloud regions. Subsequently, the fast KD-Tree algorithm is used for pixel classification. Finally, an uncertainty method is proposed to evaluate the quality of annotation: when the confidence value of the image exceeds a preset threshold, the annotation process terminates with satisfactory results; when it falls below the threshold, the image undergoes a subsequent round of annotation. Through experiments on two labeled datasets, HRC and Landsat 8, CloudAUE demonstrates accuracy comparable or superior to deep learning algorithms and requires only one to two annotations to obtain ideal results. An unlabeled self-built Google Earth dataset is utilized to validate the effectiveness and generalizability of CloudAUE, and to show its extension capabilities in other fields, CloudAUE also achieves desirable results on a forest fire dataset. Finally, some suggestions are provided to improve annotation performance and reduce the number of annotations.
(This article belongs to the Special Issue Intelligent Forest Fire Prediction and Detection)
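A minimal sketch of the kd-tree classification step: pixels sampled from the user's cloud and non-cloud polygons become labeled references, and every remaining pixel takes the label of its nearest reference in RGB space. The polygon-selection and uncertainty stages are omitted, and all sample values are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def annotate(image, cloud_rgb, clear_rgb):
    """Classify every pixel as cloud / non-cloud by its nearest annotated
    sample in RGB space using a k-d tree. cloud_rgb / clear_rgb stand in
    for pixels the user enclosed with the convex-hull polygons."""
    samples = np.vstack([cloud_rgb, clear_rgb]).astype(float)
    labels = np.r_[np.ones(len(cloud_rgb)), np.zeros(len(clear_rgb))]
    tree = cKDTree(samples)
    flat = image.reshape(-1, 3).astype(float)
    _, idx = tree.query(flat)                    # nearest annotated sample
    return labels[idx].reshape(image.shape[:2])  # 1 = cloud, 0 = non-cloud

rng = np.random.default_rng(8)
img = rng.integers(0, 256, size=(64, 64, 3))
bright = rng.integers(200, 256, size=(50, 3))    # cloud-like samples
dark = rng.integers(0, 120, size=(50, 3))        # ground-like samples
mask = annotate(img, bright, dark)
print(mask.mean())                               # fraction flagged as cloud
```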
Figures:
Figure 1. Cloud detection automatic annotation flowchart.
Figure 2. Illustration of two convex hulls on a satellite image, marked with colored polygons.
Figure 3. Illustration of the convex hull algorithm.
Figure 4. Qualitative results of the various methods on an image covered by thick clouds. Red and blue polygons are CloudAUE annotation areas for cloud and non-cloud regions, respectively.
Figure 5. Qualitative results of the various methods on an image covered by thin clouds.
Figure 6. Performance of the various methods on eight background types. The green octagon represents CloudAUE; the red, blue, and orange octagons represent UNet, Deeplabv3+, and Cloud-AttU, respectively.
Figure 7. Qualitative results of the various methods on an image covered with thin clouds from the Landsat 8 dataset.
Figure 8. Qualitative results of the various methods on an image covered with scattered clouds from the Landsat 8 dataset.
Figure 9. Annotation results of CloudAUE on the self-built Google Earth dataset.
Figure 10. Annotation results of CloudAUE on a forest fire dataset.
Figure 11. Two different selections of annotation areas and the corresponding annotation results.
Figure 12. The distribution of confidence values on the HRC and Landsat 8 datasets.
Figure 13. The number of annotations on the HRC and Landsat 8 datasets.
Figure 14. Qualitative results for the number of annotations on an urban background image: (a) original image and first annotation areas; (b) result of the first annotation and second annotation areas; (c) result of the second annotation and third annotation areas; (d) result of the third annotation.
Figure 15. Qualitative results for the number of annotations on an agriculture image: panels (a–d) as in Figure 14.
23 pages, 5664 KiB  
Article
3D Vase Design Based on Interactive Genetic Algorithm and Enhanced XGBoost Model
by Dongming Wang and Xing Xu
Mathematics 2024, 12(13), 1932; https://doi.org/10.3390/math12131932 - 21 Jun 2024
Cited by 1 | Viewed by 705
Abstract
The human–computer interaction attribute of the interactive genetic algorithm (IGA) allows users to participate in the product design process, in which products need to be evaluated; requiring a large number of evaluations leads to user fatigue. To address this issue, this paper utilizes an XGBoost proxy model tuned by particle swarm optimization, together with the graphical interaction mechanism (GIM), to construct an improved interactive genetic algorithm (PXG-IGA), which is then applied to 3D vase design. Firstly, the 3D vase shape is designed using a bicubic Bézier surface, and the individual genetic code is binary and includes three parts: the vase control points, the vase height, and the texture picture. Secondly, the XGBoost evaluation proxy model is constructed from collected online user evaluation data, and the particle swarm optimization algorithm is used to optimize the hyperparameters of XGBoost. Finally, the GIM is introduced after several generations, allowing users to change product styles independently to better meet their expectations. Based on the PXG-IGA, an online 3D vase design platform has been developed and compared to the traditional IGA and to KD tree, random forest, and standard XGBoost proxy models. Compared with the traditional IGA, the number of evaluations is reduced by 58.3% and the evaluation time by 46.4%. Compared with the other proxy models, prediction accuracy is improved by 1.3% to 20.2%. To a certain extent, the PXG-IGA reduces users' operation fatigue and provides new ideas for improving user experience and product design efficiency.
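The PSO-tuned surrogate can be sketched as a small particle swarm searching XGBoost hyperparameters against cross-validated R². The swarm size, bounds, iteration count, and the two tuned hyperparameters are illustrative choices, and the training data are random stand-ins for the collected user evaluations.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
# Synthetic stand-in for the collected user evaluations.
X = rng.integers(0, 2, size=(300, 40)).astype(float)
y = rng.uniform(1, 10, size=300)

def loss(params):
    lr, depth = params
    model = XGBRegressor(n_estimators=50, learning_rate=float(lr),
                         max_depth=int(round(depth)), verbosity=0)
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Tiny particle swarm over (learning_rate, max_depth).
lo, hi = np.array([0.01, 2]), np.array([0.3, 8])
pos = rng.uniform(lo, hi, size=(6, 2))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pval.argmin()]
for _ in range(4):
    r1, r2 = rng.random((2, 6, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([loss(p) for p in pos])
    improved = val < pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[pval.argmin()]
print("best (learning_rate, max_depth):", gbest)
```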
Figures:
Figure 1. The implementation process of the proxy model evaluation.
Figure 2. Updating the proxy model.
Figure 3. Vase contour.
Figure 4. Vase mesh model.
Figure 5. The coding composition of the vase.
Figure 6. Gene codes.
Figure 7. Crossover and mutation.
Figure 8. Graphic interaction mechanism.
Figure 9. Algorithm flow chart.
Figure 10. Predicted values of the XGBoost and PSO-XGBoost models compared with the real values.
Figure 11. Predicted values of the KDT, RF, and PSO-XGBoost models compared with the true values.
Figure 12. Interactive interface.
Figure 13. Comparison of the average fitness of user-evaluated individuals.
Figure 14. Comparison of the average maximum fitness values of user-evaluated individuals.
Figure 15. Comparison of the mean number of evaluations and evaluation time.
Figure 16. Comparison of the standard deviation of the number of evaluations and evaluation time.
17 pages, 14025 KiB  
Article
Point Cloud Registration Algorithm Based on Adaptive Neighborhood Eigenvalue Loading Ratio
by Zhongping Liao, Tao Peng, Ruiqi Tang and Zhiguo Hao
Appl. Sci. 2024, 14(11), 4828; https://doi.org/10.3390/app14114828 - 3 Jun 2024
Viewed by 665
Abstract
Traditional iterative closest point (ICP) registration algorithms are sensitive to initial positions and easily fall into the trap of locally optimal solutions. To address this problem, a point cloud registration algorithm based on adaptive neighborhood eigenvalue loading ratios is put forward in this study. In the algorithm, the resolution of the point cloud is first calculated and used as an adaptive basis to determine the raster widths and the radii of spherical neighborhoods in the raster filtering; the adaptive raster filtering is then applied to the point cloud for denoising, while the eigenvalue loading ratios of point neighborhoods are calculated to extract and match the contour feature points; subsequently, sample consensus initial alignment (SAC-IA) is used to carry out coarse registration; and finally, fine registration is delivered with KD-tree-accelerated ICP. The experimental results demonstrate that the feature points extracted with this method are highly representative while consuming only 35.6% of the time consumed by other feature point extraction algorithms. Additionally, in noisy and low-overlap scenarios, the registration error of this method can be controlled at a level of 0.1 mm, with the registration speed improved by 56% on average over that of other algorithms. Taken together, the method in this study can not only ensure strong robustness in registration but also deliver high registration accuracy and efficiency.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
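The neighbourhood-eigenvalue idea above can be sketched as follows: each point's k-d tree neighbourhood yields a covariance matrix whose eigenvalue make-up flags contour points. Here the loading ratio is taken to be the smallest eigenvalue's share of the total (the common surface-variation measure); the paper's exact ratio definition and adaptive neighbourhood sizing are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def contour_features(points, k=20, ratio_thresh=0.05):
    """Flag contour feature points by the eigenvalue make-up of each point's
    neighbourhood covariance; the ratio and threshold are illustrative."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbh = points[nbrs] - points[nbrs].mean(axis=0)
        evals = np.linalg.eigvalsh(nbh.T @ nbh / k)    # ascending eigenvalues
        feats[i] = evals[0] / evals.sum()              # 0 flat ... 1/3 isotropic
    return np.flatnonzero(feats > ratio_thresh)        # sharp / contour points

rng = np.random.default_rng(10)
plane = np.c_[rng.uniform(size=(800, 2)), np.zeros(800)]   # flat plane
wall = np.c_[rng.uniform(size=(200, 1)), np.zeros((200, 1)),
             rng.uniform(size=(200, 1))]                   # vertical wall
keypoints = contour_features(np.vstack([plane, wall]))
print(len(keypoints), "contour points found near the plane-wall junction")
```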
Figures:
Figure 1. Flow chart of the point cloud preprocessing and combined registration algorithm.
Figure 2. Point neighborhood division.
Figure 3. Comparison of feature point extraction: primordial point cloud, hybrid point cloud, feature point cloud.
Figure 4. Three different types of experimental subjects.
Figure 5. Visualization of the results after instance preprocessing.
Figure 6. Comparison of the extraction effects of the two algorithms in different scenes.
Figure 7. Optimization maps of matched harnesses and mapping of feature point descriptors.
Figure 8. Comparison of the registration effects of the four types of algorithms in different scenarios; the best performance figures are highlighted in bold font.
17 pages, 5657 KiB  
Article
Study on Obstacle Detection Method Based on Point Cloud Registration
by Hongliang Wang, Jianing Wang, Yixin Wang, Dawei Pi, Yijie Chen and Jingjing Fan
World Electr. Veh. J. 2024, 15(6), 241; https://doi.org/10.3390/wevj15060241 - 30 May 2024
Viewed by 834
Abstract
An efficient obstacle detection system is one of the most important guarantees for improving the active safety performance of autonomous vehicles. This paper proposes an obstacle detection method based on high-precision positioning, applied to blocked zones, to solve the problems of the high complexity of detection results, low computational efficiency, and high load in traditional obstacle detection methods. Firstly, an NDT registration method, which uses the likelihood function as the optimal value of the registration score function to calculate the registration parameters, is designed to match the scanning point cloud and the target point cloud. Secondly, a target reduction method combining threshold judgment and a binary tree search algorithm is designed to filter out the point cloud of non-road obstacles and improve the processing speed of the computing platform. Meanwhile, a KD-tree is used to speed up the clustering process. Finally, a vehicle remote control simulation platform combining a cloud platform and a mobile terminal is designed to verify the effectiveness of the strategy in practical application. The results prove that the proposed obstacle detection method can improve the efficiency and accuracy of detection.
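The kd-tree-accelerated clustering mentioned above is typically a Euclidean clustering pass like the sketch below: points within a radius of any cluster member join that cluster, and the k-d tree replaces the full pairwise scan. The radius and minimum cluster size are illustrative parameters.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_cluster(points, radius=0.5, min_size=10):
    """Group obstacle points into clusters: points within `radius` of a
    cluster member join the cluster. The k-d tree turns each neighbourhood
    lookup into a fast range query instead of a full scan."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        queue = deque([i])
        labels[i] = cluster_id
        while queue:
            j = queue.popleft()
            for n in tree.query_ball_point(points[j], r=radius):
                if labels[n] == -1:
                    labels[n] = cluster_id
                    queue.append(n)
        cluster_id += 1
    sizes = np.bincount(labels)
    return labels, np.flatnonzero(sizes >= min_size)   # keep real obstacles

rng = np.random.default_rng(11)
obstacles = np.vstack([rng.normal(loc, 0.1, size=(50, 3))
                       for loc in ([0, 0, 0], [5, 5, 0], [9, 1, 0])])
labels, kept = euclidean_cluster(obstacles)
print(len(kept), "clusters kept")
```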
Show Figures

Figure 1

Figure 1
<p>The frame of the proposed obstacle detection method.</p>
Full article ">Figure 2
<p>The process of meshing the target point cloud. The picture on the left represents the reference point cloud to be registered. The figure on the right represents the meshing process of placing the target point cloud into a cube grid with side length ‘C’ meters. Multiple cube grids in the figure form the reference point cloud space.</p>
Full article ">Figure 3
<p>The process of ground segmentation. The figure includes the LIDAR coordinate system ouster and the vehicle coordinate system base_link. Point1~point5 are different scanning points, and the distance increases in turn. Object1 and object2 are obstacles in front of the vehicle. If point3 is the current point, point2 is the previous point, and then point3 is the ground point. If point4 is the current point, point3 is the previous point, and then point4 is not a ground point. The same can be proven that if point1 and point2 are ground points, point5 is not a ground point.</p>
Full article ">Figure 4
<p>The calculation of the distance range. The left figure shows the distribution of <span class="html-italic">P<sub>a</sub></span>, where P1 and P3 belong to the same ray scan point, and the difference between P2 and P3 is 0.26°. The picture on the right shows the range calculated for P3. If there are some points of <span class="html-italic">P<sub>m</sub></span> in the range and the number of these points is greater than <span class="html-italic">p</span>, P3 is a non-road obstacle point; otherwise, it is a road obstacle point.</p>
Figure 5">
Figure 5
<p>The results of NDT registration and positioning. Inside the yellow ellipse are the buildings participating in the registration. The trees participating in the registration are within the green curve. The brown boxes represent ground points. The green point represents the point cloud after registration <span class="html-italic">P<sub>a</sub></span>. The white point represents the target point cloud <span class="html-italic">P<sub>m</sub></span> participating in the registration. Points in other colors all indicate point clouds participating in registration <span class="html-italic">P<sub>r</sub></span>.</p>
Figure 6">
Figure 6
<p>The results of road matching. The white line represents the road boundary in <span class="html-italic">P<sub>m</sub></span>. The red line represents the road boundary in <span class="html-italic">P<sub>r</sub></span>. The green line represents the road boundary in <span class="html-italic">P<sub>a</sub></span>. The symbol in the deflection angle represents the direction compared to the white line. The positive sign represents the right side of the white line, and the negative sign represents the left side of the white line. The red number is the deflection angle of the red line. The green number is the deflection angle of the green line.</p>
Figure 7">
Figure 7
<p>The results of obstacle detection. Sequences 7–10 are the detection results when turning. Sequences 11–14 are the test results when traveling straight. The red circular line represents the ground after division. For ease of reading, we mark the white point cloud as the aligned point cloud. The transparent box on the left represents the obstacle detection result without any processing. The yellow box on the right represents the obstacle detection result after reducing the target. The marked green numbers represent obstacles detected before and after the target reduction process. Inside the yellow circle are the obstacles that were missed.</p>
Figure 8">
Figure 8
<p>The difference in calculation time. The green broken line indicates the calculation time before the target reduction method is used. The blue broken line indicates the calculation time after the target reduction method is used. The green and blue lines represent the average calculation times of the corresponding methods.</p>
Figure 9
<p>The difference in computational load. The green bar indicates the load before the target reduction method is used. The blue bar indicates the load after the target reduction method is used.</p>
20 pages, 2643 KiB  
Article
Enhanced Multi-Beam Echo Sounder Simulation through Distance-Aided and Height-Aided Sound Ray Marching Algorithms
by Jianhua Cheng, Jingyu Ge and Runze Bai
J. Mar. Sci. Eng. 2024, 12(6), 913; https://doi.org/10.3390/jmse12060913 - 29 May 2024
Viewed by 608
Abstract
The study proposes two innovative algorithms in the field of multi-beam echo sounder (MBES) simulation: distance-aided sound ray marching (DASRM) and height-aided sound ray marching (HASRM). These algorithms aim to enhance the efficiency and accuracy of MBES simulations, particularly when dealing with long-distance propagation and real-time processing limitations. DASRM addresses issues related to simulation accuracy by efficiently utilizing the KD-tree for spatial indexing and intersection detection instead of the signed distance field (SDF). Building upon the further analysis of DASRM, HASRM is proposed, which improves the search strategy for ray intersections and utilizes a height field pyramid for sampling and retrieval, thereby reducing memory usage while enhancing indexing efficiency. The experimental results demonstrate that both algorithms significantly outperform traditional methods in terms of simulation time, with HASRM exhibiting particular advantages in parallel computing due to its data structure and improved strategies. Additionally, DASRM is well suited for applications requiring complex scene construction, while HASRM proves especially effective in simulating MBES with a focus on underwater terrain due to its effectiveness in handling large incident angles and long-distance propagation. Full article
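To make the distance-aided idea concrete, here is a minimal sketch of DASRM-style ray marching in which a KD-tree nearest-neighbor query stands in for the SDF, so each step advances the ray by the distance to the closest scene point. The synthetic seafloor, hit tolerance, and step limits are illustrative assumptions, and the sketch marches a straight ray rather than the refracted sound ray the paper traces.

```python
# A minimal sketch of distance-aided ray marching with a KD-tree
# in place of an SDF. Scene, tolerance, and limits are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def march_ray(origin, direction, tree, eps=0.05, max_dist=200.0, max_steps=128):
    """Advance along `direction` by the nearest-obstacle distance each step."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    traveled = 0.0
    for _ in range(max_steps):
        dist, _ = tree.query(pos)       # KD-tree stands in for the SDF
        if dist < eps:
            return pos                  # hit: within tolerance of the scene
        pos = pos + d * dist
        traveled += dist
        if traveled > max_dist:
            break
    return None                         # no intersection found

if __name__ == "__main__":
    # flat "seafloor" at z = -50 m as the scene
    gx, gy = np.meshgrid(np.linspace(-100, 100, 201), np.linspace(-100, 100, 201))
    floor = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, -50.0)])
    print(march_ray([0, 0, 0], [0.3, 0.0, -1.0], cKDTree(floor)))
```

Because each step is bounded by the distance to the nearest point, the ray can never tunnel through the terrain, which is the safety property that motivates the sphere-tracing family of methods.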
Show Figures

Figure 1
<p>Sonar simulation framework.</p>
Figure 2">
Figure 2
<p>Constant gradient sound ray tracing.</p>
Figure 3">
Figure 3
<p>Distance-aided ray marching. The solid black lines in the figure represent objects in the scene, while the dashed black circles represent non-intersecting spherical regions corresponding to distances given by SDF. The red dots represent the positions after each step forward along the ray, and the black dots represent points on the objects closest to the red dots. The blue dots indicate the final intersection point location.</p>
Figure 4">
Figure 4
<p>DASRM.</p>
Figure 5">
Figure 5
<p>The situations of intersection points.</p>
Figure 6">
Figure 6
<p>The sound ray movement strategy. Three sound rays initiate their search from (h<sub>w0</sub>, z<sub>w0</sub>). After the first search, the target depth z<sub>aim</sub> for these rays is determined to be the local minimum G<sup>1</sup><sub>locmin</sub>. Sound ray 1 then moves directly to the intersection point. For sound rays 2 and 3, their horizontal positions satisfy h<sup>3</sup><sub>w1</sub> &gt; h<sup>2</sup><sub>w1</sub> &gt; h<sub>G1</sub>, where h<sub>G1</sub> is the horizontal position of G<sup>1</sup><sub>locmin</sub>, so G<sup>1</sup><sub>locmin</sub> is clearly not the target minimum for them, and they require further movement. After their target depth is set to a new local minimum G<sup>2</sup><sub>locmin</sub> and the movement is executed, sound ray 2 is observed to intersect within the interval (h<sub>w</sub>, h<sub>G2</sub>), since h<sup>2</sup><sub>w2</sub> is less than the horizontal position h<sub>G2</sub> of G<sup>2</sup><sub>locmin</sub>. This yields the precise search range for w<sup>2</sup><sub>inter</sub>.</p>
Figure 7">
Figure 7
<p>HASRM. The sound ray originates at w<sub>0</sub> and searches for G<sup>1</sup><sub>locmin</sub> within the low-resolution terrain profile. Once the target depth is determined, the sound ray is directed to that depth. Then, utilizing the high-resolution data, it identifies the subsequent local minimum G<sup>2</sup><sub>locmin</sub> within a constrained search range and relocates to w<sub>2</sub>. Since there are no additional local minima, the interval (h<sub>w2</sub>, h<sub>Gmin</sub>) is established to finalize the first phase. In the second phase, the depth range (z<sub>min</sub>, z<sub>max</sub>) is defined by a horizontal range, and through the dichotomy method the sound ray moves to positions w<sub>3</sub>, w<sub>4</sub>, and w<sub>5</sub>, thereby reducing the search range. Finally, by employing a parametric approach and using the data samples from w<sub>3</sub>, w<sub>4</sub>, and w<sub>5</sub>, the difference function P(h) is computed to pinpoint the intersection point w<sub>inter</sub>.</p>
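The dichotomy step in this caption is ordinary bisection on the difference function P(h) between the ray depth and the terrain depth over the bracketed horizontal range; the sketch below illustrates it with an assumed linear ray profile and flat terrain, which are stand-ins rather than the paper's data.

```python
# A minimal sketch of the dichotomy (bisection) step on P(h).
# The ray and terrain profiles below are illustrative assumptions.
def bisect_intersection(p, h_lo, h_hi, tol=1e-4, max_iter=60):
    """Shrink [h_lo, h_hi] around the sign change of P, halving each step."""
    f_lo = p(h_lo)
    for _ in range(max_iter):
        h_mid = 0.5 * (h_lo + h_hi)
        if abs(h_hi - h_lo) < tol:
            return h_mid
        f_mid = p(h_mid)
        if f_lo * f_mid <= 0.0:   # root lies in the lower half
            h_hi = h_mid
        else:                     # root lies in the upper half
            h_lo, f_lo = h_mid, f_mid
    return 0.5 * (h_lo + h_hi)

# ray depth falls linearly, terrain is flat at -40 m: crossing at h = 80
z_ray = lambda h: -0.5 * h
z_terrain = lambda h: -40.0
print(bisect_intersection(lambda h: z_ray(h) - z_terrain(h), 0.0, 120.0))
```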
Figure 8">
Figure 8
<p>The flowchart of HASRM.</p>
Figure 9">
Figure 9
<p>Simulation of sound ray trajectory by HASRM. Due to the change in SVP, the sound ray will undergo distortion. Utilizing HASRM can effectively simulate this bending of the sound ray.</p>
Figure 10">
Figure 10
<p>Simulated measurement errors corresponding to different SVPs. (<b>a</b>) Simulated measurement error of zero-error SVP. (<b>b</b>) Simulated measurement error of greater SVP. (<b>c</b>) Simulated measurement error of smaller SVP. The utilization of HASRM and DASRM enables precise simulation of acoustic signal propagation time, facilitating an accurate sound ray tracing algorithm for pinpointing measurement positions. This approach offers a more realistic simulation of MBES measurement errors rather than imposing a fixed error term based on the system’s measurement principles.</p>
Figure 11">
Figure 11
<p>Comparison of simulated measurement errors for different SVPs in MBES. (<b>a</b>) Real terrain. (<b>b</b>) Simulated measurement error of zero-error SVP. (<b>c</b>) Simulated measurement error of greater SVP. (<b>d</b>) Simulated measurement error of smaller SVP.</p>
Figure 12">
Figure 12
<p>The number of iterations of sound rays with different θ in HASRM.</p>
13 pages, 3247 KiB  
Article
LeGO-LOAM-FN: An Improved Simultaneous Localization and Mapping Method Fusing LeGO-LOAM, Faster_GICP and NDT in Complex Orchard Environments
by Jiamin Zhang, Sen Chen, Qiyuan Xue, Jie Yang, Guihong Ren, Wuping Zhang and Fuzhong Li
Sensors 2024, 24(2), 551; https://doi.org/10.3390/s24020551 - 16 Jan 2024
Viewed by 1882
Abstract
To address the cumulative errors that arise when robots build maps in complex orchard environments, with their large scene size, similar features, and unstable motion, this study proposes a loopback registration algorithm based on the fusion of Faster Generalized Iterative Closest Point (Faster_GICP) and Normal Distributions Transform (NDT). First, the algorithm creates a K-Dimensional tree (KD-Tree) structure to eliminate dynamic obstacle point clouds. Then, a two-step point filter reduces both the number of feature points of the current frame used for matching and the amount of data used for optimization. The method also meshes the point cloud to calculate the normal-distribution matching probability, and refines the registration with the Hessian matrix method. In a complex orchard environment with multiple loopback events, the root mean square error and standard deviation of the LeGO-LOAM-FN trajectory are 0.45 m and 0.26 m, improvements of 67% and 73%, respectively, over the loopback registration algorithm in Lightweight and Ground-Optimized LiDAR Odometry and Mapping on Variable Terrain (LeGO-LOAM). The study shows that this method effectively reduces the influence of cumulative error and provides technical support for intelligent operation in orchard environments. Full article
(This article belongs to the Section Smart Agriculture)
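The abstract does not detail the KD-tree dynamic-point elimination, but one common realization is to keep only current-frame points that have a neighbor in the accumulated static map within a given radius, discarding the rest as dynamic. The sketch below illustrates that idea; the random map, frame, and 0.3 m radius are assumptions for the sketch, not values from the paper.

```python
# A minimal sketch of a KD-tree dynamic-point filter: frame points with
# no static-map neighbor within `radius` are treated as dynamic.
# Map, frame, and radius below are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def remove_dynamic(frame, static_map, radius=0.3):
    """Keep only frame points that have a static-map point within `radius`."""
    tree = cKDTree(static_map)
    dist, _ = tree.query(frame, k=1)
    return frame[dist <= radius]

if __name__ == "__main__":
    static_map = np.random.rand(5000, 3) * 30.0
    frame = np.vstack([static_map[:100] + 0.05,       # re-observed static points
                       np.random.rand(20, 3) * 30.0]) # stand-in dynamic points
    print(remove_dynamic(frame, static_map).shape)
```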
Show Figures

Figure 1
<p>LeGO-LOAM algorithm flow.</p>
Figure 2">
Figure 2
<p>Schematic diagram of loopback detection: (<b>a</b>) loopback detection success; and (<b>b</b>) loopback detection failure.</p>
Figure 3">
Figure 3
<p>The improved LeGO-LOAM-FN process.</p>
Figure 4">
Figure 4
<p>Field orchard environmental experiment: (<b>a</b>) orchard robots; and (<b>b</b>) orchard environment. 1: 3D LiDAR; 2: display; 3: GNSS receiver; 4: industrial controller.</p>
Figure 5">
Figure 5
<p>Dynamic object removal: (<b>a</b>) LeGO-LOAM algorithm; and (<b>b</b>) LeGO-LOAM-FN algorithm.</p>
Figure 6">
Figure 6
<p>Loopback detection test: (<b>a</b>) LeGO-LOAM algorithm; and (<b>b</b>) LeGO-LOAM-FN algorithm.</p>
Figure 7">
Figure 7
<p>Trial traces versus real-value traces: (<b>a</b>) KITTI 00; (<b>b</b>) Scene 1; (<b>c</b>) Scene 2; and (<b>d</b>) Scene 3.</p>
">