
Article

Point Cloud Registration Algorithm Based on Adaptive Neighborhood Eigenvalue Loading Ratio

by Zhongping Liao *, Tao Peng *, Ruiqi Tang and Zhiguo Hao
School of Traffic and Transportation Engineering, Changsha University of Science & Technology, Changsha 410114, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4828; https://doi.org/10.3390/app14114828
Submission received: 23 April 2024 / Revised: 29 May 2024 / Accepted: 31 May 2024 / Published: 3 June 2024
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

Abstract: Traditional iterative closest point (ICP) registration algorithms are sensitive to initial positions and easily fall into the trap of locally optimal solutions. To address this problem, a point cloud registration algorithm based on adaptive neighborhood eigenvalue loading ratios is put forward in this study. In the algorithm, the resolution of the point cloud is first calculated and used as an adaptive basis to determine the raster widths and the radii of spherical neighborhoods in raster filtering; then, adaptive raster filtering is applied to the point cloud for denoising, while the eigenvalue loading ratios of point neighborhoods are calculated to extract and match the contour feature points; subsequently, sample consensus initial alignment (SAC-IA) is used to carry out coarse registration; and finally, fine registration is delivered with KD-tree-accelerated ICP. The experimental results of this study demonstrate that the feature points extracted with this method are highly representative while consuming only 35.6% of the time consumed by other feature point extraction algorithms. Additionally, in noisy and low-overlap scenarios, the registration error of this method can be controlled at a level of 0.1 mm, with the registration speed improved by 56% on average over that of other algorithms. Taken together, the method in this study can not only ensure strong robustness in registration but also deliver high registration accuracy and efficiency.

1. Introduction

With the rapid development of 3D measurement technologies such as laser scanning, oblique photogrammetry, monocular and binocular measurements, and depth camera measurements, point clouds have been gradually applied in various fields, including simultaneous localization and mapping, underground tunnel detection, autonomous driving, cultural relic restoration, and medical 3D image construction [1,2,3,4,5,6], among others.
As a process that aligns multi-view point clouds to a standard coordinate system by using a transformation matrix [7], point cloud registration plays a key role in the above point cloud application fields. However, due to the complexity and diversity of point cloud data, single alignment methods often fail to meet the requirements of wide-range scenarios, which affects the accuracy and stability of alignment results. To solve this problem, a multistage process can be undertaken before point cloud alignment so as to combine different alignment methods and techniques and enhance the quality and robustness of the alignment results via gradual optimization and adjustment. Specifically, this study first preprocesses the point cloud data using steps such as denoising and feature extraction; then, it uses multiple alignment algorithms for initial alignment; and finally, it further improves the alignment accuracy with optimization techniques. Via this multistage process, the point cloud alignment problem can be solved more effectively in practical applications, with better alignment results achieved. Currently, there are certain classical algorithms for point cloud registration, including iterative closest point (ICP) [8], known for its high accuracy and rapid convergence when aligning point clouds with a high overlap rate. However, the ICP algorithm only considers the Euclidean distance between points and neglects other internal structural features and external color features; moreover, it is susceptible to initial registration positions and prone to converging on local optima, while incurring high computational costs [9].
Therefore, many improvements to the ICP algorithm have been proposed. For example, regarding the selection of neighborhood points, Jian et al. [10] proposed a point set selection method using Gaussian mixture models (GMMs). By modeling the distribution of point sets and selecting points with higher probabilities for registration, the accuracy and robustness of the registration were enhanced. Zhang et al. [11] introduced a robust error metric based on the Welsch function into the error function and employed the EM (expectation–maximization) algorithm with Anderson acceleration to effectively minimize the error metric, finally improving the registration accuracy on a benchmark dataset and greatly enhancing the time efficiency. Regarding neighborhood search, Liu et al. [12] designed a spatial data structure, called an "octree", to expedite the search for the nearest neighbor points, substantially increasing the ICP algorithm's running speed. To address the issue of local minima, Zhou et al. [13] directly identified the optimal correspondence between point clouds by optimizing a global objective function, thus bypassing the step-by-step approximation process and greatly enhancing the ICP algorithm's speed and robustness. Although the four methods mentioned above can improve registration results, large errors remain in scenarios with low overlap, as does a strong dependence on the initial positions of the point clouds.
As mentioned above, improving the ICP algorithm alone remains limited. From a bionic point of view, in point cloud registration, a person first perceives the overall shape of an object before performing point-to-point matching based on representative features. The concepts of feature point detection and feature descriptors can be traced back to the fields of image processing and computer vision. Registration algorithms combining coarse and fine registration have, therefore, been proposed. Coarse registration obtains optimized initial positions by extracting and matching feature points, which is followed by fine registration of the point clouds using the iterative closest point algorithm [14]. The combination of feature extraction and matching improves the applicability of registration algorithms. In feature selection, since curvature is among the most sensitive features, Feldmar et al. [15] first proposed extracting feature points by using the principal directions corresponding to the maximum and minimum curvatures of a point's neighborhood as the descriptor for that key point. Xing et al. [16] extended the classical 2D scale-invariant feature transform (SIFT) descriptor to 3D for feature point extraction, enabling the detection of feature points at various scales. Yang et al. [17] proposed a coarse point cloud registration method based on ISS feature point descriptors combined with 4PCS and achieved good robustness with respect to noise and density variations. Sun et al. [18] proposed an approach that fuses curvature and fast point feature histograms (FPFHs) to extract feature points. Specifically, they first processed the point cloud with curvature features and then replaced the original Euclidean distance judgment condition by comparing the FPFH features of the points. This approach increases the number of correctly matched pairs and greatly improves the point registration accuracy.
Not only the internal structural features of the point cloud but also the color and reflectance of the point cloud can be employed for feature extraction. Park et al. [19] proposed an algorithm that optimizes the joint photometric and geometric objectives to align two-colored point clouds. This innovative algorithm extends the photometric objective used for aligning RGB-D images to point clouds and surpasses previous methods in terms of accuracy and robustness.
However, the above feature descriptors still have their respective deficiencies. For instance, the curvature feature descriptor is only suitable for point clouds with major curvature changes. The SIFT feature descriptor is relatively expensive in computation and, simultaneously, more sensitive to noise and outliers. The performance of ISS combined with the 4PCS feature descriptor largely depends on the selection of parameters, and inappropriate parameter settings may degrade the detection effect. The FPFH feature descriptor has high time complexity. Additionally, ColorICP often performs poorly in regions where the point cloud’s color does not change greatly. Therefore, it is necessary to preprocess the point cloud by denoising before registration and use adaptive parameters to mitigate the impact of the parameter selection on the experimental results. While considering the local neighborhood structure of the point cloud to extract representative feature points effectively and strongly, the efficiency of the algorithm must also be ensured.
To deal with the low registration efficiency and high dependence on initial position characteristic of the traditional ICP algorithm and its variants, a point cloud registration algorithm based on the adaptive neighborhood eigenvalue loading ratio is proposed in this study. Firstly, the point cloud resolution is calculated from the point cloud data themselves and used to determine the average raster density and the radius of the adaptive spherical neighborhood in adaptive raster filtering. Secondly, exploiting the invariance of a rigid body's internal structure under transformation, the eigenvalue loading ratios of the adaptive spherical neighborhood covariance matrix are calculated to screen the surface contour feature points. Thirdly, the eigenvalue similarity formula is employed to match the feature points, and coarse registration is used to optimize the initial position. Finally, the feasibility of the algorithm is verified by experiments.

2. Methods

2.1. Algorithmic Process

As shown in Figure 1, the first step is to calculate the point cloud resolution of both the source and target point clouds. Then, the average raster density is calculated based on the adaptive raster width, and adaptive raster filtering is conducted to complete the preprocessing of the point cloud. During the feature point extraction process, the calculated point cloud resolution is utilized as the radius for the adaptive spherical neighborhood. A neighborhood search is conducted for each point, followed by calculating the covariance matrix between the point and its neighborhood. The three corresponding major eigenvalues are determined through principal component analysis (PCA) [20]. The loadings for the largest and smallest eigenvalues are calculated, and the average of these loadings serves as the threshold value for extracting the feature points from the point cloud's surface contours. Then, the neighboring loadings of the three major eigenvalues are computed, and the matched pairs are filtered using the feature point similarity formula proposed in this study. Erroneous and duplicate matched pairs are eliminated using an angle thresholding method. The optimized matched pairs are coarsely aligned based on sample consensus initial alignment (SAC-IA) [21]. Finally, the fine registration is completed using an ICP algorithm accelerated by KD-tree lookup.

2.2. Preprocessing Based on Improved Adaptive Raster Filtering

Traditional raster filtering [22] is important for noise reduction and data simplification. However, a fixed raster size ignores the uneven density of scattered point clouds, which may lead to problems such as the loss of local point cloud features. In this study, the point cloud resolution is used to adaptively adjust the raster size according to the local point cloud density during filtering. Meanwhile, the average raster density is used as the base threshold, thus reducing the data volume, preserving the key information of the point cloud, and improving the quality of the point cloud data preprocessing.
The density of point clouds varies due to sensor accuracy, environmental conditions, and surface roughness. Existing methods for estimating point cloud density include distance-based point cloud resolution and chunk-based point cloud density. The former estimates the sparsity of the point cloud distribution by calculating the average nearest-neighbor distance over all points in the cloud. For a point cloud Pi (where i = 1, 2, 3, …, N, and N is the total number of points), the point cloud resolution [23] is calculated as shown in Formulas (1) and (2). A KD-tree [24] index is first constructed for the point cloud to accelerate the search process. Then, the minimum distance between each point Pa and the remaining points is determined, and the corresponding point pairs are formed.
dp = min(dis(Pa, Pb)),  a ≠ b    (1)

d̄r = (1/N) Σ_{p=1…N} dp    (2)

where dis(Pa, Pb) represents the distance between a point Pa and any other point Pb, and dp denotes the minimum such distance for Pa. The point cloud resolution d̄r is the average of all dp.
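As a concrete illustration, Formulas (1) and (2) can be sketched in a few lines of NumPy. Brute-force pairwise distances stand in for the paper's KD-tree search, and the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def cloud_resolution(points):
    """Estimate point cloud resolution (Formula 2): the mean, over all points,
    of the distance to each point's nearest neighbor (Formula 1)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distance matrix; a KD-tree would replace this brute-force
    # step for large clouds, as the paper does.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)   # exclude a == b
    d_p = dist.min(axis=1)           # nearest-neighbor distance per point
    return d_p.mean()

# Unit-spaced points along a line have a resolution of exactly 1.0
line = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]]
resolution = cloud_resolution(line)
```

For large clouds the O(N²) distance matrix is the bottleneck, which is exactly why the paper builds a KD-tree index before this step.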
The chunk-based average raster density representation method estimates the sparsity of the point cloud distribution by dividing the point cloud into grids, individually counting the points within each small grid, and averaging the counts across all grids. Given the variations in point cloud shapes, the edge length of the raster is not fixed, so that adaptive raster filtering can be achieved. The point cloud resolution calculated using Formula (2) is taken as a reference for the minimum raster, and the raster width, W, is set to five times the point cloud resolution. To calculate the point cloud's extent along the X-, Y-, and Z-axes, the largest and smallest coordinate values along these axes are identified to determine the lengths Xd, Yd, and Zd, respectively. The density of the point cloud, d̄pd, is calculated as shown in Formula (3):
d̄pd = N / (Xd · Yd · Zd)    (3)
The grid width can be determined for each main axis based on the maximum length along that axis and the corresponding number of grid divisions. The total number of grids, Ng, is calculated as shown in Formula (4):
Ng = (Xd/W) · (Yd/W) · (Zd/W)    (4)
The number of points presented in a single grid is denoted as ng. The average grid density of the point cloud, d g d ¯ , is calculated as shown in Formula (5):
d̄gd = (1/Ng) Σ_{g=1…Ng} ng    (5)
The calculated average grid density is used as a threshold to compare and filter each grid. The preprocessed source point cloud, denoted as Poi, and the target point cloud, denoted as Qoj, are then evaluated. If the number of points within a grid exceeds the threshold, all points in that grid are retained. Otherwise, the grid is excluded.
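The grid construction and thresholding described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: for brevity it averages counts over occupied grids only, whereas Formula (5) divides by the total grid count Ng, and all names are illustrative:

```python
import numpy as np

def raster_filter(points, resolution):
    """Adaptive raster filter sketch: grid width W = 5 * resolution; grids
    holding at least the average grid density are kept, and sparse grids
    are discarded as noise."""
    pts = np.asarray(points, dtype=float)
    W = 5.0 * resolution
    # Integer grid index of each point along X, Y, Z
    idx = np.floor((pts - pts.min(axis=0)) / W).astype(int)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = np.asarray(inverse).reshape(-1)  # flatten for NumPy-version safety
    keep = counts[inverse] >= counts.mean()    # threshold = average grid density
    return pts[keep]
```

A dense cluster survives while an isolated outlier point, alone in its grid, falls below the average density and is removed.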

2.3. Feature Point Extraction Based on Adaptive Neighborhood Eigenvalue Loading Ratio

The nearest neighbor points for the preprocessed source point cloud Poi and the target point cloud Qoj within the spherical neighborhood are searched for. The point cloud resolution calculated in the previous section serves as the adaptive basis for determining the radius of the spherical neighborhood in this section. All point numbers within the spherical neighborhood are recorded. Then, Formula (6) is used to compute the covariance matrix [25] Cov(Poi) of the adaptive spherical neighborhood for the source point cloud Poi, for example:
Cov(Poi) = (1/m) Σ_{Pi ∈ Ni} WPi (Pi − Poi)(Pi − Poi)^T    (6)
where m refers to the number of points contained in the spherical neighborhood of the point, and WPi denotes the weights, which are calculated using Formula (7):
WPi = 1 / ‖Pi − Poi‖,  Pi ∈ Ni    (7)
The point cloud covariance matrix corresponding to each point is obtained using Formulas (6) and (7). Then, three eigenvalues are derived through singular value decomposition (SVD) and arranged in descending order of magnitude to yield λ1, λ2, and λ3. λ1 and λ2 primarily reflect the extent of variation along the corresponding eigenvector directions in the principal plane, while λ3 represents the magnitude of variation in the direction perpendicular to this plane. In this study, the ratios λ1/λ2 and λ1/λ3, termed the eigenvalue loading ratios, are utilized for feature extraction and feature point matching. The relative magnitudes of λ1, λ2, and λ3 mainly reflect the neighborhood's geometric characteristics. Specifically, if λ1 is significantly larger than λ2, while λ2 and λ3 are approximately equal, the local region of the point cloud at the point exhibits a linear distribution. Conversely, if λ1 and λ2 are similar and much larger than λ3, the local region displays a planar distribution. If λ1, λ2, and λ3 are close in value, the region shows a 3D geometric form [26]. The specific division of the point neighborhood is depicted in Figure 2. Since the relative position of each point remains constant during a rigid-body transformation of the point cloud, the corresponding eigenvalue loading ratio remains unchanged as long as the extent of the point neighborhood is unchanged. In this study, this property is used for feature extraction and point matching in the point cloud.
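A minimal sketch of the weighted neighborhood covariance of Formulas (6) and (7) and the eigenvalue ordering follows; it uses a symmetric eigendecomposition in place of SVD (equivalent for this symmetric matrix), and all identifiers are illustrative:

```python
import numpy as np

def neighborhood_eigenvalues(points, center, radius):
    """Weighted covariance of a spherical neighborhood (Formulas 6 and 7),
    returning eigenvalues sorted so that lambda1 >= lambda2 >= lambda3."""
    pts = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    d = np.linalg.norm(pts - c, axis=1)
    nb = pts[(d > 0) & (d <= radius)]         # neighbors P_i in N_i
    w = 1.0 / np.linalg.norm(nb - c, axis=1)  # W_Pi = 1 / ||P_i - P_oi||
    diffs = nb - c
    cov = (w[:, None, None] * diffs[:, :, None] * diffs[:, None, :]).sum(0) / len(nb)
    eig = np.linalg.eigvalsh(cov)             # ascending for symmetric matrices
    return eig[::-1]                          # lambda1, lambda2, lambda3

# Points along the x-axis form a linear neighborhood:
# lambda1 dominates while lambda2 and lambda3 vanish.
line = [[x, 0.0, 0.0] for x in np.linspace(-1, 1, 9)]
l1, l2, l3 = neighborhood_eigenvalues(line, [0.0, 0.0, 0.0], 2.0)
```

The same call on a planar patch would yield two dominant eigenvalues, matching the linear/planar/3D cases described above.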
The algorithm proposed in the literature [26] extracts feature points on a plane and assesses whether the eigenvalues of the local neighborhood's covariance matrix corresponding to these feature points satisfy the conditions λ1 ≈ λ2 and λ2 > λ3. However, the feature points extracted are not distinct, and the neighborhood search is relatively time-consuming. In view of this, the algorithm proposed in this study extracts feature points along the surface contour and determines whether the eigenvalues of the adaptive spherical neighborhood's covariance matrix corresponding to these feature points meet a specific curvature ratio λ3/λ1. When the matrix eigenvalues satisfy λ1 ≈ λ2 and λ2 ≈ λ3, the eigenvalue loading ratio λ3/λ1 is taken as the curvature; as the curvature approaches 1, the neighborhood is closer to a curved surface, which also means that the features are more pronounced. Consequently, if the curvature at a point fulfills Formula (8), the point is identified as a feature point.
λ3/λ1 < e^{(λ3/λ1)(max) − (λ3/λ1)(min)}    (8)

where (λ3/λ1)(max) represents the maximum ratio between eigenvalues λ3 and λ1 over all points, while (λ3/λ1)(min) denotes the minimum value of this ratio.
After discrimination via Formula (8), the contour feature points with greater curvature from the original point cloud can be extracted. The effect is depicted in Figure 3. Figure 3a shows the original point cloud represented in green; Figure 3b shows the point cloud obtained by further coloring the feature points on the basis of the original point cloud, with the original green points coexisting with the blue feature points; and Figure 3c shows the point cloud where the feature points are individually stripped out, with only the blue feature points presented. By stripping away the non-featured portions of the point cloud, the contour becomes visible due to these feature points. This contour can be identified as that of a rabbit.
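The exact threshold of Formula (8) is hard to read in this copy of the text, so the sketch below substitutes a plain user-chosen cutoff on the curvature λ3/λ1; the selection logic itself, keeping points whose neighborhoods are strongly non-planar, follows the description above. Function and parameter names are illustrative:

```python
import numpy as np

def select_contour_points(points, eigvals, thresh=0.05):
    """Keep points whose curvature lambda3/lambda1 is large, i.e. whose
    neighborhood is strongly non-planar. `thresh` is a stand-in for the
    data-driven threshold of Formula (8)."""
    lam = np.asarray(eigvals, dtype=float)  # rows: (l1, l2, l3), l1 >= l3
    curvature = lam[:, 2] / lam[:, 0]       # lambda3 / lambda1, in [0, 1]
    return np.asarray(points, dtype=float)[curvature > thresh]
```

A near-planar neighborhood (λ3 ≈ 0) is rejected, while a fully 3D one (λ3 close to λ1) is kept as a contour feature.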

2.4. Point Cloud Registration Algorithm Based on Feature Point Similarity Formula

The source and target point clouds are preprocessed and subjected to feature extraction to obtain the contour feature point sets Pgi and Qgj, respectively. Each point in Pgi and Qgj stores the index of the corresponding point in the original point cloud, as well as the three eigenvalues of the covariance matrix formed by that point and the other points in its spherical neighborhood. Considering a point Pgm in the point set Pgi and a point Qgn in the point set Qgj, two eigenvalue triplets, (λ1i, λ2i, λ3i) and (λ1j, λ2j, λ3j), can be obtained. Firstly, the eigenvalues are normalized, and the corresponding contour eigenvalue loading ratios λ1i/λ2i/λ3i and λ1j/λ2j/λ3j are determined to assess the similarity between the two sets of feature points. The contour eigenvalue loading ratio similarity formula proposed in this study is shown in Formula (9):
α = 1 − | e^{(λ1i·λ3i)/(λ1j·λ3j) − λ2i²/λ2j²} − 1 |    (9)

where (λ1i, λ2i, λ3i) and (λ1j, λ2j, λ3j) are the two sets of eigenvalues, and α represents the degree of similarity. The formula extends the cross-multiplication-and-subtraction similarity of an (A/B) fraction to an (A/B/C) fraction. In the case of an ideal pairing, the exponent is zero and the value of α is 1.
Since the feature point sets derived from screening the two point clouds often differ in the number of points they contain, the smaller point set is used as the base for computing Formula (9) against the other point set. In this one-to-many comparison, the matching pair corresponding to the highest α-value is considered the best match. This process establishes a preliminary matching group, denoted as GP.
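The one-to-many matching step can be sketched as follows. Note that the rendering of Formula (9) in this copy is ambiguous, so the reading α = 1 − |e^{(λ1i·λ3i)/(λ1j·λ3j) − λ2i²/λ2j²} − 1| is an assumption (it does satisfy α = 1 for an ideal pairing), and all names are illustrative:

```python
import numpy as np

def triplet_similarity(ti, tj):
    """Similarity alpha between two eigenvalue triplets, under the reading
    of Formula (9) stated above; alpha = 1 for an ideal (identical) pair."""
    l1i, l2i, l3i = ti
    l1j, l2j, l3j = tj
    r = (l1i * l3i) / (l1j * l3j) - (l2i ** 2) / (l2j ** 2)
    return 1.0 - abs(np.exp(r) - 1.0)

def best_matches(src, tgt):
    """One-to-many comparison: for each source triplet, keep the index of
    the target triplet with the highest alpha (preliminary group GP)."""
    return [max(range(len(tgt)), key=lambda j: triplet_similarity(s, tgt[j]))
            for s in src]
```

Because only ratios of eigenvalues enter the formula, the score is invariant to uniform scaling of a triplet, which is consistent with the normalization step described above.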
In the preliminary matching group GP, a feature point from the source point cloud, Pgm, may correspond to multiple matches in the target point cloud, Qgn. Therefore, constraints need to be applied to remove duplicate and incorrectly matched pairs before performing the coarse registration in the following four steps:
(1) One-to-many matching pairs are identified and removed from the previously obtained matching group GP based on the existing matching index pairs. A pairing is considered unique and added to the result only if neither of its two elements has appeared previously.
(2) The direction vector is calculated for each matching pair within the matching group GP. This involves computing the vector from the source point to the corresponding target point for each matching pair and unitizing it.
(3) A KD-tree is constructed to identify representative vectors, such as the mean or median of these direction vectors, that is, the principal direction common to most matching pairs.
(4) An angle threshold θ is defined; the direction vector of each matching pair is iterated over, and its angular difference from the representative vector identified in Step (3) is calculated using the dot product. If this angular difference is less than or equal to the threshold θ, the matching pair is deemed valid and retained in the matching group GP; otherwise, it is removed.
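Steps (2)–(4) amount to a simple direction-vector filter. A minimal sketch, using the mean as the representative vector and illustrative names throughout:

```python
import numpy as np

def filter_pairs_by_angle(src_pts, tgt_pts, theta_deg=10.0):
    """Keep matching pairs whose source-to-target direction deviates from
    the representative (mean) direction by at most theta_deg degrees."""
    src = np.asarray(src_pts, dtype=float)
    tgt = np.asarray(tgt_pts, dtype=float)
    vecs = tgt - src
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unitize (Step 2)
    rep = vecs.mean(axis=0)
    rep /= np.linalg.norm(rep)                           # representative vector (Step 3)
    # Angle via the dot product (Step 4)
    angles = np.degrees(np.arccos(np.clip(vecs @ rep, -1.0, 1.0)))
    return angles <= theta_deg

# Nine pairs pointing along +x and one outlier pointing along +y
src = [[float(i), 0.0, 0.0] for i in range(10)]
tgt = [[float(i) + 1.0, 0.0, 0.0] for i in range(9)] + [[9.0, 1.0, 0.0]]
mask = filter_pairs_by_angle(src, tgt)
```

Under a rigid transform the true correspondences share a common direction, so the single outlier pair is rejected while the consistent pairs survive.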
After matched-pair optimization, all point clouds are extracted based on their corresponding index point numbers, which are subjected to the sample consensus initial registration (SAC-IA) algorithm for coarse registration. Although good registration poses can be achieved, the SAC-IA algorithm’s registration results do not yet meet the final requirements. Therefore, it is necessary to use the iterative closest point (ICP) algorithm for the fine registration of the source point cloud. The traditional ICP algorithm, when dealing with large point clouds, performs a nearest neighbor search that may require traversing all points in the point cloud, resulting in a linear computational complexity related to the time complexity [27] of O(NM). To address the issue, in this study, the KD-tree is adopted to accelerate the nearest neighbor search, thereby reducing the time complexity to O(log N) or O(log M), which significantly improves the time efficiency of fine registration.
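To make the fine-registration step concrete, one point-to-point ICP iteration can be sketched as below. Correspondences are found by brute force here (this is the O(N·M) step that a KD-tree accelerates), and the rigid transform is recovered with the standard SVD solution; this is a generic illustration, not the authors' exact implementation:

```python
import numpy as np

def icp_step(src, tgt):
    """One ICP iteration: nearest-neighbor correspondences followed by the
    SVD-based rigid transform minimizing point-to-point distances."""
    src = np.asarray(src, dtype=float)
    tgt = np.asarray(tgt, dtype=float)
    # Brute-force nearest neighbor; a KD-tree makes each query O(log M)
    d = np.linalg.norm(src[:, None] - tgt[None, :], axis=2)
    corr = tgt[d.argmin(axis=1)]            # closest target per source point
    mu_s, mu_t = src.mean(0), corr.mean(0)
    H = (src - mu_s).T @ (corr - mu_t)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return (R @ src.T).T + t

# A source cloud offset by 0.1 along x snaps back onto the target in one step
tgt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
src = tgt + np.array([0.1, 0.0, 0.0])
aligned = icp_step(src, tgt)
```

Iterating this step until the mean residual stops decreasing gives the classical ICP loop; the KD-tree only changes how `corr` is found, not the transform estimation.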

3. Experiment

3.1. Experimental Dataset

The experimental dataset was divided into three main parts:
  • The first part consisted of a standard single dataset without noise, which was derived from the Computer Graphics Laboratory at Stanford University and obtained using a Cyberware 3030 MS scanner, as depicted in Figure 4. In this study, Figure 4a represents the rabbit model, and Figure 4b shows the dragon model. The point cloud for this dataset is denoted in a solid color of silver.
  • The second part was a self-collected and non-standard dataset with noise, acquired using a Kinect 2.0 depth camera. This camera has a depth image resolution of 512 × 424 and a frame rate of 60 Hz. The dataset was represented with environmentally realistic RGB values. The experimental capture scenes included an outdoor tree bench, as shown in Figure 4c, and an indoor bear doll scene, as shown in Figure 4d.
  • The third part was a standard multivariate dataset without noise, which was sourced from the standard dataset provided by the Redwood Dataset Project and captured with a PrimeSense Carmine camera. This camera has a depth image resolution of 640 × 480 and a frame rate of 30 Hz, and was primarily used for capturing indoor data. Examples include the chamber scene in Figure 4e and the office scene in Figure 4f.

3.2. Experimental Environment

The algorithms used in this study were all executed on a computer with the following configuration: an AMD Ryzen 7 5800 8-core processor @ 3.40 GHz, with 16 GB of RAM, an RTX 3060 graphics card, and a Windows 11 64-bit operating system. The programming experiments were conducted using Python 3.9, leveraging the Open3D and PCL libraries. The results of each subsequent experiment were obtained from the setup above to ensure unbiased findings [28].

3.3. Results

3.3.1. Preprocessing Based on Improved Adaptive Raster Filtering

The experimental objects consisted of three parts; among them, the standard single dataset and the standard multivariate dataset were noise-free. Therefore, the experimental objects in this section mainly comprise the self-collected non-standard multivariate datasets. In Figure 4, the only noisy experimental objects are the doll and tree bench scenes. Hence, an additional group of self-collected point clouds was added for experimentation, resulting in a total of three groups of experimental objects. The lion experimental object, which has more contour details and more noise, was included to ensure the comprehensiveness and persuasiveness of the experimental results while maintaining the integrity of the experimental objects.
The qualitative analysis, as shown in Figure 5a, shows that the majority of the noise was situated at the end of the stone bench, whereas the blue boxed area at the front end was preserved more completely. As shown in Figure 5b, most of the noise was concentrated at the edges of the lion's back, flanks, and base. The red noise was not massively concentrated in any particular area of the main parts of the point cloud, such as the waist, belly, and thighs. The key features of the sculpture, including its silhouette and finer details, were well preserved. As shown in Figure 5c, the overall point cloud was smoother with less noise, and the overall features were highly maintained. Most of the noise was located at the boundaries of the point cloud and the junctions of the hands, legs, and abdomen, and it was accurately removed using the algorithm proposed in this study.
Jointly considering the quantitative analysis and Table 1, the lion scene had the highest filtering rate, followed by the tree bench scene, with the doll scene having the lowest rate. As seen in Figure 6, the adaptive raster filtering algorithm determined the appropriate average raster point cloud density by calculating different point cloud resolutions and demonstrated targeted effects for the various experimental objects. When the experimental point cloud exhibited overall smoothness, and the noise distribution was more uniform, the filtration rate was low. Conversely, for experimental point clouds containing noise clusters and smooth regions, the algorithm could still achieve efficient and accurate removal, resulting in a high filtration rate.

3.3.2. Feature Point Extraction Based on Adaptive Neighborhood Eigenvalue Loading Ratio

To verify the accuracy and efficiency of the improved feature point extraction algorithm based on the adaptive neighborhood eigenvalue loading ratio proposed in this study, the algorithm in this study was compared with the planar feature point extraction algorithm proposed in the literature [26]. A comparison and analysis were conducted in various scenarios, and the feature results are displayed in Figure 6. In this figure, the blue dots represent the distribution of feature points extracted by the algorithm proposed in this study, denoted as Algorithm A, and the algorithm from the literature [26] is denoted as Algorithm B.
Based on the qualitative analysis, the algorithm proposed in this study differs greatly from the one proposed in the literature [26]. As shown in Figure 6, although the feature points extracted by the algorithm in the literature [26] are widely distributed, they are relatively sparse and not sufficiently representative. For instance, the ear features of the rabbit in scene (a) are not emphasized, and the contours in scenes (c) and (d) are extracted ineffectively. This can result in the loss of specific details or an overemphasis on minor variations, potentially impacting the model's quality and the usefulness of feature extraction. In contrast, the feature extraction algorithm proposed in this study exhibits a higher point density in areas with prominent features, which better captures these geometric characteristics. The distribution of the blue points illustrates the natural smoothness of the object's contour shape. In scene (b), the contours of the dragon's head, scales, and tail are accurately extracted, with fewer feature points in the smoother areas of the dragon's body. In scenes (e) and (f), the contours of furniture and regular objects are extracted with high accuracy and completeness. In general, the algorithm proposed in this study outperforms the one in the literature [26] in edge feature extraction for both regular planar point clouds and complex surface point clouds.
Quantitatively, as shown in Table 2, the algorithm proposed in this study extracted many feature points for point clouds with plenty of curved and variable features, such as those of rabbits, dragons, and doll bears. In contrast, it extracted fewer feature points for point clouds with more planar features, such as tree benches, chambers, and offices. Additionally, because the algorithm proposed in this study employs a spherical neighborhood with an adaptive radius for neighborhood searches, its time efficiency in most scenarios was nearly double that of the algorithm proposed in the literature [26]. Specifically, in indoor scenarios, the time consumed by the algorithm proposed in this study was only 35.6% of that consumed by the algorithm proposed in the literature [26].
Through comprehensive qualitative and quantitative analysis, the contour feature point extraction algorithm based on the eigenvalue loading ratio, as proposed in this study, demonstrates major advantages over the algorithm proposed in the literature [26] in terms of accuracy and timeliness.

3.3.3. Point Cloud Registration Algorithm Based on Feature Point Similarity Formula

After extracting feature points from different scenes, it was necessary to pair these feature points. In this study, the eigenvalue loading ratio was used to pair feature point clusters. Generally, a feature point from the source point cloud may correspond to multiple matches in the target point cloud within the initial matching cluster. Therefore, constraints must be applied to eliminate duplicate and incorrect matching pairs before proceeding with coarse registration.
Figure 7 shows the optimized map of the matched feature points obtained after the optimal matching of the two-piece point cloud in the chamber scenario, along with the mapping of the feature point descriptors. Figure 7a,b illustrate the connecting-line optimization maps, where the red connecting lines represent all initial feature point pairs and the green connecting lines indicate the optimized feature point pairs. The visualization results show that the algorithm proposed in this study greatly enhanced the accuracy of the pairings and simplified the subsequent coarse registration process. Figure 7c,d are the feature point descriptor maps. Since the number of matching pairs was too large to plot in full, the three principal eigenvalues corresponding to the first 80 matching pairs were selected for mapping from both the source and target point clouds; the eigenvalues were normalized, and the values were output as percentages. The comparison between Figure 7c,d reveals that the loading ratios corresponding to the matched pairs were identical, which also validates the feasibility and accuracy of the connecting-line optimization from a data-level perspective.
After the matched pairs were optimized, the sample consensus initial alignment (SAC-IA) algorithm was applied, followed by KD-tree-accelerated ICP fine registration, to derive the final transformation matrix and complete the entire registration process. To verify the effectiveness of the proposed algorithm, the feature points extracted in the previous section were aligned under identical experimental conditions, and the proposed algorithm was compared analytically with the ICP, FPFH+ICP, and ColorICP algorithms. The number of ICP iterations was uniformly set to 100, with all other parameters unchanged. To present the final registration outcomes clearly, the results for all scenes were uniformly colored, with the red point cloud indicating the source point cloud's location and the purple point cloud indicating the target point cloud's location. The ColorICP algorithm was included mainly because the experimental subjects in this article were real color point clouds containing RGB values; since the other algorithms do not take the color information of point clouds into account, including ColorICP allows a more comprehensive comparison of how the different algorithms handle real point clouds, thereby improving the contrast and persuasiveness of the experimental results.
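The fine-registration stage can be sketched as a minimal point-to-point ICP accelerated by a KD-tree built once over the target cloud. This is not the authors' implementation: it assumes the SAC-IA coarse result would supply the initial pose (the identity is used here for simplicity), and it uses the standard SVD (Kabsch) solution for the per-iteration rigid transform.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=100, tol=1e-8):
    """Minimal KD-tree-accelerated point-to-point ICP sketch.
    source, target: (n, 3) arrays. Returns accumulated (R, t)."""
    R, t = np.eye(3), np.zeros(3)       # in the full pipeline: SAC-IA result
    tree = cKDTree(target)              # built once; reused every iteration
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        dists, idx = tree.query(moved)  # closest-point correspondences
        err = np.mean(dists ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        # SVD (Kabsch) solution of the best rigid transform for the pairs.
        corr = target[idx]
        mu_s, mu_t = moved.mean(axis=0), corr.mean(axis=0)
        H = (moved - mu_s).T @ (corr - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T         # reflection-safe rotation
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

Building the KD-tree once and querying it each iteration is what turns the naive O(n²) correspondence search into the accelerated variant described above.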
The root-mean-square error (RMSE) [29] was employed as the evaluation metric for the experiments; it is the square root of the mean squared distance between corresponding point pairs in the source and target point clouds. Additionally, the registration time required was compared to assess the efficiency of the registration algorithm. The RMSE was calculated as shown in Formula (10):
$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left\|Rp_{i}+t-q_{i}\right\|^{2}}$$
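A direct translation of Formula (10) into code, assuming the point clouds are stored as n×3 NumPy arrays whose rows are already in correspondence:

```python
import numpy as np

def registration_rmse(source, target, R, t):
    """Root-mean-square error between transformed source points and their
    corresponding target points, as in Formula (10)."""
    transformed = source @ R.T + t          # apply R*p_i + t to every point
    residuals = np.linalg.norm(transformed - target, axis=1)
    return np.sqrt(np.mean(residuals ** 2))

# Toy check: a pure translation recovered exactly yields zero error.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = src + np.array([0.5, -0.2, 0.1])
print(registration_rmse(src, tgt, np.eye(3), np.array([0.5, -0.2, 0.1])))  # 0.0
```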
A comparison of the registration results between four algorithms is presented in Figure 8: the ICP (iterative closest point) algorithm, the ColorICP (color iterative closest point) algorithm, the FPFH+ICP (fast point feature histogram combined with ICP) algorithm, and the algorithm proposed in this study. The performance of the four algorithms was evaluated in six scenarios:
  • The ICP algorithm: The traditional ICP algorithm registers point clouds by iteratively minimizing the geometric distances between them. It depends on the initial alignment and is sensitive to noise, which can lead to convergence on a local optimum. Consequently, it did not obtain a satisfactory registration in any of the six scenes.
  • The ColorICP algorithm: This algorithm incorporates color information into the ICP process for point cloud registration. As can be seen from the results, the color information somewhat enhanced registration accuracy, particularly in models with rich color variations (e.g., the bear doll and tree stool). However, in areas where color overlap was not significant, and especially in single-color scenes such as the rabbit and dragon, its performance improvement was limited.
  • The FPFH+ICP algorithm: This approach enhances ICP registration by incorporating local geometric features, greatly improving the registration of complex geometries. It showed considerable improvement over the basic ICP and ColorICP algorithms. Nevertheless, it still encountered difficulties with specific details when local features were not pronounced or the point cloud was heavily occluded.
  • The algorithm in this study: In all scenes, the algorithm proposed in this study achieved superior registration results, with closer and more precise alignment between the source and target point clouds. In the “dragon” and “tree stool” scenarios, the algorithm handled complex geometric structures effectively, resulting in more accurate registration, while in complex scenes (e.g., the chamber and office), the registration was visibly more precise than that of the other algorithms, with a major reduction in misregistration.
Figure 8. Comparison of the registration effects of the four types of algorithms in different scenarios.
In summary, the traditional ICP algorithm is prone to falling into locally optimal solutions when dealing with complex point cloud data. The ColorICP and FPFH+ICP algorithms enhanced the accuracy and robustness of registration to some extent by incorporating color information and local geometric features, respectively. The algorithm proposed in this study improved the registration accuracy under complex geometric structures and varied color distributions and demonstrated adaptability and stability across a range of scenarios. It exhibited major advantages particularly in point cloud registration scenarios characterized by complex geometries and changing colors, such as the “dragon” scenario.
From the data comparison of the registration results, as shown in Table 3, the following observations can be made:
  • The ICP algorithm: The ICP algorithm generally exhibited a higher RMSE and a longer runtime across the datasets, indicating lower precision and efficiency on complex data. In particular, the ICP algorithm is prone to failure at low overlap rates.
  • The ColorICP algorithm: Compared with the ICP algorithm, the inclusion of color features in the ColorICP algorithm resulted in major improvements on some datasets, particularly the “tree stool” point cloud scene, where the RMSE was markedly reduced. However, the ColorICP algorithm tended to be more time-consuming than the ICP algorithm.
  • The FPFH+ICP algorithm: This algorithm employs a coarse-plus-fine registration scheme, which allowed it to outperform both the ICP and ColorICP algorithms on most datasets, with much lower RMSE values, greater registration accuracy, and reduced time consumption.
  • The algorithm in this study: The algorithm proposed in this study exhibited the lowest RMSE values on all datasets, with clear advantages in time consumption. Specifically, in the complex indoor scenes of the “chamber” and “office”, the RMSE was considerably lower than that of the other algorithms and the processing time was greatly reduced, demonstrating a high level of accuracy and efficiency.
Table 3. Performance comparison of different registration methods on six sample point clouds: RMSE (mm) and computation time (s).

| Point Cloud | ICP RMSE/mm | ICP Time/s | ColorICP RMSE/mm | ColorICP Time/s | FPFH+ICP RMSE/mm | FPFH+ICP Time/s | Ours RMSE/mm | Ours Time/s |
|---|---|---|---|---|---|---|---|---|
| Bunny | 10.739 | 8.943 | 9.457 | 10.112 | 0.764 | 7.987 | 0.103 | 4.223 |
| Dragon | 12.568 | 9.843 | 10.524 | 10.946 | 1.918 | 8.345 | 0.113 | 5.185 |
| Bear doll | 5.124 | 6.578 | 3.152 | 7.027 | 0.509 | 5.231 | 0.092 | 2.569 |
| Tree stool | 25.891 | 14.999 | 2.776 | 17.736 | 0.427 | 12.867 | 0.147 | 6.037 |
| Chamber | 4.615 | 48.693 | 2.203 | 53.234 | 1.031 | 36.378 | 0.065 | 13.846 |
| Office | 4.387 | 53.902 | 1.698 | 65.049 | 0.885 | 39.679 | 0.081 | 14.601 |
The best performance figures are highlighted in bold font.

4. Discussion

To address the issue that traditional ICP registration algorithms often converge to locally optimal solutions, leading to global point cloud registration failure, this study introduced a point cloud registration scheme based on the adaptive neighborhood eigenvalue loading ratio. The scheme employs a dual-registration strategy of coarse registration followed by fine registration, which improves both accuracy and time efficiency over traditional methods. First, the resolutions of the source and target point clouds are calculated and used to determine the average grid density in the adaptive grid filtering algorithm, thereby eliminating point cloud noise. Then, exploiting the property that the relative positions between the internal points of a rigid body remain invariant under rigid transformation, and using the point cloud resolution as the adaptive basis for the spherical neighborhood radius, the eigenvalues of the covariance matrix of each point and its neighborhood are calculated; the resulting eigenvalue loading ratios are used to extract the point cloud feature points. Incorrect matching pairs are eliminated by optimizing the matching-pair group with an angular threshold. The initial transformation matrix is then calculated by using sample consensus initial alignment (SAC-IA) to complete the coarse registration, followed by KD-tree-accelerated ICP fine registration to obtain the final transformation matrix and complete the whole registration process.
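The first two steps of this pipeline, adaptive resolution estimation and neighborhood eigenvalue analysis, can be sketched as below. The mean nearest-neighbor distance as the resolution estimate, the radius factor, and the normalization of eigenvalues to "loading ratios" are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_resolution(points):
    """Mean nearest-neighbor distance of an (n, 3) cloud, used here as the
    adaptive basis for grid width and spherical-neighborhood radius."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)      # k=2: the point itself + 1 neighbor
    return dists[:, 1].mean()

def eigenvalue_loading_ratios(points, radius):
    """For each point, eigenvalues of the covariance matrix of its spherical
    neighborhood, normalized to sum to 1 (illustrative 'loading ratios').
    Points with fewer than 3 neighbors are left as zero rows."""
    tree = cKDTree(points)
    ratios = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        nbrs = points[tree.query_ball_point(p, radius)]
        if len(nbrs) < 3:
            continue                        # too few neighbors to estimate
        w = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending eigenvalues
        ratios[i] = w / w.sum()
    return ratios
```

In use, the radius would be tied to the data, e.g. `radius = k * cloud_resolution(points)` for some factor `k`, so that denser scans automatically get smaller neighborhoods; near-planar neighborhoods then show a smallest ratio close to zero, while contour and edge regions show a more balanced spectrum.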
Compared with traditional ICP, this method overcomes ICP's tendency to fall into local optima. By using adaptive grid filtering and eigenvalue-loading-ratio feature extraction, it effectively removes noise points, ensuring registration accuracy and stability. Meanwhile, compared with ColorICP and FPFH+ICP, it extracts the internal structural features of the point cloud more effectively, improving the speed and accuracy of feature point extraction. The feature point extraction phase remains the most time-consuming step, accounting for about 80% of the total runtime; future research could address this by optimizing the algorithm or using parallel computing.

5. Conclusions

This study presents an efficient and adaptive method for point cloud registration. Experiments were conducted on both standard datasets and self-collected non-standard datasets to demonstrate the algorithm's universality, and the results were compared against several commonly used registration algorithms at both the qualitative and quantitative levels. The experimental results show that the proposed registration method is more accurate and time-efficient than traditional ones, particularly when dealing with noisy and complex point cloud data. The proposed algorithm not only exhibits good stability and low error rates but also accomplishes high-quality registration tasks more swiftly. Furthermore, furniture contours can be extracted with higher speed and greater precision for indoor scene registration, thereby addressing the model deformation and low efficiency prevalent in indoor localization and reconstruction. Additionally, the feature extraction algorithm proposed in this study can provide technical insights for the semantic segmentation of point clouds.

Author Contributions

Conceptualization, Z.L. and T.P.; methodology, T.P.; software, R.T. and Z.H.; validation, R.T. and Z.H.; formal analysis, Z.L.; investigation, T.P.; resources, T.P.; data curation, R.T.; writing—original draft preparation, T.P.; writing—review and editing, Z.L.; supervision, Z.H.; project administration, Z.L.; funding acquisition, Z.L. and T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Nature Science Foundation of China [grant number: 42301373] and the Changsha University of Science and Technology Graduate Student Research and Innovation Project [grant number: CXCLY2022024].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon request from the corresponding authors.

Acknowledgments

We thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, G.; Yang, S.; Hu, P.; Deng, H. Advances and prospects of vision-based 3D shape measurement methods. Machines 2022, 10, 124. [Google Scholar] [CrossRef]
  2. Alsadik, B.; Karam, S. The simultaneous localization and mapping (SLAM)—An overview. J. Appl. Sci. Technol. Trends 2021, 2, 147–158. [Google Scholar] [CrossRef]
  3. Liu, J. An adaptive process of reverse engineering from point clouds to CAD models. Int. J. Comput. Integr. Manuf. 2020, 33, 840–858. [Google Scholar] [CrossRef]
  4. Lin, C.H.; Kong, C.; Lucey, S. Learning efficient point cloud generation for dense 3d object reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  5. Sipiran, I.; Mendoza, A.; Apaza, A.; Lopez, C. Data-driven restoration of digital archaeological pottery with point cloud analysis. Int. J. Comput. Vis. 2022, 130, 2149–2165. [Google Scholar] [CrossRef]
  6. Abe, L.I.; Iwao, Y.; Gotoh, T.; Kagei, S.; Takimoto, R.Y.; Tsuzuki, M.D.S.G.; Iwasawa, T. High-speed point cloud matching algorithm for medical volume images using 3d voronoi diagram. In Proceedings of the 2014 7th International Conference on Biomedical Engineering and Informatics, Dalian, China, 14–16 October 2014. [Google Scholar]
  7. Li, G.; Cui, Y.; Wang, L.; Meng, L. Automatic Registration Algorithm for the Point Clouds Based on the Optimized RANSAC and IWOA Algorithms for Robotic Manufacturing. Appl. Sci. 2022, 12, 9461. [Google Scholar] [CrossRef]
  8. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 1611, 586–606. [Google Scholar] [CrossRef]
  9. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2241–2254. [Google Scholar] [CrossRef]
  10. Jian, B.; Vemuri, B.C. Robust point set registration using gaussian mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 1633–1645. [Google Scholar] [CrossRef]
  11. Zhang, J.; Yao, Y.; Deng, B. Fast and robust iterative closest point. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3450–3466. [Google Scholar] [CrossRef] [PubMed]
  12. Liu, B.; Guo, J.M.; Deng, X.X. Point cloud registration combining octree and nearest point iteration algorithms. Surv. Mapp. Sci. 2016, 41, 130–132+177. [Google Scholar]
  13. Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  14. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
  15. Feldmar, J.; Ayache, N. Rigid, Affine and Locally Affine Registration of Free-Form Surfaces. Int. J. Comput. Vis. 1996, 18, 99–119. [Google Scholar] [CrossRef]
  16. Xing, Y.; Song, T.; Zhao, Y.; Liu, G.Y.; Zheng, M.P. 3D-SIFT feature extraction combined with voxel filtering for point cloud refinement algorithm. Laser J. 2023, 44, 163–169. [Google Scholar]
  17. Yang, Z.; Wang, X.; Hou, J. A 4PCS coarse registration algorithm based on ISS feature points. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021. [Google Scholar]
  18. Sun, R.; Zhang, E.; Mu, D.; Ji, S.; Zhang, Z.; Liu, H.; Fu, Z. Optimization of the 3D point cloud registration algorithm based on FPFH features. Appl. Sci. 2023, 13, 3096. [Google Scholar] [CrossRef]
  19. Park, J.; Zhou, Q.Y.; Koltun, V. Colored point cloud registration revisited. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 143–152. [Google Scholar]
  20. Chen, Y.; Wang, Y.; Li, J.L.; Gao, X.R.; Zhang, Y. An efficient point cloud registration algorithm based on principal component analysis. Adv. Lasers Optoelectron. 2023, 60, 376–383. [Google Scholar]
  21. Liu, B.; Liu, L.; Tian, F. An Improved SAC-IA Algorithm Based on Voxel Nearest Neighbor Search. Crit. Rev. Biomed. Eng. 2022, 50, 35–46. [Google Scholar] [CrossRef] [PubMed]
  22. Zeybek, M.; Şanlıoğlu, İ. Point cloud filtering on UAV based point cloud. Measurement 2019, 133, 99–111. [Google Scholar] [CrossRef]
  23. Wang, W.B.; Tian, M.Y.; Yu, J.Y.; Song, C.H.; Li, J.R.; Zhou, M.L. Improved iterative nearest point cloud registration method. Adv. Lasers Optoelectron. 2022, 59, 390–399. [Google Scholar]
  24. Schauer, J.; Nüchter, A. Collision detection between point clouds using an efficient kd tree implementation. Adv. Eng. Inf. 2015, 29, 440–458. [Google Scholar] [CrossRef]
  25. Lin, C.H.; Chen, J.Y.; Su, P.L.; Chen, C.H. Eigen-feature analysis of weighted covariance matrices for LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2014, 94, 70–79. [Google Scholar] [CrossRef]
  26. Zhao, D.; Jiang, H.; Wang, Q.; Yu, Y.; Qian, K.; He, J.K. Point cloud registration based on eigenvalue deviation ratio. Appl. Laser 2023, 1–7. Available online: https://kns.cnki.net/kcms2/article/abstract?v=_Kb8wOrUs9u8EFz4YTVF38UaiSjQwvss-fjuUTyXVX-yQshVUAgmvZRYZTr9blIohnpndTfCb549V7fRQp81qO6M4jYV0I2pvH5WJ5Lz0OgJBqDWptoGhdE5nAlDAwnq-PWETFwzDF8=&uniplatform=NZKPT&language=CHS (accessed on 30 May 2024).
  27. Yuan, Z.; Lu, T.D.; Liu, R. A normal distribution transform point cloud registration method based on BFGS correction. Surv. Mapp. Bull. 2020, 10, 38–42. [Google Scholar]
  28. Cao, H.; Chen, D.; Zheng, Z.; Zhang, Y.; Zhou, H.; Ju, J. Fast Point Cloud Registration Method with Incorporation of RGB Image Information. Appl. Sci. 2023, 13, 5161. [Google Scholar] [CrossRef]
  29. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. Subjective and objective quality evaluation of 3D point cloud denoising algorithms. In Proceedings of the 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China, 10–14 July 2017. [Google Scholar]
Figure 1. Flow chart of point cloud preprocessing and combined registration algorithm.
Figure 2. Point neighborhood division.
Figure 3. Comparison of feature point extraction: original point cloud, hybrid point cloud, feature point cloud.
Figure 4. Three different types of experimental subjects.
Figure 5. Visualization of the results after instance pre-processing.
Figure 6. Comparison of the extraction effects of the two algorithms in different scenes.
Figure 7. Optimization maps of matched connecting lines and mapping of feature point descriptors.
Table 1. Comparison of point cloud data before and after adaptive raster filtering.

| Point Cloud | Original Point Cloud | Pass-Through Filter | Filtration Ratio | Point Cloud Resolution | Average Raster Point Cloud Density |
|---|---|---|---|---|---|
| Tree Stool-1 | 50,711 | 49,061 | 3.254% | 0.006 | 4.602 |
| Lion-1 | 10,469 | 9,750 | 6.868% | 0.004 | 3.604 |
| Doll Bear-1 | 23,282 | 22,879 | 1.731% | 0.003 | 1.596 |
Table 2. Comparison of the extraction result data of the two algorithms in different scenarios.

| Point Cloud | Original Point Cloud | Number of Feature Points-A | Time-A/s | Number of Feature Points-B | Time-B/s |
|---|---|---|---|---|---|
| Bunny | 35,947 | 2679 | 3.398 | 2067 | 6.372 |
| Dragon | 43,764 | 4023 | 4.369 | 3749 | 7.648 |
| Bear doll | 23,282 | 2132 | 2.223 | 1511 | 4.521 |
| Tree stool | 49,061 | 3856 | 5.211 | 4259 | 11.176 |
| Chamber | 131,232 | 6787 | 12.071 | 7149 | 33.843 |
| Office | 131,006 | 6213 | 13.298 | 8910 | 36.419 |

Share and Cite

MDPI and ACS Style

Liao, Z.; Peng, T.; Tang, R.; Hao, Z. Point Cloud Registration Algorithm Based on Adaptive Neighborhood Eigenvalue Loading Ratio. Appl. Sci. 2024, 14, 4828. https://doi.org/10.3390/app14114828

