GACM: A Graph Attention Capsule Model for the Registration of TLS Point Clouds in the Urban Scene
"> Figure 1
<p>The framework of our research consists of training and registration (test). In the training section, the training data set, including matching and unmatching pointsets, is constructed. <span class="html-italic">N</span> is the number of points in a point set; <span class="html-italic">r</span> is the number of operations. FC stands for full convolution operation. The trained GACM is used in the registration process. In the test process, two scans of point clouds <span class="html-italic">P</span> and <span class="html-italic">S</span> were used as the inputs. The points obtained by the farthest point sampling (FPS) were used as the key points, and the corresponding pointsets were described by the features extracted by the trained GACM. Finally, the corresponding points were determined based on the Euclidean distance of the features, and the rigid transformation matrixes (<b>R</b> and <b>T</b>) were calculated after eliminating the wrong corresponding points.</p> "> Figure 2
Figure 2. An illustration of the graph attention convolution process for a central node *i* and its neighborhood. We randomly select *n* points (*n* = 4 in the figure, including the central node itself) within the radius *R* of the central node as the target nodes used to compute the feature of node *i*. The attention mechanism expresses the degree of association between the central node and its neighboring nodes in the local feature space.
Figure 3. The construction process of the triplet loss.
Figure 4. The RMSE values of registration of our method in Dataset I.
Figure 5. The registration results of each method in Dataset I (scan #34 and scan #33). The red box in the upper left of each subfigure is an enlarged view of the corresponding small red box. (a) The original two scans; (b) Method I; (c) Method II; (d) Method III; (e) Method IV; (f) our method (*k* = 512).
Figure 6. The registration results of each method in Dataset II (taking the registration of scan #25 to scan #24 as an example). (a) The original two scans; (b) Method I; (c) Method II; (d) Method III; (e) Method IV; (f) our method (*k* = 512).
Figure 7. The registration results of scan #1 to scan #0 in Dataset III and Dataset IV; (a,e) are the original two scans of Dataset III and Dataset IV; (b,f) are obtained by Method III; (c,g) are obtained by Method IV; (d,h) are obtained by our method (*k* = 512).
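The registration stage described in the Figure 1 caption selects key points with farthest point sampling (FPS). For reference, here is a minimal NumPy sketch of greedy FPS over an N × 3 point array; the function name and array layout are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, num_samples: int) -> np.ndarray:
    """Greedy FPS: repeatedly pick the point farthest from the points already chosen.

    points: (N, 3) array of XYZ coordinates.
    Returns the indices of the sampled key points.
    """
    n = points.shape[0]
    selected = np.zeros(num_samples, dtype=np.int64)
    # Distance from every point to its nearest already-selected point.
    min_dist = np.full(n, np.inf)
    selected[0] = 0  # arbitrary seed point
    for i in range(1, num_samples):
        diff = points - points[selected[i - 1]]
        dist = np.einsum('ij,ij->i', diff, diff)   # squared distance to the last pick
        min_dist = np.minimum(min_dist, dist)
        selected[i] = int(np.argmax(min_dist))     # farthest remaining point
    return selected

# Example: pick 512 key points from one scan.
# key_idx = farthest_point_sampling(scan_xyz, 512)
```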
Abstract
1. Introduction
- (1) A new neural network model (namely GACM) is proposed to learn 3D feature descriptors in the urban scene; it can fully represent the features of 3D urban scenes by fusing the advantages of the graph attention convolution network and the 3D capsule network (see the sketch after this list).
- (2) We combine the GACM into a new, efficient registration framework for TLS point clouds in the urban scene and successfully apply the learned 3D urban feature descriptors to the high-quality registration of TLS point clouds.
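Contribution (1) rests on attention-weighted aggregation over a local point neighborhood (illustrated later in Figure 2). The following is a minimal NumPy sketch of such a graph attention aggregation in the style of common graph attention formulations; the projection `W`, attention vector `a`, LeakyReLU scoring, and all shapes are illustrative assumptions, not the GACM layer itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_attention_aggregate(center_feat, neighbor_feats, W, a):
    """Attention-weighted aggregation of a point neighborhood.

    center_feat: (d,) feature of the central node i.
    neighbor_feats: (n, d) features of the n sampled neighbors (may include i itself).
    W: (d, d_out) linear projection; a: (2 * d_out,) attention vector.
    Returns the (d_out,) aggregated feature of node i.
    """
    h_i = center_feat @ W                      # project the central node
    h_j = neighbor_feats @ W                   # project the neighbors
    # Score each neighbor from the concatenated pair (h_i, h_j),
    # with a LeakyReLU non-linearity as in common graph attention formulations.
    scores = np.array([np.concatenate([h_i, h]) @ a for h in h_j])
    scores = np.where(scores > 0, scores, 0.2 * scores)   # LeakyReLU
    alpha = softmax(scores)                    # attention coefficients over the neighborhood
    return (alpha[:, None] * h_j).sum(axis=0)  # weighted aggregation

# Example with 4 neighbors (n = 4 as in Figure 2) and 32-d input features:
# rng = np.random.default_rng(0)
# out = graph_attention_aggregate(rng.normal(size=32), rng.normal(size=(4, 32)),
#                                 rng.normal(size=(32, 16)), rng.normal(size=32))
```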
2. Related Work
2.1. Handcrafted Three-Dimensional Feature Descriptors of Point Clouds
2.2. Learnable 3D Feature Descriptors of Point Clouds
3. Method
3.1. Network Structure
3.1.1. Graph Attention Convolution Module
3.1.2. Three-Dimensional Capsule Module
Capsule Network
Algorithm 1: Dynamic routing algorithm.
1: Input: the prediction vectors $\hat{\mathbf{u}}_{j|i}$, the iteration number $t$.
2: Output: the deep capsule vectors $\mathbf{v}_j$.
3: For every capsule $i$ in the primary point capsule layer and every capsule $j$ in the output feature layer: initialize the logits of the coupling coefficients, $b_{ij} = 0$.
4: For $t$ iterations do
5: For every capsule $i$ in the primary point capsule layer: $\mathbf{c}_i = \mathrm{softmax}(\mathbf{b}_i)$.
6: For every capsule $j$ in the output feature layer: $\mathbf{s}_j = \sum_i c_{ij}\,\hat{\mathbf{u}}_{j|i}$ and $\mathbf{v}_j = \mathrm{squash}(\mathbf{s}_j)$.
7: For every capsule $i$ in the primary point capsule layer and every capsule $j$ in the output feature layer: $b_{ij} \leftarrow b_{ij} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$.
8: Return $\mathbf{v}_j$.
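To make Algorithm 1 concrete, the following is a minimal NumPy sketch of dynamic routing between a primary capsule layer and an output feature layer, following the standard capsule-network formulation; the shapes, the `squash` helper, and the default of three routing iterations are illustrative assumptions rather than the GACM code.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Standard capsule squash: keeps the direction, maps the length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: (num_in, num_out, dim) prediction vectors u_hat[i, j] = û_{j|i}.
    Returns v: (num_out, dim) output capsule vectors."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                            # routing logits b_ij
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax over output capsules j
        s = np.einsum('ij,ijd->jd', c, u_hat)                  # weighted sum s_j
        v = squash(s)                                          # v_j = squash(s_j)
        b = b + np.einsum('ijd,jd->ij', u_hat, v)              # agreement update b_ij += û_{j|i} · v_j
    return v

# Example: 64 primary capsules routed to 16 output capsules of dimension 8.
# v = dynamic_routing(np.random.randn(64, 16, 8), num_iters=3)
```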
The Specific Operation in 3D Capsule Module
3.2. Training Process
3.2.1. Loss Function
3.2.2. The Construction of Training Point Pairs
3.3. Point Cloud Registration
4. Experiments and Results
4.1. Datasets
4.2. Parameter Sensitivity Analysis
4.3. Comparison with Other Methods
4.3.1. Test Results on Dataset I
4.3.2. Test Results on Dataset II
4.3.3. Test Results on Dataset III and Dataset IV
4.4. Ablation Experiments
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Dataset | Dataset I (ETH Hauptgebaude) | Dataset II (Stairs) | Dataset III (Apartment) | Dataset IV (Gazebo (Winter)) |
|---|---|---|---|---|
| Situation | Indoors | Indoors/Outdoors | Indoors | Outdoors |
| Number of scans | 36 | 31 | 45 | 32 |
| Mean points/scan | 191,000 | 191,000 | 365,000 | 153,000 |
| Scene size | 62 m × 65 m × 18 m | 21 m × 111 m × 27 m | 17 m × 10 m × 3 m | 72 m × 70 m × 19 m |
| Method | Success Rate (RTE < 1 m and RRE ≤ 10°) | Success Rate (RTE < 1 m and RRE ≤ 5°) | Success Rate (RTE < 0.5 m and RRE ≤ 2.5°) |
|---|---|---|---|
| Method I | 10/15 (67%) | 8/15 (53%) | 4/15 (27%) |
| Method II | 11/15 (73%) | 4/15 (27%) | 0/15 (0%) |
| Method III | 15/15 (100%) | 15/15 (100%) | 5/15 (33%) |
| Method IV | 15/15 (100%) | 13/15 (87%) | 4/15 (27%) |
| Our method (k = 128) | 15/15 (100%) | 15/15 (100%) | 6/15 (40%) |
| Our method (k = 512) | 15/15 (100%) | 15/15 (100%) | 9/15 (60%) |
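The success-rate thresholds in these tables are stated in terms of the relative translation error (RTE) and relative rotation error (RRE). As a reference, the sketch below uses the commonly adopted definitions (Euclidean distance between translation vectors, and the angle of the residual rotation); these definitions are an assumption, since the paper's exact evaluation code is not shown here.

```python
import numpy as np

def registration_errors(R_est, t_est, R_gt, t_gt):
    """Common RTE/RRE definitions for an estimated rigid transform (R, t).

    RTE: Euclidean distance between estimated and ground-truth translations (m).
    RRE: rotation angle of R_gt^T @ R_est (degrees).
    """
    rte = float(np.linalg.norm(t_est - t_gt))
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    rre = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return rte, rre

# A registration then counts as a success at the strictest level above
# if rte < 0.5 and rre <= 2.5.
```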
| Method | RTE (m) | RRE (°) |
|---|---|---|
| Method I | 0.41 | 3.4 |
| Method II | 0.59 | 11.0 |
| Method III | 0.83 | 2.8 |
| Method IV | 0.32 | 5.4 |
| Our method (k = 128) | 0.40 | 1.8 |
| Our method (k = 512) | 0.05 | 1.0 |
| Method | Success Rate at 30° (RTE < 1 m and RRE ≤ 10°) | Success Rate at 60° (RTE < 1 m and RRE ≤ 10°) |
|---|---|---|
| Method III | 13/15 (87%) | 12/15 (80%) |
| Method IV | 10/15 (67%) | 0/15 (0%) |
| Our method (k = 512) | 15/15 (100%) | 13/15 (87%) |
| Method | Success Rate (RTE < 1 m and RRE ≤ 10°) | Success Rate (RTE < 1 m and RRE ≤ 5°) | Success Rate (RTE < 0.5 m and RRE ≤ 2.5°) |
|---|---|---|---|
| Method I | 23/30 (77%) | 14/30 (47%) | 3/30 (10%) |
| Method II | 11/30 (37%) | 9/30 (30%) | 2/30 (7%) |
| Method III | 9/30 (30%) | 7/30 (23%) | 2/30 (7%) |
| Method IV | 8/30 (27%) | 1/30 (3%) | 1/30 (3%) |
| Our method (k = 128) | 27/30 (90%) | 24/30 (80%) | 17/30 (57%) |
| Our method (k = 512) | 28/30 (93%) | 24/30 (80%) | 18/30 (60%) |
| Method | RTE (m) | RRE (°) |
|---|---|---|
| Method I | 0.50 | 13.1 |
| Method II | 0.33 | 37.1 |
| Method III | 2.09 | 53.5 |
| Method IV | 4.45 | 178.6 |
| Our method (k = 128) | 0.23 | 1.9 |
| Our method (k = 512) | 0.10 | 2.5 |
| Scans | Metric | Method I | Method II | Method III | Method IV | Our Method (k = 128) | Our Method (k = 512) |
|---|---|---|---|---|---|---|---|
| #1 to #0 | RTE (m) | 0.20 | 0.41 | 0.91 | 0.38 | 0.13 | 0.02 |
| | RRE (°) | 4.5 | 11.3 | 18.1 | 12.3 | 1.8 | 1.2 |
| #5 to #4 | RTE (m) | 2.20 | 0.42 | 0.22 | 1.90 | 0.27 | 0.17 |
| | RRE (°) | 7.2 | 9.6 | 10.6 | 76.4 | 2.0 | 3.6 |
| #15 to #14 | RTE (m) | 0.38 | 0.45 | 0.28 | 0.38 | 0.11 | 0.3 |
| | RRE (°) | 8.7 | 16.3 | 14.6 | 12.1 | 2.7 | 10.9 |
| #25 to #24 | RTE (m) | 0.50 | 0.33 | 2.09 | 4.45 | 0.23 | 0.1 |
| | RRE (°) | 13.1 | 37.1 | 53.5 | 178.6 | 1.9 | 2.5 |
| #26 to #25 | RTE (m) | 1.25 | 0.3 | 0.45 | 3.05 | 0.52 | 0.15 |
| | RRE (°) | 19 | 29.4 | 4.1 | 222.3 | 22.7 | 3.2 |
| #27 to #26 | RTE (m) | 1.50 | 0.34 | 1.36 | 2.92 | 0.69 | 0.27 |
| | RRE (°) | 8.3 | 9.4 | 17.5 | 268.0 | 2.0 | 2.0 |
| Network Structure | Success Rate (RTE < 1 m and RRE ≤ 10°) | Success Rate (RTE < 1 m and RRE ≤ 5°) | Success Rate (RTE < 0.5 m and RRE ≤ 2.5°) |
|---|---|---|---|
| GAC | 25/30 (83%) | 24/30 (80%) | 16/30 (53%) |
| 3DCaps | 0/30 (0%) | 0/30 (0%) | 0/30 (0%) |
| GAC + 3DCaps (ours) | 27/30 (90%) | 24/30 (80%) | 17/30 (57%) |
| Registration Scans | GAC (z = 3000 and k = 128), RRE (°) | GAC + 3DCaps (our method with z = 3000 and k = 128), RRE (°) |
|---|---|---|
| #25 to #24 | 43.2 | 1.9 |
| #26 to #25 | 21.6 | 22.7 |
| #27 to #26 | 11.5 | 2.0 |
| #28 to #27 | 42.3 | 15.9 |
| #29 to #28 | 4.0 | 3.8 |
| #30 to #29 | 24.7 | 12.3 |