VPRNet: Virtual Points Registration Network for Partial-to-Partial Point Cloud Registration
"> Figure 1
<p>Architecture of our VPGnet. The self-supervised network is mainly composed of two parts, the generator and the discriminator. The generator sub-network extracts features through Self-Attention and Transformer, then MLP and Reshape operations are used to generate virtual points. Next, the features of the generated points and ground-truth are extracted through the DGCNN of the Discriminator and compared with each other. Finally the probability that the input point cloud is the ground-truth is output.</p> "> Figure 2
<p>Architecture of our registration network. The iterative network first applies the rotation and translation calculated in the previous iteration to the input cloud. Through two main components of feature extraction, including Transformer and corresponding matrix acquisition, the network obtains the rigid transformation of the current iteration through SVD.</p> "> Figure 3
<p>Registration examples on (<b>a</b>) unseen categories, (<b>b</b>) noisy data, (<b>c</b>) sparse data, and (<b>d</b>) data with rotation of 30–40°. The histograms on the right show the distance between the points. The closer to the blue, the smaller the distance between the points.</p> "> Figure 4
<p>Completion results on (<b>a</b>) unseen categories, (<b>b</b>) noisy data, (<b>c</b>) sparse data, and (<b>d</b>) data with rotation of 30–40°. Red points represent the original incomplete point cloud, and black points represent the points generated by the network.</p> "> Figure 5
<p>Influence of different sparsity levels on <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>A</mi> <mi>E</mi> <mo>(</mo> <mi>R</mi> <mo>)</mo> </mrow> </semantics></math>.</p> "> Figure 6
<p>Influence of different sparsity levels on <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>_</mo> <mi>l</mi> <mi>o</mi> <mi>s</mi> <mi>s</mi> </mrow> </semantics></math>.</p> "> Figure 7
<p>Influence of different initial angles on <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>A</mi> <mi>E</mi> <mo>(</mo> <mi>R</mi> <mo>)</mo> </mrow> </semantics></math>.</p> ">
Abstract
1. Introduction
- A self-supervised virtual point generation network (VPGnet) based on a GAN is proposed. VPGnet focuses on the shape information of point clouds and can effectively complete partial point clouds.
- A combined strategy of virtual point generation and corresponding point estimation is proposed, which reduces the negative effect of partiality during registration.
- Various experiments demonstrate superior performance compared to other state-of-the-art approaches.
2. Related Work
2.1. Traditional Methods
2.2. Learning-Based Methods
2.2.1. Correspondence-Free Methods
2.2.2. Correspondence-Based Methods
2.3. Under Partial Overlap
3. Method
3.1. Overview
3.2. VPGnet Architecture
3.2.1. Multi-Resolution Feature Extraction
3.2.2. Attention
3.2.3. Discriminator
3.3. Regnet Architecture
3.3.1. Correspondence Calculation
3.3.2. SVD Module
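The SVD module solves for the closed-form rigid transformation given putative correspondences. As an illustration of this standard technique (the Kabsch/orthogonal-Procrustes solution, not necessarily the paper's exact implementation), a minimal NumPy sketch:

```python
import numpy as np

def svd_rigid_transform(src, tgt):
    """Estimate rotation R and translation t aligning src to tgt,
    given one-to-one correspondences (both N x 3), via SVD (Kabsch)."""
    src_c = src - src.mean(axis=0)               # center both clouds
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In the network, the hard correspondences would be replaced by the soft correspondences produced by the matching matrix, but the SVD step itself is unchanged.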
3.4. Loss Functions
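Judging from the abbreviations listed later (CD, EMD), the completion branch is presumably supervised with set distances such as the Chamfer Distance. A minimal NumPy sketch of the symmetric (squared) Chamfer Distance, assuming the standard definition rather than the paper's exact variant:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N x 3) and
    q (M x 3): mean squared distance from each point to its nearest
    neighbour in the other set, summed over both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # N x M squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```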
3.5. Implementation Details
4. Experiments and Results
4.1. Data and Metrics
4.1.1. Dataset
4.1.2. Metrics
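The experiments report MAE(R), RMSE(R), MAE(t), and RMSE(t). Assuming the DCP-style convention of measuring errors over Euler angles (in degrees) and translation vectors, both metrics reduce to the same sketch:

```python
import numpy as np

def mae_rmse(pred, gt):
    """Mean absolute error and root-mean-square error between
    predicted and ground-truth values (any matching shapes),
    e.g. Euler angles in degrees for MAE(R)/RMSE(R) or
    translation vectors for MAE(t)/RMSE(t)."""
    err = np.asarray(pred, dtype=float) - np.asarray(gt, dtype=float)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```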
4.2. Baseline Algorithms
4.2.1. Traditional Algorithms
4.2.2. Deep Learning Algorithms
4.3. Generalizability Test
4.4. Robustness Test
4.4.1. Noise Test
4.4.2. Sparsity Level Test
4.4.3. Initial Rotation Angles Test
4.5. Ablation Study
4.5.1. Without VPGnet
4.5.2. Without Transformer
4.5.3. Changing the Number of Iterations
5. Discussion
5.1. Generalizability Test
5.2. Noise Test
5.3. Sparsity Test
5.4. Initial Rotation Angle Test
5.5. Ablation Study
6. Conclusions
- (1) We will add other loss functions and effective modules to improve the completion accuracy.
- (2) We will try to incorporate our method into a larger system, such as SLAM, to ensure the completeness and accuracy of reconstructed scenes.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Full Name |
---|---|
ICP | Iterative Closest Point |
4PCS | 4-Points Congruent Sets |
SA | Self-Attention |
FPS | Farthest Point Sampling |
SVD | Singular Value Decomposition |
DCP | Deep Closest Point |
ACP | Actor-Critic Closest Point |
GAN | Generative Adversarial Network |
FMR | Feature-Metric Registration |
VPG | Virtual Point Generation |
VPRNet | Virtual Points Registration Network |
CD | Chamfer Distance |
EMD | Earth Mover's Distance |
FGR | Fast Global Registration |
Method | Iterations | Train Batch Size | Val. Batch Size | Learning Rate | Epochs |
---|---|---|---|---|---|
PointnetLK | 10 | 64 | 1 | 0.01 | 200 |
DCP | N/A | 32 | 1 | 0.001 | 250 |
RPMnet | 5 | 4 | 1 | 0.0001 | 200 |
OMNet | 4 | 64 | 1 | 0.001 | 1000 |
VPRNet(Ours) | 3 | 24 | 1 | 0.0002 | 250 |
Method | RMSE(R) | MAE(R) | RMSE(t) | MAE(t) | R_loss | T_loss | Reg_loss | Time(s) |
---|---|---|---|---|---|---|---|---|
ICP | 17.29 | 14.85 | 0.19 | 0.16 | 0.73 | 0.21 | 0.94 | 0.005 |
ICP_plane | 33.01 | 28.26 | 0.33 | 0.28 | 0.97 | 0.39 | 1.37 | 0.01 |
GO-ICP | 48.09 | 43.2 | 0.55 | 0.48 | 1.84 | 0.92 | 2.76 | 0.53 |
FGR | 28.11 | 24.47 | 0.22 | 0.19 | 0.95 | 0.2 | 1.15 | 0.08 |
PointnetLK | 25.28 | 22.6 | 0.55 | 0.48 | 1.08 | 0.99 | 2.07 | 0.09 |
DCP | 37.27 | 33 | 0.2 | 0.17 | 0.9 | 0.36 | 1.26 | 0.43 |
RPMnet | 0.51 | 0.43 | 0.16 | 0.15 | 0.02 | 0.004 | 0.03 | 0.59 |
OMNet | 2.09 | 1.19 | 0.02 | 0.01 | 0.06 | 0.03 | 0.09 | 0.06 |
VPRNet(Ours) | 7.26 | 6.27 | 0.16 | 0.14 | 0.28 | 0.11 | 0.40 | 2.04 |
Method | RMSE(R) | MAE(R) | MSE(t) | RMSE(t) | MAE(t) | R_loss | T_loss | Reg_loss | Time(s) |
---|---|---|---|---|---|---|---|---|---|
ICP | 17.29 | 14.85 | 0.04 | 0.19 | 0.16 | 0.72 | 0.21 | 0.94 | 0.002 |
ICP_plane | 33.10 | 28.33 | 1.36 | 0.30 | 0.26 | 0.97 | 0.33 | 1.30 | 0.01 |
GO-ICP | 48.16 | 43.23 | 0.33 | 0.55 | 0.48 | 1.84 | 0.92 | 2.76 | 0.07 |
FGR | 27.40 | 23.83 | 0.06 | 0.22 | 0.19 | 0.94 | 0.20 | 1.13 | 0.62 |
PointnetLK | 43.87 | 38.91 | 0.32 | 0.54 | 0.47 | 1.68 | 0.93 | 2.61 | 0.1 |
DCP | 37.67 | 33.40 | 0.06 | 0.20 | 0.17 | 0.92 | 0.36 | 1.28 | 0.32 |
RPMnet | 5.49 | 4.6 | 0.04 | 0.17 | 0.15 | 0.23 | 0.05 | 0.27 | 0.54 |
OMNet | 3.58 | 2.64 | 0.0005 | 0.02 | 0.01 | 0.13 | 0.03 | 0.15 | 0.06 |
VPRNet(Ours) | 7.69 | 6.60 | 0.03 | 0.16 | 0.14 | 0.31 | 0.12 | 0.43 | 2.10 |
Method | RMSE(R) | MAE(R) | RMSE(t) | MAE(t) | R_loss | T_loss | Reg_loss | Time(s) |
---|---|---|---|---|---|---|---|---|
VPRNet(No VPG) | 37.27 | 33.00 | 0.20 | 0.17 | 0.90 | 0.36 | 1.26 | 0.43 |
VPRNet(Original) | 9.85 | 8.47 | 0.16 | 0.14 | 0.40 | 0.12 | 0.52 | 0.73 |
Method | RMSE(R) | MAE(R) | MSE(t) | RMSE(t) | MAE(t) | R_loss | T_loss | Reg_loss | Time(s) |
---|---|---|---|---|---|---|---|---|---|
VPRNet(No Trans) | 10.44 | 9.05 | 0.03 | 0.16 | 0.14 | 0.43 | 0.13 | 0.56 | 0.09 |
VPRNet(Original) | 7.26 | 6.27 | 0.03 | 0.16 | 0.14 | 0.29 | 0.11 | 0.40 | 2.04 |
Method | MAE(R) | RMSE(R) | MAE(t) | RMSE(t) | R_loss | T_loss | Reg_loss | Time(s) |
---|---|---|---|---|---|---|---|---|
iter = 1 | 8.47 | 9.85 | 0.14 | 0.16 | 0.40 | 0.12 | 0.52 | 0.73 |
iter = 3 | 6.27 | 7.26 | 0.14 | 0.16 | 0.28 | 0.11 | 0.40 | 2.04 |
iter = 5 | 6.70 | 7.76 | 0.15 | 0.17 | 0.31 | 0.12 | 0.43 | 3.48 |
iter = 7 | 8.77 | 10.18 | 0.15 | 0.17 | 0.40 | 0.14 | 0.54 | 4.79 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, S.; Ye, Y.; Liu, J.; Guo, L. VPRNet: Virtual Points Registration Network for Partial-to-Partial Point Cloud Registration. Remote Sens. 2022, 14, 2559. https://doi.org/10.3390/rs14112559