Iterative K-Closest Point Algorithms for Colored Point Cloud Registration
Figure 1. Our virtual active stereo camera and its depth measurement noise model. (a) Two IR cameras and an IR projector comprise an active stereo vision system. The left IR camera is the reference camera. The color camera is used to acquire RGB data. (b) The standard deviation σ_Z of the depth measurement error according to the depth Z. Please refer to the text for more detail; a standard model of this relationship is sketched after the figure captions.

Figure 2. Multi-view Red Green Blue-Depth (RGB-D) camera setup for acquiring our synthetic dataset. The cameras are on an elliptical arc at 30° intervals. The semi-major and semi-minor axes are 3 and 1.5 m, respectively.

Figure 3. Sample RGB-D images in our synthetic dataset. First row: color images of the male model. Second row: depth images of the male model. Third row: color images of the female model. Fourth row: depth images of the female model. First column: view 0. Second column: view 3. Third column: view 6. Fourth column: view 9. The intensity of the depth images is linear in the depth values.

Figure 4. Evaluation of pairwise registration algorithms on our synthetic dataset. The proposed algorithm is compared to prior algorithms that use color. The algorithms are initialized with transformations perturbed away from the true pose. Top: error according to the source view index with perturbation levels of 2° (left) and 10° (right) in the rotational component. Bottom: error with perturbation levels of 5 cm (left) and 25 cm (right) in the translational component. The plots show the median RMSE at convergence (bold curve) and the 40-60% range of RMSE across trials (shaded region). Lower is better. Best viewed in color.

Figure 5. Evaluation of pairwise registration algorithms on our synthetic dataset. The proposed algorithm is compared to prior algorithms that do not use color. Top: perturbation levels of 2° (left) and 10° (right). Bottom: perturbation levels of 5 cm (left) and 25 cm (right).

Figure 6. Evaluation of pairwise registration algorithms on our synthetic dataset. Error according to different perturbation levels in the rotational (left) and translational (right) components.

Figure 7. Evaluation of multi-view point cloud merging algorithms on our synthetic dataset. The proposed algorithm is compared to prior algorithms that use color. Top: perturbation levels of 2° (left) and 10° (right). Bottom: perturbation levels of 5 cm (left) and 25 cm (right).

Figure 8. Evaluation of multi-view point cloud merging algorithms on our synthetic dataset. The proposed algorithm is compared to prior algorithms that do not use color. Top: perturbation levels of 2° (left) and 10° (right). Bottom: perturbation levels of 5 cm (left) and 25 cm (right).

Figure 9. Evaluation of multi-view point cloud merging algorithms on our synthetic dataset. Error according to different perturbation levels in the rotational (left) and translational (right) components.

Figure 10. Evaluation of our depth refinement algorithm on our synthetic dataset. The proposed depth refinement algorithm is applied to the merged point clouds obtained by different methods. Top: perturbation levels of 2° (left) and 10° (right). Middle: perturbation levels of 5 cm (left) and 25 cm (right). Bottom: error according to different perturbation levels.

Figure 11. Point cloud rendering results. First and third rows: merged point clouds. Second and fourth rows: magnified hand regions. Note that neither a preprocessing nor a postprocessing method has been applied to the results.

Figure 12. Real multi-view RGB-D images. First and third rows: color images of the model. Second and fourth rows: depth images of the model. The intensity of the depth images is linear in the depth values. The face regions in the front views have been blurred to protect the model's privacy.

Figure 13. Point cloud rendering results. First row: merged point clouds. Second row: magnified regions in which our depth refinement algorithm produces improved results. Third row: magnified regions in which our depth refinement algorithm produces artifacts. Note that no postprocessing method has been applied to the results.
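The σ_Z curve of Figure 1b is described in the text. As background, the standard error-propagation model for an active stereo camera (an assumption here, not necessarily the exact model used in the paper) relates the depth noise to the disparity noise as

$$
Z = \frac{f\,b}{d}
\quad\Rightarrow\quad
\sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right|\,\sigma_d = \frac{Z^2}{f\,b}\,\sigma_d,
$$

where f is the focal length in pixels, b is the baseline between the two IR cameras, d is the disparity, and σ_d is the standard deviation of the disparity error. Under this model the depth noise grows roughly quadratically with the depth Z.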
Abstract
1. Introduction
2. Related Work
3. Iterative K-Closest Point Algorithms
3.1. Iterative K-Closest Point Algorithm for Pose Refinement
Algorithm 1: Iterative K-closest point algorithm for pose refinement.
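The full listing appears in the paper; below is only a minimal sketch of an iterative K-closest point pose refinement loop, assuming that correspondences are searched in a joint position-plus-color space with an assumed color weight, that all K neighbors of a point contribute equally, and that the rigid transform is estimated with the SVD-based Kabsch method. The names ikcp_pose_refinement, color_weight, K, and iters are illustrative, not taken from the paper.

```python
# Hedged sketch of an iterative K-closest point (IKCP) pose refinement loop.
# Assumptions (not from the paper): correspondences are found in a joint
# position+color space, every one of the K neighbors contributes equally,
# and the rigid transform is estimated with the SVD-based Kabsch method.
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def ikcp_pose_refinement(src_xyz, src_rgb, dst_xyz, dst_rgb,
                         K=5, color_weight=0.1, iters=30):
    """Refine the pose of the source cloud with respect to the target cloud."""
    tree = cKDTree(np.hstack([dst_xyz, color_weight * dst_rgb]))
    R, t = np.eye(3), np.zeros(3)
    cur = src_xyz.copy()
    for _ in range(iters):
        feat = np.hstack([cur, color_weight * src_rgb])
        _, idx = tree.query(feat, k=K)            # K closest points per source point
        pairs_src = np.repeat(cur, K, axis=0)     # each source point paired with
        pairs_dst = dst_xyz[idx.reshape(-1)]      # each of its K neighbors
        dR, dt = rigid_transform(pairs_src, pairs_dst)
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt                # accumulate the total transform
    return R, t
```

Using K neighbors instead of a single closest point densifies the correspondence set and should make the estimate less sensitive to individual noisy matches, which is presumably the motivation behind the K-closest formulation.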
3.2. Iterative K-Closest Point Algorithm for Depth Refinement
Algorithm 2: Iterative K-closest point algorithm for depth refinement.
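Again only a hedged sketch, not the paper's formulation: one plausible reading of K-closest point depth refinement is to move each point along its own viewing ray toward the centroid of its K closest neighbors, so that only the depth value changes and the pixel the point projects to is preserved. The routine below assumes the camera center is at the origin of the point coordinates and that the viewing rays are unit vectors.

```python
# Hedged sketch of K-closest-point depth refinement for a single view.
# Assumptions (not from the paper): the camera center is at the origin,
# rays are unit viewing rays, and each point is moved along its ray toward
# the centroid of its K nearest neighbors in the cloud.
import numpy as np
from scipy.spatial import cKDTree

def refine_depth(points, rays, K=5, iters=3):
    """points: Nx3 positions in the camera frame; rays: Nx3 unit viewing rays."""
    pts = points.copy()
    for _ in range(iters):
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=K + 1)         # +1 because every point finds itself
        centroids = pts[idx[:, 1:]].mean(axis=1)  # mean of the K true neighbors
        depth = np.einsum('ij,ij->i', centroids, rays)  # project the centroid onto the ray
        pts = depth[:, None] * rays               # keep the direction, update the depth only
    return pts
```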
4. Multi-View Point Cloud Registration
Algorithm 3: Multi-view point cloud merging algorithm.
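A minimal sketch of one way to merge the views, assuming that each new view is first placed with its calibrated extrinsic pose and then refined against the cloud accumulated so far with a pairwise routine such as the ikcp_pose_refinement sketch under Algorithm 1; the paper's merging order and refinement strategy may differ.

```python
# Hedged sketch of multi-view point cloud merging. Assumption (not from the
# paper): views are added one by one and each is refined against the cloud
# accumulated so far, starting from its extrinsic calibration.
import numpy as np

def merge_views(clouds, colors, init_poses, refine_fn):
    """clouds/colors: lists of Nx3 arrays per view; init_poses: list of (R, t)
    guesses mapping each view into the reference frame; refine_fn: pairwise
    refiner, e.g. the ikcp_pose_refinement sketch above."""
    R0, t0 = init_poses[0]
    merged_xyz = [clouds[0] @ R0.T + t0]
    merged_rgb = [colors[0]]
    for xyz, rgb, (R, t) in zip(clouds[1:], colors[1:], init_poses[1:]):
        xyz = xyz @ R.T + t                               # rough alignment from calibration
        dR, dt = refine_fn(xyz, rgb,
                           np.vstack(merged_xyz), np.vstack(merged_rgb))
        merged_xyz.append(xyz @ dR.T + dt)
        merged_rgb.append(rgb)
    return np.vstack(merged_xyz), np.vstack(merged_rgb)
```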
Algorithm 4: Multi-view depth refinement algorithm.
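A correspondingly hedged sketch for the multi-view case, assuming each view is refined with the per-view idea from the Algorithm 2 sketch while its K closest points are drawn from the union of all other views; view_centers and the ray representation are assumptions of this sketch.

```python
# Hedged sketch of multi-view depth refinement. Assumption (not from the
# paper): each view's points move along their own viewing rays toward the
# centroid of the K closest points found in the union of the other views.
import numpy as np
from scipy.spatial import cKDTree

def refine_all_views(view_points, view_rays, view_centers, K=5):
    """view_points: list of Nx3 arrays in the reference frame; view_rays: unit
    viewing rays per point; view_centers: camera center of each view."""
    refined = []
    for i, (pts, rays) in enumerate(zip(view_points, view_rays)):
        others = np.vstack([p for j, p in enumerate(view_points) if j != i])
        tree = cKDTree(others)
        _, idx = tree.query(pts, k=K)
        centroids = others[idx].mean(axis=1)      # K closest points from the other views
        depth = np.einsum('ij,ij->i', centroids - view_centers[i], rays)
        refined.append(view_centers[i] + depth[:, None] * rays)  # move along the ray only
    return refined
```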
5. Synthetic Multi-View RGB-D Dataset
6. Results
6.1. Pairwise Pose Estimation
6.2. Multi-View Point Cloud Registration
6.3. Application to a Real-World Dataset
7. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
ICP | Iterative Closest Point
RGB-D | Red Green Blue-Depth
IR | Infrared
GICP | Generalized ICP
RMSE | Root Mean Square Error
FGR | Fast Global Registration
Method | Value
Ours (K = 10) | 47.33
Ours (K = 5) | 33.34
Ours (K = 1) | 17.92
GICP | 16.49
Color 6D GICP | 15.20
Color 6D ICP | 1.20
Color 6D ICP (original) | 8.68
ICP (point to plane) | 0.89
ICP (point to point) | 0.86
Color 3D ICP | 0.38
FGR | 0.07
Choi, O.; Park, M.-G.; Hwang, Y. Iterative K-Closest Point Algorithms for Colored Point Cloud Registration. Sensors 2020, 20, 5331. https://doi.org/10.3390/s20185331