Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting
Figure 1. Workflow diagram for point cloud scene completion using orthographic projection and generative adversarial inpainting.
Figure 2. Example of preprocessing steps applied to a building point cloud: (a) registration, (b) downsampling, (c) rotation, and (d) final extracted building facade.
Figure 3. Examples of augmented training data with (a–c) scan recombination with real occlusions and (d–f) data with synthetic occlusions.
Figure 4. Visualization of the orthographic projection of a point cloud to create a depth image (left) and an RGB color image (right). The depth image is shown as a heat map with depth in meters; the horizontal and vertical image axes are in pixels.
Figure 5. Framework for training the generator and discriminator networks for 2D inpainting.
Figure 6. Example of fully automated 2D inpainting with a generative adversarial network: (a) input scan, (b) building facade after inpainting, and (c) target ground truth.
Figure 7. Visualization of the image-to-point-cloud conversion process: (a) depth information of the original image, (b) a filled image using depth information, and (c) the resulting 3D point cloud.
Figure 8. Example defective point clouds in the proposed dataset: (a) a scan obstructed by trees and a lamp post, and (b) a scan obstructed by trees and a balcony.
Figure 9. Point cloud scene completion results for the Mason building, visualized in 2D: (a) input point cloud, (b) target ground truth, (c) Poisson reconstruction, (d) hole-filling, (e) hybrid, (f) plane-fitting, (g) partial convolution, and (h) proposed.
Figure 10. Point cloud scene completion results for the Mason building, visualized in 3D: (a) input point cloud, (b) target ground truth, (c) Poisson reconstruction, (d) hole-filling, (e) hybrid, (f) plane-fitting, (g) partial convolution, and (h) proposed.
Figure 11. Point cloud scene completion results for the Van Leer building, visualized in 2D: (a) input point cloud, (b) target ground truth, (c) Poisson reconstruction, (d) hole-filling, (e) hybrid, (f) plane-fitting, (g) partial convolution, and (h) proposed.
Figure 12. Point cloud scene completion results for the Van Leer building, visualized in 3D: (a) input point cloud, (b) target ground truth, (c) Poisson reconstruction, (d) hole-filling, (e) hybrid, (f) plane-fitting, (g) partial convolution, and (h) proposed.
Figure 13. Example of 3D scene completion of a full building from partial scans: (a) input point cloud and (b) point cloud result after orthographic projection and generative adversarial inpainting.
Figure 14. Example application scenario of using building facade completion to improve building element recognition results: (a) input point cloud, (b) window retrieval using the input point cloud, (c) output point cloud after generative adversarial inpainting, and (d) window retrieval using the output point cloud.
Figure 15. Example of a Poisson reconstruction limitation: (a) side of a building after reconstruction and (b) target ground truth.
Figure 16. Example of a hole-filling limitation: (a) input, (b) building facade after hole-filling, and (c) target ground truth.
Figure 17. Example of a Pix2Pix limitation: (a) input, (b) building facade after applying Pix2Pix, and (c) target ground truth.
Figure 18. Visual comparison between (a) natural ground truth and (b) manual ground truth for the Pettit building scene.
Figure 19. Point cloud scene completion results using the point completion network (PCN): (a) input point cloud, (b) output point cloud, and (c) output point cloud after upsampling.
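The orthographic projection step described in the captions above (converting a facade point cloud into a depth image plus an RGB image) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the facade has already been rotated so that x is horizontal, z is vertical, and y is the depth axis, and that the nearest point along each pixel's ray should win.

```python
import numpy as np

def project_to_depth_image(points, colors, pixel_size=0.05):
    """Orthographically project a facade-aligned point cloud onto an image
    grid, keeping the nearest point (smallest depth) at each pixel.

    points: (N, 3) array with x horizontal, z vertical, y as depth (assumed)
    colors: (N, 3) array of RGB values in [0, 1]
    """
    xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
    cols = ((xs - xs.min()) / pixel_size).astype(int)   # image column from x
    rows = ((zs.max() - zs) / pixel_size).astype(int)   # row 0 at the top
    h, w = rows.max() + 1, cols.max() + 1
    depth = np.full((h, w), np.nan)                     # NaN marks empty pixels
    color = np.zeros((h, w, 3))
    for i in np.argsort(-ys):                           # far-to-near: nearest wins
        depth[rows[i], cols[i]] = ys[i]
        color[rows[i], cols[i]] = colors[i]
    return depth, color
```

Empty (NaN) pixels correspond to occluded or missing regions, which are exactly what the 2D inpainting network is asked to fill.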
Abstract
1. Introduction
2. Literature Review
2.1. Geometry-Based 3D Scene Completion
2.2. Scene Completion with 2D Image Inpainting
2.3. Data-Driven 3D Scene Completion
3. Methodology
3.1. Point Cloud Data Preprocessing
3.2. Scene Completion with Orthographic Projection and Generative Adversarial Inpainting
3.2.1. Point Cloud to Image Conversion Using an Orthographic Projection
3.2.2. Fully Automated 2D Inpainting with Generative Adversarial Networks (GANs)
3.2.3. Image to Point Cloud Conversion Using Depth Remapping
4. Results
4.1. Composition of the Dataset
4.2. Scene Completion Baseline Algorithms
4.2.1. Baseline I: Poisson Reconstruction (Meshing)
4.2.2. Baseline II: Hole Filling
4.2.3. Baseline III: Plane Detection and Fitting
4.2.4. Baseline IV: Masked 2D Inpainting with Partial Convolutions
4.3. Scene Completion Performance Evaluation
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Chen, J.; Kira, Z.; Cho, Y.K. Deep Learning Approach to Point Cloud Scene Understanding for Automated Scan to 3D Reconstruction. J. Comput. Civ. Eng. 2019, 33, 04019027. [Google Scholar] [CrossRef]
- Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337. [Google Scholar] [CrossRef] [Green Version]
- Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef] [Green Version]
- Wang, C.; Cho, Y.K.; Kim, C. Automatic BIM component extraction from point clouds of existing buildings for sustainability applications. Autom. Constr. 2015, 56, 1–13. [Google Scholar] [CrossRef]
- Zeng, S.; Chen, J.; Cho, Y.K. User exemplar-based building element retrieval from raw point clouds using deep point-level features. Autom. Constr. 2020, 114, 103159. [Google Scholar] [CrossRef]
- Chen, J.; Fang, Y.; Cho, Y.K. Performance evaluation of 3D descriptors for object recognition in construction applications. Autom. Constr. 2018, 86, 44–52. [Google Scholar] [CrossRef]
- Yuan, W.; Khot, T.; Held, D.; Mertz, C.; Hebert, M. PCN: Point Completion Network. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 728–737. [Google Scholar]
- Dai, A.; Ritchie, D.; Bokeloh, M.; Reed, S.; Sturm, J.; Nießner, M. ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4578–4587. [Google Scholar]
- Dai, A.; Diller, C.; Nießner, M. SG-NN: Sparse Generative Neural Networks for Self-Supervised Scene Completion of RGB-D Scans. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020; pp. 846–855. [Google Scholar]
- Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Sardinia, Italy, 26–28 June 2006; Eurographics Association: Goslar, Germany, 2006; pp. 61–70. [Google Scholar]
- Kawai, N.; Zakhor, A.; Sato, T.; Yokoya, N. Surface completion of shape and texture based on energy minimization. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 897–900. [Google Scholar]
- Song, S.; Yu, F.; Zeng, A.; Chang, A.X.; Savva, M.; Funkhouser, T. Semantic Scene Completion from a Single Depth Image. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 190–198. [Google Scholar]
- Hu, W.; Fu, Z.; Guo, Z. Local Frequency Interpretation and Non-Local Self-Similarity on Graph for Point Cloud Inpainting. IEEE Trans. Image Process. 2019, 28, 4087–4100. [Google Scholar] [CrossRef] [Green Version]
- Fu, Z.; Hu, W.; Guo, Z. 3D Dynamic Point Cloud Inpainting via Temporal Consistency on Graphs. arXiv 2019, arXiv:1904.10795. [Google Scholar]
- Adán, A.; Huber, D. 3D Reconstruction of Interior Wall Surfaces under Occlusion and Clutter. In Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Hangzhou, China, 16–19 May 2011; pp. 275–281. [Google Scholar]
- Frías, J.B.; Díaz-Vilariño, L.; Arias, P.; Lorenzo, H. Point clouds for direct pedestrian pathfinding in urban environments. ISPRS J. Photogramm. Remote Sens. 2019, 148, 184–196. [Google Scholar] [CrossRef]
- Balado, J.; Díaz-Vilariño, L.; Arias, P.; Frías, E. Point Clouds to Direct Indoor Pedestrian Pathfinding. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 753–759. [Google Scholar] [CrossRef] [Green Version]
- Tang, P.; Huber, D.; Akinci, B.; Lipman, R.R.; Lytle, A.M. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843. [Google Scholar] [CrossRef]
- Angelini, M.G.; Baiocchi, V.; Costantino, D.; Garzia, F. Scan to BIM for 3D Reconstruction of the Papal Basilica of Saint Francis in Assisi in Italy. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 47–54. [Google Scholar] [CrossRef] [Green Version]
- Nan, L.; Xie, K.; Sharf, A. A search-classify approach for cluttered indoor scene understanding. ACM Trans. Graph. 2012, 31, 1–10. [Google Scholar] [CrossRef]
- Kim, Y.M.; Mitra, N.J.; Yan, D.-M.; Guibas, L. Acquiring 3D indoor environments with variability and repetition. ACM Trans. Graph. 2012, 31, 1–11. [Google Scholar] [CrossRef] [Green Version]
- Pauly, M.; Mitra, N.J.; Giesen, J.; Gross, M.; Guibas, L.J. Example-Based 3D Scan Completion. In Proceedings of the Third Eurographics Symposium on Geometry Processing, Vienna, Austria, 4–6 July 2005; Eurographics Association: Goslar, Germany, 2005; pp. 23–32. [Google Scholar]
- Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the ACM SIGGRAPH Conference on Computer Graphics, New Orleans, LA, USA, 23–28 July 2000. [Google Scholar] [CrossRef]
- Bertalmío, M.; Vese, L.; Sapiro, G.; Osher, S. Simultaneous structure and texture image inpainting. IEEE Trans. Image Process. 2003, 12, 882–889. [Google Scholar] [CrossRef]
- Ballester, C.; Bertalmío, M.; Caselles, V.; Sapiro, G.; Verdera, J. Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans. Image Process. 2001, 10, 1200–1211. [Google Scholar] [CrossRef] [Green Version]
- Patil, B.H.; Patil, P.M. Image Inpainting Based on Image Mapping and Object Removal Using Semi-Automatic Method. In Proceedings of the 2018 International Conference On Advances in Communication and Computing Technology (ICACCT), Sangamner, India, 8–9 February 2018; pp. 368–371. [Google Scholar]
- Efros, A.A.; Leung, T.K. Texture synthesis by non-parametric sampling. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999. [Google Scholar] [CrossRef] [Green Version]
- Criminisi, A.; Perez, P.; Toyama, K. Region Filling and Object Removal by Exemplar-Based Image Inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212. [Google Scholar] [CrossRef]
- Sun, J.; Yuan, L.; Jia, J.; Shum, H.-Y. Image completion with structure propagation. ACM Trans. Graph. 2005, 24, 861–868. [Google Scholar] [CrossRef]
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999. [Google Scholar] [CrossRef]
- Hertzmann, A.; Jacobs, C.E.; Oliver, N.; Curless, B.; Salesin, D.H. Image analogies. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 23–28 August 2001. [Google Scholar] [CrossRef]
- Wei, L.-Y.; Levoy, M. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the ACM SIGGRAPH Conference on Computer Graphics, New Orleans, LA, USA, 23–28 July 2000. [Google Scholar] [CrossRef] [Green Version]
- Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch. ACM Trans. Graph. 2009, 28, 1–11. [Google Scholar] [CrossRef]
- Hays, J.; Efros, A.A. Scene completion using millions of photographs. ACM Trans. Graph. 2007, 26, 4. [Google Scholar] [CrossRef]
- Ren, J.S.J.; Yan, Q.; Xu, L.; Sun, W. Shepard convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
- Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context Encoders: Feature Learning by Inpainting. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef] [Green Version]
- Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. 2017, 36, 1–14. [Google Scholar] [CrossRef]
- Yang, C.; Lu, X.; Lin, Z.; Shechtman, E.; Wang, O.; Li, H. High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4076–4084. [Google Scholar]
- Song, Y.; Yang, C.; Lin, Z.; Liu, X.; Huang, Q.; Li, H.; Kuo, C.-C.J. Contextual-Based Image Inpainting: Infer, Match, and Translate. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar] [CrossRef] [Green Version]
- Yan, Z.; Li, X.; Li, M.; Zuo, W.; Shan, S. Shift-Net: Image Inpainting via Deep Feature Rearrangement. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar] [CrossRef] [Green Version]
- Zeng, Y.; Fu, J.; Chao, H.; Guo, B. Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 1486–1494. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
- Salman, R.B.M.; Paunwala, C.N. Semi automatic image inpainting using partial JSEG segmentation. In Proceedings of the 2017 International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2017; pp. 1–6. [Google Scholar]
- Huang, J.-B.; Kang, S.B.; Ahuja, N.; Kopf, J. Image completion using planar structure guidance. ACM Trans. Graph. 2014, 33, 1–10. [Google Scholar] [CrossRef]
- Liu, J.; Liu, H.; Qiao, S.; Yue, G. An Automatic Image Inpainting Algorithm Based on FCM. Sci. World J. 2014, 2014, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Padalkar, M.G.; Zaveri, M.A.; Joshi, M.V. SVD Based Automatic Detection of Target Regions for Image Inpainting. In Computer Vision—ACCV 2012 Workshops; Park, J.-I., Kim, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 61–71. [Google Scholar]
- Wang, W.; Huang, Q.; You, S.; Yang, C.; Neumann, U. Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2317–2325. [Google Scholar]
- Yu, L.; Li, X.; Fu, C.-W.; Cohen-Or, D.; Heng, P. PU-Net: Point Cloud Upsampling Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2790–2799. [Google Scholar]
- Dai, A.; Qi, C.R.; Nießner, M. Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6545–6554. [Google Scholar]
- Zhang, Y.; Liu, Z.; Li, X.; Zang, Y. Data-Driven Point Cloud Objects Completion. Sensors 2019, 19, 1514. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Han, X.; Cui, S.; Zhang, Z.; Du, D.; Yang, M.; Yu, J.; Pan, P.; Yang, X.; Liu, L.; Xiong, Z. Deep Reinforcement Learning of Volume-Guided Progressive View Inpainting for 3D Point Scene Completion From a Single Depth Image. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 234–243. [Google Scholar]
- Tchapmi, L.P.; Kosaraju, V.; Rezatofighi, H.; Reid, I.; Savarese, S. TopNet: Structural Point Cloud Decoder. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 383–392. [Google Scholar]
- Yang, Y.; Feng, C.; Shen, Y.; Tian, D. FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 206–215. [Google Scholar]
- Qi, C.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef]
- Groueix, T.; Fisher, M.; Kim, V.G.; Russell, B.; Aubry, M. AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- CloudCompare (Version 2.10), GPL Software. 2020. Available online: http://www.cloudcompare.org/ (accessed on 2 September 2020).
- Wei, J. Point Cloud Orthographic Projection with Multiviews. 2019. Available online: https://github.com/jiangwei221/point-cloud-orthographic-projection (accessed on 11 May 2020).
- Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. In Proceedings of the VISAPP International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, 5–8 February 2009; pp. 331–340. [Google Scholar]
- Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2432–2443. [Google Scholar]
- Geodan, Generate Synthetic Points to Fill Holes in Point Clouds. 2020. Available online: https://github.com/Geodan/fill-holes-pointcloud (accessed on 11 May 2020).
- Fischler, M.; Bolles, R. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Liu, G.; Reda, F.A.; Shih, K.J.; Wang, T.-C.; Tao, A.; Catanzaro, B. Image Inpainting for Irregular Holes Using Partial Convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 89–105. [Google Scholar]
- NVIDIA, Image Inpainting Demo. 2020. Available online: https://www.nvidia.com/research/inpainting/ (accessed on 11 May 2020).
- Liu, S.; Hu, Y.; Zeng, Y.; Tang, Q.; Jin, B.; Han, Y.; Li, X. See and Think: Disentangling Semantic Scene Completion. In Advances in Neural Information Processing Systems 31; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2018; pp. 263–274. Available online: http://papers.nips.cc/paper/7310-see-and-think-disentangling-semantic-scene-completion.pdf (accessed on 2 September 2020).
- Chen, J. Point Cloud Scene Completion Baselines. 2020. Available online: https://github.com/jingdao/completion3d (accessed on 2 September 2020).
- Kim, P.; Chen, J.; Cho, Y.K. Automated Point Cloud Registration Using Visual and Planar Features for Construction Environments. J. Comput. Civ. Eng. 2018, 32, 04017076. [Google Scholar] [CrossRef]
- Chen, J.; Cho, Y.K.; Kim, K. Region Proposal Mechanism for Building Element Recognition for Advanced Scan-to-BIM Process. In Proceedings of the Construction Research Congress 2018, New Orleans, LA, USA, 2–4 April 2018; pp. 148–157. [Google Scholar] [CrossRef]
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
Poisson reconstruction | 56.0 | 56.2 | 55.9 | 0.017 | 0.146 |
Hole-filling | 77.3 | 54.8 | 63.3 | 0.015 | 0.137 |
Hybrid | 48.7 | 54.9 | 51.1 | 0.018 | 0.152 |
Plane-fitting | 14.0 | 47.3 | 21.0 | 0.018 | 0.189 |
Partial convolution | 87.7 | 59.1 | 68.4 | 0.015 | 0.148 |
Proposed | 78.1 | 63.1 | 69.2 | 0.015 | 0.145 |
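The voxel precision, recall, and F1 scores reported above can be read as an occupancy comparison between the completed cloud and the ground truth. The sketch below is one plausible reading of that metric (a hypothetical implementation, not the paper's exact evaluation code): both clouds are quantized to a voxel grid, precision is the fraction of occupied predicted voxels also occupied in the ground truth, and recall is the converse.

```python
import numpy as np

def voxel_scores(predicted, ground_truth, voxel_size=0.05):
    """Voxelize two (N, 3) point clouds and score predicted occupancy
    against ground-truth occupancy."""
    def occupied(points):
        # set of integer voxel coordinates containing at least one point
        return set(map(tuple, np.floor(points / voxel_size).astype(int)))
    pred, gt = occupied(predicted), occupied(ground_truth)
    tp = len(pred & gt)                          # voxels occupied in both
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```

Under this reading, a method that over-fills the scene (e.g., plane-fitting) loses precision, while one that fills holes conservatively loses recall.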
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
Original | 75.1 | 63.7 | 68.5 | 0.016 | 0.158 |
+ feature augmentation | 76.4 | 63.8 | 69.0 | 0.016 | 0.151 |
+ feature augmentation + color smoothing | 78.1 | 63.1 | 69.2 | 0.015 | 0.145 |
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
Ignore aspect ratio | 87.5 | 68.8 | 76.0 | 0.015 | 0.141 |
Preserve aspect ratio | 87.9 | 69.5 | 76.4 | 0.015 | 0.146 |
Method | Voxel Precision (%) | Voxel Recall (%) | Color RMSE |
---|---|---|---|
1-Nearest Neighbor | 87.5 | 68.8 | 0.141 |
2-Nearest Neighbor | 82.4 | 63.2 | 0.174 |
10-Nearest Neighbor | 82.2 | 62.9 | 0.174 |
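The nearest-neighbor comparison above concerns how colors are remapped from the original cloud to the reconstructed points. A brute-force sketch of that step is shown below (a simplified stand-in; the paper cites a FLANN-based search, and the function and parameter names here are illustrative only):

```python
import numpy as np

def transfer_colors(new_points, source_points, source_colors, k=1):
    """Color each reconstructed point by averaging the colors of its k
    nearest neighbors in the original colored cloud (brute-force search)."""
    # squared distances between every new point and every source point
    d2 = ((new_points[:, None, :] - source_points[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d2, axis=1)[:, :k]   # indices of the k closest sources
    return source_colors[nearest].mean(axis=1)
```

With k = 1 each point simply copies its single nearest neighbor's color; larger k averages across neighbors and blurs color detail, which is consistent with the higher color RMSE reported for 2 and 10 neighbors.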
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
Voxel Resolution of 0.01 | |||||
Poisson reconstruction | 15.2 | 5 | 6.4 | 0.0036 | 0.124 |
Hole-filling | 43.2 | 41.3 | 42.8 | 0.0009 | 0.047 |
Hybrid | 11.9 | 4.2 | 6.0 | 0.0037 | 0.133 |
Plane-fitting | 6.0 | 4.0 | 11.9 | 0.0017 | 0.088 |
Partial convolution | 49.6 | 41.8 | 43.3 | 0.0034 | 0.096 |
Proposed | 58.4 | 40.2 | 46.5 | 0.0011 | 0.049 |
Voxel Resolution of 0.05 | |||||
Poisson reconstruction | 56.0 | 56.2 | 55.9 | 0.017 | 0.146 |
Hole-filling | 77.3 | 54.8 | 63.3 | 0.015 | 0.137 |
Hybrid | 48.7 | 54.9 | 51.1 | 0.018 | 0.152 |
Plane-fitting | 14.0 | 47.3 | 21.0 | 0.018 | 0.189 |
Partial convolution | 87.7 | 59.1 | 68.4 | 0.015 | 0.148 |
Proposed | 78.1 | 63.1 | 69.2 | 0.015 | 0.145 |
Voxel Resolution of 0.2 | |||||
Poisson reconstruction | 69.7 | 72.1 | 70.5 | 0.069 | 0.172 |
Hole-filling | 85.3 | 72.7 | 78.0 | 0.064 | 0.179 |
Hybrid | 65.9 | 74.9 | 69.4 | 0.070 | 0.181 |
Plane-fitting | 41.2 | 88.8 | 55.2 | 0.076 | 0.216 |
Partial convolution | 91.9 | 78.2 | 77.6 | 0.064 | 0.178 |
Proposed | 82.3 | 79.6 | 80.6 | 0.066 | 0.221 |
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
Natural ground truth | |||||
Poisson reconstruction | 52.9 | 58.7 | 55.6 | 0.017 | 0.105 |
Hole-filling | 96.3 | 69.2 | 80.5 | 0.015 | 0.104 |
Hybrid | 47.5 | 58.6 | 52.5 | 0.017 | 0.105 |
Plane-fitting | 6.0 | 33.7 | 10.2 | 0.019 | 0.154 |
Partial convolution | 98.4 | 69.5 | 81.5 | 0.015 | 0.104 |
Proposed | 93.8 | 70.5 | 80.5 | 0.015 | 0.110 |
Manual ground truth | |||||
Poisson reconstruction | 57.5 | 40.4 | 47.5 | 0.018 | 0.125 |
Hole-filling | 96.6 | 44.0 | 60.5 | 0.015 | 0.117 |
Hybrid | 52.1 | 40.7 | 45.7 | 0.018 | 0.126 |
Plane-fitting | 8.8 | 31.6 | 13.8 | 0.020 | 0.150 |
Partial convolution | 98.6 | 44.2 | 61.0 | 0.015 | 0.117 |
Proposed | 95.4 | 45.4 | 61.6 | 0.016 | 0.130 |
Method | Computation Time (s) | Details
---|---|---|
Poisson reconstruction | 106 | -
Hole-filling | 26 | -
Hybrid | 184 | -
Plane-fitting | 13 | -
Partial convolution | 600 | Includes time to perform orthographic projection and manual masking
Proposed | 60 | Includes time to perform orthographic projection
Method | Voxel Precision (%) | Voxel Recall (%) | F1-Score (%) | Position RMSE (m) | Color RMSE |
---|---|---|---|---|---|
PCN | 5.5 | 0.4 | 0.7 | 0.021 | 0.195 |
PCN (upsampled) | 2.8 | 4.3 | 3.4 | 0.021 | 0.205 |
TopNet | 4.9 | 0.2 | 0.4 | 0.020 | 0.255 |
TopNet (upsampled) | 2.8 | 5.0 | 3.5 | 0.021 | 0.207 |
FoldingNet | 3.6 | 0.2 | 0.4 | 0.021 | 0.196 |
FoldingNet (upsampled) | 3.3 | 3.9 | 3.5 | 0.021 | 0.202 |
Proposed | 78.1 | 63.1 | 69.2 | 0.015 | 0.145 |
Method | Reconstruct Color Information | Works Directly with Point Cloud Data | Works for Curved Surfaces | Can Reconstruct Arbitrary Classes of Objects |
---|---|---|---|---|
Pauly et al. [22] | No | Yes | Yes | Requires templates |
ScanComplete [9] | No | No | Yes | Yes |
SSCNet [10] | No | No | Yes | Yes |
Adan et al. [13] | No | Yes | No | Restricted to walls, floors, ceilings |
Proposed | Yes | Yes | Yes | Yes |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, J.; Yi, J.S.K.; Kahoush, M.; Cho, E.S.; Cho, Y.K. Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting. Sensors 2020, 20, 5029. https://doi.org/10.3390/s20185029