Towards a Fully Automated 3D Reconstruction System Based on LiDAR and GNSS in Challenging Scenarios
Figure 1. The fully automated 3D mapping system for self-driving vehicles in challenging scenarios. Two platforms equipped with two types of multi-channel LiDARs are shown on the left. The testing scenarios include busy urban environments with high bridges (a–c), underground parking lots (d), open scenarios (e), and featureless off-road scenarios (f). Maps are colored by altitude.

Figure 2. Overview of the proposed mapping system. It combines navigation information from the GNSS, the wheel odometry, the IMU, and the LiDAR odometry. Degeneration analysis is performed for all of these navigation sources. A pose graph is then constructed to calculate the optimal poses, and the maps are assembled based on the optimized poses.

Figure 3. The LiDAR data preprocessing includes three parts: keyframe selection (a), intra-frame compensation (b), and noise removal (c,d). The noise refers either to moving objects in the urban environment or to floating dust in the off-road environment.

Figure 4. Overview of the pose graph. There are three main types of factors: global constraints, local constraints, and loop closure constraints.

Figure 5. An illustrative example of the map extension. The base map that has already been built is denoted as M1; M2 represents the extended map. A and B represent two loop closure constraints that connect M1 and M2.

Figure 6. Two samples for evaluating the mean map entropy. Sample 1 includes traffic signs and light poles; Sample 2 is mainly composed of trees, bushes, and a sculpture.

Figure 7. The resulting map entropy against the standard deviation of the added Gaussian noise.

Figure 8. Comparison of the mapping results and the MME values in four scenes with (top) and without (bottom) the removal of dynamic vehicles.

Figure 9. The mapping results and the MME values with and without the removal of floating dust.

Figure 10. The mapping results overlaid on the satellite image. Four possible degenerate scenarios are shown at the bottom.

Figure 11. Evaluation of the GNSS/INS status. Frames with a well-conditioned GNSS/INS status are highlighted in green.

Figure 12. Degeneracy indicators of the scan matching constraints. In (a–d), the non-degenerate cases are shown in green; frames with a lower D or a higher E are more prone to degeneration (shown in pink). The translation components of Σ′_δ of the scan-to-submap matching constraints are shown in (e).

Figure 13. Comparison results of the degeneration indicators. Five groups with a total of 1000 keyframes are tested, and Gaussian noise is added to generate the degraded cases (marked in pink). The translation errors of the scan-to-submap matching constraints are shown in red; the degeneration indicators E and D are shown in green and blue, respectively.

Figure 14. Boxplots of the registration error of the loop closure constraints. Each box spans the first to the third quartile. (a,b) show the results of group one with small offsets; (c,d) show the results of group two with large offsets.

Figure 15. Proportion of the correct loop closures obtained by the three approaches. (a) shows the results of group one with small offsets, and (b) shows the results of group two with large offsets.

Figure 16. Two typical loop closure matching results. The target scan P is shown in white, the original source scan Q in green, and the transformed source scan Q′ in red. The registration errors are presented, and some comparable details are marked by ellipses.

Figure 17. ROC curves for the degeneracy indicators E and D. The evaluation uses a total of 4230 pairs of loop closures: 2031 positive (matching) pairs and 2199 negative (non-matching) pairs.

Figure 18. Mapping results in challenging city scenarios. The map is overlaid on the satellite image, and four close views labeled 1–4 are shown on the right. Sub-figures 1–3 show the reflectance intensity map; the 3D map of an underground parking lot is shown in sub-figure 4.

Figure 19. Mapping results in off-road scenarios. Data were collected along two routes. The complete map overlaid on the satellite image is shown at the top; three zoomed-in views labeled 1–3 are shown at the bottom, where the grey value encodes the height information.

Figure 20. Mapping results at a higher driving speed. Three zoomed-in views of the corresponding 2D grayscale maps (rendered by the reflectance intensity values) are shown at the bottom of (a); the driving speed is shown in (b).

Figure 21. Mapping results in large-scale settings. The total driving length is 56.3 km, and the maximum elevation change is 37.2 m. The constructed 3D map overlaid on the satellite image is shown in (a), with three enlarged views on the right. The driving speed is shown in (b), with a maximum of 88.92 km/h.
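The map-quality metric used in the entropy figures above is the mean map entropy (MME): the average differential entropy of a Gaussian fitted to each point's local neighbourhood, where a lower value indicates a crisper, better-registered map. The following is an illustrative sketch only, not the authors' implementation; the search radius and minimum-neighbour threshold are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, radius=0.5):
    """Mean Map Entropy: average differential entropy of the local
    Gaussian fitted to each point's neighbourhood (lower = crisper)."""
    tree = cKDTree(points)
    entropies = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:  # need enough neighbours for a stable covariance
            continue
        cov = np.cov(points[idx].T)
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0:
            entropies.append(0.5 * np.log(det))
    return float(np.mean(entropies))

# Synthetic check: a blurred (mis-registered) map should score higher.
rng = np.random.default_rng(0)
base = rng.uniform(0, 5, size=(800, 3)); base[:, 2] = 0.0  # planar patch
crisp = base + rng.normal(scale=0.01, size=base.shape)
blurred = base + rng.normal(scale=0.10, size=base.shape)
print(mean_map_entropy(crisp) < mean_map_entropy(blurred))
```

The synthetic example mirrors the experiment in the entropy figures: Gaussian noise of increasing standard deviation is added to a clean cloud, and the resulting MME grows with the noise level.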
Abstract
1. Introduction
- A factor graph-based fusion framework is proposed that integrates the global navigation information with the local navigation information in a probabilistic way.
- A comprehensive degeneration analysis is performed for both the global and the local navigation approaches. A new robust degeneration indicator is proposed for the local navigation approach that reliably estimates the degeneration state of the scan matching algorithm. The degeneration state is then incorporated into the factor graph, enabling a more robust, degeneration-aware fusion approach.
- An improved submap-to-submap matching method is used to estimate loop closure constraints. The loop closure constraints can be reliably estimated even under a large initial position offset or a limited overlapping field of view.
- The proposed mapping system has been extensively tested on real-world datasets in several challenging scenarios, including busy urban scenarios, featureless off-road scenarios, high bridges, highways, and large-scale settings. Experimental results confirm the effectiveness of the mapping system.
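The degeneration-aware fusion idea in the contributions above can be sketched with the eigenvalue-based analysis of Zhang et al. (ICRA 2016): the smallest eigenvalue of the scan-matching Hessian flags ill-constrained directions, and a degenerate factor's covariance is inflated so the pose graph leans on the other navigation sources. The function names, threshold, and inflation value below are illustrative assumptions, not the paper's actual formulation of the indicators E and D.

```python
import numpy as np

def degeneracy_indicator(J):
    """Smallest eigenvalue of the approximate Hessian J^T J of a
    scan-matching problem with stacked Jacobian J (N x 6). Small
    values mean some pose direction is poorly constrained."""
    eigvals = np.linalg.eigvalsh(J.T @ J)  # ascending order
    return eigvals[0]

def factor_covariance(J, base_cov, d_min, inflate=1e6):
    """Degeneration-aware covariance: below the threshold, inflate the
    scan-matching factor so other sources dominate the pose graph."""
    return base_cov * (inflate if degeneracy_indicator(J) < d_min else 1.0)

# Well-constrained problem: residuals constrain all 6 DoF.
rng = np.random.default_rng(1)
J_good = rng.normal(size=(100, 6))
# Degenerate problem: one column ~ zero, e.g. a featureless corridor
# leaving translation along the corridor unobserved.
J_bad = J_good.copy(); J_bad[:, 0] *= 1e-4
print(degeneracy_indicator(J_good) > degeneracy_indicator(J_bad))
```

In the full system, the inflated (or well-conditioned) covariance would weight the corresponding scan-matching factor in the pose graph rather than rejecting the constraint outright.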
2. Related Work
2.1. Scan Matching
2.2. Loop Closure Detection
2.3. Robust Mapping
3. The Proposed Approach
3.1. System Overview
3.2. Pose Graph Optimization
3.3. Scan Matching Factor
3.4. GNSS/INS Factor
3.5. Map Extension
4. Experimental Results
4.1. Map Quality Assessment
4.2. Results of Noise Removal
4.3. Analysis of Degeneracy-Aware Factors
4.4. Analysis of Loop Closures
4.5. Mapping Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Setting | Translation Noise (m) | Rotation Noise (deg) |
|---|---|---|
| Level 1 | 0.5, 0.5, 0.5 | 3.0, 1.5, 1.5 |
| Level 2 | 1.0, 1.0, 0.5 | 5.0, 2.5, 2.5 |
| Level 3 | 2.0, 2.0, 1.0 | 5.0, 2.5, 2.5 |
| Level 4 | 3.0, 3.0, 1.5 | 5.0, 2.5, 2.5 |
| Level 5 | 3.0, 3.0, 1.5 | 10.0, 5.0, 5.0 |
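Reading the table above as per-level standard deviations, the degraded initial guesses for the robustness experiments could be sampled as below. The mapping of the three translation columns to x/y/z and the three rotation columns to roll/pitch/yaw is an assumption, as is the function name.

```python
import numpy as np

# Noise levels from the table above, interpreted (an assumption) as
# per-axis standard deviations: (x, y, z) in metres and
# (roll, pitch, yaw) in degrees.
NOISE_LEVELS = {
    1: (0.5, 0.5, 0.5, 3.0, 1.5, 1.5),
    2: (1.0, 1.0, 0.5, 5.0, 2.5, 2.5),
    3: (2.0, 2.0, 1.0, 5.0, 2.5, 2.5),
    4: (3.0, 3.0, 1.5, 5.0, 2.5, 2.5),
    5: (3.0, 3.0, 1.5, 10.0, 5.0, 5.0),
}

def sample_pose_offset(level, rng):
    """Draw one Gaussian pose perturbation (translation in metres,
    roll/pitch/yaw in radians) used to degrade the initial guess
    given to the loop-closure registration."""
    sx, sy, sz, sr, sp, syaw = NOISE_LEVELS[level]
    t = rng.normal(0.0, [sx, sy, sz])
    rpy = np.deg2rad(rng.normal(0.0, [sr, sp, syaw]))
    return t, rpy

rng = np.random.default_rng(42)
t, rpy = sample_pose_offset(5, rng)
print(t.shape, rpy.shape)  # (3,) (3,)
```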
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ren, R.; Fu, H.; Xue, H.; Sun, Z.; Ding, K.; Wang, P. Towards a Fully Automated 3D Reconstruction System Based on LiDAR and GNSS in Challenging Scenarios. Remote Sens. 2021, 13, 1981. https://doi.org/10.3390/rs13101981