Indoor and Outdoor Backpack Mapping with Calibrated Pair of Velodyne LiDARs
Figure 1. The motivation and the results of our work. The reconstruction of indoor environments (a) is beneficial for inspection, inventory checking and automatic floor plan generation. 3D maps of forest environments (b) are useful for quick and precise estimation of the amount of biomass (timber). Another example of 3D LiDAR mapping deployment is the preservation of cultural heritage or providing models of historical buildings, e.g., the roof in (c).
Figure 2. Examples of the resulting models of indoor mapping. The office environment (a) and the staircase (b) were captured by a human carrying our 4RECON backpack. The data acquisition took 3 and 2 min, respectively.
Figure 3. “Double walls” error in the reconstruction of Zebedee [5]. The wall and the ceiling appear twice in the reconstruction, causing an ambiguity. In the solution without loop closure (a), the error is clearly visible. Double walls are reduced after global loop closure (b), but they are still present (highlighted by yellow dashed lines).
Figure 4. Dataset of an indoor office environment for the evaluation of the ZEB-1 scanner [4]. In the experiment, an average corner-to-corner distance error of 3.8 cm within the rooms was achieved.
Figure 5. The dependency of laser intensity readings (weak readings in red, strong in green) on the measurement range (a) and the angle of incidence (b) [37].
Figure 6. Various configurations of LiDAR scanners in the worst-case scenarios we have encountered in our experiments: narrow corridor (a,c) and staircase (b). The field of view (30° for the Velodyne Puck) is displayed in color. When only a single LiDAR (a) was used, the scans did not contain 3D information about the floor or the ceiling (red cross). Tilting the scanner did not improve the situation because it fails in, e.g., staircases (b). When we added a second LiDAR, our tilted asymmetrical configuration (d) provided better top–bottom and left–right observation than the symmetrical one (c). Moreover, when the LiDARs are aligned in the direction of movement (e), there is no overlap between the current (violet) and future (yellow) frame, leading to lower accuracy. In our solution (f), the LiDARs are aligned perpendicularly to the walking direction, solving all the mentioned issues.
Figure 7. The initial (a) and improved (b,c) prototype of our backpack mapping solution for both indoor (b) and outdoor (c) use. The removable dual GNSS antenna provides precise heading information, aiding outdoor odometry estimation as well as georeferencing of the resulting 3D point cloud model. Note that the position of the LiDAR scanners differs between the initial and the later solution; this is elaborated on in the next section.
Figure 8. Components of the system and their connections. Each Velodyne scanner is connected via a custom wiring “box” requiring a power supply (red wires), 1PPS and NMEA synchronization (green), and a Fast Ethernet (blue) connection to the computer (a NUC PC in our case).
Figure 9. Extrinsic calibration required in our system. The mutual positions between the Velodyne scanners and the GNSS/INS unit are computed. The offsets $\vec{o}_{A1}, \vec{o}_{A2}$ of the antennas are tape-measured.
Figure 10. Two Velodyne LiDAR frames aligned into a single multiframe. This data association requires time synchronization and precise extrinsic calibration of the laser scanners.
Figure 11. The sampling of a Velodyne point cloud by Collar Line Segments (CLS) (a). The segments (purple) are randomly generated within the polar bin (blue polygon) of azimuthal resolution $\phi$. The registration process (b–e) transforms the line segments of the target point cloud (red lines) to fit the lines of the source cloud (blue). First, the lines are matched by the Euclidean distance of their midpoints (c); then, the segments are extended into infinite lines and the vectors between the closest points are found (d); finally, these vectors are used to estimate the transformation that fits the matching lines into common planes (green in (e)).
Figure 12. The overlap (a) between the source (blue) and the target (purple) LiDAR frame. In this case, approximately 30% of the source points are within the view volume of the target frame. The view volume can be effectively represented by a spherical z-buffer (b), where the range information (the minimum in this case) or the information regarding empty space within the spherical grid is stored.
Figure 13. The error of measurement (the Euclidean distance between points $p$ and $p^e$) can be split into a rotation part $e_r$ and a translation part $e_t$. The impact of the rotation error, $2 \cdot \tan(e_r/2)$, can be simplified to $\tan(e_r)$ due to the near-linear behavior of the tangent function for small angles.
Figure 14. Example of a LiDAR frame distorted by the rolling shutter effect when the operator with the mapping backpack was turning around (green), and the corrected frame (purple). This is a top view and the distortion is most visible on the “bent” green wall at the bottom of the picture.
Figure 15. Pose graph as the output of point cloud registration and the input of the SLAM optimization. The goal is to estimate the 6DoF poses $\mathbf{P}_1, \mathbf{P}_2, \ldots, \mathbf{P}_N$ of the graph nodes (vertices) $p_1, p_2, \ldots, p_{15}$ in the trajectory. The edges represent the transformations between the LiDAR frames of the given nodes estimated by point cloud registration. Black edges represent transformations between consecutive frames, blue edges represent transformations within a certain neighborhood (a maximum distance of three frames in this example), and the green edges (in (a)) represent visual loops of revisited places detected by a significant overlap between the given frames. When the GNSS subsystem is available (b), additional visual loops are introduced as transformations from the origin $\mathbf{O}$ of a local geodetic (orthogonal NED) coordinate frame.
Figure 16. Verification of the edge $(p_i, p_j)$ representing the transformation $\mathbf{T}_{ij}$ is performed by comparison with the transformation $\mathbf{T}_1 \cdot \mathbf{T}_2 \cdots \mathbf{T}_K$ of an alternative path (blue) between the $i$-th and $j$-th node.
Figure 17. The reconstruction built by our SLAM solution before (a) and after (b) the alignment of the horizontal planes (floor, ceiling, etc.) with the XY plane (blue circle).
Figure 18. The dependency of the laser return intensity on: the source beam (a); the range of the measurement (b); and the angle of incidence (c). We use two LiDAR scanners with 16 laser beams each, 32 beams in total.
Figure 19. Results of 3D reconstruction without (a,c,e,g,i) and with (b,d,f,h,j) the normalization of laser intensities. One can observe more consistent intensities for the solid-color ceiling (b), reducing the trajectory artifacts while preserving the contrast with the ceiling lights. Besides the consistency, the normalization of intensities reduces the noise (d). The most significant improvement is the visibility of important objects, e.g., markers on the electrical towers (f,h) or emergency exit doors (j) in the highway wall. None of these objects can be found in the original point clouds (e,g,i).
Figure 20. Experimental environments Office (a) and Staircase (b), and the highlighted slices that were used for the precision evaluation.
Figure 21. Distribution of the error $e_r$ (the amount of points within a certain error) for our 4RECON system and the ZEB-1 product. The experiments were performed for all test slices in Figure 20 on the Office (a) and Staircase (b) datasets. Note that for the Staircase dataset the model built by ZEB-1 was not available and therefore its evaluation is missing.
Figure 22. Color-coded errors within the horizontal reference slice of the Office dataset (a–d) and the vertical slice of the Staircase dataset (e–g). Blue represents zero error; red represents errors of 10 cm and higher. The ground truth FARO data are displayed in green. The results are provided for 4RECON-10 (a,e), 4RECON-overlap (b,f), 4RECON-verification (c,g), and ZEB-1 (d). For the Office dataset, there are no ambiguities (double walls) even without visual loop detection, while both loop closure and pose graph verification are necessary to discard such errors in the more challenging Staircase dataset. Moreover, one can observe that the ZEB-1 solution yields a lower-noise reconstruction thanks to the less noisy Hokuyo LiDAR.
Figure 23. Comparison of the data density provided by the ZEB-1 (a,c) and our (b,d) solution. Since the ZEB-1 solution is based on the Hokuyo scanner, the laser intensity readings are missing and the data density is much lower compared with our solution. Multiple objects that can be distinguished in our reconstruction (lamps on the ceiling in the top image, furniture and other equipment in the bottom image) are not visible in the ZEB-1 model.
Figure 24. Example of a 3D reconstruction of an open field with high-voltage electrical lines (a). The model is height-colored for better visibility. The estimation of the positions and heights of the lines (b), towers (e), etc. was the main goal of this mapping task. The other elements (c,d) in the scene are shown to demonstrate the reconstruction quality.
Figure 25. Example of ambiguities caused by reconstruction errors (a), which disqualify the model from being used for practical measurements. We obtained such results when we used only the poses provided by the GNSS/INS subsystem, without any refinement by SLAM or point cloud registration. Our solution (including SLAM) provides valid reconstructions (b), where both the towers and the wires (in this case) can be distinguished.
Figure 26. Geodetic survey markers painted on the road (a) are also visible in the point cloud (b) thanks to the coloring by laser intensities.
Figure 27. Comparison of the reconstructions provided by the dual LiDAR system, shown as the floor plan top view (a) and the side view of the corridor (b), with the reconstructions built using only a single horizontally (c,d) or vertically (e,f) positioned Velodyne LiDAR. The reconstructions are colored red, with the ground truth displayed in blue.
Abstract
1. Introduction
- It is capable of mapping both small indoor and large open outdoor environments, with georeferencing and sufficient precision on the order of centimeters. These abilities are evaluated using multiple datasets.
- It benefits from a synchronized and calibrated dual LiDAR scanner setup, which significantly increases the field of view. Both scanners are used for both odometry estimation and 3D model reconstruction, which enables scanning of small environments, narrow corridors, staircases, etc.
- It provides the ability to recognize objects in the map thanks to sufficient point density and our novel intensity normalization for measurements from an arbitrary range.
2. Related Work
3. Design of the Laser Mapping Backpack
- It fulfils the requirement for a model precision of up to 5 cm. Thanks to the robust loop closure, ambiguities (e.g., “double wall” effects) are avoided.
- The system is comfortable to use and as mobile as possible. The backpack weighs 9 kg (plus 1.4 kg for the optional dual antenna extension), and it is easy to carry through various environments including stairs, narrow corridors, rugged terrain, etc.
- The pair of synchronized and calibrated Velodyne LiDARs increases the field of view (FOV) and enables mapping of small rooms, narrow corridors, staircases, etc. (see Figure 6) without the need for special guidelines for the scanning process.
- The data acquisition process is fast and includes verification of data completeness. There are no special guidelines for the scanning process (compared to the requirements of ZEB), and the operator is only required to visit all places to be captured at a normal pace. Moreover, the captured data are visualized online on a mobile device (smartphone, tablet) so that the operator can see whether everything is captured correctly.
- Since we use long-range Velodyne LiDARs (compared to simple 2D rangefinders such as Hokuyo or SICK) and optional GNSS support, we provide a universal, economically convenient solution for both indoor and outdoor use. In scenarios where GNSS is available, the final reconstruction is georeferenced: a 3D position in the global geographical frame is assigned to every 3D point in the model.
- The final 3D model is dense and colored by the laser intensity, which is further normalized. This helps to distinguish important objects, inventory, larger texts, signs, and some surface texture properties.
3.1. Hardware Description
3.2. Dual LiDAR System
3.3. Calibration of the Sensors
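The extrinsic calibration (Figure 9) yields rigid transforms from each scanner into the common backpack frame, and these transforms are what merges the two synchronized scans into a single multiframe (Figure 10). Below is a minimal sketch of this fusion step; the helper names and the example calibration values are illustrative, not the actual calibration results.

```python
import numpy as np

def make_transform(rotation_deg_xyz, translation_xyz):
    """Build a 4x4 homogeneous transform from Euler angles (degrees) and a translation."""
    rx, ry, rz = np.deg2rad(rotation_deg_xyz)
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation_xyz
    return T

def to_multiframe(scan1, scan2, T_base_lidar1, T_base_lidar2):
    """Express two synchronized scans in the common backpack (base) frame and stack them."""
    def apply(T, pts):
        homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])
        return (homogeneous @ T.T)[:, :3]
    return np.vstack([apply(T_base_lidar1, scan1), apply(T_base_lidar2, scan2)])

# Illustrative calibration only -- the real transforms come from the calibration procedure.
T1 = make_transform((0.0, -45.0, 0.0), (0.0, 0.20, 0.50))    # upper, tilted scanner
T2 = make_transform((0.0, 45.0, 180.0), (0.0, -0.20, 0.30))  # lower, tilted scanner
scan1 = np.random.rand(1000, 3) * 10.0
scan2 = np.random.rand(1000, 3) * 10.0
multiframe = to_multiframe(scan1, scan2, T1, T2)
print(multiframe.shape)   # (2000, 3)
```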
3.4. Point Cloud Registration
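Figure 11 summarizes the Collar Line Segments (CLS) registration [23]: the sparse Velodyne measurements are resampled into random line segments inside polar bins, and the segments of two frames are matched by the Euclidean distance of their midpoints before the transformation is estimated. The sketch below covers only the sampling and matching steps, with simplifications (segments are drawn from random point pairs inside an azimuthal bin instead of between neighbouring laser rings; all bin counts and names are illustrative).

```python
import numpy as np

def polar_bin_indices(points, azimuth_bins=36):
    """Assign each point the index of its azimuthal polar bin (Figure 11a)."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])            # -pi .. pi
    return ((azimuth + np.pi) / (2.0 * np.pi) * azimuth_bins).astype(int) % azimuth_bins

def sample_line_segments(points, segments_per_bin=5, azimuth_bins=36, rng=None):
    """Randomly generate line segments inside each polar bin and return their midpoints."""
    rng = np.random.default_rng() if rng is None else rng
    bins = polar_bin_indices(points, azimuth_bins)
    midpoints = []
    for b in range(azimuth_bins):
        idx = np.where(bins == b)[0]
        if len(idx) < 2:
            continue
        for _ in range(segments_per_bin):
            i, j = rng.choice(idx, size=2, replace=False)
            midpoints.append((points[i] + points[j]) / 2.0)
    return np.array(midpoints)

def match_by_midpoints(source_midpoints, target_midpoints):
    """For each source segment, find the target segment with the closest midpoint (Figure 11c)."""
    dists = np.linalg.norm(source_midpoints[:, None, :] - target_midpoints[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# toy usage: match a cloud against a slightly shifted copy of itself
rng = np.random.default_rng(0)
source = rng.random((500, 3)) * 20.0 - 10.0
target = source + np.array([0.10, 0.05, 0.0])
matches = match_by_midpoints(sample_line_segments(source, rng=rng),
                             sample_line_segments(target, rng=rng))
print(matches[:10])
```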
3.5. Overlap Estimation
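The overlap of Figure 12 can be approximated by projecting the target frame into a spherical grid (its z-buffer) and counting how many source points, expressed in the target frame, fall into cells that the target actually observed. This is a rough sketch under assumed grid resolutions; the actual implementation also uses the stored minimum range to reason about occlusions and empty space.

```python
import numpy as np

def spherical_cells(points, az_bins=360, el_bins=90):
    """Map 3D points to (azimuth, elevation) cells of a spherical grid, keeping the minimum range."""
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])                              # -pi .. pi
    el = np.arcsin(np.clip(points[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))   # -pi/2 .. pi/2
    ai = ((az + np.pi) / (2.0 * np.pi) * az_bins).astype(int) % az_bins
    ei = np.clip(((el + np.pi / 2.0) / np.pi * el_bins).astype(int), 0, el_bins - 1)
    zbuffer = {}
    for a, e, rr in zip(ai, ei, r):
        key = (int(a), int(e))
        zbuffer[key] = min(zbuffer.get(key, np.inf), rr)
    return zbuffer

def overlap_ratio(source_in_target_frame, target_points, az_bins=360, el_bins=90):
    """Approximate fraction of source points lying inside the view volume of the target frame."""
    zbuffer = spherical_cells(target_points, az_bins, el_bins)
    pts = source_in_target_frame
    r = np.linalg.norm(pts, axis=1)
    az = np.arctan2(pts[:, 1], pts[:, 0])
    el = np.arcsin(np.clip(pts[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    ai = ((az + np.pi) / (2.0 * np.pi) * az_bins).astype(int) % az_bins
    ei = np.clip(((el + np.pi / 2.0) / np.pi * el_bins).astype(int), 0, el_bins - 1)
    hits = sum((int(a), int(e)) in zbuffer for a, e in zip(ai, ei))
    return hits / max(len(pts), 1)

# toy usage: a slightly shifted copy of the same cloud overlaps almost completely
cloud = np.random.rand(2000, 3) * 20.0 - 10.0
print(overlap_ratio(cloud + 0.05, cloud))
```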
3.6. Rolling Shutter Corrections
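The rolling shutter correction of Figure 14 re-projects every point of one revolution using a pose interpolated between the beginning and the end of the frame, with the rotational part interpolated by quaternion slerp [43]. A simplified per-point sketch follows; in practice the relative point times would be derived from the azimuth of each measurement, and all function and parameter names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def undistort_frame(points, point_times, pose_start, pose_end):
    """
    Correct the rolling shutter distortion of one LiDAR revolution.
    points      : (N, 3) array in the sensor frame
    point_times : (N,) relative capture times in [0, 1] within the revolution
    pose_start, pose_end : (R, t) pairs -- sensor pose at the start/end of the frame,
                           R as a scipy Rotation, t as a (3,) translation
    Returns the points expressed in the start-of-frame pose.
    """
    R0, t0 = pose_start
    R1, t1 = pose_end
    key_rotations = Rotation.from_quat(np.vstack([R0.as_quat(), R1.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rotations)
    corrected = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, point_times)):
        R_s = slerp([s])[0]                        # interpolated rotation at time s
        t_s = (1.0 - s) * t0 + s * t1              # linearly interpolated translation
        world = R_s.apply(p) + t_s                 # point in the world frame
        corrected[i] = R0.inv().apply(world - t0)  # re-express in the start-of-frame pose
    return corrected

# toy usage: the operator turns by 10 degrees (yaw) during one revolution
pts = np.random.rand(100, 3) * 5.0
times = np.linspace(0.0, 1.0, 100)
start = (Rotation.identity(), np.zeros(3))
end = (Rotation.from_euler("z", 10, degrees=True), np.array([0.3, 0.0, 0.0]))
print(undistort_frame(pts, times, start, end).shape)   # (100, 3)
```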
3.7. Pose Graph Construction and Optimization
Algorithm 1. Progressive refinement of 6DoF poses for a sequence of frames by optimizing the pose graph.
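The body of Algorithm 1 is not reproduced here, so the sketch below only shows how the pose graph of Figure 15 could be assembled: black edges between consecutive frames, blue edges within a small neighborhood, and green loop edges for frame pairs with a sufficient estimated overlap. The edge transforms are initialized from odometry; in the real pipeline they are refined by point cloud registration and the graph is optimized by a solver such as SLAM++ [44]. All names and thresholds are illustrative.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple
import numpy as np

@dataclass
class PoseGraph:
    poses: List[np.ndarray] = field(default_factory=list)   # absolute 4x4 poses P_i
    edges: List[Tuple[int, int, np.ndarray, str]] = field(default_factory=list)  # (i, j, T_ij, kind)

def build_pose_graph(odometry: List[np.ndarray],
                     overlap: Callable[[int, int], float],
                     neighborhood: int = 3,
                     loop_overlap_threshold: float = 0.5) -> PoseGraph:
    """Assemble the pose graph of Figure 15 from consecutive odometry transforms."""
    graph = PoseGraph()
    pose = np.eye(4)
    graph.poses.append(pose.copy())
    for T in odometry:                       # integrate odometry into absolute poses
        pose = pose @ T
        graph.poses.append(pose.copy())
    n = len(graph.poses)
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1:
                kind = "consecutive"         # black edges
            elif j - i <= neighborhood:
                kind = "neighborhood"        # blue edges
            elif overlap(i, j) >= loop_overlap_threshold:
                kind = "loop"                # green edges (revisited places)
            else:
                continue
            # initialized from odometry; refined later by point cloud registration
            T_ij = np.linalg.inv(graph.poses[i]) @ graph.poses[j]
            graph.edges.append((i, j, T_ij, kind))
    return graph

# toy usage: three identity odometry steps and a constant-overlap oracle
graph = build_pose_graph([np.eye(4)] * 3, overlap=lambda i, j: 0.0)
print(len(graph.poses), len(graph.edges))    # 4 poses, 6 edges
```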
3.8. Pose Graph Verification
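A candidate edge $(p_i, p_j)$ carrying the transformation $\mathbf{T}_{ij}$ is kept only if it agrees with the composition $\mathbf{T}_1 \cdot \mathbf{T}_2 \cdots \mathbf{T}_K$ along an alternative path between the same nodes (Figure 16). A minimal sketch with illustrative acceptance thresholds:

```python
import numpy as np

def pose_error(T_a, T_b):
    """Translation and rotation difference between two 4x4 homogeneous transforms."""
    D = np.linalg.inv(T_a) @ T_b
    translation_error = np.linalg.norm(D[:3, 3])
    rotation_error = np.arccos(np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return translation_error, rotation_error

def verify_edge(T_ij, alternative_path, max_t_err=0.05, max_r_err=np.deg2rad(2.0)):
    """Accept the edge only if T_ij matches the composed transform of the alternative path."""
    T_alt = np.eye(4)
    for T in alternative_path:      # T_1 @ T_2 @ ... @ T_K along the blue path in Figure 16
        T_alt = T_alt @ T
    t_err, r_err = pose_error(T_ij, T_alt)
    return t_err <= max_t_err and r_err <= max_r_err

print(verify_edge(np.eye(4), [np.eye(4)] * 3))   # True: a trivially consistent edge
```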
3.9. Horizontal Alignment of the Indoor Map
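Horizontal alignment (Figure 17) rotates the whole map so that the normal of the dominant horizontal planes (floor, ceiling) coincides with the Z axis. A small sketch, assuming the plane normal has already been estimated, e.g., by plane fitting on floor points:

```python
import numpy as np

def align_horizontal(points, plane_normal):
    """Rotate the map so that the estimated normal of horizontal planes coincides with the Z axis."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = float(np.dot(n, z))
    if np.linalg.norm(v) < 1e-9:                        # already aligned (or exactly opposite)
        R = np.eye(3) if c > 0.0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))   # Rodrigues formula rotating n onto z
    return points @ R.T

# toy usage: a slightly tilted floor normal gets mapped onto the Z axis
cloud = np.random.rand(1000, 3)
aligned = align_horizontal(cloud, plane_normal=[0.05, -0.02, 0.998])
print(aligned.shape)
```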
3.10. Intensities Normalization
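Figure 18 shows that the raw return intensity depends on the source beam, the measurement range and the angle of incidence. The sketch below illustrates one possible normalization in this spirit: per-beam equalization followed by range and incidence-angle compensation similar to [38]. It is an assumption-laden illustration; the correction model actually used in the paper is fitted to the measured dependencies.

```python
import numpy as np

def normalize_intensities(intensity, beam_id, ranges, incidence, reference_range=10.0):
    """
    Illustrative intensity normalization:
      1. per-beam equalization  -- remove offsets between the 32 laser beams,
      2. range compensation     -- refer all readings to a common reference range,
      3. incidence compensation -- divide by the cosine of the angle of incidence.
    """
    out = intensity.astype(float)
    global_mean = out.mean()
    for b in np.unique(beam_id):                     # 1. shift each beam to the global mean
        mask = beam_id == b
        out[mask] += global_mean - out[mask].mean()
    out *= (ranges / reference_range) ** 2           # 2. inverse-square range correction
    out /= np.clip(np.cos(incidence), 0.1, 1.0)      # 3. clamped incidence-angle correction
    return np.clip(out, 0.0, 255.0)

# toy usage with random readings from 32 beams
n = 5000
normalized = normalize_intensities(
    intensity=np.random.randint(0, 256, n),
    beam_id=np.random.randint(0, 32, n),
    ranges=np.random.uniform(1.0, 50.0, n),
    incidence=np.random.uniform(0.0, np.pi / 2, n))
print(normalized.min(), normalized.max())
```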
4. Experiments
- sufficient relative precision under 5 cm;
- global absolute error within the limits described above;
- data density and coloring by normalized intensities for visual inspection; and
- data consistency without ambiguity (no double wall effects).
4.1. Comparison of Point Cloud Registration Methods
4.2. Indoor Experiments
- in 4RECON-10, the registrations were performed only within a small neighborhood of the 10 nearest frames (1 s time window), which reflects the impact of the accumulation error;
- for 4RECON-overlap, the registrations were performed for all overlapping frames as described in Section 3.7, reducing the accumulation error by loop closures at every possible location; and
- pose graph verification (see Section 3.8) was deployed in 4RECON-verification, yielding the best results with good precision and no ambiguities.
4.3. Outdoor Experiments
4.4. Comparison of Single and Dual Velodyne Solution
5. Discussion
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- NavVis. Available online: https://www.navvis.com/ (accessed on 19 August 2019).
- Nava, Y. Visual-LiDAR SLAM with Loop Closure. Master’s Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2018. [Google Scholar]
- GeoSLAM. Available online: https://geoslam.com/ (accessed on 19 August 2019).
- Sirmacek, B.; Shen, Y.; Lindenbergh, R.; Zlatanova, S.; Diakite, A. Comparison of Zeb1 and Leica C10 indoor laser scanning point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 143. [Google Scholar] [CrossRef]
- Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a Spring-Mounted 3-D Range Sensor with Application to Mobile Mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [Google Scholar] [CrossRef]
- Bosse, M.; Zlot, R. Continuous 3D scan-matching with a spinning 2D laser. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 4312–4319. [Google Scholar] [CrossRef]
- GeoSLAM Ltd. The ZEB-REVO Solution. 2018. Available online: https://geoslam.com/wp-content/uploads/2018/04/GeoSLAM-ZEB-REVO-Solution_v9.pdf?x97867 (accessed on 19 July 2019).
- Dewez, T.J.; Plat, E.; Degas, M.; Richard, T.; Pannet, P.; Thuon, Y.; Meire, B.; Watelet, J.M.; Cauvin, L.; Lucas, J.; et al. Handheld Mobile Laser Scanners Zeb-1 and Zeb-Revo to map an underground quarry and its above-ground surroundings. In Proceedings of the 2nd Virtual Geosciences Conference: VGC 2016, Bergen, Norway, 21–23 September 2016. [Google Scholar]
- GreenValley International. LiBackpack DG50, Mobile Handheld 3D Mapping System. 2019. Available online: https://greenvalleyintl.com/wp-content/uploads/2019/04/LiBackpack-DG50.pdf (accessed on 19 July 2019).
- GreenValley International. Available online: https://greenvalleyintl.com/ (accessed on 19 August 2019).
- Leica Geosystems AG. Leica Pegasus: Backpack, Mobile Reality Capture. 2017. Available online: https://www.gefos-leica.cz/data/original/skenery/mobilni-mapovani/backpack/leica_pegasusbackpack_ds.pdf (accessed on 19 July 2019).
- Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 147. [Google Scholar] [CrossRef]
- Dubey, P. New bMS3D-360: The First Backpack Mobile Scanning System Including Panoramic Camera. 2018. Available online: https://www.geospatialworld.net/news/new-bms3d-360-first-backpack-mobile-scanning-system-panormic-camera/ (accessed on 30 August 2019).
- Viametris. Available online: https://www.viametris.com/ (accessed on 20 August 2019).
- 3D Laser Mapping. Robin Datasheet. 2017. Available online: https://www.3dlasermapping.com/wp-content/uploads/2017/09/ROBIN-Datasheet-front-and-reverse-WEB.pdf (accessed on 19 July 2019).
- 3D Laser Mapping. Available online: https://www.3dlasermapping.com/ (accessed on 19 August 2019).
- Rönnholm, P.; Liang, X.; Kukko, A.; Jaakkola, A.; Hyyppä, J. Quality analysis and correction of mobile backpack laser scanning data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 41. [Google Scholar] [CrossRef]
- Kukko, A.; Kaartinen, H.; Zanetti, M. Backpack personal laser scanning system for grain-scale topographic mapping. In Proceedings of the 46th Lunar and Planetary Science Conference, The Woodlands, TX, USA, 16–20 March 2015; Volume 2407. [Google Scholar]
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the Robotics: Science and Systems Conference (RSS 2014), Berkeley, CA, USA, 12–16 July 2014. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181. [Google Scholar] [CrossRef]
- Zhang, J.; Kaess, M.; Singh, S. Real-time depth enhanced monocular odometry. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4973–4980. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
- Velas, M.; Spanel, M.; Herout, A. Collar Line Segments for fast odometry estimation from Velodyne point clouds. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4486–4495. [Google Scholar] [CrossRef]
- Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
- Maboudi, M.; Bánhidi, D.; Gerke, M. Evaluation of indoor mobile mapping systems. In Proceedings of the GFaI Workshop 3D-NordOst 2017 (20th Application-Oriented Workshop on Measuring, Modeling, Processing and Analysis of 3D-Data), Berlin, Germany, 7–8 December 2017. [Google Scholar]
- Kukko, A.; Kaartinen, H.; Hyyppä, J.; Chen, Y. Multiplatform Mobile Laser Scanning: Usability and Performance. Sensors 2012, 12, 11712–11733. [Google Scholar] [CrossRef] [Green Version]
- Lauterbach, H.A.; Borrmann, D.; Heß, R.; Eck, D.; Schilling, K.; Nüchter, A. Evaluation of a Backpack-Mounted 3D Mobile Scanning System. Remote Sens. 2015, 7, 13753–13781. [Google Scholar] [CrossRef] [Green Version]
- Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar] [CrossRef]
- Nüchter, A.; Bleier, M.; Schauer, J.; Janotta, P. Continuous-Time SLAM—Improving Google’s Cartographer 3D Mapping. In Latest Developments in Reality-Based 3D Surveying and Modelling; MDPI: Basel, Switzerland, 2018; pp. 53–73. [Google Scholar] [CrossRef]
- Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohi, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 127–136. [Google Scholar] [CrossRef]
- Deschaud, J. IMLS-SLAM: Scan-to-Model Matching Based on 3D Data. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2480–2485. [Google Scholar] [CrossRef]
- Kolluri, R. Provably Good Moving Least Squares. ACM Trans. Algorithms 2008, 4, 18:1–18:25. [Google Scholar] [CrossRef]
- Droeschel, D.; Behnke, S. Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1–9. [Google Scholar] [CrossRef]
- Droeschel, D.; Schwarz, M.; Behnke, S. Continuous Mapping and Localization for Autonomous Navigation in Rough Terrain Using a 3D Laser Scanner. Robot. Auton. Syst. 2017, 88, 104–115. [Google Scholar] [CrossRef]
- Mendes, E.; Koch, P.; Lacroix, S. ICP-based pose-graph SLAM. In Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland, 23–27 October 2016; pp. 195–200. [Google Scholar] [CrossRef]
- Park, C.; Moghadam, P.; Kim, S.; Elfes, A.; Fookes, C.; Sridharan, S. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1206–1213. [Google Scholar] [CrossRef]
- Kashani, A.G.; Olsen, M.J.; Parrish, C.E.; Wilson, N. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration. Sensors 2015, 15, 28099–28128. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jutzi, B.; Gross, H. Normalization Of Lidar Intensity Data Based On Range And Surface Incidence Angle. ISPRS J. Photogramm. Remote Sens. 2009, 38, 213–218. [Google Scholar]
- Kaasalainen, S.; Jaakkola, A.; Kaasalainen, M.; Krooks, A.; Kukko, A. Analysis of Incidence Angle and Distance Effects on Terrestrial Laser Scanner Intensity: Search for Correction Methods. Remote Sens. 2011, 3, 2207–2221. [Google Scholar] [CrossRef] [Green Version]
- Velodyne LiDAR. Available online: https://velodynelidar.com/ (accessed on 19 August 2019).
- Nüchter, A.; Lingemann, K.; Hertzberg, J.; Surmann, H. 6D SLAM with approximate data association. In Proceedings of the 12th International Conference on Advanced Robotics (ICAR ’05), Seattle, WA, USA, 18–20 July 2005; pp. 242–249. [Google Scholar] [CrossRef]
- Velas, M.; Faulhammer, T.; Spanel, M.; Zillich, M.; Vincze, M. Improving multi-view object recognition by detecting changes in point clouds. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–7. [Google Scholar] [CrossRef]
- Shoemake, K. Animating Rotation with Quaternion Curves. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, San Francisco, CA, USA, 22–26 July 1985; ACM: New York, NY, USA, 1985; pp. 245–254. [Google Scholar] [CrossRef]
- Ila, V.; Polok, L.; Solony, M.; Svoboda, P. SLAM++-A highly efficient and temporally scalable incremental SLAM framework. Int. J. Robot. Res. 2017, 36, 210–230. [Google Scholar] [CrossRef]
- Markley, F.L.; Cheng, Y.; Crassidis, J.L.; Oshman, Y. Averaging Quaternions. J. Guid. Control. Dyn. 2007, 30, 1193–1197. [Google Scholar] [CrossRef]
- Segal, A.; Hähnel, D.; Thrun, S. Generalized-ICP. In Robotics: Science and Systems; Trinkle, J., Matsuoka, Y., Castellanos, J.A., Eds.; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar] [CrossRef]
| Solution (Released in) | Sensor (Precision) | Range | System Precision | Price € | Open Method | Properties and Limitations | Intensities |
|---|---|---|---|---|---|---|---|
| ZEB-1 (2013) [3] | Hokuyo UTM-30LX (3 cm up to 10 m range) | 15–20 m (max 30 m under optimal conditions) | up to 3.8 cm indoors [4] | N/A | Proprietary, based on [5,6] | | No |
| ZEB-REVO (2015) [3,7] | Hokuyo UTM-30LX-F (3 cm up to 10 m range) | 15–20 m (max 30 m under optimal conditions) [7] | up to 3.6 cm indoors [8] | 34,000 | Proprietary, based on [5,6] | | No |
| LiBackpack (2019) [9,10] | 2× Velodyne VLP-16 (3 cm) | 100 m (Velodyne scanner limitation) | 5 cm | 60,000 | Proprietary | | Yes |
| Pegasus (2015) [11] | 2× Velodyne VLP-16 (3 cm) | 50 m usable range | 5 cm with GNSS (5–50 cm without); 4.2 cm in underground bastion [12] | 150,000 | Proprietary | | Yes |
| Viametris bMS3D [13,14] | 2× Velodyne VLP-16 (3 cm) | 100 m (Velodyne scanner limitation) | 5 cm under appropriate satellite reception conditions | N/A | Proprietary | | Yes |
| Robin (2016) [15,16] | RIEGL VUX-1HA (3 mm) | 120/420 m in slow/high frequency mode (for sensor) | up to 3.6 cm at 30 m range (FOG IMU update) | 220,000 | Proprietary | | Yes |
| Akhka (2015) [17,18] | FARO Focus3D 120S (1 mm) | 120 m (sensor range) | 8.7 cm in forest environments | N/A | Open [17] | | Yes |
Error $e_s$ (18):

| Sequence | Length | LOAM Online | LOAM Offline | CLS Single | CLS Multi-Frame |
|---|---|---|---|---|---|
| 0 | 4540 | 0.052 | 0.022 | 0.022 | 0.018 |
| 1 | 1100 | 0.038 | 0.040 | 0.042 | 0.029 |
| 2 | 4660 | 0.055 | 0.046 | 0.024 | 0.022 |
| 3 | 800 | 0.029 | 0.019 | 0.018 | 0.015 |
| 4 | 270 | 0.015 | 0.015 | 0.017 | 0.017 |
| 5 | 2760 | 0.025 | 0.018 | 0.017 | 0.012 |
| 6 | 1100 | 0.033 | 0.016 | 0.009 | 0.008 |
| 7 | 1100 | 0.038 | 0.019 | 0.011 | 0.007 |
| 8 | 4070 | 0.035 | 0.024 | 0.020 | 0.015 |
| 9 | 1590 | 0.043 | 0.032 | 0.020 | 0.018 |
| Weighted average | 2108 | 0.043 | 0.029 | 0.022 | 0.017 |
| Dataset | Slice # | 4RECON-10 | 4RECON-Overlap | 4RECON-Verification | ZEB-1 |
|---|---|---|---|---|---|
| Office | 1 | 2.50 | 1.71 | 1.49 | 1.44 |
| | 2 | 1.97 | 1.47 | 1.31 | 1.06 |
| | 3 | 1.70 | 1.75 | 1.55 | 1.22 |
| | 4 | 1.82 | 1.54 | 1.31 | 1.22 |
| | 5 | 1.93 | 1.63 | 1.53 | 1.44 |
| | 6 | 2.13 | 1.49 | 1.47 | 1.29 |
| | 7 | 2.09 | 1.68 | 1.37 | 0.97 |
| | 8 | 2.07 | 1.36 | 1.37 | 1.31 |
| | Average (cm) | 2.01 | 1.62 | 1.41 | 1.14 |
| Staircase | 1 | 3.23 | 2.11 | 1.81 | – |
| | 2 | 3.99 | 1.87 | 1.60 | – |
| | 3 | 2.63 | 1.65 | 1.61 | – |
| | 4 | 2.74 | 1.71 | 1.53 | – |
| | 5 | 2.42 | 1.68 | 1.50 | – |
| | 6 | 2.98 | 2.67 | 1.67 | – |
| | 7 | 1.76 | 1.75 | 1.29 | – |
| | 8 | 1.82 | 1.67 | 1.56 | – |
| | Average (cm) | 2.74 | 1.82 | 1.57 | – |
| Ref. Point | dX | dY | Horizontal Error | dZ (Vertical) | Total Error |
|---|---|---|---|---|---|
| 1 | −5.9 | −1.2 | 6.0 | −15.2 | 16.3 |
| 2 | −5.6 | 0.5 | 5.6 | −4.7 | 7.3 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Velas, M.; Spanel, M.; Sleziak, T.; Habrovec, J.; Herout, A. Indoor and Outdoor Backpack Mapping with Calibrated Pair of Velodyne LiDARs. Sensors 2019, 19, 3944. https://doi.org/10.3390/s19183944