Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments
Figure 1. An illustration of a wheeled robot navigating in a challenging environment.
Figure 2. A schematic diagram of the system framework. The system uses a 2D laser scanner and an RGB-D camera. The mapping process outputs an OctoMap, which is cut into several layers to form a traversable map used for navigation.
Figure 3. OctoMap representation of the fr1/room sequence from the TUM RGB-D SLAM benchmark [37] with visual SLAM. (a) The sparse map from the original ORB-SLAM [35], with map points (black, red) and keyframes (blue); (b) the OctoMap representation.
Figure 4. Schematic diagram of traversable map generation in a simulation environment. (a) Occupied-voxel representation of the slope and staircase; the colored voxels represent the OctoMap of the environment. (b) In the traversable map, the slope is marked as a collision-free area (white), except for the slope edge, while the staircase is marked as an occupied area (black).
Figure 5. An example of a decision tree structure in the backtracking regression forest. The leaf nodes are depicted as pie charts to illustrate the proportion of samples. More details can be found in [41].
Figure 6. Path planning with the limited step size RRT and our method. Blue line: RRT method. Red line: initial path from the start to the goal. Yellow and blue lines: optimization of the initial path.
Figure 7. Traversable map generated from the pre-built OctoMap. (a) Simulation environment; (b) traversable map; (c,d) OctoMaps built by the 2D+3D SLAM method and by the 3D SLAM method alone.
Figure 8. Simulation experiment of Task 1. The start location is indicated in (a). The robot approaches the slope in (b,c), starts to climb the slope in (d), and reaches the target above the ground in (e).
Figure 9. Simulation experiment of Task 2. The start location is indicated in (a). The robot makes a turn to approach the slope in (b,c), climbs the slope in (d), and reaches the target above the ground in (e).
Figure 10. Robot hardware platform. (a) The Segway robot platform; (b) the Xtion RGB-D camera; (c) the Hokuyo laser range finder.
Figure 11. Real-world environment. (a) A snapshot of the real environment; (b) the 3D representation of the environment with OctoMap; only occupied voxels are shown for visualization.
Figure 12. Multilayer maps and the traversable map for the real-world environment. (a–d) Multiple projected layers from the OctoMap; (e) the traversable map. The staircases and the slope edge are occupied, while the slope is free space.
Figure 13. Robot autonomous navigation example in a real environment (Task 3 in Table 3). (a,b) The robot approaches the slope; (c) the robot is about to climb the slope; (d) the robot is climbing the slope; (e) the robot reaches the target above the ground.
Figure 14. Robot autonomous navigation example in a real environment (Task 4 in Table 3). (a,b) The robot makes a turn to approach the slope; (c) the robot is about to climb the slope; (d) the robot is climbing the slope; (e) the robot reaches the target above the ground.
Figure 15. Dynamic obstacle avoidance (Task 5 in Table 3). (a) Obstacle avoidance in the real scene; (b) a human suddenly blocks the way in front of the robot; (c) the robot changes direction to avoid the human; (d) the robot successfully avoids the human; (e) the robot climbs the slope.
Abstract
1. Introduction
- We propose a novel 3D mapping approach that generates a 3D map and a traversable map for robot navigation in environments with uneven terrain.
- We leverage a camera re-localization method based on random forests to improve localization performance in 3D indoor environments.
- We adopt a modified RRT approach that adaptively tunes the step size to generate the global path, and the elastic band method to generate the local path.
2. Related Work
2.1. Environment Representation
2.2. Vision-Based Global Localization
2.3. Sampling-Based Path Planning
3. Proposed Framework
4. Environment Representation
4.1. 3D Environment Mapping
4.2. Multilayer Maps and the Traversable Map
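As the figure captions above describe, the OctoMap is cut into several horizontal layers and merged into a 2D traversable map in which a gentle slope stays free while staircases and slope edges are marked occupied. The following is a minimal sketch of one way such a projection could work, not the paper's implementation; the dense boolean voxel grid (standing in for the OctoMap), the thresholds, and the function name are all assumptions.

```python
import numpy as np

def traversable_map(occ, z_res=0.05, max_step=0.08, robot_h=1.0):
    """Illustrative sketch: derive a 2D traversable grid from a voxel map.

    occ[x, y, z] is True where a voxel is occupied (z = 0 at the floor);
    z_res, max_step, and robot_h are assumed values in metres.
    """
    nx, ny, _ = occ.shape
    ground = np.zeros((nx, ny))                # support-surface height per cell
    blocked = np.zeros((nx, ny), dtype=bool)   # obstacle within robot height?
    clearance = int(round(robot_h / z_res))
    for x in range(nx):
        for y in range(ny):
            zs = np.flatnonzero(occ[x, y])
            if zs.size == 0:
                continue                       # nothing observed in this column
            ground[x, y] = zs[0] * z_res       # lowest occupied voxel = support
            # Any occupied voxel inside the robot's height band blocks the cell.
            blocked[x, y] = occ[x, y, zs[0] + 1: zs[0] + 1 + clearance].any()
    # Height jump to each 4-neighbour: gentle slopes pass, stair risers do not.
    step = np.zeros((nx, ny))
    dx = np.abs(np.diff(ground, axis=0))
    dy = np.abs(np.diff(ground, axis=1))
    step[1:, :] = np.maximum(step[1:, :], dx)
    step[:-1, :] = np.maximum(step[:-1, :], dx)
    step[:, 1:] = np.maximum(step[:, 1:], dy)
    step[:, :-1] = np.maximum(step[:, :-1], dy)
    return ~blocked & (step <= max_step)       # True = traversable (free) cell
```

With a 5 cm grid, a 10° slope rises under 1 cm per cell and stays below the assumed max_step, while a typical stair riser exceeds it, reproducing the free-slope/occupied-staircase behavior shown in Figure 4.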
5. Global Localization
5.1. Random Forests Method
5.2. Backtracking Regression Forest Training
5.3. Regression Forest Prediction
Camera Pose Optimization
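A common way to turn per-pixel world-coordinate predictions from a regression forest into a camera pose is a RANSAC loop over 3D–3D correspondences with a Kabsch fit, as in the scene coordinate regression literature [26,41]. The sketch below shows that generic loop; the thresholds, iteration budget, and function signatures are assumptions rather than the paper's exact optimization.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P + t - Q|| for 3xN point sets."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_pose(cam_pts, world_pts, iters=500, thresh=0.05, rng=None):
    """Hypothetical RANSAC loop: cam_pts/world_pts are 3xN correspondences
    (camera-frame points from depth, world points predicted by the forest)."""
    rng = rng or np.random.default_rng(0)
    n = cam_pts.shape[1]
    best, best_inl = None, 0
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)      # minimal sample
        R, t = kabsch(cam_pts[:, idx], world_pts[:, idx])
        err = np.linalg.norm(R @ cam_pts + t - world_pts, axis=0)
        inl = err < thresh
        if inl.sum() > best_inl:
            best_inl = inl.sum()
            best = kabsch(cam_pts[:, inl], world_pts[:, inl])  # refit on inliers
    return best  # (R, t): camera-to-world pose
```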
6. Planning and Navigation
6.1. Global Planner: Variable Step Size RRT
Algorithm 1: Variable Step Size RRT
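A minimal sketch of the idea behind a variable step size RRT, as named in Algorithm 1: take long strides in open space and shrink the step near obstacles. The halving rule, parameters, and callback signatures below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def variable_step_rrt(start, goal, collision_free, sample_free,
                      min_step=0.3, max_step=0.7, iters=5000, goal_tol=0.5):
    """collision_free(a, b): is the straight segment a->b obstacle-free?
    sample_free(): random collision-free configuration (np.ndarray)."""
    nodes, parent = [np.asarray(start, float)], {0: None}
    for _ in range(iters):
        q_rand = sample_free()
        i_near = min(range(len(nodes)),
                     key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        q_near = nodes[i_near]
        d = q_rand - q_near
        d /= max(np.linalg.norm(d), 1e-9)
        # Shrink the step until the extension is collision-free, so the tree
        # takes long strides in open space and short ones near clutter.
        step = max_step
        while step >= min_step and not collision_free(q_near, q_near + step * d):
            step *= 0.5
        if step < min_step:
            continue
        q_new = q_near + step * d
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if np.linalg.norm(q_new - np.asarray(goal)) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:                 # walk parents back to start
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no path found within the iteration budget
```

Compared with a fixed step, this keeps the iteration count low in open regions without sacrificing feasibility near clutter, consistent with Table 2, where the variable step matches the fastest fixed step (4 ms) with no jitter.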
6.2. Local Planner and Path Execution
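The local planner adopts the elastic band method [43], per the contribution list above. A hedged sketch of the classic formulation: each interior waypoint feels an internal contraction force toward its neighbours and an external repulsion force away from nearby obstacles. The gains, influence radius, and nearest_obstacle callback are assumptions for illustration.

```python
import numpy as np

def elastic_band(path, nearest_obstacle, k_c=0.5, k_r=0.8,
                 influence=1.0, iters=50):
    """nearest_obstacle(p) must return the obstacle point closest to p;
    gains and the influence radius are illustrative, not from the paper."""
    band = [np.asarray(p, float) for p in path]
    for _ in range(iters):
        for i in range(1, len(band) - 1):        # endpoints stay fixed
            # Internal contraction: pull towards the neighbours' midpoint.
            f_int = k_c * (0.5 * (band[i - 1] + band[i + 1]) - band[i])
            # External repulsion: push away from the nearest obstacle
            # while it lies inside the influence radius.
            obs = nearest_obstacle(band[i])
            diff = band[i] - obs
            dist = np.linalg.norm(diff)
            f_ext = (k_r * (influence - dist) * diff / max(dist, 1e-9)
                     if dist < influence else 0.0)
            band[i] = band[i] + f_int + f_ext
    return band
```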
7. Experiments
7.1. Evaluation of Indoor Localization
7.2. Simulation Experiments
7.3. Real-World Experiments
8. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Marder-Eppstein, E.; Berger, E.; Foote, T.; Gerkey, B.; Konolige, K. The Office Marathon. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2010.
- Hornung, A.; Phillips, M.; Jones, E.G.; Bennewitz, M.; Likhachev, M.; Chitta, S. Navigation in Three-Dimensional Cluttered Environments for Mobile Manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2012.
- Wang, C.; Cheng, J.; Wang, J.; Li, X.; Meng, M.Q.H. Efficient Object Search With Belief Road Map Using Mobile Robot. IEEE Robot. Autom. Lett. 2018, 3, 3081–3088.
- Wang, C.; Liu, W.; Meng, M.Q.H. Obstacle avoidance for quadrotor using improved method based on optical flow. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 1674–1679.
- Wang, J.; Meng, M.Q.H. Socially Compliant Path Planning for Robotic Autonomous Luggage Trolley Collection at Airports. Sensors 2019, 19, 2759.
- Burgard, W.; Cremers, A.B.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; Thrun, S. The Interactive Museum Tour-Guide Robot. In Proceedings of the AAAI National Conference on Artificial Intelligence, 1998.
- Siegwart, R.; Arras, K.O.; Bouabdallah, S.; Burnier, D.; Froidevaux, G.; Greppin, X.; Jensen, B.; Lorotte, A.; Mayor, L.; Meisser, M.; et al. Robox at Expo.02: A large-scale installation of personal robots. Robot. Auton. Syst. 2003, 42, 203–222.
- Kim, G.; Chung, W.; Kim, K.R.; Kim, M.; Han, S.; Shinn, R.H. The autonomous tour-guide robot Jinny. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004.
- Tomatis, N.; Nourbakhsh, I.; Siegwart, R. Hybrid simultaneous localization and map building: A natural integration of topological and metric. Robot. Auton. Syst. 2003, 44, 3–14.
- Wang, C.; Chi, W.; Sun, Y.; Meng, M.Q.H. Autonomous Robotic Exploration by Incremental Road Map Construction. IEEE Trans. Autom. Sci. Eng. 2019.
- Kostavelis, I.; Gasteratos, A. Learning spatially semantic representations for cognitive robot navigation. Robot. Auton. Syst. 2013, 61, 1460–1475.
- Beeson, P.; Modayil, J.; Kuipers, B. Factoring the mapping problem: Mobile robot map-building in the hybrid spatial semantic hierarchy. Int. J. Robot. Res. 2010, 29, 428–459.
- Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnke, B.; et al. Junior: The Stanford entry in the urban challenge. J. Field Robot. 2008, 25, 569–597.
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Robotics: Science and Systems (RSS), 2014.
- Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohi, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2011; pp. 127–136.
- Cole, D.M.; Newman, P.M. Using Laser Range Data for 3D SLAM in Outdoor Environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006.
- Maier, D.; Hornung, A.; Bennewitz, M. Real-time navigation in 3D environments based on depth camera data. In Proceedings of the 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Osaka, Japan, 29 November–1 December 2012.
- Yassin, A.; Nasser, Y.; Awad, M.; Al-Dubai, A.; Liu, R.; Yuen, C.; Raulefs, R.; Aboutanios, E. Recent advances in indoor localization: A survey on theoretical approaches and applications. IEEE Commun. Surv. Tutor. 2016, 19, 1327–1346.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, 13–16 November 2007; pp. 225–234.
- Sattler, T.; Leibe, B.; Kobbelt, L. Efficient & Effective Prioritized Matching for Large-Scale Image-Based Localization. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1744–1756.
- Kendall, A.; Grimes, M.; Cipolla, R. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2938–2946.
- Kendall, A.; Cipolla, R. Modelling Uncertainty in Deep Learning for Camera Relocalization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2016.
- Guzman-Rivera, A.; Kohli, P.; Glocker, B.; Shotton, J.; Sharp, T.; Fitzgibbon, A.; Izadi, S. Multi-output learning for camera relocalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014.
- Valentin, J.; Nießner, M.; Shotton, J.; Fitzgibbon, A.; Izadi, S.; Torr, P.H. Exploiting uncertainty in regression forests for accurate camera relocalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4400–4408.
- Walch, F.; Hazirbas, C.; Leal-Taixé, L.; Sattler, T.; Hilsenbeck, S.; Cremers, D. Image-based Localization with Spatial LSTMs. arXiv 2016, arXiv:1611.07890.
- Shotton, J.; Glocker, B.; Zach, C.; Izadi, S.; Criminisi, A.; Fitzgibbon, A. Scene coordinate regression forests for camera relocalization in RGB-D images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013.
- Wang, J.; Li, X.; Meng, M.Q.H. An improved RRT algorithm incorporating obstacle boundary information. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016; pp. 625–630.
- Wang, J.; Chi, W.; Shao, M.; Meng, M.Q.H. Finding a High-Quality Initial Solution for the RRTs Algorithms in 2D Environments. Robotica 2019, 1–18.
- Sánchez, G.; Latombe, J.C. On delaying collision checking in PRM planning: Application to multi-robot coordination. Int. J. Robot. Res. 2002, 21, 5–26.
- Kuffner, J.J.; LaValle, S.M. RRT-Connect: An efficient approach to single-query path planning. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, USA, 24–28 April 2000; Volume 2, pp. 995–1001.
- Gammell, J.D.; Srinivasa, S.S.; Barfoot, T.D. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, 14–18 September 2014; pp. 2997–3004.
- Jaillet, L.; Cortés, J.; Siméon, T. Sampling-based path planning on configuration-space costmaps. IEEE Trans. Robot. 2010, 26, 635–646.
- Devaurs, D.; Siméon, T.; Cortés, J. Optimal path planning in complex cost spaces with sampling-based algorithms. IEEE Trans. Autom. Sci. Eng. 2016, 13, 415–424.
- Wang, C.; Meng, M.Q.H. Variant step size RRT: An efficient path planner for UAV in complex environments. In Proceedings of the 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, 6–10 June 2016; pp. 555–560.
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Strasdat, H.; Davison, A.J.; Montiel, J.M.; Konolige, K. Double window optimisation for constant time visual SLAM. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2352–2359.
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; pp. 573–580.
- Bishop, G.; Welch, G. An introduction to the Kalman filter. Proc. SIGGRAPH Course 2001, 8, 41.
- Sahoo, T.; Pine, S. Design and simulation of various edge detection techniques using Matlab Simulink. In Proceedings of the 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), Paralakhemundi, India, 3–5 October 2016; pp. 1224–1228.
- Derpanis, K.G. Overview of the RANSAC Algorithm. Image Rochester NY 2010, 4, 2–3.
- Meng, L.; Chen, J.; Tung, F.; Little, J.J.; Valentin, J.; de Silva, C.W. Backtracking regression forests for accurate camera relocalization. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 6886–6893.
- Marin-Plaza, P.; Hussein, A.; Martin, D.; Escalera, A.D.L. Global and local path planning study in a ROS-based research platform for autonomous vehicles. J. Adv. Transp. 2018, 2018, 6392697.
- Quinlan, S.; Khatib, O. Elastic Bands: Connecting Path Planning and Control. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 1993.
- Valentin, J.; Dai, A.; Nießner, M.; Kohli, P.; Torr, P.; Izadi, S.; Keskin, C. Learning to navigate the energy landscape. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 323–332.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G.R. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011.
- Meng, L.; Chen, J.; Tung, F.; Little, J.J.; de Silva, C.W. Exploiting Random RGB and Sparse Features for Camera Pose Estimation. In Proceedings of the British Machine Vision Conference (BMVC), 2016.
Table 1. Indoor localization results: baselines (ORB+PnP, SIFT+PnP, Random+SIFT, MNG) versus our method.

Sequence | Training Frames | Test Frames | Spatial Extent | ORB+PnP | SIFT+PnP | Random+SIFT | MNG | Ours
---|---|---|---|---|---|---|---|---
Kitchen | 744 | 357 | 33 m | 66.39% | 71.43% | 70.3% | 85.7% | 92.7% |
Living | 1035 | 493 | 30 m | 41.99% | 56.19% | 60.0% | 71.6% | 95.1% |
Bed | 868 | 244 | 14 m | 71.72% | 72.95% | 65.7% | 66.4% | 82.8% |
Kitchen | 768 | 230 | 21 m | 63.91% | 71.74% | 76.7% | 76.7% | 86.2% |
Living | 725 | 359 | 42 m | 45.40% | 56.19% | 52.2% | 66.6% | 99.7% |
Luke | 1370 | 624 | 53 m | 54.65% | 70.99% | 46.0% | 83.3% | 84.6% |
Floor5a | 1001 | 497 | 38 m | 28.97% | 38.43% | 49.5% | 66.2% | 89.9% |
Floor5b | 1391 | 415 | 79 m | 56.87% | 45.78% | 56.4% | 71.1% | 98.9% |
Gates362 | 2981 | 386 | 29 m | 49.48% | 67.88% | 67.7% | 51.8% | 96.7% |
Gates381 | 2949 | 1053 | 44 m | 43.87% | 62.77% | 54.6% | 52.3% | 92.9% |
Lounge | 925 | 327 | 38 m | 61.16% | 58.72% | 54.0% | 64.2% | 94.8% |
Manolis | 1613 | 807 | 50 m | 60.10% | 72.86% | 65.1% | 76.0% | 98.0% |
Average | — | — | — | 53.7% | 62.2% | 59.9% | 69.3% | 92.7% |
Table 2. Fixed step sizes versus the variable step size RRT.

Step Size | Iterations | Computational Time (ms) | Jitter
---|---|---|---
30 | 165 | 12 | 0/20 |
40 | 63 | 4 | 0/20 |
50 | 92 | 7 | 4/20 |
60 | 113 | 9 | 8/20 |
70 | 147 | 10 | 12/20 |
Variable | 67 | 4 | 0/20 |
Table 3. Navigation performance in the simulated and real-world tasks.

Tasks | Average Speed (m/s) | Traveled Distance (m) | Planning Time (ms)
---|---|---|---
Simulated environments | |||
Task 1 | 0.58 | 12.1 | 3.9 ± 2 |
Task 2 | 0.42 | 6.0 | 6.7 ± 2 |
Real-world environments | |||
Task 3 | 0.66 | 8.3 | 8.0 ± 2 |
Task 4 | 0.38 | 9.0 | 12.5 ± 2 |
Task 5 | 0.35 | 7.0 | 10.0 ± 2 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).