Path planning for active SLAM based on deep reinforcement learning under unknown environments

  • Original Research
  • Published:
Intelligent Service Robotics

Abstract

Autonomous navigation in complex environments is an important requirement in robot design. Active SLAM (simultaneous localization and mapping), which combines path planning with SLAM, has been proposed to improve a robot's ability to navigate autonomously in complex environments. In this paper, fully convolutional residual networks are used to recognize obstacles and produce depth images. An obstacle-avoiding path is planned by the Dueling DQN algorithm during the robot's navigation, while a 2D map of the environment is simultaneously built with FastSLAM. Experiments show that the proposed algorithm can identify and avoid varying numbers of moving and static obstacles in the environment, and realizes autonomous navigation of the robot in a complex environment.
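The core of the Dueling DQN architecture mentioned in the abstract is its aggregation step: the network splits into a state-value stream V(s) and an advantage stream A(s, a), which are recombined as Q(s, a) = V(s) + (A(s, a) − mean_a A(s, a)). The sketch below illustrates only this aggregation with NumPy and linear streams; it is not the authors' implementation, and the feature dimension, action set, and weight names are illustrative assumptions.

```python
import numpy as np

def dueling_q_values(features, w_v, w_a):
    """Combine value and advantage streams into Q-values:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the decomposition identifiable."""
    v = features @ w_v            # state value, shape (1,)
    a = features @ w_a            # per-action advantages, shape (n_actions,)
    return v + (a - a.mean())     # broadcasts to shape (n_actions,)

# Toy example: 4 state features, 3 actions (e.g. forward / turn left / turn right)
rng = np.random.default_rng(0)
phi = rng.standard_normal(4)        # feature vector for one state
w_v = rng.standard_normal((4, 1))   # value-stream weights
w_a = rng.standard_normal((4, 3))   # advantage-stream weights
q = dueling_q_values(phi, w_v, w_a)
```

Because the centered advantages sum to zero, the mean of the resulting Q-values equals the value stream's output V(s), which is the property that lets the network learn state values independently of action choice.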


[Figures 1–17 are omitted from this preview.]



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61773333.

Author information

Corresponding author

Correspondence to Shuhuan Wen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material


About this article


Cite this article

Wen, S., Zhao, Y., Yuan, X. et al. Path planning for active SLAM based on deep reinforcement learning under unknown environments. Intel Serv Robotics 13, 263–272 (2020). https://doi.org/10.1007/s11370-019-00310-w


Keywords

Navigation