Abstract
In this paper we present the first decision-theoretic planner for the problem of Navigation Among Movable Obstacles (NAMO). While efficient planners for NAMO exist, they are challenging to apply in practice due to the inherent uncertainty in both perception and control of real robots. Generalizing existing NAMO planners to nondeterministic domains is particularly difficult because MDP methods are sensitive to task dimensionality. Our work addresses this challenge by combining ideas from hierarchical reinforcement learning with Monte Carlo Tree Search, yielding an algorithm suitable for fast online planning in uncertain environments. We evaluate our algorithm in simulation and provide a theoretical argument for our results, which suggest linear time complexity in the number of obstacles for typical environments.
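The abstract's core building block, Monte Carlo Tree Search, can be illustrated with a minimal sketch of the standard UCT algorithm (Kocsis and Szepesvári's bandit-based Monte Carlo planning). This is an illustrative generic planner over an abstract `step`/`is_goal` interface, not the paper's hierarchical NAMO planner; the function and parameter names (`uct_plan`, `budget`, `horizon`) are assumptions chosen for the example.

```python
import math
import random

def uct_plan(state, actions, step, is_goal, budget=2000, horizon=20, c=1.4):
    """Vanilla UCT: sample rollouts, grow a tree, pick actions by UCB1.
    Illustrative sketch only -- not the paper's hierarchical algorithm."""
    N = {}   # visit counts per (state, action)
    Q = {}   # mean return per (state, action)
    Ns = {}  # visit counts per state

    def ucb(s, a):
        if (s, a) not in N:
            return float('inf')  # force each untried action to be sampled once
        return Q[(s, a)] + c * math.sqrt(math.log(Ns[s]) / N[(s, a)])

    def rollout(s, depth):
        # Default policy: act uniformly at random until the goal or horizon.
        for _ in range(depth, horizon):
            if is_goal(s):
                return 1.0
            s = step(s, random.choice(actions))
        return 1.0 if is_goal(s) else 0.0

    def simulate(s, depth):
        if is_goal(s):
            return 1.0
        if depth >= horizon:
            return 0.0
        if s not in Ns:          # first visit: expand this node, then roll out
            Ns[s] = 0
            return rollout(s, depth)
        a = max(actions, key=lambda a: ucb(s, a))
        r = simulate(step(s, a), depth + 1)
        # Back up the sampled return with an incremental mean update.
        Ns[s] += 1
        N[(s, a)] = N.get((s, a), 0) + 1
        Q[(s, a)] = Q.get((s, a), 0.0) + (r - Q.get((s, a), 0.0)) / N[(s, a)]
        return r

    for _ in range(budget):
        simulate(state, 0)
    # Recommend the most-visited root action, the usual UCT choice.
    return max(actions, key=lambda a: N.get((state, a), 0))

# Toy usage: a robot in a 1-D corridor at cell 0 must reach cell 3.
corridor_step = lambda s, a: min(3, s + 1) if a == 'right' else max(0, s - 1)
best = uct_plan(0, ['left', 'right'], corridor_step, lambda s: s == 3)
```

Because planning cost is governed by the sampling budget rather than the full state enumeration, this family of methods avoids the dimensionality blow-up the abstract attributes to direct MDP solvers.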
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Levihn, M., Scholz, J., Stilman, M. (2013). Hierarchical Decision Theoretic Planning for Navigation Among Movable Obstacles. In: Frazzoli, E., Lozano-Perez, T., Roy, N., Rus, D. (eds) Algorithmic Foundations of Robotics X. Springer Tracts in Advanced Robotics, vol 86. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36279-8_2
DOI: https://doi.org/10.1007/978-3-642-36279-8_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-36278-1
Online ISBN: 978-3-642-36279-8
eBook Packages: Engineering (R0)