Abstract
This paper presents a system that aims to achieve autonomous grasping for micro-controller-based humanoid robots such as the InMoov robot [1]. The system consists of a visual sensor, a central controller and a manipulator. We modify the open-source object detection software YOLO (You Only Look Once) v2 [2] and integrate it with the visual sensor so that the sensor can detect not only the category of the target object but also its location, with the help of a depth camera. We also estimate the dimensions (i.e., the height and width) of the target based on the bounding box technique (Fig. 1). We then send this information to the central controller (a humanoid robot), which controls the manipulator (a customised robotic hand) to grasp the object using inverse kinematics. We conduct experiments to test our method with the InMoov robot. The experiments show that our method is capable of detecting the target object and driving the robotic hand to grasp it.
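To illustrate the detection-to-localisation step described above (YOLO bounding box plus aligned depth image, then size estimation), the following is a minimal Python sketch. It assumes a pinhole camera model with hypothetical intrinsics fx, fy, cx, cy and a depth image aligned to the RGB frame; the function name bbox_to_3d and its interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bbox_to_3d(bbox, depth_map, fx, fy, cx, cy):
    """Estimate the 3D position and approximate width/height of a detected
    object from a YOLO bounding box and an aligned depth image.

    bbox: (x_min, y_min, x_max, y_max) in pixels
    depth_map: HxW array of depth values in metres, aligned to the RGB frame
    fx, fy, cx, cy: pinhole intrinsics of the (assumed) calibrated camera
    """
    x_min, y_min, x_max, y_max = bbox
    u = (x_min + x_max) / 2.0  # bounding-box centre, pixel coordinates
    v = (y_min + y_max) / 2.0

    # Median depth inside the box reduces the effect of background pixels.
    patch = depth_map[int(y_min):int(y_max), int(x_min):int(x_max)]
    valid = patch[patch > 0]
    if valid.size == 0:
        raise ValueError("no valid depth readings inside the bounding box")
    z = float(np.median(valid))

    # Back-project the box centre with the pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # Approximate physical size from the box extent at depth z.
    width = (x_max - x_min) * z / fx
    height = (y_max - y_min) * z / fy
    return (x, y, z), (width, height)
```

The resulting 3D target position and approximate dimensions are the kind of information that would be passed to the central controller, which solves the inverse kinematics to drive the robotic hand toward the grasp pose.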
References
Langevin, G.: Hand robot InMoov (2016)
Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
Han, J., et al.: Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Process. Mag. 35(1), 84–100 (2018)
Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
Redmon, J., et al.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
Roa, M.A., Suárez, R.: Grasp quality measures: review and performance. Auton. Robots 38(1), 65–88 (2015)
Tikhanoff, V., et al.: Exploring affordances and tool use on the iCub. In: 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids). IEEE (2013)
Kurban, R., Skuka, F., Bozpolat, H.: Plane segmentation of Kinect point clouds using RANSAC. In: The 7th International Conference on Information Technology (2015)
Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
Dakarimov, S., et al.: Study on the development and control of humanoid robot arm using MatLab/Arduino. In: Proceedings of the Conference of the Korean Society for Fluid Power and Construction Equipment, pp. 88–90 (2018)
Thalmann, N.M., Tian, L., Yao, F.: Nadine: a social robot that can localize objects and grasp them in a human way. In: Prabaharan, S.R.S., Thalmann, N.M., Kanchana Bhaaskaran, V.S. (eds.) Frontiers in Electronic Technologies. LNEE, vol. 433, pp. 1–23. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-4235-5_1
Tian, L., et al.: The making of a 3D-printed, cable-driven, single-model, lightweight humanoid robotic hand. Front. Robot. AI 4, 65 (2017)
Tian, L., et al.: A methodology to model and simulate customized realistic anthropomorphic robotic hands. In: Proceedings of Computer Graphics International 2018. ACM (2018)
Tian, L., et al.: Nature grasping by a cable-driven under-actuated anthropomorphic robotic hand. TELKOMNIKA 17(1), 1–7 (2019)
Acknowledgements
This research is supported by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill. The BeingTogether Centre is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centres in Singapore Funding Initiative.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Tian, L., Thalmann, N.M., Thalmann, D., Fang, Z., Zheng, J. (2019). Object Grasping of Humanoid Robot Based on YOLO. In: Gavrilova, M., Chang, J., Thalmann, N., Hitzer, E., Ishikawa, H. (eds) Advances in Computer Graphics. CGI 2019. Lecture Notes in Computer Science, vol. 11542. Springer, Cham. https://doi.org/10.1007/978-3-030-22514-8_47
Print ISBN: 978-3-030-22513-1
Online ISBN: 978-3-030-22514-8