Tableware Tidying-Up Robot System for Self-Service Restaurant–Detection and Manipulation of Leftover Food and Tableware-
Figure 1. Overall view of the robot system.
Figure 2. Structure and movable range of the parallel robot arm.
Figure 3. Flow of throwing away leftover food by two types of robots: (a) with the type 1 robot; (b) with the type 2 robot.
Figure 4. Overall flow of image processing.
Figure 5. Image for each tableware detection.
Figure 6. Coordinate determination.
Figure 7. Edge height and leftover food detection.
Figure 8. The method used to grab tableware.
Figure 9. Robot prototype.
Figure 10. Multiple tableware image detection: (a) gray image detection; (b) depth image detection.
Figure 11. Overlapping tableware image detection: (a) gray image detection; (b) depth image detection.
Figure 12. Detection of tableware overlap at different positions: (a) overlap of bowl and plate at different positions; (b) success rate of the overlap at different positions.
Figure 13. Detection of the tilt angle and success rate of the cup and bowl overlap: (a) bowl and cup overlap; (b) success rate at different tilt angles.
Figure 14. Leftover food detection by depth image: (a) no leftover food; (b) leftover food present.
Figure 15. Detection of different heights of leftover food: (a) leftover food on the plate; (b) leftover food in the bowl.
Figure 16. Multiple tableware sorting experiment: (a) hand standby; (b) cup grabbed; (c) moved to collection box; (d) cup thrown; (e) bowl grabbed; (f) moved to rotation assist mechanism; (g) leftover food thrown; (h) bowl thrown; (i) plate grabbed; (j) moved to rotation assist mechanism; (k) leftover food thrown; (l) plate thrown.
Figure 17. Overlapping tableware sorting experiment: (a) hand standby; (b) cup grabbed; (c) moved to collection box; (d) cup thrown; (e) bowl grabbed; (f) moved to rotation assist mechanism; (g) leftover food thrown; (h) bowl thrown; (i) plate grabbed; (j) moved to rotation assist mechanism; (k) leftover food thrown; (l) plate thrown.
Abstract
1. Introduction
2. Robot System Design
2.1. System Configuration
2.2. Rotation Structure and Design
3. Image Processing
3.1. Overall Flow of Image Processing
3.2. Gray Image Processing
3.3. Depth Image Processing
4. Robot Prototype
5. Image Processing and Tableware Sorting Experiment
5.1. Multiple Tableware Detection Experiment
5.2. Overlapping Tableware Detection Experiment
5.3. Leftover Food Detection Experiment
5.4. Multiple Tableware Sorting Experiment
5.5. Overlapping Tableware Sorting Experiment
6. Conclusions
- (1) By combining gray-image and depth-image processing from a single RGB-D camera, the position of the tableware was detected with a success rate of about 80% in both the normal state and the overlapping state, which kept the system cost low. The depth function of the camera also made it possible to determine whether there was leftover food in the tableware with the designed method.
- (2) By arranging multiple simple parallel robot arms side by side, one piece of tableware was tidied up in about 16 s and three pieces of tableware with leftover food in about 60 s at a supply pressure of 0.3 MPa, realizing high-speed tableware processing at a low cost.
- (3) In the tableware tidying-up experiment, it was confirmed that the designed robot mechanism can tidy up the tableware both in the normal situation and in the situation where the tableware overlaps, using the data from the image detection.
- (4) The leftover food processing experiment confirmed that the two designed rotation mechanisms can throw away leftover food. The designed motorless rotation mechanism achieved the goals of a lightweight robot hand and a low-cost robot system.
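Conclusion (1) rests on a simple depth comparison: once the gray-image circle detection has located a dish, pixels inside the dish that sit noticeably higher (closer to the overhead camera) than the rim indicate leftover food. A minimal sketch of such a check is shown below; the function name, the rim-band construction, and the 5 mm height threshold are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def has_leftover(depth, center, radius, rim_ratio=0.9, min_height_mm=5.0):
    """Decide whether a circular dish contains leftover food.

    depth:  2D array of camera-to-surface distances in mm (smaller = higher).
    center, radius: dish circle in pixels, e.g. from Hough circle detection.
    The interior of the dish is compared against a reference depth taken
    from a thin band at the rim; anything inside that rises more than
    min_height_mm above the rim is treated as leftover food.
    """
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (xx - center[0]) ** 2 + (yy - center[1]) ** 2
    interior = r2 <= (rim_ratio * radius) ** 2                      # dish interior
    rim = (r2 > (rim_ratio * radius) ** 2) & (r2 <= radius ** 2)    # rim band
    rim_depth = np.median(depth[rim])                               # reference height
    # Leftover if any interior pixel sticks up more than min_height_mm above the rim.
    return bool(np.min(depth[interior]) < rim_depth - min_height_mm)

# Synthetic example: a 100x100 depth map with a bowl centered at (50, 50).
depth = np.full((100, 100), 500.0)          # table surface at 500 mm
yy, xx = np.mgrid[0:100, 0:100]
r2 = (xx - 50) ** 2 + (yy - 50) ** 2
depth[r2 <= (0.9 * 30) ** 2] = 530.0        # empty bowl: interior deeper than rim
print(has_leftover(depth, (50, 50), 30))    # empty bowl -> False
depth[r2 <= 10 ** 2] = 480.0                # add a 20 mm-high food blob
print(has_leftover(depth, (50, 50), 30))    # food present -> True
```

In the real system this decision would be run per dish after the gray-image circle detection, using the aligned depth frame from the RGB-D camera.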
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Item | Value
---|---
System size | 1500 mm × 1000 mm × 1200 mm
Processing speed | 12 s/tableware
Robot movement range | 800 mm × 200 mm
Positioning accuracy | 20 mm
Tableware range | Diameter: 60–220 mm; height: 20–100 mm
Parts | Manufacturer and Model Number | Specification
---|---|---
Air cylinder | SMC CM2YB20-600Z | Inner diameter: 20 mm; stroke: 600 mm
Solenoid valve | SMC SY3320-5MZD-M5 | Number of ports: 5; switching type: 3
Encoder | Koyo Electronics Industries TRD-S1000A | Resolution: 1000 PPR
Camera | Intel SR305 | Depth measurement range: 0.2–1.5 m; resolution: color 1920 × 1080, depth 640 × 480
Tableware Placement Status | Success Rate
---|---
Normal placement of plate, bowl, and cup | 9/10 |
Cup and plate overlap | 9/10 |
Bowl and plate overlap | 9/10 |
Bowl and cup overlap | 8/10 |
Tableware Placement Status | Success Rate
---|---
Normal placement of plate, bowl, and cup | 8/10 |
Cup and plate overlap | 9/10 |
Bowl and plate overlap | 8/10 |
Bowl and cup overlap | 8/10 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhu, D.; Seki, H.; Tsuji, T.; Hiramitsu, T. Tableware Tidying-Up Robot System for Self-Service Restaurant–Detection and Manipulation of Leftover Food and Tableware-. Sensors 2022, 22, 7006. https://doi.org/10.3390/s22187006