Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation
Figure 1. Block diagram of the proposed system.
Figure 2. Context-aware DCNN-based object detection framework.
Figure 3. Experiment results of hazardous object detection.
Figure 4. Comparison of the context-aware object detection algorithm with conventional object detection schemes for the escalator and glass door: (a) YOLOv4; (b) Faster RCNN ResNet 50; (c) proposed system. From top to bottom, the occlusion conditions range from low to medium to high.
Figure 5. Experiment robot [3].
Figure 6. Environment: SUTD Mass Rapid Transit (MRT) station.
Figure 7. Environment: SUTD campus.
Abstract
1. Introduction
2. Related Work
3. Proposed System
3.1. Context-Aware DCNN-Based Object Detection
3.1.1. Backbone Network
3.1.2. Region Proposal Network
3.2. Image-Level Context Encoding Module
3.3. Context-Aware Object Detection Head
3.4. Safe-Distance-Estimation Function
4. Experiments and Results
4.1. Dataset Preparation
4.2. Training Hardware and Software Details
4.3. Prediction of Hazardous Object Detection
4.4. Comparison Analysis with Conventional Method
4.5. Performance Analysis Survey
4.6. Real-Time Field Trial with Safe Distance Estimation
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Chen, Y.; Wu, F.; Shuai, W.; Chen, X. Robots serve humans in public places—KeJia robot as a shopping assistant. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417703569.
2. Yin, J.; Apuroop, K.G.S.; Tamilselvam, Y.K.; Mohan, R.E.; Ramalingam, B.; Le, A.V. Table cleaning task by human support robot using deep learning technique. Sensors 2020, 20, 1698.
3. Pathmakumar, T.; Kalimuthu, M.; Elara, M.R.; Ramalingam, B. An autonomous robot-aided auditing scheme for floor cleaning. Sensors 2021, 21, 4332.
4. Raj, T.; Hanim Hashim, F.; Baseri Huddin, A.; Ibrahim, M.F.; Hussain, A. A survey on LiDAR scanning mechanisms. Electronics 2020, 9, 741.
5. Xu, L.; Feng, C.; Kamat, V.R.; Menassa, C.C. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments. Autom. Constr. 2019, 104, 230–245.
6. Ivan, I.A.; Ardeleanu, M.; Laurent, G.J. High dynamics and precision optical measurement using a position sensitive detector (PSD) in reflection-mode: Application to 2D object tracking over a smart surface. Sensors 2012, 12, 16771–16784.
7. Nieves, E.; Xi, N.; Jia, Y.; Martinez, C.; Zhang, G. Development of a position sensitive device and control method for automated robot calibration. In Proceedings of the 2013 IEEE International Conference on Automation Science and Engineering (CASE), Madison, WI, USA, 17–20 August 2013; pp. 1127–1132.
8. Foster, P.; Sun, Z.; Park, J.J.; Kuipers, B. VisAGGE: Visible angle grid for glass environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2213–2220.
9. Paving the Road for Robot-Friendly Buildings: Nikken Sekkei Puts "RICE" to the Test. Available online: https://www.nikken.co.jp/en/news/news/2021_08_17.html?cat=ALL&archive=ALL (accessed on 28 September 2022).
10. Espinace, P.; Kollar, T.; Roy, N.; Soto, A. Indoor scene recognition by a mobile robot through adaptive object detection. Robot. Auton. Syst. 2013, 61, 932–947.
11. Asadi, K.; Ramshankar, H.; Pullagurla, H.; Bhandare, A.; Shanbhag, S.; Mehta, P.; Kundu, S.; Han, K.; Lobaton, E.; Wu, T. Vision-based integrated mobile robotic system for real-time applications in construction. Autom. Constr. 2018, 96, 470–482.
12. Siagian, C.; Chang, C.K.; Itti, L. Mobile robot navigation system in outdoor pedestrian environment using vision-based road recognition. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 564–571.
13. Gopalakrishnan, A.; Greene, S.; Sekmen, A. Vision-based mobile robot learning and navigation. In Proceedings of ROMAN 2005, the IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 48–53.
14. Manzoor, S.; Joo, S.H.; Kuc, T.Y. Comparison of object recognition approaches using traditional machine vision and modern deep learning techniques for mobile robot. In Proceedings of the 2019 19th International Conference on Control, Automation and Systems (ICCAS), Jeju, Korea, 15–18 October 2019; pp. 1316–1321.
15. Foroughi, F.; Chen, Z.; Wang, J. A CNN-based system for mobile robot navigation in indoor environments via visual localization with a small dataset. World Electr. Veh. J. 2021, 12, 134.
16. Yamamoto, K.; Watanabe, K.; Nagai, I. Proposal of an environmental recognition method for automatic parking by an image-based CNN. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 833–838.
17. Wang, A.; Sun, Y.; Kortylewski, A.; Yuille, A.L. Robust object detection under occlusion with context-aware CompositionalNets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12645–12654.
18. Li, J.; Wei, Y.; Liang, X.; Dong, J.; Xu, T.; Feng, J.; Yan, S. Attentive contexts for object detection. IEEE Trans. Multimed. 2016, 19, 944–954.
19. Zhang, W.; Fu, C.; Xie, H.; Zhu, M.; Tie, M.; Chen, J. Global context aware RCNN for object detection. Neural Comput. Appl. 2021, 33, 11627–11639.
20. Zheng, W.S.; Gong, S.; Xiang, T. Quantifying and transferring contextual information in object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 762–777.
21. Chen, Z.; Huang, S.; Tao, D. Context refinement for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 71–86.
22. Chu, W.; Cai, D. Deep feature based contextual model for object detection. Neurocomputing 2018, 275, 1035–1042.
23. Peng, J.; Wang, H.; Yue, S.; Zhang, Z. Context-aware co-supervision for accurate object detection. Pattern Recognit. 2022, 121, 108199.
24. Bardool, K.; Tuytelaars, T.; Oramas, J. A systematic analysis of a context aware deep learning architecture for object detection. BNAIC/BeneLearn 2019, 2491, 1–15.
25. Zhao, R.W.; Wu, Z.; Li, J.; Jiang, Y.G. Learning semantic feature map for visual content recognition. In Proceedings of the 25th ACM International Conference on Multimedia (MM '17), Mountain View, CA, USA, 23–27 October 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1291–1299.
26. Druon, R.; Yoshiyasu, Y.; Kanezaki, A.; Watt, A. Visual object search by learning spatial context. IEEE Robot. Autom. Lett. 2020, 5, 1279–1286.
27. Luo, H.W.; Zhang, C.S.; Pan, F.C.; Ju, X.M. Contextual-YOLOV3: Implement better small object detection based deep learning. In Proceedings of the 2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), Taiyuan, China, 8–10 November 2019; pp. 134–141.
28. Ayub, A.; Nehaniv, C.L.; Dautenhahn, K. Don't forget to buy milk: Contextually aware grocery reminder household robot. In Proceedings of the 2022 IEEE International Conference on Development and Learning (ICDL), London, UK, 12–15 September 2022; pp. 299–306.
29. Li, G.; Gan, Y.; Wu, H.; Xiao, N.; Lin, L. Cross-modal attentional context learning for RGB-D object detection. IEEE Trans. Image Process. 2018, 28, 1591–1601.
30. Chen, H.; Li, Y.; Su, D. Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for RGB-D salient object detection. Pattern Recognit. 2019, 86, 376–385.
31. Li, J.; Zhang, G.; Shan, Q.; Zhang, W. A novel cooperative design for USV–UAV systems: 3D mapping guidance and adaptive fuzzy control. IEEE Trans. Control Netw. Syst. 2022.
32. Yu, J.; Xiang, Z.; Su, J. Hierarchical multi-level information fusion for robust and consistent visual SLAM. IEEE Trans. Veh. Technol. 2022, 71, 250–259.
33. Shi, H.; Zhao, H.Y.; Liu, Y.; Gao, W.; Dou, S. Systematic analysis of a military wearable device based on a multi-level fusion framework: Research directions. Sensors 2019, 19, 2651.
34. Abid, A.; Khan, M.T. Multi-sensor, multi-level data fusion and behavioral analysis based fault detection and isolation in mobile robots. In Proceedings of the 2017 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 3–5 October 2017; pp. 40–45.
35. Saeedi, S. Context-Aware Personal Navigation Services Using Multi-Level Sensor Fusion Algorithms. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2013.
36. The Intel RealSense Documentation. Available online: https://dev.intelrealsense.com/docs/rs-distance (accessed on 5 January 2023).
37. Patil, U.; Gujarathi, A.; Kulkarni, A.; Jain, A.; Malke, L.; Tekade, R.; Paigwar, K.; Chaturvedi, P. Deep learning based stair detection and statistical image filtering for autonomous stair climbing. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 159–166.
38. Wang, C.; Pei, Z.; Qiu, S.; Tang, Z. Deep leaning-based ultra-fast stair detection. Sci. Rep. 2022, 12, 16124.
39. Afif, M.; Ayachi, R.; Pissaloux, E.; Said, Y.; Atri, M. Indoor objects detection and recognition for an ICT mobility assistance of visually impaired people. Multimed. Tools Appl. 2020, 79, 31645–31662.
40. Mei, H.; Yang, X.; Wang, Y.; Liu, Y.; He, S.; Zhang, Q.; Wei, X.; Lau, R.W. Don't hit me! Glass detection in real-world scenes. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3684–3693.
41. Hernández, A.C.; Gómez, C.; Crespo, J.; Barber, R. Object detection applied to indoor environments for mobile robot navigation. Sensors 2016, 16, 1180.
Class | Precision (%) | Recall (%) | Accuracy (%) |
---|---|---|---|---
Elevator | 90.35 | 89.76 | 87.52 | 87.76
Escalators | 89.84 | 89.11 | 88.66 | 89.18
Walklator | 89.76 | 88.17 | 86.09 | 89.33
Glass door | 88.54 | 87.25 | 87.37 | 87.22
Staircase | 93.51 | 92.44 | 91.03 | 91.77
Display cabinet | 86.01 | 85.31 | 84.76 | 85.43
Modern furniture | 87.61 | 86.29 | 86.18 | 88.78
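For reference, the statistics above are reported per class; the sketch below assumes the conventional definitions computed from true/false positive and negative counts (the variable names and example numbers are illustrative, and the authors' exact evaluation protocol may differ):

```python
def detection_statistics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Conventional per-class detection statistics (assumed standard
    definitions, not necessarily the paper's exact protocol)."""
    precision = tp / (tp + fp)                  # correct detections among predictions
    recall = tp / (tp + fn)                     # correct detections among ground truths
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction classified correctly
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

# Example: 897 TP and 96 FP give precision ~= 90.3%; 102 FN gives recall ~= 89.8%.
```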
Algorithm | Detection Accuracy (%) | Images Processed per Second
---|---|---
YOLOv4 | 74.86 | 23
Faster RCNN ResNet 50 | 82.33 | 9
Proposed system | 88.71 | 4
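Throughput here is the number of frames each model processes per second; a minimal wall-clock harness of the usual form is sketched below (the `detect` callable is a placeholder for one forward pass of any of the three models, not an API from the paper):

```python
import time

def images_per_second(detect, images) -> float:
    """Average throughput: number of frames divided by total inference time.
    `detect` is a placeholder callable wrapping one model forward pass."""
    start = time.perf_counter()
    for image in images:
        detect(image)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed
```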
Case Study | Algorithm | Detection Accuracy (%)
---|---|---
Staircase [37] | YOLOv2 CNN | 77.00
Staircase [38] | SE-ResNet | 81.49
Staircase [38] | YOLOv5 + Gabor | 37.30
Staircase [39] | YOLOv3 | 76.88
Glass door [39] | YOLOv3 | 85.55
Glass door [40] | ResNet101 | 81.63
Elevator [39] | YOLOv3 | 85.04
Furniture [41] | SVM | 71.45
Proposed system | Faster RCNN + image-level encoding | 88.71
Component | Details
---|---
RGB-D camera | Intel RealSense D435i
On-board computer | NVIDIA Jetson AGX GPU
2D LiDAR | SICK TiM581
Power | 24 V DC LiFePO4 battery
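The safe-distance-estimation function (Section 3.4) reads per-pixel depth from the RealSense D435i listed above; the rs-distance documentation cited in reference [36] shows the underlying API. A minimal pyrealsense2 sketch follows (the stream settings, query pixel, and 1 m threshold are illustrative assumptions, not the paper's exact configuration):

```python
import pyrealsense2 as rs

SAFE_DISTANCE_M = 1.0  # illustrative threshold, not the paper's calibrated value

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    if depth_frame:
        # Distance in meters at the image center; in practice the query pixel
        # would be the center of a detected hazardous object's bounding box.
        distance = depth_frame.get_distance(320, 240)
        if distance and distance < SAFE_DISTANCE_M:
            print(f"Hazard within {distance:.2f} m: stop or replan")
finally:
    pipeline.stop()
```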